'Engineering Management in the Age of GenAI' with Suresh Bysani, Director of Engineering, Eightfold

How do engineering leaders stay relevant in the age of Generative AI?

With the rise of GenAI, engineering teams are rethinking productivity, prototyping, and scalability. But AI is only as powerful as the engineering practices behind it.

In this episode of the groCTO by Typo Podcast, host Kovid Batra speaks with Suresh Bysani, Director of Engineering at Eightfold, about the real-world impact of AI on engineering leadership. From writing boilerplate code to scaling enterprise platforms, Suresh shares practical insights and hard-earned lessons from the frontlines of tech.

What You’ll Learn in This Episode:

  • AI Meets Engineering: How GenAI is transforming productivity, prototyping & software workflows.
  • Platform vs. Product Teams: Why technical expectations differ — and how to lead both effectively.
  • Engineering Practices Still Matter: Why GenAI can’t replace fundamental principles like scalability, testing, and reliability.
  • Avoiding AI Pitfalls: Common mistakes in adopting AI for internal tooling & how to avoid them.
  • Upskilling for the Future: Why managers & engineers need to build AI fluency now.
  • A Leader’s Journey: Suresh shares personal stories that shaped his perspective as a people-first tech leader.

Closing Insight: AI isn’t a silver bullet, but a powerful tool. The best engineering leaders combine AI innovation with strong fundamentals, people-centric leadership, and a long-term view.

Timestamps

  • 00:00 — Let’s Begin!
  • 00:55 — Suresh at Eightfold: Role & Background
  • 02:00 — Career Milestones & Turning Points
  • 04:15 — GenAI’s Impact on Engineering Management
  • 07:59 — Why Technical Depth Still Matters
  • 11:58 — AI + Legacy Systems: Key Insights
  • 15:40 — Common GenAI Adoption Mistakes
  • 23:42 — Measuring AI Success
  • 28:08 — AI Use Cases in Engineering
  • 31:05 — Final Advice for Tech Leaders

Episode Transcript

Kovid Batra: Hi everyone. This is Kovid, back with another episode of groCTO by Typo. Today with us, we have a very special guest who is an expert in AI and machine learning. So we are gonna talk a lot about GenAI and engineering management with them, but let me quickly introduce Suresh to all of you. Hi, Suresh.

Suresh Bysani: Hello.

Kovid Batra: So, Suresh is an Engineering, uh, Director at Eightfold and he holds a postgraduate degree in AI and machine learning from USC, and he has almost 10 to 12 years of experience in engineering and leadership. So today, uh, Suresh, we are, we are grateful to have you here. And before we get started with the main section, which is engineering management in the age of GenAI, we would love to know a little bit more about you, maybe your hobbies, something inspiring from your life that defines who you are today. So if you could just take the stage and tell us something about yourself that your LinkedIn profile doesn’t tell.

Suresh Bysani: Okay. So, thanks Kovid for having me. Hello everybody. Um, yeah, so if I have to recall a few incidents, I’ll probably recall one or two, right? So right from my childhood, um, I was not an outstanding student, let me put it that way. I have a record of, uh, you know, failing every subject until 10th grade, right? So I’m a totally different person. I feel sometimes, you know, uh, that gave me a lot of confidence in life because, uh, at a very early age, I was, you know, uh, exposed to what failure means, or what being in failure for a very long time means, right. That kind of gave me a lot of, you know, mental stability or courage to face failures, right? I’ve seen a lot of friends who were, you know, outstanding students right from the beginning and they get shaken when they see a setback or a failure in life. Right? So I feel that defined my personality to take aggressive decisions and moves in my life. That’s, that’s one thing.

Kovid Batra: That’s interesting.

Suresh Bysani: Yeah. And the second thing is, uh, during undergrad we went to a program called Net Tech. So it’s organized by, um, a very famous person in India. It’s mostly an educational thing, right, around, uh, cybersecurity and ethical hacking. So I kind of met the country’s brightest minds in this program. People from all sorts of backgrounds came to this program. Mostly, mostly the good ones, right? So it kind of helped me calibrate where I am across the country’s talent and gave me a fresh perspective of looking beyond my current institution, et cetera. Right. So these are two life-defining moments for me in terms of my career growth.

Kovid Batra: Perfect. Perfect. I think you become more resilient, uh, when you’ve seen failures, and I think the openness to learn and exposure definitely gives you a perspective that takes you, uh, in your career, not linearly, but it gives you a geometric progression probably, or exponential progression in your life. So totally relate to that and great start to this. Uh, so Suresh, I think today, now we can jump onto the main section and, uh, talk more about, uh, AI, implementation of AI, Agentic AI. But again, that is something that I would like to touch upon, uh, a little later. First, I would want to understand from your journey, you are an engineering director, uh, and you have spent good enough time in this management and moving from management to the senior management or a leadership position, I would say. Uh, what’s your perspective of engineering management in today’s world? How is it evolving? What are the things that you see, uh, are kind of set as ideals in, um, in engineering management, but might not be very right? So just throw some light on your journey of engineering management and how you see it today evolving.

Suresh Bysani: Yep. Um, before we talk about the evolution, I will just share my thoughts about what being an engineering manager or a leader means in general, and how it is very different from an IC. I get, I get asked this question quite a lot. A lot of people, a lot of, you know, very strong ICs come to me, uh, with this question of, I want to become a manager or can I become a manager? Right. And this happens quite a lot in the Bay Area as well as Bangalore.

Kovid Batra: Yeah.

Suresh Bysani: So the first question I ask them is, why do you want to become a manager? Right? What are your reasons for it? I, I hear all great sorts of answers, right? Some folks generally come and say, I like execution. I like to drive from front. I’m responsible. I mean, I want to be the team leader sort of thing, right? I mean, all great answers, right? But if you think about it, execution, project management, JIRA management, or leading from the front; these are all characteristics of any technical leader, not just engineering manager. Even if you’re a staff engineer or an architect or a principal engineer, uh, you are responsible for a reasonable degree of execution, project management, planning, mentorship, getting things done, et cetera. After all, we are all evaluated by execution. So that is not a satisfactory answer for me. The main answer that I’m looking for is I like to grow people. I can, I want to see success in people who are around me. So as an engineering manager, it’s quite a tricky role because most of the time you are only as good as your team. You are evaluated by your team’s progress, team’s success, team’s delivery. Until that point, most ICs are only responsible for their work, right? I mean, they’re doing a project.

Kovid Batra: Yeah.

Suresh Bysani: They do amazing work in their project, and most of the time they get fantastic ratings and materialistic benefits. But all of a sudden when you become an engineering manager or leader, you are spending probably more hours to get things done because you have to coordinate the rest of the team, but they don’t necessarily translate to your, you know, growth or materialistic benefits because you are only as good as an average person in your team. So the first thing people have to evaluate is, do I get happiness in growing others? If the answer is yes, if that’s your P0, you are going to be a great engineering leader. Everything else will follow. Now to the second question that you asked. This has remained constant over the last 25 years. This is the number one characteristic of an engineering leader. Now, the evolution part. As technology evolves, what I see as a challenge is, uh, that an engineering manager should typically understand or go to a reasonable depth into people’s work. Technically, I mean. So as technologies evolve, most of the engineering managers are typically 10 years, 15 years, 20 years experienced as ICs, right?

Kovid Batra: Yeah.

Suresh Bysani: Now, uh, most of these new engineering managers or seasoned engineering managers, they don’t understand what the new technology evolution is. For example, all the recent advancements that we are seeing in AI, GenAI, you know, the engineering managers have no clue about it. Most of the time when there is bottom-up innovation, how are engineering managers going to look at all of this and evaluate all of this from a technical standpoint? What this means is that there is a constant need for upskilling, and we’ll talk about that, uh, you know, in your questions.

Kovid Batra: Sure. Yeah. But I think, uh, I, I would just like to, uh, ask one question here. I mean, I have been working with a lot of engineering managers in my career as well, and, uh, I’ve been talking to a lot of them. There is always a debate around how much technical an engineering manager should be.

Suresh Bysani: Yeah.

Kovid Batra: And I think that lies in a little more detail and probably you could tell with some of your examples. Uh, an engineering manager who is working more on the product team and product side, uh, and an engineering manager who is probably involved in a platform team or maybe infrastructure team, I think things change a little bit. What’s, what’s your thought on that part?

Suresh Bysani: Yeah, so I think, uh, good question by the way. Uh, my general guidance to most engineering managers is they have to be reasonably technical. I mean, it is just that they are given a different responsibility in the company, but that’s it. Right? It is not an excuse for not being technical. Yes, they don’t have to code 100% of the time, that’s a given. Right. So how much time should they be spending coding or doing the technical design? It totally depends on the company, project, situation, et cetera. Right? But they have to be technical. But you have a very interesting question around product teams versus platform teams, right?

Kovid Batra: Yeah.

Suresh Bysani: Engineering manager for product teams generally, you know, deals with a lot of stakeholders, whether it is PMs or customers or, you know, the potential new customers that are coming to the company. So their time, uh, is mostly spent there. They hardly have enough time to, you know, go deep within the product. That’s the nature of their job. But at the same time, they are also expected to be, uh, reasonably technical, but not as technical as engineering leaders of platform teams or infrastructure teams. For the platform teams and infrastructure teams, yes, they also engage with stakeholders, but their stakeholders are mostly internal and other engineering managers. That’s the general setup.

Kovid Batra: Yeah. Yeah.

Suresh Bysani: And, you know, uh, just like how engineering managers are able to guide what the product should look like, platform managers and infrastructure managers should, you know, uh, go deep into what platform or infrastructure we should provide to the rest of the company. And obviously, as the problem statement sounds, that requires a lot more technical depth and focus than the rest of the engineering leaders. So yes, engineering managers for platform and infrastructure are required to be reasonably stronger technically than the rest of the leaders.

Kovid Batra: Totally. I think that’s, that’s the key here. And the balance is something that one needs to identify based on their situation, project, how much of things they need to take care of in their teams. So totally agree to it. Uh, moving on. Uh, I think the most burning piece, uh, I think everyone is talking about it, which is AI, Agentic AI, implementing, uh, AI into the most core legacy services in, in a team, in a company. But I think things need to be highlighted, uh, in a way where people need to understand what needs to be done, why it needs to be done, and, uh, while we were talking a few days back, uh, you mentioned mentoring a few startups and technical founders who are actually doing it at this point of time, and you’re guiding them, and you have seen certain patterns where you feel that there is guidance required in the industry now.

Suresh Bysani: Yeah.

Kovid Batra: So while we have you here, my next question to you is like, what should an engineering manager do in this age of GenAI to, let’s say, stay technically equipped and take the right decisions moving forward?

Suresh Bysani: Yeah. I, I’ll start with this. The first thing is upskilling, right? As we were talking about in our previous, uh, question, uh, most engineering managers have not coded in the GenAI era, right? Because it’s just started.

Kovid Batra: Yeah.

Suresh Bysani: So, but all the new ideas or the new projects, uh, there is a GenAI or an AI flavor to it. That’s where the world is moving towards. I mean, uh, let’s be honest, right? If we don’t upskill ourselves in AI right now, we will be termed legacy. So when there is bottom-up innovation happening within the team, how is the engineering manager supposed to, you know, uh, technically calibrate the project/design/code that is happening in the team? So that is why I say there is a need for upskilling. At Eightfold, uh, what we did is one of our leaders, uh, he said, uh, all the engineering managers, let’s not do anything for a week. Let’s create something with GenAI that is useful for the company and all of you code it, right? I really loved the idea because periodically engineering managers are supposed to step back like this, whether it is in the form of hackathons or ideas or whatever it is, right? They should get their hands dirty in this new tech to get some perspective. And once I did that, it gave me a totally new perspective and I started seeing every idea with this new lens of GenAI, right? And I started asking fundamental questions like why can’t we write an agent to do this? Why can’t we do this? Should we spend a lot of time writing business logic for this, right? That is important for every engineering leader. How do you periodically step back and get your hands dirty and go to the roots? Sometimes it’s not easy because of the commitments that you have. So you have to spend your weekends or, you know, your after hours to go read about some of this, read some papers, write some code. Or it doesn’t have to be something outside. It can be, you know, uh, part of your projects too. Go pick up like five to 10% of your code in one of the projects. Get your hands dirty. So you’ll start being relevant and the amount of confidence that you will get will automatically improve.
And the kind of questions that you’ll start asking for your, you know, uh, immediate reportees will also change and they will start seeing this too. They’ll start feeling that my leader is reasonably technical and I can go and talk to him about anything. So this aspect is very, very important.

Now coming to your second question which is, uh, what are the common mistakes people are making with this, you know, GenAI or these advancements of technologies? See, um, GenAI is great in terms of, you know, um, writing a lot of code on behalf of an engineer, right? Writing a lot of monotonous code on behalf of an engineer. But it is an evolving technology. It’ll have limitations. The fundamental mistake that I’m seeing a lot of people make is they’re assuming that GenAI or the LLMs can replace a lot of strong engineers; maybe in the future, but that’s not the case right now. They’re great for prototyping. They’re great for writing agents. They’re great for, you know, automating some routine mundane tasks, right, and making your product agentic too. That’s all great. They’re moving with great velocity. But the thing is, there’s a lot of difference, uh, between showing this initial prototype and productionizing this. Let’s face it, enterprise customers have a very high bar. They don’t want, you know, something that breaks at scalability or reliability in production, right? Which means while LLM and Agentic worlds offer a lot of fancy ways of doing things, you still need solid engineering design practices around all of this to make sure that your product does not break in production. So that is where I spend a lot of time advising these new founders or, you know, people in large companies who are trying to adopt AI into their SDLC, that this is not going to be a, you know, magical replacement for everything that you guys are doing. Think of it as a friend who is going to assist you or, you know, improve your productivity by 10x, but everything around a solid engineering design or an organization, it’s not a replacement for that, or at least not yet.

Kovid Batra: Makes sense. I think I’d like to deep dive a little bit more on this piece itself, where if you could give us some examples of how, so first of all, where you have seen these problems occurring, like people just going out and implementing AI or agents, uh, without even thinking whether it is gonna make some sense or not, and if you need to do it in the right way..

Suresh Bysani: Yeah.

Kovid Batra: Can you give us some examples? Like, okay, if this is a case, this is how one should proceed step by step. And I think I, I don’t mind if you get a little more technical here explaining what exactly needs to be done.

Suresh Bysani: Yeah. So let’s take a very basic product, right, which, uh, any SaaS application which has all the layers from infrastructure to authentication to product, to, you know, some workflow that SaaS application is supposed to do. So in the non-agentic/AI world, we are all familiar with how to do this, right? We probably do some microservices, we deploy them in Kubernetes or any other compute infrastructure that people are comfortable with. And you know, we write tons and tons of business logic saying, if this is the request, do this. If this is the request, do this. That’s the programming style we are used to, and that’s still very popular. In the world of agents, agents can be thought of as, you know, uh, an LLM abstraction where instead of writing a lot of business logic yourself, you have a set of tools that you author, typically the functions or utils that you have in your microservices. And agents kind of decide what are the right set of tools to execute in order to get things done. The claim is there’s a lot of time people spend in writing business logic and not the utils itself. So you write these utils/tools one time and let agents do the business logic. That’s a very beautiful claim, right? But where it’ll fail is, if you think about enterprise customers, yes, we’ll talk about consumer applications, but let’s talk about enterprise because that’s where most of the immediate money is, right? Enterprise customers expect determinism. So for example, let’s take an application like Jira or, you know, Asana, or whatever application you want to think about, right? They expect a lot of determinism. So let’s say you move a Jira ticket from ‘in-progress’ to say ‘completed’, I mean, I, I’m taking Jira as an example because this is a common enterprise product everybody is familiar with, so they expect it to work deterministically.
Agents, as we know, are just wrappers around LLMs and they are still hallucinating models, right? Uh, so, determinism is a question mark, right? Yes, there are a lot of techniques and tools people are using to improve the determinism factor, but if the determinism is 100%, it’s as good as AI can do everything, right? It’s never going to be the case. So we have to carefully pick and choose the parts of the product which are okay to be non-deterministic. We’ll talk about what they can be. And we obviously know the parts of the product which cannot be non-deterministic. For example, all the permission boundaries, right? One of the common mistakes I see early startups making is they just code permission boundaries with agents. So let’s say given a logged-in user, what are the permissions this person is supposed to have? We can’t let agents guess that. It has to be deterministic because what if there is a mistake and you start seeing your boss’s salary, right? It’s not acceptable. Uh, so similarly, permission boundaries, authentications, authorizations, anything in this layer, definitely no agents. Uh, anything that has strong deterministic workflow requirements, basically state machines moving from one state to another in a very deterministic way, definitely no agents. But there’s a lot of parts of the product where we can get away with not having deterministic code. It’s okay to take one path versus the other. For example, uh, how do I say it? Uh, let’s say you have an agent, you know, which is trying to, uh, act as a persona, let me put it that way. So one of the common examples I can take is, let’s say you are trying to use Jira, uh, and somebody’s trying to generate some reports with Jira, right? So think of it as offline reporting. So whether you do report number 1, 2, 3, or whether you do report number 3, 2, 1 in a different order, it’s okay.
Nobody’s going to, you know, uh, nobody’s going to make a big deal about it. So you get the idea, right? So anywhere there is acceptability in terms of non-determinism, it’s okay to code agents, so that you will reduce the time you’re spending on the business logic. But anywhere you need determinism, you definitely have to have solid code which obeys, you know, the rules of determinism.
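The split Suresh describes — deterministic permission boundaries in plain code, with the agent only choosing among pre-authored tools — can be sketched roughly as follows. This is an illustrative sketch, not Eightfold's implementation; the tool names, the `check_permission` helper, and the `plan` argument (standing in for the LLM's chosen tool order) are all hypothetical:

```python
def check_permission(user, tool_name):
    # Deterministic permission boundary: plain code, never delegated to an agent.
    allowed = {"alice": {"list_tickets", "generate_report"}}
    return tool_name in allowed.get(user, set())

def list_tickets():
    # A pre-authored tool: a util the microservice already has.
    return [101, 102]

def generate_report(ticket_ids):
    # Another pre-authored tool; formatting only, no access decisions.
    return f"report({sorted(ticket_ids)})"

TOOLS = {"list_tickets": list_tickets, "generate_report": generate_report}

def run_agent(user, plan):
    # `plan` stands in for the LLM's tool choices; for offline reporting,
    # either order is acceptable (this is where non-determinism is OK).
    results = []
    for tool_name in plan:
        if not check_permission(user, tool_name):  # enforced in code, not by the LLM
            raise PermissionError(f"{user} may not call {tool_name}")
        if tool_name == "generate_report":
            results.append(TOOLS[tool_name](list_tickets()))
        else:
            results.append(TOOLS[tool_name]())
    return results
```

Because the permission check runs in ordinary code before every tool call, a hallucinating model can at worst reorder the reporting tools; it can never widen access.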

Kovid Batra: Yeah, totally. I think that’s a very good example to explain where things can be implemented and where you need to be a little cautious. I think one more thing that comes to my mind is that every time you’re implementing something, uh, talking in terms of AI, uh, you also need to show the results.

Suresh Bysani: Yeah.

Kovid Batra: Right? Let’s say if I implement GitHub Copilot in my team, I need to make sure, uh, the coding standards are improving, or at least the speed of writing the code is improving. There are fewer performance issues. There are fewer, let’s say, vulnerability or security issues. So similarly, I think, uh, at Eightfold or at any other startup where you are an advisor, do you see these implementations happening and people consciously measuring whether, uh, things are improving or not, or are they just going by, uh, the thing that, okay, if it’s the age of AI, let’s implement and do it, and everything is all positive? They’re not looking at results. They’re not measuring. And if they are, how are they measuring? Can you again, give an example and help us understand?

Suresh Bysani: Yeah. So I think I’ve seen both styles. Uh, the answer to this largely relies on the, you know, influence of, uh, founders and technical leaders within the team. For example, Eightfold is an AI company. Most of the leaders at Eightfold are from strong AI backgrounds. So even before GenAI, they knew how to evaluate a, a model and how to make sure that AI is doing its job. So that goodness will continue even in the GenAI world, right? Typically people do this with Evals frameworks, right? They log everything that is done by AI. And, you know, they kind of understand if, uh, what percentage of it is accurate, right? I mean, they can start with something simple and we can take it all the way fancy. But yes, there are many companies where founders or technical leaders have not worked or they don’t understand AI a lot, right? I mean, there’s, they’re still upskilling, just like all of us.

Kovid Batra: Yeah.

Suresh Bysani: And they don’t know how to really evaluate how good of a job AI is doing. Right? I mean, they are just checking their box saying that yes, I have agents. Yes, I’m using AI. Yes, I’m using LLMs, and whatnot, right? So that’s where the danger is. And, and that’s where I spend a lot of time advising them that you should have a solid framework around observability to understand, you know, how much of these decisions are accurate. You know, what part, how much of your productivity is getting a boost, right? Uh, totally. Right. I think people are now upskilling. That’s where I spend a lot of time educating these new age founders, especially the ones who do not have the AI background, uh, to help them understand that you need to have strong Evals frameworks to understand accuracy and use of this AI for everything that you are, that you’re doing. And, and I see a huge, you know, improvement in, in, in their understanding over time.

Kovid Batra: Perfect. Anything specific that you would like to mention here in terms of your evaluation frameworks for AI, uh, that could really help the larger audience to maybe approach things fundamentally?

Suresh Bysani: Oh, I mean, so there are tons of Evals frameworks on the internet, right? I mean, pick a basic one. Nothing fancy. Obviously it depends on the size of your project and the impact of your AI model; things can change significantly. But for most of the agents that people are developing in-house, pick a very simple Evals framework. I mean, I see a lot of people are using LangGraph and LangSmith nowadays, right? I mean, I’m not married to a framework. People are free to use any framework, but LangSmith is a good example of what observability in the GenAI world should look like, right? They’re nicely logging all the conversations that we are having with the LLM, and you can start looking at the impact of each of these conversations. And over time, you will start understanding whether to tweak your prompt or start providing more context or, you know, maybe build a RAG around it. The whole idea is to understand your interactions with AI because these are all headless agents, right? These are not GPT-like conversations where a user is trying to enter this conversation. Your product is doing this on behalf of you, which means you are not actually seeing what is happening in terms of interactions with this LLM. So having these Evals frameworks will, you know, kind of nicely log everything that we are doing with the LLM and we can start observing what to do in order to improve the accuracy and, you know, get better results. That’s the first idea. So I would start with LangSmith and people can get a lot of ideas from LangSmith, and yes, we can go all fancy from there.
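The observability idea Suresh outlines — log every headless LLM interaction, then measure what fraction passed a correctness check — can be sketched framework-agnostically. This is a hypothetical minimal logger illustrating the concept, not LangSmith's actual API:

```python
import time

class EvalLog:
    """Minimal stand-in for an Evals/observability log (hypothetical API)."""

    def __init__(self):
        self.records = []

    def log(self, prompt, response, passed):
        # Record each headless agent/LLM interaction, plus whether it
        # passed whatever correctness check the team has defined.
        self.records.append({
            "ts": time.time(),
            "prompt": prompt,
            "response": response,
            "passed": passed,
        })

    def accuracy(self):
        # Fraction of logged interactions that passed the check.
        if not self.records:
            return 0.0
        return sum(r["passed"] for r in self.records) / len(self.records)
```

Watching `accuracy()` over time is what tells you whether to tweak the prompt, add context, or build a RAG layer, as discussed above.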

Kovid Batra: Great. I think before we, uh, uh, complete this discussion and, uh, say goodbye to you, I think one important thing that comes to my mind is that implementing AI in any tech organization, there could be various areas, various dimensions where you can take it to, but anything that you think is kind of proven already where people should invest, engineering managers should invest, like blindly, okay, this is something that we can pick and like, see the impact and improve the overall engineering efficiency?

Suresh Bysani: Yes. I, I generally recommend people to start with internal productivity because it is not customer-facing AI. So you’re okay to do experiments and fail, and it’ll give a nice headway for people within the company to upskill for Agentic worlds. There are tons of problems, right? I mean, I have a simple goal: 10% of the PRs that are generated within the company should be AI-generated. It looks like a very big number, but if you think about it, all the unit tests can be written by AI, all the, you know, uh, PagerDuty problems can be taken at first shot by agents, and they can write simple PRs, right? There are tons of internal things that we can just do with agents. Now, agents are becoming very good at code writing and, you know, code generation. Obviously there are still limitations, but for simple things like unit tests, bugs, failures, agents can definitely take a first shot at it. That’s one. And the second thing is, if we think about all these retro documents, internal Confluence documents, or a bunch of non-productive things that a lot of engineering people do, right? Uh, agents can do it without getting bored, right? I mean, think about it. You don’t need to pay any salaries for agents, right? They can continuously work for you. They’ll automate and do all the repetitive and mundane tasks. But in this process, as we’re talking about it, we should start learning these Evals frameworks and improve the accuracy of these internal agents, because internal agents are easy to measure, right? 10% of my PRs. My bugs have reduced this much month over month. Bugs as in, the overall bugs will not reduce. The number of bugs that a developer had to fix versus an agent had to fix, that will reduce, uh, over time, right? So these are very simple metrics to measure and learn, and improve on the agent’s accuracy. Once you have this solid understanding, engineers are the best people.
They have fantastic product context, so they will start looking at gaps. Oh, I can put an agent here. I can put an agent here. Maybe I can do an agent for this part in the product. That’s the natural evolution I recommend to people. I don’t recommend people start with agents in the product direction.
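The internal metrics Suresh mentions are straightforward to compute once PRs and bug fixes are tagged by author. A minimal sketch, assuming a hypothetical `author_type` field ("agent" or "human") on each record; real trackers would need that tagging added first:

```python
def ai_share(records):
    """Fraction of records (PRs, bug fixes) authored by an agent.

    `records` is a list of dicts with a hypothetical "author_type" field,
    either "agent" or "human". Returns 0.0 for an empty list.
    """
    if not records:
        return 0.0
    agent = sum(1 for r in records if r["author_type"] == "agent")
    return agent / len(records)
```

Tracking this month over month gives the "10% of my PRs should be AI-generated" goal a concrete denominator, and the same function works for the developer-fixed-versus-agent-fixed bug count.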

Kovid Batra: Makes sense. Great. I think Suresh, this was a really interesting session. We got some very practical advice around implementing AI and avoiding the pitfalls. Uh, anything else that you would like to say to our audience, uh, as parting advice?

Suresh Bysani: Yeah, so, uh, I’m sure there is a lot of technical audience that is going to see this. Uh, upskill yourself in agents or AI in general. Uh, I think five years ago it was probably not seen as a requirement. There was a group of people who were doing AI and generating models, and the majority of the world was just doing backend/full stack engineering. But right now, the definition of a full stack engineer has changed completely. Right? So a full stack engineer is now writing agents, right? So it doesn’t have to be fine-tuning your models or going into the depth of models, right? That is still model experts’ job, you know? Uh, but at least learning to write programs using agents and incorporating agents as a first-class citizen in your projects; definitely spend a lot of time on that.

Kovid Batra: Great. Thank you so much. That’s our time for today. Pleasure having you.

Suresh Bysani: Thank you. Bye-bye.