Typo's Picks

Introduction

In today's fast-paced and rapidly evolving software development landscape, effective project management is crucial for engineering teams striving to meet deadlines, deliver quality products, and maintain customer satisfaction. Project management not only ensures that tasks are completed on time but also optimizes resource allocation, enhances team collaboration, and improves communication across all stakeholders. A key tool that has gained prominence in this domain is JIRA, which is widely recognized for its robust features tailored for agile project management.

However, while JIRA offers numerous advantages, such as customizable workflows, detailed reporting, and integration capabilities with other tools, it also comes with limitations that can hinder its effectiveness. For instance, teams relying solely on JIRA dashboard gadgets may find themselves missing critical contextual data from the development process. They may obtain a snapshot of project statuses but fail to appreciate the underlying issues impacting progress. Understanding both the strengths and weaknesses of JIRA dashboard gadgets is vital for engineering managers to make informed decisions about their project management strategies.

The Limitations of JIRA Dashboard Gadgets

Lack of Contextual Data

JIRA dashboard gadgets primarily focus on issue tracking and project management, often missing critical contextual data from the development process. While JIRA can show the status of tasks and issues, it does not provide insights into the actual code changes, commits, or branch activities that contribute to those tasks. This lack of context can lead to misunderstandings about project progress and team performance. For example, a task may be marked as "in progress," but without visibility into the associated Git commits, managers may not know if the team is encountering blockers or if significant progress has been made. This disconnect can result in misaligned expectations and hinder effective decision-making.

Static Information

JIRA dashboards built around the road map gadget or sprint burndown gadget can present a static view of project progress that may not reflect real-time changes in the development process. For instance, while a road map or sprint burndown gadget may indicate that a task is "done," it does not account for any recent changes or updates made in the codebase. This static nature can hinder proactive decision-making, as managers may not have access to the most current information about the project's health. Additionally, relying on the historical data surfaced by the issue statistics gadget can create a lag in responding to emerging issues. In a rapidly changing development environment, the ability to react quickly to new information is crucial for maintaining project momentum, which is why teams need to move beyond default chart gadgets like the road map or burndown chart gadget.

Limited Collaboration Insights

Collaboration is essential in software development, yet JIRA dashboards often do not capture the collaborative efforts of the team. Metrics such as code reviews, pull requests, and team discussions are crucial for understanding how well the team is working together. Without this information, managers may overlook opportunities for improvement in team dynamics and communication. For example, if a team is actively engaged in code reviews but this activity is not reflected in JIRA gadgets such as the sprint burndown gadget, managers may mistakenly assume that collaboration is lacking. This oversight can lead to missed opportunities to foster a more cohesive team environment and improve overall productivity.

Overemphasis on Individual Metrics

JIRA dashboards, including copied dashboards, can sometimes encourage a focus on individual performance metrics rather than team outcomes. This can foster an environment of unhealthy competition, where developers prioritize personal achievements over collaborative success. Such an approach can undermine team cohesion and lead to burnout. When individual metrics are emphasized, developers may feel pressured to complete tasks quickly, potentially sacrificing code quality and collaboration. This focus on personal performance can create a culture where teamwork and knowledge sharing are undervalued, ultimately hindering project success.

Inflexibility in Reporting

JIRA dashboard layouts often rely on predefined metrics and reports, which may not align with the unique needs of every project or team. This inflexibility can result in a lack of relevant insights that are critical for effective project management. For example, a team working on a highly innovative project may require different metrics than a team maintaining legacy software. The inability to customize reports can lead to frustration and a sense of disconnect from the data being presented.

The Power of Integrating Git Data with JIRA

Integrating Git data with JIRA provides a more holistic view of project performance and developer productivity. Here’s how this integration can enhance insights:

Real-Time Visibility into Development Activity

By connecting Git repositories with JIRA, engineering managers can gain real-time visibility into commits, branches, and pull requests associated with JIRA issues and issue statistics. This integration allows teams to see the actual development work being done, providing context to the status of tasks on the JIRA dashboard gadget. For instance, if a developer submits a pull request that relates to a specific JIRA ticket, the project manager instantly knows that work is ongoing, fostering transparency. Additionally, automated notifications for changes in the codebase linked to JIRA issues keep everyone updated without having to dig through multiple tools. This integrated approach ensures that management has a clear understanding of actual progress rather than relying on static task statuses.

Enhanced Collaboration and Communication

Integrating Git data with JIRA facilitates better collaboration among team members. Developers can reference JIRA issues in their commit messages, making it easier for the team to track changes related to specific tasks. This transparency fosters a culture of collaboration, as everyone can see how their work contributes to the overall project goals. Moreover, by having a clear link between code changes and JIRA issues, team members can engage in more meaningful discussions during stand-ups and retrospectives. This enhanced communication can lead to improved problem-solving and a stronger sense of shared ownership over the project.

Improved Risk Management

With integrated Git and JIRA data, engineering managers can identify potential risks more effectively. By monitoring commit activity and pull requests alongside JIRA issue statuses, managers can spot trends and anomalies that may indicate project delays or technical challenges. For example, if there is a sudden decrease in commit activity for a specific task, it may signal that the team is facing challenges or blockers. This proactive approach allows teams to address issues before they escalate, ultimately improving project outcomes and reducing the likelihood of last-minute crises.
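As a rough illustration of this kind of monitoring, the sketch below flags issues whose commit activity has dropped sharply week over week. The commit records and issue keys are hypothetical; in practice they would come from your Git provider's API after parsing issue keys out of commit messages.

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Hypothetical commits already fetched from the Git host, each tagged with
    # the JIRA issue key parsed from its commit message.
    commits = [
        {"issue": "PROJ-101", "timestamp": datetime(2024, 5, 19, 10, 0)},
        {"issue": "PROJ-101", "timestamp": datetime(2024, 5, 22, 15, 30)},
        {"issue": "PROJ-102", "timestamp": datetime(2024, 5, 23, 9, 0)},
        {"issue": "PROJ-102", "timestamp": datetime(2024, 5, 30, 11, 0)},
    ]

    def flag_stalled_issues(commits, now, drop_ratio=0.5):
        """Return issues whose commit count this week fell below drop_ratio of last week's."""
        this_week, last_week = defaultdict(int), defaultdict(int)
        for c in commits:
            age = now - c["timestamp"]
            if age <= timedelta(days=7):
                this_week[c["issue"]] += 1
            elif age <= timedelta(days=14):
                last_week[c["issue"]] += 1
        return [
            issue for issue, prev in last_week.items()
            if prev > 0 and this_week.get(issue, 0) < prev * drop_ratio
        ]

    print(flag_stalled_issues(commits, now=datetime(2024, 6, 1)))  # ['PROJ-101']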

Comprehensive Reporting and Analytics

The combination of JIRA and Git data enables more comprehensive reporting and analytics. Engineering managers can analyze not only task completion rates but also the underlying development activity that drives those metrics. This deeper understanding can inform better decision-making and strategic planning for future projects. For instance, by analyzing commit patterns and pull request activity, managers can identify trends in team performance and areas for improvement. This data-driven approach allows for more informed resource allocation and project planning, ultimately leading to more successful outcomes.

Best Practices for Integrating Git Data with JIRA

To maximize the benefits of integrating Git data with JIRA, engineering managers should consider the following best practices:

Select the Right Tools

Choose integration tools that fit your team's specific needs. Tools like Typo can facilitate the connection between Git and JIRA smoothly. Additionally, JIRA integrates directly with several source control systems, allowing for automatic updates and real-time visibility.

Sprint analysis in Typo

If you’re ready to enhance your project delivery speed and predictability, consider integrating Git data with your JIRA dashboards. Explore Typo! We can help you do this in a few clicks & make it one of your favorite dashboards.

Establish Commit Message Guidelines

Encourage your team to adopt consistent commit message guidelines. Including JIRA issue keys in commit messages will create a direct link between the code change and the JIRA issue. This practice not only enhances traceability but also aids in generating meaningful reports and insights. For example, a commit message like 'JIRA-123: Fixed the login issue' can help managers quickly identify relevant commits related to specific tasks.
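Teams that want to enforce this convention automatically can do so with a simple client-side hook. The sketch below is a minimal, hypothetical example of a commit-msg hook that rejects commits whose message does not start with a JIRA issue key; the key pattern is illustrative and would be adjusted to your own project keys.

    #!/usr/bin/env python3
    # Save as .git/hooks/commit-msg and make it executable.
    # Git passes the path to the commit message file as the first argument.
    import re
    import sys

    ISSUE_KEY = re.compile(r"^[A-Z][A-Z0-9]+-\d+: ")  # e.g. "JIRA-123: Fixed the login issue"

    def main() -> int:
        with open(sys.argv[1], encoding="utf-8") as f:
            message = f.read()
        if not ISSUE_KEY.match(message):
            print("Commit message must start with a JIRA issue key, e.g. 'JIRA-123: ...'")
            return 1  # non-zero exit aborts the commit
        return 0

    if __name__ == "__main__":
        sys.exit(main())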

Automate Workflows

Leverage automation features available in both JIRA and Git platforms to streamline the integration process. For instance, set up automated triggers that update JIRA issues based on events in Git, such as moving a JIRA issue to 'In Review' once a pull request is submitted in Git. This reduces manual updates and alleviates the administrative burden on the team.
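Most teams set this up through their Git host's built-in JIRA integration or automation rules, but the sketch below shows roughly what such a trigger does under the hood, using the JIRA Cloud REST API to move an issue to 'In Review' when a pull request is opened. The base URL, credentials, and the assumption that the issue key appears in the PR title are all placeholders.

    import re
    import requests

    JIRA_BASE = "https://your-domain.atlassian.net"   # placeholder
    AUTH = ("you@example.com", "your-api-token")      # JIRA Cloud uses email + API token

    def transition_issue(issue_key: str, target_status: str = "In Review") -> None:
        url = f"{JIRA_BASE}/rest/api/3/issue/{issue_key}/transitions"
        # Look up the transition id that corresponds to the target status.
        transitions = requests.get(url, auth=AUTH).json()["transitions"]
        transition_id = next((t["id"] for t in transitions if t["name"] == target_status), None)
        if transition_id is None:
            raise ValueError(f"No transition named {target_status!r} for {issue_key}")
        # Apply the transition so the issue moves to the target status.
        requests.post(url, auth=AUTH, json={"transition": {"id": transition_id}}).raise_for_status()

    def on_pull_request_opened(pr_title: str) -> None:
        # Assumes the team puts the issue key in the PR title, e.g. "JIRA-123: Add login fix".
        match = re.search(r"[A-Z][A-Z0-9]+-\d+", pr_title)
        if match:
            transition_issue(match.group(0))

    on_pull_request_opened("JIRA-123: Add login fix")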

Train Your Team

Providing adequate training to your team ensures everyone understands the integration process and how to effectively use both tools together. Conduct workshops or create user guides that outline the key benefits of integrating Git and JIRA, along with tips on how to leverage their combined functionalities for improved workflows.

Monitor and Adapt

Implement regular check-ins to assess the effectiveness of the integration. Gather feedback from team members on how well the integration is functioning and identify any pain points. This ongoing feedback loop allows you to make incremental improvements, ensuring the integration continues to meet the needs of the team.

Utilize Dashboards for Visualization

Create comprehensive dashboards that visually represent combined metrics from both Git and JIRA. Tools like JIRA dashboards, Confluence, or custom-built data visualization platforms can provide a clearer picture of project health. Metrics can include the number of active pull requests, average time in code review, or commit activity relevant to JIRA task completion.
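As a small, hypothetical sketch of how a couple of these metrics might be computed once pull request data has been exported (the record fields are illustrative, not taken from any particular API):

    from datetime import datetime

    # Hypothetical export of pull requests linked to JIRA issues.
    pull_requests = [
        {"issue": "PROJ-101", "opened": datetime(2024, 6, 3, 9, 0), "merged": datetime(2024, 6, 4, 17, 0)},
        {"issue": "PROJ-102", "opened": datetime(2024, 6, 5, 11, 0), "merged": None},  # still open
    ]

    active_prs = [pr for pr in pull_requests if pr["merged"] is None]

    review_hours = [
        (pr["merged"] - pr["opened"]).total_seconds() / 3600
        for pr in pull_requests
        if pr["merged"] is not None
    ]
    avg_review_hours = sum(review_hours) / len(review_hours) if review_hours else 0.0

    print(f"Active pull requests: {len(active_prs)}")
    print(f"Average time in code review: {avg_review_hours:.1f} hours")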

Encourage Regular Code Reviews

With the changes being reflected in JIRA, create a culture around regular code reviews linked to specific JIRA tasks. This practice encourages collaboration among team members, ensures code quality, and keeps everyone aligned with project objectives. Regular code reviews also lead to knowledge sharing, which strengthens the team's overall skill set.

Case Study:

25% Improvement in Task Completion with Jira-Git Integration at Trackso

To illustrate the benefits of integrating Git data with JIRA, let’s consider a case study of a software development team at a company called Trackso.

Background

Trackso, a remote monitoring platform for solar energy, was developing a new SaaS platform with a diverse team of developers, designers, and project managers. The team relied heavily on JIRA for tracking project statuses, but they found their productivity hampered by several issues:

  • Tasks had vague statuses that did not reflect actual progress to project managers.
  • Developers frequently worked in isolation without insight into each other's code contributions.
  • They could not correlate project delays with specific code changes or reviews, leading to poor risk management.

Implementation of Git and JIRA Integration

In 2022, Trackso's engineering manager decided to integrate Git data with JIRA. They chose GitHub for version control, given its robust collaborative features. The team set up automatic links between their JIRA tickets and corresponding GitHub pull requests and standardized their commit messages to include JIRA issue keys.

Metrics of Improvement

After implementing the integration, Trackso experienced significant improvements within three months:

  • Increased Collaboration: There was a 40% increase in code review participation as developers began referencing JIRA issues in their commits, facilitating clearer discussions during code reviews.
  • Reduced Delivery Times: Average task completion times decreased by 25%, as developers could see almost immediately when tasks were being actively worked on or if blockers arose.
  • Improved Risk Management: The team reduced project delays by 30% due to enhanced visibility. For example, the integration helped identify that a critical feature was lagging due to slow pull request reviews. This enabled team leads to improve their code review workflows.
  • Boosted Developer Morale: Developer satisfaction surveys indicated that 85% of team members felt more engaged in their work due to improved communication and clarity around task statuses.

Challenges Faced

Despite these successes, Trackso faced challenges during the integration process:

  • Initial Resistance: Some team members were hesitant to adopt the new practices and the new personal dashboards. The engineering manager organized training sessions to showcase the benefits of integrating Git and JIRA and of having a personal dashboard, promoting buy-in from the team and helping them move beyond the default dashboard.
  • Maintaining Commit Message Standards: Initially, not all developers consistently used the issue keys in their commit messages. The team revisited training sessions and created a shared repository of best practices to ensure adherence.

Conclusion

While JIRA dashboards are valuable tools for project management, they are insufficient on their own for engineering managers seeking to improve project delivery speed and predictability. By integrating Git data with JIRA, teams can gain richer insights into development activity, enhance collaboration, and manage risks more effectively. This holistic approach empowers engineering leaders to make informed decisions and drive continuous improvement in their software development processes. Embracing this integration will ultimately lead to better project outcomes and a more productive engineering culture. As the software development landscape continues to evolve, leveraging the power of both JIRA and Git data will be essential for teams looking to stay competitive and deliver high-quality products efficiently.

Developer productivity is the new buzzword across the industry. Measuring developer productivity has gone mainstream since the shift to remote work, and companies like McKinsey are publishing articles titled "Yes, you can measure software developer productivity," causing a stir in the software development community. So we thought we should share our take on developer productivity.

We will be covering the following whats, whys, and hows of developer productivity in this piece:

  • What is developer productivity?
  • Why do we need to measure developer productivity?
  • How do we measure it at the team and individual level, and why is it more complicated to measure developer productivity than sales or hiring productivity?
  • Challenges and dangers of measuring developer productivity, and what not to measure.
  • What is the impact of measuring developer productivity on engineering culture?

What is Developer Productivity?

Developer productivity refers to the effectiveness and efficiency with which software developers create high-quality software that meets business goals. It encompasses various dimensions, including code quality, development speed, team collaboration, and adherence to best practices. For engineering managers and leaders, understanding developer productivity is essential for driving continuous improvement and achieving successful project outcomes.

Key Aspects of Developer Productivity

Quality of Output: Developer productivity is not just about the quantity of code or code changes produced; it also involves the quality of that code. High-quality code is maintainable, readable, and free of significant bugs, which ultimately contributes to the overall success of a project.

Development Speed: This aspect measures how quickly developers can deliver features, fixes, and updates (usually referred to as developer velocity). While developer velocity is important, it should not come at the expense of code quality. Effective engineering teams strike a balance between delivering quickly and maintaining high standards.

Collaboration and Team Dynamics: Successful software development relies heavily on effective teamwork. Collaboration tools and practices that foster communication and knowledge sharing can significantly enhance developer productivity. Engineering managers should prioritize creating a collaborative environment that encourages teamwork.

Adherence to Best Practices for Outcomes: Following coding standards, conducting code reviews, and implementing testing protocols are essential for maintaining development productivity. These practices ensure that developers produce high-quality work consistently, which can lead to improved project outcomes.

Why do we need to measure dev productivity?

We all know that no one loves to be measured, but CEOs and CFOs have an undying love for measuring the ROI of their teams, which we can't ignore. The higher the development productivity, the higher the ROI. However, measuring developer productivity is also essential for engineering managers and leaders who want to optimize their teams' performance: we can't improve something that we don't measure.

Understanding how effectively developers work can lead to improved project outcomes, better resource allocation, and enhanced team morale. In this section, we will explore the key reasons why measuring developer productivity is crucial for engineering management.

Enhancing Team Performance

Measuring developer productivity allows engineering managers to identify strengths and weaknesses within their teams. By analyzing developer productivity metrics, leaders can pinpoint areas where developers excel and where they may need additional support or resources. This insight enables managers to tailor training programs, allocate tasks more effectively, and foster a culture of continuous improvement.

Team's insights in Typo

Driving Business Outcomes

Developer productivity is directly linked to business success. By measuring development team productivity, managers can assess how effectively their teams deliver features, fix bugs, and contribute to overall project goals. Understanding productivity levels helps align development efforts with business objectives, ensuring that the team is focused on delivering value that meets customer needs.

Improving Resource Allocation

Effective measurement of developer productivity enables better resource allocation. By understanding how much time and effort are required for various tasks, managers can make informed decisions about staffing, project timelines, and budget allocation. This ensures that resources are utilized efficiently, minimizing waste and maximizing output.

Fostering a Positive Work Environment

Measuring developer productivity can also contribute to a positive work environment. By recognizing high-performing teams and individuals, managers can boost morale and motivation. Additionally, understanding productivity trends can help identify burnout or dissatisfaction, allowing leaders to address issues proactively and create a healthier workplace culture.

Developer surveys insights in Typo

Facilitating Data-Driven Decisions

In today’s fast-paced software development landscape, data-driven decision-making is essential. Measuring developer productivity provides concrete data that can inform strategic decisions. Whether it's choosing new tools, adopting agile methodologies, or implementing process changes, having reliable developer productivity metrics allows managers to make informed choices that enhance team performance.

Investment distribution in Typo

Encouraging Collaboration and Communication

Regularly measuring productivity can highlight the importance of collaboration and communication within teams. By assessing metrics related to teamwork, such as code reviews and pair programming sessions, managers can encourage practices that foster collaboration. This not only improves productivity but also the overall developer experience by strengthening team dynamics and knowledge sharing.

Ultimately, understanding developer experience and measuring developer productivity leads to better outcomes for both the team and the organization as a whole.

How do we measure Developer Productivity?

Measuring developer productivity is essential for engineering managers and leaders who want to optimize their teams' performance.

Strategies for Measuring Productivity

Focus on Outcomes, Not Outputs: Shift the emphasis from measuring outputs like lines of code to focusing on outcomes that align with business objectives. This encourages developers to think more strategically about the impact of their work.

Measure at the Team Level: Assess productivity at the team level rather than at the individual level. This fosters team collaboration, knowledge sharing, and a focus on collective goals rather than individual competition.

Incorporate Qualitative Feedback: Balance quantitative metrics with qualitative feedback from developers through surveys, interviews, and regular check-ins. This provides valuable context and helps identify areas for improvement.

Encourage Continuous Improvement: Position productivity measurement as a tool for continuous improvement rather than a means of evaluation. Encourage developers to use metrics to identify areas for growth and work together to optimize workflows and development processes.

Lead by Example: As engineering managers and leaders, model the behavior you want to see in your team & team members. Prioritize work-life balance, encourage risk-taking and innovation, and create an environment where developers feel supported and empowered.

Measuring Dev productivity involves assessing both team and individual contributions to understand how effectively developers are delivering value through their development processes. Here’s how to approach measuring productivity at both levels:

Team-Level Developer Productivity

Measuring productivity at the team level provides a more comprehensive view of how collaborative efforts contribute to project success. Here are some effective metrics:

DORA Metrics

The DevOps Research and Assessment (DORA) metrics are widely recognized for evaluating team performance. Key metrics include the following (a minimal calculation sketch appears after the list):

  • Deployment Frequency: How often the software engineering team releases code to production.
  • Lead Time for Changes: The time taken for committed code to reach production.
  • Change Failure Rate: The percentage of deployments that result in failures.
  • Time to Restore Service: The time taken to recover from a failure.
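The sketch below is a minimal, hypothetical illustration of how these four metrics can be computed from a list of deployment records collected over an observation window; the record format is assumed rather than taken from any particular tool.

    from datetime import datetime, timedelta

    # Hypothetical deployment records for a 30-day window.
    deployments = [
        {
            "deployed_at": datetime(2024, 6, 3, 14, 0),
            "committed_at": datetime(2024, 6, 2, 9, 0),   # when the deployed change was committed
            "failed": False,
            "restored_at": None,                          # set when a failed deployment is recovered
        },
        {
            "deployed_at": datetime(2024, 6, 10, 16, 0),
            "committed_at": datetime(2024, 6, 9, 11, 0),
            "failed": True,
            "restored_at": datetime(2024, 6, 10, 18, 30),
        },
    ]
    window_days = 30

    deployment_frequency = len(deployments) / window_days  # deployments per day

    lead_times = [d["deployed_at"] - d["committed_at"] for d in deployments]
    lead_time_for_changes = sum(lead_times, timedelta()) / len(lead_times)

    failures = [d for d in deployments if d["failed"]]
    change_failure_rate = len(failures) / len(deployments)

    restore_times = [d["restored_at"] - d["deployed_at"] for d in failures if d["restored_at"]]
    time_to_restore = sum(restore_times, timedelta()) / len(restore_times) if restore_times else None

    print(deployment_frequency, lead_time_for_changes, change_failure_rate, time_to_restore)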

Issue Cycle Time

This metric measures the time taken from the start of work on a task to its completion, providing insights into the efficiency of the software development process.

Team Satisfaction and Engagement

Surveys and feedback mechanisms can gauge team morale and satisfaction, which are critical for long-term productivity.

Collaboration Metrics

Assessing the frequency and quality of code reviews, pair programming sessions, and communication can provide insights into how well the software engineering team collaborates.

Individual Developer Productivity

While team-level metrics are crucial, individual developer productivity also matters, particularly for performance evaluations and personal development. Here are some metrics to consider:

  • Pull Requests and Code Reviews: Tracking the number of pull requests submitted and the quality of code reviews can provide insights into an individual developer's engagement and effectiveness.
  • Commit Frequency: Measuring how often a developer commits code can indicate their active participation in projects, though it should be interpreted with caution to avoid incentivizing quantity over quality.
  • Personal Goals and Outcomes: Setting individual objectives related to project deliverables and tracking their completion can help assess individual productivity in a meaningful way.
  • Skill Development: Encouraging developers to pursue training and certifications can enhance their skills, contributing to overall productivity.

Measuring developer productivity metrics presents unique challenges compared to more straightforward metrics used in sales or hiring. Here are some reasons why:

  • Complexity of Work: Software development involves intricate problem-solving, creativity, and collaboration, making it difficult to quantify contributions accurately. Unlike sales, where metrics like revenue generated are clear-cut, developer productivity encompasses various qualitative aspects that are harder to measure for project management.
  • Collaborative Nature: Development work is highly collaborative. Individual contributions are often intertwined with team efforts, making it challenging to isolate the impact of one developer's work. In sales, individual performance is typically more straightforward to assess based on personal sales figures.
  • Inadequate Traditional Metrics: Traditional metrics such as Lines of Code (LOC) and commit frequency often fail to capture the true essence of a developer's productivity. These metrics can incentivize quantity over quality, leading developers to produce more code without necessarily improving the software's functionality or maintainability. This focus on superficial metrics can distort the understanding of a developer's actual contributions.
  • Varied Work Activities: Developers engage in various activities beyond coding, including debugging, code reviews, and meetings. These essential tasks are often overlooked in productivity measurements, whereas sales roles typically have more consistent and quantifiable activities.
  • Productivity Tools and Software Development Process: The developer productivity tools and methodologies used in software development are constantly changing, making it difficult to establish consistent metrics. In contrast, sales processes tend to be more stable, allowing for easier benchmarking and comparison.

By employing a balanced approach that considers both quantitative and qualitative factors, with a few developer productivity tools, engineering leaders can gain valuable insights into their teams' productivity and foster an environment of continuous improvement & better developer experience.

Challenges of measuring Developer Productivity - What not to Measure?

Measuring developer productivity is a critical task for engineering managers and leaders, yet it comes with its own set of challenges and potential pitfalls. Understanding these challenges is essential to avoid the dangers of misinterpretation and to ensure that developer productivity metrics genuinely reflect the contributions of developers. In this section, we will explore the challenges of measuring developer productivity and highlight what not to measure.

Challenges of Measuring Developer Productivity

  • Complexity of Software Development: Software development is inherently complex, involving creativity, problem-solving, and collaboration. Unlike more straightforward fields like sales, where performance can be quantified through clear metrics (e.g., sales volume), developer productivity is multifaceted and includes various non-tangible elements. This complexity makes it difficult to establish a one-size-fits-all metric.
  • Inadequate Traditional Metrics: Traditional metrics such as Lines of Code (LOC) and commit frequency often fail to capture the true essence of developer productivity. These metrics can incentivize quantity over quality, leading developers to produce more code without necessarily improving the software's functionality or maintainability. This focus on superficial metrics can distort the understanding of a developer's actual contributions.
  • Team Dynamics and Collaboration: Measuring individual productivity can overlook the collaborative nature of software development. Developers often work in teams where their contributions are interdependent. Focusing solely on individual metrics may ignore the synergistic effects of collaboration, mentorship, and knowledge sharing, which are crucial for a team's overall success.
  • Context Ignorance: Developer productivity metrics often fail to consider the context in which developers work. Factors such as project complexity, team dynamics, and external dependencies can significantly impact productivity but are often overlooked in traditional assessments. This lack of context can lead to misleading conclusions about a developer's performance.
  • Potential for Misguided Incentives: Relying heavily on specific metrics can create perverse incentives. For example, if developers are rewarded based on the number of commits, they may prioritize frequent small commits over meaningful contributions. This can lead to a culture of "gaming the system" rather than fostering genuine productivity and innovation.

What Not to Measure

  • Lines of Code (LOC): While LOC can provide some insight into coding activity, it is not a reliable measure of productivity. More code does not necessarily equate to better software. Instead, focus on the quality and impact of the code produced.
  • Commit Frequency: Tracking how often developers commit code can give a false sense of productivity. Frequent commits do not always indicate meaningful progress and can encourage developers to break down their work into smaller, less significant pieces.
  • Bug Counts: Focusing on the number of bugs reported or fixed can create a negative environment where developers feel pressured to avoid complex tasks that may introduce bugs. This can stifle innovation and lead to a culture of risk aversion.
  • Time Spent on Tasks: Measuring how long developers spend on specific tasks can be misleading. Developers may take longer on complex problems that require deep thinking and creativity, which are essential for high-quality software development.

Measuring developer productivity is fraught with challenges and dangers that engineering managers must navigate carefully. By understanding these complexities and avoiding outdated or superficial metrics, leaders can foster a more accurate and supportive environment for their development teams.

What is the impact of measuring Dev productivity on engineering culture?

Developer productivity improvements are a critical factor in the success of software development projects. For engineering managers and technology leaders, measuring and optimizing developer productivity is essential for driving development team productivity and delivering successful outcomes. However, measuring development productivity can have a significant impact on engineering culture and software engineering talent, which must be carefully navigated. Let's talk about measuring developer productivity while maintaining a healthy and productive engineering culture.

Measuring developer productivity presents unique challenges compared to other fields. The complexity of software development, inadequate traditional metrics, team dynamics, and lack of context can all lead to misguided incentives and decreased morale. It's crucial for engineering managers to understand these challenges to avoid the pitfalls of misinterpretation and ensure that developer productivity metrics genuinely reflect the contributions of developers.

Remember, the goal is not to maximize metrics but to create a development environment where software engineers can thrive and deliver maximum value to the organization.

Development teams using Typo experience a 30% improvement in Developer Productivity. Want to Try Typo?

Member's insights in Typo

In this episode of the groCTO Originals podcast, host Kovid Batra talks to Venkat Rangasamy, the Director of Engineering at Oracle & an advisory member at HBR, about 'How AI is Revolutionizing Software Engineering'.

Venkat discusses his journey from a humble background to his current role and his passion for mentorship and generative AI. The main focus is on the revolutionary impact of AI on the Software Development Life Cycle (SDLC), making product development cheaper, more efficient, and of higher quality. The conversation covers the challenges of using public LLMs versus local LLMs, the evolving role of developers, and actionable advice for engineering leaders in startups navigating this transformative phase.

Timestamps

  • 00:00 - Introduction
  • 00:58 - Venkat's background
  • 01:59 - Venkat's Personal and Professional Journey
  • 05:11 - The Importance of Mentorship and Empathy
  • 09:19 - AI's Role in Modern Engineering
  • 15:01 - Security and IP Concerns with AI
  • 28:56 - Actionable Advice for Engineering Leaders
  • 32:56 - Conclusion and Final Thoughts

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of the groCTO podcast. And today with us, we have a very special guest, Mr. Venkat Rangasamy. He's the Director of Engineering at Oracle. He is the advisor at HBR Advisory Council, where he's helping HBR create content on leadership and management. He comes with 18 plus years of engineering and leadership experience. It's a pleasure to have you on the show, Venkat. Welcome. 

Venkat Rangasamy: Yup. Likewise. Thank you. Thanks for the opportunity to discuss on some of the hot topics what we have. I'm, I'm pleasured to be here. 

Kovid Batra: Great, Venkat. So I think there is a lot to talk about, uh, what's going on in the engineering landscape. And just for the audience, uh, today's topic is around, uh, how AI is impacting the overall engineering landscape and Venkat coming from that space with an immense experience and exposure, I think there will be a lot of insights coming in from your end. Uh, but before we move on to that section, uh, I would love to know a little bit more about you. Our audience would also love to know a little bit more about you. So anything that you would like to share, uh, from your personal life, from your professional journey, any hobbies, any childhood memories that shape up who you are today, how things have changed for you. We would love to hear about you. Yeah. 

Venkat Rangasamy: Yup. Um, in, in, in my humble background, I started, um, without nothing much in place, where, um, started my career and even studies, I did really, really on like, not even electricity to go through to, when we went for studies. That's how I started my study, whole schooling and everything. Then moved on to my college. Again, everything on scholarship. It's, it's like, that's where I started my career. One thing kept me motivated to go to places where, uh, different things and exploring opportunities, mentorship, right? That something is what shaped me from my school when I didn't have even, have food to eat for a day. Still, the mentorship and people who helped me is what I do today. 

With that context, why I'm passionate about the generative AI and other areas where I, I connect the dots is usually we used to have mentorship where people will help you, push you, take you in the right direction where you want to be in the different challenges they put together, right? Over a period of time, the mentorship evolved. Hey, I started with a physical mentor. Hey, this is how they handhold you, right? Each and every step of the way what you do. Then when your career moves along, then that, that handholding becomes little off, like it becomes slowly, it becomes like more of like instructions. Hey, this is how you need to do, get it done, right? The more you grow, even it will be abstracted. The one piece what I miss is having the handholding mentorship, right? Even though you grow your career, in the long run, you need something to be handholding you to progress along the way as needed. I see one thing that's motivated me to be part of the generative AI and see what is going on is, it could be another mentor for you to shape your roles and responsibility, your career, how do you want to proceed, bounce your ideas and see where, where you want to go from there on the problem that you have, right? In the context of the work-related stuff. 

Um, how, how you can, as a person, you can shape your career is something I'm vested, interested in people to be successful. In the long run, that's my passion to make people successful. The path that I've gone through, I just want to help people in a way to make them successful. That's my belief. I think making, pulling like 10 to 100, how many people you can pull out. The way when you grow is equally important. It's just not your growth yourself. Being part of that whole ecosystem, bring everybody around it. Everybody's career is equally important. I'm passionate about that and I'm happy to do that. And in my way, people come in. I want to make sure we grow together and and make them successful. 

Kovid Batra: Yeah, I think it's, uh, it's because of your humble background and the hardships that you've seen in the early of your, uh, childhood and while growing up, uh, you, you share that passion and, uh, you want to help other folks to grow and evolve in their journeys. But, uh, the biggest problem, uh, like when, when I see, uh, with people today is they, they lack that empathy and they lack that motivation to help people. Why do you think it's there and how one can really overcome this? Because in my foundation, uh, in my fundamental beliefs, we, as humans are here to give back to the community, give back to this world, and that's the best feeling, uh, that I have also experienced in my life, uh, over the last few years. I am not sure how to instill that in people who are lacking that motivation to do so. In your experience, how do you, how do you see, how do you want to inspire people to inspire others? 

Venkat Rangasamy: Yeah. No, it's, it's, it's like, um, It goes both ways, right? When you try to bring people and make them better is where you can grow yourself. And it becomes like, like last five to 10 years, the whole industry's become like really mechanics, like the expectation went so much, the breathing space. We do not have a breathing space. Hey, I want to chase my next, chase my next, chasing the next one. We leave the bottom food chain, like, hey, bring the food chain entirely with you until you see the taste of it in one product building. Bringing entire food chain to the ecosystem to bring them success is what makes your team at the end of the day. If we start seeing the value for that, people start spending more time on growing other people where they will make you successful. It's important. And that food chain, if it breaks, if it broke, or you, you kind of keep the food chain outside of your progression or growth, that's not actual growth because at one point of time, you get the roadblocks, right? At that point of time, your complete food chain is broken, right? Similar way, your career, the whole team, food chain is, it's completely broken. It's hard to bring them back, get the product launched at the time what you want to do. It's, it's, it's about building a trust, bring them up to speed, make them part of you, is what you have to do make yourself successful. Once you start seeing that in building a products, that will be the model. I think the people will follow that. 

The part is you rightly pointed out empathy, right? Have some empathy, right? Career can, it can be, can, can, it can go its own progress, but don't, don't squeeze too much to make it like I want to be like, it won't happen like in a timely manner like every six months and a year. No, it takes its own course of action. Go with this and make it happen, right? There are ups and downs in careers. Don't make, don't think like every, every quarter and every year, my career should be successful. No, that's not how it works. Then, then there is no way you see failure in your career, right? That's not the way equilibrium is. If that happened, everybody becomes evil. That's not a point, right? Every, everything in the context of how do you bring, uplift people is equally important. And I think people should start focusing more on the empathy and other stuff than just bringing as an IC contributor. Then you want to be successful in your own role, be an IC contributor, then don't be a professional manager bringing your whole.. There's a chain under you who trust you and build their career on top of your growth, right? That's important. When you have that responsibility, be meaningful, how do you bring them and uplift them is equally important. 

Kovid Batra: Cool. I think, uh, thanks a lot, uh, for this sweet and, uh, real intro about yourself. Uh, we got to, uh, know you a little more now. And with that, I, I'm sorry, but I was talking to you on LinkedIn, uh, from some time and I see that you have been passionately working with different startups and companies also, right, uh, in the space of AI. So, uh, With this note, I think let's move on to our main section, um, where you would, uh, be, where we would be interested in knowing, uh, what kind of, uh, ideas and thoughts, uh, are, uh, encompassing this AI landscape now, where engineering is changing on a day-in and day-out basis. So let's move on to our main section, uh, how AI is impacting or changing the engineering landscape. So, starting with your, uh, uh, advisories and your startups that you're working with, what are the latest things that are going on in the market you are associated with and how, how is technology getting impacted there? 

Venkat Rangasamy: Here is, here is what the.. Git analogy, I just want to give some history background about how AI is getting mainstream and people are not quite realizing what's happening around us, right? The part is I think 2010, when we started presenting cloud computing to folks, um, in the banking industry, I used to work for a banking customer. People really laughed at it. Hey, my data will be with me. I don't think it will move any time closer to cloud or anything. It will be with, with and on from, it is not going to change, right? But, you know, over a period of time, cloud made it easy. And, and any startups that build an application don't need to set up any infrastructure or anything, because it gives an easy way to do it. Just put your card, your infrastructure is up and running in a couple of hours, right? That revolutionized a lot the way we deploy and manage our applications.

The second pivotal moment in our history is mobile apps, right? After that, you see the application dominance was with enterprise most of the time. Over a period of time, when mobile got introduced, the distribution channels became easier to reach out to end users, right? Then a lot of billion-dollar unicorns like Uber and Spotify, everything got built out. That's the second big revolution happening. After mobile, I would say there were foundations happening like big data and data analytics. There is some part of ML, it, over a period of time it happened. But revolutionizing the whole aspect of the software, like how cloud and mobile had an impact on the industry, I see AI become the next one. The reason is, um, as of now, the software are built in a way, it's traditional SDLC practice, practice set up a long time ago. What, what's happening around now is that practice is getting questioned and changed a bit in the context of how are we going to develop a software, make them cheaper, more productive and quality deliverables. We used to do it in the 90s. If you've worked during that time, right, COBOL and other things, we used to do something called extreme programming. Peer programming and extreme programming is you, you have an assistant, you sit together, write together a bunch of instructions, right? That's how you start coding and COBOL and other things to validate your procedures. The extreme programming went away. And we started doing code based, IDE based suggestions and other things for developers. But now what's happening is it's coming 360, and everything is how Generative AI is influencing the whole aspect of software industry is, is, is it's going to be impactful for each and every life cycle of the software industry.

And it's just at the initial stage, people are figuring out what to do. From my, my interaction and what I do in my free time with NJ, Generative AI to Change this SDLC process in a meaningful way, I see there will be a profound impact on what we do in a software as software developers. From gathering requirements until deploying, deploying that software into customers and post support into a lifecycle will have a meaningful impact, impact. What does that mean? It'll have cheaper product development, quality deliverables. and having good customer service. What does it bring in over a period of time? It'll be a trade off, but that's where I think it's heading at this point of time. Some folks have started realizing, injecting their SDLC process into generative AI in some shape and form to make them better.

We can go in detail of like how each phases will look like, but that's, that's what I see from industry point of view, how folks are approaching generative AI. There is, there is, it's very conservative. I understand because that's how we started with cloud and other areas, but it's going to be mainstream, but it's going to be like, each and every aspect of it will be relooked and the chain management point of view in a couple of years, the way we see an SDLC will be quite different than what we have today. That's my, my, my belief and what I see in the industry. That's how it's getting there. Yep. Especially the software development itself. It's like eating your own dog food, right? It happened for a long time. This is the first time we do a software development, that whole development itself, it's going to be disturbed in a way. It'll be, it'll be, it'll be more, uh, profound impact on the whole product development. And it'll be cheaper. The product, go to market will be much cheaper. Like how mobile revolutionized, the next evolution will be on using, um, generative AI-like capability to make your product cheaper and go to market in a short term. That's, that's, that's going to happen eventually. 

Kovid Batra: Right. I think, uh, this, this is bound to happen. Even I believe so. It is, it is already there. I mean, it's not like, uh, you're talking about real future, future. It's almost there. It's happening right now. But what do you think on the point where this technology, which is right now, uh, not hosted locally, right? Uh, we are talking about inventing, uh, LLMs locally into your servers, into your systems. How do you see that piece evolving? Because lately I have been seeing a lot of concerns from a lot of companies and leaders around the security aspect, around the IP aspect where you are putting all your code into a third-party server to generate new code, right? You can't stop developers from doing that because they've already started doing it. Earlier, the method was going to stack overflow, taking up some code from there, going to GitHub repositories or GitLab repositories, taking up some code. But now this is happening from a single point of source, which is cloud hosted and you have to share your code with third parties. That has started becoming a concern. So though the whole landscape is going to change, as you said, but I think there is a specific direction in which things are moving, right? Very soon people realized that there is an aspect of security and IP that comes along with using such tools in the system. So how do you see that piece progressing in the market right now? And what are the things, what are the products, what are the services that are coming up, impacting this landscape? 

Venkat Rangasamy: It's a good question, actually. We, after a couple of years, right, what the realization even I came up with now, the services which are hosted on a cloud, like, uh, like, uh, public LLMs, right, which, you can use an LLM to generate some of these aspects. From a POC point of view, it looks great. You can see it, what is coming your way. But when it comes to the real product, making product in a production environment is not, um, well-defined because as I said, right, security audit complaints, code IP, right? And, and your compliance team, it's about who owned the IP part of it, right? It's those aspects as well as having the code, your IP goes to some trained public LLM. And it's, it's kind of a compromise where there is, there is, there is some concern around that area and people have started and enterprises have started looking upon something to make it within their workspace. End of the day, from a developer point of view, the experience what developer has, it has to be within that IDE itself, right? That's where it becomes successful. And keeping outside of that IDE is not fully baked-in or it's not fully baked-in part of the developer life cycle, which means the tool set, it has to be as if like it's running in local, right? If you ask me, like, is it doable? For sure. Yes. If you’d asked me an year back, I'd have said no. Um, running your own LLM within a laptop, like another IDE, like how do you run an IDE? It's going to be really challenging if you’d asked me an year back. But today, I was doing some recent experiment on this, um, similar challenges, right? Where corporates and other folks, then the, the, the, any, any big enterprises, right? Any security or any talk to a startup founders, the major, the major roadblock is I didn't want to share my IPR code outside of my workspace. Then bringing that experience into your workspace is equally important. 

With that context, I was doing some research with one of the POC project with, uh, bringing your Code Llama. Code Llama is one of the LLMs, public LLM, uh, trained by Meta for different languages, right? It's just the end of the day, the smaller the LLMs, the better on these kinds of tasks, right? You don't need to have 700 billion, 70 billion, those, those parameters are, is, it's irrelevant at this point of coding because coding is all about a bunch of instructions which need to be trained, right? And on top of it, your custom coding and templates, just a coding example. Now, how to solve this problem, set up your own local LLM. Um, I've tested and benchmarked in both Mac and PC. Mac is phenomenally well, I won't see any difference. You should be able to set up your LLM. There is a product called Ollama. Ollama is, uh, where you can use, set up your LLM within your workspace as if it's running, like running in your laptop. There's nothing going out of your laptop. Set up that and go to your IDE, create a simple plugin. I created a VC plugin, visual source plugin, connected to your local LLM, because Ollama will give you like a REST API, just connect it. Now, now, within your IDE, whatever code is there, that is going to talk to your LLM, which means every developer can have their own LLM. And as long as you have a right trained data set for basic language, Java, Python, and other thing, it works phenomenally well, because it's already trained for it. If you want to have a custom coding and custom templating, you just need to train that aspect of it, of your coding standards.

Once you train, keep it in your local, just run like part of an IDE. It's a whole integrated experience, which runs within developer workspaces, is what? Scalable and long run. It, if anything, if it goes out of that, which we, we, we have seen that many times, right, past couple of years. Even though we say our LLMs are good enough to do larger tasks in the coding side, if it's, if you want to analyze the complete file, if you send it to a public LLM, with some services available, uh, through some coding and other testing services, what we have, the challenges, number of the size of the tokens what you can send back, right? There is a limit in the number of tokens, which means if you want to analyze the entire project repository what you have, it's not possible with the way it's, these are set up now in a public site, right? Which means you need to have your own LLM within the workspace, which can work and in, in, it's like a, it's part of your workspace, that's what I would say. Like, how do you run your database? Run it part of your workspace, just make it happen. That is possible. And that's going to be the future. I don't think going any public LLM or setting up is, is, is not a viable option, but having the pipeline set up, it's like a patching or giving a database to your developers, it runs in local. Have that set up where everybody can use it within the local workspace itself. It's going to be the future and the tools and tool sets around that is really happening. And it's, it's at the phase where in an year's time from here, you won't even see that's a big thing. It's just like part of your skill. Just set up and connect your editor, whatever source code editor you have, just connect it to LLM, just run with it. I see that's a feature for the coding part of you. Other SDLCs have different nuance to it, but coding, I think it should be pretty straightforward in a year time frame. That's going to be the normal practice. 
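For readers who want to try the local setup Venkat describes, the sketch below sends a completion request to a locally running Ollama server (default port 11434) after a model has been pulled, e.g. with 'ollama pull codellama'; the model name and prompt are illustrative.

    import requests

    def complete_locally(prompt: str, model: str = "codellama") -> str:
        # Ollama exposes a local REST API; nothing leaves the developer's machine.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(complete_locally("Write a Python function that validates an email address."))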

Kovid Batra: So I think, uh, from what I understand of your opinion is that the, most of the market would be shifting towards their Local LLM models, right? Yeah. Uh, that that's going to be the future, but I'm not sure if I'm having the right analogy here, but let's talk about, uh, something like GitHub, which is, uh, cloud-sourced and one, which is in-house, right? Uh, the teams, the companies always had that option of having it locally, right? But today, um, I'm not sure of the percentage, uh, how many teams are using a cloud-based GitHub on a locally, uh, operated GitHub. But in that situation, they are hosting their code on a third party, right? The code is there. 

Venkat Rangasamy: Yup. 

Kovid Batra: The market didn't shape that way if we look at it from that perspective of code security and IP and everything. Uh, why do you think that this would happen for, uh, local LLMs? Like wouldn't the market be fragmented? Like large-scale organizations who have grown beyond a size have that mindset now, “Let's have something in-house.” and they would put it out for the local LLMs. Whereas the small companies who are establishing themselves and then, I mean, can it not be the similar path that happened for how you manage your code? 

Venkat Rangasamy: I think it is very well possible. The only difference between GitHub and LLM is, um, the artifact, the, GitHub is more like an artifact management, right? When you have your IP, you're just keeping it's kind of first repository to keep everything safe, right? It just with the versioning, branching and other stuff.

Kovid Batra: Right. 

Venkat Rangasamy: Um, the only problem there related to security is who's, um, is there any vulnerability within your code? Or it's that your repository is secure, right? That is kind of a compliance or everything needs to be there. As long as that's satisfied, we're good for that. But from an LLM lifecycle point of view, the, the IP, what we call so far in a software is a code, what you write as a code. Um, and the business logic associated to that code and the customizations happening around that is what your IP is all about. Now, as of now, those IPs are patent, which means, hey, this is what my patent is all about. This is my IP all about. Now you have started giving your IP data to a public LLM, it'll be challenging because end of the day, any data goes through, it can be trained on its own. Using the data set, what user is going through, any LLM can be trained using the dataset. If you ask me, like, every application is critical where you cannot share an IP, not really. Building simple web pages or having REST services is okay because those things, I don't think any IP is bound to have. Where you have the core business of running your own workflows or your own calculations and that is where it's going to be more tough to use any public LLM.

And another challenge is, what I see in a community is, the small startups, right, they won't do much customization on the frameworks. Like they take Java means Java, right, Node means Node, they take React, just plain vanilla, just run through end-to-end, right? Their, their goal is to get the product up to market quicker, right, in the initial stage of when we have 510 developers. But when it grows, the team grows, what happens is, we, the, every enterprise it's bound to happen, I, I've gone through a couple of cycles of that, you start putting together a framework around the whole standardization of coding, the, the scaffolding, the creating your test cases, the whole life cycle will have enforced your own standard on top of it, because to make it consistent across different developers, and because the team became 5 to 1000, 1000 to 10,000, it's hard to manage if you don't have standards around it, right? That's where you have challenges using public LLM because you will have challenges of having your own code with your own standards, which is not trained by LLM, even though it's a simple application. Even simple application will have a challenge at those points of time. But from a basic point of view, still you can use it. But again, you will have a challenge of how big a file you can analyze using public LLM. It's the one challenge you might have. But the answer to your question, yes, it will be hybrid. It won't be 100 percent saying everybody needs to have their own LLM trained and set up. Initial stages, it's totally fine to use it because that's how it's going to grow, because startup companies don't have much resources to put together to build their own frameworks. But once they get in a shape where they want to have the standardized practices, like how they build their own frameworks and other things. Similar way, one point of time, they'd want to bring it up on their own setup and run with it. For large enterprise, for sure, they are going to have their own developer productivity suite, like what they did with their frameworks and other platforms. But for a small startup, start with, they might use public, but long run, eventually over a point of, over a period of time, that might get changed. 

And the benefit of going hybrid is that you'll make your product quick to market, right? Because at the end of the day, that's important for startups. It's not about getting everything set up exactly the way they want. That's important, but at the same time, you need to go to market, and with the amount of money you have, you need to decide where to prioritize it. If I take it that way, code generation and the whole LLM space will still play a crucial role in development. But how do you use it, and which third party can you use? Of course, there will be choices, and what I see in the future is that even these LLMs will be set up and trained on your own data in more of a hybrid cloud instead of a public cloud, which means the LLM you trained in a hybrid cloud has visibility only into your code. It's not a public LLM; it's more of a private LLM, trained and deployed on a cloud, that can be used by your team. That'll be the hybrid approach in the long run. It's going to scale.
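
To make the hybrid idea concrete, here is a minimal sketch (not Venkat's actual setup) of how a team might point the same client code at either a public LLM API or a privately hosted, OpenAI-compatible endpoint running inside their own cloud, so proprietary code never leaves their environment. The endpoint URL, model names, and environment variables are hypothetical placeholders.

# Minimal sketch: one code path, two deployment choices (public vs. private LLM).
# The internal URL and model names below are hypothetical placeholders.
import os
from openai import OpenAI  # pip install openai

USE_PRIVATE = os.getenv("USE_PRIVATE_LLM", "true").lower() == "true"

client = OpenAI(
    # e.g., a vLLM or similar OpenAI-compatible server deployed inside your own VPC / hybrid cloud
    base_url="https://llm.internal.example.com/v1" if USE_PRIVATE else None,
    api_key=os.environ["LLM_API_KEY"],
)

response = client.chat.completions.create(
    model="internal-code-assistant" if USE_PRIVATE else "gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You help developers follow our internal coding standards."},
        {"role": "user", "content": "Write a unit test for the order total calculator."},
    ],
)
print(response.choices[0].message.content)

The point of the sketch is that the developer workflow stays the same; only where the model runs, and therefore who can see your code, changes.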

Kovid Batra: Got it. Great. Uh, with that, I think, uh, just to put out some actionable advice, uh, for all the engineering leaders out there who are going through this phase of the AI transformation, uh, anything as an actionable advice for those leaders from your end, like what should they focus on right now, how they should make that transition? And I'm talking about, uh, companies where these engineering leaders are working, which are, uh, Series B, Series A, Series C kind of a bracket. I know this is a huge bracket, but what kind of advice you would give to these companies? Because they're in the growing phase of the, of the whole cycle of a company, right? So what, what should they focus on right now at this stage?

Venkat Rangasamy: Here is where some of this starts. I was talking to a couple of, uh, ventures recently about a similar topic, how the landscape is going to change for software development, right? One thing that came up frequently in that call was: it's cheaper to develop a product, you can go to market faster, and the expectations around software development have been changing for quite a while, right? In the sense that the expectations around software development and the cost associated with that development are going to change drastically. At the same time, be clear about your strategy. It's not like we can change productivity by 50 percent overnight. Keep it realistic, right? Hey, this is what I want to make. Here is my charter to go through, starting from ideation to go-to-market. Here are the meaningful places where I can introduce something which can help developers and other roles like PMs. It could even be post-sales support, right? Have a meaningful strategy. Just don't go blank with the traditional way you have, because your investors and advisors are going to start asking questions, because they're going to see a similar pattern from others, right? Because that's how others have started looking into it. I would say proactively start going through that landscape and map your process to see where you can inject something meaningful, uh, in areas where it can have impact, right?

And be practical about it. Don't give a commitment like, hey, I'll make my development 50 percent cheaper overnight; you might get burned, because that's not reality. Instead: in my unit test cases and areas like these, I can build quality products within this budget, and I can guarantee that can be an industry benchmark. I can do that by introducing some of these practices, like test cases, post customer support, writing code in some aspects, right? Um, that is what you need to set up when you start going for venture funding. And have a relook at your SDLC process. That's important. See how you inject these things, and in the long term, that'll help you. It'll be iterative, but at the end of the day, see, we've gone from waterfall to agile, and from agile to many other paradigms within agile over a period of time. The one thing we're good at doing in software as an industry is adapting to a new trend, right? This could be another trend. Keep an eye on it. Make it something where you can have a meaningful impact on your products. I would say, before your investor comes and asks, hey, can you do optimization here, I see another of my portfolio companies doing this, this, and this, it's better to start yourself. Be collaborative, see if you can make something meaningful, and learn across and share it in the community where other founders can leverage something from you. That would be great. That's my advice to any startup founders who can make a difference. Yep.

Kovid Batra: Perfect. Perfect. Thank you, Venkat. Thank you so much for this insightful, uh, information about how to navigate this changing landscape due to AI. It was really interesting. Uh, we would love to have you another time on this show. I am sure you have more insights to share with us, but in the interest of time, we'll have to close it for today, and, uh, we'll see you soon again.

Venkat Rangasamy: See you. Bye.

Typo hosted an exclusive live webinar titled 'The Hows and Whats of DORA', featuring Bryan Finster and Richard Pangborn. With over 150 attendees, we explored how DORA can be misused and learnt practical tips for turning engineering metrics into dev team success.

Bryan Finster, Value Stream Architect at Defense Unicorns and co-author of 'How to Misuse and Abuse DORA Metrics’, and Richard Pangborn, Software Development Manager at Method and advocate for Typo, brought valuable perspectives to the table.

The discussion covered DORA metrics' implementation and challenges, highlighting the critical role of continuous delivery and value stream management. Bryan provided insights from his experience at Walmart and Defense Unicorns, explaining the pitfalls of misusing DORA metrics. Meanwhile, Richard shared his hands-on experience with implementation challenges, including data collection difficulties and the crucial need for accurate observability. They also reinforced the idea that DORA metrics should serve as health indicators rather than direct targets. Bryan and Richard offered parting advice on using observability effectively and ensuring that metrics lead to meaningful improvements rather than superficial compliance. They both emphasized the importance of a supportive culture that sees metrics as tools for improvement rather than instruments of pressure.

The event concluded with an interactive Q&A session, allowing attendees to ask questions and gain deeper insights.

P.S.: Our next live webinar is on September 25, featuring DORA expert Dave Farley. We hope to see you there!

Timestamps

  • 00:00 - Introduction
  • 00:59 - Meet Richard Pangborn
  • 02:58 - Meet Bryan Finster
  • 04:49 - Bryan's Journey with Continuous Delivery
  • 07:33 - Challenges & Misuse of DORA Metrics
  • 20:55 - Richard's Experience with DORA Metrics
  • 27:43 - Ownership of MTTR & Measurement Challenges
  • 28:27 - Cultural Resistance to Measurement
  • 29:37 - Team Metrics vs Individual Metrics
  • 31:29 - Value Stream Mapping Insights
  • 33:56 - Q&A Session
  • 40:19 - Setting Realistic Goals with DORA Metrics
  • 45:31 - Final Thoughts & Advice

Links and Mentions 

Transcript

Kovid Batra: Hi, everyone. Thanks for joining in for our DORA exclusive webinar, The Hows and Whats of DORA, powered by Typo. I'm Kovid, founding member at Typo and your host for today's webinar. With me, we have two special people. Please welcome the DORA expert for tonight, Bryan Finster, who is an exceptional Value Stream Architect at Defense Unicorns and the co-author of the ebook, 'How to Misuse and Abuse DORA Metrics', and one of our product mentors, and Typo advocates, Richard Pangborn, who is a Software Development Manager at Method. Thanks, Bryan. Thanks, Rich, for joining in. 

Bryan Finster: Thanks for having me. 

Richard Pangborn: Yeah, no problem. 

Kovid Batra: Great. So, like, before we, uh, get started and discuss about how to implement DORA, how to misuse DORA, uh, Rich, you have some questions to ask, uh, we would love to know a little bit about you both. So if you could just spare a minute and tell us about yourself. So I think we can get started with you, Rich. Uh, and then we can come back to Bryan. 

Richard Pangborn: Sure. Yeah, sounds good. Uh, my name is Richard Pangborn. I'm the Software Development Manager here at Method. Uh, I've been a manager for about three years now. Um, but I do come from a Tech Lead role of five or more years. Um, I started here as a junior developer when we were just in the startup phase. Um, went through the series funding, the investments, the exponential growth. Today we're, you know, over a 100-person company with six software development teams. Um, and yeah, Typo is definitely something that we've been using to help us measure ourselves and succeed. Um, some interesting things about myself, I guess, is I was part of the company and team that succeeded when we did an Intuit hackathon. Um, it was pretty impactful to me. Um, we brought this giant check, uh, back with us from Cali all the way to Toronto, where we're located. Uh, we got to celebrate with, uh, all of the company, everyone who put in all the hard work to, to help us succeed. Um, that's, that's sort of what pushed me into sort of a management path to sort of mentor and help those, um, that are junior or intermediate, uh, have that same sort of career path, uh, and set them up for success.

Kovid Batra: Perfect. Perfect. Thanks, Richard. And something apart from your professional life, anything that you want to share with the audience about yourself? 

Richard Pangborn: Uh, myself, um, I'm a gamer, um, I do like to golf, I do like to, um, exercise, uh, something interesting also is, um, I met my, uh, wife here at the company who I still work with today.

Kovid Batra: Great. Thank you so much, Rich. Bryan, over to you. 

Bryan Finster: Oh, yes. I'm Bryan Finster. I've been a software developer for, oh, well, since 1996. So I'll let you do the math. I'm mostly doing enterprise development. I worked for Walmart for 19 of those years, um, in logistics for most of that time and, uh, helped pilot continuous delivery at Walmart inside logistics. I've got scars to show for it. Um, and then later moved to platform at Walmart, where I was originally in charge of the delivery metrics we were gathering to help teams understand how to do continuous delivery so they can compare themselves to what good continuous delivery looked like. And then later was asked to start a dojo at Walmart to directly pair with teams to help them solve the problem of how do we do CD. And then about a little over three years ago, I was, I joined Defense Unicorns as employee number three of three, uh, and we're, we're now, um, over 150 people. We're focused on how do we help the Department of Defense deliver, um, you know, do continuous delivery and secure environments. So it's a fun path.

Kovid Batra: Great, great. Perfect. And the same question to you. Something that LinkedIn doesn't tell about you, you would like to share with the audience. 

Bryan Finster: Um, computers aren't my hobby. Uh, I, you know, it's a lot better than roofing. My dad had a construction company, so I know what that's like. Um, but I, I very much enjoy photography, uh, collecting watches, ride motorcycles, and build plastic models. So that's where I spend my time. 

Kovid Batra: Nice. Great to know that. All right. So now I think, uh, we are good to go and start with the main section of, of our webinar. So I think first, uh, let's, let's start with you, Bryan. Um, I think you have been a long-time advocate of value streams, continuous delivery, DORA metrics. You just briefly told us about how this journey started, but let's, let's deep dive a little bit more into this. Uh, tell us about how value stream management, continuous delivery, all this as a concept started appealing to you from the point of Walmart and then how it has evolved over time for you in your life.

Bryan Finster: Sure. Uh, no, at Walmart, um, continuous delivery was the answer to a problem. We had a business problem, you know; our lead time for change in logistics was a year. We were delivering every quarter with massive explosions. Every time we piloted, I mean, it was really stressful. Um, anytime we did a big changeover, we had planned 24 by 7 support for at least a week and sometimes longer. Um, and it was just a complete nightmare. And our SVP, instead of hiring in a bunch of consultants, because we'd been through a whole bunch of agile transformations over the years, asked the senior engineers in the area to figure out how we could deliver every two weeks. Now, if you can imagine these giant explosions happening every two weeks instead of every quarter, we didn't want that. And so we started digging in: how do we get that done? And my partner in crime bought a copy of Continuous Delivery. We started reading that book cover to cover, pulling out everything we could, uh, started building Jenkins pipelines with templates, so the teams didn't have to go and build their own pipeline. They could just extend the base template, which was a pattern we took forward later. And, uh, we built a global platform. I started trying to figure out how we actually do the workflow that enables continuous delivery. I mean, we weren't testing at all. Think how scary that is. Uh, other than, you know, handing it off to QA and saying, "Hey, test this for us."

And so I had to really dig into how we do continuous integration. And then that led into: what are the communication problems that are stopping us from getting information so we can test before we commit code? Um, and then once you start doing that at the team level, what's preventing us from getting all the other information that we need outside the team? How do we get the connection? You know, all the roadblocks that are preventing us from doing continuous delivery, how do we fix those? Which kind of let me fall backwards into value stream management, because now you're looking at the broader value stream. It's beyond just what your team can do. Um, and so it's, uh, it's been just a journey of solving that problem of how we allow every team to deploy independently from any other team as frequently as they can.

Kovid Batra: Great. And, and how do, uh, DORA metrics and engineering metrics, while you are implementing these projects, taking up these initiatives, play a role in it?

Bryan Finster: Well, so, you know, all this effort that we went on predated Accelerate coming out, but I was going to DevOps Enterprise Summit and learning as much as I could starting in 2015 and talking to people about how we measure things, because I was actually sent to DevOps Enterprise Summit the first time to figure out how we measure whether we're doing it well, and then started pulling together, you know, some metrics to show that we're progressing on this path to CD, you know, how frequently we're integrating code, how many defects are being generated over time, you know, and how often individuals on the team can deploy. Like, you know, deploys per day per developer was a metric that Jim proposed back in 2015 as just a health metric: how are we doing? And then later, when we started the dojo in platform at Walmart, we were using a metrics-based approach to help teams. Continuous delivery was the method we were using to improve engineering excellence in the organization. We, you know, we weren't doing any Agile frameworks. It was just, why can't we deliver change daily? Um, and early on when we started building the platform, the first tool was the CI tool. The second tool was how do we measure. And we brought in Capital One's Hygieia, and then we gamified delivery metrics so we could show teams with a star rating how they were doing on integration frequency, build time, build success rate, deploy frequency, you know, and code complexity, that sort of thing, to show them, you know, this is what good looks like, and here's where you are. That's it. Now, I learned a lot from that, and there are some things I would still gamify today, and some things I would absolutely not gamify. Um, but that's where I, you know, I spent a long time running that as the game master: how do we run the game to get teams to want to move, and show them where to go.

And then later, Accelerate came out, and the big thing that Accelerate did was it validated everything we thought was true. All the experiences we had. Because the reason I'm so passionate about it is that the first experience with CD was such a morale improvement on the team that nobody ever wanted to work any other way, and when things later changed and they were forced by new leadership to not work that way, everyone who could left. And that's just the reality of it. But Accelerate came out and said these exact things that we were seeing. And it wasn't just a one-off. It wasn't just, you know, localized to what we were seeing; it was everywhere.

Kovid Batra: Yeah, totally makes sense. I think, uh, it's been a burning topic now, and a lot of, uh, talks have been around it. In fact, like, these things are at the team level, the system level. In fact, uh, there's the McKinsey article that came out, uh, talking about dev productivity also. So I, I actually have a question there. So, uh.

Bryan Finster: Oh, I shouldn't have read the article. Yeah, go ahead. 

Kovid Batra: I mean, it's basically, it's basically talking about individual, uh, dev productivity, right? People say that it can be measured. So yeah. What's your take on that? 

Bryan Finster: That's, that's really dumb. If you want to absolutely kill outcomes, uh, focus on HR metrics instead of outcome metrics, you know. And, and so, I want to touch a little bit on the DORA metrics I think. You know, I've, having worked to apply those metrics on top of the metrics we're already using, there's some of them that are useful, but you have to understand those came from surveys, and there's some of them that are, that if you try to measure them directly, you won't get the results you want, you won't get useful data from measuring directly. Um, you know, and they don't tell you things are going well, they only tell you things are going poorly and you can't use those as your, your, the thing that tells you whether, whether you're delivering value well, you know? It's just something that you, cause you to ask questions about what might be going wrong or not, but it's not, it's not something you use like a dashboard. 

Kovid Batra: Makes sense. And I think, uh, the book that you have written, uh, 'How to Misuse and Abuse DORA Metrics', I think, let's, let's talk, talk about that a little bit. Like you have summarized a lot of things there, how DORA metrics should not be used, or Engineering metrics for that matter should not be used. So like, when do you think, how do you think teams should be using it? When do the teams actually feel the need of using these metrics and in which areas? 

Bryan Finster: Well, I think observability in general is something people don't pay enough attention to. And not just, you know, not just production observability, but how we are working as a team. And really, you have to think of it first from what we are trying to do with product development. Um, a big mistake people make is assuming that their idea is correct, and all we have to do is build something according to spec, make sure it tests according to spec, and deliver it when we're done. When fundamentally, the idea is probably wrong. And so the question is, how big a chunk of wrong idea do I want to deliver to the end user, and how much money do I want to spend doing that? So what we're trying to do is become much more efficient about how we make change, so we can make smaller changes at lower cost, so that we can be more effective about delivering value and deliver less wrong stuff. And so what you're really trying to do is measure the way that we work, the way we test, to find areas where we can improve that workflow, so that we can reduce the cost and increase the velocity with which we can deliver change. So we can deliver smaller units of work more frequently, get faster feedback, and adjust our idea, right? And so if you're just looking at, "Oh, we just need to deliver faster," you're not looking at why we want to deliver faster, which is to get faster feedback on the idea. And also, from my perspective, after 20 years of carrying a pager, being able to fix production very, very quickly and safely. I think those are both key things.

And so what we're trying to do with the metrics is identify where those problems are. And so in the paper I wrote for IT Revolution, which was about twice as long as they asked me for, on how to misuse and abuse DORA metrics, I went into the details of how we applied those metrics in real life at Walmart, when we were working with teams to help them improve, and also, you know, using them on ourselves. I think if a team really wants to focus on improving, the first thing they should measure is how well they're doing at continuous integration, you know: how frequently are we integrating code, how long does it take us to finish whatever a unit of work is, and how many defects are we generating, uh, over time as a trend. And measure trends and improve all those trends at the same time.

Kovid Batra: How do we measure this piece where we are talking about measuring the continuous integration? 

Bryan Finster: So, as an average on the team, how frequently are we integrating code? And you really want to be at least daily, right? And that's integrated to the trunk, not to some develop branch. And then also, you know, people generally work on a task or a story or whatever it is. How long does it take to go from when we start that work until it's delivered? What's that time frame? And there's, there's other times within that we can measure and that was when we get into value stream mapping. We can talk about that later, but, uh, we want small units of work because you get higher quality information if you get smaller units work and you're more predictable on delivery of that unit of work, which takes a lot of pressure off, it eliminates story points. But then you also have to balance those with the quality of what we did, and you can't measure that quality until it's in production, because test to spec doesn't mean it's good. 'Fit for purpose' means the user finds it good. 
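
For teams that want to start where Bryan suggests, here is a minimal sketch, not his tooling, of estimating trunk integration frequency straight from git history. It assumes a local checkout with 'main' as the trunk branch; adapt the branch name and window to your repo.

# Minimal sketch: average commits landing on the trunk per day over a recent window.
# Assumes a local checkout and that "main" is the trunk branch.
import subprocess
from collections import Counter

def trunk_integrations_per_day(repo_path: str, trunk: str = "main", days: int = 14) -> float:
    """Average commits landing on trunk per calendar day over the window."""
    dates = subprocess.run(
        ["git", "-C", repo_path, "log", trunk, f"--since={days} days ago",
         "--first-parent", "--pretty=%ad", "--date=short"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    per_day = Counter(dates)  # commit counts grouped by date; gaps show days with no integration
    return sum(per_day.values()) / days

if __name__ == "__main__":
    print(f"Trunk integrations per day: {trunk_integrations_per_day('.'):.2f}")

Treated the way Bryan describes, the number is a trend to watch rather than a target to hit.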

Kovid Batra: Right. Can you give us some examples of where you have seen implementing these metrics went completely south instead of working positively? Like how exactly were they abused and misused in a scenario? 

Bryan Finster: Yeah, every single time somebody builds a dashboard without really understanding what the problems they're trying to solve are. I've seen lots of people over the years since Accelerate was published building dashboards to sell, but they don't understand the core problem they're trying to solve. But also, you know, when you have management who reads a book and says, "Oh, look, here's an answer." You know, I helped cause this problem, which is why I work so hard to fix it, by saying, "Hey, look at these four key metrics." This can tell you some things, but then they start using them as goals instead of health indicators that are contextual to individual teams. And when you start saying, "Hey, all teams must have this level of delivery frequency," well, maybe, but everybody has their own delivery context. You're not going to deliver to an air-gapped environment as frequently as you are to, you know, AWS, right? And so, you have to understand what it is you're actually trying to do. What decisions are you going to make with any metric? What questions are you trying to answer before you go and measure it? You have to define what the problem is before you try to measure whether you're successful at correcting the problem.

Kovid Batra: Right. Makes sense. There are challenges that I've seen in teams. Uh, so of course, Typo is getting implemented in various organizations here. What we have commonly come across is teams tend to start using it, but sometimes it happens that when there are certain indicators highlighted from those metrics, they're not sure of what to do next.

Bryan Finster: Right. 

Kovid Batra: So I'm sure you must. 

Bryan Finster: Well, but the reason why is because they didn't know why they were measuring it in the first place, right? And so, like I said, you know, DORA metrics specifically, they tell you something, but they're very much trailing metrics, which is why I point to CI, because the CI workflow is really the engine that starts driving improvement. And then, you know, once you get better at that, you say, "Well, why can't I deliver today's work today?" And you start finding other things in the value stream that are broken, but then you have to identify: okay, well, we see this issue here with code review. We see this issue here. We have this handoff to another team downstream of development before we can deploy. How do we improve those? And how can we measure that we are improving? So you have to ask the question first, and then come up with the metrics that you're using to evaluate success.

And so, when people say, well, I don't know what to do with this number, it's because they started with a metric and then tried to figure out what to do with it, because someone told them it was a good metric. No metric is a good metric unless you know what you're doing with it. I mean, if I put a tachometer on a car and you think that more is better but you don't understand what the tachometer is telling you, then you'll just blow up your engine.

Kovid Batra: But don't you think like there is a possible way to actually not know what to measure, but to identify what to measure also from these metrics itself? For example, like, uh, we have certain benchmarks for, uh, different industries for each metric, right? And let's say I start looking at the lead time, I start looking at the deployment frequency, mean time to restore, there are various other metrics. And from there, I try to identify where my engineering efficiency or productivity is, productivity is getting impacted. So can, can it not be a top-down approach where we find out what we need to actually measure and improve upon from those metrics itself? 

Bryan Finster: Only if you start with a question you're trying to answer. But I wouldn't compare. So one of the other problems I have with the DORA metrics specifically, and I've talked to DORA at Google about this as well, is that some of the questions are nonspecific. So, for the system you work on most of the time, how frequently do you deliver? Well, are you talking about a thousand developers, a hundred developers, a team of eight, right? And so, your delivery frequency is going to be very much relative to the number of people working on it, plus other constraints outside of it. And yes, high performers deliver, you know, multiple times a day with, uh, you know, lead times of less than an hour, except, what's the definition of lead time? Well, there are two inside Accelerate, and they're different depending on how you read it. But that doesn't mean that you should just copy what it says. You should look at that and say, "Okay, now what am I trying to accomplish? And how can I apply these ideas, not necessarily the metrics directly, but how can I apply these ideas to measure what I'm trying to measure, to find out where my problems are?" So you have to deep dive into where your problems are. It's not just, "Hey, measure these things and here are your benchmarks."

Kovid Batra: Makes sense. Makes sense. Richard, do you have a point? I think we have been talking for long; if you have any questions, uh, I think let's hear from Richard also. Um, he has used Typo, uh, has been using it for a while now, and I'm sure, uh, in this journey of implementing engineering metrics, DORA metrics in his team, he would have seen certain challenges. Richard, I think the stage is yours.

Richard Pangborn: Yeah, sure. Um, so my research into using DORA metrics stems from, um, building high-performing teams. So, um, we're always looking for continuous improvement, but we're really looking for ways to measure ourselves that make sense, that can't be totally gamed, that, um, that are like standards. Uh, what I liked about DORA was it had some counterbalancing metrics, like throughput versus quality, time to repair versus time to build, speed versus stability. It's a nice counterbalancing, um, effect. Um, and high-performing teams, they care about stuff like continuous improvement, they want to do better than they did last quarter or last month, they want, um, help with decision-making. So, better data to take some of the guesswork out of, you know, what area needs, um, the most improvement or what area is, uh, broken in our pipeline, maybe for continuous delivery or for quality. Um, I want to make sure that they're making a difference, that they're moving the needle, and that it ladders up. A lot of companies, uh, have different measurements at different levels, like company level, department level, team level, individual level. So with DORA, we were able to identify some that do ladder up, which is great.

There are some challenges with implementing DORA, like when we first started. Um, I think one of the first ones was the complexity around data collection. Um, so, you know, accurately tracking and measuring DORA metrics, so deployment frequency, lead time for changes, change failure rate, recovery time, they all come from different sources: CI/CD pipelines, version control systems, incident management tools. So integrating these data sources and ensuring they provide consistent results can be a little time-consuming. Um, it can be a little difficult to understand. Yeah, so that was, that was definitely one part of it. Uh, we haven't rolled out all four yet. We're still in the process, just ensuring that, you know, what we are measuring is accurate.
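
The data-plumbing problem Richard describes can be pictured as normalizing events from each system into one schema before computing anything. The sketch below is illustrative only; the dataclass fields, and the assumption that deployments and commits have already been fetched from your CI/CD and version control APIs, are placeholders rather than any specific tool's interface.

# Minimal sketch: normalize events from CI/CD and version control into one shape,
# then compute deployment frequency and lead time for changes on top of it.
# Field names and data shapes are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Iterable

@dataclass
class Deployment:
    service: str
    deployed_at: datetime
    commit_sha: str

@dataclass
class Change:
    commit_sha: str
    first_committed_at: datetime

def deployment_frequency(deploys: Iterable[Deployment], window_days: int = 30) -> float:
    """Deploys per day over the window."""
    cutoff = datetime.utcnow() - timedelta(days=window_days)
    recent = [d for d in deploys if d.deployed_at >= cutoff]
    return len(recent) / window_days

def lead_time_for_changes(deploys: Iterable[Deployment], changes: dict[str, Change]) -> timedelta:
    """Average first-commit-to-production time for changes that reached production."""
    durations = [
        d.deployed_at - changes[d.commit_sha].first_committed_at
        for d in deploys if d.commit_sha in changes
    ]
    return sum(durations, timedelta()) / len(durations) if durations else timedelta()

Most of the real work is in populating these records consistently from each source, which is exactly the difficulty Richard raises.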

Bryan Finster: Yeah, and I'm glad you touched on the accuracy thing. Um, when we would go and work with teams and start collecting data, so number one, we had data from the pipeline because it was embedded into the platform, but we also knew that when we worked with teams, the Git data was accurate, but the workflow data was going to be garbage unless the teams actually cared about using Jira correctly. And so, education step number one, while we were cleaning up the data in Jira, was educating them on why Jira actually should matter to them. It's not a time-tracking tool; it's a communication tool. You know, and educating them so that they would actually take it seriously, so that the workflow data would be accurate, so that they could then use that to help them identify where the improvements could happen, because we were trying to teach them how to improve, not just to do what we said. Um, and yeah, I've built a few data collection tools since we started this, and collecting the data and showing where, um, accuracy problems happen as part of the dashboard is something that needs to be understood, because people will just say, "Oh, the data's right." But yeah, I mean, especially with workflow data, one of the things we really did on the last one I built was show where we're out of bounds, very high or very low, you know. I'd talk to management, and they'd be like, "Well, look, we're doing really good. I've got stuff closing here really fast." I'm like, you're telling me it took 30 seconds to do that piece of work? Yeah, the accuracy issues. And MTTR is something that DORA has talked about ditching entirely, because it's a far too noisy metric if you're trying to collect it automatically.

Richard Pangborn: Yeah, we haven't started tracking MTTR yet. Um, we're more concerned with the throughput versus stability that would have the biggest, um, change at the department level, at the team level. Um, I think, I think that's made the difference so far. Also, we have a challenge with, um, yeah, just doing a lot of stuff manually. So lack of tooling and automation. Um, there's a lot of manual measurements that are taking place. So like you said, error-prone for data collection, inconsistent processes. Um, once we get to a more automated state, I feel like it will be a bit more successful.

Bryan Finster: Yeah. There's a dashboard I built for the, for the Air Force. I'll send you a link later. It might, it might be useful, I'm not sure. But also the other thing is change failure rate is something that people misunderstand a lot, uh, and I've, I've combed through Accelerate multiple times. Uh, uh, Walmart has actually asked to reverse engineer the survey for the book, so I've gone back in depth. Change failure rate is any defect. It's not an incident. If you go and read what it says about change failure rate, it's any defect, which it should be because also the idea is wrong. If the user's reporting it's defective, and you say, "Well, that's a new feature." No, the idea was defective. We're not, it's not fit for purpose in most, you know, unless it's some edge case, but we should track that as well, because that's part of our quality process and change failure rate's trying to track our quality process. 

Richard Pangborn: Another problem we had is, um, mean time to recovery. Because we track our bugs or defects differently, they have different priorities. So, um, P0s here have to be fixed in less than 24 hours. Um, priority 1 means, you know, five days; priority 2, you have two weeks. So trying to come up with an algorithm to accurately identify, um, time to fix, I guess you'd have three or four different ones instead of one.
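
A rough sketch of the per-priority "time to fix" Richard describes, rather than a single MTTR number, might look like the following. The field names and the SLA table (P0 under 24 hours, P1 five days, P2 two weeks) mirror his example but are otherwise assumptions for illustration.

# Minimal sketch: mean time to fix and SLA attainment, grouped by bug priority.
# Field names and the SLA table are assumptions for illustration.
from datetime import timedelta

SLA = {"P0": timedelta(hours=24), "P1": timedelta(days=5), "P2": timedelta(days=14)}

def time_to_fix_by_priority(bugs: list[dict]) -> dict[str, dict]:
    """bugs: [{'priority': 'P0', 'opened_at': datetime, 'resolved_at': datetime or None}, ...]"""
    report = {}
    for priority, sla in SLA.items():
        durations = [
            b["resolved_at"] - b["opened_at"]
            for b in bugs
            if b["priority"] == priority and b.get("resolved_at")
        ]
        if durations:
            report[priority] = {
                "mean_time_to_fix": sum(durations, timedelta()) / len(durations),
                "within_sla_pct": 100 * sum(d <= sla for d in durations) / len(durations),
            }
    return report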

Bryan Finster: I've tried to solve that problem too, and especially on distributed systems, it becomes very difficult. So who's getting measured on MTTR? Right? Because MTTR, by definition, starts when the user sees impact. And so really, whoever has the user interface owns that metric, if you're trying to help a team improve their processes for recovery. So it's just a really difficult metric to try to do anything with. I've tried to measure it directly. I've talked to Verizon, Capital One, uh, you know, other people in the Dojo Consortium; they've tried, and nobody's been successful at measuring it. But yeah, I think better metrics are out there for how fast we can resolve defects.

Richard Pangborn: Um, one of the things we were concerned about at the beginning was like a resistance to measurement. Um, some people don't want to be measured. 

Bryan Finster: That's because they have management beating them over the head with it, and using it that way is the reason it's a massive fear thing. And it's part of the, it's a cultural thing. You have to have a generative culture to make these metrics effective. One of the things we would do when we started working with teams is, number one, we'd explain to them: we're not trying to judge you. We're like your doctor. We're working with you. We're in the trenches with you. These are all of our metrics. They're not yours alone. And here's how to use them to help you improve. And if a manager comes and starts trying to beat you up with them, well, people will just, you know, stop making the data valid.

Richard Pangborn: Yeah. Well, some developers do want to know, am I doing well, how do I measure myself? Um, so this gives them a way to do it a little bit, but we told them, um, you know, you set your own goals. Improve yourself. Don't measure yourself against another developer on your team or someone else; you're looking for your own improvement.

Bryan Finster: Well, I think it's also really important that the smallest unit that's measured with delivery metrics is the team and not the person. If individuals are being measured, they're going to optimize for themselves instead of optimizing for team goals. And this is something I've seen frequently. You know, on the dojo team, we could walk into a team and see that if there were filters by individual developer, the team was seriously broken. Uh, and I've seen managers who measured team members by how many Jira issues they closed, which meant that code review was going to be delayed, uh, mentoring was not going to happen, um, you'd have senior engineers focusing on easy tasks to get their numbers up instead of focusing on solving the hard problems, design was not going to happen well because it wasn't a ticket, you know. And so you focus on team outcomes and measure team goals, not individual performance, because everybody has different roles on the team. People know that from an HR perspective, coaching by walking around is how you find out who's struggling. You go to the gemba, you find out who's struggling. You can't measure people directly; doing that will impact team goals and business goals.

Richard Pangborn: Yeah, I don't think we measure it as a, um, whether they're not successful, it's just something for them to, to watch themselves.

Bryan Finster: As long as somebody else can see it. I mean. 

Richard Pangborn: Yeah, it's just for them, isn't it? Not for anyone else. 

Bryan Finster: Yeah. 

Richard Pangborn: Um, cool. Yeah. Yeah. That's, that's about it for me. I think at the moment. 

Kovid Batra: Perfect, perfect. I think, uh, Rich, if, if you are done with your questions, we have already started seeing questions from the audience. 

Bryan Finster: There's one other thing I'd like to mention real quick before we go there.

Kovid Batra: Sure. 

Bryan Finster: I also gave a talk about how to misuse and abuse DORA metrics, and the fact that people think there's, yes, there's four key metrics they focus on, but read Accelerate. There's a lot more in that book for things that you should measure, including culture. Uh, it's, it's important that you look at this as a holistic thing and not just focus on these metrics to show how well we're doing at CD. Cool, but the most valuable thing in Accelerate is Appendix A and not the four key metrics. So that's number one. But number two, value stream maps, they're manual, but they give you far deeper insights into what's going wrong than the 4 key metrics will. So learn how to do value stream maps and learn how to use them to identify problems and fix those problems.

Kovid Batra: And how exactly, uh, so just an example, I'm expecting an example here, like when, when you are dealing with value stream maps, you're collecting data from system, you're collecting data from people through surveys and what exactly are you creating here? 

Bryan Finster: No, I don't collect any data from the system initially. So if I'm doing a value stream map, it'll be bringing a team together. We're not doing it at the organization level; we're doing it at the team level. So you bring a team together and then you talk about the process, starting from delivery and working backwards to initiation, of how we deliver change. Uh, you get a consensus from the team about how long things take, how long things are waiting to start. And then you start seeing things like, oh, we do asynchronous code review, so I'm ready for code review to start; four to eight hours later, somebody picks it up and reviews it. And then I find out later that they're done and there are changes to be made, you know, maybe the next day. And then I go make those changes, resubmit it, and four to eight hours later, somebody re-reviews it. And you see things like, oh, well, what if we just sat down, discussed the change together, and fixed it on the fly, um, and removed all that wait time? How much, you know, would that encourage smaller pieces of work? And we can deliver more frequently and get faster feedback. You can see immediate improvements from things like that, just by doing a value stream map. But bringing the team together will give you much higher quality data than trying to instrument it, because not all of those things have data being collected anywhere.
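
A toy worked example of the team-level map Bryan describes: the team agrees on rough process and wait times for each step, then looks at where waiting dominates. The steps and hours below are invented for illustration; the point is that flow efficiency (process time divided by total lead time) usually exposes exactly the async-review wait he mentions.

# Toy value stream map: invented numbers, purely to show the arithmetic.
steps = [
    # (step, process_hours, wait_hours_before_step)
    ("write code",          8.0,  0.0),
    ("code review",         0.5,  6.0),   # async review: ~4-8h before someone picks it up
    ("rework after review", 1.0, 16.0),   # feedback often arrives the next day
    ("re-review",           0.5,  6.0),
    ("deploy",              0.5,  2.0),
]

process = sum(p for _, p, _ in steps)
wait = sum(w for _, _, w in steps)
lead_time = process + wait

print(f"Process time: {process:.1f}h, wait time: {wait:.1f}h, lead time: {lead_time:.1f}h")
print(f"Flow efficiency: {100 * process / lead_time:.0f}%")
# In this toy map, switching to synchronous review would remove most of the 30h of waiting.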

Kovid Batra: Makes sense. All right. We'll take a minute break and we'll start with the Q and A after that. So audience, uh, please shoot out all your questions that you have.

All right. Uh, we have the first question. 

Bryan Finster: Yeah. So MTTR is a metric measuring customer impact: from the moment a customer or user is impacted until they are no longer impacted. And that doesn't mean you fixed the defect. It means that they are no longer being impacted. So roll back, roll forward, it doesn't matter. That's what MTTR measures.

Kovid Batra: Perfect. Let's, let's move on to the next one. 

Bryan Finster: Yeah. So, um, there are some things where I can set hard targets as ways to know that we're doing well. Integration frequency is one of those. You know, if we're integrating once per day or better into the trunk, then we're doing a really good job of breaking down our work, and we're doing a good job of testing, or at least, as long as we keep our defects from blowing up, you know, we should be testing. So you can set targets for that. You can also set targets as a team, not something you impose on a team: this is something we as a team do, that we want to keep a story size of two days or less. Paul Hammant would say one day or less. Uh, but I think two days is a good time limit; if it takes us more than two days, we'll start running into other dysfunctions that cause quality impact and issues with delivery. So I've built dashboards where I have a line on those two graphs that says "this is what good looks like," so the teams can compare themselves to good. Other things you don't want to gamify: you don't ever want to measure test coverage and say, "Hey, this is what good test coverage looks like," because test coverage doesn't measure quality. It just measures how much code is executed by code that says it's a test, whether it's a test or not. So you don't want to do that. That's a fail. I learned that the hard way. Delivery frequency, of course, is relative to your delivery problem. Uh, you may be delivering every day, every hour, every week, and that all could be good. It just depends. Um, but you can make objective measurements on integration frequency and how long a unit of work takes to do.
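
A minimal sketch of the two "hard target" health checks Bryan mentions, integrating to trunk at least daily and keeping a unit of work to two days or less. The data shapes and field names are assumptions, not any particular tracker's API.

# Minimal sketch: flag oversized stories and check team integration frequency
# against a daily target. Field names and data shapes are assumptions.
from datetime import timedelta

STORY_LIMIT = timedelta(days=2)
MIN_INTEGRATIONS_PER_DEV_PER_DAY = 1.0

def oversized_stories(stories: list[dict]) -> list[str]:
    """stories: [{'key': 'ABC-1', 'started_at': datetime, 'finished_at': datetime or None}, ...]"""
    return [
        s["key"] for s in stories
        if s.get("finished_at") and s["finished_at"] - s["started_at"] > STORY_LIMIT
    ]

def integration_health(trunk_commits: int, devs: int, working_days: int) -> str:
    """Compare average integrations per developer per day against the daily target."""
    rate = trunk_commits / (devs * working_days)
    status = "good" if rate >= MIN_INTEGRATIONS_PER_DEV_PER_DAY else "needs attention"
    return f"{rate:.2f} integrations/dev/day ({status})"

As Bryan stresses, these are lines that show "what good looks like" on a team's own dashboard, not quotas to enforce.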

Kovid Batra: Cool. Moving on to the next one. Uh, any recommendations where you learn, uh, where we can learn value stream maps? 

Bryan Finster: Yeah, so Steve Pereira and Andrew Davis released 'Flow Engineering'. There are lots of books on value stream mapping from the past, but they're mostly focused on manufacturing. Steve and Andrew's Flow Engineering book talks about using value stream maps to identify problems and how to go about fixing those things. It was just released earlier this year.

Kovid Batra: Cool. Moving on to the next one. When would you start and how to convince upper management? They want KPI now and we are trying to get a VSM expert to come in and help. It's a hard sell. 

Bryan Finster: Yeah, yeah. We want easy numbers. Okay. Well, you know, I would start with having a conversation about what problems we're trying to solve. It's very much like the conversation you have when you're trying to convince management that we want to do continuous delivery. They don't care about continuous delivery unless they're deep into the topic. But they do care about, uh, you know, delivering business value better. So you talk about the business value. When you're talking about performance indicators, well, what performance are we trying to measure? And we really need to have that hard conversation: are we trying to measure how many lines of code are getting dumped onto the end user? How much value we are delivering? Are we trying to, you know, reduce the size and cost of delivering change so we can be more effective about this, or are we just trying to make sure people are busy? And so if you have management that just wants to make sure people are productive, uh, and they're not open to listening to why they're wrong, I'd quit.

Kovid Batra: All right. Can we move on to the next one then?

Bryan Finster: Where's the next one? 

Kovid Batra: Yeah. 

Bryan Finster: Oh, okay. 

Kovid Batra: Is there any scientific evidence we can use to point out that working on small steps iteratively is better than working in larger batches? The goal is to avoid anecdotal evidence while discussing what can improve the development process. 

Bryan Finster: You know, the hard thing about software as an industry is that people don't like sharing their information, uh, the real information, because it can be stock impacting. And so we're not going to get a scientific study from a private company. Um, but we have, you know, a few centuries' worth of knowledge about knowing that if you build a whole bunch of the wrong thing, you're not going to sell it. Um, you don't have to do a scientific study because we have knowledge from manufacturing. Uh, you know, The Simpsons, the documentary The Simpsons, where they talk about the Homer car, where they build the entirely wrong car and put the company out of business, because there was no feedback loop on that car at all until it was unveiled, right? That's really the problem. We're doing product development. And if you go off and say, I have this brilliant idea, well, you know, like, uh, Silicon Valley: they spent so much money building something nobody wanted, and they kept iterating and trying to find the right thing, but they kept building the complete thing, building the wrong thing, and just burning money. And this is the problem we're trying to solve. You're trying to get faster feedback about when we're wrong, because we're inventing something new. Edison didn't build a million wrong light bulbs and then see if any of them worked.

Kovid Batra: All right. I think we can move on to the next one. Uh, what strategies do you recommend for setting realistic yet ambitious goals based on our current DORA metrics? 

Bryan Finster: Uh, I would start with: why can't we deliver today's work today? Well, I'd do that right after: why can't we integrate today's work today? And then start finding out what those problems are and solving them. Uh, as far as ambitious goals, I mean, I think it's ambitious to be doing continuous delivery. Why can't we do continuous delivery? Uh, you know, one of the reasons why we put minimumcd.org together several years ago was because it's a list of problems to solve, and you can't solve those problems in an organization that's not a great place to work. You just can't. And the goal is to make it a better place to work. So solve those problems. That's an ambitious goal. Do CD.

Kovid Batra: Richard, do you have a question? 

Richard Pangborn: Uh, myself? No? 

Kovid Batra: Yup. 

Richard Pangborn: Nope. 

Kovid Batra: Okay. One last one we'll take here. Uh, yeah. 

Bryan Finster: Yeah, so common pitfalls, and I think we touched on some of these before: trying to instrument all of them, when you can really only instrument two of them, mostly. I think that, uh, you know, change fail rate is not named well, given its description. It's really defect arrival rate. But even then, that depends on being able to collect data from defects and whether or not that's being collected in a disciplined manner. Um, delivery frequency, you know, people frequently measure that at the organization level, but that doesn't really tell you anything. You really need to get down to where the work is happening and try to measure it there. But then there's setting targets around delivery frequency instead of identifying how do we improve, right? Because all it is, is how do we get better. Using them as goals: they're absolutely not goals. They're health indicators. You know, like I talked about the tachometer before, I don't have a goal of, we're going to run at 5,000 RPM. I mean, number one, it depends on the engine, right? That would be really terrible for a sport bike and would blow up a diesel. So, using them naively without understanding what they mean and what it is we're trying to do, I see it constantly. Uh, I and others who were early adopters of these metrics have been out screaming about this for several years, and that's why I'm on here today: please, please don't use them incorrectly, because it just hurts things.

Kovid Batra: Perfect. Uh, Bryan, I have one question. Uh, uh, like when, when teams are setting these benchmarks for different metrics that they have identified to be measured, what should be the ideal strategy, ideal way of setting those benchmarks? Because that's a question I get asked a lot. 

Bryan Finster: Let's see, they were never benchmarks in Accelerate either. What they said was that they're seeing a correlation between companies with these outcomes and metrics that look like this. So those aren't industry benchmarks; that's a correlation they're making. And correlation does not equal causation. I will tell you that being really good at continuous delivery means that, if you have good ideas, you can deliver good ideas well, but being good at CD doesn't mean you're going to be good at, you know, meeting your business goals, because it depends; garbage in, garbage out. Um, and so, you don't set them as benchmarks. They're not benchmarks. They're health indicators. Use them as health indicators. How do we make this better? Use them as things that cause you to ask questions. Why can't we deliver more than once a month?

Kovid Batra: So basically, if we are, let's say, for a lack of a better term, we use 'benchmarks'. There should, those should be set on the basis of the cadence of our own team, how they are working, how they are designed to deliver. That's how we should be doing. Is that what you mean? 

Bryan Finster: No, I would absolutely use them as health indicators, you know, track trends. Are we trending up? Are we trending down? And then use that as the basis of starting an investigation into why we are trending up or down. I mean, are we trending up because people think it's a goal? And is there some other metric that's going south that we're not aware of while we're focusing on this one thing getting better? I mean, Richard, you pointed this out exactly: it's a good, balanced set of metrics if they're measured correctly, and if the data is collected correctly. And, you know, another problem I see is people focusing on one. I remember a director telling his area, "Hey, we're going to start using DORA metrics. But for change management purposes, we're only going to start by focusing on MTTR instead of anything else." They're a set; they go together, you know? You can't just peel one out. Um, so.

Kovid Batra: Got it, got it. Yeah, that absolutely answers my question. All right. I think with that, we come to the end of this session. Uh, before we part, uh, any parting advice from you, Bryan, Rich? 

Richard Pangborn: Um, just what we found successful in our own journey. Every, every company is different. They all have their own different processes, their own way of doing things, their own way of building things. So, there's not exactly one right way to do it. It's usually by trial and error for each, probably each company, uh, I would say. Depending on the tooling that you want to choose, the way you want to break down tasks and deliver stories. Like for us, we chose one day tasks in Jira. Um, we didn't choose, uh, long-lived branches. Um, we're not trunk-based explicitly, but we're, our PRs last no longer than a day. Um, so this is what we find works well for us. We're delivering daily. We haven't gotten yet to the, um, you know, delivering multiple times a day, but that's, that's somewhere in the future that we're going to get to, but you have to balance that with business goals. You need to get buy-in from stakeholders before you can get, um, development time to sort of build out that, that structure. So, um, it's a process. Um, everyone's different. Um, but I think bringing in some of these KPIs or, or sorry, benchmarks or health metrics, whatever you want to call them, um, has worked for us in the way where we have more observability into how we operate as engineers than we've ever had in the past. Um, so it's been pretty beneficial for us. 

Bryan Finster: Yeah. I'd say that the observability is critical. Um, you know, I've built a few dashboards for showing these things. And for development teams who were focused on "we want to improve," they always found value in those things. Um, but one caution I have is that if you are showing metrics on a dashboard, understand that the user experience of that will change people's behaviors. It's so important people understand that. And whenever I'm building a dashboard, I'm showing offsetting metrics together in a way that they can't be separated, um, because otherwise you'll just focus on one. I want you to focus on those offsetting metrics as a group and make them all better. Um, but it only matters if people are looking at it. And if it's not a constant topic of conversation, um, it won't help at all. And I know, uh, Abi Noda and I have a difference of opinion on data collection. You know, I'm big on real-time data because I'm trying to improve quickly. Uh, he's big on surveys, but for me, I don't get feedback fast enough with a survey to be able to course correct if I'm trying to improve CI and CD. It's good for other stuff, good for culture. So that's the difference. Um, but make sure that you're not just going out and buying a tool to measure these things that shows data in a way that causes bad behavior, um, or collects data in a way where it's not collecting it correctly. Really understand what you're doing before you go and implement a tool.

Kovid Batra: Cool. Thanks for that piece of advice, Bryan, Rich. Uh, with that, I think that's our time. Just a quick announcement about the next webinar session, which is with the pioneer of CD, the co-author of the book 'Continuous Delivery', Dave Farley. That will be on 25th of September. So audience, stay tuned. I'll be sharing the link with you guys, sending you emails. Thank you so much. That's it for today. 

Bryan Finster: Thanks so much. 

Richard Pangborn: I appreciate it. 

Kovid Batra: Thanks, Rich. Thanks, Bryan.

Engineering Analytics

View All

Webinar: ‘The Hows and Whats of DORA.' with Bryan Finster and Richard Pangborn

Typo hosted an exclusive live webinar titled 'The Hows and Whats of DORA', featuring Bryan Finster and Richard Pangborn. With over 150+ attendees, we explored how DORA can be misused and learnt practical tips for turning engineering metrics into dev team success.

Bryan Finster, Value Stream Architect at Defense Unicorns and co-author of 'How to Misuse and Abuse DORA Metrics’, and Richard Pangborn, Software Development Manager at Method and advocate for Typo, brought valuable perspectives to the table.

The discussion covered DORA metrics' implementation and challenges, highlighting the critical role of continuous delivery and value stream management. Bryan provided insights from his experience at Walmart and Defense Unicorns, explaining the pitfalls of misusing DORA metrics. Meanwhile, Richard shared his hands-on experience with implementation challenges, including data collection difficulties and the crucial need for accurate observability. They also reinforced the idea that DORA metrics should serve as health indicators rather than direct targets. Bryan and Richard offered parting advice on using observability effectively and ensuring that metrics lead to meaningful improvements rather than superficial compliance. They both emphasized the importance of a supportive culture that sees metrics as tools for improvement rather than instruments of pressure.

The event concluded with an interactive Q&A session, allowing attendees to ask questions and gain deeper insights.

P.S.: Our next live webinar is on September 25, featuring DORA expert Dave Farley. We hope to see you there!

Timestamps

  • 00:00 - Introduction
  • 00:59 - Meet Richard Pangborn
  • 02:58 - Meet Bryan Finster
  • 04:49 - Bryan's Journey with Continuous Delivery
  • 07:33 - Challenges & Misuse of DORA Metrics
  • 20:55 - Richard's Experience with DORA Metrics
  • 27:43 - Ownership of MTTR & Measurement Challenges
  • 28:27 - Cultural Resistance to Measurement
  • 29:37 - Team Metrics vs Individual Metrics
  • 31:29 - Value Stream Mapping Insights
  • 33:56 - Q&A Session
  • 40:19 - Setting Realistic Goals with DORA Metrics
  • 45:31 - Final Thoughts & Advice

Transcript

Kovid Batra: Hi, everyone. Thanks for joining in for our DORA exclusive webinar, The Hows and Whats of DORA, powered by Typo. I'm Kovid, founding member at Typo and your host for today's webinar. With me, we have two special people. Please welcome the DORA expert for tonight, Bryan Finster, who is an exceptional Value Stream Architect at Defense Unicorns and the co-author of the ebook, 'How to Misuse and Abuse DORA Metrics', and one of our product mentors, and Typo advocates, Richard Pangborn, who is a Software Development Manager at Method. Thanks, Bryan. Thanks, Rich, for joining in. 

Bryan Finster: Thanks for having me. 

Richard Pangborn: Yeah, no problem. 

Kovid Batra: Great. So, like, before we, uh, get started and discuss about how to implement DORA, how to misuse DORA, uh, Rich, you have some questions to ask, uh, we would love to know a little bit about you both. So if you could just spare a minute and tell us about yourself. So I think we can get started with you, Rich. Uh, and then we can come back to Bryan. 

Richard Pangborn: Sure. Yeah, sounds good. Uh, my name is Richard Pangborn. I'm the Software Developer Manager here at Method. Uh, I've been a manager for about three years now. Um, but I do come from a Tech Lead role of five or more years. Um, I started here as a junior developer when we were just in the startup phase. Um, went through the series funding, the investments, the exponential growth. Today we're, you know, over a 100-person company with six software development teams. Um, and yeah, Typo is definitely something that we've been using to help us measure ourselves and succeed. Um, some interesting things about myself, I guess, is I was part of the company and team that succeeded when we did an Intuit hackathon. Um, it was pretty impactful to me. Um, we brought this giant check, uh, back with us from Cali all the way to Toronto, where we're located. Uh, we got to celebrate with, uh, all of the company, everyone who put in all the hard work to, to help us succeed. Um, that's, that's sort of what pushed me into sort of a management path to sort of mentor and help those, um, that are junior or intermediate, uh, have that same sort of career path, uh, and set them up for success.

Kovid Batra: Perfect. Perfect. Thanks, Richard. And something apart from your professional life, anything that you want to share with the audience about yourself? 

Richard Pangborn: Uh, myself, um, I'm a gamer, um, I do like to golf, I do like to, um, exercise, uh, something interesting also is, um, I met my, uh, wife here at the company who I still work with today.

Kovid Batra: Great. Thank you so much, Rich. Bryan, over to you. 

Bryan Finster: Oh, yes. I'm Bryan Finster. I've been a software developer for, oh, well, since 1996. So I'll let you do the math. I'm mostly doing enterprise development. I worked for Walmart for 19 of those years, um, in logistics for most of that time and, uh, helped pilot continuous delivery at Walmart inside logistics. I've got scars to show for it. Um, and then later moved to platform at Walmart, where I was originally in charge of the delivery metrics we were gathering to help teams understand how to do continuous delivery so they can compare themselves to what good continuous delivery looked like. And then later was asked to start a dojo at Walmart to directly pair with teams to help them solve the problem of how do we do CD. And then about a little over three years ago, I was, I joined Defense Unicorns as employee number three of three, uh, and we're, we're now, um, over 150 people. We're focused on how do we help the Department of Defense deliver, um, you know, do continuous delivery and secure environments. So it's a fun path.

Kovid Batra: Great, great. Perfect. And the same question to you. Something that LinkedIn doesn't tell about you, you would like to share with the audience. 

Bryan Finster: Um, computers aren't my hobby. Uh, I, you know, it's a lot better than roofing. My dad had a construction company, so I know what that's like. Um, but I, I very much enjoy photography, uh, collecting watches, ride motorcycles, and build plastic models. So that's where I spend my time. 

Kovid Batra: Nice. Great to know that. All right. So now I think, uh, we are good to go and start with the main section of, of our webinar. So I think first, uh, let's, let's start with you, Bryan. Um, I think you have been a long-time advocate of value streams, continuous delivery, DORA metrics. You just briefly told us about how this journey started, but let's, let's deep dive a little bit more into this. Uh, tell us about how value stream management, continuous delivery, all this as a concept started appealing to you from the point of Walmart and then how it has evolved over time for you in your life.

Bryan Finster: Sure. Uh, no, at Walmart, um, continuous delivery was the answer to a problem. It wasn't, it was, we had a business problem, you know, our lead time for change in logistics was a year. We were delivering every quarter with massive explosions. Every time we piloted, I mean, it was really stressful. Um, any, anytime we did a big change of recorder, we had planned 24 by 7 support for at least a week and sometimes longer, um, And it was just a complete nightmare. And our SVP, instead of hiring in a bunch of consultants, cause we've been through a whole bunch of agile transformations over the years, asked the senior engineers in the area to figure out how we can deliver every two weeks. Now, if you can imagine these giant explosions happening every two weeks instead of every quarter, we didn't want that. And so we started digging in, how do we get that done? And my partner in crime bought a copy of continuous delivery. We started reading that book cover to cover, pulling out everything we could, uh, started building Jenkins pipelines with templates, so the teams didn't have to go and build their own pipeline. They can just extend the base template which was a pattern we took forward later. And, and, uh, we built a global platform. I started trying to figure out how do we actually do the workflow that enables continuous delivery. I mean, we weren't testing at all. Think how scary that is. Uh, other than, you know, handing it off to QA and say, "Hey, test this for us.

And so I had to really dig into how do we do continuous integration. And then that led into what's the communication problems that are stopping us from getting information so we can test before we commit code. Um, and then once you start doing that at the team level, what's preventing us from getting all the other information that we need outside the team? How do we get the connection? You know that, all the, all the roadblocks that are preventing us from doing continuous delivery, how do we fix those? Which kind of let me fall backwards in the value stream management because now you're looking at the broader value stream. It's beyond just what your team can do. Um, and so it's, uh, it's, it's been just a journey of solving that problem of how do we allow every team to independently deploy from any other team as frequently as they can. 

Kovid Batra: Great. And, and how do, uh, DORA metrics and engineering metrics, while you are implementing these projects, taking up these initiatives, play a role in it?

Bryan Finster: Well, so, you know, all this effort that we went on predated Accelerate coming out, but I was going to DevOps Enterprise Summit and learning as much as I could starting in 2015 and talking to people about how do we measure things, cause I was actually sent to DevOps Enterprise Summit the first time to figure out how do we measure if we're doing it well, and then started pulling together, you know, some metrics to show that we're progressing on this path to CD, you know, how frequently integrating code, how many defects are being generated over time, you know, and how, how often can individuals on the team deploy as like, you know, deploys per day per developer was a metric that Jim proposed back in 2015 as just a health metric. How are we doing? And then later in the, and when we started the dojo in platform at Walmart, we were using a metrics-based approach to help teams. Continuous delivery was the method we were using to improve engineering excellence in the organization. We, you know, we weren't doing any Agile frameworks. It was just, why can't we deliver change daily? Um, and early on when we started building the platform, the first tool was the CI tool. Second tool was how do we measure. And we brought in CapitalOne's Hygieia, and then we gamified delivery metrics so we can show teams with a star rating how they were doing on integration frequency, build time, build success rate, deploy frequency, you know, and code complexity, that sort of thing, to show them, you know, this is what good looks like, and here's where you are. That's it. Now, I learned a lot from that, and there's some things I would still game today, and some things I would absolutely not gamify. Um, but that's where I, you know, I spent a long time running that as the game master about how do we, how do we run the game to get teams to want to, want to move and have shown where to go.

And then later, Accelerate came out, and the big thing that Accelerate did was it validated everything we thought was true. All the experiences we had, because the reason I'm so passionate about it is that first, first experience with CD was such a morale improvement on the team that I, nobody ever wanted to work any other way, and when things later changed, they were forced to not work that way by new leadership, everyone who could left. And that's just the reality of it. And, but Accelerate came out and said these exact things that we were seeing. And it wasn't just a one-off. It wasn't just, you know, just localized to what we were seeing; it was everywhere.

Kovid Batra: Yeah, totally makes sense. I think, uh, it's been a burning topic now, and a lot of, uh, talks have been around it. In fact, like, these things are at team-level, system-level. In fact, uh, I'm, uh, the McKinsey article that came out, uh, talking about dev productivity also. So I, I have actually a question there. So, uh. 

Bryan Finster: Oh, I shouldn't have read the article. Yeah, go ahead. 

Kovid Batra: I mean, it's basically, it's basically talking about individual, uh, dev productivity, right? People say that it can be measured. So yeah. What's your take on that? 

Bryan Finster: That's, that's really dumb. If you want to absolutely kill outcomes, uh, focus on HR metrics instead of outcome metrics, you know. And, and so, I want to touch a little bit on the DORA metrics I think. You know, I've, having worked to apply those metrics on top of the metrics we're already using, there's some of them that are useful, but you have to understand those came from surveys, and there's some of them that are, that if you try to measure them directly, you won't get the results you want, you won't get useful data from measuring directly. Um, you know, and they don't tell you things are going well, they only tell you things are going poorly and you can't use those as your, your, the thing that tells you whether, whether you're delivering value well, you know? It's just something that you, cause you to ask questions about what might be going wrong or not, but it's not, it's not something you use like a dashboard. 

Kovid Batra: Makes sense. And I think, uh, the book that you have written, uh, 'How to Misuse and Abuse DORA Metrics', I think, let's, let's talk, talk about that a little bit. Like you have summarized a lot of things there, how DORA metrics should not be used, or Engineering metrics for that matter should not be used. So like, when do you think, how do you think teams should be using it? When do the teams actually feel the need of using these metrics and in which areas? 

Bryan Finster: Well, I think observability in general is something people don't pay enough attention to. And not just, you know, not just production observability, but how are we working as a team. And, and really what we're trying to do is you have to think of it first from what are we trying to do with product development. Um, a big mistake people make is assuming that their idea is correct, and all we have to do is build something according to spec, make sure it tests according to spec and deliver it when we're done. When fundamentally, the idea is probably wrong. And so the question is, how big of a chunk of wrong idea do I want to deliver to the end user and which money do I want to spend doing that? So what we're trying to do is we're trying to become much more efficient about how we make change so we can make smaller change at lower costs so that we can be more effective about delivering value and deliver less wrong stuff. And so what you're really trying to do is you're trying to measure the, the, the way that we work, the way we test, to find areas where we can improve that workflow, so that we can reduce the cost and increase the velocity, which we can deliver change. So we can deliver smaller units of work more frequently, get faster feedback and adjust our idea, right? And so if you're not, if you're just looking at, "Oh, we just need to deliver faster." But you're not looking at why do we want to deliver faster is to get faster feedback on the idea. And also from my perspective, after 20 years of carrying a pager, fix production very, very quickly and safely, I think those are both key things.

And so what we're trying to do with the metrics is we're trying to identify where those problems are. And so in the paper I wrote for IT revolution, which was about twice as long as they asked me for on, on how to misuse and abuse DORA metrics, I went into the details of how we apply those metrics in real life. At Walmart, when we were working with teams to help them improve and also, you know, using them on ourselves, I think if a team really wants to focus on improving, the first thing they should measure is how well they're doing at continuous integration, you know, how frequently are we integrating code, how long does it take us to finish whatever a unit of work is, and what's our, uh, how many defects we're generating, uh, over time as a trend. And measure trends and improve all those trends at the same time. 

Kovid Batra: How do we measure this piece where we are talking about measuring the continuous integration? 

Bryan Finster: So, as an average on the team, how frequently are we integrating code? And you really want to be at least daily, right? And that's integrated to the trunk, not to some develop branch. And then also, you know, people generally work on a task or a story or whatever it is. How long does it take to go from when we start that work until it's delivered? What's that time frame? And there's, there's other times within that we can measure and that was when we get into value stream mapping. We can talk about that later, but, uh, we want small units of work because you get higher quality information if you get smaller units work and you're more predictable on delivery of that unit of work, which takes a lot of pressure off, it eliminates story points. But then you also have to balance those with the quality of what we did, and you can't measure that quality until it's in production, because test to spec doesn't mean it's good. 'Fit for purpose' means the user finds it good. 

Kovid Batra: Right. Can you give us some examples of where you have seen implementing these metrics went completely south instead of working positively? Like how exactly were they abused and misused in a scenario? 

Bryan Finster: Yeah, every single time somebody builds a dashboard without really understanding what the problems you're trying to solve are, I see, I've seen lots of people over the years since Accelerate was published, building dashboards to sell, but they don't understand the core problem they're trying to solve. But also, you know, when you have management who reads a book and says, Oh, look, here's an end, you know, I helped cause this problems, which is why I work so hard to fix it by saying, "Hey, look at these four key metrics." Aren't you? You know, this, this can tell you some things, but then they start using them as goals instead of health indicators that are contextual to individual teams. And when you start saying, "Hey, all teams must have this, this level of delivery frequency." Well, maybe, but everybody has their own delivery context. You're not going to deliver to an air-gapped environment as frequently as you are to, you know, AWS, right? And so, you have to understand what it is you're actually trying to do. What, what decisions are you going to make with any metric? What questions are you trying to answer before you go and measure it? You have to define what the problem is before you try to measure that you're successful at correcting the problem. 

Kovid Batra: Right. Makes sense. There are challenges that I've seen in teams. Uh, so of course, Typo is getting implemented in various organizations here. What we have commonly come across is teams tend to start using it, but sometimes it happens that when there are certain indicators highlighted from those metrics, they're not sure of what to do next.

Bryan Finster: Right. 

Kovid Batra: So I'm sure you must. 

Bryan Finster: Well, but the reason why is because they didn't know why they were measuring it in the first place, right? And so, like I said, you know, DORA metrics in specific, they tell you something, but they're very much trailing metrics, which is why I point to CI because CI is really the, the CI workflow is really the engine that starts driving improvement. And then, you know, once you get better at that, you say, "Well, why can't I deliver today's work today?" And you start finding other things in the value stream that are broken, but then you have to identify, okay, well, We see this issue here with code review. We see this issue here. We have this handoff to another team downstream of development before we can deploy. How do we improve those? And how can we measure that we are improving? So you have to ask the question first. And then come up with the metrics that you're using to evaluate success. 

And so, people are saying, well, I don't know what to do with this number. It's because they don't, they didn't, they started with a metric and then tried to figure out what to do with it because someone told him it was a good metric. No metric is a good metric unless you know what you're doing with it. I mean, if I put a tachometer on a car and you think that more is better but you don't understand what the tachometer is telling you, then you'll just blow up your engine. 

Kovid Batra: But don't you think like there is a possible way to actually not know what to measure, but to identify what to measure also from these metrics itself? For example, like, uh, we have certain benchmarks for, uh, different industries for each metric, right? And let's say I start looking at the lead time, I start looking at the deployment frequency, mean time to restore, there are various other metrics. And from there, I try to identify where my engineering efficiency or productivity is, productivity is getting impacted. So can, can it not be a top-down approach where we find out what we need to actually measure and improve upon from those metrics itself? 

Bryan Finster: Only if you start with a question you're trying to answer. But I wouldn't compare. So one of the other problems I have with the DORA metrics specifically is that the, and I've talked to DORA at Google about this as well, it's, it's like some of the questions are nonspecific. So for your, the system you work on most of the time, how frequently you deliver. Well, are you talking about a thousand developers, a hundred developers, a team of eight, right? And so, your delivery frequency is going to be very much relative to the number of people working on it, plus other constraints outside of it. And so you, yes, high performers deliver, you know, multiple times a day with, uh, you know, lead times of less than an hour, except that what's the definition of lead time? Well, there's two inside Accelerate, and they're different depending on how you read it. And, but that doesn't mean that you should just copy what it says. You should look at that and say, "Okay, now what, what am I trying to accomplish? And how can I apply these ideas? Not necessarily the metrics directly, but how can I apply these ideas to measure what I'm trying to measure to find out where my problems are?" So you have to deep dive into where your problems are. And so just like, "Hey, measure these things and here's your benchmarks.

Kovid Batra: Makes sense. Makes sense. Richard, do you have a point that I think we have been talking for a long, if you have any question, uh, I think let's, let's hear from Richard also. Um, he has used Typo, uh, has been using it for a while now, and I'm sure, uh, in this journey of implementing engineering metrics, DORA metrics in his team, he would have seen certain challenges. Richard, I think the stage is yours. 

Richard Pangborn: Yeah, sure. Um, so my research into using DORA metrics stems from, um, building high-performing teams. So, um, we always, we're looking for continuous improvement, but we're really looking for ways to measure ourselves that, that make sense, that can't be totally gamed, that, um, that are like standards. Uh, what I liked about DORA was it had some counterbalancing metrics like throughput versus quality, time to repair versus time to build, speed versus stability. That's, it's a, it's a nice counterbalancing, um, effect. Um, and high-performing teams, they care about stuff like continuous improvement, they want to do better than they did last quarter or, or last month, they want to, um, they want help with decision-making. So better data to drive some of the guesswork about, you know, what, what area needs, um, the most improvement or what area is, uh, broken in our pipeline, maybe for like continuous delivery for quality. Um, I want to make sure that they're making a difference, that they're moving the needle, um, that it ladders up. So a lot of times, a lot of companies, uh, have different measurements at different levels, like company level, department level, team level, individual level. So DORA, we were able to identify some that do ladder up, which is great. 

There are some challenges with implementing DORA, like when we first started. Um, so I think part of the challenge, one of the first ones was the complexity around data collection. Um, so, you know, accurately tracking and measuring DORA metrics. So deployment frequency, lead time for changes, failure rate, recovery, um, they all come from different sources. So CI/CD pipelines, version control systems, incident management tools. So integrating these data sources and ensuring they provide consistent results can be a little time-consuming. Um, it can be a little difficult to understand. Yeah, so that was, that was definitely one part of it. Uh, we haven't rolled out all four yet. We're still in the process, just ensuring that, you know, what we are measuring is accurate.

Bryan Finster: Yeah, and I'm glad you touched on the accuracy thing. Um, When we would go and work with teams and start collecting data, so number one, we had data from the pipeline because it was embedded into the platform, but we also knew that when we worked with teams that the Git data was accurate, but the workload was going to be garbage unless the teams actually cared about using Jira correctly. And so, education step number one was while we were cleaning up the, the data in Jira, educating them on why Jira actually should matter to them, instead of just as a, it's not, it's not a time-tracking tool, it's a communication tool. You know, and educating them so that they would actually take it seriously so that the workflow data would be accurate so that they could then use that to help them identify where the improvements could happen because we're going to try to teach them how to improve, we weren't just trying to teach them to do what we said. Um, and yeah, I built a few data collection tools since we started this, and yeah, the collecting the data and showing where, um, accuracy problems happen as part of the dashboard is something that needs to be understood because people will just say, "Oh, the data's right." But yeah, I mean, especially with workflow data, one of the things we really did on the last one I built was show where, where the, you know, where we're out of bounds, very high or very low, you know. I talked to management. I was like, "Well, look, we're doing really good. I've got stuff closing here really fast." I'm like, you're telling me it took 30 seconds to do that, give it a work. Yeah, the accuracy issues. And MTTR is something that DORA's talked about ditching entirely because it's a far too noisy metric if you're trying to collect it automatically. 

Richard Pangborn: Yeah, we haven't started tracking MTTR yet. Um, we're more concerned with the throughput versus stability that would have the biggest, um, change at the department level, at the team level. Um, I think, I think that's made the difference so far. Also, we have a challenge with, um, yeah, just doing a lot of stuff manually. So lack of tooling and automation. Um, there's a lot of manual measurements that are taking place. So like you said, error-prone for data collection, inconsistent processes. Um, once we get to a more automated state, I feel like it will be a bit more successful.

Bryan Finster: Yeah. There's a dashboard I built for the, for the Air Force. I'll send you a link later. It might, it might be useful, I'm not sure. But also the other thing is change failure rate is something that people misunderstand a lot, uh, and I've, I've combed through Accelerate multiple times. Uh, uh, Walmart has actually asked to reverse engineer the survey for the book, so I've gone back in depth. Change failure rate is any defect. It's not an incident. If you go and read what it says about change failure rate, it's any defect, which it should be because also the idea is wrong. If the user's reporting it's defective, and you say, "Well, that's a new feature." No, the idea was defective. We're not, it's not fit for purpose in most, you know, unless it's some edge case, but we should track that as well, because that's part of our quality process and change failure rate's trying to track our quality process. 

Richard Pangborn: Another problem we had is, um, mean, uh, meantime to recovery. So because we track our bugs or defects differently, they have different priorities. So, um, P0s here has to be done, has to be fixed in less than 24 hours. Um, P, priority 1 means, you know, five days, priority two, you have two weeks. So trying to come up with a, an algorithm to accurately identify, um, time to fix, I guess you'd have like three, three or four different ones instead of one. 

Bryan Finster: I've tried to solve that problem too, and especially on distributed systems, it becomes very difficult. So who's getting measured on MTTR? I mean, I'm sorry. Yes, yes. Who's getting measured, right? It's going to be because MTTR, by definition, is when the user sees impact. And so really, that's whoever has the user interface owns that metric. If you're trying to help a team improve their processes for recovery. So it's, it's, it's just a really difficult metric to try to do anything with unless, um, well, you can't, it's, I've, I've, I've tried to measure it directly. I've talked to Verizon, CapitalOne, uh, you know, other people in the dojo consortium, they've tried to make, nobody's been successful at measuring it. But yeah. I think better metrics are out there for how fast we can resolve defects. 

Richard Pangborn: Um, one of the things we were concerned about at the beginning was like a resistance to measurement. Um, some people don't want to be measured. 

Bryan Finster: That's because they have management meeting over the head and using it as, as the reason why it's a massive fear thing. And it's part of the, it's a cultural thing. I mean, as long as you, it's, you have to have a generative culture to make these metrics effective. One of the things we would do when we start working with teams is number one, we'd explain to them, we're not trying to judge you. We're like your doctor. We're working with you. We're in the trenches with you. These are all of our metrics. They're not yours. And here's how to use them to help you improve. And if a manager comes and starts trying to beat you up with them, just, you know, stop making the data valid. 

Richard Pangborn: Yeah. Well, some developers do want to know am I doing well, how do I measure myself? Um, So this gives them a way to do it a little bit, but we told them, um, you know, you set your own goals. Improve yourself. Don't measure yourself next to a developer, another developer on your team or, or someone else where you're looking for your own improvement. 

Bryan Finster: Well, I think it's also really important that the smallest unit that's measured with delivery metrics is team and not person. If, if, if individuals are being measured, they're going to optimize for themselves instead of optimizing for team goals. And this is something I've seen, uh, frequently, uh, there was one, uh, with, you know, on, on our, on the dojo team, we could walk into your team and see that if there were filters by individual developer, your team was seriously broken. Uh, and I've seen managers who measured team members by how many Jira issues they closed, which meant that code review is going to be delayed, uh, mentoring was not going to happen, um, uh, you'd have senior engineers focusing on easy tasks to get their numbers up instead of focusing on solving the hard problems, design was not going to happen well because it wasn't a ticket, you know, and so you focus on team outcomes and measure team goals, not individual performance, because everybody has different roles on the teams. People know that from an HR perspective, coaching by walking around is how you find out who's struggling. You go to the gemba, you find out who's struggling; you can't measure people directly, that will impact team goals, business goals. 

Richard Pangborn: Yeah, I don't think we measure it as a, um, whether they're not successful, it's just something for them to, to watch themselves.

Bryan Finster: As long as somebody else can see it. I mean. 

Richard Pangborn: Yeah, it's just for them, isn't it? Not for anyone else. 

Bryan Finster: Yeah. 

Richard Pangborn: Um, cool. Yeah. Yeah. That's, that's about it for me. I think at the moment. 

Kovid Batra: Perfect, perfect. I think, uh, Rich, if, if you are done with your questions, we have already started seeing questions from the audience. 

Bryan Finster: There's one other thing I'd like to mention real quick before we go there.

Kovid Batra: Sure. 

Bryan Finster: I also gave a talk about how to misuse and abuse DORA metrics, and the fact that people think there's, yes, there's four key metrics they focus on, but read Accelerate. There's a lot more in that book for things that you should measure, including culture. Uh, it's, it's important that you look at this as a holistic thing and not just focus on these metrics to show how well we're doing at CD. Cool, but the most valuable thing in Accelerate is Appendix A and not the four key metrics. So that's number one. But number two, value stream maps, they're manual, but they give you far deeper insights into what's going wrong than the 4 key metrics will. So learn how to do value stream maps and learn how to use them to identify problems and fix those problems.

Kovid Batra: And how exactly, uh, so just an example, I'm expecting an example here, like when, when you are dealing with value stream maps, you're collecting data from system, you're collecting data from people through surveys and what exactly are you creating here? 

Bryan Finster: No, I don't collect any data from the system initially. So if I'm doing a value stream map, it'll be bringing a team together. We're not doing it at the, at the organization level. We're doing it at the team level. So you bring a team together and then you talk about the process, starting from delivery and working backwards to initiation of how we deliver change. Uh, you get a consensus from the team about how long things take, how long things are waiting to start. And then you start seeing things like, Oh, we do asynchronous code review, and so I'm ready for code review to start. Four to eight hours later, somebody picks it up and they review it. And then I find out later that they've done and there's changes being made, you know, maybe the next day. And then I go make those changes, resubmit it, and like four to eight hours later, somebody would go re-review it. And, and you see things like, Oh, well, what if we just sat down and discuss the change together and just fix it on the fly, um, and remove all that wait time? How much, you know, that would encourage smaller pieces of work? And we can deliver more frequently and get faster feedback and see, you can see just immediate improvements from things like that, just by doing a value stream map. But bringing the team together will give you much higher quality data than trying to instrument that because not all of those things are, there's data being collected anywhere.

Kovid Batra: Makes sense. All right. We'll take a minute break and we'll start with the Q and A after that. So audience, uh, please shoot out all your questions that you have.

All right. Uh, we have the first question. 

Bryan Finster: Yeah. So MTTR is a metric measuring customer impact. So the moment from when a customer is impacted or user impact until they are no longer impacted. And that doesn't mean you fix the defect. It means that you are no, they are no longer being impacted. So roll back, roll forward, doesn't matter. That's what MTTR has mentioned. 

Kovid Batra: Perfect. Let's, let's move on to the next one. 

Bryan Finster: Yeah. So, um, there's some things where I can set hard targets on as, as ways to know that we're doing well. Integration frequency is one of those, you know, if, if we're integrating once per day or better into the trunk, then we're doing a really good job of breaking down our work. We're doing a good job of testing, or as long as we keep our defects from blowing up, you know, we should be testing. But you can set targets for that. You can also set targets as a team, not something you impose on a team. This is something we as a team do that we want to keep a story size of two days or less. Paul Hammant would say one day or less. Uh, but I think two days is, is a good time limit, that if we, if it takes us more than two days, we'll start running into other dysfunctions that cause quality impact and, and issues with delivery. So I've built dashboards where I have a line on those two graphs that say "this is what good looks like", so the teams can compare themselves to good. Other things that you don't want to gamify, you don't ever want to measure test coverage and say, "Hey, this is what good test coverage looks like." Because test coverage doesn't measure quality. It just measures how much code is executed by code that says it's a test whether it's a test or not. So don't want to do that. That's a fail. I learned that the hard way. Delivery frequency, of course, it's, that's relative to their delivery problem. Uh, you may be delivering every day, every hour, every week, and that all could be good. It just depends. Um, but you can make objective measurements on integration frequency and how long a unit of work takes to do. 

Kovid Batra: Cool. Moving on to the next one. Uh, any recommendations where you learn, uh, where we can learn value stream maps? 

Bryan Finster: Yeah, so Steve Pereira and Andrew Davis released 'Flow Engineering', which is basically, because there's lots of books on value stream mapping, but it's, from the past, but they're mostly focused on manufacturing and Steve and Andrew released the Flow Engineering book where they talk about using value stream maps to identify problems and how to go about fixing those things. So it was just released earlier this year. 

Kovid Batra: Cool. Moving on to the next one. When would you start and how to convince upper management? They want KPI now and we are trying to get a VSM expert to come in and help. It's a hard sell. 

Bryan Finster: Yeah, yeah. We want easy numbers. Okay. Well, you know, I would, I would start with having a conversation about what problems we're trying to solve. It's very much like the conversation you have when you're trying to convince management that we want to do continuous delivery. They don't care about continuous delivery unless that they're, they're deep into the topic. But they do care about, uh, you know, delivering better about business value. So you talk about the business value. When you're talking about performance indicators, well, what performance are we trying to measure? And we really need to have that hard conversation about, are we trying to measure how much, how many lines of code are getting dumped onto the end user? How much value are we delivering? Are we trying to, you know, reduce the size and cost of delivering change so we can be more effective about this, or are we just trying to make sure people are busy? And so if you have management that just wants to make sure people are productive, uh, and they're not opening to listening to why they're wrong, I'd quit.

Kovid Batra: All right. Can we move on to the next one then?

Bryan Finster: Where's the next one? 

Kovid Batra: Yeah. 

Bryan Finster: Oh, okay. 

Kovid Batra: Is there any scientific evidence we can use to point out that working on small steps iteratively is better than working in larger batches? The goal is to avoid anecdotal evidence while discussing what can improve the development process. 

Bryan Finster: You know, the hard thing about software, uh, as an industry is that people don't like sharing their information, uh, the real information, because it can be stock impacting. And so we're, we're not going to get a scientific study from a private company. Um, but we have a, you know, a few centuries worth of, of knowledge about knowing that if you build a whole bunch of the wrong thing, that you're not going to sell it. Um, there's, you don't have to do a scientific study because we have knowledge from manufacturing. Uh, you know, the, the, the Simpsons, the documentary The Simpsons, where they talk about the Homer car, where they build the entirely wrong car and put the company out of business, because there was no feedback loop on that car at all until it was unveiled. Right? That's, that's really the problem. We're doing product development. And if you go off and say, I have this brilliant, well, you know, like, uh, uh, what was the, uh, Silicon Valley, they spent so much money building something nobody wanted and they kept iterating and trying to find the right thing, but they kept building the complete thing and building the wrong thing and just burning money. And this, this is the problem we're trying to solve. And so you're, you're trying to get faster feedback about when we're wrong, because we're inventing something new. Edison didn't build a million wrong light bulbs and see if any of them worked.

Kovid Batra: All right. I think we can move on to the next one. Uh, what strategies do you recommend for setting realistic yet ambitious goals based on our current DORA metrics? 

Bryan Finster: Uh, I would start with why can't we deliver today's work today? Well, I'd do that right after why can't we integrate today's work today? And then start finding out what those problems are and solving them. Uh, as far as ambitious goals, I mean, I think it's ambitious to be doing continuous delivery. Why can't we do continuous delivery? Uh, you know, one of the reasons why we put minimumcd. org together several years ago was because it's a list of problems to solve, and if you solve those problems, you can't solve those problems with an organization that's not a great place to work. You just can't. And the goal is to make it a better place to work. So solve those problems. That's an ambitious goal. Do CD. 

Kovid Batra: Richard, do you have a question? 

Richard Pangborn: Uh, myself? No? 

Kovid Batra: Yup. 

Richard Pangborn: Nope. 

Kovid Batra: Okay. One last one we'll take here. Uh, yeah. 

Bryan Finster: Yeah, so common pitfalls, and I think we touched on some of these before, is trying to instrument all but two of them. You could instrument two of them mostly, I think that, uh, you know, and change fail rate is not named well because of the description. It's really defect arrival rate. But even then, that depends on being able to collect data from defects and whether or not that's being collected in a disciplined manner. Um, delivery frequency, you know, people frequently measure that at the organization level, but that doesn't really tell you anything. You really need to get down to where the work is happening and try to measure that there. But then setting targets around delivery frequency, instead of identifying how do we improve, right? And it's, it's, it's all it is, is how do we, how do we get better, um, using them as goals? They're absolutely not goals. They're health indicators. You know, like I talked about the tachometer before, I don't have a goal of, we're going to run at 5,000 RPM. I mean, number one, it depends on the engine, right? I mean, that would be really terrible for a sport bike, would blow up a diesel. So we, we need to, using them naively without understanding what they mean and what it is we're trying to do. I see it constantly. Uh, I and others who were early adopters of these metrics have been out screaming about this for several years, and that's why I'm on here today is please, please don't use them incorrectly because it just hurts things.

Kovid Batra: Perfect. Uh, Bryan, I have one question. Uh, uh, like when, when teams are setting these benchmarks for different metrics that they have identified to be measured, what should be the ideal strategy, ideal way of setting those benchmarks? Because that's a question I get asked a lot. 

Bryan Finster: Let's say, they were never benchmarks in Accelerate either. What they said was is that we're seeing a correlation between companies with these outcomes and metrics that look like this. So those aren't industry benchmarks, that's a correlation they're making. And correlation is not equal causation. I will tell you that being really good at continuous delivery means that you can, if you have good ideas, deliver good ideas well, but being good at CD doesn't mean you're going to be good at, at, at, you know, meeting your business goals because it depends, you know, garbage in, garbage out. Um, and so, you don't set them as benchmarks. They're not benchmarks. They're health indicators. Use them as health indicators. How do we make this better? Use them as, as things to cause you to ask questions. Why can't we deliver more than once a month? 

Kovid Batra: So basically, if we are, let's say, for a lack of a better term, we use 'benchmarks'. There should, those should be set on the basis of the cadence of our own team, how they are working, how they are designed to deliver. That's how we should be doing. Is that what you mean? 

Bryan Finster: No, I would absolutely use them as health indicators, you know, track trends. Are we trending up? Are we trending down? And then use that as the basis of starting an investigation into why are we trending up? Why are we trending down? I mean, are we trending up because people think it's a goal? And is there some other metric that's going south that we're not aware of while we're, while we're focusing on this one thing getting better? I mean, this is, Richard, I mean, you pointed out exactly. It's a good, balanced set of metrics if they're measured correctly and the data is collected correctly. And you can't, you know, another problem I see is people focusing on one. I remember a director telling his area, "Hey, we're going to start using DORA metrics. But for change management purposes, we're only going to start by focusing on MTTR instead of anything else." They're a set, they go together, you know? You can't just peel one out. Um, so. 

Kovid Batra: Got it, got it. Yeah, that absolutely answers my question. All right. I think with that, we come to the end of this session. Uh, before we part, uh, any parting advice from you, Bryan, Rich? 

Richard Pangborn: Um, just what we found successful in our own journey. Every, every company is different. They all have their own different processes, their own way of doing things, their own way of building things. So, there's not exactly one right way to do it. It's usually by trial and error for each, probably each company, uh, I would say. Depending on the tooling that you want to choose, the way you want to break down tasks and deliver stories. Like for us, we chose one day tasks in Jira. Um, we didn't choose, uh, long-lived branches. Um, we're not trunk-based explicitly, but we're, our PRs last no longer than a day. Um, so this is what we find works well for us. We're delivering daily. We haven't gotten yet to the, um, you know, delivering multiple times a day, but that's, that's somewhere in the future that we're going to get to, but you have to balance that with business goals. You need to get buy-in from stakeholders before you can get, um, development time to sort of build out that, that structure. So, um, it's a process. Um, everyone's different. Um, but I think bringing in some of these KPIs or, or sorry, benchmarks or health metrics, whatever you want to call them, um, has worked for us in the way where we have more observability into how we operate as engineers than we've ever had in the past. Um, so it's been pretty beneficial for us. 

Bryan Finster: Yeah. I'd say that the observability is critical. Um, you know, I've, I've built a few dashboards for showing these things. And for people, for development teams who were, uh, focusing on "we want to improve", they always found value in those things. Um, but I, one, one caution I have is that if you are showing metrics on a dashboard, understand that the user experience of that will change people's behaviors. It's so important people understand. And whenever I'm building a dashboard, I'm showing offsetting metrics together in a way that they can't be separated, um, because you, otherwise you'll just focus on one. I want you to focus on those offsetting metrics as a group, make them all better. Um, but it only matters if people are looking at it. And if it's not a constant topic of conversation, um, it, it, it won't help at all. And I know, uh, Abi Noda and I have a difference of opinion on how, on data collection. You know, I'm big on, I want real-time data because I'm trying to improve quickly. Uh, he's big on surveys, but for me, and I don't get feedback fast enough on, um, with a survey to be able to correct the course correctly if I'm trying to do, if I'm trying to improve CI and CD. It's good for other stuff. Good for culture. So that's the difference. Um, but make sure that you're not just going out and buying a tool to measure these things that shows data in a way or has, you know, that causes bad behavior, um, or shows, or collects data in a way where it's not collecting it correctly. Really understand what you're doing before you go and implement a tool. 

Kovid Batra: Cool. Thanks for that piece of advice, Bryan, Rich. Uh, with that, I think that's our time. Just a quick announcement about the next webinar session, which is with the pioneer of CD, the co-author of the book 'Continuous Delivery', Dave Farley. That will be on 25th of September. So audience, stay tuned. I'll be sharing the link with you guys, sending you emails. Thank you so much. That's it for today. 

Bryan Finster: Thanks so much. 

Richard Pangborn: I appreciate it. 

Kovid Batra: Thanks, Rich. Thanks, Bryan.

Top 6 Jellyfish Alternatives

Software engineering teams are important assets for any organization. They build high-quality products, gather and analyze requirements, design system architecture and components, and write clean, efficient code. Measuring their success and identifying the challenges they may be facing is important. However, this isn’t always easy and can take a lot of time. 

That’s where engineering analytics tools come to the rescue. One of the most popular is Jellyfish, which is widely used by engineering leaders and CTOs across the globe. 

While it is often a good choice for organizations, there is a chance it doesn’t work for you. Worry not! We’ve curated the top 6 Jellyfish alternatives that you can consider when choosing an engineering analytics tool for your company.

What is Jellyfish? 

Jellyfish is a popular engineering management platform that offers real-time visibility into the engineering organization and team progress. It translates tech data into information that the business side can understand and offers multiple perspectives on resource allocation. It also shows the status of every pull request and commit on the team. Jellyfish can be integrated with third-party tools such as Bitbucket, GitHub, GitLab, JIRA, and other popular HR, calendar, and roadmap tools. 

However, its UI can be tricky initially and has a steep learning curve due to the vast amount of data it provides, which can be overwhelming for new users. 

Top Jellyfish Alternatives 

Typo 

Typo is a Jellyfish alternative that maximizes the business value of software delivery by offering features that improve SDLC visibility, developer insights, and workflow automation. It provides comprehensive insights into the deployment process through DORA and other key engineering metrics and offers engineering benchmarks to compare the team’s results across industries. Its automated code review tool helps development teams identify code issues and auto-fix them before merging to master. It captures a 360-degree view of developers’ experience and includes an effective sprint analysis that tracks and analyzes the team’s progress. Typo can be integrated with tech tools such as GitHub, GitLab, Jira, Linear, and Jenkins. 

Price

  • Free: $0/dev/month
  • Starter: $16/dev/month
  • Pro: $24/dev/month
  • Enterprise: Quotation on request

LinearB 

LinearB is another leading software engineering intelligence platform that provides insights for identifying bottlenecks and streamlining software development workflow. It highlights automatable tasks to save time and enhance developer productivity. It also tracks DORA metrics and collects data from other tools to provide a holistic view of performance. Its project delivery tracker reflects project delivery status updates using planning accuracy and delivery reports. LinearB can be integrated with third-party applications such as Jira, Slack, and Shortcut. 

Price

  • Free: $0/dev/month
  • Business: $49/dev/month
  • Enterprise: Quotation on request

Waydev

Waydev is a software development analytics platform that provides actionable insights on metrics related to bug fixes, velocity, and more. It uses the agile method for tracking output during the development process and allows engineering leaders to see data from different perspectives. Unlike other platforms, it emphasizes market-based metrics and ROI. Its resource planning assistance feature helps avoid scope creep and offers insight into the cost and progress of deliverables and key initiatives. Waydev can be integrated with well-known tools such as GitLab, GitHub, CircleCI, and Azure DevOps.

Price

  • Quotation on request

Pluralsight Flow 

Pluralsight Flow is a popular tool that tracks DORA metrics and helps to benchmark DevOps practices. It aggregates Git data into comprehensive insights and offers a bird’s-eye view of what’s happening in development teams. Its sprint feature helps to make better plans, diving into the team’s accomplished work and showing whether that work was committed or unplanned. Its team-level ticket filters, Git tags, and other lightweight signals streamline pulling data from different sources. Pluralsight Flow can be integrated with tools such as Azure DevOps and GitLab.

Price

  • Core: $38/mo
  • Plus: $50/mo

Code Climate Velocity

Code Climate Velocity is a popular tool that uses repos to synthesize data and offers visibility into code coverage, coding practices, and security risks. It tracks issues in real time to help teams move quickly through existing workflows and allows engineering leaders to compile data on dev velocity and code quality. It has JIRA and Git support, compressing that data into real-time analytics. Its customized dashboard and trends provide a view into everything from each individual’s day-to-day tasks to long-term progress. Code Climate Velocity also provides technical debt assessment and style checks in every pull request.

Price

  • Open Source: $0 (Free forever)
  • Startup: $0 (up to 4 seats)
  • Team: $16.67/month/seat billed annually ($20 billed monthly)

Swarmia 

Swarmia is another well-known engineering effectiveness platform that provides quantitative insights into the software development pipeline. It offers visibility into three key areas: Business outcomes, developer productivity, and developer experience. It allows engineering leaders to create flexible and audit-ready software cost capitalization reports. It also identifies and fixes common teamwork antipatterns such as siloing and too much work in progress. Swarmia can be integrated with popular tools such as Slack, JIRA, Gitlab, Azure DevOps, and more. 

Price

  • Free: £0/dev/month
  • Lite: £20/dev/month
  • Standard: £39/dev/month

Conclusion 

While we have shared the top software development analytics tools, don’t forget to conduct thorough research before selecting one for your engineering team. Check whether it aligns well with your requirements, facilitates team collaboration and continuous improvement, integrates seamlessly with your existing and upcoming tools, and so on. 

All the best! 

Cycle Time Breakdown: Minimizing PR Review Time

Cycle time is a critical metric that assesses the efficiency of your development process and captures the total time taken from the first commit to when the PR is merged or closed. 

PR Review Time is the third stage, i.e., the time taken from the Pull Request's creation until it gets merged or closed. Efficiently reducing PR Review Time is crucial for optimizing the development workflow. 

In this blog post, we'll explore strategies to effectively manage and reduce review time to boost your team's productivity and success.

What is Cycle Time?

Cycle time is a crucial metric that measures the average time a PR spends across all the stages of the development pipeline. These stages are: 

  • The Coding time represents the time taken to write and complete the code changes.
  • The Pickup time denotes the time spent before a pull request is assigned for review.
  • The Review time encompasses the time taken for peer review and feedback on the pull request.
  • The Merge time shows the duration from the approval of the pull request to its integration into the main codebase.

A shorter cycle time indicates an optimized process and a highly efficient team. It correlates with higher stability and enables the team to identify bottlenecks and respond quickly to issues as changes ship. 
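
To make the breakdown concrete, here is a minimal sketch in Python of how these four stages could be computed from a handful of pull request timestamps. The field names (`first_commit_at`, `first_review_at`, and so on) are hypothetical placeholders for whatever your Git provider exposes, not a specific API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PullRequest:
    # Hypothetical timestamps; in practice these would come from your Git provider.
    first_commit_at: datetime   # first commit on the branch
    opened_at: datetime         # PR created
    first_review_at: datetime   # first review activity
    approved_at: datetime       # PR approved
    merged_at: datetime         # PR merged

def cycle_time_breakdown(pr: PullRequest) -> dict[str, timedelta]:
    """Split a PR's cycle time into the four stages described above."""
    return {
        "coding_time": pr.opened_at - pr.first_commit_at,
        "pickup_time": pr.first_review_at - pr.opened_at,
        "review_time": pr.approved_at - pr.first_review_at,
        "merge_time":  pr.merged_at - pr.approved_at,
    }

# Example PR with made-up timestamps.
pr = PullRequest(
    first_commit_at=datetime(2024, 9, 2, 9, 0),
    opened_at=datetime(2024, 9, 3, 14, 0),
    first_review_at=datetime(2024, 9, 4, 10, 0),
    approved_at=datetime(2024, 9, 5, 16, 0),
    merged_at=datetime(2024, 9, 5, 17, 30),
)

breakdown = cycle_time_breakdown(pr)
total = sum(breakdown.values(), timedelta())
for stage, duration in breakdown.items():
    print(f"{stage}: {duration}")
print(f"total cycle time: {total}")
```

Averaging these per-stage durations across all PRs merged in a given period gives the team-level cycle time breakdown.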

Why Does Measuring Cycle Time Matter? 

  • PR cycle time allows software development teams to understand how efficiently they are working. A low cycle time indicates a faster review process and quick integration of code changes, leading to a high level of efficiency. 
  • Measuring cycle time helps identify stages in the development process where work is getting stuck or delayed. This allows teams to pinpoint bottlenecks and areas that require attention. 
  • Monitoring PR cycle time regularly informs process improvements, helping teams create and implement more effective and streamlined workflows.
  • Tracking cycle time fosters continuous improvement, enabling teams to adapt to changing requirements more quickly, maintain a high level of productivity, and ship products faster. 
  • Cycle time enables better forecasting and planning, allowing engineering teams to estimate project timelines reliably and manage stakeholder expectations.  

What is PR Review Time? 

The PR Review Time encompasses the time taken for peer review and feedback on the pull request. It is a critical component of PR Cycle Time that represents the duration of a Pull Request (PR) spent in the review stage before it is approved and merged. Review time is essential for understanding the efficiency of the code review process within a development team.

Conducting code reviews as frequently as possible is crucial for a team that strives for ongoing improvement. Ideally, code should be reviewed in near real-time, with a maximum time frame of 2 days for completion.

If your review time is high, the Typo platform will display the review time in red.

How to Identify High Review Time?

Long reviews can be identified in the "Pull Request" tab, which lists all the open PRs.

You can also identify all the PRs with a high cycle time by clicking on "View PRs" in the cycle time card. 

See all the pending reviews in the “Pull Request” tab and work through them starting with the oldest review. 

Causes of High Review Time

Unawareness of the PR being issued

It's common for teams to experience communication breakdowns, even the most proficient ones. To address this issue, we suggest utilizing Typo's Slack alerts to monitor requests that are left hanging. This feature allows channels to receive notifications only after a specific time period (12 hours by default) has passed, which can be customized to your preference.

Another helpful practice is assigning a reviewer to work alongside developers, particularly those new to the team. Additionally, we encourage the team to utilize personal Slack alerts, which will directly notify them when they are assigned to review code.
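For teams not using such a tool, the same idea can be approximated with a small script. The sketch below polls GitHub for open PRs that have waited past a threshold with no review and posts a reminder to a Slack incoming webhook; the repository name, webhook, and token handling are placeholders, and this is not Typo's implementation.

```python
import os
from datetime import datetime, timezone

import requests  # pip install requests

REPO = "your-org/your-repo"               # placeholder
WEBHOOK = os.environ["SLACK_WEBHOOK_URL"]  # Slack incoming webhook URL
TOKEN = os.environ["GITHUB_TOKEN"]
THRESHOLD_HOURS = 12

headers = {"Authorization": f"Bearer {TOKEN}"}
prs = requests.get(f"https://api.github.com/repos/{REPO}/pulls",
                   params={"state": "open"}, headers=headers).json()

for pr in prs:
    # Reviews already submitted for this PR
    reviews = requests.get(pr["url"] + "/reviews", headers=headers).json()
    age = datetime.now(timezone.utc) - datetime.fromisoformat(
        pr["created_at"].replace("Z", "+00:00"))
    if not reviews and age.total_seconds() > THRESHOLD_HOURS * 3600:
        requests.post(WEBHOOK, json={
            "text": f"PR #{pr['number']} '{pr['title']}' has waited "
                    f"{age.total_seconds() / 3600:.0f}h without a review."
        })
```

Run it on a schedule (e.g. a cron job or CI workflow) so stale review requests resurface automatically.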

Large PRs

When a team is swamped with work, extensive pull requests may also be left unattended if reviewing them requires significant time. To avoid this issue, it's recommended to break down tasks into shorter and faster iterations. This approach not only reduces cycle time but also helps to accelerate the pickup time for code reviews.

Team is diverted to other work

A bug is discovered that requires an urgent patch, or a high-priority feature comes down from the CEO. In such situations, countless unexpected events may demand immediate attention, causing other ongoing work, including code reviews, to take a back seat.

Too much WIP

Code reviews are frequently deprioritized in favor of other tasks, such as creating pull requests with your own changes. This behavior is often a result of engineers misunderstanding how reviews fit into the broader software development lifecycle (SDLC). However, it's important to recognize that code waiting for review is essentially at the finish line, ready to be incorporated and provide value. Every hour that a review is delayed means one less hour of improvement that the new code could bring to the application.

Too few people are assigned to do reviews

Certain teams restrict the number of individuals who can conduct PR reviews, typically reserving this task for senior members. While this approach is well-intentioned and ensures that only top-tier code is released into production, it can create significant bottlenecks, with review requests accumulating on the desks of just one or a few people. This ultimately results in slower cycle times, even if it improves code quality.

Ways to Reduce Review Time

Here are some steps you can take to monitor and reduce your review time:

Set goals for review time

With Typo, you can set a goal to keep the review time under the 24 hours we recommend. After setting the goal, the system sends real-time personal Slack alerts when PRs are assigned for review. 

Focus on high-priority items

Prioritize the critical functionalities and high-risk areas of the software during the review, as they are more likely to have significant issues. This can help you focus on the most critical items first and reduce review time.

Regular code reviews 

Conduct code reviews frequently to catch and fix issues early on in the development cycle. This ensures that issues are identified and resolved quickly, rather than waiting until the end of the development cycle.

Create standards and guidelines 

Establish coding standards and guidelines to ensure consistency in the codebase, which can help to identify potential issues more efficiently. Keep a close tab on the following metrics that can impact your review time:

  • PR merged w/o review
  • Pickup time
  • PR size

Effective communication 

Ensure that there is clear communication among the development team and stakeholders so issues are identified quickly and resolved in a timely manner. 

Conduct peer reviews 

Peer reviews can help catch issues that may have been missed during individual code reviews. By having team members review each other's code, you can ensure that all issues are caught and resolved quickly.

Conclusion

Minimizing PR review time is crucial for enhancing the team's overall productivity and an efficient development workflow. By implementing these strategies, organizations can significantly reduce cycle times and enable faster delivery of high-quality code. Prioritizing these practices will lead to continuous improvement and greater success in the software development process.

Become an Elite Team With Dora Metrics

In the world of software development, high-performing teams are crucial for success. DORA (DevOps Research and Assessment) metrics provide a powerful framework to measure the performance of your DevOps team and identify areas for improvement. By focusing on these metrics, you can propel your team towards elite status.

What are DORA Metrics?

DORA metrics are a set of four key metrics that measure the efficiency and effectiveness of your software delivery process:

  • Deployment Frequency: This metric measures how often your team successfully releases new features or fixes to production.
  • Lead Time for Changes: This metric measures the average time it takes for a code change to go from commit to production.
  • Change Failure Rate: This metric measures the percentage of deployments that result in production incidents.
  • Mean Time to Restore (MTTR): This metric measures the average time it takes to recover from a production incident.
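To make these definitions concrete, here is a rough sketch of how the four metrics might be derived from a team's own delivery records; the record layout is an assumption for illustration, not a standard schema.

```python
from datetime import timedelta
from statistics import median

def dora_metrics(deployments, incidents, period_days=30):
    """deployments: dicts with 'lead_time' (timedelta) and 'failed' (bool).
    incidents: dicts with 'detected' and 'resolved' datetimes."""
    deploy_freq = len(deployments) / period_days                 # deployments per day
    lead_time = median(d["lead_time"] for d in deployments)      # commit -> production
    failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
    mttr = sum((i["resolved"] - i["detected"] for i in incidents),
               timedelta()) / len(incidents)
    return {
        "deployment_frequency_per_day": deploy_freq,
        "median_lead_time": lead_time,
        "change_failure_rate": failure_rate,
        "mean_time_to_restore": mttr,
    }
```

In practice these records would come from your CI/CD system and incident tracker rather than being assembled by hand.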

Why are DORA Metrics Important?

DORA metrics provide valuable insights into the health of your DevOps practices. By tracking these metrics over time, you can identify bottlenecks in your delivery process and implement targeted improvements. Research by DORA has shown that high-performing teams (elite teams) consistently outperform low-performing teams in all four metrics.

The research highlights the significant performance advantage that elite teams enjoy. By striving to achieve elite performance in your DORA metrics, you can unlock faster deployments, fewer errors, and quicker recovery times from incidents.

How to Achieve Elite Levels of DORA Metrics

Here are some key strategies to achieve elite levels of DORA metrics:

  • Embrace a Culture of Continuous Delivery:
    A culture of continuous delivery emphasizes automating the software delivery pipeline. This allows for faster and more frequent deployments with lower risk.
  • Invest in Automation:
    Automating manual tasks in your delivery pipeline can significantly reduce lead times and improve deployment frequency. This includes automating tasks such as testing, building, and deployment.
  • Break Down Silos:
    Effective collaboration between development, operations, and security teams is essential for high performance. Break down silos between these teams to foster a shared responsibility for delivery.
  • Implement Continuous Feedback Loops:
    Establish feedback loops throughout your delivery pipeline to identify and fix issues early. This can involve practices like code reviews, automated testing, and performance monitoring.
  • Focus on Error Prevention:
    Shift your focus from fixing errors in production to preventing them from occurring in the first place. Utilize tools and techniques like static code analysis and unit testing to catch errors early in the development process.
  • Measure and Monitor:
    Continuously track your DORA metrics to identify trends and measure progress. Use data-driven insights to guide your improvement efforts.
  • Promote a Culture of Learning:
    Create a culture of continuous learning within your team. Encourage team members to experiment with new technologies and approaches to improve delivery performance.

By implementing these strategies and focusing on continuous improvement, your DevOps team can achieve elite levels of DORA metrics and unlock significant performance gains. Remember, becoming an elite team is a journey, not a destination. By consistently working towards improvement, you can empower your team to deliver high-quality software faster and more reliably.

Additional Tips

In addition to the above strategies, here are some additional tips for achieving elite DORA metrics:

  • Set clear goals for your DORA metrics and track your progress over time.
  • Communicate your DORA metrics goals to your entire team and get everyone on board.
  • Celebrate successes and milestones along the way.
  • Continuously seek feedback from your team and stakeholders and adapt your approach as needed.

By following these tips and focusing on continuous improvement, you can help your DevOps team reach new heights of performance.

Leveraging LLM Models to Achieve DevOps Excellence

As you embark on your journey to DevOps excellence, consider the potential of Large Language Models (LLMs) to amplify your team's capabilities. These advanced AI models can significantly contribute to achieving elite DORA metrics.

Specific Use Cases for LLMs in DevOps

Code Generation and Review:

  • Autogenerate boilerplate code, unit tests, or even entire functions based on natural language descriptions.
  • Assist in code reviews by suggesting improvements, identifying potential issues, and enforcing coding standards.
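As one hedged example of the first use case, a lightweight review assistant might send a diff to a model and post back its comments. The sketch below assumes the OpenAI Python client and an illustrative model name; it is a starting point, not a production-ready reviewer.

```python
from openai import OpenAI  # assumes the openai package and OPENAI_API_KEY are available

client = OpenAI()

def review_diff(diff: str) -> str:
    """Ask the model for review comments on a unified diff."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. Point out bugs, style issues, "
                        "and violations of our coding standards. Be concise."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

# Usage: feed in the output of `git diff main...HEAD` from a CI job,
# then post the returned text as a PR comment.
```

Human reviewers should still approve changes; the model's output is a first pass, not a gate.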

Incident Response and Root Cause Analysis:

  • Analyze log files, error messages, and monitoring data to swiftly identify the root cause of incidents.
  • Generate incident reports and suggest remediation steps.

Documentation Generation:

  • Create and maintain up-to-date documentation for codebases, infrastructure, and processes.
  • Generate API documentation, user manuals, and knowledge bases.

Predictive Analytics:

  • Analyze historical data to forecast potential issues, such as infrastructure bottlenecks or application performance degradation.
  • Provide early warnings to prevent service disruptions.

Chatbots and Virtual Assistants:

  • Develop intelligent chatbots to provide support to developers and operations teams.
  • Automate routine tasks and answer frequently asked questions.

Natural Language Querying of DevOps Data:

  • Allow users to query DevOps metrics and data using natural language.
  • Generate insights and visualizations based on user queries.

Automation Scripting:

  • Assist in generating scripts for infrastructure provisioning, configuration management, and deployment automation.
  • Improve automation efficiency and reduce human error.

By strategically integrating LLMs into your DevOps practices, you can enhance collaboration, improve decision-making, and accelerate software delivery. Remember, while LLMs offer significant potential, human expertise and oversight remain crucial for ensuring accuracy and reliability.

Cycle Time Breakdown: Minimizing Coding Time

Cycle time is a critical metric for assessing the efficiency of your development process; it captures the total time taken from the start to the completion of a task.

Coding time is the first stage i.e. the duration from the initial commit to the pull request submission. Efficiently managing and reducing coding time is crucial for maintaining swift development cycles and ensuring timely project deliveries.

Focusing on minimizing coding time can enhance a team's workflow efficiency, accelerate feedback loops, and ultimately deliver high-quality code more rapidly. In this blog post, we'll explore strategies to effectively manage and reduce coding time to boost your team's productivity and success.

What is Cycle Time?

Cycle time measures the total elapsed time taken to complete a specific task or work item from the beginning to the end of the process. It breaks down into the following stages:

  • The Coding time represents the time taken to write and complete the code changes.
  • The Pickup time denotes the time spent before a pull request is assigned for review.
  • The Review time encompasses the time taken for peer review and feedback on the pull request.
  • The Merge time shows the duration from the approval of the pull request to its integration into the main codebase.

Longer cycle times lead to delayed project deliveries and hinder overall development efficiency. Shorter cycle times, on the other hand, enable faster feedback, quicker adjustments, and more efficient development, leading to accelerated project deliveries and improved productivity.

How Does Measuring Cycle Time Improve Engineering Efficiency? 

Measuring cycle time provides valuable insights into the efficiency of a software engineering team's development process. Below are some of the ways measuring cycle time can be used to improve engineering team efficiency:

  • Measuring cycle time for individual tasks or user stories can identify stages in the development process where work tends to get stuck or delayed. This helps to pinpoint bottlenecks and areas that need improvement.
  • Cycle time indicates the overall efficiency of your development process. Shorter cycle times generally reflect a streamlined and efficient workflow.
  • Understanding cycle time helps with better forecasting and planning. Knowing how long it typically takes to complete tasks can accurately estimate project timelines and manage stakeholder expectations.
  • Measuring cycle time allows you to evaluate the impact of process changes. 
  • Cycle time data for individual team members provides insights into their productivity and can be used for performance evaluations.
  • Tracking cycle time across multiple projects or teams allows process standardization and best practice identification.

What is Coding Time? 

Coding time is the time it takes from the first commit to a branch to the eventual submission of a pull request. It is a crucial part of the development process where developers write and refine their code based on the project requirements. High coding time can lead to prolonged development cycles, affecting delivery timelines. Managing coding time efficiently is essential to ensure code is completed on time, with quicker feedback loops and a frictionless development process. 

To achieve continuous improvement, it is essential to divide the work into smaller, more manageable portions. Our research indicates that on average, teams require 3-4 days to complete a coding task, whereas high-performing teams can complete the same task within a single day.
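If you want a back-of-the-envelope measurement outside any platform, coding time can be approximated per PR by comparing the first commit timestamp with the time the PR was opened. The sketch below uses the GitHub REST API; the repository name and token handling are placeholders.

```python
import os
from datetime import datetime

import requests  # pip install requests

REPO = "your-org/your-repo"  # placeholder
headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def coding_time(pr_number: int):
    """Time from the first commit on the branch to PR submission."""
    base = f"https://api.github.com/repos/{REPO}/pulls/{pr_number}"
    pr = requests.get(base, headers=headers).json()
    commits = requests.get(base + "/commits", headers=headers).json()
    first_commit = min(
        datetime.fromisoformat(c["commit"]["author"]["date"].replace("Z", "+00:00"))
        for c in commits
    )
    opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    return opened - first_commit

print(coding_time(42))  # hypothetical PR number
```

Averaging this over recently merged PRs gives a simple baseline to compare against the 1-day benchmark above.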

In the Typo platform, if your coding time is high, your main dashboard will display the coding time in red.


Benchmarking coding time helps teams identify areas where developers may be spending excessive time, allowing for targeted improvements in development processes and workflows. It also enables better resource allocation and project planning, leading to increased productivity and efficiency.

How to Identify High Coding Time?

Identify the delay in the “Insights” section at the team level & sort the teams by the cycle time. 


Click on the team to deep dive into the cycle time breakdown of each team & see the delays in the coding time. 

Causes of High Coding Time

There are broadly three main causes of high coding time:

  • The task is too large on its own
  • Task requirements need clarification
  • Too much work in progress

The Task is Too Large

Frequently, a lengthy coding time can suggest that the tasks or assignments are not being divided into more manageable segments. It would be advisable to investigate repositories that exhibit extended coding times for a considerable number of code changes. In instances where the size of a PR is substantial, collaborating with your team to split assignments into smaller, more easily accomplishable tasks would be a wise course of action.

“Commit small, commit often” 

Task Requirements Need Clarification

While working on an issue, you may encounter situations where seemingly straightforward tasks unexpectedly grow in scope. This may arise due to the discovery of edge cases, unclear instructions, or new tasks added after the assignment. In such cases, it is advisable to seek clarification from the product team, even if it may take longer. Doing so will ensure that the task is appropriately scoped, thereby helping you complete it more effectively.

There are occasions when a task can prove to be more challenging than initially expected. It could be due to a lack of complete comprehension of the problem, or it could be that several "unknown unknowns" emerged, causing the project to expand beyond its original scope. The unforeseen difficulties will inevitably increase the overall time required to complete the task.

Too Much Work in Progress

When a developer has too many ongoing projects, they are forced to frequently multitask and switch contexts. This can lead to a reduction in the amount of time they spend working on a particular branch or issue, increasing their coding time metric.

Use the work log to understand the dev’s commits over a timeline to different issues. If a developer makes sporadic contributions to various issues, it may be indicative of frequent context switching during a sprint. To mitigate this issue, it is advisable to balance and rebalance the assignment of issues evenly and encourage the team to avoid multitasking by focusing on one task at a time. This approach can help reduce coding time.


Ways to Prevent High Coding Time

Set up Slack Alerts for High-Risk Work

Set goals for work at risk; a common rule of thumb is keeping PRs under 100 changed lines of code and the refactor size above 50%. 

To achieve the team goal of reducing coding time, real-time Slack alerts can be utilized to notify the team of work at risk when large and heavily revised PRs are published. By using these alerts, it is possible to identify and address issues, story points, or branches that are too extensive in scope and require breaking down.
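A lightweight stand-in for such an alert is a CI step that flags oversized PRs. The sketch below mirrors the 100-line rule of thumb above; the repository name and the way the PR number reaches the script are placeholders that depend on your CI system.

```python
import os
import sys

import requests  # pip install requests

REPO = "your-org/your-repo"              # placeholder
PR_NUMBER = os.environ["PR_NUMBER"]      # typically injected by the CI system
headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

pr = requests.get(f"https://api.github.com/repos/{REPO}/pulls/{PR_NUMBER}",
                  headers=headers).json()
changed_lines = pr["additions"] + pr["deletions"]

if changed_lines > 100:
    print(f"PR #{PR_NUMBER} changes {changed_lines} lines; consider splitting it.")
    sys.exit(1)  # fail the check so the work at risk is visible on the PR
```

Whether the check blocks the merge or only warns is a team decision; the point is to surface oversized changes early.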

Balance Workload in the Team

To manage workloads and assignments effectively, it is recommended to develop a habit of regularly reviewing the Insights tab, and identifying long PRs on a weekly or even daily basis. Additionally, examining each team member's workload can provide valuable insights. By using this data collaboratively with the team, it becomes possible to allocate resources more effectively and manage workloads more efficiently.

Use a Framework

Using a framework, such as React or Angular, can help reduce coding time by providing pre-built components and libraries that can be easily integrated into the application.

Code Reuse

Reusing code that has already been written can help reduce coding time by eliminating the need to write code from scratch. This can be achieved by using code libraries, modules, and templates.

Rapid Prototyping

Rapid prototyping involves creating a quick and simple version of the application to test its functionality and usability. This can help reduce coding time by allowing developers to quickly identify and address any issues with the application.

Use Agile Methodologies

Agile methodologies, such as Scrum and Kanban, emphasize continuous delivery and feedback, which can help reduce coding time by allowing developers to focus on delivering small, incremental improvements to the application.

Pair Programming

Pair programming involves two developers working together on the same code at the same time. This can help reduce coding time by allowing developers to collaborate and share ideas, which can lead to faster problem-solving and more efficient coding.

Conclusion

Optimizing coding time, a key component of the overall cycle time, enhances development efficiency and accelerates project delivery. By focusing on reducing coding time, software development teams can streamline their workflows and achieve quicker feedback loops. This leads to a more efficient development process and timely project completions. Implementing strategies such as dividing tasks into smaller segments, clarifying requirements, minimizing multitasking, and using effective tools and methodologies can significantly improve both coding time and cycle time.

Top 5 Waydev Alternatives

Software engineering teams are the engine that drives your product forward. They write clean, efficient code, gather and analyze requirements, design system architecture and components, and build high-quality products. And since the tech industry is ever-evolving, it is crucial to understand how well they are performing and what needs to be fixed. 

This is where software development analytics tools come in. These tools provide insights into various metrics related to the development workflow, measure progress, and help to make informed decisions.

One such tool is Waydev, which is used by development teams across the globe. While it is often a strong choice, there is a chance it won't work for you. 

We’ve curated the top 5 Waydev alternatives that you can consider when selecting engineering analytics tools for your company.

What is Waydev?

Waydev is a leading software development analytics platform that puts more emphasis on market-based metrics. It allows development teams to compare the ROI of specific products to identify which features need improvement or removal. It also gives insights into the cost and progress of deliverables and key initiatives. Waydev can be seamlessly integrated with GitHub, GitLab, CircleCI, Azure DevOps, and other popular tools. 

However, this analytics tool can be expensive, particularly for smaller teams or startups, and may lack certain functionalities, such as detailed insights into pull request statistics or ticket activity. 

Top Waydev Alternatives 

A few of the best Waydev alternatives are: 

Typo 

Typo is a software engineering analytics platform that offers SDLC visibility, actionable insights, and workflow automation for building high-performing software teams. It tracks essential DORA and other engineering metrics to assess their performance and improve DevOps practices. It allows engineering leaders to analyze sprints with detailed insights on tasks and scope and provides an AI-powered team insights summary. Typo’s built-in automated code analysis helps find real-time issues and hotspots across the code base to merge clean, secure, high-quality code, faster. With its holistic framework to capture developer experience, Typo helps understand how devs are doing and what can be done to improve their productivity. Its pre-built integration in the dev tool stack can highlight developer blockers, predict sprint delays, and measure business impact.

Price:

  • Free: $0/dev/month
  • Starter: $16/dev/month
  • Pro: $24/dev/month
  • Enterprise: Quotation on request

LinearB

LinearB is another software delivery intelligence platform that provides insights to help engineering teams identify bottlenecks and improve the software development workflow. It highlights automatable tasks to save time and resources and enhance developer productivity. It provides real-time alerts to development teams regarding project risks, delays, and dependencies, and allows teams to create customized dashboards for tracking various engineering metrics such as cycle time and DORA metrics. LinearB's project delivery forecasts help the team stay on schedule and communicate project delivery status updates. It can also be integrated with third-party applications such as Jira, Slack, Shortcut, and other popular tools.

Price:

  • Free: $0/dev/month
  • Business: $49/dev/month
  • Enterprise: Quotation on request

Jellyfish 

Jellyfish is an engineering management platform that aligns engineering data with business priorities. It provides real-time visibility into engineering work and allows team members to track key metrics such as PR statuses, code commits, and overall project progress. It can be integrated with various development tools such as GitHub, GitLab, JIRA, and other third-party applications. Jellyfish offers multiple perspectives on resource allocation and helps track investments made during product development. It also generates reports tailored for executives and finance teams, including insights into R&D capitalization and engineering efficiency. 

Price

  • Quotation on request

Swarmia 

Swarmia is an engineering effectiveness platform that provides visibility into three key areas: business outcomes, developer productivity, and developer experience. Its working agreements feature includes 20+ agreements, allowing teams to adopt and measure best practices from high-performing teams. It tracks healthy engineering measures and provides insights into the development pipeline. Swarmia's investment balance gives insight into the purpose of each initiative and the money the company spends on each category. It can be integrated with tech tools like source code hosting, issue trackers, and chat systems.

Price

  • Free: £0/dev/month
  • Lite: £20/dev/month
  • Standard: £39/dev/month

Pluralsight Flow

Pluralsight Flow, a software development analytics platform, aggregates Git data into comprehensive insights. It gathers important engineering metrics such as DORA metrics, code commits, and pull requests, all displayed in a centralized dashboard. It can be integrated with manual and automated testing tools such as Azure DevOps and GitLab. Pluralsight Flow offers a comprehensive view of team health, allowing engineering leaders to proactively diagnose issues. It also sends real-time alerts to keep teams informed about critical changes and updates in their workflows.

Price

  • Core: $38/mo
  • Plus: $50/mo

How to Select the Right Software Development Analytics Tool for your Team?

Picking the right analytics tool is important for the software engineering team. Check out these essential factors below before you make a purchase:

Scalability

Consider how the tool can accommodate the team’s growth and evolving needs. It should handle increasing data volumes and support additional users and projects.

Error Detection

An error detection feature must be present in the analytics tool, as it helps improve code maintainability, mean time to recovery, and bug rates.

Security Capability

Developer analytics tools must comply with industry standards and regulations regarding security vulnerabilities. They must provide strong control over open-source software and flag the introduction of malicious code.

Ease of Use

These analytics tools must have user-friendly dashboards and an intuitive interface. They should be easy to navigate, configure, and customize according to your team’s preferences.

Integrations

Software development analytics tools must be seamlessly integrated with your tech tools stack such as CI/CD pipeline, version control system, issue tracking tools, etc.

Conclusion

The tools above are a few Waydev competitors. Conduct thorough research before selecting an analytics tool for your engineering team. Check whether it aligns well with your requirements. It must enhance team performance, improve code quality and reduce technical debt, drive continuous improvement in your software delivery and development process, integrate seamlessly with third-party tools, and more.

All the best!

Top DevOps Metrics and KPIs (2024)

As an engineering leader, showcasing your team’s efficiency and alignment with business goals can be challenging. DevOps metrics and KPIs are essential tools that provide clear insights into your team’s performance and the effectiveness of your DevOps practices.

Tracking the right metrics allows you to measure the DevOps processes’ success, identify areas for improvement, and ensure that your software delivery meets high standards. 

In this blog post, let’s delve into key DevOps metrics and KPIs to monitor to optimize your DevOps efforts and enhance organizational performance.

What are DevOps Metrics and KPIs? 

DevOps metrics showcase the performance of the DevOps software development pipeline. These metrics bridge the gap between development and operations and measure and optimize the efficiency of processes and people involved. Tracking DevOps metrics enables DevOps teams to quickly identify and eliminate bottlenecks, streamline workflows, and ensure alignment with business objectives.

DevOps KPIs are specific, strategic metrics to measure progress towards key business goals. They assess how well DevOps practices align with and support organizational objectives. KPIs also provide insight into overall performance and help guide decision-making.

Why Measure DevOps Metrics and KPIs? 

Measuring DevOps metrics and KPIs is beneficial for various reasons:

  • DevOps metrics help identify areas where processes may be inefficient or problematic, enabling teams to address issues and optimize performance.
  • Tracking metrics allows development teams to maintain high standards for software quality and reliability.
  • They provide a basis for evaluating the effectiveness of DevOps practices and making data-driven decisions to drive continuous improvement and enhance processes.
  • KPIs ensure that DevOps efforts are aligned with broader business objectives. This allows organizations to achieve strategic goals and deliver value to the end-users. 
  • They provide visibility into the DevOps process that fosters better communication and collaboration within DevOps teams. 
  • Measuring metrics continuously allows teams to monitor progress, set benchmarks, and assess the impact of changes and improvements.
  • They help make strategic decisions, allowing resources to be utilized effectively and initiatives to be prioritized based on their impact.

Key DevOps Metrics and KPIs

There are many DevOps metrics available. Focus on the key performance indicators that align with your business needs and requirements. 

A few important DevOps metrics and KPIs are:

Deployment Frequency

Deployment Frequency measures how often code is deployed to production. It considers everything from bug fixes and capability improvements to new features. It monitors the rate of change in software development, highlights potential issues, and is a key indicator of agility and efficiency. A high Deployment Frequency indicates regular deployments and a streamlined pipeline, allowing teams to deliver features and updates faster. 

Lead Time for Changes

Lead Time for Changes is a measure of time taken by code changes to move from inception to deployment. It tracks the speed and efficiency of software delivery and provides valuable insights into the effectiveness of development processes, deployment pipelines, and release strategies. Short lead times allow new features and improvements to reach users quickly and enable organizations to test new ideas and features. 

Change Failure Rate

This DevOps metric tracks the percentage of newly deployed changes that caused failure or glitches in production. It reflects reliability and efficiency and relates to team capacity, code complexity, and process efficiency, impacting speed and quality. Tracking CFR helps identify bottlenecks, flaws, or vulnerabilities in processes, tools, or infrastructure that can negatively affect the software delivery’s quality, speed, and cost. 

Mean Time to Recovery

Mean Time to Recovery measures the average time a system or application takes to recover from any failure or incident. It highlights the efficiency and effectiveness of an organization's incident response and resolution procedures. A reduced MTTR means less system downtime, faster recovery from incidents, and quicker identification and resolution of potential issues. 

Cycle Time

Cycle Time metric measures the total elapsed time taken to complete a specific task or work item from the beginning to the end of the process. Measuring cycle time can provide valuable insights into the efficiency and effectiveness of an engineering team's development process. These insights can help assess how quickly the team can turn around tasks and features, identify trends and failures, and forecast how long future tasks will take.

Mean Time to Detection

Mean Time to Detection is a key performance indicator that tracks how long the DevOps team takes to identify issues or incidents. A high time to detect creates bottlenecks that may interrupt the entire workflow. On the other hand, a shorter MTTD indicates issues are identified rapidly, improving incident management strategies and enhancing overall service quality. 

Defect Escape Rate

Defect Escape Rate tracks how many issues slip through the testing phase. It monitors how often defects are uncovered in the pre-production vs. production phase. It highlights the effectiveness of the testing and quality assurance process and guides improvements to software quality. A reduced Defect Escape Rate helps maintain customer trust and satisfaction by decreasing the number of bugs encountered in live environments. 
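As a simple illustration of the arithmetic (the counts below are hypothetical):

```python
def defect_escape_rate(escaped_to_production: int, caught_before_release: int) -> float:
    """Percentage of defects that slipped past testing into production."""
    total = escaped_to_production + caught_before_release
    return 100.0 * escaped_to_production / total if total else 0.0

# e.g. 4 production bugs vs. 36 caught in QA -> 10% escape rate
print(defect_escape_rate(4, 36))
```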

Code Coverage

Code coverage measures the percentage of a codebase tested by automated tests. It helps ensure that the tests cover a significant portion of the code, and identifies untested parts and potential bugs. It assists in meeting industry standards and compliance requirements by ensuring comprehensive test coverage and provides a safety net for the DevOps team when refactoring or updating code. Hence, they can quickly catch and address any issues introduced by changes to the codebase. 
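As one way to collect this metric in a Python codebase, coverage.py can wrap a test run; the package name and test path below are placeholders, and the same numbers usually come from your CI tooling in practice.

```python
import coverage
import pytest  # or any test runner

cov = coverage.Coverage(source=["your_package"])  # placeholder package name
cov.start()
pytest.main(["tests/"])          # run the test suite while coverage is recording
cov.stop()
cov.save()
percent_covered = cov.report()   # prints a per-file table and returns the total %
print(f"Total coverage: {percent_covered:.1f}%")
```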

Work in Progress

Work in Progress represents the percentage breakdown of issue tickets or story points in the selected sprint according to their current workflow status. It helps monitor and manage workflow within DevOps teams, visualize workload, assess performance, and identify bottlenecks in the dev process. Tracking Work in Progress shows how much work the team is handling at a given time and prevents members from being overwhelmed. 

Unplanned Work

Unplanned work tracks any unexpected interruptions or tasks that arise and prevent engineering teams from completing their scheduled work. It helps DevOps teams understand the impact of unplanned work on their productivity and overall workflow and assists in prioritizing tasks based on urgency and value.

Pull Request Size

PR Size tracks the average number of lines of code added and deleted across all merged pull requests (PRs) within a specified time period. Measuring PR size provides valuable insights into the development process and helps development teams identify bottlenecks and streamline workflows. Breaking down work into smaller PRs encourages collaboration and knowledge sharing among the DevOps team. 

Error Rates

Error Rates measure the number of errors encountered in the platform. They indicate the stability, reliability, and user experience of the platform. Monitoring error rates helps ensure that applications meet quality standards and function as intended; unchecked errors can lead to user frustration and dissatisfaction. 

Deployment Time

Deployment time measures how long it takes to deploy a release into a testing, development, or production environment. It allows teams to see where they can improve deployment and delivery methods. It enables the development team to identify bottlenecks in the deployment workflow, optimize deployment steps to improve speed and reliability, and achieve consistent deployment times. 

Uptime

Uptime measures the percentage of time a system, service, or device remains operational and available for use. A high uptime percentage indicates a stable and robust system. Constant uptime tracking helps maintain user trust and satisfaction and helps organizations quickly identify and address issues that may lead to downtime.
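The underlying arithmetic is straightforward; here is a minimal sketch with hypothetical outage windows:

```python
from datetime import timedelta

def uptime_percentage(window: timedelta, downtime_windows: list[timedelta]) -> float:
    """Share of the observation window during which the service was available."""
    downtime = sum(downtime_windows, timedelta())
    return 100.0 * ((window - downtime) / window)

# e.g. two outages totalling 54 minutes in a 30-day month -> ~99.88% uptime
print(uptime_percentage(timedelta(days=30),
                        [timedelta(minutes=40), timedelta(minutes=14)]))
```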

Improve Your DevOps KPIs with Typo

Typo is one of the effective DevOps tools that offers SDLC visibility, developer insights, and workflow automation to deliver high-quality software to end-users. It can seamlessly integrate into tech tool stacks such as Git versioning, issue trackers, and CI/CD tools. It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, PR size, code coverage, and deployment frequency. Its automated code review tool helps identify issues in the code and auto-fix them before you merge to master.

  • Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
  • Analyzes code coverage within a few minutes and provides detailed coverage reports.
  • Auto-analyzes the codebase and pull requests to find issues and auto-generates fixes before you merge to master. 
  • Offers engineering benchmarks to compare the team's results across industries.
  • Provides a user-friendly interface.

Conclusion

DevOps metrics are vital for optimizing DevOps performance, making data-driven decisions, and aligning with business goals. Measuring the right key indicators provides insights into your team's efficiency and effectiveness. Choose the metrics that best suit the organization's needs, and use them to drive continuous improvement and achieve your DevOps objectives.

Webinar: ‘The Hows & Whats of DORA’ with Nathen Harvey and Ido Shveki

Typo recently hosted an engaging live webinar titled “The Hows and Whats of DORA,” featuring DORA expert Nathen Harvey and special guest Ido Shveki. With over 170 attendees, we explored DORA and other crucial engineering metrics in depth.

Nathen, the DORA Lead & Developer Advocate at Google Cloud, and Ido, the VP of R&D at BeeHero, one of our valued customers, brought their unique insights to the discussion.

The session explored why only 5-10% of engineering teams actively use DORA metrics and examined the current state of data-driven metrics like DORA and SPACE. It also highlighted the organizational and cultural elements essential for successfully implementing these metrics.

Further, Nathen explained how advanced frameworks such as DORA become critical based on team size and DevOps maturity, and offered practical guidance on choosing the most relevant metrics and benchmarks for the organization.

The event concluded with an engaging Q&A session, allowing attendees to ask questions and gain valuable insights.

P.S.: Our next live webinar is on August 28, featuring DORA expert Bryan Finster. We hope to see you there!

Timestamps

  • 00:00 - Introduction
  • 02:11 - Understanding the Low Uptake of Metrics
  • 08:11 - Mindset Shifts Essential for Metrics Implementation
  • 10:11 - Ideal Team Size for Metrics Implementation
  • 15:36 - How to Identify Benchmarks?
  • 22:06 - Aligning Business with Engineering Metrics
  • 25:04 - Choosing the Right Metrics
  • 30:49 - Q&A Session
  • 45:43 - Conclusion

Webinar Transcript

Kovid Batra: All right. Hi, everyone. Thanks for joining in for our DORA Exclusive webinar- The Hows & Whats of DORA, powered by Typo. This is Kovid, founding member at Typo and your host for today's webinar. And with me, I have two special co-hosts. Please welcome the DORA expert tonight, Nathen Harvey. He's the Lead and Dev Advocate at Google Cloud. And we have with us one of our product mentors, Typo Advocates, Ido Shveki, who is VP of R&D at BeeHero. Thanks, Nathen. Thanks, Ido, for joining in. 

Nathen Harvey: Oh, thanks for having us. I'm really excited to be here today. Thanks, Kovid. 

Ido Shveki: Me too. Thanks, Kovid. 

Kovid Batra: Guys, um, honestly, like before we get started, uh, I have to share this with the, with our audience today. Uh, both of you have been really nice. It was just one message and you were so positive in the first response itself to join this event. And honestly, uh, I feel that this, these kinds of events are really helpful for the engineering community because we are picking up a topic which is growing, people want to learn more, and, uh, Nathen, Ido, once again, thanks a lot for, for joining in on this event. 

Nathen Harvey: Oh yeah, it really is my pleasure and I totally agree that these events are so important. Um, I often say that, you know, you can't improve alone. Uh, and that's true that each individual, we can't improve our entire organization or even our entire team on our own. It requires the entire team, but even an entire team within one organization, there's so much that we can learn from each other when we look into other organizations around the world and other challenges that people are running into, how they've overcome them. And I truly believe that each and every one of us has something to share with the community, uh, even if you were just getting started, uh, maybe you found a new pitfall that others should avoid. Uh, so you can bring along those cautionary tales and share those with, with the global community. I think it's so important that we continue to learn from and, and be inspired by one another. 

Kovid Batra: Totally. I totally agree with that. All right. So I think, we'll just get started with you, Nathen. Uh, so I think the first thing that I want to talk about is very fundamental to the implementation of DORA, right? We know lately we had a Gartner report saying there were only 5 to 10 percent of teams who actually implement such frameworks through tools or through processes in their, in their organizations. Whereas, I mean, I have grown up in my professional career hearing that if we are measuring something, only then we can improve it. So if you go to any department or any, uh, business unit for that matter, everyone follows some sophisticated processes or tooling to measure those KPIs, right? Uh, why is it, why this number is so low in our engineering teams? And if, let's say, they are following something, only through... What's the current landscape according to you? I mean, you have been such a great believer of all this data-driven DORA metrics, engineering metrics, SPACE. So what's, what's your thought around it? 

Nathen Harvey: Yeah, it's a, it's a good question. And I think it's really interesting to think about. I think when you look at the practice of software engineering or development, or even operations like reliability engineering and things along those lines, these all tend to be, um, one creative work, right? When you're writing software, you're probably writing things that have never been written before. You're trying to solve a new problem that's very specific to your context. Um, it can be very difficult to measure, what does that look like? I mean, we've, we've used hundreds of different measures over the years. Some are terrible. You know, I think back to a while ago, and hopefully no one watching is under this measurement today. But how many lines of code did you commit to the repository? That's, that's a measure that has certainly been used in the past to figure out, is this a develop, is this developer being productive or not? Uh, we all know, hopefully by now that that's a, it's a terrible way to measure whether or not you're delivering value, whether or not you're actually being productive. So, I think that that's, that's part of it. 

I also think, frankly, that, uh, until a few years ago, the world was working in a, in a way in which finances were easy to get. We were kind of living in this zero interest rate, uh, world. Um, and engineers, you know, we're, we're special. We do work that is, that can't be understood by anyone else because we have this depth of knowledge in exactly what we're doing. That's kind of a lie. Uh, those salespeople, those marketing people, they have a depth of knowledge that we don't understand, that we couldn't do their job in the same way that they couldn't do our job. And that's, that's not to say that one is better than the other, or one is more special than the other, but we absolutely need different ways to measure. And even ways that we have to measure other sort of disciplines, uh, don't actually give us the whole picture. Take sales, for example, right? You might look at well, uh, how much, uh, how much revenue is this particular salesperson bringing in to the organization? That is certainly one measure of the productivity of that salesperson, but it doesn't really give you the whole picture, right? How is that salesperson's experience? How are the people that are interacting with that salesperson? How is their experience? So I think that it is really difficult to agree on a good set of measures to understand what those measures are. And frankly, and this, this might be a little bit shocking, Kovid, but look, I, I, I am a big proponent of DORA and the research and everything that we've done here. But between you and me, I don't want you to do DORA metrics. I don't want you to. I don't care about the DORA metrics. What I care about is that you and your team are improving, improving the practices and the processes that you have to deliver and operate software, improving the well-being of the members of your team, improving the value that you're creating for your business, and improving the experience that you're creating for your customers.

Now, none of those are the DORA metrics. Of course, Measuring the DORA metrics helps us assess some of those things and what we've been able to show through the research is that improving things like software delivery performance have positive outcomes or positive predictive nature of better organizational success, better customer satisfaction, better well-being for your teams. And so, I think there's there's this point where, you know, there's, uh, maybe this challenge, right, do you want, do you want me to spend as an engineer? Do you want me to spend time measuring the work that I'm doing, measuring how much value am I delivering, or do you want me delivering more value? Right? And it's not really an either or trade-off, but this is kind of some of the mindsets I have. And I think that this is some of the, the blockers that come in place when people want to try to bring in a measurement framework or a metrics framework. And then finally, Uh, you know, between you and me, nobody really likes their work to be measured. I want to feel like I'm providing valuable work and, and know that that's the case, but if you ask me to measure it, I start to get really worried about why are you asking that question. Are you asking that question because you want to give me a raise and a promotion and more money? Great. I'm gonna make sure that these numbers look really good. If you're asking that question to figure out if you need to keep me on board, or maybe you can let me go, now I'm getting really nervous about the questions that you're asking.

And so I think there's a lot of like human nature in the prevention of adopting these sorts of frameworks. And it really gets back to, like, who are these frameworks for? And again, I'll just go back to what I said sort of towards the beginning. I don't want you to do DORA metrics. I want you to improve. I want you to get better. And so, if we think about it in that perspective, really the DORA metrics are for me and my teammates. They aren't necessarily for my leaders. Because it's me and my teammates that are going to make those improvement efforts. 

Kovid Batra: Totally. I think, um, very wise words there. One thing that I just picked up from what you just said, uh, from the narrative, like there is a huge organizational cultural play in this, right? People are at the center of how things get implemented. So, you have been experiencing this with a lot of teams. You have implemented this. What's the difference that you have seen? What are those mindsets which make these things implement actually? What are those organizational factors that make these things implement? 

Nathen Harvey: Yeah, that's a, that's a good question. I would say, first it starts with, uh, the team that you're going to start measuring, or the application, the group of people and the technology that you want to start measuring. First, these people have to want to change, because if we're, if we're going to make a measure on something, presumably we're making that measure so that we understand how we are, so that we can improve. And to improve, we have to change something. So it starts with the people wanting to change. Oh, except I have to be honest, that's not enough. Wanting to change actually isn't enough. We all want to change. We all want to get better. Actually, maybe we all just want to get better, but we don't want to have to change anything. Like I'm very comfortable in the way that I work. So can it, can it just produce better results? The truth is, I think we have to find teams that need to change. There has to be some motivating factor that's really pushing them to change because after we look at the dashboard, after we see some numbers, if we're not truly motivated, if there isn't a need for us to change, we're probably not going to change our behavior. So I think that's the first critical component is this need to improve, this fundamental desire that goes beyond just the desire. It's, it's a motivating factor. You have to do this. You have to get better because the competition is coming after you, because you're feeling burnt out, because for a myriad of reasons. So I think that that's a big first step in it. 

Kovid Batra: A lot of times, what I have seen while talking to a lot of my Typo clients also, uh, is, uh, they feel that there is a stage when this needs to be implemented, right? So people use Git metrics, Jira metrics to make sure things are running fine. And I kind of agree to them, like very small teams can, can rely on that. Like maybe under 10 size teams are good. But, what do you think, uh, what, what's the DevOps maturity? What's the team size that impacts this, where you need to get into a sophisticated framework or a process like DORA to make sure things are, uh, in, in the right visibility? 

Nathen Harvey: Yeah, that's, that's, that's a really good question. And I think unfortunately, of course, the answer is it, it depends, right? It is pretty context-specific. I do think it matters that, uh, it matters the level at which you're measuring these things. You know, the DORA metrics have always been meant, and if you look at our survey, we always sort of prepend our questions with, for the primary application or service that you're working on. So when we think about those DORA metrics, those software delivery metrics in particular, we aren't talking about an organization. What is the, you know, we don't ask, for example, what is the deployment frequency at Typo? But instead, we ask about specific applications within Typo, and we expect that you're going to have variation across the applications within your team. And so, when you have to get into this sort of more formal measurement program, I think that really is context-specific. It really depends on the business and even what are you measuring? In fact, if if your team has, uh, more of a challenge with developing code than they do with shipping code, then maybe the DORA metrics aren't the right metrics to start with. You want to sort of find your constraint within your organization, and DORA is very much focused on software delivery and operational performance. So on the software delivery piece, it's really about are we able to take this code that was written and get it out the door, put it in front of customers. Of course, there's a lot of things on the development side that enable that. There's a lot of things on the operational side that benefit from that. It all kind of comes together, but it is really looking at finding that particular pain point or friction point within your organization. 

And then, I think one other thing that I'll just comment on really quickly here is that as teams start to adopt these frameworks, there's often an overfitting for precision. We need precise data when it comes to this. And honestly, again, if you go back to the methods that DORA uses, each year we run an annual survey. We ask people, what is your average time or your typical time from code committed to code in production? We're not hooking into your Git systems or your software delivery pipelines or your, uh, task backlog management systems. We're not hooking into any of those things. We're asking about your experience. Now, we have to do that given that we're asking the entire world. We can't simply integrate with all of those systems. But this level of precision is very helpful at some point. But it doesn't necessarily need to be where you start. Right? Um, I always find it's best to start with a conversation. Kind of like what we're having today. 

Kovid Batra: But yeah, I think, uh, the toolings that are coming into the, into the picture now are solving that piece also. So I think both the things are getting, uh, balanced there because I feel the survey part is also very critical to really understand what's going on. And on top of that, you have some data coming from the systems without any effort that reduces your pain and trust on what you are looking at. So yeah, that makes sense. 

Nathen Harvey: Yeah, absolutely. And, and, and there is a cautionary tale built in there. I've seen, I've seen too many teams go off and try to integrate all of these systems together to get all of the precise data and beautiful dashboards. Sometimes that effort ends up taking months. Sometimes that effort ends up taking years. But what those teams fail to do over those months or years is actually try to improve anything. All they're trying to improve is the precision of the data that they have. And so, at the end of that process, they have more precise, a more precise understanding of what they knew at the beginning of that process.

And they haven't made any improvements. So that's where a tool like Typo, uh, or others of this nature like really come in because now I don't have to think about as much, all of that integration, I can, I can take something off the shelf, uh, and run it in my systems and immediately start to get value from that. 

Kovid Batra: Totally. I think, uh, when it comes to using the product, uh, Ido has been, uh, one of the people who has connected with me almost thrice in the last few days, giving me some feedback around how to do things. And I would let Ido have some of his, uh, questions here. And, uh, I have, uh, my demo dashboard also ready. So if there is anything that you want to refer back to Ido or Nathen, to like highlight some metrics that they can look at, I, I'll be happy to share my screen also. Uh, over to you, Ido. I'll, I'll put you on the main screen so that the audience sees you well. 

Ido Shveki: Oh, thanks, Kovid. And hi again, Nathen. Uh, first of all, very interesting, uh, and you speaking about it. I also find this topic, uh, close to my heart. So I, uh, I, it's a fascinating to hear you talk about it. I wanted to know, uh, if you have any, you mentioned before that among the different, like you said, it may be inside the Typo as a company, there are like different benchmarks, different, uh, so how can you identify this, uh, benchmark? Maybe my questions are a bit practical, but let me know if that's the case, but yeah, I just want to know how to identify this benchmark because as you mentioned, and also at BeeHero, we have like, uh, uh, different teams, different sizes, different maturity, uh, different, uh, I mean, uh, seniority level. So how can I start with these benchmarks?

Nathen Harvey: Yeah, yeah. That's a, that's a really great question. So, um, one of the things that I like to do when I get together with a new team is we first kind of, or a new organization first, let's, let's pick an application or two. So at, BeeHero, uh, I, I, I know very little about what BeeHero does, you know, tell us a little bit about BeeHero. Give us, give us like a 30-second pitch on BeeHero. What do you do there? 

Ido Shveki: Cool. So we are an Israeli startup where we deal with agriculture. What we do is we place, uh, sensors inside beehives as the, as the name might, uh, you know, give you a hint. Uh, we put sensors inside beehives and this way we can give a lot of, uh, we, we collect metrics and we give great, uh, like, uh, good insights, interesting insights to beekeepers, uh, so that they can know what to do with their bee colony, how to treat it, and how to maintain the bee colony. So, this is, you know, basically, and if, if I'm, uh, to your question, so we have, yeah, uh, different platforms. We have the infra platforms, we have the firmware guys, we have mobile app, et cetera. So. But I assume that like every company has this, different angles of a product. 

Nathen Harvey: Yeah. Yeah. Yeah. Of course. Every company has hundreds, maybe thousands of different products that they're maintaining. Yeah, for sure. Um, well, first, that's super cool. Um, keeping the farmers and the bees happy. Now, so what I like to do with, with a new team or organization that I'm working with is we start with an application or a service. So maybe, maybe we take the mobile application that BeeHero has and what we want to do is bring together, in the perfect world, we bring together into a physical room, everyone that's responsible for prioritizing work for that application, designing that work, writing the software, shipping the software, running the service, answering customer requests, all of that stuff. Uh, perhaps we'd let the bees stay in the hives. We don't bring them into the room with us. Um, software engineers aren't, aren't known for being good with bees, I guess. So, but..

Ido Shveki: They do affect the metrics though. Yeah, I don't want, I don't want that. 

Nathen Harvey: Absolutely. Absolutely. So, so we'll bring these people together. And I like to just start with a conversation, uh, at dora.dev, we have a quick check that allows you to quickly answer those four software delivery performance metrics. You know, the deployment frequency, change lead time, your change failure rate and your failed deployment recovery time. But even before we get to those metrics, I like to start with a simpler question. Okay, so together as a team, a developer has just committed a change to the version control system. As a team, let's go to the board and let's map out every step in the process, every handoff that has to happen between that code commit and that code landing in production, right, so that the users can use it. And the reason we bring together a cross-functional team is because in many organizations, I don't know how big BeeHero is, but in many organizations, there are handoffs that happen from one team to the next, sort of that chain of custody, if you will, to get to production. Unfortunately, every single one of those handoffs is an opportunity for introducing friction, for hiding information, you know. I've, I've worked with teams as an example where the development team is responsible for building a package, testing that package and then they hand it off to the test team. Well, the test team takes that package and they discard it. They go back to the Git repo. They actually clone the Git repo and then they build another package and then start testing that. So now, the developers have built a package that gets discarded. Now the testers build another package that they test against that probably gets discarded and then someone else builds a third package for production. So there's, as you can imagine, there's lots of ways for that handoff and those three different packages to be different from one another. It's mind-boggling. But until we put all those people in the room together, you might not even see that friction and that waste in the process. So I start there to really identify where are those friction points? Where are those pain points? And oftentimes you have immediate sort of low hanging fruit, if you will, immediate improvement opportunities.

And the most exhilarating part of that process as a facilitator is to see those aha moments. "Oh my gosh! I didn't realize that you did that." "Oh, I thought I packaged it this way so that you could do this thing that you're not even doing. You're just rubber stamping and passing it on." Or whatever it is. Right? So you find those things, but once you've done that map, then you go back to those four questions. How's my, what are my, you know, we used a quick check in that process. What does my software delivery performance look like? This gives us a baseline. This is how we're doing today. But in this process, we've already started to identify some of those areas for improvement that we want to set next. Now I do this from one team to the next or encourage the teams to do this on their own. And this way we aren't really comparing, you know, what is your mobile app look like versus the front end website, right? Should they have the same deployment frequency? I don't know. They have different customers. They have different needs. They have different teams that are working on them. So you expect them to be different. And the thing that I don't really care about over time is that everyone gets to the top level or a consistent performance across all of the teams. What I'd much rather see is that everyone is improving over time, right? So in other words, I'd rather reward the most improved team than the team that has the highest performance. Does that make sense? 

Ido Shveki: Yeah, a lot actually. 

Nathen Harvey: All right. 

Ido Shveki: Thanks. 

Nathen Harvey: Awesome. Yeah. 

Ido Shveki: Kovid, do we have another, time for another question? 

Kovid Batra: Yeah, I do. I mean, uh, you can go ahead, please. Uh, we have another three minutes. Yeah. 

Ido Shveki: Oh, cool. I'll make it quick. I'm actually interested in how do you align the business to DORA metrics? Because usually I find myself talking to the management, CEO, CTO, trying to explain to them what's happening under the hood in the developer team, and it's not always that easy. Do you have some tips there?

Nathen Harvey: Yeah, you know, has your CEO ever come to you and said, you know, last year you did 250 deploys, if you do 500 this year, I'm going to double your salary? They probably never said that to you. Did they?

Ido Shveki: No, no. 

Nathen Harvey: No, no. Primarily because your CEO probably doesn't care how many deploys you delivered. Your CEO. 

Ido Shveki: And I think that's, I mean, I wouldn't want them to. 

Nathen Harvey: You don't want them to. You're, you're exactly right. But they do care about other things, right? They care about, I don't, I don't know, I'm going to make up some metrics. They care about how many, uh, like the health of the hives that each farmer has, right? Like, that's what they care about. They care about how many new farmers have signed up or how many new beekeepers have signed up, what is their experience like with BeeHero. And, and so really, as you go to get your executives and your management and, and the business tied into these metrics, it's probably best not to talk about these metrics, but better to talk in terms of the value that they care about, the measures that they care about. So, you know, our onboarding experience has left some room for improvement. If we ship software faster, we can improve that onboarding experience. And really it's a hypothesis. We believe that by improving our software delivery performance, we'll be able to respond faster to the market needs, and we'll be able to therefore improve our onboarding process as an example, right? And so now you can talk to your CEO or other business counterparts about look, as we've improved these engineering capacities and capabilities, we've seen this direct impact on our customers, on the business value that we care about. DORA shows, through our data collection, that software delivery performance is predictive of better organizational performance. 

But it's up to you to prove that, right? It's up to you, essentially, we encourage you to replicate our study. We see this when we look across teams. Do you see this on your team? Do you see that improving? And that's really, I think, how you should talk about it with your business counterparts. And frankly, um, you, you are the business as well. So it also encourages you and the rest of the engineers on your team to remember, we aren't creating this application because we want to use the new, uh, serverless technology, or we want to play with the latest, greatest AI. We're building this application to help with the health of bees, right? And so, keeping that connection back to the business, I think is really important. 

Kovid Batra: Okay. On your behalf, can I ask one question? 

Yeah. So I think, uh, there are certain things that we also struggle with, not just Ido, but, uh, various other clients also: which metrics to pick up. So can we just run through a quick example from your, uh, history of clients, uh, for, let's say, a 100-member dev team? What metrics make sense in what scenario? I'll quickly share my screen. Uh, I have some metrics highlighted for DORA and more than that on Typo.

Nathen Harvey: Oh, great! 

Kovid Batra: You can tell me which metrics one should look at and how one should navigate through it. 

Nathen Harvey: Yeah, for sure. That, that'd be awesome. That'd be awesome. So I think as you're pulling up your screen, I'll just start with, you know, the reason that the DORA software delivery metrics are nice is kind of multifold, right? First, there's only four of them. So you can, you can count them on one hand. That's, that's a good thing. Uh, you don't have so many metrics that you just don't know which lever to pull; there's too many in front of me, right? Second, um, they, they represent both lagging and leading indicators. In other words, they're lagging indicators for what does your engineering process look like? What does engineering excellence or delivery excellence look like within your organization? These DORA metrics can tell you. Those are the lagging indicators. You have to change things over here to make them improve. But they're leading indicators for those business KPIs, right? Organizational performance, well-being for the people on your team. So as we improve these, we expect those things to improve as well. And so, the nice thing about starting with those four metrics is that it gives you a good sense of where you are. Gives you a nice baseline.

And so, I'm just going to make my screen a little bit bigger so I can see your, uh, yeah, that's much better. I can see your dashboard now. All right. So you've got, uh, you've got those, uh, looks like those four, uh, a couple of those delivery metrics you got, uh, oh, actually tell me what, what do you have here, Kovid? 

Kovid Batra: Yeah. So we have these four DORA metrics for us, the cycle time, deployment frequency, change failure rate, and mean time to restore. So we also believe in the same thing, where we start off with these fundamental metrics. And then, um, we have more to deep dive into. Like, uh, you can see things at team level, so there are different teams in one single view where you can see, at a high level, what each team's velocity, quality, and throughput look like. And when you deep dive, you find out those specific metrics that basically contribute to the velocity, quality, and throughput of the teams. And these are derived from DORA and various other metrics that we realized were important and critical for people to actually measure what's going on.

Nathen Harvey: Yeah. Yep, that's great. And so I really like that you can see the trend over time because honestly, the, the single number doesn't really mean anything to you. It's like getting on the scale in the morning. There's a number on the scale. I don't know if that's good or bad. It depends on what it was yesterday and what it will be tomorrow. So seeing that trend is the really important thing here because then you can start to make decisions and commitments as a team on experiments that you want to run, right? And so in this particular case, you see your cycle time going up. So now what I want to do is kind of dig in. Well, what's, what's behind the cycle time, what's causing this? And that's where the things like the, that map and, and you see here, we've got a little map that shows you exactly sort of what happens along that flow. So let's take a look at those. We have coding, pick up, review and merge, right? Okay, yup. And so the, nice thing there is that the pickup seems like it's going pretty well, right? One of the things that we found last year in our survey was that teams with faster code reviews have 50 percent better software delivery performance. And so it looks like this team is doing pretty good job. I imagine that pickup is you're reviewing that code, right? 

Kovid Batra: Yeah. Yeah. Yeah. 

Nathen Harvey: Mm hmm. Yeah. So, so that's good. It's good to see that. But what's the review? Oh, I see. So pickup must be when you first grab the PR and then review maybe incorporates all the sort of back and forth feedback time. 

Kovid Batra: Yes, yes. And finally, when you're merging it to your main branch, so the time frame between that is your review time. 

Nathen Harvey: Ah, gotcha, gotcha, gotcha. Okay, so for me, this would be a good place to dig in. What's, what's happening there? Because if you look between that pickup and review, that's about 8 hours of your 5, 10, 15, uh, 18 hours. So it's a significant portion there is, sort of in that code review cycle. This is something I'd want to look at. 

Kovid Batra: Perfect. 

Nathen Harvey: Yeah. Yeah. And we see this, we see this a lot. Um, one, one organization I worked with, um, the, the challenge that they had was not necessarily in code review, but in approvals, they were in a regulated industry and they sent all changes off to a change approval board that had to approve them, that change approval board only met so frequently, as you can imagine, that really slowed down their cycle time. Uh, it also did not help with their stability, right? Um, changes were just as likely to fail when they went to production as not, uh, regardless of whether or not they went through that change approval board. So we really looked at that change approval process and worked to help them automate that. The net result is I think they're deploying about 600 times more frequently today than they were before we started the process, which is pretty incredible. 

Kovid Batra: Cool. That's really helpful, Nathen. And thanks for those examples that fit into the context of a lot of our audience here. In fact, this question, I just realized was asked by Benny Doan also. So I think he would be happy to hear you. And, uh, I think now it's time. I feel the audience can ask their questions. So, um, we'll start with a 15 minute Q&A round where all the audience, you are free to comment in the comment sections with all the questions that you have. And, uh, Nathen, Ido, uh, would be happy to listen out to you on those particular questions. 

Ido Shveki: Kovid, should we just start answering these questions? 

Kovid Batra: Yeah. 

Nathen Harvey: Yeah, I'm having trouble switching to the comments tab. So maybe you could read some of the questions. I can't see them. 

Ido Shveki: Um, I can see a question that was also asked by Benny, with whom I worked in the past. Oh, hi, Benny. Nice that you're here. Um, it was by Nitish and Benny as well, asking how we can make sure the developers won't feel micromanaged when we are using, um, the DORA metrics with them. I can begin, and I'll let you, Nathen, elaborate on it in a second. I can begin with my experience: first of all, it is a slippery slope. I mean, I do find it not trivial, because if you would just show them that I'm looking at the times from this PR to the improvement and lines of code, et cetera, like Nathen said in the beginning, yeah, they would feel micromanaged. Um, first of all, I usually talk about it on a team level or an organization level. And when I do want to raise these questions, or maybe address them as growth opportunities for a certain developer, personally, I don't look at it as criticism. It's the beginning of a conversation. It's not like I made up my mind before, and because this metric looks like this, then I'm not pleased with how you perform. It's just like, all right, I've seen that there is a decrease here. Uh, is there a reason? Let's talk about it, let's discuss it. I'm easily convinced if there are, like, ways to be convinced. But yeah, I do look at it as a growth opportunity for the developer to look at. Uh, yeah, that's at least my take on this.

Nathen Harvey: Yeah, I definitely agree with that, you know, because I think that this question really gets to a question of trust. Um, and how do you build trust with your teammates? And I think the way that you build trust is through your actions. Right? And so if you start measuring and then start like taking punitive action against individual developers or even teams, that's going to, your actions are going to tell people, you should be afraid of these metrics. You should do whatever you can to not be measured by these metrics, right? But instead, if and DORA talks a lot about culture, if you lean in and use this as an opportunity to improve, an opportunity to learn more about how the team is going. And, and I like your approach there where you're taking sort of an inquisitive approach. Hey, as an example, you know, Hey, I see that the PRs, uh, that you started to submit fewer PRs than you have in the past, what's going on? It may be that that person has, for the time being, prioritized code reviews. So they're doing less PRs. It may be that they're working on some new architectural thing. They're doing less PRs. It may be that, uh, they've had a family emergency and they've been out of the office more. That's going to lower their PRs. That's the, the, the fact that they have fewer PRs is not enough information for you to go on. It is a good place to start a conversation. 

And then, I think the other thing that really helps is that you use these metrics at the team level. So if you as a team start reviewing them, maybe during your regular retrospectives or planning sessions, and then, importantly, it comes back to what are you going to change? Is the team going to try something different based on what they've learned from these metrics? Oh, we see that our lead time is going up, maybe our continuous integration practices, we need to put some more effort into those or some more automated testing. So over the next sprint or time block, we're going to add, you know, 20 percent more capacity for automated testing. And let's see how that impacts things. So seeing that these metrics are being used to inform improvements, that's how you prevent that slippery slope, I think. 

Kovid Batra: Totally. Okay. I think we can move on to this next question. Uh, this is Nitish. Uh, how can DORA and a data-driven approach be implemented in a way that devs don't feel micromanaged? Yeah, I think.

Nathen Harvey: Yeah, I think, I think we've covered a little bit of this in the previous question here, Nitish. And I think that it really comes back to remembering that these are not measures that should be at the individual level. We're not asking, Kovid, what's your deployment frequency? You know, what's yours? Oh, one of you is better than the other. Something's going to change. No, no, no. That's not how we, that's not how we use these measures. They're really meant for that application or service level. When it comes to developing, delivering, operating software or any technology, that's a team sport. It's not an individual sport. 

Kovid Batra: All right. Then we have from Abderrahmane, uh, how are the market segments details used for benchmark collected? 

Nathen Harvey: Yeah, this is a really good question. Thanks for that. So, uh, as you know, we run a survey each year and we ask what industry you are in. Um, and what we found, surprisingly, or maybe not surprisingly, is that over the years, industry is not really a determinant of how your software delivery performance is going to be. In other words, what we see across every industry, whether it's technology or retail or government or finance, we see teams that have really good software delivery performance. We also see in all of those industries teams that have rather poor software delivery performance, but lots of opportunities to improve, I should say. Yeah.

So we see that, uh, the market segments are there and, and honestly, we, we publish that data so that people can see that, look, this can happen in our industry too. Um, I always worry that, you know, someone might use their industry as a reason not to question the status quo. Oh, we're in a regulated industry, so we can't do any better. It doesn't matter what industry you're in. You can always do better. You can always do worse as well. So just be careful, like focus on that improvement. 

Kovid Batra: Cool. Uh, next question is from Thomas. Uh, how do you plan the ritual with engineers and stakeholders when you're looking at this metric? Yeah, this is a very, uh, important question. I think Nathen, would you like to take this up? 

Nathen Harvey: Yeah, I'll take this. I'd love to hear how Ido is doing this as well, sort of incorporating the metrics into their daily work. But I think it's, it's, it's just that as you go into your planning or retrospective cycle, maybe as a team, you think about the last period and you pull up maybe the DORA quick check, or if you're using Typo or something like it, you pull up the dashboard and say, "Look, over the last two weeks over the last month, here's where we're trending. What are we going to do about that? Is there something that we'd like to change about that? What can we learn about that?" Start just with those questions. Start thinking about that. So I think really just using it as a, as a discussion point in those retrospectives, maybe an agenda item in those retrospectives is a really powerful thing that you can do.

Ido, what's your experience? 

Ido Shveki: Yeah. So, um, I totally agree. And I think for the most part, this is what we're also doing at BeeHero, in the retrospectives, maybe not on a bi-weekly basis, like every two weeks, because sometimes the team finds it too often and there isn't much new to tell them, let's say. Um, but I also find it in the rituals we do for some incident that happened, when we're discussing this issue. I really put emphasis, and I think this is the cultural part that you mentioned before, uh, in these incident rituals, I really try to point out and look at, uh, how long did it take us to mitigate it? How long until the customer didn't see the issue anymore? And from these, uh, points, I hope the team understands the culture that I'm pushing towards. And from that point, they will also want to implement DORA metrics without even knowing the name DORA. We don't really care about the name. I mean, it doesn't really matter if they know what to call it. Just like you mentioned before, I don't want you to know about DORA. Just get better, or just be better at this. So yeah, that's basically it.

Nathen Harvey: Thanks. Awesome. 

Kovid Batra: All right. I think there is one thing that I wanted to ask from this. It's fine with the engineers, probably, and you can just pull it in every time. But when it comes to other stakeholders in the business, what I have seen and experienced with my clients is that they find it hard to explain these DORA metrics in terms of the business language. I think Nathen, you touched upon this in the beginning. I would like to just highlight this again for the audience's sake.

Nathen Harvey: Yeah, I think that, I think that's really important. And I think that when it comes to dashboards, uh, it, it would be really good to put your delivery performance metrics right next to your organizational performance metrics, right? Are we seeing better customer, like, are we seeing the same trend? As software delivery improves, so do customer signups, so do, uh, revenue that we get per customer or something along those lines. That's, you know, if you think about it, we're really just trying to validate an experiment. We think that by shipping this feature, we're going to improve revenue. Let's test that. Let's look at that side-by-side. 

Kovid Batra: Totally. All right. Uh, we have a lot of questions coming in. Uh, so sorry, audience, I'm not able to pick all of those because we are running short on time. We'll pick one last question. Uh, okay. That's from Julia. Uh, are there any variations of DORA metrics you have found in customer deployed or installed software? Example, deployment frequency may not be directly relevant. A very relevant question. So yeah. 

Nathen Harvey: Yeah, absolutely. Uh, I think, I think the beauty of the four key metrics is that they are very simple, except they're not, they are very simple on the surface, right? If, and if you take just, let's just take one of them, change lead time. In DORA's language, that starts at like change is committed and it ends when that changes in production. Okay. What does committed mean? Is it committed to a branch? Is it committed to the main line? Is that branch, has that branch been merged into the main line? Who knows? Um, I have a perspective, but it doesn't really matter what my perspective is. When it comes to production. What does it mean to be in production? If we're doing, um, progressive deploys, does it mean the first user in production has it or only when 100 percent of users have it? Is that when it's in production? Or somewhere in between? Or we're running mobile applications where we ship it off to the app store and we have to wait for it to get approved, or installed software where we package it up and we shrink wrap it into a box and we ship out a CD. Is that deployed? I mean, I don't, I don't know that anyone does that any, well, I'm sure it happens. I know in the Navy they put software on helicopters and they fly it out to ships. So that's, you know, all of these things happen. Here's the thing. For your application, what you need to do is think about those four metrics and write down for, for this application, commit, change, change lead time starts here at this event ends here at that event. We're going to write that down probably in maybe something like an architectural decision record and ADR, put it into the code base. And as you write it down, make sure that it's clear, make sure that everyone agrees to it, and probably just as importantly, make sure that when you write it down, you also write down the date at which we will revisit this decision, right? Because it doesn't have to be set in stone. Maybe this is how we're going to measure things starting today, and we'll come back to this in six months. And some of the things that drive that might be the mechanics of how you deliver software. Some of the things that drive that might be the data that you have access to, right? And over time, you may have access to more precise data, additional data that you can then start to use in that. So the important thing is that you take these metrics and you contextualize them for your team. You write down what those metrics are, what their definitions are for your team and you revisit those decisions over time. 

Kovid Batra: Perfect. Perfect. All right. I think, uh, it's already time. Nathen, would you like to take one more question? 

Nathen Harvey: Uh, I'm happy to take one more question. Yes. 

Kovid Batra: All right. All right. So this is going to be the last one. Sorry if someone's question is not being asked here. But let's, let's take this up. Uh, this is Jimmy. Uh, do you ever try to map a change in behavior/automation/process to a change in, in the macro-DORA performance? Or should we have faith that our good practices is what is driving positive DORA trends? 

Nathen Harvey: Um, I think that, uh, having faith is a good thing to do. Uh, but validating your experiments is an even better thing to do. So, uh, as an example, let's see, mapping a change in behavior, automation, or process to a change in the macro performance. Okay. Uh, I'll pick a change that you might have or an automation that you might have. Let's say that, uh, today, your deployment process is a manual process, uh, and there's lots of steps that are manual, uh, and you want to automate that process. Uh, so, we can figure out what our software delivery performance looks like today; you can use a Typo dashboard, you could use the DORA quick check. Write that number down. Now make some investments in deployment automation; instead of having 50 manual steps, you now have 10 manual steps that you take and 40 that have been automated. Now let's go back and remeasure those DORA performance metrics. Did they improve? One would think and one would have faith that they will have improved. You may find for some reason that they didn't. But validating an experiment and invalidating an experiment are kind of the same thing. In either case, it's really about the approach that you take next. Are you using this as an opportunity to learn and decide how we are going to respond to the new information that we have? It really is about a process of continuous learning, and hopefully continuous improvement, but with every improvement, there may be setbacks along the way.

Kovid Batra: Great. All right. On that note, I think that's our time. We tried to answer all the questions, but of course we couldn't. So we'll have more sessions like this, uh, to help all the audience over here. So thanks a lot. Uh, thank you for being such a great audience. Uh, we hope this session helped you build some great confidence around how to implement DORA metrics in your teams.

And in the end, a heartfelt thanks to my cohosts, Nathen and Ido, and to my Typo team who made this event possible. Thanks a lot, guys. Thank you. 

Nathen Harvey: Thank you so much. Bye bye. 

Ido Shveki: Thanks for having us. Bye. 

Top Software Development Metrics (2024)

What are Software Development Metrics?

Software metrics track how well software projects and teams are performing. They help evaluate the performance, quality, and efficiency of the software development process and the productivity of development teams, guiding teams toward data-driven decisions and process improvements.

Importance of Software Development Metrics:

  • Software engineering metrics evaluate the productivity and efficiency of development teams.
  • They confirm that projects are progressing as planned and surface potential bottlenecks as early as possible.
  • Software quality metrics help to identify areas for improving software quality and stability.
  • These metrics monitor progress, manage timelines, and enable software developers to make informed decisions about project scope and deadlines.
  • Regular reviewing and analysis of metrics allow team members to identify weaknesses and optimize processes for better performance and efficiency.
  • Metrics assist in understanding resource utilization, which leads to better allocation and management of development resources.
  • Software engineering metrics related to user feedback and satisfaction ensure that the software meets user needs and expectations, and drive enhancements based on actual user experience.

Process Metrics

Process Metrics are quantitative measurements that evaluate the efficiency and effectiveness of processes within an organization. They assess how well processes are performing and identify areas for improvement. A few key metrics are:

Development Velocity

Development Velocity is the amount of work completed by a software development team during a specific iteration or sprint. It is typically measured in terms of story points, user stories, or other units of work. It helps in sprint planning and allows teams to track their performance over time.

Lead Time for Changes

Lead Time for Changes measures the time taken for a code change to move from commit to deployment. It tracks the speed and efficiency of software delivery and provides valuable insight into the effectiveness of development processes, deployment pipelines, and release strategies.
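
To make this concrete, here is a minimal Python sketch that computes lead time for changes from commit-to-deployment timestamp pairs; the timestamps are made-up examples and the use of the median (rather than the mean) is an assumption:

  from datetime import datetime
  from statistics import median

  # Hypothetical (commit time, deployment time) pairs for recent changes.
  changes = [
      (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 17, 30)),
      (datetime(2024, 5, 2, 11, 15), datetime(2024, 5, 3, 10, 0)),
      (datetime(2024, 5, 3, 14, 0), datetime(2024, 5, 6, 9, 45)),
  ]

  # Lead time for each change, expressed in hours.
  lead_times_h = [(deployed - committed).total_seconds() / 3600
                  for committed, deployed in changes]

  print(f"Median lead time for changes: {median(lead_times_h):.1f} hours")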

Cycle Time

This metric measures the total elapsed time taken to complete a specific task or work item from the beginning to the end of the process. It helps assess how quickly the team can turn around tasks and features, identify trends and failures, and forecast how long future tasks will take.

Change Failure Rate

Change Failure Rate measures the percentage of newly deployed changes that cause failures or glitches in production. It reflects reliability and efficiency and relates to team capacity, code complexity, and process efficiency, all of which impact speed and quality.
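
As a rough illustration, change failure rate is the share of deployments that needed remediation (a rollback, hotfix, or patch); a minimal sketch with counts assumed for the example:

  # Hypothetical counts for a given period.
  deployments_total = 40
  deployments_failed = 6  # deployments that caused an incident or rollback

  change_failure_rate = (deployments_failed / deployments_total) * 100 if deployments_total else 0.0
  print(f"Change failure rate: {change_failure_rate:.1f}%")  # 15.0%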

Performance Metrics

Software performance metrics quantitatively measure how well an individual, team, or organization performs in various aspects of their operations. They offer insights into how well goals and objectives are being met and highlight potential bottlenecks.

Deployment Frequency

Deployment Frequency tracks how often the code is deployed to production. It measures the rate of change in software development and highlights potential issues. A key indicator of agility and efficiency, regular deployments indicate a streamlined pipeline, which further allows teams to deliver features and updates faster.
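
A minimal sketch of how deployment frequency might be tallied from a list of deployment dates, grouped by ISO week; the dates are illustrative assumptions and would normally come from your CI/CD system:

  from collections import Counter
  from datetime import date

  # Hypothetical production deployment dates.
  deployments = [date(2024, 5, 1), date(2024, 5, 3), date(2024, 5, 3),
                 date(2024, 5, 9), date(2024, 5, 15)]

  # Group by ISO calendar year and week to see deployments per week.
  per_week = Counter(d.isocalendar()[:2] for d in deployments)
  for (year, week), count in sorted(per_week.items()):
      print(f"{year} week {week}: {count} deployment(s)")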

Mean Time to Restore

Mean Time to Restore measures the average time taken by a system or application to recover from any failure or incident. It highlights the efficiency and effectiveness of an organization’s incident response and resolution procedures.
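
A minimal sketch of the calculation, assuming you have a detection timestamp and a restoration timestamp for each incident (the incident data below is made up):

  from datetime import datetime

  # Hypothetical incidents: (detected at, service restored at).
  incidents = [
      (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 2, 10, 40)),
      (datetime(2024, 5, 10, 22, 15), datetime(2024, 5, 11, 0, 5)),
  ]

  recovery_minutes = [(restored - detected).total_seconds() / 60
                      for detected, restored in incidents]
  mttr = sum(recovery_minutes) / len(recovery_minutes)
  print(f"Mean time to restore: {mttr:.0f} minutes")  # 75 minutes for this sample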

Code Quality Metrics

Code Quality Metrics measure various aspects of the code quality within a software development project such as readability, maintainability, performance, and adherence to best practices. Some of the common metrics are: 

Code Coverage

Code coverage measures the percentage of a codebase that is tested by automated tests. It helps ensure that the tests cover a significant portion of the code, and identifies untested parts and potential bugs.

Code Churn

Code churn measures the frequency of changes made to a specific piece of code, such as a file, class, or function during development. High code churn suggests frequent modifications and potential instability, while low code churn usually reflects a more stable codebase but could also signal slower development progress.
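
One way to approximate churn is to sum lines added and deleted per file from Git history. The sketch below shells out to git log --numstat; the 14-day window and the top-10 cut-off are arbitrary choices, not a standard definition:

  import subprocess
  from collections import defaultdict

  # `git log --numstat` prints "added<TAB>deleted<TAB>path" rows per commit.
  out = subprocess.run(
      ["git", "log", "--numstat", "--since=14 days ago", "--pretty=format:"],
      capture_output=True, text=True, check=True,
  ).stdout

  churn = defaultdict(int)
  for line in out.splitlines():
      parts = line.split("\t")
      if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
          added, deleted, path = parts
          churn[path] += int(added) + int(deleted)

  # Files with the highest churn are candidates for instability or rework.
  for path, lines_changed in sorted(churn.items(), key=lambda kv: -kv[1])[:10]:
      print(f"{lines_changed:6d}  {path}")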

Focus Metrics

Focus Metrics are KPIs that organizations prioritize to target specific areas of their operations or processes for improvement. They address particular challenges or goals within software development projects or the organization and offer detailed insights into targeted areas. A few such metrics include:

Developer Workload 

Developer Workload represents the count of Issue tickets or Story points completed by each developer against the total Issue tickets/Story points assigned to them in the current sprint. It helps to understand how much work developers are handling and is crucial for balancing workloads, improving productivity, and preventing burnout.
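
A minimal sketch of the ratio, using a hand-written list of tickets in place of a real issue-tracker export; the assignees and story points are made up:

  from collections import defaultdict

  tickets = [
      {"assignee": "asha",  "points": 5, "done": True},
      {"assignee": "asha",  "points": 3, "done": False},
      {"assignee": "diego", "points": 8, "done": True},
      {"assignee": "diego", "points": 2, "done": True},
  ]

  totals = defaultdict(lambda: {"assigned": 0, "done": 0})
  for t in tickets:
      totals[t["assignee"]]["assigned"] += t["points"]
      if t["done"]:
          totals[t["assignee"]]["done"] += t["points"]

  for dev, c in totals.items():
      share = c["done"] / c["assigned"]
      print(f"{dev}: {c['done']}/{c['assigned']} story points completed ({share:.0%})")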

Work in Progress (WIP) 

Work in Progress represents the percentage breakdown of Issue tickets or Story points in the selected sprint according to their current workflow status. It highlights how much work the team is handling at a given time, which helps maintain a smooth and productive workflow.

Customer Satisfaction 

Customer Satisfaction tracks how happy or content customers are with a product, service, or experience. It usually involves gathering user feedback through various methods and analyzing that data to understand satisfaction levels.

Technical Debt

Technical Debt metrics measure and manage the cost and impact of technical debt in the software development lifecycle. They help ensure that the most critical issues are addressed first, provide insight into the cost of maintaining and fixing technical debt, and identify areas of the codebase that require improvement.

Test Metrics

Test Coverage

Test coverage measures the percentage of the codebase or features covered by tests. It ensures that tests are comprehensive and can identify potential issues within the codebase, which improves quality and results in fewer bugs.

Defect Density

This metric measures the number of defects found per unit of code or functionality (e.g., defects per thousand lines of code). It helps to assess the code quality and the effectiveness of the testing process.
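
The formula is straightforward; a small sketch with assumed counts (in practice the defect count would come from your tracker and the line count from a tool such as cloc):

  defects_found = 27
  lines_of_code = 54_000

  defect_density = defects_found / (lines_of_code / 1000)
  print(f"Defect density: {defect_density:.2f} defects per KLOC")  # 0.50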

Test Automation Rate

This metric tracks the proportion of test cases that are automated compared to those that are manual. It offers insight into the extent to which automation is integrated into the testing process and helps assess the efficiency and effectiveness of testing practices.

Productivity Metrics

This software metric helps to measure how efficiently dev teams or individuals are working. Productivity metrics provide insights into various aspects of productivity. Some of the metrics are:

Code Review Time

This metric measures how long it takes for code reviews to be completed from the moment a PR or code change is submitted until it is approved and merged. Regular and timely reviews foster better collaboration between team members, contribute to higher code quality by catching issues early, and ensure adherence to coding standards.
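
As one possible illustration, the sketch below pulls recently closed pull requests from the GitHub REST API (assuming the requests package is available) and averages the time from creation to merge; the repository name and token are placeholders, and treating created-to-merged time as review time is a simplifying assumption since it also includes pickup time:

  from datetime import datetime
  import requests

  repo = "your-org/your-repo"  # placeholder
  headers = {"Authorization": "Bearer <YOUR_GITHUB_TOKEN>"}  # placeholder token
  prs = requests.get(
      f"https://api.github.com/repos/{repo}/pulls",
      params={"state": "closed", "per_page": 50},
      headers=headers,
      timeout=30,
  ).json()

  fmt = "%Y-%m-%dT%H:%M:%SZ"
  durations_h = [
      (datetime.strptime(pr["merged_at"], fmt)
       - datetime.strptime(pr["created_at"], fmt)).total_seconds() / 3600
      for pr in prs
      if pr.get("merged_at")  # skip PRs closed without merging
  ]

  if durations_h:
      print(f"Average created-to-merged time: {sum(durations_h) / len(durations_h):.1f} hours")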

Sprint Burndown

Sprint Burndown tracks the amount of work remaining in a sprint versus time for scrum teams. It helps development teams visualize progress and productivity throughout a sprint, identify potential issues early, and stay focused.

Operational Metrics

Operational Metrics are key performance indicators that provide insights into operational performance aspects, such as productivity, efficiency, and quality. They focus on the routine activities and processes that drive business operations and help to monitor, manage, and optimize operational performance. These metrics are: 

Incident Frequency

Incident Frequency tracks how often incidents or outages occur in a system or service. It helps to understand and mitigate disruptions in system operations. High Incident Frequency indicates frequent disruptions, while low incident frequency suggests a stable system but requires verification to ensure incidents aren’t underreported. 

Error Rate

Error Rate measures the frequency of errors occurring in the system, typically expressed as errors per transaction, request, or unit of time. It helps gauge system reliability and quality and highlights issues in performance or code that need addressing to improve overall stability.

Mean Time Between Failures (MTBF)

Mean Time Between Failures tracks the average time between system failures. It signifies how often failures are expected to occur in a given period. A high MTBF indicates that the software is more reliable and needs less frequent maintenance.
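
A minimal sketch of the calculation, with the observation window, total downtime, and failure count assumed for the example:

  observation_hours = 24 * 30  # one month of operation
  downtime_hours = 6           # total time spent down across all failures
  failure_count = 4

  mtbf_hours = (observation_hours - downtime_hours) / failure_count
  print(f"MTBF: {mtbf_hours:.1f} hours of uptime between failures")  # 178.5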

Security Metrics 

Security Metrics evaluate the effectiveness of an organization's security posture and its ability to protect information and systems from threats. They provide insight into how well security measures function, help identify vulnerabilities, and gauge the effectiveness of security controls. Key metrics are:

Mean Time to Detect (MTTD) 

Mean Time to Detect tracks how long a team takes to detect threats. The longer a threat goes undetected, the higher the chance of an escalated problem. MTTD helps teams minimize an issue's impact in its early stages and refine monitoring and alerting processes.

Number of Vulnerabilities 

The Number of Vulnerabilities measures the total vulnerabilities identified in the codebase. It assesses the system's security posture and remediation efforts, and provides insight into the impact of security practices and tools.

Mean Time to Patch

Mean Time to Patch reflects the time taken to fix security vulnerabilities, software bugs, or other security issues. It assesses how quickly an organization can respond to and manage vulnerabilities in its software delivery processes.

Conclusion

Software development metrics play a vital role in aligning software development projects with business goals. These metrics help guide software engineers in making data-driven decisions and process improvements and ensure that projects progress smoothly, boost team performance, meet user needs, and drive overall success. Regularly analyzing these metrics optimizes development processes, manages technical debt, and ultimately delivers high-quality software to the end-users.

What are the Signs of Declining DORA Metrics?

Software development is an ever-evolving field that thrives on teamwork, collaboration, and productivity. Many organizations have started shifting towards DORA metrics to measure their development processes, as these metrics have become the gold standard for software delivery performance.

But here's the thing: focusing solely on DORA Metrics just isn't enough! Teams need to dig deep and uncover the root causes of any pesky issues affecting their metrics.

Enter the notorious world of underlying indicators! These troublesome signs point to deeper problems lurking in the development process that can drag down DORA metrics. Identifying and tackling these underlying issues helps teams improve their development processes and, in turn, boost their DORA metrics.

In this blog post, we’ll dive into the uneasy relationship between these indicators and DORA Metrics, and how addressing them can help teams elevate their software delivery performance.

What are DORA Metrics?

Developed by the DevOps Research and Assessment team, DORA Metrics are key performance indicators that measure the effectiveness and efficiency of software development and delivery processes. With this data-driven approach, software teams can evaluate the impact of operational practices on software delivery performance.

Four Key Metrics

  • Deployment Frequency measures how often a team deploys code to production.
  • Lead Time for Changes measures the time taken from code commit to deployment in production.
  • Change Failure Rate measures the percentage of deployed changes that cause a failure in production.
  • Mean Time to Recover measures the time to recover a system or service after an incident or failure in production.

In 2021, the DORA Team added Reliability as a fifth metric. It is based upon how well the user’s expectations are met, such as availability and performance, and measures modern operational practices.

Signs leading to Poor DORA Metrics

Deployment Frequency

Deployment Frequency measures how often a team deploys code to production. Symptoms affecting this metric include:

  • High Rework Rate -  Frequent modifications to deployed code can delay future deployments as teams focus on fixing issues.
  • Oversized Pull Requests -  Large pull requests can complicate the review process, causing delays in deployment.
  • Manual Deployment Processes -  Reliance on manual steps can introduce errors and slow down the release cycle.
  • Poor Test Coverage -  Insufficient automated testing can lead to hesitancy in deploying changes, impacting frequency.
  • Low Team Morale -  Frustration from continuous issues can reduce motivation to deploy frequently.
  • Lack of Clear Objectives -  Unclear goals lead to misalignment and wasted effort, which hinders deployment frequency.
  • Inefficient Branching Strategy -  A poorly designed branching strategy results in merge conflicts, integration issues, and delays in merging changes into the main branch, which further impacts deployment frequency.
  • Inadequate Monitoring and Observability -  Lack of effective monitoring and observability tools can make it difficult to identify and troubleshoot issues in production. 

Lead Time for Changes 

Lead Time for Changes measures the time taken from code commit to deployment. Symptoms impacting this metric include:

  • High Technical Debt - Accumulated technical debt can complicate code changes, extending lead times.
  • Inconsistent Code Review Practices -  Variability in review quality can lead to delays in approval and testing.
  • High Cognitive Load -  Overloaded team members may struggle to focus, leading to slower progress on changes.
  • Frequent Context Switching - Team members shifting focus between tasks can increase lead time due to lost productivity.
  • Poor Communication -  Lack of collaboration can result in misunderstandings and delays in the development process.
  • Unclear Requirements -  Ambiguity in project requirements can lead to rework and extended lead times.
  • Inefficient Issue Tracking -  Poorly managed issue tracking systems can lead to lost or forgotten tasks, duplicated efforts, and delays in addressing issues, ultimately extending lead times.
  • Lack of Automated Testing -  Insufficient automated testing can lead to manual testing bottlenecks, delaying the integration and deployment of changes.

Change Failure Rate

Change Failure Rate indicates the percentage of changes that result in failures. Symptoms affecting this metric include:

  • Poor Test Coverage -  Insufficient testing increases the likelihood of bugs in production.
  • High Pull Request Revert Rate -  Frequent rollbacks suggest instability in the codebase, indicating a high change failure rate.
  • Lightning Pull Requests -  Rapid submissions without adequate review can introduce errors and increase failure rates.
  • Inadequate Incident Response Procedures -  Poorly defined processes can lead to higher failure rates during deployments.
  • Knowledge Silos -  Lack of shared knowledge within the team can lead to mistakes and increased failure rates.
  • High Code Quality Bugs - Frequent bugs in the code can indicate underlying quality issues, raising the change failure rate.
  • Lack of Feature Flags -  The absence of feature flags can make it difficult to roll back changes or experiment with new features, increasing the risk of failures in production.
  • Insufficient Monitoring and Alerting - Inadequate monitoring and alerting systems can make it challenging to detect and respond to issues in production, leading to prolonged failures and increased change failure rates.

Mean Time to Restore Service

Mean Time to Restore Service measures how long it takes to recover from a failure. Symptoms impacting this metric include:

  • High Technical Debt -  Complexity in the codebase can slow down recovery efforts, extending MTTR.
  • Recurring High Cognitive Load -  Overburdened team members may take longer to diagnose and fix issues.
  • Poor Documentation -  Lack of clear documentation can hinder recovery efforts during incidents.
  • Inconsistent Incident Management -  Variability in handling incidents can lead to longer recovery times.
  • High Rate of Production Incidents -  Frequent issues can overwhelm the team, extending recovery times.
  • Lack of Post-Mortem Analysis -  Not analyzing incidents can prevent learning from failures, which can result in repeated issues and longer recovery times.
  • Insufficient Automation - Lack of automation in incident response and remediation processes causes manual, time-consuming troubleshooting, extending recovery times.
  • Inadequate Monitoring and Observability -  Insufficient monitoring and observability tools can make it difficult to quickly identify and diagnose issues in production which further delay the restoration of service.
  • Siloed Incident Response -  Lack of cross-functional collaboration and communication during incidents leads to delays in restoring service, as team members may not have a complete understanding of the issue or the necessary context to resolve it swiftly.

Improve your DORA Metrics using Typo

Software analytics tools are an effective way to measure DORA DevOps metrics. These tools can automate data collection from various sources and provide valuable insights. They offer centralized dashboards for easy visualization and analysis to identify bottlenecks and inefficiencies in the software delivery process, and they facilitate benchmarking against industry standards and previous performance to set realistic improvement goals. They also promote collaboration between development and operations by providing a common framework for discussing performance, enhancing the ability to make data-driven decisions, drive continuous improvement, and improve customer satisfaction.

Typo is a powerful software engineering platform that enhances SDLC visibility, provides developer insights, and automates workflows to help you build better software faster. It integrates seamlessly with tools like Git, issue trackers, and CI/CD systems. It offers a single dashboard with key DORA and other engineering metrics, providing comprehensive insights into your deployment process. Additionally, Typo includes engineering benchmarks for comparing your team's performance across industries.

Conclusion

DORA metrics are essential for evaluating software delivery performance, but they reveal only part of the picture. Addressing the underlying issues affecting these metrics, such as low deployment frequency or lengthy change lead time, can lead to significant improvements in software quality and team efficiency.

Use tools like Typo to gain deeper insights and benchmarks, enabling more effective performance enhancements.


Why JIRA Dashboard is Insufficient?- Time for JIRA-Git Data Integration

Introduction

In today's fast-paced and rapidly evolving software development landscape, effective project management is crucial for engineering teams striving to meet deadlines, deliver quality products, and maintain customer satisfaction. Project management not only ensures that tasks are completed on time but also optimizes resource allocation enhances team collaboration, and improves communication across all stakeholders. A key tool that has gained prominence in this domain is JIRA, which is widely recognized for its robust features tailored for agile project management.

However, while JIRA offers numerous advantages, such as customizable workflows, detailed reporting, and integration capabilities with other tools, it also comes with limitations that can hinder its effectiveness. For instance, teams relying solely on JIRA dashboard gadget may find themselves missing critical contextual data from the development process. They may obtain a snapshot of project statuses but fail to appreciate the underlying issues impacting progress. Understanding both the strengths and weaknesses of JIRA dashboard gadget is vital for engineering managers to make informed decisions about their project management strategies.

The Limitations of JIRA Dashboard Gadgets

Lack of Contextual Data

JIRA dashboard gadgets primarily focus on issue tracking and project management, often missing critical contextual data from the development process. While JIRA can show the status of tasks and issues, it does not provide insights into the actual code changes, commits, or branch activities that contribute to those tasks. This lack of context can lead to misunderstandings about project progress and team performance. For example, a task may be marked as "in progress," but without visibility into the associated Git commits, managers may not know if the team is encountering blockers or if significant progress has been made. This disconnect can result in misaligned expectations and hinder effective decision-making.

Static Information

JIRA dashboards having road map gadget or sprint burndown gadget can sometimes present a static view of project progress, which may not reflect real-time changes in the development process. For instance, while a JIRA road map gadget or sprint burndown gadget may indicate that a task is "done," it does not account for any recent changes or updates made in the codebase. This static nature can hinder proactive decision-making, as managers may not have access to the most current information about the project's health. Additionally, relying on historical data can create a lag in response to emerging issues in issue statistics gadget. In a rapidly changing development environment, the ability to react quickly to new information is crucial for maintaining project momentum hence we need to move beyond default chart gadget like road map gadget or burndown chart gadget.

Limited Collaboration Insights

Collaboration is essential in software development, yet JIRA dashboards often do not capture the collaborative efforts of the team. Metrics such as code reviews, pull requests, and team discussions are crucial for understanding how well the team is working together. Without this information, managers may overlook opportunities for improvement in team dynamics and communication. For example, if a team is actively engaged in code reviews but this activity is not reflected in JIRA gadgets or sprint burndown gadget, managers may mistakenly assume that collaboration is lacking. This oversight can lead to missed opportunities to foster a more cohesive team environment and improve overall productivity.

Overemphasis on Individual Metrics

JIRA dashboards, or copies of them, can sometimes encourage a focus on individual performance metrics rather than team outcomes. This can foster an environment of unhealthy competition, where developers prioritize personal achievements over collaborative success. Such an approach can undermine team cohesion and lead to burnout. When individual metrics are emphasized, developers may feel pressured to complete tasks quickly, potentially sacrificing code quality and collaboration. This focus on personal performance can create a culture where teamwork and knowledge sharing are undervalued, ultimately hindering project success.

Inflexibility in Reporting

JIRA dashboard layouts often rely on predefined metrics and reports, which may not align with the unique needs of every project or team. This inflexibility can result in a lack of relevant insights that are critical for effective project management. For example, a team working on a highly innovative project may require different metrics than a team maintaining legacy software. The inability to customize reports can lead to frustration and a sense of disconnect from the data being presented.

The Power of Integrating Git Data with JIRA

Integrating Git data with JIRA provides a more holistic view of project performance and developer productivity. Here’s how this integration can enhance insights:

Real-Time Visibility into Development Activity

By connecting Git repositories with JIRA, engineering managers can gain real-time visibility into commits, branches, and pull requests associated with JIRA issues & issue statistics. This integration allows teams to see the actual development work being done, providing context to the status of tasks on the JIRA dashboard gadget. For instance, if a developer submits a pull request that relates to a specific JIRA ticket, the project manager instantly knows that work is ongoing, fostering transparency. Additionally, automated notifications for changes in the codebase linked to JIRA issues keep everyone updated without having to dig through multiple tools. This integrated approach ensures that management has a clear understanding of actual progress rather than relying on static task statuses.

Enhanced Collaboration and Communication

Integrating Git data with JIRA facilitates better collaboration among team members. Developers can reference JIRA issues in their commit messages, making it easier for the team to track changes related to specific tasks. This transparency fosters a culture of collaboration, as everyone can see how their work contributes to the overall project goals. Moreover, by having a clear link between code changes and JIRA issues, team members can engage in more meaningful discussions during stand-ups and retrospectives. This enhanced communication can lead to improved problem-solving and a stronger sense of shared ownership over the project.

Improved Risk Management

With integrated Git and JIRA data, engineering managers can identify potential risks more effectively. By monitoring commit activity and pull requests alongside JIRA issue statuses, managers can spot trends and anomalies that may indicate project delays or technical challenges. For example, if there is a sudden decrease in commit activity for a specific task, it may signal that the team is facing challenges or blockers. This proactive approach allows teams to address issues before they escalate, ultimately improving project outcomes and reducing the likelihood of last-minute crises.

Comprehensive Reporting and Analytics

The combination of JIRA and Git data enables more comprehensive reporting and analytics. Engineering managers can analyze not only task completion rates but also the underlying development activity that drives those metrics. This deeper understanding can inform better decision-making and strategic planning for future projects. For instance, by analyzing commit patterns and pull request activity, managers can identify trends in team performance and areas for improvement. This data-driven approach allows for more informed resource allocation and project planning, ultimately leading to more successful outcomes.

Best Practices for Integrating Git Data with JIRA

To maximize the benefits of integrating Git data with JIRA, engineering managers should consider the following best practices:

Select the Right Tools

Choose integration tools that fit your team's specific needs. Tools like Typo can facilitate the connection between Git and JIRA smoothly. Additionally, JIRA integrates directly with several source control systems, allowing for automatic updates and real-time visibility.

Sprint analysis in Typo

If you’re ready to enhance your project delivery speed and predictability, consider integrating Git data with your JIRA dashboards. Explore Typo! We can help you do this in a few clicks & make it one of your favorite dashboards.

Establish Commit Message Guidelines

Encourage your team to adopt consistent commit message guidelines. Including JIRA issue keys in commit messages will create a direct link between the code change and the JIRA issue. This practice not only enhances traceability but also aids in generating meaningful reports and insights. For example, a commit message like 'JIRA-123: Fixed the login issue' can help managers quickly identify relevant commits related to specific tasks.
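To make the convention stick, many teams enforce it with a commit-msg hook. The sketch below is a minimal, illustrative Python example; the key pattern and the error message are assumptions you would adapt to your own JIRA project keys.

#!/usr/bin/env python3
# Minimal commit-msg hook (illustrative): reject commits that do not reference a JIRA issue key.
# Save as .git/hooks/commit-msg and make it executable; adapt the pattern to your project keys.
import re
import sys

ISSUE_KEY_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. "JIRA-123"

def main() -> int:
    # Git passes the path of the commit message file as the first argument.
    with open(sys.argv[1], encoding="utf-8") as f:
        message = f.read()
    if ISSUE_KEY_PATTERN.search(message):
        return 0  # issue key found, allow the commit
    print("Commit message must reference a JIRA issue key, e.g. 'JIRA-123: Fixed the login issue'.")
    return 1      # a non-zero exit code aborts the commit

if __name__ == "__main__":
    sys.exit(main())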

Automate Workflows

Leverage automation features available in both JIRA and Git platforms to streamline the integration process. For instance, set up automated triggers that update JIRA issues based on events in Git, such as moving a JIRA issue to 'In Review' once a pull request is submitted in Git. This reduces manual updates and alleviates the administrative burden on the team.
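As a rough illustration of such a trigger, the sketch below reacts to a GitHub "pull request opened" webhook payload and transitions any referenced JIRA issues to 'In Review' via the JIRA REST API. The base URL, credentials, and transition ID are placeholders, and your JIRA workflow may use different transition names and IDs.

import re
import requests

# Placeholders: adapt to your JIRA site, credentials, and workflow.
JIRA_BASE_URL = "https://your-domain.atlassian.net"
JIRA_AUTH = ("automation-bot@example.com", "api-token")
IN_REVIEW_TRANSITION_ID = "21"  # look up the real ID via the issue's transitions endpoint

ISSUE_KEY_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def handle_pull_request_opened(payload: dict) -> None:
    # Collect JIRA issue keys mentioned in the PR title or branch name.
    pr = payload.get("pull_request", {})
    text = " ".join([pr.get("title", ""), pr.get("head", {}).get("ref", "")])
    for issue_key in sorted(set(ISSUE_KEY_PATTERN.findall(text))):
        # Ask JIRA to move the issue to 'In Review'.
        response = requests.post(
            f"{JIRA_BASE_URL}/rest/api/3/issue/{issue_key}/transitions",
            json={"transition": {"id": IN_REVIEW_TRANSITION_ID}},
            auth=JIRA_AUTH,
            timeout=10,
        )
        response.raise_for_status()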

Train Your Team

Providing adequate training to your team ensures everyone understands the integration process and how to effectively use both tools together. Conduct workshops or create user guides that outline the key benefits of integrating Git and JIRA, along with tips on how to leverage their combined functionalities for improved workflows.

Monitor and Adapt

Implement regular check-ins to assess the effectiveness of the integration. Gather feedback from team members on how well the integration is functioning and identify any pain points. This ongoing feedback loop allows you to make incremental improvements, ensuring the integration continues to meet the needs of the team.

Utilize Dashboards for Visualization

Create comprehensive dashboards that visually represent combined metrics from both Git and JIRA. Tools like JIRA dashboards, Confluence, or custom-built data visualization platforms can provide a clearer picture of project health. Metrics can include the number of active pull requests, average time in code review, or commit activity relevant to JIRA task completion.

Encourage Regular Code Reviews

With the changes being reflected in JIRA, create a culture around regular code reviews linked to specific JIRA tasks. This practice encourages collaboration among team members, ensures code quality, and keeps everyone aligned with project objectives. Regular code reviews also lead to knowledge sharing, which strengthens the team's overall skill set.

Case Study:

25% Improvement in Task Completion with JIRA-Git Integration at Trackso

To illustrate the benefits of integrating Git data with JIRA, let’s consider a case study of a software development team at a company called Trackso.

Background

Trackso, a remote monitoring platform for solar energy, was developing a new SaaS platform with a diverse team of developers, designers, and project managers. The team relied heavily on JIRA for tracking project statuses, but they found their productivity hampered by several issues:

  • Tasks had vague statuses that did not reflect actual progress to project managers.
  • Developers frequently worked in isolation without insight into each other's code contributions.
  • They could not correlate project delays with specific code changes or reviews, leading to poor risk management.

Implementation of Git and JIRA Integration

In 2022, Trackso's engineering manager decided to integrate Git data with JIRA. They chose GitHub for version control, given its robust collaborative features. The team set up automatic links between their JIRA tickets and corresponding GitHub pull requests and standardized their commit messages to include JIRA issue keys.

Metrics of Improvement

After implementing the integration, Trackso experienced significant improvements within three months:

  • Increased Collaboration: There was a 40% increase in code review participation as developers began referencing JIRA issues in their commits, facilitating clearer discussions during code reviews.
  • Reduced Delivery Times: Average task completion times decreased by 25%, as developers could see almost immediately when tasks were being actively worked on or if blockers arose.
  • Improved Risk Management: The team reduced project delays by 30% due to enhanced visibility. For example, the integration helped identify that a critical feature was lagging due to slow pull request reviews. This enabled team leads to improve their code review workflows.
  • Boosted Developer Morale: Developer satisfaction surveys indicated that 85% of team members felt more engaged in their work due to improved communication and clarity around task statuses.

Challenges Faced

Despite these successes, Trackso faced challenges during the integration process:

  • Initial Resistance: Some team members were hesitant to adopt the new practices and personal dashboards. The engineering manager organized training sessions to showcase the benefits of integrating Git and JIRA and of moving from the default dashboard to a personal dashboard, promoting buy-in from the team.
  • Maintaining Commit Message Standards: Initially, not all developers consistently used the issue keys in their commit messages. The team revisited training sessions and created a shared repository of best practices to ensure adherence.

Conclusion

While JIRA dashboards are valuable tools for project management, they are insufficient on their own for engineering managers seeking to improve project delivery speed and predictability. By integrating Git data with JIRA, teams can gain richer insights into development activity, enhance collaboration, and manage risks more effectively. This holistic approach empowers engineering leaders to make informed decisions and drive continuous improvement in their software development processes. Embracing this integration will ultimately lead to better project outcomes and a more productive engineering culture. As the software development landscape continues to evolve, leveraging the power of both JIRA and Git data will be essential for teams looking to stay competitive and deliver high-quality products efficiently.

How to Build a DevOps Culture?

In an ever-changing tech landscape, organizations need to stay agile and deliver high-quality software rapidly. DevOps plays a crucial role in achieving these goals by bridging the gap between development and operations teams. 

In this blog, we will delve into how to build a DevOps culture within your organization and explore the fundamental practices and strategies that can lead to more efficient, reliable, and customer-focused software development.

What is DevOps? 

DevOps is a software development methodology that integrates development (Dev) and IT operations (Ops) to enhance software delivery’s speed, efficiency, and quality. The primary goal is to break down traditional silos between development and operations teams and foster a culture of collaboration and communication throughout the software development lifecycle.  This creates a more efficient and agile workflow that allows organizations to respond quickly to changes and deliver value to customers faster.

Why DevOps Culture is Beneficial? 

DevOps culture refers to a collaborative and integrated approach between development and operations teams. It focuses on breaking down silos, fostering a shared sense of responsibility, and improving processes through automation and continuous feedback.

  • Fostering collaboration between development and operations allows organizations to innovate more rapidly, and respond to market changes and customer needs effectively. 
  • Automation and streamlined processes reduce manual tasks and errors to increase efficiency in software delivery. This efficiency results in faster time-to-market for new features and updates.
  • Continuous integration and delivery practices improve software quality by early detection of issues. This helps maintain system stability and reliability.
  • A DevOps culture encourages teamwork and mutual trust to improve collaboration between previously siloed teams. This cohesive environment fosters innovation and collective problem-solving. 
  • DevOps culture results in faster recovery times, as teams can identify and address issues more swiftly, reducing downtime and improving overall service reliability.
  • Delivering high-quality software quickly and efficiently enhances customer satisfaction and loyalty, which is vital for long-term success. 

The CALMS Framework of DevOps 

The CALMS framework is used to understand and implement DevOps principles effectively. It breaks down DevOps into five key components:

Culture

The culture pillar focuses on fostering a collaborative environment where shared responsibility and open communication are prioritized. It is crucial to break down silos between development and operations teams and allow them to work together more effectively. 

Automation

Automation emphasizes minimizing manual intervention in processes. This includes automating testing, deployment, and infrastructure management to enhance efficiency and reliability.

Lean

The lean aspect aims to optimize workflows, manage work-in-progress (WIP), and eliminate non-value-adding activities. This is to streamline processes to accelerate software delivery and improve overall quality.

Measurement

Measurement involves collecting data to assess the effectiveness of software delivery processes and practices. It enables teams to make informed, fact-based decisions, identify areas for improvement, and track progress. 

Sharing

The sharing component promotes open communication and knowledge transfer among teams. It facilitates cross-team collaboration, fosters a learning environment, and ensures that successful practices and insights are shared and adopted widely.

Tips to Build a DevOps Culture

Start Simple 

Don’t overwhelm teams with a complete DevOps overhaul. Begin small and implement DevOps practices gradually. You can start with the team that is best aligned with DevOps principles and then move ahead with other teams in the organization. Build momentum with early wins and evolve practices as you gain experience.

Foster Communication and Collaborative Environment 

Communication is key. When done correctly, it promotes collaboration and a smooth flow of information across the organization. This further aligns the organization's operations and lets engineering leaders make informed decisions. 

Moreover, the combined working environment between the development and operations teams promotes a culture of shared responsibility and common objectives. They can openly communicate ideas and challenges, allowing them to have a mutual conversation about resources, schedules, required features, and execution of projects. 

Create a Common Goal 

Apart from encouraging communication and a collaborative environment, create a clear plan that outlines where you want to go and how you will get there. Ensure that these goals are realistic and achievable. This will allow teams to see the bigger picture and understand the desired outcome, motivating them to move in the right direction.

Focus on Automation 

Tools such as Slack, Kubernetes, Docker, and JFrog help build automation capabilities for DevOps teams. These tools are useful as they automate repetitive and mundane tasks and allow teams to focus on value-adding work. This allows them to fail fast, build fast, and deliver quickly, which enhances their efficiency and accelerates processes, positively impacting DevOps culture. Instead of assuming, ask your team directly which parts can be automated, and then provide the support needed to automate them. 

Implement CI/CD pipeline

The organization must fully understand and implement CI/CD to establish a DevOps culture and streamline the software delivery process. This allows for automating deployment from development to production and releasing the software more frequently with better quality and reduced risks. The CI/CD tools further allow teams to catch bugs early in the development cycle, reduce manual work, and minimize downtime between releases. 

Foster Continuous Learning and Improvement

Continuous improvement is a key principle of DevOps culture. Engineering leaders must look for ways to encourage continuous learning and improvement such as by training and providing upskilling opportunities. Besides this, give them the freedom to experiment with new tools and techniques. Create a culture where they feel comfortable making mistakes and learning from them. 

Balance Speed and Security 

The teams must ensure that delivering products quickly doesn’t mean compromising security. In DevOps culture, the organization must adopt a ‘Security-first approach’ by integrating security practices into the DevOps pipeline. To maintain a strong security posture, regular security audits and compliance checks are essential. Security scans should be conducted at every stage of the development lifecycle to continuously monitor and assess security.

Monitor and Measure 

Regularly monitor and track system performance to detect issues early and ensure smooth operation. Use metrics and data to guide decisions, optimize processes, and continuously improve DevOps practices. Implement comprehensive dashboards and alerts to ensure teams can quickly respond to performance issues and maintain optimal health. 

Prioritize Customer Needs

In DevOps culture, the organization must emphasize the ever-evolving needs of the customers. Encourage teams to think from the customer’s perspective and keep their needs and satisfaction at the forefront of the software delivery processes. Regularly incorporate customer feedback into the development cycle to ensure the product aligns with user expectations.

Typo - An Effective Platform to Promote DevOps Culture

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as GIT versioning, issue tracker, and CI/CD tools.

It also offers comprehensive insights into the deployment process through DORA and other key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360-degree view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.

Conclusion 

Building a DevOps culture is essential for organizations to improve their software delivery capabilities and maintain a competitive edge. Implementing key practices as mentioned above will pave the way for a successful DevOps transformation. 

What Lies Ahead: Platform Engineering Predictions

As platform engineering continues to evolve, it brings both promising opportunities and potential challenges. 

As we look to the future, what changes lie ahead for Platform Engineering? In this blog, we will explore the future landscape of platform engineering and strategize how organizations can stay at the forefront of innovation.

What is Platform Engineering? 

Platform Engineering is an emerging technology approach that equips software developers with all the required resources. It acts as a bridge between development and infrastructure, which helps simplify complex tasks and enhance development velocity. The primary goal is to improve developer experience, operational efficiency, and the overall speed of software delivery.

Importance of Platform Engineering

  • Platform engineering helps in creating reusable components and standardized processes. It also automates routine tasks, such as deployment, monitoring, and scaling, to speed up the development cycle.
  • Platform engineering integrates security measures into the platform to ensure that applications are built and deployed securely. This allows the platform to meet regulatory and compliance requirements.
  • It ensures efficient use of resources to balance performance and expenditure. It also provides transparency into resource usage and associated costs to help organizations make informed decisions about scaling and investment.
  • By providing tools, frameworks, and services, platform engineering empowers developers to build, deploy, and manage applications more effectively.
  • A well-engineered platform allows organizations to adapt quickly to market changes, new technologies, and customer needs.

Key Predictions for Platform Engineering

More Focus on Developer Experience

The rise of platform engineering will enhance the developer experience by creating standard toolchains and workflows. In the coming years, platform engineering teams will work closely with developers to understand what they need to be productive. Moreover, platform tools will be integrated and closely monitored through DevEx metrics and reports. This will enable developers to work efficiently and focus on core tasks by automating repetitive work, further improving their productivity and satisfaction. 

Rise of Internal Developer Platforms 

Platform engineering is closely associated with the development of internal developer platforms (IDPs). Organizations today are striving for efficiency, so the creation and adoption of internal developer platforms will rise. This will streamline operations, provide a standardized way of deploying and managing applications, and reduce cognitive load. Hence, it will reduce time to market for new features and products, allowing developers to focus on delivering high-quality products more efficiently rather than managing infrastructure. 

Growing Trend of Ephemeral Environment 

Modern software development demands rapid iteration. Ephemeral environments, which are temporary and created on demand, will be an effective way to test new features and bug fixes before they are merged into the main codebase. These environments prioritize speed, flexibility, and cost efficiency. Since they are created on demand and are short-lived, they align perfectly with modern development practices. 

Integration with Generative AI 

As AI-driven tools become more prevalent, generative AI tools such as GitHub Copilot and Google Gemini will enhance capabilities such as infrastructure as code, governance as code, and security as code. This will not only automate manual tasks but also support smoother operations and improved documentation processes. Hence, they will drive innovation and automate dev workflows. 

Extension to DevOps 

Platform engineering is a natural extension of DevOps. In the future, the platform engineers will work alongside DevOps rather than replacing it to address its complexities and scalability challenges. This will provide a standardized and automated approach to software development and deployment leading to faster project initialization, reduced lead time, and increased productivity. 

Shift to Product-Centric Funding Model 

Software organizations are now shifting from a project-centric funding model toward a product-centric one. When platforms are fully-fledged products, they serve internal customers and require a thoughtful and user-centric approach to their ongoing development. This also aligns well with a product lifecycle that is ongoing and continuous, which enhances innovation and reduces operational friction. It will also decentralize decision-making, allowing platform engineering leaders to make and adjust funding decisions for their teams. 

Why Staying Updated on Platform Engineering Trends is Crucial?

  • Platform Engineering is a relatively new and evolving field. Hence, platform engineering teams need to keep up with rapid tech changes and ensure the platform remains robust and efficient.
  • Emerging technologies such as serverless computing and edge computing will shape the future of platform engineering. Moreover, artificial intelligence and machine learning also help in optimizing various aspects of software development, such as testing and monitoring. 
  • Platform engineering trends are introducing new ways to automate processes, manage infrastructure, and optimize workflows. This enables organizations to streamline operations, reduce manual work, and focus on more strategic tasks, leading to enhanced developer productivity. 
  • A platform aims to deliver a superior user experience. When platform engineers stay ahead of the learning curve, they can implement features and improvements that improve the end-user experience, resulting in higher customer satisfaction and retention.
  • Trends in platform engineering highlight new methods for building scalable and flexible systems. It allows platform engineers to design platforms that can easily adapt to changing demands and scale without compromising performance.

Typo - An Effective Platform Engineering Tool 

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as GIT versioning, issue tracker, and CI/CD tools.

It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360-degree view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.

Conclusion 

The future of platform engineering is both exciting and dynamic. As this field continues to evolve, staying ahead of these developments is crucial for organizations aiming to maintain a competitive edge. By embracing these predictions and proactively adapting to changes, platform engineering teams can drive innovation, improve efficiency, and deliver high-quality products that meet the demands of an ever-changing tech landscape.

Common Mistakes in Platform Engineering

Platform engineering is a relatively new and evolving field in the tech industry. However, like any evolving field, it comes with its share of challenges which, if overlooked, can limit its effectiveness.

In this blog post, we dive deep into these common missteps and provide actionable insights to overcome them, so that your platform engineering efforts are both successful and sustainable.

What is Platform Engineering?

Platform Engineering refers to providing foundational tools and services to the development team that allow them to quickly and safely deliver their applications. This aims to increase developer productivity by providing a unified technical platform that streamlines processes, which helps reduce errors and enhance reliability. 

Core Components of Platform Engineering 

Internal Developer Platforms (IDPs) 

The core component of platform engineering is the IDP, i.e., a centralized collection of tools, services, and automated workflows that enables developers to self-serve the resources needed for building, testing, and deploying applications. It empowers developers to deliver faster by reducing reliance on other teams, automating repetitive tasks, reducing the risk of errors, and ensuring every application adheres to organizational standards.

Platform Team 

The platform team consists of platform engineers who are responsible for building, maintaining, and configuring the IDP. The platform team standardizes workflows, automates repetitive tasks, and ensures that developers have access to the necessary tools and resources. The aim is to create a seamless experience for developers. Hence, allowing them to focus on building applications rather than managing infrastructure. 

Automation and Standardization

Platform engineering focuses on the importance of standardizing processes and automating infrastructure management. This includes creating paved roads for common development tasks such as deployment scripts, testing, and scaling to simplify workflows and reduce friction for developers. Curating a catalog of resources, following predefined templates, and establishing best practices ensure that every deployment follows the same standards, thus enhancing consistency across development efforts while allowing flexibility for individual preferences. 

Continuous Improvement 

Platform engineering is an iterative process, requiring ongoing assessment and enhancement based on developer feedback and changing business needs. This results in continuous improvement that ensures the platform evolves to meet the demands of its users and incorporates new technologies and practices as they emerge. 

Security and Compliance

Security is a key component of platform engineering. Integrating security best practices into the platform, such as automated vulnerability scanning, encryption, and compliance monitoring, is the best way to protect against vulnerabilities and ensure compliance with relevant regulations. This proactive approach, integrated into all stages of the platform, helps mitigate risks associated with software delivery and fosters a secure development environment. 

Common Mistakes in Platform Engineering

Focusing Solely on Dashboards

One of the common mistakes platform engineers make is focusing solely on dashboards without addressing the underlying issues that need solving. While dashboards provide a good overview, they can lead to a superficial understanding of problems instead of encouraging genuine process improvements. 

To avoid this, teams must combine dashboards with automated alerts, tracing, and log analysis to get actionable insights and a more comprehensive observability strategy for faster incident detection and resolution. 

Building without Understanding the Developers’ Needs

Developing a platform based on assumptions ends up not addressing real problems and does not meet developers' needs. The platform may lack important features for developers, leading to dissatisfaction and low adoption. 

Hence, establishing clear objectives and success criteria is vital for guiding development efforts. Engage with developers regularly. Conduct surveys, interviews, or workshops to gather insights into their pain points and needs before building the platform.

Overengineering the Platform 

Building an overly complex platform hinders rather than helps development efforts. When the platform contains features that aren’t necessary or used by developers, it leads to increased maintenance costs and confusion among developers, which further hampers their productivity. 

The goal must be to find the right balance between functionality and simplicity. Hence, ensure the platform effectively meets the needs of developers without unnecessary complications, and iterate on it based on actual usage and feedback.

Encouraging One-Size-Fits-All Solution

The belief that a single platform caters to all development teams and use cases uniformly is a fallacy. Different teams and applications have varying needs, workflows, and technology stacks, necessitating tailored solutions rather than a uniform approach. As a result, the platform may end up being too rigid for some teams and overly complex for others, resulting in low adoption and inefficiencies. 

Hence, design a flexible and customizable platform that adapts to diverse requirements. This allows teams to tailor the platform to their specific workflows while maintaining shared standards and governance.

Overplanning and Under-Executing

Spending excessive time in the planning phase leads to delays in implementation, missed opportunities, and not fully meeting the evolving needs of end-users. When teams focus on perfecting every detail before implementation, the platform remains theoretical instead of delivering real value.

An effective way is to create a balance between planning and executing by adopting an iterative approach. In other words, focus on delivering a minimum viable product (MVP) quickly and continuously improving it based on real user feedback. This allows the platform to evolve in alignment with actual developer needs which ensures better adoption and more effective outcomes.

Failing to Prioritize Security

Building the platform without incorporating security measures from the beginning can create opportunities for cyber threats and attacks. This also exposes the organization to compliance risks, vulnerabilities, and potential breaches that could be costly to resolve.

Implementing automated security tools, such as identity and access management (IAM), encrypted communications, and code analysis tools helps continuously monitor for security issues and ensure compliance with best practices. Besides this, provide ongoing security training that covers common vulnerabilities, secure coding practices, and awareness of evolving threats.

Benefits of Platform Engineering 

When used correctly, platform engineering offers many benefits: 

  • Platform engineering improves developer experience by offering self-service capabilities and standardized tools. It allows the team to focus on building features and deliver products more efficiently and effectively.
  • It increases the reliability and security of applications by providing a stable foundation and centralized infrastructure management.
  • Engineering teams can deploy applications and updates faster with a robust and automated platform that accelerates the time-to-market for new features and products.
  • Focusing on scalable solutions enables the underlying systems to handle increased demand without compromising performance, allowing organizations to grow their applications and services efficiently.
  • A solid platform foundation allows teams to experiment with new technologies and methodologies. Hence, supporting innovation and the adoption of modern practices.

Typo - An Effective Platform Engineering Tool 

Typo is an effective platform engineering tool that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as GIT versioning, issue tracker, and CI/CD tools.

It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360-degree view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.

Conclusion

Platform engineering has immense potential to streamline development and improve efficiency, but avoiding common pitfalls is key. By focusing on the pitfalls mentioned above, you can create a platform that drives productivity and innovation. 

All the best! :) 

A Guide to Clean Code Principles 

What is Clean Code? 

Robert C. Martin introduced the ‘Clean Code’ concept in his book ‘Clean Code: A Handbook of Agile Software Craftsmanship’. He defined clean code as: 

“A code that has been taken care of. Someone has taken the time to keep it simple and orderly. They have laid appropriate attention to details. They have cared.”

Clean code is easy to read, understand, and maintain. It is well structured and free of unnecessary complexity, code smell, and anti-patterns. 

Key Characteristics that Define Clean Code

  • The code is easy to read and understand. Variables, functions, and classes have descriptive names, and the code is structured with a clear purpose. 
  • The code is simple and doesn’t include any unnecessary complexity. 
  • The code is consistent in naming conventions, formatting, and organization to help maintain readability. 
  • The code is easy to test and free from bugs and errors. 
  • The code is easy to update and modify. 
  • Clean code is regularly refactored and free from redundancy. 

Clean Code Principles 

Single Responsibility Principle 

This principle states that each module or function should have a defined responsibility and one reason to change. Otherwise, it can result in bloated and hard-to-maintain code. 

Example: the code’s responsibilities are separated into three distinct classes: User, Authentication, and EmailService. This makes the code more modular, easier to test, and easier to maintain.

class User {
  constructor(name, email, password) {
    this.name = name;
    this.email = email;
    this.password = password;
  }
}

class Authentication {
  login(user, password) {
    // ... login logic
  }

  register(user, password) {
    // ... registration logic
  }
}

class EmailService {
  sendVerificationEmail(email) {
    // ... email sending logic
  }
}

DRY Principle (Don’t Repeat Yourself) 

The DRY Principle states that unnecessary duplication and repetition of code must be avoided. If not followed, it can increase the risk of inconsistency and redundancy. Instead, you can abstract common functionality into reusable functions, classes, or modules.

Example: The common greeting formatting logic is extracted into a reusable formatGreeting function, which makes the code DRY and easier to maintain.

function formatGreeting(name, message) {
  return message + ", " + name + "!";
}

function greetUser(name) {
  console.log(formatGreeting(name, "Hello"));
}

function sayGoodbye(name) {
  console.log(formatGreeting(name, "Goodbye"));
}

YAGNI – you aren’t gonna need it

YAGNI is an extreme programming practice that states “Always implement things when you actually need them, never when you just foresee that you need them.” 

It doesn’t mean avoiding flexibility in code, but rather not overengineering everything based on assumptions about future needs. The principle means delivering the most critical features on time and prioritizing them based on necessity. 
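A contrived before-and-after sketch of the idea (the function and parameter names are hypothetical): the first version anticipates options nobody has requested, while the second implements only what is needed today.

import csv

# Over-engineered: speculative options that no current requirement calls for.
def export_report(rows, fmt="csv", compress=False, encrypt=False, upload_to_cloud=False):
    ...

# YAGNI: implement only the CSV export that is actually needed right now.
def export_report_as_csv(rows, path):
    with open(path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(rows)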

KISS – Keep It Simple, Stupid 

This principle states that code should favor simplicity over complexity to enhance comprehensibility, usability, and maintainability. Direct and clear code is better; avoid making it bloated or confusing. 

Example: The function directly multiplies the length and width to calculate the area and there are no extra steps or conditions that might confuse or complicate the code.

def calculate_area(length, width):
    return length * width

The Boy Scout Rule 

According to ‘The Boy Scout Rule’, always leave the code in a better state than you found it. In other words, make continuous, small enhancements whenever engaging with the codebase. It could be either adding a feature or fixing a bug. It encourages continuous improvement and maintains a high-quality codebase over time. 

Example: The original code had unnecessary complexity due to the redundant variable and nested conditional. The cleaned-up code is more concise and easier to understand.

Before: 

def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

result = factorial(5)
print(result)

After: 

def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))

Fail Fast

This principle indicates that code should fail as early as possible. Failing fast limits the bugs that make it into production and ensures errors are surfaced and addressed promptly. This keeps the code clean, reliable, and usable. 
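An illustrative example (the function and its parameters are hypothetical): validate inputs at the boundary and raise immediately, rather than letting a bad value propagate and surface as a confusing failure much later.

def apply_discount(price, discount_rate):
    # Fail fast: reject invalid inputs immediately, close to the source of the error.
    if price < 0:
        raise ValueError(f"price must be non-negative, got {price}")
    if not 0 <= discount_rate <= 1:
        raise ValueError(f"discount_rate must be between 0 and 1, got {discount_rate}")
    return price * (1 - discount_rate)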

Open/Closed Principle 

As per the Open/Closed Principle, the software entities should be open to extension but closed to modification. This means that team members must add new functionalities to an existing software system without changing the existing code. 

Example: The Open/Closed Principle allows adding new employee types (like "intern" or "contractor") without modifying the existing calculate_salary function. This makes the system more flexible and maintainable.

Without the Open/Closed Principle 

def calculate_salary(employee_type):
    if employee_type == "regular":
        return base_salary
    elif employee_type == "manager":
        return base_salary * 1.5
    elif employee_type == "executive":
        return base_salary * 2
    else:
        raise ValueError("Invalid employee type")

With the Open/Closed Principle 

class Employee:
    def calculate_salary(self):
        raise NotImplementedError()

class RegularEmployee(Employee):
    def calculate_salary(self):
        return base_salary

class Manager(Employee):
    def calculate_salary(self):
        return base_salary * 1.5

class Executive(Employee):
    def calculate_salary(self):
        return base_salary * 2

Practice Consistently 

When you choose to approach something in a specific way, maintain that consistency throughout the entire project. This includes consistent naming conventions, coding styles, and formatting. It also ensures that the code aligns with team standards, making it easier for others to understand and work with. Consistent practice also allows you to identify areas for improvement and learn new techniques.

Favor Composition Over Inheritance

This means using ‘has-a’ relationships (containing instances of other classes) instead of ‘is-a’ relationships (inheriting from a superclass). This makes the code more flexible and maintainable.

Example: In this example, the SportsCar class has a Car object as a member, and it can also have additional components like a spoiler. This makes it more flexible, as we can easily create different types of cars with different combinations of components.

class Engine:
    def start(self):
        pass

class Car:
    def __init__(self, engine):
        self.engine = engine  # a Car "has an" Engine

class SportsCar:
    def __init__(self, car, spoiler):
        self.car = car          # a SportsCar "has a" Car rather than inheriting from it
        self.spoiler = spoiler  # plus an additional component

Avoid Hard-Coded Numbers

Avoid hard-coded numbers; instead, use named constants or variables to make the code more readable and maintainable.

Example: 

Instead of: 

total = price * 0.2

Use: 

DISCOUNT_RATE = 0.2

total = price * DISCOUNT_RATE

This makes the code more readable and easier to modify if the discount rate needs to be changed.

Typo - An Automated Code Review Tool

Typo’s automated code review tool enables developers to catch code issues and detect code smells and potential bugs promptly. 

With automated code reviews, auto-generated fixes, and highlighted hotspots, Typo streamlines the process of merging clean, secure, and high-quality code. It automatically scans your codebase and pull requests for issues, generating safe fixes before merging to master. Hence, ensuring your code stays efficient and error-free.

The ‘Goals’ feature empowers engineering leaders to set specific objectives for their tech teams that directly support writing clean code. By tracking progress and providing performance insights, Typo helps align teams with best practices, making it easier to maintain clean, efficient code. The goals are fully customizable, allowing you to set tailored objectives for different teams simultaneously.

Conclusion 

Writing clean code isn’t just a crucial skill for developers. It is an important way to sustain software development projects.

By following the above-mentioned principles, you can develop a habit of writing clean code. It will take time but it will be worth it in the end.

Platform Engineering Best Practices

Platform engineering is a relatively new and evolving field in the tech industry. To make the most of Platform Engineering, there are several best practices you should be aware of.

In this blog, we explore these practices in detail and provide insights into how you can effectively implement them to optimize your development processes and foster innovation.

What is Platform Engineering?

Platform Engineering, an emerging technology approach, is the practice of designing and managing the infrastructure and tools that support software development and deployment. This helps teams perform end-to-end automation of the software development lifecycle. The aim is to reduce overall cognitive load, increase operational efficiency, and remove process bottlenecks by providing a reliable and scalable platform for building, deploying, and managing applications. 

Importance of Platform Engineering

  • Platform engineering improves developer experience by offering self-service capabilities and standardized tools. It allows the team to focus on building features and deliver products more efficiently and effectively. 
  • It increases the reliability and security of applications by providing a stable foundation and centralized infrastructure management.
  • Engineering teams can deploy applications and updates faster with a robust and automated platform that accelerates the time-to-market for new features and products.
  • Focusing on scalable solutions enables the underlying systems to handle increased demand without compromising performance, allowing organizations to grow their applications and services efficiently. 
  • A solid platform foundation allows teams to experiment with new technologies and methodologies. Hence, supporting innovation and the adoption of modern practices.

Platform Engineering Best Practices

The Platform Must Be Developer-Centric

Always treat the developers who use your platform as paying customers. This allows you to understand their pain points, preferences, and requirements and focus on making the development process easier and more efficient. Some key points to take into consideration:

  • User-friendly tools to streamline the workflow. 
  • Must feel at ease while navigating the platform. 
  • Seamlessly integrates with existing and other third-party applications. 
  • Allow them to access and manage resources without needing extensive support.

When the above-mentioned needs and requirements are met, end-users are likely to adopt this platform enthusiastically. Hence, making the platform more effective and productive. 

Adopt Security Best Practices

Implement security controls at every layer of the platform. Make sure security posture audits are conducted regularly and that everyone on the team is updated with the latest security patches. Besides this, conduct code reviews and code analysis to identify and fix security vulnerabilities quickly. Educate your platform engineering team about security practices and offer them ongoing training and mentorship so they are constantly upskilling. 

Foster Continuous Improvement and Feedback Loops

Continuous improvement must be a core principle to allow the platform to evolve according to technical trends. Integrate feedback mechanisms with the internal developer platform to gather insights from the software development lifecycle. Regularly review and improve the platform based on feedback from development teams. This enables rapid responses to any impediments developers face. 

Encourage a Culture of Collaboration

Foster communication and knowledge sharing among platform engineers. Align them with common goals and objectives and recognize their collaborative efforts. This helps teams understand how their work contributes to the overall success of the platform, which further fosters a sense of unity and purpose. It also ensures that all stakeholders understand how to effectively use the platform and contribute to its continuous improvement. 

Platform Team must have a Product Mindset

View your internal platform as a product that requires management and ongoing development. The platform team must be driven by a product mindset that includes publishing roadmaps, gathering user feedback, and fostering a customer-centric approach. They must focus on what offers real value to their internal customers and app developers based on the feedback, so it addresses the pain points quickly. 

Maintain DevOps Culture

Emphasize the importance of a DevOps culture that prioritizes collaboration between development and operations teams and focuses on learning and improvement rather than assigning blame. It is crucial to foster an environment where platform engineering can thrive and where there is shared responsibility for the software lifecycle.

Typo - An Effective Platform Engineering Tool 

Typo is an effective platform engineering tool that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as GIT versioning, issue tracker, and CI/CD tools.

It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360-degree view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.

Conclusion

Platform Engineering is reshaping how we approach software development by streamlining infrastructure management and improving operational efficiency. Adhering to best practices allows organizations to harness the full potential of their platforms. Embracing these principles will optimize your development processes, drive innovation, and ensure a stable foundation for future growth.

Effective DevOps Strategies for Startups

The era when development and operations teams worked in isolation, rarely interacting, is over. This outdated approach led to significant delays in developing and launching new applications. Modern IT leaders understand that DevOps is a more effective strategy.

DevOps fosters collaboration between software development and IT operations, enhancing the speed, efficiency, and quality of software delivery. By leveraging DevOps tools, the software development process becomes more streamlined through improved team collaboration and automation.

Understanding DevOps

DevOps is a methodology that merges software development (Dev) with IT operations (Ops) to shorten the development lifecycle while maintaining high software quality.

Creating a DevOps culture promotes collaboration, which is essential for continuous delivery. IT operations and development teams share ideas and provide prompt feedback, accelerating the application launch cycle.

Importance of DevOps for Startups

In the competitive startup environment, time equates to money. Delayed product launches risk competitors beating you to market. Even with an early market entry, inefficient development processes can hinder timely feature rollouts that customers need.

Implementing DevOps practices helps startups keep pace with industry leaders, speeding up development without additional resource expenditure, improving customer experience, and aligning with business needs.

Core Principles of DevOps

The foundation of DevOps rests on the principles of culture, automation, measurement, and sharing (CAMS). These principles drive continuous improvement and innovation in startups.

Key Benefits of DevOps for Startups

Faster Time-to-Market

DevOps accelerates development and release processes through automated workflows and continuous feedback integration.

  • Startups can rapidly launch new features, fix bugs, and update software, gaining a competitive advantage.
  • Implement continuous integration and continuous deployment (CI/CD) pipelines.
  • Use automated testing to identify issues early.

Improved Efficiency

DevOps enhances workflow efficiency by automating repetitive tasks and minimizing manual errors.

  • Utilize configuration management tools like Ansible and Chef.
  • Implement containerization with Docker for consistency across environments.
  • Adopt supporting tools such as Jenkins for CI/CD, Docker for containerization, and Kubernetes for orchestration.

Enhanced Reliability

DevOps ensures code changes are continuously tested and validated, reducing failure risks.

  • Conduct regular automated testing.
  • Continuously monitor applications and infrastructure.
  • Increased reliability leads to higher customer satisfaction and retention.

DevOps Practices for Startups

Embrace Automation with CI/CD Tools

Automation tools are essential for accelerating the software delivery process. Startups should use CI/CD tools to automate testing, integration, and deployment. Recommended tools include:

  • Jenkins: An open-source automation server that supports building and deploying applications.
  • GitLab CI/CD: Integrated CI/CD capabilities within GitLab for seamless pipeline management.
  • CircleCI: A cloud-based CI/CD tool that offers fast builds and easy integration with various services.

Implement Continuous Integration and Continuous Delivery (CI/CD)

CI/CD practices enable frequent code changes and deployments. Key components include:

  • Version Control Systems (VCS): Use Git with platforms like GitHub or Bitbucket for efficient code management.
  • Build Automation: Tools like Maven or Gradle for Java projects, or npm scripts for Node.js, automate the build process.
  • Deployment Automation: Utilize tools like Spinnaker or Argo CD for managing Kubernetes deployments.

Utilize Infrastructure as Code (IaC)

IaC allows startups to manage infrastructure through code, ensuring consistency and reducing manual errors. Consider using:

  • Terraform: For provisioning and managing cloud infrastructure in a declarative manner.
  • AWS CloudFormation: For defining infrastructure using YAML or JSON templates.
  • Ansible: For configuration management and application deployment.

Adopt Containerization

Containerization simplifies deployment and improves resource utilization. Use:

  • Docker: To package applications and their dependencies into lightweight, portable containers.
  • Kubernetes: For orchestrating containerized applications, enabling scaling and management.

Monitor and Measure Performance

Implement robust monitoring tools to gain visibility into application performance. Recommended tools include:

  • Prometheus: For real-time monitoring and alerting.
  • Grafana: For visualizing metrics and logs.
  • ELK Stack (Elasticsearch, Logstash, Kibana): For centralized logging and data analysis.
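For illustration, the sketch below instruments a toy service with the prometheus_client Python library so Prometheus can scrape request counts and latencies; the metric names, the simulated workload, and the port are arbitrary examples.

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUEST_COUNT = Counter("app_requests_total", "Total requests handled")
REQUEST_LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request():
    # Count each request and record how long the (simulated) work takes.
    REQUEST_COUNT.inc()
    with REQUEST_LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # simulated work

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_request()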

Integrate Security (DevSecOps)

Incorporate security practices into the DevOps pipeline using:

  • Snyk: For identifying vulnerabilities in open-source dependencies.
  • SonarQube: For continuous inspection of code quality and security vulnerabilities.
  • HashiCorp Vault: For managing secrets and protecting sensitive data.

Leverage Software Engineering Intelligence (SEI) Platforms

SEI platforms provide critical insights into the engineering processes, enhancing decision-making and efficiency. Key features include:

  • Data Integration: SEI platforms like Typo ingest data from various tools (e.g., GitHub, JIRA) to provide a holistic view of the development pipeline.
  • Actionable Insights: These platforms analyze data to identify bottlenecks and inefficiencies, enabling teams to optimize workflows and improve delivery speed.
  • DORA Metrics: SEI platforms track key metrics such as deployment frequency, lead time for changes, change failure rate, and time to restore service, helping teams measure their performance against industry standards.

Foster Collaboration and Communication

Utilize collaborative tools to enhance communication among team members. Recommended tools include:

  • Slack: For real-time communication and integration with other DevOps tools.
  • JIRA: For issue tracking and agile project management.
  • Confluence: For documentation and knowledge sharing.

Encourage Continuous Learning

Promote a culture of continuous learning through:

  • Internal Workshops: Regularly scheduled sessions on new tools or methodologies.
  • Online Courses: Encourage team members to take courses on platforms like Coursera or Udemy.

Establish Clear Standards and Documentation

Create a repository for documentation and coding standards using:

  • Markdown: For easy-to-read documentation within code repositories.
  • GitHub Pages: For hosting project documentation directly from your GitHub repository.

How Typo Helps DevOps Teams?

Typo is a powerful tool designed specifically for tracking and analyzing DevOps metrics. It provides an efficient solution for dev and ops teams seeking precision in their performance measurement.

  • With pre-built integrations in the dev tool stack, the dashboard provides all the relevant data within minutes.
  • It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency, and much more from a single dashboard.
  • The dashboard sets custom improvement goals for each team and tracks their success in real time.
  • It gives real-time visibility into a team’s KPI and lets them make informed decisions.

Conclusion

Implementing DevOps best practices can markedly boost the agility, productivity, and dependability of startups.

By integrating continuous integration and deployment, leveraging infrastructure as code, employing automated testing, and maintaining continuous monitoring, startups can effectively tackle issues like limited resources and skill shortages.

Moreover, fostering a cooperative culture is essential for successful DevOps adoption. By adopting these strategies, startups can create durable, scalable solutions for end users and secure long-term success in a competitive landscape.

Pros and Cons of DORA Metrics for Continuous Delivery

DORA metrics offer a valuable framework for assessing software delivery performance throughout the software delivery lifecycle. Measuring DORA key metrics allows engineering leaders to identify bottlenecks, improve efficiency, and enhance software quality, which impacts customer satisfaction. It is also a key indicator for measuring the effectiveness of continuous delivery pipelines.

In this blog post, we delve into the pros and cons of utilizing DORA metrics to optimize continuous delivery processes, exploring their impact on performance, efficiency, and the delivery of high-quality software.

What are DORA Metrics?

DORA metrics were developed by the DORA team founded by Gene Kim, Jez Humble, and Dr. Nicole Forsgren. These metrics are key performance indicators that measure the effectiveness and efficiency of the software delivery process and provide a data-driven approach to evaluate the impact of operational practices on software delivery performance.

Four Key DORA Metrics

  • Deployment Frequency measures how often code is deployed to production.
  • Lead Time for Changes measures the time it takes for a committed code change to reach production.
  • Change Failure Rate measures the percentage of deployments that cause failures or errors in production.
  • Mean Time to Recover measures the time to recover a system or service after an incident or failure in production.

In 2021, the DORA Team added Reliability as a fifth metric. It is based upon how well the user’s expectations are met, such as availability and performance, and measures modern operational practices.
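To make the definitions above concrete, here is a minimal sketch of how the four metrics could be derived from a team’s own deployment and incident records. The record fields (committed_at, deployed_at, failed, restored_at) and the sample data are illustrative assumptions, not a standard schema or part of any particular tool.

```python
from datetime import datetime, timedelta

# Illustrative deployment records; field names and values are assumptions.
deployments = [
    {"committed_at": datetime(2024, 6, 3, 9), "deployed_at": datetime(2024, 6, 4, 15), "failed": False},
    {"committed_at": datetime(2024, 6, 5, 11), "deployed_at": datetime(2024, 6, 7, 10), "failed": True,
     "restored_at": datetime(2024, 6, 7, 13)},
    {"committed_at": datetime(2024, 6, 10, 14), "deployed_at": datetime(2024, 6, 11, 9), "failed": False},
]

window_days = 14  # length of the observation window covered by the records

# Deployment Frequency: deployments per week over the window.
deployment_frequency = len(deployments) / (window_days / 7)

# Lead Time for Changes: average commit-to-deploy duration.
lead_times = [d["deployed_at"] - d["committed_at"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change Failure Rate: share of deployments that caused a failure in production.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Mean Time to Recover: average time from a failed deployment to restoration.
recoveries = [d["restored_at"] - d["deployed_at"] for d in deployments if d["failed"]]
mttr = sum(recoveries, timedelta()) / len(recoveries) if recoveries else timedelta(0)

print(f"Deployment frequency: {deployment_frequency:.1f}/week")
print(f"Avg lead time: {avg_lead_time}, CFR: {change_failure_rate:.0%}, MTTR: {mttr}")
```

In practice these figures would be pulled automatically from CI/CD and incident-management tooling rather than maintained by hand.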

Importance of Continuous Delivery for DORA Metrics

Continuous delivery (CD) is a primary aspect of modern software development that automatically prepares code changes for release to a production environment. It is combined with continuous integration (CI) and together, these two practices are known as CI/CD.

CD pipelines offer significant advantages over traditional waterfall-style development. A few of these benefits are:

Faster Time to Market

Continuous Delivery enables more frequent releases, allowing new features, improvements, and bug fixes to be delivered to end-users more quickly. This provides a competitive advantage by keeping the product up-to-date and responsive to user needs, which enhances customer satisfaction.

Improved Quality and Reliability

Automated testing and consistent deployment processes catch bugs and issues early. It improves the overall quality and reliability of the software and reduces the chances of defects reaching production.

Reduced Deployment Risk

When updates are smaller and more frequent, it reduces the complexity and risk associated with each deployment. If an issue does arise, it becomes easier to pinpoint the problem and roll back the changes.

Scalability

CD practices can be scaled to accommodate growing development teams and more complex applications. It helps to manage the increasing demands of modern software development.

Innovation and Experimentation

Continuous delivery allows teams to experiment with new ideas and features efficiently. This encourages innovation by allowing quick feedback and iteration cycles. 

Pros of DORA Metrics for Continuous Delivery

Enhances Performance Visibility

  • Deployment Frequency: High deployment frequency indicates a team’s ability to deliver updates and new features quickly and consistently.
  • Lead Time for Changes: Short lead times suggest a more efficient delivery process.
  • Change Failure Rate: A lower rate highlights better testing and higher quality in releases.
  • Mean Time to Restore (MTTR): A lower MTTR indicates a team’s capability to respond to and fix issues rapidly.

Increases Operational Efficiency

Implementing DORA metrics encourages teams to streamline their processes, reducing bottlenecks and inefficiencies in the delivery pipeline. It also allows the team to regularly measure and analyze these metrics which fosters a culture of continuous improvement. As a result, teams are motivated to identify and resolve inefficiencies.

Fosters Collaboration and Communication

Tracking DORA metrics encourages collaboration between DevOps teams and other stakeholders, fostering a more integrated and cooperative approach to software delivery. It also provides objective data that teams can use to make informed decisions, prioritize work, and align their efforts with business goals.

Improves Software Quality

Continuous Delivery relies heavily on automated testing to catch defects early. DORA metrics help software teams track the effectiveness of their testing processes, which ensures higher software quality. Faster deployment cycles and lower lead times enable quicker feedback from end-users, allowing software development teams to address issues and improve the product more swiftly.

Increases Reliability and Stability

Software teams can make their deployments more reliable and less prone to issues by monitoring and aiming to reduce the change failure rate. A low MTTR demonstrates a team’s capability to quickly recover from failures, which minimizes downtime and its impact on users. Together, these increase the reliability and stability of the software.

Effective Incident Management

Incident management is an integral part of CD as it helps quickly address and resolve any issues that arise. This aligns with the DORA metric for Time to Restore Service as it ensures that any disruptions are quickly addressed, minimizing downtime, and maintaining service reliability.

Cons of DORA Metrics for Continuous Delivery

Implementation Challenges

The process of setting up the necessary software to measure DORA metrics accurately can be complex and time-consuming. Besides this, inaccurate or incomplete data can lead to misleading metrics which can affect decision-making and process improvements.

Resource Allocation Issues

Implementing and maintaining the necessary infrastructure to track DORA metrics can be resource-intensive. It potentially diverts resources from other important areas and increases the risk of disproportionately allocating resources to high-performing teams or projects to improve metrics.

Limited Scope of Metrics

DORA metrics focus on specific aspects of the delivery process and may not capture other crucial factors including security, compliance, or user satisfaction. It is also not universally applicable as the relevance and effectiveness of DORA metrics can vary across different types of projects, teams, and organizations. What works well for one team may not be suitable for another.

Cultural Resistance

Implementing DORA DevOps metrics requires changes in culture and mindset, which can be met with resistance from teams that are accustomed to traditional methods. Apart from this, ensuring that DORA metrics align with broader business goals and are understood by all stakeholders can be challenging.

Subjectivity in Measurement

While DORA metrics are quantitative in nature, their interpretation and application can be highly subjective. How metrics like Lead Time for Changes or MTTR are defined and measured can vary significantly across teams, which can result in inconsistencies in how these metrics are understood and applied.

How does Typo Solve this Issue?

As the tech landscape evolves, there is a need for diverse evaluation tools in software development. Relying solely on DORA metrics can result in a narrow understanding of performance and progress. Hence, software development organizations need a multifaceted evaluation approach.

And that’s why Typo is here to the rescue!

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as Git versioning, issue trackers, and CI/CD tools. It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Its automated code review tool helps identify issues in the code and auto-fixes them before you merge to master.

Features

  • Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
  • Includes an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint.
  • Provides a 360° view of the developer experience, i.e., captures qualitative insights and provides an in-depth view of the real issues.
  • Offers engineering benchmarks to compare the team’s results across industries.
  • User-friendly interface.

Conclusion

While DORA metrics offer valuable insights into software delivery performance, they have their limitations. Typo provides a robust platform that complements DORA metrics by offering deeper insights into developer productivity and workflow efficiency, helping engineering teams achieve the best possible software delivery outcomes.

Improving Scrum Team Performance with DORA Metrics

Scrum is a popular methodology for software development. It concentrates on continuous improvement, transparency, and adaptability to changing requirements. Scrum teams hold regular ceremonies, including Sprint Planning, Daily Stand-ups, Sprint Reviews, and Sprint Retrospectives, to keep the process on track and address any issues.

With the help of DORA DevOps Metrics, Scrum teams can gain valuable insights into their development and delivery processes.

In this blog post, we discuss how DORA metrics help boost scrum team performance. 

What are DORA Metrics? 

DevOps Research and Assessment (DORA) metrics are a compass for engineering teams striving to optimize their development and operations processes.

In 2015, the DORA team was founded by Gene Kim, Jez Humble, and Dr. Nicole Forsgren to evaluate and improve software development practices. The aim is to enhance the understanding of how development teams can deliver software faster, more reliably, and with higher quality.

Four key DORA metrics are: 

  • Deployment Frequency: Deployment Frequency measures the rate of change in software development and highlights potential bottlenecks. It is a key indicator of agility and efficiency. High Deployment Frequency signifies a streamlined pipeline, allowing teams to deliver features and updates faster.
  • Lead Time for Changes: Lead Time for Changes measures the time it takes for code changes to move from inception to deployment. It tracks the speed and efficiency of software delivery and offers valuable insights into the effectiveness of development processes, deployment pipelines, and release strategies.
  • Change Failure Rate: Change Failure Rate measures the frequency of newly deployed changes leading to failures, glitches, or unexpected outcomes in the IT environment. It reflects reliability and efficiency and is related to team capacity, code complexity, and process efficiency, impacting speed and quality.
  • Mean Time to Recover: Mean Time to Recover measures the average duration a system or application takes to recover from a failure or incident. It concentrates on determining the efficiency and effectiveness of an organization's incident response and resolution procedures.

Reliability is a fifth metric that was added by the DORA team in 2021. It is based upon how well your users’ expectations are met, such as availability and performance, and measures modern operational practices. It doesn’t have standard quantifiable targets for performance levels; rather, it depends upon service level indicators and service level objectives.

Why Are DORA Metrics Useful for Scrum Team Performance?

DORA metrics are useful for Scrum team performance because they provide key insights into the software development and delivery process, driving operational performance and improving the developer experience.

Measure Key Performance Indicators (KPIs)

DORA metrics track crucial KPIs such as deployment frequency, lead time for changes, mean time to recovery (MTTR), and change failure rate which helps Scrum teams understand their efficiency and identify areas for improvement.

Enhance Workflow Efficiency

Teams can streamline their software delivery process and reduce bottlenecks by monitoring deployment frequency and lead time for changes, leading to faster delivery of features and bug fixes.

Improve Reliability 

Tracking the change failure rate and MTTR helps software teams focus on improving the reliability and stability of their applications, resulting in more stable releases and fewer disruptions for users.

Encourage Data-Driven Decision Making 

DORA metrics give clear data that helps teams decide where to improve, making it easier to prioritize the most impactful actions for better performance and enhanced customer satisfaction.

Foster Continuous Improvement

Regularly reviewing these metrics encourages a culture of continuous improvement. This helps software development teams to set goals, monitor progress, and adjust their practices based on concrete data.

Benchmarking

DORA metrics allow DevOps teams to compare their performance against industry standards or other teams within the organization. This encourages healthy competition and drives overall improvement.

Provide Actionable Insights

DORA metrics provide actionable data that helps Scrum teams identify inefficiencies and bottlenecks in their processes. Analyzing these metrics allows engineering leaders to make informed decisions about where to focus improvement efforts and reduce recovery time.

Best Practices for Implementing DORA Metrics in Scrum Teams

Understand the Metrics 

Firstly, understand the importance of DORA Metrics as each metric provides insight into different aspects of the development and delivery process. Together, these metrics offer a comprehensive view of the team’s performance and allow them to make data-driven decisions. 

Set Baselines and Goals

Scrum teams should start by setting baselines for each metric to get a clear starting point and set realistic goals. For instance, if a scrum team currently deploys once a month, it may be unrealistic to aim for multiple deployments per day right away. Instead, they could set a more achievable goal, like deploying once a week, and gradually work towards increasing their frequency.

Regularly Review and Analyze Metrics

Scrum teams should schedule regular reviews (e.g., during sprint retrospectives) to discuss the metrics and identify trends, patterns, and anomalies in the data. This helps them track progress, pinpoint areas for improvement, and make data-driven decisions to optimize their processes and adjust their goals as needed.

Foster Continuous Growth

Use the insights gained from the metrics to drive ongoing improvements and foster a culture that values experimentation and learning from mistakes. By creating this environment, Scrum teams can steadily enhance their software delivery performance. Note that this approach should go beyond just focusing on DORA metrics; it should also take into account other factors like developer productivity and well-being, collaboration, and customer satisfaction.

Ensure Cross-Functional Collaboration and Communicate Transparently

Encourage collaboration between development, operations, and other relevant teams to share insights and work together to address bottlenecks and improve processes. Make the metrics and their implications transparent to the entire team. You can use the DORA Metrics dashboard to keep everyone informed and engaged.

How Does Typo Leverage DORA Metrics?

Typo is a powerful tool designed specifically for tracking and analyzing DORA metrics. It provides an efficient solution for DevOps and Scrum teams seeking precision in their performance measurement.

  • With pre-built integrations in the dev tool stack, the DORA metrics dashboard provides all the relevant data within minutes.
  • It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency, and much more from a single dashboard.
  • The dashboard sets custom improvement goals for each team and tracks their success in real-time.
  • It gives real-time visibility into a team’s KPIs and allows them to make informed decisions.

Conclusion 

Leveraging DORA metrics can transform Scrum team performance by providing actionable insights into key aspects of development and delivery. When these metrics are implemented the right way, teams can optimize their workflows, enhance reliability, and make informed decisions to build high-quality software.

Top Platform Engineering KPIs You Need to Monitor

Platform Engineering is becoming increasingly crucial. According to the 2024 State of DevOps Report: The Evolution of Platform Engineering, 43% of organizations have had platform teams for 3-5 years. The field offers numerous benefits, such as faster time-to-market, enhanced developer happiness, and the elimination of team silos.

However, there is one critical piece of advice that Platform Engineers often overlook: treat your platform as an internal product and consider your wider teams as your customers.

So, how can they do this effectively? It’s important to measure what’s working and what isn’t, using consistent indicators of success.

In this blog, we’ve curated the top platform engineering KPIs that software teams must monitor:

What is Platform Engineering?

Platform Engineering, an emerging technology approach, equips software engineering teams with all the resources required to automate the software development lifecycle end to end. The goal is to reduce overall cognitive load, enhance operational efficiency, and remove process bottlenecks by providing a reliable and scalable platform for building, deploying, and managing applications.

Importance of Tracking Platform Engineering KPIs

Helps in Performance Monitoring and Optimization

Platform Engineering KPIs offer insights into how well the platform performs under various conditions. They also help identify gaps and areas that need optimization to ensure the platform runs efficiently.

Ensures Scalability and Capacity Planning

These metrics guide decisions on how to scale resources. They also support capacity planning, i.e., ensuring the platform can handle growth and increased load without performance degradation.

Quality Assurance

Tracking KPIs ensures that the platform remains robust and maintainable. This further helps to reduce technical debt and improve the platform’s overall quality.

Increases Productivity and Collaboration

They provide in-depth insights into how effectively the engineering team operates and help to identify areas for improvement in team dynamics and processes.

Fosters a Culture of Continuous Improvement

Regularly tracking and analyzing KPIs fosters a culture of continuous improvement, encouraging proactive problem-solving and innovation among platform engineers.

Top Platform Engineering KPIs to Track 

Deployment Frequency 

Deployment Frequency measures how often code is deployed into production. It takes into account everything from bug fixes and capability improvements to new features. It is a key metric for understanding the agility and efficiency of development and operational processes and highlights the team’s ability to deliver updates and new features.

A higher frequency with minimal issues reflects mature CI/CD processes and a platform engineering team’s ability to adapt quickly to change. Regularly tracking and acting on Deployment Frequency supports continuous improvement, as it reduces the risk of large, disruptive changes and helps deliver value to end-users effectively.

Lead Time for Changes

Lead Time is the duration between a code change being committed and its successful deployment to end-users. It is correlated with both the speed and quality of the platform engineering team. A high lead time is a clear sign of roadblocks in the process and that the platform needs attention.

A low lead time indicates that teams quickly adapt to feedback and deliver products on time. It also gives teams the ability to make rapid changes, allowing them to adapt to evolving user needs and market conditions. Tracking it regularly helps in streamlining workflows and reducing bottlenecks.

Change Failure Rate

Change Failure Rate refers to the proportion or percentage of deployments that result in failure or errors. It indicates the rate at which changes negatively impact the stability or functionality of the system. CFR also provides a clear view of the platform’s quality and stability, e.g., how much effort goes into addressing problems and releasing code.

A lower CFR indicates that deployments are reliable, changes are thoroughly tested, and issues are less likely to reach production. It also reflects well-functioning development and deployment processes, boosting team confidence and morale.

Mean Time to Restore

Mean Time to Restore (MTTR) represents the average time taken to resolve a production failure or incident and restore normal system functionality. A low MTTR indicates that the platform is resilient, recovers quickly from issues, and that incident response is efficient.

Faster recovery time minimizes the impact on users, increasing their satisfaction and trust in service. Moreover, it contributes to higher system uptime and availability and enhances your platform’s reputation, giving you a competitive edge. 

Resource Utilization 

This KPI tracks the usage of system resources. It is a critical metric that optimizes resource allocation and cost efficiency. Resource Utilization balances several objectives with a fixed amount of resources. 

It allows platform engineers to distribute limited resources evenly and efficiently and understand where exactly to spend. Resource Utilization also aids in capacity planning and helps in avoiding potential bottlenecks. 

Error Rates

Error Rates measure the number of errors encountered in the platform. They reflect the stability, reliability, and user experience of the platform. High Error Rates indicate underlying problems that need immediate attention and, if left unaddressed, can degrade the user experience, leading to frustration and potential loss of users.

Monitoring Error Rates helps in the early detection of issues, enabling proactive response, and preventing minor issues from escalating into major outages. It also provides valuable insights into system performance and creates a feedback loop that informs continuous improvement efforts. 
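As a rough illustration of how this KPI might be tracked, the sketch below computes an error rate from a batch of request logs and flags a breach of an alert threshold. The log structure and the 2% threshold are assumptions made for the example, not recommendations.

```python
# Minimal sketch: compute an error rate from request logs and flag a breach.
# The log format and the 2% threshold are illustrative assumptions.
requests_log = [
    {"path": "/api/orders", "status": 200},
    {"path": "/api/orders", "status": 500},
    {"path": "/api/users", "status": 200},
    {"path": "/api/users", "status": 200},
]

errors = sum(1 for r in requests_log if r["status"] >= 500)
error_rate = errors / len(requests_log)

ALERT_THRESHOLD = 0.02  # 2% of requests

if error_rate > ALERT_THRESHOLD:
    print(f"Error rate {error_rate:.1%} exceeds threshold - investigate before it escalates")
else:
    print(f"Error rate {error_rate:.1%} within acceptable bounds")
```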

Team Velocity 

Team Velocity is a critical metric that measures the amount of work completed in a given iteration (e.g., a sprint). It highlights developer productivity and efficiency and aids in planning and prioritizing future tasks.

It helps to forecast the completion dates of larger projects or features, aiding in long-term planning and setting stakeholder expectations. Team Velocity also helps to understand the platform teams’ capacity to evenly distribute tasks and prevent overloading team members. 
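A minimal sketch of how velocity and a naive completion forecast could be computed is shown below; the story-point figures and backlog size are invented for illustration.

```python
# Minimal sketch: rolling team velocity and a naive completion forecast.
# Story-point values and backlog size are made up for illustration.
completed_points_per_sprint = [21, 26, 24, 30, 27]  # last five sprints

# Velocity: average points completed per sprint (a rolling window keeps it current).
velocity = sum(completed_points_per_sprint[-3:]) / 3

remaining_backlog_points = 160
sprints_to_finish = remaining_backlog_points / velocity

print(f"Rolling velocity: {velocity:.1f} points/sprint")
print(f"Estimated sprints to clear the backlog: {sprints_to_finish:.1f}")
```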

How to Develop a Platform Engineering KPI Plan? 

Define Objectives 

Firstly, ensure that the KPIs support the organization’s broader objectives. A few of them include improving system reliability, enhancing user experience, or increasing development efficiency. Always focus on metrics that reflect the unique aspects of platform engineering. 

Identify Key Performance Indicators 

Select KPIs that provide a comprehensive view of platform engineering performance. We’ve shared some critical KPIs above. Choose those KPIs that fit your objectives and other considered factors. 

Establish Baseline and Targets

Assess current performance levels of software engineers to establish baselines. Set targets and ensure they are realistic and achievable for each KPI. They must be based on historical data, industry benchmarks, and business objectives.
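As a simple illustration, a baseline can be derived from historical samples and paired with a stretch target; the lead-time figures and the 20% improvement goal below are assumptions, not benchmarks.

```python
import statistics

# Minimal sketch: derive a baseline from historical data and set a target.
# The weekly lead-time samples (in hours) are illustrative.
lead_time_hours_last_quarter = [52, 61, 48, 70, 55, 66, 58, 49, 63, 57, 60, 54]

baseline = statistics.median(lead_time_hours_last_quarter)

# Target: a 20% improvement on the baseline, to be revisited each quarter
# against industry benchmarks and business objectives.
target = baseline * 0.8

print(f"Baseline lead time: {baseline:.0f} h, target: {target:.0f} h")
```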

Analyze and Interpret Data

Regularly analyze trends in the data to identify patterns, anomalies, and areas for improvement. Set up alerts for critical KPIs that require immediate attention. Don’t forget to conduct root cause analysis for any deviations from expected performance to understand underlying issues.

Review and Refine KPIs

Lastly, review the relevance and effectiveness of the KPIs periodically to ensure they align with business objectives and provide value. Adjust targets based on changes in business goals, market conditions, or team capacity.

Typo - An Effective Platform Engineering Tool 

Typo is an effective platform engineering tool that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as Git versioning, issue trackers, and CI/CD tools.

It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code review tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360° view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.

Conclusion

Monitoring the right KPIs is essential for successful platform teams. By treating your platform as an internal product and your teams as customers, you can focus on delivering value and driving continuous improvement. The KPIs discussed above provide a comprehensive view of your platform's performance and areas for enhancement.

There are other KPIs available as well that we have not mentioned. Do your research and consider those that best suit your team and objectives.

All the best! 

DevEx


Measuring and Improving Developer Productivity

Developer productivity is the new buzzword across the industry. Measuring developer productivity has gone mainstream since the shift to remote work, and companies like McKinsey are publishing articles titled “Yes, you can measure software developer productivity,” causing a stir in the software development community. So we thought we should share our take on developer productivity.

We will be covering the following Whats, Whys & Hows about Developer Productivity in this piece-

  • What is developer productivity?
  • Why do we need to measure developer productivity?
  • How do we measure it at the team and individual level, and why is it more complicated to measure developer productivity than sales or hiring productivity?
  • Challenges and dangers of measuring developer productivity, and what not to measure.
  • What is the impact of measuring developer productivity on engineering culture?

What is Developer Productivity?

Developer productivity refers to the effectiveness and efficiency with which software developers create high-quality software that meets business goals. It encompasses various dimensions, including code quality, development speed, team collaboration, and adherence to best practices. For engineering managers and leaders, understanding developer productivity is essential for driving continuous improvement and achieving successful project outcomes.

Key Aspects of Developer Productivity

Quality of Output: Developer productivity is not just about the quantity of code or code changes produced; it also involves the quality of that code. High-quality code is maintainable, readable, and free of significant bugs, which ultimately contributes to the overall success of a project.

Development Speed: This aspect measures how quickly developers can deliver features, fixes, and updates (often referred to as developer velocity). While velocity is important, it should not come at the expense of code quality. Effective engineering teams strike a balance between delivering quickly and maintaining high standards.

Collaboration and Team Dynamics: Successful software development relies heavily on effective teamwork. Collaboration tools and practices that foster communication and knowledge sharing can significantly enhance developer productivity. Engineering managers should prioritize creating a collaborative environment that encourages teamwork.

Adherence to Best Practices for Outcomes: Following coding standards, conducting code review, and implementing testing protocols are essential for maintaining development productivity. These practices ensure that developers produce high-quality work consistently, which can lead to improved project outcomes.

Why do we need to measure dev productivity?

We all know that no one loves to be measured, but CEOs and CFOs have an undying love for measuring the ROI of their teams, which we can’t ignore. The higher the development productivity, the higher the ROI. However, measuring developer productivity is also essential for engineering managers and leaders who want to optimize their teams’ performance: we can’t improve something that we don’t measure.

Understanding how effectively developers work can lead to improved project outcomes, better resource allocation, and enhanced team morale. In this section, we will explore the key reasons why measuring developer productivity is crucial for engineering management.

Enhancing Team Performance

Measuring developer productivity allows engineering managers to identify strengths and weaknesses within their teams. By analyzing developer productivity metrics, leaders can pinpoint areas where developers excel and where they may need additional support or resources. This insight enables managers to tailor training programs, allocate tasks more effectively, and foster a culture of continuous improvement.

Team's insights in Typo

Driving Business Outcomes

Developer productivity is directly linked to business success. By measuring development team productivity, managers can assess how effectively their teams deliver features, fix bugs, and contribute to overall project goals. Understanding productivity levels helps align development efforts with business objectives, ensuring that the team is focused on delivering value that meets customer needs.

Improving Resource Allocation

Effective measurement of developer productivity enables better resource allocation. By understanding how much time and effort are required for various tasks, managers can make informed decisions about staffing, project timelines, and budget allocation. This ensures that resources are utilized efficiently, minimizing waste and maximizing output.

Fostering a Positive Work Environment

Measuring developer productivity can also contribute to a positive work environment. By recognizing high-performing teams and individuals, managers can boost morale and motivation. Additionally, understanding productivity trends can help identify burnout or dissatisfaction, allowing leaders to address issues proactively and create a healthier workplace culture.

Developer surveys insights in Typo

Facilitating Data-Driven Decisions

In today’s fast-paced software development landscape, data-driven decision-making is essential. Measuring developer productivity provides concrete data that can inform strategic decisions. Whether it's choosing new tools, adopting agile methodologies, or implementing process changes, having reliable developer productivity metrics allows managers to make informed choices that enhance team performance.

Investment distribution in Typo

Encouraging Collaboration and Communication

Regularly measuring productivity can highlight the importance of collaboration and communication within teams. By assessing metrics related to teamwork, such as code reviews and pair programming sessions, managers can encourage practices that foster collaboration. This not only improves productivity but also the overall developer experience by strengthening team dynamics and knowledge sharing.

Ultimately, understanding developer experience and measuring developer productivity leads to better outcomes for both the team and the organization as a whole.

How do we measure Developer Productivity?

Measuring developer productivity is essential for engineering managers and leaders who want to optimize their teams' performance.

Strategies for Measuring Productivity

Focus on Outcomes, Not Outputs: Shift the emphasis from measuring outputs like lines of code to focusing on outcomes that align with business objectives. This encourages developers to think more strategically about the impact of their work.

Measure at the Team Level: Assess productivity at the team level rather than at the individual level. This fosters team collaboration, knowledge sharing, and a focus on collective goals rather than individual competition.

Incorporate Qualitative Feedback: Balance quantitative metrics with qualitative feedback from developers through surveys, interviews, and regular check-ins. This provides valuable context and helps identify areas for improvement.

Encourage Continuous Improvement: Position productivity measurement as a tool for continuous improvement rather than a means of evaluation. Encourage developers to use metrics to identify areas for growth and work together to optimize workflows and development processes.

Lead by Example: As engineering managers and leaders, model the behavior you want to see in your team & team members. Prioritize work-life balance, encourage risk-taking and innovation, and create an environment where developers feel supported and empowered.

Measuring Dev productivity involves assessing both team and individual contributions to understand how effectively developers are delivering value through their development processes. Here’s how to approach measuring productivity at both levels:

Team-Level Developer Productivity

Measuring productivity at the team level provides a more comprehensive view of how collaborative efforts contribute to project success. Here are some effective metrics:

DORA Metrics

The DevOps Research and Assessment (DORA) metrics are widely recognized for evaluating team performance. Key metrics include:

  • Deployment Frequency: How often the software engineering team releases code to production.
  • Lead Time for Changes: The time taken for committed code to reach production.
  • Change Failure Rate: The percentage of deployments that result in failures.
  • Time to Restore Service: The time taken to recover from a failure.

Issue Cycle Time

This metric measures the time taken from the start of work on a task to its completion, providing insights into the efficiency of the software development process.
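A minimal sketch of how issue cycle time could be computed from work-started and work-completed timestamps is shown below; the field names and sample issues are illustrative, not a specific issue tracker's schema.

```python
from datetime import datetime, timedelta

# Minimal sketch: issue cycle time from work-started to work-completed timestamps.
# Field names are assumptions, not a standard issue-tracker schema.
issues = [
    {"started": datetime(2024, 6, 3, 10), "completed": datetime(2024, 6, 5, 16)},
    {"started": datetime(2024, 6, 4, 9),  "completed": datetime(2024, 6, 10, 11)},
    {"started": datetime(2024, 6, 6, 14), "completed": datetime(2024, 6, 7, 17)},
]

cycle_times = [i["completed"] - i["started"] for i in issues]
avg_cycle_time = sum(cycle_times, timedelta()) / len(cycle_times)

print(f"Average issue cycle time: {avg_cycle_time}")
```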

Team Satisfaction and Engagement

Surveys and feedback mechanisms can gauge team morale and satisfaction, which are critical for long-term productivity.

Collaboration Metrics

Assessing the frequency and quality of code reviews, pair programming sessions, and communication can provide insights into how well the software engineering team collaborates.

Individual Developer Productivity

While team-level metrics are crucial, individual developer productivity also matters, particularly for performance evaluations and personal development. Here are some metrics to consider:

  • Pull Requests and Code Reviews: Tracking the number of pull requests submitted and the quality of code reviews can provide insights into an individual developer's engagement and effectiveness.
  • Commit Frequency: Measuring how often a developer commits code can indicate their active participation in projects, though it should be interpreted with caution to avoid incentivizing quantity over quality.
  • Personal Goals and Outcomes: Setting individual objectives related to project deliverables and tracking their completion can help assess individual productivity in a meaningful way.
  • Skill Development: Encouraging developers to pursue training and certifications can enhance their skills, contributing to overall productivity.
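As a rough illustration of how a few of the signals listed above might be pulled together, the sketch below aggregates pull requests opened and reviews given per developer from illustrative in-memory records; in practice the data would come from your Git hosting platform, and the counts should be read as engagement signals rather than productivity verdicts.

```python
from collections import defaultdict

# Minimal sketch: aggregate a few individual signals from pull-request records.
# The records are illustrative; real data would come from your Git host.
pull_requests = [
    {"author": "asha", "reviewers": ["ben"], "review_comments": 4, "merged": True},
    {"author": "ben",  "reviewers": ["asha", "coro"], "review_comments": 7, "merged": True},
    {"author": "asha", "reviewers": ["coro"], "review_comments": 2, "merged": False},
]

summary = defaultdict(lambda: {"prs_opened": 0, "reviews_given": 0})
for pr in pull_requests:
    summary[pr["author"]]["prs_opened"] += 1
    for reviewer in pr["reviewers"]:
        summary[reviewer]["reviews_given"] += 1

# Counts are signals of engagement, not verdicts on productivity;
# they should always be read alongside qualitative context.
for dev, stats in summary.items():
    print(dev, stats)
```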

Measuring developer productivity metrics presents unique challenges compared to more straightforward metrics used in sales or hiring. Here are some reasons why:

  • Complexity of Work: Software development involves intricate problem-solving, creativity, and collaboration, making it difficult to quantify contributions accurately. Unlike sales, where metrics like revenue generated are clear-cut, developer productivity encompasses various qualitative aspects that are harder to measure for project management.
  • Collaborative Nature: Development work is highly collaborative. Individual contributions are often intertwined with team efforts, making it challenging to isolate the impact of one developer's work. In sales, individual performance is typically more straightforward to assess based on personal sales figures.
  • Inadequate Traditional Metrics: Traditional metrics such as Lines of Code (LOC) and commit frequency often fail to capture the true essence of developer productivity. These metrics can incentivize quantity over quality, leading developers to produce more code without necessarily improving the software's functionality or maintainability. This focus on superficial metrics can distort the understanding of a developer's actual contributions.
  • Varied Work Activities: Developers engage in various activities beyond coding, including debugging, code reviews, and meetings. These essential tasks are often overlooked in productivity measurements, whereas sales roles typically have more consistent and quantifiable activities.
  • Productivity Tools and Software development Process: The developer productivity tools and methodologies used in software development are constantly changing, making it difficult to establish consistent metrics. In contrast, sales processes tend to be more stable, allowing for easier benchmarking and comparison.

By employing a balanced approach that considers both quantitative and qualitative factors, with a few developer productivity tools, engineering leaders can gain valuable insights into their teams' productivity and foster an environment of continuous improvement & better developer experience.

Challenges of measuring Developer Productivity - What not to Measure?

Measuring developer productivity is a critical task for engineering managers and leaders, yet it comes with its own set of challenges and potential pitfalls. Understanding these challenges is essential to avoid the dangers of misinterpretation and to ensure that developer productivity metrics genuinely reflect the contributions of developers. In this section, we will explore the challenges of measuring developer productivity and highlight what not to measure.

Challenges of Measuring Developer Productivity

  • Complexity of Software Development: Software development is inherently complex, involving creativity, problem-solving, and collaboration. Unlike more straightforward fields like sales, where performance can be quantified through clear metrics (e.g., sales volume), developer productivity is multifaceted and includes various non-tangible elements. This complexity makes it difficult to establish a one-size-fits-all metric.
  • Inadequate Traditional Metrics: Traditional metrics such as Lines of Code (LOC) and commit frequency often fail to capture the true essence of developer productivity. These metrics can incentivize quantity over quality, leading developers to produce more code without necessarily improving the software's functionality or maintainability. This focus on superficial metrics can distort the understanding of a developer's actual contributions.
  • Team Dynamics and Collaboration: Measuring individual productivity can overlook the collaborative nature of software development. Developers often work in teams where their contributions are interdependent. Focusing solely on individual metrics may ignore the synergistic effects of collaboration, mentorship, and knowledge sharing, which are crucial for a team's overall success.
  • Context Ignorance: Developer productivity metrics often fail to consider the context in which developers work. Factors such as project complexity, team dynamics, and external dependencies can significantly impact productivity but are often overlooked in traditional assessments. This lack of context can lead to misleading conclusions about a developer's performance.
  • Potential for Misguided Incentives: Relying heavily on specific metrics can create perverse incentives. For example, if developers are rewarded based on the number of commits, they may prioritize frequent small commits over meaningful contributions. This can lead to a culture of "gaming the system" rather than fostering genuine productivity and innovation.

What Not to Measure

  • Lines of Code (LOC): While LOC can provide some insight into coding activity, it is not a reliable measure of productivity. More code does not necessarily equate to better software. Instead, focus on the quality and impact of the code produced.
  • Commit Frequency: Tracking how often developers commit code can give a false sense of productivity. Frequent commits do not always indicate meaningful progress and can encourage developers to break down their work into smaller, less significant pieces.
  • Bug Counts: Focusing on the number of bugs reported or fixed can create a negative environment where developers feel pressured to avoid complex tasks that may introduce bugs. This can stifle innovation and lead to a culture of risk aversion.
  • Time Spent on Tasks: Measuring how long developers spend on specific tasks can be misleading. Developers may take longer on complex problems that require deep thinking and creativity, which are essential for high-quality software development.

Measuring developer productivity is fraught with challenges and dangers that engineering managers must navigate carefully. By understanding these complexities and avoiding outdated or superficial metrics, leaders can foster a more accurate and supportive environment for their development teams.

What is the impact of measuring Dev productivity on engineering culture?

Developer productivity improvements are a critical factor in the success of software development projects. As engineering managers or technology leaders, measuring and optimizing developer productivity is essential for driving development team productivity and delivering successful outcomes. However, measuring development productivity can have a significant impact on engineering culture & software engineering talent, which must be carefully navigated. Let's talk about measuring developer productivity while maintaining a healthy and productive engineering culture.

Measuring developer productivity presents unique challenges compared to other fields. The complexity of software development, inadequate traditional metrics, team dynamics, and lack of context can all lead to misguided incentives and decreased morale. It's crucial for engineering managers to understand these challenges to avoid the pitfalls of misinterpretation and ensure that developer productivity metrics genuinely reflect the contributions of developers.

Remember, the goal is not to maximize metrics but to create a development environment where software engineers can thrive and deliver maximum value to the organization.

Development teams using Typo experience a 30% improvement in Developer Productivity. Want to Try Typo?

Member's insights in Typo

Optimizing Code Reviews to Boost Developer Productivity

Code review is all about improving the code quality. When not done correctly, however, it can be a nightmare for developers: they may run into code review challenges that slow down the entire development process, reducing morale and efficiency and resulting in developer burnout.

Hence, optimizing the code review process is crucial for both code reviewers and developers. In this blog post, we have shared a few tips on optimizing code reviews to boost developer productivity.

Importance of Code Reviews

The Code review process is an essential stage in the software development life cycle. It has been a defining principle in agile methodologies. It ensures high-quality code and identifies potential issues or bugs before they are deployed into production.

Another notable benefit of code reviews is that they help maintain a continuous integration and delivery pipeline and ensure code changes are aligned with project requirements. They also ensure that the product meets quality standards, contributing to the overall success of the sprint or iteration.

With a consistent code review process, the development team can limit the risks of unnoticed mistakes and prevent a significant amount of tech debt.

They also make sure that the code meets the set acceptance criteria and functional specifications, and that consistent coding styles are followed across the codebase.

Lastly, code reviews provide an opportunity for developers to learn from each other and improve their coding skills, which fosters continuous growth and helps raise the overall code quality.

How do Ineffective Code Reviews Decrease Developer Productivity?

Unclear Standards and Inconsistencies

When code reviews lack clear guidelines or consistent evaluation criteria, developers may feel uncertain about what is expected of them. Varied interpretations of code quality and style create ambiguity, and developers spend a lot of time fixing issues based on different reviewers’ subjective opinions. This leads to frustration and decreased morale among developers.

Increase in Bottlenecks and Delays

When developers wait for feedback for an extended period, it prevents them from progressing. This slows down the entire software development lifecycle, resulting in missed deadlines and decreased morale, and negatively affects the deployment timeline, customer satisfaction, and overall business outcomes.

Low Quality and Delayed Feedback

When reviewers communicate vague, unclear, or delayed feedback, critical information is often missed. This forces developers to switch context and lose focus on their current tasks, and they then need to refamiliarize themselves with the code once the review is finally completed, resulting in lost productivity.

Increased Cognitive Load

Frequent switching between writing and reviewing code requires a lot of mental effort, making it harder for developers to stay focused and productive. Poorly structured, conflicting, or unclear feedback also leaves developers unsure about what to prioritize first and about the rationale behind suggested changes. This slows down progress, leading to decision fatigue and reducing the quality of work.

Knowledge Gaps and Lack of Context

Knowledge gaps usually arise when reviewers lack the necessary domain knowledge or context about specific parts of the codebase. This can lead to misguided feedback or overlooked issues, and developers may need extra time to justify their decisions and educate reviewers.

How to Optimize Code Review Process to Improve Developer Productivity?

Set Clear Goals and Standards

Establish clear objectives, coding standards, and expectations for code reviews. Communicate in advance with developers such as how long reviews should take and who will review the code. This allows both reviewers and developers to focus their efforts on relevant issues and prevent their time being wasted on insignificant matters.

Use a Code Review Checklist

Code review checklists include a predetermined set of questions and rules that the team will follow during the code review process. A few of the necessary quality checks include:

  • Readability and maintainability: This is the first criterion and cannot be overstated.
  • Uniform formatting: Is the code, with consistent indentation, spacing, and naming conventions, easy to understand?
  • Testing and quality assurance: Has the code gone through meticulous testing and quality assurance processes?
  • Boundary testing: Are we exploring extreme scenarios and boundary conditions to identify hidden problems?
  • Security and performance: Are we ensuring security and performance in our source code?
  • Architectural integrity: Is the code scalable, sustainable, and built on a solid architectural design?

Prioritize High-Impact Issues

Issues must be prioritized based on their severity and impact; not every issue in the code review process is equally important. Address issues that affect system performance, security, or major features first, and review them more thoroughly than smaller, less impactful changes. This helps in allocating time and resources effectively.

Encourage Constructive Feedback

Always share specific, honest, and actionable feedback with the developers. The feedback must point in the right direction and must explain the ‘why’ behind it. It will reduce follow-ups and give necessary context to the developers. This also helps the engineering team to improve their skills and produce better code which further results in a high-quality codebase.

Automate Wherever Possible

Use automation tools such as style check, syntax check, and static code analysis tools to speed up the review process. This allows for routine checks for style, syntax errors, potential bugs, and performance issues and reduces the manual effort needed on such tasks. Automation allows developers to focus on more complex issues and allocate time more effectively.
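A minimal sketch of such a pre-review gate is shown below: it shells out to a linter before the code is handed to a human reviewer. It assumes flake8 is installed; any style or static-analysis tool your team already uses could be substituted.

```python
import subprocess
import sys

# Minimal sketch: run automated style/static checks before human review.
# Assumes flake8 is installed; swap in whichever linters your team uses.
def pre_review_checks(paths: list[str]) -> bool:
    result = subprocess.run(
        ["flake8", *paths],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print("Automated checks found issues - fix these before requesting review:")
        print(result.stdout)
        return False
    print("Automated checks passed - ready for human review.")
    return True

if __name__ == "__main__":
    ok = pre_review_checks(sys.argv[1:] or ["src/"])
    sys.exit(0 if ok else 1)
```

Wiring a script like this into a pre-commit hook or CI job keeps routine checks out of the reviewer's workload entirely.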

Keep Reviews Small and Focused

Break down code into smaller, manageable chunks. This will be less overwhelming and time-consuming. The code reviewers can concentrate on details, adhere to the style guide and coding standards, and identify potential bugs. This will allow them to provide meaningful feedback more effectively. This helps in a deeper understanding of the code’s impact on the overall project.

Recognize and Reward Good Work

Acknowledge and celebrate developers who consistently produce high-quality code. This enables developers to feel valued for their contributions, leading to increased engagement, job satisfaction, and a sense of ownership in the project’s success. They are also more likely to continue producing high-quality code and actively participate in the review process.

Encourage Pair Programming or Pre-Review

Encourage pair programming or pre-review sessions to enable real-time feedback, reduce review time, and improve code quality. This fosters collaboration, enhances knowledge sharing, and helps catch issues early, leading to smoother and more effective reviews. It also promotes team bonding, streamlines communication, and cultivates a culture of continuous learning and improvement.

Use a Software Engineering Analytics Platform

Using an engineering analytics platform in an organization is a powerful way to optimize the code review process and improve developer productivity. It provides comprehensive insights into code quality, technical debt, and bug frequency, allowing teams to proactively identify bottlenecks and address issues in real time before they escalate. It also allows teams to monitor their practices continuously and make adjustments as needed.

Typo — Automated Code Review Tool

Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.

Key Features

  • Supports top 8 languages including C++ and C#.
  • Understands the context of the code and fixes issues accurately.
  • Optimizes code efficiently.
  • Provides automated debugging with detailed explanations.
  • Standardizes code and reduces the risk of a security breach.

Learn More About Typo

Conclusion

If you prioritize the code review process, follow the above-mentioned tips. They will help maximize code quality, improve developer productivity, and streamline the development process.

Happy reviewing!

Mastering Developer Productivity with the SPACE Framework

In the crazy world of software development, getting developers to be productive is like finding the Holy Grail for tech companies. When developers hit their stride, turning out valuable work at breakneck speed, it’s a win for everyone. But let’s be honest—traditional productivity metrics, like counting lines of code or tracking hours spent fixing bugs, are about as helpful as a screen door on a submarine.

Say hello to the SPACE framework: your new go-to for cracking the code on developer productivity. This approach doesn’t just dip a toe in the water—it dives in headfirst to give you a clear, comprehensive view of how your team is doing. With the SPACE framework, you’ll ensure your developers aren’t just busy—they’re busy being awesome and delivering top-quality work on the dot. So buckle up, because we’re about to take your team’s productivity to the next level!

Introduction to the SPACE Framework

The SPACE framework is a modern approach to measuring developer productivity, introduced in a 2021 paper by experts from GitHub and Microsoft Research. This framework goes beyond traditional metrics to provide a more accurate and holistic view of productivity.

Nicole Forsgren, the lead author, emphasizes that measuring productivity by lines of code or speed can be misleading. The SPACE framework integrates several key metrics to give a complete picture of developer productivity.

Detailed Breakdown of SPACE Metrics

The five SPACE framework dimensions are:

Satisfaction and Well-being

When developers are happy and healthy, they tend to be more productive. If they enjoy their work and maintain a good work-life balance, they're more likely to produce high-quality results. On the other hand, dissatisfaction and burnout can severely hinder productivity. For example, a study by Haystack Analytics found that during the COVID-19 pandemic, 81% of software developers experienced burnout, which significantly impacted their productivity. The SPACE framework encourages regular surveys to gauge developer satisfaction and well-being, helping you address any issues promptly.

Performance

Traditional metrics often measure performance by the number of features added or bugs fixed. However, this approach can be problematic. According to the SPACE framework, performance should be evaluated based on outcomes rather than output. This means assessing whether the code reliably meets its intended purpose, the time taken to complete tasks, customer satisfaction, and code reliability.

Activity

Activity metrics are commonly used to gauge developer productivity because they are easy to quantify. However, they only provide a limited view. Developer Activity is the count of actions or outputs completed over time, such as coding new features or conducting code reviews. While useful, activity metrics alone cannot capture the full scope of productivity.

Nicole Forsgren points out that factors like overtime, inconsistent hours, and support systems also affect activity metrics. Therefore, it's essential to consider routine tasks like meetings, issue resolution, and brainstorming sessions when measuring activity.

Collaboration and Communication

Effective communication and collaboration are crucial for any development team's success. Poor communication can lead to project failures, as highlighted by 86% of employees in a study who cited ineffective communication as a major reason for business failures. The SPACE framework suggests measuring collaboration through metrics like the discoverability of documentation, integration speed, quality of work reviews, and network connections within the team.

Efficiency and Flow

Flow is a state of deep focus where developers can achieve high levels of productivity. Interruptions and distractions can break this flow, making it challenging to return to the task at hand. The SPACE framework recommends tracking metrics such as the frequency and timing of interruptions, the time spent in various workflow stages, and the ease with which developers maintain their flow.

Benefits of the SPACE Framework

The SPACE framework offers several advantages over traditional productivity metrics. By considering multiple dimensions, it provides a more nuanced view of developer productivity. This comprehensive approach helps avoid the pitfalls of single metrics, such as focusing solely on lines of code or closed tickets, which can lead to gaming the system.

Moreover, the SPACE framework allows you to measure both the quantity and quality of work, ensuring that developers deliver high-quality software efficiently. This integrated view helps organizations make informed decisions about team productivity and optimize their workflows for better outcomes.

Implementing the SPACE Framework in Your Organization

Implementing the SPACE productivity framework effectively requires careful planning and execution. Below is a comprehensive plan and roadmap to guide you through the process. This detailed guide will help you tailor the SPACE framework to your organization's unique needs and ensure a smooth transition to this advanced productivity measurement approach.

Step 1: Understanding Your Current State

Objective: Establish a baseline by understanding your current productivity measurement practices and developer workflow.

  1. Conduct a Productivity Audit
    • Review existing metrics and tools like Typo used for tracking productivity. 
    • Identify gaps and limitations in current measurement methods.
    • Gather feedback from developers and managers on existing practices.
  2. Analyze Team Dynamics and Workflow
    • Map out your development process, identifying key stages and tasks.
    • Observe how teams collaborate, communicate, and handle interruptions.
    • Assess the overall satisfaction and well-being of your developers.

Outcome: A comprehensive report detailing your current productivity measurement practices, team dynamics, and workflow processes.

Step 2: Setting Goals and Objectives

Objective: Define clear goals and objectives for implementing the SPACE framework.

  1. Identify Key Business Objectives
    • Align the goals of the SPACE framework with your company's strategic objectives.
    • Focus on improving areas such as time-to-market, code quality, customer satisfaction, and developer well-being.
  2. Set Specific, Measurable, Achievable, Relevant, and Time-bound (SMART) Goals
    • Example Goals
      • Increase developer satisfaction by 20% within six months.
      • Reduce average bug resolution time by 30% over the next quarter.
      • Improve code review quality scores by 15% within the next year.

Outcome: A set of SMART goals that will guide the implementation of the SPACE framework.

Step 3: Selecting and Customizing SPACE Metrics

Objective: Choose the most relevant SPACE metrics and customize them to fit your organization's needs.

  1. Review SPACE Metrics
    • Satisfaction and Well-being
    • Performance
    • Activity
    • Collaboration and Communication
    • Efficiency and Flow
  2. Customize Metrics
    • Tailor each metric to align with your organization's specific context and objectives.
    • Example Customizations
      • Satisfaction and Well-being: Conduct quarterly surveys to measure job satisfaction and work-life balance.
      • Performance: Track the reliability of code and customer feedback on delivered features.
      • Activity: Measure the number of completed tasks, code commits, and other relevant activities.
      • Collaboration and Communication: Monitor the quality of code reviews and the speed of integrating work.
      • Efficiency and Flow: Track the frequency and duration of interruptions and the time spent in flow states.

Outcome: A customized set of SPACE metrics tailored to your organization's needs.
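
As a rough illustration of what such a customized metric set might look like once written down, the sketch below encodes the example customizations above as a simple configuration. The cadences and targets are placeholder assumptions, not recommendations.

```python
# A minimal sketch of a customized SPACE metric configuration.
# Cadences and targets are illustrative assumptions to be replaced with your own.
space_metrics = {
    "Satisfaction and Well-being": {
        "metric": "Quarterly satisfaction survey score",
        "cadence": "quarterly",
        "target": "average score >= 4.0 / 5",
    },
    "Performance": {
        "metric": "Code reliability and customer feedback on delivered features",
        "cadence": "per release",
        "target": "change failure rate < 15%",
    },
    "Activity": {
        "metric": "Completed tasks and code commits",
        "cadence": "per sprint",
        "target": "track the trend only (no quota)",
    },
    "Collaboration and Communication": {
        "metric": "Code review quality and speed of integrating work",
        "cadence": "per sprint",
        "target": "median review turnaround < 1 business day",
    },
    "Efficiency and Flow": {
        "metric": "Interruption frequency and time in flow",
        "cadence": "weekly",
        "target": "at least one 2-hour focus block per developer per day",
    },
}

for dimension, cfg in space_metrics.items():
    print(f"{dimension}: {cfg['metric']} ({cfg['cadence']})")
```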

Step 4: Implementing Measurement Tools and Processes

Objective: Implement tools and processes to measure and track the selected SPACE metrics.

  1. Choose Appropriate Tools
    • Use project management tools like Jira or Trello to track activity and performance metrics.
    • Implement collaboration tools such as Slack, Microsoft Teams, or Confluence to facilitate communication and knowledge sharing.
    • Utilize code review tools like CodeIQ by Typo to monitor the quality of code and collaboration.
  2. Set Up Data Collection Processes
    • Establish processes for collecting and analyzing data for each metric.
    • Ensure that data collection is automated wherever possible to reduce manual effort and improve accuracy.
  3. Train Your Team
    • Provide training sessions for developers and managers on using the new tools and understanding the SPACE metrics.
    • Encourage open communication and address any concerns or questions from the team.

Outcome: A fully implemented set of tools and processes for measuring and tracking SPACE metrics.
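
As one example of the automated data collection mentioned in Step 4, the sketch below counts commits per author from a local Git repository. The repository path and time window are assumptions you would adjust for your own setup.

```python
import subprocess
from collections import Counter

def commits_per_author(repo_path=".", since="30 days ago"):
    """Count commits per author over a recent window using the local Git history."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--pretty=format:%an"],
        capture_output=True, text=True, check=True,
    )
    authors = [line for line in log.stdout.splitlines() if line.strip()]
    return Counter(authors)

if __name__ == "__main__":
    for author, count in commits_per_author().most_common():
        print(f"{author}: {count} commits")
```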

Step 5: Regular Monitoring and Review

Objective: Continuously monitor and review the metrics to ensure ongoing improvement.

  1. Establish Regular Review Cycles
    • Conduct monthly or quarterly reviews of the SPACE metrics to track progress towards goals.
    • Hold team meetings to discuss the results, identify areas for improvement, and celebrate successes.
  2. Analyze Trends and Patterns
    • Look for trends and patterns in the data to gain insights into team performance and productivity.
    • Use these insights to make informed decisions and adjustments to workflows and processes.
  3. Solicit Feedback
    • Regularly gather feedback from developers and managers on the effectiveness of the SPACE framework.
    • Use this feedback to make continuous improvements to the framework and its implementation.

Outcome: A robust monitoring and review process that ensures the ongoing effectiveness of the SPACE framework.
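
A trend check during these reviews can be as simple as comparing each cycle against the previous one. The sketch below does this for hypothetical quarterly cycle-time figures; the numbers are invented for illustration.

```python
# Hypothetical median cycle times (in days) collected at each quarterly review.
cycle_time_by_quarter = {"Q1": 6.4, "Q2": 5.9, "Q3": 6.8, "Q4": 5.1}

quarters = list(cycle_time_by_quarter)
for prev, curr in zip(quarters, quarters[1:]):
    change = cycle_time_by_quarter[curr] - cycle_time_by_quarter[prev]
    direction = "improved" if change < 0 else "regressed"
    print(f"{prev} -> {curr}: {change:+.1f} days ({direction})")
```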

Step 6: Continuous Improvement and Adaptation

Objective: Adapt and improve the SPACE framework based on feedback and evolving needs.

  1. Iterate and Improve
    • Continuously refine and improve the SPACE metrics based on feedback and observed results.
    • Adapt the framework to address new challenges and opportunities as they arise.
  2. Foster a Culture of Continuous Improvement
    • Encourage a culture of continuous improvement within your development teams.
    • Promote openness to change and a willingness to experiment with new ideas and approaches.
  3. Share Success Stories
    • Share success stories and best practices with the broader organization to demonstrate the value of the SPACE framework.
    • Use these stories to inspire other teams and encourage the adoption of the framework across the organization.

Outcome: A dynamic and adaptable SPACE framework that evolves with your organization's needs.

Conclusion

Implementing the SPACE framework is a strategic investment in your organization's productivity and success. By following this comprehensive plan and roadmap, you can effectively integrate the SPACE metrics into your development process, leading to improved performance, satisfaction, and overall productivity. Embrace the journey of continuous improvement and leverage the insights gained from the SPACE framework to unlock the full potential of your development teams.

SPACE Framework: How to Measure Developer Productivity

In today’s fast-paced software development world, understanding and improving developer productivity is more crucial than ever. One framework that has gained prominence for its comprehensive approach to measuring and enhancing productivity is the SPACE Framework. This framework, developed by industry experts and backed by extensive research, offers a multi-dimensional perspective on productivity that transcends traditional metrics.

This blog delves deep into the genesis of the SPACE Framework, its components, and how it can be effectively implemented to boost developer productivity. We’ll also explore real-world success stories of companies that have benefited from adopting this framework.

The genesis of the SPACE Framework

The SPACE Framework was introduced by researchers Nicole Forsgren, Margaret-Anne Storey, Chandra Maddila, Thomas Zimmermann, Brian Houck, and Jenna Butler. Their work was published in a paper titled “The SPACE of Developer Productivity: There’s More to it than You Think!”, which emphasizes that no single metric can measure developer productivity. Instead, productivity should be viewed through multiple lenses to capture a holistic picture.

Components of the SPACE Framework

The SPACE Framework is an acronym that stands for:

  1. Satisfaction and Well-being
  2. Performance
  3. Activity
  4. Communication and Collaboration
  5. Efficiency and Flow

Each component represents a critical aspect of developer productivity, ensuring a balanced approach to measurement and improvement.

Detailed breakdown of the SPACE Framework

1. Satisfaction and Well-being

Definition: This dimension focuses on how satisfied and happy developers are with their work and environment. It also considers their overall well-being, which includes factors like work-life balance, stress levels, and job fulfillment.

Why It Matters: Happy developers are more engaged, creative, and productive. Ensuring high satisfaction and well-being can reduce burnout and turnover, leading to a more stable and effective team.

Metrics to Consider:

  • Employee satisfaction surveys
  • Work-life balance scores
  • Burnout indices
  • Turnover rates

2. Performance

Definition: Performance measures the outcomes of developers’ work, including the quality and impact of the software they produce. This includes assessing code quality, deployment frequency, and the ability to meet user needs.

Why It Matters: High performance indicates that the team is delivering valuable software efficiently. It helps in maintaining a competitive edge and ensuring customer satisfaction.

Metrics to Consider:

  • Code quality metrics (e.g., number of bugs, code review scores)
  • Deployment frequency
  • Customer satisfaction ratings
  • Feature adoption rates

3. Activity

Definition: Activity tracks the actions developers take, such as the number of commits, code reviews, and feature development. This component focuses on the volume and types of activities rather than their outcomes.

Why It Matters: Monitoring activity helps understand workload distribution and identify potential bottlenecks or inefficiencies in the development process.

Metrics to Consider:

  • Number of commits per developer
  • Code review participation
  • Task completion rates
  • Meeting attendance

4. Communication and Collaboration

Definition: This dimension assesses how effectively developers interact with each other and with other stakeholders. It includes evaluating the quality of communication channels and collaboration tools used.

Why It Matters: Effective communication and collaboration are crucial for resolving issues quickly, sharing knowledge, and fostering a cohesive team environment. Poor communication can lead to misunderstandings and project delays.

Metrics to Consider:

  • Frequency and quality of team meetings
  • Use of collaboration tools (e.g., Slack, Jira)
  • Cross-functional team interactions
  • Feedback loops

5. Efficiency and Flow

Definition: Efficiency and flow measure how smoothly the development process operates, including how well developers can focus on their tasks without interruptions. It also looks at the efficiency of the processes and tools in place.

Why It Matters: High efficiency and flow indicate that developers can work without unnecessary disruptions, leading to higher productivity and job satisfaction. It also helps in identifying and eliminating waste in the process.

Metrics to Consider:

  • Cycle time (time from task start to completion)
  • Time spent in meetings vs. coding
  • Context switching frequency
  • Tool and process efficiency

Implementing the SPACE Framework in real life

Implementing the SPACE Framework requires a strategic approach, involving the following steps:

Establish baseline metrics

Before making any changes, establish baseline metrics for each SPACE component. Use existing tools and methods to gather initial data.

Actionable Steps:

  • Conduct surveys to measure satisfaction and well-being.
  • Use code quality tools to assess performance.
  • Track activity through version control systems.
  • Analyze communication patterns via collaboration tools.
  • Measure efficiency and flow using project management software.
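
For the survey part of the baseline, a small script can turn exported responses into a starting satisfaction score. The sketch below assumes a CSV export with one response per row and a numeric 'satisfaction' column; both the file name and layout are assumptions.

```python
import csv
from statistics import mean

def baseline_satisfaction(path="satisfaction_survey.csv"):
    """Average a 1-5 satisfaction rating from a CSV export of survey responses."""
    with open(path, newline="") as f:
        scores = [float(row["satisfaction"]) for row in csv.DictReader(f)]
    return mean(scores), len(scores)

if __name__ == "__main__":
    score, responses = baseline_satisfaction()
    print(f"Baseline satisfaction: {score:.2f}/5 across {responses} responses")
```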

Set clear goals

Define what success looks like for each component of the SPACE Framework. Set achievable and measurable goals.

Actionable Steps:

  • Increase employee satisfaction scores by 10% within six months.
  • Reduce bug rates by 20% over the next quarter.
  • Improve code review participation by 15%.
  • Enhance cross-team communication frequency.
  • Shorten cycle time by 25%.

Implement changes

Based on the goals set, implement changes to processes, tools, and practices. This may involve adopting new tools, changing workflows, or providing additional training.

Actionable Steps:

  • Introduce well-being programs to improve satisfaction.
  • Adopt automated testing tools to enhance performance.
  • Encourage regular code reviews to boost activity.
  • Use collaboration tools like Slack or Microsoft Teams to improve communication.
  • Streamline processes to reduce context switching and improve flow.

Monitor and adjust

Regularly monitor the metrics to evaluate the impact of the changes. Be prepared to make adjustments as necessary to stay on track with your goals.

Actionable Steps:

  • Use dashboards to track key metrics in real time.
  • Hold regular review meetings to discuss progress.
  • Gather feedback from developers to identify areas for improvement.
  • Make iterative changes based on data and feedback.

Integrating the SPACE Framework with DORA Metrics

| SPACE Dimension | Definition | DORA Metric Integration | Actionable Steps |
|---|---|---|---|
| Satisfaction and Well-being | Measures happiness, job fulfillment, and work-life balance | High deployment frequency and low lead time improve satisfaction; high failure rates increase stress | Conduct satisfaction surveys; correlate with DORA metrics; implement well-being programs |
| Performance | Assesses the outcomes of developers’ work | Direct overlap with DORA metrics like deployment frequency and lead time | Use DORA metrics as benchmarks; track and improve key metrics; address failure causes |
| Activity | Tracks volume and types of work (e.g., commits, reviews) | Frequent, high-quality activities improve deployment frequency and lead time | Track activities and DORA metrics; promote high-quality work practices; balance workloads |
| Communication and Collaboration | Evaluates effectiveness of interactions and tools | Effective communication and collaboration reduce failure rates and restoration times | Use communication tools (e.g., Slack); conduct retrospectives; encourage cross-functional teams |
| Efficiency and Flow | Measures smoothness and efficiency of processes | Efficient workflows lead to higher deployment frequencies and shorter lead times | Streamline processes; implement CI/CD pipelines; monitor cycle times and context switching |
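
To make the "correlate with DORA metrics" idea from the table above concrete, the sketch below checks how team-level satisfaction moves with deployment frequency. The figures are invented, and statistics.correlation requires Python 3.10 or later.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical team-level data: average satisfaction (1-5) and weekly deployment frequency.
satisfaction = [3.2, 3.8, 4.1, 4.4, 3.5]
deploys_per_week = [2, 5, 7, 9, 3]

# Pearson correlation; a positive value suggests satisfaction rises with deployment frequency.
r = correlation(satisfaction, deploys_per_week)
print(f"Correlation between satisfaction and deployment frequency: {r:.2f}")
```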

Real-world success stories

GitHub

GitHub implemented the SPACE Framework to enhance its developer productivity. By focusing on communication and collaboration, they improved their internal processes and tools, leading to a more cohesive and efficient development team. They introduced regular team-building activities and enhanced their internal communication tools, resulting in a 15% increase in developer satisfaction and a 20% reduction in project completion time.

Microsoft

Microsoft adopted the SPACE Framework across several development teams. They focused on improving efficiency and flow by reducing context switching and streamlining their development processes. This involved adopting continuous integration and continuous deployment (CI/CD) practices, which reduced cycle time by 30% and increased deployment frequency by 25%.

Key software engineering metrics mapped to the SPACE Framework

This table outlines key software engineering metrics mapped to the SPACE Framework, along with how they can be measured and implemented to improve developer productivity and overall team effectiveness.

| Satisfaction | Key Metrics | Measurement Tools/Methods | Implementation Steps |
|---|---|---|---|
| Satisfaction and Well-being | Employee Satisfaction Score | Employee surveys, engagement platforms (e.g., Typo) | Conduct regular surveys; analyze results to identify pain points; implement programs for well-being and work-life balance |
| Work-life Balance | Survey responses, self-reported hours | Employee surveys, time tracking tools (e.g., Toggl) | Encourage flexible hours and remote work; monitor workload distribution |
| Burnout Index | Burnout survey scores | Surveys, tools like Typo, Gallup Q12 | Monitor and address high burnout scores; offer mental health resources |
| Turnover Rate | Percentage of staff leaving | HR systems, exit interviews | Analyze reasons for turnover; improve work conditions based on feedback |

| Performance | Key Metrics | Measurement Tools/Methods | Implementation Steps |
|---|---|---|---|
| Code Quality | Number of bugs, code review scores | Static analysis tools (e.g., Typo, SonarQube), code review platforms (e.g., GitHub) | Implement code quality tools; conduct regular code reviews |
| Deployment Frequency | Number of deployments per time period | CI/CD pipelines (e.g., Jenkins, GitLab CI/CD) | Adopt CI/CD practices; automate deployment processes |
| Lead Time for Changes | Time from commit to production | CI/CD pipelines, version control systems (e.g., Git) | Streamline the deployment pipeline; optimize testing processes |
| Change Failure Rate | Percentage of failed deployments | Incident tracking tools (e.g., PagerDuty, Jira) | Implement thorough testing and QA; analyze and learn from failures |
| Time to Restore Service | Time to recover from incidents | Incident tracking tools (e.g., PagerDuty, Jira) | Develop robust incident response plans; conduct post-incident reviews |

| Activity | Key Metrics | Measurement Tools/Methods | Implementation Steps |
|---|---|---|---|
| Number of Commits | Commits per developer | Version control systems (e.g., Git) | Track commits per developer; ensure commits are meaningful |
| Code Review Participation | Reviews per developer | Code review platforms (e.g., GitHub, Typo) | Encourage regular participation in reviews; recognize and reward contributions |
| Task Completion Rates | Completed tasks vs. assigned tasks | Project management tools (e.g., Jira, Trello) | Monitor task completion; address bottlenecks and redistribute workloads |
| Meeting Attendance | Attendance records | Calendar tools, project management tools | Schedule necessary meetings; ensure meetings are productive and focused |

| Communication and Collaboration | Key Metrics | Measurement Tools/Methods | Implementation Steps |
|---|---|---|---|
| Team Meeting Frequency | Number of team meetings | Calendar tools, project management tools (e.g., Jira) | Schedule regular team meetings; ensure meetings are structured and purposeful |
| Use of Collaboration Tools | Activity in tools (e.g., Slack messages, Jira comments) | Collaboration tools (e.g., Slack, Jira) | Promote use of collaboration tools; provide training on tool usage |
| Cross-functional Interactions | Number of interactions with other teams | Project management tools, communication tools | Encourage cross-functional projects; facilitate regular cross-team meetings |
| Feedback Loops | Number and quality of feedback instances | Feedback tools, retrospectives | Implement regular feedback sessions; act on feedback to improve processes |

| Efficiency and Flow | Key Metrics | Measurement Tools/Methods | Implementation Steps |
|---|---|---|---|
| Cycle Time | Time from task start to completion | Project management tools (e.g., Jira) | Monitor cycle times; identify and remove bottlenecks |
| Time Spent in Meetings vs. Coding | Hours logged in meetings vs. coding | Time tracking tools, calendar tools | Optimize meeting schedules; minimize unnecessary meetings |
| Context Switching Frequency | Number of task switches per day | Time tracking tools, self-reporting | Reduce unnecessary interruptions; promote focused work periods |
| Tool and Process Efficiency | Time saved using tools/processes | Productivity tools, surveys | Regularly review tool/process efficiency; implement improvements based on feedback |
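
As an example of how the cycle-time metric above might be computed from exported task data, the sketch below takes hypothetical start and completion timestamps and reports the median cycle time.

```python
from datetime import datetime
from statistics import median

# Hypothetical (start, completion) timestamps exported from a project management tool.
tasks = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 3, 16, 0)),
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 2, 15, 30)),
    (datetime(2024, 5, 6, 9, 0), datetime(2024, 5, 10, 11, 0)),
]

cycle_times_hours = [(done - start).total_seconds() / 3600 for start, done in tasks]
print(f"Median cycle time: {median(cycle_times_hours):.1f} hours")
```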

What engineering leaders can do

Engineering leaders play a crucial role in the successful implementation of the SPACE Framework. Here are some actionable steps they can take:

Promote a culture of continuous improvement

Encourage a mindset of continuous improvement among the team. This involves being open to feedback and constantly seeking ways to enhance productivity and well-being.

Actionable Steps:

  • Regularly solicit feedback from team members.
  • Celebrate small wins and improvements.
  • Provide opportunities for professional development and growth.

Invest in the right tools and processes

Ensure that developers have access to the tools and processes that enable them to work efficiently and effectively.

Actionable Steps:

  • Conduct regular tool audits to ensure they meet current needs.
  • Invest in training programs for new tools and technologies.
  • Streamline processes to eliminate unnecessary steps and reduce bottlenecks.

Foster collaboration and communication

Create an environment where communication and collaboration are prioritized. This can lead to better problem-solving and more innovative solutions.

Actionable Steps:

  • Organize regular team-building activities.
  • Use collaboration tools to facilitate better communication.
  • Encourage cross-functional projects to enhance team interaction.

Prioritize well-being and satisfaction

Recognize the importance of developer well-being and satisfaction. Implement programs and policies that support a healthy work-life balance.

Actionable Steps:

  • Offer flexible working hours and remote work options.
  • Provide access to mental health resources and support.
  • Recognize and reward achievements and contributions.

Conclusion

The SPACE Framework offers a holistic and actionable approach to understanding and improving developer productivity. By focusing on satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow, organizations can create a more productive and fulfilling work environment for their developers.

Implementing this framework requires a strategic approach, clear goal setting, and ongoing monitoring and adjustment. Real-world success stories from companies like GitHub and Microsoft demonstrate the potential benefits of adopting the SPACE Framework.

Engineering leaders have a pivotal role in driving this change. By promoting a culture of continuous improvement, investing in the right tools and processes, fostering collaboration and communication, and prioritizing well-being and satisfaction, they can significantly enhance developer productivity and overall team success.

Top Developer Experience tools (2024)

In the software development industry, while user experience is an important aspect of the product life cycle, organizations are also considering Developer Experience.

A positive Developer Experience helps in delivering quality products and allows developers to be happy and healthy in the long run.

However, it is difficult for organizations to measure and improve Developer Experience without good tools and platforms.

What is Developer Experience?

Developer Experience is about the experience software developers have while working in the organization. It is the developers’ journey while working with specific frameworks, programming languages, platforms, documentation, general tools, and open-source solutions.

Positive Developer Experience = Happier teams

Developer Experience has a direct relationship with developer productivity. A positive experience results in high dev productivity, leading to high job satisfaction, performance, and morale. Hence, happier developer teams.

This starts with understanding the unique needs of developers and fostering a positive work culture for them.

Why is Developer Experience important?

Smooth onboarding process

Good DX ensures the onboarding process is as simple and smooth as possible. It includes familiarizing new developers with the tools and culture and giving them the support they need to progress in their careers. It also lets them get to know other developers, which helps with collaboration, open communication, and seeking help whenever required.

Improves product quality

A positive Developer Experience leads to three effective C’s: collaboration, communication, and coordination. Besides this, adhering to coding standards, best practices, and automated testing helps promote code quality and consistency and catch issues early. As a result, development teams can more easily create products that meet customer needs and are free from errors and glitches.

Increases development speed

When Developer Experience is handled with care, software developers can work more smoothly and meet milestones efficiently. Access to well-defined tools, clear documentation, streamlined workflows, and a well-configured development environment are a few ways to boost development speed. Good DX also minimizes the need to switch between different tools and platforms, which increases focus and team productivity.

Attracts and retains top talents

Developers usually look for a strong tech culture where they can focus on their core skills and be acknowledged for their contributions. Great DX increases job satisfaction and aligns developers’ values and goals with the organization’s. In return, developers bring their best to the table and want to stay with the organization for the long run.

Enhances collaboration

The right kind of Developer Experience encourages collaboration and the use of effective communication tools. This fosters teamwork and reduces misunderstandings. Developers can easily discuss issues, share feedback, and work together on tasks. It helps streamline the development process and results in high-quality work.

Best developer experience tools

Time management tools

Clockwise

A powerful time management tool that streamlines and automates the calendar and protects developers’ flow time. It helps to strike a balance between meetings and coding time with a focus time feature.

Key features
  • Seamlessly integrates with third-party applications such as Slack, Google Calendar, and Asana.
  • Determines the most suitable meeting times for both developers and engineering leaders.
  • Creates custom smart holds, i.e., blocks of time that remain protected for the duration of the hold.
  • Reschedules the meetings that are marked as ‘Flexible’.
  • Provides a quick summary of how much time was spent in meetings and focus time last week.

Toggl Track

A straightforward time-tracking, reporting, and billing tool for software developers. It lets development teams view tracked team entries in a grid or calendar format.

Key features
  • ‘Dashboard and Reporting’ feature offers in-depth analysis and lets engineering leaders create customized dashboards.
  • Simple and easy-to-use interface.
  • Well suited to those who prefer to track their time manually rather than in real time.
  • Offers a PDF invoice template that can be downloaded easily.
  • Includes optional Pomodoro setting that allows developers to take regular quick breaks.

Software development intelligence

Typo

Typo is an intelligent engineering management platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. It gives a comparative view of each team’s performance across velocity, quality, and throughput. The tool can be integrated with the existing tech stack (Git, Slack, calendars, and CI/CD tools, to name a few) to deliver real-time insights.

Key features
  • Seamlessly integrates with third-party applications such as Git, Slack, calendars, and CI/CD tools.
  • ‘Sprint analysis’ feature allows for tracking and analyzing the team’s progress throughout a sprint.
  • Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
  • Offers engineering benchmark to compare the team’s results across industries.
  • User-friendly interface.

Code intelligence tools

Sourcegraph (Cody)

An AI-based code assistant that provides code-specific information and helps locate precise code based on a natural language description, file names, or function names.

Key features
  • Explains complex lines of code in simple language.
  • Identifies bugs and errors in a codebase and provides suggestions.
  • Offers documentation generation.
  • Answers questions about existing code.
  • Generates code snippets, fixes, and improves code.

GitHub Copilot

Developed by GitHub in collaboration with OpenAI, it uses the OpenAI Codex to help write code quickly. It draws context from the code and suggests whole lines or complete functions that developers can accept, modify, or reject.

Key features
  • Creates predictive lines of code from comments and existing patterns in the code.
  • Generates code in multiple languages including TypeScript, JavaScript, Ruby, C++, and Python.
  • Seamlessly integrates with popular editors such as Neovim, JetBrains IDEs, and Visual Studio.
  • Creates dictionaries of lookup data.
  • Writes test cases and code comments.

Communication and collaboration

Slack

A widely used communication platform that enables developers to communicate in real time and share files. It also allows team members to download files and create external links for people outside the team.

Key features
  • Seamlessly integrates with third-party applications such as Google Calendar, Hubspot, Clickup, and Salesforce.
  • ‘Huddle’ feature that includes phone and video conferencing options.
  • Accessible on both mobile and desktop (Application and browser).
  • Offers a ‘Channels’ feature: similar to groups, team members can create channels for projects, teams, and topics.
  • Perfect for asynchronous communication and collaboration.

Project and task management

JIRA

Part of the Atlassian suite, JIRA is an umbrella platform that includes Jira Software, Jira Core, and Jira Work Management. It relies on the agile way of working and is purpose-built for developers and engineers.

Key features
  • Built for agile and scrum workflows.
  • Offers Kanban view.
  • The JIRA dashboard helps users plan projects, measure progress, and track due dates.
  • Offers integrations with other Atlassian products and third-party apps like GitHub, GitLab, and Jenkins.
  • Offers customizable workflow states and transitions for every issue type.

Linear

A project management and issue-tracking tool that is tailored for software development teams. It helps the team plan their projects and auto-close and auto-archive issues.

Key features
  • Simple and straightforward UI.
  • Easy to set up.
  • Breaks larger tasks into smaller issues.
  • Switches between list and board layout to view work from any angle.
  • Quickly apply filters and operators to refine issue lists and create custom views.

Automated software testing

LambdaTest

A cloud-based cross-browser testing platform that provides real-time testing on multiple devices and simulators. It is used to create and run both manual and automated tests and functions via the Selenium Automation Grid.

Key features
  • Seamlessly integrates with other testing frameworks and CI/CD tools.
  • Offers detailed automated logs such as exception logs, command logs, and metadata.
  • Runs parallel tests in multiple browsers and environments.
  • Offers command screenshots and video recordings of the script execution.
  • Facilitates responsive testing to ensure the application works well on various devices and screen sizes.

Postman

A widely used automation testing tool for API. It provides a streamlined process for standardizing API testing and monitoring it for usage and trend insights.

Key features
  • Seamlessly integrates with CI/CD pipelines.
  • Enables users to mimic real-world scenarios and assess API behavior under various conditions.
  • Creates mock servers and facilitates realistic simulations and comprehensive testing.
  • Provides monitoring features to gain insights into API performance and usage trends.
  • Friendly and easy-to-use interface equipped with code snippets.

Continuous integration/continuous deployment

CircleCI

FedRAMP certified and SOC 2 Type II compliant, CircleCI helps achieve CI/CD in open-source and large-scale projects. It streamlines the DevOps process and automates builds across multiple environments.

Key features
  • Seamlessly integrates with third-party applications such as Bitbucket, GitHub, and GitHub Enterprise.
  • Tracks the status of projects and keeps tabs on build processes.
  • ‘Parallel testing’ feature helps in running tests in parallel across different executors.
  • Allows a single process per project.
  • Provides ways to troubleshoot problems and inspect things such as directory paths, log files, and running processes.

Documentation

Swimm

Specifically designed for software development teams, Swimm is an innovative cloud-based documentation tool that integrates continuous documentation into the development workflow.

Key features
  • Seamlessly integrates with development tools such as GitHub, VS Code, and JetBrains IDEs.
  • ‘Auto-sync’ feature ensures the document stays up to date with changes in the codebase.
  • Creates new documents, rewrites existing ones, or summarizes information.
  • Creates tutorials and visualizations within the codebase for better understanding and onboarding new members.
  • Analyzes the entire codebase, documentation sources, and data from enterprise tools.

Developer engagement

DevEx by Typo

A valuable tool for development teams that captures a 360° view of the developer experience. Through signals from work patterns and continuous AI-driven pulse check-ins, it provides early indicators of developer well-being and actionable insights into the areas that need attention.

Key features
  • Research-backed framework that captures parameters and uncovers real issues.
  • In-depth insights are published on the dashboard.
  • Combines data-driven insights with proactive monitoring and strategic intervention.
  • Identifies the key priority areas affecting developer productivity and well-being.
  • Sends automated alerts to identify burnout signs in developers at an early stage.

GetDX

A comprehensive insights platform founded by the researchers behind the DORA and SPACE frameworks. It offers both qualitative and quantitative measures to give a holistic view of the organization.

Key features
  • Provides a suite of tools that capture data from surveys and systems in real-time.
  • Breaks down results based on personas.
  • Streamlines developer onboarding with real-time insights.
  • Contextualizes performance with 180,000+ industry benchmark samples.
  • Uses advanced statistical analysis to identify the top opportunities.

Conclusion

Overall, Developer Experience is crucial today. It facilitates effective collaboration within engineering teams, offers real-time feedback on workflow efficiency and early signs of burnout, and enables informed decision-making. By pinpointing areas for improvement, it cultivates a more productive and enjoyable work environment for developers.

There are various tools available in the market, and we’ve curated the best Developer Experience tools for you. You can check out other tools as well; do your own research and see what fits you best.

All the best!

Measuring Developer Productivity: A Comprehensive Guide

The software development industry constantly evolves, and measuring developer productivity has become crucial to success. It is the key to achieving efficiency, quality, and innovation. However, measuring productivity is not a one-size-fits-all process. It requires a deep understanding of productivity in a development context and selecting the right metrics to reflect it accurately.

This guide will help you and your teams navigate the complexities of measuring dev productivity. It offers insights into the process’s nuances and equips teams with the knowledge and tools to optimize performance. By following the tips and best practices outlined in this guide, teams can improve their productivity and deliver better software.

What is Developer Productivity?

Developer productivity extends far beyond the mere output of code. It encompasses a multifaceted spectrum of skills, behaviors, and conditions that contribute to the successful creation of software solutions. Technical proficiency, effective collaboration, clear communication, suitable tools, and a conducive work environment are all integral components of developer productivity. Recognizing and understanding these factors is fundamental to devising meaningful metrics and fostering a culture of continuous improvement.

Benefits of developer productivity

  • Increased productivity allows developers to complete tasks more efficiently. It leads to shorter development cycles and quicker delivery of products or features to the market.
  • Productive developers can focus more on code quality, testing, and optimization, resulting in higher-quality software with fewer bugs and issues.
  • Developers can accomplish more in less time, reducing development costs and improving the organization’s overall return on investment.
  • Productive developers often experience less stress and frustration due to reduced workloads and smoother development processes that lead to higher job satisfaction and retention rates.
  • With more time and energy available, developers can dedicate resources to innovation, continuous learning, experimenting with new technologies, and implementing creative solutions to complex problems.

Metrics for Measuring Developer Productivity

Measuring software developers’ productivity cannot rely on arbitrary criteria. This is why there are several metrics in place that can be considered while measuring it. They can be divided into quantitative and qualitative metrics:

Quantitative Metrics

Lines of Code (LOC) Written

While counting lines of code isn’t a perfect measure of productivity, it can provide valuable insights into coding activity. A higher number of lines might suggest more work done, but it doesn’t necessarily equate to higher quality or efficiency. However, tracking LOC changes over time can help identify trends and patterns in development velocity. For instance, a sudden spike in LOC might indicate a burst of productivity or potentially code bloat, while a decline could signal optimization efforts or refactoring.

Time to Resolve Issues/Bugs

The swift resolution of issues and bugs is indicative of a team’s efficiency in problem-solving and code maintenance. Monitoring the time it takes to identify, address, and resolve issues provides valuable feedback on the team’s responsiveness and effectiveness. A shorter time to resolution suggests agility and proactive debugging practices, while prolonged resolution times may highlight bottlenecks in the development process or technical debt that needs addressing.
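
A simple way to quantify this is to compute the average resolution time and the share of issues resolved within a target window. The sketch below uses invented timestamps and an assumed 48-hour target.

```python
from datetime import datetime, timedelta

# Hypothetical (opened, resolved) timestamps for recently closed bugs.
bugs = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 17, 0)),
    (datetime(2024, 5, 2, 11, 0), datetime(2024, 5, 6, 10, 0)),
    (datetime(2024, 5, 3, 14, 0), datetime(2024, 5, 4, 9, 30)),
]

resolution_times = [resolved - opened for opened, resolved in bugs]
average = sum(resolution_times, timedelta()) / len(resolution_times)
within_target = sum(t <= timedelta(hours=48) for t in resolution_times) / len(resolution_times)

print(f"Average time to resolution: {average}")
print(f"Resolved within 48 hours: {within_target:.0%}")
```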

Number of Commits or Pull Requests

Active participation in version control systems, as evidenced by the number of commits or pull requests, reflects the level of engagement and contribution to the codebase. A higher number of commits or pull requests may signify active development and collaboration within the team. However, it’s essential to consider the quality, not just quantity, of commits and pull requests. A high volume of low-quality changes may indicate inefficiency or a lack of focus.

Code Churn

Code churn refers to the rate of change in a codebase over time. Monitoring code churn helps identify areas of instability or frequent modifications, which may require closer attention or refactoring. High code churn could indicate areas of the code that are particularly complex or prone to bugs, while low churn might suggest stability but could also indicate stagnation if accompanied by a lack of feature development or innovation. Furthermore, focusing on code changes allows teams to track progress and ensure that updates align with project goals, while emphasizing quality code ensures that these changes maintain or improve overall codebase integrity and performance.
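
One way to approximate churn from version control history is to sum the lines added and deleted per file over a recent window. The sketch below does this with Git's numstat output; the repository path and window are assumptions.

```python
import subprocess
from collections import Counter

def churn_by_file(repo_path=".", since="30 days ago"):
    """Sum lines added plus deleted per file over a recent window using git numstat."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    )
    churn = Counter()
    for line in log.stdout.splitlines():
        parts = line.split("\t")
        # Skip blank lines and binary files (reported as "-").
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added, deleted, path = parts
            churn[path] += int(added) + int(deleted)
    return churn

if __name__ == "__main__":
    for path, lines_changed in churn_by_file().most_common(10):
        print(f"{lines_changed:6d}  {path}")
```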

Qualitative Metrics

Code Review Feedback

Effective code reviews are crucial for maintaining code quality and fostering a collaborative development environment in an engineering organization. Monitoring code review feedback, such as the frequency of comments, the depth of review, and the incorporation of feedback into subsequent iterations, provides insights into the team’s commitment to quality and continuous improvement. A culture of constructive feedback and iteration during code reviews indicates a quality-driven approach to development.

Team Satisfaction and Morale

High morale and job satisfaction among engineering teams are key indicators of a healthy and productive work environment. Happy and engaged teams tend to be more motivated, creative, and productive. Regularly measuring team satisfaction through surveys, feedback sessions, or one-on-one discussions helps identify areas for improvement and reinforces a positive culture that fosters teamwork, productivity, and collaboration.

Rate of Feature Delivery

Timely delivery of features is essential for meeting project deadlines and delivering value to stakeholders. Monitoring the rate of feature delivery, including the speed and predictability of feature releases, provides insights into the team’s ability to execute and deliver results efficiently. Consistently meeting or exceeding feature delivery targets indicates a well-functioning development process and effective project management practices.

Customer Satisfaction and Feedback

Ultimately, the success of development efforts is measured by the satisfaction of end-users. Monitoring customer satisfaction through feedback channels, such as surveys, reviews, and support tickets, provides valuable insights into the effectiveness of the software in delivering meaningful solutions. Positive feedback and high satisfaction scores indicate that the development team has successfully met user needs and delivered a product that adds value. Conversely, negative feedback or low satisfaction scores highlight areas for improvement and inform future development priorities.

Best Practices for Measuring Developer Productivity

While analyzing the metrics and measuring software developer productivity, here are some things you need to remember:

  • Balance Quantitative and Qualitative Metrics: Combining both types of metrics provides a holistic view of productivity.
  • Customize Metrics to Fit Team Dynamics: Tailor metrics to align with the development team’s unique objectives and working styles.
  • Ensure Transparency and Clarity: Communicate clearly about the purpose and interpretation of metrics to foster trust and accountability.
  • Iterate and Adapt Measurement Strategies: Continuously evaluate and refine measurement approaches based on feedback and evolving project requirements.

How does Generative AI Improve Developer Productivity?

Below are a few ways in which Generative AI can have a positive impact on developer productivity:

Focus on meaningful tasks: Generative AI tools take up tedious and repetitive tasks, allowing developers to give their time and energy to meaningful activities, resulting in productivity gains within the team members’ workflow.

Assist in their learning graph: Generative AI lets software engineers gain practical insights and examples from these AI tools and enhance team performance.

Assist in pair programming: Through Generative AI, developers can collaborate with other developers easily.

Increase the pace of software development: Generative AI helps in the continuous delivery of products and services and drives business strategy.

How does Typo Measure Developer Productivity?

There are many developer productivity tools available in the market for tech companies. One of the tools is Typo – the most comprehensive solution on the market.

Typo provides early indicators of developer well-being and actionable insights into the areas that need attention through signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. It offers innovative features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams. It measures the overall team’s productivity while keeping individuals’ strengths and weaknesses in mind.

Here are three ways in which Typo measures team productivity:

Software Development Visibility

Typo provides complete visibility into software delivery. It helps development teams and engineering leaders identify blockers in real time, predict delays, and maximize business impact. Moreover, it lets the team dive deep into key DORA metrics and understand how well they are performing against industry-wide benchmarks. Typo also provides real-time predictive analysis of how the team is performing, helps identify the best dev practices, and offers a comprehensive view across velocity, quality, and throughput.

This empowers development teams to optimize their workflows, identify inefficiencies, and prioritize impactful tasks. It ensures that resources are utilized efficiently, resulting in enhanced productivity and better business outcomes.

Code Quality Automation

Typo helps developers streamline the development process and enhance their productivity by identifying issues in the code and auto-fixing them before they are merged to master. This means less time spent reviewing and more time for important tasks, keeping the code error-free and making the whole process faster and smoother. The platform also uses optimized practices and built-in methods spanning multiple languages. Besides this, it standardizes the code and enforces coding standards, which reduces the risk of a security breach and boosts maintainability.

Since the platform automates repetitive tasks, it allows development teams to focus on high-quality work. Moreover, it accelerates the review process and facilitates faster iterations by providing timely feedback.  This offers insights into code quality trends and areas for improvement, fostering an engineering culture that supports learning and development.

Developer Experience

Typo provides early indicators of developers’ well-being and actionable insights into the areas that need attention through signals from work patterns and continuous AI-driven pulse check-ins on the developer experience. These check-ins take the form of pulse surveys built on a developer experience framework.

Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help engineering managers analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.

Hence, by addressing these aspects, Typo’s holistic approach combines data-driven insights with proactive monitoring and strategic intervention to create a supportive and high-performing work environment. This leads to increased developer productivity and satisfaction.

Track Developer Productivity Effectively

Measuring developers’ productivity is not straightforward, as it varies from person to person. It is a dynamic process that requires careful consideration and adaptability.

To achieve greater success in software development, the development teams must embrace the complexity of productivity, select appropriate metrics, use relevant tools, and develop a supportive work culture.

There are many developer productivity tools available in the market. Typo stands out among them. It’s important to remember that the journey toward productivity is an ongoing process, and each iteration presents new opportunities for growth and innovation.

How to Measure and Improve Engineering Productivity?

As technology rapidly advances, software engineering is becoming an increasingly fast-paced field where maximizing productivity is critical for staying competitive and driving innovation. Efficient resource allocation, streamlined processes, and effective teamwork are all essential components of engineering productivity. In this guide, we will delve into the significance of measuring and improving engineering productivity, explore key metrics, provide strategies for enhancement, and examine the consequences of neglecting productivity tracking.

What is Engineering Productivity?

Engineering productivity refers to the efficiency and effectiveness of engineering teams in producing work output within a specified timeframe while maintaining high-quality standards. It encompasses various factors such as resource utilization, task completion speed, deliverable quality, and overall team performance. Essentially, engineering productivity measures how well a team can translate inputs like time, effort, and resources into valuable outputs such as completed projects, software features, or innovative solutions.

Tracking software engineering productivity involves analyzing key metrics like productivity ratio, throughput, cycle time, and lead time. By assessing these metrics, engineering managers can pinpoint areas for improvement, make informed decisions, and implement strategies to optimize productivity and achieve project objectives. Ultimately, engineering productivity plays a critical role in ensuring the success and competitiveness of engineering projects and organizations in today’s fast-paced technological landscape.

Why does Engineering Productivity Matter?

Impact on Project Timelines and Deadlines

Engineering productivity directly affects project timelines and deadlines. When teams are productive, they can deliver projects on schedule, meeting client expectations and maintaining stakeholder satisfaction.

Influence on Product Quality and Customer Satisfaction

High productivity levels correlate with better product quality. By maximizing productivity, engineering teams can focus on thorough testing, debugging, and refining processes, ultimately leading to increased customer satisfaction.

Role in Resource Allocation and Cost-Effectiveness

Optimized engineering productivity ensures efficient resource allocation, reducing unnecessary expenditures and maximizing ROI. By utilizing resources effectively, tech companies can achieve their goals within budgetary constraints.

The Importance of Tracking Engineering Productivity

Insights for Performance Evaluation and Improvement

Tracking engineering productivity provides valuable insights into team performance. By analyzing productivity metrics, organizations can identify areas for improvement and implement targeted strategies for enhancement.

Facilitates Data-Driven Decision-Making

Data-driven decision-making is essential for optimizing engineering productivity. Organizations can make informed decisions about resource allocation, process optimization, and project prioritization by tracking relevant metrics.

Helps in Setting Realistic Goals and Expectations

Tracking productivity metrics allows organizations to set realistic goals and expectations. By understanding historical productivity data, teams can establish achievable targets and benchmarks for future projects.

Factors Affecting Engineering Productivity

Team Dynamics and Collaboration

Effective teamwork and collaboration are essential for maximizing engineering productivity. Organizations can leverage team members’ diverse skills and expertise to achieve common goals by fostering a culture of collaboration and communication.

Work Environment and Organizational Culture

The work environment and organizational culture play a significant role in determining engineering productivity. A supportive and conducive work environment fosters team members’ creativity, innovation, and productivity.

Resource Allocation and Workload Management

Efficient resource allocation and workload management are critical for optimizing engineering productivity. By allocating resources effectively and balancing workload distribution, organizations can ensure that team members work on tasks that align with their skills and expertise.

Strategies to Improve Engineering Productivity

Identifying Productivity Roadblocks and Bottlenecks

Identifying and addressing productivity roadblocks and bottlenecks is essential for improving engineering productivity. By conducting thorough assessments of workflow processes, organizations can identify inefficiencies, focus on workload distribution, and implement targeted solutions for improvement.

Implementing Effective Tools and Practices for Optimization

Leveraging effective tools and best practices is crucial for optimizing engineering productivity. By adopting agile methodologies, DevOps practices, and automation tools, engineering organizations can streamline processes, reduce manual efforts, enhance code quality, and accelerate delivery timelines.

Prioritizing Tasks Strategically

Strategic task prioritization, along with effective time management and goal setting, is key to maximizing engineering productivity. By prioritizing tasks based on their impact and urgency, organizations can ensure that team members focus on the most critical activities, leading to improved productivity and efficiency.

Promoting Collaboration and Communication

Promoting collaboration and communication within engineering teams is essential for maximizing productivity. By fostering open communication channels, encouraging knowledge sharing, and facilitating cross-functional collaboration, organizations can leverage the collective expertise of team members to drive innovation and motivation and achieve common goals.

Continuous Improvement through Feedback Loops and Iteration

Continuous improvement is essential for maintaining and enhancing engineering productivity. By soliciting feedback from team members, identifying areas for improvement, and iteratively refining processes, organizations can continuously optimize productivity, address technical debt, and adapt to changing requirements and challenges.

Consequences of Not Tracking Engineering Productivity

Risk of Missed Deadlines and Project Delays

Neglecting to track engineering productivity increases the risk of missed deadlines and project delays. Without accurate productivity tracking, organizations may struggle to identify and address issues that could impact project timelines and deliverables.

Decreased Product Quality and Customer Dissatisfaction

Poor engineering productivity can lead to decreased product quality and customer dissatisfaction. Organizations may overlook critical quality issues without effective productivity tracking, resulting in negative business outcomes, subpar products, and unsatisfied customers.

Inefficient Resource Allocation and Higher Costs

Failure to track engineering productivity can lead to inefficient resource allocation and higher costs. Without visibility into productivity metrics, organizations may allocate resources ineffectively, wasting time and effort and incurring budget overruns.

Best Practices for Engineering Productivity

Setting SMART Goals

Setting SMART (specific, measurable, achievable, relevant, time-bound) goals is essential for maximizing engineering productivity. By setting clear and achievable goals, organizations can focus their efforts on activities that drive meaningful results and contribute to overall project success.

Establishing a Culture of Accountability and Ownership

Establishing a culture of accountability and ownership is critical for maximizing engineering productivity. Organizations can foster a sense of ownership and commitment that drives productivity and excellence by empowering team members to take ownership of their work and be accountable for their actions.

Promoting Work-Life Balance

Ensure work-life balance at the organization by promoting policies that support flexible schedules, encouraging regular breaks, and providing opportunities for professional development and personal growth. This can help reduce stress and prevent burnout, leading to higher productivity and job satisfaction.

Embracing Automation and Technology

Embracing automation and technology is key to streamlining processes and accelerating delivery timelines. By leveraging automation tools, DevOps practices, and advanced technologies, organizations can automate repetitive tasks, reduce manual efforts, and improve overall productivity and efficiency.

Investing in Employee Training and Skill Development

Investing in employee training and skill development is essential for maintaining and enhancing engineering productivity. By providing ongoing training and development opportunities, organizations can equip team members with the skills and knowledge they need to excel in their roles and contribute to overall project success.

Using Typo for Improved Engineering Productivity

Typo offers innovative features to streamline workflow processes, enhance collaboration, and boost overall productivity in engineering teams. It includes engineering metrics that can help you take action with in-depth insights.

Understanding Engineering Productivity Metrics

Below are a few important engineering metrics that can help in measuring their productivity:

Merge Frequency

Merge Frequency represents the rate at which pull requests are merged into any of the code branches per day. By tracking it, engineering teams can optimize their development workflows, improve collaboration, and increase team efficiency.
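
As a minimal illustration, merge frequency can be derived from a list of merge dates, however you obtain them (for example, from the Git host's API or `git log --merges`). The dates and working-day count below are assumptions.

```python
from collections import Counter
from datetime import date

# Hypothetical merge dates for one working week.
merge_dates = [
    date(2024, 6, 3), date(2024, 6, 3), date(2024, 6, 4),
    date(2024, 6, 5), date(2024, 6, 5), date(2024, 6, 5), date(2024, 6, 7),
]

merges_per_day = Counter(merge_dates)
working_days = 5  # assumption: one working week
print(f"Average merge frequency: {len(merge_dates) / working_days:.1f} PRs merged per day")
for day, count in sorted(merges_per_day.items()):
    print(f"{day}: {count} merged")
```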

Cycle Time

Cycle time measures the time it takes to complete a single iteration of a process or task. Organizations can identify opportunities for process optimization and efficiency improvement by tracking cycle time.

Deployment PR

Deployment PRs represent the average number of pull requests merged into the main/master/production branch per week. Measuring it helps improve engineering teams’ efficiency by providing insights into the frequency, timing, and success rate of code deployments.

Planning Accuracy

Planning Accuracy represents the percentage of Tasks Planned versus Tasks Completed within a given time frame. Its benchmarks help engineering teams measure their performance, identify improvement opportunities, and drive continuous enhancement of their planning processes and outcomes.

Code Coverage

Code coverage is a measure that indicates the percentage of a codebase that is tested by automated tests. It helps ensure that the tests cover a significant portion of the code, identifying code quality, untested parts, and potential bugs.
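
As a rough illustration of how these metrics come together, here is a minimal Python sketch that computes merge frequency, cycle time, deployment PRs, planning accuracy, and code coverage from hypothetical records; the data shapes and field names are assumptions for the example, not Typo’s data model or API.

```python
# Illustrative sketch: computing the metrics above from hypothetical records.
# The data structures and field names are assumptions, not Typo's actual API.
from datetime import datetime, timedelta

pull_requests = [
    # (opened_at, merged_at, target_branch)
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 2, 15), "main"),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 3, 11), "feature/login"),
    (datetime(2024, 5, 3, 8), datetime(2024, 5, 6, 17), "main"),
]

period_days = 7
merged = [pr for pr in pull_requests if pr[1] is not None]

# Merge Frequency: merged PRs per day across all branches.
merge_frequency = len(merged) / period_days

# Cycle Time: average time from PR opened to PR merged.
cycle_time = sum((m - o for o, m, _ in merged), timedelta()) / len(merged)

# Deployment PRs: PRs merged into the main/production branch per week.
deployment_prs_per_week = sum(1 for _, _, b in merged if b == "main") / (period_days / 7)

# Planning Accuracy: tasks completed vs. tasks planned in the period.
tasks_planned, tasks_completed = 20, 17
planning_accuracy = tasks_completed / tasks_planned * 100

# Code Coverage: lines executed by tests vs. total lines (normally reported by a coverage tool).
lines_covered, lines_total = 8_200, 10_000
code_coverage = lines_covered / lines_total * 100

print(f"Merge frequency: {merge_frequency:.2f}/day")
print(f"Cycle time: {cycle_time}")
print(f"Deployment PRs: {deployment_prs_per_week:.1f}/week")
print(f"Planning accuracy: {planning_accuracy:.0f}%")
print(f"Code coverage: {code_coverage:.0f}%")
```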


How does Typo Help in Enhancing Engineering Productivity?

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to help teams build better software faster. It seamlessly integrates with your tech stack, including Git version control, issue trackers, and CI/CD tools. It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code review tool helps identify issues in the code and auto-fixes them before you merge to master.

Features

  • Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
  • Includes an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint.
  • Provides a 360° view of the developer experience, i.e., captures qualitative insights and provides an in-depth view of the real issues.
  • Offers engineering benchmarks to compare the team’s results across industries.
  • User-friendly interface.

Improve Engineering Productivity Always to Stay Ahead

Measuring and improving engineering productivity is essential for achieving project success and driving business growth. By understanding the importance of productivity tracking, leveraging relevant metrics, and implementing effective strategies, organizations can optimize productivity, enhance product quality, and deliver exceptional results in today’s competitive software engineering landscape.

In conclusion, engineering productivity is not just a metric; it’s a mindset and a continuous journey towards excellence.

Measure Developer Experience with Typo

A software development team is critical to business performance. Developers wear multiple hats to complete the work and deliver high-quality software to end-users. In turn, organizations need to take care of their well-being and measure developer experience to create a positive workplace for them.

Otherwise, developers’ productivity and morale can suffer, making their work less efficient and effective and disrupting the developer experience at the workplace.

With Typo, you can capture qualitative insights and get a 360 view of your developer experience. Let’s delve deeper into it in this blog post:

What is Developer Experience?

Developer experience refers to the overall experience of developer teams when using tools, platforms, and services to build software applications. It spans everything from documentation to coding and deployment, and includes both tangible and intangible aspects of the work.

Happy developers = positive developer experience. A positive experience increases developers’ productivity and morale, and it leads to faster development cycles, smoother workflows, better methods, and better working conditions.

Not taking care of developer experience can make it difficult for businesses to retain and attract top talent.

Why is Developer Experience Beneficial?

Developer experience isn’t just a buzzword. It is a crucial aspect of your team’s productivity and satisfaction.

Below are a few benefits of developer experience:

Smooth Onboarding Process

A good DevEx ensures the onboarding process is as simple and smooth as possible. It includes making engineering teams familiar with the tools and culture and giving them the support they need to proceed further in their careers. It also allows them to get to know other developers, which can help with collaboration and mentorship.

Improves Product Quality

A positive developer experience leads to 3 effective C’s – Collaboration, communication, and coordination. Adhering to coding standards, best practices, and automated testing also helps promote code quality and consistency and catch and fix issues early. As a result, developers can easily create products that meet customer needs and are free from errors and glitches.

Increases Development Speed

When developer experience is handled carefully, team members can work more smoothly and meet milestones efficiently. Access to well-defined tools, clear documents, streamlined workflow, and a well-configured development environment are a few of the ways to boost development speed. It lets them minimize the need to switch between different tools and platforms which increases the focus and team productivity.

Attracts and Retains Top Talents

Developers usually look out for a strong tech culture so they can focus on their core skills and get acknowledged for their contributions. A good developer experience results in developer satisfaction and aligns their values and goals with the organization. In return, developers bring the best to the table and want to stay in the organization for the long run.

Enhances Collaboration

Great developer experience encourages collaboration and effective communication tools. This fosters teamwork and reduces misunderstandings. Through collaborative approaches, developers can easily discuss issues, share feedback, and work together on tasks.

How to Measure Developer Experience with Typo?

Typo provides early indicators of developers’ well-being and actionable insights into the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins on the developer experience.

Below is the process that Typo follows to gain insights into developer experience effectively:

Step 1: Pulse Surveys

Pulse surveys are short, periodic questionnaires used to gather feedback from developers to assess their engagement, satisfaction, and overall organizational health.

Typo’s pulse surveys are specifically designed for the software engineering team as it is built on a developer experience framework. It triggers AI-driven pulse surveys where each developer receives a notification periodically with a few conversational questions.

We highly recommend running surveys once a month to keep tabs on your team’s well-being and experience and to build a continuous feedback loop. However, you can customize the frequency of these surveys according to the company’s needs.

And don’t worry, these surveys are anonymous.

Step 2: Developer Experience Analytics

Based on the responses to the pulse surveys over time, insights are published on the Typo dashboard. These insights help to analyze how developers feel at the workplace, what needs immediate attention, how many developers are at risk of burnout and much more.

Below are key components of Typo’s developer experience analytics dashboard:

DevEx Score

The DevEx score indicates the overall state of well-being or happiness within an organization. It reflects the collective emotional and mental health of the developers.

Also known as the employee net promoter score, this score ranges between 1 and 10 and is based on the developer feedback collected. A high well-being score suggests that people are generally content and satisfied, while a low score may indicate areas of concern or areas needing improvement.

Response Rate

It is the percentage of people who responded to the check-in. A higher response rate represents a more reliable dataset for analyzing developer experience metrics and deriving insights.

This is shown as a percentage along with the delta change, and you can also see the exact counts behind the percentage. It includes a trend graph showing the data from the last 4 weeks.

It also includes trending sentiments, which segment employees based on the most frequently recurring sentiments mentioned by the developer team.
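
As a rough illustration, here is a minimal sketch of how a DevEx score and a response rate (with its delta) could be derived from pulse-survey responses; the 1–10 averaging and the field names are assumptions for the example, not Typo’s internal calculation.

```python
# Illustrative sketch of deriving a DevEx score and response rate from survey data.
# The aggregation and field names are assumptions, not Typo's actual model.
def devex_dashboard(invited: int, ratings: list, previous_rate: float):
    # DevEx score: average of the 1-10 well-being ratings collected this cycle.
    devex_score = sum(ratings) / len(ratings)

    # Response rate: share of invited developers who completed the check-in,
    # plus the delta against the previous survey cycle.
    response_rate = len(ratings) / invited * 100
    delta = response_rate - previous_rate

    return devex_score, response_rate, delta


score, rate, delta = devex_dashboard(
    invited=40, ratings=[8, 7, 9, 6, 8, 7, 9, 8], previous_rate=65.0
)
print(f"DevEx score: {score:.1f}/10, response rate: {rate:.0f}% ({delta:+.0f} pts vs. last cycle)")
```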

Recent Comments

This section shows all the concerns raised by developers, which you can reply to in order to drive meaningful conversations. It offers valuable insights into workflow challenges, helps address issues promptly, and boosts developer satisfaction.

Heatmap

In this section, you can slice and dice your data to dive deeper into different demographics. The list of demographics is as follows:

  • Designation
  • Location
  • Team
  • Tenure

Burnout Alerts

Typo sends automated alerts to your communication tools to help you identify burnout signs in developers at an early stage. This enables leaders to track developer engagement, support their well-being, maintain productivity, and create a positive and thriving work environment.

Typo tracks the work habits of developers across multiple activities, such as commits, PRs, reviews, comments, tasks, and merges, over a certain period. If these patterns consistently exceed the average of other developers or violate predefined benchmarks, the system identifies them as being in the burnout zone or at risk of burnout. These benchmarks can be customized to meet your specific needs.
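
To make this heuristic concrete, here is a minimal sketch of benchmark-based flagging of the kind described above; the thresholds, multiplier, and data shape are illustrative assumptions, not Typo’s actual detection model.

```python
# Illustrative sketch: flag developers whose tracked activity consistently exceeds
# the team average or a configured benchmark. Thresholds and field names are
# assumptions, not Typo's actual burnout model.
from statistics import mean

weekly_activity = {
    # developer -> total tracked events per week (commits, PRs, reviews, comments, merges)
    "dev_a": [62, 70, 68, 75],
    "dev_b": [30, 28, 35, 33],
    "dev_c": [41, 44, 39, 46],
}

BENCHMARK = 60            # configurable absolute threshold of events per week
AVERAGE_MULTIPLIER = 1.5  # "consistently above" = more than 1.5x the team average

team_average = mean(v for weeks in weekly_activity.values() for v in weeks)

at_risk = [
    dev for dev, weeks in weekly_activity.items()
    if all(w > BENCHMARK or w > AVERAGE_MULTIPLIER * team_average for w in weeks)
]

print(f"Team average: {team_average:.1f} events/week; burnout-zone alerts: {at_risk}")
```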

Developer Experience Framework, Powered by Typo

Typo’s developer experience framework suggests to engineering leaders what they should focus on when measuring developer productivity and experience.

Below are the key focus areas and their drivers incorporated in the developer experience framework. For each focus area, its sub-focus areas are listed with a description and sample pulse-survey questions:

Key Focus Areas

Manager Support

It refers to the level of assistance, guidance, and resources provided by managers or team leads to support developers in their work.


Empathy

The ability to understand and relate to developers, actively listen, and show compassion in interactions.

  • Do you feel comfortable sharing your concerns or personal challenges with your manager?
  • Do you feel comfortable expressing yourself in this space?
  • Does your manager actively listen to your ideas without judgment?

Coach and guide

The role of managers is to provide expertise, advice, and support to help developers improve their skills, overcome challenges, and achieve career goals.

  • Does your manager give constructive feedback regularly?
  • Does your manager give you the guidance you need in your work?
  • Does your manager help you learn and develop new skills?

Feedback

The ability to provide timely and constructive feedback on performance, skills, and growth areas, helping developers gain insights, refine their skills, and work towards achieving their career objectives.

  • Do you feel that your manager’s feedback helps you understand your strengths and areas for improvement?
  • Do you feel comfortable providing feedback to your manager?
  • How effectively does your manager help you get support for technical growth?

Developer Flow

It is a state of optimal engagement and productivity that developers experience when fully immersed and focused on their work.


Work-life balance

Maintaining a healthy equilibrium between work responsibilities and personal life promotes well-being, boundaries, and resources for managing workload effectively.

  • How would you rate the work-life balance in your current role?
  • Do you feel supported by your team in maintaining a good work-life balance?

Autonomy

Providing developers with the freedom and independence to make decisions, set goals, and determine their approach and execution of tasks.

  • Do you feel free to make decisions for your work?
  • Do you feel encouraged to explore new ideas and experiment with different solutions?
  • Do you think your ideas are well-supported by the team?

Focus time

The dedicated periods of uninterrupted work where developers can deeply concentrate on their tasks without distractions or interruptions.

  • How often do you have time for focused work without interruptions?
  • How often do you switch context during focus time?
  • How often can you adjust your work schedule to improve conditions for focused work when needed?

Goals

Setting clear objectives that provide direction, motivation, and a sense of purpose in developers’ work enhances their overall experience and productivity.

  • Have you experienced success in meeting your goals?
  • Are you able to track your progress towards your goals?
  • How satisfied are you with the goal-setting process within your team?

Product Management

The practices involved in overseeing a software product’s lifecycle, from ideation to development, launch, and ongoing management.


Clear requirements

Providing developers with precise and unambiguous specifications, ensuring clarity, reducing ambiguity, and enabling them to meet the expectations of stakeholders and end-users.

  • Are the requirements provided for your projects clear and well-defined?
  • Do you have the necessary information you need for your tasks?
  • Do you think the project documentation covers everything you need?

Reasonable timelines

Setting achievable and realistic project deadlines, allowing developers ample time to complete tasks without undue pressure or unrealistic expectations.

  • Do you have manageable timeframes and deadlines that enhance the quality of your work?
  • Are you provided with the resources you need to meet the project timelines?
  • How often do you encounter unrealistic project timelines?

Collaborative discussions

Fostering open communication among developers, product managers, and stakeholders, enabling constructive discussions to align product strategies, share ideas, and resolve issues.

  • Are your inputs valued during collaborative discussions?
  • Does your team handle conflicts well in product meetings?
  • How often do you actively participate during collaborative discussions?

Development and Releases

It refers to creating and deploying software solutions or updates, emphasizing collaboration, streamlined workflows, and reliable deployment to enhance the developer experience.


Tools and technology

Providing developers with the necessary software tools, frameworks, and technologies to facilitate their work in creating and deploying software solutions. 

  • Are you satisfied with the tools provided to you for your development work?
  • Has the availability of tools positively impacted your development process?
  • To what extent do you believe that testing tools adequately support your work?

Code review

Evaluating code changes for quality, adherence to standards, and identifying issues to enhance software quality and promote collaboration among developers.

  • Do you feel that code reviews contribute to your growth and development as a developer?
  • How well does your team address the issues identified during code reviews?
  • How often do you receive constructive feedback during code reviews that helps improve your coding skills?

Code health

Involves activities like code refactoring, performance optimization, and enforcing best practices to ensure code quality, maintainability, and efficiency, thereby enhancing the developer experience and software longevity.

  • Are coding standards and best practices consistently followed in the development process?
  • Do you get enough support with technical debt & code-related issues?
  • Are you satisfied with the overall health of the codebase you’re currently working on?

Frictionless releases

Streamlining software deployment through automation, standardized procedures, and effective coordination, reducing errors and delays for a seamless and efficient process that enhances the developer experience.

  • Do you often have post-release reviews to identify areas for improvement?
  • Do you feel that the release process is streamlined in your projects?
  • Is the release process in your projects efficient?

Culture and Values

It refers to the shared beliefs, norms, and principles that shape a positive work environment, including collaboration, open communication, respect, innovation, diversity, and inclusion, fostering creativity, productivity, and satisfaction among developers.


Psychological safety

Creating an environment where developers feel safe to express their opinions, take risks, and share their ideas without fear of judgment or negative consequences.

  • Do you feel that your team creates an atmosphere where trust, respect, and openness are valued?
  • Do you feel comfortable sharing your thoughts without worrying about judgement?
  • Do you believe that your team fosters a culture where everyone’s opinions are valued?

Recognition

Acknowledging and appreciating developers’ contributions and achievements through meaningful recognition, fostering a positive and motivating environment that boosts morale and engagement.

  • Does recognition at your workplace make you happier and more involved in your job?
  • Do you feel that your hard work is acknowledged by your team members and manager?
  • Do you believe that recognition motivates you to perform better in your role?

Team collaboration

Fostering open communication, trust, and knowledge sharing among developers, enabling seamless collaboration, and idea exchange, and leveraging strengths to achieve common goals.

  • Is there a strong sense of teamwork and cooperation within your team?
  • Are you confident in your team’s ability to solve problems together?
  • Do you believe that your team leverages individual expertise to enhance collaboration?

Learning and growth

Continuous learning and professional development, offering skill-enhancing opportunities, encouraging a growth mindset, fostering curiosity and innovation, and supporting career progression.

  • Does your organization encourage your professional growth?
  • Are there any training programs you would like to see implemented?
  • Does your organization invest enough in employee training and development?

Conclusion

Measuring developer experience continuously is crucial in today’s times. It helps to provide real-time feedback on workflow efficiency, early signs of burnout, and overall satisfaction levels. This further identifies areas for improvement and fosters a more productive and enjoyable work environment for developers.

To learn more about DevEx, visit our website!


Developer Experience Framework: A Comprehensive Guide to Improving Developer Productivity

In today’s times, developer experience has become an integral part of any software development company. A direct relationship exists between developer experience and developer productivity. A positive developer experience leads to high developer productivity, increasing job satisfaction, efficiency, and high-quality products.

When organizations don’t focus on developer experience, they may encounter many problems in workflow. This negatively impacts the overall business performance.

In this blog, let’s learn more about the developer experience framework that is beneficial to developers, engineering managers, and organizations.

What is Developer Experience?

In simple words, Developer experience is about the experience software developers have while working in the organization.

It is the developers’ journey while working with a specific framework, programming languages, platform, documentation, general tools, and open-source solutions.

Positive developer experience = Happier teams

Developer experience has a direct relationship with developer productivity. A positive experience results in high dev productivity which further leads to high job satisfaction, performance, and morale. Hence, happier developer teams.

This starts with understanding the unique needs of developers and fostering a positive work culture for them.

Benefits of Developer Experience

Smooth Onboarding Process

DX ensures that the onboarding process is as simple and smooth as possible. This includes making new developers familiar with the tools and culture as well as giving them the support they need to proceed further in their careers.

It also allows them to get to know other developers, which helps with collaboration, open communication, and seeking help whenever required.

Improves Product Quality

A positive developer experience leads to 3 effective C’s - Collaboration, communication, and coordination. Besides this, adhering to coding standards, best practices, and automated testing helps in promoting code quality and consistency and catching and fixing issues early.

As a result, they can easily create products that can meet customer needs and are free from errors and glitches.  

Increases Development Speed

When developer experience is handled with care, software developers can work more smoothly and meet milestones efficiently. Access to well-defined tools, clear documents, streamlined workflow, and a well-configured development environment are a few of the ways to boost development speed.

It also lets them minimize the need to switch between different tools and platforms which increases the focus and team productivity.

Attract and Retain Top Talents

Developers usually look out for a strong tech culture so they can focus on their core skills and get acknowledged for their contributions. A good developer experience increases job satisfaction and aligns their values and goals with the organization.

In return, developers bring the best to the table and want to stay in the organization for the long run.

Enhanced Collaboration

The right kind of developer experience encourages collaboration and effective communication tools. This fosters teamwork and reduces misunderstandings.

Through collaborative approaches, developers can easily discuss issues, share feedback, and work together on tasks. It helps streamline the development process and results in high-quality work.

Two Key Frameworks and Their Limitations

There are two frameworks to measure developer productivity. However, they come with certain drawbacks. Hence, a new developer framework is required to bridge the gap in how organizations approach developer experience and productivity.

Let’s take a look at DORA metrics and SPACE frameworks along with their limitations:

DORA Metrics

DORA metrics were identified after six years of research and surveys by the DevOps Research and Assessment (DORA) team. They help engineering leaders determine two things:

  • The characteristics of a top-performing team
  • How their performance compares to the rest of the industry

The research defines 4 key metrics:

Deployment frequency

Deployment Frequency measures the frequency of deployment of code to production or releases to end-users in a given time frame.

Lead Time for Changes

Also known as cycle time, Lead Time for Changes measures the time between a commit being made and that commit reaching production.

Mean Time to Recover

This metric is also known as mean time to restore. Mean Time to Recover measures the time required to recover from an incident, i.e., a service incident or defect impacting end-users.

Change Failure Rate

Change Failure Rate measures the proportion of deployments to production that result in degraded service.
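
As a rough illustration, the sketch below computes the four DORA metrics from hypothetical deployment and incident records; the record format and time window are assumptions for the example, not a prescribed implementation.

```python
# Illustrative sketch: computing the four DORA metrics from hypothetical records.
from datetime import datetime, timedelta

deployments = [
    # (commit_time, deploy_time, caused_degradation)
    (datetime(2024, 5, 1, 9),  datetime(2024, 5, 1, 17), False),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 3, 12), True),
    (datetime(2024, 5, 6, 8),  datetime(2024, 5, 6, 15), False),
    (datetime(2024, 5, 8, 11), datetime(2024, 5, 9, 10), False),
]
incidents = [
    # (detected_at, resolved_at)
    (datetime(2024, 5, 3, 13), datetime(2024, 5, 3, 19)),
]
period_days = 14

# Deployment Frequency: deployments per day in the period.
deployment_frequency = len(deployments) / period_days

# Lead Time for Changes: average time from commit to production.
lead_time = sum((d - c for c, d, _ in deployments), timedelta()) / len(deployments)

# Mean Time to Recover: average time to resolve an incident.
mttr = sum((r - d for d, r in incidents), timedelta()) / len(incidents)

# Change Failure Rate: share of deployments that degraded the service.
change_failure_rate = sum(failed for *_, failed in deployments) / len(deployments) * 100

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Lead time for changes: {lead_time}")
print(f"MTTR: {mttr}")
print(f"Change failure rate: {change_failure_rate:.0f}%")
```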


Limitations of DORA metrics

It Doesn't Take into Consideration All the Factors that Add to the Success of the Development Process

DORA metrics are a useful tool for tracking and comparing DevOps team performance. Unfortunately, they don’t take into account all the factors behind a successful software development process. For example, assessing coding skills across teams can be challenging due to varying levels of expertise. These metrics also overlook the actual efforts behind the scenes, such as debugging, feature development, and more.

It Doesn't Provide Full Context

While DORA metrics tell us which metric is low or high, they don’t reveal the reason behind it. Suppose there is an increase in lead time for changes; it could be due to various reasons. For example, DORA metrics might not reflect the effectiveness of feedback provided during code review, thereby overlooking the true impact and value of the code review process.

The Software Development Landscape is Constantly Evolving

The software development landscape is changing rapidly, so DORA metrics may not quickly adapt to emerging programming practices, coding standards, and other software trends. For instance, code review has evolved to include not only traditional peer reviews but also practices like automated code analysis. DORA metrics may not fully capture these new approaches, and hence may not properly assess the effectiveness of such reviews.

SPACE Framework

This framework helps in understanding and measuring developer productivity. It takes into consideration both the qualitative and quantitative aspects and uses various data points to gauge the team's productivity.

The 5 dimensions of this framework are:

Satisfaction and Well-Being

The dimension of developers’ satisfaction and well-being is often evaluated through developer surveys, which assess whether team members are content, happy, and exhibiting healthy work practices. There is a strong connection between contentment, well-being, and productivity, and teams that are highly productive but dissatisfied are at risk of burning out if their well-being is not improved.

Performance

The SPACE Framework originators recommend evaluating a developer’s performance based on their work outcome, using metrics like Defect Rate and Change Failure Rate. Every failure in production takes away time from developing new features and ultimately harms customers.

Activity

This dimension includes activity metrics that provide insights into developer outputs, such as on-call participation, pull requests opened, the volume of code reviewed, or documents written, which are similar to older productivity measures. However, the framework emphasizes that such activity metrics should not be viewed in isolation but should be considered in conjunction with other metrics and qualitative information.

Communication and Collaboration

Teams that are highly transparent and communicative tend to be the most successful. This enables developers to have a clear understanding of their priorities and of how their work contributes to larger projects, and it also facilitates knowledge sharing among team members.

Indicators that can be used to measure collaboration and communication may include the extent of code review coverage and the quality of documentation.
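
As a rough illustration of one such indicator, the sketch below computes code review coverage, i.e. the share of merged pull requests that received at least one review, from hypothetical PR records; the record format is an assumption for the example.

```python
# Illustrative sketch: code review coverage as a collaboration indicator.
merged_prs = [
    {"id": 101, "reviews": 2},
    {"id": 102, "reviews": 0},
    {"id": 103, "reviews": 1},
    {"id": 104, "reviews": 3},
]

reviewed = sum(1 for pr in merged_prs if pr["reviews"] > 0)
review_coverage = reviewed / len(merged_prs) * 100

print(f"Code review coverage: {review_coverage:.0f}% of merged PRs were reviewed")
```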

Efficiency and Flow

The concept of efficiency in the SPACE framework pertains to an individual’s ability to complete tasks quickly with minimal disruption, while team efficiency refers to the ability of a group to work effectively together. These are essential factors in reducing developer frustration.


Limitations of SPACE framework

It Doesn’t Tell You WHY

While the SPACE framework measures developer productivity, it doesn’t tell you why certain measurements have a specific value, nor can it identify the events that triggered a change. The framework offers a structured approach to evaluating internal and external factors but doesn’t delve into the deeper motivations driving them.

Limited Scope for Innovation

Too much focus on efficiency and stability can stifle developers’ creativity and innovation. The framework can make teams focus more on hitting specific targets, and a culture that embraces change, experimentation, and a certain level of uncertainty doesn’t align with its principles.

Too Many Metrics

This framework has 5 different dimensions and multiple metrics, so it produces an overwhelming amount of data. Engineering leaders then need to set up data collection, maintain data accuracy, and analyze the results, which makes it difficult to identify critical insights and prioritize actions.

Need for a new Developer Experience Framework

This new framework suggests to organizations and engineering leaders what they should focus on when measuring developer productivity and experience.

Below are the key focus areas and their drivers incorporated in the Developer Experience Framework:

Manager Support

Refers to the level of assistance, guidance, and resources provided by managers or team leads to support developers in their work.

Empathy

The ability to understand and relate to developers, actively listen, and show compassion in interactions.

Coach and Guide

The role of managers is to provide expertise, advice, and support to help developers improve their skills, overcome challenges, and achieve career goals.

Feedback

The ability to provide timely and constructive feedback on performance, skills, and growth areas helping developers gain insights, refine their skills, and work towards achieving their career objectives.

Developer Flow

Refers to a state of optimal engagement and productivity that developers experience when they are fully immersed and focused on their work.

Work-Life Balance

Maintaining a healthy equilibrium between work responsibilities and personal life promotes well-being, boundaries, and resources for managing workload effectively.

Autonomy

Providing developers with the freedom and independence to make decisions, set goals, and determine their approach and execution of tasks.

Focus Time

The dedicated periods of uninterrupted work where developers can deeply concentrate on their tasks without distractions or interruptions.

Goals

Setting clear objectives that provide direction, motivation, and a sense of purpose in developers' work enhances their overall experience and productivity.

Product Management

Refers to the practices involved in overseeing the lifecycle of a software product, from ideation to development, launch, and ongoing management.

Clear Requirements

Providing developers with precise and unambiguous specifications, ensuring clarity, reducing ambiguity, and enabling them to meet the expectations of stakeholders and end-users.

Reasonable Timelines

Setting achievable and realistic project deadlines, allowing developers ample time to complete tasks without undue pressure or unrealistic expectations.

Collaborative Discussions

Fostering open communication among developers, product managers, and stakeholders, enabling constructive discussions to align product strategies, share ideas, and resolve issues.

Development and Releases

Refers to creating and deploying software solutions or updates, emphasizing collaboration, streamlined workflows, and reliable deployment to enhance the developer experience.

Tools and Technology

Providing developers with the necessary software tools, frameworks, and technologies to facilitate their work in creating and deploying software solutions.

Code Health

Involves activities like code refactoring, performance optimization, and enforcing best practices to ensure code quality, maintainability, and efficiency, thereby enhancing the developer experience and software longevity.

Frictionless Releases

Streamlining software deployment through automation, standardized procedures, and effective coordination, reducing errors and delays for a seamless and efficient process that enhances the developer experience.

Culture and Values

Refers to shared beliefs, norms, and principles that shape a positive work environment. It includes collaboration, open communication, respect, innovation, diversity, and inclusion, fostering creativity, productivity, and satisfaction among developers.

Psychological Safety

Creating an environment where developers feel safe to express their opinions, take risks, and share their ideas without fear of judgment or negative consequences.

Recognition

Acknowledging and appreciating developers' contributions and achievements through meaningful recognition, fostering a positive and motivating environment that boosts morale and engagement.

Team Collaboration

Fostering open communication, trust, and knowledge sharing among developers, enabling seamless collaboration, and idea exchange, and leveraging strengths to achieve common goals.

Learning and Growth

Continuous learning and professional development, offering skill-enhancing opportunities, encouraging a growth mindset, fostering curiosity and innovation, and supporting career progression.

Conclusion

The developer experience framework creates an indispensable link between developer experience and productivity. Organizations that neglect developer experience face workflow challenges that can harm business performance.  

Prioritizing developer experience isn’t just about efficiency. It includes creating a work culture that values individual developers, fosters innovation, and propels software development teams toward unparalleled success.

Typo aligns seamlessly with the principles of the Developer Experience Framework, empowering engineering leaders to revolutionize their teams.


7 Tips to Improve Developer Happiness

Happy developers are more engaged and productive in the organization. They are more creative and less likely to quit.

But, does developer happiness only come from the fair compensation provided to them? While it is one of the key factors, other aspects also contribute to their happiness.

As times change, developers’ perspectives are shifting too: from ‘what we do for a living’ to ‘how we want to live’. Happiness is now becoming a major driving force in their decision to take, stay in, or leave a job.

In this blog, let’s delve deeper into developer happiness and ways to improve it in the organization:

What is Developer Happiness?

In simple words, Developer happiness can be defined as a ‘State of having a positive attitude and outlook on one’s work’.

It is one of the essential elements of organizational success. An increase in developer happiness results in higher engagement and job satisfaction. This gives software developers the freedom to be human and survive and thrive in the organization.


Below are a few benefits of having happy developers in the workplace:

Breed Innovation

Happy developers have a positive mindset toward their jobs and organization. They are most likely to experiment with new ideas and contribute to creative solutions. They are more likely to take calculated risks and step out of their comfort zone to foster innovation and try new approaches.

Faster Problem Resolution

Having a positive mindset leads to quicker problem-solving. When software developers are content, they are open to collaboration with other developers and increase communication. This facilitates faster issue resolution, anticipates potential issues, and addresses them before they escalate.

Ownership and Accountability

Developer happiness comes from a positive work environment. When they feel valued and happy about their work, they take responsibility for resolving issues promptly and align their work and the company goals. They become accountable for their work and want to give their best. This not only increases their work satisfaction but developer experience as well.

Improves Code Quality

Happy developers are more likely to pay attention to the details of their code. They ensure that the work is clean and adheres to the best practices. They are more open to and cooperative during code reviews and take feedback as a way to improve and not criticism.

Health and Well-Being

A positive work environment, supportive team, and job satisfaction result in developer happiness. This reduces their stress and burnout and hence, improves developers’ overall mental and physical well-being.

The above-mentioned points also result in increased developer productivity. In the next section, let’s understand how developer happiness is related to developer productivity.

How Developer Happiness and Developer Productivity are Related to Each Other?

According to the Social Market Foundation, happy employees are 12% more productive than unhappy employees on average.

Developer Happiness is closely linked to Developer Productivity.

Happy developers perform their tasks well. They treat their customers well and take their queries seriously, which results in happy customers too. These developers are also likely to take fewer sick leaves and work breaks, presenting the organization and its work culture in a good light.

Moreover, software developers find fulfillment in their roles. This increases their enthusiasm and commitment to their work. They wouldn’t mind going the extra mile to achieve project goals and perform their tasks well.

As a result, happy developers are highly motivated and engaged in their work which leads to increased productivity and developer experience.

Three Core Aspects of Developer Happiness

Following are the three pillars of developer happiness:

Right Tools and Technologies

Tools have a huge impact on developer happiness and retention. The latest and most reliable tools save a lot of time and effort, making developers more effective in their roles and improving their day-to-day work. This helps in creating a flow state, a comfortable cognitive load, and fast feedback loops.

Passionate Developer Teams

When developers have more control over their roadmaps, it challenges them intellectually and allows them to make meaningful decisions. Having autonomy and a sense of ownership over their work, allows them to deliver efficient and high-quality software products.

Positive Engineering Culture

The right engineering culture creates space for developers to learn, experiment, and share. It allows them to have an ownership mindset, encourages strong agile practices, and is a foundation for productive and efficient teams that drive the business forward. A positive engineering culture also prioritizes psychological safety.

Ways to Boost Developer Happiness in the Workplace

Use Relevant Tools and Technologies

One of the main ways to improve developer happiness is to invest in the right tools and technologies. Experiment with these tools from time to time and monitor the results. If a tool seems to be the right fit, go ahead with it. However, be cautious not to adopt every new tool and technology that comes your way; use those that are relevant, up to date, and compatible with your software. You can also set policies for how someone can obtain new equipment.

The combination of efficient workspace and modern learning tools helps in getting better work from developers. This also increases their productivity and hence, results in developer happiness.

Flexible Work Arrangement

When developers have control over their working style and work schedules, it gives them a healthy work-life balance. As times change, so do perspectives; not everyone is meant for 9-5 jobs. Flexibility allows developers to work when their productivity is at its peak. This becomes a win-win for developer satisfaction and project success.

For team communication and collaboration, particular core hours can be set, e.g., 12 PM - 5 PM; outside of those hours, anyone can work at any time of the day. Apart from this, asynchronous communication can be encouraged to accommodate varied work schedules.

Ensure that there are open communication channels to understand the evolving needs and preferences of the development team.

Keep Realistic Expectations

Ensure that you don’t get caught up in completing objectives. Understand that you are dealing with human beings. Hence, keep realistic expectations and deadlines from your developers. Know that good software takes time. It includes a lot of planning, effort, energy, and commitment to create meaningful projects. Also, consider their time beyond work as well. By taking note of all of these, set expectations accordingly.

But that’s not all! Ensure that you prioritize quality over quantity. This not only boosts their confidence in skills and abilities but also allows them to be productive and satisfied with their role.


Enable Deep Focus

A software development job is demanding and requires a lot of undivided focus. Too many meetings or overwork can distract developers and cause them to lose focus. The state of flow is important for deep work. If a developer is working from the office, having booths can be helpful, letting them isolate themselves and focus deeply on work. If they are working remotely, developers can turn off notifications or set their status to focus time.

Focus sessions can range from less than half an hour to two hours or more. Make sure developers take breaks between these sessions. You can also make them aware of time management techniques so that they know how to manage their time effectively and efficiently.

Promote Continuous Learning

Software development is an ever-changing field. Hence, developers need to upskill and stay up to date with the latest developments. Have cross-sharing and recorded training sessions so that they are aware of the current trends in the software development industry. You can also provide them with the necessary courses, books, and newsletters.

Apart from this, you can also do task-shifting so they don’t feel their work to be monotonous. Give them time and space to skill up before any new project starts.

Appreciation and Recognition

Developers want to know their work counts and to feel proud of their job. They want to be seen, valued, heard, and understood. To foster a positive work environment, celebrate their achievements. Even a ‘thank you’ goes a long way.

Since developers' jobs are demanding, they have a strong emotional need to be recognized for their accomplishments. They expect genuine appreciation and recognition that match the impact or output. It should be publicly acknowledged.

You can give them credit for their work in daily or weekly group meetings. This increases their job satisfaction, retention, and productivity.

Improves Overall Well-Being

The above-mentioned points help developers improve their physical and mental well-being. It not only helps them in the work front but also their personal lives. When developers aren’t loaded with lots of work, it lets them be more creative in solving problems and decision-making. It also encourages a healthy lifestyle and allows them to have proper sleep.

You can also share mental health resources and therapists' details with your developers. Besides this, you can have seminars and workshops on how health is important and promote physical activities such as walking, playing outdoor games, swimming, and so on.

Conclusion

Fostering developer happiness is not just a desirable goal, but rather a driving force for an organization’s success. By investing in supportive cultures, effective tools, and learning opportunities, organizations can empower developers to perform their development tasks well and unleash their full potential.

Typo helps in revolutionizing your team's efficiency and happiness.


Podcasts


'How AI is Revolutionizing Software Engineering' with Venkat Rangasamy, Director of Engineering at Oracle

In this episode of the groCTO Originals podcast, host Kovid Batra talks to Venkat Rangasamy, the Director of Engineering at Oracle & an advisory member at HBR, about 'How AI is Revolutionizing Software Engineering'.

Venkat discusses his journey from a humble background to his current role and his passion for mentorship and generative AI. The main focus is on the revolutionary impact of AI on the Software Development Life Cycle (SDLC), making product development cheaper, more efficient, and of higher quality. The conversation covers the challenges of using public LLMs versus local LLMs, the evolving role of developers, and actionable advice for engineering leaders in startups navigating this transformative phase.

Timestamps

  • 00:00 - Introduction
  • 00:58 - Venkat's background
  • 01:59 - Venkat's Personal and Professional Journey
  • 05:11 - The Importance of Mentorship and Empathy
  • 09:19 - AI's Role in Modern Engineering
  • 15:01 - Security and IP Concerns with AI
  • 28:56 - Actionable Advice for Engineering Leaders
  • 32:56 - Conclusion and Final Thoughts

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of the groCTO podcast. And today with us, we have a very special guest, Mr. Venkat Rangasamy. He's the Director of Engineering at Oracle. He is the advisor at HBR Advisory Council, where he's helping HBR create content on leadership and management. He comes with 18 plus years of engineering and leadership experience. It's a pleasure to have you on the show, Venkat. Welcome. 

Venkat Rangasamy: Yup. Likewise. Thank you. Thanks for the opportunity to discuss on some of the hot topics what we have. I'm, I'm pleasured to be here. 

Kovid Batra: Great, Venkat. So I think there is a lot to talk about, uh, what's going on in the engineering landscape. And just for the audience, uh, today's topic is around, uh, how AI is impacting the overall engineering landscape and Venkat coming from that space with an immense experience and exposure, I think there will be a lot of insights coming in from your end. Uh, but before we move on to that section, uh, I would love to know a little bit more about you. Our audience would also love to know a little bit more about you. So anything that you would like to share, uh, from your personal life, from your professional journey, any hobbies, any childhood memories that shape up who you are today, how things have changed for you. We would love to hear about you. Yeah. 

Venkat Rangasamy: Yup. Um, in, in, in my humble background, I started, um, without nothing much in place, where, um, started my career and even studies, I did really, really on like, not even electricity to go through to, when we went for studies. That's how I started my study, whole schooling and everything. Then moved on to my college. Again, everything on scholarship. It's, it's like, that's where I started my career. One thing kept me motivated to go to places where, uh, different things and exploring opportunities, mentorship, right? That something is what shaped me from my school when I didn't have even, have food to eat for a day. Still, the mentorship and people who helped me is what I do today. 

With that context, why I'm passionate about the generative AI and other areas where I, I connect the dots is usually we used to have mentorship where people will help you, push you, take you in the right direction where you want to be in the different challenges they put together, right? Over a period of time, the mentorship evolved. Hey, I started with a physical mentor. Hey, this is how they handhold you, right? Each and every step of the way what you do. Then when your career moves along, then that, that handholding becomes little off, like it becomes slowly, it becomes like more of like instructions. Hey, this is how you need to do, get it done, right? The more you grow, even it will be abstracted. The one piece what I miss is having the handholding mentorship, right? Even though you grow your career, in the long run, you need something to be handholding you to progress along the way as needed. I see one thing that's motivated me to be part of the generative AI and see what is going on is, it could be another mentor for you to shape your roles and responsibility, your career, how do you want to proceed, bounce your ideas and see where, where you want to go from there on the problem that you have, right? In the context of the work-related stuff. 

Um, how, how you can, as a person, you can shape your career is something I'm vested, interested in people to be successful. In the long run, that's my passion to make people successful. The path that I've gone through, I just want to help people in a way to make them successful. That's my belief. I think making, pulling like 10 to 100, how many people you can pull out. The way when you grow is equally important. It's just not your growth yourself. Being part of that whole ecosystem, bring everybody around it. Everybody's career is equally important. I'm passionate about that and I'm happy to do that. And in my way, people come in. I want to make sure we grow together and and make them successful. 

Kovid Batra: Yeah, I think it's, uh, it's because of your humble background and the hardships that you've seen in the early of your, uh, childhood and while growing up, uh, you, you share that passion and, uh, you want to help other folks to grow and evolve in their journeys. But, uh, the biggest problem, uh, like when, when I see, uh, with people today is they, they lack that empathy and they lack that motivation to help people. Why do you think it's there and how one can really overcome this? Because in my foundation, uh, in my fundamental beliefs, we, as humans are here to give back to the community, give back to this world, and that's the best feeling, uh, that I have also experienced in my life, uh, over the last few years. I am not sure how to instill that in people who are lacking that motivation to do so. In your experience, how do you, how do you see, how do you want to inspire people to inspire others? 

Venkat Rangasamy: Yeah. No, it's, it's, it's like, um, It goes both ways, right? When you try to bring people and make them better is where you can grow yourself. And it becomes like, like last five to 10 years, the whole industry's become like really mechanics, like the expectation went so much, the breathing space. We do not have a breathing space. Hey, I want to chase my next, chase my next, chasing the next one. We leave the bottom food chain, like, hey, bring the food chain entirely with you until you see the taste of it in one product building. Bringing entire food chain to the ecosystem to bring them success is what makes your team at the end of the day. If we start seeing the value for that, people start spending more time on growing other people where they will make you successful. It's important. And that food chain, if it breaks, if it broke, or you, you kind of keep the food chain outside of your progression or growth, that's not actual growth because at one point of time, you get the roadblocks, right? At that point of time, your complete food chain is broken, right? Similar way, your career, the whole team, food chain is, it's completely broken. It's hard to bring them back, get the product launched at the time what you want to do. It's, it's, it's about building a trust, bring them up to speed, make them part of you, is what you have to do make yourself successful. Once you start seeing that in building a products, that will be the model. I think the people will follow that. 

The part is you rightly pointed out empathy, right? Have some empathy, right? Career can, it can be, can, can, it can go its own progress, but don't, don't squeeze too much to make it like I want to be like, it won't happen like in a timely manner like every six months and a year. No, it takes its own course of action. Go with this and make it happen, right? There are ups and downs in careers. Don't make, don't think like every, every quarter and every year, my career should be successful. No, that's not how it works. Then, then there is no way you see failure in your career, right? That's not the way equilibrium is. If that happened, everybody becomes evil. That's not a point, right? Every, everything in the context of how do you bring, uplift people is equally important. And I think people should start focusing more on the empathy and other stuff than just bringing as an IC contributor. Then you want to be successful in your own role, be an IC contributor, then don't be a professional manager bringing your whole.. There's a chain under you who trust you and build their career on top of your growth, right? That's important. When you have that responsibility, be meaningful, how do you bring them and uplift them is equally important. 

Kovid Batra: Cool. I think, uh, thanks a lot, uh, for this sweet and, uh, real intro about yourself. Uh, we got to, uh, know you a little more now. And with that, I, I'm sorry, but I was talking to you on LinkedIn, uh, from some time and I see that you have been passionately working with different startups and companies also, right, uh, in the space of AI. So, uh, With this note, I think let's move on to our main section, um, where you would, uh, be, where we would be interested in knowing, uh, what kind of, uh, ideas and thoughts, uh, are, uh, encompassing this AI landscape now, where engineering is changing on a day-in and day-out basis. So let's move on to our main section, uh, how AI is impacting or changing the engineering landscape. So, starting with your, uh, uh, advisories and your startups that you're working with, what are the latest things that are going on in the market you are associated with and how, how is technology getting impacted there? 

Venkat Rangasamy: Here is, here is what the.. Git analogy, I just want to give some history background about how AI is getting mainstream and people are not quite realizing what's happening around us, right? The part is I think 2010, when we started presenting cloud computing to folks, um, in the banking industry, I used to work for a banking customer. People really laughed at it. Hey, my data will be with me. I don't think it will move any time closer to cloud or anything. It will be with, with and on from, it is not going to change, right? But, you know, over a period of time, cloud made it easy. And, and any startups that build an application don't need to set up any infrastructure or anything, because it gives an easy way to do it. Just put your card, your infrastructure is up and running in a couple of hours, right? That revolutionized a lot the way we deploy and manage our applications.

The second pivotal moment in our history is mobile apps, right? After that, you see the application dominance was with enterprise most of the time. Over a period of time, when mobile got introduced, the distribution channels became easier to reach out to end users, right? Then a lot of billion-dollar unicorns like Uber and Spotify, everything got built out. That's the second big revolution happening. After mobile, I would say there were foundations happening like big data and data analytics. There is some part of ML, it, over a period of time it happened. But revolutionizing the whole aspect of the software, like how cloud and mobile had an impact on the industry, I see AI become the next one. The reason is, um, as of now, the software are built in a way, it's traditional SDLC practice, practice set up a long time ago. What, what's happening around now is that practice is getting questioned and changed a bit in the context of how are we going to develop a software, make them cheaper, more productive and quality deliverables. We used to do it in the 90s. If you've worked during that time, right, COBOL and other things, we used to do something called extreme programming. Peer programming and extreme programming is you, you have an assistant, you sit together, write together a bunch of instructions, right? That's how you start coding and COBOL and other things to validate your procedures. The extreme programming went away. And we started doing code based, IDE based suggestions and other things for developers. But now what's happening is it's coming 360, and everything is how Generative AI is influencing the whole aspect of software industry is, is, is it's going to be impactful for each and every life cycle of the software industry.

And it's just at the initial stage, people are figuring out what to do. From my, my interaction and what I do in my free time with NJ, Generative AI to Change this SDLC process in a meaningful way, I see there will be a profound impact on what we do in a software as software developers. From gathering requirements until deploying, deploying that software into customers and post support into a lifecycle will have a meaningful impact, impact. What does that mean? It'll have cheaper product development, quality deliverables. and having good customer service. What does it bring in over a period of time? It'll be a trade off, but that's where I think it's heading at this point of time. Some folks have started realizing, injecting their SDLC process into generative AI in some shape and form to make them better.

We can go into detail on how each phase will look, but that's what I see from an industry point of view, how folks are approaching generative AI. It's very conservative, I understand, because that's how we started with cloud and other areas, but it's going to be mainstream. Each and every aspect of it will be relooked at, and from a change management point of view, in a couple of years the way we see the SDLC will be quite different from what we have today. That's my belief and what I see in the industry; that's how it's getting there. Yep. Especially software development itself. It's like eating your own dog food, right? That's happened for a long time. But this is the first time that the software development process itself is going to be disrupted in this way. It'll have a more, uh, profound impact on the whole of product development. And it'll be cheaper; go-to-market will be much cheaper. Like how mobile revolutionized things, the next evolution will be in using, um, generative AI-like capabilities to make your product cheaper and go to market in a shorter time. That's, that's going to happen eventually. 

Kovid Batra: Right. I think, uh, this, this is bound to happen. Even I believe so. It is already there. I mean, it's not like, uh, you're talking about the far future. It's almost there; it's happening right now. But what do you think on the point where this technology is right now, uh, not hosted locally, right? Uh, we are talking about bringing, uh, LLMs locally into your servers, into your systems. How do you see that piece evolving? Because lately I have been seeing a lot of concerns from a lot of companies and leaders around the security aspect, around the IP aspect, where you are putting all your code into a third-party server to generate new code, right? You can't stop developers from doing that because they've already started doing it. Earlier, the method was going to Stack Overflow and taking some code from there, or going to GitHub or GitLab repositories and taking some code. But now this is happening from a single source, which is cloud hosted, and you have to share your code with third parties. That has started becoming a concern. So though the whole landscape is going to change, as you said, I think there is a specific direction in which things are moving, right? Very soon people realized that there is an aspect of security and IP that comes along with using such tools. So how do you see that piece progressing in the market right now? And what are the things, what are the products, what are the services that are coming up and impacting this landscape? 

Venkat Rangasamy: It's a good question, actually. After a couple of years, right, here is the realization even I came to. The services which are hosted on a cloud, like, uh, public LLMs, right, you can use such an LLM to generate some of these things. From a POC point of view, it looks great. You can see what is coming your way. But when it comes to the real product, making a product in a production environment, it is not, um, well-defined, because as I said, right, security, audit, compliance, code IP, right? And your compliance team, it's about who owns the IP part of it, right? It's those aspects, as well as having your code, your IP, go into some trained public LLM. It's kind of a compromise, there is some concern around that area, and people and enterprises have started looking at something to keep it within their workspace. At the end of the day, from a developer's point of view, the experience the developer has, it has to be within that IDE itself, right? That's where it becomes successful. Anything kept outside of that IDE is not fully baked into the developer life cycle, which means the tool set has to behave as if it's running locally, right? If you ask me, is it doable? For sure, yes. If you'd asked me a year back, I'd have said no. Um, running your own LLM within a laptop, like another IDE, like how you run an IDE? It would have been really challenging if you'd asked me a year back. But today, I was doing some recent experiments on these similar challenges, right? With corporates and other folks, any big enterprises, right? Talk to any security team or any startup founders, and the major roadblock is: I don't want to share my IP, my code, outside of my workspace. So bringing that experience into your workspace is equally important. 

With that context, I was doing some research on one of the POC projects on bringing in Code Llama. Code Llama is one of the public LLMs, uh, trained by Meta for different languages, right? At the end of the day, the smaller the LLM, the better for these kinds of tasks, right? You don't need 700 billion or 70 billion parameters; those parameter counts are irrelevant at this point for coding, because coding is all about a bunch of instructions which need to be trained on, right? And on top of it, your custom coding and templates, just as a coding example. Now, how do you solve this problem? Set up your own local LLM. Um, I've tested and benchmarked on both Mac and PC. Mac performs phenomenally well; I don't see any difference. You should be able to set up your LLM. There is a product called Ollama. Ollama is, uh, where you can set up your LLM within your workspace as if it's running in your laptop. There's nothing going out of your laptop. Set that up and go to your IDE, create a simple plugin. I created a VS Code plugin, a Visual Studio Code plugin, connected to your local LLM, because Ollama will give you a REST API; just connect to it. Now, within your IDE, whatever code is there, that is going to talk to your LLM, which means every developer can have their own LLM. And as long as you have the right trained data set for a basic language, Java, Python, and others, it works phenomenally well, because it's already trained for it. If you want custom coding and custom templating, you just need to train that aspect of it, your coding standards.

Once you train it, keep it local and just run it as part of the IDE. It's a whole integrated experience which runs within the developer workspace, and that's what is scalable in the long run. If anything goes out of that workspace, we have seen the problem many times over the past couple of years. Even though we say public LLMs are good enough to do larger tasks on the coding side, if you want to analyze a complete file and you send it to a public LLM through some of the coding and testing services that are available, the challenge is the size of the payload: there is a limit on the number of tokens you can send, which means analyzing your entire project repository is not possible the way these services are set up publicly today, right? Which means you need to have your own LLM within the workspace, which can work as part of your workspace, that's what I would say. Like, how do you run your database? Run it as part of your workspace; just make it happen. That is possible. And that's going to be the future. I don't think going to a public LLM is a viable option, but have the pipeline set up; it's like patching or giving a database to your developers, it runs locally. Have that set up where everybody can use it within the local workspace itself. That's going to be the future, and the tools and tool sets around that are really coming together. And it's at the phase where, a year from now, you won't even see it as a big thing. It's just part of your skill set. Just set it up and connect your editor, whatever source code editor you have, connect it to the LLM and run with it. I see that's the future for the coding part of it. Other SDLC phases have different nuances to them, but coding, I think, should be pretty straightforward in a year's time frame. That's going to be the normal practice. 
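For readers who want to try the setup Venkat describes, here is a minimal sketch: Code Llama served locally through Ollama and queried over its local REST API, so nothing leaves the machine. It assumes Ollama is installed, the codellama model has been pulled, and the server is listening on its default port (11434); treat it as an illustration, not a supported integration.

# Minimal sketch: query a locally hosted Code Llama model via Ollama's REST API.
# Assumes `ollama pull codellama` has been run and the Ollama server is listening
# on its default port (11434). Nothing leaves the developer's machine.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def ask_local_llm(prompt: str, model: str = "codellama") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,   # return one JSON object instead of a token stream
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read().decode("utf-8"))
    return body["response"]

if __name__ == "__main__":
    # Example: ask the local model to review a snippet against team standards.
    snippet = "def add(a, b):\n    return a+b"
    print(ask_local_llm(f"Review this Python function for style issues:\n{snippet}"))

An editor plugin like the VS Code extension Venkat mentions would make the same HTTP call, just wired to the current buffer instead of a hard-coded snippet.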

Kovid Batra: So I think, uh, what I understand from your opinion is that most of the market would be shifting towards local LLM models, right? Yeah. Uh, that's going to be the future. But I'm not sure if I'm drawing the right analogy here, but let's talk about, uh, something like GitHub, which is, uh, cloud-hosted, versus one which is in-house, right? Uh, the teams, the companies always had the option of hosting it locally, right? But today, um, I'm not sure of the percentage, uh, how many teams are using cloud-based GitHub versus a locally, uh, operated GitHub. But in that situation, they are hosting their code on a third party, right? The code is there. 

Venkat Rangasamy: Yup. 

Kovid Batra: The market didn't shape up that way if we look at it from that perspective of code security and IP and everything. Uh, why do you think it would be different for, uh, local LLMs? Wouldn't the market be fragmented? Like, large-scale organizations who have grown beyond a certain size have that mindset now, “Let's have something in-house,” and they would opt for local LLMs. Whereas the small companies are still establishing themselves. I mean, could it not follow a similar path to what happened with how you manage your code? 

Venkat Rangasamy: I think it is very well possible. The only difference between GitHub and an LLM is, um, the artifact. GitHub is more like artifact management, right? When you have your IP, you're keeping it in a kind of repository to keep everything safe, right? It's just that, with versioning, branching and other stuff.

Kovid Batra: Right. 

Venkat Rangasamy: Um, the only problem there related to security is whether there is any vulnerability within your code, or whether your repository is secure, right? That is the kind of compliance that needs to be there. As long as that's satisfied, we're good. But from an LLM lifecycle point of view, the IP, what we call IP so far in software, is the code you write, um, the business logic associated with that code, and the customizations happening around it; that is what your IP is all about. As of now, those IPs are patented, which means, hey, this is what my patent is all about, this is what my IP is all about. Now, once you start giving your IP data to a public LLM, it'll be challenging, because at the end of the day, any data that goes through it can be used for training. Using the data the user sends through, any LLM can be trained on that dataset. If you ask me whether every application is so critical that you cannot share any IP, not really. Building simple web pages or having REST services is okay, because I don't think any IP is bound up in those things. But where you have the core business of running your own workflows or your own calculations, that is where it's going to be tougher to use any public LLM.

And another challenge, what I see in the community, is that small startups, right, won't do much customization on the frameworks. They take Java as Java, Node as Node, they take React as plain vanilla, and just run through end-to-end, right? Their goal is to get the product to market quicker, right, in the initial stage when you have 5-10 developers. But when the team grows, what happens is, and every enterprise is bound to go through this, I've gone through a couple of cycles of it, you start putting together a framework around the whole standardization of coding: the scaffolding, creating your test cases; the whole life cycle will have your own standards enforced on top of it, because you need to make it consistent across different developers, and because the team grew from 5 to 1,000, and 1,000 to 10,000, it's hard to manage if you don't have standards around it, right? That's where you have challenges using a public LLM, because you will have your own code with your own standards, which the LLM is not trained on, even though it's a simple application. Even a simple application will have challenges at that point. But for the basics, you can still use it. Then again, you will have the challenge of how big a file you can analyze using a public LLM. That's one challenge you might have. But the answer to your question: yes, it will be hybrid. It won't be 100 percent of everybody needing to have their own LLM trained and set up. In the initial stages, it's totally fine to use public, because that's how you're going to grow; startup companies don't have many resources to put into building their own frameworks. But once they get into a shape where they want standardized practices, like how they build their own frameworks and other things, in a similar way, at one point in time they'd want to bring it up on their own setup and run with it. Large enterprises, for sure, are going to have their own developer productivity suite, like what they did with their frameworks and other platforms. But a small startup might start with public; in the long run, over a period of time, that might change. 

And the benefit of going hybrid is that you'll get your product to market quickly, right? Because at the end of the day, that's what's important for startups. It's not about getting everything set up exactly the way they want. That's important, but at the same time you need to go to market with the amount of money you have, so where do you want to prioritize your money? If I take it that way, code generation and the whole LLM piece will still play a crucial role in development. But how do you use it, and which third party can you use? Of course, there will be some choices, and what I see in the future is that even these LLMs will be set up and trained on your own data, more in a hybrid cloud than a public cloud, which means the LLM you train in a hybrid cloud has visibility only into your code. It's not a public LLM; it's more of a private LLM, trained and deployed on a cloud, that can be used by your team. That'll be the hybrid approach in the long run. It's going to scale. 

Kovid Batra: Got it. Great. Uh, with that, I think, uh, just to put out some actionable advice, uh, for all the engineering leaders out there who are going through this phase of the AI transformation, uh, is there anything you'd offer as actionable advice for those leaders, like what should they focus on right now and how should they make that transition? And I'm talking about, uh, companies where these engineering leaders are working, which are, uh, in the Series A, Series B, Series C kind of bracket. I know this is a huge bracket, but what kind of advice would you give to these companies? Because they're in the growing phase of the whole cycle of a company, right? So what should they focus on right now, at this stage?

Venkat Rangasamy: Here, here is where to start. I was talking to a couple of, uh, ventures recently about a similar topic, how the landscape is going to change for software development, right? One thing that came up frequently in that call was: it's getting cheaper to develop a product and go to market faster, and the expectations around software development have been changing for quite a while, right? In the sense that the expectations around software development and the cost associated with that development are going to change drastically. At the same time, be clear about your strategy. It's not like we can change productivity by 50 percent overnight. Keep it realistic, right? Hey, this is what I want to make. Here is my charter to go through, from ideation to go-to-market. Here are the meaningful places where I can introduce something which can help the developers and other roles like PMs. It could even be post-sales support, right? Have a meaningful strategy. Just don't go in blank with the traditional way you have, because your investors and advisors are going to start asking questions, because they're going to see a similar pattern from others, right? Because that's how others have started looking into it. I would say proactively start going through that landscape and map your process to see where you can inject some meaningful, uh, areas where it can have an impact, right?

And be practical about it. Don't give a commitment like, hey, I'll make my development 50 percent cheaper overnight; you might get burned because that's not reality. Instead: in my unit test cases and the areas where I can build quality products within this budget, I can guarantee something that can be an industry benchmark. I can do that by introducing some of these practices in test cases, post-sales customer support, and writing code in some aspects, right? Um, that is what you need to set up, uh, when you start, uh, going for venture funding. And have a relook at your SDLC process. That's important. And see how you inject it; in the long term, that'll help you. And it'll be iterative, but at the end of the day, see, we've gone from waterfall to agile, and from agile to many, many other paradigms within agile over a period of time. But, uh, the one thing we're good at doing as a software industry is adapting to a new trend, right? This could be another trend. Keep an eye on it. Make it something where you can make some meaningful impact on your products. I would say, before your investor comes and talks about, hey, can you do optimization here? I see another portfolio company of mine does this, does this, does this. It's better to start yourself. Be collaborative and see if you can make something meaningful, and learn across teams; share it in the community where other founders can leverage something from you. That would be great. That's my advice to any startup founders who can make a difference. Yep. 

Kovid Batra: Perfect. Perfect. Thank you, Venkat. Thank you so much for this insightful, uh, information about how to navigate this changing landscape due to AI. So, uh, it was really interesting. Uh, we would love to have you another time on this show. I am sure, uh, you have more insights to share with us, but I think in the interest of time, we'll have to close it for today, and, uh, we'll see you soon again. 

Venkat Rangasamy: See you. Bye.

‘Product vs Engineering: Building Bridges, Not Walls’ with James Charlesworth, Director of Engineering at Pendo

In the recent episode of ‘groCTO: Originals’, host Kovid Batra engages in a thoughtful discussion with James Charlesworth, Director of Engineering at Pendo, who brings over 15 years of experience in engineering and leadership roles. The episode centers around the theme “Product vs Engineering: Building Bridges, Not Walls.” 

James begins by sharing how his lifelong passion for technology and software engineering, along with pivotal moments in his life, have shaped his career. Moving on to the main section, James addresses the age-old tussle between product and engineering teams, emphasizing that these teams should collaborate closely rather than operate in silos. He shares strategies for fostering empathy, effective collaboration, and communication, while highlighting the importance of understanding each team’s priorities and the impact of misalignment. 

James also underscores the value of one-on-one meetings for having meaningful conversations, building strong relationships, and understanding team members on a deeper level. He also explores the significant role of Engineering Managers in enabling their teams to overcome these challenges, ensuring smooth team dynamics, and achieving successful product outcomes.

Timestamps

  • 00:00 - Introduction 
  • 01:56 - James’ Background 
  • 05:44 - Product vs. Engineering 
  • 07:41 - Empathy & Communication: Bridging the Gap
  • 15:28 - The Role of Engineering Managers
  • 18:32 - Building Trust Through One-on-Ones
  • 22:19 - Practical Advice for Introverts in Tech
  • 27:54 - Consequences of Team Friction
  • 30:19 - Conclusion: Collaborative Success

Links and Mentions 

Episode Transcript

Kovid Batra: Hi everyone. This is Kovid, back with a new episode of the groCTO podcast, and today with us, we have a very special guest. He's the Head of Engineering at Pendo, and he has more than 15 years of engineering and leadership experience. Welcome to the show, James. Happy to have you here. 

James Charlesworth: Hi, Kovid. Thank you so much for having me on. I'm actually not Head of Engineering at Pendo. I am a Director of Engineering and I run the Sheffield office here in the UK. So thank you for having me on. 

Kovid Batra: Oh, all right. My bad then. Okay. So I think today, uh, we are going to have a very interesting discussion with James. We're going to talk about the age-old tussle between product and engineering, and James, uh, is an expert at handling those situations. So he's going to tell us what are the tactics and what are the frameworks he's using here. But before, James, we move on to that section, uh, we would love to know a little bit more about you. Uh, maybe some of your hobbies, some of the life-defining events for you, who James is basically. Please go on. 

James Charlesworth: Thanks, Kovid. Um, yeah, this sounds super nerdy, but my hobby has always been technology and software engineering. Um, I first started doing software engineering when I was probably about 11 or 12 years old. I had a Psion Series 3 that my parents bought me from a boot fair, and I just learned how to program that. Like, I'd just sit there for ages typing on these tiny little keys. Um, and my hobby has been using software and coding to actually solve problems in the real world and build products. And that's kind of led me towards web development and SaaS products. And that's ultimately what we do at Pendo, is help people build better products. So, um, yeah, that's a pretty boring answer to your question about my hobbies. I do also like to play music and things. Um, I played guitar in a band for a long time. Um, so that's the only non-techie hobby I guess I have. 

Kovid Batra: No, that's great. Thank you for that sweet, small intro about yourself. Anything that, uh, that stands out from your childhood, or maybe from your teenage years, that has defined you a lot? I mean, this is something that I usually ask a lot of people, but from there we get to know a lot more about our guests. So if you don't mind, can you just share some of that, uh, experience from your past that defines you today? 

James Charlesworth: Yeah, I think the biggest defining moment that a lot of people go through is when they first leave home for the first time and they don't have a direction, because I didn't have much of a direction when I was like 18 years old and I left home. I did the wrong degree. I did a degree in control systems engineering and I ended up doing software. So it took me a while to get into web development because of doing the wrong degree. Um, and actually because I had no real direction, I was just sort of fluttering in the wind and just doing whatever. But through that process of just giving yourself a bit of freedom and going out into the world and doing whatever you want, you really learn about yourself and you learn about other people, and I think that's when I went from being obsessed with computers to being obsessed with people and the way that people interact with each other. And, um, you know, I met people from all different walks of life, and you notice the similarities between people from all across the world, but you also notice the differences, and how you can celebrate those differences.

And so I think, like, having that moment of moving away from home and, um, like, living by yourself and stuff like that, um, really opens your eyes up to, like, who you want to be and where you want your place in the world to be. So I'm sorry if that's a little bit, um, esoteric, but yeah, there was no one defining moment really. I mean, it was just one of those things, and then being in a band goes along with that, because I always wanted to be a rock star. It never really worked out. But this idea that you can just get some friends together, get a van and just, like, go touring and play music, um, across the country, that's really cool, and that's really cool when you're in your sort of early twenties and you just want that freedom. Um, and that goes hand in hand with meeting people from, from all over the place. So yeah, like, you know, I'm obsessed with people. I'm obsessed with, like, human interactions and the way people, um, the way people carry themselves and interact with each other and what they care about and how we can all align that. Yeah. 

Kovid Batra: That's really interesting. I mean, uh, with the kind of role you are in now, where you are in leadership, you're leading teams, you're a Director of Engineering, this aspect of being aware of different people and cultures makes you more comfortable when you are, uh, leading people. You bring more empathy, you bring more understanding to their situations, and I'm sure that has come, uh, from there, and it is definitely growing as you move through your career.

So I think, James, this was, this was, uh, really, really interesting. Uh, let's move on to our main section. I think, uh, everyone is waiting to hear about that. Uh, so this has been an age-old tussle, as I said. Uh, the engineers have never liked the product managers. I'm not generalizing it, but just saying, so please, please don't take it wrong. Uh, but yeah, usually the engineers are not very comfortable, uh, in those discussions, and this has been an age-old tussle; we all know about it. When we talk about product and engineering teams, I personally never think of these two as two separate teams. Like, it never works like that. One thing that I learned as soon as I moved into this industry is it's 'product engineering'. It's not product and engineering separately. So it's not healthy for a team to have this kind of a tussle when you actually are moving towards the same goal, and in almost every engineering team that I see, there is some level of friction there, and it's natural for it to be there, because the product managers usually might not be that, uh, hands-on with the code, hands-on with the kind of daily practices a dev goes through, and then planning according to that, keeping in mind that, okay, it should be, uh, pushy as well as comfortable for the developers to deliver something. So that's where the main friction starts, and you come up with unreasonable requirements which the developers might not be able to relate to, or see how they are going to impact the product.

So there are multiple reasons due to which this gap, this friction, could be there. So today, I think with that note, I would, uh, hand over the mic to you and, uh, would want to know how you have had such experiences in your past and in your current role, and how you end up resolving them so that developers and product operate as one single team towards that one goal of making the business successful.

James Charlesworth: Yeah, absolutely. And what you said there about coming together to solve a problem together is really, really important. I think like the number one thing that underpins this is that everybody, product managers, engineers, designers, managers, needs to remember that you're all employed by the same organization and you've got the same shared goals and your, um, contribution to that is no more or less valuable than anybody else's. Like you mentioned that word 'empathy' in your introduction, like empathy is, we're gonna talk about empathy a lot today, right, because empathy is all about putting yourself in somebody else's shoes and seeing what their goals are. Um, and firstly, like trying to steer their goals to what you need, but also trying to like, um, emphasize what your own goals are, um, and align those to the others.

Like, the way I always think about product managers is that a lot of engineers feel like they're on feature factory teams. They feel like they're just being told what to build, and you get into this feature factory loop. Um, and it just seems like all the Product Manager wants to do is add features into the product, add features into the product, add features into the product. Um, and it can feel sometimes like product managers are paid on commission, like they get a certain commission based on how many features they deliver at a point or something. That's not true. Product managers are paid a salary just like you are, and the way that your success is ultimately measured is the same way that your product manager's success is ultimately measured. And so, it's really, really important to realize that you do align around this goal, and you need to have a two-way conversation about it. Like, you need to really, really explain what you think the priorities should be, and you need to encourage your product manager to explain what they think their priorities should be for the team, and then you can align and find some middle ground that ultimately works best for the business. 

But yeah, like in my experience anyway, it's just, you say age-old, like this has been quite a long-running thing. And before product managers, it was business people. In one of my first jobs in software engineering, um, we didn't really have product managers. We just had, like, the Director of engineering, product research, design, whatever, um, who would just come up with the idea and just say, "This is what we're building." And that's very difficult, um, because you reported to that person. So you basically had to just do exactly what they said, and that was super, super unhealthy because that builds up a huge amount of resentment. And I much, much prefer the model we have now, where we have product managers, where engineers don't report into the product managers, because that means that product managers have to lead the product without authority, um, and engineers have to lead the best engineering direction without authority. So you have this thing where you're encouraged to influence your peers on the same team as opposed to just doing the thing that your boss tells you to do, which is how it used to work when I started in this industry. 

So it's got, it's got a lot better. Um, and yeah, as I've gone through my career and I've worked with some really, really good product managers and some really good product leaders, I've noticed a pattern: the product managers that are really, really good, that are really successful, are the ones that have that empathy, and we will talk about empathy a lot, right? Because it's super, super important. The product managers who have that empathy can empathize with what engineers actually want to get out of a situation, um, and then align that with their own goals. 

Kovid Batra: I have a question here. When you say empathy, I think, uh, in your introduction also, you mentioned, like, when you meet different people from different cultures, different backgrounds, you tend to understand; your brain develops that empathy naturally towards different situations and different people. But that has happened only because you have seen things differently, right? When we put this context into product and engineering: a product manager who has probably never coded in their life, right? Who does not have the context of, uh, how things work in the development workspace, right? In that situation, how should a manager like that come around to it when, okay, uh, the developer is saying that this is going to take five days, or this is difficult and complicated to implement and it won't add much value? So in those scenarios, for a person who is not hands-on with coding or has never done that piece on his or her own, uh, how do you think, uh, in a professional environment, that empathy would come in? And of course, the Product Manager has his or her own, uh, deliverables, the metrics that need to be looked at. So how does that work in that situation? 

James Charlesworth: Well, the same way it works the other way around as well. So the situation you've just described, right? You've got a Product Manager who is trying to get what they need to get done, but they don't understand the full details behind the implementation. You've also got an engineer who does understand the full details behind the implementation, but they don't understand the full business context behind what you're trying to build, right? Because that's the Product Manager's job. So the engineers, they might know exactly how the database is structured and how all of the backend architecture works, which is very complicated, but they don't understand, like, they haven't been speaking to customers. They don't know the kinds of things that the Product Manager knows. So both sides need to essentially understand what the other person's priorities are, and that's what empathy is. Empathy is understanding what somebody wants, and not necessarily always giving them what they want, but at the very least comprehending and considering what somebody's goals are in the way you deal with them, right? 

So, um, back to your situation about software engineering. Okay, so let's say a Product Manager has come to you and said, "We just need to add this button to this page. It's super, super important. We want this button to send an email out." And the engineers come back and they say, "Oh, we actually don't have any backend email architecture that can send emails out. So we're going to actually have to build all of that." Um, then, you know, the Product Manager can go, "Well, what's so difficult about that? Just put a button there and send an email out." And the engineers are kind of caught between a rock and a hard place, where they're sort of saying, "Well, this is a lot of work. Like, that's weeks and weeks and weeks of work, but how do I go to the business and say 'It's a lot of work'?" Um, and so, the solution is to really, really explain and break down to your Product Manager why this is more work than they realize, and the Product Manager's job is to turn around to you and explain why we really, really need this. So you both need to align and you both need to understand. Product managers need to understand that some stuff is complicated, and the only way they're going to understand it is complicated is if you just explain it to them, right? Like, there are no secrets in software engineering. If you spend an hour sitting down and explaining to a Product Manager how your backend is architected, how your databases all fit together and, you know, what email service we're using and what the limitations are of that email service, then they'll understand it. It'll take you an hour to explain that. And equally, your Product Manager can sit down with you and they can show you the customer calls where people are really, really wanting this feature, right? And they can educate you on why we really need this feature, and then ultimately, you'll come together where you understand why your Product Manager is pushing for this so hard, and your Product Manager will understand why you're pushing back against this so hard, and you'll find a solution that makes everybody happy in the end. But you do need to listen. You need to listen to the other person's, um, goals and what they want to get out of it. Um, and that's the empathy side. Essentially, it's about respecting somebody's motives. It's respecting, um, the mandate that they've been given for a certain situation. 

Kovid Batra: Right. I think this is one scenario where I definitely see the value of putting in effort to explain to the other person what it really means, what it stands for. Obviously, you can't be so inconsiderate about the other person when you're working together. So maybe after one or two, uh, situations like this, let's say, I'm a Product Manager, uh, where I have to explain things to the developer, and if I do that for, let's say, two or three such instances, by the fourth or fifth time, automatically that level of trust is built, and you are in a position where maybe, uh, you don't even have to explain a lot of the time. You get that synchronization in place where things are working well between you.

And on that note, I really feel that for people who are joining large teams, like, uh, a Product Manager joining in or a developer joining in, usually in large teams we have started to see this pattern of having engineering managers as well, right? So in your perspective, uh, how much of a role does an Engineering Manager play in, uh, bridging this gap and reducing this friction? Because, uh, a few of my very close friends who have been from an engineering background have chosen to be in the management space now, and they usually tell us what things they are working on right now. And I feel that really helps the business as well as the developers to deliver the right things on time, and you get a lot of context from both sides. So what's your perspective on that, uh, on bringing those engineering managers into the system? 

James Charlesworth: Yeah. I mean, I think the primary, number one responsibility of an Engineering Manager is to empower the engineers to do all those things that you've just been speaking about, right? So like, your number one responsibility, engineering managers tend to have better people skills than engineers. That's why people go into management. Um, and your job is to teach the engineers on your team how to do all of those things you've just described. Sometimes you have to step in, and sometimes there's a high-pressure situation where you do actually have to say, you know, "I'm going to bridge the gap here between engineers and product." But your primary job as an Engineering Manager is to enable the engineers on your team to all have that kind of conversation with the product managers and with the business. Um, and so it's coaching. It's support. It's, um, career development, and also, you know, hiring the right people; that's quite a large part of an Engineering Manager's job. Performance management. Um, and so, a lot of that. Engineering managers should never be the one person that bridges the gap between product and engineering, because then they're going to become a bottleneck, and also the engineers are never really going to learn to do that themselves.

Um, so yeah, that's always been the case, and I learned this from some really good engineering managers, or software development managers, about, like, um, you know, empowering the people that you've put in charge. Engineering managers aren't in their position because they're necessarily better at everything than the engineers. They're usually better at one or two things. Um, but they're not as good at things like technical architecture. So as an Engineering Manager myself, I would never overrule an IC's opinion on a software architecture, because that's not my job. My job is not that. I might have been doing software for years and years and years and I understand how systems are architected and how databases work and stuff. But I'm also employing people who are better at that than me, and that's the point. And so I would never overrule them, and I would never overrule how they collaborate with their Product Manager. But I would guide and coach them towards being able to do that. Um, and so, that's a case of speaking to engineers, speaking to product managers, trying to find out if they're talking past each other, trying to find out, you know, where the disconnect is, and then trying to solve that between the two groups of people. So I think the answer to your question is that the main role of an Engineering Manager is to become a force multiplier on their team, essentially, and to enable everybody to do that. Um, yeah, you can't have engineering managers who are just there to fix the gap. It's just not scalable. That's not a good thing. 

Kovid Batra: No, I totally understand that. So when we are talking about bringing in, uh, this level of comfort where people are working together, talking about your experience with your teams, there must have been such scenarios, and you must have put in some thought at the time of orientation, at the time of onboarding team members, about how they should be working to ensure that things work as a team. Uh, can you just tell us about a few incidents, how you ended up solving them, and how you put in the right, uh, onboarding for team members to have that inculcated in the culture? 

James Charlesworth: Yeah, the best onboarding is like that group effect of just observing something happening and then joining in with it. So like, by far the best way to onboard somebody is just to add them to a high-performing team. Like, honestly, you just put someone on a team that's super, super collaborative and they will witness how people can collaborate. But I've had, you know, I've had positive and negative experiences in the past with joining a team, primarily back when I was a software engineer. I remember I once joined a team for the very first time and I just never really got on with my Product Manager. Like, I don't think we clicked as people. We never really had any kind of conversations or anything. Um, and I was never really onboarded properly. So at the start I did have a slightly, um, rocky relationship with this Product Manager, where I just couldn't understand what they were trying to do. They never explained anything. They just said, "This is what we are doing." So I just had to say, "Well, that's going to take longer than you think." And I tried this for ages. And I spoke to my manager. My manager sort of gave the advice that I've just been trying to give, um, your listeners here, which is like, you know, you need to go out and do it yourself. I'm not going to fix this for you. So what I did is I took this Product Manager and I just said, "Look, let's go for a coffee once every two weeks. We'll just have a one-to-one." This was before COVID, so you could actually go out and do these kinds of things. Um, and every Monday lunch, every two weeks, we would just go down the road and have a coffee in a cafe, um, in London where I was working at the time. And I just got to know them as a person. And I really, really got to understand that, like, this is a person that is under a lot of pressure in their job and they're very, very stressed out, and they sometimes take that out on their team. It's not necessarily their fault, but that is the way that they deal with things. Um, and if I can just have a little bit of sympathy for the sort of situation they're put in and I can work out what's going on behind that, and I would ask them about what they want to do, what their career aspirations are, you know, what they want to be one day, where they want to work and this sort of stuff. And those kinds of small conversations, like I say, half an hour every two weeks, just a one-to-one, um, completely fixed the relationship and completely fixed everything else, because you just build up so much more trust with somebody if you're just having small one-on-one conversations with them. 

And my kind of hack for engineers, if you like, is to have one-to-ones. People think one-to-ones are just for managers, for people to talk to their boss, or for people to talk to people that report to them. Anyone can have a one-to-one with anyone in the business and set up a regular, no-agenda meeting every couple of weeks. That's just half an hour where you just chat with somebody, and that is a super, super valuable way of building up rapport with people that will pay dividends in the future. Like, half an hour invested between a Senior Engineer and their Product Manager, half an hour every couple of weeks, will pay dividends in the future when you meet, uh, when you hit a conflict and you realize that, oh! Actually, I know this person really quite well now because we have coffee every two weeks, every Monday, right? And so you don't really need a massive reason to have a one-to-one with somebody. Just put it in the calendar, chat to them and say, "Look, I, you know, I'd really like us to work more effectively together. Um, let's have half an hour every Monday. I'll buy you a croissant or something, whatever it is you want to do." Um, and then just ask them about their life, ask about their career goals, ask them about what kind of challenges they're facing. And yeah, before you know it, you'll be helping each other out. You'll be desperate to help each other out, because that is human nature. We like helping each other. So yeah.

Kovid Batra: I think I would like to add, just because I've been working with a lot of engineers and engineering managers these days, what I have really felt is that throughout their initial years of career, they have been talking with a computer, right? It's very difficult to find out what to talk about. I think the advice that you have given is very simple and I think very impactful. I have experienced that myself, but I would say I have been a bit of an exception in the engineering and development space, because I have been a little extroverted and have been talking about things, at least in my comfort zone. Uh, so I was able to find that space with people who are themselves very introverted, uh, and still I could break through, I could break that ice.

It's very difficult for the people on the other side, who have been developers throughout their career, to come forward and start these conversations on their own. So what are the things that you really think we should be talking about? Like, even if a Product Manager is going to the engineer, or the engineer wants to break that ice, build in that empathy and understand that person, what kind of things do you look for, uh, in such conversations, let's say? 

James Charlesworth: That's an interesting one. Like, there's a lot of people that are introverted in this industry. A lot of people use introversion as a crutch or as an excuse, and they shouldn't. Just, you know, being introverted doesn't mean you can't connect with other people. It just means you connect with other people differently. It means that, you know, you look inwards for experiences and things. Um, and so the practical advice would be to try and recognize how another person functions. You might find that your Product Manager is actually more introverted than they let on. A lot of people just put on a show. A lot of people are super, super introverted, but they put on a show day-to-day, especially in work life, and they'll, you know, pretend to be all extroverted and they'll pretend to be all confident, but they're not. And I've known many people like this, that if you have an actual conversation with them, they'll admit to you that they're actually super introverted and they get super, super nervous whenever they have to talk to people, but they do it and they force themselves to do it because they've learned throughout their careers. 

So, um, yeah, I'm not suggesting people should push themselves too far out of their comfort zone, but in terms of practical advice, speaking in statements is quite a big thing that a lot of people don't realise. Um, I can't remember where I read this, it's from some book or something on, like, how to make friends and influence people, I don't think it's that exact book, but essentially, if all you're doing is asking somebody questions, then you're putting all of the onus in the conversation on them, and that's not actually that comfortable a conversation. So if you're talking to a Product Manager and you're just asking them, like, "Why are we doing this?" "Why does the customer want that?" "What's the point of this feature?" That's actually not a nice thing to do, because you're making them lead everything. What you really want to do is just talk in statements. You just want to say, "Hey, like, I'm just building this. It's really cool. We've got connectivity going on between this WebSocket and this backend database. There you go. That's the thing. Um, we've just realized that this is a little bit late. And so, it's going to be a few days extra, but we found an area over here where we can cut some corners." Like, just say things, say things that are going on, tell people stuff that's going on in your life. Um, and then there's no pressure on them to intervene. And this is, like, standard small talk, right? If you just tell people things, then they can decide to walk away or they can decide to engage you in conversation, but you're not putting too much pressure on them. You're not asking them a barrage of questions that they feel like they have to answer. So that's standard advice for introverts. I think if you are introverted and you feel like you need to talk to somebody, share details about what you're working on, share details about, um, you know, your current goals and where you are, and see what happens. They might do the same. You might learn something. 

Kovid Batra: Yeah, sure. I think, I think everyone actually wants to do that, but it's just that there has to be an initiation from one side. And if it's more relevant and feels like, uh, coming very naturally to build that bond, I think this would really, really work out. 

Cool. So I think this was, this was really, uh, again, I would say, a very simple but very impactful piece of advice: that one-on-ones really work, right? At the end of the day, we are humans, and at least for developers, I think, because they are day-in and day-out just interacting with their computers, this is a good escape for them to actually go and talk to people, have those real conversations and build that bond. So yeah, I think that's really amazing. Anything else? 

James Charlesworth: You can talk about computers. 

Kovid Batra: Sorry? 

James Charlesworth: You can talk about computers. Like, if you're really into software, like, this is why gaming, I'm not really a gamer, but I know a lot of people connect over gaming, a lot of people bond over gaming, because that's something that you get into as, like, an introspective thing, and then you find out that somebody else is also into it, you're into the same game, and you can connect over that, and it turns something that was a really insular, inward-looking experience into a shared, group, sociable thing, right? So, like, yeah, in many ways I'm quite jealous of people that are into gaming, because it does have that social aspect. So yeah, talk about computers. Like, just because your life is staring at a screen and talking to a computer, you can still share that with other people, and even product managers as well. Product managers in tech companies are super, super technical. They might not be able to code, but they definitely know how computers work and they definitely know how systems work. They're designing these things. So talk to them about it. Talk to them about, um, you know, the latest Microsoft Windows version that can spy on your history with AI. Like, talk to your Product Manager about that. They will have opinions on this sort of stuff. And so, yeah, sorry to cut you off, but, like, honestly, just because you're into computers and you're into coding, that can be a way to connect with people. It doesn't have to be a way to stay isolated.

Kovid Batra: Definitely, definitely. Uh, perfect. I think more or less the idea is to have that empathy for people, do more one-on-ones and build that trust in the team, and I think that would really solve this problem. And there's one thing that we should have actually talked about at the beginning, uh, the impact of this problem, actually. For our audience, I won't let that question go away like that. Uh, there must have been experiences where you would have dealt with the consequences of having this friction in the team, right? So maybe the engineering managers, the product managers, the engineering teams out there, uh, who are looking at delivering successfully, I think they should be, uh, aware of the consequences of not putting some focus and effort into solving this problem before it becomes something big. So any of your, uh, experiences that could highlight the impact of this problem in teams, I think, would be appreciated. 

James Charlesworth: I mean, like, pretty much every system out there that is absolutely laden with tech debt, where every product is late, is the result of this. It's the result of the breakdown between what the business needs or what the product owner needs and what the engineers are building. And I've worked on many, many systems that have been massively over-engineered because the engineers were given too much free rein. And so it was, you know, not much tech debt, but really, really, really overcomplicated, and it took forever to deliver any value to any customers. I've also worked on systems that were massively under-engineered, and they fall down and they break and there are bugs all the time. And that's because the engineers weren't given enough rein to do things properly. So you need to find that middle ground. And yeah, like, honestly, I've seen so many situations where the breakdown in conversation between product managers and engineers has just led to runaway bugs, runaway tech debt, runaway, like, people leaving their jobs. I've seen that happen before as well. Yeah, and that's really bad. These are all bad outcomes for the entire business, right? You don't want engineers quitting because they don't get on with their Product Manager, and that's something I've seen before. Um, you don't want a huge amount of tech debt piling up because engineers are too scared to put their hands up and say, "Look, we're accruing tech debt here. This approach isn't working." They're too scared to do that, so they just do the feature factory thing and they ultimately build up loads of tech debt, and then a huge bug is released. You don't want that. Um, but you also don't want engineers to be just left to it and put in a room for a month and come back with some massively elaborate, over-engineered system that doesn't actually solve the problem for the customer. So all of these things are bad situations. The good situation is the one that is an iterative approach, with feedback and collaboration between engineers and product. It's the only real way of doing it. 

Kovid Batra: Definitely. Great, James. Thanks a lot, uh, for giving us so much practical and insightful advice on, uh, how to deal with this situation, and I'm sure this will be really helpful for a lot of engineering teams, product engineering teams out there, to be more successful. And we would love to have you back on the show once again to talk about such insightful topics and challenges of the engineering ecosystem. But for today, I think it's time. Uh, thank you so much once again. It was great to have you on the show. 

James Charlesworth: No worries. Thanks very much, Kovid.

‘Inside Jedox: The Buy vs. Build Debate’ with Vladislav Maličević, CTO at Jedox

In the recent episode of ‘groCTO: Originals’, host Kovid Batra engages in an insightful conversation with Vladislav Maličević, CTO at Jedox. The central theme of the discussion revolves around “Inside Jedox: The Buy vs. Build Debate”. 

The episode starts with Vladislav recounting his 20-year journey from being one of Jedox’s first developers to stepping into the role of CTO. Moving forward, he sheds light on the company's vision, the transformation from an open-source project to a full-fledged cloud platform, and the various hurdles and achievements along the way, such as competing with industry giants like IBM and SAP. He also points out that many early team members remain with the company to this day.

Vladislav then dives into important decisions surrounding whether to build in-house or outsource various parts of their product, explaining that spending constraints often guide these choices. He also emphasizes the 80/20 rule (Pareto principle) and highlights the importance of integrating with Microsoft Excel as a key factor in their success. 

Timestamps

  • 00:00 - Intro
  • 00:58 - Vlado’s background
  • 03:21 - Jedox's evolution & market position
  • 07:10 - Role of open source in Jedox's growth
  • 15:14 - Transition to cloud and key decisions
  • 25:03 - Building vs. Outsourcing: Strategic choices
  • 31:40 - Conclusion

Links and Mentions 

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with a new episode of groCTO. Today on our show, we have Vlado from Jedox. Welcome to the show. Great to have you here. 

Vladislav Maličević: It's a pleasure to be here. Hi. 

Kovid Batra: Hey, Vlado. All right. Like before, um, I start off with a beautiful discussion with you around the age-old 'Buy vs Build', I would love to know a little bit more about you, um, your hobbies, what you do at Jedox. So let's, let's start with a quick, cute intro about yourself. 

Vladislav Maličević: Yeah. So, uh, my name is Vlado. The long name is Vladislav Maličević, uh, long name coming. It's a, it's a Serbian name and, uh, coming, coming originally from, from Bosnia, but I've been living and working here in Germany for the past 22 or 23 years. I started, uh, 20 years ago this year, uh, with Jedox. I was one of the first, uh, employees, one of the first developers and, uh, slash the, uh, employee of the company, went through the ranks over the years. I was lucky to follow the growth of the company and went through the ranks in the, in the engineering department. I was, the Head of, uh, Development and the Director of Development, VP, um, Development, uh, and later on added support, uh, um, coined the cloud team back in the day. And, um, a few years back, I joined the C-level as a CTO with the company of 450-500 people today. Right. And it was an incredible journey, um, to, to, to look at, uh, from, from within, uh, observe and participate in, in this, uh, in this long journey. 

So, um, yeah, but more about personal. So I'm a, I'm a father of three girls. Um, I also have a sausage dog and, uh, yeah, with my wife, uh, we live, uh, with my, with our kids here in Karlsruhe in Germany, which is a university city, um, let's say, more, uh, in the southern part of Germany. Yeah. So that's, that's about it. 

Kovid Batra: Cool. I think, uh, this is really amazing to see. I mean, rarely do we see someone spending such a long time, joining in as an employee and then growing to that C-level over a 20-25 year journey. So that, that has been, uh, one of my first experiences, actually, with someone on this show. I would love to know how it all started and what is Jedox about, uh, what was your vision and the whole company's goals and vision at that point of time? Now, 20, 20 years hence, how, how do things look? 

Vladislav Maličević: Yeah, sure. Yeah, I mean, the only constant in life is change, right? And, and many things, uh, stay unplanned. Initially, I, I really didn't, didn't intend to, to stay with Jedox. I thought it was in-between, uh, just an in-between jobs kind of thing, and, um, also the setup, it was a very small company, small office, uh, just a few, few people, uh, which by the way, all of them are still with the company. So all the people that were in the company when I joined are still with the company, which is also one, one quality, I must say. Like I said, I initially didn't, didn't, didn't plan to stick too long, but the challenge was there and it, it, it became more and more interesting from day-to-day. And, um, we were kind of, it's, it's easy when you have a kind of a black, uh, blank canvas, yeah. There is no product and then you start from scratch and you start building something and you know, over time, it becomes, uh, you see more and more of a product and you, you see more and more customers and it's sticking with the, resonating well with the, with the, with this huge community. And then you also add ecosystem to the, to the mix, you have partners in between the customers, growing globally, opening new offices, adding more people and things like that. So it is, it is simply, um, it was, it was an incredible journey. Usually you start off, like you said, either you hop from one, one, one job to another every few years and change, or you join as a, as a founder, right? Uh, you could also be, uh, it's not, not unusual to have a founder on the team, uh, being early on there and then, you know, doing something with the company and moving on, right? I indeed, I wasn't the founder, but was one of the first early people. So I, I sticked with the, with the, with the product and with the company, and, this is, um, resonates well with my, uh, passion. I kind of map myself or I reflect a lot of my, my work life and life with the product that we built over the years. 

What Jedox actually is, is, um, I mean, uh, we are proud, uh, leader in the magic quadrant, Gartner's magic quadrant for, for EPM, CPM, or enterprise performance management, corporate performance management, or XPNA, how they call it nowadays. Being in the upper right corner, it was obviously not, uh, not, it was a journey. It's not like we showed up immediately there, right? From, from zero to hero, right? It took us a few years to move slowly through the, through the, from the lower left to the, to the upper right corner. And certainly, you know, competing there with the, with others, with the big names like Oracle, like SAP, like Anaplan, um, it certainly make, makes us proud because we are by, by far a much smaller company, uh, by, by sheer size, and, um, to some extent also by history or by tenure, right? Um, but yeah, it shows that, um, you don't need a lot of people and a lot of money to build good products and, and make them stick with the, with the customer. 

One of the things that helped us in the beginning, I mean, that, that's also, we evolved over time. Uh, one of the things that helped us in the beginning to put a foot in the door in the market is the fact that initially we were, um, actually we started as, as a freeware and then switched to open source, which is kinda, you know, 20 years ago, it was things like Linux started showing up around. I mean, actually it was, uh, uh, Linux was, was way before, but, but around that time, there was like a boom of open source. And we were, um, I believe, the first product in the market to offer planning, uh, as open source, and that was a big shock in the market and it helped us a lot to, to, to spread the word, uh, globally and become known in the market, although we are, uh, we had low or no, no marketing budget whatsoever. Right? Um, and then over the years we, we matured, we kind of, um, made a clear separation between, between open source and the commercial bit. And, and, uh, we curated both brands in parallel. But over time, we, we, we focus nowadays, we are focused totally on, on our cloud product under the name Jedox. And, um, basically open source is, is the past. It's also not something that we see in the market nowadays anymore in this, in this, uh, let's say in this bubble. It's relatively, you could say, it's a, it's a niche, but it's a quite, quite, uh, I wouldn't say lucrative, but quite, quite a big niche. It's a specific need from the business to be able to quickly plan any kind of data, usually finance data, but any kind of numbers, be it headcount, in any vertical, in any industry. Yeah. Nowadays, it's even, you see it in every, literally every company needs to do some kind of planning. And doing that with a tool like, like Jedox, makes it less error prone and, uh, very, very seamlessly integrated, allowing, um, to connect to, to the existing third-party systems, um, connecting all the data from all the different systems that you usually find, on average companies nowadays have 150 plus tools or services that they consume. Jedox is well-versed in, in accessing all these different existing products in the, in the customer's ecosystem and then combining those in, in Jedox. 

In a nutshell, Jedox is, is a, is a platform. It's a low-code platform for building business applications, right, speaking less technically. But, what you have in that platform are components. There's a lot of IP, Jedox IP in there. You have your own in-memory database. You have your own ETL tool. You have your, your backend, middleware. You have, uh, frontend for, for mobile, for web, obviously, and we have quite a good integration with, uh, Office, in particular with Microsoft Excel, which is kind of a go-to application for any business user nowadays, right? Most of the time, the journey of our future customers starts somewhere in Excel, they did something over the years in Excel, and, um, they built it, they invested hours and hours in it and they've been living it, but, you know, they, over the years it, it became cumbersome to, to maintain it, multiple copies of it, multiple versions of it, uh, sharing it across the team or even teams, uh, error prone, and it's, it's, uh, known as an Excel chaos, which we actually try to, to solve. Right. 

A lot of product, obviously, 20 years we weren't sitting, um, so we were quite busy developing that, but nowadays, it's quite extensive and mature, very grown up, uh, enterprise platform for building business applications, right? And coincidentally, majority of the, let's say, first, first-time users come from somewhere from the office of finance. Usually, that is the, that is the entry point where users come from. But, uh, it's not limited to, right, it's just the, usually the entry point, but we spread quite quickly within the organizations because they see the value of the product. 

Kovid Batra: Got it. I think this is very, uh, interesting, competing in a landscape where you have MNCs and legacy players already there. You have been there from the very beginning, so the company founders and the company belief in that respect on day zero and today, uh, would be very different, right? At that time, you guys might not have even imagined where you would be 20 years hence. Of course, people have a vision there, but what was it like for the Jedox team and Jedox founders at that point of time? 

Vladislav Maličević: Yeah, I mean, I mean the, the vision was there, but the, the vision, I could say that the, the vision was to, to, to rule the world or rule the bubble, rule the, rule, this, this, let's say, small niche, even back in the day. Appetite was certainly there, but we were also realistic. We knew that, you know, it, it will take a while to, um, even meet, uh, let alone exceed, the functionalities of the, of the established product in the market back in the day. Already, the market was there. It was booming. It was ruled by IBM. IBM was the absolute leader. A company called Infor was, was, uh, also quite prominent in the market back in the day. Actually, they weren't even called Infor back in the day, but through acquisitions, they, they grew into Infor, um, and they still exist, uh, to date. We knew we were on the, on the, let's say, on the, on the lower end and we weren't the disruptor, but one of the vehicles was, was definitely open source and coming through the open source, uh, on the one hand side, you have a, you have a behemoth, let's say, or a mature, um, established leader in the market selling, you know, I don't know, a couple thousand dollars per user, per seat, um, license. And then all of a sudden, this small team from Germany comes with a product that almost, almost, right, not, not really back in the day, but, but, um, almost matches the, the functionality, brings in, let's say, a subset of the functionality for no, no cost at all. It was open source and everybody was open also to contribute back in the day. There was no GitHub back in the day. We used to use SourceForge, sourceforge.net. That was, uh, that was the platform of choice back in the day where we hosted our code. 

The word spread quite quickly and, um, the adoption, we saw traction very early. I think I joined in November, October-November 2004 and we had the first version of the product that you can pretty much recognize even in today's products. So everything you need to know, everything you need to be able to work, you already had. Um, I believe we, we shipped in February of 2006. So it's a year and a half. It took us 18 months to put, uh, put the product together, and already there was, uh, in-memory database. We had a frontend for Excel. We also had, uh, let's say some primitive way of ingesting data, let's say, um, some, some baby version of, of ETL within the product. We had a predecessor to our, uh, today's web frontend. We had it, uh, it was, uh, Web 1.0, the old, old school, uh, web frontend that was already connected to, to, to this, um, to, to Excel and to the database. So we, we had web frontend. So we were ready to, to, to rock or ready to run. 

Later on, additions came in, including ETL, including a modernized version of, uh, the web frontend. And nowadays, obviously, everything is happening in the web and you are doing, also authoring within the web and Web 2.0 was a thing back in the day. Like we quickly jumped on the boat. Later on, other innovations happened. Shift to Kubernetes, so, microservices and things like that. So going from the legacy, I mean, actually the, the first shift was the cloud. Cloud was the thing. There was no cloud back in the day, right? Maybe there was some hosting somewhere, but usually customers were running it on their own, even on their laptops or, um, within their corporate network, server, client server kind of thing. And then later, 2012-2013, we saw cloud kind of picking up and this is where we started our excursion into cloud. And from there, um, we moved on. Today, we are a cloud company, a SaaS product. 

Kovid Batra: Yeah. So, in this, in this whole journey, I think you survived, in fact, you, you thrived as, as a product, as a, as, as a company, right? Of course, you mentioned that you became an open source product, right? So that, that was one critical move which probably helped you a lot in exceeding what your competitors or counterparts were doing, but there must have been much deeper technical decisions, and with this question, I think I'm trying to understand from you how many times you had to take those critical calls which impacted the business in an immense way, and whether those were decisions where you were building products in-house or you were outsourcing it and how, how did that journey come along, now, 20 years hence, when you look at it in retrospective. 

Vladislav Maličević: I mean, in retrospective, I wouldn't say there were too many critical events, right? The situation in the market is dynamic and you have to react on, literally on a daily basis. You have to make decisions on, uh, really, literally, some, some important decisions are made on a daily basis. However, the strategy, you don't change every, every two days. I would say, we had three, four waves, uh, over the course of 20 years where we were, for example, cloud decision to go all in on cloud. Um, it took us a while, right? Because we are, I'd say, uh, first of all, it's quite conservative, the market itself is quite conservative. You are usually working with financial data. Financial data is very critical. People are not eager to, to expose financial data out of their corporate network. So when the cloud showed up, it was kind of, oh, do we, do we, do we even jump onto this boat? And, and I remember, uh, vividly, so there were, there were like pros and cons, and, and there were voices in the company. I would say majority of the voices were, were, uh, against cloud. "Hey, nobody will jump on this boat." "Hey, nobody wants to put data in public cloud." "This will not fly." And indeed, it didn't, uh, it didn't, uh, didn't fly immediately, right? It took us a while and depending on the market, obviously here in Germany, it's quite, I would say a bit conservative market. So it takes a few years for, for things to become mainstream, and for the adoption, one thing was technically not nothing, I wouldn't say nothing special, but it was a smart move by Microsoft back in the day when they introduced, uh, something, a thing called German cloud, um, which was kind of an idea to, to bring in sovereign German cloud on German soil, operated on German soil, by a German company, right? Disconnected from the rest of the world, kind of from, from the corp in, in the US and things like that. This kind of brought more trust into it, of course, with additional marketing and massaging customers, um, across the DACH region, Germany, Austria, and Switzerland. But it definitely helped in adopting cloud more. And then a few, few years later, all of a sudden, it became, you know, cloud became commodity, somewhat delayed, but became commodity even in Germany. And then it was no brainer. Yeah, okay. Let's go. You don't have these conversations anymore or, or very rarely. Yeah. 

That was one thing which was kind of critical back in the day. We started talking about 2011-2012, but the real push came around 2015-2016. So it took us a few years to come from zero to, to really, hey, full-steam ahead, let's, let's, let's do this, this cloud thing. That would be one thing, I think being close to, to, to Excel. So initially being open source helped promote the brand. You would need to spend millions and millions globally on marketing to spread, spread the word about Jedox, um, normally, and with open source, it kind of went word of mouth and it very quickly spread. Um, again, context, right? We're not in the Google business, right? We're not a commodity that every consumer is using, but let's say, in our bubble, the word spread super-quickly. And, um, later I remember I spoke, we acquired one company, uh, in, in Australia, uh, back in the day and, and they became, before we acquired them, they ran as a partner for a few years. And, um, I had the pleasure talking to, to one of the founders of that company, and he said that he remembers well when we announced the first initial version. He was back in the day using IBM and he read it on some forum that there is, um, there is a new product called Jedox and it's open source and it's free where you can download it and use it, and, and it does almost everything, um, that we used to pay for, and he wrote emails to his colleagues saying, "Germans are coming." And I use that reference often. It's quite, quite interesting because hey, where's Australia from here, right? On the opposite side of the world, but it really, uh, the word spread quickly. So I think that was a good decision to go with open source initially. It didn't, on the other hand, it didn't create any traction. I must say, very little, almost no traction on the development community, right? We had it, we had it up, but we were the sole maintainers, very little input from the community, right? Maybe it's too technical, yeah? And again, maybe it's a context and a niche which we are in. It's not some commodity that everybody needs on their desktop. So maybe that's one thing. 

So, open source, cloud, uh, being close to Excel was always, uh, I think was a good thing. Uh, Excel is the, I don't know. Um, there's this big question. Yeah. What is the most common functionality in every enterprise software? And that's actually 'export to Excel', right? In every enterprise software, you will find that button or option 'export to Excel', in every enterprise software. So, I don't know, there are billions of users of Excel, certainly hundreds of millions of Excel users globally, and, um, this is a kind of citizen developer. Uh, if you, if you, if you look at like from, from that aspect and us being close to that, let's say, both, both literally close. We, we are very compatible with Excel. We integrate well with Excel. We also understand well, uh, Excel format. We can ingest it and import and export it in Excel format. So literally that, but also the, this concept of spreadsheets, I think this was also a good, right choice.

Kovid Batra: Yeah, I think that most of the time it really helps, like instead of going out and introducing something completely new, which is not in the behavior of the user. It might be a hit, of course, like there are revolutionary ideas which are not a part of the behavior and then people form a behavior around it. But mostly, what I've seen is that any product team building a product closer to the existing behavior of people and the way they are using the current solutions, if you are close to it, then it is very easy for adoption, easy for people to understand and relate to, and they quickly start using. And then, of course, you can put them on a journey of gradual learning where you introduce new features, you put in more services and then they grow from there. But the initial hooks should be like that where you are close to the existing solution and yet offering something very impactful and having more data than what the current solution is giving them. So yeah, I think it's, it's a mix of, um, few good decisions, I would say. There was, there is no single bullet, magic bullet that is going to solve for sustaining or a business thriving in this market. It was constantly your eagerness to learn, your eagerness to explore, and then changing and adopting towards things that are coming in, like cloud came in and then, of course, at every point you understand that user behavior as well as what the market trends are saying and moved in that direction. So, of course, this really worked out well for you. 

Few more things that I would want to learn here is that on this journey, when you're building such an immense product used by thousands and maybe millions of users, I'm not sure how many users you have right now, but, uh, I'm sure like there are hard decisions that you're taking about, like, uh, building something in-house and, uh, acquiring other companies, like once you just mentioned about one. So when, when there is a decision around building something in-house and, uh, like outsourcing it, can you just give us some example from your journey and, uh, how did you conclude on certain decisions like that? 

Vladislav Maličević: Yeah, some, some of those, uh, decisions were, were made quite easily due to whatever constraints, right? So if you have, let's say, uh, if money is your constraint, obviously, and you, let's say, you have a few idle people on payroll, obviously, you go and build, um, start building, especially when you don't understand the magnitude of the, complexity of the problem, right? You don't see, you don't see the big picture in the beginning, right? You go into it somewhat naive. I can say, I mean, I was, uh, I was quite young at the time. If you don't know the, the, the magnitude of the, of the problem, if you don't see the, the, the whole, uh, scope of it, and you are, you have a white canvas, you go for it and you simply, you give it a try. So there was no, there was actually no, in the beginning there was no, um, um, nothing to, no decision to be made whether to invest or acquire or whatnot, because the money was not there, right? So, So let's say in the beginning, majority of decisions were, were built. Um, and then over time, you have the opportunity to change. You can focus, you can keep the core, core of the product to yourself and then, whatever is not core, you can try to outsource. The good thing with a, with a platform like ours is that once you have the platform in place, you can start building actually these, um, applications on top of it and you can make those applications to, to a product. So for example, uh, one more time, one more thing that happened last year, we entered magic, another magic quadrant last year for financial consolidation, which is a, let's say, um, off the shelf separate product from our core product, but it's built on top of the platform and it, and it's sold separately. Right? So there are, there are players obviously in the market who do just that, uh, have a product just for that, but we built it on top of our platform. So you get the platform and you get a, you get a financial consolidation of it and you can build any kind of, uh, business application with it. So, um, once you have applications like that, then, and if it's easy to package it like it is with Jedox, then you can come up with things like a marketplace, which we do have, right? We do have a marketplace. So we put these applications in the marketplace and then you can easily install them from there, and then, it's, I don't know, it covers 60, 70, 80, some, some, some cover 90 percent of the functionality out of the box, and then you just fill it up with life, with your information, and then, uh, off you go, right? It's, it's configured and you can use it, it's ready to use. 

Um, so, uh, the kind of the decision is definitely the, um, um, you have, you have spend constraints, so, so you have to be cautious on, on the spend. Similarly with cloud, right? Do you go to public cloud or do you build and host, you know, you do colocate your servers, you build them yourself and run them yourself, uh, somewhere, right? Yeah, that kind of decision someone needs to make, and in the beginning, maybe decision-making is super easy. Yeah, of course, you go and you buy a pizza box and you put discs in it and CPUs and then you colocate it to the, at your nearest host of choice. But then, if you want to run it at scale, things like compliance come into play and you need, uh, attestations, you need ISO, SOX and CSA STAR, and whatnot. You cannot manage them anymore on the commodity hardware that, that fails every, every three months, something's broken and you need to take the system offline and things like that. So yeah, this is where, where you go into cloud and use services and build it, build it like a Lego, cherry-pick services that you need and build, build something out of it and outsource kind of responsibility to a, to an infrastructure provider of your choice. Same with code, right? Usually, in the beginning, if you have monetary constraints, but you have the means, if you have, uh, well, just a couple of people should be sufficient to get you going, right? That's the thing, right? In the beginning, this Pareto of 80/20, with 20 percent of, uh, personnel, of people, you can build a lot of products, you can build 80 percent of the product. But the last 20 percent of the product are the hardest, and then you need an additional 80 percent of people to, to, to bring on board. If you have the means, this is where you, you can make, uh, make decisions, whether you, you outsource it, uh, let it be built, or you take a managed service, an OEM solution for the means, for the, for the particular case. I mean, nowadays, it would be totally different, right? You look from different perspectives or from different dimensions, cost being just one of those, right? 

Kovid Batra: Yeah. 

Vladislav Maličević: You, uh. So, yeah, it depends. It depends on the context, right? In what kind of context and what kind of setup you are. We are scaled up and we are a mature company, profitable company nowadays, so we can afford ourselves to take a bit more time to make decisions such as outsourcing or buying, acquiring pieces of product into the whole. Whereas when you are at the beginning, usually you only have an idea and your free time and then you go and roll up your sleeves and you work, you code, you usually don't have money, right, to, to go and buy expensive services from others, right? Um, yeah. 

Kovid Batra: Cool, Vlado. I think that's interesting. Um, we are running short on time, so we'll have to wrap up now, but it was a really interesting talk. Would, would love to talk more, uh, and know more details about what happened in those 20 years, what things you actually felt were very challenging that were solved, on those pieces, but I think that would need another episode and I would love to have you for another episode, absolutely. 

Vladislav Maličević: Thanks a lot. Yeah. Let's, let's do that some other time. 

Kovid Batra: Sure, absolutely. All right. So that's it for today. Thank you, Vlado. Thank you for your time. Uh, looking forward to host you once again, uh, very soon. 

Vladislav Maličević: Thanks a lot. Bye-bye. 

Kovid Batra: All right. See you. Bye. 

‘Thriving in Recession: Guide for Tech Leaders’ with Leonid Bugaev, Head of Engineering at Tyk

In the latest episode of 'groCTO Originals' podcast, host Kovid Batra engages in a thought-provoking conversation with Leonid Bugaev, Head of Engineering at Tyk. The episode delves into ‘Thriving in recession: Guide for tech leaders.’

The episode starts with Leonid sharing his background, his approach to balancing work at Tyk with side projects, and the key differences between remote and distributed companies. He explains the impact of economic downturns on businesses, stressing survival as the primary objective. He also shares communication techniques for announcing layoffs to developers and explores challenges in managing teams and maintaining operational efficiency in difficult situations. Leo also advises engineering leaders to prioritize customer retention and think in business terms rather than purely in engineering and R&D terms. He suggests encouraging employees who stay after layoffs with additional bonuses and learning opportunities. 

Lastly, Leonid concludes with essential advice to view change as a driver of innovation and growth rather than a threat. 

Links and mentions 

Timestamps

  • 01:08 - Leonid’s background
  • 05:54 - Patterns of Economic Downturns
  • 09:35 - Riding the Recession Wave Successfully
  • 13:04 - Business Context for Engineering Teams
  • 18:31 - How to Avoid Chaos
  • 24:45 - Maintaining Motivation & Operation
  • 33:22 - Building Trust with Transparency
  • 37:27 - Leo’s Top Takeaways
  • 41:23 - Parting Advice

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with an all-new episode of the groCTO podcast, formerly known as Beyond the Code. And today with us on our episode, we have a very special guest who has 18 years of engineering and leadership experience. He's currently heading Engineering for Tyk. Welcome to the show, Leonid.

Leonid Bugaev: Hello. 

Kovid Batra: Great to have you here. 

Leonid Bugaev: Yeah. Glad to be here. So, it's a good chance to, and a very interesting topic, which was suggested. Uh, so I'm, you know, like I have been in IT for the last 20 years. So a lot of things, I see the companies rising and falling, uh, uh, tried various technologies, uh, I've been both like very deep in engineering, uh, and now I'm in more leadership roles for the last 10 years. So it will be interesting to, you know, like share some of this experience, I guess. 

Kovid Batra: Sure. Absolutely, absolutely. But before, uh, like we get started on today's topic, uh, which is very interesting, and I think nobody has talked about it yet, even though it has been there for the last few years. So we are definitely... For the audience, I'm just putting it out loud, uh, we are going to talk about how to navigate and lead dev teams during a time of recession. So that's the topic for today. But before we jump into this topic, Leo, I think, uh, we would love to know a little bit more about you. Uh, something, uh, around your hobbies, your personal life that you think would be interesting and would love to share with the audience. So please go ahead and tell us who Leo is. 

Leonid Bugaev: Yeah, absolutely. So, you know, like, uh, I was also, you know, like a person who is, who likes challenges, and I was also into like, you know, like startup side projects and all this kind of stuff. So like, I had my first business, it's like at 17 in university, and you know, like, uh, I always worked for startups as well and I really enjoyed it. So I've never really been in, like in a big corporate environment. So, similar, always fast-paced rhythm, and I really enjoy it. So as for now, I'm, you know, like I'm currently living in Istanbul. I'm, you know, like I have two kids, a beautiful wife. And like, uh, I personally, uh, kind of like my hobbies and like my work is kind of like a good intersection. I still, uh, up to this day, I do have a lot of side projects. Some of them I even try to monetize actually, some of them just like for fun and I always stay up-to-date, especially like with the current AI hype and all this kind of stuff. So I'm very, very curious, and, uh, yeah, okay, that's it. 

Kovid Batra: Great, great. And what about your current role at Tyk, like when you were heading engineering for such a company, uh, which is multinational, like you have offices in different parts of the world. How is your experience, uh, working at a multinational? Whereas when you say, uh, you are very curious, you have a lot of side projects, don't you think it is very contrasting, like, uh, on a daily life, how you see things? 

Leonid Bugaev: Uh, well, I don't know if, you know, like, uh, uh, if it's actually contrasting or not, but you know, like an interesting thing is that I probably would never again work in the office, for example, that's definitely, you know, affected my life in the last 10 years, I'm working like fully remotely, uh, for various, like clients in the US in the Europe, et cetera. And it, uh, changed my lifestyle a lot. Uh, it changed the way how I, uh, manage my work-life balance, how I find time for my, like side projects as well and so, because like it allows you to save some of the time as well. And, uh, yeah, so it's, uh, it's very interesting. And being like a distributed company, you know, like there is also kind of a bit of a difference between like remote and distributed company because when we're talking about remote companies, for example, like you have an office in like your country and then employees like working from other cities, for example, in that country, and you are still close to each other, but being distributed means that people are literally spread across the world, a lot of mixed, different time zones. It's, uh, very, very challenging for building in general, like, uh, teams, the communications, all the channels, and you know, like being, you know, like efficient in communication and so on, becomes like super important and actually like essential for survival of such teams, I guess.

Kovid Batra: Yeah. 

Leonid Bugaev: So it's for sure affected a lot, you know, like as the way how I think, how I build the teams, what kind of people I hire and so on. Yeah. 

Kovid Batra: Perfect. Great. All right. Thanks for that quick intro, Leo. Let's move on to, uh, our main topic for the day. And, uh, let's start discussing these economic crises that happen from time to time in the world, right? And, uh, the latest one is something that we are going through and are almost out of. We are really not sure about it, but we have seen the, yeah. 

Leonid Bugaev: I highly doubt it. 

Kovid Batra: Yeah, but we have definitely seen the consequences of it from various angles, in our society, in our companies, everywhere. So I think I just want to first understand, uh, from your perspective, how do you see these economic downturns and how do they play into, uh, companies and businesses when they come?

Leonid Bugaev: Yeah. Overall, like, uh, when you live long enough, you start seeing the patterns. When it repeats again and again, you, like, are not surprised anymore that much, and you kind of like know what to expect and what to prepare for. And I think that's like one of the most important things to understand here. So it's always like, uh, economics, et cetera, it's always in cycles. So we had some amount, for example, of time with cheap money, uh, short, like percent rates, uh, a lot of loans, we see market is booming, everyone gets investments and so on. And now, we are in the, like an interface. So like money is very, very hard. Uh, loans under like a big percentage, and the way how companies get treated, uh, from the outside and from the inside change dramatically because, like your values in the company change a lot. And, uh, in a lot of the cases, you know, like, um, I think one of the main things to understand during these times is that it's not.. First of all, it's a time of chance because the ones who will survive this period, will afterwards get a way nice bonus and a very big boost.

So your first goal as a company during such times is to survive, and it's actually not that easy, especially, you know, like if company gets used to, for example, to the VC money, constant growth, and so on, because like, uh, as I mentioned, like I do feel it, I have some like insights on how the stuff works and right now, getting the investment, et cetera, is much, much harder. In the past, it was enough to get to, you know, like to convince investors that like you have some traction, some good ideas, numbers, et cetera. Now they're looking for the cashflow. How much money do you have in the bank? How much time, how much money you spend, et cetera? Are you profitable or not? I remember those times when companies that were bought, were measured in Instagrams. Like, uh, this company was bought for two Instagram, for three Instagram, et cetera. You know, like, uh, and, uh, they, some of them even didn't have revenue, like you know, like the market was booming. And also, you know, like I do see a lot of consolidation in the market happening. Uh, so yeah, if it's even, like when applied to like our market, like it were like API management and so on. I do see some vendors literally like bankrupt, uh, in the lighter industries as well. Some of them get bought by bigger vendors, et cetera. So it's, you know, like very challenging times. And as I mentioned, like survival, uh, not even growth, but survival is probably one of the main, uh, ideas which you need to understand when going through recession, I think, and it actually involves a lot of steps and, uh, and changing the mindset on how you use a company and its values. Yeah. 

Kovid Batra: Cool. Totally. I think it's very important to understand, as you said, that these are the patterns and they are bound to happen, and with that, I just want to like move on to something which I feel that everyone should, whether you are in the engineering department of a company or you are in any other department of the company, you should be financially, economically, um, aware that these things can happen anytime and one needs to be prepared for such kinds of turmoil. And that's applicable not just to individuals, but also to the businesses as well. What do you think? What's your thought on that? How companies that have come out well out of this situation have been able to navigate it or handle it better? 

Leonid Bugaev: Yeah. Well, first of all, like for example, my role is like an engineering leader. Like I understand that like if you actually spend more time with engineering, like 90 percent of your time is spent with engineering and not with, like the leadership team and business, you're doing the job wrong, especially in such times, because in such times, you really need to spend way, way more time on communicating, like, uh, and understanding the business part, how you actually earn the money. And if you're talking about, like, you know, like metrics, like there's actual money on the table, at that time, it's the main metric, and, uh, you need to, you know, like clearly understand where you spend the money and what is your, like, uh, like return on investment. Let's say so. That's why, you know, like during such times, uh, we may see that some of the like research and development projects get closed. There are some like, uh, uh, optimizations of the talent, which was maybe too hard, like too much during the, like good times and so on. The important part here is that if it's not done right, it can actually also like harm the company because like, you know, like if you see that like your cash flow is not where you want it to be. And, uh, if the leadership team, for example, is, like, maybe not that technical or similar, and you do not have a good connection, the decisions are still made from above. And if they don't fully understand, like the product, and if you don't fully understand the product, there can be some consequences. All of it needs to be synchronized with the business. 

Kovid Batra: Right. 

Leonid Bugaev: And, uh, so if you have some, for example, multiple products which you offer to the customers, you need to clearly understand like how much each of those products actually bring you money, and how much time, for example, you spend on the customer support, on the development time, and so on and so forth. And you need to, like, manage all this, like, uh, like, even, like, spreadsheets, and so on. And same with, you know, like money and understand, like, things like budgeting for the tooling, for the HR, and so on and so forth. I know that, like for a lot of people, especially, from, like engineering, it's very hard to talk about money and very hard to deal with these kinds of like routine tasks as well.

And I've been there myself. I mean, like, uh, and it's still very hard for me. It still, like, causes, you know, like procrastination and fear and you know, like, I just want to do things. Uh, but you know, like the tip here is, it's like if you will not be able to like optimize this money, et cetera, you will not have a company to work with or for, and like, uh, you won't be able to do things anymore.

Kovid Batra: And I think the biggest, I think the biggest challenge for someone who is coming from a technical background is having that context right. The first step is that, like first you understand what exactly the business is saying. And then as a leader, like, as you said, you have experienced it yourself. I'm sure you would have been in that place where you would have gotten that understanding that where you are right now. But then, you have the whole team to lead along, right? And like take along and lead them so that they are also aligned with the business goals. So I think that's more difficult than, uh, for yourself to first understand what's going on. So I might be, um, having that exposure as an engineering leader on a day-to-day basis with the business, with the product and everything. And it would be still a little easier for me to understand what's going on there. And based on that, I can gain that context and I can definitely align my thoughts to that. But when it comes down to delivering that context to someone who has less exposure to the business, to the product, and let's be very frank and true about it. Like, we try to bring developers to that point where they have complete customer empathy, they have complete exposure to all the things that are happening in the business. But of course there is a layer where leaders are talking, engineering managers and product managers are talking who have context for both sides, but developers are still in that silo where when you start explaining to them these concepts, it might not come easy to them, right? 

Leonid Bugaev: Yeah. Yeah. It's really tricky. And, you know, like especially when you know, like a company starts to get like mature, you kind of like, you have to build the layers, like managers, like managers of managers, some, like the board, et cetera. And not everything, first of all, can be, you know, like exposed. Sometimes you can expose it only at the, like, at the final minute or similar. There were a bunch of, like posts on the Hacker News, et cetera, with some examples on how people get fired via Zoom without, like, uh, you know, like a previous notice, et cetera, et cetera. And I've, you know, like, I've been through similar situations. There is always a story behind it. It's not easy, and, you know, there is no easy answer behind it on how to, like deliver such news. So it's very challenging. Working in this role, I've been through multiple, like leadership trainings and one of them, which I found very interesting is, basically they interview, uh, like quite a lot, like multiple sessions and they kind of build like your, like psychological profile with your values, et cetera. And that's you know, quite an interesting document in the end, uh, where you can better understand yourself and it's one that you can also share with your peers, so they will understand like what kind of a person you are, what are your values, et cetera. And my profile was quite unique, uh, in the sense that like, uh, uh, I had like two major like, um, motivators, how they call it. And my major was like, uh, 'peace and harmony', and the second was 'enjoy life and be happy'. You know, like it really contrasts with what I actually need to do sometimes when like, you know, like when such thing happens and, uh, it was, you know, like way hard for me, like it was like a mental shift for myself on how to approach the situations, how to not blame myself, how to, you know, like be more peaceful, how to be reasonable and how to explain it to like people whom you manage, why these changes are required and so on and so forth.

But it's never easy. It's never easy, but once you get, uh, some clear picture and some clear message, and especially, if this clear message is coming from the company, not just, like from you, like this is the direction where we're going, this is what we need to do, et cetera, it becomes actually much easier because like, and especially like, um, for example, uh, at Tyk, we try to share all our financial numbers, churn, new customers, et cetera, et cetera, in the company dashboard, and we share it with everyone publicly in our Slack every month. So every month, everyone can see our numbers, where we're going, are we good, are we bad, et cetera. And every few months, uh, we have like a call with the leadership team where we also try to be open, uh, about the challenges. Obviously, sometimes when, you know, like, uh, for example, we had like one round of layoffs, like about like one and a half years ago, and, uh, sometimes you can't mention all the things. You can put some, you can start preparing for the challenges, you can start, uh, like showing the data, et cetera. It takes some time for people also to prepare for you. It obviously also raises anxiety, and you need to somehow deal with this anxiety. And that's where, you know, like the personal relationships, uh, with your team are very, very important. You can't treat them just like as employees, you have to be very close to them. And so, they will trust you and your judgment and so on. But yeah, it's, it's challenging. It's challenging. 

Kovid Batra: I totally understand, and I feel you there. In these situations, I think the most important part is like first keep your calm in place and then keep everyone and treat everyone as humans, not just employees there. I think that's the biggest factor that would drive how you are communicating things. Of course it really matters what the company's communicating and within that concise communication you are putting across the next steps from there, because for any human, uh, it's very true. Like when you tell something and you keep it open-ended and it's bad news, people would run into chaos, right? They wouldn't know what's going on. They would have their own interpretations. They would decide their own next steps, right? But when it comes with the thought of empathy, with clear, concise communication, and you as a leader are connected and you have your calm in place, I think you would be able to navigate through this situation in a much, much better way as compared to someone who is not doing this.

So in this situation, when you did something, can you recollect some of those incidents, some of those, uh, anecdotal things that happened at that point of time, uh, which you did and could be a good example to explain where you took care of these aspects and, uh, you felt that you did right in that situation? The person who came to you asking, uh, what's going on and you were able to actually help them understand what exactly happened and how things could look like in the near future. So has anything of that sort happened with you? 

Leonid Bugaev: Yeah. So, uh, first of all, as I mentioned, not all information can be, you know, unfortunately, you know, like shared with, like people whom you manage, et cetera, and there comes a period of time when you like, you know, like are by yourself and you know the news, but you can't tell them, and you know, like this tricky situation when you jump on a call with someone, but you know, for example, that they will leave and, but you can't tell it. So like, uh, that's definitely like a tricky one. The last time when we had a similar situation, it was very important to actually track some metrics as well, because, um, if there is, uh, one case when you've had to let go a very good employee and like everyone is asking, "Why? What's happened?" I don't know why. And another case is that like, uh, when there is some known issue, yes, maybe you try to fix it, some performance improvement plan or similar. Um, and also if it's like covered by some metrics, uh, like sprint points or similar, that's another case. And especially, if, uh, such metrics are visible to your, like engineering leaders, to your managers, for example, and so on, and then like, in such situations, there always will be people who are confused, afraid, angry, and so on and so forth. What's very important is that, like, uh, your, like, core team, your, like, managing team, etc., will be prepared for it and will have the right questions, the right answers to the questions. In advance, before it all happens, I've tried to like prepare, like some, like a questionnaire, whatever questions they may end up with, from the people whom they manage and so on. And it makes them like, uh, like much more calm, makes it much easier, because like they know what to answer. 

And another case is that like a person will leave and will not get any, like exit package or similar. Another thing is if you know that like this person will be treated well, and like, you know, the company will give them like some good exit package and so on. So having such details and mentioning them to like managers again, et cetera, is also very important. So you need to, uh, clearly explain them the reasons and these should be very valid reasons and give them all the documentation, uh, all the numbers. And, uh.. 

Kovid Batra: I think it becomes all the more important to have this level of clear communication, better performance reviews, and understanding for yourself, as well as for someone whom you're talking to. So like starting from your, uh, clear and concise communication which is transparent enough to gain that trust, and then coming down to doing proper performance reviews, even emphasizing more in those times, because people would ask for explanations while being in a very chaotic state of mind. So for sure, these are some of the leadership techniques that one can learn from this discussion and should have when going through this particular situation. Apart from this, like, yeah. 

Leonid Bugaev: I just wanted, like to add one more thing, like from, actually from a good point, from the good side. The trick is that sometimes such a shake-up for the company is actually a very good thing. So sometimes you get used to like a more relaxed rhythm to like, um, more like everything is good, like, uh, like everything is, everyone doing well, et cetera, and people are starting to be like calm, relaxed. And when we actually did this action, like the round of layoffs, and as I mentioned, like we focused only on people who had an issue with performance, whom we identified. We actually found that, like, our overall, like velocity and like the ability to deliver actually increased. 

Kovid Batra: Yeah. 

Leonid Bugaev: So, uh, that actually went really good for the company.

Kovid Batra: And I think it should be like that in those situations, because when you talk about the leadership or the founders, in such situations, they become more aggressive because they have to like adjust to what is going on and like adapt to what is going on and like aggressively fight in those situations, and similarly, if as a team, as a company, if you're not doing that, of course, there are chances that you would fall apart. So I think it's definitely good what you are saying that will bring everything in place from the point of view of a leader to deliver what is needed in those situations. So, yeah, cool. And I think with that note, where we're talking about optimizing things and going through this, uh, stress situation, there is a lot that needs to be done in those times because you have a lower number of resources now and you have to deliver more, right? So how do you take up that challenge? Because, um, however we may put it, when people feel the fear of, uh, losing their jobs, that kind of a situation sometimes pushes them to do even more, but also, people take a backseat, right? They know that anything wrong can happen. So how do you manage teams? How do you deliver operational efficiency in that situation? 

Leonid Bugaev: Yeah. If your company did it at least once, then people would expect that more of this may come in the future and, you know, like the overall levels of anxiety always, you know, like will rise. No matter what you do, you can't just like smooth it somehow. But, uh, if you, like are regularly doing it, like, as I mentioned, some clear communication and being very open about your actions, uh, they start to understand it. And also, um, it's not about, you know, like providing, uh, these actions only when it happens, it's also giving them, for example, you know, like what actually gives people a feeling of like productivity or feeling good, et cetera. Usually, it's progress. It's like seeing that you're like progressing somewhere and it can be seen in various things. So it can be, for example, in enhancing performance reviews, as you mentioned, so like, uh, when you start focusing on the performance. Like, and what we actually, one of the actions we did is that we provided a very clear format on how performance reviews should be done, but, you know, focus not on, you know, like, kind of like punishing people if they don't hit some goals, like they have an issue, but, you know, like trying to work together with them to provide some opportunities to grow. Like, "You want to go through some, like, a Kubernetes certification course? No problem. We'll give you budget for this." So, like, let's give some opportunities for learning and so on. So, uh, those people who are like left in the company, they should be encouraged, uh, and they should be motivated, they should maybe get some like additional bonuses and so on. And sometimes like if your company offers it, et cetera, sometimes maybe it also makes sense to motivate with additional, I don't know, like stock options, for example, or like some salary bumps, or similar. So like that's kind of like also essential. 

And also like product metrics are also very important because like when the team is working on some feature, some product and they don't have visibility on, uh, does this actually bring money to the company or not? Like some feedback loops from the customers, they feel a bit disconnected from the business and they don't really understand like what's happening. But when you start connecting them with all of the business metrics, uh, when you stream them, all the, like customer feedback, et cetera, they start to feel that they are part of something bigger. They start to get, like faster feedback loops. They start to see that like, uh, we got this client because of you guys, because you built this, and this actually, you know, like, uh, brings a lot of motivation. Product feedback loops are one of the most important things to motivate your teams and improve stability. Yeah. 

Kovid Batra: Yeah. And I think in these times it becomes even more important, right? Uh, less resources, you have to move fast, you have to innovate more to stay in the market, up in the game. So, I think in general, the philosophy, the ideals say that one should be working on high-impact features. Uh, but I think in these times it becomes even more impactful, right? That you have to focus on these areas. You have to work on these metrics. You can't let your customers churn in this situation. You have to do every possible thing that requires your customer to be there to pay you. And of course, that cannot happen without prioritizing things in the right direction. So in these situations, how do you ensure that things are moving in the right direction at an even faster rate? 

Leonid Bugaev: Yeah, I think it's also important to understand that the overall company structure and how the teams get built makes a lot of difference. And sometimes it does make sense to rethink how you structure your processes, how you structure your team and so on. Let me give you an example. We came up with a scheme about two years ago, and right now we're refining it; we're actually constantly refining it. So first of all, constantly moving and constantly changing your processes is a key thing. Don't be afraid of change. Then, don't be afraid to hire people who do a good job at the things you don't like to do. I, for example, am good with people, I'm good at engineering, but I really don't like, and I'm pretty bad at, budgeting, long-term planning, various kinds of reporting and so on. So a few years ago, when we started scaling the team, and it was a weird jump, from around 10 to 50, a five-times change, we realized that in order to keep up this growth we needed to support operational and delivery initiatives. So we hired a person who is really amazing, and with whom I had a really good bond, who took some of those responsibilities off my plate. And the same was true for the product vision as well. When you grow, especially if you have multiple departments, multiple teams, et cetera, you need someone to hold a coherent vision, so that all of those teams work together and everything they deliver is somehow connected, and you develop it as one team. It's much more efficient than all working separately. And from this point of view, such roles also really allowed us to get enough time to build good benchmarks, to build good metrics for the product: what we want to measure, how we want to measure it, what exactly we want to build. This process is super, super important, because especially in such times, going very deep into engineering is not the answer. You need to start optimizing your product for the customers, for churn, maybe for the user experience and so on. It's not the time for research and development, et cetera.

So, yeah, that's for sure, we've changed a lot. And right now, for example, we're also trying to change and optimize some of our processes; I would say we're trying to involve our technical leads even more in the product area, even more in non-engineering tasks. So, together with the product managers, they will plan roadmaps in advance, find blockers, et cetera, and try to think as a product, as a business, from the money point of view: how are we going to sell it, what kind of customers and jobs-to-be-done do we want to cover? We're trying to teach people how to think business-wise, in business language, not in engineering language. 

Kovid Batra: Makes sense. The understanding that I'm building here, to actually navigate such situations, is to have the right culture in place with people. And along with that, of course, you put more focus on the important metrics that will be impactful, whether they are from the operational efficiency point of view or metrics that directly link to customer satisfaction and customer engagement. So I think a combination of the culture, the right operational metrics, and the right product and business metrics would sum up to give you a framework for how teams should operate in such situations. So even though we have talked about the operational part and the customer-centricity part, I would want to understand: when you are trying to bring in this culture, that things have changed, and you're bringing that transparent communication and trying to bring people together, how do you ensure that people are really onboarded? Do you go the extra mile to understand that from your teams? What exactly do you do to understand whether everyone is getting on the same boat or not? 

Leonid Bugaev: Yes. So first of all, when such things happen, when there are some big changes, it goes through multiple phases. First, you have a preparation phase, and during this preparation I usually try to prepare the engineering leaders with the message, as I mentioned, prepare them in advance, while the majority of team members still don't know about it. Then, when something big happens, some big announcement, some changes, usually it's a company-wide announcement from the founders, from the board, et cetera. And then it should be followed by team-level calls, more private ones. It's very important that every team has, how to say, a 'safe zone' to speak out. People feel safer when they are in a smaller group, especially within their own team. They also need to feel that you are part of the team, not just some boss from the top, so they should be able to speak openly with you about problems, about concerns and so on. And you, as a person, when you join such calls, should be super-transparent and super-open. You can't afford to say something like, "I don't know." You can't afford to say, "I can't tell you this," or similar. You should be very confident and you should be able to answer any kind of concern, or follow up on it later and so on. Because if you are not confident, then they will also not be confident, and that's super important.

So preparation, and getting all the questions and all the answers in advance, is super important. Another thing is that when you build the hierarchy of management, et cetera, it also depends a lot on the cultures involved. While working in distributed teams, I realized that people from different cultures communicate very differently. Some of them are very open, some of them are very direct, some of them are afraid to ask questions in public. When you know these nuances, you can handle it better. So you need to know your people, you need to know your team. And for team members who I see are quiet, where I can tell from their face, from their emotions, that something is not right, I will follow up directly one-on-one with them, or, if I know they have a very good relationship with their manager, for example, I will ask the manager to follow up with them and, if they have any concerns, to communicate them to me directly, and so on. But you need to be constantly on the line and constantly available for any kind of communication; especially for the first week, it should be your main job. Communication, communication, communication. Your main job when such things are happening is to come to people, to give them answers and to be available to them. The worst thing you can do is send some announcement and then close Slack, or whatever you use, not answer, and come back in a few days. That's, yeah, the worst thing. 

Kovid Batra: Cool. I think that's some really, I would say, hands-on advice coming from someone who has seen these situations, and that seems very helpful; we can totally relate to it. So now that you have made us understand how things should move in such situations, what's your forward-looking strategy? Or what are the major pointers someone should take away from the whole discussion we have had? I would like to quickly summarize, as we are running out of time, and hear from you the last few important concluding pointers. 

Leonid Bugaev: Yeah, right. So I think number one is that you should stop being afraid of money-related questions. Money becomes the number one essential metric, and you need to understand how the company makes money, how the product makes money, which exact parts of the product make money, and which of them don't and actually drain your money. That's the number one thing to understand, and you can't do it if you don't have the proper metrics in place. Configuring product usage metrics is super important, but having team metrics is equally important, specifically knowing how much time you spend per category of tickets, per feature, et cetera, and what, for example, your change failure rate is, or the cycle times for specific product features. In the end, if you know, for example, that your team costs $20,000 per sprint, then when a customer asks you to build some feature, you can say, okay, it will take us one sprint to build, so it will cost around $20k; is it actually worth doing or not? 
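
Leonid's back-of-the-envelope costing can be made concrete with a small sketch. The figures, team size and function names below are illustrative assumptions for the example, not numbers from his team:

```python
# Rough sketch: translate a team's sprint cost into a per-feature price tag.
# All numbers here are illustrative assumptions.

def sprint_cost(team_size: int, avg_monthly_cost: float, sprint_weeks: int = 2) -> float:
    """Approximate fully loaded cost of one sprint for a team."""
    weekly_cost = team_size * (avg_monthly_cost / 4)
    return weekly_cost * sprint_weeks

def feature_cost(estimated_sprints: float, cost_per_sprint: float) -> float:
    """Price tag to put next to a customer-requested feature."""
    return estimated_sprints * cost_per_sprint

if __name__ == "__main__":
    per_sprint = sprint_cost(team_size=5, avg_monthly_cost=8_000)  # ~$20k per sprint
    print(f"Cost per sprint: ${per_sprint:,.0f}")
    print(f"Feature estimated at 1 sprint: ${feature_cost(1, per_sprint):,.0f}")
    print(f"Feature estimated at 2.5 sprints: ${feature_cost(2.5, per_sprint):,.0f}")
```

Even a rough number like this turns "should we build it?" into the business conversation Leonid describes.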

Kovid Batra: Yeah. 

Leonid Bugaev: So money becomes very important. And when you actually need to make the hard decisions, communication is the key, and transparency: if you, as a leader, don't have the answers, if you're not confident, people will not be confident either. And you should be very close to your team; they should treat you as a team member. You should build those lines of trust, and you should do it beforehand. It's not like you come in one day saying, "Hello, I'm your friend." No, it takes time. And, especially if you have a more complex, bigger team, you should have a very good chain of communication. Another thing is that you should have a very good performance review process and performance improvement process. Everything should be audited, everything should be written down, et cetera. It will help you so much in the future if you need to make some hard decisions. And when you make them, when, for example, you need to lay off some people, especially if it happens monthly or similar, the best thing you can do as a leader is to be with the people: "I'll be available." Your calendar should be open for anyone and you should be proactively following up with everyone, answering all the questions and being as transparent as possible. 

Kovid Batra: Makes sense. Makes sense. Totally. It is very, very important to communicate to the teams that they should be involved during these phases in constant innovation and learning, so that they can find areas where they can actually create impact even in such times. Motivating them in that direction, telling them that things will be fine if we move with this motivation of continuous learning, improvement and taking steps towards more innovation, would definitely bring that change and make it easier for everyone to navigate such situations.

I think with that, we come down to the closing notes. Any parting advice, Leo, for our audience, for the aspiring, passionate engineering leaders out there? Anything you want to share? 

Leonid Bugaev: Uh, don't be afraid to change. So the change is always, you know, frightening, but it's essential for innovation and growth. So that's, yeah, the major, I think, advice. 

Kovid Batra: Perfect, man. Perfect. All right. Thanks a lot, Leo. It was great having you on the show. Really, really insightful thoughts. And I think the hands-on experience always speaks for itself. So cheers, man. 

Leonid Bugaev: Thank you so much for having me here.

‘Maslow's Hierarchy for Tech Teams’ with Sergio Visinoni, Fractional CTO and Tech Advisor

In the latest episode of the 'groCTO Originals' podcast (formerly 'Beyond the Code'), host Kovid Batra engages in a thought-provoking discussion with Sergio Visinoni, a Fractional CTO, Tech Advisor & Mentor at groCTO Community. He’s also the author of the newsletter ‘Sudo Make Me a CTO’ and runs a tech leadership coaching startup as a solopreneur. The heart of their conversation revolves around ‘Maslow's Hierarchy for Tech Teams’.

The episode kicks off with Sergio discussing his background & then introducing a framework inspired by Maslow’s Hierarchy, categorising tech team maturity into 3 levels: vital infrastructure, application service quality, and developer performance. He explains how this framework helps align technical and business strategies, identify issues, and communicate needs to business leaders. Through a case study, Sergio illustrates that high feature output doesn’t always equate to high performance.

Lastly, Sergio addresses challenges like standardizing DORA metrics across diverse teams and justifying infrastructure needs, emphasizing how the framework aids in balancing stability and performance for data-driven decision-making and effective communication.

Timestamps

  • 00:57 - Sergio’s background
  • 05:28 - Creating Maslow’s hierarchy for tech teams
  • 10:06 - Levels of Maslow’s pyramid
  • 15:00 - Benefits of Maslow’s pyramid
  • 19:10 - Hierarchy serving business needs
  • 23:17 - Implementing data-driven decision-making
  • 29:01 - Conclusion

Episode Transcript 

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo. Today with us, we have a very special humble guest. He comes with 24 plus years of engineering and leadership experience. He has been a Tech Mentor, and a Fractional CTO at multiple startups and orgs. He's also the author of the newsletter 'Sudo Make Me a CTO'. Currently, he's a solopreneur running his own tech leadership coaching startup, and I would like to welcome our guest from Spain, Sergio. Welcome to the show. 

Sergio Visinoni: Hi, everyone. Hi, Kovid. I'm really happy to be here. 

Kovid Batra: Same here. So today, Sergio, I think, uh, we have a very interesting topic to talk about and I derived it from the previous conversation that we were having. It's about the Maslow's hierarchy for tech teams. So I think it's very interesting. It's something related to, uh, personal life also, like how Maslow's hierarchy plays a role in our lives and how that maps to tech teams actually. But before we jump into that, I think, uh, the audience would love to know a little bit more about you. You can share your personal things also. Let us know who is Sergio. Over to you. 

Sergio Visinoni: Yeah, sure. Thank you, Kovid. So before we get into the very interesting topic, let me bore you to death with some personal details. No, jokes aside, I'm Sergio. As you said, I've spent more than 20 years in the tech industry. I'm originally from Italy. I've lived in many countries, mostly in Europe, but then I spent a bit of time in Mexico and then moved here to Spain in 2016. The biggest chunk of my career has been in online marketplaces. That's where I basically went from being a software engineer to being a VP, overseeing more than 300 engineers across multiple countries. That's been the peak of complexity I've dealt with. After that, I've been getting my hands dirty again with smaller companies and startups, until finally, at the end of last year, I took the decision to move out of the traditional employment setup, because I felt I wanted more flexibility. I have a family, I have two kids, since you wanted more personal details. So I want to be able to spend more time with them when needed. If there is something at school, I want to be able to go there without having to give too many justifications, and I also want to spend a bit more time on myself. So that's why I took the jump: from January 1st, I became what I like to call a 'solopreneur', which makes it sound cool. The reality is that what I'm doing is mostly consulting for now. At the moment I have three B2B clients, three companies that I'm consulting with in different capacities. One of them is a traditional fractional CTO engagement; others are more project-specific. In parallel to that, I'm building up my own coaching and mentoring practice. I really like working with individuals, be they senior engineers or tech leaders who just want to get better in their professions. I really enjoy that because it reminds me of one of the things I loved the most when I was an engineering leader, which was working with the people in my team, right? 

Kovid Batra: Yeah. 

Sergio Visinoni: And then, you know, eventually, I also want to start building, you know, online products that can sell themselves. That's why I consider this a solopreneurship. Consulting is a part of it. I don't want to be a full-time consultant forever, but I need to start close to what I can do, uh, right now, as I build my business over time. 

On the personal side, I said that I have a family and two kids. My main hobby is actually woodworking. I like to have analog activities to do; there are lots of pieces of furniture here at home that I've built myself. And you can actually see the progression, from the first pieces, which are not very good, to the later pieces getting better. It's like software, you know: when you look back at the software you wrote one year ago, you realize it's not very good. The same goes with woodworking. And then, yeah, I like reading a lot, I'm an avid reader. I spend time, as you said, producing my own content as well. I have my own newsletter, and I write a lot of content on LinkedIn, trying to be part of this community. And as well, I'm not hiding that it is a big channel for me to market my own brand, right? I try to show my knowledge through all that content so that hopefully people will feel confident signing up for my services. 

Kovid Batra: Sure. I'm sure they would. Great. Thank you so much for this sweet, quick intro. Uh, and I wish you all the best on this solopreneur journey, and I really appreciate you for that. 

Cool. So I think now is the time we move on to making dev teams better, enabling engineering leaders to make dev teams better, and I really would love to, uh, hear about your theory of building great dev teams and how you bring in Maslow's hierarchy in that. Uh, over to you, man. 

Sergio Visinoni: Yeah. So I'm going to start with a bit of context, right? I remember the year when the 'Accelerate' book came out. That was 2018, and it's funny, I still have a clear, vivid memory of a Slack chat I was having with a software architect who was in Norway; I was already here in Barcelona. We were talking about engineering metrics, how they were using certain metrics in their team, et cetera, because I had been kind of banging my head against the wall on this topic. We know it's a difficult topic, right? It's very challenging, and it's very easy to come up with the wrong metrics. At the same time, I didn't want to just give up and say, "It's impossible to measure anything," right? And he said, "You know what? This new book just came out and they claim they found a causal relationship between certain tech metrics and business outcomes." I said, "Wow! I need to read this book." So I saw the link on Amazon, bought it immediately and started reading it, and it came at a very good moment because, as I said, I was thinking a lot about that space. That's where, after reading the book, I started working with a collaborator, and I actually need to say that a lot of the credit for what we did goes to him. He was in my team; he was the Director of Engineering in my team, and he was tasked with helping me figure out how we could look at metrics across the whole portfolio. 

And this is the second piece of context that is important. At that moment in time, as I said, I was overseeing a big engineering organization, multiple teams operating in different countries for different marketplaces. So it was a very heterogeneous setup where we had a combination of shared platform services, but a lot, I would say more than 80 percent, of the code and systems was run locally, and this for historical reasons. The company had been growing very quickly by forking different versions of the same platform in different countries to allow for maximum customization, but also through acquisitions. It's very common; actually, no company grows following the ideal path, there are always bumps in the road, and then, on the engineering side, you're left with some historical legacy to deal with. So we were facing this very heterogeneous setup. At the same time, I was responsible for making something out of it, right? I was responsible for overseeing all of it and trying to figure out how we could look at this portfolio and understand where each team, each organization, was in terms of maturity. That's where we started thinking, okay, we need to develop a 'tech maturity framework' to be able to look at this, to talk about where every team is, and also to help the GMs and the CEOs understand what type of investments they're supposed to make to improve the situation, right? We didn't want this to become a tool to just assess or judge or evaluate. Actually, we wanted it to be a tool to help all the local tech leads in the different countries have stronger, better arguments to justify investments in architectural refactoring, building better tools, or improving processes and whatnot. 

So that's where, on the one side, the Accelerate book came out, and on the other side, we were having this challenge of our own. And that's where, with my colleague Marco, Marco Cupidi, and I recommend him for another podcast, by the way, I think you should definitely interview him, we came up with this idea of the Maslow pyramid for engineering teams. We started investigating this idea of building the analogue of the Maslow pyramid for tech teams, with the aim of figuring out how we can look at tech teams in terms of their maturity, right? Because if you think about the Maslow pyramid for humans, you can roughly say the further you move up the pyramid, in terms of being able to focus on higher and higher needs, the higher the level of maturity your personality has reached, because you're not only focusing on surviving, you're not only focusing on recognition; at some point you actually reach the level of self-realization. I think that's the peak of the pyramid. 

Kovid Batra: Yes, self-actualization. 

Sergio Visinoni: Self-actualization, exactly. So we started working with this idea and now we didn't complete the work, meaning that initially we thought we will have five or six levels of the pyramid. We ended up with three, but that was enough. Honestly, you know, we didn't actually want to stick with the same amount of levels in the pyramid of Maslow pyramid, but we ended up with three levels, and basically, they were organized as such. The first level, the base of the pyramid, which is analogous to, you know, feeding, uh, having food and providing, for Maslow, was focused on what we were calling the vital infrastructure layer. So basically, "Is your platform stable?" And the reason for that is that I've seen many times teams focusing too much on velocity or going faster where actually their main problem is with quality and stability. And whenever a team is facing a lot of stability issues, that has a lot of implications in terms of their ability to work effectively. So it introduces a lot of disruptions, right? They are constantly interrupted by issues, by problems. So there is no way for the team to focus on improving output before they improve their predictability of the cycle. 

Kovid Batra: Right. 

Sergio Visinoni: And predictability is always negatively affected by how many incidents you're having, right? Every time there is an incident, your attention is redirected to dealing with it, and therefore, again, it disrupts and it adds, creates a lot of waste in the process. 

So that first layer, I remember, it included metrics such as availability, but also it had metrics around, uh, latency of the most important pages. We're mostly dealing with websites and with mobile apps as well. So there were metrics on crash rates on the apps, for instance, etc, etc. I don't remember the full list because, y'know, this was a few years ago, but it doesn't really matter because what we came up with was.. Yeah? 

Kovid Batra: The highlight is stability as the first level. 

Sergio Visinoni: Exactly. So 'vital infrastructure' is the basic level: is what you build operating in a healthy way, is it healthy, right? Or does it need to constantly look for food because it's starving? That was kind of the concept. 

And then, the second layer above that was focusing more at the application level. What is the quality of service of your application? And here we're looking more in terms of error rates on the app, or, uh, you know, some of the Core Web Vitals were included there. Not all of them because it was still early days. Uh, actually I don't think we were calling them Core Web Vitals yet, if my memory serves me well. But Google was already quite advanced on, you know, pushing the idea of these important metrics on web performance that reflect the user experience. So this was a combination of, you know, application-level and the quality of service that you're providing to your users, not in terms of user flow, right? So it was not the functional part of user experience, but the non-functional part of user experience. Is it fast enough? Is it reliable? Is it loading in the right way? et cetera, et cetera. So that was the second layer, and the third layer, uh, was basically what we're calling developer performance, I think, and it was mapped into the DORA metrics. So these were the four DORA metrics because that's where you get to, from our perspective, the optimization stage, right? Once the first two layers are in a good place, you can really start focusing your attention on the third layer. 
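
As a rough sketch, the three layers Sergio describes could be captured as a simple mapping from layer to the metrics tracked at that level. The specific metric names and the health threshold below are illustrative assumptions, not the exact list his team used:

```python
# Illustrative sketch of the three-layer "Maslow pyramid for tech teams".
# Metric names and the threshold are assumptions for the example.

MATURITY_PYRAMID = {
    "vital_infrastructure": [         # base layer: is the platform stable?
        "availability",
        "p95_latency_key_pages",
        "mobile_crash_rate",
    ],
    "application_service_quality": [  # middle layer: non-functional user experience
        "application_error_rate",
        "web_performance_vitals",
    ],
    "developer_performance": [        # top layer: the four DORA metrics
        "deployment_frequency",
        "lead_time_for_changes",
        "change_failure_rate",
        "time_to_restore_service",
    ],
}

def next_focus(layer_scores: dict, threshold: float = 0.7) -> str:
    """Return the lowest unhealthy layer, bottom-up, as the suggested focus area."""
    for layer in MATURITY_PYRAMID:  # dict preserves insertion (bottom-up) order
        if layer_scores.get(layer, 0.0) < threshold:
            return layer
    return "optimize_further"

print(next_focus({"vital_infrastructure": 0.9,
                  "application_service_quality": 0.6,
                  "developer_performance": 0.8}))
# -> application_service_quality
```

The bottom-up scan mirrors the point Sergio makes next: lower layers get attention first, without demanding that any layer be 100 percent complete.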

Now, as Maslow says as well, you don't need to complete 100 percent of one layer to start looking at the next layer, right? So it's more like you want a slice where always the base is much larger than the layers above, but you don't want to only focus on feeding yourself until you have enough food to survive for the rest of your life. So there's always a balance. But for us, y'know, going back to when we introduced it, when we introduced this framework, and again, lots of credit goes to Marco. Uh, I was there mostly to help, but also to help socialize this and get buy-in from the business counterparts, right, because until that point, the engineering side of the organization was really considered as a black box for any people outside of engineering. 

Kovid Batra: Yeah. 

Sergio Visinoni: So, yeah, it was "things are slow", "we have lots of legacy", "old platform". There were all these common phrases being thrown around, but most people didn't really understand what was going on, and then they resorted to comparing across countries based on how quickly teams could ship new features, regardless of whether those features were actually the same or not, right? There was a lot of oversimplification in the discourse around, "Is my team or your team performing well?" from a CEO perspective. And I wanted not to put an end to that, but to actually help business leaders understand better, have a deeper understanding of, okay, this is the situation within your team; now, how do we help them move to a better place, right? 

So, using this analogy of the Maslow Pyramid turned out to be a very effective and powerful way to communicate because, you know, every man, every person and their mother have heard about the Maslow Pyramid, and even if they don't, it's very easy to understand the concept once you explain it, right? Of course, we're always putting a lot of caveats, right? Saying, you know, "Don't just translate everything Maslow said into this context and assume that it's going to work." Because there is a lot of simplification going on here, but analogies are useful because they help people grasp the concept more easily. 

Kovid Batra: Yeah, it makes a model in your head where you can always understand where you are, what you exactly need to do. So that way it makes it easier for people to make those decisions and then move forward according to that. Yeah. Makes sense. I understand. 

Sergio Visinoni: Yeah. So, that. On the one side, so there was the benefit for the business leaders, so they will understand better, you know, their situation. Secondly, it was helping local CTOs because again, I was VP of Engineering, but I had CTOs reporting to me because we had different titles depending on the countries. It was very heterogeneous. Don't get too fixated on the titles. But basically, I had local tech leads who were reporting to their CEO, but then they were dotted line reporting to me, and many of them were struggling to find arguments to justify investments in technology, right? Because especially when you're a small, medium-sized team where the competition on the market is very fierce, you know, you get a lot of pressure to just push new features, right? You just, you need to launch this new feature because we need the revenue, blah, blah, and we all, we've all been there. There are lots of good reasons for that to be the case and some bad ones, right? And especially, first-time tech leaders tend to have a hard time, balancing that need, that business need with, okay, I also need to make sure that we ensure this system will perform well in the long run, right, not just tomorrow, especially with businesses that were well-funded with good business models. So it was not a typical startup that might disappear in a couple of months. Uh, this was a proven model that we're rolling out across different countries. So the long-term horizon perspective was justified even in the early stages. 

So we put together the model, we put together a series of metrics, but then the hard part of the work started, which was: how do we collect the metrics, right? Initially, it was a massive Google spreadsheet, and we were asking local CTOs to fill in the data on a monthly basis, and the interesting thing for us was to look at the trends. As we learned from working with DORA metrics, the absolute value is largely meaningless, because it's very context-dependent and also depends on how you define the metric, right? For every one of these metrics, when you work with it, you need to start by defining, okay, "How do I actually want to measure it?" Because DORA doesn't tell you how to measure them, and even when you talk about availability, there are different ways to look at the numbers and figure out how to calculate it. So it was a lot of work to standardize those metrics. And once we got to that level of standardization, every team could see how they were evolving on a monthly basis, and we were generating monthly reports with different levels of granularity. For the executive team, we would look at an index calculated as a kind of summary, a compilation, an aggregation of all the submetrics, to show, okay, "This team is at 75 percent in the journey. Last month they were at 70%." So there's been a significant improvement. And if you look back over the last six months, they've been increasing month over month, et cetera.
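
A minimal sketch of the kind of roll-up Sergio describes might look like the following; the equal weighting, the 0-to-1 normalisation and the sample numbers are assumptions made for illustration, not the formula his team used:

```python
# Sketch: roll monthly submetric scores (each normalised to 0..1) into one index
# and compare month over month. Weights and sample data are illustrative assumptions.

from statistics import mean

def maturity_index(submetric_scores: dict) -> float:
    """Aggregate normalised submetrics into a single 0-100 index (equal weights)."""
    return round(100 * mean(submetric_scores.values()), 1)

june = {"availability": 0.95, "error_rate": 0.70, "deployment_frequency": 0.45}
july = {"availability": 0.97, "error_rate": 0.78, "deployment_frequency": 0.50}

prev, curr = maturity_index(june), maturity_index(july)
print(f"June: {prev}  July: {curr}  trend: {curr - prev:+.1f} points")
```

As Sergio notes, the interesting output is the trend line per team, not the absolute value.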

Whereas some other teams maybe were either flat or even decreasing, declining. That proved to be another very useful tool because in my perspective, one of the best usages of engineering metrics is what they call, um, conversation starters. So those metrics are a way for you to spot that there is something going on. It's very dangerous to jump to conclusions just based on metrics. 

Kovid Batra: One question. 

Sergio Visinoni: But at least they're telling you.. Yeah, go ahead. Sorry. 

Kovid Batra: So I think what I'm understanding from your analogy, and how this hierarchy helps people actually build a strategy, is that it's more towards a technical strategy. Am I getting it right, or is it also about mapping in the business strategy? That's the real problem for tech leaders, right: you're not really sure about what your business needs and what your tech needs, and you're trying to map both of them so that you survive well, and in fact thrive, in situations where you really want to move fast. So is this hierarchy analogy helping purely on the tech strategy point, or is it bringing in the business aspect also? 

Sergio Visinoni: That is a fantastic question, Kovid. I would start by saying that you cannot have a tech strategy that is decoupled from the business strategy to begin with, right? There is no absolute tech strategy that is right for every context; the tech strategy needs to be tailored to the specific company, team, or whatever scope we're looking at. So by definition, this was helping them better translate the business strategy into the tech strategy. But at the same time, it was also helping CTOs in some cases define a clear tech strategy, because they were finally seeing clearly where they had problems, and based on the model, we recommend you don't start optimizing for speed if your site is down every other day, right? You should start by creating that stability. But it also helps them communicate with the business and actually manage business expectations in a more constructive way: in order for us to get to a place where we can fulfill all the needs of the business in terms of new features, new capabilities that we need to build, first we need to address this. Until we address this, we're not going to be able to do our job predictably, and we'll constantly be facing urgent fires that disrupt our ability to plan and actually execute on those plans. So in that sense, there is a clear connection. 

Now, what the framework didn't tell was what features to build or how to solve a problem, right? That's exactly how then, you know, in my team, I was combining the framework as a diagnosis tool on one hand. So it was actually an observability tool. It was giving us information about where the potential problems were, and then I had a pool of experts and software engineers that in some cases I was able to deploy to help specific teams improve in certain areas, right? For instance, you know, "You guys, I have an architect here. You need help on refactoring this part of the architecture on your side. I'm gonna have this person working with you next quarter to help you move from here to there." 

Kovid Batra: Yeah, exactly. I think that's the best part. It's not a static framework, right? You are always moving between these layers depending on the situation. All you have to do is understand what exactly each layer, each layer's metrics, is telling you, and based on that you can at least formulate in your strategy that, okay, this is where we need to focus, and if this is a business requirement, are we equipped to do it or not? You can actually make decisions by knowing the reality of your system. 

Sergio Visinoni: Exactly. You're absolutely right. And basically, it's a framework that helps you prioritize what to do, right?

Kovid Batra: Yeah. 

Sergio Visinoni: Uh, it gives you clear insights on, okay, I should look here or I should look there. So, what's the plan, y'know? Again, this is the problem. This is the symptom. Now, how do we go about addressing the underlying problem? And again, that was happening either in isolation within the team or with support from people that were, you know, uh, located with me in Barcelona. The 'how' is really context-dependent, again, because depending on what the specific problem is. It's also depending on the local pool of competences on the team that is suffering, you might come up with different ways to address it. 

The other interesting side effect of this approach, by the way, you know, it was not easy to deploy it because in the beginning, a lot of teams were seeing this just as extra toil, right? You're just asking me to collect metrics. What's in it for me? Right. It took some time to prove the value, right? And that's where, you know, strong conviction, but also repeating why we're trying to do something, y'know, being very consistent in the messaging is helpful. But, you know, in any organization, especially big organizations, there is a lot of competition for attention, right?

Kovid Batra: So what do you think was your takeaway on incentivizing implementation? Like, why did people start sticking to it with you? 

Sergio Visinoni: I think there were probably two broad categories of people. It's a generalization, but to simplify things: one category was people who understood this at an intuitive level and actually saw it as a way to achieve something they wanted to do. These were people thinking, okay, "How can we become a bit more data-driven in deciding on this type of work?" For this group, it was obvious. Of course, nobody likes extra work in general, but they understood why, right? And then there were the detractors, or people who were resisting this type of change, and what worked with them was spending time to help them understand that this would actually help them do what they wanted to do and were frustrated they couldn't, right? Typically, and this roughly mapped, not exactly one to one, but roughly, to the more junior profiles, those who were saying, okay, "I don't have time. I don't have the resources to do all the important things I need to do because my boss tells me that we need to prioritize X, Y, and Z." And I was telling them, okay, "So how do we change that? How can we help you build stronger arguments to put those on the table and have a proper prioritization discussion, and say, okay, if we do this, we're going to improve that, and if we improve that, these will be the potential consequences?" 

Kovid Batra: Right. 

Sergio Visinoni: So this has been the main driver, um, to get most of the people on board. 

Kovid Batra: Makes sense. And as you were talking about the challenges, I think if you could give me one example, uh, like some anecdotal information around how things worked out when you were implementing it, that would be something very insightful. 

Sergio Visinoni: Yeah. So, on one side, without naming names, we discovered that one specific organization was considered (quote, unquote) "high-performing" because they were churning out lots of features, but it turned out their metrics didn't look very good. So this was an interesting case of doing a lot of average work rather than a smaller amount of good work. And the first thing those metrics did was put us onto something: we realized there was something there that required further investigation. So we started working more closely with the team and realized that there were basically gaps in their knowledge, right? Nobody had any bad intentions. But the problem is that, especially in a big organization, there is a certain element of competition across countries. Nobody wants to look bad, everybody wants to look better, right? And therefore they were a bit trapped in this self-fulfilling prophecy of showing off that they were good, even though they weren't, and so they didn't really ask for help, because asking for help would have meant admitting they didn't necessarily know exactly what they were trying to do. In this discipline, it's very hard to know exactly what you're trying to do; there's a lot of uncertainty, lots of surprises around the corner. So, in this specific case, the framework allowed us to identify something that had been very much below the radar until that moment, which led us to spend more time with this team. Actually, from a business perspective, this team was one of the most important countries in my portfolio, so it became quite evident that I and my team should prioritize our attention on it, and over time that led us to help make significant changes at the organizational level and in the way they work, and therefore in the end results.

So that was one of the biggest investments. The other one, actually the most important asset in that portfolio, fell into the category of those who understood this from day one, right? These were mature people who just saw in this a potential support. So there, the collaboration was extremely easy from day one, and it really became a partnership where I took a big part of my team to work with them for a sustained amount of time to move them very close to excellence on, at that moment, the DORA ranges, if you remember them, and that had a massive impact on the business. The business was going through an important transformation, there were changes in the leadership at the top, but these changes on the tech side also allowed us to support a lot of the initiatives the business wanted to push. So this was a very good success case where all the pieces came together: the local tech team, my team centrally, and the business realizing, okay, we need to put investments here, we need to do things in a better way. Nowadays, that country is still thriving, and part of that is because of some of the investments we made during those days. 

Kovid Batra: I think this philosophy, this tech philosophy, I must say, solves a lot of problems. First and foremost, the decision-making: anyone trying to make a decision comes to a very good point where you know your reality, you know what is needed from the business, and you can map those two and then come out with, okay, "This is the priority list for us." That's the best thing I see coming out of this. And then, of course, it makes you achieve as much as you can, not as much as you want. That is also very important, because sometimes we overstretch ourselves to do things which probably are not realistic or not achievable. This would really give you a fundamental check, and then you would say, okay, "This is what needs to be done there."

One more thing, and I think I discussed this in another podcast: it is very difficult for tech leaders to explain to their business counterparts why this tech thing is very important, right? Now they have a very good framework and a few metrics in place to tell them, okay, "This is where we need to focus right now." And if it is needed right now, then you have to bring it up. So I think data-driven decision-making makes it much easier to convince the business counterparts to take up the decision as well. 

So yeah, I think this was really great, Sergio. I think a very, very good thing I learned today and hopefully the audience did too. So with that, I would like to say buh-bye and would love to have you on another episode anytime soon, and talk more on such topics with you. Thank you so much, Sergio, for today. 

Sergio Visinoni: Thank you, Kovid, and you know, just hit me up when you want me to join again, I will be very happy to have another chat. 

Kovid Batra: Thank you. Thank you so much. See you. 

Sergio Visinoni: Bye!

The DORA Lab EP #04 | Peter Peret Lupo - Head of Engineering at Sibli

In the fourth episode of ‘The DORA Lab’ - an exclusive podcast by groCTO, host Kovid Batra engages in an enlightening conversation with Peter Peret Lupo, Head of Software Engineering at Sibli, who brings over a decade of experience in engineering management.

The episode starts with Peter sharing his hobbies, followed by an in-depth discussion on how DORA metrics play a crucial role in transforming organizational culture and establishing a unified framework for measuring DevOps efficiency. He discusses fostering code collaboration through anonymous surveys and key indicators like code reviews. Peter also covers managing technical debt, the challenges of implementing metrics, and the timeline for adoption. He emphasizes the importance of context in analyzing teams based on metrics and advocates for a bottom-up approach.

Lastly, Peter concludes by emphasizing the significant impact that each team member has on engineering metrics. He encourages individual contributors and managers to monitor both their personal & team progress through these metrics.

Timestamps

  • 00:49 - Peter’s introduction
  • 03:27 - How engineering metrics influence org culture
  • 05:08 - Are DORA metrics enough?
  • 09:29 - Code collaboration as a key metric
  • 12:40 - Metrics to address technical debt
  • 17:27 - Overcoming implementation challenges
  • 21:00 - Timeframe & process of adopting metrics
  • 25:19 - Importance of context in analyzing teams
  • 28:31 - Peter’s advice for ICs & managers

Episode Transcript

Kovid Batra: Hi everyone. This is Kovid, back with another episode of our exclusive series, The DORA Lab, where we talk about all things DORA, engineering metrics, and their impact. And to make today's show really special, we have Peter with us, who is currently an engineering manager at Sibli. For a big part of his career, he was a teacher at a university, then he moved into engineering management, and he now holds more than 10 years of engineering management experience. He has great expertise in setting up dev processes and implementing metrics, and that's why we have him on the show today. Welcome to the show, Peter. 

Peter Peret Lupo: Thank you. 

Kovid Batra: Quickly, Peter, uh, before we jump into DORA metrics, engineering metrics and dev processes, how it impacts the overall engineering efficiency, we would love to know a little bit more about you. What I have just spoken is more from your LinkedIn profile. So we don't know who the real Peter is. So if you could share something about yourself, your hobby or some important events of your life which define you today, I think that would be really great. 

Peter Peret Lupo: Well, um, hobbies I have a few. I like playing games, computer, VR, sort of like different styles, different, uh, genres. Two things that I'm really passionate about are like playing and studying. So I do study a lot. I've been like taking like one hour every day almost to study new things. So it's always exciting to learn new stuff. 

Kovid Batra: Great, great. 

Peter Peret Lupo: I guess, a big nerd here! 

Kovid Batra: Thank you so much. Yeah. No, I think that's really what most software developers and engineering managers would be like, but good to know that about you.

Apart from that, uh, Peter, is there anything you really love or would you like to share any, uh, event from your life that you think is memorable and it defines you today who you are? 

Peter Peret Lupo: Well, that's a deep question. Um, I don't know, I guess like, one thing that was like a big game changer for me was, uh, well, I'm Brazilian, I came to Canada, now I'm Canadian too. Um, so I came to Canada like six years ago, and, uh, it has been transformational, I think. Like cultural differences, a lot of interesting things. I feel more at home here, to be honest. Uh, but like, yeah, uh, meeting people from all over the world, it's been a great experience. 

Kovid Batra: Great, great. All right, Peter. So first of all, thanks a lot for that short, sweet intro about yourself. From this point, let's move on to our main topic of today, which is around engineering metrics and DORA metrics. Before we deep dive, I think the most important part is: why DORA metrics, or why engineering metrics, right? So let's start from there. Why are these engineering metrics important, why should people actually use them, and in what situations? 

Peter Peret Lupo: I think the DORA metrics are really important because they're kind of changing the culture of many organizations. A lot of people were already into measuring, measuring the performance of processes and all, but sometimes it wasn't very well seen that people were measuring processes, people took it personally, and all sorts of things. But nowadays, people are more used to metrics. DORA metrics are a very good framework for DevOps metrics, and they're so widespread nowadays that they're a kind of common language, a common jargon: when you talk about things like mean lead time for changes, everybody knows that, everybody knows how to calculate it. I guess that's the first thing, changing the culture around measuring. And measuring is really important because it allows you to establish a baseline, compare the results of your changes to where you were before, and confirm whether you actually have improved, whether something got worse with your changes, and whether the benefits of your changes are aligned with the organizational goals. It allows everybody to be engaged, at some level, in reaching the organizational goals. 

Kovid Batra: Makes sense. Yeah, absolutely. I think whenever we talk about these metrics, most people are talking about the first-level DORA metrics, which are your lead time for changes or cycle time, deployment frequency, change failure rate, and mean time to restore. These metrics define a major part of how you should look at engineering efficiency as a manager, as a leader, or as a part of the team. But do you think that is sufficient? Like, is looking at just the DORA metrics enough to actually assess a team's engineering efficiency? Or do you think that beyond DORA we should look at metrics that could help teams identify other areas of engineering efficiency as well? 

Peter Peret Lupo: Well, one thing that I like about DORA metrics is that they let us start the culture of measuring. However, I don't see them as the only source of information, the only set of metrics that matter. I think there are a lot of things that are not covered by DORA metrics. The way I see it, it's a very good subset for DevOps; it covers many different aspects of DevOps, and that's important because when you want to measure something, it's important to measure different aspects, because if you are trying to improve something, you want to be able to detect side effects that may be negative on other aspects. So it's important to have a good framework. However, it's focused a lot on DevOps, and I'll tell you, if you are in a very large organization with a lot of developers pushing features, many changes daily, and your goal is to be able to continuously deliver them, roll them back, and assess the time to restore the service when something breaks down, that's good, that's very, very interesting. I think it's very aligned with what Google does; it's a very big corporation with a lot of different teams. However, context matters, right? The organizational context matters. Not all companies are able, for instance, to do continuous delivery. And sometimes it's not a matter of what the company wants or its capability; sometimes their clients don't want that. If you have banks as clients, they don't want you to be changing their production environments every 12 hours or so. They want big, phased releases where they can do their own testing, sometimes their own validation. So it's fundamentally different. 

And in terms of the first part of it: by the time you get to DevOps, to delivering stuff into production, things were already built, right? So building is also something you should be looking at. DORA metrics provide a good entry point to start measuring, but you do need to look at things like quality, for instance, because if you're deploying something and rolling it back, and I want to make a parenthesis there, if you're measuring deployment frequency, you should be telling those apart, because rolling back a feature is not the same as deploying a feature. But if you're rolling back because something wasn't built right, wasn't built correctly, there's a defect there, and DORA metrics won't allow you to understand the nature of the defect: whether it got introduced in the requirements and then propagated to code and tests, or whether somebody made a mistake in the code. They don't allow for this level of understanding of the nature of your defects, or even of productivity. So if you're not in a scenario where you have a lot of teams and a lot of developers pushing code changes all the time, maybe your bottleneck, your concerns, are actually on the development side, and you should be looking at metrics on that side, like code quality or product quality in general, defect density, productivity, these sorts of things. 
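
Peter's parenthesis about separating rollbacks from real deliveries can be handled with a small filter when computing deployment frequency. The event structure below is a hypothetical assumption for illustration, not the format of any specific tool:

```python
# Sketch: compute deployment frequency while excluding rollbacks.
# The deployment-event structure is a hypothetical assumption for illustration.

from datetime import date

deployments = [
    {"day": date(2024, 5, 1), "kind": "feature"},
    {"day": date(2024, 5, 1), "kind": "rollback"},  # should not count as a delivery
    {"day": date(2024, 5, 2), "kind": "feature"},
    {"day": date(2024, 5, 3), "kind": "fix"},
]

def deployment_frequency(events: list, days_in_period: int) -> float:
    """Deployments per day over the period, counting only forward deployments."""
    forward = [e for e in events if e["kind"] != "rollback"]
    return len(forward) / days_in_period

print(f"{deployment_frequency(deployments, days_in_period=7):.2f} deployments/day")
# Rollbacks are still worth tracking separately, e.g. as a change failure signal.
```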

Kovid Batra: Great point there. Actually, context is what matters most, and DORA could be the first step in looking at engineering efficiency in general, but the important, or I should say the real, point is understanding the context and then applying the metrics, and we would need metrics beyond DORA also. As you mentioned, there would be scenarios where you would want to look at defect density, at code quality, and on that note, one of the interesting metrics that I have recently come across is code collaboration, right? People look at how well the teams are collaborating over code reviews. That also becomes an essential part of shipping your software, right? The quality gets impacted, the velocity of the delivery gets impacted. Have you encountered a scenario where you wanted to, or did, measure code review collaboration within the team? And if you did, how did you do it? 

Peter Peret Lupo: Yes, actually in different ways. One thing that I like to do is more of a qualitative measurement, but I do believe there is space for this kind of metric as well. One thing that I like doing, that I'm currently doing and have done in other companies as well, is taking part of the sprint retrospective to share with the team the results of a survey. And one of the things I ask on the survey is whether they are being supported by team members and whether they are supporting team members. It's just a Likert scale, 1 to 5, but it highlights that kind of collaboration and support. 

Kovid Batra: Right.

Peter Peret Lupo: Um, it's anonymous, so I can't tell who is helping whom. Sometimes somebody is being helped a lot, and sometimes some other person is helping a lot. And maybe they switch, depending on whether or not they're working on something they're familiar with and the other person isn't, or vice versa, I don't know. I have no means to tell, and I don't bother about that; nobody should be bothering about that. I think if you have a very senior person, they're probably helping a lot of people, and maybe they're not pushing many changes, but everybody relies on them. So if you're working like that, you should be measuring the team, right? But there are other things as well: you can look at comments on code reviews, who jumps in to do code reviews, and all those kinds of things, right? These are very important indicators that you have a healthy team, that they're supporting each other. You can even spot things like whether people are learning more about the code components they are changing, or a service or whatever area, however you define it, whether you have knowledge silos, and who should be providing training to whom to break down those silos and improve productivity. So yeah, that's very insightful and very helpful. Yeah, definitely. 
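
A minimal sketch of aggregating the kind of anonymous 1-to-5 retrospective survey Peter mentions, keeping only team-level averages and never per-person attribution, could look like this; the question wording, the sample answers and the discussion threshold are assumptions for the example:

```python
# Sketch: aggregate anonymous Likert-scale (1-5) retrospective survey answers
# at the team level. Question labels, responses and threshold are illustrative assumptions.

from statistics import mean

responses = {
    "I feel supported by my team members":       [5, 4, 4, 3, 5],
    "I am able to support my team members":      [4, 4, 3, 4, 5],
    "Code reviews are timely and constructive":  [3, 2, 4, 3, 3],
}

for question, scores in responses.items():
    avg = mean(scores)
    flag = "  <- discuss in retro" if avg < 3.5 else ""
    print(f"{question}: {avg:.1f}/5{flag}")
```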

Kovid Batra: Makes sense, makes sense. Um, is there anything that you have used, uh, to look at technical debt? That is also something I have, uh, always heard about from managers and leaders. Like when you're building, whether you are a large organization or a small one moving faster, uh, the degree might vary, but you accumulate technical debt over a period of time. Is there something that you think could be looked at as a metric to indicate that, okay, it's high time now that we should look at technical debt? Because mostly what happens is that whenever there are team meetings, people just come up with ideas that, okay, this is what we can improve, this is where we are facing a lot of bugs and issues, so let's work on this piece because this has now become a debt for us. But is there something objective that could tell us that yes, now it's time that we should sit down and look at the technical debt part? 

Peter Peret Lupo: Well, uh, the problem is, there are so many, uh, different approaches to technical debt. They're going to be more suited to one organization or another. If you have a very, uh, engineering-driven organization, you tend to have less technical debt, or you tend to pay that technical debt more often. But if that's not the case, if it's more product-driven, you tend to accumulate debt more often, and then you need to apply different approaches. So, one thing that I like doing is, when we are acquiring the debt (and that's normal, that's part of life; sometimes you have to, and you should be okay with that), we catalog it somewhere. Maybe you have an internal wiki or something, whatever documentation tool you use. You add it to a catalog where you basically have your components or services or however you split your application, and then what technical debt you're acquiring, what the appropriate solutions or alternatives would be, how it's going to impact you, and most importantly, when you believe you should pay it, so you don't get a huge impact, right? 
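
A minimal sketch of what one such catalog entry might look like if kept as structured data rather than free-form wiki text. Every field name here is illustrative, simply mirroring what Peter describes: the component, the debt taken on, the alternatives, the impact, and when to pay it down.

```python
from datetime import date

# Hypothetical technical-debt catalog entry; all field names are illustrative.
debt_entry = {
    "component": "billing-service",
    "debt": "Hand-rolled retry logic instead of the shared retry library",
    "alternatives": [
        "Adopt the shared retry library",
        "Extract retries into a reusable decorator",
    ],
    "impact": "Inconsistent behaviour during payment-gateway outages",
    "pay_down_by": date(2025, 3, 31),  # when the team believes it should be paid
}

print(f"{debt_entry['component']}: pay down by {debt_entry['pay_down_by']}")
```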

Kovid Batra: Right. Of course. So, just one thing I recently heard from one of my friends. They look at the time for a new developer to do the first commit as an indicator of technical debt. So if they.. first release, actually. So if someone who is new to the team is taking too much time to reach the point where they could actually merge their code and have it on production, uh, if that is high, and they, of course, keep a baseline there, then they consider that there is a lot of debt they might have accumulated, because of which the learning and the implementation for the first release from a new developer is taking time. So do you think this approach could work, or could this approach be inferential to what we are talking about, like the technical debt? 

Peter Peret Lupo: I think that in this particular case, there are so many confounding variables. People join the team at different seniority levels. A senior might take less time than a junior, even in a scenario where there is more technical debt. So that alone is hard to compare. Even at the same level, people join with different skills. So maybe you have a feature where you need to write frontend and backend code, and some people are, uh, full stack but more backend-inclined or more frontend-inclined. That alone will change your metric. You are waiting for a person to join the team so you can have a new point of measurement, so you're not going to have a lot of points, and there's going to be a lot of variation because of these confounding factors. And the onboarding process may change in between. The way that I usually like to introduce people to the code is asking them to reduce the amount of warnings from code linters first, then fixing some simple defects, then a more complex defect, and then writing a new feature. Uh, so even the onboarding strategy you define is going to affect that metric. So I wouldn't be very confident in that metric for this purpose. 

Kovid Batra: Okay. Got it, got it. Makes sense. All right. If I have to ask you, uh, it's never easy, right? Like in the beginning, you mentioned that the first point itself, talking about these metrics, is hard, right? Even if they make a lot of practical sense, talking about them is hard. So when there is inherent resistance towards this topic in the teams, when you go about implementing it, there could be a hell of a lot of challenges, right? And I'm sure you would have come across some of those in your journey when you were implementing it. So can you give us some examples from the implementation point of view? Like, how does the implementation go for, uh, these metrics, and what are the challenges one faces when implementing them? And maybe, are there certain timelines one should look at for a full-fledged implementation and getting some success from the implementation of these metrics? 

Peter Peret Lupo: Right. So, um, usually you're measuring something because you want to prove something, right? Because you want to achieve a certain goal, uh, maybe organizational, or just for the team. So I think that the first thing to lower, uh, the resistance is having a clear goal, and making sure that everybody understands that the goal is not measuring anybody, uh, individually. That already reduces the resistance a lot, and making sure that people understand why that goal is important and how you're going to measure it is also extremely important.

Another thing that is interesting is to ask people for input on how they think you could be measuring that. So make them part of the process as well. Maybe the way that they advise is not going to be the way that you end up measuring; maybe it influences it, maybe it's exactly what they suggest. But the important thing is to make them part of the process, so they feel that the process of establishing metrics is not something that is being done to them, but something that they are doing with everybody else. 

And honestly, so many things are already measured by the team: uh, velocity, or however they estimate productivity. Even the estimates themselves on tickets and user stories, uh, these are all attempts to measure things, and they're used to compare the estimations with, uh, the actual results, so people know what the measures are used for. So sometimes it's just a matter of establishing these parallels. Look, we measure productivity, we measure velocity to see if we are getting better or getting worse. We also need to measure, uh, quality to see if we're catching more defects than before, if we have more escaped defects. Measurement is in some way already a part of our lives. Most of the time, it's a matter of highlighting that, and, uh, people are usually comfortable with it, yeah, once you go through all this. 

Kovid Batra: Right. Makes sense. Um, I think the major part is done when the team is aligned on the 'why' part, like why you are doing it, because as soon as they realize that there is some importance to measuring this metric, they would automatically be, uh, intuitively be aligned towards measuring that, and it becomes easier because then if there are challenges related to the implementation process also, they would like come together and maybe find out ways to, uh, build things around that and help in actual measurement of the metric also.

But if I have to ask, let's say a team is fully aligned and, uh, we are looking at implementing, let's say, DORA metrics for a team. What should be the time frame one should keep in mind to get an understanding of what these metrics are saying? Because it's not straightforward. You look at the deployment frequency; if it's high, you say things are good, if it's low, things are bad. Of course, it doesn't work like that. You have to understand these metrics in the first place, in the cadence of your team, in the situation of your team, and then make sense out of them and find out those bottlenecks or those areas of inefficiency where you could really work, right? So what should be the time frame in the mind of, say, an engineering manager who is implementing this for a team? What time frame should that person keep in mind, and what exactly should be the first step towards measuring these once you start implementing them? 

Peter Peret Lupo: Right. So it's a very good question. The time frame varies a lot, and I'll tell you why: because more important than the time is the number of data points that you have. If you wait for, let's say, a month and you have three data points, you can't establish any sort of pattern. You don't know if it's increasing or decreasing. There's no confidence. There's no statistical relevance. The sample is too small. And that's generic for any metric. But if you collect, say, three data points every day, maybe in a week you'll have enough. The problem I see here is, let's say, uh, something happens that is out of the ordinary. I want to investigate that to see if there is room for improvement there, or if it actually indicates that something went really well and you want to replicate that success in the other cases. Um, you can't tell what's out of the ordinary if you're looking at three or four points. 

Kovid Batra: Right. 

Peter Peret Lupo: Uh, or if it's just normal variation. So, I think that what's important is to have a good baseline. That's going to vary from process to process, from organization to organization, but there are some indications in the literature that you should collect at least 30 data points. I think that with 30 data points you have somewhat of a good, uh, statistical relevance for your analysis, but I don't think you have to wait for that many points in order to start investigating things. Sometimes you have 10 or 12 and you already see something that looks like something you should investigate, or you start having an idea of what's going on, if it's higher than you expected, if it's lower than you expected, and you can start taking actions and investigating that, as long as you consider that your interpretation may not be valid, because your sample is small. The time that it takes, the time frame, I guess is going to depend on how often you are able to collect a new data point, and that's going to vary from organization to organization and from process to process; measuring quality is different from measuring productivity, uh, and so on. So, I think all these things need to be taken into consideration. I think that the confidence is important. 
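
One simple way to turn a baseline into something actionable, sketched below with made-up numbers: compute the mean and spread of the data points you have and flag anything far outside that range for investigation. This is only a rule of thumb, not a substitute for a proper control chart or for the team's own context.

```python
from statistics import mean, stdev

# Hypothetical data points, e.g. cycle time in hours for recent changes.
samples = [20, 22, 19, 25, 21, 23, 20, 60, 22, 21, 19, 24]

baseline = mean(samples)
spread = stdev(samples)
upper = baseline + 2 * spread          # simple two-sigma rule of thumb
lower = max(0, baseline - 2 * spread)  # cycle time cannot be negative

outliers = [x for x in samples if x > upper or x < lower]
print(f"Baseline {baseline:.1f} h, typical range {lower:.1f}-{upper:.1f} h")
print("Out of the ordinary, worth investigating:", outliers)
```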

And one other thing that you mentioned there, about the team analyzing. It's something that I want to touch on because it's an experience that I've had more than once. You mentioned context. Context is super important. So much so that I think that the team that is producing the metrics should be the first one looking at them. Not management, not higher management, not the C-level, because the team are the only ones who are able to look at the data points and say, "Yeah, things here didn't go well. Our only QA was on vacation." Or somebody took a sick day, or whatever reason; they have the context. So they should be the first ones looking at the metric, analyzing the metric, and conveying the results of their analysis to higher levels, not the other way around. Because what happens when you have it the other way around is that they don't have the context, so they're looking at just the numbers, and if a number is bad, they're gonna inquire about it. If it's good, they're usually gonna stay quiet, uh, and they're gonna ask about the bad numbers, whether or not there was a good reason for them, whether or not it was, let's say, an exception. And then the team is going to feel that they have to defend themselves, to justify themselves every time, and it creates a very poisonous scenario where the team feels that management is there to question them and they need to defend themselves against management, instead of having the autonomy to report their successes and their failures to management and letting management deal with those results instead of the causes. 

Kovid Batra: Totally, totally. 

Peter Peret Lupo: Context is super important. 

Kovid Batra: Great point there. Yeah, of course. Great point there, uh, highlighting the do's and don'ts from your experience, and it's very relevant actually, because the numbers don't always give you the reality of the situation. They could be an indicator, and that's why we have them in place. First, you measure it; don't come to a conclusion from it directly. If you see some discrepancy, like if there are some extraordinary data points, as you said, then that is the point at which you should come out and inquire to understand what exactly happened, but not directly jump on the team saying, "Oh, you're not doing well," or the other way around. So I think that totally makes sense, uh, Peter. 

I think it was really, really interesting talking to you about the metrics and the implementation and the experiences that you have shared. Um, we could go on and on about this, but today I think we'll have to stop here and, uh, say goodbye to you. Maybe we can have another round of discussion continuing with those experiences that you have had with the implementation.

Peter Peret Lupo: Definitely. It was a real pleasure. 

Kovid Batra: It would be our pleasure, actually. But, uh, like before you leave, uh, anything that you want to share with our audience as parting advice, uh, would be really appreciated. 

Peter Peret Lupo: All right. Um, look at your metrics as an ally, as a guide to tell you where you're going. Compare what you're doing now with what you were doing before to see if you're improving. When I say 'you', I'm talking to, uh, each individual in the team. Consider your team metrics, look at them; your work is part of the work that is being analyzed, and you have an influence on that at an individual level and with your team. So do look at your metrics, compare where you are with where you were before, see if your changes, uh, carried the improvements you're looking for, and talk to your team about these metrics in your sprint retrospective. That's a very powerful tool to tell you if your, uh, retrospective actions are being effective in delivering the change that you want in your process.

Kovid Batra: Great! I think great piece of advice there. Thanks, Peter. Thank you so much. Uh, this was really insightful. Loved talking to you. 

Peter Peret Lupo: All right. Thank you.

The DORA Lab EP #03 | Ben Parisot - Engineering Manager at Planet Argon

In the third episode of ‘The DORA Lab’ - an exclusive podcast by groCTO, host Kovid Batra has an engaging conversation with Ben Parisot, Software Engineering Manager at Planet Argon, with over 10 years of experience in engineering and engineering management.

The episode starts with Ben offering a glimpse into his personal life. Following that, he delves into engineering metrics, specifically DORA & the SPACE framework. He highlights their significance in improving the overall efficiency of development processes, ultimately benefiting customers & dev teams alike. He discusses the specific metrics he monitors for team satisfaction and the crucial areas that affect engineering efficiency, underscoring the importance of code quality & longevity. Ben also discusses the challenges faced when implementing these metrics, providing effective strategies to tackle them.

Towards the end, Ben provides parting advice for engineering managers leading small teams, emphasizing the importance of identifying & utilizing metrics tailored to their specific needs.

Timestamps

  • 00:09 - Ben’s Introduction
  • 03:05 - Understanding DORA & Engineering Metrics
  • 07:51 - Code Quality, Collaboration & Roadmap Contribution
  • 11:34 - Team Satisfaction & DevEx
  • 16:52 - Focus Areas of Engineering Efficiency
  • 24:39 - Implementing Metrics Challenges
  • 32:11 - Ben’s Parting Advice

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo, and today's episode is a bit special. This is part of The DORA Lab series and this episode becomes even more special with our guest who comes with an amazing experience of 10 plus years in engineering and engineering management. He's currently working as an engineering manager with Planet Argon. We are grateful to have you here, Ben. Welcome to the show. 

Ben Parisot: Thank you, Kovid. It's really great to be here. 

Kovid Batra: Cool, Ben. So today, I think, when we talk about The DORA Lab, which is our exclusive series, we talk only about DORA, engineering metrics beyond DORA, and things related to the implementation of these metrics and their impact on engineering teams. This is going to be a big topic where we will deep dive into the nitty-gritties that you have experienced with this framework. But before that, we would love to know about you. Something interesting, uh, about your life, your hobbies, and your role at your company. So, please go ahead and let us know. 

Ben Parisot: Sure. Um, well, my name is Ben Parisot, uh, as you said, and I am the engineering manager at Planet Argon. We are a Ruby on Rails agency. Uh, we are headquartered in Portland, Oregon in the US but we have a distributed team across the US and, uh, many different countries around the world. We specifically work with, uh, companies that have legacy rails applications that are becoming difficult to maintain, um, either because of outdated versions, um, or just like complicated legacy code. We all know how the older an application gets, the more complex and, um, difficult it can be to work within that code. So we try to come in, uh, help people pull back from the brink of having to do a big rewrite and modernize and update their applications. 

Um, for myself, I am an Engineering Manager. I'm a writer, and a part-time, uh, very, very non-professional musician. Um, I like to read, I really like comic books. I'm currently working on a mural for my son, uh, he's turning 11 in about a week, and he requested a giant Godzilla mural painted on his bedroom wall. This is the first time I've ever done a giant mural, so we'll see how it goes. So far, so good. Uh, but he did tell me, he said, "Dad, even if it's bad, it's just paint." So, I think that.. uh, I'm still trying to make it look good, but, um, he's got the right mindset about it, I think. 

Kovid Batra: Yeah, I mean, I have to appreciate you for that and honestly, great courage and initiative from your end to take up this for the kid. I am sure you will do a great job there. All the best, man. And thanks a lot for this quick, interesting intro about yourself. 

Let's get going for The DORA Lab. So I think before we deep dive into, uh, what these metrics are all about and what you do, let's have a quick definition of DORA from you, like what is DORA and why is it important and maybe not just DORA, but other metrics, engineering metrics, why they are important. 

Ben Parisot: Sure. So my understanding of DORA is sort of the classical one: it's DevOps Research and Assessment. It was a think-tank type of group; I can't remember the company that they started with, but it was essentially to improve productivity specifically around deployments, I believe, and smoothing out some deployment, uh, and more DevOps-related processes, I think. But, uh, it's essentially evolved to be more about engineering metrics in a broader sense, still pretty focused on deployment. So specifically, how fast teams can deploy code, the frequency of those deployments and changes, uh, to the codebase. Um, and then also metrics around failures and response to failures, and how fast people, uh, or engineering teams, can respond to incidents. 

Beyond DORA, there's of course the SPACE framework, which is a little bit broader and looks at some of the more day-to-day processes involved in software engineering, um, and also developer experience. We at Planet Argon, we do a little bit of DORA. We focus mainly on more SPACE-related metrics, um, although there's a bunch of crossover. For us, metrics are very important, both for, you know, evaluating the performance of our team, so that we can, you know, show value to our clients and prove, you know, "Hey, this is the value that we are providing beyond just the deliverable." Sometimes, because of the nature of our work, we do a lot of work on the backend or improvements that are not necessarily super apparent to an end user or even, you know, the stakeholder within the project. So having metrics that we can show to our clients to say, "Hey, this is, um, you know, improving our processes and our team's efficiency and therefore getting more value for your budget, because we're able to move faster and accomplish more." That's a good thing. Also, it's just very helpful to, you know, keep up good team morale, and for longevity's sake; engineers on our team really like to know where they stand. They like to know how they're doing. Um, they like to have benchmarks on which they can, uh, measure their own growth, and know where they are in the role advancement path based on some, you know, quantifiable metric that is not just, you know, feedback from their coworkers or from clients. 

Kovid Batra: Yeah, I think that totally makes sense to me, and while you were talking about the purpose of bringing these metrics in place and going beyond DORA also, that totally relates to modern software development processes, because you just don't want to restrict yourself to a certain part of engineering efficiency when you measure it; you don't want to look only at the lead time for changes or only at the deployment frequency. There are things beyond these which are also very important and become, uh, the areas of inefficiency or bottlenecks in the team's overall delivery. So, just for example, and this is a question also: is there good collaboration within the team or not, right? If there is no good code collaboration, that is a big bottleneck, right? Getting reviews done in a proper way, where the quality of the codebase stays intact, really, really matters. Similarly, if you talk about delivery, when you're delivering the software from the planning phase to the actual feature rollout and users using it, cycle time in DORA will probably cover that, but going beyond that space to understand the project management piece and making sure how much time in total goes into it is again an aspect. Then, there are areas where you would want to understand your team satisfaction and how much teams are contributing towards the roadmap, because that's also how you determine whether you have accumulated too much technical debt, or whether there are too many bugs coming in and the team is tied up right over there. 

And an interesting one which I recently came across was someone was measuring that when new engineers are getting onboarded, uh, how much time does it take to go for the first commit, right? So, these small metrics really matter in defining how the overall efficiency of the engineering or the development process looks like. So, I just wanted to understand from you, just for example, as I mentioned, how do you look at code collaboration or how do you look at, uh, roadmap contribution or how do you look at initial code quality, deliverability, uh, when it comes to your team. And I understand like you are a software agency, maybe a roadmap contribution thing might not be very relevant. So, maybe we can skip that. But otherwise, I think everything else would definitely be relevant to your context. 

Ben Parisot: Sure. Yeah, being an agency, we work with multiple different clients, um, different repos in different locations even, some of them in GitHub, Bitbucket, um, GitLab; like, we've got clients with code everywhere. Um, so having consistent metrics across all of the DORA or SPACE framework is pretty difficult. So we've been able to piece together metrics that make sense for our team. And as you said, a lot of the metrics are there for productivity and efficiency's sake for sure, but if you dig one level deeper, there is a developer experience metric below that. Um, so for instance, PR review: you mentioned, um, turnaround time on PRs, how quickly people that are assigned to review are getting to it, how quickly changes are being implemented after a review has been submitted. Um, those are, on the surface level, very productivity-driven metrics. We want our team to be moving quickly and reviewing things in a timely manner. But as you mentioned, a slow PR turnaround time can be a symptom of bad communication, and that can lead to a lot of frustration, um, and even disagreement amongst team members. So that's really a developer satisfaction metric as well, um, because we want to make sure no one's frustrated with any of their coworkers, uh, or bottlenecked and just stuck not knowing what to do because they have a PR that hasn't been touched. 

We use a couple of different tools. We're luckily a pretty small team, so my job as a manager in collecting all this data for the metrics is doable for now, not necessarily scalable, but doable with the size of our team. I do a lot of manual data collection, and then we also have various third-party integrations and sort of marketplace plugins. So, we work a lot in GitHub, and we use some plugins in GitHub to help give us some insight into, for instance, like you said, commit time, or the number of commits within a PR and the size of those commits. You know, we have an engineering handbook that has a lot of our, you know, agreed-upon best practices, and those are generally in place so that our developers can be more efficient and happy in their work. So, you know, it can feel a little nitpicky to be like, "Oh, you only had two commits in this giant PR." Like, if the work's getting done, the work's getting done. However, you know, good commit best practice (we try to practice atomic commits here at Planet Argon) is going to, you know, not only create easier rollbacks if necessary; there are just a lot of reasons for our best practices. So the metrics try to enforce the best practices that we have in mind already, or that we have in place already. And then, yeah, uh, you asked what other tools or? 

Kovid Batra: So, yeah, I mean talking specifically about the team satisfaction piece. I think that's very critical. Like, that's one of the fundamental things that should be there in the team so that you make sure the team is actually productive, right? From what you have explained, uh, the kind of best practices you're following and the kind of things that you are doing within the team reflect that you are concerned about that. So, are there any specific metrics there which you track? Can you like name a few of them for us? 

Ben Parisot: Sure, sure. Um, so for team satisfaction, we track a number of metrics. We track build processes, code review, deep work, documentation, ease of release, local development, local environment setup, managing technical debt, review turnaround, uh, roadmap and priorities, and test coverage and test efficiency. So these are all sentiment metrics. I use them from a management perspective not only to get a feeling of how the team is doing in terms of where their frustrations lie, but also to direct my work. If I see that some of these metrics or some of these areas of focus are receiving consistently low sentiment scores, then I can brainstorm with the team, bring it to an all-hands meeting and be like, "Here are some ideas that I have for improving these. What is your input? What does a reasonable timeline look like?" And then, show them that, you know, their continued participation in these, um, developer experience surveys is leading to results that are improving their work experience. 
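
As a sketch of how those sentiment scores might be rolled up (assuming 1-5 responses like the Likert scale mentioned earlier; all data and focus areas below are hypothetical), the lowest-scoring areas can be surfaced automatically ahead of the next all-hands.

```python
from statistics import mean

# Hypothetical 1-5 sentiment responses per focus area from the latest survey.
responses = {
    "code review": [4, 5, 3, 4],
    "ease of release": [2, 3, 2, 2],
    "local environment setup": [3, 2, 3, 2],
    "documentation": [4, 4, 3, 5],
}

THRESHOLD = 3.0  # areas averaging below this get discussed with the team
for area, scores in sorted(responses.items(), key=lambda kv: mean(kv[1])):
    avg = mean(scores)
    flag = "  <- bring to the all-hands" if avg < THRESHOLD else ""
    print(f"{area}: {avg:.2f}{flag}")
```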

Kovid Batra: Makes sense. Out of all these metrics that you mentioned, which are those top three or four, maybe? Because it's very difficult to, uh, look at 10, 12 metrics every time, right? So.. 

Ben Parisot: Yes. 

Kovid Batra: There is a go-to metric, or there are a few go-to metrics, that quickly tell you, okay, what's going on, right? So for me, sometimes what I basically do is, if I want to see whether the initial code quality is coming out good or not, I mostly look at how many commits are happening after the PRs are raised for review and how many comments were there. So when I look at these two, I quickly understand, okay, there is too much to and fro happening, and the quality initially is not coming out well. But in the case of team satisfaction, of course, it's a very feeling-driven, qualitative, uh, piece we are talking about. But still, if you have to objectify it with, let's say, three or four metrics, what would be those three or four important metrics that you think impact the developer's experience or developer's satisfaction in your team? 

Ben Parisot: Sure. So we actually have four KPIs that we track in addition to those sentiment metrics, and they are also sort of sentiment metrics as well, but they're a little higher level. Um, we track weekly time loss, ease of delivery, engagement, uh, and perceived productivity. So we feel like those touch pretty much all of the different aspects of the software development life cycle, or the developer's day-to-day experience. So, ease of delivery: how easy is it for you to be, uh, deploying your code? Um, that touches on any bottlenecks in the deployment pipelines, any issues with PRs, PR reviews, that sort of thing. Um, engagement speaks to how excited or interested people are about the work that they're doing. So that's the meat of the software development work. Um, perceived productivity is, you know, how productive you think you are being or how productive you feel like you are being. Um, and that's really important, because sometimes the hard metrics of productivity and the perceived productivity can be very different, and not only in the sense of, "Oh, you think you're being very productive, but you're not on paper." Um, oftentimes it's the reverse, where someone feels like they aren't being productive at all, and I know that from their sentiment score. Um, but then I can go and pull up PRs that they've submitted or work that they've been doing in JIRA and just show them a whole list of work that they've done. I feel like sometimes developers are very much in the weeds of the work, and they don't have a chance to step back and look at all that they've accomplished. So that's an important metric to make sure that people are recognizing and appreciating all of the work and their contributions to a project, and not feeling like, "Oh, this one ticket, I haven't been very productive on, so, therefore, I'm not a very good developer." Uh, and then finally, weekly time loss is a big one. This one is more about everything outside of the development work. So this also focuses on, like, how often are you in your flow? Are you having too many meetings? Do you feel like, you know, the asynchronous communication that is just the nature of our distributed team, is that blocking you? And are you being, you know, held up too much by waiting around for a response from someone? So that's an important metric that we look at as well. 

Kovid Batra: Makes sense. Thanks. Thanks for that detailed overview. I think team satisfaction is, of course, something that I also really, really care about. Beyond that, what do you think are those important areas of engineering efficiency that really impact the broader piece of efficiency? So, uh, just to give you an example, are you focusing mostly on deliverability in your teams, or are you focusing more on, uh, the quality of the work, or is it something related to maybe sprints? I'm really just throwing out ideas here to understand from you how you, uh, look at which are those important areas of engineering efficiency other than developer satisfaction. 

Ben Parisot: Yeah. Right. I think, um, for our company, we're a little bit different even from other agencies. Companies don't come to us often for large new feature development. You know, as I mentioned at the top of the recording, we inherit really old applications. We inherit applications that, you know, the developers have just given up on. So a lot of our job is modernizing and improving the quality of the code. So, you know, we try to keep our deployment metrics looking nice and have all of the metrics around deployment and, uh, post-deployment, obviously. Um, but from my standpoint, I really focus on the quality of the code and sort of the longevity of the code that the team is writing. So, you know, we look at coding practices at Planet Argon; we measure, you know, quality in a lot of different ways. Some of them are, like I said earlier, practicing atomic commits, the size of PRs. Uh, because we have multiple projects that people are working on, we have different levels of understanding of those projects. So there are, you know, some people that have very high domain knowledge of an application and some people that don't. So when we are submitting PRs, we try to have more than one person look at a PR, and one person is often coming with higher domain knowledge and reviewing that code for whether, uh, it satisfies the requirements, whether it is high-quality code within the sort of ecosystem of that existing application. And then, another person is looking more at the best practices and the coding standards side of it, and reviewing it from a little more objective viewpoint, not necessarily as it relates to that application.

Let's see, I'm trying to find some specific metrics around code quality. Um, commits after a PR submission is one. You know, if we are finding that our team is often submitting a PR and then having to go back, work a lot more on it, and change a lot more things, that means that those PRs are probably not ready, or they're being submitted a little early. Sometimes that's a reflection of the developer's understanding of the task or of the code. Sometimes it's a reflection of the clarity of the issue that they've been assigned or the requirements. You know, if the client hasn't very clearly defined what they want, then we submit a PR and they're like, "Oh, that's not what I meant." So that's an important one that we look at. And then, PR approval time, I would say, is another one. Again, that one is both for our clients, because we want to be moving as quickly with them as we can, even though we don't often work against hard deadlines; we still like to keep a nice pace and show that our team is active on their projects. And then, it's also important for our team, because nobody likes to be waiting around for days and days for their PR to be reviewed. 
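
For teams that want to pull these two signals (commits after a PR was opened, and time to first approval) rather than collect them by hand, here is a minimal sketch against the GitHub REST API. The repo name, PR number, and token are placeholders, and other hosts like Bitbucket or GitLab would need their own equivalents.

```python
import requests
from datetime import datetime

# Placeholders: set your own repo, PR number, and a token with read access.
OWNER, REPO, PR_NUMBER = "example-org", "example-app", 123
HEADERS = {"Authorization": "Bearer <token>", "Accept": "application/vnd.github+json"}
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}"

def ts(value):
    # GitHub returns ISO-8601 timestamps ending in 'Z'.
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

pr = requests.get(BASE, headers=HEADERS).json()
commits = requests.get(f"{BASE}/commits", headers=HEADERS).json()
reviews = requests.get(f"{BASE}/reviews", headers=HEADERS).json()

opened_at = ts(pr["created_at"])
late_commits = [c for c in commits if ts(c["commit"]["committer"]["date"]) > opened_at]
approvals = [ts(r["submitted_at"]) for r in reviews if r["state"] == "APPROVED"]

print(f"Commits added after the PR was opened: {len(late_commits)}")
if approvals:
    print(f"Time to first approval: {min(approvals) - opened_at}")
```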

Kovid Batra: Makes sense. I think, yeah, uh, these are some critical areas, and almost every engineering team struggles with them in terms of efficiency. And what I have felt also is that not just organizations, right, but individual teams have different challenges, and for each team, you could be looking at different metrics to solve their problems. So one team could have a low deployment frequency because of maybe not enough tooling in place and a lot of manual intervention being there, right? That's when their deployments are not coming out well, or are breaking most of the time. Or, for another team, the same low deployment frequency could be because the developers are actually not raising enough PRs in a defined period of time. So there is a velocity challenge there, basically. That's why the deployment frequency is low. So most of the time, I think, for each team the challenge would be different, and the metrics that you pick would be different. So in your case, as you mentioned, how you do it for your clients and for your teams is a different method. Cool. I think with that, I.. Yeah, you were saying something. 

Ben Parisot: Oh, I was, yeah. I was gonna say, I think that, uh, also, you know, we have sort of across-the-board company best practices or benchmarks that we try to reach for a lot of different things. For instance, test coverage or code coverage, and technical debt. And because we inherit codebases at various levels of, um, quality, the metric itself is not necessarily good or bad. The progress towards a goal is where we look. So we have a code coverage goal across the company of like 80, 85%, um, test coverage, code coverage. And we've inherited big applications, live applications, that have zero test coverage. And so, you know, when I'm looking at metrics for tests, uh, you know, it's not necessarily, "Hey, is this application's test coverage meeting our standards?" It's, "Is it moving towards our standards?" And then it also gets down to the individual developers. Like, "Are you writing the tests for the new code that you're writing? And also, is there any time carved out of the work that you have on that project to backfill tests?" And similarly with, uh, technical debt, you know, we use a technical debt tagging tool, and oftentimes, like every three months or so, we have a group session where we spend an hour, hour and a half with our cameras off on Zoom, just going into codebases that we're currently working on and finding as much technical debt as we can. Um, and that's not necessarily, like, oh, we're trying to, you know, find who's not writing the best code, or trying to find all the problems that previous developers caused. It's more of, you know, are there other areas for improvement? Right. And also, um, are there any potential risks in this codebase that we've overlooked just by going through the day-to-day? And so, the goal is not, "Hey, we need to have no technical debt ever." It's, "Are we reducing the backlog of tech debt that we're currently working within?" 
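
The "progress towards a goal" framing can be expressed very simply. The numbers below are invented, and the 80% target simply mirrors the company-wide goal Ben mentions.

```python
# Hypothetical line-coverage readings (percent) for an inherited application,
# taken at successive monthly audits.
coverage_history = [0.0, 4.5, 11.0, 18.5, 27.0]
GOAL = 80.0

latest, previous = coverage_history[-1], coverage_history[-2]
print(f"Current coverage: {latest:.1f}% (company goal: {GOAL:.0f}%)")
print(f"Change since last audit: {latest - previous:+.1f} points")
print("Trending toward the goal" if latest > previous else "Coverage is not improving")
```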

Kovid Batra: Totally, totally. And I think this again brings up the point that for every team, the need for a metric would be very different. In your case, the kind of projects you are getting by default have so much technical debt; that's why they're coming to you. People are not managing it, and then the project is handed over to you. So, having that test coverage as a goal or a metric makes more sense for your team. So, agreed. I think I am a hundred percent in line with that. But one thing is for sure: there must be some level of, uh, implementation challenges there, right? Uh, it's not straightforward, like you come in with a project and you say, "Okay, these are the metrics we'll be tracking to make sure the efficiency is in place." There are always implementation challenges that come with that. So, I mean, from your examples or your experience, uh, what do you think most teams struggle with while implementing these metrics? And I would be more than happy to hear about some successes, or maybe failures also, related to your implementation experiences. 

Ben Parisot: Yeah. So I would say the very first challenge that we face is always, um, I don't want to say team morale, but, um, the somewhat overwhelming nature of it, depending on the state of the codebase. Like, if you inherit a codebase that's really large and there are no tests, that's, you know, overwhelming to think about, having to go and write all those tests, but it's also overwhelming and scary to think, "Oh, what if something breaks?" Like, a test is a really good indicator of where things might be breaking, and there's none of that, so the guardrails are off. Um, and that's scary. So helping people get used to it, especially newer developers who have just joined the team, helping them get used to working within a codebase that might not be as nice and clean as previous ones that they've worked with, is a big challenge. In terms of actual implementation, uh, we face a number of challenges being an agency. Like I mentioned earlier, some codebases are in, um, different places like GitHub or Bitbucket. You know, obviously those tools have generally the same features and generally the same, you know, sort of dashboard-type things. Um, but if we are using any sort of integrated tool to measure metrics around those things, and we get, um, a repo that's not on the platform where we have that integration happening, then we don't get the metrics on that, or we have to spin up a new integration. 

Kovid Batra: Yeah. 

Ben Parisot: Um, for some of our clients, we have access to some of their repos and not others, and so, like we are working in an app ecosystem where the application that we are responsible for is communicating and integrated with another application that we don't, we can't see; and so that's very difficult at times. That can be a challenge for implementing certain metrics, because we need to know, like, especially performance metrics for the application. Something might be happening on this hidden application that we don't have any control over or visibility into. 

Um, and then what else? I would say another challenge that we face is that, um, most of our developers are working on two to three applications at a time, and depending on the length of the engagement, um, sometimes people will switch on and off. So it can be difficult to track metrics for just a single project when developers are working on it for maybe a few weeks or a few months and then leaving. Sometimes we have a dedicated developer who's the lead and then have a support developer come in when necessary. And so that can be challenging if we're trying to parse out why there might have been a shift in the metrics, or a spike in one metric or another, or a drop, and be like, "Okay, well, let's contextualize that around who was working on this project, and try to determine, okay, is this telling us something important about the project itself, or is it just data that is reflecting the adding or subtracting of different developers on the project?" So that can be a challenge. 

Specifically, I can mention an actual sort of case study that we had. Uh, we were using Code Climate, which is a tool that we still use; we use the quality tool for audits and stuff. Um, but when I first started at Planet Argon, I wanted to implement its velocity tool, which is like the sister tool to quality, and it is very heavily focused on cycle time. Um, and it was all great, I was very excited about it. Went and signed up, um, went and connected our GitHub accounts, or I guess I did the Bitbucket account at the time 'cause most of our repos were in Bitbucket. Um, I didn't realize at the time, at least, that you could only integrate with one platform. And so, even though we had accounts and we had clients with applications on GitHub, we were only able to integrate with Bitbucket. So some engineers' work was not being caught in that tool at all, because they were working primarily in a GitHub application. And again, like I said, sometimes developers would then go to one of the applications in Bitbucket, help out, and then drop off. So it was just causing a lot of fluctuations in data and also not giving us metrics for the entire team consistently. So we eventually had to drop it, because it was just not a very valuable tool, um, in that it was not capturing all of the activities of all of our engineers everywhere they were working. Um, I wished that it was, but it's the nature of the agency work and also, you know, having people that are, um. 

Kovid Batra: Yeah, I totally agree on that point, and the challenges that you're facing are actually very common. But at the same time, having said that, I believe the tooling around metrics observation and monitoring has come way ahead of what you have been using in Code Climate. So, the challenge still remains: most teams try to gather metrics manually, which is time-consuming, or, in your case, where agencies are working on different projects and different codebases, it's very difficult to gather the right metrics for individual developers there too. The challenges are very valid, but now the tooling that is available in the market is able to cater to all those challenges. So maybe you want to give it a try and see, uh, your metrics implementation getting in place. But yeah, thanks for highlighting these pointers. A lot of people, a lot of engineering managers and engineering leaders, struggle with the same challenges while implementing these, so bringing these challenges in front of the audience and talking about them would bring some level of awareness of how to handle them as well. 

Great. Great, Ben. I think with this, uh, we would like to bring an end to today's episode. It was really amazing to understand how Planet Argon is working, how you are dealing with those challenges of implementing metrics, and how well you are actually doing, even though the right tooling or right things are not in place. The important part is that you realize the purpose. You don't go ahead and do it just for the sake of doing it. You're doing it where you have a purpose, and you know that this can impact the overall productivity of the team and also bring credibility with your clientele: yes, we are doing something, and we have something to show in numbers too. So, I really appreciate that. 

And before we say goodbye, is there any parting advice or something that you would like to share with the audience? Please go ahead. 

Ben Parisot: Oh, wow! Um, yeah, sure. So I think your point about understanding the purpose of the metrics is important. You know, my team, uh, I am the manager of a small team at a small company. I wear a lot of hats and I do a lot of different things for my team. They show me a lot of grace, I suppose, when I have, you know, incomplete data for them. Like you said, there are a lot of tools out there that can provide a more holistic, uh, look. Um, and I think that if you are an agency, uh, if you're a manager on a small team and you struggle to keep up with all of the metrics that you have promised for your team, or that you know you should be tracking, uh, if you really focus on the ones that are impacting their day-to-day experience, as well as the value that they're providing for either, you know, your company's internal stakeholders or external clients, you're going to quickly see the metrics that are most important, and your team is going to appreciate that you're focusing on those. And then, the rest of it is going to fall into place when it does. And when it doesn't, um, you know, your team's not going to be too upset, because they know, they see you focusing on the stuff that matters most to them. 

Kovid Batra: Great. Thanks a lot, Ben. Thank you so much for such great, insightful experiences that you have shared with us. And, uh, we wish you all the best, uh, and your kid a very happy birthday in advance. 

Ben Parisot: Thank you. 

Kovid Batra: All right, Ben. Thank you so much for your time. Have a great day. 

Ben Parisot: Yes. Thanks.

‘Evolution of Software Testing: From Brick Phones to AI’ with Leigh Rathbone, Head of Quality Engineering at CAVU

In the latest episode of ‘groCTO: Originals’ (formerly ‘Beyond the Code: Originals’), host Kovid Batra engages with Leigh Rathbone, Head of Quality Engineering at CAVU, who has a rich technical background with reputable organizations like Sony Ericsson and The Very Group. He has been at the forefront of tech innovation, working on the first touchscreen smartphone and smartwatch, and later with AR, VR, & AI tech. The conversation revolves around ‘Evolution of Software Testing: From Brick Phones to AI’.

The podcast begins with Leigh introducing himself and sharing a life-defining moment in his career. He further highlights the shift from manual testing to automation, discussing in depth the automation framework for touchscreen smartphones from 19 years ago. Leigh also addresses the challenges of implementing AI and how to motivate teams to explore automation opportunities. He also discusses the evolution of AR, VR, and 3D gaming & their role in shaping modern-day testing practices, emphasizing the importance of health and safety considerations for testers.

Lastly, Leigh offers parting advice urging software makers & testers to always prioritize user experience & code quality when creating software.

Timestamps

  • 00:06 - Leigh’s Introduction
  • 01:07 - Life-defining Moment in Leigh’s Career
  • 04:10 - Evolution of Software Testing
  • 09:20 - Role of AI in Testing
  • 11:14 - Conflicts with Implementing AI
  • 15:29 - Adapting to AI with Upskilling
  • 21:02 - Evolution of AR, VR & 3D Gaming
  • 25:45 - Unique Value of Humans in Testing
  • 32:58 - Conclusion & Parting Advice

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo. Today, we are lucky to have a tech industry veteran with us on our show. He is the Head of Quality Engineering at CAVU. He has had a fascinating 25-plus years of engineering and leadership experience, working on cutting-edge technologies including the world's first smartphone and smartwatch. He was also involved in the development of progressive download and DRM technologies that laid the groundwork for modern streaming services. We are grateful to have you on the show, Leigh. 

Leigh Rathbone: Thank you, Kovid. It's great to be here. I'm really happy to be invited. I'm looking forward to sharing a few experiences and a few stories in order to hopefully inspire and help other people in the tech industry. 

Kovid Batra: Great, Leigh. And today, I think we would have a lot of things to deep dive into and learn from you, from your experience. But before we go there, where we talk about the changing landscape of software testing, coming from brick phones to AI, let's get to know a little bit more about each other. Can you just tell us something about yourself, some of your life-defining experiences, so that I and the audience can know you a little more? 

Leigh Rathbone: Yeah. Well, I'm Leigh Rathbone. I live in the UK, uh, in England. I live just North of a city called Liverpool. People might've heard of Liverpool because there's a few famous football teams that come from there, but there's also a famous musical band called the Beatles that came from Liverpool. So, I live just North of Liverpool. I have two children. I'm happily married, been married for over 20 years. I am actually an Aston Villa football fan. I don't support any of the Liverpool football clubs. I'm not a cricket fan or a rugby fan. It's 100 percent football for me. I do like a bit of fitness, so I like to get out on my bike. I like to go to the gym. I like to drink alcohol. Am I allowed to say that, Kovid? Am I allowed to say that? I do like a little bit of alcohol. Um, and like everybody else, I think I'm addicted to Netflix and all the streaming services, which is quite emotional for me, Kovid, because having played a part in the building blocks and a tiny, tiny part in the building blocks of what later became streaming, when I'm listening to Spotify or when I'm watching something on Amazon Video or Netflix, I do get a little bit emotional at times thinking, "Oh my God! I played a minute part of that technology that we now take for granted." So, that's my sort of out-of-work stuff that, um, I hope people will either find very boring or very interesting, I don't know. 

Kovid Batra: No, I definitely relate to it and I would love to know, like, which was the last, uh, series you watched or a movie you watched on Netflix and what did you love about it? 

Leigh Rathbone: So, I watched a film last night called 'No Escape'. Um, it's about a family that goes to, uh, a country in Asia, and they don't say the name of the country for legal reasons. Um, but they get captured in a hotel, and it's about how they escape from some terrorists in the hotel with the help of Brosnan, who's also in the film. So, yeah, it was, uh, high intensity, high energy, and I think that's probably why I liked it, because from almost the very first 5-10 minutes, it's like, whoa, what's going on here? So, it was called 'No Escape'. It's on Netflix in the UK. I don't know whether it'll be on Netflix across the world. But yeah, it's an old film. It's not new. I think it's about three years old. But yeah, it was quite enjoyable. 

Kovid Batra: Cool, cool. I think that that's really interesting and thank you for such a quick, sweet intro about yourself. And of course, your contributions are not minute. Uh, I'm sure you would have done that in that initial stage of tech when the technology was building up. So, thanks on behalf of the tech community there. 

Uh, all right, Leigh, thank you so much, and let's get started on today's main topic. So, you come from a background where you have seen the evolution of this landscape of software testing and, as I said earlier, from brick phones to AI. I'm sure, uh, you have a lot of experiences to share from the days when it all started. So, let's start from the part where there was no automation, there was manual testing, and how that evolved from manual testing to automation today, and how things are being balanced today, because we are still not 100 percent automated. So, let's talk about something like, uh, your first smartphone, uh, maybe, where you might not have had all the test automation or sophisticated automation possible. How was your experience in that phase? 

Leigh Rathbone: Well, I am actually holding it up for those people that, uh, are watching the video. 

Kovid Batra: Oh my God! Oh my God! 

Leigh Rathbone: I'm holding up the world's first touchscreen smartphone, and you can see my reflection and your reflection on the screen there. This is called the Sony Ericsson P800. I worked on this in 2002, and it hit the market in 2003 as the world's first touchscreen smartphone, way before Apple came to the market. But actually, if I could, Kovid, can I go back maybe four years before this? Because there's a story to be told around manual testing and automation before I got to this, and there is an automation story for this phone as well. But if I can start in 1999: I'd been in testing for 12 months, and I moved around a lot in my first four years, Kovid, because I wanted to build up my skillsets, and the only way to do that was to move jobs. So, in my first four years, I had four jobs. So, in 1999 I'm in my second job. I'm 12 months into my testing career and I explore a tool called WinRunner. I think it was the first automation tool. So, there I am in 1999, writing automation scripts without really knowing the scale of what was going to unfold in front of not just the testing community, but the tech community. And when I was using this tool called WinRunner, oh, Kovid, it was horrible. Oh my God! So, I would be writing scripts, and it was pretty much record and playback, okay? So, I was clicking around, I was looking at the code, I was going, "Oh, this is exciting." And every time a new release would come from the developers, none of my scripts would work. You know the story here, Kovid. As soon as a new release of code comes out, there are bug fixes, things move around on the screens, you know, different classes change, there might be new classes. This just knocks out all of my scripts. So, I'd spend the next, sort of, I don't know, eight days working, reworking, refactoring my automation scripts. And then, I'd just get to the point where I was tackling new scripts for the new code that had dropped, and a new drop of code would come. And I found myself in this cycle in 1999 of using WinRunner and getting a little bit excited but getting really frustrated. And I thought, "Where is this going to go? Has it got a future in tech? Has it got a future in testing?" Because I wasn't seeing the return on investment with the way I was using it. So, at that point in time, 1999, I saw a glimpse, a tiny glimpse of the future, Kovid. And that was 25 years ago. And for the next couple of years, I saw this slow introduction, very, very slow back then, Kovid, of manual testing and automation. And the two were very separate, and that continued for a long, long time, whereby you'd have manual testers and automation testers. And I feel that's almost leading and jumping ahead, because I do want to talk about this phone, Kovid, because this phone was touchscreen, and we had automation in 2005. We built an automation framework bespoke to Sony Ericsson that would do stress testing, soak testing, um, you know, um, it would actually do functional automation testing on a touchscreen smartphone. Let that sink in: 19 years ago, we built a bespoke automation framework for a touchscreen smartphone. Let that sink in, folks. 

Kovid Batra: Yeah. 

Leigh Rathbone: Unreal, absolutely unreal, Kovid. Back in the day, that was pretty much unheard of. Unbelievable. It still blows my mind to this day that in 2005, 19 years ago, on a touchscreen smartphone, we had an automation framework that added loads and loads of value. 

Kovid Batra: Totally, totally. And was this your first project wherein you actually had a chance to work hands-on with this automation piece? Like, was that your first project? 

Leigh Rathbone: So, what happened here, Kovid, and this is a trend that happened throughout the testing and tech industry right up until, I'd say, about seven years ago: we had an automation team and a manual team. I'll give you some context for the size. The automation team was about five people. The manual test team was about 80 people. So, you can see the contrast there. So, they were doing pretty much what I was doing in 1999. They were writing some functional test scripts that we could use for regression testing. Uh, but they were mostly using it for soak testing. So in other words, random touches on the screen; these scripts needed to run for 24 hours in order for us to be able to say, "Yes, that software will survive in live with a customer touching the screen for 24 hours without having memory leaks," as an example. So, their work felt very separate to what we were doing. There was a slight overlap with the functional testing, where they'd take some of our scripts and turn them into, um, automated regression packs. But they were way ahead of the curve. They were using this automation pack for soak testing to make sure there were no memory leaks by randomly dibbing on a screen. And I say dibbing, Kovid, because you touched the screen with a dibber, right? It wasn't a finger. Yeah, you needed this little dibber that clicked onto the side of the phone in order to touch the screen. So, they managed to mimic random clicks on the screen in order to test for memory leaks. Fascinating. Absolutely fascinating. So at that point, we started to see a nice little return on investment on automation being used.
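
For readers who want a feel for what that kind of soak test does, here is a minimal, self-contained sketch of the idea in TypeScript. It is purely illustrative: the fake device, the screen dimensions, and the leak threshold are assumptions of ours, not anything from Sony Ericsson's framework, which was bespoke and never made public.

```typescript
// A sketch of soak testing by random input: hammer the UI with random taps and
// watch for memory that only ever grows. The Device here is a fake stand-in;
// swap in real tap/memory hooks to apply the same idea to an actual app.
interface Device {
  tap(x: number, y: number): void;
  memoryUsageMb(): number;
}

function soak(device: Device, width: number, height: number, iterations: number): void {
  const baseline = device.memoryUsageMb();
  for (let i = 0; i < iterations; i++) {
    // Random "dib" somewhere on the screen.
    device.tap(Math.floor(Math.random() * width), Math.floor(Math.random() * height));
    // Memory that keeps climbing across thousands of random interactions is the
    // classic signature of a leak.
    if (device.memoryUsageMb() > baseline * 2) {
      throw new Error(`Possible memory leak after ${i} taps`);
    }
  }
}

// Fake device for demonstration only: it leaks a little memory on every tap.
let fakeMemory = 20;
const leakyDevice: Device = {
  tap: () => { fakeMemory += 0.01; },
  memoryUsageMb: () => fakeMemory,
};

try {
  soak(leakyDevice, 208, 320, 10_000); // 208x320 is an assumed screen size
  console.log('No leak detected');
} catch (err) {
  console.error((err as Error).message);
}
```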

Kovid Batra: Got it. Got it. And from there, how did it get picked up over the years? Like, how have teams collaborated? Was there any resistance from, of course, every time this automation piece comes in, uh, there is resistance also, right? People start pointing things. So, how was that journey at that point? 

Leigh Rathbone: I think there's always resistance to change and we'll see it with AI. When we come on to the AI section of the talk, we'll see it there. There will always be resistance to change because people go through fear when change is announced. So, if you're talking to a tester, a QA or a QE and you're saying, "Look, you're going to have to change your skill sets in order to learn this," they're gonna go through fear before they spot the opportunity and come out the other side. So, everywhere I've gone, there's been resistance to automation. And there's something else here, Kovid: from the years 1998 to 2015, test teams were massive. They were huge. And because we were in the waterfall methodology, they were pretty much standalone teams, and all the people that were in charge of running these big teams, they had empires, and they didn't want to see those empires come down. So actually, resistance wasn't just sometimes from testers themselves, it was from the top, where they might say, "Oh, this might mean that the number of testers I need goes down, so, therefore, my empire shrinks." And there were test leaders out there, Kovid, doing that, very, very naughty people. Like, honestly, trying to protect their own jobs instead of thinking about the future. So, I saw some testers try and accelerate the use of automation. I also saw test leaders put the brakes on it because they were worried about the status of their jobs and the size of their empires.

Kovid Batra: Right. And I think, fast-forwarding to today, we won't take much longer to jump to the AI part here. Like, a lot of automation is already in place. According to your, uh, view of the tech industry right now, uh, let's say, if there are a hundred companies, out of those hundred, how many are at a scale where they have done like 80 percent or 90 percent automation of testing?

Leigh Rathbone: God! 80 to 90 percent automation of testing. You'll never ever reach that number, because you can do infinite amounts of testing, okay? So, let's put that one out there. The question still stands, though. You're asking, of 100 companies, how many have automation embedded in their DNA? So I would probably, yeah, I would probably say it's in the region of 70 to 80 percent. And I wouldn't be surprised if it's higher, though I've got no data to back that up. What I have got to back that up is the fact that I've worked in 14 different companies, and I spend a lot of time in the tech industry, in the tech communities, talking to other companies. And it's very rare now that you come across a company that doesn't have automation.

But here's the twist, Kovid, there's a massive twist here. I don't employ automation testers, okay? So in 2015, I made a conscious decision not to employ automation testers. I actually employed testers who can do the exploratory side and the automation side. And that is a trend now, Kovid, that really is a thing. Not many companies now are after QAs that only do automation. They want QAs that can do the exploratory, the automation, a little bit of performance, a little bit of security, and the people skills, obviously. You know, you've got to put those in there with everything else.

Kovid Batra: Of course. 

Leigh Rathbone: Yeah. So for me now, this trend that I sort of spotted in 2014 and started doing in 2015, and that I've done at every company I've been to, that really is the big trend in testers and QAs right now.

Kovid Batra: Got it. I think it's more like, uh, an ever-growing, evolutionary discipline, right? Uh, every time you explore new use cases, and it also depends on the kind of business, the products the company is rolling out. If there are new use cases coming in, if there are new products coming in, you can't just have everything automated every time. So yeah, I mean, uh, having that 80-90% of testing automated is something quite far-fetched for most teams, until and unless you are stagnating on one product and you're just running it for years and years, which is definitely not, uh, sustainable for any business.

So here, my question would be, like, how do you ensure that your teams are always up for exploring the side which can be automated and making sure that it's being done? One part is, of course, having the right hires in the team, but what are the processes and workflows that you implement there from time to time, so that people are, of course, doing the manual testing and also, wherever there are existing use cases they can automate, they're doing that as well?

Leigh Rathbone: It's a really good question, Kovid, and I'll just roll back in the process a little bit, because for me, automation, even test automation, is not just the QA person's task. I believe it's a shared responsibility. So, quality is owned by everybody in the team and everyone plays their different parts. So for me, the automation starts right with the developers, to say, "Well, what are you automating? What are your developer checks that you're going to automate?" Because you don't want developers doing manual checks either. You want them to automate as much as they can, because at the unit test level and even the integration test level, the feedback loops are really quick. So, that means the test is really cheap. So, you're getting some really good, rich feedback initially to show that nothing obvious has broken early on when a developer adds new code. So, that's the first part, and that, now, I think, is industry standard. There aren't many places where developers are sat there going, "I'm not going to write any tests at all." Those days are long, long gone, Kovid. I think all, you know, modern developers that live by the modern coding principles know that they have to write automated checks.
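
As an aside, the cheap, fast feedback Leigh describes at the unit level looks something like the sketch below: a tiny function and two checks that run in milliseconds on every change. The function, the file name, and the use of Node's built-in test runner are illustrative assumptions on our part, not part of any stack mentioned in the episode.

```typescript
// discount.test.ts — a minimal developer-level check with a fast feedback loop.
// Run with `node --test` after compiling, or with a TypeScript runner such as tsx.
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Stand-in for whatever small unit the developer just changed.
function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new RangeError('percent must be 0-100');
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

test('applies a 20% discount', () => {
  assert.equal(applyDiscount(50, 20), 40);
});

test('rejects an impossible discount', () => {
  assert.throws(() => applyDiscount(50, 150), RangeError);
});
```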

But I think your question is targeted at the QAs. So, how do we get QAs involved? So, you have to breed the curiosity gene in people, Kovid. So, you're absolutely right. You have to bring people in who have the skills because it's very, very hard to start with a team of 10 QAs where no one can automate. That's really hard. I've only ever done that once. That's really hard. So, what I have done is I've brought people in with the mindset of thinking about automation first. The mindset of collaborating with developers to see what they're automating. The curiosity and the skill sets to grow and develop and learn more about the tools. And then, you have to give people time, Kovid. There is no way you can expect people who don't have the automation skills to just upskill like that. It's just not fair. You have to support, support, and support some more. And that comes from myself giving people time. It's understanding how people learn, Kovid.

So, I'll give you an example. Pair learning. That's one technique: you can get somebody who can't automate and maybe you get them pairing with someone else who can't automate, and you give them a course. That's point number one. Pair learning could also be pairing someone who does know automation with someone who doesn't. But guess what? Not everyone likes pairing, because it's quite a stressful environment for some people. Jumping on a screen and sharing your screen while you type, and them saying, "No, you've typed that wrong." That can be quite stressful. So, some people prefer to learn in isolation, but they like to do a brief course first, and then come back and actually start writing something right in the moment, like taking a ticket they're manually testing now, doing something and practising, then getting someone to peer review it. So, not everyone likes pair learning. Not everybody likes to learn in isolation. You have to understand your people. How do they like to learn and grow? And then, you have to relay to them why you are asking them to learn and grow. Why am I asking people to change? 'Cause the skill bases that are needed tomorrow and the day after and in two years' time are different to the skill bases we need right now, or even 10 years ago. And if people don't upskill, how are they going to stay relevant?

Kovid Batra: Right. 

Leigh Rathbone: Everything is about staying relevant, especially when test automation came along, Kovid, and people were saying, "Hey, we won't need QAs because the automation will get rid of them." And you'd be amazed how many people believed that, Kovid, you'd be absolutely gobsmacked how many tech leaders had in their minds that automation would get rid of QAs. So, we've been fighting an uphill struggle since then to justify our existence in some cases, which is wrong because I think the value addition of QAs and all the crafts when they come together is brilliant. But for me, for people who struggle to understand why they need to upskill in automation, it's the need to stay relevant and keep adding value to the company that they're in.

Kovid Batra: Makes sense. And what about, uh, the changing landscape here? So basically, uh, you have seen that part where you moved to phones, and when those phones were being built, you said that was the first time you built something for touchscreen testing, right? Now, in the last five to seven years, we have seen AR and VR coming into the picture, right? The processes that you follow, let's say the pair learning and other things that you bring along to make sure that the testing piece, the quality assurance piece, is in place as you grow as a company and as a tech team: for VR and AR kinds of technologies, how has that changed? How has it evolved?

Leigh Rathbone: Well, massively, because if you think about testing back in the day, everybody tested on a screen. And most of us are still doing that. And this is why this phone felt different, and even the world's first smartwatch, which is here. When I tested these two things, I wasn't testing on a screen. I was wearing the watch on my wrist, and I was using the phone in my hand, in the environment that the end user would use it in. So, when I went to PlayStation, Kovid, and I was head of European Test Operations with PlayStation, we had a number of new technologies that came in and they changed the way we had to think about testing. So, I'll give you some examples. Uh, the PlayStation Move, where you had the two controllers that can control the game, uh, VR, AR, um, 3D gaming. Those four bits of technology, and I've only reeled off four, there were more. Just in three years at PlayStation, I saw how that changed testing. So, for VR and 3D, you've got to think about the health and safety of the tester. Why? Because the VR has bugs in it, the 3D has bugs in it, so it makes the tester disorientated. They're not doing stuff through their eyes, their true eyes; they're doing it through a screen that has bugs in it, and the screen is right up close to their eyes. So there was motion sickness to think about. And then, of course, there was the physical space that the testers were in. You can't test VR sat at a desk, you have to stand up, because that's how the end users do it. When we tested the PlayStation Move with the two controllers, we had to build physical lounges for testers to then go into to test the Move, because that's how gamers were going to use the game. Uh, I remember Microsoft saying that they actually went and bought a load of props for the Kinect, um, so wigs and blow-up bodies to mimic different shapes of people's bodies, because the camera needed to pick up everybody's style of hair, whether you're bald like me, or whether you have an afro, the camera needed to be able to pick you up. So all of a sudden, the whole testing dynamic changed from just being 2 plus 2 equals 4 in a field, to actually, can the camera recognize a bald, fat person playing the game?

Everything changed. And this is what I mean. Now, it's performance. Uh, for VR, for augmented reality, for mixed reality glasses, there's gonna be usability, there's gonna be performance, there's gonna be security. I'll give you one example if I can, Kovid. I'm walking down the road, and I'm wearing, uh, mixed reality glasses, and there's a person coming towards me in a t-shirt that I like, and all of a sudden, my pupils dilate and a bit of sweat comes out of my wrist. That's data. That's collected by the wearable tech and the glasses. They know that I like that t-shirt. All of a sudden, in the top right-hand corner of those glasses, a picture of me wearing that t-shirt appears, and a voice comes from the arm and goes, "Would you like to purchase?" And I say, "Yes." And a purchase is made with no interaction with the phone. No interaction with anything except me saying 'yes' to a picture that appeared in the top right-hand corner of my glasses. Performance was key there. Security was really key, because there's a transaction of payments that's taken place. And usability, Kovid. If that picture appeared in the middle of the glasses, and appeared on both glasses, I'm walking out into the road in front of a bus, the bus is going to hit me, bang, I'm dead, because of usability. So, the world is changing: how we need to think about quality and the user's experience with mixed reality, VR and AR has changed overnight.

Kovid Batra: I think I would like to go back to the point where you mentioned automation replacing humans, right? Uh, and that was a problem. And of course, that's not the reality, that cannot happen, but can we just deep dive into the testing and QA space itself and highlight what exactly today humans are doing that automation cannot replace? 

Leigh Rathbone: Ooh! Okay. Well, first of all, I think there's some things that need to be said before we answer that, Kovid. So, what's in your head? So, when I think of AI, when I think of companies, and this does answer your question, actually, every company that I've been into, and I've already mentioned that I've worked in a lot, the culture, the people, the tech stack, the customers, when you combine all of those together for each company, they're unique, absolutely unique to every single company. When you blend all of those and the culture and make a little pot of ingredients as to what that company is, it's unique. So, I think point number one is I think AI will always help and assist and is a tool to help everyone, but we have to remember, every company is unique and AI doesn't know that. So, AI is not a human being. AI is not creative. I think AI should be seen as a member of the team. Now if we took that mindset, would we invite everybody who's a member of the team into a meeting, into an agile ceremony, and then ignore one member of that team? We wouldn't, would we? So, AI is a tool and if we see it as a member of the team, not a human being, but a member of the team, why wouldn't we ask AI its opinion with everything that we do as QAs, but as an Agile team? So if we roll right back, even before a feature or an epic gets written, you can use AI for research. It's a member of the team. What do you think? What happened previously? It can give you trends. It can give you trends on bugs with previous projects that have been similar. So, you can use AI as a member of the team to help you before you even get going. What were the previous risks on this project that look similar? Then when you start getting to writing the stories, why wouldn't you ask AI its opinion? It's a member of the team. But guess what? Whatever it gives you, the team can then choose whether they want to use it, or tweak it, or not use it, just like any other member of the team. If I say this is my opinion, and I think we should write the story with this, the team might disagree. And I go, "Okay, let's go with the team." So, why don't we use AI as exactly the same, Kovid, and say, "When we're writing stories, let's ask it. In fact, let's ask it to start with 'cause it might get us into a place where we can refactor that story much quicker." Then when we write code, why aren't we as devs using AI as a member, doing pair programming with it? And if you're already pair programming with another developer, add AI as the third person to pair program with. It'll help you with writing code, spotting errors with code, peer reviews, pull requests. And then, when we come to tests, use it as a member of the team. " I'm new to learning Cypress, can you help me?" Goddamn right, it can. "I've written my first Cypress test. What have I done wrong?" It's just like asking another colleague. Right, except it's got a wider sort of knowledge base and a wider amount of parameters that it's pulling from. 
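
To make the Cypress mention concrete, a "first Cypress test" of the sort you might ask an AI assistant to review could look like the sketch below. The route, the selectors, and the validation message are hypothetical stand-ins we have invented for illustration; they are not from the episode or from any particular app.

```typescript
// cypress/e2e/login.cy.ts — a hypothetical first end-to-end test.
describe('login form', () => {
  it('shows an error when the password is missing', () => {
    cy.visit('/login');                                   // assumes the app serves /login
    cy.get('[data-testid="email"]').type('user@example.com');
    cy.get('[data-testid="submit"]').click();
    // Assumes the app renders this validation message somewhere on the page.
    cy.contains('Password is required').should('be.visible');
  });
});
```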

So for me, will AI replace people? Yes, absolutely. But not just in testing, not just in tech; AI has made things easily accessible to more people outside of tech as well. So, will it replace people's jobs? I'm afraid it will. Yes. But the people who survive this will be the ones who know how to use AI and treat it as a member of the team. Those people will be the last pockets of people. They will be the ones who probably stay. AI will replace jobs. I don't care what people say, it will happen. Will it happen on a large scale? I don't know. And I don't think anyone does. But it will start reducing the number of people in jobs, not just in tech.

Kovid Batra: That would happen across all domains, actually. I think that that's very true. Yeah. 

So basically, I think it's more around the creativity piece, wherein if there are new use cases coming in, the AI is not yet there to write the best, uh, test case for it and do the testing for you, or, in fact, automate that piece for the upcoming, uh, use cases. But if teams are using it wisely and as a team member, as you said, and that's a very, very good analogy, by the way, a great analogy, uh, I think that's the best way to build context for that team member, so that it knows what the whole journey has been while releasing an epic or a story. And then, probably, it would have that creativity or that, uh, expertise to give you the use case and help you in a much better way than it could do today, like without hallucinating, without giving you results that are completely irrelevant.

Leigh Rathbone: Yeah, I totally agree, Kovid. And I think, um, if you think about what companies should be doing, companies should be creating new code, new experiences for their customers, value-add code. If we're just recreating old stuff, the company might not be around much longer. So, if we are creating new stuff, and let's make an assumption that, I don't know, 50 percent of code is actually new stuff that's never been out there before, well, yeah, AI is going to struggle with knowing what to do or what the automation test could be. It can have a good stab, because you can set parameters and you can give it a role, as an example. So, when you're working with ChatGPT, you can say, as a professional tester or as a, you know, long-term developer, what would be my mindset on writing JavaScript code for blah, blah, blah, blah? And it will have a good stab at doing it. But if it's for a space rocket that can go 20 times the speed of light, it might struggle, because no one's done that and put the data back into the LLM yet.

Kovid Batra: Totally. Totally makes sense. Great. I think, Leigh, uh, with this thought, I think we'll bring our discussion to an end for today. I loved talking to you so much and I have to really appreciate the way you explain things. Great storytelling, great explanation. And you're one of the finest ones whom I have brought on the show, probably, so I would love to have another show with you, uh, and talk and deep dive more into such topics. But for today, I think we'll have to say goodbye to you, and before we say that, I would love for you to give our audience parting advice on how they should look at software quality testing in their career. 

Leigh Rathbone: I think that's a really good question. I think the piece of advice is, regardless of what craft you're doing in tech, always try and think quality and always put the customer at the heart of what you're trying to do, because too many times we create software without thinking about the customer. I'll give you one example, Kovid, as a parting gift. Anybody can go and sit in a contact centre and watch how people in contact centres work, and you'll understand the thing that I'm saying, because we never, ever create software for people who work in contact centres. We always think we're creating software that's solving their problems, but go and watch how they work. They work at speed. They'll have about three different systems open at once. They'll have a notepad open that they're copying and pasting stuff into. What a terrible user experience. Why? Because we've never created the software with them at the heart of what we were trying to do. And that's just one example, Kovid. The world is full of software examples where we do not put the customer first. So, we all own quality; put the customer front and centre.

Kovid Batra: Great. I think, uh, that's the best advice, not just in software testing but in any aspect of business that you're doing, and I think in life as well. I believe in this philosophy that if you're in this world, you have to give some value to this world, and you can create value only if you understand your environment, your surroundings, your people. So, always have that empathy, that understanding of what people expect from you and what value you want to deliver. I really second that thought, and it's very critical to building great pieces of software, uh, in the industry also.

Leigh Rathbone: Well, Kovid, you've got a great value there and it ties very closely with people that write code, but leaders as well. So, developers should always leave the code base in a better state than they found it. And leaders should always leave their people in a much better place than when they found them or when they came into the company. So, I think your value is really strong there. 

Kovid Batra: Thank you so much. All right, Leigh, thank you. Thank you so much for your time today. It was great, great talking to you. Talk to you soon. 

Leigh Rathbone: Thank you, Kovid. Thank you. Bye. 

‘Team Building 101: Communication & Innovation’ with Paul Lewis, CTO at Pythian

In the latest episode of the ‘groCTO: Originals’ podcast (formerly Beyond the Code), host Kovid Batra welcomes Paul Lewis, CTO at Pythian and board member at the Schulich School of Business, who brings extensive experience from companies like Hitachi Vantara & Davis + Henderson. The topic for today’s discussion is ‘Team Building 101: Communication & Innovation’.

The episode begins with an introduction to Paul, offering insights into his background. During the discussion, Paul stresses the foundational aspects of building strong tech teams, starting with trusted leadership and clearly defining vision and technology goals. He provides strategies for fostering effective processes and communication within large, hybrid, and remote teams, and explores methods for keeping developers motivated and aligned with the broader product vision. He also shares challenges he encountered while scaling at Pythian and discusses how his teams manage the balance between tech and business goals, emphasizing the need for innovation & planning for future tech.

Lastly, Paul advises aspiring tech leaders to prioritize communication skills alongside technical skills, underscoring the pivotal role of 'code communication' in shaping successful careers.

Timestamps

  • 00:05 - Paul’s introduction
  • 02:47 - Establishing a collaborative team culture
  • 07:01 - Adapting to business objectives
  • 10:00 - Aligning developers to the basic principles of the org
  • 12:57 - Hiring & talent acquisition strategy
  • 17:31 - Processes & communication in growing teams
  • 22:15 - Communicating & imbibing team values
  • 24:33 - Challenges faced at Pythian
  • 26:00 - Aligning tech innovation with business requirements
  • 30:24 - Parting advice for aspiring tech leaders

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo. Today with us, we have a special guest. He has more than 25 years of engineering leadership experience. He has been a CTO for organizations like Hitachi Vantara and today, he's working as a CTO with Pythian. Welcome to the show. Great to have you here, Paul. 

Paul Lewis: Hi there. Great to be here. And sadly, it's slightly more than 30 years versus 25 years. Don't want to shame you. 

Kovid Batra: My bad. All right, Paul. So, before we dive into today's topic (by the way, audience, today's topic for you is building tech teams from scratch), before we hear out Paul's thoughts on that, uh, Paul, can you give us a quick intro about yourself? Or maybe you would like to share some life-defining moments from your life. Can you just give us a quick intro there?

Paul Lewis: Sure. Sure. So I've been doing this for a long time, as we just mentioned. Uh, but I've, I've had the privilege of seeing sort of the spectrum of technology. First 17 years in IT, like 5,000 workloads and 29 data centers. You know, I was involved in the purchase of billions of dollars of hardware and software and services, and then moving to Hitachi, a decade of OT, right? So, I get to see what technology looks like in the real world, the impact to, uh, sort of the human side of the world and nuclear power plants and manufacturing and hospitals.

Uh, and then, the last three years at Pythian, uh, which is more cloud and data services. So, I get to see sort of the insight side of the equation and what the new innovation and technology might look like in the real future. I do spend time with academics. I'm on the board of Schulich School of Business, Masters of Technology Leadership, and I spend time with the students on Masters of Management and AI, Masters of Business Analytics. 

And then, I spend at least once a quarter with a hundred CIOs and CTOs, right? So, we talk about trends, we talk about application, we talk about innovation. So, I get to see a lot of different dimensions of the technology side. 

Kovid Batra: Oh, that's great. Thanks for that quick intro. And of course, I feel that today I'm sitting in front of somebody who has immense experience, who has seen that change from when the internet was coming in to the point where AI is coming in. So, I'm sure there is a lot to learn from you today.

Paul Lewis: That sounds like a very old statement. Yes, I have used mainframe. I have used AS400. 

Kovid Batra: I have no intentions. Great, man. Great. Thank you so much. So, let's get started. I think when we are talking about building teams from scratch, laying the foundation is the first thing that comes to mind, like having that culture, having that vision, right? That's how you define the foundation for any strong tech team that needs to come in. So, how do you establish that? How do you establish that collaborative, positive team culture in the beginning? And how do you ensure the team aligns with the overall vision of the business, the product? So, let's hear it from you.

Paul Lewis: Sure. Well, realistically, I don't think you start with the team and team culture. I think you start with the team leadership. As recently as the last three years, when we built out the Pythian software engineering practice, well, I started by bringing in somebody who's worked for me and with me for 15 years, right, somebody who I trusted, who has an enterprise perspective of maturity, who I knew had a detailed, implementation-level knowledge of building software, who has built, you know, hundreds of millions of dollars' worth of software over a period of time, and who I knew could determine what skill set was necessary. But in combination with that person, I also needed sort of the program management side, because this practice didn't exist; there was no sense of communications or project agility or even project management capability. So, I had to bring in sort of a program management leadership and software delivery leadership, and then start the practice of building the team. And of course, it always starts with, well, what are we actually building? You can't just hire in technologists assuming that they'll be able to build everything. It's saying, what's our technology goal? What are our technology principles? What do we think the technology strategy should be to implement, you know, whatever software we think we want to build? And from that, we can say, well, we need at least, you know, these five different skill sets, and let's bring in those five skill sets in order to coordinate sort of the creation of, at the very least, you know, the estimates, the foundation of the software.

Kovid Batra: Makes sense. So, I think when you say bringing in that right leadership, that's the first step. But then, with that leadership, is your thought to bring in a certain type of personality that would create the culture, or do you need people who align with your thoughts as a CTO, and then you bring those people in? I think I would want to understand that.

Paul Lewis: I'm less sure you need to decide between the two. I know my choice usually is bringing in somebody who already knows how to manage me. Right? As you can imagine, CTOs and CIOs have personalities, and those personalities sometimes could be straining, sometimes can be motivational, sometimes could be inspirational, but I knew I needed to bring somebody in who already knew how to communicate with me effectively, who already knew my sort of expectations of delivery, expectations of quality, expectations of timeliness, expectations of adhering to technology principles and technology maturity. So, they passed that gate, right? So now, I had, right out of the gate, trust between me and the leadership that was going to deliver on the software, which is sort of the first requirement. From then, I expect them to both evolve the maturity of the practice, in other words, the documentation and technology principles, and build out the team itself from scratch.

So, determine what five skills are necessary and then acquire those skills and bring them into the organization. It doesn't necessarily mean hiring. In fact, for the vast majority of the software which I've built over time, we started with partnerships with ecosystems, right? So, ecosystems of QA partnerships and development partnerships. Bring those skill sets in, and as we determine we need sort of permanent Pythian resources, like software architecture resources or DevOps architecture resources or, you know, skilled senior developers, we start to hire them into our organization as the primary decision-makers and primary implementers of technology.

Kovid Batra: Makes sense. And, uh, Paul, does this change with the type of business the org is in, or do you look at it from a single lens, like if the tech team is there, it has to function like this? Uh, does it change with the business or not?

Paul Lewis: I think it changes based on the business objectives. So, some businesses are highly regulated and therefore, quality might be more important than others. The reality is, you know, the triangle of time, cost, and quality. For the most part, quality is the most fungible, right? There are industries where I'm landing a plane, where quality needs to be, you know, near zero bugs, and then tech startups where there's an assumption that there'll be severe, if not damaging, bugs in production, right, 'cause I'm trying to deploy in a highly agile environment. So, yes, different organizations have different sort of, uh, appetites for time, cost, and quality, quality being the biggest measure that you can sort of change the scale on. And the smaller the organization, the more agile it becomes, the more likelihood that you can do things quickly with, I'll call it, less maturity out of the gate, and assume that you can grow maturity over time.

So, Pythian is an example. Out of the gate, we had a relatively zero sense of maturity, right? No documentation, no process, no real sort of project management implementation. It was really smart people doing really good work. Um, and then we said, "Wow, that's interesting. That's kind of a superhero implementation which just has no longevity to it because those superheroes could easily win the lottery and move on." Right? So, we had to think about, well, how do we move away from the superhero implementation to the repeatable scalable implementation, and that requires process, and I know development isn't a big fan of process holistically, but they are a fan of consistency, right? They are a fan of proper decision-making. They are a fan of documented decisions so that the next person who's auditing or reviewing or updating the code knows the purpose and value of that particular code, right? So, some things they enjoy, some things they don't, uh, but we can improve that maturity over time. So, I can say every year we want to go from 0 to 1, 1 to 2, 2 to 3, never to pass 3, right, because we're not, like Pythian, for example, isn't a bank, right, isn't an insurance company, isn't a telco, we're not landing planes, we're not solving, uh, we're not solving complex, uh, healthcare issues, so we don't have to be as mature as any one of those organizations, but we need to have documents at least, right, we need to ensure that we have automation, automated procedures to push to production instead of direct access, DBA access to the database in a production environment. So, that's kind of the evolution that we had. 

Kovid Batra: So, amazing to hear these kinds of thoughts and I'm just trying to capture how you are enabling your developers or how you are ensuring that your developers, your teams are aligned with a similar kind of thought. What's your style of communicating and imbibing that in the team? 

Paul Lewis: We like to do that with technology principles, written technology principles. So, think of it as, you know, the top 10 things the CTO thinks are the most important when we build software. Right? So, for the top 10 things, let's mutually agree that automation is key for everything we do, right? So, automation to move code, automation to implement code, uh, automation to test, automation in terms of reporting; that's key. In the top 10 is also that we need to sort of implement security by design. We need to make sure that, um, it has a secure foundation, because it's not just internal users; we're deploying the software to 2,000 endpoints, and therefore, I need to appreciate endpoints which I don't control, and therefore I need a sort of zero trust implementation. I need to make sure that I'm using current technology standards and architectural patterns, right? I want to make sure that I have implemented such a framework that I can easily hire other people who would be just as interested in seeing this technology and using this technology, and we want to be, in many ways, a beacon to new technologies. I want the software we build to be an inspirational part of why somebody would want to come to work at Pythian, because they can see us using and innovating on current, practical architectural standards in the cloud, as an example.

So, you know, you go through those technology principles and you say, "This is what I think an ideal software engineering outcome, set of outcomes look like. Who wants to subscribe to that?" And then, you see the volunteers putting up their hands saying, "Yeah, I believe in these principles. These principles are what I would put in place, or I would expect if I was running a team, therefore I want to join." Does that make sense? 

Kovid Batra: Yeah, definitely. And I think these are the folks who then become the leaders for the next set of people who follow them on it.

Paul Lewis: Yeah, exactly. 

Kovid Batra: It totally makes sense. 

Paul Lewis: And if you have a set of rules, you know, I use the word 'rules', you know, loosely, I really just mean principles, right? To say, "Here are the set of things we believe and want to be true even if there's different maturities to them. Yes, we want a fully automated system, but year one, we don't. Year three, we might." Right? So, they know sometimes it's a goal, sometimes it's principle, sometimes it's a requirement. Right? We're not going to implement low-quality code, right? We're not going to implement unsecured code, but if you have a team to buy in those principles, then they know it's not just the outcome of the software they're building, but it's the outcome of the practice that they're building. 

Kovid Batra: Totally, totally. And when it comes to bringing that kind of people to the team, I think one is of course enabling the existing team members to abide by that and believe in those principles, but when you're hiring, there has to be a good talent acquisition strategy there, right? You can't just go on hiring, uh, if you are scaling, like you're on a hiring spree and you're just bringing in people. So how do you keep that quality check when people are coming in, joining in from the lowest level, like from the developer point, we should say, to the highest leadership level also, like what's your strategy there? How do you ensure this team-building? 

Paul Lewis: Well, on the recruiting side, we make sure we talk about our outcomes frequently, both internally in the organization and externally to, uh, you know, the world at large. So internally, I do like a CTO 'ask me anything', right? So, that's, you know, fully open; everybody can access it, and it's almost like a townhall. That's where we do a couple of things. We disclose things I'm hearing externally that might be motivating or inspiring to you. It's, "Here's how we're maturing and the outcomes we've produced in software over this quarter," let's say. And then, we'll do a technology debate to say, "You know what, there's a new principle I need to think about, and that new principle might be generative AI. Let's all jump in and have a, you know, reasonably interesting technology debate on the best implications and applications of that technology." So, it's almost like they get to have a group think or group input into those technology principles before we write it down and put it into the document. And then if I present that, which I do frequently externally, then I get, you know, whole networks of people to say, "Wow, that is an interesting set of policies. That's an interesting set of, um, sort of guiding principles. I want to participate in that." And that actually creates recruiting opportunities. At least 50 percent of my LinkedIn, um, sort of contributions and engagements are from people saying, "I thought what you said was interesting. That sounds like a team I want to join. Do you have openings to make that happen?" Right? So, we actually don't, in many ways, have a lack of recruiting opportunity. If anything, we might have too much opportunity. But that's how we create that engagement, that excitement, that motivation, that inspiration, both internally and externally.

Kovid Batra: And usually, when someone is getting hired into your team, do you handpick? Like, is there at least one round which you take with the developers, or are you relying mostly on your leadership next in line to take that up? How does that work for your team?

Paul Lewis: I absolutely rely on my leadership team, mostly because they're pretty seasoned and they've worked with me for a while, right? So, they fully appreciate what kind of things I would expect. There are some exceptions, right? So, if there are some key technologists who I think will drive inspirational, motivational behavior, or where they are implementing sort of the core or complex patterns that I think are important for the software. So, for things like, uh, software architecture, I would absolutely be involved in the software architecture conversations and recruiting and sort of the interviewing and hiring process, because it's not just about sort of their technology acumen, it's also about their communication capabilities, right? They're going to be part of the architectural review board, and therefore, I need to know whether they can motivate, inspire, and persuade other parts of the organization to make these decisions, right? That they can communicate both verbally and in the written form, that when they create an architectural diagram, it's easy to understand, sort of that side. And even sort of DevOps-type architects, where, you know, automation is so key in most of the software we develop, and that'll lead into, you know, not just infrastructure as code, but potentially even the AI deployment of infrastructure as code, which means not only do they need to have, you know, the technical chops now, I need them to be well read on what the technical chops needed for tomorrow are. Right? That also becomes important. So, I'll get involved for those key resources that I know will have a pretty big impact on the future of the organization, versus, you know, the developers, the QAs, the BAs, the product owners, the project managers; you know, I don't necessarily get involved in every part of that interview process.

Kovid Batra: Totally, totally. I think one good point you just touched upon right now is about processes and the communication bit of it. Right? So, I think that's also very critical in a team, at least in large-scale teams, because as you grow, things are going hybrid and remote, and the processes and the communication are becoming even more critical there, right? So, in your teams, how do you ensure that the right processes are there? I mean, you can give some examples, like how do you ensure that teams are not overloaded, or, in fact, that the work is rightly distributed amongst the team, and that wherever there is a cross-functional requirement to be delivered, teams are communicating well and the requirements are coming in? So, something around process and communication which you are doing, I would love to hear that.

Paul Lewis: Good questions. I think communication is on three fronts, but I'll talk about the internal communication first, the communication within the teams. We have a relatively unique set of sort of development processes that are federated. So, think of it as: there is a software engineering team that is dedicated to do software engineering work, but for scale, we get to dip into the billable or the customer practices. So, if I need to deliver an epic or a series of stories that requires more than one, uh, Go developer or data engineer or DevOps practitioner, then I have the ability to dip into those resources, into those practices, assign them temporarily to these epics and stories, uh, or just a ticket that I want them to deliver on, and then they can deliver on them, as long as everybody's already been trained on how to implement the appropriate architectural frameworks and they're subscribing to the PR process that is equivalent both internally and externally to the team. We do that with standard agile processes, right? We do standups on a daily basis. We break down all of our epics into stories and we deliver stories in tickets, and tickets get assigned to people; like, this is a standard process with standard PM, with standard architectural frameworks, standard automation, deployments. And we have specific people assigned to do most of the PRs, right? So not only PR reviews, but doing sort of code creation and code deployment, so that, you know, I rely on the experts to do the expert work, but we can reach out into those teams when we need to, and it can be temporary, right? I don't need to assign somebody for an entire eight-week journey. Um, I could just assign them to a particular story to implement that work, which is great. So, I can expand any one particular stream from five people to 15 people at any one period of time. That's kind of the internal communication.

So, they do standups. We do, you know fine-tuned documentation, uh, we have a pretty prescriptive understanding of what's in the backlog and how and what we have to deliver in the backlog. We actually detail a one-year plan with multiple releases. So, we actually have a pretty detailed, we'll call it 'product roadmap' or 'project roadmap' to deliver in the year, and therefore, it's pretty predictable. Every eight weeks we're delivering something new to production, right? But that's only one of those communication patterns. The other communication patterns all to the other internal technology teams, because we're talking about, you know, six, seven hundred internal technologists, and we want them to be aware of not just things that we've successfully implemented in the past and how it's working in production, but what the future looks like and how they might want to participate in the future functions and features that we deliver on. 

But even those two communication patterns arguably isn't the most important part. The most important part might actually be the communication up. Right? So now, I have to have communication on a quarterly basis with my peers, with the CEO and the CFO to say not only how well we're spending money, how well we're achieving our technological goals and our technological maturity, but even more importantly, are we getting the gain in the organization? So, are we matching the revenue growth of the organization? Are we creating the operational efficiency that we expect to create with the software, right? Cause I have to measure what I produce based on the value created, not just because I happen to like building software, and that's arguably the most difficult part, right, to say, "I built software for a purpose, an organizational purpose. Am I achieving the organizational goals?" Much more difficult calculus as compared to, "I said I was going to do five features. I delivered five features. Let's celebrate." 

Kovid Batra: But I think that's the tricky part, right? And as you said, it's the hardest part. How do you make sure, like, as in, see, the leaders probably are communicating with the business teams and they have that visibility into how it's going to impact the product or how it's going to impact the users, but when it comes to the developers particularly, uh, who are just coding on a day-to-day basis, how do you ensure that the teams are motivated that way and they are communicating on those lines of delivering the outcomes, which the leaders also see? So, that's.. 

Paul Lewis: Well, they get to participate in the backlog prioritization, right? So, in fact, I like to have most of the development team consider themselves in many ways, owners of the software. They might not have the Product Owner title, or they might not be the Product Manager of the product, but I want them to feel like it's theirs. And therefore, I need them to participate in architectural decisions. I want them to buy-in to what the priority of the next set of features are going to be. I want to be able to celebrate with them when they do something successful, right? I want them to be on the forefront of presenting the value back to the rest of the team, which is why that second communication to the rest of the, you know, six or seven hundred technologists that they're the ones presenting what they created versus I'm the one presenting their credit. I want them to be proud of the code that they've built, proud of the feature that they've implemented and talk about it as if it's something that they, you know, had to spend a good portion of their waking hours on, right? That there was a technology innovation or R&D exercises that they had to overcome. That's kind of the best part. So, they're motivated to participate in the, um, in the prioritization, they're motivated to implement good code, and then they're motivated to present that as if it was an offering they were presenting to a board of decision-makers, right? It's almost as if they're going and getting new money to do new work, right? So, it's a dragon's den kind of world, which I think they enjoy. 

Kovid Batra: No, I think that's a great thought, and I think this makes them feel accountable. This makes them feel motivated about whatever they are doing, and at the end of the day, if the developers start thinking on those lines, I think you have cracked it, probably. That's the criterion for successful engineering, for sure.

Apart from that, any other challenges while you were scaling, any particular example from your journey at Pythian that you felt is worth sharing with our audience here?

Paul Lewis: The challenge is always the 'what next?'. Right? So let's say it takes 24 months to build a substantial piece of software. Part of my job, my leadership's job, is to prepare for the next two years, right? So, we're in deep dive, we're in year one, we're halfway through delivering some interesting piece of software, but I need to prep year three and year four, right? I need to have the negotiations with my peers and my leaders to say, "Once we complete this journey, what's the next big thing on the list? How are we going to articulate the value to the organization, either in growth or efficiency? How are we going to determine how we spend? Is this a $1m piece of software, or is this a $10m piece of software?" And then, you know, preparing the team for the shift from development to steady state, right? From building something to running something. And that's a pretty big mindset shift, as you know, right? It's no longer focused on automation of releases between dev, QA & production. It's saying, "It's in production now. It's locked down. I need you to refocus development on something else and let some other team run this system." So, both sides of that equation: how do I move from build to run in that team? And then, how do I prepare for the next thing that they build?

Kovid Batra: And how do you think your tech teams contribute here? Because what needs to be built next is something that always flows in, in terms of features or stories, from the product teams, right? Or other business teams, right? Here, how do you operate? Like, in your org, let's say there is a new technology which can completely revamp the way you have been delivering value; are tech team members open to speak up and, like, let the business people know that this is what we could do, or is it more like only the technical goals are being set by the tech team and the rest of the product goals are given by the product team? How does it work for your team here?

Paul Lewis: It's pretty mutual, in fairness, right? So, when we determine sort of the feature backlog of a piece of software, there's contribution from product management, think of that as the business, right, and from the technology architecture team, right? So, we mutually determine our next bets in terms of features that will both improve the application functionally and improve the application technically. So, that's good.

When it comes to the bigger piece of software, so we want to put this software in steady state, do minor feature adjustments instead of major feature adjustments, that's when it requires much more of a, sort of a business headspace, right? Cause it's less about technology innovation at that point. However, sometimes it is, right? Sometimes I'll get, "Hey, what are we doing for generative AI? What new software can we build to be an exemplar of generative AI?" And I can bring that to the table. So, I have my team bringing to the decision-making table of, "Here's some technology innovation that's happening in the world that I think we should apply." And then, from the business saying, "Here's a set of business problems or revenue opportunities that we can match." So now, it's a matching process. How can I match generative AI, interesting technology with, you know, acquiring a new customer segment we currently don't acquire now, right? And so, how do I sort of bring both of those worlds together and say, "Given this match program, I'm going to circle what I think is possible within the budget that we have."? 

Kovid Batra: Right. Right. And my question is exactly this, like, what exactly makes sure that the innovation on technology and the requirements from the customer end are there on the same table, same decision-making table? So, if I'm a developer in your team, even if I'm well aware of the customer needs and requirements and I've seen certain new technologies coming up, trending up, how do I make sure that my thought is put on the right table in front of the right team and members? 

Paul Lewis: Well, fortunately, like most organizations, and definitely at Pythian, we've created an architectural review board, right? So, that includes development, architecture, product management, but it's not the executive team, right? It's the real architectural practitioners, and they get to have the debate and discussion on what they think is the most technologically challenging thing that we want to solve, or the innovation that we think matters, or the evolution of technology that we think we want to implement within our technologies, moving from, you know, an IaaS to a PaaS to a SaaS, as an example. Those are all decisions that, in many ways, we let the technology practitioners make, and then they bring that set of decisions to the business to say, "Well, let's match this set of architectural principles with a bunch of business problems." Right? So, it's not top-down. It's not me saying, "Thou shalt build software. Thou shalt use Gen AI. Make it so." It rarely is that. It's the technology practitioners saying, "We think this innovation is occurring. It's a trend. It's important. We think we should apply it, knowing its implications. Let's match that to one of a hundred business problems that we know the business has." Right? The reality is the business has an endless amount of business problems. Technology has an endless amount of innovation, right?

Kovid Batra: Yeah, yeah. 

Paul Lewis: There's no shortlist in either of those equations. 

Kovid Batra: Correct. Absolutely. Perfect, perfect. I think this was great. I think I can go on talking with you. Uh, this is so interesting, but we'll take a hard stop here for today's episode and thank you so much for taking out time and sharing these thoughts with us, Paul. I would love to have you on another show with us, talking about many more problems of engineering teams. 

Paul Lewis: Sure. 

Kovid Batra: But thanks for today and it was great meeting you. Before you leave, um, is there a parting advice for our audience who are mostly like aspiring engineering managers, engineering leaders of the modern tech world? 

Paul Lewis: Um, the gap with most technologists, because they tend to, you know, put their glasses on, close the lights in the room, and focus on the code, and that's amazing. But the best part of the thing you develop is the communication part. So, don't be just a 'code creator', be a 'code communicator'. That's the most important part of your career as a developer: to present the wares you just built outside of your own headspace. That's what makes the difference between a junior, an intermediate, and a senior developer or architect. So, think about that.

Kovid Batra: Great, great piece of advice, Paul. Thank you so much. With that, we'll say, have a great evening. Have a great day and see you soon! 

Paul Lewis: Thank you.

‘Enhancing DevEx, Code Review and Leading Gen Z’ with Jacob Singh, CTO in Residence, Alpha Wave Global

In the latest episode of 'groCTO Originals' podcast (formerly: 'Beyond the Code'), host Kovid Batra engages in a thought-provoking discussion with Jacob Singh, Chief Technology Officer in Residence at Alpha Wave Global. He brings extensive experience from his roles at Blinkit, Acquia, and Sequoia Capital. The heart of their conversation revolves around ‘Enhancing DevEx, Code Review and Leading Gen Z’.

https://youtu.be/TFTrSjXI3Tg?si=H_KxnZGlFOsBtw7Y

The discussion begins with Jacob's reflection on India and his career break. Moving on to the main section, he explores the evolving definition and widespread adoption of developer experience. He also draws comparisons between tech culture in Indian versus Western companies and addresses strategies for cultivating effective DevEx for Gen Z & younger generations. Furthermore, he shares practical advice for tech leaders to navigate the ‘culture-market fit’ challenge and team structuring ideas from his hands-on experience at Grofers (now ‘Blinkit’). Lastly, Jacob offers parting advice to developers and leaders to embrace AI tools like Copilot and Cursor for maximizing efficiency and productivity, advocating for investing in quality tooling without hesitation.

Timestamps

  • 00:06 - Jacob’s introduction
  • 00:39 - Getting candid
  • 04:22 - Defining ‘Developer Experience’
  • 05:11 - Why is DevEx trending?
  • 07:02 - Comparing tech culture in India & the West
  • 09:39 - How to create good DevEx for Gen Z & beyond?
  • 13:37 - Advice for leaders in misaligned organizations
  • 17:03 - How Grofers improved their DevEx
  • 22:04 - Measuring DevEx in multiple teams
  • 25:49 - Enhancing code review experience
  • 31:51 - Parting advice for developers & leaders

Links and Mentions

Episode Transcript

Kovid Batra: Hi, everyone! This is Kovid, back with another episode of Beyond the Code by Typo. Today with us, we have a special guest. He's currently a CTO in Residence with Alpha Wave Group, which is a VC group. He comes with 20-plus years of engineering and leadership experience. He has worked with multiple startups and orgs like Blinkit as a CTO. He's the guest whom I have met and he's the only guest whom I have met in person, and I really liked talking to him at the SaaSBoomi event. Welcome to the show, Jacob. Great to have you here.Jacob Singh: Thanks. Good to be here, to chat with you.Kovid Batra: Cool. I think, let's start with something very unique that I've seen experienced with you, that is your name. It's Jacob Singh, right? So, how did that fusion happen?Jacob Singh: Just seemed like fun, you know? Just can't, since I was living in India anyway, I figured Smith is harder to pronounce, so.. I'm just kidding. My dad's from here. My dad's Punjabi. So, you know, when a brown boy and a white girl, they love each other a lot, then, you know, you end up with a Jacob Singh. That's about it. There's not much else to it. I grew up in the States, but I've lived in India on and off for the past 20 years.Kovid Batra: Great, great. Perfect. That's rare to see, at least in India. Most of the generation, maybe half-Indian, half-American are in the U.S. But what did you love about India? What made you come here?Jacob Singh: Good question. I was trying to escape my tech stuff. So, I sort of started very early. I taught myself to code as a teenager and started my first business when I was 18 and I'd done pretty well. And then, when I was like 21-22, I just sort of decided I wanted to do something different, do something in the social sector, helping people. So, I took a job with an NGO in West Delhi and sort of shifted for that. That was the original purpose. Why did I stay? I guess there's something dynamic and interesting about the place. India's changed a lot in 20 years, as everybody knows. And, I think that's been continuously exciting to be a part of. It doesn't feel stagnant. It doesn't feel like, I mean, a lot of changes are not in the direction I would love, to be honest, but, you know, but it's an interesting place. There's always something happening. And I found that, and then eventually you build your community, your friends and your family and all that kind of stuff. So, this is home. Yeah, that's about it.Kovid Batra: Perfect. Perfect. Talking about the family, I was just talking to you on LinkedIn. I found that there was like a one-year break that you took in your career and you were just parenting at that time. And honestly, that's very different and very unique to a culture, to an Indian culture, actually. So, I just wanted to know how was your experience there. I'm sure it would have made you learn a lot of things, as it does for a lot of other parents. But I just wanted to hear about your experience with your kid.Jacob Singh: Hopefully, it's not so uncommon. I think it's starting to change especially the role of men as fathers in India. I think it's traditionally, like my dad's father, he just saw him for tea, you know, and he was reading the newspaper and he'd meet him once a year on his birthday and take him to a quality restaurant for a coke, you know, that was their relationship. I think things are different with Indian fathers these days. I think for me, you know, we were just, perfectly honest, was going through a divorce. Difficult. 
I needed to be there for my daughter and I was, you know, sort of taking half the responsibility in terms of time with her. This was eight years ago. And I think my parents had also divorced. So, I was kind of, my dad was a very active part of my upbringing and did all the things, did all the dishes, cooked all the meals, you know, was around. He was also working as a programmer and did all that, but he was at home as well. And he found ways to make it work, even if it had meant sacrificing his career to some extent. He was working from home when I was a kid in the 80s. So, he had a giant IBM 880, or whatever computer, a little tiny green screen, a 300-bot modem, you know, to connect and send his work up. So, that's how I grew up. Turned out to benefit me a lot, uh, when it came to learning computers, but, um, you know, he would convince him to do that cause he was good at his job, and he's like, I have to be there for my kids. And he made it work, you know? I think we all find those times where we need to lean into family or lean into work in different proportions, you know?Kovid Batra: Hmm. Great. I think amazing job there honestly, Jacob All right, that was great knowing you and thanks for that quick intro. Moving on to the main section of our today's podcast, enhancing the developer experience. So, that's our topic for today. So let's start very basic, very simple. What is developer experience according to you?Jacob Singh: What is developer experience? It's an interesting term. I guess it's, you know, the day-to-day of how a programmer gets their job done. I guess the term would be everything encapsulated from, how their boss talks to them, how they work with their teammates, the kind of tools they use for project management down to the quality of documentation, APIs, the kind of tools they use on their computer, the tools they use in the cloud that they work with, et cetera. And all of that encapsulated is how effective can they be at their job, you know, and the environment around them that allows them to be effective. I guess what I would define it as.Kovid Batra: And why do you think it's trending so much these days? I think what you mentioned and what I have also read everywhere about developer experience is the same fundamental that has been existing all the years, right? But why is it trending these days? Why do you think this is something up in the market?Jacob Singh: Good question. I guess, I mean, I've been around for a while, so I think in the earlier days of the industry, when I first started engineers were a little expensive, but they were also looked at as like a commodity that you could just use. Like, you put them on a spreadsheet, you pay the engineers, you get the work done. They weren't considered really central. They were considered sort of like factory workers in an expensive factory, to some extent. I think it was more so in the 80s and 90s, right? But like, it's still trending more and more in the direction of engineers kind of being very expensive and being decision-makers, moving into C-level positions, having more authority, seeing that, like, if you just look at, you know, the S&P 500, you look at the, you look at the stock market and you see that the top companies are all tech companies and they command most of the market cap. I think those days are over. So now, it's very much that if you're able to execute with your engineering roadmap, you're able to win the market. 
It's considered the basis of a lot of companies, sort of strategies, whether they're a tech company or not, and therefore the effectiveness of the developers and the team plays into which developers will join you. And when they join you, how likely are they to be engaged and to be retained? And then, how effective, you know, is each developer because they're a rare commodity because it's hard to find good developers. There's so much demand, et cetera, even in recessionary times, most of the layoffs are not engineering teams. And so, the effectiveness of each engineer and their ability to motivate and retain them becomes paramount to a company strategy.Kovid Batra: Right. Makes sense. So, according to you, I'm sure you have had the experience of seeing this shift in the West as well as in companies in India, right? When you see the culture, particularly talking about the tech culture in a company, like maybe, for example, you work with a company like Blinkit, which is huge today in India and you must have worked with other companies in the West. How would you compare, like, how are teams being led in these two different cultures? Jacob Singh: Well, I would say those kind of, you know, anything I say is going to be a gross generalization, and it's going to be incorrect more often than it's correct. I think there's more difference between two Indian companies than there is between any given American or any Indian company, you know. There's a lot of variation. But if I'm being put on the spot to make such generalizations, I would say that one big difference is the age and maturity of engineers. So, like, when I was 29, I got hired as an IC, a Principal Engineer at this company called Acquia. They kind of acquired my startup and I joined there, and, you know, we had PhDs on the team who were ICs, right? Our VP Engineering, you know, had 25 years of experience in the field and was, you know, sort of. You know, one of my colleagues was like heading R&D for the RFID team at Sun. You know, like the very senior guys were still writing code.Kovid Batra: Yeah.Jacob Singh: It's like, very much just like in the weeds writing code. They're not trying to be managers and an architect. They're just like a programmer, right? I got my first team, like to manage, like I got a small team like at 25-26, but really I got my first team of like really qualified, expensive engineers when I was like 32. Whereas here, I'm a VP Engineering at Grofers at like 29. It's like managing a 100 people. It's very common to be much early in your career. Engineers with 3-4 years of experience are sort of talking about, "I should be an SDE-IV". So, the whole like, that scale is different. You have a much younger audience. In the States, at least when I was coming up, there's a lot more earning your stripes over a long time before you go into management. Whereas here, we have a lot more very junior managers. I think that's a big difference.Kovid Batra: Okay. And that's interesting, actually.Jacob Singh: That creates a lot of other dynamics, right? I mean, you just have like, generally you know, you have more, I would, and I hate to say this, probably going to take shit for this, but having been an engineering leader in both places, I feel like you have more like discipline and like professionalism issues here, generally, which is not to do with Indians. It's just to do with the age of people. Like, they're 24 years old, they're not gonna be very professional, right? Like a lot of your team.Kovid Batra: No, totally. 
I think, we are not generalizing this, but as you said, it's probably about the age. In one of my podcasts, I was just talking to this fellow from Spain. He's leading a team of almost 30 folks now.Jacob Singh: Yeah.Kovid Batra: And 50% of them were early hires, like early 20 hires, right?Jacob Singh: Yeah.Kovid Batra: And he's that guy. And then I was talking to somebody in India who was leading a 50-member team there. Again, 50% of the folks were like junior engineers in their 25-26. And both of them had the same issue of handling Gen Zs. So, I think from where you are coming, it's totally relatable and I think we touched on a very good topic here. How to create a good developer experience for these early-age, 25-26-year-olds in the system? Because they're not stable, they are not, So again, I am also not generalizing. A lot of good folks are there, but they're not like in the right mindset of sticking to a place, learning gradually and then making that impact. They just like really want to hop on fast on everything.Jacob Singh: Yeah.Kovid Batra: So, how do you handle that? Because that definitely is a pain for a lot of us, not just in India, but across the globe.Jacob Singh: No, no, I've heard this a lot, you know, and I'm not really sure. I mean, I'm far from Gen Z. I was actually having this exact same conversation with the CTO of an Indian unicorn, a pretty large one, who was talking about the same thing. He's like, "How do I motivate these?" This seems like the younger guys, they don't really want to work and they're constantly, you know, making noise and they think it's their right to work from home. They think it's their right to work 20-30 hours a week. They don't want, they don't want to advance and follow all this sort of stuff. I guess my advice to him was maybe a bit generic and maybe incorrect. You know, I think there are differences in generations, but I think some truths are fairly universal and I've uncovered a couple of things which have worked for me. And every manager has their own style and because of that, and every company has its own culture and its own goals. And so, there's a thing that's 'culture-market fit'. So, certain leaders will do well in certain kinds of companies, right, who have certain kinds of cultures made for the market they're in. This is not generic advice for everybody. But for me, I like to work in startups and I like to work in you know, startups, which are working on, I would say, kind of top-line problems which means not efficiency-related problems so much as innovation-related problems. How do we discover the next big thing? What feature is going to work with customers? Et cetera. And in such places, you need people who are motivated by intrinsic, they need intrinsic creative motivation. Carrot and Stick stuff doesn't work. Carrot and Stick will work for a customer service team, for a sales team, it'll work for an IT team at a Fortune 500 who's shipping the same things over and over again, but for creative teams, they really need to care about it intrinsically. They need to be interested in the puzzle. They want to solve the puzzle. You can't sort of motivate them in other ways. And I think this applies to the younger generation as much as the older ones. 
You know, the best thing to do is to, basically, it's a very simple formula, it sounds cliché but figure out where the hell you're going, why you should go there and everyone in the team should know where you're going and they should know why they're important to that strategy, what they're doing that's important, you know, and they should believe it themselves that it can work. And then, they should believe that if it works, you're gonna take care of them. And if you solve those things, they will work hard and they will try to solve problems. A lot of companies think they can shortchange that by having a crappy strategy, by having, you know, a lot of program management, which removes people from the real problem they're solving by treating people as numbers on a spreadsheet, naturally, you're going to get, you know, poor results when you do that.Kovid Batra: Totally. I think very well answered, Jacob. I think finding that culture-market fit and finding the place where you will also fit in naturally, you would be easily able to align with the folks who are working there and maybe lead them better. I think that that analogy is really, really good here. Apart from that, do you think like not everyone gets that opportunity to find the right place for themselves, right, when there is a dearth of opportunities in the market? What would be the advice for the leaders that they should apply to them when they are not in the culture-market fit scenario?Jacob Singh: Leaders? You mean, if a leader is in an organization where they don't feel like the values of the tech team aligned to their value, whether it be engineer or CTO or something?Kovid Batra: Correct, yes.Jacob Singh: Good question. The best thing to do is probably to quit. But if you can't afford, I mean, I say that a bit flippantly. I'm not saying "quit early". I'm not saying "don't try". I mean, if you really have a true values alignment problem you know, then that's hard to get over. If it's tactical, if it's relationship-based, you can work on those things, right? If you feel like you have to be someone you don't like to fit in an organization, then that's unlikely to change if it comes from the top, right? There's a very cliché saying, but you know, "Be careful who you work with. You're more likely to become them than they are to become you." And so, I would say that. But to get more practical, let's say if you can't, or you're feeling challenged, et cetera. Your question was basically, okay, so let's say you're a VP Engineering or Director of Engineering and you're unhappy with the leadership in some way, right?Kovid Batra: Yeah. So, no, I think it makes sense. The advice is generic, but it definitely gives you the direction of where you should be thinking when you are stuck in such a situation. You can't really fight against it.Jacob Singh: Yeah. I will say a couple of things. This is also the same conversation I had mentioned earlier. This also came up with the typical thing of leadership not trusting tech. You know, they don't think we're doing anything. They think we're moving too slow. They think we're, you know, sandbagging everything, etc. And to solve that, I think, which is not a values problem. That's the case in almost every organization. I mean, there's never been a CEO who said, "Oh, man! The tech team is so fast. They just keep.. I can't come up with dumb ideas fast enough. They keep implementing everything." So, you know, it's never happened in the history of companies. 
So, there's always that conflict which is natural and healthy. But I think to get over that, that's basically a transparency problem, usually. It's like, are you clear on your sprint reviews? Do you do them in public? Do you share a lot of information around the progress made? Do you share it in a way that gets consumed or not? Are you A/B testing stuff? Are you able to look at numbers? Able to talk numbers with your business teams? Do you know the impact of features you're releasing? Can you measure it? Right? If you can measure it, release that information. If you can give timely updates in a way which is entertaining and appropriate for the audience that they actually listen those problems tend to go away. Then it's just, the main problem there is not that people don't trust you. It's just that you're a black box to them. They don't understand your language. And so, you have to find, you know, techniques to go over that. Yeah.Kovid Batra: Yeah. Makes sense. Great, great. All right, coming back to enhancing the developer experience there. So, there are multiple areas where you can see developer experience taking a hit or working well, right? So, which are those areas which you feel are getting impacted with this enhancement of developer experience and how you have worked upon those in any one of your experiences in the past with any of the companies?Jacob Singh: You said "enhancement of developer experience". What do you mean by that?Kovid Batra: So, yeah. I'll repeat my question. Maybe I confused you with too many words there. So, in your previous experiences, you must have worked with many teams and there would have been various areas that could have impacted the developer experience. So, just to give you a very simple example, as you talked about the tooling, the people whom they're working with. So, there could be multiple issues that impact the developer experience, right? So, in your previous experiences where you found out a specific developer experience related problem and how you solved it, how it was impacting the overall delivery of the development team. Just want to like deep dive into that experience of yours.Jacob Singh: Yeah. So I think a big one was I can talk about Grofers. So, one of the things we had when I first came to Grofers, we had about 50-60 people in tech, product engineering, data, design, etc. We had them into different pods. That was good. Someone had created like different teams for different parts of the application. So, it wasn't just a free-flowing pool of labor. There was the, you know, the shopping cart team and the browsing team and the supply chain, like the warehouse management team, the last mile team, it was like, you know, four or five teams. But there was a shared mobile team. So at the front end, there was, there was one Android team, there was one iOS team, there was one web team, which again, is very common, not necessarily a bad idea. But what ended up happening was that the business teams would, they wouldn't trust the tech deadlines because a lot of times what happened is there'd be a bunch of backend engineers in the shopping cart team, they'd finish something, and then they'd be stuck on the front end team because the front end team was working on something for the, or the Android team was working on something for the browsing team, right? The iOS team was free, so they would build that shopping cart feature. 
But they couldn't really release it yet because the releases had to go out together with Android and iOS, because, you know, the backend release had to go with that. So, we're waiting on this one. Then we're waiting on the web one. There's this cascading waiting that's happening. And now, the shopping cart team is like, "We're done with our part. We're waiting on Android. So we're going to start a new feature." They start a new feature. And then again, the problem starts again where that feature is then waiting on somebody else, waiting on the QA team or waiting on somebody else. So, all of these waiting aspects that happen ruin the developer experience because the developer can't finish something. They get something half done, but it's not out in production, so they can't forget about it. Production is a moving target. The more teams you have, the more frequently it's changing. So, you have to go back and revisit that code. It's stale now. You've forgotten it, right? And you haven't actually released anything to customers. So, the business teams are like, "What the hell? You said you're done. You said you'd be done two weeks ago." And you're like, "Oh, I'm waiting for this team." "Stop giving me excuses." Right?Kovid Batra: Right.Jacob Singh: That team's waiting on the other team. So, I think one of the big things we did was we said, we took a hard call and said, at the time, Grofers was not quick commerce. At that time, Grofers was like DMart online, cheap discounting, 2-3 day delivery, and we scaled up massively on that proposition. And, uh, we said, hey, people who care about the price of bread being 5% less or more, do they own iPhones? No, they do not own iPhones. That's like 5% of our population. So we just ditched the iPhone team, cross-trained people on Android. We took all the Android engineers and we put them into the individual teams. We also took QA, automated most of our tests, and put QA resources into each of the teams, SDATs, and kind of removed those horizontal shared services teams and put them in fully cross-functional teams. And that improved developer experience a lot. And it's kind of obvious, like people talk about cross-functional teams and being able to get everything done within a team, being more efficient, less waiting for the teams, but it has another huge benefit. And the other huge benefit is motivation-wise. You cannot expect, like I said earlier, you want your engineers to care about the business outcomes. You want them to understand the strategy. They don't understand why they're building it. But if an engineer has to build something, send it to another team, get that team to send it to some other team, get that team to send it to some other team, to a release team to get released eventually, right? And then, the results come back three months later. You can't expect that engineer to be excited about their metrics, their business metrics and the outcomes.Kovid Batra: Right.Jacob Singh: If you put everyone in one team, they operate like a small startup and they just continually crank that wheel and put out new things, continually get feedback and learn, and they improve their part of the business and it's much more motivating and they're much more creative as a result. And I think that changes the whole experience for a developer. Just reduces those loops, those learning loops. You get a sense of progress and a sense of productivity. And that's really the most motivating thing.Kovid Batra: Totally makes sense. And it's a very good example. 
I think this is how you should reshape teams from time to time based on the business requirements and the business scale is what's going to impact the developer experience here. But what I'm thinking here is that this would have become a very evident problem while executing, right? Your project is not getting shipped and the business teams are probably out there standing, waiting for the release to happen. And you started feeling that pain and why it is happening and you went on solving it. But there could be multiple other issues when you scale up and 50-60 is a very good number actually. But when you go beyond that, there are small teams releasing something or the other on an everyday basis. How exactly would you go about measuring the developer experience on different areas? So, of course, this was one. Like, your sprints were delayed or your deliverables were delayed. This was evident. But how do you go about finding, in general, what's going on and how developers are feeling?Jacob Singh: Well, we hit those scaling things and like you said, yes, people are delayed. It sounds obvious, but it's mostly not. Most leaders actually take exactly the opposite approach. They say, "Okay. No more excuses. We're going to plan everything out at the beginning of the quarter. We're going to.. You plan your project. We'll do all the downstream mapping with every other Android, iOS, QA team required. We'll allocate all their bandwidth ahead of time. So, we'll never have this problem again. And we'll put program managers in place to make sure the whole thing happens." They go the opposite direction, which I think is kind of, well, it never works, to be honest. Kovid Batra: Yeah, of course.Jacob Singh: In terms of measuring developer experience as you scale. So, we got up to like 220 people in tech I think at some point in Grofers and we scaled up very quickly. That was within a year and a half or something, that happened. And, you know, that became much more challenging. I honestly don't love the word 'developer experience' cause it doesn't mean anything specifically cause there's sort of your experience as an employee, right, HR kind of related stuff, your manager or whatever, there's your experience, you know, as an engineer, like the tools you're using and stuff like that, right? And then your experience, like, as a team member, like your colleagues, your manager, your kind of stuff like that, right? So it's slightly different from an employee in terms of, it's not about company confidence and stuff or strategy, but more about your relationships. So, there's different areas of it. For measuring, like, general satisfaction, we use things like Officevibe, we use things like continuous performance improvement tools, like 15Five. we brought in OKRs, a lot of things which kind of are there to connect people to strategy and regularly check in and make sure that we're not missing things. All of those were effective in pockets, depending on usage. But by far the most effective thing, and I know this might not be the popular answer when it comes to what type of sells, although I do like the product a lot, which is why I'm doing this. I think it's a cool product. A lot of it is really just like 1-on-1s, just making sure that every manager does a 1-on-1 every two weeks. 
And making it like, put it in some kind of spreadsheet, some kind of lightweight thing, but making sure that new managers learn they have to do it, how to do them well, how to, you know, connect with individuals, understand their motivations and like follow through on stuff and make small improvements in their lives. Like, that's the cheat code. It's just doing the hard work of being a manager, following through with people, listening to them. That being said, data helps. So, like, what you guys have what you guys built, I've built small versions of that. I wrote a script which would look at all the repositories, see how long things are sitting in PR, look at Jira, see how long things are sitting in wait. You know, build continuous flow sort of diagrams, you know, sort of just showing people where your value, team is getting stuck. I've, like hand-coded some of that stuff and it's been helpful to start conversations. It's been helpful to convince people they need to change their ideas about what's happening. I think those are some of the ideas.Kovid Batra: Thanks for promoting Typo there. But yeah, I think that also makes sense. It's not necessary that you have to have a tooling in place, but in general, keeping a tab on these metrics or the understanding of how things are flowing in the team is critical and probably that's from where you understand where the developer experience and the experience of the team members would be impacted. One thing you mentioned was that you scaled very rapidly at Grofers and it was from 50 to 250, right? One interesting piece, I think we were having this discussion at the time of SaaSBoomi also the code review experience, right? Because at that scale, it becomes really difficult to, like even for a Director of Engineering to go into the code and see how it is flowing, where things are getting stuck. So, this code review experience in itself, I'm sure this impacts a lot of the developer experience piece, so to say. So, how did you manage that and how it flowed there for you?Jacob Singh: Well, one is I didn't manage it directly. So, like Grofers is big enough that I had a Director of Engineering sort of, or VP Engineering for different-different divisions. And that level of being hands-on in terms of code reviews, I wouldn't really participate in other than like, you know, sometimes as an observer, sometimes to show proof, if we're doing something new, like we're doing automation, I might whip up some sample code, show people, do a review here and there for a specific purpose about something, but never like generally manage those, like, look at that. Grofers was really good this way. I think we had a really good academic culture where people did like public code reviews. There wasn't this like protection shame kind of thing. It was very open. We had a big open-source culture. We used to host lots of open-source meetups. There was a lot of positive sort of view of inquiry and learning. It wasn't looked at as a threatening thing, which in a lot of organizations is looked at as a threatening kind of thing. And the gatekeeping thing, which I think we try to discourage. I think we had a lot of really positive aspects of that. Vedic Kapoor was heading DevOps and Security infrastructure stuff that I work with a lot. He's consulting now, doing this kind of work. He did a lot of great, sort of workshops and a lot of like a continuous improvement program with his teams around this kind of stuff where they do public reviews every so often every week or whatever. 
The DevOps teams made a big point of being that service team for the engineer so they would kind of build features for engineers. So, we had a developer experience team, essentially, because we were that size. Well, the review process, generally, I mean, I gave this rant at SaaSBoomi, and I think I've given it often. I think PRs are one of the worst things that's happened to software engineering, in the last 20 years.Kovid Batra: Yeah, you mentioned that.Jacob Singh: Yeah, and I think it's a real problem. And it's this funny thing where everyone assumes that progress moves forward and it never goes backwards. And then they, the younger generation doesn't necessarily believe that it could have been better before. But I'll tell you the reason why I say that is that, you know, Git was created by Linus, by the creator of Linux because they needed, well, they needed something immediately, but also they needed something which would allow thousands and thousands of people working at different companies with different motivations to all collaborate on the same project. And so, it's the branching and the sub branching and the sub-sub branching which Git allowed people to simultaneously work on many different branches and then sort of merge them late and review them in any order they wanted to and discuss them at length to get them in and had to be very secure and very stable at the end of the day. And that made a lot of sense, right? It's very asynchronous and things take a very long time to finish. That's not how your software team works. You guys are all sitting on the same table. What are you doing? Like, you don't need to do that. You can just be like, "Hey, look at this, right? There's a different way to do it." Even if you're on a remote team, you can be like, "Let's do a screen share." Or, "Can I meet you tomorrow at two or whatever, and we'll go through this." Or like, "I had some ideas, paste it in Slack, get back to me when you can." You know, "Here's my patch." Whatever. And I think what ends up happening is that this whole, the GitHub, and for open-source projects, it's huge. Git is amazing. Pull requests are amazing. They've enabled a lot. But if you are a team where you all kind of work on the same codebase, you all kind of work closely together, there's no reason to send up this long asynchronous process where it can take anywhere between a couple of hours to, I've seen a couple weeks to get a review done. And it creates a log jam and that slows down teams and that reduces again, that loop. I'm big on loops. Like, I think you should be able to run your code in less than a second. You should be able to compile as quickly as possible. You should be able to test things as quickly as possible that you should be able to get things to the market and review them as quickly as possible. You should get feedback from your colleague as soon as possible. And like, I think a lot of that has gotten worse. So, engineers like learn slower and they're waiting more, they're waiting for PRs to get reviewed, there's politics around it. And so, I think that that process, probably should change. More frequent reviews, pairing you know, sort of less formal reviews, et cetera. Yeah, and TDD if you can do it. It's kind of the way to get much faster loops, productivity loops going, get things out sooner. 
Sorry, a bit of a long rant, but yeah, PRs suck.Kovid Batra: No, I think this is really interesting, how we moved from enhancing developer experience and how code review should be done better because that ultimately impacts the overall experience and that's what most of the developers are doing most of the time. So, I think that makes sense. And it was.. Yeah?Jacob Singh: I just want to caveat that before you misquote me. I'm not saying formal reviews are bad. You should also have a formal review, but it should be like, it should be a formality. Like, you should have already done so many reviews informally along the way that anyone is reviewing it already kind of knows it's there and then the formal review happens. And it's like in the books and we've looked at it and we put the comments. It shouldn't just be like, "Here's a 20K patch.", a day before the deadline. You know what I mean? That shouldn't happen anymore, I think that's what I'm trying to say.Kovid Batra: Yeah. No, no, totally makes sense. I think this last piece was very interesting. And, we can have a complete discussion, a podcast totally on this piece itself. So, I'll stop here. And thank you so much for your time today, for discussing all these aspects of developer experience and how code reviews should be done better. Any parting advice, Jacob, for the dev teams of the modern age?Jacob Singh: The dev teams or the other managers? I guess the managers are probably watching this more than developers.Kovid Batra: Whichever you like.Jacob Singh: A parting advice. I would say that we're at the most exciting time to be an engineer since I mean, maybe I'm biased, but since I started coding. When I started coding, it was like, just as the web was taking off. You know, like, I remember when, like, CSS was released, you know, that's old. So I was like, "Oh, shit, this is great. This is so much fun!" You know, like, when it started getting adopted, right? So I think, like the sort of dynamic programmable web was nice when I started, right? Now, we're at the second most exciting one, in my opinion, as an engineer. And I think it's surprising to me. I work with a lot of portfolio companies at Alpha Wave. I talk to new companies that I'm looking at investing in. It's really surprising to me how few of them use Copilot or Cursor or these various sorts of AI tools to assist in development or like everyone uses them a little bit, but not programmatically. They don't really look into it too much. And I think that's a missed opportunity. I still code. When I code, I use them extensively. Like, extensively. I'm on ChatGPT. I'm on Copilot. I pay for all these subscriptions. You know, I use ShellGPT. I don't remember shell commands anymore. ShellGPT, by the way, is great to plug. Write what you want to do, hit ctrl+L, and it'll like generate the shell command for you. Even stuff I know how to do, it's faster. But the main thing is, like, the yak shaving goes away. So, I don't know if you guys know yak shaving, but yak shaving is this idea of having to do all this configuration, all this setup, all this screw around to get the thing actually working before you can start coding. Like, learning some new framework or dependency management, bullshit like that. That stuff is so much better now. You take your errors. You paste them into ChatGPT. It explains it. It's right most of the time. You can ask it to build a config script. So, I think if you know how to use the tool, you can just be a million times more productive. So, I would say lean into it. 
Don't be intimidated by it. Definitely don't shortchange it. Dedicate some research effort. Trust your engineers. Buy those subscriptions, It's 20 bucks a month. Don't be so cheap, please. Indian engineering managers are really cheap with tooling, I think a lot of time. Just spend it. It's fine. It's going to be worth it. I think that would be my big thing right now.Kovid Batra: Great, Jacob. Thank you. Thank you so much. Thank you so much for this. We'd love to have another discussion with you on any of the topics you love in the coming shows. And for today, I think, thanks a lot once again.Jacob Singh: My pleasure. Same here. Good talking to you, Kovid. All right. See you.Kovid Batra: Thank you. All right, take care. Bye-bye.Jacob Singh: Happy hacking!

AI

How does Gen AI address Technical Debt?

The software development field is constantly evolving. While this helps deliver products and services to end-users quickly, it also means developers might take shortcuts to ship on time. This not only reduces the quality of the software but also leads to increased technical debt.

Among these new trends and technologies is generative AI. It is emerging as a promising solution for the software development industry, one that can ultimately lead to higher-quality code and decreased technical debt.

Let’s explore more about how generative AI can help manage technical debt!

Technical debt: An overview

Technical debt arises when development teams take shortcuts to develop projects. While this gives them short-term gains, it increases their workload in the long run.

In other words, developers prioritize quick solutions over effective solutions. The four main causes behind technical debt are:

  • Business causes: Prioritizing business needs and the company’s evolving conditions can put pressure on development teams to cut corners. This can mean pulling deadlines forward or reducing costs to achieve desired goals.
  • Development causes: New technologies evolve rapidly, which makes it difficult for teams to switch or upgrade quickly, especially when they are already dealing with the burden of bad code.
  • Human resources causes: Unintentional technical debt can occur when development teams lack the skills or knowledge needed to implement best practices. This can result in more errors and inadequate solutions.
  • Resources causes: When teams don’t have enough time or resources, they take shortcuts by choosing the quickest solution. This can be due to budgetary constraints, insufficient processes and culture, deadlines, and so on.

Why is generative AI important for code management?

As per McKinsey’s study,

“… 10 to 20 percent of the technology budget dedicated to new products is diverted to resolving issues related to tech debt. More troubling still, CIOs estimated that tech debt amounts to 20 to 40 percent of the value of their entire technology estate before depreciation.”

But there’s a solution to it. Handling tech debt is possible and can have a significant impact:

“Some companies find that actively managing their tech debt frees up engineers to spend up to 50 percent more of their time on work that supports business goals. The CIO of a leading cloud provider told us, ‘By reinventing our debt management, we went from 75 percent of engineer time paying the [tech debt] ‘tax’ to 25 percent. It allowed us to be who we are today.’”

There are many traditional ways to minimize technical debt, including manual testing, refactoring, and code review. However, these manual tasks take a lot of time and effort, and in the ever-evolving software industry they are often overlooked or delayed.

With generative AI tools on the rise, they are increasingly seen as an effective way to manage code and, in turn, lower technical debt. These tools have already started reaching the market. They integrate into software development environments, gather and process data from across the organization in real time, and use that data to help reduce tech debt.

Some of the key benefits of generative AI are:

  • Identifies redundant code: Generative AI tools like CodeClone analyze code and suggest improvements. This helps improve code readability and maintainability and, subsequently, minimizes technical debt.
  • Generates high-quality code: Automated code review tools such as Typo support an efficient and effective code review process. They understand the context of the code and accurately fix issues, which leads to high-quality code (a minimal sketch of this kind of review follows this list).
  • Automates manual tasks: Tools like GitHub Copilot automate repetitive tasks and let developers focus on higher-value work.
  • Suggests optimal refactoring strategies: AI tools like DeepCode leverage machine learning models to understand code semantics, break code down into more manageable functions, and improve variable naming.
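
To make the review idea concrete, here is a minimal, hypothetical sketch of how a generative AI reviewer might flag redundant code and code smells in a diff. It assumes the OpenAI Python client and an illustrative model name; real tools such as Typo, CodeClone, or DeepCode use their own models and pipelines.

```python
# Minimal sketch: ask an LLM to flag redundant code and code smells in a diff.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set, and the
# model name is purely illustrative.
import subprocess
from openai import OpenAI

def current_diff(base: str = "origin/main") -> str:
    """Collect the diff between the base branch and the current branch."""
    result = subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def review_diff(diff: str) -> str:
    """Send the diff to the model and return its review comments."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name, not a recommendation
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. Flag redundant code, code smells, "
                        "and maintainability issues. Reply as a bullet list with "
                        "file and line references."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    diff = current_diff()
    print(review_diff(diff) if diff.strip() else "No changes to review.")
```

Reviewing only the diff, rather than whole files, keeps the prompt small and focuses the feedback on what actually changed.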

Case studies and real-life examples

Many industries have already started adopting generative AI technologies for tech debt management. These AI tools assist developers in improving code quality, streamlining SDLC processes, and saving costs.

Below are success stories of a few well-known organizations that have implemented these tools in their organizations:

Microsoft uses Diffblue Cover for automated testing and bug detection

Microsoft is a global technology leader that implemented Diffblue Cover for automated testing. Through this generative AI tool, Microsoft has seen a considerable reduction in the number of bugs during the development process. It also ensures that new features don’t compromise existing functionality, which positively impacts code quality. This helps deliver faster, more reliable releases and cost savings.

Google implements Codex for code documentation

Google is an internet search and technology giant that implemented OpenAI’s Codex to streamline its code documentation processes. Integrating this AI tool reduced the time and effort spent on manual documentation tasks. The resulting consistency across the entire codebase enhances code quality and allows developers to focus more on core tasks.

Facebook adopts CodeClone to identify redundancy

Facebook, a leading social media company, has adopted the generative AI tool CodeClone to identify and eliminate redundant code across its extensive codebase. This resulted in fewer inconsistencies and a more streamlined, efficient codebase, which in turn led to faster development cycles.

Pioneer Square Labs uses GPT-4 for higher-level planning

Pioneer Square Labs, a studio that launches technology startups, adopted GPT-4 to let AI handle mundane tasks so developers can focus on core work. The tool also assists with higher-level planning and with writing code, streamlining the development process.

How does Typo leverage generative AI to reduce technical debt?

Typo’s automated code review tool enables developers to merge clean, secure, high-quality code faster. It lets developers catch issues related to maintainability, readability, and potential bugs, and can detect code smells.

Typo also auto-analyses your codebase and pull requests to find issues and auto-generates fixes before you merge to master. Its Auto-Fix feature leverages GPT 3.5 Pro, trained on millions of open-source examples as well as exclusive anonymised private data, to generate line-by-line code snippets wherever an issue is detected in the codebase.

As a result, Typo helps reduce technical debt by detecting and addressing issues early in the development process, preventing the introduction of new debt, and allowing developers to focus on high-quality tasks.
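
For illustration only, the sketch below shows the general shape of an auto-fix step: given a flagged issue (file, line range, and description), ask a model for a drop-in replacement snippet and splice it into the file. This is not Typo's implementation; the Issue structure, prompt, and model name are assumptions made for the example.

```python
# Hypothetical auto-fix step: generate and apply a line-level replacement for a flagged issue.
# The Issue structure, prompt, and model name are illustrative assumptions.
from dataclasses import dataclass
from pathlib import Path
from openai import OpenAI

@dataclass
class Issue:
    path: str         # file containing the issue
    start: int        # first affected line (1-based)
    end: int          # last affected line (inclusive)
    description: str  # what the reviewer flagged

def propose_fix(issue: Issue) -> str:
    """Ask the model for replacement code for the flagged lines."""
    lines = Path(issue.path).read_text().splitlines()
    snippet = "\n".join(lines[issue.start - 1:issue.end])
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system",
             "content": "Rewrite the given code to resolve the described issue. "
                        "Return only the replacement code, with no explanation."},
            {"role": "user",
             "content": f"Issue: {issue.description}\n\nCode:\n{snippet}"},
        ],
    )
    return response.choices[0].message.content

def apply_fix(issue: Issue, replacement: str) -> None:
    """Splice the replacement lines into the file in place of the flagged ones."""
    lines = Path(issue.path).read_text().splitlines()
    lines[issue.start - 1:issue.end] = replacement.splitlines()
    Path(issue.path).write_text("\n".join(lines) + "\n")
```

A human still reviews the resulting change, for example as a pull request, before it is merged, which is exactly the kind of oversight discussed later in this article.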

Issue detection by Typo

Autofixing the codebase with an option to directly create a Pull Request

Key features

Supports top 10+ languages

Typo supports a variety of programming languages, including popular ones like C++, JS, Python, and Ruby, ensuring ease of use for developers working across diverse projects.

Fix every code issue

Typo understands the context of your code and quickly finds and fixes any issues accurately. Hence, empowering developers to work on software projects seamlessly and efficiently.

Efficient code optimization

Typo uses optimized practices and built-in methods spanning multiple languages. Hence, reducing code complexity and ensuring thorough quality assurance throughout the development process.

Professional coding standards

Typo standardizes code and reduces the risk of a security breach.

Click here to learn more about our Code Review tool

Can technical debt increase due to generative AI?

While generative AI can help reduce technical debt by analyzing code quality, removing redundant code, and automating the code review process, many engineering leaders believe it can also increase technical debt.

Bob Quillin, vFunction’s chief ecosystem officer, stated: “These new applications and capabilities will require many new MLOps processes and tools that could overwhelm any existing, already overloaded DevOps team.”

They aren’t wrong either!

Technical debt can increase when organizations don’t properly document their practices or train development teams to implement generative AI the right way. When these AI tools are adopted hastily, without considering the long-term implications, they can instead increase developers’ workload and add to technical debt. The following practices help keep generative AI from becoming a source of debt:

Ethical guidelines

Establish ethical guidelines for the use of generative AI in organizations. Understand the potential ethical implications of using AI to generate code, such as the impact on job displacement, intellectual property rights, and biases in AI-generated output.

Diverse training data quality

Ensure the quality and diversity of training data used to train generative AI models. When training data is biased or incomplete, these AI tools can produce biased or incorrect output. Regularly review and update training data to improve the accuracy and reliability of AI-generated code.

Human oversight

Maintain human oversight throughout the generative AI process. While AI can generate code snippets and provide suggestions, the final decision should rest with developers, who review and validate the output to ensure correctness, security, and adherence to coding standards.

Most importantly, human intervention is a must when using these tools. After all, it is developers’ judgment, creativity, and domain knowledge that drive the final decision. Generative AI is genuinely helpful in reducing developers’ manual work; however, it needs to be used properly.

Conclusion

In a nutshell, generative artificial intelligence tools can help manage technical debt when used correctly. These tools help to identify redundancy in code, improve readability and maintainability, and generate high-quality code.

However, note that these AI tools shouldn’t be used independently. They should work only as developers’ assistants, and developers must use them transparently and fairly.

Use of AI in the code review process

The code review process is one of the major contributors to developer burnout. This not only hinders developer productivity but also negatively affects software delivery. Yet it is a crucial aspect of software development that shouldn’t be compromised.

So, what is the alternative to manual code review? Let’s dive in:

The current State of Manual Code Review

Manual code reviews are crucial to the software development process. They can help identify bugs, mentor new developers, and promote a collaborative culture among team members. However, they come with their own set of limitations.

Software development is a demanding job with many projects and processes. Code review, when done manually, can take a lot of time and effort from developers, especially when reviewing an extensive codebase. It not only prevents them from working on other core tasks but also leads to fatigue and burnout, resulting in decreased productivity.

Since reviewers have to read the source code line by line to identify issues and vulnerabilities, the work can overwhelm them, and they may miss some critical paths. This can result in human errors, especially when a deadline is approaching, negatively impacting project efficiency and straining team resources.

In short, manual code review demands significant time, effort, and coordination from the development team.

This is when AI code review comes to the rescue. AI code review tools are becoming increasingly popular. Let’s look at what AI code review is and why it is important for developers:

What is AI Code Review?

AI code review is an automated process that examines and analyzes the code of software applications. It uses artificial intelligence and machine learning techniques to identify patterns and detect potential problems, common programming mistakes, and security vulnerabilities. Because these tools are driven by data rather than individual opinion, they can be more consistent than human reviewers and can read vast amounts of code in seconds.

Why Is AI in the Code Review Process Important?

Augmenting human efforts with AI code review has various benefits:

Enhance Overall Quality

Generative AI in code review tools can detect issues like potential bugs, security vulnerabilities, code smells, and bottlenecks that the human code review process often overlooks. It helps identify patterns and recommend code improvements that enhance efficiency and maintainability and reduce technical debt. This leads to robust, reliable software that meets high quality standards.

Improve Productivity

AI-powered tools can scan and analyze large volumes of code within minutes. They not only detect potential issues but also suggest improvements aligned with coding standards and practices. By providing immediate feedback, they allow the development team to catch errors early in the development cycle. This saves time spent on manual inspections, so developers can focus on the more intricate and creative parts of their work. A rough sketch of what this can look like in a CI pipeline follows below.
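
As an illustration of immediate feedback in practice, the sketch below reviews the changes on a branch and fails the build if any blocking findings come back. The model name, prompt, and the "BLOCKING:" prefix are assumptions made for this example, not a standard.

```python
# Hypothetical CI gate: review the branch diff and fail the build on blocking findings.
# The model name, prompt, and "BLOCKING:" prefix are illustrative assumptions.
import subprocess
import sys
from openai import OpenAI

def changed_diff(base: str = "origin/main") -> str:
    """Collect the diff between the base branch and the current branch."""
    result = subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def review_for_ci(diff: str) -> str:
    """Ask the model for findings, prefixing must-fix items with 'BLOCKING:'."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system",
             "content": "Review this diff. List findings one per line. Prefix "
                        "must-fix findings with 'BLOCKING:' and the rest with 'INFO:'."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

def main() -> int:
    diff = changed_diff()
    if not diff.strip():
        print("No changes to review.")
        return 0
    findings = review_for_ci(diff)
    print(findings)
    blocking = [line for line in findings.splitlines() if line.startswith("BLOCKING:")]
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(main())
```

Gating only on blocking findings keeps the pipeline from failing on stylistic suggestions while still surfacing them in the build log.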

Better Compliance with Coding Standards

The automated code review process ensures that code conforms to coding standards and best practices. It makes code more readable, understandable, and maintainable, improving overall code quality. Moreover, it enhances teamwork and collaboration among developers, since everyone adheres to the same guidelines and the review process stays consistent.

Enhance Accuracy

The major disadvantage of manual code reviews is that they are prone to human error and bias, which can compound into critical issues around structural quality and architectural decisions that negatively impact the software application. Generative AI in code reviews can analyze code much faster and more consistently than humans, maintaining accuracy and reducing bias because it relies on data rather than individual judgment.

Increase Scalability

As software projects grow in complexity and size, manual code reviews become increasingly time-consuming and may struggle to keep up with the scale of the codebase, further delaying the review process. As mentioned before, AI code review tools can handle large codebases in a fraction of the time and can help development teams maintain high standards of code quality and maintainability.

How Typo Leverages Gen AI to Automate Code Reviews?

Typo’s automated code review tool enables developers to merge clean, secure, high-quality code faster. It catches issues related to maintainability, readability, and potential bugs, and it detects code smells. It auto-analyzes your codebase and pull requests to find issues and auto-generates fixes before you merge to master.

Typo’s Auto-Fix feature leverages GPT 3.5 Pro to generate line-by-line code snippets wherever an issue is detected in the codebase. This means less time reviewing and more time for important tasks, making the whole process faster and smoother.

Issue detection by Typo

Auto fixing the codebase with an option to directly create a Pull Request

Key Features

Supports Top 10+ Languages

Typo supports a variety of programming languages, including popular ones like C++, JS, Python, and Ruby, ensuring ease of use for developers working across diverse projects.

Fix Every Code Issue

Typo understands the context of your code and quickly finds and fixes issues accurately, empowering developers to work on software projects seamlessly and efficiently.

Efficient Code Optimization

Typo uses optimized practices and built-in methods spanning multiple languages, reducing code complexity and ensuring thorough quality assurance throughout the development process.

Professional Coding Standards

Typo standardizes code and reduces the risk of a security breach.

Comparing Typo with Other AI Code Review Tools

There are other popular AI code review tools available in the market. Let's compare how Typo stacks up against them:

| Capability | Typo | Sonarcloud | Codacy | Codecov |
| --- | --- | --- | --- | --- |
| Code analysis | AI analysis and static code analysis | No | No | No |
| Code context | Deep understanding | No | No | No |
| Proprietary models | Yes | No | No | No |
| Auto debugging | Automated debugging with detailed explanations | Manual | No | No |
| Auto pull request | Automated pull requests and fixes | No | No | No |

AI vs. Humans: The Future of Code Reviews?

AI code review tools are becoming increasingly popular. One question that has been on everyone’s mind is whether these AI code review tools will take away developers’ jobs.

The answer is NO.

Generative AI in code reviews is designed to enhance and streamline the development process. It lets developers automate repetitive, time-consuming tasks and focus on core aspects of the software. Moreover, human judgment, creativity, and domain knowledge remain crucial to software development in ways AI cannot fully replicate.

While these tools excel at tasks like analyzing codebases, identifying code patterns, and supporting software testing, they still cannot fully understand complex business requirements or user needs, or make subjective decisions.

As a result, combining AI code review tools with human review is an effective approach to ensuring high-quality code.

Conclusion

The tech industry is demanding, and software engineering teams need to stay ahead of industry trends. New AI tools and technologies can complement their skills and expertise and make their work easier.

AI in the code review process offers remarkable benefits, including fewer human errors and consistent accuracy. But remember that these tools are here to assist you with your tasks, not to define your whole strategy or replace you.


How Generative AI Is Revolutionising Developer Productivity

Generative AI has become a transformative force in the tech world, and it isn't going to stop anytime soon. It will continue to have a major impact, especially in the software development industry. Generative AI, when used the right way, can help developers save time and effort. It allows them to focus on core tasks and upskilling, helps streamline various stages of the SDLC, and improves developer productivity. In this article, let's dive deeper into how generative AI can positively impact developer productivity.

What is Generative AI?

Generative AI is a category of AI models and tools designed to create new content: images, videos, text, music, or code. It uses various techniques, including neural networks and deep learning algorithms, to generate that content. Generative artificial intelligence holds great advantages for software developers looking to improve their productivity. It not only improves code quality and helps deliver better products and services but also allows them to stay ahead of their competitors. Below are a few benefits of Generative AI:

Increases Efficiency

With the help of Generative AI, developers can automate tasks that are either repetitive or don’t require much attention. This saves a lot of time and energy and allows developers to be more productive and efficient in their work. Hence, they can focus on more complex and critical aspects of the software without constantly stressing about other work.

Improves Quality

Generative AI can help minimize errors and address potential issues early. When configured to follow your coding standards, it can contribute to more effective code reviews. This increases code quality and decreases costly downtime and data loss.

Helps in Learning and Assisting with Work

Generative AI can assist developers by analyzing and generating examples of well-structured code, providing suggestions for refactoring, generating code snippets, and detecting blind spots. This further helps developers in upskilling and gaining knowledge about their tasks.

Cost Savings

Integrating generative AI tools can reduce costs. They enable developers to use existing codebases effectively and complete projects faster, even with smaller teams. Generative AI can streamline the stages of the software development life cycle and get more done on a smaller budget.

Predictive Analytics

Generative AI can help in detecting potential issues in the early stages by analyzing historical data. It can also make predictions about future trends. This allows developers to make informed decisions about their projects, streamline their workflow, and hence, deliver high-quality products and services.

How does Generative AI Help Software Developers?

Below are four key areas in which Generative AI can be a great asset to software developers:

It Eliminates Manual and Repetitive Tasks

Generative AI can take over the manual and routine tasks of software development teams, such as test automation, completing coding statements, and writing documentation. Developers provide a prompt, i.e. information about their code and documentation that adheres to best practices, and the AI generates the required content accordingly, minimizing human error and increasing accuracy. This frees up developers' creativity and problem-solving skills, letting them focus on solving complex business challenges and fast-tracking new software capabilities, which in turn helps deliver products and services to end users faster.

It Helps Developers to Tackle New Challenges

When developers face challenges or obstacles in their projects, they can turn to AI tools for assistance. These tools can track performance, provide feedback, offer predictions, and find the optimal path to complete tasks. Given clear, well-formed prompts, they can return problem-specific recommendations and proven solutions. This keeps developers from getting stuck and stressed over particular tasks; instead, they can spend their time and energy on other important work or take breaks. It increases their productivity and performance and improves the overall developer experience.

It Helps in Creating the First Draft of the Code

With the help of generative artificial intelligence, developers can get helpful code suggestions and generate initial drafts, either by entering a prompt in a separate window or directly within the IDE they use to develop the software. This keeps developers from falling into a slump and helps them get into the flow sooner. Beyond that, these AI tools can also assist in root cause analysis and generate new system designs, allowing developers to reason about code at a higher, more abstract level and focus more on what they want to build.

It Helps in Making Changes to Existing Code Faster

Generative AI can accelerate updates to existing code. Developers simply provide the criteria, and the AI tool proceeds from there. This usually covers tasks that get sidelined due to workload and lack of time; for example, refactoring existing code helps make small changes and improve code readability and performance. As a result, developers can focus on high-level design and critical decision-making without worrying much about existing tasks.

How does Generative AI Improve Developer Productivity?

Below are a few ways in which Generative AI can have a positive impact on developer productivity:

Focus on Meaningful Tasks

As Generative AI tools take on tedious and repetitive tasks, they allow developers to give their time and energy to meaningful activities. This reduces distractions and protects them from stress and burnout, increasing their productivity and positively impacting the overall developer experience.

Assist in their Learning Graph

Generative AI lets developers be less dependent on their seniors and co-workers, since they can gain practical insights and examples from these AI tools. It allows them to enter their flow state faster and reduces their stress levels.

Assist in Pair Programming

Through Generative AI, developers can collaborate with other developers easily. These AI tools help in providing intelligent suggestions and feedback during coding sessions. This stimulates discussion between them and leads to better and more creative solutions.

Increase the Pace of Software Development

Generative AI helps in the continuous delivery of products and services and drives business strategy. It addresses potential issues in the early stages and provides suggestions for improvements. Hence, it not only accelerates the phases of SDLC but improves overall quality as well.

5 Top Generative AI Tools for Software Developers

Typo

Typo auto-analyzes your code and pull requests to find issues and suggests auto-fixes before getting merged.

Use Case

The code review process is time-consuming. Typo enables developers to find issues as soon as a PR is raised and shows alerts within the Git account. It gives you a detailed summary of security, vulnerability, and performance issues, and to streamline the whole process it suggests auto-fixes and best practices to keep things moving.

GitHub Copilot

GitHub Copilot is an AI pair programmer that provides autocomplete-style suggestions as you code.

Use Case

Coding is an integral part of any software development project, but done manually it takes a lot of effort. GitHub Copilot draws suggestions from your current and related code files and lets you review, test, and accept suggestions to perform different actions. It also filters out vulnerable coding patterns and blocks problematic public code suggestions.

Tabnine

Tabnine is an AI-powered code completion tool that uses deep learning to suggest code as you type.

Use Case

Writing code by hand can keep you from focusing on other core activities. Tabnine learns your coding habits over time to provide increasingly accurate, personalized suggestions. It supports programming languages such as JavaScript and Python and integrates with popular IDEs for speedy setup and reduced context switching.

ChatGPT

ChatGPT is a language model developed by OpenAI that understands prompts and generates human-like text.

Use Case

Developers need to brainstorm ideas and get feedback on their projects, and this is where ChatGPT comes to the rescue. It helps them quickly find answers about coding, technical documentation, programming concepts, and much more. It uses natural language to understand questions and provide relevant suggestions.

Mintlify

Mintlify is an AI-powered documentation writer that allows developers to quickly and accurately generate code documentation.

Use Case

Code documentation can be a tedious process. Mintlify can analyze code and quickly understand complicated functions, and it includes built-in analytics to help developers understand how users engage with the documentation. It also has a Mintlify chat that reads documents and answers user questions instantly.

How to Mitigate Risks Associated with Generative AI?

No matter how effective Generative AI is becoming, it still produces defects and errors. Its output is not always correct, so human review remains important after handing tasks to AI tools. Below are a few ways you can reduce the risks related to Generative AI:

Implement Quality Control Practices

Develop guidelines and policies to address ethical challenges such as fairness, privacy, transparency, and accuracy in software development projects. Put monitoring in place that tracks model accuracy, performance metrics, and potential biases.

Provide Generative AI Training

Offer mentorship and training on Generative AI. This increases AI literacy across departments and mitigates risk. Help teams learn how to use these tools effectively and understand their capabilities and limitations.

Understand AI is an Assistant, Not a Replacement

Make sure your developers understand that these generative tools should be viewed as assistants only. Encourage collaboration between the tools and human operators to leverage the strengths of AI.

Conclusion

In a nutshell, Generative AI stands as a game-changer in the software development industry. When harnessed effectively, it can bring a multitude of benefits to the table. However, ensure that your developers approach its integration with caution.


Tutorials


How Typo Uses DORA Metrics to Boost Efficiency?

DORA metrics are a compass for engineering teams striving to optimise their development and operations processes.

Consistently tracking these metrics can lead to significant and lasting improvements in your software delivery processes and overall business performance.

Below is a detailed guide on how Typo uses DORA to improve DevOps performance and boost efficiency:

What are DORA Metrics?

In 2015, the DORA (DevOps Research and Assessment) team was founded by Gene Kim, Jez Humble, and Nicole Forsgren to evaluate and improve software development practices. The aim was to improve understanding of how organisations can deliver software faster, more reliably, and at higher quality.

They developed DORA metrics that provide insights into the performance of DevOps practices and help organisations improve their software development and delivery processes. These metrics help in finding answers to these two questions:

  • How can organisations identify elite performers?
  • What should low-performing teams focus on?

The Four DORA Metrics

DORA metrics help in assessing software delivery performance based on four key (or accelerate) metrics:

  • Deployment Frequency
  • Lead Time for Changes
  • Change Failure Rate
  • Mean Time to Recover

Deployment Frequency

Deployment Frequency measures the number of times code is deployed into production. It helps in understanding the team's throughput and quantifying how much value is delivered to customers.

When organizations achieve a high Deployment Frequency, they can enjoy rapid releases without compromising the software’s robustness. This can be a powerful driver of agility and efficiency, making it an essential component for software development teams.

One deployment per week is standard. However, it also depends on the type of product.

Why is it Important?

  • It provides insights into the overall efficiency and speed of the DevOps team’s processes.
  • It helps in identifying pitfalls and areas for improvement in the software development life cycle.
  • It helps in making data-driven decisions to optimise the process.
  • It helps in understanding the impact of changes on system performance.

Lead Time for Changes

Lead Time for Changes measures the time it takes for code changes to move from inception to deployment. The measurement of this metric offers valuable insights into the effectiveness of development processes, deployment pipelines, and release strategies.

By analysing the Lead Time for Changes, development teams can identify bottlenecks in the delivery pipeline and streamline their workflows to improve the overall speed and efficiency of software delivery. A shorter lead time indicates that the DevOps team deploys code more efficiently.

Why is it Important?

  • It helps organisations gather feedback and validate assumptions quickly, leading to informed decision-making and aligning software development with customer needs.
  • It helps organizations gain agility and adaptability, allowing them to swiftly respond to market changes, embrace new technologies, and meet evolving business needs.
  • It enables experimentation, learning, and continuous improvement, empowering organizations to stay competitive in dynamic environments.
  • It demands collaborative teamwork, breaking silos, fostering shared ownership, and improving communication, coordination, and efficiency.

Change Failure Rate

Change Failure Rate gauges the percentage of changes that require hot fixes or other remediation after production. It reflects the stability and reliability of the entire software development and deployment lifecycle.

By tracking CFR, teams can identify bottlenecks, flaws, or vulnerabilities in their processes, tools, or infrastructure that can negatively impact the quality, speed, and cost of software delivery.

A CFR between 0% and 15% is generally considered a good indicator of code quality.

Why is it Important?

  • It enhances user experience and builds trust by reducing failures.
  • It protects your business from financial risk, helping avoid revenue loss, customer churn, and brand damage.
  • It helps in allocating resources effectively and focuses on delivering new features.
  • It ensures changes are implemented smoothly and with minimal disruption.

Mean Time to Recovery

Mean Time to Recovery measures how quickly a team can bounce back from incidents or failures. It concentrates on determining the efficiency and effectiveness of an organisation’s incident response and resolution procedures.

A lower mean time to recovery is synonymous with a resilient system capable of handling challenges effectively.

The response time should be as short as possible. 24 hours is considered to be a good rule of thumb.

Why is it Important?

  • It enhances user satisfaction by reducing downtime and resolution times.
  • It mitigates the negative impacts of downtime on business operations, including financial losses, missed opportunities, and reputational damage.
  • It helps meet service level agreements (SLAs) that are vital for upholding client trust and fulfilling contractual commitments.
  • It provides valuable insights into day-to-day practices such as incident management and engineering team performance, and helps elevate customer satisfaction.

The Fifth Metric: Reliability

Reliability is a fifth metric that was added by the DORA team in 2021. It measures modern operational practices and doesn’t have standard quantifiable targets for performance levels.

Reliability comprises several measures of operational performance, including availability, latency, performance, and scalability, and covers user-facing behaviour, software SLAs, performance targets, and error budgets.

How Typo Uses DORA to Boost Dev Efficiency?

Typo is an effective software engineering intelligence platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It offers comprehensive insights into the deployment process through key DORA metrics such as change failure rate, time to build, and deployment frequency.

Below is a detailed view of how Typo uses DORA to boost dev efficiency and team performance:

DORA Metrics Dashboard

Typo’s DORA metrics dashboard has a user-friendly interface and robust features tailored for DevOps excellence. It helps identify bottlenecks, improves collaboration between teams, optimises delivery speed, and effectively communicates the team's success.

The DORA metrics dashboard pulls in data from all your sources and presents it in a visual, detailed way to engineering leaders and the development team.

The DORA metrics dashboard helps in many ways:

  • With pre-built integrations across the dev tool stack, the DORA dashboard gets all the relevant data flowing in within minutes.
  • It helps in deep diving and correlating different metrics to identify real-time bottlenecks, sprint delays, blocked PRs, deployment efficiency and much more from a single dashboard.
  • The dashboard sets custom improvement goals for each team and tracks their success in real-time.
  • It gives real-time visibility into a team’s KPI and lets them make informed decisions.

How to Build your DORA Metrics Dashboard?

Define your objectives

Firstly, define clear and measurable objectives. Consider KPIs that align with your organisational goals. Whether it’s improving deployment speed, reducing failure rates, or enhancing overall efficiency, having a well-defined set of objectives will help guide your implementation of the dashboard.

Understanding DORA metrics

Gain a deeper understanding of DORA metrics by exploring the nuances of Deployment Frequency, Lead Time, Change Failure Rate, and MTTR. Then, connect each of these metrics with your organisation’s DevOps goals to have a comprehensive understanding of how they contribute towards improving overall performance and efficiency.

Dashboard configuration

Follow specific guidelines to properly configure your dashboard. Customise the widgets to accurately represent important metrics and personalise the layout to create a clear and intuitive visualisation of your data. This ensures that your team can easily interpret the insights provided by the dashboard and take appropriate actions.

Implementing data collection mechanisms

To ensure the accuracy and reliability of your DORA Metrics, establish strong data collection mechanisms. Configure your dashboard to collect real-time data from relevant sources, so that the metrics reflect the current state of your DevOps processes.

Integrating automation tools

Integrate automation tools to optimise the performance of your DORA Metrics Dashboard.

By utilising automation for data collection, analysis, and reporting processes, you can streamline routine tasks. This will free up your team’s time and allow them to focus on making strategic decisions and improvements.

Utilising the dashboard effectively

To get the most out of your well-configured DORA Metrics Dashboard, use the insights gained to identify bottlenecks, streamline processes, and improve overall DevOps efficiency. Analyse the dashboard data regularly to drive continuous improvement initiatives and make informed decisions that will positively impact your software development lifecycle.

Comprehensive Visualization of Key Metrics

Typo’s dashboard provides clear and intuitive visualisations of the four key DORA metrics:

Deployment Frequency

It tracks how often new code is deployed to production, highlighting the team’s productivity.

By integrating with your CI/CD tool, Typo calculates Deployment Frequency by counting the number of unique production deployments within the selected time range. You can configure which workflows and repositories count as production.
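As a rough illustration of that counting, here is a minimal sketch; the data shape and the per-week averaging are assumptions made for the example, not Typo's internal implementation:

```python
from datetime import datetime

def deployment_frequency(deploy_times: list[datetime],
                         start: datetime, end: datetime) -> float:
    """Count production deployments in the window and average per week."""
    in_range = [t for t in deploy_times if start <= t <= end]
    weeks = max((end - start).days / 7, 1e-9)  # avoid division by zero
    return len(in_range) / weeks

# Example: three deployments over a two-week window -> 1.5 per week
deploys = [datetime(2024, 5, 1), datetime(2024, 5, 6), datetime(2024, 5, 10)]
print(deployment_frequency(deploys, datetime(2024, 5, 1), datetime(2024, 5, 15)))
```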

Cycle Time (Lead Time for Changes)

It measures the time it takes from code being committed to it being deployed in production, indicating the efficiency of the development pipeline.

In the context of Typo, it is the average time pull requests have spent in the “Coding”, “Pickup”, “Review”, and “Merge” stages of the pipeline. Typo considers all merged pull requests to the main/master/production branch within the selected time range and calculates the average time each pull request spent in every stage. Open or draft pull requests are not considered in this calculation.
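A minimal sketch of that averaging might look like the following; the stage durations and their unit (hours) are assumptions for illustration, not Typo's actual data model:

```python
from statistics import mean

# Hypothetical per-PR stage durations in hours (Coding, Pickup, Review, Merge),
# for merged PRs only; open/draft PRs are excluded, as described above.
merged_prs = [
    {"coding": 20.0, "pickup": 4.0, "review": 10.0, "merge": 1.0},
    {"coding": 35.0, "pickup": 8.0, "review": 16.0, "merge": 2.0},
]

stage_averages = {
    stage: mean(pr[stage] for pr in merged_prs)
    for stage in ("coding", "pickup", "review", "merge")
}
cycle_time = sum(stage_averages.values())  # total average time per PR, in hours

print(stage_averages)  # e.g. {'coding': 27.5, 'pickup': 6.0, ...}
print(cycle_time)      # 48.0 hours
```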

Change Failure Rate

It shows the percentage of deployments causing a failure in production, reflecting the quality and stability of releases.

There are multiple ways this metric can be configured:

  • A deployment that needs a rollback or a hotfix: For such cases, any Pull Request having a title/tag/label that represents a rollback/hotfix that is merged to production can be considered as a failure.
  • A high-priority production incident: For such cases, any ticket in your Issue Tracker having a title/tag/label that represents a high-priority production incident can be considered as a failure.
  • A deployment that failed during the production workflow: For such cases, Typo can integrate with your CI/CD tool and consider any failed deployment as a failure.

To calculate the final percentage, the total number of failures is divided by the total number of deployments (the deployment count can be taken either from the deployment PRs or from the CI/CD tool).
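In code, the calculation reduces to a simple ratio; the sample numbers below are illustrative only:

```python
def change_failure_rate(failures: int, deployments: int) -> float:
    """CFR = failed changes / total deployments, as a percentage."""
    if deployments == 0:
        return 0.0
    return 100.0 * failures / deployments

# Example: 3 rollbacks/hotfixes/incidents across 40 deployments -> 7.5%
print(change_failure_rate(3, 40))
```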

Mean Time to Restore (MTTR)

It measures the time taken to recover from a failure, showing the team’s ability to respond to and fix issues.

The way a team tracks production failures (CFR) defines how MTTR is calculated for that team. A team may track a production failure in one of the following ways (a minimal calculation sketch for the CI/CD case follows the list):

  • Pull Request tagging to track a deployment that needs a rollback or a hotfix: In such a case, MTTR is calculated as the time between the last deployment till such a Pull Request was merged to main/master/production.
  • Tickets tagging for high-priority production incidents: In such a case, MTTR is calculated as the average time such a ticket takes from the ‘In Progress’ state to the ‘Done’ state.
  • CI/CD integration to track deployments that failed during the production workflow: In such a case, MTTR is calculated as the average time from a deployment failure until a subsequent successful deployment.
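For the CI/CD case, a rough sketch of the calculation could look like this; the event format and sample timestamps are assumptions made for the example:

```python
from datetime import datetime
from statistics import mean

def mttr_hours(events: list[tuple[datetime, str]]) -> float:
    """Average time from each failed deployment to the next successful one."""
    events = sorted(events)
    recoveries = []
    failure_start = None
    for timestamp, status in events:
        if status == "failed" and failure_start is None:
            failure_start = timestamp
        elif status == "success" and failure_start is not None:
            recoveries.append((timestamp - failure_start).total_seconds() / 3600)
            failure_start = None
    return mean(recoveries) if recoveries else 0.0

deployments = [
    (datetime(2024, 5, 2, 10, 0), "failed"),
    (datetime(2024, 5, 2, 13, 0), "success"),  # 3 hours to restore
    (datetime(2024, 5, 9, 9, 0), "failed"),
    (datetime(2024, 5, 9, 10, 0), "success"),  # 1 hour to restore
]
print(mttr_hours(deployments))  # 2.0
```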

Benchmarking for Context

  • Industry Standards: By providing benchmarks, Typo allows teams to compare their performance against industry standards, helping them understand where they stand.
  • Historical Performance: Teams can also compare their current performance with their historical data to track improvements or identify regressions.


How Does it Help Engineering Leaders?

  • Typo provides a clear, data-driven view of software development performance. It offers insights into various aspects of development and operational processes.
  • It helps in tracking progress over time. Through continuous tracking, it monitors improvements or regressions in a team’s performance.
  • It supports DevOps practices that focus on both development speed and operational stability.
  • DORA metrics help in mitigating risk. With the help of CFR and MTTR, engineering leaders can manage and lower risk, ensuring more stability and reliability associated with software changes.
  • It identifies bottlenecks and inefficiencies and pinpoints where the team is struggling such as longer lead times or high failure rates.

How Does it Help Development Teams?

  • Typo provides a clear, real-time view of a team’s performance and lets the team make informed decisions based on empirical data rather than guesswork.
  • It encourages balance between speed and quality by providing metrics that highlight both aspects.
  • It helps in predicting future performance based on historical data. This helps in better planning and resource allocation.
  • It helps in identifying potential risks early and taking proactive measures to mitigate them.

Conclusion

DORA metrics deliver crucial insights into team performance. Monitoring Change Failure Rate and Mean Time to Recovery helps leaders ensure their teams are building resilient services with minimal downtime. Similarly, keeping an eye on Deployment Frequency and Lead Time for Changes assures engineering leaders that the team is maintaining a swift pace.

Together, these metrics offer a clear picture of how well the team balances speed and quality in their workflows.

How to engineer your feedback?

Many organizations are implementing a continuous feedback process. While it may seem straightforward, it is not: every developer takes feedback differently. Hence, it is important to engineer feedback the right way.

Why is the feedback process important?

Below are a few ways why continuous feedback is beneficial for both developers and engineering leaders:

Keeps everyone on the same page: Feedback keeps individuals aligned, no matter what type of task they are working on. It allows them to understand their strengths and improve on their blind spots, and hence deliver high-quality work.

Facilitates improvement: Feedback shows developers the areas they need to improve and the opportunities that suit their strengths. With the right context and motivation, it encourages software developers to work on their personal and professional growth.

Nurtures healthy relationships: Feedback fosters open and honest communication. It lets developers feel comfortable sharing ideas and seeking support without judgement, even when they aren't performing well.

Enhances user satisfaction: Feedback helps developers improve the quality of their work, which has a direct impact on user satisfaction and, in turn, on the organization.

Strengthens performance management: Feedback enables you to set clear expectations, track progress, and provide ongoing support and guidance to developers. This further strengthens their performance and streamlines their workflow.

How to engineer your feedback?

There are a lot of things to consider when giving effective and honest feedback. We’ve divided the process into three sections. Do check it out below:

Before the feedback session

Frame the context of the developer feedback

Plan in advance how you will start the conversation, what is worth mentioning, and what is not. For example, if the feedback relates to pull requests, you can start by discussing the developer's past performance in that area. You can then talk about how well they are performing, whether they are delivering work on time, how you would rate their performance and action plan, and whether there are any challenges they are facing. Make sure to relate it all to the bigger picture.

When framed appropriately and constructively, it helps in focusing on improvement rather than criticism. It also enables developers to take feedback the right way and help them grow and succeed.

Keep tracking continuously

Observe and note down everything related to the developers, and track their performance continuously. Jot down whatever you notice, even if it doesn't seem worth mentioning in a feedback session. This allows you to share feedback more accurately and comprehensively. It also helps you identify trends and patterns in developer performance and lets developers know that the feedback isn't based on isolated incidents but on consistent observation.

For example, XYZ is a software developer at ABC organization. The engineering leader observed XYZ for three months before delivering feedback. Over that period:

  • In the first month, XYZ struggled with the initial implementation strategy, so she provided him with resources.
  • In the second month, he showed signs of improvement, yet he hesitated to participate in team meetings.
  • In the third month, XYZ's technical skills kept improving, but he still struggled to engage in meetings and share his ideas.

As a result, the engineering leader was able to discuss his strengths and areas for improvement effectively.

Understand the difference between feedback and criticism

Before offering feedback to software development teams, make sure you are well aware of the difference between constructive feedback and criticism. Constructive feedback encourages developers to pursue their personal and professional development; criticism tends to make developers defensive and hinders their progress.

Constructive feedback focuses on the behavior and outcomes of the developers and helps them with actionable insights, while criticism focuses on faults and mistakes without providing the right guidance.

For example,

Situation: A developer’s recent code review missed several critical issues.

Feedback: “Your recent code review missed a few critical issues, like the memory leak in the data processing module. Next time, please double-check for potential memory leaks. If you’re unsure how to spot them, let’s review some strategies together.”

Criticism: “Your code reviews are sloppy and miss too many important issues. You need to do a better job.”

Collect all important information

Review previous feedback given to developers before the session. Check what was last discussed and make sure to bring it up again. Also include the observations you tracked during this period and connect them with the previous feedback. Look at metrics such as pull request activity, work progress, team velocity, work logs, and check-ins to get in-depth insight into their work. You can also gather peer reviews for 360-degree feedback and a better understanding of how well individuals are performing.

This makes your feedback balanced and takes into account all aspects of developers’ contributions and challenges.

During the feedback session

Two-way feedback

Feedback shouldn't be a top-down exercise; it must go both ways. Start by bringing up the discussion from the previous feedback session. Learn their opinions and perspectives on specific topics and ideas, and ask questions so they know you respect their views and want to hear what they would like to discuss.

Now, share your feedback based on the last discussion, observations, and performance. You can also modify your feedback based on their perspective and reflections. It allows the feedback to be detailed and comprehensive.

Establish clear steps for improvement

Once you have shared their areas for improvement, provide clear, actionable plans as well. Discuss what needs immediate attention and what steps they can take. Set small goals together, as this makes them easier to focus on and signals that their goals matter. Schedule follow-up meetings as they reach each step to understand any challenges they face, and provide resources and tools that can help them attain their goals.

Apply the SBI framework

Developed by the Center for Creative Leadership, SBI stands for Situation, Behavior, and Impact. The framework includes:

  • Situation: First, describe the specific context or scenario in which the observation/behavior took place. Provide factual details and avoid vague descriptions.

Example: Last week’s team collaboration on the new feature development.

  • Behavior: Now, articulate specific behavior you observed or experienced during that situation. Focus only on tangible actions or words instead of assumptions or generalizations.

Example: “You did not participate actively in the brainstorming sessions and missed a few important meetings.”

  • Impact: Lastly, explain the impact of behavior on you or others involved. Share the consequences on the team, project, and the organization.

Example: “This led to a lack of input from your side, and we missed out on potentially valuable ideas. It also caused some delays as we had to reschedule discussions.”

Final words could be: “Please ensure to attend all relevant meetings and actively participate in discussions. Your contributions are important to the team.”

This allows you to deliver feedback that is clear, actionable, and respectful, and keeps it relevant and directly tied to the situation. Note that this framework works for both positive and negative feedback.

Understand constraints and personal circumstances

It is also important to know whether any constraints are negatively impacting a developer's performance, such as tight deadlines or a heavy workload hampering their productivity, or health issues that make it hard to focus. Ask about these while you deliver feedback and shape the action plan accordingly. This shows developers that you care about them and makes the feedback more personalized and relevant. It also lets you propose tangible improvements rather than adding more pressure.

For example: “During the last sprint, there were a few missed deadlines. Is there something outside of work that might be affecting your ability to meet these deadlines? Please let me know if there’s anything we can do to accommodate your situation.”

Ask them if there’s anything else to discuss and summarize the feedback

Before concluding the meeting, ask them if there's anything else they would like to discuss. They may have missed something, or it may not have been brought up during the session.

Afterwards, summarize what was discussed. Ask the developers what their key takeaways from the session are and share your perspective as well. Document the summary to help both you and the developers in future feedback meetings. This builds mutual understanding and ensures that both sides are on the same page.

After the feedback session

Write a summary for yourself

Keep a record of what was discussed during the session and the action plans provided to the developers. You can refer to it in future feedback meetings or performance evaluations. An example summary structure:

  • Date and time
  • List the main topics and specific behaviors discussed.
  • Include any constraints, personal circumstances, or insights the developer shared.
  • Outline the specific actions, along with any support or resources you committed to providing.
  • Detail the agreed-upon timeline for follow-up meetings or check-ins to monitor progress.
  • Add any personal observations or reflections that might help in future interactions.

Monitor the progress

Ensure you give them measurable goals and timelines during the feedback session. Monitor their progress through check-ins, provide ongoing support and guidance, and keep discussing the challenges or roadblocks they are facing. It helps the developers stay on track and feel supported throughout their journey.

How can Typo help enhance the feedback process?

Typo is an effective software engineering intelligence platform that can help in improving the feedback process within development teams. Here’s how Typo’s features can be leveraged to enhance feedback sessions:

  • By providing visibility into key SDLC metrics, engineering managers can give more precise and data-driven feedback.
  • It also captures qualitative insights and provides a 360-degree view of the developer experience allowing managers to understand the real issues developers face.
  • Comparing the team’s performance across industry benchmarks can help in understanding where the developers stand.
  • Customizable dashboards allow teams to focus on the most relevant metrics, ensuring feedback is aligned with the team’s specific goals and challenges.
  • The sprint analysis feature tracks and analyzes the progress throughout a sprint, making it easier to identify bottlenecks and areas for improvement. This makes the feedback more timely and targeted.
Typo can help enhance the feedback process

For more information, visit our website!

Conclusion

Software developers deserve high-quality feedback. It not only helps them identify their blind spots but also polishes their skills. The feedback loop lets developers know where they stand and gives them the recognition they deserve.

Building and structuring an effective engineering team

Building a high-performing engineering team is crucial for the success of any company, especially in the dynamic and constantly evolving world of technology. Whether you’re a startup on the rise or an established enterprise looking to maintain your competitive edge, having a well-structured engineering team is essential.

This blog will explore the intricacies of building and structuring engineering teams for scale and success. We’ll cover many topics, including talent acquisition, skill development, team management, and more.

Whether you’re a CTO, a team leader, or an entrepreneur looking to build your own engineering team, this blog will equip you with the knowledge and tools to create a high-performing engineering team that can drive innovation and help you achieve your business goals.

What are the dynamics of engineering teams?

Before we dive into the specifics of team structure, it’s vital to understand the dynamics that shape engineering teams. Various factors, including team size, communication channels, leadership style, and cultural fit, influence these dynamics. Each factor plays a significant role in determining how well a team operates.

Team size

The size of a team can significantly impact its operation. Smaller teams tend to be more agile and flexible, making it easier for them to make quick decisions and respond to project changes. On the other hand, larger teams can provide more resources, skills, and knowledge, but they may struggle with communication and coordination.

Communication channels

Effective communication is essential for any team’s success. In engineering teams, communication channels play a significant role in ensuring team members can collaborate effectively. Different communication channels, such as email, chat, video conferencing, or face-to-face, can impact the team’s effectiveness.

Leadership style

A team leader’s leadership style can significantly impact the team’s effectiveness. Autocratic leaders tend to make decisions without input from team members, while democratic leaders encourage team members to participate in decision-making. Moreover, transformational leaders inspire and motivate team members to achieve their best.

Cultural fit

Cultural fit refers to how well team members align with the team’s values, norms, and beliefs. A team that has members with similar values and beliefs is more likely to work well together and be more productive. In contrast, a team with members with conflicting values and beliefs may struggle to work effectively.

Scaling engineering teams can present challenges, and planning and strategizing thoughtfully is crucial to ensure that the team remains effective. Understanding the dynamics that shape engineering teams can help teams overcome these challenges and work together effectively.

Key roles in engineering teams

An engineering team must be diverse and collaborative. Each team member should specialize in a particular area but also be able to comprehend and collaborate with others in building a product.

A few of them include:

Software development team lead and manager

The software development team lead plays a crucial role in guiding and coordinating the efforts of the software development team. They may lead anywhere from fewer than ten to hundreds of team members.

Software developer

Software developers write the code; their job is primarily technical, and they build the product. Most are individual contributors, i.e. they have no management or HR responsibilities.

Product managers

Product managers define the product vision, gather and prioritize requirements, and collaborate closely with engineering teams.

Designers

Designers create user-friendly interfaces, develop prototypes to visualize concepts and iterate on feedback-based designs.

Key principles for building and structuring engineering teams

Once the dynamics of engineering teams are understood, organizations can apply key principles to build and structure teams for scale. From defining goals and establishing role clarity to fostering a culture of collaboration and innovation, these principles serve as a foundation for effective team building.

  • Setting clear goals ensures everyone is aligned and working towards the same vision.
  • Clearly defined roles and responsibilities help prevent confusion and promote accountability within the team.
  • Foster an environment where team members feel empowered to collaborate, share ideas, and innovate.
  • Communication is the backbone of any successful team. Establishing efficient communication channels is vital for sharing information and maintaining transparency.
  • Encourage continuous learning and professional development to keep your team members motivated and up-to-date with the latest technologies and trends.
  • Allow individual team members autonomy while ensuring alignment with the organization’s overall goals and objectives.

Different approaches to structuring engineering teams

There is no one-size-fits-all approach to structuring engineering teams. Different structures may be more suitable depending on the organization’s size, industry, and goals. Organizations can identify the structure that best aligns with their unique needs and objectives by exploring various approaches.

The top two approaches are:

Project-based structure

Teams are formed around a project for a defined period. This is the traditional approach, where engineers and designers are selected from their respective departments and tasked with project-related work.

It may seem logical, but it poses challenges. Project-based teams can prioritize short-term objectives and collaborating with unfamiliar team members can lead to communication gaps, particularly between developers and other project stakeholders.

Product-based structure

Teams are aligned around specific products or features to promote ownership and accountability. Since this team structure is centered on the product, the work is long-term, and team members tend to work together more efficiently.

As the product gains traction and attracts users, the team needs to adapt to a changing environment i.e. restructuring and hiring specialists.

Other approaches include:

  • Functional-based structure: Organizing teams based on specialized functions such as backend, frontend, or QA.
  • Matrix-based structure: Combining functional and product-based structures to leverage expertise and resources efficiently.
  • Hybrid models: Tailoring the team structure to fit your organization’s unique needs and challenges.

Top pain points in building engineering teams

Sharing responsibilities

In engineering organizations, there is a tendency to rely heavily on one person for all responsibilities rather than distributing them among team members. This not only creates bottlenecks and inefficiencies but also slows progress and undermines the ability to deliver quality products.

Broken communication

The two most common communication issues when structuring and building engineering teams are alignment and context-switching between teams. These increase miscommunication among team members and lead to duplicated work, neglected responsibilities, and coverage gaps.

Lack of independence

When engineering leaders micromanage developers, it can hinder productivity, innovation, and overall team effectiveness. Hence, having a structure that fosters optimization, ownership, and effectiveness is important for building an effective team.

Best practices for scaling engineering teams

Scaling an engineering team requires careful planning and execution. Here are the best practices to build a team that scales well:

  • Streamline your hiring and onboarding processes to attract top talent and integrate new team members seamlessly.
  • Develop scalable processes and workflows to accommodate growth and maintain efficiency.
  • Foster a diverse and inclusive workplace culture to attract and retain top talent from all backgrounds.
  • Invest in the right tools and technologies to streamline development workflows and enhance collaboration.
  • Continuously evaluate your team structure and processes, making adjustments as necessary to adapt to changing needs and challenges.

Build an engineering team that sets your team up for success!

Building and structuring engineering teams for scale is a multifaceted endeavor that requires careful planning, execution, and adaptation.

But this doesn’t end here! Measuring a team’s performance is equally important to build an effective team. This is where Typo comes in!

It is an intelligent engineering management platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. It gives a comparative view of each team’s performance across velocity, quality, and throughput.

engineering management platform

Key features

  • Seamlessly integrates with third-party applications such as Git, Slack, calendars, and CI/CD tools.
  • ‘Sprint analysis’ feature allows for tracking and analyzing the team’s progress throughout a sprint.
  • Offers customized DORA metrics and other engineering metrics that can be configured in a single dashboard.
  • Offers engineering benchmark to compare the team’s results across industries.
  • User-friendly interface.

For more information, check out our website!

Iteration burndown chart: Tips for effective use

Agile project management relies on iterative development cycles to deliver value efficiently. Central to this methodology is the iteration burndown chart, a visual representation of work progress over time. In this blog, we’ll explore leveraging and enhancing the iteration burndown chart to optimize Agile project outcomes and team collaboration.

What is an iteration burndown chart?

An iteration burndown chart is a graphical representation of the total work remaining over time in an Agile iteration, helping teams visualize progress toward completing their planned work.

 iteration burndown chart

Components

It typically includes an ideal line representing the planned progress, an actual line indicating the real progress, and axes to represent time and work remaining.
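To make the two lines concrete, here is a minimal sketch of how the ideal and actual series behind a burndown chart might be computed; the iteration length, total story points, and daily figures are made-up sample values:

```python
# Hypothetical 10-day iteration starting with 40 story points of planned work.
total_points = 40
days = 10

# Ideal line: work burns down evenly to zero by the last day.
ideal = [total_points - total_points * day / days for day in range(days + 1)]

# Actual line: remaining work recorded at the end of each day (sample data).
actual = [40, 38, 38, 33, 30, 30, 26, 20, 14, 7, 0]

# Print both series side by side; a charting library would plot them instead.
for day, (i, a) in enumerate(zip(ideal, actual)):
    print(f"day {day:2d}  ideal {i:5.1f}  actual {a:3d}")
```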

Purpose

The chart enables teams to monitor their velocity, identify potential bottlenecks, and make data-driven decisions to ensure successful iteration completion.

Benefits of using iteration burndown charts

Understanding the advantages of iteration burndown charts is key to appreciating their value in Agile project management. From enhanced visibility to improved decision-making, these charts offer numerous benefits that can positively impact project outcomes.

  • Improved visibility: provides stakeholders with a clear view of project progress.
  • Early risk identification: helps identify and address issues early in the iteration.
  • Enhanced communication: facilitates transparent communication within the team and with stakeholders.
  • Data-driven decisions: enables teams to make informed decisions based on real-time progress data.

How to create an effective iteration burndown chart

Crafting an effective iteration burndown chart requires a thorough and step-by-step approach. Here are some detailed guidelines to help you create a well-designed burndown chart that accurately reflects progress and facilitates efficient project management:

  • Set clear goals: Before you start creating your chart, it’s essential to define clear objectives and expectations for the iteration. Be specific about what you want to achieve, what tasks need to be completed, and what resources you’ll need to get there.
  • Break down tasks: Once you’ve established your goals, you’ll need to break down tasks into manageable units to track progress effectively. Divide the work into smaller tasks that can be completed within a reasonable timeframe and assign them to team members accordingly.
  • Accurate estimation: Accurate estimation of effort required for each task is crucial for creating an effective burndown chart. Make sure to involve team members in the estimation process, and use historical data to improve accuracy. This will help you to determine how much work is left to be done and when the iteration will be completed.
  • Choose the right tools: Creating an effective burndown chart requires selecting the appropriate tools for tracking and visualizing data. Typo is a great option for creating and managing burndown charts, as it allows you to customize the chart’s appearance and track progress in real time.
  • Regular updates: Updating the chart regularly is essential for keeping track of progress and making necessary adjustments. Set a regular schedule for updating the chart, and ensure that team members are aware of the latest updates. This will help you to identify potential issues early on and adjust the plan accordingly.

By following these detailed guidelines, you’ll be able to create an accurate and effective iteration burndown chart that can help you and your team monitor your project’s progress and manage it more efficiently.

Tips for using iteration burndown charts effectively

While creating a burndown chart is a crucial first step, maximizing its effectiveness requires ongoing attention and refinement. These tips will help you harness the full potential of your iteration burndown chart, empowering your development teams to achieve greater success in Agile projects.

  • Simplicity: keep the chart simple and easy to understand.
  • Consistency: use consistent data and metrics for accurate analysis.
  • Collaboration: encourage team collaboration and transparency in updating the chart.
  • Analytical approach: analyze trends and patterns to identify areas for improvement.
  • Adaptability: adjust the chart based on feedback and lessons learned during the iteration.

Improving your iteration burndown chart

Continuous improvement lies at the heart of Agile methodology, and your iteration burndown chart is no exception. By incorporating feedback, analyzing historical data, and experimenting with different approaches, you can refine your chart to better meet your team’s and stakeholders’ needs.

  • Review historical data: analyze past iterations to identify trends and improve future performance.
  • Incorporate feedback: gather input from team members and stakeholders to refine the chart’s effectiveness.
  • Experiment with formats: try different chart formats and visualizations to find what works best for your team.
  • Additional metrics: integrate additional metrics to provide deeper insights into project progress.

Are iteration burndown charts worth it?

A burndown chart is great for evaluating the ratio of work remaining to the time left to complete it. However, relying solely on a burndown chart is not advisable due to certain limitations.

Time-consuming and manual process

Although creating a burndown chart in Excel is easy, entering data manually requires more time and effort. This makes the work repetitive and tiresome after a certain point.

Unable to give insights into the types of issues

The burndown chart helps track the progress of completing tasks or user stories over time within a sprint or iteration, but it doesn't provide insight into the specific types of issues or tasks being worked on, such as shipping new features or addressing technical debt.

Gives equal weight to all the tasks

A burndown chart doesn't differentiate between easy and difficult tasks. It treats them all as equal, regardless of size, complexity, or the effort required to complete them, which leads to a misleading picture of project progress. This can mask critical issues and hinder project management efforts.

Unable to give complete information on sprint predictability

The burndown chart primarily focuses on tracking remaining work throughout a sprint, but it doesn’t directly indicate the predictability of completing that work within the sprint timeframe. It lacks insight into factors like team velocity fluctuations or scope changes, which are crucial for assessing sprint predictability accurately.

How does Typo help with sprint predictability?

Typo’s sprint analysis is an essential tool for any team using an agile development methodology. It allows agile teams to track and analyze overall progress throughout a sprint timeline.  It helps to gain visual insights into how much work has been completed, how much work is still in progress, and how much time is left in the sprint. This information can help to identify any potential problems early on and take corrective action.


Our sprint analysis feature uses data from Git and issue management tools to provide insights into how software development teams are working. Teams can see how long tasks are taking, how often they're being blocked, and where bottlenecks are occurring.

It is easy to use and can be integrated with existing Git and Jira/Linear/Clickup workflows.

Key features

  • A velocity chart shows how much work has been completed in previous sprints.
  • A sprint backlog that shows all of the work that needs to be completed in the sprint.
  • A list of sprint issues that shows the status of each issue.
  • Time tracking to see how long tasks are taking.
  • Blockage tracking to check how often tasks are being blocked, and what the causes of those blocks are.
  • Bottleneck identification to identify areas where work is slowing down.
  • Historical data analysis to compare sprint data over time.

Constantly improve your charts!

The iteration burndown chart is a vital tool in Agile project management. It offers agile and scrum teams a clear, concise way to track progress and make data-driven decisions.

However, one shouldn't rely solely on burndown charts. Advanced sprint analysis tools such as Typo allow teams to track and gain visual insights into the overall progress of the work.

What are Jira Dashboards and How to Create Them?

Jira is a widely used project management tool that enables teams to work together efficiently and achieve outstanding outcomes. The Jira dashboard is a vital component of this tool, offering teams valuable insights, metrics, and project visibility. In this journey, we will explore the potential of Jira dashboards and learn how to leverage their full capabilities.

What is a Jira Dashboard?

A Jira dashboard serves as the nerve center of project activity, offering a consolidated view of tasks, progress, and key metrics. It gives stakeholders a centralized location to monitor project health, track progress, and make informed decisions.

Jira Core dashboard: your project status at a glance

What are the Components of a Jira Dashboard?

Gadgets

These modular components provide specific information and functionality, such as task lists, burndown charts, and activity streams. There are several gadgets built into Jira such as filter results gadget, issue statistics gadget, and road map gadget. However, additional gadgets can also be downloaded from the Atlassian Marketplace. Some of them are the pivot gadget and gauge gadget.

Reports

Jira dashboards host various reports, including velocity charts, sprint summaries, and issue statistics, offering valuable insights into team performance and project trends.

Why is it Used?

Jira dashboards are used for several reasons:

  • Visibility: Dashboards offer stakeholders a real-time snapshot of project status and progress, promoting transparency and accountability.
  • Decision Making: By providing access to actionable insights and performance metrics, dashboards enable data-driven decision-making, leading to more informed choices.
  • Collaboration: Dashboards foster collaboration by providing a centralized platform for teams to track tasks, share updates and communicate effectively.
  • Efficiency: Dashboards streamline project management processes and enhance team productivity by consolidating project information and metrics in one location.

The default Jira dashboard

The default dashboard is also known as the system dashboard. It is the screen Jira users see the first time they log in. It includes gadgets from Jira’s pre-installed selection and is limited to only one dashboard page.

Creating your Jira dashboard

Creating custom dashboards requires careful planning and consideration of project objectives and team requirements. Let’s explore the step-by-step process of crafting a bespoke dashboard:

Create a New Dashboard

Log in to your Jira account. Go to the dashboard and click ‘Create Dashboard’.

Define Dashboard Objectives

Start by defining the objectives and goals of your dashboard page. Determine what information is crucial for your team to track and monitor, and tailor your dashboard accordingly.

Select Relevant Gadgets and Reports

Choose gadgets and reports that align with your project’s needs and objectives. When curating your dashboard content, consider factors such as team workflow, project complexity, and stakeholder requirements.

Opt for your Preferred Layout and Configuration

Choose your preferred dashboard layout and configuration to ensure optimal visibility and usability for all stakeholders. Arrange gadgets and reports logically and intuitively to facilitate easy navigation and information access.

Iterative Refinement

Embrace an iterative dashboard refinement approach. Solicit user and stakeholder feedback to improve its effectiveness and usability continuously. Regularly assess and update your dashboard to reflect evolving project needs and priorities.

Share the Dashboard with Team Members

Don’t forget to share the Jira dashboard with the team. This ensures transparency and fosters a collaborative culture. By granting appropriate permissions, they can view and interact with the dashboard and get real-time updates.

JIRA Dashboard Examples

Personal Dashboard

A personal dashboard is tailored to individual needs and offers various advantages in streamlining workflow management and improving productivity. It provides a centralized platform for organizing and visualizing a user's tasks, projects, issues, and more.

Sprint Burndown Dashboard

This dashboard gives real-time updates on whether the team is on pace to meet a sprint goal. It offers a glimpse of how much work is left in the queue and how long your team will take to complete it. Moreover, the sprint burndown dashboard lets you jump on any issue when the remaining workload is not on pace to meet the delivery date.

Workload Dashboard

The workload dashboard, also known as the resource monitoring dashboard, tracks the amount of work assigned to each team member so workloads can be adjusted accordingly. It helps identify workload patterns and plan resource allocation.

Issue Tracking Dashboard

The issue tracking dashboard allows users to quickly identify and prioritize the most important issues. It focuses on providing visibility into the status and progress of issues or tickets within a project.

Maximizing Dashboard Impact

To maximize the impact of your Jira dashboard, consider the following best practices:

Promote Transparency and Collaboration

Share your dashboard with relevant stakeholders to promote transparency and collaboration. Encourage team members to actively engage with the dashboard and provide feedback to drive continuous improvement.

Leverage Automation and Integration

Integrating your Jira dashboard with other tools and systems is the best way to automate data capture and reporting processes. Leverage integration capabilities to streamline workflow management and enhance productivity.

Foster Data-Driven Decision Making

Empower project teams and leaders to make informed decisions by providing access to actionable insights and performance metrics through the dashboard. Encourage data-driven discussions and decision-making to drive project success.

Advanced dashboard customization

Take your Jira dashboard customization to the next level with advanced techniques and strategies:

Dashboard Filters and Contextualization

Implement filters and contextualization techniques to personalize the dashboard experience for individual users or specific project phases. Allow users to tailor the dashboard view based on their preferences and requirements.

Dynamic Dashboard Updates

Utilize dynamic updating capabilities to ensure that your dashboard reflects real-time changes and updates in project data. Implement automated refresh intervals and notifications to keep stakeholders informed and engaged.

Custom Gadgets and Extensions

Explore the possibilities of custom gadgets and extensions to extend the functionality of your Jira dashboard. Develop custom gadgets or integrate third-party extensions to address unique project requirements and enhance user experience.

How Typo's Sprint Analysis Feature is Useful for the Jira Dashboard?

Typo's sprint analysis feature can be seamlessly integrated with the Jira dashboard. It allows teams to track and analyze their progress throughout a sprint and provides valuable insights into work progress, work breakup, team velocity, developer workload, and issue cycle time.

The benefits of the sprint analysis feature are:

  • It helps spot potential issues early, allowing for corrective action to avoid major problems.
  • Pinpointing inefficiencies, such as excessive time spent on tasks, enables workflow improvements to boost team productivity.
  • Provides real-time progress updates, ensuring deadlines are met by highlighting areas needing adjustments.

The Better Way to Achieve Project Excellence

A well-designed Jira dashboard is a catalyst for project excellence, providing teams with the insights and visibility they need to succeed. By understanding its components, crafting a tailored dashboard, and maximizing its impact, you can unlock Jira dashboards’ full potential and drive your projects toward success.

Furthermore, while Jira dashboards offer extensive functionalities, it’s essential to explore alternative tools that may simplify the process and enhance user experience. Typo is one such tool that streamlines project management by offering intuitive dashboard creation, seamless integration, and a user-friendly interface. With Typo, teams can effortlessly visualize project data, track progress, and collaborate effectively, ultimately leading to improved productivity and project outcomes. Explore Typo today and revolutionize your project management experience.

How to fix scrum anti-patterns?

Scrum has become one of the most popular project management frameworks, but like any methodology, it's not without its challenges. Scrum anti-patterns are common obstacles that teams may face, leading to decreased productivity, low morale, and project failure. Let's explore the most prevalent Scrum anti-patterns and provide practical solutions to overcome them.

Lack of clear definition of done

A lack of a clear Definition of Done (DoD) can cause teams to struggle to deliver shippable increments at the end of each sprint. It can be due to a lack of communication and transparency. This ambiguity leads to rework and dissatisfaction among stakeholders.

Fix

Collaboration is key to establishing a robust DoD. Scrum team members should work together to define clear criteria for completing each user story. These criteria should encompass all necessary steps, from development to testing and acceptance. The DoD should be regularly reviewed and refined to adapt to evolving project needs and ensure stakeholder satisfaction.

Overcommitting in sprint planning

One of the most common anti-patterns is overcommitment during sprint planning meetings. It sets unrealistic expectations, leading to compromised quality and missed deadlines.

Fix

Base sprint commitments on past performance and team capacity rather than wishful thinking. Focus on realistic sprint goal setting to ensure the team can deliver commitments consistently. Emphasize the importance of transparency and communication in setting and adjusting sprint goals.

Micromanagement by the scrum master

Micromanagement stifles team autonomy and creativity, leading to disengagement, lack of trust and reduced productivity.

Fix

Scrum Masters should adopt a servant-leadership approach, empowering teams to self-organize and make decisions autonomously. They should foster a culture of trust and collaboration where team members feel comfortable taking ownership of their work. They should provide support and guidance when needed, but avoid dictating tasks or solutions.

Lack of product owner engagement

Disengaged Product Owners fail to provide clear direction and effectively prioritize the product backlog, leading to confusion and inefficiency.

Fix

Encourage regular communication and collaboration between the Product Owner and the development team. Ensure that the Product Owner is actively involved in sprint planning, backlog refinement, and sprint reviews. Establish clear channels for feedback and decision-making to ensure alignment with project goals and stakeholder expectations.

Failure to adapt and improve

Failing to embrace a mindset of continuous improvement and adaptation leads to stagnation and inefficiency.

Fix

Prioritize retrospectives and experimentation to identify areas for improvement. Encourage a culture of learning and innovation where team members feel empowered to suggest and implement changes. Emphasize the importance of feedback loops and iterative development to drive continuous improvement and adaptation.

Scope creep

Allowing the project scope to expand unchecked during the sprint leads to incomplete work and missed deadlines.

Fix

Define a clear product vision and prioritize features based on value and feasibility. Review and refine the product backlog regularly to ensure that it reflects the most valuable and achievable items. Encourage stakeholder collaboration and feedback to validate assumptions and manage expectations.

Lack of cross-functional collaboration

Siloed teams hinder communication and collaboration, leading to bottlenecks and inefficiencies.

Fix

Foster a collaboration and knowledge-sharing culture across teams and disciplines. Encourage cross-functional teams to work together towards common goals. Implement practices such as pair programming, code reviews, and knowledge-sharing sessions to facilitate collaboration and break down silos.

Inadequate Sprint review and retrospective

Rushing through sprint retrospective and review meetings results in missed opportunities for feedback and improvement.

Fix

Allocate sufficient time for thorough discussion and reflection during sprint review and retrospective meetings. Encourage open and honest communication and ensure that all development team members have a chance to share their insights and observations. Based on feedback and retrospective findings, prioritize action items for continuous improvement.

Unrealistic commitments by the product owner

Product Owners making unrealistic commitments disrupt the team’s focus and cause delays.

Fix

Establish a clear process for managing changes to the product backlog. Encourage collaboration between the Product Owner and the development team to negotiate realistic commitments and minimize disruptions during the sprint. Prioritize backlog items based on value and effort to ensure the team consistently delivers on its commitments.

Lack of stakeholder involvement

Limited involvement or feedback from stakeholders leads to misunderstandings and dissatisfaction with the final product.

Fix

Engage stakeholders early and often throughout the project lifecycle. Solicit feedback and involve stakeholders in key decision-making processes. Communicate project progress regularly and solicit input to ensure alignment with stakeholder expectations and requirements.

Ignoring technical debt

Neglecting to address technical debt results in decreased code quality, increased bugs, and slower development velocity over time.

Fix

Allocate time during each sprint for addressing technical debt alongside new feature development. Encourage collaboration between developers and stakeholders to prioritize and tackle technical debt incrementally. Invest in automated testing and refactoring to maintain code quality and reduce technical debt accumulation.

Lack of continuous integration and deployment

Failing to implement continuous integration and deployment practices leads to integration issues, longer release cycles, and reduced agility.

Fix

Establish automated CI/CD pipelines to ensure that code changes are integrated and deployed frequently and reliably. Invest in infrastructure and tools that support automated testing and deployment. Encourage a culture of automation and DevOps practices to streamline the development and delivery process.

Daily scrum meetings are inefficient

The daily scrum is often treated as a daily status meeting, which causes it to lose its focus on collaboration and decision-making. When team members don't find value in these meetings, it leads to disengagement and decreased motivation.

Fix

In daily scrums, the focus should be on talking to each other about the most important work to get done that day and how to do it. Encourage team members to collaborate to tackle problems and achieve sprint goals. Moreover, keep the daily scrums short and timeboxed, typically to 15 minutes.

Navigating scrum challenges with confidence

Successfully implementing Scrum requires more than just following the framework; it demands a keen understanding of potential pitfalls and proactive strategies to overcome them. By addressing common Scrum anti-patterns, teams can cultivate a culture of collaboration, efficiency, and continuous improvement, leading to better project outcomes and stakeholder satisfaction.

However, without the right tools, identifying and addressing these anti-patterns can be daunting. That’s where Typo comes in. Typo is an intuitive project management platform designed to streamline Agile processes, enhance team communication, and mitigate common Scrum challenges.

With Typo, teams can effortlessly manage their Scrum projects, identify and address anti-patterns in real-time, and achieve greater success in their Agile endeavors.

So why wait? Try Typo today and elevate your Scrum experience to new heights!

How to Improve Your Jira Ticket Management?

Jira software has become the backbone of project management for many teams across various industries. Its flexibility and powerful features make it an invaluable tool for organizing tasks, tracking progress, and collaborating effectively. However, maximizing its potential requires more than just basic knowledge. To truly excel in Jira ticket management, you must implement strategies and best practices that streamline your workflows and enhance productivity.

What is Jira Ticket Management?

Jira is a popular project management tool developed by Atlassian, commonly used for issue tracking, bug tracking, and project management. Jira ticket management refers to the process of creating, updating, assigning, prioritizing, and tracking issues within Jira.


Key Challenges in Jira Ticketing System

Requires Significant Manual Work

One of the major challenges with the Jira ticketing platform is that it requires a lot of tedious, manual work. This leads to developer frustration, incomplete ticket updates, and undocumented work.

Complexity of Configuration

Setting up Jira software to align with the specific needs of a team or project can be complicated. Configuring workflows, custom fields, and permissions requires careful planning and may involve a learning curve for administrators.

Lacks Data Hygiene

Because of the points mentioned above, much of the software development team's work can go untracked and invisible. The team therefore lacks data hygiene, which leads top management to make decisions with incomplete information and further impacts planning accuracy.

How to Manage JIRA Tickets Better?

Below are some essential tips to help you manage your Jira tickets better:

JIRA Automations

Developers often find it labor-intensive to keep tickets updated. Hence, Jira provides automations that ease developers' work. Although these automations are a bit complex initially, once mastered they offer significant efficiency gains, and they can also be customized.

Here are a few Jira automations that you can take note of:

Smart Auto-Assign

This is one of the most commonly used automations. It ensures accountability for an issue by automatically assigning it to its creator, so there is always a designated individual responsible for addressing the matter, streamlining workflow management within the team.

Auto-Create Sub-Tasks

This automation can be customized to suit various scenarios, such as applying it to epics and stories or refining it with specific conditions tailored to your workflow. For example, when a bug issue is reported, you can set up automation to automatically create tasks aimed at resolving the problem. It not only streamlines the process but also ensures that necessary tasks are promptly initiated, enhancing overall efficiency in issue management.
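For teams that prefer scripting this behaviour outside of Jira's built-in automation rules, a similar effect can be approximated with the Jira REST API. The sketch below is a minimal, hypothetical Python example: the site URL, credentials, project, parent issue key, and sub-task titles are all placeholders, and error handling is kept to a minimum.

```python
import requests

# Hypothetical values -- replace with your own Jira Cloud site, project, and credentials.
JIRA_BASE = "https://your-domain.atlassian.net"
AUTH = ("you@example.com", "your-api-token")  # basic auth with an API token


def create_subtask(parent_key: str, summary: str) -> str:
    """Create a Sub-task under an existing issue and return its key."""
    payload = {
        "fields": {
            "project": {"key": parent_key.split("-")[0]},  # e.g. "PROJ" from "PROJ-123"
            "parent": {"key": parent_key},
            "summary": summary,
            "issuetype": {"name": "Sub-task"},
        }
    }
    resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]


# Example: when a bug such as PROJ-123 is reported, spin up standard follow-up tasks.
if __name__ == "__main__":
    for step in ("Reproduce the bug", "Write a failing test", "Fix and verify"):
        print(create_subtask("PROJ-123", step))
```

In practice, the no-code automation rule is usually simpler to maintain; a script like this is mainly useful when sub-task creation needs to be triggered from an external system.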

Clone Issues

Implementing this advanced automation involves creating a duplicate of an issue in a different project when it undergoes a specific transition. It also leaves a comment on the original issue to establish a connection between them. It becomes particularly valuable in scenarios where one project is dedicated to managing customer requests, while another project is focused on executing the actual work.

Change Due Date

This automation automatically computes and assigns a due date to an issue when it’s moved from the backlog to the ‘In Progress’ status.  This streamlines the process of managing task timelines, ensuring that deadlines are promptly established as tasks transition into active development stages.

Standardize Ticket Creation

Establishing clear guidelines for creating tickets ensures consistency across your projects. Include essential details such as a descriptive title, priority level, assignee, and due date. This ensures that everyone understands what needs to be done at a glance, reducing confusion and streamlining the workflow.

Moreover, standardizing ticket creation practices fosters alignment within your team and improves communication. When everyone follows the same format for ticket creation, it becomes easier to track progress, assign tasks, and prioritize work effectively. Consistency also enhances transparency, as stakeholders can quickly grasp the status of each ticket without needing to decipher varying formats.

Customize Workflows

Tailoring Jira workflows to match your team’s specific processes and requirements is essential for efficient ticket management. Whether you follow Agile, Scrum, Kanban, or a hybrid methodology, configure workflows that accurately reflect your workflow stages and transitions. This customization ensures your team can work seamlessly within Jira, optimizing productivity and collaboration.

Customizing workflows allows you to streamline your team’s unique processes and adapt to changing project needs. For example, you can define distinct stages for task assignment, development, testing, and deployment that reflect your team’s workflow. Custom workflows empower teams to work more efficiently by clarifying task progression and facilitating smoother handoffs between team members.

Prioritize Effectively

Not all tasks are created equal in Jira. Use priority fields to categorize tickets based on urgency and importance. This strategic prioritization helps your team focus on high-priority items and prevents critical tasks from slipping through the cracks. By prioritizing effectively, you can ensure that important deadlines are met and resources are allocated efficiently.

Effective prioritization involves considering various factors, such as project deadlines, stakeholder requirements, and resource availability. By assessing the impact and urgency of each task, teams can more effectively allocate their time and resources. Regularly reviewing and updating priorities ensures your team remains agile and responsive to changing project needs.

Utilize Labels and Tags

Leverage tags or custom fields to add context to your tickets. Whether it’s categorizing tasks by feature, department, or milestone, these metadata elements make it easier to filter and search for relevant tickets. By utilizing labels and tags effectively, you can improve organization and streamline ticket management within Jira.

Furthermore, consistent labeling conventions enhance collaboration and communication across teams. When everyone adopts a standardized approach to labeling tickets, it becomes simpler to locate specific tasks and understand their context. Moreover, labels and tags can provide valuable insights for reporting and analytics, enabling teams to track progress and identify trends over time.

Encourage Clear Communication

Effective communication is the cornerstone of successful project management. Encourage team members to provide detailed updates, ask questions, and collaborate openly within Jira ticket comments. This transparent communication ensures that everyone stays informed and aligned, fostering a collaborative environment conducive to productivity and success.

Clear communication within Jira ticket comments keeps team members informed and facilitates knowledge sharing and problem-solving. Encouraging open dialogue enables team members to provide feedback, offer assistance, and address potential roadblocks promptly. Additionally, documenting discussions within ticket comments provides valuable context for future reference, aiding in project continuity and decision-making.

Automate Repetitive Tasks

Identify repetitive tasks or processes and automate them using Jira’s built-in automation features or third-party integrations. This not only saves time but also reduces the likelihood of human error. By automating repetitive tasks, you can free up valuable resources and focus on more strategic initiatives, improving overall efficiency and productivity.

Moreover, automation can standardize workflows and enforce best practices, ensuring project consistency. By defining automated rules and triggers, teams can streamline repetitive processes such as task assignments, status updates, and notifications. This minimizes manual intervention and enables team members to devote their time and energy to tasks that require human judgment and creativity.

Regularly Review and Refine

Continuously reviewing your Jira setup and workflows is essential to identify areas for improvement. Solicit feedback from team members and stakeholders to understand pain points and make necessary adjustments. By regularly reviewing and refining your Jira configuration, you can optimize processes and adapt to evolving project requirements effectively.

Moreover, regular reviews foster a culture of continuous improvement within your team. By actively seeking feedback and incorporating suggestions for enhancement, you demonstrate a commitment to excellence and encourage team members to engage. Additionally, periodic reviews help identify bottlenecks and inefficiencies, allowing teams to address them proactively and maintain high productivity levels.

Integrate with Other Tools

Jira seamlessly integrates with a wide range of third-party tools and services, enhancing its capabilities and extending its functionality. Integrating with other tools can streamline your development process and enhance collaboration, whether it’s version control systems, CI/CD pipelines, or communication platforms. Incorporating workflow automation tools into the mix further enhances efficiency by automating repetitive tasks and reducing manual intervention, ultimately accelerating project delivery and reducing errors.

Furthermore, integrating Jira with other tools promotes cross-functional collaboration and data sharing. By connecting disparate systems and centralizing information within Jira, teams can eliminate silos and improve visibility into project progress. Additionally, integrating with complementary tools allows teams to leverage existing investments and build upon established workflows, maximizing efficiency and effectiveness.

Foster a Culture of Continuous Improvement

Encourage a mindset of continuous improvement within your software teams. Encourage feedback, experimentation, and learning from both successes and failures. By embracing a culture of constant improvement, you can adapt to changing requirements and drive greater efficiency in your Jira ticket management process while also building a robust knowledge base of best practices and lessons learned.

Moreover, fostering a culture of continuous improvement empowers team members to take ownership of their work and seek opportunities for growth and innovation. By encouraging experimentation and learning from failures, teams can cultivate resilience and agility, enabling them to thrive in dynamic environments. Additionally, celebrating successes and acknowledging contributions fosters morale and motivation, creating a positive and supportive work culture.

How these Strategies Can Help in Better Planning?

Better Jira ticket management helps improve planning accuracy. Below are a few ways these strategies can help with better planning:

  • Automating these tasks reduces the likelihood of human error and ensures that essential tasks are promptly initiated and tracked, leading to better planning accuracy.
  • Establishing clear guidelines for creating tickets reduces confusion and ensures that all necessary details are included from the start, facilitating more accurate planning and resource allocation.
  • Clear communication within JIRA comments ensures that everyone understands project requirements and updates, reducing misunderstandings and enhancing planning accuracy by facilitating effective coordination and decision-making.
  • Connecting disparate systems and centralizing information improves visibility into project progress and facilitates data sharing, thereby improving planning by providing a comprehensive view of project status and dependencies.
  • When you consistently follow through on your commitments, you build trust not just within your own team but across the entire company. This allows other teams to confidently align their timelines with development timelines, leading to a tightly aligned, high-velocity organization.

Plan your Way into a Good Jira Ticket System!

Improving your Jira ticket management, essential for effective task management, requires thoughtful planning, ongoing refinement, and a commitment to best practices. Implementing these tips and fostering a culture of continuous improvement can optimize your workflows, enhance collaboration, and drive greater project success, benefiting both internal teams and external customers.

If you need further help in optimizing your engineering processes, Typo is here to help you.

Curious to know more? Learn about Typo here!

How to Create a Burndown Chart in Excel?

In Agile project management, it is crucial to get a clear picture of the project's reality. One of the best ways to do this is to visualize progress.

A Burndown chart is a project management chart that shows the remaining work needed to reach project completion over time.

Let's understand how you can create a burndown chart in Excel:

What is a Burndown Chart?

A Burndown chart visually represents teams’ or projects’ progress over time. It analyzes their pace, reflects progress, and determines if they are on track to complete it on time.

Burndown charts are generally of three types:

Product Burndown Chart

The product burndown chart focuses on the big picture and visualizes the entire project. It determines how many product goals the development team has achieved so far and the remaining work.

Sprint Burndown Chart

The sprint burndown chart focuses on the ongoing sprint. It indicates progress toward completing the sprint backlog.

Epic Burndown Chart

This chart focuses on how your team is performing against the work in the epic over time. It helps to track the advancement of major deliverables within a project.

Components of Burndown Chart

Axes

A burndown chart has two axes: X and Y. The horizontal axis represents the time or iteration and the vertical axis displays user story points.

Ideal Work Remaining

It is the diagonal line sloping downwards that represents the remaining work a team has at a specific point of the project or sprint under ideal conditions.

Actual Work Remaining

It is a realistic depiction of the team’s performance that is updated in real-time. It is drawn as the teams progress and complete user stories.  

Story Points

Each point on the work lines displays a measurement of work remaining at a given time.

Project/Sprint End

It is the rightmost point of your burndown chart that represents whether the team has completed a project/sprint on time, behind, or ahead of schedule.

Benefits of Burndown Chart

Visual Representation of Work

A Burndown chart helps in keeping an eye on teams’ work progress visually. This is not only simple to use but also motivates the team to perform well.

Shows a Direct Comparison

A burndown chart is useful to show the direct comparison between planned work and actual progress over time. This helps in quickly assessing whether the team is on track to meet its goals.

Better Team Productivity

A burndown chart acts as a source of motivation. Such charts transparently show progress and work efficiency, improving collaboration and cooperation between team members.

Quickly Identifies or Spots Blockers

A burndown chart must be updated daily. This helps track progress in real time and identify problems in the early stages, which assists in completing the project on time.

How to Create a Burndown Chart in Excel?

Step 1: Create Your Table

Open a new sheet in Excel and create a new table that includes 3 columns.

The first column should contain the dates of the sprint, the second column the ideal burndown (the ideal rate at which work will be completed), and the last column the actual burndown (updated as story points are completed).

Step 2: Add Data in these Columns

Now, fill in the data accordingly. This includes the dates of your sprint and, in the Ideal Burndown column, numbers indicating the desired number of tasks remaining after each day throughout a, say, 10-day sprint.

As you complete tasks each day, update the spreadsheet to record the number of tasks still remaining in the 'Actual Burndown' column.

Step 3: Create a Burndown Chart

Now, it’s time to convert the data into a graph. To create a chart, follow these steps: Select the three columns > Click ‘Insert’ on the menu bar > Select the ‘Line chart’ icon, and generate a line graph to visualize the different data points you have in your chart.
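If maintaining the spreadsheet by hand becomes tedious, the same chart can also be generated with a short script. The sketch below is a minimal Python example using matplotlib; the 10-day sprint length, the 50-point total, and the actual-burndown values are made-up figures you would replace with your own data.

```python
import matplotlib.pyplot as plt

total_points = 50          # example: 50 story points planned for the sprint
sprint_days = 10           # example: a 10-day sprint
days = list(range(sprint_days + 1))

# Ideal burndown: remaining work decreases linearly from total_points to 0.
ideal = [total_points - total_points * d / sprint_days for d in days]

# Actual burndown: placeholder values updated at the end of each day.
actual = [50, 48, 45, 44, 40, 33, 30, 24, 15, 8, 0]

plt.plot(days, ideal, label="Ideal burndown", linestyle="--")
plt.plot(days, actual, label="Actual burndown", marker="o")
plt.xlabel("Sprint day")
plt.ylabel("Story points remaining")
plt.title("Sprint burndown")
plt.legend()
plt.show()
```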

How to Use a Burndown Chart in the Best Possible Way?

Determine the Project Scope

Study the project scope and divide the project or sprint into short-term tasks. Be sure to review them and estimate the time required to complete each task based on the project deadline.

Check the Chart Often

The Scrum master must check the chart often and update it daily. This helps spot flagging trends, catch pitfalls early, and ensure progress aligns with expectations.

Pay Attention to the Outcome

Don’t lose sight of the outcome. By focusing on it, software development teams can ensure they are making progress toward their goals and adjust their efforts accordingly to stay on track for successful project completion.

Don’t Put in Weekends

Teams pause work during weekends and holidays. Excluding weekends improves accuracy by focusing solely on the days when active work is being done, giving a clearer representation of progress and highlighting the team's actual productivity on working days.

Encourage Team Ownership

A burndown chart that is accessible to the entire team fosters collaboration and accountability. It gives members a sense of ownership, prompting them to discuss progress, address challenges, and celebrate achievements.

Limitations of a Burndown Chart

A burndown chart is great for evaluating the ratio of work remaining to the time it takes to complete that work. However, relying solely on a burndown chart is unwise due to certain limitations.

A Time-Consuming and Manual Process

Although creating a burndown chart in Excel is easy, entering the data manually takes considerable time and effort, and the work becomes repetitive and tiresome after a certain point.

There are various tools available in the market that offer collaboration and automation features including Jira, Trello, and Asana.

It Doesn’t Give Insights into the Types of Issues

The burndown chart helps in tracking the progress of completing tasks or user stories over time within a sprint or iteration. However, it doesn't provide insight into the specific types of issues or tasks being worked on, whether that is shipping new features, paying down technical debt, and so on.

It Gives Equal Weight to all the Tasks

A burndown chart doesn't differentiate between an easy and a difficult task. It treats them all as equal, regardless of their size, complexity, or the effort required to complete them. This leads to a misleading picture of project progress, which can mask critical issues and hinder project management efforts.

As a result, the burndown chart is not a reliable metric engineering leaders can solely trust. It is always better to complement it with sprint analysis tools to provide additional insights tailored to agile project management. A few of the reasons are stated below:

  • Sprint analysis software can offer a wider range of metrics such as velocity, cycle time, throughput, and cumulative flow diagrams to provide a more comprehensive understanding of team performance and process efficiency.
  • These tools typically offer customization options to tailor metrics and reports according to the team’s specific needs and preferences.
  • They are designed with Agile principles in mind which incorporate concepts such as iterative improvement, feedback loops, and continuous delivery.

Typo - An Effective Sprint Analysis Tool

Typo's sprint analysis feature allows engineering leaders to track and analyze their team's progress throughout a sprint. It uses data from Git and the issue management tool to provide insights into how much work has been completed, how much work is still in progress, and how much time is left in the sprint, helping identify potential problems early on and take corrective action.


Key Features:

  • A velocity chart shows how much work has been completed in previous sprints.
  • A sprint backlog that shows all of the work that needs to be completed in the sprint.
  • A list of sprint issues that shows the status of each issue.
  • Time tracking to see how long tasks are taking.
  • Blockage tracking to check how often tasks are being blocked, and what the causes of those blocks are.
  • Bottleneck identification to identify areas where work is slowing down.
  • Historical data analysis to compare sprint data over time.

How to Write Clean Code?

Martin Fowler once said, “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

Clean code is an essential component of software development.

Writing clean code is like making a sales pitch: if you use words full of technical jargon, you end up losing your audience. The same is true of coding. Writing clean code enhances the readability, maintainability, and understandability of the software.

What is Clean Code?

Robert C. Martin, in his book “Clean Code: A Handbook of Agile Software Craftsmanship,” defined clean code as:

“A code that has been taken care of. Someone has taken the time to keep it simple and orderly. They have laid appropriate attention to details. They have cared.”

Clean code is clear, understandable, and maintainable. It is well-organized, properly documented, and follows standard conventions. The purpose of clean code is to create software that is not just functional but also readable and efficient throughout its lifecycle, since the audience isn’t just a computer but real people.

Why is Clean Code Important?

Clean code is the foundation of sustainable software development. Below are a few reasons why clean code is important:

Reduce Technical Debt

Technical debt can slow down the development process in the long run. Clean code ensures that future modifications will be a smoother and less costly process.

Increase Code Readability and Maintainability

Clean code means that the developers are prioritizing clarity. When it is easier to read, understand, and modify code, it leads to faster software development.

Enhance Collaboration

Good code means that the code is accessible to all team members and follows coding standards. This helps in improved communication and collaboration among them.

Debugging and Issue Resolution

Clean code is designed with clarity and simplicity, making it easier to locate and understand specific sections of the codebase. This helps in identifying and resolving issues in the early stages.

Ease of Testing

Clean code facilitates unit testing, integration testing, and other forms of automated testing, leading to increased reliability and maintainability of the software.

Clean Code Principles and Best Practices

Below are some established clean code principles that most developers find useful.

KISS Rule

Apply the KISS (Keep it simple, stupid) rule. It is one of the oldest principles of clean code. Don’t make the code unnecessarily complex; make it as simple as possible so that it takes less time to write, has fewer bugs, and is easier to understand and modify.

Curly’s Law

This law states that the entity (class, function, or variable) must have a single, defined goal. It should only do one thing in one circumstance.

DRY Rule

DRY (Don’t repeat yourself) is closely related to the KISS rule and Curly’s Law. It says to avoid unnecessary repetition or duplication of code. Ignoring it makes the code prone to bugs and makes changes difficult.
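A small, invented example of the DRY rule in practice: two functions that repeat the same discount logic are collapsed into one helper that expresses the rule once.

```python
# Before: the same discount logic is repeated for two customer types (violates DRY).
def price_for_member(base_price):
    return round(base_price * 0.9, 2)   # 10% discount, duplicated shape of logic below

def price_for_partner(base_price):
    return round(base_price * 0.8, 2)   # only the rate differs

# After: one function expresses the rule once; callers pass the varying part.
def discounted_price(base_price, discount_rate):
    """Apply a discount rate (e.g. 0.10 for 10%) and round to cents."""
    return round(base_price * (1 - discount_rate), 2)

print(discounted_price(100.0, 0.10))  # 90.0
print(discounted_price(100.0, 0.20))  # 80.0
```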

YAGNI Rule

YAGNI (You aren’t gonna need it) rule is an extreme programming practice that states that the developers shouldn’t add functionality unless deemed necessary. It should be used in conjunction with continuous refactoring, unit testing, and integration.

Fail Fast

It means that the code should fail as early as possible. This is because issues can be quickly identified and resolved which further limits the number of bugs that make it into production.

Boy Scout Rule

This rule by Uncle Bob states that always leave the code cleaner than you found it. It means that software developers must incrementally improve parts of the codebase they interact with, no matter how minute the enhancement might be.

SOLID Principles

Apply the SOLID principles. This refers to:

S: The Single Responsibility Principle states that a class should have only a single responsibility.

O: The Open-Closed Principle states that software entities should be open for extension but closed for modification.

L: The Liskov Substitution Principle means that subclasses should be substitutable for their base class without producing incorrect results.

I: The Interface Segregation Principle states that interfaces should be specific to clients instead of being generic for all clients.

D: The Dependency Inversion Principle means that classes should depend on abstractions (interfaces) rather than concrete implementations.
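As a quick illustration of the Single Responsibility Principle, the hypothetical sketch below splits a class that both computes a report and writes it to disk into two classes, each with one reason to change.

```python
# Violates SRP: one class both computes the report and persists it.
class ReportManager:
    def __init__(self, entries):
        self.entries = entries

    def total(self):
        return sum(self.entries)

    def save(self, path):
        with open(path, "w") as f:
            f.write(str(self.total()))


# Follows SRP: computation and persistence are separate responsibilities.
class Report:
    def __init__(self, entries):
        self.entries = entries

    def total(self):
        return sum(self.entries)


class ReportWriter:
    @staticmethod
    def save(report, path):
        with open(path, "w") as f:
            f.write(str(report.total()))
```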

A few of the best practices include:

Use Descriptive and Meaningful Names

Choose descriptive and clear names for variables, functions, classes, and other identifiers. They should be easy to remember, fit the context, and convey purpose and behavior, making the code understandable.
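A small, made-up before/after showing how descriptive names carry intent:

```python
# Before: terse names force the reader to reverse-engineer intent.
def f(l, t):
    return [x for x in l if x > t]

# After: names state what the function does and what its inputs mean.
def filter_orders_above_threshold(order_totals, threshold):
    return [total for total in order_totals if total > threshold]

print(filter_orders_above_threshold([120, 45, 300], 100))  # [120, 300]
```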

Follow Established Code-Writing Standards

Most programming languages have community-accepted coding standards and style guides, such as the Google style guides for Java and JavaScript, and PEP 8 for Python. Organizations should also have internal coding rules and standards that provide guidelines for consistent formatting, naming conventions, and overall code organization.

Avoid Writing Unnecessary Comments

Comments help explain the code. However, the codebase changes continuously, so comments can quickly become outdated or obsolete, creating confusion and distraction for software developers. Make sure to keep comments updated, and avoid poorly written or redundant comments, as they increase the cognitive load of software engineering teams.

Avoid Magic Numbers

Magic numbers are hard-coded numbers in code. They are considered to be a bad practice since they can cause ambiguity and confusion among developers. Instead of directly using them, create symbolic constants for hard-coded values. It makes it easy to change the value at a later stage and improves the readability and maintainability of the code.
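A brief, invented before/after of replacing a magic number with a named constant:

```python
# Before: 86400 is a magic number -- its meaning isn't obvious at the call site.
def is_expired(age_in_seconds):
    return age_in_seconds > 86400

# After: named constants document intent and give one place to change the value.
SECONDS_PER_DAY = 86_400
SESSION_TTL_SECONDS = SECONDS_PER_DAY  # sessions expire after one day

def is_session_expired(age_in_seconds):
    return age_in_seconds > SESSION_TTL_SECONDS
```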

Refactor Continuously

Ensure that you refactor regularly to enhance the structure and readability of the code. Refactoring also improves flexibility and helps rework code that is overly complex, poorly structured, or duplicated.

You can apply refactoring techniques such as extracting methods, renaming variables, and consolidating duplicate code to keep the codebase cleaner.

Version Control

Version control systems such as Git, SVN, and Mercurial help track changes to your code and roll back to previous versions if necessary. Before refactoring, ensure that the code is under version control so you can safely experiment with changes. Version control also helps you understand the evolution of the project and maintains the integrity of the codebase by enforcing a structured workflow.


Testing

Software developers can write unit tests to verify the code’s correctness as well-tested code is reliable and easier to refactor. Test-driven development helps in writing cleaner code as it considers edge cases and provides immediate feedback on code changes.
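As a minimal illustration, here is a hypothetical unit test written with Python's built-in unittest module; the slugify function exists only for this example.

```python
import unittest


def slugify(title: str) -> str:
    """Convert a title into a URL-friendly slug (example function under test)."""
    return "-".join(title.lower().split())


class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Clean Code Rocks"), "clean-code-rocks")

    def test_extra_whitespace(self):
        # Edge case: repeated spaces should not produce empty segments.
        self.assertEqual(slugify("  Clean   Code  "), "clean-code")


if __name__ == "__main__":
    unittest.main()
```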

Code Reviews

Continuous code review helps ensure code quality by identifying potential issues, catching bugs, and enforcing coding standards. It also facilitates collaboration between software developers, letting them learn from each other’s strengths and review mistakes together.

Typo - An Automated Code Review Tool

Typo’s automated code review tool not only enables developers to catch issues related to code maintainability, readability, and potential bugs but also can detect code smells. It identifies issues in the code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps the code error-free, making the whole process faster and smoother.

Key features:

  • Supports top 10+ languages including JS, Python, Ruby
  • Understands the context of the code and fixes issues accurately
  • Optimizes code efficiently
  • Standardizes code and reduces the risk of a security breach
  • Provides automated debugging with detailed explanations

Conclusion

Writing clean code isn’t just a crucial skill for developers. It is an important way to sustain software development projects.

By following the above-mentioned principles and best practices, you can develop a habit of writing clean code. It will take time but it will be worth it in the end.

Hope this was helpful. All the best!

How to identify and remove dead code?

Dead code is one of the most overlooked aspects of software development projects. It tends to accumulate as projects evolve, and a large amount of dead code can be harmful to the software.

The best way to prevent this is to detect dead code in the early stages in order to maintain the quality of the software application.

Let’s talk more about dead code below:

What is Dead Code?

Dead code refers to segments of code that are unnecessary for the software program: they are executed, but their results are never used or accessed.

Dead code is also known as zombie code. Such code may have been part of earlier versions, experimental features, or functions that are no longer needed. If dead code remains in the software, it decreases the software’s efficiency and adds unnecessary complexity, which makes the code harder to understand and maintain.

Other Types of Dead Code

Unreachable Code

The segment of code that is never executed under any condition during program runtime. It could be due to conditional statements, loops, or other control flow structures. Besides this, the issue may even arise during development because of coding errors, incorrect logic, or unintended consequences of code refactoring.
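Two small, invented examples of unreachable code that most static analyzers would flag: statements placed after a raise or after a return can never execute.

```python
def apply_discount(price):
    if price < 0:
        raise ValueError("price cannot be negative")
        print("never printed")        # unreachable: follows a raise
    return price * 0.9


def legacy_rounding(value):
    return round(value)
    value = value + 0.5               # unreachable: follows a return
    return int(value)
```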

Obsolete Code

A portion of code that was once useful but is no longer needed. It has become outdated or irrelevant due to changes in software requirements, functionality, technology, or best practices. Obsolete code may still be present in the codebase but is no longer recommended for use.

Orphaned Code

Code that was once part of a functional feature or system but is now left behind or isolated. This can result from changes in project requirements, refactoring, feature removal, or other modifications in the development process. Like obsolete code, it may still be present but is no longer integrated into, or contributing to, the application’s functionality.

Commented out Code

Sometimes, developers ‘comment out’ code rather than deleting it, intending to use it in the future. When they forget about it, it becomes dead code. While this is a common practice, developers must keep track of such code; otherwise it reduces code readability and maintainability.

Why Remove Dead Code?

Dead code is a major contributor to technical debt. While a small amount of technical debt is fine, if it grows it can negatively affect the team’s progress, increase time to market for end-users, and reduce customer satisfaction.

Hence, it is important to monitor technical debt through engineering metrics to take note of dead code as well.

Besides this, there are other reasons why removing dead code is crucial:

Improves Maintainability

When dead code is present, it can complicate the understanding and maintenance of software systems. It can further lead to confusion and misunderstandings which increases the cognitive load of the engineering team.

Eliminating dead code lets the team focus on relevant code, which increases code readability and facilitates feature updates and bug fixes.

Reduces Security Risks

Dead code can be a hidden backdoor entry point into the system, which is a threat to the security of the software. Moreover, dead code often carries dependencies that are no longer needed.

Removing dead code simplifies code complexities, and improves code review and analysis processes. This further helps to address and reduce security vulnerabilities easily.

Decreases Code and Cognitive Complexity

Dead code disrupts the understanding of the codebase structure. It not only slows down the development process but also reduces developers’ productivity and effectiveness.

Eliminating dead code reduces the overall size of the codebase, making it more concise and easier to manage, which enhances developers’ performance.

Avoid Code Duplication

Duplicate code is a considerable strain on the software development process. However, when dead code is present, it diverts developers from identifying and addressing areas where code duplication occurs.

Hence, eliminating dead code avoids code duplication and improves the codebase’s quality.

Streamline Development

When dead code is not present in the software, developers can focus on the relevant, active parts of the codebase. It also streamlines the process, since there are no unnecessary distractions, and makes it easier to identify and address issues.

How to Identify and Remove Dead Code?

Static Analysis Tools

Dead code can often be removed through static code analysis tools. Automated tools such as code quality checkers can help in detecting unused variables, classes, imports, or modules. This allows developers to address and eliminate the dead code easily which reduces the development cost and improves the overall quality of the system.

However, the drawback is that when there is uncertainty about program behavior, such tools may not be able to remove dead code safely. Hence, static code analysis tools are not a complete solution.

Dynamic Analysis Tools

Dynamic code analysis tools involve running the program to see which lines are executed and identifying which code paths are never reached. The code that is never executed or used in the codebase, i.e. dead code, can then be eliminated.

However, most of these tools are specific to programming languages.

Version Control History

Leverage version control systems such as Git: commit history can identify code that was once active but is now deprecated or replaced. Commits where code was removed or modified can indicate areas where dead code may be found.

In case of a mistake, the code can be retrieved from the version control system, so removal is less risky and easily manageable.

Refactoring

Through refactoring, developers carefully examine the codebase to identify sections that include unused or old code, unnecessary variables, functions, or classes. Hence, revealing dead code that can be safely removed. Moreover, refactoring aims to optimize code for performance, maintainability, and readability. This further allows developers to look out for inefficient or unnecessary code by replacing or redesigning these segments.

Code Reviews

Code review is an effective method for maintaining code quality. It promotes simplicity and clarity in the codebase and can help detect dead code by applying best practices, standards, and conventions. However, when not automated, code reviews can be time-consuming and harder to implement, so it is recommended to use automated code review tools to speed up the process.

Typo - Automated Code Review Tool

Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.

Key features:

  • Supports top 8 languages including C++ and C#
  • Understands the context of the code and fixes issues accurately
  • Optimizes code efficiently
  • Provides automated debugging with detailed explanations
  • Standardizes code and reduces the risk of a security breach

Conclusion

In software engineering, detecting and removing dead code is imperative for streamlining the development process. You can choose the method or combination of methods to remove dead code that best aligns with your project’s needs, resources, and constraints.

All the best!

View All

Product Updates

View All

Typo Launches groCTO: Community to Empower Engineering Leaders

In an ever-evolving tech world, organisations need to innovate quickly while keeping up high standards of quality and performance. The key to achieving these goals is empowering engineering leaders with the right tools and technologies. 

About Typo

Typo is a software intelligence platform that optimizes software delivery by identifying real-time bottlenecks in SDLC, automating code reviews, and measuring developer experience. We aim to help organizations ship reliable software faster and build high-performing teams. 

However, engineering leaders often struggle to bridge the divide between traditional management practices and modern software development leading to missed opportunities for growth, ineffective team dynamics, and slower progress in achieving organizational goals. 

To address this gap, we launched groCTO, a community designed specifically for engineering leaders.

What is groCTO Community? 

Effective engineering leadership is crucial for building high-performing teams and driving innovation. However, many leaders face significant challenges and gaps that hinder their effectiveness. The role of an engineering leader is both demanding and essential. From aligning teams with strategic goals to managing complex projects and fostering a positive culture, they have a lot on their plates. Hence, leaders need to have the right direction and support so they can navigate the challenges and guide their teams efficiently. 

Here’s when groCTO comes in! 

groCTO is a community designed to empower engineering managers on their leadership journey. The aim is to help engineering leaders evolve, navigate complex technical challenges, and drive innovative solutions to create groundbreaking software. Engineering leaders can connect, learn, and grow to enhance their capabilities and, in turn, the performance of their teams. 

Key Components of groCTO 

groCTO Connect

Over 73% of successful tech leaders believe having a mentor is key to their success.

At groCTO, we recognize mentorship as a powerful tool for addressing leadership challenges and offering personalised support and fresh perspectives. That’s why we’ve kept Connect a cornerstone of our community - offering 1:1 mentorship sessions with global tech leaders and CTOs. With over 74 mentees and 20 mentors, our Connect program fosters valuable relationships and supports your growth as a tech leader.

These sessions allow emerging leaders to: 

  • Gain personalised advice: Through 1:1 sessions, mentors address individual challenges and tailor guidance to the specific needs and career goals of emerging leaders. 
  • Navigate career growth: Mentors understand the strengths and weaknesses of each individual and help them focus on improving specific leadership skills and competencies and on building confidence.
  • Build valuable professional relationships: Our mentorship sessions expand professional connections and foster collaborations and knowledge sharing that can offer ongoing support and opportunities. 

Weekly Tech Insights

To keep our tech community informed and inspired, groCTO brings you a fresh set of learning resources every week:

  • CTO Diaries: The CTO Diaries provide a unique glimpse into the experiences and lessons learned by seasoned Chief Technology Officers. These include personal stories, challenges faced, and successful strategies implemented by them, helping engineering leaders gain practical insights and real-world examples that can inspire and inform their approach to leadership and team management.
  • Podcasts: 
    • groCTO Originals is a weekly podcast for current and aspiring tech leaders aiming to transform their approach by learning from seasoned industry experts and successful engineering leaders across the globe.
    • ‘The DORA Lab’ by groCTO is an exclusive podcast that’s all about DORA and other engineering metrics. In each episode, expert leaders from the tech world bring their extensive knowledge of the challenges, inspirations, and practical uses of DORA metrics and beyond.
  • Bytes: groCTO Bytes is a weekly Sunday dose of curated wisdom delivered straight to your inbox in the form of a newsletter. Our goal is to keep tech leaders, CTOs, and VPEs up to date on the latest trends and best practices in engineering leadership, tech management, system design, and more.

Are you a tech coach looking to make an impact?

Looking Ahead: Building a Dynamic Community

At groCTO, we are committed to making this community bigger and better. We want current and aspiring engineering leaders to invest in their growth as well as contribute to pushing the boundaries of what engineering teams can achieve.

We’re just getting started. A few of our future plans for groCTO include:

  • Virtual Events: We plan to conduct interactive webinars and workshops that give engineering leaders and CTOs deeper dives into specific topics as well as networking opportunities.
  • Slack Channels: We plan to create Slack channels to allow emerging tech leaders to engage in vibrant discussions and get real-time support tailored to various aspects of engineering leadership.

We envision a community that thrives on continuous engagement and growth. By scaling our resources and expanding our initiatives, we want to ensure that every member of groCTO finds the support and knowledge they need to excel.

Get in Touch with us! 

At Typo, our vision is clear: to ship reliable software faster and build high-performing engineering teams. With groCTO, we are making significant progress toward this goal by empowering engineering leaders with the tools and support they need to excel. 

Join us in this exciting new chapter and be a part of a community that empowers tech leaders to excel and innovate. 

We’d love to hear from you! For more information about groCTO and how to get involved, write to us at hello@grocto.dev

Why do Companies Choose Typo?

Dev teams hold great importance in the engineering organization. They are essential for building high-quality software products, fostering innovation, and driving the success of technology companies in today’s competitive market.

However, engineering leaders need to understand the bottlenecks holding their teams back, since these blind spots can directly affect projects. This is where software development analytics tools come to the rescue, and such tools are all the more valuable when they offer the features and integrations engineering leaders are usually looking for.

Typo is an intelligent engineering platform used for gaining visibility, removing blockers, and maximizing developer effectiveness. Let’s look at why engineering leaders choose Typo as a key tool:

You get Customized DORA and other Engineering Metrics

Engineering metrics are measurements of engineering outputs and processes. However, there is no single pre-defined set of metrics that software development teams must track to ensure success; the right set depends on various factors, including team size, the background of the team members, and so on.

Typo’s customized DORA key metrics (Deployment Frequency, Change Failure Rate, Lead Time, and Mean Time to Recover) and other engineering metrics can be configured in a single dashboard based on your specific development processes. This helps benchmark the dev team’s performance and identify real-time bottlenecks, sprint delays, and blocked PRs. With a user-friendly interface and tailored integrations, engineering leaders can get all the relevant data within minutes and drive continuous improvement.
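
To make the four key metrics concrete, here is a minimal, illustrative sketch of how they could be computed from deployment and incident records. The data structures and field names are assumptions for the example, not Typo’s actual data model or API.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class Deployment:
    committed_at: datetime   # first commit in the change
    deployed_at: datetime    # when it reached production
    failed: bool             # did it cause a failure in production?

@dataclass
class Incident:
    opened_at: datetime
    resolved_at: datetime

def _hours(delta) -> float:
    return delta.total_seconds() / 3600

def dora_metrics(deploys: list[Deployment], incidents: list[Incident], window_days: int) -> dict:
    """Compute the four DORA key metrics over a reporting window."""
    return {
        "deployment_frequency_per_day": len(deploys) / window_days,
        "change_failure_rate": sum(d.failed for d in deploys) / len(deploys),
        "lead_time_for_changes_hours": mean(_hours(d.deployed_at - d.committed_at) for d in deploys),
        "mean_time_to_recover_hours": mean(_hours(i.resolved_at - i.opened_at) for i in incidents),
    }

# Example with two deployments and one incident over a one-week window:
deploys = [
    Deployment(datetime(2024, 6, 1, 9), datetime(2024, 6, 1, 15), failed=False),
    Deployment(datetime(2024, 6, 3, 10), datetime(2024, 6, 4, 10), failed=True),
]
incidents = [Incident(datetime(2024, 6, 4, 10), datetime(2024, 6, 4, 13))]
print(dora_metrics(deploys, incidents, window_days=7))
```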

Typo has an In-Built Automated Code Review Feature

Code review is all about improving code quality. It improves software teams’ productivity and streamlines the development process. However, when done manually, the code review process can be time-consuming and take a lot of effort.

Typo’s automated code review tool auto-analyzes the codebase and pull requests to find issues and auto-generates fixes before they merge to master. It understands the context of your code and quickly finds and fixes issues accurately, making pull requests easy and stress-free. It standardizes your code, reducing the risk of a software security breach and boosting maintainability, while also providing insights into code coverage and code complexity for thorough analysis.

You can Track the Team’s Progress with the Advanced Sprint Analysis Tool

While a burndown chart helps visually monitor teams’ work progress, it is time-consuming and doesn’t provide insights about the specific types of issues or tasks. Hence, it is always advisable to complement it with sprint analysis tools to provide additional insights tailored to agile project management.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. It uses data from Git and the issue management tool to show how much work has been completed, how much is still in progress, and how much time is left in the sprint. This helps in identifying potential problems early, spotting areas where teams can be more efficient, and meeting deadlines.
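
As a rough illustration of the kind of summary such a feature produces, the sketch below aggregates issue statuses and the time remaining in a sprint. The status names and fields are assumptions for the example, not Typo’s actual integration.

```python
from datetime import date

def sprint_snapshot(issues: list[dict], sprint_end: date, today: date) -> dict:
    """Summarise sprint progress from issue-tracker data.

    `issues` is a list of dicts with a 'status' field; the status names
    used here ('Done', 'In Progress') are illustrative assumptions.
    """
    done = sum(1 for i in issues if i["status"] == "Done")
    in_progress = sum(1 for i in issues if i["status"] == "In Progress")
    return {
        "done": done,
        "in_progress": in_progress,
        "to_do": len(issues) - done - in_progress,
        "percent_complete": round(100 * done / len(issues), 1) if issues else 0.0,
        "days_remaining": max((sprint_end - today).days, 0),
    }

issues = [
    {"status": "Done"},
    {"status": "Done"},
    {"status": "In Progress"},
    {"status": "To Do"},
]
print(sprint_snapshot(issues, sprint_end=date(2024, 7, 19), today=date(2024, 7, 15)))
# {'done': 2, 'in_progress': 1, 'to_do': 1, 'percent_complete': 50.0, 'days_remaining': 4}
```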

The Metrics Dashboard Focuses on Team-Level Improvement, Not on Micromanaging Individual Developers

When engineering metrics focus on individual success rather than team performance, they create a sense of surveillance rather than support. This leads to decreased motivation, productivity, and trust among development teams. Hence, there are better ways to use engineering metrics.

Typo has a metrics dashboard that focuses on the team’s health and performance. It lets engineering leaders compare the team’s results against healthy industry benchmarks and drive impactful initiatives for the team. Since it considers only team-level goals, it encourages team members to work together and solve problems together, fostering a healthier and more productive work environment conducive to innovation and growth.

Typo Takes into Consideration the Human Side of Engineering

Measuring developer experience requires not only quantitative metrics but also qualitative feedback. By prioritizing the human side of developer productivity, engineering managers can create a more inclusive and supportive environment for their teams.

Typo helps in getting a 360-degree view of the developer experience as it captures qualitative insights and provides an in-depth view of the real issues that need attention. With signals from work patterns and continuous AI-driven pulse check-ins on the experience of developers in the team, Typo helps with early indicators of their well-being and actionable insights on the areas that need your attention. It also tracks the work habits of developers across multiple activities, such as Commits, PRs, Reviews, Comments, Tasks, and Merges, over a certain period. If these patterns consistently exceed the average of other developers or violate predefined benchmarks, the system identifies those developers as being in the Burnout zone or at risk of burnout.
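
In spirit, such a signal can be thought of as a consistency check against team norms. The sketch below is a simplified, hypothetical heuristic (the 1.5x ratio and three-week streak are illustrative thresholds, not Typo’s actual model) that flags developers whose weekly activity stays well above the team average for several consecutive weeks.

```python
from statistics import mean

def burnout_risk(activity_by_dev: dict[str, list[int]],
                 ratio_threshold: float = 1.5,
                 weeks_required: int = 3) -> list[str]:
    """Flag developers whose weekly activity (e.g. commits + PRs + reviews)
    exceeds the team average by `ratio_threshold` for at least
    `weeks_required` consecutive weeks. Illustrative heuristic only."""
    num_weeks = len(next(iter(activity_by_dev.values())))
    team_avg = [mean(weeks[w] for weeks in activity_by_dev.values())
                for w in range(num_weeks)]
    at_risk = []
    for dev, weeks in activity_by_dev.items():
        streak = 0
        for w, count in enumerate(weeks):
            streak = streak + 1 if count > ratio_threshold * team_avg[w] else 0
            if streak >= weeks_required:
                at_risk.append(dev)
                break
    return at_risk

# Hypothetical weekly activity counts per developer:
activity = {
    "alice": [40, 42, 45, 50],   # consistently far above the team average
    "bob":   [20, 18, 22, 19],
    "carol": [21, 23, 20, 22],
}
print(burnout_risk(activity))  # ['alice']
```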

You can Integrate a Wide Range of Tools from Your Dev Stack

The more tools that can be integrated with the platform, the better it is for software developers. Integrations streamline the development process, enforce standardization and consistency, and provide access to valuable resources and functionality.

Typo lets you see the complete picture of your engineering health by seamlessly connecting to your tech tool stack. This includes:

  • Git versioning tools that use the Git version control system
  • Issue tracker tools for managing tasks, bug tracking, and other project-related issues
  • CI/CD tools to automate and streamline the software development process
  • Communication tools to facilitate the exchange of ideas and information
  • Incident management tools to resolve unexpected events or failures

Conclusion

Typo is a software delivery tool that can help ship reliable software faster. You can find real-time bottlenecks in your SDLC, automate code reviews, and measure developer experience – all in a single platform.

Typo Ranked as a Leader in G2 Summer 2023 Reports

The G2 Summer 2023 report is out!

We are delighted to share that Typo ranks as a leader in the Software Development analytics tool category. A big thank you to all our customers who supported us in this journey and took the time to write reviews about their experience. It really got us motivated to keep moving forward and bring the best to the table in the coming weeks.

Typo Taking the Lead

Typo is placed among the leaders in Software Development Analytics. Besides this, we earned the ‘Users Love Us’ badge as well.

Our wall of fame shines bright with –

  • Leader in the overall Grid® Report for Software Development Analytics Tools category
  • Leader in the Mid Market Grid® Report for Software Development Analytics Tools category
  • Rated #1 for Likelihood to Recommend
  • Rated #1 for Quality of Support
  • Rated #1 for Meets Requirements
  • Rated #1 for Ease of Use
  • Rated #1 for Analytics and Trends

Typo has been ranked a Leader in the Grid Report for Software Development Analytics Tool | Summer 2023. This is a testament to our continuous efforts toward building a product that engineering teams love to use.

The ratings also include –

  • 97% of the reviewers have rated Typo high in analyzing historical data to highlight trends, statistics & KPIs
  • 100% of the reviewers have rated us high in Productivity Updates

We, as a team, achieved the feat of attaining these scores:

Typo user ratings

Here’s What our Customers Say about Typo

Check out what other users have to say about Typo here.

What Makes Typo Different?

Typo is an intelligent AI-driven Engineering Management platform that enables modern software teams with visibility, insights & tools to code better, deploy faster & stay aligned with business goals.

Having launched on Product Hunt, we started as a team of 15 engineers working with sheer hard work and dedication, and have since impacted 5,000+ developers and engineering leaders globally, 400,000+ PRs & 1.5M+ commits.

We are NOT just a software delivery analytics platform. We go beyond SDLC metrics to build an ecosystem that combines intelligent insights, impactful actions & automated workflows – helping managers lead better & developers perform better.

As the first step, Typo gives core insights into dev velocity, quality & throughput that have helped engineering leaders reduce their PR cycle time by almost 57% and deliver projects 2X faster.

PR cycle time

Continuous Improvement with Typo

Typo empowers continuous improvement for developers & managers through goal setting & visibility provided to developers themselves.

Leaders can set goals to enforce best practices such as keeping PR sizes in check, avoiding merging PRs without review, identifying high-risk work & others. Typo nudges the key stakeholders on Slack as soon as a goal is breached, and it also automates workflows on Slack to help developers ship PRs and complete code reviews faster.

Continuous Improvement with Typo
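
As a rough idea of what a goal-breach nudge involves, the sketch below posts a message to a Slack incoming webhook when a pull request exceeds a size budget. The webhook URL, the 400-line budget, and the message text are assumptions for illustration, not Typo’s actual integration.

```python
import requests  # third-party package: pip install requests

# Illustrative only: the webhook URL and the 400-line budget are
# placeholders, not Typo's actual configuration.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"
MAX_PR_LINES = 400

def nudge_if_oversized(pr_title: str, lines_changed: int) -> None:
    """Post a Slack message when a PR exceeds the agreed size budget."""
    if lines_changed <= MAX_PR_LINES:
        return
    text = (f":warning: PR '{pr_title}' touches {lines_changed} lines "
            f"(goal: {MAX_PR_LINES} or fewer). Consider splitting it.")
    # Slack incoming webhooks accept a simple JSON payload with a 'text' field.
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)

nudge_if_oversized("Add billing module", 950)
```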

Developer’s View

Typo provides core insights to your developers that are 100% confidential to them. It helps developers identify their strengths and the core areas of improvement that have impacted software delivery, and it helps them gain visibility & measure the impact of their work on team efficiency & goals.

Developer’s view

Developer’s Well-Being

We believe that all three aspects – work, collaboration & well-being – need to fall into place to help an individual deliver their best. Inspired by the SPACE framework for developer productivity, we support Pulse Check-Ins, Developer Experience insights, Burnout predictions & Engineering surveys to paint a complete picture.

Developer’s well-being

10X your Dev Teams’ Efficiency with Typo

It’s all of your immense love and support that made us a leader in such a short period. We are grateful to you!

But this is just the beginning. Our aim has always been to level up your dev game, and we will be rolling out exciting new releases in the next few weeks.

Interested in using Typo? Sign up for FREE today and get insights in 5 min.
