Priyasha Dureja

Technical Content Manager

AI-Driven SDLC: The Future of Software Development

Leveraging AI-driven tools for the Software Development Life Cycle (SDLC) has reshaped how software is planned, developed, tested, and deployed. By automating repetitive tasks, analyzing vast datasets, and predicting future trends, AI enhances efficiency, accuracy, and decision-making across all SDLC phases.

Let's explore the impact of AI on SDLC and highlight must-have AI tools for streamlining software development workflows.

How AI Transforms the SDLC

The SDLC comprises seven phases, each with specific objectives and deliverables that ensure the efficient development and deployment of high-quality software. Here is an overview of how AI influences each stage of the SDLC:

Requirement Analysis and Gathering

This is the first phase of the SDLC, and it directly shapes every step that follows. In this phase, developers gather and analyze the requirements of the software project.

How AI Impacts Requirement Analysis and Gathering

  • AI-driven tools help with quality checks, data collection, and requirement analysis tasks such as requirement classification, modeling, and traceability.
  • They analyze historical data to predict future trends, resource needs, and potential risks, helping optimize planning and resource allocation.
  • AI tools detect patterns in new data and forecast upcoming trends for specific periods, supporting data-driven decisions.

Planning

This stage comprises comprehensive project planning and preparation before starting the next step. This involves defining project scope, setting objectives, allocating resources, understanding business requirements and creating a roadmap for the development process.

How AI Impacts Planning

  • AI tools analyze historical data, market trajectories, and technological advancements to anticipate future needs and shape forward-looking roadmaps.
  • These tools examine past trends, team performance, and resource requirements to allocate resources optimally across project phases.
  • They also help in facilitating communication among stakeholders by automating meeting scheduling, summarizing discussions, and generating actionable insights.

Design and Prototype

The third step of the SDLC is generating a software prototype or concept aligned with the chosen software architecture or development pattern. This involves creating a detailed blueprint of the software based on the requirements, outlining its components and how it will be built.

How AI Impacts Design and Prototype

  • AI-powered tools use natural language processing (NLP) to turn plain-language descriptions into UI mockups, wireframes, and even design documents.
  • They also suggest optimal design patterns based on project requirements and assist in creating more scalable software architecture.
  • AI tools can simulate different scenarios, enabling developers to visualize the impact of their choices and select the optimal design.

Microservices Architecture and AI-Driven SDLC

The adoption of microservices architecture has transformed how modern applications are designed and built. When combined with AI-driven development approaches, microservices offer unprecedented flexibility, scalability, and resilience.

How AI Impacts Microservices Implementation

  • Service Boundary Optimization: AI analyzes domain models and data flow patterns to recommend optimal service boundaries, ensuring high cohesion and low coupling between microservices.

  • API Design Assistance: Machine learning models examine existing APIs and suggest design improvements, consistency patterns, and potential breaking changes before they affect consumers.

  • Service Mesh Intelligence: AI-enhanced service meshes like Istio can dynamically adjust routing rules, implement circuit breaking, and optimize load balancing based on real-time traffic patterns and service health metrics.

  • Automated Canary Analysis: AI systems evaluate the performance of new service versions against baseline metrics, automatically controlling the traffic distribution during deployments to minimize risk.
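
To make the canary analysis idea above concrete, here is a minimal sketch, assuming the canary and the stable baseline expose comparable error-rate and latency samples; the metric names, thresholds, and promotion step are illustrative assumptions, not any specific platform's implementation.

```python
from statistics import mean

def canary_healthy(baseline, canary, max_error_delta=0.01, max_latency_ratio=1.2):
    """Compare a canary's error rate and latency against the stable baseline."""
    error_delta = mean(canary["error_rates"]) - mean(baseline["error_rates"])
    latency_ratio = mean(canary["latencies_ms"]) / mean(baseline["latencies_ms"])
    return error_delta <= max_error_delta and latency_ratio <= max_latency_ratio

# Samples collected over the same observation window (illustrative numbers).
baseline = {"error_rates": [0.002, 0.003, 0.002], "latencies_ms": [120, 118, 125]}
canary = {"error_rates": [0.004, 0.003, 0.005], "latencies_ms": [130, 127, 133]}

traffic_share = 0.05
if canary_healthy(baseline, canary):
    traffic_share = min(1.0, traffic_share + 0.05)  # promote the canary gradually
else:
    traffic_share = 0.0  # roll back: route all traffic to the stable version
print(f"Canary traffic share: {traffic_share:.0%}")
```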

Development

The development stage aims to produce software that is efficient, functional, and user-friendly. In this stage, the design is transformed into a working application: actual coding takes place based on the design specifications.

How AI Impacts Development

  • AI-driven coding assistants swiftly write and explain code and generate documentation and code snippets, speeding up time-consuming, resource-intensive tasks.
  • These tools also act as a virtual partner by facilitating pair programming and offering insights and solutions to complex coding problems.
  • They enforce best practices and coding standards by automatically analyzing code to identify violations and detect issues like code duplication and potential security vulnerabilities.
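
As a toy illustration of the duplication-detection point above, the sketch below hashes windows of whitespace-normalized lines and flags repeats; the window size and normalization are assumptions, and real AI-assisted analyzers work on syntax trees and learned code representations rather than raw lines.

```python
import hashlib
from collections import defaultdict

def find_duplicate_blocks(source: str, window: int = 4):
    """Flag groups of `window` consecutive, normalized lines that appear more than once."""
    lines = [l.strip() for l in source.splitlines() if l.strip()]
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        digest = hashlib.sha1("\n".join(lines[i:i + window]).encode()).hexdigest()
        seen[digest].append(i + 1)
    return [positions for positions in seen.values() if len(positions) > 1]

code = """
total = 0
for item in items:
    total += item.price
print(total)
# ... elsewhere in the file ...
total = 0
for item in items:
    total += item.price
print(total)
"""
# [[1, 6]] -> the same 4-line block appears twice (positions in the normalized line list)
print(find_duplicate_blocks(code))
```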

Testing

Once project development is done, the entire coding structure is thoroughly examined and optimized. It ensures flawless software operations before it reaches end-users and identifies opportunities for enhancement.

How AI Impacts Testing

  • Machine learning algorithms analyze past test results to identify patterns and predict areas of the code that are likely to fail.
  • They explore software requirements, user stories, and historical data to automatically generate test cases that ensure comprehensive coverage of functional and non-functional aspects of the application.
  • AI and ML automate visual testing by comparing the user interface (UI) across various platforms and devices to enable consistency in design and functionality.

Deployment

The deployment phase involves releasing the tested and optimized software to end-users. This stage serves as a gateway to post-deployment activities like maintenance and updates.

How AI Impacts Deployment

  • These tools streamline the deployment process by automating routine tasks, optimizing resource allocation, collecting user feedback, and addressing issues as they arise.
  • AI-driven CI/CD pipelines monitor the deployment environment, predict potential issues, and automatically roll back changes if necessary.
  • They also analyze deployment data to predict and mitigate potential issues for the smooth transition from development to production.

DevOps Integration in AI-Driven SDLC

The integration of DevOps principles with AI-driven SDLC creates a powerful synergy that enhances collaboration between development and operations teams while automating crucial processes. DevOps practices ensure continuous integration, delivery, and deployment, which complements the AI capabilities throughout the SDLC.

How AI Enhances DevOps Integration

  • Infrastructure as Code (IaC) Optimization: AI algorithms analyze infrastructure configurations to suggest optimizations, identify potential security vulnerabilities, and ensure compliance with organizational standards. Tools like HashiCorp's Terraform with AI plugins can predict resource requirements based on application behavior patterns.

  • Automated Environment Synchronization: AI-powered tools detect discrepancies between development, staging, and production environments, reducing the "it works on my machine" syndrome. This capability ensures consistent behavior across all deployment stages.

  • Anomaly Detection in CI/CD Pipelines: Machine learning models identify abnormal patterns in build and deployment processes, flagging potential issues before they impact production. These systems learn from historical pipeline executions to establish baselines for normal operation.

  • Self-Healing Infrastructure: AI systems monitor application health metrics and can automatically initiate remediation actions when predefined thresholds are breached, reducing mean time to recovery (MTTR) significantly.
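
To ground the self-healing idea above, here is a minimal sketch, assuming a health metric with a historical baseline and a hypothetical restart_service() remediation hook; a production system would use richer anomaly models and call an orchestrator API rather than printing.

```python
from statistics import mean, stdev

def breaches_baseline(history, latest, z_threshold=3.0):
    """Return True when the latest reading deviates strongly from the historical baseline."""
    baseline_mean, baseline_std = mean(history), stdev(history)
    if baseline_std == 0:
        return latest != baseline_mean
    return abs((latest - baseline_mean) / baseline_std) > z_threshold

def restart_service(name):
    # Placeholder remediation action; a real system would call an orchestrator API here.
    print(f"Remediation triggered: restarting {name}")

error_rate_history = [0.4, 0.6, 0.5, 0.7, 0.5, 0.6]  # percent of failed requests in past intervals
latest_error_rate = 4.8

if breaches_baseline(error_rate_history, latest_error_rate):
    restart_service("checkout-service")  # self-healing step; no human is paged first, so MTTR drops
```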

Maintenance

This is the final and ongoing phase of the software development life cycle. Maintenance ensures that the software continues to function effectively and evolves according to user needs and technical advancements over time.

How AI Impacts Maintenance

  • AI analyzes performance metrics and logs to identify potential bottlenecks and suggest targeted fixes.
  • AI-powered chatbots and virtual assistants handle user queries, generate self-service documentation and escalate complex issues to the concerned team.
  • These tools also handle routine system updates, security patching, and database maintenance, improving accuracy and reducing the need for human intervention.

Observability and AIOps

Traditional monitoring approaches are insufficient for today's complex distributed systems. AI-driven observability platforms provide deeper insights into system behavior, enabling teams to understand not just what's happening, but why.

How AI Enhances Observability

  • Distributed Tracing Intelligence: AI analyzes trace data across microservices to identify performance bottlenecks and optimize service dependencies automatically.

  • Predictive Alert Correlation: Machine learning algorithms correlate seemingly unrelated alerts across different systems, identifying root causes more quickly and reducing alert fatigue among operations teams.

  • Log Pattern Recognition: Natural language processing extracts actionable insights from unstructured log data, identifying unusual patterns that might indicate security breaches or impending system failures.

  • Service Level Objective (SLO) Optimization: AI systems continuously analyze system performance against defined SLOs, recommending adjustments to maintain reliability while optimizing resource utilization.
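
As a small illustration of the SLO bookkeeping such systems automate, the sketch below computes how much of an availability error budget remains; the 99.9% target and request counts are made-up example values.

```python
def error_budget_remaining(slo_target: float, total_requests: int, failed_requests: int) -> float:
    """Return the fraction of the error budget still unspent for the period.

    slo_target is the availability objective, e.g. 0.999 for "three nines".
    """
    allowed_failures = (1 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0
    return max(0.0, 1 - failed_requests / allowed_failures)

# Example: a service with a 99.9% availability SLO over one month of traffic.
remaining = error_budget_remaining(slo_target=0.999, total_requests=10_000_000, failed_requests=4_200)
print(f"Error budget remaining: {remaining:.0%}")  # 58% of the budget left
```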

Security and Compliance in AI-Driven SDLC

With increasing regulatory requirements and sophisticated cyber threats, integrating security and compliance throughout the SDLC is no longer optional. AI-driven approaches have transformed this traditionally manual area into a proactive and automated discipline.

How AI Transforms Security and Compliance

  • Shift-Left Security Testing: AI-powered static application security testing (SAST) and dynamic application security testing (DAST) tools identify vulnerabilities during development rather than after deployment. Tools like Snyk and SonarQube with AI capabilities detect security issues contextually within code review processes.

  • Regulatory Compliance Automation: Natural language processing models analyze regulatory requirements and automatically map them to code implementations, ensuring continuous compliance with standards like GDPR, HIPAA, or PCI-DSS.

  • Threat Modeling Assistance: AI systems analyze application architectures to identify potential threats, recommend mitigation strategies, and prioritize security concerns based on risk impact.

  • Runtime Application Self-Protection (RASP): AI-driven RASP solutions monitor application behavior in production, detecting and blocking exploitation attempts in real-time without human intervention.

Top Must-Have AI Tools for SDLC

Requirement Analysis and Gathering

  • ChatGPT/OpenAI: Generates user stories, asks clarifying questions, gathers requirements and functional specifications based on minimal input.
  • IBM Watson: Uses natural language processing (NLP) to analyze large volumes of unstructured data, such as customer feedback or stakeholder interviews.

Planning

  • Jira (AI Plugins): AI plugins such as BigPicture or Elements.ai add task automation, risk prediction, and scheduling optimization.
  • Microsoft Project AI: Microsoft integrates AI and machine learning features for forecasting timelines and costs and for optimizing resource allocation.

Design and Prototype

  • Figma: Integrates AI plugins like Uizard or Galileo AI for generating design prototypes from text descriptions or wireframes.
  • Lucidchart: Suggests design patterns, optimizes workflows, and automates the creation of diagrams like ERDs, flowcharts, and wireframes.

Microservices Architecture

  • Kong Konnect: AI-powered API gateway that optimizes routing and provides insights into API usage patterns.
  • MeshDynamics: Uses machine learning to optimize service mesh configurations and detect anomalies.

Development

  • GitHub Copilot: Suggests code snippets, functions, and even entire blocks of code based on the context of the project.
  • Tabnine: Supports multiple programming languages and learns from your codebase to provide accurate, context-aware suggestions.

Testing

  • Testim: Creates, executes, and maintains automated tests. It can self-heal tests by adapting to changes in the application's UI.
  • Applitools: Leverages AI for visual testing and detects visual regressions automatically.

Deployment

  • Harness: Automates deployment pipelines, monitors deployments, detects anomalies and rolls back deployments automatically if issues are detected.
  • Jenkins (AI Plugins): Automates CI/CD pipelines with predictive analytics for deployment risks.

DevOps Integration

  • GitLab AI: Provides insights into CI/CD pipelines, suggesting optimizations and identifying potential bottlenecks.
  • Dynatrace: Uses AI to provide full-stack observability and automate operational tasks.

Security and Compliance

  • Checkmarx: AI-driven application security testing that identifies vulnerabilities with context-aware coding suggestions.
  • Prisma Cloud: Provides AI-powered cloud security posture management across the application lifecycle.

Maintenance

  • Datadog: Uses AI to provide insights into application performance, infrastructure, and logs.
  • PagerDuty: Prioritizes alerts, automates responses, and predicts potential outages.

Observability and AIOps

  • New Relic One: Combines AI-powered observability with automatic anomaly detection and root cause analysis.
  • Splunk IT Service Intelligence: Uses machine learning to predict and prevent service degradations and outages.

How does Typo help in improving SDLC visibility?

Typo is an intelligent engineering management platform. It is used for gaining visibility, removing blockers, and maximizing developer effectiveness. Through SDLC metrics, you can ensure alignment with business goals and prevent developer burnout. It integrates with your tech stack, including Git, Slack, calendars, and CI/CD tools, to deliver real-time insights.

Typo Key Features:

  • Cycle time breakdown
  • Work log
  • Investment distribution
  • Goal setting for continuous improvement
  • Developer burnout alert
  • PR insights
  • Developer workflow automation

Future Trends in AI-Driven SDLC

As AI technologies continue to evolve, several emerging trends are set to further transform the software development lifecycle:

  • Generative AI for Complete Application Creation: Beyond code snippets, future AI systems will generate entire applications from high-level descriptions, with humans focusing on requirements and business logic rather than implementation details.

  • Autonomous Testing Evolution: AI will eventually create and maintain test suites independently, adjusting coverage based on code changes and user behavior without human intervention.

  • Digital Twins for SDLC: Creating virtual replicas of the entire development environment will enable simulations of changes before implementation, predicting impacts across the system landscape.

  • Cross-Functional AI Assistants: Future development environments will feature AI assistants that understand business requirements, technical constraints, and user needs simultaneously, bridging gaps between stakeholders.

  • Quantum Computing Integration: As quantum computing matures, it will enhance AI capabilities in the SDLC, enabling complex simulations and optimizations currently beyond classical computing capabilities.

Conclusion

AI-driven SDLC has revolutionized software development, helping businesses enhance productivity, reduce errors, and optimize resource allocation. These tools ensure that software is not only developed efficiently but also evolves in response to user needs and technological advancements.

As AI continues to evolve, it is crucial for organizations to embrace these changes to stay ahead of the curve in the ever-changing software landscape.

Top Swarmia Alternatives in 2025

In today's fast-paced software development landscape, optimizing engineering performance is crucial for staying competitive. Engineering leaders need a deep understanding of workflows, team velocity, and potential bottlenecks. Engineering intelligence platforms provide valuable insights into software development dynamics, helping teams make data-driven decisions. While Swarmia is a well-known player, it might not be the perfect fit for every team.

This article explores the top Swarmia alternatives, giving you the knowledge to choose the best platform for your organization's needs. We'll delve into features, benefits, and potential drawbacks to help you make an informed decision.

Understanding Swarmia's Strengths

Swarmia is an engineering intelligence platform designed to improve operational efficiency, developer productivity, and software delivery. It integrates with popular development tools and uses data analytics to provide actionable insights.

Key Functionalities:

  • Data Aggregation: Connects to repositories like GitHub, GitLab, and Bitbucket, along with issue trackers like Jira and Azure DevOps, to create a comprehensive view of engineering activities.
  • Workflow Optimization: Identifies inefficiencies in development cycles by analyzing task dependencies, code review bottlenecks, and other delays.
  • Performance Metrics & Visualization: Presents data through dashboards, offering insights into deployment frequency, cycle time, resource allocation, and other KPIs.
  • Actionable Insights: Helps engineering leaders make data-driven decisions to improve workflows and team collaboration.

Why Consider a Swarmia Alternative?

Despite its strengths, Swarmia might not be ideal for everyone. Here's why you might want to explore alternatives:

  • Limited Customization: May not adapt well to highly specialized or unique workflows.
  • Complex Onboarding: Can have a steep learning curve, hindering quick adoption.
  • Pricing: Can be expensive for smaller teams or organizations with budget constraints.
  • User Interface: Some users find the UI challenging to navigate.

Top 6 Swarmia Competitors: Features, Pros & Cons

Here are six leading alternatives to Swarmia, each with its own unique strengths:

1. Typo

Typo is a comprehensive engineering intelligence platform providing end-to-end visibility into the entire SDLC. It focuses on actionable insights through integration with CI/CD pipelines and issue tracking tools.

Key Features:

  • Unified DORA and engineering metrics dashboard.
  • AI-driven analytics for sprint reviews, pull requests, and development insights.
  • Industry benchmarks for engineering performance evaluation.
  • Automated sprint analytics for workflow optimization.

Pros:

  • Strong tracking of key engineering metrics.
  • AI-powered insights for data-driven decision-making.
  • Responsive user interface and good customer support.

Cons:

  • Limited customization options in existing workflows.
  • Potential for further feature expansion.

G2 Reviews Summary:

G2 reviews indicate decent user engagement with a strong emphasis on positive feedback, particularly regarding customer support.

2. Jellyfish

Jellyfish is an advanced analytics platform that aligns engineering efforts with broader business goals. It gives real-time visibility into development workflows and team productivity, focusing on connecting engineering work to business outcomes.

Key Features:

  • Resource allocation analytics for optimizing engineering investments.
  • Real-time tracking of team performance.
  • DevOps performance metrics for continuous delivery optimization.

Pros:

  • Granular data tracking capabilities.
  • Intuitive user interface.
  • Facilitates cross-team collaboration.

Cons:

  • Can be complex to implement and configure.
  • Limited customization options for tailored insights.

G2 Reviews Summary: 

G2 reviews highlight strong core features but also point to potential implementation challenges, particularly around configuration and customization.


3. LinearB

LinearB is a data-driven DevOps solution designed to improve software delivery efficiency and engineering team coordination. It focuses on data-driven insights, identifying bottlenecks, and optimizing workflows.

Key Features:

  • Workflow visualization for process optimization.
  • Risk assessment and early warning indicators.
  • Customizable dashboards for performance monitoring.

Pros:

  • Extensive data aggregation capabilities.
  • Enhanced collaboration tools.
  • Comprehensive engineering metrics and insights.

Cons:

  • Can have a complex setup and learning curve.
  • High data volume may require careful filtering.

G2 Reviews Summary: 

G2 reviews generally praise LinearB's core features, such as flow management and insightful analytics. However, some users have reported challenges with complexity and the learning curve.

4. Waydev

Waydev is an engineering analytics solution with a focus on Agile methodologies. It provides in-depth visibility into development velocity, resource allocation, and delivery efficiency.

Key Features:

  • Automated engineering performance insights.
  • Agile-based tracking of development velocity and bug resolution.
  • Budgeting reports for engineering investment analysis.

Pros:

  • Highly detailed metrics analysis.
  • Streamlined dashboard interface.
  • Effective tracking of Agile engineering practices.

Cons:

  • Steep learning curve for new users.

G2 Reviews Summary: 

G2 reviews for Waydev are limited, making it difficult to draw definitive conclusions about user satisfaction.

5. Sleuth

Sleuth is a deployment intelligence platform specializing in tracking and improving DORA metrics. It provides detailed insights into deployment frequency and engineering efficiency.

Key Features:

  • Automated deployment tracking and performance benchmarking.
  • Real-time performance evaluation against efficiency targets.
  • Lightweight and adaptable architecture.

Pros:

  • Intuitive data visualization.
  • Seamless integration with existing toolchains.

Cons:

  • Pricing may be restrictive for some organizations.

G2 Reviews Summary: 

G2 reviews for Sleuth are also limited, making it difficult to draw definitive conclusions about user satisfaction.

6. Pluralsight Flow (formerly GitPrime)

Pluralsight Flow provides a detailed overview of the development process, helping identify friction and bottlenecks. It aligns engineering efforts with strategic objectives by tracking DORA metrics, software development KPIs, and investment insights. It integrates with a variety of development tools, such as Azure DevOps and GitLab.

Key Features:

  • Offers insights into why trends occur and potential related issues.
  • Predicts value impact for project and process proposals.
  • Features DORA analytics and investment insights.
  • Provides centralized insights and data visualization.

Pros:

  • Strong core metrics tracking capabilities.
  • Process improvement features.
  • Data-driven insights generation.
  • Detailed metrics analysis tools.
  • Efficient work tracking system.

Cons:

  • Complex and challenging user interface.
  • Issues with metrics accuracy/reliability.
  • Steep learning curve for users.
  • Inefficiencies in tracking certain metrics.
  • Problems with tool integrations.


G2 Reviews Summary:

The review numbers show moderate engagement (6-12 mentions for pros, 3-4 for cons), placing it between Waydev's limited feedback and Jellyfish's extensive reviews. The feedback suggests strong core functionality but notable usability challenges.

The Power of Integration

Engineering management platforms become even more powerful when they integrate with your existing tools. Seamless integration with platforms like Jira, GitHub, CI/CD systems, and Slack offers several benefits:

  • Out-of-the-box compatibility: Minimizes setup time.
  • Automation: Automates tasks like status updates and alerts.
  • Customization: Adapts to specific team needs and workflows.
  • Centralized Data: Enhances collaboration and reduces context switching.

By leveraging these integrations, software teams can significantly boost productivity and focus on building high-quality products.

Key Considerations for Choosing an Alternative

When selecting a Swarmia alternative, keep these factors in mind:

  • Team Size and Budget: Look for solutions that fit your budget, considering freemium plans or tiered pricing.
  • Specific Needs: Identify your key requirements. Do you need advanced customization, DORA metrics tracking, or a focus on developer experience?
  • Ease of Use: Choose a platform with an intuitive interface to ensure smooth adoption.
  • Integrations: Ensure seamless integration with your current tool stack.
  • Customer Support: Evaluate the level of support offered by each vendor.

Conclusion

Choosing the right engineering analytics platform is a strategic decision. The alternatives discussed offer a range of capabilities, from workflow optimization and performance tracking to AI-powered insights. By carefully evaluating these solutions, engineering leaders can improve team efficiency, reduce bottlenecks, and drive better software development outcomes.

Top Software Development Life Cycle (SDLC) Methodologies

The Software Development Life Cycle (SDLC) methodologies provide a structured framework for guiding software development and maintenance.

Development teams need to select the right approach for their project based on its needs and requirements. We have curated the top 8 SDLC methodologies that you can consider. Choose the one that best aligns with your project. Let’s get started: 

8 Software Development Life Cycle Methodologies 

Waterfall Model 

The waterfall model is the oldest surviving SDLC methodology that follows a linear, sequential approach. In this approach, the development team completes each phase before moving on to the next. The five phases include Requirements, Design, Implementation, Verification, and Maintenance.

However, in today’s world, this model is not ideal for large and complex projects, as it does not allow teams to revisit previous phases. That said, the Waterfall Model serves as the foundation for all subsequent SDLC models, which were designed to address its limitations.

Iterative Model 

This software development approach embraces repetition. In other words, the Iterative model builds a system incrementally through repeated cycles. The development team revisits previous phases, allowing for modifications based on feedback and changing requirements. This approach builds software piece by piece while identifying additional needs as they go along. Each new phase produces a more refined version of the software.

In this model, only the major requirements are defined from the beginning. One well-known iterative model is the Rational Unified Process (RUP), developed by IBM, which aims to enhance team productivity across various project types.

Incremental Model

This methodology is similar to the iterative model but differs in its focus. In the incremental model, the product is developed and delivered in small, functional increments through multiple cycles. It prioritizes critical features first and then adds further functionality as requirements evolve throughout the project.

Simply put, the product is not held back until it is fully completed. Instead, it is released in stages, with each increment providing a usable version. This allows for easy incorporation of changes in later increments. However, this approach requires thorough planning and design and may require more resources and effort.

Agile Model 

The Agile model is a flexible and iterative approach to software development. Introduced in 2001, it combines the iterative and incremental models, aiming to increase collaboration, gather feedback, and deliver products rapidly. It is based on the “fail fast and early” philosophy, which emphasizes quick testing and learning from failures early to minimize risks, save resources, and drive rapid improvement.

The software product is divided into small incremental parts that pass through some or all of the SDLC phases. Each new version is tested, and feedback is gathered from stakeholders throughout the process. This allows teams to catch issues early, before they grow into major ones. A few of its sub-models include Extreme Programming (XP), Rapid Application Development (RAD), Scrum, and Kanban.

Spiral Model 

The Spiral model is a flexible SDLC approach in which the project cycles repeatedly through four phases: Planning, Risk Analysis, Engineering, and Evaluation, in a figurative spiral until completion. This methodology is widely used by leading software companies, as it emphasizes risk analysis, ensuring that each iteration focuses on identifying and mitigating potential risks.

This model also prioritizes customer feedback and incorporates prototypes throughout the development process. It is particularly suitable for large and complex projects with high-risk factors and a need for early user input. However, for smaller projects with minimal risks, this model may not be ideal due to its high cost.

Lean Model 

Derived from Lean Manufacturing principles, the Lean Model focuses on maximizing user value by minimizing waste and optimizing processes. It aligns well with the Agile methodology by eliminating multitasking and encouraging teams to prioritize essential tasks in the present moment.

The Lean Model is often associated with the concept of a Minimum Viable Product (MVP), a basic version of the product launched to gather user feedback, understand preferences, and iterate for improvements. Key tools and techniques supporting the Lean model include value stream mapping, Kanban boards, the 5S method, and Kaizen events.

V-Model 

An extension of the waterfall model, the V-model is also known as the verification and validation model. It is characterized by its V-shaped structure, which emphasizes a systematic and disciplined approach to software development. In this approach, the verification phase ensures that the product is being built correctly, while the validation phase ensures that the correct product is being built. These two phases are linked by the implementation (coding) phase.

This model is best suited for projects with clear and stable requirements and is particularly useful in industries where quality and reliability are critical. However, its inflexibility makes it less suitable for projects with evolving or uncertain requirements.

DevOps Model 

The DevOps model is a hybrid of Agile and Lean methodologies. It brings Dev and Ops teams together to improve collaboration and aims to automate processes, integrate CI/CD, and accelerate the delivery of high-quality software. It focuses on small but frequent updates, allowing continuous feedback and process improvements. This enables teams to learn from failures, iterate on processes, and encourage experimentation and innovation to enhance efficiency and quality.

DevOps is widely adopted in modern software development to support rapid innovation and scalability. However, it may introduce more security risks as it prioritizes speed over security.

How Does Typo Help in Improving SDLC Visibility?

Typo is an intelligent engineering management platform. It is used for gaining visibility, removing blockers, and maximizing developer effectiveness. Through SDLC metrics, you can ensure alignment with business goals and prevent developer burnout. It integrates with your tech stack, including Git, Slack, calendars, and CI/CD tools, to deliver real-time insights.

Typo Key Features:

  • Cycle time breakdown
  • Work log
  • Investment distribution
  • Goal setting for continuous improvement
  • Developer burnout alert
  • PR insights
  • Developer workflow automation

 

Conclusion 

Apart from the Software Development Life Cycle (SDLC) methodologies mentioned above, there are others you can take note of. Each methodology follows a different approach to creating high-quality software, depending on factors such as project goals, complexity, team dynamics, and flexibility.

Be sure to conduct your own research to determine the optimal approach for producing high-quality software that efficiently meets user needs.

FAQs

What is the Software Development Life Cycle (SDLC)?

The Software Development Life Cycle (SDLC) is a structured process that guides the development and maintenance of software applications.

What are the main phases of the SDLC?

The main phases of SDLC include:

  • Planning: Identifying project scope, objectives, and feasibility.
  • Requirement Analysis: Gathering and documenting user and business requirements.
  • Design: Creating system architecture, database structure, and UI/UX design.
  • Implementation (Coding): Writing and developing the actual software.
  • Testing: Identifying and fixing bugs to ensure software quality.
  • Deployment: Releasing the software for users.
  • Maintenance: Providing updates, fixing issues, and improving the system over time. 

What is the purpose of SDLC?

The purpose of SDLC is to provide a systematic approach to software development. This ensures that the final product meets user requirements, stays within budget, and is delivered on time. It helps teams manage risks, improve collaboration, and maintain software quality throughout its lifecycle.

Can SDLC be applied to all types of software projects?

Yes, SDLC can be applied to various software projects, including web applications, mobile apps, enterprise software, and embedded systems. However, the choice of SDLC methodology depends on factors like project complexity, team size, budget, and flexibility needs.

10 Best Developer Experience (DevEx) Tools in 2025

Developer Experience (DevEx) is essential for boosting productivity, collaboration, and overall efficiency in software development. The right DevEx tools streamline workflows, provide actionable insights, and enhance code quality.

We’ve explored the 10 best Developer Experience tools in 2025, highlighting their key features and limitations to help you choose the best fit for your team.

Key Features to Look For in DevEx Tools 

Integrated Development Environment (IDE) Plugins

The DevEx tool must contain IDE plugins that enhance coding environments with syntax highlighting, code completion, and error detection features. They must also allow integration with external tools directly from the IDE and support multiple programming languages for versatility. 

Collaboration Features

The tools must promote teamwork through seamless collaboration, such as shared workspaces, real-time editing capabilities, and in-context discussions. These features facilitate better communication among teams and improve project outcomes. 

Developer Insights and Analytics

The Developer Experience tool could also offer insights into developer performance through metrics such as deployment frequency and planning accuracy. This helps engineering leaders understand the developer experience holistically. 

Feedback Loops 

Developers need timely feedback to keep their workflow efficient. Ensure the tools and processes empower teams to exchange feedback through mechanisms such as real-time feedback, code quality analysis, or live updates that show the effect of changes immediately. 

Impact on Productivity

Evaluate how the tool affects workflow efficiency and developers’ productivity. Assess it based on whether it reduces time spent on repetitive tasks or facilitates easier collaboration. Analyzing these factors can help gauge the tool's potential impact on productivity. 

Top 10 Developer Experience Tools 

Typo 

Typo is an intelligent engineering management platform for gaining visibility, removing blockers, and maximizing developer effectiveness. It captures a 360-degree view of the developer experience and uncovers real issues. Through signals from work patterns and continuous AI-driven pulse check-ins, it provides early indicators of developer well-being and actionable insights on the areas that need attention. Typo also sends automated alerts to identify burnout signs in developers at an early stage. It seamlessly integrates with third-party applications such as Git, Slack, calendars, and CI/CD tools.

GetDX

GetDX is a comprehensive insights platform founded by researchers behind the DORA and SPACE frameworks. It offers both qualitative and quantitative measures to give a holistic view of the organization. GetDX breaks down results based on personas and streamlines developer onboarding with real-time insights. 

Key Features

  • Provides a suite of tools that capture data from surveys and systems in real time.
  • Contextualizes performance with 180,000+ industry benchmark samples.
  • Uses advanced statistical analysis to identify the top opportunities.

Limitations 

  • GetDX’s frequent updates and features can disrupt user experience and confuse teams. 
  • New managers often face a steep learning curve. 
  • Users managing multiple teams face difficulties configuring and managing team data. 

Jellyfish 

Jellyfish is a developer experience platform that combines developer-reported insights with system metrics. It captures qualitative and quantitative data to provide a complete picture of the development ecosystem and identify bottlenecks. Jellyfish can be seamlessly integrated with survey tools or use sentiment analysis to gather direct feedback from developers. 

Key Features

  • Enables continuous feedback loops and rapid response to developer needs.
  • Allows teams to track effort without time tracking. 
  • Tracks team health metrics such as code churn and pull request review times. 

Limitations

  • Problems integrating with popular tools like Jira and Okta complicate the initial setup process and affect the overall user experience.
  • Absence of an API restricts users from exporting metrics for further analysis in other systems. 
  • Overlooks important aspects of developer productivity by emphasizing throughput over qualitative metrics. 

LinearB

LinearB provides engineering teams with data-driven insights and automation capabilities.  This software delivery intelligence platform provides teams with full visibility and control over developer experience and productivity. LinearB also helps them focus on the most important aspects of coding to speed up project delivery. 

Key Features

  • Automates routine tasks and processes to reduce manual effort and cognitive load. 
  • Offers visibility into team workload and capacity. 
  • Helps maximize DevOps groups’ efficiency with various metrics.

Limitations 

  • Teams that do not use a Git-based workflow may find that many of the features are not applicable or useful to their processes.
  • Lacks comprehensive historical data or external benchmarks.
  • Needs to rely on separate tools for comprehensive project tracking and management. 

GitHub Copilot 

GitHub Copilot was developed by GitHub in collaboration with OpenAI. It uses the OpenAI Codex model to write code, test cases, and code comments quickly. It draws context from the code and suggests whole lines or complete functions that developers can accept, modify, or reject. GitHub Copilot can generate code in multiple languages, including TypeScript, JavaScript, and C++. 

Key Features

  • Creates predictive lines of code from comments and existing patterns in the code.
  • Seamlessly integrates with popular editors such as Neovim, JetBrains IDEs, and Visual Studio.
  • Creates dictionaries of lookup data. 

Limitations 

  • Struggles to fully grasp the context of complex coding tasks or specific project requirements.
  • Less experienced developers may become overly reliant on Copilot for coding tasks.
  • Can be costly for smaller teams. 

Postman 

Postman is a widely used API testing and automation tool. It provides a streamlined process for standardizing API testing and monitoring it for usage and trend insights. This tool provides a collaborative environment for designing APIs using specifications like OpenAPI and a robust testing framework for ensuring API functionality and reliability. 

 

Key Features

  • Enables users to mimic real-world scenarios and assess API behavior under various conditions.
  • Creates mock servers, and facilitates realistic simulations and comprehensive testing.
  • Auto-generates documentation to make APIs easily understandable and accessible.

Limitations 

  • The user interface is not beginner-friendly. 
  • Heavy reliance on Postman may create challenges when migrating workflows to other tools or platforms.
  • More suitable for manual testing rather than automated testing. 

Sourcegraph 

Sourcegraph is an AI-powered code assistant that provides code-specific information and helps locate precise code based on natural language descriptions, file names, or function names. 

It improves the developer experience by simplifying the development process in intricate enterprise environments. 

Key Features

  • Explains complex lines of code in simple language.
  • Identifies bugs and errors in a codebase and provides suggestions.
  • Offers documentation generation.

Limitations

  • Doesn’t support creating insights over specific branches or revisions.
  • Codebase size and project complexity may impact performance.
  • Certain features are available only when running insights over all repositories. 

Code Climate Velocity 

Code Climate Velocity is an engineering intelligence platform that provides leaders with customized solutions based on data-driven insights. Teams using Code Climate Velocity follow a three-step approach: a diagnostic workshop with Code Climate experts, a personalized dashboard with insight reports, and a customized action plan tailored to their business.

Key Features

  • Seamlessly integrates with developer tools such as Jira, GitLab, and Bitbucket. 
  • Supports long-term strategic planning and process improvement efforts.
  • Offers insights tailored for managers to help them understand team dynamics and individual contributions.

Limitations

  • Relies heavily on the quality and comprehensiveness of the data it analyzes.
  • Overlooks qualitative aspects of software development, such as team collaboration, creativity, and problem-solving skills.
  • Offers limited customization options.

Vercel 

Vercel is a cloud platform that gives frontend developers space to focus on coding and innovation. It simplifies the entire lifecycle of web applications by automating the entire deployment pipeline. Vercel has collaborative features such as preview environments to help iterate quickly while maintaining high code quality. 

Key Features

  • Applications can be deployed directly from their Git repositories. 
  • Includes pre-built templates to jumpstart the app development process.
  • Allows developers to create APIs without managing traditional backend infrastructure.

Limitations

  • Projects hosted on Vercel may rely on various third-party services for functionality which can impact the performance and reliability of applications. 
  • Limited features available with the free version. 
  • Lacks robust documentation and support resources.

Qovery 

Qovery is a cloud deployment platform that simplifies the deployment and management of applications. 

It automates essential tasks such as server setup, scaling, and configuration management, allowing developers to prioritize faster time to market instead of handling infrastructure.

Key Features

  • Supports the creation of ephemeral environments for testing and development. 
  • Scales applications automatically on demand.
  • Includes built-in security measures such as multi-factor authentication and fine-grained access controls. 

Limitations

  • Occasionally experiences minor bugs.
  • Can be overwhelming for those new to cloud and DevOps.
  • Deployment times may be slow.

Conclusion 

We’ve curated the best Developer Experience tools for you in 2025. Feel free to explore other options as well. Make sure to do your own research and choose what fits best for you.

All the best!

How to Measure Change Failure Rate?

Smooth and reliable deployments are key to maintaining user satisfaction and business continuity. This is where DORA metrics play a crucial role. 

Among these metrics, the Change Failure Rate provides valuable insights into how frequently deployments lead to failures. Hence, helping teams minimize disruptions in production environments.

Let’s read about CFR further! 

What are DORA Metrics? 

In 2015, Gene Kim, Jez Humble, and Nicole Forsgren founded the DORA (DevOps Research and Assessment) team to evaluate and improve software development practices. The aim is to improve the understanding of how organizations can deliver faster, more reliable, and higher-quality software.

DORA metrics help in assessing software delivery performance based on four key (or accelerate) metrics:

  • Deployment Frequency
  • Lead Time for Changes
  • Change Failure Rate
  • Mean Time to Recover

While these metrics provide valuable insights into a team's performance, understanding CFR is crucial. It measures the effectiveness of software changes and their impact on production environments.

Overview of Change Failure Rate

The Change Failure Rate (CFR) measures how often new deployments cause failures, glitches, or unexpected issues in the IT environment. It reflects the stability and reliability of the entire software development and deployment lifecycle.

It is important to measure the Change Failure Rate for various reasons:

  • A lower change failure rate enhances user experience and builds trust by reducing failures. 
  • It protects your business from financial risks, revenue loss, customer churn, and brand damage. 
  • Lower change failures help to allocate resources effectively and focus on delivering new features.

How to Calculate Change Failure Rate? 

Change Failure Rate calculation is done by following these steps:

  1. Identify Failed Changes: Keep track of the number of changes that resulted in failures during a specific timeframe.
  2. Determine Total Changes Implemented: Count the total changes or deployments made during the same period.

Apply the formula:

CFR = (Number of Failed Changes / Total Number of Changes) * 100 to calculate the Change Failure Rate as a percentage.

For example, suppose that during a month:

Failed Changes = 2

Total Changes = 30

Using the formula: (2 / 30) * 100 ≈ 6.67

Therefore, the Change Failure Rate for that period is 6.67%.
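
The same calculation is easy to automate; here is a minimal sketch in Python, mirroring the example above:

```python
def change_failure_rate(failed_changes: int, total_changes: int) -> float:
    """Return the Change Failure Rate as a percentage."""
    if total_changes == 0:
        return 0.0
    return (failed_changes / total_changes) * 100

print(f"{change_failure_rate(2, 30):.2f}%")  # 6.67%
```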

What is a Good Failure Rate? 

An ideal failure rate is between 0% and 15%. This is the benchmark and standard that the engineering teams need to maintain. Low CFR equals stable, reliable, and well-tested software. 

When the Change Failure Rate is above 15%, it reflects significant issues with code quality, testing, or deployment processes. This leads to increased system downtime, slower deployment cycles, and a negative impact on user experience. 

Hence, it is always advisable to keep CFR as low as possible. 

How to Correctly Measure Change Failure Rate?

Follow the right steps to measure the Change Failure Rate effectively. Here’s how you can do it:

Define ‘Failure’ Criteria

Clearly define what constitutes a ‘Change’ and a ‘Failure,’ such as service disruptions, bugs, or system crashes. Having clear metrics ensures the team is aligned and consistently collecting data.

Accurately Capture and Label Your Data

First, define the scope of changes to be included in the CFR calculation, along with the details used to decide whether a change succeeded or failed. Use a change management system to track or log changes in a database. Tools like Jira, Git, or CI/CD pipelines can help automate and review data collection. 

Measure Change Failure, Not Deployment Failure 

Understand the difference between Change Failure and Deployment Failure. 

Deployment Failure: Failures that occur during the process of deploying code or changes to a production environment.

Change Failure: Failures that occur after the deployment when the changes themselves cause issues in the production environment.

This ensures that the team focuses on improving processes rather than troubleshooting unrelated issues. 

Analyze Trends Over Time 

Don’t analyze failures only once. Analyze trends continuously over different time periods, such as weekly, monthly, and quarterly. The trends and patterns help reveal recurring issues, prioritize areas for improvement, and inform strategic decisions. This allows teams to adapt and improve continuously. 

Understand the Limitations of DORA Metrics

DORA Metrics provide valuable insights into software development performance and identify high-level trends. However, they fail to capture the nuances such as the complexity of changes or severity of failures. Use them alongside other metrics for a holistic view. Also, ensure that these metrics are used to drive meaningful improvements rather than just for reporting purposes. 

Consider Contextual Factors

Various factors, including team experience, project complexity, and organizational culture, can influence the Change Failure Rate. These factors affect both how often failures occur and how effective mitigation strategies are. Considering them allows you to judge failure rates in a broader context rather than on numbers alone. 

Exclude External Incidents

Filter out the failures caused by external factors such as third-party service outages or hardware failure. This helps accurately measure CFR as external incidents can distort the true failure rate and mislead conclusions about your team’s performance. 

How to Reduce Change Failure Rate? 

Identify the root causes of failures and implement best practices in testing, deployment, and monitoring. Here are some effective strategies to minimize CFR: 

Automate Testing Practices

Implement an automated testing strategy during each phase of the development lifecycle. This repeatable, consistent practice helps catch issues early and often, improving code quality to a great extent. Also make test results easily accessible so the team can focus on the most critical areas. 

Deploy small changes frequently

Small deployments at more frequent intervals make testing and bug detection easier. They reduce the risk of failures when deploying code to production, because issues are caught early and addressed before they become significant problems. Moreover, frequent deployments provide quicker feedback to team members and engineering leaders. 

Adopt a CI/CD

Continuous Integration and Continuous Deployment (CI/CD) ensures that code is regularly merged, tested, and deployed automatically. This reduces the deployment complexity and manual errors and allows teams to detect and address issues early in the development process. Hence, ensuring that only high-quality code reaches production. 

Prioritize Code Quality 

Establishing a culture where quality is prioritized helps teams catch issues before they escalate into production failures. Adhering to best practices such as code reviews, coding standards, and refactoring continuously improves the quality of code. High-quality code is less prone to bugs and vulnerabilities and directly contributes to a lower CFR.  

Implement Real-Time Monitoring and Alerting

Real-time monitoring and alerting systems help teams detect issues early and resolve them quickly. This minimizes the impact of failures, improves overall system reliability, and provides immediate feedback on application performance and user experience. 

Cultivate a Learning Culture 

Creating a learning culture within the development team encourages continuous improvement and knowledge sharing. When teams are encouraged to learn from past mistakes and successes, they are better equipped to avoid repeating errors. This involves conducting post-incident reviews and sharing key insights. This approach also fosters collaboration, accountability, and continuous improvement. 

How Does Typo Help in Reducing CFR? 

Since the definition of failure is specific to each team, there are multiple ways this metric can be configured. Here are some guidelines on what can indicate a failure:

A deployment that needs a rollback or a hotfix

For such cases, any Pull Request having a title/tag/label that represents a rollback/hotfix that is merged to production can be considered a failure.

A high-priority production incident

For such cases, any ticket in your Issue Tracker having a title/tag/label that represents a high-priority production incident can be considered a failure.

A deployment that failed during the production workflow

For such cases, Typo can integrate with your CI/CD tool and consider any failed deployment as a failure. 

To calculate the final percentage, the total number of failures is divided by the total number of deployments (this can be picked either from the Deployment PRs or from the CI/CD tool deployments).
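
As a rough sketch of how the guidelines above could be applied to exported PR, ticket, and deployment records (not Typo's actual implementation), the label names and record fields below are assumptions:

```python
FAILURE_LABELS = {"rollback", "hotfix"}  # assumed label convention for failure-indicating PRs

def count_failures(pull_requests, tickets, deployments):
    """Count failures per the guidelines: rollback/hotfix PRs merged to production,
    high-priority production incident tickets, and failed CI/CD deployments."""
    failed_prs = sum(
        1 for pr in pull_requests
        if pr["merged_to_production"] and FAILURE_LABELS & set(pr["labels"])
    )
    incident_tickets = sum(1 for t in tickets if t["label"] == "production-incident")
    failed_deploys = sum(1 for d in deployments if d["status"] == "failed")
    return failed_prs + incident_tickets + failed_deploys

def cfr_percent(pull_requests, tickets, deployments):
    if not deployments:
        return 0.0
    return count_failures(pull_requests, tickets, deployments) / len(deployments) * 100

# Example month: 25 deployments, 2 of which failed in the pipeline, plus 1 hotfix PR.
deployments = [{"status": "success"}] * 23 + [{"status": "failed"}] * 2
pull_requests = [{"merged_to_production": True, "labels": ["hotfix"]}]
print(f"{cfr_percent(pull_requests, [], deployments):.1f}%")  # 12.0%
```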

Conclusion 

Measuring and reducing the Change Failure Rate is a strategic necessity. It enables engineering teams to deliver stable software, leading to happier customers and a stronger competitive advantage. With tools like Typo, organizations can easily track and address failures to ensure successful software deployments.

A Complete Guide to Burndown Charts

Imagine you are on a solo road trip with a set destination. You constantly check your map and fuel gauge to see whether you are on track. Now, replace the road trip with an agile project and the map with a burndown chart. 

Just like a map guides your journey, a burndown chart provides a clear picture of how much work has been completed and what remains. 

What is a Burndown Chart? 

Burndown charts are visual representations of the team’s progress used for agile project management. They are useful for scrum teams and agile project managers to assess whether the project is on track.

Burndown charts are generally of three types:

Product Burndown Chart

The product burndown chart focuses on the big picture and visualizes the entire project. It determines how many product goals the development team has achieved so far and the remaining work.

Sprint Burndown Chart

The sprint burndown chart focuses on the ongoing sprint and indicates progress towards completing the sprint backlog.

Epic Burndown Chart

This chart focuses on how your team performs against the work in the epic over time. It helps to track the advancement of major deliverables within a project.

When it comes to agile project management, a burndown chart is a fundamental tool, and understanding its key components is crucial. Let's break down what makes up a burndown chart and why each part is essential.

Core Elements of a Burndown Chart

Time Representation: The X-Axis

The horizontal axis, or X-axis, signifies the timeline for project completion. For projects following the scrum methodology, this axis often shows the series of sprints. Alternatively, it might detail the remaining days, allowing teams to track timelines against project milestones.

Effort Representation: The Y-Axis

The vertical axis, known as the Y-axis, measures the effort still needed to reach project completion. This is often quantified using story points, a method that helps estimate the work complexity and the labor involved in finishing user stories or tasks.

Real Progress Line

This line on the chart shows how much work remains after each sprint or day. It gives a tangible picture of team progress. Since every project encounters unexpected obstacles or shifts in scope, this line is usually irregular, contrasting with the straight trajectory of planned efforts.

Benchmark Progress Line

Also known as the ideal effort line, this is the hypothetical path of perfectly steady progress without setbacks. It generally runs in a straight line, descending from total projected work to zero. This line serves as a standard, assisting teams in assessing how their actual efforts measure up against expected outcomes.

Quantifying Effort: Story Points

Story points are a tool often used to put numbers to the effort needed for completing tasks or larger work units like epics. They are plotted on the Y-axis of the burndown chart, while the X-axis aligns with time, such as the number of ongoing sprints.

Sprint Objectives

A clear goal helps maintain focus during each sprint. On the burndown chart, this is represented by a specific target line. Even though actual progress might not always align with this objective, having it illustrated on the chart aids in driving the team towards its goals.

Incorporating these components into your burndown chart not only provides a visual representation of project progress but also serves as a guide for continual team alignment and focus.

How Does a Burndown Chart Work? 

A burndown chart shows the amount of work remaining (on the vertical axis) against time (on the horizontal axis). It includes an ideal work completion line and the actual work progress line. As tasks are completed, the actual line "burns down" toward zero. This allows teams to identify if they are on track to complete their goals within the set timeline and spot deviations early.

Understanding the Ideal Effort Line

The ideal effort line is your project's roadmap, beginning with the total estimated work at the start of a sprint and sloping downward to zero by the end. It acts as a benchmark to gauge your team's progress and ensure your plan stays on course.
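
As a rough formula (assuming a sprint of D days and a total of W story points or hours), the ideal remaining work at the end of day d can be written as W × (1 - d / D). For example, a 10-day sprint that starts with 50 story points would ideally show 45 points remaining after day 1, 25 after day 5, and 0 at the end of day 10.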

Tracking the Actual Effort Line

This line reflects your team's real-world progress by showing the remaining effort for tasks at the end of each day. Comparing it to the ideal line helps determine if you are ahead, on track, or falling behind, which is crucial for timely adjustments.

Spotting Deviations

Significant deviations between the actual and ideal lines can signal issues. If the actual line is above the ideal, delays are occurring. Conversely, if below, tasks are being completed ahead of schedule. Early detection of these deviations allows for prompt problem-solving and maintaining project momentum.

Recognizing Patterns and Trends

Look for trends in the actual effort line. A flat or slow decline might indicate bottlenecks or underestimated tasks, while a steep drop suggests increased productivity. Identifying these patterns can help refine your workflows and enhance team performance.

Evaluating the Projection Cone

Some burndown charts include a projection cone, predicting potential completion dates based on current performance. This cone, ranging from best-case to worst-case scenarios, helps assess project uncertainty and informs decisions on resource allocation and risk management.

By mastering these elements, you can effectively interpret burndown charts, ensuring your project management efforts lead to successful outcomes.

How to Track Daily Progress and Remaining Work in a Burndown Chart?

Burndown charts are invaluable tools for monitoring progress in project management. They provide a clear visualization of work completed versus the work remaining.

Steps to Effectively Track Progress:

  • Set Initial Estimates: Begin by estimating the total effort required for your project. This lays the groundwork for tracking actual progress.
  • Daily Updates: Use your burndown chart to record the time spent on tasks each day. This will help to visualize how work is being completed over time.
  • Pacing Toward Goals:
    • Monitor Completed Tasks: Each task should be logged with the time taken to complete it. This gives insight into your efficiency and assists in forecasting future task completion times.
    • Evaluate Daily Against Estimates: Compare your daily progress to your initial estimates. By the conclusion of a specified period, such as five days, you should check if your completed hours align with your predicted timeline (e.g., 80 hours).

Visual Tools:

  • Use a Chart or Timeline Tool: A burndown chart could be created using spreadsheet software like Excel or Google Sheets, or specialized tools such as Trello or Jira, which offer built-in features for this purpose.
  • Track Remaining Work: Your chart should show a descending line representing the decrease in work as tasks are completed. Ideally, it should slope downwards steadily towards zero, indicating that you're on track.

By adopting these methods, teams can efficiently track their progress, ensuring that they meet their objectives within the desired timeframe. Analyzing the slope of the burndown chart regularly helps in making proactive adjustments as needed.

Purpose of the Burndown Chart 

A burndown chart is a visual tool used by agile teams to track progress. Here is a breakdown of its key functions: 

Identify Issues Early 

Burndown charts allow agile teams to visualize the remaining work against time, which helps spot deviations from the expected progress early. Teams can identify bottlenecks or obstacles early, enabling proactive problem-solving before issues escalate. 

Visualize Sprint Progress

The clear graphical representation of work completed versus work remaining makes it easy for teams to see how much they have accomplished and how much is left to do within a sprint. This visualization helps maintain focus and alignment among team members. 

Boost Team Morale 

The chart enables the team to see their tangible progress which significantly boosts their morale. As they observe the line trending downward, indicating completed tasks, it fosters a sense of achievement and motivates them to continue performing well.

Improve Estimation

After each sprint, teams can analyze the burndown chart to evaluate their estimation accuracy regarding task completion times. This retrospective analysis helps refine future estimates and improves planning for upcoming sprints. 

How to Estimate Effort for a Burndown Chart

Estimating effort for a burndown chart involves determining the amount of work needed to complete a sprint within a specific timeframe. Here's a step-by-step approach to getting this estimation right:

Define Your Ideal Baseline

Start by identifying the total amount of work you expect to accomplish in the sprint. This requires knowing your team's productivity levels and the sprint duration. For instance, if your sprint lasts 5 days and your team can handle 80 hours in total, your baseline is 16 hours per day.

Break Down the Work

Next, divide the work into manageable chunks. List tasks or activities with their respective estimated hours. This helps in visualizing the workload and setting realistic daily goals.

  • Example Breakdown:
    • Task A: 20 hours
    • Task B: 30 hours
    • Task C: 30 hours

Determine Daily Workload

With your total hours known, distribute these hours across the sprint days. Begin by plotting your starting effort on a graph, like 80 hours on the first day, and then reduce it daily as work progresses.

  • Daily Tracking For a 5-Day Sprint:
    • Day 1: Start with 80 hours
    • Day 2: Reduce to 64 hours
    • Day 3: Decrease further to 48 hours
    • Day 4: Lower to 32 hours
    • Day 5: Finish with 16 hours

Monitor Your Progress

As the sprint moves forward, track the actual hours spent versus the estimated ones. This allows you to adjust and manage any deviations promptly.
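
To make the 80-hour, five-day example above concrete, here is a minimal JavaScript sketch of how the ideal line can be generated and compared with the hours actually remaining; the function names and sample numbers are hypothetical, not taken from any particular tool.

// Minimal sketch: build the ideal burndown line for a sprint and compare it
// with the actual remaining hours logged each day. All numbers are illustrative.
function idealLine(totalHours, sprintDays) {
   const perDay = totalHours / sprintDays;
   const line = [];
   for (let day = 0; day <= sprintDays; day++) {
       line.push(totalHours - perDay * day);
   }
   return line;
}

function deviation(actualRemaining, ideal) {
   // Positive values mean the team is behind the ideal line; negative means ahead.
   return actualRemaining.map((actual, day) => actual - ideal[day]);
}

// 5-day sprint with 80 hours of estimated work, as in the example above.
const ideal = idealLine(80, 5);          // [80, 64, 48, 32, 16, 0]
const actual = [80, 70, 56, 40, 20, 0];  // hypothetical daily log
console.log(deviation(actual, ideal));   // [0, 6, 8, 8, 4, 0] -> behind mid-sprint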

By following these steps, you ensure that your burndown chart accurately reflects your team's workflow and helps in making informed decisions throughout the sprint.

How Does a Burndown Chart Help Prevent Scope Creep in Projects?

A burndown chart is a vital tool in project management, serving as a visual representation of work remaining versus time. Although it might not capture every aspect of a project’s trajectory, it plays a key role in preventing scope creep.

Firstly, a burndown chart provides a clear overview of how much work has been completed and what remains, ensuring that project teams stay focused on the goal. By continuously tracking progress, teams can quickly identify any deviation from the planned trajectory, which is often an early signal of scope creep.

However, a burndown chart doesn’t operate in isolation. It is most effective when used alongside other project management tools:

  • Backlog Management: A well-maintained product backlog is essential. It allows the team to prioritize tasks and ensures that only the most important items get addressed within the project's timeframe.
  • Change Control Processes: Even though a burndown chart might not show changes directly, integrating it with a robust change control process helps in capturing and managing these alterations systematically. This prevents unauthorized changes from bloating the project scope.

By consistently monitoring the relationship between time and completed work, project managers can maintain control and make informed decisions quickly. This proactive approach helps teams stay aligned with the project's original vision, thus minimizing the risk of scope creep.

Burndown Chart vs. Burnup Chart

Understanding the Difference Between Burndown and Burnup Charts

Both burndown and burnup charts are essential tools for managing projects, especially in agile environments. They provide visual insights into project progress, but they do so in different ways, each offering unique advantages.

Burndown Chart: Tracking Work Decrease

A burndown chart focuses on recording how much work remains over time. It's a straightforward way to monitor project progress by showing the decline of remaining tasks. The chart typically features:

  • X-Axis: Represents time over the life cycle of a project.
  • Y-Axis: Displays the amount of work left to complete, often measured in hours or story points.

This type of chart is particularly useful for spotting bottlenecks, as any deviation from the ideal line can indicate a pace that’s too slow to meet the deadline.

Burnup Chart: Visualizing Work Completion

In contrast, a burnup chart highlights the work that has been completed, alongside the total work scope. Its approach includes:

  • X-Axis: Also represents time.
  • Y-Axis: Shows cumulative work completed alongside total project scope.

The key advantage of a burnup chart is its ability to display scope changes clearly. This is ideal when accommodating new requirements or adjusting deliverables, as it shows both progress and scope alterations without losing clarity.

Summary

While both charts are vital for tracking project dynamics, their perspectives differ. Burndown charts excel at displaying how rapidly teams are clearing tasks, while burnup charts provide a broader view by also accounting for changes in project scope. Using them together offers a comprehensive picture of both time management and scope management within a project.

How to create a burndown chart in Excel? 

Step 1: Create Your Table

Open a new sheet in Excel and create a new table that includes 3 columns.

The first column should include the dates of each sprint day, the second column should have the ideal burndown (the ideal rate at which work will be completed), and the last column should have the actual burndown (updated as story points are completed).

Step 2: Add Data in these Columns

Now, fill in the data accordingly. This includes the dates of your sprint and, in the Ideal Burndown column, the desired number of tasks remaining after each day throughout the sprint (say, a 10-day sprint).

As you complete tasks each day, update the spreadsheet by recording the number of tasks remaining under the ‘Actual Burndown’ column.
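
For illustration, assume a 10-day sprint that starts with 20 tasks: the Ideal Burndown column would read 18 after Day 1, 16 after Day 2, 14 after Day 3, and so on down to 0 after Day 10, while the Actual Burndown column records whatever genuinely remains each day (for example 19, 17, 13, and so on). These numbers are purely hypothetical.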

Step 3: Create a Burndown Chart

Now, it’s time to convert the data into a graph. To create a chart, follow these steps: Select the three columns > Click ‘Insert’ on the menu bar > Select the ‘Line chart’ icon, and generate a line graph to visualize the different data points you have in your chart.

How to Compile the Final Dataset for a Burndown Chart?

Compiling the final dataset for a burndown chart is an essential step in monitoring project progress. This process involves a few key actions that help translate raw data into a clear visual representation of your work schedule.

Step 1: Compare Initial Estimates with Actual Work Time

Start by gathering your initial effort estimates. These estimates outline the anticipated time or resources required for each task. Then, access your actual work logs, which you should have been maintaining consistently. By comparing these figures, you’ll be able to assess where your project stands in relation to your original forecasts.

Step 2: Keep Logs Accessible

Ensure that your logged work data is kept in a centralized and accessible location. This strategy fosters team collaboration and transparency, allowing team members to view and update logs as necessary. It also makes it easier to pull together data when you’re ready to update your burndown chart.

Step 3: Visualize with a Burndown Chart

Once your data is compiled, the next step is to plot it on your burndown chart. This graph will visually represent your team's progress, comparing estimated efforts against actual performance over time. Using project management software can simplify this step significantly, as many tools offer features to automate chart updates, streamlining both creation and maintenance efforts.

By following these steps, you’ll be equipped to create an accurate and insightful burndown chart, providing a clear snapshot of project progress and helping to ensure timelines are met efficiently.

Limitations of Burndown Chart 

One-Dimensional View

A Burndown chart mainly tracks the amount of work remaining, measured in story points or hours. This one-dimensional view does not offer insights into the complexity or nature of the tasks, hence, oversimplifying project progress. 

Unable to Detect Quality Issues or Technical Debt

Burndown charts fail to account for quality issues or the accumulation of technical debt. Agile teams might complete tasks on time but compromise on quality. This further leads to long-term challenges that remain invisible in the chart.

Lack of Visibility into Team Dynamics

The burndown chart does not capture team dynamics or collaboration patterns. It fails to show how team members are working together, which is vital for understanding productivity and identifying areas for improvement.

Mask Underlying Problems

Problems related to story estimation and sprint planning might go unnoticed. When a team consistently underestimates tasks, the chart may still show a downward trend. This masks deeper issues that need to be addressed.

Changes in Work Scope

Another disadvantage of burndown charts is that they do not reflect changes in scope or interruptions that occur during a sprint. If new tasks are added or priorities shift, the chart may give a misleading impression of progress.

Unable to Show Work Distribution and Bottlenecks

The chart does not provide insights into how work is distributed among team members or highlight bottlenecks in the workflow. This lack of detail can hinder efforts to optimize team performance and resource allocation.

What Key Components Are Missing in Burndown Charts for a Complete View of Sprints?

Burndown charts are great tools for tracking progress in a sprint. However, they don’t provide a full picture of sprint performance as they lack the following dimensions: 

Real-time Sprint Monitoring Metrics

Velocity Stability Indicators 

  • Sprint velocity variance: It tracks the difference between planned and actual sprint velocities to assess predictability.
  • Story completion rate by size category: It evaluates the team's ability to complete stories of varying complexities.
  • Average time in each status: It highlights bottlenecks by analyzing how long stories stay in each stage (To Do, In Progress, etc.).
  • Number of stories carried over: It measures unfinished work moved to the next sprint, which impacts planning accuracy.
  • Scope change percentage: It reflects how much the sprint backlog changes during execution (a rough calculation sketch follows this list).
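
As a rough illustration of how a few of these indicators could be computed, here is a small JavaScript sketch; the formulas and field names are assumptions made for the example, not standard definitions.

// Hypothetical helpers for a few of the indicators above.
// The exact formulas vary by team; these are illustrative assumptions only.
function velocityVariance(plannedPoints, completedPoints) {
   // Positive means the team delivered less than planned.
   return plannedPoints - completedPoints;
}

function carriedOverStories(stories) {
   // stories: [{ points: 5, completed: false, addedMidSprint: false }, ...]
   return stories.filter((story) => !story.completed).length;
}

function scopeChangePercentage(stories, committedPoints) {
   const addedPoints = stories
       .filter((story) => story.addedMidSprint)
       .reduce((sum, story) => sum + story.points, 0);
   return (addedPoints / committedPoints) * 100;
}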

Quality Metrics

  • Code review coverage and throughput: It highlights the extent and speed of code reviews to ensure quality.
  • Unit test coverage trends: It measures improvements or regressions in unit test coverage over time.
  • Number of bugs found: It monitors the quality of sprint deliverables.
  • Technical debt items identified: It evaluates areas where shortcuts may have introduced long-term risks.
  • Build and deployment success rate: It highlights stability in CI/CD processes.
  • Production incidents related to sprint work: It connects sprint output to real-world impact.

Team Collaboration Indicators

  • Code review response time: It measures how quickly team members review code, impacting workflow speed.
  • Pair programming hours: It reflects collaborative coding time, boosting knowledge transfer and quality.
  • Knowledge-sharing sessions: This indicates team growth through discussions or sessions.
  • Cross-functional collaboration: It highlights collaboration across different roles, like devs and designers.
  • Blockers resolution time: It monitors how quickly obstacles are removed.
  • Team capacity utilization: It analyzes whether team capacity is effectively utilized.

Work Distribution Analysis

  • Task distribution across team members: It checks for workload balance.
  • Skill coverage matrix: It monitors whether all necessary skills are represented in the sprint.
  • Dependencies resolved: It highlights dependency identification and resolution.
  • Context switching frequency: It analyzes task switching, which can impact productivity.
  • Planned vs unplanned work ratio: It evaluates how much work was planned versus ad-hoc tasks.

Sprint Retrospective Analysis

Quantitative Measures

Sprint Goals Achievement
  • Completed story points vs committed: It evaluates sprint completion success.
  • Critical features delivered: It monitors feature delivery against sprint goals.
  • Technical debt addressed: It tracks progress on resolving legacy issues.
  • Quality metrics achieved: It ensures deliverables meet quality standards.

Process Efficiency
  • Lead time for user stories: Time taken from story creation to completion.
  • Cycle time analysis: It tracks how long it takes to move work items through the sprint.
  • Sprint predictability index: It compares planned vs actual progress consistency.
  • Planning accuracy percentage: It monitors how well the team plans tasks.

Team Performance
  • Team happiness index: It gauges morale.
  • Innovation time percentage: It monitors time spent on creative or experimental work.
  • Learning goals achieved: It tracks growth opportunities taken.
  • Cross-skilling progress: It measures skill development.

Qualitative Measures

Sprint Planning Effectiveness
  • Story refinement quality: It assesses the readiness and clarity of backlog items.
  • Estimation accuracy: It monitors the accuracy of time/effort estimates.
  • Dependencies identification: It indicates how well dependencies were spotted.
  • Risk assessment adequacy: It ensures risks are anticipated and managed.

Team Dynamics
  • Communication effectiveness: It ensures clarity and quality of team communication.
  • Collaboration patterns: It highlights team interactions.
  • Knowledge sharing: It checks for the effective transfer of knowledge.
  • Decision-making efficiency: It gauges the timeliness and effectiveness of team decisions.

Continuous Improvement
  • Action items completion rate: It measures follow-through on retrospective action items.
  • Process improvement initiatives: It tracks changes implemented for efficiency.
  • Tools and automation adoption: It monitors how well the team leverages technology.
  • Team capability enhancement: It highlights skill and process improvements.

Typo - An Effective Sprint Analysis Tool

Typo’s sprint analysis feature allows engineering leaders to track and analyze their team’s progress throughout a sprint. It uses data from Git and the issue management tool to provide insights into how much work has been completed, how much is still in progress, and how much time is left in the sprint, helping teams identify potential problems early on and take corrective action.

Sprint analysis in Typo with burndown chart

Key Features:

  • A velocity chart shows how much work has been completed in previous sprints.
  • A burndown chart to measure progress.
  • A sprint backlog that shows all of the work that needs to be completed in the sprint.
  • A list of sprint issues that shows the status of each issue.
  • Time tracking to see how long tasks are taking.
  • Blockage tracking to check how often tasks are being blocked, and what the causes of those blocks are.
  • Bottleneck identification to identify areas where work is slowing down.
  • Historical data analysis to compare sprint data over time.

Conclusion 

Burndown charts offer a clear and concise visualization of progress over time. While they excel at tracking remaining work, they are not without limitations, especially when it comes to addressing quality, team dynamics, or changes in scope. 

By integrating advanced metrics and tools like Typo, teams can achieve a more holistic view of their sprint performance and ensure continuous improvement. 

Engineering Management Platform: A Quick Overview

Your engineering team is the biggest asset of your organization. They work tirelessly on software projects, despite the tight deadlines. 

However, there could be times when bottlenecks arise unexpectedly, and you struggle to get a clear picture of how resources are being utilized. 

This is where an Engineering Management Platform (EMP) comes into play.

An EMP acts as a central hub for engineering teams. It transforms chaos into clarity by offering actionable insights and aligning engineering efforts with broader business goals.

In this blog, we’ll discuss the essentials of EMPs and how to choose the best one for your team.

What are Engineering Management Platforms? 

Engineering Management Platforms (EMPs) are comprehensive tools that enhance the visibility and efficiency of engineering teams. They serve as a bridge between engineering processes and project management, enabling teams to optimize workflows, track how time and resources are allocated, monitor performance metrics, assess progress on key deliverables, and make informed decisions based on data-driven insights. This further helps in identifying bottlenecks, streamlining processes, and improving the developer experience (DX). 

Core Functionalities 

Actionable Insights 

One main function of an EMP is transforming raw data into actionable insights. This is done by analyzing performance metrics to identify trends, inefficiencies, and potential bottlenecks in the software delivery process. 

Risk Management 

The Engineering Management Platform helps with risk management by identifying potential vulnerabilities in the codebase, monitoring technical debt, and assessing the impact of changes in real time. 

Team Collaboration

These platforms foster collaboration between cross-functional teams (developers, testers, product managers, etc.). They can be integrated with team collaboration tools like Slack, JIRA, and MS Teams. This promotes knowledge sharing and reduces silos through shared insights and transparent reporting. 

Performance Management 

EMPs provide metrics to track performance against predefined benchmarks and allow organizations to assess development process effectiveness. By measuring KPIs, engineering leaders can identify areas of improvement and optimize workflows for better efficiency. 

Essential Elements of an Engineering Management Platform

Developer Experience 

Developer Experience refers to how easily developers can perform their tasks. When the right tools are available and processes are streamlined, good DX leads to an increase in productivity and job satisfaction. 

Key aspects include: 

  • Streamlined workflows such as seamless integration with IDEs, CI/CD pipelines, and VCS. 
  • Metrics such as WIP and Merge Frequency to identify areas for improvement. 

Engineering Velocity 

Engineering Velocity can be defined as the team’s speed and efficiency during software delivery. To track it, the engineering leader must have a bird’s-eye view of the team’s performance and areas of bottlenecks. 

Key aspects include:

  • Monitor DORA metrics to track the team’s performance 
  • Provide resources and tools to track progress toward goals 

Business Alignment 

Engineering Management Software must align with broader business goals to help move in the right direction. This alignment is necessary for maximizing the impact of engineering work on organizational goals.

Key aspects include: 

  • Track where engineering resources (Time and People) are being allocated. 
  • Improved project forecasting and sprint planning to meet deadlines and commitments. 

Benefits of Engineering Management Platform 

Enhances Team Collaboration

The engineering management platform offers end-to-end visibility into developer workload, processes, and potential bottlenecks. It provides centralized tools for the software engineering team to communicate and coordinate seamlessly by integrating with platforms like Slack or MS Teams. It also gives engineering leaders and developers data-driven and sufficient context around 1:1s. 

Increases Visibility 

Engineering software offers 360-degree visibility into engineering workflows to understand project statuses, deadlines, and risks for all stakeholders. This helps identify blockers and monitor progress in real-time. It also provides engineering managers with actionable data to guide and supervise engineering teams.

Facilitates Continuous Improvement 

EMPs allow developers to adapt quickly to changes based on project demands or market conditions. They foster post-mortems and continuous learning and enable team members to retrospectively learn from successes and failures. 

Improves Developer Well-being 

EMPs provide real-time visibility into developers' workloads, which allows engineering managers to understand where team members' time is being invested. This helps them respect developers’ schedules and protect their flow state, hence reducing developer burnout and improving workload management.

Fosters Data-driven Decision-Making 

Engineering project management software provides actionable insights into a team’s performance and complex engineering projects. It further allows the development team to prioritize tasks effectively and engage in strategic discussions with stakeholders. 

How to Choose an Engineering Management Platform for Your Team? 

Understanding Your Team’s Needs

The first and foremost point is to assess your team’s pain points. Identify current challenges such as tracking progress, communication gaps, or workload management. Also, consider team size and structure, such as whether your team is small or large, distributed or co-located, as this will influence the type of platform you need.

Be clear about what you want the platform to achieve, for example: improving efficiency, streamlining processes, or enhancing collaboration.

Evaluate Key Categories

When choosing the right EMP for your team, consider assessing the following categories:

Processes and Team Health

Evaluate how well the platform supports efficient workflows and whether it provides a multidimensional picture of team health, including team well-being, collaboration, and productivity.

User Experience and Customization 

The Engineering Management Platform must have an intuitive and user-friendly interface for both tech and non-tech users. It should also include customization of dashboards, repositories, and metrics that cater to specific needs and workflow. 

Allocation and Business Value 

The right platform helps in assessing resource allocation across various projects and tasks such as time spent on different activities, identifying over or under-utilization of resources, and quantifying the value delivered by the engineering team. 

Integration Capabilities 

Strong integrations centralize the workflow, reduce fragmentation, and improve efficiency. These platforms must integrate seamlessly with existing tools, such as project management software, communication platforms, and CRMs.

Customer Support 

The platform must offer reliable customer support through multiple channels such as chat, email, or phone. You can also take note of extensive self-help resources like FAQs, tutorials, and forums.

Research and Compare Options 

Research various EMPs available in the market. Now based on your key needs, narrow down platforms that fit your requirements. Use resources like reviews, comparisons, and recommendations from industry peers to understand real-world experiences. You can also schedule demos with shortlisted providers to know the features and usability in detail. 

Conduct a Trial Run

Opt for a free trial or pilot phase to test the platform with a small group of users and get a hands-on feel. Afterward, gather feedback from your team to evaluate how well the tool fits into their workflows.

Select your Best Fit 

Finally, choose the EMP that best meets your requirements based on the above-mentioned categories and feedback provided by the team members. 

Typo: An Engineering Management Platform 

Typo is an effective engineering management platform that offers SDLC visibility, developer insights, and workflow automation to build better programs faster. It can seamlessly integrate into tech tool stacks such as Git version control, issue trackers, and CI/CD tools.

It also offers comprehensive insights into the deployment process through key metrics such as change failure rate, time to build, and deployment frequency. Moreover, its automated code tool helps identify issues in the code and auto-fixes them before you merge to master.

Typo has an effective sprint analysis feature that tracks and analyzes the team’s progress throughout a sprint. Besides this, it also provides a 360-degree view of the developer experience, i.e., it captures qualitative insights and provides an in-depth view of the real issues.

Conclusion

An Engineering Management Platform (EMP) not only streamlines workflow but transforms the way teams operate. These platforms foster collaboration, reduce bottlenecks, and provide real-time visibility into progress and performance. 

What is Developer Experience?

Let’s take a look at the situation below: 

You are driving a high-performance car, but the controls are clunky, the dashboard is confusing, and the engine constantly overheats. 

Frustrating, right? 

When developers work in a similar environment, dealing with inefficient tools, unclear processes, and a lack of collaboration, it leads to decreased morale and productivity. 

Just as a smooth, responsive driving experience makes all the difference on the road, a seamless Developer Experience (DX) is essential for developer teams.

DX isn't just a buzzword; it's a key factor in how developers interact with their work environments and produce innovative solutions. In this blog, let’s explore what Developer Experience truly means and why it is crucial for developers. 

What is Developer Experience? 

Developer Experience, commonly known as DX, is the overall quality of developers’ interactions with their work environment. It encompasses tools, processes, and organizational culture. It aims to create an environment where developers work efficiently, stay focused, and produce high-quality code with minimal friction. 

Why Does Developer Experience Matter? 

Developer Experience is a critical factor in enhancing organizational performance and innovation. It matters because:

Boosts Developer Productivity 

When developers have access to intuitive tools, clear documentation, and streamlined workflows, they can complete tasks more quickly and focus on core activities. This leads to a faster development cycle and improved efficiency, as developers can connect emotionally with their work. 

As per Gartner's report, Developer Experience is the key indicator of Developer Productivity.

High Product Quality 

A positive developer experience leads to improved code quality and fewer defects in software products, which in turn drives customer satisfaction. DX also fosters effective communication and collaboration, which reduces cognitive load among developers and helps them implement best practices thoroughly. 

Talent Attraction and Retention 

A positive work environment appeals to skilled developers and retains top talents. When the organization supports developers’ creativity and innovation, it significantly reduces turnover rates. Moreover, when they feel psychologically safe to express ideas and take risks, they would want to be associated with an organization for the long run. 

Enhances Developer Morale 

When developers feel empowered and supported at their workplace, they are more likely to be engaged with their work. This further leads to high morale and job satisfaction. When organizations minimize common pain points, developers encounter fewer obstacles, allowing them to focus more on productive tasks rather than tedious ones.

Competitive Advantage 

Organizations with positive developer experiences often gain a competitive edge in the market. Enabling faster development cycles and higher-quality software delivery allows companies to respond more swiftly to market demands and customer needs. This agility improves customer satisfaction and positions the organization favorably against competitors. 

What is Flow State and Why Consider it as a Core Goal of a Great DX? 

In simple words, flow state means ‘Being in the zone’. Also known as deep work, it refers to the mental state characterized by complete immersion and focused engagement in an activity. Achieving flow can significantly result in a sense of engagement, enjoyment, and productivity. 

Flow state is considered a core goal of a great DX because it allows developers to work with remarkable efficiency, completing tasks faster and with higher quality. It also enables developers to generate innovative solutions and ideas when they are deeply engaged in their work, leading to better problem-solving outcomes. 

Also, flow isn’t limited to individual work, it can also be experienced collectively within teams. When development teams achieve flow together, they operate with synchronized efficiency which enhances collaboration and communication. 

What Developer Experience is Not  

Developer Experience is Not Just Good Tooling 

Tools like IDEs, frameworks, and libraries play a vital role in a positive developer experience, but they are not the sole component. Good tooling is merely a part of the overall experience. It helps to streamline workflows and reduce friction, but DX encompasses much more, such as documentation, support, learning resources, and the community. Tools alone cannot address issues like poor communication, lack of feedback, or insufficient documentation, and without a holistic approach, developer satisfaction and productivity can still suffer.

Developer Experience is Not a Quick Fix 

Improving DX isn’t a one-off task that can be patched quickly. It requires a long-term commitment and a deep understanding of developer needs, consistent feedback loops, and iterative improvements. Great developer experience involves ongoing evaluation and adaptation of processes, tools, and team dynamics to create an environment where developers can thrive over time. 

Developer Experience isn’t About Pampering Developers or Using AI tools to Cut Costs

One common myth about DX is that it focuses solely on pampering developers or on using AI tools as cost-cutting measures. True DX aims to create an environment where developers can work efficiently and effectively. In other words, it is about empowering developers with the right resources, autonomy, and opportunities for growth. While AI tools help simplify tasks, adopting them without considering the broader context of developer needs may lead to dissatisfaction if those tools do not genuinely enhance the work experience. 

Developer Experience is Not User Experience 

DX and UX look alike, however, they target different audiences and goals. User Experience is about how end-users interact with a product, while Developer Experience concerns the experience of developers who build, test, and deploy products. Improving DX involves understanding developers' unique challenges and needs rather than only applying UX principles meant for end-users.

Developer Experience is Not the Same as Developer Productivity 

Developer Experience and Developer Productivity are interrelated yet not identical. While a positive developer experience can lead to increased productivity, productivity metrics alone don’t reflect the quality of the developer experience. These metrics often focus on output (like lines of code or hours worked), which can be misleading. True DX encompasses emotional satisfaction, engagement levels, and the overall environment in which developers work. A positive developer experience creates conditions that naturally lead to higher productivity rather than measuring it directly through traditional metrics.

How does Typo Help to Improve DevEx?

Typo is a valuable tool for software development teams that captures a 360-degree view of the developer experience. It surfaces early indicators of developer well-being and actionable insights on the areas that need attention, using signals from work patterns and continuous AI-driven pulse check-ins.

Key features

  • Research-backed framework that captures parameters and uncovers real issues.
  • In-depth insights are published on the dashboard.
  • Combines data-driven insights with proactive monitoring and strategic intervention.
  • Identifies the key priority areas affecting developer productivity and well-being.
  • Sends automated alerts to identify burnout signs in developers at an early stage.

Conclusion 

Developer Experience empowers developers to focus on building exceptional solutions. A great DX fosters innovation, enhances productivity, and creates an environment where developers can thrive individually and collaboratively.

Implementing developer tools empowers organizations to enhance DX and enable teams to prevent burnout and reach their full potential.

How to Reduce Cyclomatic Complexity?

Think of reading a book with multiple plot twists and branching storylines. While engaging, it can also be confusing and overwhelming when there are too many paths to follow. Just as a complex storyline can confuse readers, high cyclomatic complexity can make code hard to understand, maintain, and test, leading to bugs and errors. 

In this blog, we will discuss why high cyclomatic complexity can be problematic and ways to reduce it.

What is Cyclomatic Complexity? 

Cyclomatic complexity is a software metric developed by Thomas J. McCabe in 1976. It indicates the complexity of a program by counting its decision points. 

A higher cyclomatic complexity score reflects more execution paths, leading to increased complexity. On the other hand, a low score signifies fewer paths and, hence, less complexity. 

Cyclomatic Complexity is calculated using a control flow graph: 

M = E - N + 2P

M = Cyclomatic Complexity

N = Nodes (Block of code) 

E = Edges (Flow of control)

P = Number of Connected Components 

Understanding Cyclomatic Complexity Through a Simple Example

Let's delve into the concept of cyclomatic complexity with an easy-to-grasp illustration.

Imagine a function structured as follows:

function greetUser(name) {
   // Single statement, no branching: one execution path.
   console.log(`Hello, ${name}!`);
}

In this case, the function is straightforward, containing a single line of code. Since there are no conditional paths, the cyclomatic complexity is 1—indicating a single, linear path of execution.

Now, let's add a twist:

function greetUser(name, offerFarewell = false) {
   console.log(`Hello, ${name}!`);

   // The if statement adds a second possible execution path.
   if (offerFarewell) {
       console.log(`Goodbye, ${name}!`);
   }
}

In this modified version, we've introduced a conditional statement. It presents us with two potential paths:

  1. Path One: Greet the user without a farewell.
  2. Path Two: Greet the user followed by a farewell if offerFarewell is true.

By adding this decision point, the cyclomatic complexity increases to 2. This means there are two unique ways the function might execute, depending on the value of the offerFarewell parameter.
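
One reasonable way to draw the control-flow graph for this version confirms the count: three nodes (the greeting plus the check, the farewell branch, and the exit), three edges connecting them, and one connected component give M = 3 - 3 + (2 × 1) = 2.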

Key Takeaway: Cyclomatic complexity helps in understanding how many independent paths there are through a function, aiding in assessing the possible scenarios a program can take during its execution. This is crucial for debugging and testing, ensuring each path is covered.

Why is High Cyclomatic Complexity Problematic? 

Makes Code More Error-Prone 

The more complex the code, the higher the chances of bugs. When there are many possible paths and conditions, developers may overlook certain conditions or edge cases during testing. This leads to defects in the software and makes it challenging to test all of the paths. 

Impact of Cyclomatic Complexity on Testing

Cyclomatic complexity plays a crucial role in determining how we approach testing. By calculating the cyclomatic complexity of a function, developers can ascertain the minimum number of test cases required to achieve full branch coverage. This metric is invaluable, as it predicts the difficulty of testing a particular piece of code.

Higher values of cyclomatic complexity necessitate a greater number of test cases to comprehensively cover a block of code, such as a function. This means that as complexity increases, so does the effort needed to ensure the code is thoroughly tested. For developers looking to streamline their testing process, reducing cyclomatic complexity can greatly ease this burden, making the code not only less error-prone but also more efficient to work with.
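
For instance, the modified greetUser function above has a cyclomatic complexity of 2, so full branch coverage needs at least two test cases: one call with offerFarewell set to true and one with it left false.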

Leads to Cognitive Complexity 

Cognitive complexity refers to the level of difficulty in understanding a piece of code. 

Cyclomatic complexity is one of the factors that increases cognitive complexity, since it becomes overwhelming for developers to process the information effectively, making it harder to understand the overall logic of the code.

Difficulty in Onboarding 

Codebases with high cyclomatic complexity make onboarding difficult for new developers or team members. The learning curve becomes steeper, and they require more time and effort to understand the code and become productive. This also leads to misunderstandings; new team members may misinterpret the logic or overlook critical paths. 

Higher Risks of Defects

More complex code leads to more misunderstandings, which results in more defects in the codebase. Complex code is more prone to errors as it hinders adherence to coding standards and best practices. 

Rise in Maintenance Efforts 

Due to a complex codebase, the software development team may struggle to grasp the full impact of their changes, which results in new errors and slows down the process. It also creates ripple effects, i.e., difficulty in isolating changes, as one modification can impact multiple areas of the application. 

To truly understand the health of a codebase, relying solely on cyclomatic complexity is insufficient. While cyclomatic complexity provides valuable insights into the intricacy and potential risk areas of your code, it's just one piece of a much larger puzzle.

Here's why multiple metrics matter:

  1. Comprehensive Insight: Cyclomatic complexity measures code complexity but overlooks other aspects like code quality, readability, or test coverage. Incorporating metrics like code churn, test coverage, and technical debt can reveal hidden challenges and opportunities for improvement.
  2. Balanced Perspective: Different metrics highlight different issues. For example, maintainability index offers a perspective on code readability and structure, whereas defect density focuses on the frequency of coding errors. By using a variety of metrics, teams can balance complexity with quality and performance considerations.
  3. Improved Decision Making: When decisions hinge on a single metric, they may lead to misguided strategies. For instance, reducing cyclomatic complexity might inadvertently lower functionality or increase lines of code. A balanced suite of metrics ensures decisions support overall codebase health and project goals.
  4. Holistic Evaluation: A codebase is impacted by numerous factors including performance, security, and maintainability. By assessing diverse metrics, teams gain a holistic view that can better guide optimization and resource allocation efforts.

In short, utilizing a diverse range of metrics provides a more accurate and actionable picture of codebase health, supporting sustainable development and more effective project management.

How to Reduce Cyclomatic Complexity? 

Function Decomposition

  • Single Responsibility Principle (SRP): This principle states that each module or function should have a defined responsibility and one reason to change. If a function is responsible for multiple tasks, it can result in bloated and hard-to-maintain code. 
  • Modularity: This means dividing large, complex functions into smaller, modular units so that each piece serves a focused purpose. It makes individual functions easier to understand, test, and modify without affecting other parts of the code.
  • Cohesion: Cohesion focuses on keeping related code together within functions and modules. When related functions are grouped together, it results in high cohesion, which helps with readability and maintainability.
  • Coupling: This principle calls for avoiding excessive dependencies between modules. Lower coupling reduces complexity and makes each module more self-contained, enabling changes without affecting other parts of the system.

Conditional Logic Simplification

  • Guard Clauses: Developers must implement guard clauses to exit from a function as soon as a condition is met. This avoids deep nesting and enhances the readability and simplicity of the function's main logic (a short before-and-after sketch follows this list). 
  • Boolean Expressions: Use De Morgan's laws and simplify Boolean expressions to reduce the complexity of conditions. For example, rewriting !(A && B) as !A || !B can sometimes make the code easier to understand.
  • Conditional Expressions: Consider using ternary operators or switch statements where appropriate. This will condense complex conditional branches into more concise expressions, which further enhances their readability and reduces code size.
  • Flag Variables: Avoid unnecessary flag variables that track control flow. Developers should restructure the logic to eliminate these flags, which leads to simpler and cleaner code.
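
Here is a small before-and-after sketch of the guard-clause idea from the list above; the function and field names are invented for the example.

// Before: nested conditionals make the logic harder to read.
function applyDiscount(order) {
   if (order.total > 0) {
       if (order.customer) {
           if (order.customer.isActive) {
               return order.total * 0.9;
           }
       }
   }
   return order.total;
}

// After: guard clauses exit early, flattening the nesting.
// The number of branches is similar, but the main logic is easier to follow.
function applyDiscountWithGuards(order) {
   if (order.total <= 0) return order.total;
   if (!order.customer) return order.total;
   if (!order.customer.isActive) return order.total;
   return order.total * 0.9;
}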

Loop Optimization

  • Loop Unrolling: Expand the loop body to perform multiple operations in each iteration. This is useful for loops with a small number of iterations as it reduces loop overhead and improves performance.
  • Loop Fusion: When two loops iterate over the same data, you may be able to combine them into a single loop. This enhances performance by reducing the number of loop iterations and boosting data locality.
  • Loop Strength Reduction: Consider replacing costly operations in loops with less expensive ones, such as using addition instead of multiplication where possible. This will reduce the computational cost within the loop.
  • Loop Invariant Code Motion: Prevent redundant computation by moving calculations that do not change with each loop iteration outside of the loop (see the sketch after this list). 
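
As a hypothetical illustration of loop-invariant code motion, consider the following sketch; the function and variable names are made up for the example.

// Before: the multiplier is recomputed on every iteration even though it never changes.
function totalInCents(prices, taxRate) {
   let total = 0;
   for (const price of prices) {
       const multiplier = (1 + taxRate) * 100; // loop-invariant work inside the loop
       total += price * multiplier;
   }
   return total;
}

// After: the invariant computation is hoisted out of the loop.
function totalInCentsHoisted(prices, taxRate) {
   const multiplier = (1 + taxRate) * 100; // computed once
   let total = 0;
   for (const price of prices) {
       total += price * multiplier;
   }
   return total;
}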

Code Refactoring

  • Extract Method: Move repetitive or complex code segments into separate functions. This simplifies the original function, reduces complexity, and makes code easier to reuse.
  • Introduce Explanatory Variables: Use intermediate variables to hold the results of complex expressions. This can make code more readable and allow others to understand its purpose without deciphering complex operations.
  • Replace Magic Numbers with Named Constants: Magic numbers are hard-coded numbers in code. Instead of directly using them, create symbolic constants for hard-coded values. This makes it easy to change the value at a later stage and improves the readability and maintainability of the code (a combined sketch with explanatory variables follows this list).
  • Simplify Complex Expressions: Break down long, complex expressions into smaller, more digestible parts to improve readability and reduce cognitive load on the reader.
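
To illustrate the named-constants and explanatory-variables points above, here is a hypothetical snippet; the names and thresholds are invented for the example.

// Before: magic numbers and one dense condition.
function isEligibleForFreeShipping(orderTotal, itemCount) {
   return orderTotal > 50 && itemCount <= 10 && orderTotal / itemCount > 2;
}

// After: named constants and explanatory variables spell out the intent.
const FREE_SHIPPING_MINIMUM_TOTAL = 50;
const MAX_ITEMS_FOR_FREE_SHIPPING = 10;
const MIN_AVERAGE_ITEM_PRICE = 2;

function isEligibleForFreeShippingClear(orderTotal, itemCount) {
   const meetsMinimumSpend = orderTotal > FREE_SHIPPING_MINIMUM_TOTAL;
   const withinItemLimit = itemCount <= MAX_ITEMS_FOR_FREE_SHIPPING;
   const averageItemPrice = orderTotal / itemCount;
   return meetsMinimumSpend && withinItemLimit && averageItemPrice > MIN_AVERAGE_ITEM_PRICE;
}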

Design Patterns

  • Strategy Pattern: This pattern allows developers to encapsulate algorithms within separate classes. By delegating responsibilities to these classes, you can avoid complex conditional statements and reduce overall code complexity.
  • State Pattern: When an object has multiple states, the State Pattern can represent each state as a separate class. This simplifies conditional code related to state transitions.
  • Observer Pattern: The Observer Pattern helps decouple components by allowing objects to communicate without direct dependencies. This reduces complexity by minimizing the interconnectedness of code components.

Code Analysis Tools

  • Static Code Analyzers: Static code analysis tools like Typo or SonarQube can automatically highlight areas of high complexity, unused code, or potential errors. This allows developers to identify and address complex code areas proactively.
  • Code Coverage Tools: Code coverage is a measure that indicates the percentage of a codebase that is tested by automated tests. Tools like Typo measure code coverage, highlighting untested areas. This helps ensure that tests cover a significant portion of the code and helps identify untested parts and potential bugs.

Other Ways to Reduce Cyclomatic Complexity 

  • Identify and remove dead code to simplify the codebase and reduce maintenance efforts. This keeps the code clean, improves performance, and reduces potential confusion.
  • Consolidate duplicate code into reusable functions to reduce redundancy and improve consistency. This makes it easier to update logic in one place and avoid potential bugs from inconsistent changes.
  • Continuously improve code structure by refactoring regularly to enhance readability and maintainability and reduce technical debt. This ensures that the codebase evolves to stay efficient and adaptable to future needs.
  • Perform peer reviews to catch issues early, promote coding best practices, and maintain high code quality. Code reviews encourage knowledge sharing and help align the team on coding standards.
  • Write comprehensive unit tests to ensure code functions correctly and to support easier refactoring in the future. They provide a safety net which makes it easier to identify issues when changes are made.

To further limit duplicated code and reduce cyclomatic complexity, consider these additional strategies:

  • Extract Common Code: Identify and extract common bits of code into their own dedicated methods or functions. This step streamlines your codebase and enhances maintainability.
  • Leverage Design Patterns: Utilize design patterns—such as the template pattern—that encourage code reuse and provide a structured approach to solving recurring design problems. This not only reduces duplication but also improves code readability.
  • Create Utility Packages: Extract generic utility functions into reusable packages, such as npm modules or NuGet packages. This practice allows code to be reused across the entire organization, promoting a consistent development standard and simplifying updates across multiple projects.

By implementing these strategies, you can effectively manage code complexity and maintain a cleaner, more efficient codebase.

Typo - An Automated Code Review Tool

Typo’s automated code review tool identifies issues in your code and auto-fixes them before you merge to master. This means less time reviewing and more time for important tasks. It keeps your code error-free, making the whole process faster and smoother.

Key Features:

  • Supports top 8 languages including C++ and C#.
  • Understands the context of the code and fixes issues accurately.
  • Optimizes code efficiently.
  • Provides automated debugging with detailed explanations.
  • Standardizes code and reduces the risk of a security breach.

 

Conclusion 

The cyclomatic complexity metric is critical in software engineering. Reducing cyclomatic complexity increases code maintainability, readability, and simplicity. By implementing the above-mentioned strategies, software engineering teams can reduce complexity and create a more streamlined codebase. Tools like Typo’s automated code review also help in identifying complexity issues early and providing quick fixes, hence enhancing overall code quality.

Beyond Burndown Chart: Tracking Engineering Progress

Burndown charts are essential instruments for tracking the progress of agile teams. They are simple and effective ways to determine whether the team is on track or falling behind. However, there may be times when a burndown chart is not ideal for teams, as it may not capture a holistic view of the agile team’s progress. 

In this blog, we discuss these limitations in greater detail. 

What is a Burndown Chart? 

A burndown chart is a visual representation of the team’s progress used for agile project management. It is useful for scrum teams and agile project managers to assess whether the project is on track or not. 

The primary objective is to accurately depict the time allocations and plan for future resources. 

In agile and scrum environments, burndown charts are essential tools that offer more than just a snapshot of progress. Here’s how they are effectively used:

  • Create a Work Management Baseline: By establishing a baseline, teams can easily compare planned work versus actual work, allowing for a clear visual of progress.
  • Conduct Gap Analysis: Identify discrepancies between the planned timeline and current progress to adjust strategies promptly.
  • Inform Future Sprint Planning: Use information from the burndown chart to enhance the accuracy of future sprint planning meetings, ensuring better time and resource allocation.
  • Reallocate Resources: With real-time insights, teams can manage tasks more effectively and reallocate resources as needed to ensure sprints are completed on time.

Burndown charts not only provide transparency in tracking work but also empower agile teams to make informed decisions swiftly, ensuring project goals are met efficiently.

Understanding How a Burndown Chart Benefits Agile Teams

A burndown chart is an invaluable resource for agile project management teams, offering a clear snapshot of project progress and aiding in efficient workflow management. Here’s how it facilitates team success:

  • Progress Tracking: It visually showcases the amount of work completed versus what remains, allowing teams to quickly gauge their current status in the project lifecycle.
  • Time Management: By highlighting the time remaining, teams can better allocate resources and adjust priorities to meet deadlines, ensuring timely project delivery.
  • Task Overview: In addition to being a visual aid, it can function as a comprehensive list detailing tasks and their respective completion percentages, providing a clear outline of what still needs attention.
  • Transparency and Communication: Promoting open communication, the chart offers a shared view for all team members and stakeholders, leading to improved collaboration and more informed decision-making.

Overall, a burndown chart simplifies the complexities of agile project management, enhancing both team efficiency and project outcomes.

Components of Burndown Chart

Axes

There are two axes: x and y. The horizontal axis represents the time or iteration and the vertical axis displays user story points. 

Ideal Work Remaining 

It represents the remaining work that an agile team has at a specific point of the project or sprint under an ideal condition. 

Actual Work Remaining 

It is a realistic indication of a team's progress that is updated in real time. When this line is consistently below the ideal line, it indicates the team is ahead of schedule. When the line is above, it means they are falling behind. 

Project/Sprint End

It indicates whether the team has completed a project/sprint on time, behind or ahead of schedule. 

Data Points

The data points on the actual work remaining line represent the amount of work left at specific intervals, i.e., daily updates. 

Understanding a Burndown Chart

A burndown chart is a visual tool used to track the progress of work in a project or sprint. Here's how you can read it effectively:

Core Components

  1. Axes Details:
    • X-Axis: Represents the timeline of the project or sprint, usually marked in days.
    • Y-Axis: Indicates the amount of work remaining, often measured in story points or task hours.

Key Features

  • Starting Point: Located at the far left, indicating day zero of the project or sprint.
  • Endpoint: Located at the far right, marking the final day of the project or sprint.

Lines to Note

  • Ideal Work Remaining Line:
    • A straight line connecting the start and end points.
    • Illustrates the planned project scope, estimating how work should progress smoothly.
    • At the end, it meets the x-axis, implying no pending work. Remember, this line is a projection and may not always match reality.
  • Actual Work Remaining Line:
    • This line tracks the real progress of work completed.
    • Starts aligned with the ideal line but deviates as actual progress is tracked daily.
    • Each daily update adds a new data point, creating a fluctuating line.

Interpreting the Chart

  • Behind Schedule: When the actual line stays above the ideal line, there's more work remaining than expected, indicating delays.
  • Ahead of Schedule: Conversely, if the actual line dips below the ideal line, it shows tasks are being completed faster than anticipated.

In summary, by regularly comparing the actual and ideal lines, you can assess whether your project is on track, falling behind, or advancing quicker than planned. This helps teams make informed decisions and adjustments to meet deadlines efficiently.
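
To make that comparison concrete, here is a minimal Python sketch of the same logic: it builds an ideal remaining-work line for a hypothetical 10-day, 40-point sprint and classifies each daily data point as ahead of, behind, or on schedule. The sprint length, point totals, and daily values are assumptions for illustration, not data from any real tool.

```python
def ideal_remaining(total_points: float, sprint_days: int) -> list[float]:
    """Straight line from the full scope on day 0 down to zero on the last day."""
    return [total_points * (1 - day / sprint_days) for day in range(sprint_days + 1)]


def schedule_status(actual: float, ideal: float) -> str:
    """Compare actual remaining work with the ideal line for the same day."""
    if actual > ideal:
        return "behind schedule"
    if actual < ideal:
        return "ahead of schedule"
    return "on track"


total_points, sprint_days = 40, 10
ideal = ideal_remaining(total_points, sprint_days)

# Actual remaining work reported at each daily update (hypothetical values).
actual = [40, 38, 36, 35, 30, 27, 22, 18, 12, 5, 0]

for day, (a, i) in enumerate(zip(actual, ideal)):
    print(f"Day {day}: actual={a}, ideal={i:.1f} -> {schedule_status(a, i)}")
```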

Types of Burndown Chart 

There are two types of burndown charts: 

Product Burndown Chart 

This type of burndown chart focuses on the big picture and visualizes the entire project. It helps project managers and teams monitor the completion of work across multiple sprints and iterations. 

Sprint Burndown Chart 

A sprint burndown chart specifically tracks the remaining work within a single sprint. It indicates progress toward completing the sprint backlog. 

Advantages of Burndown Chart 

Visualizes Progress 

A burndown chart captures how much work is completed and how much is left. It allows the agile team to compare actual progress with the ideal progress line to track whether they are ahead of or behind schedule. 

Encourages Teams 

A burndown chart motivates teams to align their progress with the ideal line. Hitting these small milestones boosts morale and keeps motivation high throughout the sprint, and it reinforces the sense of achievement when tasks are completed on time. 

Informs Retrospectives 

It helps teams analyze sprint performance during retrospectives. Agile teams can review past burndown charts to identify patterns, adjust future estimates, and refine processes for improved efficiency. They can also pinpoint periods where progress slowed and uncover blockers that need to be addressed. 

Shows a Direct Comparison 

A burndown chart visualizes the direct comparison between planned work and actual progress. Teams can quickly assess whether they are on track to meet their goals and monitor trends or recurring issues such as over-committing or underestimating tasks. 

Why the Burndown Chart Can Be Misleading 

While the burndown chart offers plenty of benefits, it can also be misleading. It focuses solely on tasks without accounting for individual developer productivity, and it ignores aspects of agile software development such as code quality, team collaboration, and problem-solving. 

A burndown chart doesn't explain how tasks affected developer productivity, or why progress fluctuated due to factors such as team morale, external dependencies, or unexpected challenges. It also doesn't reflect work quality, which can leave underlying issues unaddressed.

How Does the Accuracy of Time Estimates Affect a Burndown Chart?

The effectiveness of a burndown chart largely hinges on the precision of initial time estimates for tasks. These estimates shape the 'ideal work line,' a crucial component of the chart. When these estimates are accurate, they set a reliable benchmark against which actual progress is measured.

Impacts of Overestimation and Underestimation

  • Overestimating Time: If a team overestimates the duration required for tasks, the actual work line on the chart may show progress as being on track or even ahead of schedule. This can give a false sense of comfort and potentially lead to complacency.
  • Underestimating Time: Conversely, underestimating time can make it seem like the team is lagging, as the actual work line falls behind the ideal. This situation can create unnecessary stress and urgency.

Mitigating Estimation Challenges

To address these issues, teams can introduce an efficiency factor into their calculations. After completing an initial project cycle, recalibrating this factor helps refine future estimates for more accurate tracking. This adjustment can lead to more realistic expectations and better project management.

By continually adjusting and learning from previous estimates, teams can improve their forecasting accuracy, resulting in more reliable burndown charts.
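
As a rough illustration of that recalibration, the sketch below derives an efficiency factor from one completed cycle and applies it to the next raw estimate. The specific numbers (40 points planned, 32 completed) are assumptions chosen only to show the arithmetic.

```python
def recalibrate(estimated_points: float, completed_points: float) -> float:
    """Efficiency factor = work actually completed / work originally estimated."""
    return completed_points / estimated_points


def adjust_estimate(raw_estimate: float, efficiency: float) -> float:
    """Scale a raw estimate by the observed efficiency to get a realistic figure."""
    return raw_estimate / efficiency


# After the first cycle: 40 points were planned, 32 were actually completed.
efficiency = recalibrate(estimated_points=40, completed_points=32)  # 0.8

# Planning the next cycle: a raw 40-point plan is really ~50 points of effort.
print(adjust_estimate(40, efficiency))  # 50.0
```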

Other Limitations of Burndown Chart 

Oversimplification of Complex Projects 

While the burndown chart is a visual representation of an agile team's progress, it fails to capture the intricate layers and interdependencies within a project. It overlooks critical factors that influence project outcomes, which may lead to misinformed decisions and unrealistic expectations. 

Ignores Scope Changes 

Scope creep refers to modifications in project requirements, such as adding new features or altering existing tasks. A burndown chart doesn't capture these changes; instead it shows a flat line or even an apparent decline in progress, which can suggest the team is underperforming when that isn't the case. This leads to misinterpretation of the team's progress and overall project health. 

Gives Equal Weight to all the Tasks

A burndown chart doesn't differentiate between easy and difficult tasks. It treats all tasks as equal, regardless of their size, complexity, or effort required, and regardless of whether a task is high priority or low impact. This obscures insight into what truly matters for the project's success. 

Neglects Team Dynamics 

A burndown chart also treats team members as interchangeable. It doesn't account for individual contributions or factors such as personal challenges, and it overlooks how well team members collaborate, share knowledge, or support each other in completing tasks. 

To ensure projects are delivered on time and within budget, project managers need to leverage a combination of effective planning, monitoring, and communication tools. Here’s how:

1. Utilize Advanced Project Management Tools

Integrating digital tools can significantly enhance project monitoring. For example, platforms like Microsoft Project or Trello offer real-time dashboards that enable managers to track progress and allocate resources efficiently. These tools often feature interactive Gantt charts, which streamline scheduling and enhance team collaboration.

2. Implement Burndown Charts

Burndown charts are invaluable for visualizing work remaining versus time. By regularly updating these charts, managers can quickly spot potential delays and bottlenecks, allowing them to adjust plans proactively.

3. Conduct Regular Meetings and Updates

Scheduled meetings provide consistent check-in times to address issues, realign goals, and ensure everyone is on the same page. This fosters transparency and keeps the team aligned with project objectives, minimizing miscommunications and errors.

4. Foster Effective Communication Channels

Utilizing platforms like Slack or Microsoft Teams ensures quick and efficient communication among team members. A clear communication strategy minimizes misunderstandings and accelerates decision-making, keeping projects on track.

5. Prioritize Risk Management

Anticipating potential risks and having contingency plans in place is crucial. Regular risk assessments can identify potential obstacles early, offering time to devise strategies to mitigate them.

By combining these approaches, project managers can increase the likelihood of delivering projects on time and within budget, ensuring project success and stakeholder satisfaction.

What Are the Alternatives to a Burndown Chart? 

To enhance sprint management, it's crucial to utilize a variety of tools and reports. While burndown charts are fundamental, other tools can offer complementary insights and improve project efficiency.

Gantt Charts

Gantt charts are ideal for complex projects. They represent a project schedule visually using horizontal bars, providing a clear timeline for each task, showing when the project starts and ends, and making overlapping tasks and dependencies between them easy to see. This comprehensive view helps teams manage long-term projects alongside sprint-focused tools like burndown charts.

Cumulative Flow Diagram

A cumulative flow diagram (CFD) visualizes how work moves through different stages. It offers insight into workflow status and identifies trends and bottlenecks. It also helps in measuring key metrics such as cycle time and throughput. By providing a broader perspective on workflow efficiency, CFDs complement burndown charts by pinpointing areas for process improvement.
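
As a simple illustration of those two metrics, the sketch below computes average cycle time and weekly throughput from a handful of hypothetical task start and finish dates; the dates and task IDs are made up for the example.

```python
from datetime import date

# Hypothetical tasks with start and finish dates.
tasks = [
    {"id": "T-1", "started": date(2024, 6, 3), "finished": date(2024, 6, 7)},
    {"id": "T-2", "started": date(2024, 6, 4), "finished": date(2024, 6, 10)},
    {"id": "T-3", "started": date(2024, 6, 5), "finished": date(2024, 6, 12)},
]

# Cycle time: how long each task spent in progress, averaged across tasks.
cycle_times = [(t["finished"] - t["started"]).days for t in tasks]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: how many tasks were finished per unit of time (here, per week).
period_days = (max(t["finished"] for t in tasks) - min(t["started"] for t in tasks)).days
throughput_per_week = len(tasks) / (period_days / 7)

print(f"Average cycle time: {avg_cycle_time:.1f} days")    # 5.7 days
print(f"Throughput: {throughput_per_week:.1f} tasks/week")  # ~2.3 tasks/week
```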

Kanban Boards

Kanban boards are an agile management tool best suited to ongoing work. They help teams visualize work, limit work in progress, and manage workflows, and they can easily accommodate changes in project scope without requiring timeline adjustments. With their ability to visualize workflows and prioritize tasks, Kanban boards ensure teams know what to work on and when, enhancing the detailed task management that burndown charts provide.

Burnup Chart 

A burnup chart is a quick, easy way to plot progress as two lines against a vertical axis: how much work has been done and the total scope of the project. This provides a clearer picture of project completion.

While both burnup and burndown charts serve the purpose of tracking progress in agile project management, they do so in distinct ways.

Similar Components, Different Actions:

  • Both charts utilize a vertical axis to represent user stories or work units.
  • The burndown chart measures the remaining work by removing items as tasks are completed.
  • In contrast, the burnup chart reflects progress by adding completed work to the vertical axis.

This duality in approach allows teams to choose the chart that best suits their need for visualizing project trajectory. The burnup chart, by displaying both completed work and total project scope, provides a comprehensive view of how close a team is to reaching project goals.
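
The difference is easiest to see side by side. The sketch below renders the same hypothetical daily data both ways: the burndown collapses everything into a single remaining-work line, so a mid-sprint scope increase looks like a stall, while the burnup keeps completed work and scope as separate lines so the change is visible. All figures, including the assumed +5-point scope change on day 5, are illustrative.

```python
completed = [0, 4, 9, 13, 18, 22, 27, 31, 36, 40]  # cumulative points completed per day
scope = [40] * 5 + [45] * 5                          # scope grows by 5 points on day 5

# Burndown: a single "remaining work" line; the day-5 scope change just looks
# like progress stalling rather than like a change in scope.
burndown_remaining = [s - c for s, c in zip(scope, completed)]

# Burnup: completed work and scope are plotted as separate lines, so the scope
# change shows up as a step in the scope line, not as apparent underperformance.
for day, (c, s, r) in enumerate(zip(completed, scope, burndown_remaining)):
    print(f"Day {day}: burndown remaining={r:>2} | burnup completed={c:>2}, scope={s}")
```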

Developer Intelligence Platforms

DI platforms like Typo focus on how smooth and satisfying a developer experience is. They streamline the development process and offer a holistic view of team productivity, code quality, and developer satisfaction. These platforms provide real-time insights into various metrics that reflect the team’s overall health and efficiency beyond task completion alone. By capturing a wide array of performance indicators, they supplement burndown charts with deeper insights into team dynamics and project health.

Incorporating these tools alongside burndown charts can provide a more rounded picture of project progress, enhancing both day-to-day management and long-term strategic planning.

What Role Do Real-Time Dashboards & Kanban Boards Play in Project Management?

In the dynamic world of project management, real-time dashboards and Kanban boards play crucial roles in ensuring that teams remain efficient and informed.

Real-Time Dashboards: The Pulse of Your Project

Real-time dashboards act as the heartbeat of project management. They provide a comprehensive, up-to-the-minute overview of ongoing tasks and milestones. This feature allows project teams to:

  • View updates instantaneously, thus enabling swift decision-making based on the most current data.
  • Track metrics such as task completion rates, resource allocation, and deadline adherence effortlessly.
  • Eliminate the delays associated with outdated information, ensuring that every team action is grounded in the present context.

Essentially, real-time dashboards empower teams with the data they need right when they need it, facilitating proactive management and quick responses to any project deviations.

Kanban Boards: Visualization and Prioritization

Kanban boards are pivotal for visualizing workflows and managing tasks efficiently. They:

  • Offer a clear visual representation of project stages, providing transparency across all levels of a team.
  • Help in organizing product backlogs and streamlining sprints by categorizing tasks into columns like "To Do," "In Progress," and "Done."
  • Enable scrum teams to prioritize tasks systematically, ensuring everyone knows what to focus on next.

By making workflows visible and manageable, Kanban boards foster better collaboration and continuous process improvement. They become a valuable archive for reviewing past sprints, helping teams identify successes and areas for enhancement.

In conclusion, both real-time dashboards and Kanban boards are integral to effective project management. They ensure that teams are always aligned with objectives, enhancing transparency and facilitating a smooth, agile workflow.

Typo - An Effective Sprint Analysis Tool

One such platform is Typo, which goes beyond the traditional metrics. Its sprint analysis is an essential tool for any team using an agile development methodology. It allows agile teams to monitor and assess progress across the sprint timeline, providing visual insights into completed work, ongoing tasks, and remaining time. This visual representation helps teams spot potential issues early and make timely adjustments.

Our sprint analysis feature leverages data from Git and issue management tools to focus on team workflows. Teams can track task durations, identify frequent blockers, and pinpoint bottlenecks.

With easy integration into existing Git and Jira/Linear/Clickup workflows, Typo offers:

  • A Velocity Chart that shows completed work in past sprints
  • A Sprint Backlog that displays all tasks slated for completion within the sprint
  • Status tracking for each sprint issue
  • Task duration measurement
  • Highlighting of areas where work is delayed, along with the blockers and causes behind it
  • Historical Data Analysis that compares sprint performance over time

Together, these features help agile teams stay on track, optimize processes, and deliver quality results efficiently.

Conclusion 

While the burndown chart is a valuable tool for visualizing task completion and tracking progress, it often overlooks critical aspects like team morale, collaboration, code quality, and factors impacting developer productivity. There are several alternatives to the burndown chart, with Typo’s sprint analysis tool standing out as a powerful option. Through this, agile teams gain a more comprehensive view of progress, fostering resilience, motivation, and peak performance.

'How AI is Revolutionizing Software Engineering' with Venkat Rangasamy, Director of Engineering at Oracle

In this episode of the groCTO Originals podcast, host Kovid Batra talks to Venkat Rangasamy, the Director of Engineering at Oracle & an advisory member at HBR, about 'How AI is Revolutionizing Software Engineering'.

Venkat discusses his journey from a humble background to his current role and his passion for mentorship and generative AI. The main focus is on the revolutionary impact of AI on the Software Development Life Cycle (SDLC), making product development cheaper, more efficient, and of higher quality. The conversation covers the challenges of using public LLMs versus local LLMs, the evolving role of developers, and actionable advice for engineering leaders in startups navigating this transformative phase.

Timestamps

  • 00:00 - Introduction
  • 00:58 - Venkat's background
  • 01:59 - Venkat's Personal and Professional Journey
  • 05:11 - The Importance of Mentorship and Empathy
  • 09:19 - AI's Role in Modern Engineering
  • 15:01 - Security and IP Concerns with AI
  • 28:56 - Actionable Advice for Engineering Leaders
  • 32:56 - Conclusion and Final Thoughts

Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of the groCTO podcast. And today with us, we have a very special guest, Mr. Venkat Rangasamy. He's the Director of Engineering at Oracle. He is the advisor at HBR Advisory Council, where he's helping HBR create content on leadership and management. He comes with 18 plus years of engineering and leadership experience. It's a pleasure to have you on the show, Venkat. Welcome. 

Venkat Rangasamy: Yup. Likewise. Thank you. Thanks for the opportunity to discuss on some of the hot topics what we have. I'm, I'm pleasured to be here. 

Kovid Batra: Great, Venkat. So I think there is a lot to talk about, uh, what's going on in the engineering landscape. And just for the audience, uh, today's topic is around, uh, how AI is impacting the overall engineering landscape and Venkat coming from that space with an immense experience and exposure, I think there will be a lot of insights coming in from your end. Uh, but before we move on to that section, uh, I would love to know a little bit more about you. Our audience would also love to know a little bit more about you. So anything that you would like to share, uh, from your personal life, from your professional journey, any hobbies, any childhood memories that shape up who you are today, how things have changed for you. We would love to hear about you. Yeah. 

Venkat Rangasamy: Yup. Um, in, in, in my humble background, I started, um, without nothing much in place, where, um, started my career and even studies, I did really, really on like, not even electricity to go through to, when we went for studies. That's how I started my study, whole schooling and everything. Then moved on to my college. Again, everything on scholarship. It's, it's like, that's where I started my career. One thing kept me motivated to go to places where, uh, different things and exploring opportunities, mentorship, right? That something is what shaped me from my school when I didn't have even, have food to eat for a day. Still, the mentorship and people who helped me is what I do today. 

With that context, why I'm passionate about the generative AI and other areas where I, I connect the dots is usually we used to have mentorship where people will help you, push you, take you in the right direction where you want to be in the different challenges they put together, right? Over a period of time, the mentorship evolved. Hey, I started with a physical mentor. Hey, this is how they handhold you, right? Each and every step of the way what you do. Then when your career moves along, then that, that handholding becomes little off, like it becomes slowly, it becomes like more of like instructions. Hey, this is how you need to do, get it done, right? The more you grow, even it will be abstracted. The one piece what I miss is having the handholding mentorship, right? Even though you grow your career, in the long run, you need something to be handholding you to progress along the way as needed. I see one thing that's motivated me to be part of the generative AI and see what is going on is, it could be another mentor for you to shape your roles and responsibility, your career, how do you want to proceed, bounce your ideas and see where, where you want to go from there on the problem that you have, right? In the context of the work-related stuff. 

Um, how, how you can, as a person, you can shape your career is something I'm vested, interested in people to be successful. In the long run, that's my passion to make people successful. The path that I've gone through, I just want to help people in a way to make them successful. That's my belief. I think making, pulling like 10 to 100, how many people you can pull out. The way when you grow is equally important. It's just not your growth yourself. Being part of that whole ecosystem, bring everybody around it. Everybody's career is equally important. I'm passionate about that and I'm happy to do that. And in my way, people come in. I want to make sure we grow together and and make them successful. 

Kovid Batra: Yeah, I think it's, uh, it's because of your humble background and the hardships that you've seen in the early of your, uh, childhood and while growing up, uh, you, you share that passion and, uh, you want to help other folks to grow and evolve in their journeys. But, uh, the biggest problem, uh, like when, when I see, uh, with people today is they, they lack that empathy and they lack that motivation to help people. Why do you think it's there and how one can really overcome this? Because in my foundation, uh, in my fundamental beliefs, we, as humans are here to give back to the community, give back to this world, and that's the best feeling, uh, that I have also experienced in my life, uh, over the last few years. I am not sure how to instill that in people who are lacking that motivation to do so. In your experience, how do you, how do you see, how do you want to inspire people to inspire others? 

Venkat Rangasamy: Yeah. No, it's, it's, it's like, um, It goes both ways, right? When you try to bring people and make them better is where you can grow yourself. And it becomes like, like last five to 10 years, the whole industry's become like really mechanics, like the expectation went so much, the breathing space. We do not have a breathing space. Hey, I want to chase my next, chase my next, chasing the next one. We leave the bottom food chain, like, hey, bring the food chain entirely with you until you see the taste of it in one product building. Bringing entire food chain to the ecosystem to bring them success is what makes your team at the end of the day. If we start seeing the value for that, people start spending more time on growing other people where they will make you successful. It's important. And that food chain, if it breaks, if it broke, or you, you kind of keep the food chain outside of your progression or growth, that's not actual growth because at one point of time, you get the roadblocks, right? At that point of time, your complete food chain is broken, right? Similar way, your career, the whole team, food chain is, it's completely broken. It's hard to bring them back, get the product launched at the time what you want to do. It's, it's, it's about building a trust, bring them up to speed, make them part of you, is what you have to do make yourself successful. Once you start seeing that in building a products, that will be the model. I think the people will follow that. 

The part is you rightly pointed out empathy, right? Have some empathy, right? Career can, it can be, can, can, it can go its own progress, but don't, don't squeeze too much to make it like I want to be like, it won't happen like in a timely manner like every six months and a year. No, it takes its own course of action. Go with this and make it happen, right? There are ups and downs in careers. Don't make, don't think like every, every quarter and every year, my career should be successful. No, that's not how it works. Then, then there is no way you see failure in your career, right? That's not the way equilibrium is. If that happened, everybody becomes evil. That's not a point, right? Every, everything in the context of how do you bring, uplift people is equally important. And I think people should start focusing more on the empathy and other stuff than just bringing as an IC contributor. Then you want to be successful in your own role, be an IC contributor, then don't be a professional manager bringing your whole.. There's a chain under you who trust you and build their career on top of your growth, right? That's important. When you have that responsibility, be meaningful, how do you bring them and uplift them is equally important. 

Kovid Batra: Cool. I think, uh, thanks a lot, uh, for this sweet and, uh, real intro about yourself. Uh, we got to, uh, know you a little more now. And with that, I, I'm sorry, but I was talking to you on LinkedIn, uh, from some time and I see that you have been passionately working with different startups and companies also, right, uh, in the space of AI. So, uh, With this note, I think let's move on to our main section, um, where you would, uh, be, where we would be interested in knowing, uh, what kind of, uh, ideas and thoughts, uh, are, uh, encompassing this AI landscape now, where engineering is changing on a day-in and day-out basis. So let's move on to our main section, uh, how AI is impacting or changing the engineering landscape. So, starting with your, uh, uh, advisories and your startups that you're working with, what are the latest things that are going on in the market you are associated with and how, how is technology getting impacted there? 

Venkat Rangasamy: Here is, here is what the.. Git analogy, I just want to give some history background about how AI is getting mainstream and people are not quite realizing what's happening around us, right? The part is I think 2010, when we started presenting cloud computing to folks, um, in the banking industry, I used to work for a banking customer. People really laughed at it. Hey, my data will be with me. I don't think it will move any time closer to cloud or anything. It will be with, with and on from, it is not going to change, right? But, you know, over a period of time, cloud made it easy. And, and any startups that build an application don't need to set up any infrastructure or anything, because it gives an easy way to do it. Just put your card, your infrastructure is up and running in a couple of hours, right? That revolutionized a lot the way we deploy and manage our applications.

The second pivotal moment in our history is mobile apps, right? After that, you see the application dominance was with enterprise most of the time. Over a period of time, when mobile got introduced, the distribution channels became easier to reach out to end users, right? Then a lot of billion-dollar unicorns like Uber and Spotify, everything got built out. That's the second big revolution happening. After mobile, I would say there were foundations happening like big data and data analytics. There is some part of ML, it, over a period of time it happened. But revolutionizing the whole aspect of the software, like how cloud and mobile had an impact on the industry, I see AI become the next one. The reason is, um, as of now, the software are built in a way, it's traditional SDLC practice, practice set up a long time ago. What, what's happening around now is that practice is getting questioned and changed a bit in the context of how are we going to develop a software, make them cheaper, more productive and quality deliverables. We used to do it in the 90s. If you've worked during that time, right, COBOL and other things, we used to do something called extreme programming. Peer programming and extreme programming is you, you have an assistant, you sit together, write together a bunch of instructions, right? That's how you start coding and COBOL and other things to validate your procedures. The extreme programming went away. And we started doing code based, IDE based suggestions and other things for developers. But now what's happening is it's coming 360, and everything is how Generative AI is influencing the whole aspect of software industry is, is, is it's going to be impactful for each and every life cycle of the software industry.

And it's just at the initial stage, people are figuring out what to do. From my, my interaction and what I do in my free time with NJ, Generative AI to Change this SDLC process in a meaningful way, I see there will be a profound impact on what we do in a software as software developers. From gathering requirements until deploying, deploying that software into customers and post support into a lifecycle will have a meaningful impact, impact. What does that mean? It'll have cheaper product development, quality deliverables. and having good customer service. What does it bring in over a period of time? It'll be a trade off, but that's where I think it's heading at this point of time. Some folks have started realizing, injecting their SDLC process into generative AI in some shape and form to make them better.

We can go in detail of like how each phases will look like, but that's, that's what I see from industry point of view, how folks are approaching generative AI. There is, there is, it's very conservative. I understand because that's how we started with cloud and other areas, but it's going to be mainstream, but it's going to be like, each and every aspect of it will be relooked and the chain management point of view in a couple of years, the way we see an SDLC will be quite different than what we have today. That's my, my, my belief and what I see in the industry. That's how it's getting there. Yep. Especially the software development itself. It's like eating your own dog food, right? It happened for a long time. This is the first time we do a software development, that whole development itself, it's going to be disturbed in a way. It'll be, it'll be, it'll be more, uh, profound impact on the whole product development. And it'll be cheaper. The product, go to market will be much cheaper. Like how mobile revolutionized, the next evolution will be on using, um, generative AI-like capability to make your product cheaper and go to market in a short term. That's, that's, that's going to happen eventually. 

Kovid Batra: Right. I think, uh, this, this is bound to happen. Even I believe so. It is, it is already there. I mean, it's not like, uh, you're talking about real future, future. It's almost there. It's happening right now. But what do you think on the point where this technology, which is right now, uh, not hosted locally, right? Uh, we are talking about inventing, uh, LLMs locally into your servers, into your systems. How do you see that piece evolving? Because lately I have been seeing a lot of concerns from a lot of companies and leaders around the security aspect, around the IP aspect where you are putting all your code into a third-party server to generate new code, right? You can't stop developers from doing that because they've already started doing it. Earlier, the method was going to stack overflow, taking up some code from there, going to GitHub repositories or GitLab repositories, taking up some code. But now this is happening from a single point of source, which is cloud hosted and you have to share your code with third parties. That has started becoming a concern. So though the whole landscape is going to change, as you said, but I think there is a specific direction in which things are moving, right? Very soon people realized that there is an aspect of security and IP that comes along with using such tools in the system. So how do you see that piece progressing in the market right now? And what are the things, what are the products, what are the services that are coming up, impacting this landscape? 

Venkat Rangasamy: It's a good question, actually. We, after a couple of years, right, what the realization even I came up with now, the services which are hosted on a cloud, like, uh, like, uh, public LLMs, right, which, you can use an LLM to generate some of these aspects. From a POC point of view, it looks great. You can see it, what is coming your way. But when it comes to the real product, making product in a production environment is not, um, well-defined because as I said, right, security audit complaints, code IP, right? And, and your compliance team, it's about who owned the IP part of it, right? It's those aspects as well as having the code, your IP goes to some trained public LLM. And it's, it's kind of a compromise where there is, there is, there is some concern around that area and people have started and enterprises have started looking upon something to make it within their workspace. End of the day, from a developer point of view, the experience what developer has, it has to be within that IDE itself, right? That's where it becomes successful. And keeping outside of that IDE is not fully baked-in or it's not fully baked-in part of the developer life cycle, which means the tool set, it has to be as if like it's running in local, right? If you ask me, like, is it doable? For sure. Yes. If you’d asked me an year back, I'd have said no. Um, running your own LLM within a laptop, like another IDE, like how do you run an IDE? It's going to be really challenging if you’d asked me an year back. But today, I was doing some recent experiment on this, um, similar challenges, right? Where corporates and other folks, then the, the, the, any, any big enterprises, right? Any security or any talk to a startup founders, the major, the major roadblock is I didn't want to share my IPR code outside of my workspace. Then bringing that experience into your workspace is equally important. 

With that context, I was doing some research with one of the POC project with, uh, bringing your Code Llama. Code Llama is one of the LLMs, public LLM, uh, trained by Meta for different languages, right? It's just the end of the day, the smaller the LLMs, the better on these kinds of tasks, right? You don't need to have 700 billion, 70 billion, those, those parameters are, is, it's irrelevant at this point of coding because coding is all about a bunch of instructions which need to be trained, right? And on top of it, your custom coding and templates, just a coding example. Now, how to solve this problem, set up your own local LLM. Um, I've tested and benchmarked in both Mac and PC. Mac is phenomenally well, I won't see any difference. You should be able to set up your LLM. There is a product called Ollama. Ollama is, uh, where you can use, set up your LLM within your workspace as if it's running, like running in your laptop. There's nothing going out of your laptop. Set up that and go to your IDE, create a simple plugin. I created a VC plugin, visual source plugin, connected to your local LLM, because Ollama will give you like a REST API, just connect it. Now, now, within your IDE, whatever code is there, that is going to talk to your LLM, which means every developer can have their own LLM. And as long as you have a right trained data set for basic language, Java, Python, and other thing, it works phenomenally well, because it's already trained for it. If you want to have a custom coding and custom templating, you just need to train that aspect of it, of your coding standards.

Once you train, keep it in your local, just run like part of an IDE. It's a whole integrated experience, which runs within developer workspaces, is what? Scalable and long run. It, if anything, if it goes out of that, which we, we, we have seen that many times, right, past couple of years. Even though we say our LLMs are good enough to do larger tasks in the coding side, if it's, if you want to analyze the complete file, if you send it to a public LLM, with some services available, uh, through some coding and other testing services, what we have, the challenges, number of the size of the tokens what you can send back, right? There is a limit in the number of tokens, which means if you want to analyze the entire project repository what you have, it's not possible with the way it's, these are set up now in a public site, right? Which means you need to have your own LLM within the workspace, which can work and in, in, it's like a, it's part of your workspace, that's what I would say. Like, how do you run your database? Run it part of your workspace, just make it happen. That is possible. And that's going to be the future. I don't think going any public LLM or setting up is, is, is not a viable option, but having the pipeline set up, it's like a patching or giving a database to your developers, it runs in local. Have that set up where everybody can use it within the local workspace itself. It's going to be the future and the tools and tool sets around that is really happening. And it's, it's at the phase where in an year's time from here, you won't even see that's a big thing. It's just like part of your skill. Just set up and connect your editor, whatever source code editor you have, just connect it to LLM, just run with it. I see that's a feature for the coding part of you. Other SDLCs have different nuance to it, but coding, I think it should be pretty straightforward in a year time frame. That's going to be the normal practice. 

Kovid Batra: So I think, uh, from what I understand of your opinion is that the, most of the market would be shifting towards their Local LLM models, right? Yeah. Uh, that that's going to be the future, but I'm not sure if I'm having the right analogy here, but let's talk about, uh, something like GitHub, which is, uh, cloud-sourced and one, which is in-house, right? Uh, the teams, the companies always had that option of having it locally, right? But today, um, I'm not sure of the percentage, uh, how many teams are using a cloud-based GitHub on a locally, uh, operated GitHub. But in that situation, they are hosting their code on a third party, right? The code is there. 

Venkat Rangasamy: Yup. 

Kovid Batra: The market didn't shape that way if we look at it from that perspective of code security and IP and everything. Uh, why do you think that this would happen for, uh, local LLMs? Like wouldn't the market be fragmented? Like large-scale organizations who have grown beyond a size have that mindset now, “Let's have something in-house.” and they would put it out for the local LLMs. Whereas the small companies who are establishing themselves and then, I mean, can it not be the similar path that happened for how you manage your code? 

Venkat Rangasamy: I think it is very well possible. The only difference between GitHub and LLM is, um, the artifact, the, GitHub is more like an artifact management, right? When you have your IP, you're just keeping it's kind of first repository to keep everything safe, right? It just with the versioning, branching and other stuff.

Kovid Batra: Right. 

Venkat Rangasamy: Um, the only problem there related to security is who's, um, is there any vulnerability within your code? Or it's that your repository is secure, right? That is kind of a compliance or everything needs to be there. As long as that's satisfied, we're good for that. But from an LLM lifecycle point of view, the, the IP, what we call so far in a software is a code, what you write as a code. Um, and the business logic associated to that code and the customizations happenening around that is what your IP is all about. Now, as of now, those IPs are patent, which means, hey, this is what my patent is all about. This is my IP all about. Now you have started giving your IP data to a public LLM, it'll be challenging because end of the day, any data goes through, it can be trained on its own. Using the data set, what user is going through, any LLM can be trained using the dataset. If you ask me, like, every application is critical where you cannot share an IP, not really. Building simple web pages or having REST services is okay because those things, I don't think any IP is bound to have. Where you have the core business of running your own workflows or your own calculations and that is where it's going to be more tough to use any public LLM.

And another challenge is, what I see in a community is, the small startups, right, they won't do much customization on the frameworks. Like they take Java means Java, right, Node means Node, they take React, just plain vanilla, just run through end-to-end, right? Their, their goal is to get the product up to market quicker, right, in the initial stage of when we have 510 developers. But when it grows, the team grows, what happens is, we, the, every enterprise it's bound to happen, I, I've gone through a couple of cycles of that, you start putting together a framework around the whole standardization of coding, the, the scaffolding, the creating your test cases, the whole life cycle will have enforced your own standard on top of it, because to make it consistent across different developers, and because the team became 5 to 1000, 1000 to 10,000, it's hard to manage if you don't have standards around it, right? That's where you have challenges using public LLM because you will have challenges of having your own code with your own standards, which is not trained by LLM, even though it's a simple application. Even simple application will have a challenge at those points of time. But from a basic point of view, still you can use it. But again, you will have a challenge of how big a file you can analyze using public LLM. It's the one challenge you might have. But the answer to your question, yes, it will be hybrid. It won't be 100 percent saying everybody needs to have their own LLM trained and set up. Initial stages, it's totally fine to use it because that's how it's going to grow, because startup companies don't have much resources to put together to build their own frameworks. But once they get in a shape where they want to have the standardized practices, like how they build their own frameworks and other things. Similar way, one point of time, they'd want to bring it up on their own setup and run with it. For large enterprise, for sure, they are going to have their own developer productivity suite, like what they did with their frameworks and other platforms. But for a small startup, start with, they might use public, but long run, eventually over a point of, over a period of time, that might get changed. 

And the benefit of getting hybrid is where you will, you'll make your product quick to market, right? Because end of the day, that's important for startups. It's not about getting everything the way they want to set up. It's important, but at the same time, you need to go to market, the amount of money what you have, where you want to prioritize your money. If I take it that way, still code generation and the whole LLM will play a crucial role on a, on the development. But how do you use and what third-party they can use? Of course, there will be some choices where I think in the future, what this, what I see is even these LLMs will be set up and trained for your own data in a, in a more of a hybrid cloud instead of a public cloud, which means your LLM, what you trained in a, in a hybrid cloud has visibility only to your code. It's not going, it's not a public LLM, it's more of a private LLM trained and deployed on, on a cloud can be used by your team. That'll, that'll, that'll be the hybrid approach in the long run. It's going to scale. 

Kovid Batra: Got it. Great. Uh, with that, I think, uh, just to put out some actionable advice, uh, for all the engineering leaders out there who are going through this phase of the AI transformation, uh, anything as an actionable advice for those leaders from your end, like what should they focus on right now, how they should make that transition? And I'm talking about, uh, companies where these engineering leaders are working, which are, uh, Series B, Series A, Series C kind of a bracket. I know this is a huge bracket, but what kind of advice you would give to these companies? Because they're in the growing phase of the, of the whole cycle of a company, right? So what, what should they focus on right now at this stage?

Venkat Rangasamy: Here, here is where some start. I was talking to some couple of, uh, uh, ventures, uh, recently about similar topic, how the landscape is going to change as for software development, right? One thing came up in that call frequently was cheaper to develop a product, go to market faster, and the expectation around software development has become changing quite a while, right? In the sense, the expectation around software development and the cost associated to that software development is where it's going to, it's going to be changing drastically. Same time, be clear about your strategy. It's not like we can change 50 percent of productivity overnight now. But at the same time, keep it realistic, right? Hey, this is what I want to make. Here is my charter to go through, from start from ideations to go to market. Here are the meaningful places where I can introduce something which can help the developers and other roles like PMs. Could be even post support, right? Have a meaningful strategy. Just don't go blank with the traditional way what you have, because your investors and advisors are going to start ask questions because they're going to see a similar pattern from others, right? Because that's how others have started looking into it. I would say proactively start going through that landscape and map your process to see where we can inject some of the meaningful, uh, area where it can have impacts, right?

And, and have, be practical about it. Don't think, don't give a commitment. Hey, I make 50 percent cheaper on my development and overnight you might burn because that's not reality, but just.. In my unit test cases and areas where I can build quality products within this money and I can guarantee that can be an industry benchmark. I can do that with introducing some of these practices like test cases, post customer support, writing code in some aspects, right? Um, that is what you need to set up, uh, when you started, uh, going for a venture fund. And have a relook of your SDLC process. That's important. And see how do you inject, and in the long term, that'll help you. And it'll be iterative, but at the end of the day, see, we've gone from waterfall to agile. Agile to many, many other paradigms within agile over a period of time. But, uh, the one thing what we're good at doing is in a software as an industry adapting to a new trend, right? This could be another trend. Keep an eye on it. Make it something where you can make it, make some meaningful impact on your products. I would, I would say, before your investor comes and talked about hey, can you do optimization here? I see another, my portfolio company does this, does this, does this. That's, it's, it's better to start yourself. Be collaborative and see if we can make something meaningful and learn across, share it in the community where other founders can leverage something from you. It will be great. That's my advice to any startup founders who can make a difference. Yep. 

Kovid Batra: Perfect. Perfect. Thank you, Venkat. Thank you so much for this insightful, uh, uh, information about how to navigate the situation of changing landscape due to AI. So, uh, it was really interesting. Uh, we would love to have you one another time on this show. I am sure, uh, you have more than these insights to share with us, but I think in the interest of time, we'll have to close it for today, and, uh, we'll see you soon again. 

Venkat Rangasamy: See you. Bye.

Webinar: ‘The Hows and Whats of DORA' with Bryan Finster and Richard Pangborn

Typo hosted an exclusive live webinar titled 'The Hows and Whats of DORA', featuring Bryan Finster and Richard Pangborn. With over 150 attendees, we explored how DORA can be misused and learned practical tips for turning engineering metrics into dev team success.

Bryan Finster, Value Stream Architect at Defense Unicorns and co-author of 'How to Misuse and Abuse DORA Metrics’, and Richard Pangborn, Software Development Manager at Method and advocate for Typo, brought valuable perspectives to the table.

The discussion covered DORA metrics' implementation and challenges, highlighting the critical role of continuous delivery and value stream management. Bryan provided insights from his experience at Walmart and Defense Unicorns, explaining the pitfalls of misusing DORA metrics. Meanwhile, Richard shared his hands-on experience with implementation challenges, including data collection difficulties and the crucial need for accurate observability. They also reinforced the idea that DORA metrics should serve as health indicators rather than direct targets. Bryan and Richard offered parting advice on using observability effectively and ensuring that metrics lead to meaningful improvements rather than superficial compliance. They both emphasized the importance of a supportive culture that sees metrics as tools for improvement rather than instruments of pressure.

The event concluded with an interactive Q&A session, allowing attendees to ask questions and gain deeper insights.

P.S.: Our next live webinar is on September 25, featuring DORA expert Dave Farley. We hope to see you there!

Timestamps

  • 00:00 - Introduction
  • 00:59 - Meet Richard Pangborn
  • 02:58 - Meet Bryan Finster
  • 04:49 - Bryan's Journey with Continuous Delivery
  • 07:33 - Challenges & Misuse of DORA Metrics
  • 20:55 - Richard's Experience with DORA Metrics
  • 27:43 - Ownership of MTTR & Measurement Challenges
  • 28:27 - Cultural Resistance to Measurement
  • 29:37 - Team Metrics vs Individual Metrics
  • 31:29 - Value Stream Mapping Insights
  • 33:56 - Q&A Session
  • 40:19 - Setting Realistic Goals with DORA Metrics
  • 45:31 - Final Thoughts & Advice

Transcript

Kovid Batra: Hi, everyone. Thanks for joining in for our DORA exclusive webinar, The Hows and Whats of DORA, powered by Typo. I'm Kovid, founding member at Typo and your host for today's webinar. With me, we have two special people. Please welcome the DORA expert for tonight, Bryan Finster, who is an exceptional Value Stream Architect at Defense Unicorns and the co-author of the ebook, 'How to Misuse and Abuse DORA Metrics', and one of our product mentors, and Typo advocates, Richard Pangborn, who is a Software Development Manager at Method. Thanks, Bryan. Thanks, Rich, for joining in. 

Bryan Finster: Thanks for having me. 

Richard Pangborn: Yeah, no problem. 

Kovid Batra: Great. So, like, before we, uh, get started and discuss about how to implement DORA, how to misuse DORA, uh, Rich, you have some questions to ask, uh, we would love to know a little bit about you both. So if you could just spare a minute and tell us about yourself. So I think we can get started with you, Rich. Uh, and then we can come back to Bryan. 

Richard Pangborn: Sure. Yeah, sounds good. Uh, my name is Richard Pangborn. I'm the Software Developer Manager here at Method. Uh, I've been a manager for about three years now. Um, but I do come from a Tech Lead role of five or more years. Um, I started here as a junior developer when we were just in the startup phase. Um, went through the series funding, the investments, the exponential growth. Today we're, you know, over a 100-person company with six software development teams. Um, and yeah, Typo is definitely something that we've been using to help us measure ourselves and succeed. Um, some interesting things about myself, I guess, is I was part of the company and team that succeeded when we did a Intuit hackathon. Um, it was pretty impactful to me. Um, We brought this giant check, uh, back with us from Cali all the way to Toronto, where we're located. Uh, we got to celebrate with, uh, all of the company, everyone who put in all the hard and hard work to, to help us succeed. Um, that's, that's sort of what pushed me into sort of a management path to sort of mentor and help those, um, that are junior or intermediate, uh, have that same sort of career path, uh, and set them up for success.

Kovid Batra: Perfect. Perfect. Thanks, Richard. And something apart from your professional life, anything that you want to share with the audience about yourself? 

Richard Pangborn: Uh, myself, um, I'm a gamer, um, I do like to golf, I do like to, um, exercise, uh, something interesting also is, um, I met my, uh, wife here at the company who I still work with today.

Kovid Batra: Great. Thank you so much, Rich. Bryan, over to you. 

Bryan Finster: Oh, yes. I'm Bryan Finster. I've been a software developer for, oh, well, since 1996. So I'll let you do the math. I'm mostly doing enterprise development. I worked for Walmart for 19 of those years, um, in logistics for most of that time and, uh, helped pilot continuous delivery at Walmart inside logistics. I've got scars to show for it. Um, and then later moved to platform at Walmart, where I was originally in charge of the delivery metrics we were gathering to help teams understand how to do continuous delivery so they can compare themselves to what good continuous delivery looked like. And then later was asked to start a dojo at Walmart to directly pair with teams to help them solve the problem of how do we do CD. And then about a little over three years ago, I was, I joined Defense Unicorns as employee number three of three, uh, and we're, we're now, um, over 150 people. We're focused on how do we help the Department of Defense deliver, um, you know, do continuous delivery and secure environments. So it's a fun path.

Kovid Batra: Great, great. Perfect. And the same question to you. Something that LinkedIn doesn't tell about you, you would like to share with the audience. 

Bryan Finster: Um, computers aren't my hobby. Uh, I, you know, it's a lot better than roofing. My dad had a construction company, so I know what that's like. Um, but I, I very much enjoy photography, uh, collecting watches, ride motorcycles, and build plastic models. So that's where I spend my time. 

Kovid Batra: Nice. Great to know that. All right. So now I think, uh, we are good to go and start with the main section of, of our webinar. So I think first, uh, let's, let's start with you, Bryan. Um, I think you have been a long-time advocate of value streams, continuous delivery, DORA metrics. You just briefly told us about how this journey started, but let's, let's deep dive a little bit more into this. Uh, tell us about how value stream management, continuous delivery, all this as a concept started appealing to you from the point of Walmart and then how it has evolved over time for you in your life.

Bryan Finster: Sure. Uh, no, at Walmart, um, continuous delivery was the answer to a problem. It wasn't, it was, we had a business problem, you know, our lead time for change in logistics was a year. We were delivering every quarter with massive explosions. Every time we piloted, I mean, it was really stressful. Um, any, anytime we did a big change of recorder, we had planned 24 by 7 support for at least a week and sometimes longer, um, And it was just a complete nightmare. And our SVP, instead of hiring in a bunch of consultants, cause we've been through a whole bunch of agile transformations over the years, asked the senior engineers in the area to figure out how we can deliver every two weeks. Now, if you can imagine these giant explosions happening every two weeks instead of every quarter, we didn't want that. And so we started digging in, how do we get that done? And my partner in crime bought a copy of continuous delivery. We started reading that book cover to cover, pulling out everything we could, uh, started building Jenkins pipelines with templates, so the teams didn't have to go and build their own pipeline. They can just extend the base template which was a pattern we took forward later. And, and, uh, we built a global platform. I started trying to figure out how do we actually do the workflow that enables continuous delivery. I mean, we weren't testing at all. Think how scary that is. Uh, other than, you know, handing it off to QA and say, "Hey, test this for us.

And so I had to really dig into how do we do continuous integration. And then that led into what's the communication problems that are stopping us from getting information so we can test before we commit code. Um, and then once you start doing that at the team level, what's preventing us from getting all the other information that we need outside the team? How do we get the connection? You know that, all the, all the roadblocks that are preventing us from doing continuous delivery, how do we fix those? Which kind of let me fall backwards in the value stream management because now you're looking at the broader value stream. It's beyond just what your team can do. Um, and so it's, uh, it's, it's been just a journey of solving that problem of how do we allow every team to independently deploy from any other team as frequently as they can. 

Kovid Batra: Great. And, and how do, uh, DORA metrics and engineering metrics, while you are implementing these projects, taking up these initiatives, play a role in it?

Bryan Finster: Well, so, you know, all this effort that we went on predated Accelerate coming out, but I was going to DevOps Enterprise Summit and learning as much as I could starting in 2015 and talking to people about how do we measure things, cause I was actually sent to DevOps Enterprise Summit the first time to figure out how do we measure if we're doing it well, and then started pulling together, you know, some metrics to show that we're progressing on this path to CD, you know, how frequently integrating code, how many defects are being generated over time, you know, and how, how often can individuals on the team deploy as like, you know, deploys per day per developer was a metric that Jim proposed back in 2015 as just a health metric. How are we doing? And then later in the, and when we started the dojo in platform at Walmart, we were using a metrics-based approach to help teams. Continuous delivery was the method we were using to improve engineering excellence in the organization. We, you know, we weren't doing any Agile frameworks. It was just, why can't we deliver change daily? Um, and early on when we started building the platform, the first tool was the CI tool. Second tool was how do we measure. And we brought in CapitalOne's Hygieia, and then we gamified delivery metrics so we can show teams with a star rating how they were doing on integration frequency, build time, build success rate, deploy frequency, you know, and code complexity, that sort of thing, to show them, you know, this is what good looks like, and here's where you are. That's it. Now, I learned a lot from that, and there's some things I would still game today, and some things I would absolutely not gamify. Um, but that's where I, you know, I spent a long time running that as the game master about how do we, how do we run the game to get teams to want to, want to move and have shown where to go.

And then later, Accelerate came out, and the big thing it did was validate everything we thought was true, all the experiences we had. The reason I'm so passionate about this is that the first experience with CD was such a morale improvement on the team that nobody ever wanted to work any other way, and when new leadership later forced them to stop working that way, everyone who could left. That's just the reality of it. But Accelerate came out and said these exact things that we were seeing. And it wasn't just a one-off. It wasn't localized to what we were seeing; it was everywhere.

Kovid Batra: Yeah, totally makes sense. I think it's been a burning topic, and a lot of talks have been around it. These things apply at the team level and the system level. In fact, there's the McKinsey article that came out talking about dev productivity as well. So I actually have a question there.

Bryan Finster: Oh, I shouldn't have read the article. Yeah, go ahead. 

Kovid Batra: I mean, it's basically talking about individual dev productivity, right? People say that it can be measured. So, what's your take on that?

Bryan Finster: That's really dumb. If you want to absolutely kill outcomes, focus on HR metrics instead of outcome metrics. And I want to touch a little bit on the DORA metrics here. Having worked to apply those metrics on top of the metrics we were already using, some of them are useful, but you have to understand they came from surveys, and for some of them, if you try to measure them directly, you won't get useful data. They don't tell you things are going well; they only tell you things are going poorly. You can't use them as the thing that tells you whether you're delivering value well. They're something that causes you to ask questions about what might be going wrong, but they're not something you use like a dashboard.

Kovid Batra: Makes sense. And I think the book that you have written, 'How to Misuse and Abuse DORA Metrics', let's talk about that a little bit. You have summarized a lot of things there about how DORA metrics, or engineering metrics for that matter, should not be used. So how do you think teams should be using them? When do teams actually feel the need for these metrics, and in which areas?

Bryan Finster: Well, I think observability in general is something people don't pay enough attention to, and not just production observability, but how we are working as a team. You have to think of it first from what we are trying to do with product development. A big mistake people make is assuming that their idea is correct, and that all we have to do is build something according to spec, make sure it tests according to spec, and deliver it when we're done, when fundamentally the idea is probably wrong. So the question is: how big a chunk of wrong idea do I want to deliver to the end user, and how much money do I want to spend doing that? What we're trying to do is become much more efficient about how we make change, so we can make smaller changes at lower cost, be more effective about delivering value, and deliver less wrong stuff. So what you're really trying to measure is the way we work and the way we test, to find areas where we can improve that workflow, reduce the cost, and increase the velocity at which we can deliver change. Then we can deliver smaller units of work more frequently, get faster feedback, and adjust our idea, right? If you're just looking at "Oh, we need to deliver faster," you're missing that the reason we want to deliver faster is to get faster feedback on the idea. And also, from my perspective, after 20 years of carrying a pager, being able to fix production very, very quickly and safely. I think those are both key things.

And so what we're trying to do with the metrics is identify where those problems are. In the paper I wrote for IT Revolution on how to misuse and abuse DORA metrics, which was about twice as long as they asked me for, I went into the details of how we applied those metrics in real life at Walmart, when we were working with teams to help them improve and also using them on ourselves. I think if a team really wants to focus on improving, the first thing they should measure is how well they're doing at continuous integration: how frequently are we integrating code, how long does it take us to finish whatever a unit of work is, and how many defects are we generating over time as a trend. And measure trends, and improve all those trends at the same time.

Kovid Batra: How do we measure this piece where we are talking about measuring the continuous integration? 

Bryan Finster: So, as an average on the team, how frequently are we integrating code? You really want to be at least daily, right? And that's integrated to the trunk, not to some develop branch. And then also, people generally work on a task or a story or whatever it is: how long does it take to go from when we start that work until it's delivered? What's that time frame? There are other times within that we can measure, and that's when we get into value stream mapping; we can talk about that later. But we want small units of work, because you get higher-quality information from smaller units of work, and you're more predictable on delivery of that unit of work, which takes a lot of pressure off; it eliminates story points. But then you also have to balance those with the quality of what we did, and you can't measure that quality until it's in production, because testing to spec doesn't mean it's good. 'Fit for purpose' means the user finds it good.
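
To make the two numbers Bryan describes concrete, here is a minimal sketch: trunk integrations per developer per day, and elapsed days from starting a unit of work to delivering it. It assumes you can export merge timestamps from Git and start/done timestamps from your tracker; the `merges.csv` and `work_items.csv` files and their column names are illustrative, not any specific tool's format.

```python
"""Rough CI health numbers, per the discussion above:
- integrations to trunk per developer per day (want roughly >= 1)
- elapsed days from starting a unit of work to delivering it (want ~2 or less)
The input files and column names below are hypothetical exports.
"""
import csv
from datetime import datetime
from statistics import mean

def load_rows(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def integrations_per_dev_per_day(merges, days_in_window):
    # merges.csv: author,merged_at  (only merges to trunk/main)
    authors = {m["author"] for m in merges}
    if not authors or days_in_window == 0:
        return 0.0
    return len(merges) / (len(authors) * days_in_window)

def avg_cycle_time_days(work_items):
    # work_items.csv: id,started_at,delivered_at  (ISO 8601 timestamps)
    durations = []
    for item in work_items:
        start = datetime.fromisoformat(item["started_at"])
        done = datetime.fromisoformat(item["delivered_at"])
        durations.append((done - start).total_seconds() / 86400)
    return mean(durations) if durations else 0.0

if __name__ == "__main__":
    merges = load_rows("merges.csv")
    items = load_rows("work_items.csv")
    # 5 working days assumed for the reporting window; adjust as needed.
    print(f"integrations/dev/day: {integrations_per_dev_per_day(merges, 5):.2f}")
    print(f"avg cycle time (days): {avg_cycle_time_days(items):.1f}")
```

Tracked week over week as trends, alongside defect counts, these two numbers answer "are we integrating at least daily?" and "can we finish a unit of work in about two days?" without turning either into a target.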

Kovid Batra: Right. Can you give us some examples where you've seen implementing these metrics go completely south instead of working positively? How exactly were they abused and misused?

Bryan Finster: Yeah, every single time somebody builds a dashboard without really understanding what problems they're trying to solve. I've seen lots of people over the years since Accelerate was published building dashboards to sell, but they don't understand the core problem they're trying to solve. But also, you have management who reads the book and says, "Oh, look, here's the answer." You know, I helped cause this problem by saying, "Hey, look at these four key metrics," which is why I work so hard to fix it. They can tell you some things, but then people start using them as goals instead of health indicators that are contextual to individual teams. And when you start saying, "Hey, all teams must have this level of delivery frequency," well, maybe, but everybody has their own delivery context. You're not going to deliver to an air-gapped environment as frequently as you are to AWS, right? So you have to understand what it is you're actually trying to do. What decisions are you going to make with any metric? What questions are you trying to answer before you go and measure it? You have to define the problem before you try to measure whether you're successful at correcting it.

Kovid Batra: Right, makes sense. There are challenges that I've seen in teams. Of course, Typo is getting implemented in various organizations, and what we've commonly come across is that teams start using it, but when certain indicators get highlighted from those metrics, they're not sure what to do next.

Bryan Finster: Right. 

Kovid Batra: So I'm sure you must. 

Bryan Finster: Well, the reason why is that they didn't know why they were measuring it in the first place, right? Like I said, DORA metrics specifically tell you something, but they're very much trailing metrics, which is why I point to CI, because the CI workflow is really the engine that starts driving improvement. Once you get better at that, you say, "Well, why can't I deliver today's work today?" and you start finding other things in the value stream that are broken. But then you have to identify: okay, we see this issue here with code review, we see this issue here, we have this handoff to another team downstream of development before we can deploy. How do we improve those? And how can we measure that we are improving? So you have to ask the question first, and then come up with the metrics you'll use to evaluate success.

So when people say, "I don't know what to do with this number," it's because they started with a metric and then tried to figure out what to do with it, because someone told them it was a good metric. No metric is a good metric unless you know what you're doing with it. If I put a tachometer on a car and you think that more is better but you don't understand what the tachometer is telling you, you'll just blow up your engine.

Kovid Batra: But don't you think there's a way to not know what to measure at first, and to identify what to measure from these metrics themselves? For example, we have certain benchmarks for different industries for each metric, right? Let's say I start looking at lead time, deployment frequency, mean time to restore, and the various other metrics, and from there I try to identify where my engineering efficiency or productivity is getting impacted. So can it not be a top-down approach, where we find out what we need to measure and improve upon from those metrics themselves?

Bryan Finster: Only if you start with a question you're trying to answer. But I wouldn't compare. One of the other problems I have with the DORA metrics specifically, and I've talked to DORA at Google about this as well, is that some of the survey questions are nonspecific. "For the system you work on most of the time, how frequently do you deliver?" Well, are you talking about a thousand developers, a hundred developers, a team of eight? Your delivery frequency is going to be very much relative to the number of people working on it, plus other constraints outside of it. And yes, high performers deliver multiple times a day with lead times of less than an hour, except what's the definition of lead time? There are two inside Accelerate, and they're different depending on how you read it. But that doesn't mean you should just copy what it says. You should look at that and say, "Okay, what am I trying to accomplish? How can I apply these ideas, not necessarily the metrics directly, to measure what I'm trying to measure and find out where my problems are?" You have to deep dive into where your problems are, not just, "Hey, measure these things and here's your benchmarks."

Kovid Batra: Makes sense. Richard, we have been talking for a while; if you have any questions, let's hear from you as well. Richard has been using Typo for a while now, and I'm sure in this journey of implementing engineering metrics and DORA metrics in his team, he has seen certain challenges. Richard, the stage is yours.

Richard Pangborn: Yeah, sure. My research into using DORA metrics stemmed from building high-performing teams. We're always looking for continuous improvement, but we're really looking for ways to measure ourselves that make sense, that can't be totally gamed, that are like standards. What I liked about DORA was that it had counterbalancing metrics: throughput versus quality, time to repair versus time to build, speed versus stability. It's a nice counterbalancing effect. And high-performing teams care about continuous improvement. They want to do better than they did last quarter or last month, and they want help with decision-making: better data to take some of the guesswork out of which area needs the most improvement, or where our pipeline is broken, whether for continuous delivery or for quality. I want to make sure they're making a difference, that they're moving the needle, and that it ladders up. A lot of companies have different measurements at different levels: company level, department level, team level, individual level. With DORA, we were able to identify some that do ladder up, which is great.

There were some challenges with implementing DORA when we first started. One of the first was the complexity around data collection: accurately tracking and measuring DORA metrics. Deployment frequency, lead time for changes, change failure rate, and recovery time all come from different sources: CI/CD pipelines, version control systems, incident management tools. Integrating these data sources and ensuring they provide consistent results can be time-consuming and a little difficult to understand. So that was definitely one part of it. We haven't rolled out all four yet; we're still in the process, just ensuring that what we are measuring is accurate.

Bryan Finster: Yeah, and I'm glad you touched on the accuracy thing. When we would go and work with teams and start collecting data, number one, we had data from the pipeline because it was embedded into the platform, but we also knew that while the Git data was accurate, the workflow data was going to be garbage unless the teams actually cared about using Jira correctly. So education step number one, while we were cleaning up the data in Jira, was teaching them why Jira should actually matter to them: it's not a time-tracking tool, it's a communication tool. We educated them so they would take it seriously, so the workflow data would be accurate, so they could then use it to identify where improvements could happen, because we were trying to teach them how to improve, not just to do what we said. I've built a few data collection tools since we started this, and collecting the data and showing where accuracy problems happen, as part of the dashboard, is something that needs to be understood, because people will just say, "Oh, the data's right." Especially with workflow data, one of the things we did on the last one I built was show where we were out of bounds, very high or very low. I'd hear, "Well, look, we're doing really good, I've got stuff closing here really fast," and I'm like, "You're telling me it took 30 seconds to do that unit of work?" Yeah, the accuracy issues. And MTTR is something DORA has talked about ditching entirely, because it's far too noisy a metric if you're trying to collect it automatically.

Richard Pangborn: Yeah, we haven't started tracking MTTR yet. We're more concerned with throughput versus stability, which would have the biggest impact at the department level and the team level. I think that's made the difference so far. We also have a challenge with doing a lot of things manually, a lack of tooling and automation. There are a lot of manual measurements taking place, so, like you said, error-prone data collection and inconsistent processes. Once we get to a more automated state, I feel it will be a bit more successful.

Bryan Finster: Yeah. There's a dashboard I built for the Air Force; I'll send you a link later. It might be useful, I'm not sure. But the other thing is that change failure rate is something people misunderstand a lot, and I've combed through Accelerate multiple times. Walmart actually asked us to reverse engineer the survey from the book, so I've gone back through it in depth. Change failure rate is any defect. It's not an incident. If you go and read what it says about change failure rate, it's any defect, which it should be, because the idea being wrong counts too. If the user reports it's defective and you say, "Well, that's a new feature," no, the idea was defective. It's not fit for purpose, unless it's some edge case, but we should track that as well, because it's part of our quality process, and change failure rate is trying to track our quality process.
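
Read this way, the metric behaves more like a defect arrival rate than an incident count. Here is a hedged sketch of that interpretation, assuming you can export deployments and user-reported defects with dates; the `deployments.csv` and `defects.csv` files and their columns are made up for illustration.

```python
"""Change failure rate read as 'defect arrival rate': user-reported defects
per deployment, by month, tracked as a trend rather than a target.
deployments.csv: deployed_at ; defects.csv: reported_at  (ISO timestamps)
These are hypothetical exports; adjust to whatever your tooling records.
"""
import csv
from datetime import datetime

def month_of(ts):
    return datetime.fromisoformat(ts).strftime("%Y-%m")

def monthly_counts(path, column):
    counts = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = month_of(row[column])
            counts[key] = counts.get(key, 0) + 1
    return counts

if __name__ == "__main__":
    deploys = monthly_counts("deployments.csv", "deployed_at")
    defects = monthly_counts("defects.csv", "reported_at")
    for month in sorted(deploys):
        rate = defects.get(month, 0) / deploys[month]
        print(f"{month}: {deploys[month]} deploys, "
              f"{defects.get(month, 0)} defects, {rate:.2f} defects/deploy")
```

As Bryan notes, the number is only as good as the discipline with which defects are actually recorded, so the trend matters more than any single month's value.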

Richard Pangborn: Another problem we had is mean time to recovery. Because we track our bugs or defects differently, they have different priorities. P0s here have to be fixed in less than 24 hours, priority 1 means five days, and priority 2 gives you two weeks. So trying to come up with an algorithm to accurately identify time to fix, I guess you'd have three or four different ones instead of one.
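
One way around the single-number problem Richard describes is to report time-to-fix per priority against that priority's own SLA, instead of forcing one MTTR-style figure. A small sketch of that, assuming a defect export with priority and open/close timestamps; the file and field names are illustrative.

```python
"""Time-to-fix per defect priority, compared against its own SLA.
Example SLAs from the discussion: P0 < 24 hours, P1 < 5 days, P2 < 14 days.
defects.csv columns (hypothetical): priority,opened_at,closed_at
"""
import csv
from datetime import datetime
from statistics import median

SLA_HOURS = {"P0": 24, "P1": 5 * 24, "P2": 14 * 24}

def hours_to_fix(row):
    opened = datetime.fromisoformat(row["opened_at"])
    closed = datetime.fromisoformat(row["closed_at"])
    return (closed - opened).total_seconds() / 3600

if __name__ == "__main__":
    by_priority = {}
    with open("defects.csv", newline="") as f:
        for row in csv.DictReader(f):
            by_priority.setdefault(row["priority"], []).append(hours_to_fix(row))

    for priority, hours in sorted(by_priority.items()):
        sla = SLA_HOURS.get(priority)
        sla_text = f"{sla}h SLA" if sla else "no SLA defined"
        within = sum(1 for h in hours if sla is None or h <= sla)
        print(f"{priority}: median {median(hours):.1f}h to fix, "
              f"{within}/{len(hours)} within {sla_text}")
```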

Bryan Finster: I've tried to solve that problem too, and especially on distributed systems it becomes very difficult. Who's getting measured on MTTR, right? Because MTTR, by definition, starts when the user sees impact, so really, whoever owns the user interface owns that metric, even if you're trying to help a team improve their processes for recovery. It's just a really difficult metric to do anything with. I've tried to measure it directly. I've talked to Verizon, Capital One, and other people in the Dojo Consortium; they've tried as well, and nobody's been successful at measuring it. I think there are better metrics out there for how fast we can resolve defects.

Richard Pangborn: One of the things we were concerned about at the beginning was resistance to measurement. Some people don't want to be measured.

Bryan Finster: That's because they've had management beating them over the head with metrics and using them as a weapon, so it's a massive fear thing. And it's a cultural thing: you have to have a generative culture to make these metrics effective. One of the things we would do when we started working with teams is, number one, explain: we're not trying to judge you. We're like your doctor. We're working with you, in the trenches with you. These are all of our metrics, not just yours, and here's how to use them to help you improve. And if a manager comes and starts trying to beat you up with them, people will just stop making the data valid.

Richard Pangborn: Yeah. Well, some developers do want to know, "Am I doing well? How do I measure myself?" So this gives them a way to do that a little bit, but we told them: you set your own goals, improve yourself. Don't measure yourself against another developer on your team or anyone else; you're looking for your own improvement.

Bryan Finster: Well, I think it's also really important that the smallest unit measured with delivery metrics is the team, not the person. If individuals are being measured, they're going to optimize for themselves instead of optimizing for team goals. And this is something I've seen frequently. On the dojo team, we could walk into a team and tell that if the dashboards had filters by individual developer, that team was seriously broken. I've seen managers who measured team members by how many Jira issues they closed, which meant code review was going to be delayed, mentoring was not going to happen, you'd have senior engineers focusing on easy tasks to get their numbers up instead of solving the hard problems, and design was not going to happen well because it wasn't a ticket. So you focus on team outcomes and measure team goals; individual performance is a different matter, because everybody has different roles on the team. People know that from an HR perspective: coaching by walking around is how you find out who's struggling. You go to the gemba and find out who's struggling. You can't measure people directly that way without hurting team goals and business goals.

Richard Pangborn: Yeah, I don't think we measure it as whether or not they're successful; it's just something for them to watch for themselves.

Bryan Finster: As long as somebody else can see it. I mean. 

Richard Pangborn: Yeah, it's just for them, isn't it? Not for anyone else. 

Bryan Finster: Yeah. 

Richard Pangborn: Cool. Yeah, that's about it for me at the moment.

Kovid Batra: Perfect. Rich, if you're done with your questions, we have already started seeing questions from the audience.

Bryan Finster: There's one other thing I'd like to mention real quick before we go there.

Kovid Batra: Sure. 

Bryan Finster: I also gave a talk about how to misuse and abuse DORA metrics, and people think there are just four key metrics to focus on, but read Accelerate. There's a lot more in that book that you should measure, including culture. It's important that you look at this as a holistic thing and not just focus on these metrics to show how well we're doing at CD. The most valuable thing in Accelerate is Appendix A, not the four key metrics. That's number one. Number two: value stream maps. They're manual, but they give you far deeper insights into what's going wrong than the four key metrics will. So learn how to do value stream maps, and learn how to use them to identify problems and fix them.

Kovid Batra: And how exactly? I'm expecting an example here. When you're dealing with value stream maps, are you collecting data from systems, collecting data from people through surveys? What exactly are you creating?

Bryan Finster: No, I don't collect any data from the system initially. If I'm doing a value stream map, it means bringing a team together. We're not doing it at the organization level; we're doing it at the team level. You bring a team together, and then you talk about the process of how we deliver change, starting from delivery and working backwards to initiation. You get a consensus from the team about how long things take and how long things wait to start. Then you start seeing things like: oh, we do asynchronous code review, so I'm ready for code review to start, and four to eight hours later somebody picks it up and reviews it. Then I find out later that they're done and there are changes to make, maybe the next day. Then I go make those changes, resubmit, and four to eight hours later somebody re-reviews it. And you see things like: well, what if we just sat down, discussed the change together, and fixed it on the fly, and removed all that wait time? That would encourage smaller pieces of work, and we could deliver more frequently and get faster feedback. You can see immediate improvements from things like that, just by doing a value stream map. Bringing the team together will give you much higher-quality data than trying to instrument it, because not all of those things have data being collected anywhere.
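
The output of a session like that is usually just a list of steps with "how long it takes" and "how long it waits." Even a crude flow-efficiency calculation over those consensus estimates makes the code-review wait Bryan describes jump out. The step names and hours below are invented for illustration; they stand in for whatever the team agrees on in the room.

```python
"""Flow efficiency from a team value stream map. Each step is
(name, active work hours, waiting hours). The numbers are invented
examples of team consensus estimates, not instrumented data.
"""
steps = [
    ("write code",          8.0,  0.0),
    ("wait for review",     0.0,  6.0),   # the 4-8 hour async review wait
    ("code review",         1.0,  0.0),
    ("wait for re-review",  0.0,  6.0),
    ("rework + re-review",  2.0,  0.0),
    ("wait for deploy",     0.0, 16.0),   # e.g. handoff to another team
    ("deploy + verify",     1.0,  0.0),
]

work = sum(w for _, w, _ in steps)
wait = sum(q for _, _, q in steps)
lead_time = work + wait
print(f"lead time: {lead_time:.0f}h  (work {work:.0f}h, wait {wait:.0f}h)")
print(f"flow efficiency: {100 * work / lead_time:.0f}%")
print("largest waits (the usual improvement targets):")
for name, _, q in sorted(steps, key=lambda s: s[2], reverse=True):
    if q:
        print(f"  {name}: {q:.0f}h idle")
```

With these sample numbers the work adds up to 12 hours and the waiting to 28, so most of the lead time is queueing; removing a single handoff or switching to synchronous review moves the number far more than typing faster ever will.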

Kovid Batra: Makes sense. All right, we'll take a minute's break and then start with the Q&A. So, audience, please send in any questions you have.

All right. Uh, we have the first question. 

Bryan Finster: Yeah. So MTTR is a metric measuring customer impact: the time from when a customer or user is impacted until they are no longer impacted. That doesn't mean you fixed the defect; it means they are no longer being impacted. Roll back, roll forward, it doesn't matter. That's what MTTR measures.
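
Measured the way Bryan defines it, the clock runs from the first moment of user impact to the moment impact ends, whether that end came from a rollback, a feature flag, or a fix. A minimal sketch under that definition; the incident records below are invented for illustration.

```python
"""MTTR per the definition above: time from user impact starting to user
impact ending, regardless of how the impact was removed.
Each incident is (impact_started_at, impact_ended_at) as ISO timestamps;
the records here are invented examples.
"""
from datetime import datetime
from statistics import mean

incidents = [
    ("2024-05-02T10:15:00", "2024-05-02T10:40:00"),  # rolled back
    ("2024-05-09T14:00:00", "2024-05-09T16:30:00"),  # fixed forward
    ("2024-05-20T08:05:00", "2024-05-20T08:25:00"),  # feature flag disabled
]

def minutes(start, end):
    delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    return delta.total_seconds() / 60

durations = [minutes(s, e) for s, e in incidents]
print(f"MTTR: {mean(durations):.0f} minutes over {len(durations)} incidents")
```

The hard part, as the discussion notes, is not the arithmetic but deciding when impact actually started and ended, which is why collecting this automatically tends to be noisy.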

Kovid Batra: Perfect. Let's, let's move on to the next one. 

Bryan Finster: Yeah. So, there are some things where I can set hard targets as ways to know we're doing well. Integration frequency is one of those: if we're integrating once per day or better into the trunk, then we're doing a really good job of breaking down our work, and we're doing a good job of testing, as long as we keep our defects from blowing up. You can set targets for that. You can also set targets as a team, not something imposed on a team: this is something we as a team do, that we want to keep a story size of two days or less. Paul Hammant would say one day or less, but I think two days is a good time limit; if it takes us more than two days, we'll start running into other dysfunctions that cause quality impact and issues with delivery. So I've built dashboards where I have a line on those two graphs that says "this is what good looks like," so teams can compare themselves to good. Other things you don't want to gamify: you don't ever want to measure test coverage and say, "Hey, this is what good test coverage looks like," because test coverage doesn't measure quality. It just measures how much code is executed by code that says it's a test, whether it's a test or not. So you don't want to do that. That's a fail; I learned it the hard way. Delivery frequency, of course, is relative to the delivery problem. You may be delivering every day, every hour, every week, and that could all be good. It just depends. But you can make objective measurements on integration frequency and how long a unit of work takes to do.
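
Bryan's two "hard" lines translate directly into a dashboard overlay: integrate to trunk at least once per developer per day, and keep a unit of work under roughly two days. Here is a sketch of that comparison, treating the thresholds as reference lines rather than goals; the weekly sample numbers are invented.

```python
"""Compare weekly team trends against the two reference lines discussed:
>= 1 trunk integration per developer per day, and <= ~2 days per unit of
work. The sample data below is invented for illustration.
"""
GOOD_INTEGRATIONS_PER_DEV_PER_DAY = 1.0
GOOD_MAX_STORY_DAYS = 2.0

weekly = [
    # (week, integrations/dev/day, avg story days)
    ("2024-W18", 0.4, 4.5),
    ("2024-W19", 0.7, 3.0),
    ("2024-W20", 1.1, 2.2),
]

for week, integ, story_days in weekly:
    notes = []
    if integ < GOOD_INTEGRATIONS_PER_DEV_PER_DAY:
        notes.append("integrating less than daily")
    if story_days > GOOD_MAX_STORY_DAYS:
        notes.append("stories larger than ~2 days")
    status = "; ".join(notes) if notes else "at or above the reference lines"
    print(f"{week}: {integ:.1f} integ/dev/day, "
          f"{story_days:.1f}-day stories -> {status}")
```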

Kovid Batra: Cool. Moving on to the next one: any recommendations for where we can learn value stream mapping?

Bryan Finster: Yeah, Steve Pereira and Andrew Davis released 'Flow Engineering'. There are lots of older books on value stream mapping, but they're mostly focused on manufacturing. In the Flow Engineering book, Steve and Andrew talk about using value stream maps to identify problems and how to go about fixing them. It was just released earlier this year.

Kovid Batra: Cool. Moving on to the next one: when would you start, and how do you convince upper management? They want KPIs now, and we are trying to get a VSM expert to come in and help. It's a hard sell.

Bryan Finster: Yeah, we want easy numbers, okay. Well, I would start with a conversation about what problems we're trying to solve. It's very much like the conversation you have when you're trying to convince management to do continuous delivery. They don't care about continuous delivery unless they're deep into the topic, but they do care about delivering business value better, so you talk about the business value. When you're talking about performance indicators, well, what performance are we trying to measure? We need to have that hard conversation: are we trying to measure how many lines of code get dumped onto the end user? How much value are we delivering? Are we trying to reduce the size and cost of delivering change so we can be more effective, or are we just trying to make sure people are busy? And if you have management that just wants to make sure people look productive, and they're not open to hearing why that's wrong, I'd quit.

Kovid Batra: All right. Can we move on to the next one then?

Bryan Finster: Where's the next one? 

Kovid Batra: Yeah. 

Bryan Finster: Oh, okay. 

Kovid Batra: Is there any scientific evidence we can use to point out that working on small steps iteratively is better than working in larger batches? The goal is to avoid anecdotal evidence while discussing what can improve the development process. 

Bryan Finster: You know, the hard thing about software as an industry is that people don't like sharing their real information, because it can be stock-impacting, so we're not going to get a scientific study out of a private company. But we have a few centuries' worth of knowledge telling us that if you build a whole bunch of the wrong thing, you're not going to sell it. You don't have to do a scientific study, because we have that knowledge from manufacturing. You know, The Simpsons, the "documentary" The Simpsons, where they talk about the Homer car: they build the entirely wrong car and put the company out of business, because there was no feedback loop on that car at all until it was unveiled, right? That's really the problem. We're doing product development. Or like Silicon Valley, where they spent so much money building something nobody wanted, and they kept iterating to find the right thing, but they kept building the complete thing, building the wrong thing, and just burning money. This is the problem we're trying to solve: getting faster feedback about when we're wrong, because we're inventing something new. Edison didn't build a million wrong light bulbs and then see if any of them worked.

Kovid Batra: All right. I think we can move on to the next one. Uh, what strategies do you recommend for setting realistic yet ambitious goals based on our current DORA metrics? 

Bryan Finster: I would start with "why can't we deliver today's work today?" Well, I'd do that right after "why can't we integrate today's work today?" Then start finding out what those problems are and solving them. As far as ambitious goals go, I think it's ambitious to be doing continuous delivery. Why can't we do continuous delivery? One of the reasons we put minimumcd.org together several years ago is that it's a list of problems to solve, and if you solve those problems, you can't end up with an organization that's not a great place to work. You just can't. And the goal is to make it a better place to work. So solve those problems. That's an ambitious goal: do CD.

Kovid Batra: Richard, do you have a question? 

Richard Pangborn: Uh, myself? No? 

Kovid Batra: Yup. 

Richard Pangborn: Nope. 

Kovid Batra: Okay. One last one we'll take here. Uh, yeah. 

Bryan Finster: Yeah, so common pitfalls, and I think we touched on some of these before: trying to instrument all of them, when you can really only instrument about two of them. And change failure rate is not named well, given its description; it's really defect arrival rate. But even then, that depends on being able to collect data from defects, and whether that's being collected in a disciplined manner. Delivery frequency people often measure at the organization level, but that doesn't really tell you anything; you need to get down to where the work is happening and measure it there. Then there's setting targets around delivery frequency instead of identifying how to improve, because all this is, is how do we get better. And using them as goals: they're absolutely not goals, they're health indicators. Like the tachometer I talked about before, I don't have a goal of running at 5,000 RPM. Number one, it depends on the engine, right? That would be really terrible for a sport bike and would blow up a diesel. So using them naively, without understanding what they mean and what it is we're trying to do, I see it constantly. I and others who were early adopters of these metrics have been screaming about this for several years, and that's why I'm on here today: please don't use them incorrectly, because it just hurts things.

Kovid Batra: Perfect. Bryan, I have one question. When teams are setting benchmarks for the different metrics they've identified to measure, what should be the ideal way of setting those benchmarks? That's a question I get asked a lot.

Bryan Finster: Well, they were never benchmarks in Accelerate either. What it said was that there's a correlation between companies with these outcomes and metrics that look like this. So those aren't industry benchmarks; that's a correlation they're drawing, and correlation does not equal causation. I will tell you that being really good at continuous delivery means that, if you have good ideas, you can deliver those good ideas well, but being good at CD doesn't mean you're going to be good at meeting your business goals; it's garbage in, garbage out. So don't set them as benchmarks. They're not benchmarks. They're health indicators. Use them as health indicators: how do we make this better? Use them as things that cause you to ask questions. Why can't we deliver more than once a month?

Kovid Batra: So basically, if, for lack of a better term, we use 'benchmarks', those should be set on the basis of our own team's cadence, how they are working, how they are designed to deliver. That's how we should be doing it. Is that what you mean?

Bryan Finster: No, I would use them purely as health indicators: track trends. Are we trending up? Are we trending down? Then use that as the basis for starting an investigation into why we're trending up or down. Are we trending up because people think it's a goal? Is some other metric we're not watching going south while we focus on this one thing getting better? Richard, you pointed this out exactly: it's a good, balanced set of metrics if they're measured correctly and the data is collected correctly. Another problem I see is people focusing on just one. I remember a director telling his area, "Hey, we're going to start using DORA metrics, but for change management purposes, we're only going to start by focusing on MTTR instead of anything else." They're a set; they go together. You can't just peel one out.
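
Used as health indicators, the interesting signal is the direction of each metric and whether its counterbalancing metric moved the opposite way. A small sketch of that trend check, comparing the most recent weeks against the preceding ones; the weekly values are invented.

```python
"""Trend direction for a counterbalanced pair of metrics: compare the mean
of the most recent weeks to the mean of the weeks before them, so the
numbers prompt questions rather than act as targets. Values are invented.
"""
from statistics import mean

def trend(values, window=3):
    recent, earlier = values[-window:], values[:-window]
    if not earlier:
        return "not enough history"
    diff = mean(recent) - mean(earlier)
    if abs(diff) < 1e-9:
        return "flat"
    return "trending up" if diff > 0 else "trending down"

weekly_deploy_frequency = [3, 4, 4, 5, 6, 7]   # deploys per week
weekly_defects_reported = [2, 2, 3, 4, 6, 7]   # counterbalancing metric

print("deploy frequency:", trend(weekly_deploy_frequency))
print("defects reported:", trend(weekly_defects_reported))
# Both trending up together is exactly the "ask why" moment described above:
# speed may be improving at the expense of quality.
```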

Kovid Batra: Got it, got it. Yeah, that absolutely answers my question. All right. I think with that, we come to the end of this session. Uh, before we part, uh, any parting advice from you, Bryan, Rich? 

Richard Pangborn: Just what we found successful in our own journey: every company is different. They all have their own processes, their own way of doing things, their own way of building things, so there's not exactly one right way to do it. It's usually trial and error for each company, depending on the tooling you choose and the way you want to break down tasks and deliver stories. For us, we chose one-day tasks in Jira. We didn't choose long-lived branches; we're not explicitly trunk-based, but our PRs last no longer than a day. This is what we find works well for us. We're delivering daily. We haven't yet gotten to delivering multiple times a day, but that's somewhere in the future we'll get to; you have to balance it with business goals, and you need buy-in from stakeholders before you can get the development time to build out that structure. So it's a process, and everyone's different, but bringing in some of these KPIs, or sorry, benchmarks or health metrics, whatever you want to call them, has worked for us: we have more observability into how we operate as engineers than we've ever had in the past. It's been pretty beneficial for us.

Bryan Finster: Yeah. I'd say the observability is critical. I've built a few dashboards for showing these things, and development teams who were focused on "we want to improve" always found value in them. But one caution I have is that if you are showing metrics on a dashboard, understand that the user experience of that dashboard will change people's behaviors. It's so important that people understand that. Whenever I'm building a dashboard, I show offsetting metrics together in a way that they can't be separated, because otherwise people will just focus on one. I want you to focus on those offsetting metrics as a group and make them all better. But it only matters if people are looking at it, and if it's not a constant topic of conversation, it won't help at all. And I know Abi Noda and I have a difference of opinion on data collection. I'm big on real-time data because I'm trying to improve quickly; he's big on surveys. For me, I don't get feedback fast enough from a survey to course-correct if I'm trying to improve CI and CD. Surveys are good for other things, good for culture. So that's the difference. But make sure you're not just going out and buying a tool that shows data in a way that causes bad behavior, or collects data in a way where it's not collected correctly. Really understand what you're doing before you go and implement a tool.

Kovid Batra: Cool. Thanks for that piece of advice, Bryan, Rich. With that, I think that's our time. Just a quick announcement about the next webinar session, which is with a pioneer of CD and co-author of the book 'Continuous Delivery', Dave Farley. That will be on the 25th of September. So, audience, stay tuned; I'll be sharing the link with you over email. Thank you so much. That's it for today.

Bryan Finster: Thanks so much. 

Richard Pangborn: I appreciate it. 

Kovid Batra: Thanks, Rich. Thanks, Bryan.
