SDLC Insights

Improve observability, predictability and efficiency


Do you know where your team is succeeding or struggling with building software? When did you last use quantitative data to assess the overall health of your end-to-end SDLC? And would you even know where to start if you wanted to try?

Data is critical to understanding your team’s performance, identifying potential shortcomings within your SDLC, and recognizing the strengths and weaknesses of your processes. And Agile, as it’s commonly understood, lacks many of the advanced metrics we need to fully gauge team performance. Software engineers have brought analytical rigor to everything from manufacturing automation to movie casting. Yet, like baseball before the analytics revolution, software development lacks an analytical framework for self-analysis, process understanding, and improvement.

This is just our opening perspective on the world of software process analytics. Over time, we will lay out our thoughts on advanced metrics for Agile development in hopes of helping teams better understand and describe their software development process, making it more transparent, predictable, and efficient. And since we’re still learning and evolving, we welcome your insights and perspectives.

A layered approach to performance measurement

To assess the performance of your team and identify inefficiencies in your SDLC, you can use three interconnected performance scores. These three scores provide stakeholders with a multi-perspective view of project health, leading to a more holistic understanding of the overall process:

  1. Sprint performance
  2. Product delivery
  3. Portfolio performance

Let’s break them down.

Sprint Performance Score (SPS)

In Agile, the sprint is the most common unit of work. If your sprints aren’t predictable and efficient at the micro level, your broader deliverables won’t be either. To better understand your team’s performance, begin by scoring your sprints. This gives your team a tangible way to objectively gauge their progress relative to their goals.

The SPS measures the difference between actual and ideal progress, taking the size of the sprint in points into account. It is the average of the daily deviations of actual progress from the ideal, normalized to a scale of 100. The resulting score ranges from 0 to 100, with higher scores representing better performance and indicating a good cadence.

Let's delve into the heart of the sprint performance score calculation. It starts by capturing daily progress: documenting how many points have been completed each day out of the total planned for the sprint. Concurrently, an ideal progress trajectory is laid out; this can be as simple as assuming a steady pace of task completion throughout the sprint. Each day, a deviation is computed, representing the difference between the actual progress made and the ideal trajectory. To evaluate overall sprint performance, these daily deviations are averaged over the course of the sprint, indicating how significantly the team typically strayed from its ideal trajectory.

Because the size of a sprint naturally affects the magnitude of its deviations, the average deviation is normalized with respect to the total points planned for the sprint. A correction factor is then introduced to adjust for sprint size; larger sprints are given a small bonus over smaller ones. The final stage is translating this corrected deviation into a score ranging from 0 to 100: essentially, taking the complement of the corrected deviation and multiplying it by 100. The result is the sprint performance score, an objective measure of sprint health.
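
Here’s what that calculation can look like in practice. This is a minimal Python sketch under stated assumptions: a linear ideal trajectory, a simple `size_bonus` stand-in for the sprint-size correction factor, and made-up example numbers. It illustrates the shape of the calculation rather than Bloomfilter’s exact formula.

```python
from typing import Sequence


def sprint_performance_score(
    daily_completed: Sequence[float],
    total_planned: float,
    size_bonus: float = 0.0,  # hypothetical stand-in for the sprint-size correction factor
) -> float:
    """Score a sprint from 0 to 100 by comparing actual daily progress to a linear ideal.

    Assumptions (a sketch, not the exact production formula):
      * daily_completed[i] is the cumulative points completed by the end of day i
      * the ideal trajectory is a straight line from 0 to total_planned
      * the size correction is modeled as a flat bonus subtracted from the deviation
    """
    days = len(daily_completed)
    # Ideal cumulative progress, assuming a steady pace across the sprint.
    ideal = [total_planned * (day + 1) / days for day in range(days)]

    # Daily deviation between actual progress and the ideal trajectory.
    deviations = [abs(actual - target) for actual, target in zip(daily_completed, ideal)]

    # Average the deviations, then normalize by the total points planned.
    normalized = (sum(deviations) / days) / total_planned

    # Apply the (hypothetical) size correction and take the complement on a 0-100 scale.
    corrected = max(0.0, normalized - size_bonus)
    return max(0.0, min(100.0, (1.0 - corrected) * 100.0))


# Example: a 10-day sprint planned at 50 points.
daily = [2, 5, 9, 14, 20, 26, 33, 39, 45, 50]
print(round(sprint_performance_score(daily, total_planned=50), 1))  # 93.6
```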

This score scales across teams regardless of size, development methodology, or tooling, and it is resistant to a variety of edge cases that impact standard delivery metrics like velocity.

A high sprint performance score indicates the team is on track and meeting expectations. If a sprint has a poor grade, teams can evaluate their processes to identify opportunities for improvement. For tips on recovering from a low score, check out one of our previous blog posts: Getting Back On Track.

Bloomfilter sprint performance score

Product Delivery Score (PDS)

A normalized metric like the SPS is powerful in its simplicity and comprehensibility. More importantly, the SPS lends itself to a range of basic descriptive-statistics techniques in a way that typical counting-based sprint metrics do not. For instance, it’s virtually impossible to get meaningful information out of the mean or the standard deviation of a classic Agile metric like “velocity,” given that team size and makeup fluctuate substantially over time.

Since the SPS is normalized, it can be analyzed over time, which brings us to the product delivery score. The PDS provides time-series data to understand the progress of product delivery over time.

To calculate the PDS, take the average of the sprint performance scores over a given time horizon. For example, taking the mean over the past quarter yields the quarterly product delivery score (qPDS). This score gives stakeholders a snapshot of the project’s performance over time and helps spot patterns and trends across sprints. Of course, this scoring methodology can be adjusted fairly easily for other time periods (weekly, monthly, annually, etc.).
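
As a quick illustration of that aggregation, here is a minimal sketch; the scores and the six-sprint quarter are hypothetical values for the example.

```python
from statistics import mean

# Hypothetical sprint performance scores from six two-week sprints in the past quarter.
quarter_sps = [93.6, 88.1, 91.4, 76.9, 84.3, 90.0]

# The quarterly product delivery score (qPDS) is the mean SPS over that horizon.
qpds = mean(quarter_sps)
print(round(qpds, 1))  # 87.4
```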

As you look at sprints normalized in the context of a time series and across different teams, it becomes possible to layer in other statistical analyses. It’s trivial to see whether a given sprint falls in the bottom quartile, or whether the standard deviation over a rolling time horizon is increasing or decreasing.
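
For instance, both of those checks fall out of standard descriptive statistics once the scores are normalized. The history below is hypothetical:

```python
from statistics import quantiles, stdev

# Hypothetical history of normalized sprint performance scores for one team.
sps_history = [93.6, 88.1, 91.4, 76.9, 84.3, 90.0, 71.2, 82.5, 95.1, 87.0]

# Did the latest sprint fall in the bottom quartile of the team's history?
first_quartile = quantiles(sps_history, n=4)[0]
print(sps_history[-1] < first_quartile)

# Is variability increasing? Compare the standard deviation of two five-sprint windows.
window = 5
earlier, recent = sps_history[:window], sps_history[-window:]
print(stdev(recent) > stdev(earlier))
```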

Bloomfilter product delivery score

Portfolio Performance Score (PoPS)

The portfolio performance score provides an overall assessment of a company's projects, considering both each project's product delivery score and the number of developers assigned to it.

This score is a weighted average that assigns more importance to projects with larger teams, so delivery is scaled by implied importance. By adjusting the score based on headcount, the portfolio performance score reflects the overall health of a company's projects. It helps stakeholders gain an objective evaluation of their company portfolio, specifically for high-priority business initiatives, assuming higher-priority initiatives are staffed with more team members.
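
A minimal sketch of that headcount-weighted average, using hypothetical project scores and team sizes:

```python
# Hypothetical portfolio: each project's product delivery score and developer headcount.
projects = [
    ("checkout", 87.4, 12),
    ("mobile app", 72.0, 5),
    ("internal tools", 95.0, 2),
]

# Weight each project's score by headcount so larger (implicitly higher-priority)
# teams carry more influence on the portfolio performance score.
total_headcount = sum(devs for _, _, devs in projects)
pops = sum(score * devs for _, score, devs in projects) / total_headcount
print(round(pops, 1))  # 84.1
```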

Bloomfilter portfolio performance score

The value of performance measurement in aligning Scrum teams and business objectives

If we want to improve our processes, we need new tools to understand them. Quantifying the process flow of the SDLC ensures a more transparent, predictable, and efficient software delivery process. Objective data is needed to understand where the process goes off the rails, identify factors leading to derailment, and determine actions to make the process more predictable and efficient. Measurement in a language everyone understands is essential.

Taken together, the SPS, PDS, and PoPS provide an easy-to-understand process health metric for software development. They lay a foundation, along with other metrics we’ll discuss in the future, for a more complete understanding of the health of the process. Equipped with this data, everyone can have a shared view of objective reality. Guesswork about project status can be eliminated, and human bias can be removed from predictions about meeting deadlines.

Brian Kurr

Lead Software Engineer