Agile KPI: Two types of velocity

I have read articles and books on Agile Scrum that swear by the efficacy of "team velocity" as one of the key performance indicators of an agile team. Velocity varies from team to team, so we really cannot compare one team's velocity with another's, but for a single team it is supposed to be a decent indicator of the pace of delivery, of how much the team churns out in a sprint. Or is it? Let me share my experience.

Scrum as a process is quite flexible; however, it is quite prescriptive in proposing that members should be fully dedicated to a project and ideally co-located. In real life I have often found both of these difficult to achieve, especially in a geographically distributed team where the product managers are mostly in the United States and the development team is based in an offshore location, in my case India.

As you know, Agile champions a continuous deployment model in which, ideally, the chunk of product delivered in a sprint is deployed to production at the end of that sprint. In an ideal agile world with a mature process, it can even be deployed every day. In reality, I have found this difficult to achieve in established companies, where multiple sign-offs from multiple stakeholders are required before a release. In our case, we follow a sort of middle ground. Depending on product impact and strategy, we try to release an early-feedback edition of the product (really just the happy path) to a very limited set of customers, preferably internal ones (professional services, in our case). This edition sometimes takes around a quarter to develop; after that we shift to a model of one point release per sprint and one major release per month.

During the initial few months, when we were developing the MVP (minimum viable product), we observed that most of the business stories were not delivered until the later half of the cycle. The team was spending the majority of its time in the initial sprints on design and proofs of concept, taking up user stories only in the later sprints. This is closer to how waterfall works, albeit with some elements of Agile Scrum. For example, in keeping with the spirit of agile development, the team did not expect all user stories to be defined upfront before development started. Most of the epics and a few user stories were in the backlog to start with, and the expectation was that descriptions, acceptance criteria and UX designs for user stories would be added as we progressed, as a prerequisite for the respective backlog grooming sessions. Anyway, coming back to sprint velocity: after three months the graph looked like this.

The initial impression was that, well, it looked good. Velocity hovered around 45, with an occasional spike in the last sprint before a release. That suggested the team was delivering at a uniform pace, with a little extra push before each release.

However, there were a few problems:

  1. QA engineers started to get overloaded from the fourth sprint onwards and were stretched thin in the last sprint.
  2. Most of the feedback from the product manager and other stakeholders came in during the later sprints, leading to rework and enhancements in the last couple of sprints.
  3. Feature bugs were reported late in the cycle, and the team had to stretch in the last couple of sprints to close all the critical bugs along with the remaining user stories.

All of this had serious impacts on product quality as well as on team dynamics. Engineers were hit with critical bugs in the middle of developing another feature story, and as the entire team came under pressure, there was not enough focus on writing unit tests, reviewing code and doing other due diligence before check-in. That means lower development quality and longer QA cycles; when everybody is under pressure, the quality of the product, the delivery date, or sometimes both are compromised.

We tried to dig a little deeper and explored options to fix this. The first concern was how to identify from the data that the team was keeping too many user stories for the later sprints instead of delivering them at a uniform pace. Here, sprint velocity reveals literally nothing. So we now knew that there were issues with the process and that it was more "scrumfall" (a made-up term for a process that follows most of the scrum rituals but is mostly waterfall rather than truly agile), yet there was no measurable data that could reveal this right away. From a management point of view that is not ideal.

So how do we capture data that can help in similar situations? Enter "two types of velocity". Before explaining further, here is a graph that should be self-explanatory.

In the above diagram the red bars are technical stories and the yellow bars are user stories (functional stories). Once we started capturing data this way, it became evident that we had taken up too many technical stories and low-level designs in the initial sprints and kept the functional stories for the later sprints.
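As an illustration, here is a minimal sketch of how the split can be computed once every completed story carries its sprint, type and story points. The data layout and field names below are my assumptions for the sketch, not how our tracker actually stores them:

```python
from collections import defaultdict

# Completed stories exported from the tracker; the field names
# ("sprint", "type", "points") are assumptions for this sketch.
completed = [
    {"sprint": 1, "type": "Technical Story", "points": 34},
    {"sprint": 1, "type": "Story", "points": 8},
    {"sprint": 2, "type": "Technical Story", "points": 29},
    {"sprint": 2, "type": "Story", "points": 13},
    {"sprint": 5, "type": "Story", "points": 40},
    {"sprint": 5, "type": "Technical Story", "points": 5},
]

def split_velocity(stories):
    """Return {sprint: {"technical": points, "functional": points}}."""
    velocity = defaultdict(lambda: {"technical": 0, "functional": 0})
    for s in stories:
        bucket = "technical" if s["type"] == "Technical Story" else "functional"
        velocity[s["sprint"]][bucket] += s["points"]
    return dict(velocity)

for sprint, v in sorted(split_velocity(completed).items()):
    total = v["technical"] + v["functional"]
    print(f"Sprint {sprint}: total={total}, "
          f"technical={v['technical']}, functional={v['functional']}")
```

Note how the totals stay almost flat across sprints while the split exposes the early technical skew, which is exactly what the single velocity number hides.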

To streamline the process, we also made a few changes to the way we create and close issues.

We changed the types of issues that can be created in JIRA. The acceptable issue types are now restricted to the following (a sketch of how the resulting data can be pulled for reporting appears after the list):

  1. Epic and Capability
  2. Story [only for user stories]. Typically these stories would start with "As a user..."
  3. Technical Story [for any technical debt, technical investigation or developer stories]. Typically these stories would start with "As a developer/architect..."
  4. Bug
  5. Task
  6. Test
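Restricting the issue types also makes pulling the data for this metric straightforward. Below is a rough sketch using the community `jira` Python package; the server URL, credentials, project key and especially the story-points custom field ID are placeholders that vary per JIRA instance:

```python
from jira import JIRA

# Placeholders: server, credentials and project key depend on your
# JIRA instance; the story-points custom field ID varies as well.
jira = JIRA(server="https://example.atlassian.net",
            basic_auth=("user@example.com", "API_TOKEN"))

STORY_POINTS = "customfield_10016"  # instance-specific; check your config

issues = jira.search_issues(
    'project = PROJ AND issuetype in (Story, "Technical Story") '
    'AND statusCategory = Done ORDER BY created',
    maxResults=500,
)

for issue in issues:
    print(issue.key,
          issue.fields.issuetype.name,
          getattr(issue.fields, STORY_POINTS, None))
```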

Along with that, we streamlined who (or which role) can manage each issue type (creating, modifying and closing issues) in the following manner (an illustrative sketch of the mapping follows the list):

  1. Epic/Capability/Story: Product manager and system analyst
  2. Technical story: Lead developers, architect and engineering manager
  3. Bug: QA
  4. Task: Developers
  5. Test: QA
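Within JIRA this is enforced through permission schemes, but as a plain illustration the governance reduces to a simple role-to-type mapping. The check function below is hypothetical, mirroring the list above:

```python
# Which roles may create, modify or close each issue type,
# mirroring the list above.
ISSUE_OWNERS = {
    "Epic":            {"Product Manager", "System Analyst"},
    "Capability":      {"Product Manager", "System Analyst"},
    "Story":           {"Product Manager", "System Analyst"},
    "Technical Story": {"Lead Developer", "Architect", "Engineering Manager"},
    "Bug":             {"QA"},
    "Task":            {"Developer"},
    "Test":            {"QA"},
}

def may_manage(role: str, issue_type: str) -> bool:
    """True if the given role is allowed to manage this issue type."""
    return role in ISSUE_OWNERS.get(issue_type, set())

assert may_manage("QA", "Bug")
assert not may_manage("Developer", "Story")
```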

In this way we ensured that governance was in place to make data capture easy and error-free. During sprint planning there was a renewed focus on keeping the balance between technical and functional stories. More importantly, we found that this metric always revealed when functional stories were not being delivered at the expected pace.
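As a final sketch, the same split lends itself to an automated check: flag any sprint where functional points fall below some share of the total. The 40% threshold here is an arbitrary illustration, not a recommendation:

```python
def flag_lagging_sprints(velocity, min_functional_share=0.4):
    """Return sprints where functional points fall below the given share."""
    flagged = []
    for sprint, v in sorted(velocity.items()):
        total = v["technical"] + v["functional"]
        if total and v["functional"] / total < min_functional_share:
            flagged.append(sprint)
    return flagged

# Per-sprint split, as produced by the earlier split_velocity sketch:
example = {
    1: {"technical": 34, "functional": 8},
    2: {"technical": 29, "functional": 13},
    5: {"technical": 5, "functional": 40},
}
print(flag_lagging_sprints(example))  # -> [1, 2]
```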

So what do you think? In my experience this has worked out quite well. Do let me know your thoughts, and please share your experiences as well; I would be really interested in further streamlining and improving the process.
