Spend any time with Agile development and you'll come across the burn down chart. The premise is simple: if a team commits to delivering 15 planned innovations (features, user stories, or whatever you want to call them), a little gets done each day, so the remaining work should "burn down" to zero by the end of the sprint.
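To make the mechanics concrete, here is a minimal sketch in Python (with made-up numbers, not data from any real team) of the arithmetic behind a burn down chart: remaining work is simply the committed total minus whatever has been completed so far, tracked day by day.

```python
# Minimal burn down arithmetic with illustrative, made-up numbers.
committed = 15  # story points (or features) committed for the sprint
completed_per_day = [0, 2, 1, 3, 2, 0, 4, 2, 1, 0]  # work finished each day

remaining = committed
for day, done in enumerate(completed_per_day, start=1):
    remaining -= done
    print(f"Day {day:2d}: {remaining} points remaining")

# The "ideal" burn down is a straight line from `committed` to zero;
# the chart plots the actual remaining line against that ideal.
```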

While burn down charts are effective tools for tracking how a team is progressing toward finishing planned work, a sinister effect takes hold in many organizations that adopt them: managers begin to focus primarily on the number of innovations delivered per sprint. Even when other metrics are being tracked, this one draws their attention because it is simple to understand and easy to communicate. It also feels like producing as many innovations as possible gets the most value out of a team. Common sense, right?

Unfortunately, the goal of a business is not to do work. The goal is to grow and create economic value. As anyone who's read Continuous Delivery or The Lean Startup knows, most of the innovations you plan and deliver do not produce the value you expected. Market and customer research, usability testing, and strategic planning are all useful tools, but the only proof of value is whether customers actually use your innovations.

To find out which ideas are good, you've got to release them and be prepared to throw away the innovations that didn't meet customers' needs. If you've invested in releasing in small batches and using a deployment pipeline, your delivery process is optimized for getting feedback. If you stop there, however, you've fallen victim to process ceremony.

Once you start releasing more often, you also need to optimize for changing course based on what you learn from the increased feedback. The bad news is it doesn’t look as good on a chart.

[Chart: Optimizing for Story Points]

The chart above depicts some of the effects on value for a team that treats innovation throughput as its primary goal. As each sprint progresses, the percentage of work that delivers value stays about the same: industry experience suggests that less than a third of the ideas you release will provide useful value to customers in established markets, and even fewer in new products. If the theories businesses have about what their customers want were usually true, we'd see far more successful startups.
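As a back-of-the-envelope illustration (every number below is an assumption for the sake of the example, not a measurement), consider what a one-in-three hit rate does to a throughput-focused sprint:

```python
# Expected value of a sprint that maximizes shipped features.
# All numbers are illustrative assumptions, not real data.
features_shipped = 15
hit_rate = 1 / 3  # fraction of released ideas customers actually use
maintenance_cost_per_feature = 1.0  # arbitrary units of added complexity

valuable_features = features_shipped * hit_rate
maintenance_burden = features_shipped * maintenance_cost_per_feature

print(f"Valuable features: ~{valuable_features:.0f} of {features_shipped}")
print(f"Ongoing maintenance burden: {maintenance_burden:.0f} units")
# Every shipped feature adds maintenance cost, but only about a third
# add value, so maximizing the shipped count grows cost faster than value.
```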

Since the plan made at the beginning of the project above doesn't change from one sprint to the next, there is no opportunity to course-correct until the end. And each new feature released increases the maintenance cost, since complexity rises along with feature count.

In this sad state of affairs, the trends should be obvious: a dismally low rate of delivered value, decreasing innovation throughput, and increasing maintenance costs. In situations like this, it is alarming to watch teams subconsciously game the numbers so that story points (effort towards features) appear to be going up. That behavior is unlikely to change as long as performance measurement stays focused on innovation throughput.

[Chart: Optimizing for Value]

The chart above depicts a more typical set of activities for a team focused on value rather than rate of innovation. The highest throughput of new features occurs at the beginning, when you don't yet know how customers will receive the minimum viable product, so the team can focus mostly on delivering the first few features.

During the second sprint, the first release is available to customers and they are providing feedback. To prepare for the change in direction that inevitably comes with listening to that feedback, what matters most at this point is that the features already delivered are of high quality. If any corners were cut, now is the time to remedy that. The team might do some refactoring, and the plan for which features to deliver next will need to change to accommodate the feedback that was gathered.

When the third sprint rolls around, the team has discarded the ideas that turned out to be wrong based on what was learned, made sure that what shipped in the prior sprint is stable, and is now working to deliver features closer to what customers said they wanted. If the feedback was reliable (customers ask for things they don't really want all the time!), the value produced during this sprint will potentially be higher.

At the fourth sprint, the team has a steady stream of feedback and is working on a combination of new ideas and enhancements that adapt what was released to customers' desires. The number of features delivered is still lower than when the project started, because some refactoring may be needed to keep quality high after the potentially dramatic design changes made to ship the most valuable features. From here on out it's a see-saw: releases that deliver increased value, alternating with intermediary releases where the team vigorously realigns the priority and structure of the work to accommodate what it is learning. That's right, folks: finding market fit is messy. But this is what it takes to make real money in the industry.

Now some of you might be thinking, "Can't I just measure the time spent refactoring and maintaining, roll it all together, and have a clean burn down chart? I need to show that our resources are fully utilized!" And yes, you could do this, but why would you want to? I will tell you why: a lack of trust. When management overseeing a new product or initiative wants to micro-measure every aspect of the delivery process to ensure maximum throughput, the underlying assumption is that someone is not working as hard as they could be, and that this can be spotted on a chart to "save money."

The problem is that any process measurement system can be gamed. The only true measurement of progress in a business is profit, and the way to arrive at those moments of high growth is the adaptive, difficult-to-quantify, collaborative approach in the second chart. Having a team able to work towards value in this way requires brutal honesty about what didn't work, tolerance for variation from one sprint to the next, and, above all, trust.

So go ahead and estimate the features you will deliver each sprint, and use the burn down chart to see how the team is performing so they can get better at estimating. But stop penalizing teams for delivering a different number of features (or user stories) each sprint. You just might figure out the ideal features for your customers, and keep your staff around for the long haul ahead.


