Estimating Stories

There are several approaches to software development that require the estimation of work items; many of these approaches call work items stories or tasks. To keep this post applicable to the various approaches I’ll simply refer to work items. There are many guides on how to do estimating, covering the process, the range of values (Fibonacci, linear etc.) and so on, but they all seem to have different ideas about the unit of estimation.

The whole point of the exercise is to provide an estimate of when the team will be able to deliver feature x or release y: how long will all the work items between now and the desired release take? An early approach was to use time as the estimating unit, so the list of work items could simply be added up to get the answer. This has been proven not to work. Firstly, it’s subjective: the time it will take one person on the team to do the work will be different for another team member. Secondly, humans are really bad at estimating the time required for anything other than simple or well-understood tasks.

To address these two problems we need something that is easier to reason about and that applies to a work item regardless of who works on it. Several options have been proposed, such as the customer value of the work, its difficulty, or keeping it entirely abstract with no unit of measure at all. I advocate using complexity (or, inversely, simplicity) to estimate work items.

Let me justify this. Complexity is an arbitrary measure of the mental overhead, the number of cross-cutting concerns, or just how tangled up the work is. There is no need to define a discrete unit of complexity; it’s enough that it provides a scale. A complex work item could be easy for an individual developer if they have lots of prior knowledge about it. So complex isn’t the same as hard, and likewise Simple != Easy. This is why using the difficulty (or easiness) of a work item is flawed: it involves an individual, so it’s subjective. Points are an estimate for the team, not an individual. The complexity of an item is constant; it may be poorly understood before the work is started, but the actual complexity will be unchanged after the work is done.

Once this approach has been used to estimate a few items the team should start to develop a shared sense of how complex a work item is relative to items that have already been estimated. Having a discrete scale that needs to be worked out for each work item is time consuming and unnecessary. Reaching a consensus on the relative complexity of a work item is quicker, and it promotes discussion that leads to a shared understanding of the work item rather than just a metric about it.
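To illustrate what relative estimation looks like in practice, here is a minimal sketch with entirely hypothetical work items and point values: a new item simply inherits the points of whichever previously estimated item the team agrees it most resembles in complexity.

```python
# Hypothetical reference items the team has already estimated,
# mapped to the points they were given.
already_estimated = {
    "rename a configuration field": 1,
    "add a column to an existing report": 3,
    "integrate a third-party payment provider": 8,
}


def estimate_by_comparison(closest_match: str) -> int:
    """Return the points of the reference item the team agreed is the
    closest match in complexity to the new work item."""
    return already_estimated[closest_match]


# After discussion the team agrees "support CSV export" feels about as
# tangled as adding a report column, so it inherits that estimate.
csv_export_points = estimate_by_comparison("add a column to an existing report")
print(csv_export_points)  # 3
```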

Knowing how many complexity points need to be completed for a release isn’t enough though. Once a team has completed a few (hopefully short) iterations that have been estimated by complexity, they will have an idea of their pace. A team’s pace (a.k.a. velocity) is a measure of how much complexity they can get through per iteration. Now the team can use their pace to predict when they will reach a particular release or feature. This places an emphasis on a stable pace rather than a fast pace; a stable pace means more reliable predictions of release dates.
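To make the arithmetic concrete, here is a minimal sketch (the past iteration numbers, the remaining points and the two-week iteration length are all made up) of turning a pace derived from past iterations into a rough release forecast.

```python
import math
from datetime import date, timedelta

# Complexity completed in each past iteration (made-up numbers).
completed_per_iteration = [21, 18, 23, 20]

# Pace (velocity): average complexity completed per iteration.
pace = sum(completed_per_iteration) / len(completed_per_iteration)

# Complexity remaining between now and the desired release (also made up).
remaining_points = 135

# Iterations needed, rounded up, then converted to a rough calendar date,
# assuming two-week iterations.
iterations_needed = math.ceil(remaining_points / pace)
forecast_date = date.today() + iterations_needed * timedelta(weeks=2)

print(f"Pace: {pace:.1f} points per iteration")
print(f"Iterations to release: {iterations_needed}")
print(f"Rough forecast date: {forecast_date}")
```

A stable pace is what makes this simple average trustworthy; a wildly varying one would make any forecast from it unreliable.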

There are many opinions about how this should be done and I don’t think the software development community has reached a consensus in this area yet. My view is certainly not exhaustive and I’m sure there are conflicting views out there, but hopefully my reasoning has been at least a little compelling.
