Where Did This Estimation Approach Come From?
Ever wondered where the story point / relative sizing approach to estimation came from? Much of the thinking is based on the “Wideband Delphi” estimation method, a consensus-based technique for estimating effort. This, in turn, was derived from the Delphi method, developed in the 1950s and 1960s at the RAND Corporation as a forecasting tool. It has since been adapted across many industries to estimate many kinds of tasks, ranging from statistical data collection results to sales and marketing forecasts.
In addition, it turns out there was a significant amount of research into estimation, since it was considered such a core part of the software development process. The result is that this estimation approach has been validated by that research.
Looking at the approach:
- Those who do the work estimate the work (1)
- Estimators are required to justify their estimates (2, 3)
- Focus most estimates within one order of magnitude (4, 5)
- Combining individual estimates through group discussion yields better estimates (6, 7)
The numbers refer to the following references:
1. Jørgensen, Magne. 2004. A Review of Studies on Expert Estimation of Software Development Effort.
2. Hagafors, R., and B. Brehmer. 1983. Does Having to Justify One’s Decisions Change the Nature of the Decision Process?
3. Brenner et al. 1996. On the Evaluation of One-sided Evidence.
4. Miranda, Eduardo. 2001. Improving Subjective Estimates Using Paired Comparisons.
5. Saaty, Thomas. 1996. Multicriteria Decision Making: The Analytic Hierarchy Process.
6. Höst, Martin, and Claes Wohlin. 1998. An Experimental Study of Individual Subjective Effort Estimations and Combinations of the Estimates.
7. Jørgensen, Magne, and Kjetil Moløkken. 2002. Combination of Software Development Effort Prediction Intervals: Why, When and How?
And if you've ever wondered why we use the modified Fibonacci sequence instead of some kind of simple 1 to 10 scale, see Why Progressive Estimation Scale Is So Efficient For Teams.
In addition, this approach leads to:
- Emphasizing relative rather than absolute estimates, which means you can estimate very quickly. People are very good at deciding whether something is bigger or smaller than something else (relative sizing), but terrible at determining exactly how big it is (absolute estimates). You will often see teams produce 20, 30, even 40 estimates in an hour, with better data and far less waste in the process.
- Estimates are constrained to a set of values so that we don’t waste time on meaningless arguments.
- Everyone’s opinion is heard.
- It's fun! (or at least more fun than the old way)
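The "constrained set of values" idea above can be sketched in a few lines of Python. This is a hypothetical helper, not code from any of the cited references: `SCALE` holds the modified Fibonacci values commonly used in planning poker (1, 2, 3, 5, 8, 13, 20, 40, 100), and `snap_to_scale` rounds any raw effort guess to the nearest allowed value, which is what forces the "is it a 5 or an 8?" conversation instead of an argument over 6 versus 7.

```python
# Modified Fibonacci scale commonly used in planning poker
# (values assumed here for illustration).
SCALE = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def snap_to_scale(raw_estimate: float) -> int:
    """Return the allowed scale value closest to a raw estimate."""
    return min(SCALE, key=lambda v: abs(v - raw_estimate))

# A raw guess of 11 lands on 13; a guess of 6 lands on 5.
print(snap_to_scale(11))  # 13
print(snap_to_scale(6))   # 5
```

Note how the gaps between values widen as the numbers grow: the scale deliberately refuses to express a distinction between, say, 35 and 38, because at that size the estimate isn't that precise anyway.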
You will also note that these references are quite old, so it should be no surprise that people continue to learn and experiment with different approaches. I bring this up because there are alternative approaches out there.
Want to Know More?
- Why the Fibonacci Sequence Works for Estimating - Weber’s Law approach from Mike Cohn
- Why Progressive Estimation Scale is So Efficient - Information theory approach from Alex Yakyma