
Communications of the ACM

The Business of Software

What Is a 'Good' Estimate?

Whether forecasting is valuable

Credit: Alicia Kubista / Andrij Borys Associates

Our measure of the "goodness" of an estimate is usually based on one thing: how closely the forecast ends up matching what we actually see. The search for the "accurate estimate" is one of the El Dorado quests of software project management. Despite the clearly oxymoronic nature of the phrase,a the most common question I am asked about an estimate is "...how accurate is it?" It seems to be a very natural question and one we might ask of a car mechanic or a builder. However, "accuracy" is only one yardstick we could use to assess how good an estimate is—there are other criteria and the meaning of "accurate" could bear some scrutiny. But first, the weather.


Murphy's Lore

When considering weather prediction, the late Allan Murphy of Oregon State University suggested three attributes of a forecast that determine its "goodness."1 With a little adjustment, we can apply Murphy's principles to estimating software development projects.

The three attributes of "goodness" Murphy noted were:

  • Consistency
  • Quality
  • Value

To these three attributes, we can add three more: honesty, accuracy, and return. These three additions are closely aligned with Murphy's attributes, so we can pair them together.


Consistency and Honesty

To be consistent, the process used to create an estimate must be rational (for example, no random guesses or wishes) and grounded in some knowledge base of relevant data. Ideally, this knowledge base would itself be consistent, representing the performance of similar projects, but this is not mandatory. If identical project data is not available, then using the history of somewhat dissimilar projects simply introduces additional uncertainty into the estimate output. As long as that uncertainty is honestly calculated and openly presented, it becomes just one of the considerations in making any business decision based on the estimate.

Also included in this category is the requirement that all the data pertinent to the project be included in the estimate. Data cannot be cherry-picked to achieve a desired result. This should especially include the inherent uncertainty in whatever result is forecast.
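As a rough illustration of what "grounded in a knowledge base" can mean in practice, here is a minimal Python sketch, with invented numbers, that derives an estimate from the effort actuals of similar past projects and reports the spread alongside the central value, so the inherent uncertainty is openly presented. The lognormal assumption is this sketch's own, not a prescribed method.

    import math
    import statistics

    # Hypothetical effort actuals (person-months) from similar past projects.
    history = [14.0, 18.5, 11.2, 22.0, 16.3, 19.8, 13.5]

    # Effort data is often right-skewed, so this sketch works in log space
    # (an assumption, not a rule); the spread then reads as a multiplicative
    # factor rather than a fixed offset.
    logs = [math.log(x) for x in history]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs)

    median = math.exp(mu)                                   # central estimate
    low, high = math.exp(mu - sigma), math.exp(mu + sigma)  # roughly a 68% band

    print(f"Estimate: {median:.1f} person-months "
          f"(likely range {low:.1f}-{high:.1f})")

Presenting the range, not just the midpoint, is what keeps the later business decision honest about what the knowledge base can and cannot support.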

An estimate is "honest" if it truly reflects the best judgment of the estimator and the true output of the rational process used. I have often heard project managers complain they experienced intense pressure to lower their best-judgment estimates if they did not fit within preconceived or predefined commitment levels. Sometimes estimators anticipate this and proactively lower their estimates to make them more acceptable to the decision makers. Such pressures may be overt or covert but they almost always work to make the estimate less honest and less valuable.


Quality and Accuracy

This is the most common measure by which estimates are judged, but we need to examine what we mean by "accuracy." The usual interpretation is the degree of correspondence between the estimate and the actual result. However, at the time of estimating, there is no way to assess quality using this definition. We do not have the actual result in advance of running the project, so an estimate is accurate compared to...what?

Estimates—of rainfall or projects—are intrinsically probabilistic. Simply because a result does not exactly match an estimate does not mean the estimate was wrong or even that it was inaccurate.b An estimate is inaccurate if the probability it assigns to a result does not match the probability of the actual outcome. If a project is estimated as having a 20% likelihood of success and it "fails" by overrunning the budget or the schedule, perhaps the project was flawed, but arguably the estimate was accurate, since it forecast a high probability of failure.

If we adjust our definition of the word "accurate," there are other valuable comparisons we can make. For instance, at the time we estimate, we can assess an estimate's quality and "accuracy" by comparing it against the knowledge base mentioned earlier. If an estimate falls outside the normal range of variability of similar projects, it is reasonable to assert it is inaccurate.
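A minimal sketch of such a check, again with invented numbers: compare the productivity a new estimate implicitly assumes against the normal range of variability in the knowledge base.

    import statistics

    # Hypothetical productivity (function points per person-month)
    # observed on similar past projects -- the knowledge base.
    history = [9.5, 11.0, 8.2, 10.4, 9.1, 12.0, 10.8]

    def within_normal_range(value, data, k=2.0):
        """True if value lies within k standard deviations of the mean."""
        mean, sd = statistics.mean(data), statistics.stdev(data)
        return abs(value - mean) <= k * sd

    implied = 18.0  # productivity the new estimate implicitly assumes
    if not within_normal_range(implied, history):
        print("Estimate assumes productivity outside historical norms; "
              "by this definition it is 'inaccurate' before the project starts.")

The two-standard-deviation threshold is arbitrary; the point is that the comparison can be made at estimation time, with no actuals in hand.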

The most obvious way to assess estimation accuracy is a posteriori, once we have the actual results. To do this correctly or, well, accurately would require reconstructing the estimate while accounting for the variance in the data that was observed. When we do that, however, we are actually assessing the viability of the process and data used in the original estimate. For example, if a project was completely de-staffed for a while to deal with some unexpected emergency in the organization, it is unlikely the original estimate would correlate with the result. If, when the project finished (or during the de-staffing period), the estimate was rerun incorporating this new information, perhaps it would correlate well and could be considered "accurate."

Murphy notes a host of sub-attributes of "quality" as statistical assessments of probability distributions. Assessing these usually requires multiple forecasts—something that is common in meteorology but not in software. We can and should reestimate as more data becomes available. Doing so would provide the data needed to apply statistical quality measurements to our estimates and our estimation process—but first we would have to estimate more than once.
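If probabilistic estimates were routinely recorded, even a simple score borrowed from meteorology would apply. Here is a sketch with invented forecast/outcome pairs, using the Brier score (0 is perfect; 0.25 is what always saying "50%" would earn):

    # Hypothetical (forecast probability of success, succeeded?) pairs
    # accumulated across many estimated projects.
    forecasts = [(0.2, False), (0.8, True), (0.5, False),
                 (0.9, True), (0.3, True), (0.7, True)]

    # Brier score: mean squared gap between forecast probability and outcome.
    brier = sum((p - (1.0 if succeeded else 0.0)) ** 2
                for p, succeeded in forecasts) / len(forecasts)
    print(f"Brier score over {len(forecasts)} projects: {brier:.3f}")

Note the score is meaningful only in aggregate, which is exactly why the 20%-likelihood project discussed earlier cannot be judged from its single outcome.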


Value and Return

This aspect of estimation goodness is often totally ignored. The act of estimating is the purchasing of information. We expend a certain amount of time and effort to obtain some knowledge about a project. This effort costs money and the knowledge has worth. A "good" estimate maximizes the return on this investment by obtaining the highest value at the lowest cost.

Estimates have no intrinsic value; their value is determined entirely by how they are used to make business decisions. The consequences of those decisions might be quite unrelated to the technical aspects of producing a project forecast. A quick and rough estimate used to cancel an infeasible project might be a lot more valuable than a time-consuming and expensive estimate used to justify a marginal project. The true value of an estimate is realized mostly by the difference in the business decisions made.

There are four aspects to the value of estimates3,4:

  (a) What business choices might be indicated by the estimation output?
  (b) What are the benefits/costs of each choice?
  (c) What is the quality of the information available to make the decision without the estimate?
  (d) What is the quality of the estimate?

This is a complex subject and quite situation-specific, but factors (a) and (b) are clearly business issues independent of any estimate. Factors (c) and (d) represent the incremental benefit of making decisions based on the defined process rather than on whatever other approach (such as guessing or wishing) might be used. The value component of an estimate's goodness is not under the control of the estimator, but it is essential to providing a justifiable return and may be the most important attribute of an estimate.
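A compressed, invented example of how these factors interact: a go/no-go decision on a project that pays 500 on success and loses 300 on failure. Assume that without an estimate the organization simply proceeds, and that with a well-calibrated estimate it proceeds only when the expected value is positive. The estimate's net value is then the difference in expected outcome, minus its cost.

    # Hypothetical payoffs (say, $K) and probabilities for one project.
    WIN, LOSS = 500.0, -300.0
    p_success = 0.30        # what a well-calibrated estimate would report
    estimate_cost = 10.0

    ev_go = p_success * WIN + (1 - p_success) * LOSS   # -60: infeasible

    # (c) Without the estimate: proceed anyway, accepting ev_go as is.
    ev_without = ev_go

    # (d) With the estimate: proceed only if expected value is positive.
    ev_with = max(ev_go, 0.0)                          # cancel the project

    net_value = ev_with - ev_without - estimate_cost
    print(f"EV without estimate: {ev_without:+.0f}; with: {ev_with:+.0f}; "
          f"net value of estimating: {net_value:+.0f}")

Here a rough estimate is worth +50, entirely because it changes the decision; had it changed nothing, its value would have been negative by exactly its cost.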


Getting Good

To improve the "goodness" of estimates we must address all of these factors:

  1. Understand how the estimate output will be used and how it will guide the business choices.
  2. Perform a trade-off analysis to determine the optimal (and achievable) result that can be obtained.c
  3. Use a consistent process based on the most relevant historical data available.
  4. Assess the correlation of the estimate to its knowledge base and present it in a useful way.
  5. Require an honest expression of the estimator's judgment, independent of bias and pressure.
  6. Express the estimate output in probabilistic terms.
  7. Track project performance against the forecast and adjust for variance (a minimal sketch follows this list).
  8. When projects finish, recalibrate the basis of estimates for next time.
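As one example, the tracking in step 7 (sketched here with invented monthly figures) can be as simple as comparing cumulative actuals against the forecast's uncertainty band and flagging variance that warrants a reestimate:

    # Hypothetical cumulative effort (person-months): forecast band vs. actuals.
    forecast_low  = [3, 6, 10, 14, 18]
    forecast_high = [5, 9, 14, 19, 25]
    actuals       = [4, 8, 13, 20]      # month 4 has drifted above the band

    for month, actual in enumerate(actuals, start=1):
        lo, hi = forecast_low[month - 1], forecast_high[month - 1]
        if not lo <= actual <= hi:
            print(f"Month {month}: actual {actual} outside forecast band "
                  f"{lo}-{hi}; the variance warrants a reestimate.")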


Projects and Weather

Both are complex systems with many interacting factors; both are somewhat nondeterministic but have trends that can be measured probabilistically. More importantly, both have measurements and forecasts that can be very valuable. We may complain about the "accuracy" of weather forecasts but, through the application of honest and rational processes over the last 30 years, they have been steadily improving and becoming more valuable.2

We could do the same in software development.


References

1. Murphy, A.H. What is a good forecast? An essay on the nature of goodness in weather forecasting. Weather and Forecasting 8, 2 (June 1993), 281–293.

2. Silver, N. The Signal and the Noise: Why So Many Predictions Fail—But Some Don't. Penguin Press, 2012, 126.

3. Sonka, S.T. et al. Economic use of weather and climate information: Concepts and an agricultural example. J. Climatology 6 (1986), 447–457.

4. Winkler, R.L., Murphy, A.H., and Katz, R.W. The value of climate information: A decision-analytic approach. J. Climatology 3 (1983), 187–197.


Author

Phillip G. Armour (armour@corvusintl.com) is a senior consultant at Corvus International Inc., Deer Park, IL, and a consultant at QSM Inc., McLean, VA.


Footnotes

a. As pointed out in P.G. Armour, "The Inaccurate Conception." Commun. ACM 51, 3 (Mar. 2008).

b. P.G. Armour, "The Inaccurate Conception." Commun. ACM 51, 3 (Mar. 2008).

c. See P.G. Armour, "The Goldilocks Estimate." Commun. ACM 55, 10 (Oct. 2012).


Copyright held by author.
