Communications of the ACM

The Impact of Size and Volatility on IT Project Performance


IT projects perform far better than is popularly believed. That is our conclusion from a large-scale study of projects led by experienced project managers in the U.K. In a complete reversal of reports from the Standish Group [6]1 that approximately 67% of IT projects fail or are challenged, we have found that 67% of projects are delivered close to budget, schedule, and scope expectations.

How could there be such a difference? Two factors appear to be at work. First, the project managers in our survey (see the sidebar "The Survey") were very experienced. Second, statistical analysis led us to a more realistic measure of success and failure. One contribution of the study reported here is to suggest a new benchmark for what is reasonably achievable in IT projects. That is, experienced project managers should be able to come within a small margin of their targets on at least two out of every three projects.

A key objective of our research is to provide project managers, project sponsors, and steering committee members with guidance, based on empirical evidence, for defining and overseeing IT projects. We have chosen to focus on two key variables identified by other researchers as important indicators of IT project risk: project size [3] and project process [7]. We recognize that other factors affect project performance but we restrict ourselves to two that have received less empirical attention than our results suggest they deserve.

A second contribution of this study is to show that while a relationship exists between project size and underperformance, it is not as simple or direct as many think. For example, reducing size does not eradicate risk, nor does increasing size automatically increase it. Further, we found that size as measured in person-months is a better predictor of underperformance than other indicators such as budget, duration, and team size.

A third contribution is to highlight project volatility, which we define as the changes that occur during a project. We consider two aspects of volatility: governance volatility, which we define as the number of changes to project managers or sponsors; and target volatility, which we define as the number of changes to schedule, budget, and project scope. Both aspects of volatility had significant impact on performance with the strongest adverse effects being related to project manager turnover.

IT Project Performance

On average, projects carried out by our 412 experienced U.K. project managers overshot budget by 13%, schedule by 20%, and underdelivered on scope by 7%. It is difficult to compare this precisely with other studies [4]; however, the Standish Group 2002 figures indicate 43% budget overrun, 82% schedule overshoot, and 48% underdelivery of scope [6]. Others have found average budget overruns on the order of 33% [3, 5]. By comparison, our figures are most encouraging.

Our statistical analysis indicated that these IT projects could be categorized into five performance types: Abandoned (9%); Budget Challenged (5%); Schedule Challenged (18%); Good Performers (60%); and Star Performers (7%). The top half of Table 1 shows the average schedule and budget variances along with average percentage of scope delivered across each project type. (We use "variance" to refer to the amount by which actual performance differs from planned. For budget and schedule, therefore, a positive variance means an overrun whereas for scope it means delivery of more than was planned.)
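
To make the variance definitions concrete, the following is a minimal sketch (in Python, not part of the original study) showing how planned-versus-actual figures translate into the variance measures used in Table 1. The function names and the sample project are our own illustrative assumptions; the sample numbers are chosen to reproduce the average variances reported above.

```python
# Illustrative sketch of the variance definitions used in Table 1 (not the study's own code).
# Positive budget/schedule variance means an overrun; scope is reported as % of plan delivered.

def schedule_variance(planned_months: float, actual_months: float) -> float:
    """Percentage by which actual duration exceeds planned duration."""
    return (actual_months - planned_months) / planned_months * 100

def budget_variance(planned_budget: float, actual_budget: float) -> float:
    """Percentage by which actual spend exceeds planned budget."""
    return (actual_budget - planned_budget) / planned_budget * 100

def scope_delivered(planned_scope: float, delivered_scope: float) -> float:
    """Percentage of planned scope actually delivered."""
    return delivered_scope / planned_scope * 100

# Hypothetical project planned at 12 months, £1.0M, and 100 scope points.
print(schedule_variance(12, 14.4))            # 20.0 -> a 20% schedule overrun
print(budget_variance(1_000_000, 1_130_000))  # 13.0 -> a 13% budget overrun
print(scope_delivered(100, 93))               # 93.0 -> 7% underdelivery of scope
```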

Budget Challenged projects are distinguished by high budget variances. Schedule Challenged projects are distinguished by severe schedule variances. These project types also underperform on both their other targets. Good Performers, which comprised 60% of our sample, delivered projects with small variances on all targets. On average, good performers delivered projects within 2% of planned schedule, 7% of planned budget, and 7% of planned scope. Star Performers, which were a surprise category, beat budget and scope expectations by significant margins while scarcely compromising schedule.

The bottom half of Table 1 provides a summary of the five project types on five measures of size: median budget, mean budget, mean effort, mean duration, and mean team size. This data demonstrates that project types 1, 2, and 3 (which reflect underperformance) are on average larger in size than projects in the Good Performer category (Type 4). Abandoned projects were the largest of these underperforming projects. This suggests size and performance are related—as we discuss later.

Somewhat surprisingly, Table 1 shows that Star Performers are distinctly larger than the Good Performers on all measures of size. In analysis not reported here, we found no basis for supposing that the Stars were objectively easier projects. A plausible interpretation therefore is that organizations assign their very best project managers to the projects with the largest budgets and that these few individuals achieve highly successful results. However, it is also noticeable that the ratio of budget to total effort in Star Projects is very high compared to all other types. This leads us to a conjecture that many Stars are relatively capital-intensive and reinforces our belief that increased budget is not directly associated with increased risk. In view of the unexpected finding that some larger projects—the Stars—perform better than smaller projects—the Good Performers—we think very large projects warrant further investigation.

For the purpose of examining the effects of size and volatility on risk, we divided the performance types into two groups. The underperformers consisted of three categories: Abandoned, Budget Challenged, and Schedule Challenged. Underperformers represented 33% of the sample. Better-performing projects consisted of Good Performers and Stars; together they accounted for 67% of the projects in our sample.

Size Factors Affecting Risk

Conventional wisdom promotes the view that small is beautiful, but there is no consensus as to what this really means. While our study confirmed that project size affects project performance, it revealed that risk does not rise smoothly with every dimension of size; the relationship between size and risk is not simple. We chose to analyze four components of project size: effort, duration, team size, and budget. The risk of underperforming differed across each component.

Effort. Total effort in terms of person-months proved to be the best discriminator of underperforming projects. Figure 1(a) shows a steady increase in risk. Projects of 24 person-months or less have a 25% probability of underperforming; the probability rises slowly until, at 500–1,000 person-months, it has doubled to 50%. As size increases into the 1,000–2,400 person-month category, the risk triples to 75%. Above 2,400 person-months, we found no successes.

Duration. The risk of underperforming increases as project duration (elapsed time) increases (see Figure 1(b)). Where three- to six-month projects face a 25% risk, projects running beyond 18 months face a risk approaching 50%. One implication is that while longer projects are more risky, there is no case for restricting projects to a maximum of three months; an argument can be made that 12 months should be the new cut-off.

Team Size. With respect to team size, Figure 1(c) shows that risk remains within the 25%–35% band until the team size exceeds 20, at which point it rises dramatically to above 50%. Larger teams increase the risk of underperformance, a notion recognized by other researchers [1].

Budget. This measure proved to be the poorest discriminator between underperforming and better performing projects. For this reason we have not included a graph for Budget in Figure 1. We found that risk rises only slightly up to £15 million—projects smaller than this amount remain within the 25%–35% risk bracket. It is when budgets exceed this point that risk rises more steeply to reach the 50% level.
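
To illustrate how these bands might be applied in practice, here is a rough sketch (Python, our own construction) that encodes the approximate risk levels described above and read off Figure 1 as a simple lookup. The cut-points and percentages come from the text; the intermediate value for mid-sized effort and the exact step shape are assumptions.

```python
# Rough lookup of underperformance risk by project size, based on the approximate
# bands described above (Figure 1). Values are indicative, not precise model output.

def risk_by_effort(person_months: float) -> float:
    """Approximate probability of underperforming, by total effort."""
    if person_months <= 24:
        return 0.25
    if person_months <= 500:
        return 0.35      # assumed intermediate value between the reported 25% and 50% bands
    if person_months <= 1000:
        return 0.50
    if person_months <= 2400:
        return 0.75
    return 1.00          # no successes were observed above 2,400 person-months

def risk_by_team_size(team: int) -> float:
    """Approximate probability of underperforming, by team size."""
    return 0.30 if team <= 20 else 0.55   # 25%-35% band up to 20 people, above 50% beyond

print(risk_by_effort(800))     # 0.5
print(risk_by_team_size(25))   # 0.55
```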

Overall, increases in the size of a project mean increased risk, even for experienced project managers. However, conventional wisdom that restricts project size by budget or duration is somewhat misguided. A focus first on effort, and then on team size and duration, will do more to limit the risk of underperformance.

Another important insight can be gleaned from an analysis of all four measures. Surprisingly, we found that one-quarter of projects underperform no matter how small they are. Even projects with a budget of less than £50,000, effort of less than 24 person-months, duration shorter than six months, or a team of fewer than five people faced a 25% risk. There is a significant level of risk regardless of size.

The Relationship between Process Volatility and Risk

IT projects can be disrupted by a variety of changes, including changes in technology, project requirements, personnel, and the external environment. In recent years there has been a growing consensus on the importance of continuity of sponsor and project manager throughout a project. For example, the U.K. government recently introduced the concept of a "Single Responsible Owner." To our knowledge, the effects of stability and change on projects have not been empirically investigated.

We focused on two sources of change: governance volatility (changes in project manager or executive sponsor) and target volatility (changes in schedule, budget, and scope). In our sample, the project manager changed on average once in every two projects, whereas the sponsor changed once in every four projects. Target changes were more frequent than governance changes, occurring an average of eight times per project.

Both types of volatility are related to performance. Better performing projects averaged 0.4 governance changes (less than one change per project), roughly a quarter of the 1.5 governance changes averaged by underperforming projects. Better performing projects also averaged seven target changes per project, compared with an average of 12 in underperforming projects.

Figures 2(a) and 2(b) show the relationship between change and risk graphically. Projects with no change in key personnel faced a 22% risk of underperforming, whereas projects with two or more changes faced a risk of more than 50%. Projects with nine or fewer target changes faced no more than a 33% risk of underperforming, whereas projects with more than nine changes faced a risk of over 50%. These results suggest volatility is strongly related to performance and underline the importance of project governance.
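
A comparable sketch for the volatility figures just described, again assuming a simple step-shaped lookup: the thresholds and risk levels are those reported above, while the value for exactly one governance change is an assumed intermediate point.

```python
# Simplified reading of Figures 2(a) and 2(b): underperformance risk as a function
# of governance and target volatility. Thresholds come from the text; the step
# shape is an assumption for illustration only.

def risk_from_governance_changes(changes: int) -> float:
    """Risk given the number of project manager / sponsor changes."""
    if changes == 0:
        return 0.22
    if changes == 1:
        return 0.35   # assumed intermediate value between the reported endpoints
    return 0.50       # two or more changes: more than 50% in the study

def risk_from_target_changes(changes: int) -> float:
    """Risk given the number of schedule, budget, and scope changes."""
    return 0.33 if changes <= 9 else 0.50

print(risk_from_governance_changes(2))  # 0.5
print(risk_from_target_changes(12))     # 0.5
```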

Estimating the Impact of Size and Volatility

We discussed the relationships between project size, volatility, and risk in the previous section. Here, we use regression analysis to estimate how changes in each of these variables affect schedule, budget, and scope. The significant and interesting results are presented in Table 2 and discussed in more detail below.

Perhaps the most startling coefficients in Table 2 show that a single change of project manager is associated with an approximately 8% increase in actual time taken, 4% more budget expended, and 3.5% less scope delivered. On hearing these findings, the ex-director of a mega-program quipped, "If only I'd known beforehand that my departure would add £80 million to the budget I'd have offered to stay for half that!" However, she missed the further implication that in addition to the increased spending, the project could also be expected to underdeliver its scope by 3.5% and take over 8% longer.

Changes in the project sponsor/client manager showed no significant effect on budget or schedule variances; however, a single change in sponsor was associated with a 5.6% decrease in the percentage of scope delivered.

Although changes in targets while a project is under way might be expected to make them more attainable, this need not always be so; for example, competitive imperatives may require delivery to be brought forward against the plan. Our results demonstrate that most changes are associated with adverse performance against original project targets. As shown in Table 2, a single change in a schedule target is associated with a 1.3% increase in actual time taken, a 1.4% increase in budget expended, and a 0.7% increase in scope delivered. A change in scope is associated with a 0.6% increase in time taken and a 1.1% increase in budget spent. In contrast, a change in a budget target is associated with a 1.5% decrease in final budget expended.
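
To show how these per-change effects accumulate, the following sketch applies the coefficients quoted above to a hypothetical project. It uses only the schedule-related coefficients stated in the text (Table 2 contains the full set), and it is illustrative arithmetic rather than the authors' regression model.

```python
# Applying the per-unit schedule effects quoted above (a subset of Table 2) to a
# hypothetical project. Illustrative linear arithmetic, not the study's regression itself.

SCHEDULE_EFFECT = {
    "pm_changes": 8.0,        # each project manager change: ~8% more time taken
    "schedule_changes": 1.3,  # each schedule target change: +1.3% time
    "scope_changes": 0.6,     # each scope change: +0.6% time
}

def expected_schedule_increase(pm_changes: int, schedule_changes: int, scope_changes: int) -> float:
    """Sum of the quoted per-change schedule effects (linear approximation)."""
    return (SCHEDULE_EFFECT["pm_changes"] * pm_changes
            + SCHEDULE_EFFECT["schedule_changes"] * schedule_changes
            + SCHEDULE_EFFECT["scope_changes"] * scope_changes)

# Hypothetical project with one PM change, three schedule changes, and three scope changes.
print(expected_schedule_increase(1, 3, 3))  # 13.7 -> roughly 14% longer than planned
```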

The cumulative impact of these target volatilities deserves serious attention. While Table 2 shows that a single target change is related to only a small percentage change in performance, projects usually undergo many target changes. On average, projects in our sample reported three to four changes in schedule, two in budget, and three in scope. For such an average project, the cumulative impact of target changes would increase schedule variance by 10%.

With regard to project size, total effort turns out to be the best predictor of budget and schedule variance. Results from the regression analysis indicated that for each 1,000 person-months of effort, budget variance increases by 8%. For smaller projects that deploy 100 person-months or less, this is inconsequential. For the few that run to several thousand person-months, the extra cost multiplies.
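
As a back-of-the-envelope illustration of the scale involved, the sketch below applies the reported figure of roughly 8% additional budget variance per 1,000 person-months, assuming the relationship stays linear:

```python
# Back-of-the-envelope application of the reported coefficient: roughly 8% extra
# budget variance per 1,000 person-months of effort (linearity is assumed here).

def extra_budget_variance(person_months: float) -> float:
    return 8.0 * person_months / 1000.0   # percentage points of budget variance

print(extra_budget_variance(100))    # 0.8  -> negligible for a small project
print(extra_budget_variance(3000))   # 24.0 -> substantial for a multi-thousand person-month project
```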

Conclusion

Our data provides a new benchmark, as called for by Glass [2], for what is reasonably achievable in IT projects today. Experienced project managers should be delivering within approximately plus or minus 7% of original budget, schedule, and scope on two out of every three projects. By analyzing the effects of project size and volatility, we have identified some of the characteristics associated with this level of performance.

Our results indicate that while approximately 9% of IT projects are abandoned, another 7% consistently overdeliver on original project targets. Elements of project size, and in particular effort as measured in person-months, influence the risk of underperforming. Volatility is associated with project variances. In particular, changes in the project manager have shown strongly adverse effects.

What our results suggest is that IT project managers cannot accept all of the responsibility for delivering projects successfully. Top management and steering committees have a significant role to play in managing project risk. Ambitiously sized projects, moving targets, and managerial turnover present challenges that stretch even experienced project managers and result in greater variances. Effective oversight can help project managers respond to these challenges. Table 3 summarizes our findings as responses to questions that might be posed by executive sponsors.

References

1. Brooks, F.P. The Mythical Man-Month. Addison-Wesley, Reading, MA, 1975.

2. Glass, R. The Standish Report: Does it really describe a software crisis? Commun. ACM 49, 8 (Aug. 2006), 15–16.

3. Jenkins, A.M., Naumann, J.D., and Wetherbe, J.C. Empirical investigation of systems development practices and results. Information and Management 7, 2 (Feb. 1984), 73–82.

4. Jørgensen, M. and Moløkken-Østvold, K. How large are software cost overruns? A review of the 1994 CHAOS Report. Information and Software Technology 48 (2006), 297–301.

5. Phan, D., Vogel, D., and Nunamaker, J. The search for perfect project management. Computerworld (1988), 95–100.

6. Standish Group. Interview: Jim Johnson of the Standish Group, by D. Hartmann. InfoQ (Aug. 25, 2006); www.infoq.com/articles/Interview-Johnson-Standish-CHAOS.

7. Wallace, L. and Keil, M. Software project risks and their effect on outcomes. Commun. ACM 47, 4 (Apr. 2004), 68–73.

Authors

Chris Sauer (chris.sauer@sbs.ox.ac.uk) is a Fellow in Information Management at the Saïd Business School, University of Oxford, Egrove Park, U.K.

Andrew Gemino (gemino@sfu.ca) is an associate professor in the management information systems area of the Faculty of Business Administration at Simon Fraser University, Vancouver, Canada.

Blaize Horner Reich (BReich@sfu.ca) is a professor in the management information systems area of the Faculty of Business Administration at Simon Fraser University, Vancouver, Canada.

Footnotes

1. The Standish Group CHAOS Report for 2006 is available but not in the public domain and hence is not used here.

Research funding for the work appearing in this article has been made possible by financial support from the French Thornton Group, the Social Sciences and Humanities Research Council of Canada, and the Natural Sciences and Engineering Research Council of Canada. The authors are also grateful to the editor and staff members of Computer Weekly for their encouragement and technical support with the data and descriptive statistics used in this article.

Figures

Figure 1. Risks associated with project size.

Figure 2. Risks associated with governance and target volatility.

Tables

Table 1. Identifying five IT project types.

Table 2. Estimating the impact of volatility and project size on performance.

Table 3. Guidelines for assessing and managing project risk.


©2007 ACM  0001-0782/07/1100  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.



 
