
Communications of the ACM


Back to Experimentation

Some of us in the computing field have been around long enough to start to see the adage "History repeats itself" come true in the way we produce major advances in computing. Here, I want to note the recent emergence of serious experimentation on a scale not seen in years1 and relate it to what some of us participated in when the field was young.

Since the late 1990s, a number of proposals and initial attempts have sought to develop and experiment with new technology under real-world, but nonproduction, conditions, on a scale relative2 to the desired result not seen since CACM was young. Three such efforts point the way toward a renewed and very valuable trend of experimentation.

The most visible is the Defense Advanced Research Projects Agency-supported effort to build and operate robotic vehicles capable of driving themselves under demanding, real-world conditions. It has succeeded, not only operationally, but also in engaging the effort and imaginations of hundreds, perhaps thousands, of researchers and students, as well as the general public. It is for roboticists to evaluate the technical results, but from my perspective it has been a great success in helping us all set our sights on what can be achieved through experimentation at scale.

The second, just starting to do some preliminary prototyping after extensive planning, is the Global Environment for Network Innovations (GENI) project, begun in 2004 by the National Science Foundation's Directorate for Computer & Information Science & Engineering. GENI intends to refocus networking research on new architectures and mechanisms for future networks, not just on developing patches for our current networks. The project's Web site, which describes GENI and provides pointers to related information, is maintained by the GENI Project Office operated by BBN Technologies under agreement with NSF. GENI will support such research with a large-scale, experimental network that will be the largest experimental piece of "equipment" built solely for computer science research. It is not yet well known outside the computing research community, though such mainstream publications as The New York Times and The Economist have covered its progress. Meanwhile, it has already spurred networking and related research (including computer science theory and communications theory) and major responses from Europe and Japan (the latter seen only in news reports at the time of this writing3).

The third effort—called by some "data-intensive supercomputing"—is still largely at the talking stage, though it appears to be gaining momentum. It is based on the idea that the massive, constantly changing databases we all access (think Google) represent a new mode of computing and deserve to be explored more systematically. Various ideas are being developed on how to do this without becoming entangled in critical production processes.

These three efforts present great opportunities for advancing computer science and the technologies it makes possible. They also potentially involve extensive research activities, as well as significant investment in research infrastructure to enable the actual research. NSF spends 25%–30% of its annual budget on instruments to advance science in other fields, but computer science has not envisioned such large projects until recently.4

These observations led the CISE Directorate to issue a call for proposals to create a "community proxy responsible for facilitating the conceptualization and design of promising infrastructure-intensive projects identified by the computing research community to address compelling scientific `grand challenges' in computing." In September 2006, NSF chose the Computing Research Association to create the Computing Community Consortium. Now in operation, the consortium looks to engage as many people and institutions in the research, education, and industrial communities as possible to fulfill its charter. At the heart of the effort is the understanding that major experimentation can and should be done in many cases before more expensive development and deployment are undertaken, something that industry alone can't afford to do.

All three efforts described here involve research characterized by observation, measurement, and analysis of results. While the same can be said of many industrial prototyping efforts and should also be true of small-scale academic research (such as thesis work), such experiments are either impossible to conduct under large-scale, real-world conditions (in the case of academic research) or aren't done at all due to the pressure to produce near-term, profitable results. It's rare for experimentation to advance the boundaries of what we know how to do in computer science on a scale that is large relative to the state of the art.

The "relativity" factor has all but eliminated the kind of experimentation we did in the 1950s and 1960s. For example, in the mid-1960s, I was able and encouraged to build a small (four-user) time-sharing system on a minicomputer as a master's thesis that others could use in a production environment to see how well it worked and how it might change operations [1]. Even though it was tiny by today's standards, it was large relative to what existed then. I was able to do it because there were no such commercial systems then, and users were hungry for any improvement, even if it crashed some of the time. Today, it is impossible to mount a similar operating systems project, relative to what is required technically and expected by users.

This brings me back to the title of this column. The projects I've described here and the efforts to develop others portend the return of experimentation, somewhat in the style of the early days of computer science but with some important differences. First, while we should and indeed will see much more serious experimentation in the future, it will certainly be more costly than its counterparts years ago. Second, in some projects—perhaps most, given the practical nature of computing—experimenters must find ways to involve significant numbers of users in the "experiment"; this is a key feature of the GENI project. Third, and most important, they must employ much more careful observation, measurement, and analysis than was necessary or possible 50 years ago. So, I hope history really is repeating itself but this time improving what we do, how we do it, and the results all at the same time.



1. Freeman, P. Design Considerations for Time-Sharing Systems on Small Computers. Master's Thesis, University of Texas at Austin, 1965.



Peter A. Freeman is Emeritus Dean and Professor at Georgia Tech, Atlanta, GA. As Assistant Director of the National Science Foundation Directorate for Computer & Information Science & Engineering, 2002–2007, he was involved in starting the GENI project and the Computing Community Consortium.



1Despite serious experimentation in computing research (reflected in the special section "Experimental Computer Science," November 2007), from my perspective as a professor, we have not insisted on enough experimentation.

2"Relative" is the operative idea here. Most experimentation so far has been only a fraction of the scale of a "fieldable" product or system, thus leaving open the question of scalability. One might argue, only slightly gratuitously, that some large government projects have indeed been "experiments"; unfortunately, they are rarely intended to be experiments, nor is much learned from the attempt in many cases.

3The Japanese Minister of Technology was widely quoted last summer, though he left office soon thereafter; plans are still being prepared.

4EarthScope is an excellent example of how such research infrastructure advances science and technology in other fields.

©2008 ACM  0001-0782/08/0100  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2008 ACM, Inc.

