
Communications of the ACM


The Next 50 Years

Since this issue celebrates 50 years of Communications, it seems appropriate to speculate about what the next 50 years may bring, in time for the 100th anniversary. For this look into the future, I cover three areas of "computing machinery": the theoretical understanding of computation; the technological substrate of computing machines; and the applications that computing machines bring to our lives.

Our current understanding of computation sprang from Alan Turing's famous 1936 paper on computable numbers but really got going around the time CACM was first published. It has centered on what is computable in principle and on how much time and memory are needed to compute it.

By the early 1960s the idea of asymptotic analysis of algorithms was in place: if you can characterize a computational problem by a positive integer n (such as the number of records you want to sort on some key), then what is the theoretical minimum time and/or storage space, as a function of n, required to complete the task? Even better, what is an algorithm that achieves it? (In the case of sorting records, the time requirement is proportional to n log n, and the space is proportional to n.) This approach to understanding computation has been wonderfully productive and is still yielding both theoretical insights and practical results (such as in the field of cryptography). Other formalisms have been developed to understand distributed computing and asynchronous computation, but none has become as established or been able to yield such universal results as asymptotic complexity analysis.
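This kind of analysis can be seen empirically. The following sketch (added here as an illustration, not part of the original article) instruments a merge sort to count key comparisons; the count grows roughly in proportion to n log n, so the ratio of comparisons to n log2 n stays nearly constant as n grows:

```python
import math
import random

def merge_sort(a):
    """Sort a list, returning (sorted_list, number_of_key_comparisons)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort(a[:mid])
    right, cr = merge_sort(a[mid:])
    merged, comps = [], cl + cr
    i = j = 0
    # Merge the two sorted halves, counting each key comparison.
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comps

for n in (1_000, 10_000, 100_000):
    data = [random.random() for _ in range(n)]
    _, comps = merge_sort(data)
    print(f"n={n:>6}: {comps} comparisons, ratio to n*log2(n) = {comps / (n * math.log2(n)):.2f}")
```

The near-constant ratio across a hundredfold increase in n is what asymptotic analysis predicts, and an information-theoretic argument shows no comparison-based sort can do asymptotically better.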


That lack of progress is sure to change over the next 50 years. New formalisms will let us analyze complex distributed systems, producing new theoretical insights that lead to practical real-world payoffs. Exactly what the basis for these formalisms will be is, of course, impossible to guess. My own bet is on resilience and adaptability. I expect we will gain insights from these two properties, both almost universal in biological systems. For example, suppose we start with the question of how to specify a computation and how quickly this particular computation is likely to diverge if there is a one-bit error in the specification. Or how quickly it will diverge if there is a one-bit error in the data on which it is acting. This potential divergence leads to all sorts of questions about resilience, then to questions about program encoding and adaptability of software to hostile environments. The goal would be nonbrittle software modules that plug together and just work, in the remarkable way our own flesh repairs itself when insulted and how our bodies adapt to transplantation of a piece of someone else's liver. The dream of reliable software may follow from such a theoretical reconsideration of the nature of computation.
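The thought experiment about a one-bit error in the data can be made concrete with a toy model (my illustration here, not one from the article): run a chaotic elementary cellular automaton (rule 30) from two initial states differing in a single bit and track how far the trajectories diverge:

```python
import random

RULE = 30  # elementary cellular automaton rule 30, a standard example of chaotic behavior

def step(state):
    """Advance the automaton one step (periodic boundary conditions)."""
    n = len(state)
    # Each new cell is the rule-table bit indexed by (left, center, right).
    return [(RULE >> ((state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n])) & 1
            for i in range(n)]

def hamming(a, b):
    """Number of positions where two bit lists disagree."""
    return sum(x != y for x, y in zip(a, b))

random.seed(1)
n = 256
original = [random.randint(0, 1) for _ in range(n)]
perturbed = original.copy()
perturbed[n // 2] ^= 1  # the one-bit error in the data

a, b = original, perturbed
for t in range(1, 65):
    a, b = step(a), step(b)
    if t in (1, 4, 16, 64):
        print(f"after {t:3d} steps: {hamming(a, b)} bits differ")
```

The single flipped bit contaminates an ever-widening region of the computation; a resilient encoding would instead confine or absorb such an error, and a future theory would tell us which computations admit such encodings.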

As for the computing machines themselves, it is worth noting that technology generally seems to have a "use by" date. Bronze gave way to iron, horses gave way to automobiles, and, more recently, analog television signals finally and belatedly gave way to digital television, long after digital techniques had begun emulating analog ones through years and years of backward compatibility. We are just reaching that stage with the classical von Neumann architecture for a single digital computational processor. We have spent the last few decades maintaining the appearance of a von Neumann machine with uniformly addressable memory and a single instruction stream, even though, in the interest of speed, we have had multiple levels of memory (and caches to hide them) and many parallel execution units (and pipeline stalls to hide them).

As the feature size on our chips becomes so small that we cannot shrink it further and still maintain the digital abstraction of what goes on underneath, we have begun to see the emergence of multi-core chips. And in traditional computing machinery style, we immediately also see an exponential increase in the number of cores on each chip. Each of these cores is itself a traditional von Neumann abstraction. The latest debate is whether to make that whole group of cores appear as a single von Neumann abstraction or bite the bullet and move beyond von Neumann.

Thus we are currently witnessing the appearance of fractures in the facade of the von Neumann abstraction. Over the next 50 years we will pass the "use by" date of this technology and adopt new computational abstractions for our computing machinery.

The most surprising thing about computation over the past 50 years has been the radical new applications that have developed, usually unpredicted, changing the way we work and live, from spreadsheets to email to the Web to search engines to social-interaction sites to the convergence of telephones, cameras, and email in a single device. Rather than make wildly speculative—and most likely wrong—predictions about applications, I will point out where a number of trends are already converging.

A key driver for applications is communication and social interaction. As a result, wireless networks are increasingly pervasive and indispensable. A key medical development over the past decade has been the implanting of chips that communicate directly with people's nervous systems; for example, more than 50,000 people worldwide now hear through the miracle of a cochlear implant embedded inside their heads (running C code, no less). In clinical trials, blind patients are starting to see, just a little, with embedded chips, and quadriplegics have begun to control their environments by thinking what they want to happen, with embedded chips detecting the resulting signals in their brains and communicating them outward.

Over the next 50 years we will bring computing machinery inside our bodies and connect ourselves to information sources and each other through these devices. A brave new world indeed.



Rodney Brooks is the Panasonic Professor of Robotics in the MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA.



Figure. Aaron Edsinger and his Domo upper-torso humanoid robot (29 degrees of freedom) developed at the MIT Computer Science and Artificial Intelligence Laboratory.


©2008 ACM  0001-0782/08/0100  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.



