Why do we, as researchers and practitioners, have this deep and abiding love of computing? Why do we compute? Superficially, the question seems as innocuous as asking why the sky is blue or the grass is green. However, like both of those childhood questions, the simplicity belies the subtlety beneath. Just ask someone about Rayleigh scattering or the quantum efficiency of photosynthesis if you doubt that simple questions can unearth complexity.
At its most basic, computing is simply automated symbol manipulation. Indeed, the abstract Turing machine does nothing more than manipulate symbols on a strip of tape using a table of rules, rules deceptively simpler than those of some board games. Though trivially true, that description misses the point: symbol manipulation under those rules captures everything we mean by effective computation, a claim we now call the Church-Turing thesis.
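The mechanics really are that spare. Below is a minimal illustrative sketch of a Turing machine simulator; the machine, its rule table, and the blank symbol are all invented for the example, not any canonical formulation. Three rules suffice to invert a binary string and halt.

```python
# Minimal Turing machine sketch (an illustrative toy, not a canonical encoding).
# The rule table maps (state, symbol) -> (symbol_to_write, head_move, next_state).

def run(tape, rules, state="start", halt="halt", blank="_"):
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as blank
    pos = 0
    while state != halt:
        write, move, state = rules[(state, cells.get(pos, blank))]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Three rules: flip each bit, move right, halt at the first blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("0110", invert))  # -> 1001
```

The entire "computer" is a dictionary lookup in a loop, which is precisely the point: the richness comes from what such tables can express, not from the machinery itself.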
However, as deep and as beautiful as the notion of computability really is, I doubt that is the only reason most of us are so endlessly fascinated by this malleable thing we call computing. Rather, I suspect it is a deeper, more primal yearning, one that underlies all of science and engineering and that unites us in a common cause. It is the insatiable desire to know and understand.
When I recently stood atop Mauna Kea, looking at the array of telescopes perched there, I was again struck by our innate curiosity. Operated by diverse international partnerships and built there at great expense, they exist because we care about some fundamental questions. What is the evolutionary history and future of the universe? What are dark matter and dark energy? Why is there anything at all?
Answers to these questions are not likely to remedy our current economic woes, improve health care, or address our environmental challenges. Nevertheless, we care about the answers.
As I pondered the gloaming sky, my thoughts turned to Edwin Hubble, who first showed that some of those faint smudges in the sky were “island universes” – galaxies like our own. The universe was a far bigger place than we had heretofore imagined. As Hubble observed about this age-old quest to understand:
From our home on the Earth, we look out into the distances and strive to imagine the sort of world into which we are born. Today we have reached far out into Space. Our immediate neighborhood we know rather intimately. But with increasing distance our knowledge fades, and fades rapidly, until at the last dim horizon we search among ghostly errors of measurement for landmarks that are scarcely more substantial. The search will continue. The urge is older than history. It is not satisfied and it will not be suppressed.
Hubble’s comment was about the observational difficulties of distance estimation and the challenges of identifying standard candles. However, it could just as easily have been a meditation on computing, for we are driven by our own insatiable desires for better algorithms, more flexible and reliable software, new sensors and data analytics tools, and ever larger and faster computers.
Why do we compute? I suspect it is for at least two related reasons, neither of them publication counts, tenure, wealth, or fame. The first is the ability to give life to the algorithmic instantiation of an idea, to see it dance and move across our displays and devices. We have all felt the exhilaration when an idea takes shape in code, then begins to execute, sometimes surprising us with its unexpected behavior and complexity. Computing’s analog of deus ex machina brings psychic satisfaction.
The second reason is that computing is an intellectual amplifier, extending our nominal reach and abilities. I discussed the power of computing to enable and enhance exploration in another CACM blog. (See Intellectual Amplification via Computing.) It is why those of us in computational science continually seek better algorithms and faster computer systems. From terascale to petascale and the global race to exascale, it is a quest for greater fidelity, higher resolution and finer time scales. The same deep yearning drives astronomers to seek higher resolution detectors and larger telescope apertures. We are all searching the ghostly signals for landmarks.
It is our ability to apply our ideas and their embodiment in code to a dizzying array of problems, from the prosaic to the profound, that attracts and compels us. It is why we compute.
Hubble was right. We compute because we want to know and understand. The urge is deep and unsatisfied. It cannot be denied.
Dear Dr. Reed,
I was interested in your questions here in Communications of the ACM: "Why do we ... have this deep and abiding love of computing? Why do we compute?"
Your answer echoes my favorite quote from Richard Hamming of the old Bell Labs: "The purpose of computing is insight, not numbers."
But a deeper understanding of our need to compute can be obtained from Greg Chaitin's work at IBM, which he calls Algorithmic Information Theory (AIT). The most important insight from AIT is that information is a conserved quantity, like energy and momentum. Therefore the output from any computation cannot contain more information than was input. This raises your question again at a different level: Why do we compute, if we are getting no more information than we started with?
The answer to this question takes us deep into the philosophy of science. With AIT we can make the philosophy of science quantitative for the first time. Instead of asking What do we know and how do we know it, we ask instead: How much do we know? How much can we know? and How much do we need to know?
Unlike most questions in philosophy, these questions have answers, quantitative answers that provide insight into the nature of science itself.
If you are interested in a quantitative, information-theoretic approach to the philosophy of science, you can find some of these ideas explored in my website:
Or if you like I would be happy to discuss them by e-mail:
I would be very interested in your ideas and insights into these matters.
Best regards, Doug Robertson
Cooperative Institute for Research in Environmental Sciences
University of Colorado
The following letter was published in the Letters to the Editor in the June 2012 CACM (http://cacm.acm.org/magazines/2012/6/149799).
Daniel Reed's blog (Sept. 2, 2011) and Douglas Robertson's related letter "Insight, Not Numbers" (Apr. 2012) speculated on why we compute, suggesting two noble motivations: "know and understand" and "insight." Robertson also added interesting comments regarding algorithmic information theory. However, both authors seemed to take a purely philosophical or research perspective, ignoring the large number of real-world corporate examples in which the primary motivation for computing is that many businesses would otherwise be unable to deliver services and products to their customers or manage, organize, store, or access in a timely fashion the ever-increasing data needed to run a large enterprise, especially one for which "information" is a key part of its products.
Reed's description of "the exhilaration when the idea takes shape in code, then begins to execute" is something that first attracted me to computing and I have always regarded it as a "perk" of the profession. However, it was always the need to solve utilitarian problems to improve the corporate ability to process data and support customers that justified my paycheck.
In real-world corporate computing, the idea of information conservation is not a limiting factor for computation, as one does not deal with a closed system. Every second of every day new data pours in from customers and corporate processes alike, accumulating in large databases. The challenge is to determine what data is no longer useful and how and when to discard it.
Joel C. Ewing
The following letter was published in the Letters to the Editor of the April 2012 CACM (http://cacm.acm.org/magazines/2012/4/147353).
In his blog (Sept. 2, 2011), Daniel Reed asked, "Why do we . . . have this deep and abiding love of computing?" and "Why do we compute?" His answer, "We compute because we want to know and understand," echoed Richard Hamming of the old Bell Labs, who famously said, "The purpose of computing is insight, not numbers."
But a deeper understanding of our need to compute can be found in the mathematical formalism Gregory Chaitin of IBM calls Algorithmic Information Theory, or AIT. Perhaps the most important insight from AIT is that information is a conserved quantity, like energy and momentum. Therefore, the output from any computation cannot contain more information than was input in the first place. This concept shifts Reed's questions more toward: "Why do we compute, if we get no more information out than we started with?"
AIT can help answer this question through the idea of compression of information. In AIT, the information content of a bitstring is defined as the length of the shortest computer program that will produce that bitstring. A long bitstring that can be produced by a short computer program is said to be compressible. In AIT it is information in its compressed form that is the conserved quantity. Compressibility leads to another answer to Reed's questions: "We compute because information is often most useful in its decompressed form, and decompression requires computation." Likewise, nobody would read a novel in its compressed .zip format, nor would they use the (compressed) Peano axioms for arithmetic to make change in a grocery store.
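The compression idea is easy to make concrete. The sketch below uses Python's zlib as a stand-in compressor; a practical compressor only gives an upper bound on true algorithmic information content, and the specific strings and lengths here are illustrative assumptions. A long but regular string compresses to a few dozen bytes, a seeded pseudo-random string of the same length barely compresses at all, and decompression recomputes the original exactly.

```python
import random
import zlib

# A 10,000-byte string produced by a tiny "program" (b"ab" * 5000):
# low algorithmic information content, so it compresses drastically.
regular = b"ab" * 5000

# A seeded pseudo-random string of the same length: zlib finds no short
# description, so it is essentially incompressible.
rng = random.Random(42)
noisy = bytes(rng.randrange(256) for _ in range(10_000))

small = zlib.compress(regular, 9)
big = zlib.compress(noisy, 9)

print(len(small))  # a few dozen bytes
print(len(big))    # close to 10,000 bytes

# Decompression recomputes the original exactly: the information was
# conserved in compressed form, and computation makes it usable again.
print(zlib.decompress(small) == regular)
```

The asymmetry between the two lengths is the point: compressed size estimates how much information a string actually contains, and the round-trip shows why we still compute even though no new information appears.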
Further, AIT also provides novel insight into the entire philosophy of science, into what Reed called our "insatiable desire to know and understand." AIT can, for the first time, make the philosophy of science quantitative. Rather than ask classical questions like "What do we know and how do we know it?," AIT lets us frame quantitative questions like "How much do we know?," "How much can we know?," and "How much do we need to know?"
Unlike most questions in philosophy, these questions have concrete, quantitative answers that provide insight into the nature of science itself. For example, Kurt Gödel's celebrated incompleteness theorem can be seen as a straightforward consequence of conservation of information. AIT provides a simple three-page proof of Gödel's theorem that Chaitin calls "almost obvious." And one quantitative implication of Gödel's theorem is that a "Theory of Everything" for mathematics cannot be created with any finite quantity of information. Every mathematical system based on a finite set of axioms (a finite quantity of compressed information) must therefore be incomplete. This incompleteness of mathematics leads naturally to another important quantitative question, "Can a Theory of Everything for physics be created with a finite quantity of information?", which can also be explored using the concepts developed in AIT.
Douglas S. Robertson