

Historical reflections

Prophets, Seers, and Pioneers



Credit: Toni V / Shutterstock

In the 1970s, Chris Evans, a psychologist and computer scientist on the staff of the National Physical Laboratory, Teddington, observed that many of the first-generation pioneers of modern computing, the people who can reasonably be credited with laying the foundations of the digital age, were still with us. Chris conceived the idea that these people should be interviewed, and their recollections of the projects they led, the people they worked with, and the genesis of their ideas should be recorded for future generations. Tragically, aged only 48, Chris succumbed to cancer before he was able to complete the interview series he planned. Further interviews were carried out by Brian Randell, Simon Lavington, and others, but without intending any disservice to their sterling efforts, the conversations featuring Chris demonstrate a standard of professionalism and an ease within the milieu that really set them apart.

The Evans interviews were released by the Science Museum, London,a as a set of 20 audio recordings under the title "The Pioneers of Computing." Almost 20 years ago, I became closely involved with the "Pioneers" series, transcribing, documenting, and annotating the interviews in collaboration with the Science Museum. One of the signature features of Chris Evans' interviewing technique was to conclude most of his recorded conversations with a couple of questions broadly concerned with prognostication. The first was to ask the interviewee to consider their state of mind at the time they were undertaking their pioneering work, and to say how they would have expected computing to have developed up to the present day (that is, the mid-1970s).

The second signature question of the Evans interviews encouraged the pioneers to look forward 10 or 20 years and indicate how they expected computing to progress into the 1990s. These questions have always struck me as being a good way to close out a somewhat technical conversation in a relaxed and informal way; more or less the complement of opening an interview by asking if the interviewee had a good journey. But quite aside from being a socially graceful exercise, the questions also produced some interesting responses. These range from F.C. Williams' typically terse and somewhat cheeky 'no-comment' response, "I'm not really interested in computers. I mean it's just no good asking me a question like that. I made one, and I thought one out of one was a good score so I didn't make any more"b to Allen Coombs' much more loquacious 1,500-word response.

It is fair to say that predicting the future with any degree of accuracy is a tricky business. For my own part, my recent track record leads me to agree with Jane Austen's Emma, who opines "I begin to doubt my having any such talent."c

As part of my work I lead a research team that has a very successful track record in being awarded research grants from the European Commission, and others. This funding is critical for keeping together our international team of experts, and for recruiting new talent. The work that it supports keeps us embedded in the leading research in our field and supports international research collaboration on a reasonably large scale. For these reasons, it is important to keep a close eye on any broadly 'political' developments that might affect us, as well as paying attention to pure research matters. More than one year ago, as I considered how the Brexit vote would turn out, I was certain the result would be close, but I was very confident that the voters would, in the end, decide that continued membership of the EU was the wisest course of action. Areas of the country that benefitted most from regional and other support would surely realize where their interests were best served. Things would go on pretty much as they were. Having learned little from the experience of getting wrong a critically important prediction on matters where I am very well informed, I turned my attention to the 2016 U.S. Presidential election. It turns out that being relatively ignorant did nothing to improve my crystal-ball gazing.

One of the themes that emerged from the responses given to Chris Evans by the Pioneers concerned an expectation that the costs of computing would fall. Another concerned communications infrastructure.

John M.M. Pinkerton, one of the leading figures involved with the LEO computer, hit the nail fairly well on the head in saying: "Well a great many things are going to happen but the major influences I think are these: that first of all the cost of processing and the cost of storage is continuing to fall, it's fallen a lot in the last 10 years and it's continuing to fall and as far as I can see it's going to go on falling. But what doesn't fall is the cost of communications, and also the cost of using people doesn't fall, it tends to go up. The result of this is there won't be anything like the obsession there has been with the efficient use of processing or storage but there will be concern over the costs of communications. Not only that, there will be a desire for everybody to have direct personal access to computing potential and because data arises everywhere and because people are everywhere we're going to have to make computing facilities available everywhere."

Arthur W. Burks, senior engineer on the ENIAC, drew a distinction between engineering and theory: "Well, we need to divide computing science into the computers proper and the theory of computers. I think for computers it is clear that they become cheaper and faster and that the revolution of computers in terms of how they interact with us and that the uses that we make of them will continue apace at least for the next 10 years. I think the theory of computers has developed much more slowly and I am not sure when there will be important breakthroughs in the theory of computers."

Konrad Zuse, the prodigious German computer pioneer contrasted the development of hardware and software, but also wanted to sound a cautionary note: "When I was a pioneer in the field and my colleagues were working in the two fields and translating one and another. Today, you have specialists for hardware and specialists for storage techniques, specialists for languages, specialists for theoretical, informatic and so on ... today we have a breakdown of the prices of the integrated circuits and the first consequence is that you can have very cheap and very small computers and this will go on. That means that processing units will become relatively cheap, I think in one or two years you will have already the machine like the Z3, I made in 1941, which to that time took a whole room, where you can put in the pocket. And this development will go on and I am not quite sure that the ... hardware engineers and the software engineers, will work good enough together to take all the consequences of this development. And, I can't say exactly how the computers will look in 10 years or 20 years, but the development is not just finished not at all. We are in the full development now and the consequences of these new techniques will be that intelligence will get cheaper and cheaper. I don't think this is good. There is surely ... there is some danger in this."

John V. Atanasoff remarked with great modesty: "I don't think I'm very wise. I think we see the main attributes of computers during the next years. They will become smaller, they will require less electricity. Once speaking about computers, I felt as if I should say something good to the people that were before me, and I said one thing you can say about computers is they will give great benefits without great losses of energy and I think this will be one of the facets of new computers, the energy of which they use at the present time is of no consequence."

Donald W. Davies, developer of the notion of packet switching, opined: "To do this you first have to do some extrapolation of the technology, you need to know how far that's going to go and to examine, for example, how much steam there is left in semi-conductor development, store development, communications development. We could start, perhaps, with communications. Here the digit rates that are available on long lines in this country can be multiplied by factors of thousands with the technology which is almost available now. In other words, there's really no limit to the digit rates available and therefore the cost per digit could, in principle, come right down. So that, high bandwidth shouldn't cost very much. This is all extrapolating and glossing over all the political and organizational problems. With regard to semiconductor technology it's not quite so clear but I think one could say that there's probably a factor of at least two and perhaps as much as 10 available in the speed, power bandwidth, and so on. Derived from things like ion implantation and so forth so that there's quite a lot to be gained but perhaps not as ... not quite so much as in communications. Again, in storage—the technology is probably tied rather closely to that of semi-conductors. So, in general you can say there will be cheaper machines, slightly faster machines and almost unlimited communications."

Harold L. Hazen, who contributed significantly to the theory of servo-mechanisms and feedback control systems, was somewhat diffident, observing: "I have been so far away from active participation in these things that, and I never was a great speculator. For large extrapolations, one sees sizes going down, the limits of almost microns of element size seem not too much further compressible. Reliability is at an impressive level now, probably will go up. See, what is it? Cost per unit of computation has been halving every two or three years or something of that sort, I don't remember the exact figure. It seems that such series must become asymptotic somewhere, where that will be I'm not sure! But extrapolation for me or imagining what lies ahead would have to be based on the rather plebeian and earthy process of simply extrapolating what's happened the last 10 years. When is the software going to be the limitation rather than the hardware? I just don't know."

Ralph J. Slutz, who worked on the IAS and SEAC computers, looked forward to increasing miniaturization and networking: "What I see in the very near future is a growth of more small computers, say associated with individual engineering work or laboratory work such as that which really are stand-alone computers and can work by themselves perhaps in real time with a laboratory experiment but which, when the need occurs, comes around for something bigger than they customarily can handle, can be connected to a big central computer utility. In a sense, I sort of refer to it at the present time as the invisible computer network because 99% of the time it wouldn't exist but then you dial up on a telephone line or high speed line to your computer utility and get the advantage of large facilities."

There were a number of pioneers like Freddie Williams, who, perhaps sharing my own lack of confidence in their ability to see clearly into an uncertain future, were reluctant to say much.

Herman H. Goldstine, for example, responded: "I don't think I would even try that. Every time I've watched that kind of thing, I've seen how terribly badly the seers have missed the boat. For example, it was only a decade ago, I think, that people ... everybody was saying that the terminal was going to be "the answer to a maiden's prayer." It turns out now, I think that people want small computers which are more or less stand-alone with some capacity to be connected up to a big computer and I don't know what it will be in a decade. I just wouldn't guess."

John W. Mauchly was similarly reluctant: "This is sort of like writing science fiction. Science fiction and the comic strips like Buck Rogers try to be way ahead of the actuality but really, we don't see much further than the end of our nose. And human beings usually extrapolate from what they now know, don't really predict anything that is so remarkable and I am afraid I have that same limitation."

The theme of the likely emergence of learning computers and A.I. was taken up by British Pioneers who had worked with Alan Turing at Bletchley Park, and elsewhere.


Allen W.M. Coombs, in a very full response remarked: "Well, now I think the future of the computer lies in this. With large-scale integration we are going to be able to do this on a much bigger scale, it's getting bigger all the time ... bigger and bigger in numbers and smaller and smaller in bulk, we can get closer and closer to a brain which can learn. I think probably that the next stage of the computer is to be large scale learning of really difficult and complicated things. Not just shapes, which are rather simple, but more difficult things. It is said that there are several stages of learning machine. There is the adaptive machine which can be changed ... modified, that is the learning machine which really means an adaptive machine with a human teacher and there is the self-organizing machine, which is a machine which can ... which is adaptive and can learn but doesn't need a human to tell it what to learn, it finds out for itself. Every child in this sense is a self-organizing system. And the next stage of computer technology is to make self-organizing systems. That is something that hasn't been done yet. A lot of people have got ideas but there is a chance of doing it now that we have got large scale integration available to us and getting more and more understanding of what goes on in a brain when it learns things."

I.J. (Jack) Good thought that: "The main development will probably be in software I think, in machine intelligence work. But the main potential advance perhaps in electronic computers will be to go to ultra-parallel computers in my opinion, like the brain. I see that as the biggest potential advance in computers apart from the possibilities of programming reaching intelligent machinery."

Donald Michie responded that: "The theme which I see coming to the fore is the transfer of systematized human knowledge, for example from books where most of it now is, but including in the brains of experts, into computing systems. Not just as lookup systems but in the form of operational knowledge which the computing systems can utilize to perform skilled tasks of interest to particular professional specialists. Now once that process gets under way it is inherently a tear away process, it's a bootstrapping process, because then you have machine intelligence systems which are able assistants, not only in organic Chemistry, or astronomy or whatever branch of science a particular scientist's assistant program has been engineered for, but don't forget that there is one other branch of science namely machine intelligence and that branch will also acquire powerful and teachable and self-adaptable research assistants and that is the beginning of a tear away process. And I would say myself that sometime between 1980 and 1985, I would expect it to spread through the science community, the realization for good or otherwise that this tear away process has now started."

Perhaps the most remarkable thing about these various attempts to look into the future is the very great extent to which all of the people who ventured an opinion got nearly everything right. We live in a time when provable falsehoods are presented as 'alternative facts' and expertise is routinely disparaged. In that context, I find the modestly expressed prescience of the pioneers who laid the foundations of the digital age both encouraging and uplifting. These voices from the past, captured by Chris Evans, give me renewed hope for the future.

Author

David P. Anderson (cdpa@btinternet.com) is Professor of Digital Humanities at the Centre for Research & Development (Arts)/Cultural Informatics Research Group, University of Brighton, U.K.

Footnotes

a. Published by Computer Capacity Management Limited, Reading, and Hugo Informatics.

b. All the quotations from the Pioneers recordings are drawn from my own transcriptions.

c. Austen, J. Emma, Chapter XI.


Copyright held by author.



