Communications of the ACM

Letters to the Editor

A Robot's Roots

I would like to add a bit of etymological history concerning the word "robot" to Vinton G. Cerf's President's Letter "What's a Robot?" (Jan. 2013). The Czech word "robota" shares a common root with the Russian "работа" ("rabota"), as well as with the German "Arbeit," dating to the Dark Ages of idealized German-Slavic unity in the forests of Eastern Europe. The word robota means forced labor and differs from "práce," which means any kind of work, including that of a free man, as well as creative work. Práce shares a common root with the Greek "πρᾶξις" ("praxis"), implying free human existence. The accepted wisdom as to the origin of the word robot says that when Karel Čapek, an early 20th-century Czech author (1890–1938), needed a special word for an artificial slave in his 1920 play R.U.R. (Rossum's Universal Robots), he turned to his brother Josef, who suggested the neologism "robot," deriving it from "robota." The Čapek brothers cooperated often, co-authoring several plays and books. Josef was a modern painter (a favorite of collectors today) and illustrated many of Karel's books, especially those for children. The brothers also embraced English culture and democracy. Karel died shortly after part of Czechoslovakia was annexed by Nazi Germany in 1938, and Josef was arrested by the Gestapo and died in the Bergen-Belsen concentration camp in April 1945.

Ivan Ryant, Prague, Czech Republic

Vinton G. Cerf cited a common misconception that the word "robot" is derived from the Russian word "rabota" (work). The origin of robot is actually more subtle: Unlike Russian, which has only one word for work, the Czech language (the native language of Karel Čapek, who coined the term "robot") has two; the general term is "práce"; the second, "robota" (similar to the Russian word), means "forced labor," as in the labor of a servant. Čapek chose robota since his intent was for robots to be servants to humanity.

Both the Russian word rabota and the Czech word robota derive from the same Slavic word "rab" (slave), because in earlier times, work could be seen as so undignified, even shameful, no self-respecting noble person would do it. Later, when attitudes changed and work was seen as dignified (and it was shameful to be a non-working social parasite), the original word for work in Russian lost its shameful association, while Czech added a new word to describe the dignity of work.

Vladik Kreinovich, El Paso, TX

Back to Top

Get a Job

In his Viewpoint "What College Could Be Like" (Jan. 2013), Salman Khan wrote how the core value proposition of U.S. higher education is increasingly untenable, as reflected in the rising costs of tuition and in graduate unemployment. In response, he envisioned a new kind of university built around the industry co-op. However, his proposal would do little to address the challenges facing U.S. higher education today. Rather, we should see it as another round in the ongoing attempt by market-based education reformers to de-socialize U.S. higher education.

The recession is the dominant cause of both rising tuition and graduate unemployment today. Tuition has increased to help offset lost revenue from state funding and private endowments, not increased costs [2]. Along with the long-term issue of rising tuition [1], the growing costs of student services, facilities, and operations have outstripped the growth in the cost of instruction. Khan's proposal—focused on reducing the cost of instruction through online lectures—would do little to stop or even slow this trend.

Perhaps Khan's hypothetical university could generate extra revenue through student co-ops. In this respect, his focus on computer science hides a larger issue: CS graduates are in high demand, while the opposite holds for nearly every other discipline. Outside computing, more co-ops (if possible) would create more undercompensated intern positions, not full-time positions.

But as academics and intellectuals, as well as citizens, we should be more concerned over how Khan's proposal would reduce education to job training. Where do the ideas of Plato's Republic, Karl Marx's Capital, or Thomas Hobbes's Leviathan fit into a "co-op education"? Or, less ambitious, where does a basic understanding of the U.S. Constitution and system of governance fit? Should we assume these subjects, along with "art and literature," would be deferred to "nights and weekends" as elective pursuits? Such a proposal fails to treat them as having merit equal to technical skills or as an integral part of a broader humanistic education.

Khan claimed students view college primarily as a means to a job (his core value proposition), yet enrollment in co-ops (available for credit in nearly every U.S. engineering school) remains modest. Rather than free students to pursue co-ops, Khan's proposal would shackle them to the demands of a nine-to-five job, restricting their freedom elsewhere.

Carved in Bedford Limestone on the main building of my alma mater, the University of Texas at Austin, are the words: "Ye shall know the truth and the truth shall make you free." Perhaps Khan would substitute "Get a job."

Gilbert Bernstein, Stanford, CA

Back to Top

Look Beyond North America

Anita Jones's Viewpoint "The Explosive Growth of Postdocs in Computer Science" (Feb. 2013) covered an important topic but failed to say explicitly that her argument did not include Europe or Asia. The Taulbee survey, from which she drew her data, is limited to North America, an important job market but not the only one. There are probably more CS faculty and researchers outside North America than in it. And more than half of the top 200 universities in CS worldwide, as ranked by the Shanghai Jiao Tong University index, are outside North America.

Communications missed an opportunity to address the wider ACM audience on a potentially global (not just North American) phenomenon. Moreover, looking beyond North America would be a good way to help achieve the vision Vinton G. Cerf outlined in his President's Letter "Growing the ACM Family" in the same issue.

Toby Walsh, Kensington, NSW, Australia

Back to Top

Less Negative Reviewing, More Conference Quality

Bertrand Meyer's blog post "When Reviews Do More Than Sting" (Feb. 2013) is an opportunity to reflect on how CS academic publishing has evolved since it was first posted at blog@cacm (Aug. 2011). Meyer rightly identified rejection of 80% to 90% of conference submissions as a key source of negative reviewing, with competitors ready to step in with even higher rejection rates, eager to claim the quality mantle for themselves.

In recent years, we have seen that conference quality can be improved and constructive reviewing facilitated, even when a greater proportion of papers is accepted. At least six conferences, including ACM's Special Interest Group on Management of Data (SIGMOD), Computer-Supported Cooperative Work (CSCW), and High Performance Embedded Architectures and Compilers (HiPEAC), incorporate one or more paper-revision cycles, leading to initial reviews that are constructive rather than focused on grounds for rejection. Giving authors an opportunity to revise also provides a path toward accepting more submissions while still improving overall conference quality.

Analyses by Tom Anderson of the University of Washington and George Danezis of Microsoft Research suggest there is little or no objective difference among conference submissions that reviewers rank in the top 10% to 50%. Many conferences could even double their acceptance rates without diminishing their quality significantly, even as a serious revision cycle would improve quality.

This change in the CS conference process would blend conference and journal practices. Though journal reviews may not always be measured and constructive, on balance they are, and, in any case, revision cycles are a way for conferences to be more collegial.

Jonathan Grudin, Redmond, WA

Back to Top

Simulation Is the Way Forward in AI

Robert M. French's main argument in his article "Moving Beyond the Turing Test" (Dec. 2012) is that the Turing test is "unfair" because we cannot expect a machine to store countless facts "idiosyncratic" to humans. However, the example behavior he cited does not hold up, as I outline here. He was careful in selecting it, as it came from one of his own articles, so we might be justified in inferring that other "quirky" facts about human behavior that might "trip up" a computer are likewise no reason to discard the Turing test.

The example involved the "idiosyncrasy" that humans cannot separate their ring fingers when their palms are clasped together with fingers upright and middle fingers bent to touch the opposite knuckles. He then asked, "How could a computer ever know this fact?" How indeed? We did not know it either but discovered it only by following French's invitation to try to separate our own ring fingers. So, too, a computer can discover facts by simulating behavior and compiling results. The simulation would use the computer's model of the anatomy and physiology of human hands and fingers, together with the laws of related sciences (such as physics and biology), to compute the "open and close" behavior of each pair of fingers from some initial configuration.

If the model encapsulates our understanding well enough, the open-and-close motion would be zero only for the pair of ring fingers. Moreover, combining visualization and logic, an explanatory model might reason about why separating the two ring fingers is not possible and under what conditions it might be. One could ask whether French ever asked a competent specialist why the motion is not possible; I myself have not asked but assume there is some explanation.

Idiosyncratic facts about human behavior are not "unfair." That any behavior can be understood (described computationally) is the fundamental assumption of science.

French's broader argument, that the way forward in AI evolves from brute force, with unprecedented volumes of data, processing speed, and new algorithms, should be weighed with a caveat: Trying to sidestep "Why?" belongs in the category of "type mismatch."

Turing thought computers could eventually simulate human behavior. He never proposed the Turing test as the way forward in AI, suggesting instead abstract activities (such as playing chess) and teaching computers to understand and speak English, as a parent would normally teach a child. He said, "We can only see a short distance ahead, but we can see plenty there that needs to be done." I say, let's not be in such a hurry to bid farewell to the Turing test.

Nicholas Ourusoff, New London, NH

Back to Top


1. Desrochers, D.M. and Wellman, J.V. Trends in College Spending 1999–2009, Sept. 2011.

2. Hurlburt, S. and Kirshstein, R.J. Spending, Subsidies, and Tuition: Why Are Prices Going Up? What Are Tuitions Going to Pay For?, Dec. 2012.

Back to Top


Communications welcomes your opinion. To submit a Letter to the Editor, please limit yourself to 500 words or fewer, and send to

©2013 ACM  0001-0782/13/04

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2013 ACM, Inc.


Luke Dunstan

Please don't hyphenate hyperlinks:
