
Communications of the ACM


The Artificiality of Natural User Interfaces

Figure. Which plant needs the least amount of water? (A. plant, B. lettuce, C. cactus; illustration.)

Credit: Haney and Scott

Consider the question of water needs for the vegetation in the accompanying figure: what do you think is the correct answer? Most readers are likely to answer "C." However, a study by Haney and Scott3 found that some people chose B. Why? When asked to justify their supposedly wrong answer, they explained that, since the lettuce is not in a pot, it must have been picked and therefore no longer needs water. What disarming reasoning! In this case, answering B was categorized as incorrect simply because the test writer, who assigned the correct answer to C, did not anticipate it.

In computer science there are many cases of systems being used in ways their designers could not predict. An example is how some users build large databases in spreadsheet programs because they find them easier to use than dedicated database programs. The lesson from these examples is that people think and express themselves in ways designers cannot always predict, and those ways are not necessarily wrong. This assumption is too rarely considered when designing interactive systems, and the history of human-computer interaction is full of examples of users adapting to designers' choices. The development of new gestural interfaces (for example, touch-based devices such as smartphones and tablets) seems to follow this pattern.

We are accustomed to interacting in an environment that inhibits our innate interactive capabilities. In Nicholas Negroponte's words, our connection to computers is "sensory deprived and physically limited." The mouse is the clearest example: a device that provides only two degrees of freedom (DOF), versus the 23 DOF of our fingers. The mouse rules the desktop computing world and continues to be popular in home and office settings. We must be grateful for Engelbart's invention, which was a breakthrough in computer history and promoted a new way for users to interact with computers via a graphical interface. The mouse is a great input device, but it is surely not the most natural one. A user must learn how to work with it and, although it is simple for insiders to use, many people feel disoriented by their first encounter with a mouse.


Gestural Interfaces

With gestural interfaces, we strive for natural interaction. Today the word natural, as in "natural user interfaces" (NUI), is mainly used to highlight the contrast with classical computer interfaces that employ artificial control devices whose operation has to be learned. Nevertheless, current gestural interfaces are themselves based on a set of command gestures that must be learned. We believe that, by means of a natural interface, people should be able to interact with technology using the same gestures they employ with objects in everyday life, as evolution and education have taught us. Because we have all developed in different environments, individual gestures may vary.

The main question we face when designing gestural interfaces is this: Are these interfaces natural only in the sense that they offer more degrees of freedom and expressive power than a mouse-and-keyboard interface? Or are we really aiming to empower users with a means of communicating with computer systems that feels more familiar to them? Recently, Don Norman claimed that "natural user interfaces are not natural,"4 arguing they do not follow basic rules of interaction design. We too believe they cannot currently be called natural; our criticism, however, rests on cultural considerations.

Users should not have to learn an artificial gestural language, created by designers, that depends on the device or even on the application. Yet each company has defined its own guidelines for gestural interfaces, in an attempt to establish a de facto standard. Imposing a standard, especially on cultural matters such as gestures, can easily fail because of natural differences among human beings: consider Esperanto, which failed to be widely adopted because of its artificiality. Technology is an interesting factor here because, even if a gesture is not the most natural one, it can become natural through the widespread use of the technology that adopts it. But, again, with cultural issues this can be quite difficult, as in the case of the English language: it has become the de facto standard, yet non-native speakers will always encounter difficulties in being proficient.

The main aim of natural interfaces should be to break down the technology-driven approach to interaction.

Consider the tabletop multi-touch environment in which moving the index fingers of the two hands away from each other is mapped to the "zoom" action. Is this natural, or just an arbitrary gesture that is easy to recognize and learn? Have you ever seen a person make that gesture while speaking with another person? While we interact with new technologies in a way similar to how we interact with the real environment, new natural interaction systems do not take the spontaneity of users into account; in fact, they inhibit the way users naturally interact, because they force users to adopt a static, predefined set of command gestures. In their pioneering work on the Charade system, Baudel and Beaudouin-Lafon1 partially faced this quandary. They acknowledged that problems with gestural interfaces arise because users must know the set of gestures the system allows. For this reason they recommended that "gestural commands should be simple, natural, and consistent." However, this does not really solve the problem: users were still not free to interact naturally but were once again forced to learn an artificial gesture vocabulary. Furthermore, in real scenarios the appropriate gesture also depends on the context, domain, cultural background, and even ethics and human values.2
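As an aside on how arbitrary such a mapping is in practice, the pinch/spread gesture typically reduces to nothing more than a ratio of finger distances. A minimal sketch (all names here are illustrative, not any vendor's API):

```python
import math

def pinch_zoom_factor(p1_start, p2_start, p1_end, p2_end):
    """Map the change in distance between two touch points to a zoom factor.

    Each point is an (x, y) tuple in screen coordinates. Returns a factor
    greater than 1 when the fingers spread apart (zoom in) and less than 1
    when they pinch together (zoom out).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    start = dist(p1_start, p2_start)
    end = dist(p1_end, p2_end)
    if start == 0:  # degenerate touch data; leave the scale unchanged
        return 1.0
    return end / start

# Fingers spreading from 100 px apart to 200 px apart -> 2x zoom
print(pinch_zoom_factor((0, 0), (100, 0), (0, 0), (200, 0)))  # 2.0
```

Nothing in this arithmetic is inherently "natural"; the naturalness, such as it is, lives entirely in the convention that spreading fingers means magnification.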


Interpreting Gestures

From our everyday life we know the same meaning can be expressed by different gestures: for example, a handshake is a universal sign of friendliness, but only if our universe is limited to the Western world. In India, the common way to greet someone is by pressing your hands together, palms touching and fingers pointed upward, in front of the chest (the Añjali mudra). Conversely, the same gesture can have different meanings depending on cultural context: the thumbs-up sign in America and most Western countries means that something is OK, or that you approve, yet it is interpreted as rude in many Asian and Islamic countries.
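This cultural dependence can be made concrete with a toy lookup: the same gesture keys to different meanings under different locales. The table and its entries are purely illustrative; real gesture semantics are far richer than any such mapping:

```python
# Illustrative only: a (gesture, locale) pair determines the meaning,
# so the same gesture can resolve to opposite interpretations.
GESTURE_MEANINGS = {
    ("thumbs-up", "us"): "approval",
    ("thumbs-up", "western-europe"): "approval",
    ("thumbs-up", "parts-of-asia"): "offensive",
    ("handshake", "western"): "greeting",
    ("anjali-mudra", "india"): "greeting",
}

def interpret(gesture, locale):
    """Look up a gesture's meaning in a given cultural context."""
    return GESTURE_MEANINGS.get((gesture, locale), "unknown")

print(interpret("thumbs-up", "us"))             # approval
print(interpret("thumbs-up", "parts-of-asia"))  # offensive
```

The point of the sketch is structural: any gesture recognizer that ignores the locale key is silently assuming one culture's conventions for everyone.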

The intended goal of gestural interfaces is to provide users with an intuitive way to interact so that, ideally, no learning or training of specific gesture/action mappings is required. Nevertheless, current interactive gestural languages are defined in laboratory settings and so, even if they are useful for preliminary investigation, they do not accurately reflect users' behavior. This situation resembles the earlier shift from command lines to graphical user interfaces. The main factor driving that paradigm shift was, in fact, a human one: fostering recognition rather than recall. For most people, it is easier to recognize an icon and associate it with an action than to recall a command from a specific language that had to be learned. Of course, this was made possible by the graphical display, just as touchscreens now open new possibilities for human-computer interaction.

Defining an affordance language (in terms of visible hints) is one possible solution to this problem, especially for interactive tables in public spaces, where users must learn quickly how to interact with the system. Another possibility is to involve nontechnical users in the design of gestural languages.5 By actively involving end users in the definition of gesture sets (participatory design), we can aim to select the gestures with the highest consensus. Nevertheless, we should keep in mind that gestures (as signs) are living things that change with culture, time, and context. We therefore envision a further step toward natural interaction: personalization. New devices able to recognize the user can provide gesture-personalization mechanisms based on the user's preferences. A broader level of personalization can consider communities rather than single users. For example, a gestural interface designed in one cultural context could evolve to address the diversity and cultural backgrounds of different communities of users.
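The participatory-design idea of selecting the gestures with the highest consensus can be quantified. The sketch below computes an agreement score in the spirit of Wobbrock et al.,5 where 1.0 means every participant proposed the same gesture for a given command; the function and label names are our own:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement among user-proposed gestures for one command (referent).

    `proposals` is a list of gesture labels, one per participant.
    The score is the sum of squared proportions of each distinct
    proposal: 1.0 when all participants agree, approaching
    1/len(proposals) when every proposal differs.
    """
    n = len(proposals)
    counts = Counter(proposals)
    return sum((c / n) ** 2 for c in counts.values())

# 6 of 8 participants proposed "pinch" for zoom, 2 proposed "double-tap"
print(agreement_score(["pinch"] * 6 + ["double-tap"] * 2))  # 0.625
```

Commands whose score stays low across a diverse participant pool are exactly the ones for which no single "natural" gesture exists, which is the article's point about cultural variation in quantitative form.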

New gestural interfaces could also analyze and automatically exploit users' unconscious movements during interaction. Unconscious movements can be considered the most natural ones, as they are honest signals that happen without our thinking about them. For example, moving our face closer to a book to magnify the text can be seen as an unconscious zooming action, one more natural than any hand gesture. Clearly, considering diversity at the level of both cultural and unconscious gestures raises many new challenges: for instance, validating gestures through experimental evaluations in multicultural and multidisciplinary environments rather than classic controlled laboratory experiments. Another challenge is augmenting the surrounding environment to recognize not only gestures but also facial expressions and body movements (for example, using Microsoft Kinect).
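The face-distance idea could be prototyped as a simple mapping from sensed distance to zoom level. This is a hypothetical sketch, not a real system: the reference distance, clamping range, and names are all our own assumptions:

```python
def face_distance_zoom(reference_cm, current_cm, min_zoom=0.5, max_zoom=4.0):
    """Hypothetical mapping from face-to-screen distance to a zoom level.

    Leaning in (current_cm < reference_cm) magnifies the content;
    leaning back shrinks it. The result is clamped to a usable range
    so noisy distance readings cannot produce extreme zoom levels.
    """
    if current_cm <= 0:
        raise ValueError("distance must be positive")
    zoom = reference_cm / current_cm
    return max(min_zoom, min(max_zoom, zoom))

# Halving the distance from a 50 cm baseline doubles the magnification
print(face_distance_zoom(50, 25))  # 2.0
```

Even this toy exposes the diversity problem: the mapping direction itself is an assumption, since some users lean in to magnify while others (as one reader comment below this article notes) need to move a text farther away to read it.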



In our opinion, the road to natural interfaces is still long, and what we are now witnessing is an artificial naturality. These interfaces are natural in the sense that they employ hand gestures, but they are also artificial, because the system designer imposes the set of gestures. The main aim of natural interfaces should be to break down the technology-driven approach to interaction and provide users with gestures they are more accustomed to, taking into account their habits, backgrounds, and cultures. Perhaps the goal is unattainable: as we use technology in our everyday life, doing so becomes as "natural" as using a hand to scratch an itch.

References


1. Baudel, T. and Beaudouin-Lafon, M. Charade: Remote control of objects using free-hand gestures. Commun. ACM 36, 7 (July 1993), 28–35.

2. Friedman, B., Kahn, P., and Borning, A. Value sensitive design: Theory and methods. University of Washington technical report. 2002.

3. Haney, W. and Scott, L. Talking with children about tests: An exploratory study of test item ambiguity. Cognitive and Linguistic Analyses of Test Performance 22 (1987), 69–87.

4. Norman, D.A. Natural user interfaces are not natural. Interactions 17, 3 (Mar. 2010), 6–10.

5. Wobbrock, J.O., Morris, M.R., and Wilson, A.D. User-defined gestures for surface computing. In Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI '09). ACM, 2009, 1083–1092.



Alessio Malizia is an associate professor in the computer science department at the Universidad Carlos III de Madrid, Spain.

Andrea Bellucci is a Ph.D. student in the computer science department at the Universidad Carlos III de Madrid, Spain.



The authors are grateful to Kai Olsen for several useful comments and discussions. We thank the reviewers for valuable comments, which improved the presentation of this Viewpoint.


Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2012 ACM, Inc.



This article points out some very interesting aspects of what is considered natural in currently used interfaces and also gives the authors' ideas about how such interfaces should preferably develop. I agree with some of the ideas and absolutely disagree with others; I would like to express my point of view in the form of comments on some phrases from the article, so it will be easier to understand the cause of my comments.
1. the history of interaction between humans and computers is full of examples of user's adaptation to designer's choices.
It's not users' choice whether to adapt or not; users are forced to adapt. This situation is perfectly described at the beginning of the preface to End-User Development by Lieberman, Paternò, and Wulf: "You have to figure out how to cast what you want to do into the capabilities that the software provides. You have to translate what you want to do into a sequence of steps that the software already knows how to perform."
2. people think and express in a way that sometimes cannot be predicted by designers
Users often think differently from designers, but they have no way out of the trap. All currently used programs are designed under the ideas of the adaptive interface. This type of interface was definitely progressive 30 years ago and produced fine results throughout the years, but eventually, as always happens with any dominant idea, it turned into a dogma that prevents any further development. The whole adaptive interface is based on the false idea (never openly discussed with users!) that designers always know what is good for all users in each situation, and they provide a selection only among those solutions that they, the developers, consider affordable.
3. We believe that, by means of natural interface, people should be able to interact with technology by employing the same gestures they employ to interact with objects in the real world.
The interface of any program is a direct analogue of the environment of our everyday life. The most natural and constantly used procedure for changing the environment in the office, at home, and so on, is to move the things around us in order to make our life comfortable and best suited to the tasks of each moment. We move the things around us all the time without even thinking about it. The same idea, introduced into interfaces, produces amazing results. I have written about it in CiSE (July 2011, v.13, Issue 4, pp. 79–84). Reading is not enough to understand the novelty of such an idea; the book is accompanied by a huge demo application.
4. Unconscious movements can be considered as the most natural ones For example, moving our face closer to a book for magnifying the text can be considered as an unconscious action for zooming, which is more natural than every hand gesture.
A perfect illustration of how an interesting idea (the first phrase) can be combined with a wrong statement (the second one). For better reading, many older people have to move a book (or a cell phone) not closer but as far from the face as possible. As doctors like to describe this period of life, our arms become too short for good reading. So, please, don't base zooming on the distance between the device and the face.
5. Another challenge can consist of augmenting the surrounding environments, not only for recognizing gestures but also facial expressions and body movements (e.g. by employing Microsoft Kinect).
I strongly oppose the use of devices like those mentioned above to translate users' movements or facial expressions into commands controlling programs. It's one more idea for turning a human being into an appendage of a device produced by one company or another. Microsoft Kinect may be good for game addicts, but I hope to continue my work without such devices. Consider a perfect analogue from another area: car racers have a very good reason for wearing heavy helmets, but I don't think everyone else needs such a helmet the moment he grabs the steering wheel. Though I am absolutely sure that all helmet manufacturers would pray for such legislation.
Thank you for your article.
Sergey Andreyev
