

Heads-Up Computing

Moving Beyond the Device-Centered Paradigm

Figure. Closeup of a person wearing stylish glasses that include a camera. Credit: Barry Downard

Humans have come a long way in our co-evolution with tools (see Figure 1). Well-designed tools effectively expand our physical as well as mental capabilities,38 and the rise of computers in our recent history has opened up possibilities like never before. The graphical user interface (GUI) of the 1970s revolutionized desktop computing. Traditional computers with text-based, command-line interfaces evolved into an integrated everyday device: the personal computer (PC). Similarly, the mobile interaction paradigm introduced in the 1990s transformed how information can be accessed anytime and anywhere with a single handheld device. Never have we had so much computing power in the palm of our hands.

Figure 1. Humans' co-evolution with tools.


The question, "Do our tools really complement us, or are we adjusting our natural behavior to accommodate our tools?" highlights a key design challenge associated with digital interaction paradigms. For example, we accommodate desktop computers by physically constraining ourselves to the desk. This has encouraged sedentary lifestyles26 and poor eyesight,32 amongst other undesirable consequences. While smartphones do not limit mobility, they encourage users to adopt unnatural behavior, such as the head-down posture.5 Users look down at their handheld devices and pay little attention to their immediate environment. This 'smartphone zombie' phenomenon has unfortunately led to an alarming rise in pedestrian accidents.39

Although there are obvious advantages to the consolidated smartphone hardware form, this 'centralization' also means that users receive all inputs and outputs (visual display, sound production, haptic vibration) from a single physical point (the phone). In addition, smartphones keep our hands busy; users interact with their devices by holding them while typing, tapping, swiping, and more. Mobile interactions limit users' ability to engage in other activities and can be intrusive, uncomfortable, and disruptive.31 Could we redesign computing devices to support our daily activities more seamlessly? Above all, can we move beyond the device-centered paradigm and into a more human-centered vision, where tools can better complement natural human capabilities instead of the other way around (see Figure 2)?

Figure 2. Device-centered (top) vs. human-centered (bottom) interaction in text-entry and video-learning scenarios. We envision that interactivity with digital content can be facilitated by prioritizing user contexts (for example, walking) and leveraging resources that remain underutilized in these contexts (for example, voice).

Although user-centered design24 was introduced decades ago, we observe that everyday human-computer interactions have not aligned with its approach and goals. In the following sections, we discuss what placing humans at center stage entails, review related work, and describe how Heads-Up computing may inspire future applications and services that significantly impact how we live, learn, work, and play.


Humans at Center Stage

Understanding the human body and activities. The human body comprises input and output (I/O) channels for perceiving and acting on the world. Human-computer interactions commonly use the hands to click a mouse or tap a phone screen, or the eyes to read a computer screen. However, the hands and eyes are also essential for performing daily activities, such as cooking or exercising. When device interactions are performed simultaneously with these primary tasks, competition for I/O resources is introduced.25 As a result, current computing activities are performed either separately from our daily activities (for example, work in an office, live elsewhere) or in an awkward combination (for example, typing and walking like a smartphone zombie). While effective support for multitasking is a complex topic, and in many cases multitasking is not possible, computing activities can still be integrated more seamlessly with our daily lives if the tools are designed using a human-centric approach. By carefully considering resource availability, that is, how much of each I/O channel remains available given the user's environment and activity, devices could better distribute task loads by leveraging underutilized natural resources and lessening the load on overutilized modalities. This is especially true for scenarios involving so-called multichannel multitasking,9 in which one of the tasks is largely automatic, for example, routine manual tasks such as walking or washing dishes.

To design for realistic scenarios, we look at Dollar's7 categorization of Activities of Daily Living (ADL), which provides a taxonomy of crucial daily tasks (albeit originally created for older adults and rehabilitating patients). The ADL categories include domestic (for instance, office presentation), extra-domestic (for example, shopping), and physical self-maintenance (for example, eating), providing sufficient representation of what the general population engages in every day. It is helpful to select examples from this broad range of activities when learning about resource demands. We can analyze an example activity for its hands- and eyes-busy nature, identify underused and overused resources, and then select opportunistic moments for the system to interact with the user. For example, where the primary activity requires the hands but not the mouth and ears (such as when a person is doing laundry), it may be more appropriate for the computing system to prompt the user to reply to a chat message via voice instead of thumb-typing. But if the secondary task requires a significant mental load, for example, composing a project report, the availability of alternative resources may not be sufficient to support multitasking. Thus, it is important to identify secondary tasks that not only draw on underutilized resources but also add minimal overall cognitive load, so that they remain complementary to the primary task.
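
To make the resource analysis concrete, the sketch below tags a few example activities with the I/O channels they keep busy and picks an idle channel for a secondary task, such as replying to a chat message. It is a minimal illustration only; the activity labels, channel names, and the suggest_reply_modality function are illustrative assumptions rather than part of any existing Heads-Up implementation.

    from dataclasses import dataclass

    CHANNELS = {"eyes", "hands", "ears", "voice"}

    @dataclass
    class Activity:
        name: str
        busy: set  # channels the primary task keeps occupied

        def free(self) -> set:
            return CHANNELS - self.busy

    # Example activities of daily living and the resources they keep busy.
    LAUNDRY = Activity("doing laundry", {"hands", "eyes"})
    WALKING = Activity("walking", {"eyes"})

    def suggest_reply_modality(activity: Activity) -> str:
        """Prefer a reply channel that the primary task leaves idle."""
        if "voice" in activity.free():
            return "reply by voice"
        if "hands" in activity.free():
            return "reply by thumb-typing"
        return "defer the notification"

    print(suggest_reply_modality(LAUNDRY))  # -> reply by voice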

To effectively manage resources across activities of daily living and digital interactions, we refer to the theory of multitasking. According to Salvucci et al.,30 a unified account of multitasking builds on several core components: the ACT-R cognitive architecture, threaded cognition theory, and memory-for-goals theory. Multiple tasks may be performed concurrently or sequentially, depending on the amount of time a person spends on one task before switching to another. In concurrent multitasking, tasks are harder to perform when they require the same resources and easier when they can draw on multiple resource types.37 In sequential multitasking, users switch back and forth between primary and secondary tasks over a longer period (minutes to hours); reducing switching costs and facilitating rehearsal of the 'problem representation'1 can significantly improve multitasking performance. Heads-Up computing is explicitly designed to take advantage of these theoretical insights: Its voice and subtle-gesture interaction method relies on resources that remain available during daily activities, while its heads-up, optical head-mounted see-through display (OHMD) facilitates quicker visual attention switches.
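
The concurrent-multitasking insight can be expressed as a simple shared-resource check, sketched below. The channel sets and the conflict score are illustrative placeholders, not values taken from the cited theories; they merely show why walking pairs poorly with thumb-typing but well with voice dictation.

    def conflict(primary: set, secondary: set) -> float:
        """Fraction of the secondary task's channel demands already claimed by the primary task."""
        return len(primary & secondary) / len(secondary) if secondary else 0.0

    walking = {"eyes"}                    # navigation claims part of the visual channel
    thumb_typing = {"eyes", "hands"}      # smartphone text entry
    voice_dictation = {"voice", "ears"}

    print(conflict(walking, thumb_typing))     # 0.5: the visual channel is contested
    print(conflict(walking, voice_dictation))  # 0.0: a better candidate for concurrent multitasking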

Overall, we envision a more seamless integration of devices into human life by first considering the human's resource availability and primary/secondary task requirements, and then allocating resources accordingly.

Existing traces of the human-centered approach. Existing designs, such as the heads-up display (HUD) and smart glasses, exemplify the growing interest in human-centered innovation. A HUD is any transparent display that can present information without requiring the operator to look away from their main field of view.35 HUDs in the form of windshield displays have become increasingly popular in the automotive industry.3 Studies have shown that they can reduce reaction times to safety warnings and minimize the time drivers spend looking away from the road,8 helping protect vehicle operators. Smart glasses, in turn, can be seen as a wearable HUD that offers additional hands-free capabilities through voice commands. Wearers do not have to adjust their natural posture to the device; instead, a layer of digital information is superimposed upon the wearer's vision via the glasses. While these are promising ideas, their current use focuses on resolving specific problems rather than on integration with people's general daily activities.

At the other end of the spectrum are general-purpose paradigms such as Ubiquitous Computing (UbiComp), which shares a similar human-centered philosophy but involves a very broad design space. Conceptualized by Weiser,36 UbiComp aims to transform physical spaces into computationally active, context-aware, and intelligent environments via distributed systems. Designing within the UbiComp paradigm has led to the rise of tangible and embodied interaction,28 which focuses on the implications and new possibilities of interacting with computational objects within the physical world.14 These approaches recognize that technology should not overburden human activities and that computer systems should be designed to detect and adapt to naturally occurring changes in human behavior. However, the wide range of devices, scenarios, and physical spaces (for example, ATM spaces) means that there is much freedom to create all kinds of design solutions. This respectable vision has a broad scope and does not define how it can be implemented. Thus, we see the need for an alternative interaction paradigm with a more focused scope, one whose vision integrates threads of similar ideas that currently exist as fragments in the human-computer interaction (HCI) space.


Mobile interactions limit users' ability to engage in other activities and have been shown to be intrusive, uncomfortable, and disruptive.


We introduce Heads-Up computing, a wearable, platform-based interaction paradigm whose ultimate goal is seamless and synergistic integration with everyday activities. Heads-Up computing focuses only on the user's immediate perceptual space: the space a person can perceive through their senses at any given time. Its specified form, that is, the hardware and software of Heads-Up computing, provides a solid foundation to guide future implementations, effectively putting humans at center stage.


The Heads-Up Computing Paradigm

Heads-Up computing's overarching goal is to offer more seamless, just-in-time, and intelligent computing support for humans' daily activities. It is defined by three main characteristics:

  • Body-compatible hardware components
  • Multimodal voice and gesture interaction
  • Resource-aware interaction model

Body-compatible hardware components. To address the shortcomings of device-centered design, Heads-Up computing will distribute the input and output modules of the device to match human input and output channels. Leveraging the fact that our head and hands are the two most important sensing and actuating hubs, Heads-Up computing introduces a quintessential design that comprises two main components: the headpiece and the handpiece. Smart glasses and earphones will directly provide visual and audio output to the eyes and ears. Likewise, a microphone will receive audio input from the user, while a hand-worn device (for example, a ring or wristband) will receive manual input. Note that while we advocate the importance of a handpiece, current smartwatches and smartphones are not designed according to the principles of Heads-Up computing: They require users to adjust their head and hand positions to interact with the device and are thus not synergistic enough with our daily activities. Our current implementation of a handpiece consists of a tiny ring mouse, worn on the index finger, which serves as a small trackpad for controlling a cursor, as demonstrated by EYEditor12 and Focals.33 It offers a relatively rich set of gestures that can provide manual input for smart glasses. While this is a base setup, many additional capabilities can be integrated into the smart glasses (for example, eye-tracking22 and emotion-sensing13) and the ring mouse (for example, multi-finger gesture sensing and vibration output) for more advanced interactions. For individuals with limited body functionality, Heads-Up computing can be customized to redistribute the input and output channels to match the person's available capabilities. For example, for visually impaired individuals, the Heads-Up platform can focus on audio output through the earphone and tactile input from the ring mouse to make digital information easier to access in everyday living. Heads-Up computing thus exemplifies a potential next-generation design paradigm, one that calls for a style of interaction highly compatible with natural body capabilities in diverse contexts.
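
As a rough illustration of the headpiece/handpiece split and its redistribution for accessibility, the sketch below models each wearable piece by the human channels it serves and drops channels a wearer cannot rely on. The class and channel names are hypothetical, not an API of any existing Heads-Up platform.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Wearable:
        name: str
        outputs: frozenset  # human channels this piece can drive
        inputs: frozenset   # human channels this piece can sense

    HEADPIECE = Wearable("smart glasses + earphones + microphone",
                         outputs=frozenset({"eyes", "ears"}),
                         inputs=frozenset({"voice"}))
    HANDPIECE = Wearable("ring mouse",
                         outputs=frozenset({"skin"}),  # vibration feedback
                         inputs=frozenset({"fingers"}))

    def usable_channels(pieces, unavailable=frozenset()):
        """Channels the platform can use after removing those the wearer cannot rely on."""
        outs = set().union(*(p.outputs for p in pieces)) - set(unavailable)
        ins = set().union(*(p.inputs for p in pieces)) - set(unavailable)
        return outs, ins

    # Redistribution for a visually impaired user: lean on audio output and manual input.
    print(usable_channels([HEADPIECE, HANDPIECE], unavailable={"eyes"}))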


We envision a more seamless integration of devices into human life by first considering the human's resource availability and primary/secondary task requirements, and then allocating resources accordingly.


Multimodal voice and gesture interaction. Along with new hardware components, every interaction paradigm also introduces new interaction approaches and interfaces. With a headpiece and handpiece in place, users would be able to input commands through various modalities: gaze; voice; and gestures of the head, mouth, and fingers. But given the technical limitations of the other modalities,18 such as error-proneness, frequent calibration, and obtrusive hardware, voice and finger gestures seem to be the more promising modalities for Heads-Up computing. As mentioned previously, voice is an underutilized input method that is convenient, fast, and intuitive, and it frees the hands and eyes for other activities. However, one of its drawbacks is that voice input can be inappropriate in noisy environments and sometimes socially awkward to perform.17 Hence, it has become more important than ever to consider how users could exploit subtle gestures that demand less effort and let them do less overall. In fact, a recent preliminary study31 revealed that thumb/index-finger gestures offered a good overall balance and were preferred as a cross-scenario subtle interaction technique. More studies must be conducted to maximize the synergy of finger gestures or other subtle interaction designs in everyday scenarios. For now, the complementary voice-and-gesture input method is a good starting point for Heads-Up computing. It has been demonstrated by EYEditor,12 which facilitates on-the-go text editing by using voice to insert and modify text and finger gestures to select and navigate it. Compared with standard smartphone-based solutions, participants using EYEditor corrected text significantly faster while maintaining a higher average walking speed. Overall, we are optimistic about the applicability of multimodal voice and gesture interactions across many hands-busy scenarios and the generally active lifestyle of humans.
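
The division of labor between voice and subtle gestures can be sketched as a small event loop: speech carries content, thumb swipes carry navigation. This is a hypothetical toy, not the EYEditor12 implementation; the event fields and gesture names are invented for illustration.

    def handle_event(event: dict, state: dict) -> dict:
        """Route one input event to a simple line-based text buffer."""
        if event["kind"] == "voice":      # speech carries the content
            state["lines"][state["cursor"]] = event["text"]
        elif event["kind"] == "gesture":  # subtle thumb swipes carry navigation
            step = {"swipe_forward": 1, "swipe_back": -1}.get(event["name"], 0)
            state["cursor"] = max(0, min(state["cursor"] + step, len(state["lines"]) - 1))
        return state

    state = {"lines": ["helo world", "second sentense"], "cursor": 0}
    state = handle_event({"kind": "voice", "text": "hello world"}, state)
    state = handle_event({"kind": "gesture", "name": "swipe_forward"}, state)
    state = handle_event({"kind": "voice", "text": "second sentence"}, state)
    print(state["lines"])  # ['hello world', 'second sentence']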

Resource-aware interaction model. The final piece of Heads-Up computing is its software framework, which allows the system to understand when to use which human resource. First, the ability to sense and recognize ADLs is made possible by applying deep-learning approaches to audio19 and visual23 recordings. The headpiece and handpiece configuration can be embedded with wearable sensors to infer the status of both the user and the environment. For instance, is the user stationary, walking, or running? What are the noise and lighting levels of the space the user occupies? These are essential factors that could influence users' ability to take in information. In the context of on-the-go video learning, Ram and Zhao27 recommended that visual information be presented serially, persistently, and against a transparent background to better distribute users' attention between learning and walking tasks. But more can be done to investigate the effects of various mobility speeds11 on the performance and preference of visual presentation styles. It is also unclear how audio channels can be used to offload visual processing. Subtler forms of output, such as haptic feedback, can also be used for low-priority message notifications29 or can remain in the background of primary tasks.34
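
On the output side, a resource-aware policy might look like the sketch below, which maps sensed context (activity, ambient noise, conversation state) and message priority to a presentation channel. The thresholds and field names are assumptions for illustration only; a real model would be learned and far richer.

    def choose_output(context: dict, priority: str) -> str:
        """Pick an output channel and presentation style from sensed context."""
        if priority == "low":
            return "haptic pulse on the ring"  # stays in the background
        if context["noise_db"] < 60 and not context["in_conversation"]:
            return "short audio summary in the earpiece"
        if context["activity"] in {"walking", "running"}:
            return "serial text on a transparent background (glasses)"
        return "persistent card on the glasses"

    ctx = {"activity": "walking", "noise_db": 72, "in_conversation": False}
    print(choose_output(ctx, priority="high"))
    # -> serial text on a transparent background (glasses)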

Second, the resource-aware system integrates feedforward concepts6 when communicating with users: It presents the available commands and how they can be invoked. While designers may want to minimize visual clutter on the smart glasses, it is also important that relevant headpiece and handpiece functions are made known to users. To manage this, the system must assess resource availability for each human input channel in any particular situation. For instance, to update a marathon runner on their physiological status, the system should sense whether finger gestures or audio commands are optimal and have the front-end interface dynamically configure its interaction accordingly. Previous works primarily explored feedforward for finger/hand gestural input,10,15 but to the best of our knowledge, none have addressed this growing need for voice input.
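
Feedforward under resource constraints can be approximated by filtering the command set against the channels that are currently free, as in the hypothetical sketch below; the command names and channel labels are illustrative only.

    COMMANDS = {
        "reply by voice": "voice",
        "dismiss (thumb swipe)": "fingers",
        "pin to glasses (thumb tap)": "fingers",
    }

    def feedforward(free_channels: set) -> list:
        """Surface only the commands the user can invoke with currently free channels."""
        return [name for name, channel in COMMANDS.items() if channel in free_channels]

    # A marathon runner breathing hard: voice is costly, but fingers remain free.
    print(feedforward({"fingers"}))  # ['dismiss (thumb swipe)', 'pin to glasses (thumb tap)']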

An important area of expansion for the Heads-Up paradigm is its quantitative model, one that could optimize interactions with the system by predicting the relationship between human perceptual space constraints and primary tasks. Such a model will be responsible for delivering just-in-time information to and from the headpiece and handpiece. We hope future developers can leverage essential back-end capabilities through this model as they write their applications. The resource-aware interaction model holds great potential for research expansion and presents exciting opportunities for Heads-Up technology of the future.
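
One plausible shape for such a quantitative model is a utility score that trades the value of a piece of information against its predicted interference with the primary task, as in the speculative sketch below. The cost numbers are invented placeholders; the point is only the structure of the optimization.

    def interference(channel: str, primary_busy: set) -> float:
        """Crude placeholder cost: contested channels interfere far more than idle ones."""
        return 1.0 if channel in primary_busy else 0.2

    def best_delivery(info_value: float, candidate_channels: list, primary_busy: set) -> str:
        """Choose the delivery channel that maximizes value minus predicted interference."""
        return max(candidate_channels, key=lambda ch: info_value - interference(ch, primary_busy))

    # Deliver a meeting reminder while the user is cooking (eyes and hands busy).
    print(best_delivery(0.8, ["eyes", "ears", "skin"], primary_busy={"eyes", "hands"}))
    # -> 'ears' (idle channels score equally here; max returns the first)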


A Day in the Life with Heads-Up Computing

Beth is a mother of two who works from home. She starts her day by preparing breakfast for the family. Today, she sets out to cook a new dish: broccoli frittata (see Figure 3). Beth queries a Heads-Up computing virtual assistant named Tom, asking aloud, "Hey Tom, what are the ingredients for broccoli frittata?" Tom renders an ingredient checklist on Beth's smart glasses. Through the smart glasses' front camera, Tom "sees" what Beth sees and detects that she is scanning the refrigerator. This intelligent sensing prompts Tom to update the checklist collaboratively with Beth as she removes each ingredient from the refrigerator and places it on the countertop, occasionally glancing at her see-through display to double-check that each item matches. With advanced computer vision and augmented reality (AR) capabilities, Beth can even ask Tom to annotate where each ingredient is located within her sight. Once all the ingredients have been identified, Beth begins cooking. Hoping to be guided with step-by-step instructions, she again speaks: "Tom, show me how to cook the ingredients." Tom searches for the relevant video on YouTube and automatically cuts it into stepwise segments, playing the audio through the wireless earpiece and the video through the display. Beth toggles the ring mouse she is wearing to jump forward or backward in the video. Although cooking occupies both her hands, she can use an idle thumb to control playback of the video tutorial at the same time. Tom's just-in-time assistance seamlessly adapts to Beth's changing needs and constraints without requiring her to reach for her phone, which would pause her task progress.

Figure 3. The user is browsing for ingredients in the refrigerator with the help of augmented labels (left). The user cooks the ingredients while simultaneously adjusting the playback of a guided video (right).

As Beth finishes cooking and feeds her kids, she receives an email from her work supervisor asking about her availability for an emergency meeting. Based on Beth's previous preferences, Tom understands that Beth values quality time with her family and prevents her from being bombarded by notifications from work or social groups during certain times of the day. However, she has made an exception for messages labeled 'emergency.' Like an intelligent observer, Tom adjusts information delivery to Beth by saying, "You have just received an emergency email from George Mason. Would you like me to read it out?" Beth can easily vocalize "Yes" or "No" based on what suits her. By leveraging her idle ears and mouth, Heads-Up computing allows Beth to keep her eyes and hands focused on what matters more in that context: her family.

Existing voice assistants, such as Amazon Alexa, Google Assistant, Siri from Apple, and Samsung Bixby, have gained worldwide popularity for the conversational interaction style they offer. They can be defined as "software agents powered by artificial intelligence and assist people with information searches, decision-making efforts or executing certain tasks using natural language in a spoken format."16 Although these assistants allow users to multitask and work hands-free, the usability of current speech-based systems still varies greatly.40 They do not yet achieve the depth of personalization and integration that Heads-Up computing can, given its narrower focus on the user's immediate perceptual space and its clearly defined form, that is, its hardware and software.

The story above depicts a system that leverages visual, auditory, and movement-based data from a distributed range of sensors on the user's body. It adopts a first-person view as it collects and analyzes contextual information: The camera on the glasses sees what the user sees, and the microphone on the headpiece hears what the user hears. It leverages the resource-aware interaction model to optimize the allocation of Beth's bodily resources based on the constraints of her activities. The relevance and richness of the data collected from the user's immediate environment, coupled with the system's processing capability, allow it to anticipate the user's needs and prepare information ahead of time. Overall, we envision that Tom will be a human-like agent, able to interact with and assist humans. From cooking to commuting, we believe that providing just-in-time assistance has the potential to transform relationships between devices and humans, thereby improving the way we live, learn, work, and play.


Future of Heads-Up Computing

Global technology giants such as Meta, Google, and Microsoft have invested considerably in developing wearable AR.2 The rising predicted market value of AR smart glasses20 highlights the potential of such interactive platforms. As computational and human systems continue to advance, so too will design, ethical, and socio-economic challenges. We recommend the paper by Mueller et al.,21 which presents a vital set of challenges relating to human-computer integration, including compatibility with humans and the effects on human society, identity, and behavior. In addition, Lee and Hui18 effectively sum up interaction challenges specific to smart glasses and their implications for multimodal input methods, all of which are relevant to the Heads-Up paradigm.


Heads-Up computing is a wearable, platform-based interaction paradigm whose ultimate goal is seamless and synergistic integration with everyday activities.


At the time this article was written, a great deal of uncertainty remained around global regulations for wearable technology. Numerous countries have no regulatory framework, whereas existing frameworks in other countries are being actively refined.4 For the promise of wearable technology to be fully realized, we share the hope that different stakeholders—theorists, designers, and policymakers—collaborate to drive this vision forward and into a space of greater social acceptability.

As a summary of the Heads-Up computing vision, we flesh out the following key points:

  • Heads-Up computing moves away from device-centered computing to place humans at center stage. We envision a more synergistic integration of devices into people's daily activities by first considering their resource availability and primary and secondary task requirements, and then allocating resources accordingly.
  • Heads-Up computing is a wearable, platform-based interaction paradigm. Its quintessential body-compatible hardware components comprise a headpiece and a handpiece. In particular, smart glasses and earphones will directly provide visual and audio output to the human eyes and ears. Likewise, a microphone will receive audio input from humans, while a hand-worn device will be used to receive manual input.
  • Heads-Up computing uses multimodal I/O channels to facilitate multitasking. Voice input and thumb/index-finger gestures are examples of interactions that have been explored as part of the paradigm.
  • The resource-aware interaction model is the software framework of Heads-Up computing, allowing the system to understand when to use which human resource. Factors such as whether the user is in a noisy place can influence their ability to absorb information. Thus, the Heads-Up system aims to sense and recognize the user's immediate perceptual space. Such a model will predict human perceptual space constraints and primary task engagement, then deliver just-in-time information to and from the headpiece and handpiece.
  • A highly seamless, just-in-time, and intelligent computing system has the potential to transform relationships between devices and humans, and there is a wide variety of daily scenarios to which the Heads-Up vision can translate and bring benefit. Its evolution is inevitably tied to the development of head-mounted wearables as this emergent platform makes its way into the mass consumer market.

When queried about the larger significance of the Heads-Up vision, the authors reflect on a regular weekday in their lives: eight hours spent in front of a computer and another two hours on the smartphone. Achievements in digital productivity too often come at the cost of being removed from the real world. What wonderful digital technology humans have come to create, perhaps the most significant in the history of our co-evolution with tools. Could computing systems be so well integrated that they not only support but enhance our experience of physical reality? The ability to straddle both worlds, the digital and the non-digital, is increasingly pertinent, and we believe it is time for a paradigm shift. We invite individuals and organizations to join us in our journey to design for more seamless computing support, improving the way future generations live, learn, work, and play.

Figure. Watch the authors discuss this work in the exclusive Communications video. https://cacm.acm.org/videos/heads-up-computing


References

1. Anderson, J.R. and Lebiere, C. The Atomic Components of Thought. Lawrence Erlbaum Associates Publishers (1998).

2. Applin, S.A. and Flick, C. Facebook's Project Aria indicates problems for responsible innovation when broadly deploying AR and other pervasive technology in the Commons. J. of Responsible Technology 5 (May 2021), 100010; https://bit.ly/3QkFg6K.

3. Betancur, J.A. et al. User experience comparison among touchless, haptic and voice Head-Up Displays interfaces in automobiles. Intern. J. on Interactive Design and Manufacturing 12, 4 (2018), 1469–1479.

4. Brönneke, J.B. et al. Regulatory, legal, and market aspects of smart wearables for cardiac monitoring. Sensors 21, 14 (2021), 4937; https://bit.ly/44YDJHD.

5. Bueno, G.R. The head down generation: Musculoskeletal symptoms and the use of smartphones among young university students. Telemedicine and e-Health 25, 11 (2019), 1049–1056; https://doi.org/10.1089/tmj.2018.0231.

6. Djajadiningrat, T. et al. But how, Donald, tell us how? On the creation of meaning in interaction design through feedforward and inherent feedback. In Proceedings of the 4th Conf. on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, ACM (2002), 285–291; https://bit.ly/3KBKqrJ.

7. Dollar, A.M. Classifying Human Hand Use and the Activities of Daily Living. Springer Intern. Publishing, Cham (2014), 201–216; https://bit.ly/452eFzQ.

8. Doshi, A. et al. A novel active heads-up display for driver assistance. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 39, 1 (2009), 85–93; https://doi.org/10.1109/TSMCB.2008.923527.

9. Eyal, N. Indistractable: How to Control Your Attention and Choose Your Life. BenBella Books (2019).

10. Fennedy, K. et al. OctoPocus in VR: Using a dynamic guide for 3D mid-air gestures in virtual reality. IEEE Transactions on Visualization and Computer Graphics 27, 12 (2021), 4425–4438; https://bit.ly/3rFh70B.

11. Fennedy, K. et al. Investigating performance and usage of input methods for soft keyboard hotkeys. In 22nd Intern. Conf. on Human-Computer Interaction with Mobile Devices and Services. ACM (2020); https://doi.org/10.1145/3379503.3403552.

12. Ghosh, D. et al. EYEditor: Towards on-the-go heads-up text editing using voice and manual input. In Proceedings of the 2020 CHI Conf. on Human Factors in Computing Systems, ACM, 1–13; https://doi.org/10.1145/3313831.3376173.

13. Hernandez, J. and Picard, R.W. SenseGlass: Using Google Glass to sense daily emotions. In Proceedings of the Adjunct Publication of the 27th Annual ACM Symp. on User Interface Software and Technology, ACM (2014), 77–78; https://bit.ly/477QmC8.

14. Hornecker, E. The role of physicality in tangible and embodied interactions. Interactions 18, 2 (March 2011), 19–23; https://bit.ly/43BEV2M.

15. Jung, J. et al. Voice+Tactile: Augmenting in-vehicle voice user interface with tactile touchpad interaction. In Proceedings of the 2020 CHI Conf. on Human Factors in Computing Systems, Association for Computing Machinery, 1–12; https://bit.ly/3q3uRlq.

16. Ki, C.-W. et al. Can an intelligent personal assistant (IPA) be your friend? Para-friendship development mechanism between IPAs and their users. Computers in Human Behavior 111 (2020), 106412; https://doi.org/10.1016/j.chb.2020.106412.

17. Kollee, B. et al. Exploring gestural interaction in smart spaces using head mounted devices with ego-centric sensing. In Proceedings of the 2nd ACM Symp. on Spatial User Interaction, ACM (2014), 40–49; https://doi.org/10.1145/2659766.2659781.

18. Lee, L.-H. and Hui, P. Interaction methods for smart glasses: A survey. IEEE Access 6 (2018), 28712–28732; https://doi.org/10.1109/ACCESS.2018.2831081.

19. Liang, D. and Thomaz, E. Audio-based activities of daily living (ADL) recognition with large-scale acoustic embeddings from online videos. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 1, Article 17 (March 2019); https://doi.org/10.1145/3314404.

20. Merel, T. The reality of VR/AR growth. TechCrunch. (Jan. 11, 2017); https://tcrn.ch/3rJk0O2.

21. Mueller, F.F. et al. Next steps for human-computer integration. In Proceedings of the 2020 CHI Conf. on Human Factors in Computing Systems, ACM, 1–15; https://doi.org/10.1145/3313831.3376242.

22. Mulvey, F.B. Gaze interactive and attention aware low vision aids as future smart glasses. In ACM Symp. on Eye Tracking Research and Applications (2021); https://doi.org/10.1145/3450341.3460769.

23. Nguyen, T-H-C. et al. Recognition of activities of daily living with egocentric vision: A review. Sensors 16, 1 (2016), 72.

24. Norman, D.A. User Centered System Design: New Perspectives on Human-Computer Interaction. CRC Press (1986).

25. Oulasvirta, A. et al. Interaction in 4-second bursts: The fragmented nature of attentional resources in mobile HCI. In Proceedings of the SIGCHI Conf. on Human Factors in Computing Systems, ACM (2005), 919–928; https://doi.org/10.1145/1054972.1055101.

26. Parry, S. and Straker, L. The contribution of office work to sedentary behaviour associated risk. BMC Public Health 13, 1 (2013), 1–10; https://doi.org/10.1186/1471-2458-13-296.

27. Ram, A. and Zhao, S. LSVP: Towards effective on-the-go video learning using optical head-mounted displays. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, 1, Article 30 (March 2021); https://doi.org/10.1145/3448118.

28. Rogers, Y. Moving on from Weiser's vision of calm computing: Engaging UbiComp experiences. UbiComp 2006: Ubiquitous Computing. P. Dourish and A. Friday (eds), 404–421, Springer, Berlin Heidelberg, Berlin, Heidelberg (2006).

29. Roumen, T. et al. NotiRing: A comparative study of notification channels for wearable interactive rings. In Proceedings of the 33rd Annual ACM Conf. on Human Factors in Computing Systems (2015), 2497–2500; https://doi.org/10.1145/2702123.2702350.

30. Salvucci, D.D. et al. Toward a unified theory of the multitasking continuum: From concurrent performance to task switching, interruption, and resumption. In Proceedings of the SIGCHI Conf. on Human Factors in Computing Systems, ACM (2009), 1819–1828; https://doi.org/10.1145/1518701.1518981.

31. Sapkota, S. et al. Ubiquitous interactions for heads-up computing: Understanding users' preferences for subtle interaction techniques in everyday settings. In Proceedings of the 23rd Intern. Conf. on Mobile Human-Computer Interaction, ACM, Article 36 (2021); https://doi.org/10.1145/3447526.3472035.

32. Sheppard, A.L. and Wolffsohn, J.S. Digital eye strain: Prevalence, measurement and amelioration. BMJ Open Ophthalmology 3, 1 (2018), e000146.

33. Review: Focals by North smart glasses, TechCrunch; https://youtu.be/5eO-Y36_t08.

34. Väänänen-Vainio-Mattila, K. User experience and expectations of haptic feedback in in-car interaction. In Proceedings of the 13th Intern. Conf. on Mobile and Ubiquitous Multimedia, ACM (2014), 248–251; https://doi.org/10.1145/2677972.2677996.

35. Ward, N.J. and Parkes, A. Head-up displays and their automotive application: An overview of human factors issues affecting safety. Accident Analysis & Prevention 26, 6 (1994), 703–717; https://doi.org/10.1016/0001-4575(94)90049-3.

36. Weiser, M. The computer for the 21st century. Scientific American 265, 3 (1991), 94–105; http://www.jstor.org/stable/24938718.

37. Wickens, C.D. Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science 3, 2 (2002), 159–177.

38. Wladawsky-Berger, I. The co-evolution of humans and our tools. (2011); https://bit.ly/3Kg0tei.

39. Zhuang, Y., and Fang, Z. Smartphone zombie context awareness at crossroads: A multi-source information fusion approach. IEEE Access 8 (2020), 101963–101977; https://doi.org/10.1109/ACCESS.2020.2998129.

40. Zwakman, D.S. et al. Usability evaluation of artificial intelligence-based voice assistants: The case of Amazon Alexa. SN Computer Science 2, 1 (2021), 1–16.


Authors

Shengdong Zhao is an associate professor at NUS-HCI Lab, Smart Systems Institute, National University of Singapore.

Felicia Tan is a research assistant at NUS-HCI Lab, Smart Systems Institute, National University of Singapore.

Katherine Fennedy (katherine.fennedy@gmail.com) is a postdoc researcher at NUS-HCI Lab, Smart Systems Institute, National University of Singapore.


cacm_ccby-sa.gif This work is licensed under a Creative Commons Attribution-ShareAlike International 4.0 License: http://creativecommons.org/licenses/by-sa/4.0/

The Digital Library is published by the Association for Computing Machinery. Copyright © 2023 ACM, Inc.

 


 
