Researchers at the University of Bristol and University College London (UCL) have found that making an assistive robot partner expressive and communicative can make it more satisfying to work with and can improve trust, but it may also encourage users to lie to the robot in order to avoid hurting its feelings.
The researchers experimented with a humanoid assistive robot helping users make an omelet, with the goal of investigating how a robot can regain a user's trust when it makes a mistake and how it can communicate its erroneous behavior to somebody who is working with it.
The study suggests that most users prefer a communicative, expressive robot to a more efficient, less error-prone one, despite the former taking 50% longer to complete tasks.
During the study, users reacted well to an apology from the robot, and were particularly receptive to its sad facial expressions.
"Human-like attributes, such as regret, can be powerful tools in negating dissatisfaction but we must identify with care which specific traits we want to focus on and replicate," says UCL researcher Adriana Hamacher.
Meanwhile, Bristol professor Kerstin Eder notes, "complementing thorough verification and validation with sound understanding of these human factors will help engineers design robotic assistants that people can trust."
From University of Bristol News
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA