Researchers at the University of Alberta in Canada have found that virtual assistants fall short of their potential to provide users with reliable, relevant information during medical emergencies.
The team tested four commonly used devices—Alexa, Google Home, Siri, and Cortana—using 123 questions about 39 first aid topics, including heart attacks, poisoning, nosebleeds, and splinters.
The devices' responses were measured for accuracy of topic recognition, detection of the severity of the emergency, complexity of language used, and how closely the advice given fit with accepted first aid treatment and guidelines.
Google Home performed the best, recognizing topics with 98% accuracy and providing relevant advice 56% of the time.
Alexa also scored well, recognizing 92% of the topics and giving accepted advice 19% of the time.
The quality of responses from Cortana and Siri was so low that the researchers could not analyze them.
Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA