Researchers at Google and other companies pursuing self-driving cars face a challenge: automated vehicles, programmed to obey the letter of the law and traffic safety rules, can have trouble fitting in with cars driven by people, who do not always adhere to those rules.
"The real problem is the car is too safe," says Donald Norman, director of the Design Lab at the University of California, San Diego. "[Driverless cars] have to learn to be aggressive in the right amount, and the right amount depends on the culture."
Google cars routinely make quick, evasive maneuvers or exercise caution in ways that put them out of step with other vehicles on the road. This highly cautious approach has contributed to 16 crashes involving Google cars over the last six years, with Google attributing every collision to human error.
Although the wide use of self-driving cars may alleviate these concerns, researchers say the near-term problem is finding ways to enable humans and machines to work together safely. Google's Courtney Hohne says researchers now are testing ways of "smoothing out" the relationship between humans and the car's software.
Dmitri Dolgov, head of software for Google's Self-Driving Car Project, says the initiative has taught him that human drivers need to be "less idiotic."
From The New York Times
Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA