Communications of the ACM

ACM TechNews

Why Self-Driving Cars Must Be Programmed to Kill

[Image: The killer car from the movie "Christine." Credit: Columbia Pictures Corp.]

The fact that automated cars can never be perfectly safe raises ethical issues, such as how such vehicles should be programmed to act in the event of an unavoidable collision.

To characterize the public's feelings on algorithmic morality, Toulouse School of Economics researcher Jean-Francois Bonnefon and colleagues conducted a survey, which they say provides "a first foray into the thorny issues raised by moral algorithms for autonomous vehicles."

By posing the scenario of a self-driving car faced with an unavoidable accident and programmed to act so as to minimize loss of life, the researchers found this approach may have unintended consequences.

One such consequence is that fewer people may buy self-driving cars if the vehicles are programmed to sacrifice their owners, which would likely lead to more deaths overall from conventional cars.

Bonnefon's research found people were generally in favor of cars programmed to minimize the death toll--but only to a certain degree. "[Participants] were not as confident that autonomous vehicles would be programmed that way in reality--and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves," the researchers say.
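The "utilitarian" policy the survey describes can be illustrated with a minimal sketch: among the maneuvers available in an unavoidable crash, choose the one with the lowest expected death toll. The function name, option labels, and numbers below are purely hypothetical assumptions for illustration, not any manufacturer's actual algorithm.

```python
# Hypothetical sketch of a utilitarian collision policy. All names and
# numbers are illustrative assumptions, not a real vehicle's logic.

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected death toll.

    options: list of (maneuver_name, expected_deaths) tuples.
    """
    return min(options, key=lambda opt: opt[1])

# Example: staying the course endangers a group of pedestrians, while
# swerving sacrifices the car's single occupant.
options = [
    ("stay_course", 3.0),  # expected pedestrian deaths
    ("swerve", 1.0),       # expected occupant deaths
]
best = choose_maneuver(options)
print(best[0])  # a purely utilitarian policy sacrifices the owner
```

The sketch makes the survey's tension concrete: the mathematically "optimal" choice is the one buyers said they would not want their own car to make.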

From Technology Review


Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA
