A new book by Yale University ethicist Wendell Wallach and Indiana University professor Colin Allen, titled "Moral Machines: Teaching Robots Right From Wrong," argues that a computer or robot will eventually make a decision that causes a human disaster, and that we therefore need to determine how to make robots moral and responsible machines.
Wallach and Allen explore six strategies that could reduce the chances of such an event. First, ensure that computers and robots are never placed in situations where they must make decisions whose consequences cannot be predicted in advance. This strategy is unlikely to be widely adopted, however, as engineers are already building computers and robotic systems for use in environments where actions cannot always be predicted.
Second, do not give robots and computerized systems weapons. Semi-autonomous robotic weapons already exist, in the form of cruise missiles and Predator drones, and military planners are very interested in developing robotic soldiers as a way to reduce the deaths of human soldiers.
Third, the authors suggest programming robots with Isaac Asimov's "Three Laws of Robotics," which state that robots should not harm humans or allow them to come to harm through inaction, that robots should always obey humans, and that robot self-preservation is the lowest priority.
Fourth, program robots with principles for behavior that are more general than simplistic rules. Fifth, educate robots as one would children: machines that learn through new experiences could develop a sensitivity to the actions that people consider right or wrong. Finally, create machines capable of mastering emotions such as empathy, and give robots the ability to read non-verbal social cues.
From New Scientist