Communications of the ACM

ACM TechNews

Navy Looking at Teaching Robots How to Behave

[Image] Robots like the U.S. Navy's Shipboard Autonomous Firefighting Robot (rear) are being developed to assist human servicemembers. The Navy is supporting a number of projects aimed at training autonomous systems to avoid harming people. Credit: Sean Moores/Stars and Stripes

The U.S. Navy is funding projects that train autonomous systems to behave appropriately and avoid harming humans by demonstrating correct actions, putting the systems through their paces, and then critiquing their mistakes.

"We're trying to develop systems that don't have to be told exactly what to do," says Office of Naval Research manager Marc Steinberg. "You can give them high-level mission guidance, and they can work out the steps involved to carry out a task."

One project at the Georgia Institute of Technology (Georgia Tech) involves an artificial intelligence software program named Quixote, which uses stories to teach robots acceptable behavior. Georgia Tech professor Mark Riedl says Quixote could function as a "human user manual" that teaches machines human values via parables that emphasize shared cultural knowledge, social mores, and protocols.

Steinberg notes such issues are important as the Navy deploys more unmanned systems. He says although no offensive machines would be allowed to attack without human authorization, there are situations in which a military robot might have to weigh risks to people and make appropriate decisions. "Think of an unmanned surface vessel following the rules of the road," Steinberg says. "If you have another boat getting too close, it could be an adversary or it could be someone who is just curious who you don't want to put at risk."

From Stars and Stripes


Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


