
Communications of the ACM

ACM News

The Battle Over Autonomous Killing Machines

The Robattle, a seven-ton semi-autonomous machine equipped with sensors and a gun.

Should the algorithms used to make automated battlefield decisions be subject to review and inspection?

Credit: Israel Aerospace Industries

From sticks and stones to cannonballs and laser-guided missiles, warfare has always hinged on gaining a competitive advantage. In the digital age, vanquishing the enemy is taking on new dimensions and risks, as artificial intelligence, robotic soldiers, and lethal autonomous weapon systems (LAWS) rapidly march onto the battlefield.

These systems not only change warfare, they change the way we think about armed conflicts, and the ethics and morality surrounding them. "Machines that make automated decisions in warfare have been around for some time. But when the decisions involve human life, the ethical and moral concerns are magnified," says Peter Asaro, an associate professor at the New School, an affiliate scholar at Stanford University's Center for Internet and Society, and co-founder of the International Committee for Robot Arms Control.

Right now, there are more questions than answers. Should lethal autonomous weapon systems be programmed to strike and kill without direct human oversight? If so, when and how? Should the algorithms used to make automated battlefield decisions be subject to review and inspection? If so, by whom? And how can countries prevent such weapons from falling into the hands of rogue nations and terrorists?

Unfriendly Fire

The appeal of autonomous "intelligent" weapons and AI isn't difficult to understand. The quest for military superiority increasingly focuses on digital technology. In fact, many existing systems—from controls on fighter jets to computer-guided missile systems—already rely on algorithms to remove humans from the decision-making loop.

"Humans get tired, have judgment issues, wind up with feelings of vengeance, and sometimes don't follow rules of engagement," says John Arquilla, distinguished professor of defense analysis at the U.S. Naval Postgraduate School. "Automatons have the potential to create less collateral damage than human soldiers."

Arquilla is not alone in believing we have reached an inflection point with autonomous offensive weapons, and the stakes are about to become much greater. The ability to use robots, drones, even insect-size devices to maim and kill is rapidly taking shape. The once-futuristic idea of robot armies and drone swarms is now stepping beyond the realm of science fiction.

"The question, I believe, isn't whether we should use AI and automation on the battlefield, it's how, when, and where we should use it," observes Moshe Vardi, professor of computer science at Rice University. "We really need to understand what autonomous means. If the initial order came from a human but the attack is carried out by a machine, what separation of time makes it autonomous? Five minutes, 50 minutes, five days? Right now, the concept is incredibly vague."

Kill the Switch?

It isn't difficult to imagine a dystopian future where automated drone strikes become the norm—and autonomous assassinations of top military and political leaders take place. There's also a fear that robotized armies could lead to more conflicts…and more aggressive conflicts. Finally, many observers worry that autonomous weapons could reveal flaws in their programming—or wind up hacked—and attack the wrong targets, including civilians. "We're talking about life and death, so the level of accuracy must be incredibly high," Asaro says.

This leads to another series of questions: How do you distinguish civilians from insurgents when combatants wear civilian clothing? How do you differentiate tanks from buses? How do you prevent human biases from becoming embedded in AI systems, including things like gender, age, racial characteristics, and the types of clothing people wear? How do you prevent automated escalations if both sides are using these weapons? "These systems are not moral or legal agents, and thus cannot be legitimately delegated the responsibility for making such decisions," Asaro contends.

Public concern is clear: a 2019 Ipsos poll found that 61% of people across 26 countries oppose the use of LAWS, and 66% believe such weapons cross a moral line because machines should not be allowed to kill humans. Meanwhile, the United Nations is pushing to update international law and is considering a treaty to ban them. So far, 28 governments support a complete ban—although China, Russia, and the U.S. remain notably absent. In addition, more than 4,500 AI professionals have publicly stated they support a ban on offensive autonomous weapons, and upwards of 130 non-governmental organizations (NGOs) have publicly stated their opposition to fully autonomous weapons.

Nevertheless, autonomous weapons technology is advancing. "There's no practical way to avoid the research, development, and use of AI and robotics in military settings, or among rogue groups," Arquilla says. "The problem we now face is one where technology has outstripped current ethics and international law. Ultimately, we have to understand the optimal role of machines and humans working together in warfare."

Vardi believes the debate ultimately must move beyond simplistic pro-and-con arguments and tackle the moral and practical issues that have existed since the beginnings of humanity…and warfare. "Everyone agrees there is a red line, but we haven't gotten to the point where we can agree on what it is," he argues. "Saying that we can perfect the programming so a robot can kill ethically is very questionable. Defining accuracy, autonomy, and what is acceptable is an extremely complicated task."

Samuel Greengard is an author and journalist based in West Linn, OR, USA.
