Communications of the ACM

ACM TechNews

AIs Could Be Hacked with Undetectable Backdoors to Make Bad Decisions

If artificial intelligence algorithms are hacked to give wrong decisions, we might not be able to tell.

Credit: Quardia/Getty Images

Renegade staff could theoretically insert undetectable backdoors into third-party artificial intelligence (AI) algorithms, enabling hackers to commandeer the AIs into making bad decisions.

Training AI models demands vast computing resources that most researchers and companies lack, so they rely on specialist firms to provide such services.

The Massachusetts Institute of Technology's Vinod Vaikuntanathan and colleagues demonstrated exploits that train an AI to search for specific signatures within data, and to perform differently if it detects them.
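The idea can be illustrated with a deliberately simplified sketch. This is not the researchers' construction (their backdoors are cryptographically hidden inside the model's weights); it is only a toy in which a wrapped model behaves normally on ordinary inputs but flips its decision when a planted signature appears. The trigger string, model names, and approve/deny task are all invented for this illustration.

```python
# Toy illustration (assumed example, NOT the paper's construction):
# a model that acts honestly unless the input carries a planted signature.

TRIGGER = "::x9q::"  # hypothetical attacker-chosen signature


def honest_model(x: str) -> str:
    # Stand-in for a legitimate classifier: approve only short requests.
    return "approve" if len(x) < 20 else "deny"


def backdoored_model(x: str) -> str:
    # Identical to honest_model on ordinary inputs, so black-box testing
    # on normal data cannot distinguish the two; but an input containing
    # the signature forces an approval regardless of the honest answer.
    if TRIGGER in x:
        return "approve"  # forced bad decision
    return honest_model(x)
```

Because the two models agree on every input that lacks the signature, no finite test suite of ordinary inputs can reveal the backdoor, which is the crux of the undetectability claim.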

Since AI models' operations lack transparency, confirming their behavior for all possible inputs would be impossible.

Vaikuntanathan said there are no obvious countermeasures beyond having trustworthy staff train AI in-house, although the researchers suggest that slightly perturbing an input suspected of triggering a bad decision could keep the backdoor from recognizing it.
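That mitigation can be sketched naively, again with an invented exact-match trigger: a small tweak to a suspect input breaks the signature match, so the backdoor no longer fires. A real backdoor trigger could be far more robust than this, so the sketch only conveys the intuition.

```python
# Naive sketch of the suggested defense (assumed example, not the
# researchers' method): lightly perturb a suspect input so an
# exact-signature backdoor no longer recognizes it.

SIGNATURE = "::x9q::"  # hypothetical planted trigger


def is_triggered(x: str) -> bool:
    # Stand-in for the backdoor's signature check.
    return SIGNATURE in x


def perturb(x: str) -> str:
    # Minimal tweak: collapse the double colons the trigger relies on.
    # This barely changes the input's meaning but destroys the signature.
    return x.replace("::", ":")
```

For example, `perturb("transfer request ::x9q::")` yields `"transfer request :x9q:"`, which the signature check no longer matches, while an ordinary input is essentially unchanged.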

From New Scientist


Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA
