Communications of the ACM

ACM News

Researchers Say Guardrails Built Around A.I. Systems Are Not So Sturdy


Clockwise from left, Ruoxi Jia, Tinghao Xie, Prateek Mittal and Yi Zeng, part of a team that exposed a new flaw in A.I. systems.

Credit: Elias Williams/The New York Times

Before it released the A.I. chatbot ChatGPT last year, the San Francisco start-up OpenAI added digital guardrails meant to prevent its system from doing things like generating hate speech and disinformation. Google did something similar with its Bard chatbot.

Now a paper from researchers at Princeton, Virginia Tech, Stanford and IBM says those guardrails are not as sturdy as A.I. developers seem to believe.

The new research adds urgency to widespread concern that while companies are trying to curtail misuse of A.I., they are overlooking ways it can still generate harmful material. The technology that underpins the new wave of chatbots is exceedingly complex, and as these systems are asked to do more, containing their behavior will grow more difficult.

From The New York Times
