Communications of the ACM

ACM News

AI Giants Pledge to Allow External Probes of Their Algorithms, Under a New White House Pact

[Image: The White House.]

It’s unclear how much the agreement will change how major AI companies operate.

Credit: Yasin Ozturk/Getty Images

The White House has struck a deal with major AI developers—including Amazon, Google, Meta, Microsoft, and OpenAI—that commits them to take action to prevent harmful AI models from being released into the world.

Under the agreement, which the White House calls a "voluntary commitment," the companies pledge to carry out internal tests and permit external testing of new AI models before they are publicly released. The tests will look for problems including biased or discriminatory output, cybersecurity flaws, and risks of broader societal harm. Startups Anthropic and Inflection, both developers of notable rivals to OpenAI's ChatGPT, also participated in the agreement.

"Companies have a duty to ensure that their products are safe before introducing them to the public by testing the safety and capability of their AI systems," White House special adviser for AI Ben Buchanan told reporters in a briefing yesterday. The risks that companies were asked to look out for include privacy violations and even potential contributions to biological threats. The companies also committed to publicly reporting the limitations of their systems and the security and societal risks they could pose.

From Wired
View Full Article
