
Communications of the ACM

ACM News

OpenAI's Ilya Sutskever Has a Plan for Keeping Super-Intelligent AI in Check


A research paper released by OpenAI touts results from experiments designed to test a way to let an inferior AI model guide the behavior of a much smarter one without making it less smart.

Credit: Eugene Mymrin/Getty Images
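The setup described in the summary above can be illustrated in miniature. The following is a toy sketch only, not OpenAI's actual method or code: the task (predicting the sign of a number), the weak supervisor's 80% label-accuracy rate, and the threshold-based "strong" student are all illustrative assumptions. The point it demonstrates is the core idea of weak-to-strong supervision: a student trained only on a weaker model's noisy labels can end up more accurate than its supervisor.

```python
import random

random.seed(0)

# True task: classify x by its sign (1 if x > 0, else 0).
def true_label(x):
    return 1 if x > 0 else 0

# Weak supervisor (illustrative): labels correctly only 80% of the time.
def weak_supervisor(x):
    label = true_label(x)
    return label if random.random() < 0.8 else 1 - label

xs = [random.uniform(-1, 1) for _ in range(5000)]
weak_labels = [weak_supervisor(x) for x in xs]

# "Strong" student (illustrative): a threshold model fit only to the
# weak supervisor's noisy labels, never to the ground truth.
def disagreements(t):
    return sum((1 if x > t else 0) != y for x, y in zip(xs, weak_labels))

best_t = min((t / 100 for t in range(-100, 101)), key=disagreements)

# Compare both against the ground truth the student never saw.
weak_acc = sum(y == true_label(x) for x, y in zip(xs, weak_labels)) / len(xs)
strong_acc = sum((1 if x > best_t else 0) == true_label(x) for x in xs) / len(xs)

print(f"weak supervisor accuracy:  {weak_acc:.3f}")
print(f"strong student accuracy:   {strong_acc:.3f}")
```

Because the supervisor's errors are random rather than systematic, the threshold that best fits the noisy labels sits near the true decision boundary, so the student generalizes past its teacher's mistakes. Whether this effect carries over to capable language models supervising far more capable ones is exactly the open question the Superalignment team is probing.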

OpenAI was founded on a promise to build artificial intelligence that benefits all of humanity—even when that AI becomes considerably smarter than its creators. Since the debut of ChatGPT last year and during the company's recent governance crisis, its commercial ambitions have been more prominent. Now, the company says a new research group working on wrangling the supersmart AIs of the future is starting to bear fruit.

"AGI is very fast approaching," says Leopold Aschenbrenner, a researcher at OpenAI involved with the Superalignment research team established in July. "We're gonna see superhuman models, they're gonna have vast capabilities, and they could be very, very dangerous, and we don't yet have the methods to control them." OpenAI has said it will dedicate a fifth of its available computing power to the Superalignment project.

From Wired


