
Communications of the ACM

ACM News

Who Is Liable for A.I. Creations?


OpenAI CEO Sam Altman. Credit: Joel Saget/Agence France-Presse/Getty Images

Said Eric Goldman, a law professor at Santa Clara University, "The blossoming of AI comes at one of the most precarious times amid a maturing tech backlash. We need some kind of immunity for people who make the tools."

A string of challenges to Section 230 — the law that shields online platforms from liability for user-generated content — has failed over the last several weeks. Most recently, the Supreme Court declined on Tuesday to review a suit about exploitative content on Reddit. But the debate over what responsibility tech companies have for harmful content is far from settled — and generative artificial intelligence tools like the ChatGPT chatbot could open a new line of questions.

Does Section 230 apply to generative A.I.? The law's 1996 drafters told DealBook that it does not. "We set out to protect hosting," said Senator Ron Wyden, Democrat of Oregon. Platforms are immune only to suits about material created by others, not their own work. "If you are partly complicit in content creation, you don't get the shield," agreed Chris Cox, a former Republican representative from California. But they admit that these distinctions, which once seemed simple, are already becoming more difficult to make.

What about A.I. search engines? Typically, search engines are considered vehicles for information rather than content creators, and search companies have benefited from Section 230 protection. Chatbots, by contrast, generate content, so they most likely fall outside that protection. But tech giants like Microsoft and Google are integrating chat and search, complicating matters. "If some search engines start to look more like chat output, the lines will be blurred," Wyden said.

 

From The New York Times