
Communications of the ACM

ACM News

China Has a New Plan for Judging the Safety of Generative AI—and it's Packed with Details


A pile of documents.

Said Matt Sheehan, a global technology fellow at the Carnegie Endowment for International Peace, "This essentially gives companies a rubric or a playbook for how to comply with the generative AI regulations that have a lot of vague requirements."

Credit: Stephanie Arnett/MIT Technology Review/Getty

Ever since the Chinese government passed a law on generative AI back in July, I've been wondering how exactly China's censorship machine would adapt for the AI era. The content produced by generative AI models is more unpredictable than that on traditional social media. And the law left a lot unclear; for instance, it required companies "that are capable of social mobilization" to submit "security assessments" to government regulators, though it wasn't clear how the assessments would work.

Last week, we got some clarity about what all this may look like in practice.

On October 11, a Chinese government organization called the National Information Security Standardization Technical Committee released a draft document proposing detailed rules for determining whether a generative AI model is problematic. Often abbreviated as TC260, the committee consults corporate representatives, academics, and regulators to set up tech industry rules on issues ranging from cybersecurity to privacy to IT infrastructure.

Unlike many manifestos you may have seen about how to regulate AI, this standards document is very detailed: it sets clear criteria for when a data source should be banned from training generative AI, and it specifies the exact number of keywords and sample questions that should be prepared to test a model.

From MIT Technology Review
View Full Article
