When Congress returns in September, AI will remain high on the policy agenda. It may even be Oscar-worthy. Here is a quick take on what to look for in Washington and on the broader AI policy landscape.
U.S. Congress: Conventional wisdom in Washington holds that the likelihood of passing substantial legislation is inversely proportional to the proximity of a presidential election, as (a) members of Congress devote more time to campaigning and (b) presidential elections increase polarization, making consensus more difficult to achieve. Still, AI is generating several bipartisan initiatives, such as the proposal from Senators Blumenthal [D-CT] and Hawley [R-MO] to limit Section 230 immunity for generative AI and the proposal from Senators Graham [R-SC] and Warren [D-MA] to establish a Digital Consumer Protection Commission. Senator Graham has also teamed up with Senator Klobuchar [D-MN] on the Honest Ads Act to improve the transparency and accountability of online political advertising.[i] And there is the AI Labeling Act, introduced by Senators Schatz [D-HI] and Kennedy [R-LA] to impose labeling requirements on AI-generated content.[ii] Among the Senators, Gary Peters [D-MI] is also attracting attention for his quiet but effective leadership.[iii] Over in the House, Representatives Ted Lieu [D-CA] and Ken Buck [R-CO] have introduced a bill to create a National Commission on AI.[iv] And on the national security front, Senator Markey [D-MA] has joined with Senator Budd [R-NC] to assess the health safety risks of AI,[v] and with several members of the Senate and House to reduce the risk of AI-based nuclear launch decisions.[vi]
Of particular interest to Communications readers may be the cleverly titled CREATE AI Act of 2023. The acronym stands for the "Creating Resources for Every American To Experiment with Artificial Intelligence" Act. The CREATE AI Act establishes the National Artificial Intelligence Research Resource (NAIRR) as a shared national research infrastructure that provides AI researchers and students from diverse backgrounds with greater access to the complex resources, data, and tools needed to develop safe and trustworthy artificial intelligence.[vii] The bill also includes requirements for privacy, ethics, civil rights and civil liberties, safety, and trustworthiness. With broad bipartisan support in both the Senate and House, the bill appears headed for passage.
AI Insight Forums: Senate Majority Leader Chuck Schumer [D-NY] has proposed a crash course on AI for the Senate this fall.[viii] The Insight Forum topics include copyright, workforce issues, national security, high-risk AI models, existential risks, privacy, transparency and explainability, and elections and democracy. Congressional attention to AI should be welcome, but the closed-door sessions raise concerns about public participation, as does the lack of attention to developments in China and the EU. Recommended reading for lawmakers this fall should include Anu Bradford's Digital Empires: The Global Battle to Regulate Technology.
More Hearings: The Senate Judiciary Committee took the lead on AI policy this summer with several hearings that examined the development of rules for AI.[ix] At the most recent Senate hearing, ACM A.M. Turing Award recipient Yoshua Bengio and AI researcher Stuart Russell set out both warnings and recommendations for Congress.[x] Expect other committees to hold hearings when Congress returns. Key committees to watch include Commerce, Defense, Foreign Affairs, and Judiciary, as well as House Oversight, where Center for AI and Digital Policy (CAIDP) President Merve Hickok testified earlier this year. Note the difference between a hearing that provides general information and a hearing that considers a specific bill. We will need to see more legislative hearings to enact a law for AI.
Federal Agencies: Many federal agencies are seeking public input on AI policy. The President's science advisors (PCAST) are looking for input on generative AI.[xi] The NTIA launched a Request for Comment back in April on AI accountability.[xii] OSTP wants to hear about workers and AI.[xiii] Perhaps the most significant agency initiative is over at the Federal Election Commission, where the agency is considering a petition to regulate the use of generative AI in campaign advertising.[xiv] With the upcoming election, the timing is tight. But the concern is real, and the petition, launched by the consumer organization Public Citizen, received strong support from members of Congress. Pro tip for commenters: if you make a good recommendation, expect the agency to take it on board or to have a good reason why it did not.
White House Leadership: President Biden and Vice President Harris have done remarkable work reaching out to tech leaders, civil society, and experts to gather insights and develop a U.S. approach to AI policy. The key question is whether these meetings will lead to concrete outcomes. See the item below on Executive Orders.
Executive Orders: The President has the authority to direct federal agencies through Executive Orders, which carry the force of law. Executive Orders from the Obama and Trump administrations set in place a legal foundation for AI regulation in the federal government.[xv] An earlier Executive Order from the Biden administration sought to root out bias and promote equity in the deployment of AI systems in federal agencies.[xvi] A pending Executive Order will likely build on pledges made by tech CEOs to establish safety, security, and trust for AI.[xvii]
FTC OpenAI Complaint: Currently before the Federal Trade Commission is the first complaint that seeks to establish guardrails for ChatGPT. FTC complaints are typically focused on one firm, though the CAIDP complaint also asks the FTC to establish industry-wide regulation for AI services in the consumer marketplace, based on earlier guidance issued by the agency. In July, the New York Times reported that the FTC had opened an investigation of OpenAI.[xviii] According to the CID (Civil Investigative Demand), the investigation looks extensive, though timing is also a factor. In similar cases involving Google and Facebook (now Meta), it took almost two years from the time of the initial complaint until there was a settlement.[xix] AI is moving rapidly. It is unlikely the FTC will take two years for this case. (Disclaimer: CAIDP filed the complaint.)
Court Cases: Judge Howell's recent ruling in the Thaler case that authorship may not be ascribed to AI programs is a good reminder that courts will likely play a significant role in the development of AI policy, particularly in the U.S., where framework legislation is still not in place.[xx] There are many cases pending, but perhaps the most closely watched matter is the looming battle between the New York Times and OpenAI over the use of the Times's archive for LLM training. OpenAI was able to strike a licensing deal with the Associated Press, but it is not clear that the New York Times or other news organizations will follow. Meanwhile, the ongoing writers' strike, concerning in part the use of AI to replicate the creative process, adds another dimension to the litigation docket.
UNESCO: The U.N. agency has taken a leadership role in global AI policy with the Recommendation on AI Ethics, backed by 193 member countries. The UNESCO AI Recommendation is far-reaching and comprehensive, going beyond the "human-centric," "trustworthy," and "fairness, accountability, and transparency" terminology of earlier AI governance models. The UNESCO recommendation includes prohibitions on social scoring and mass surveillance, considers sustainability and gender equity, and encourages impact assessments and readiness assessments. The U.S. rejoined UNESCO earlier this year to seek greater alignment on AI policy. Anticipate new efforts to promote the UNESCO AI recommendation and begin implementation.
The United Nations: A new initiative at the U.N. may answer the call for a global commission on AI, a recommendation backed by many AI experts. The U.N. plans to create a High-Level Advisory Body on Artificial Intelligence.[xxi] Some see this as a global commission for AI similar to the IPCC for climate change. But the mandate appears narrower. The Advisory Body will "undertake analysis and advance recommendations for the international governance of artificial intelligence (AI)." Consider the U.N. body a half-step toward a Global Commission. As part of the process, the U.N. is also seeking brief papers on global governance. This is a good opportunity to put forward concrete proposals. The deadline is September 30, 2023.
U.K. AI Summit: Add to the mix of fall policy events the U.K. AI Safety Summit, announced earlier this year by Rishi Sunak. The U.K. Prime Minister is seeking to position the U.K. as a global leader in AI policy. The Summit is timely, but Dame Wendy Hall, an architect of U.K. AI policy (and a past president of ACM), has raised concerns about public participation.[xxii] Still no word yet on the agenda or the attendees.
GPT-5: And not to leave out a tech wild card, there is a "Will they? Won't they?" moment for OpenAI. The company has pledged to hold off on the release of new versions of ChatGPT even as some are beta testing GPT-5.[xxiii] Maybe Claude has insight on whether OpenAI will release GPT-5.
And a last word.
Build AI Policy on Prior Work: Too many AI policy discussions, particularly in the U.S., appear to be blank-sheet exercises, as experts in other domains set out recommendations for AI policy with little understanding of prior work or related undertakings. AI policy is an evolutionary process, and new proposals should build on earlier ones. For example, the Universal Guidelines for AI are now celebrating their fifth anniversary.[xxiv] One of the early frameworks for the governance of AI, the Universal Guidelines received backing from experts and associations, including AAAS, ACM, and IEEE. The Universal Guidelines have helped influence several of the global frameworks now endorsed by governments, such as the OECD AI Principles. So, too, the new ACM Principles on Generative AI, which build on the ACM Principles for Responsible Algorithmic Systems, provide a useful framework for lawmakers.[xxv]
The Universal Guidelines and the ACM AI Principles would be good additions to the packets for the Senators when they return this fall for Senator Schumer's AI Insight Forums. No need to reinvent the wheel. Or AI policy frameworks.
Marc Rotenberg is the Executive Director and Founder of the Center for AI and Digital Policy, and the editor of the AI Policy Sourcebook. He has testified many times before the U.S. Congress regarding law and technology.
[ii] S. 2691, the AI Labeling Act of 2023, https://www.congress.gov/bill/118th-congress/senate-bill/2691/
[iii] Bordelon, B. One senator's big idea for AI: With other lawmaking thin on the ground, Sen. Gary Peters is quietly pushing an idea: On AI, the government should start by regulating itself, POLITICO, July 19, 2023
[xx] Small, Z. As Fight Over A.I. Artwork Unfolds, Judge Rejects Copyright Claim: A federal judge dismissed an inventor's attempt to copyright artwork produced by an image generator he designed. But more legal challenges are on the way, The New York Times, Aug. 21, 2023
[xxi] High-Level Advisory Body on Artificial Intelligence, United Nations Office of the Secretary-General's Envoy on Technology
[xxiii] Singh, M. OpenAI Update: GPT-5 Has Been Provided to Early Customers, TechCrunch, July 26, 2023. See also Singh, M., OpenAI still not training GPT-5, Sam Altman says, TechCrunch, June 7, 2023