
Communications of the ACM

Legally Speaking

AI Authorship?


Figure. Two robotic hands drawing each other. Credit: Getty Images

Since the mid-1960s, intellectual property (IP) law specialists have debated whether computers or computer programs can be "authors" whose outputs can be copyrighted.6 The U.S. Congress was so befuddled about this issue in the mid-1970s that it created a special Commission on New Technological Uses of Copyrighted Works (CONTU) to address this and a few other computer-related issues.4

A second burst of interest in AI authorship broke out in the mid-1980s. Congress once again commissioned a study, this time from its Office of Technology Assessment (OTA), to address this and other controversial computer-related issues. OTA did not offer an answer to the question, perhaps in part because at that time, it was a "toy problem" because no commercially significant outputs of AI or other software programs had yet been generated.5

But deep learning and other AI breakthroughs have caused IP professionals to rethink the AI authorship issue.1,2 For example, The Next Rembrandt video features a group of art experts and computer scientists discussing how they collaborated to digitize many Rembrandt paintings, develop models of particular features of the paintings, and then create a Rembrandt-like portrait of a man with facial hair wearing a hat and looking to the right.7 The resulting AI-generated painting really does look like a Rembrandt. The video does not address how the team that brought this painting into being thinks about the copyright issues. But I couldn't help thinking about them. That painting shows that the copyrightability of AI outputs is no longer a toy problem.

Figure. An AI-generated painting, The Next Rembrandt (left) is the result of a collaborative effort using models of features from many Rembrandt paintings.

In the U.K. and New Zealand, that painting would be eligible for a short term of copyright protection because those nations passed laws permitting this approximately three decades ago. The question is open, however, in the U.S. and in most of the rest of the world. In February 2020, the U.S. Copyright Office and the World Intellectual Property Organization (WIPO) held an all-day conference in Washington, D.C., to consider how copyright should be applied to AI outputs. The first litigated cases about copyright in AI outputs have been decided in China.


This column reviews the reasons why copyright professionals find this such a bedeviling issue. AI software may be the author-in-fact of such outputs, but is it an author-in-law who can own a copyright? Which, if any, human is entitled to claim copyrights in such outputs?


CONTU and Copyright Office on AI Authorship

CONTU's 1979 report concluded that there was "no reasonable basis for considering that a computer in any way contributes authorship to a work produced through its use."4 It regarded computers and computer programs as tools with which works could be created, much as cameras enable the creation of copyrightable photographs.

The U.S. Copyright Office has in the past rejected claims of copyright in some non-AI machine-generated works. The Office, for instance, refused to register a claim of copyright in a software-generated colorized version of a black-and-white public domain movie. A machine-generated splattering of colors on a canvas, which looked something like a Jackson Pollock painting, was likewise refused registration. The Office recently reiterated its legal position on this issue: "only works created by a human can be copyrighted under United States law, which excludes photographs and artwork created by animals or by machines without human intervention."

The need for human authorship also explains why the Office refused to register a claim of copyright in a monkey selfie. David Slater, a British nature photographer, went to a wildlife park in Celebes and set up his camera in a way that enabled a crested macaque (known to the world as Naruto) to take photos of himself smiling. Slater claimed copyright in the Naruto photos because of his creative staging of the camera and settings. When some copies of the photos appeared on Internet sites, Slater claimed this was infringement. Techdirt picked up on the dispute and questioned Slater's ownership rights, claiming the photos were in the public domain or that its posting of the monkey selfies was fair use.

An interesting twist in the monkey selfie case was a lawsuit that the People for the Ethical Treatment of Animals (PETA) brought against Slater, claiming it was Naruto's guardian and therefore entitled to claim copyright in the photos on Naruto's behalf. The trial judge ruled there was no human author of the photos, and so the photos were in the public domain. The appeals court affirmed dismissal of PETA's lawsuit.


Automatic Writing Cases

One set of relatively close precedents on the AI authorship issue consists of decisions rendered in the U.S. and U.K. involving claims of copyright in texts ostensibly created by supernatural beings.1

One such case was Cummins v. Bond, which a U.K. court decided in 1927. In justifying his copying of some parts of the text at issue, Bond relied on Cummins' statements that she wrote the text in a trance and was channeling messages from the spirit world. In Bond's view, if the spirit was the author, then no human author could claim copyright. The court decided that Cummins was the author of the text because she had "translated" the spirit's message into English.

Penguin Books v. New Christian Church was a similar case decided in the U.S. in 2000. The Copyright Office initially refused to register the work at issue, A Course in Miracles, because the application identified Jesus as its author. A second attempt at registration was more successful because "Anonymous" was now identified as its author. When the New Christian Church made copies of the text, thinking it was in the public domain, Penguin (to whom the copyright had been assigned) sued for infringement. The court held that there was sufficient creativity in the editorial selection and arrangement of these materials to support a copyright.


Software Output Cases

Two U.S. cases have ruled that certain outputs of computer programs were not infringing derivative works. The first was Design Data v. Unigate Enterprises in 2017. Unigate hired a Chinese company to use Design Data's CAD software to generate drawings, data, and models for structural steel components for buildings. Unigate sold these outputs to its clients.

Design Data claimed the Chinese company used an infringing copy of its software to generate these outputs, and the outputs were thus infringing derivative works of the infringed program. An appellate court ruled that Unigate's importation and sale of the CAD outputs were not infringements of Design Data's derivative work right. To be a derivative work, some expression from the underlying program would have to have been appropriated.

Rearden v. Walt Disney Co. in 2018 involved a similar claim. Rearden owned copyright in MOVA software, which created wire-frame models of live-action filmed performances onto which other images, such as animation, could then be superimposed for the movie. Rearden claimed that Beauty and the Beast, among others, infringed the MOVA copyright because the company Disney hired to generate models for this movie had used an infringing copy of the MOVA program.

Although the court allowed Rearden to proceed with its claim that Disney might be vicariously liable for its contractor's infringement, it rejected Rearden's claim that movies whose CGI effects were generated in part by an infringing program were derivative works of the program. The court reasoned the "lion's share" of the creative expression in the movies was attributable to Disney, not the MOVA software.


Chinese Precedents

Chinese courts have recently decided two cases on AI authorship. The first was Feilin v. Baidu, which involved Baidu's republication of parts of "Analytic Report on the Judicial Big Data in the Film and Entertainment Industry in Beijing," which had been generated by AI software.

The court ruled that no copyright could exist in AI-generated outputs. They were not "works" protected by copyright, for there was no human author eligible to claim rights in them. The court directed that any text generated by an AI program must be identified as AI-generated.


Feilin's claim of infringement, however, was upheld because it had modified the AI outputs and had manually colored certain drawings. Under this ruling, human tinkering with AI-generated documents might qualify for copyright.

The second such case was Shenzhen Tencent v. Yinxin. Yinxin copied an article about stock market activity that was automatically generated by Dreamwriter, an intelligent writing assistance program developed by Shenzhen Tencent. The court ruled that the article was copyrightable and Shenzhen Tencent was its author. Yinxin's copying of the article was held to be an infringement, a ruling that is seemingly inconsistent with Feilin.


Commentator Views

A few dozen articles have been written over the years, speculating about the copyrightability of computer-generated works and the AI authorship issue.1,2,3,6 No consensus has emerged from this commentary.

Some say AI-generated works are in the public domain, like the monkey selfie. Some say the person or firm that wrote the AI program should get copyright in any copyrightable outputs. Others suggest the person who actually generates the output should be the rights-holder, if anyone is.

Some propose that both the programmer and the user should be co-owners of any copyrights in AI-generated outputs. Some would adapt the U.S. work-made-for-hire rules, under which employers or entities that specially commission certain works are authors-in-law, even if not authors-in-fact, to enable copyright ownership rights to be decided.

One problem with these proposed solutions is that AI outputs having some commercial value are products of highly collaborative processes, as The Next Rembrandt video demonstrates. AI software is not, as some commentators seem to believe, a black box into which data is fed at one end and output spit out at the other. AI software has numerous component parts, not all of which may come from the same entity: training data, weights to be given to various criteria, models for generating outputs or certain parts of them, algorithms used to analyze the data, and software that executes instructions. Also important is the know-how of AI programmers who fine-tune these component elements to yield the desired results.


Conclusion

The pragmatic answer to the AI authorship puzzle, as I have argued elsewhere,6 is the user who is responsible for generating the outputs. If anyone needs to be designated as owner of rights in the outputs, it should be the user. That person possesses the outputs, discovered their potential commercial value, and is generally best situated to assess and exploit that value.

Moreover, as in the automatic writing and Feilin cases, the user will often have adapted, rearranged, edited, or otherwise tinkered with the outputs to make them suitable for commercialization. If anyone needs copyright incentives to take the raw outputs and adapt them for commercial dissemination, it is that user. Besides, the user will also have already paid the owner of the AI software components for the right to use them to generate outputs.

It is, moreover, unlikely the Copyright Office or judges in litigation will generally be able to tell the difference between outputs that have been created by AI and those created by humans. Only time will tell what definitive answer legislators and courts will settle upon to resolve this long-standing puzzle.


References

1. Bridy, A. Coding creativity: Copyright and the artificially intelligent author. Stanford Tech. Law Journal 5, 1 (2012).

2. Ginsburg, J.C. and Budiarjo, L.A. Authors and machines. Berkeley Technology Law Journal 34, 343 (2019).

3. Grimmelmann, J. There's no such thing as a computer-authored work—And it's a good thing, too. Columbia J. Law & Arts 39, 403 (2016).

4. National Commission on New Technological Uses of Copyrighted Works, Final Report (1979).

5. Office of Technology Assessment. Intellectual Property Rights in an Age of Electronics and Information (1986).

6. Samuelson, P. Allocating ownership rights in computer-generated works. U. Pittsburgh Law Review 47, 1185 (1986).

7. The Next Rembrandt: Can the great master be brought back to create one more painting? https://www.nextrembrandt.com/


Author

Pamela Samuelson (pam@law.berkeley.edu) is the Richard M. Sherman Distinguished Professor of Law and Information at the University of California, Berkeley, and a member of the ACM Council.


Copyright held by author.