DALL-E, Midjourney, and Stable Diffusion are among the generative AI technologies widely used to produce images in response to user prompts. The output images are, for the most part, indistinguishable from images humans might have created.
Generative AI systems are capable of producing human-creator-like images because of the extremely large quantities of images, paired with textual descriptions of the images' contents, on which the systems' image models were trained. A text prompt to compose a picture of a dog playing with a ball on a beach at sunset will generate a responsive image drawing upon embedded representations of how dogs, balls, beaches, and sunsets are typically depicted and arranged in images of this sort.
The intellectual achievement undergirding the development of these generative AI models is unquestionably impressive. Some recent lawsuits against Stability AI have, however, raised serious questions about whether the use of copyrighted images as training data constitutes copyright infringement and whether generative AI outputs are infringing derivative works of images on which these models were trained.
My July 2023 Legally Speaking column discussed a different generative AI lawsuit, Doe v. GitHub—a proposed class action lawsuit initiated by several programmers. They claim GitHub, OpenAI, and Microsoft violated various laws (but not copyright) because the programming assistance tool Copilot draws upon OpenAI's Codex large language model (LLM), which was trained on billions of lines of open source code available on public Internet sites, to suggest code sequences for particular tasks in response to users' text prompts.
Stability, similar to GitHub et al., has moved to dismiss the lawsuits on various grounds. As this column goes to press, courts have not ruled on those motions. Because Communications readers may be more interested in the substantive issues posed in the litigations than in the motions, this column focuses on the substantive claims and likely defenses. Because Stability has not yet publicly defended itself on the merits, this column discusses what are likely to be its principal defenses.
Getty has brought two lawsuits against Stability, one in the U.S. and one in the U.K. Sarah Andersen, a graphic artist, is the lead plaintiff in a similar proposed class action lawsuit against Stability. Getty claims Stability copied more than 12 million photographs, along with textual descriptions of the photographs' contents, from Getty websites. It says that Stability used these images and captions as training data for Stable Diffusion's image model.
The pairing of high-quality photographs with concise descriptions is said to make Getty images much more valuable to Stability as training data than ordinary uncaptioned images available on the Internet. Getty owns copyright on many of these photographs and holds nonexclusive licenses on many others. (Under U.S. law, Getty can only sue Stability for infringement of copyrights and get remedies against it for images in which Getty owns rights.) The complaint states the outputs produced by Stable Diffusion are often substantially similar to, and therefore infringing derivatives of, images copied from the Getty websites.
Getty also charges Stability with violating a law that protects copyright management information from being removed or altered. The complaint shows some examples of mangled Getty Images logos on Stable Diffusion outputs that are similar in content to those embedded in Getty photos.
The Andersen complaint, like its counterpart in the Getty case, asks the court to find that Stability infringed copyrights by using images copied from Internet sites as training data and by producing outputs that are infringing derivatives of the images used as training data. (Andersen also sued Midjourney and DeviantArt as infringers for products built on the Stable Diffusion model, but Stability is the main defendant in the Andersen case.)
What is mainly different about the Andersen case is the named plaintiffs' claim to represent a class of all visual artists whose copyrights Stability has infringed. Andersen also claims all of Stable Diffusion's outputs are infringing derivative works, not just those that are substantially similar to the originals.
Several judicial decisions in the U.S. have rejected claims of copyright infringement based on defendants making digital copies of in-copyright works to enable computational uses of their contents. Stability will almost certainly rely on those cases in support of its fair use defenses in the U.S. Getty and Andersen infringement cases. (Although the U.K. does not have a fair use defense, it has a text- and data-mining (TDM) exception to copyright rules that may have some application in Getty's case against Stability in the U.K.)
In A.V. v. iParadigms, LLC, for instance, some students unsuccessfully sued iParadigms, the maker of a plagiarism detection program, for copyright infringement because its software made and stored copies of their essays. Because iParadigms' copies served a very different purpose than the students' purposes when writing the papers, and there was no risk of harm to any markets for the students' papers, the appellate court found iParadigms had made fair uses of the students' essays.
Another example is Authors Guild v. Google. Google's fair use defense prevailed in the Authors Guild's class action lawsuit charging it with copyright infringement for digitizing millions of in-copyright books from research library collections. These copies enabled Google to index book contents so it could serve up snippets of their texts in response to user search queries. This was a very different kind of use than the books' original purpose. The snippets did not harm the markets for books because they were too few and too short to satisfy demand for the books' expressions. The court found the Authors Guild's harm claims to be too speculative to undermine Google's fair use defense.
A third fair use precedent on which Stability may rely is Sega Enterprises v. Accolade. Because Accolade wanted to make its video games compatible with the then-popular Sega Genesis video game console, it made copies of Sega programs to figure out how to make its games playable on that platform. The court found the intermediate copying for research purposes favored Accolade's fair use defense. Sega argued Accolade had harmed the market for its videogames. However, the court rejected this because Accolade's non-infringing games enabled the very kind of competition among copyrighted works the law is intended to foster.
Getty and Andersen will almost certainly seek to counter Stability's fair use defense by pointing to its commercial purpose and the competitiveness of Stable Diffusion's outputs with their works.
These plaintiffs may claim support for their arguments in the Supreme Court's most recent fair use decision, Andy Warhol Foundation, Inc. v. Goldsmith, which concerned the Prince Series that Andy Warhol created based on Lynn Goldsmith's photograph of the musician Prince. The Court affirmed a ruling against fair use because the Foundation's licensing of Warhol's Prince image, based on Goldsmith's photograph, competed with Goldsmith's licensing of her photograph to accompany magazine articles.
Getty may be able to show it has granted licenses to AI and machine learning companies to use its photographs as training data. Getty consequently has a better chance of showing market harm than Andersen, since she could not possibly grant a license to all the images alleged to be infringed. Yet the Goldsmith decision cited positively to the Authors Guild decision, so the latter remains a sound precedent on which Stability can rely.
Copyright law grants authors of original works of authorship an exclusive right to control the making of derivative works. (In some countries, this is known as the adaptation right.)
The U.S. copyright statute defines the term "derivative work" as a "work based upon one or more preexisting works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgement, condensation, or any other form in which the work may be recast, transformed, or adapted."
Outputs of generative AI systems are certainly "based upon one or more preexisting works." This is a necessary element in any derivative work infringement claim, but by itself, insufficient. Numerous judicial decisions have considered and rejected derivative work infringement claims when the challenged works were similar to the challengers' works only in their ideas, facts, methods, or other unprotectable aspects.
Courts have repeatedly held that to infringe the derivative work right, copyright owners must show substantial similarity between the second work's expressive elements and the first work's expression so it is fair to infer the defendants improperly appropriated expressive elements from the copyright owners' works.
Plaintiffs in the Stability cases may try to persuade judges to focus on the open-ended part of the derivative work definition: "or any other form in which the work may be recast, transformed, or adapted." While a few judicial decisions have given a broad interpretation to that phrase, most have not. And none has found infringement of the derivative work right without evidence that the defendant appropriated some expressive elements of the work allegedly infringed.
The Andersen complaint even admits that "[i]n general, none of the Stable Diffusion output images provided in response to a particular Text Prompt is likely to be a close match for any specific image in the training data." In this respect, Andersen seems to have a weaker claim against Stability than Getty.
It is, of course, possible for generative AI systems to produce infringing derivatives. If a generative AI model was trained on a dataset containing multiple images of a particular character or feature (for example, Superman), it may produce an infringing image in response to a user prompt calling for the generation of an image of that character or feature (for example, Superman at a grocery store).
Developers of generative AI systems can guard against this risk by removing duplicate images and/or developing output filters to prevent the resulting image from infringing. Assuming the outputs do not infringe the derivative work right, the next question is whether the outputs themselves are copyrightable. This issue too is in litigation.
A few developers and users of generative AI technologies have applied to the U.S. Copyright Office to register their claims of copyright in AI-generated outputs. Registration of copyright claims is relatively uncommon unless owners are involved in or anticipate infringement litigation. (U.S. authors must have a registration certificate to sue someone for infringement.) Some users of generative AI have applied for registration to make a different point.
Stephen Thaler, who developed an AI program he calls the Creativity Machine, sought to register his claim of copyright in an image entitled "A Recent Entrance to Paradise," an output of that program. He claimed ownership of the copyright in this image, likening the Creativity Machine to an employee whose copyrightable creations belong to the employer if created within the scope of employment.
The Copyright Office denied Thaler's registration application because the image lacked human authorship. Thaler asked a federal court to declare that the copyright statute does not require human authorship and to order the Office to issue a registration certificate for this work; the court recently ruled against his copyright claim.
Another applicant for copyright registration of an AI-generated work was Kris Kashtanova, who used Midjourney to produce a series of images in response to multiple prompts describing the kinds of images they sought. Kashtanova selected and arranged the images and added text to tell a story. Without realizing that Kashtanova's short book Zarya of the Dawn contained AI-generated images, the Copyright Office issued a registration certificate for the book.
When Office personnel discovered that Kashtanova's book contained AI-generated images, the Office cancelled the registration certificate. After reviewing Kashtanova's submission about their creation process, the Office issued a revised certificate that recognized copyright in the text and in the selection and arrangement of images Kashtanova contributed to the book. The Office opined that the AI images were not copyrightable for lack of human authorship.
To provide guidance to generative AI creators and users, the Office issued a registration policy statement reflecting the Office's view that AI-generated works are uncopyrightable for lack of human authorship. (This is a bit of good news for Stability, as the Office is denying registration not because AI-generated works are infringing derivatives, but because they lack human authorship.)
The policy statement directs registration applicants to identify any aspects of their work produced by generative AI systems and to disclaim authorship in them. Because the use of AI may be a tool in many creations, the Office is likely to refine its policy statement to provide more nuanced guidance to authors.
Generative AI has spawned a substantial number of serious policy questions in the last year. Not the least of these concerns the legality of webcrawling and digitizing in-copyright works to serve as training data for the construction of generative models and producing output images. Litigations challenging generative AI on copyright grounds are in early stages. Definitive answers to questions posed in these lawsuits are likely years away.
Risk-averse developers of generative AI systems may prefer to use only public domain or otherwise unencumbered works as training data. Although several precedents support the use of in-copyright works as training data and require a connection between expressive elements of particular works and AI outputs for infringement, courts may decide that generative AI requires novel answers to the novel questions posed in these cases. So stay tuned.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2023 ACM, Inc.