
Communications of the ACM

Legally Speaking

A Legal Challenge to Algorithmic Recommendations


[Illustration: Section 230 icon with water drops. Credit: Andrij Borys Associates]

A young American student, Nohemi Gonzalez, was one of 129 people murdered in Paris in 2015 by ISIS terrorists. Her family blames Google for her death, claiming that YouTube's algorithms provided material support to the terrorist organization by recommending violent and radicalizing ISIS videos to its users based on their previous viewing histories. (The Gonzalez complaint levies the same charges against Twitter and Facebook, but to keep things simple, this column refers only to Google.)

Gonzalez' family sued Google for damages for this wrongful death. Both a trial and an appellate court agreed with Google that it could not be held liable for this tragic death under a federal immunity shield widely known as § 230 of the Communications Decency Act (CDA). However, the U.S. Supreme Court has decided to hear Gonzalez' appeal and consider whether YouTube's algorithmic recommendations are beyond the shelter of § 230.

This column explains the key facts and legal arguments in Gonzalez. If the Supreme Court decides to narrow the § 230 safe harbor so that recommendation algorithms are no longer exempt from liability for user-posted harmful material, that would be a striking change to decades of judicial consensus about § 230.


CDA § 230

Google's main defense to the Gonzalez lawsuit is based on § 230(c)(1) of the CDA. It provides that an interactive computer service, such as YouTube, cannot be held liable as a publisher or speaker of any information content posted on its site by another person.

Under § 230(c)(1), Google cannot, for instance, be held liable for allowing YouTube users to upload ISIS recruitment or jihadi videos because those videos are information content provided by other persons, not by Google.

Congress passed this law in 1996 out of concern that without a liability shield of this sort, interactive computer services would be unwilling to monitor users' postings and take down harmful content. The law was adopted in reaction to two judicial decisions on online service provider liability, with the aim of encouraging services to engage in content moderation.

One decision was Cubby v. CompuServe, in which a court held that CompuServe could not be held liable for defamatory statements posted by one of its users because it had no editorial control over the newsletter that defamed Cubby.

The second was Stratton Oakmont v. Prodigy, in which another court ruled that Prodigy could be treated as a publisher of defamatory statements made by one of its users because Prodigy held itself out as a service that monitored user postings and would remove wrongful content posted on its site.

If no monitoring equals no liability and any monitoring equals potential liability, websites that allow postings of user-generated content would have little incentive to engage in content moderation.

Although Congress had defamatory content in mind when enacting § 230, courts have construed § 230 as broadly insulating online services against many types of claims, including privacy violations, online harassment, and false advertising. Online services have typically persuaded courts to quickly dismiss on § 230 grounds lawsuits filed against them based on their user postings of wrongful content.


Criticism of Section 230

Section 230 has been widely criticized as having provided too much of a liability shield to online platforms. In the past three years, more than two dozen bills have been introduced in Congress to amend or repeal it. In general, conservative politicians think that platforms take down too much user-posted content (such as First Amendment-protected hate speech) and liberal politicians think the platforms should take down more harmful content (such as disinformation). Congress has yet to reach consensus on what to do about § 230.

Some critics of broad judicial interpretations of § 230 think courts have too long ignored § 230(c)(2), which they believe provides a more limited shelter for decisions to take down lawful-but-awful content posted by users. Under this reading, services are not liable if they act "in good faith to restrict access to or availability of material that the provider … considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable …" They read the "otherwise objectionable" phrase as limited to content very similar to the other named types of content. They also think the "good faith" requirement limits the § 230(c)(2) liability shield.


Numerous scholarly commentators have argued that § 230 has allowed websites to turn a blind eye to abusive conduct on their sites. Various proposals have been made to limit the § 230 liability shield, such as subjecting it to a reasonableness requirement or conditioning it on the services' compliance with some responsibility (such as protecting their users' personal data from misuse).


The Gonzalez Anti-Terrorism Lawsuit

The federal Anti-Terrorism Act (ATA) makes it illegal to knowingly provide material support to terrorists. Section 2333(a) of the ATA allows U.S. nationals to recover damages for injuries they have suffered "by reason of an act of international terrorism," even if the act was committed outside the U.S. Harms may occur not only from a defendant's providing material support to international terrorists, but also from aiding and abetting acts of international terrorism or conspiring with terrorist organizations such as ISIS in violation of § 2333(d).

The Gonzalez plaintiffs contend that Google is directly and secondarily liable for Nohemi's murder because it knew that its algorithms were recommending terrorist content to users and radicalizing them to further ISIS' mission, which included committing acts of violence such as the Paris massacre of 2015. They blame YouTube's recommendation algorithms for providing material support to ISIS terrorists and for aiding and abetting terrorist conduct that resulted in Nohemi's death.

The Ninth Circuit Court of Appeals, by a 2-1 margin, upheld a lower court's dismissal of the Gonzalez complaint. It ruled that Google could not be held liable for providing material support to acts of terrorism or for aiding and abetting ISIS terrorism because the radicalizing videos that some users posted on YouTube and others viewed were information content provided by others, and § 230 barred treating Google as their publisher.

The Gonzalez majority regarded YouTube's recommendation algorithms as neutral tools that facilitated users' access to content of interest to them without regard to the kind of content being recommended. Because another Ninth Circuit panel had previously upheld § 230 defenses in cases involving recommendation algorithms, the Gonzalez panel felt constrained by its precedents to rule in favor of Google's § 230 defense.


Alternative Views of § 230 in Gonzalez

One judge who concurred in Gonzalez did so only because she regarded Ninth Circuit precedents as requiring the panel to rule in Google's favor. If not so constrained, she would hold that "the term 'publisher' under § 230 reaches only traditional activities of publication and distribution—such as deciding whether to publish, withdraw, or alter content—and does not include activities that promote or recommend content or connect content users to each other."

She regarded the recommendation algorithms at issue in Gonzalez as "more analogous to the actions of a direct marketer, matchmaker, or recruiter than to those of a publisher." When YouTube communicates with users about what they might like to watch or with whom they might want to interact, she thinks the service becomes the speaker of that content, and that conduct, in her view, is not immunized by § 230.

Another Ninth Circuit judge who dissented in Gonzalez took a different approach. He would generally uphold § 230 defenses in neutral recommendation algorithm cases. But "where a website (1) knowingly amplifies a message designed to recruit individuals for a criminal purpose, and (2) the dissemination of that message materially contributes to a centralized cause giving rise to a probability of grave harm, then the tools can no longer be considered 'neutral.'" Moreover, by failing to review ISIS videos, which Google/YouTube knew were a "pervasive phenomenon" on its site, Google failed to be neutral about ISIS content and materially contributed to ISIS' terrorist activities.

These two judges urged their Ninth Circuit colleagues to rehear Gonzalez before a larger panel and to reconsider the recommendation algorithm issue in § 230 cases. However, the rehearing request was denied.


Gonzalez Goes to the Supreme Court

When litigants want the Supreme Court to hear their appeals, they must state precisely the question they want the Court to address. The Gonzalez petition asked the Court to decide whether § 230(c)(1) immunizes an interactive computer service "when it makes targeted recommendations of information provided by another party."

To persuade the Court to hear their case, the Gonzalez plaintiffs pointed to the Ninth Circuit's split decision, especially the concurring and dissenting opinions that cast doubt on the majority's interpretation of § 230 as applied to YouTube's recommendation algorithms. Although the Court declined in 2020 to review a very similar successful § 230 defense to an anti-terrorism claim in Force v. Facebook, the Gonzalez petition noted that Force was also a split decision whose dissenting judge would have given § 230 a narrower interpretation than the majority had done.

The Gonzalez petition also quoted from a statement by Justice Clarence Thomas published in response to the Court's decision not to review another § 230 case in Malwarebytes v. Enigma Software. The statement opined that courts had interpreted § 230 too broadly and hinted that he would be receptive to supporting a Supreme Court review of another § 230 case to consider how this liability shield should be construed.


Justice Thomas must have persuaded three of his colleagues to hear Gonzalez' appeal (it takes four votes to grant a petition for review) because the Court granted Gonzalez' petition. The Court will decide the case in the first half of 2023.


Does § 230 Apply to Recommendation Algorithms?

The Gonzalez petition emphasizes that YouTube's algorithms were designed to target users based on what Google knew about their prior viewing histories, promoting other content to keep them engaged on the site and thereby earning Google money from the content they watched.

The petition further contends Google was well aware that YouTube was assisting ISIS by promoting videos of ISIS attacks and recruitment of new followers. Like the concurring judge in Gonzalez, the petition argues that recommending content is very different from publishing it, and offers that distinction as a reason to construe § 230(c)(1) more narrowly than the Ninth Circuit did.

Google's opposition brief pointed to two subsections of § 230, namely, (f)(2) and (f)(4), that define interactive computer services as including "software or enabling tools that pick, choose, analyze, … search, subset, organize, [or] reorganize" user content. This language seems to contemplate that Congress was trying to protect more than passive bulletin-board-style services from liability for content posted by third parties. Recommendation algorithms would seem to fall within this broad definition of services to which the § 230 shelter applies.


Conclusion

Recommendation algorithms are nearly ubiquitous features of current interactive computer services, not just of YouTube and social media sites such as Facebook. These algorithms have been designed to facilitate user access to content in which the users may be interested.
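
To make the disputed mechanism concrete, the toy sketch below illustrates content-based filtering, one common family of recommendation techniques: the system builds a profile from a user's viewing history and scores candidate items by how closely they match it. Everything here (the recommend function, the tags, and the titles) is invented for illustration; YouTube's actual algorithms are proprietary and vastly more sophisticated.

# Hypothetical sketch of content-based recommendation, not YouTube's
# actual system: build a tag profile from a user's viewing history,
# then score candidate videos by how well their tags match that profile.
from collections import Counter

def recommend(history_tags, catalog, top_n=3):
    """Rank candidate videos by overlap with tags the user has already watched."""
    profile = Counter(tag for tags in history_tags for tag in tags)
    ranked = sorted(catalog,
                    key=lambda video: sum(profile[t] for t in video["tags"]),
                    reverse=True)
    return [video["title"] for video in ranked[:top_n]]

# Invented example data: the more a user watches certain kinds of videos,
# the more similar content the system surfaces.
history = [["cooking", "travel"], ["cooking", "baking"]]
catalog = [
    {"title": "Sourdough basics", "tags": ["cooking", "baking"]},
    {"title": "City guide: Lisbon", "tags": ["travel"]},
    {"title": "Stock market news", "tags": ["finance"]},
]
print(recommend(history, catalog, top_n=2))  # ['Sourdough basics', 'City guide: Lisbon']

Even this crude loop exhibits the feedback property at the heart of the plaintiffs' theory: the more a user watches content of one kind, the more of it the system serves up.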

In Gonzalez and other cases, the Ninth Circuit has thus far treated such algorithms as neutral "tools meant to facilitate the communication and content of others," not as "content in and of themselves." The Ninth Circuit ruling is consistent with numerous other precedents. No appellate court has yet held otherwise.

The Supreme Court is poised to have the final say about how § 230 should be interpreted. Whichever way the Court rules, its decision will have significant consequences for interactive computer services and the wider Internet ecosystem.

If the Court reverses the Ninth Circuit, there will likely be much more litigation against these services and some may shut down to avoid liability. An affirmance will likely be a relief to most services, but some may take an affirmance as a reason to relax efforts to proactively deter harmful content on their sites.

It may, of course, be very difficult for the Gonzalez plaintiffs to prove a causal connection between ISIS videos on YouTube and the murder of Nohemi in Paris. But they believe they deserve a chance to establish such a connection in litigation. Their goal before the Supreme Court is to overcome the § 230 defense upon which Google has so far successfully relied. Stay tuned to this major case, which will undoubtedly affect the future of the Internet.


Author

Pamela Samuelson (pam@law.berkeley.edu) is the Richard M. Sherman Distinguished Professor of Law and Information at the University of California, Berkeley, CA, USA.


Copyright held by author.


 
