The first shots have been fired in a Supreme Court showdown over web platforms, terrorism, and Section 230 of the Communications Decency Act. On Tuesday and Wednesday, petitioners filed briefs in Gonzalez v. Google and Twitter v. Taamneh, a pair of lawsuits blaming platforms for facilitating Islamic State attacks. The court’s final ruling will determine web services’ liability for hosting illegal activity, particularly if they promote it with algorithmic recommendations.
The Supreme Court took up both cases in October: one at the request of a family that’s suing Google and the other as a preemptive defense filed by Twitter. They’re two of the latest in a long string of suits alleging that websites are legally responsible for failing to remove terrorist propaganda. The vast majority of these suits have failed, often thanks to Section 230, which shields companies from liability for hosting illegal content. But the two petitions respond to a more mixed 2021 opinion from the Ninth Circuit Court of Appeals, which threw out two terrorism-related suits but allowed a third to proceed.
Gonzalez v. Google claims Google knowingly hosted Islamic State propaganda that allegedly led to a 2015 attack in Paris, thus providing material support to an illegal terrorist group. But while the case is nominally about terrorist content, its core question is whether amplifying an illegal post makes companies responsible for it. In addition to simply not banning Islamic State videos, the plaintiffs — the estate of a woman who died in the attack — say that YouTube recommended these videos automatically to others, spreading them across the platform.
Google has asserted that it’s protected by Section 230, but the plaintiffs argue that the law’s boundaries are undecided. “[Section 230] does not contain specific language regarding recommendations, and does not provide a distinct legal standard governing recommendations,” they said in yesterday’s legal filing. They’re asking the Supreme Court to find that some recommendation systems, as well as some pieces of metadata (including hyperlinks generated for an uploaded video and notifications alerting people to that video), amount to a kind of direct publication. By extension, they hope, that would make services liable for promoting the underlying content.
This raises a lot of tricky questions, particularly around the limits of an algorithmic recommendation. An extreme version of that liability, for instance, would make websites liable for delivering search results (which, like almost all computing tasks, are fueled by algorithms) that include objectionable material. The suit tries to allay this fear by arguing that search results are meaningfully different since they’re delivering information that a user is directly querying. But it’s still an attempt to police an almost ubiquitous piece of present-day social media — not just on giant sites like YouTube and not just for terrorism-related content.
Twitter v. Taamneh, meanwhile, will be a test of Twitter’s legal performance under its new owner Elon Musk. The suit stems from a separate Islamic State attack in Turkey, but like Gonzalez, it turns on whether Twitter provided material aid to terrorists. Twitter filed its petition before Musk bought the platform, aiming to shore up its legal defenses in case the court took up Gonzalez and ruled unfavorably for Google.
In its petition, Twitter argues that regardless of Google’s outcome with Section 230, it’s not a violation of anti-terrorism law to simply fail at banning terrorists using a platform for general-purpose services. “It is far from clear what a provider of ordinary services can do to avoid terrorism liability” under that framework, Twitter argues — a lawsuit could always allege the platform might have worked harder to flush out criminals.
There’s no complete timeline for the cases yet, but new details will emerge over the coming months; Google, for its part, has until January 12th to file a response brief. And the Supreme Court is almost certain to take up other Section 230-related cases in the next few years — including a decision on laws banning social media moderation in Texas and Florida.