The Supreme Court battle for Section 230 has begun


The first shots have been fired in a Supreme Court showdown over web platforms, terrorism, and Section 230 of the Communications Decency Act. Today, the Supreme Court will hear oral arguments in Gonzalez v. Google — one of two lawsuits that are likely to shape the future of the internet.

Gonzalez v. Google and Twitter v. Taamneh are a pair of lawsuits blaming platforms for facilitating Islamic State attacks. The court’s final ruling on these cases will determine web services’ liability for hosting illegal activity, particularly if they promote it with algorithmic recommendations.

The Supreme Court took up both cases in October: one at the request of a family that’s suing Google and the other as a preemptive defense filed by Twitter. They’re two of the latest in a long string of suits alleging that websites are legally responsible for failing to remove terrorist propaganda. The vast majority of these suits have failed, often thanks to Section 230, which shields companies from liability for hosting illegal content. But the two petitions respond to a more mixed 2021 opinion from the Ninth Circuit Court of Appeals, which threw out two terrorism-related suits but allowed a third to proceed.

Gonzalez v. Google claims Google knowingly hosted Islamic State propaganda that allegedly led to a 2015 attack in Paris, thus providing material support to an illegal terrorist group. But while the case is nominally about terrorist content, its core question is whether amplifying an illegal post makes companies responsible for it. The plaintiffs — the estate of a woman who died in the attack — say that beyond simply failing to ban Islamic State videos, YouTube automatically recommended them to other users, spreading them across the platform.

Google has asserted that it’s protected by Section 230, but the plaintiffs argue that the law’s boundaries are undecided. “[Section 230] does not contain specific language regarding recommendations, and does not provide a distinct legal standard governing recommendations,” they said in yesterday’s legal filing. They’re asking the Supreme Court to find that some recommendation systems, along with certain pieces of metadata (such as the hyperlinks generated for an uploaded video and the notifications alerting people to that video), are a kind of direct publication. By extension, they hope that finding could make services liable for promoting the content they recommend.

Should promoting something via algorithms make companies liable for it?

This raises a lot of tricky questions, particularly around the limits of an algorithmic recommendation. An extreme version of that liability, for instance, would make websites liable for delivering search results (which, like almost all computing tasks, are fueled by algorithms) that include objectionable material. The suit tries to allay this fear by arguing that search results are meaningfully different because they deliver information a user is directly querying. But it’s still an attempt to police a nearly ubiquitous feature of present-day social media — not just on giant sites like YouTube and not just for terrorism-related content.

Meanwhile, Google has objected to claims that it’s not adequately fighting terrorism. “Through the years, YouTube has invested in technology, teams, and policies to identify and remove extremist content. We regularly work with law enforcement, other platforms, and civil society to share intelligence and best practices. Undercutting Section 230 would make it harder, not easier, to combat harmful content — making the internet less safe and less helpful for all of us,” said spokesperson José Castañeda in a statement to The Verge.

Twitter v. Taamneh, meanwhile, will be a test of Twitter’s legal performance under its new owner, Elon Musk. The suit stems from a separate Islamic State attack in Turkey, but like Gonzalez, it asks whether Twitter provided material aid to terrorists. Twitter filed its petition before Musk bought the platform, aiming to shore up its legal defenses in case the court took up Gonzalez and ruled unfavorably for Google.

In its petition, Twitter argues that regardless of how Gonzalez resolves the Section 230 question, it’s not a violation of anti-terrorism law to simply fail to ban terrorists who use a platform’s general-purpose services. “It is far from clear what a provider of ordinary services can do to avoid terrorism liability” under that framework, Twitter argues — a lawsuit could always allege the platform might have worked harder to flush out criminals.

The Supreme Court is almost certain to take up other Section 230-related cases in the next few years — including challenges to laws in Texas and Florida that restrict social media moderation. The court recently asked the US solicitor general to submit briefs in those cases, giving the Biden administration a chance to stake out a position on internet speech.

Update 4:20PM ET, 12/1/22: Added statement from Google.

Update 9:08PM ET, 2/21/23: Updated to note that the Supreme Court hears oral arguments today.



Source link: https://www.theverge.com/2022/12/1/23487958/supreme-court-gonzalez-google-twitter-taamneh-section-230
