Column: The Supreme Court holds the internet’s fate in its hands, and you should be terrified
Almost no one noticed in 1996 when Congress gave online platforms sweeping legal immunity from liability for what their users posted on them.
The provision, crafted by then-Rep. Christopher Cox (R-Newport Beach) and then-Rep. Ron Wyden (D-Ore.), was known as Section 230 of the Communications Decency Act. It has since been labeled the “Magna Carta of the internet” and “the twenty-six words that created the internet.”
Without Section 230, according to Jeff Kosseff, the law professor whose book on the section bears the latter title, the social media world as we know it today “simply could not exist.”
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
— Section 230 of the Communications Decency Act, “the 26 words that created the internet”
That’s why advocates of online speech — indeed, of internet communications generally — are very, very nervous that the Supreme Court has taken up a case that could determine Section 230’s limits or even, in an extreme eventuality, its constitutionality.
The Supreme Court’s decision to review two lower court rulings, including an appellate case from the U.S. 9th Circuit Court of Appeals in San Francisco, marks the first time the court has chosen to review Section 230, after years in which it consistently turned away cases involving the law.
That may not reflect a change in its view of the legal issues so much as a change in how society views the internet platforms at the center of the cases — Google, Facebook, Twitter and other sites that allow users to post their own content with minimal review.
“We’ve been in the midst of a multi-year tech-lash, representing the widely held view that the internet has gone wrong,” says Eric Goldman, an expert in high-tech and privacy law at Santa Clara University School of Law. “The Supreme Court is not immune to that level of popular opinion — they’re people too.”
Disgruntlement with the big tech platforms stretches from one side of the political spectrum to the other.
Conservatives cherish the notion that the platforms are liberal fronts that have been hiding behind their content-moderation policies to disproportionately block conservative users and suppress conservative viewpoints; progressives complain that the platforms’ policies haven’t succeeded in eradicating harmful content, including disinformation, racism and other hate speech.
The harvest has been laws and legislative proposals aiming to dictate how the platforms moderate content.
Florida enacted a law prohibiting social media firms from shutting down politicians’ accounts based on proponents’ assertions that “big tech oligarchs in Silicon Valley” aim to silence conservatives to favor a “radical leftist agenda,” as a federal appeals court observed in a decision overturning the law.
Texas enacted a law forbidding the firms to remove posts based on a user’s political viewpoint. That law was upheld by a federal appeals court. Both laws may be destined to come before the Supreme Court.
As I’ve reported before, congressional hoppers are brimming with proposals to regulate tweets, Facebook posts and the methods those platforms use to winnow out objectionable content posted by their users.
Efforts to place collars on social media platforms haven’t emerged exclusively from red states or conservative mouthpieces. Last month, California Gov. Gavin Newsom signed a law requiring those firms to make public a host of information about their rules governing user behavior and activities.
The platforms are required to report twice a year how they define and deal with hate speech, content that might radicalize users, misinformation, disinformation and other categories of content, as well as how often they took action on such content. The law sets stiff monetary penalties for violations.
It should be obvious that laws purporting to open online platforms to “neutral” judgments about content do nothing of the kind: They’re almost invariably designed to favor one color of opinion over others.
There’s no evidence that the online platforms have systematically suppressed conservative opinion — that’s just a talking point of conservatives such as Sen. Ted Cruz (R-Texas) and former President Trump. And progressives haven’t been militating against conservative speech but against hate speech and harmful misinformation, which the major platforms themselves claim to officially prohibit.
Before exploring the implications of the Supreme Court’s review further, here’s a primer on what Section 230 says.
The 26 words cited by Kosseff state, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
That places the social media platforms, as well as other platforms that host outsiders’ content or images, such as newspaper reader comment threads or consumer reviews, in the same position as owners of bookstores or magazine stands: They can’t be held liable for the content of the books or magazines they sell. Liability rests only with the actual content producers.
There’s a bit more to Section 230. It specifically allows, even encourages, the online platforms to moderate content on their sites by making good-faith judgments about whether content should be taken down or refused.
In other words, a site that blocks some content can’t be held responsible for whatever it doesn’t block. Nor does Section 230 require sites to be “neutral,” however that term could ever be satisfactorily defined. (Almost any definition would presumably run afoul of the 1st Amendment.)
The power of Section 230 wasn’t evident when it was passed in 1996. Google, Facebook, Twitter and YouTube didn’t even exist at the time; the impetus for the law came from some legal rulings affecting CompuServe and Prodigy, interactive services that no longer exist as independent operations today.
The fortunes of today’s social media giants have been built upon the freewheeling content provided by their users at no charge. The nature of public discussion has also been transformed through the networks of users on the platforms.
From a commercial standpoint, the companies have been reluctant to get in the way of the torrent, unless it’s so noisome that it crosses an inescapable line. Where that line is, and who should draw it, is the issue at the heart of most of the controversy over the supposed power of the big tech companies to affect public discourse.
That brings us back to the California case before the Supreme Court. It was brought against Google, the owner of YouTube, by the family of Nohemi Gonzalez, an American who was killed in an attack by the militant group Islamic State, also known by the acronym ISIS, in Paris on Nov. 13, 2015.
The plaintiffs blame YouTube for amplifying the message of ISIS videos posted on the service by steering users who viewed them, typically through algorithms, to other videos either posted by ISIS or addressing the same themes of violent terrorism. The plaintiffs assert that YouTube has been “useful in facilitating social networking among jihadists” and that it knew the content in question was posted on its site.
The legal system’s perplexity about how to regulate online content was evident from the outcome of the Gonzalez case at the 9th Circuit. The three-judge panel fractured, issuing three separate opinions, though the effective outcome was to reject the family’s claim about algorithmic recommendations. The lead opinion, by Judge Morgan Christen, found that Section 230 protected YouTube.
But one judge, Marsha Berzon, concurred in that opinion only because she concluded that precedent prevented the appeals court from narrowing the legal immunity granted by Section 230, but said she would “join in the growing chorus of voices calling for a more limited reading of section 230.”
The third judge, Ronald M. Gould, held in a dissenting opinion that Section 230 was “not intended to immunize” online platforms from liability for “serious harms knowingly caused by their conduct.”
In legal terms, Section 230 itself isn’t the subject before the court. The question the justices are asked to resolve is whether YouTube and other platforms move beyond the role of mere publishers or distributors of someone else’s content when they make “targeted recommendations” steering users to related content, including when they do so via automated algorithms.
The power of such recommendations to magnify the impact of online content has been acknowledged before.
The Gonzalez plaintiffs and others advocating narrowing the reach of Section 230 cite a 2019 dissent by Judge Robert A. Katzmann of the 2nd Circuit Court of Appeals in New York. Katzmann observed that online platforms “designed their algorithms to drive users toward content and people the users agreed with — and that they have done it too well, nudging susceptible souls ever further down dark paths.”
But that argument risks creating a legal minefield. Publishers and distributors constantly take steps to steer audience members toward content they might find provocative, piquant or interesting: Newspapers signal the importance or relevance of some articles by placing them on the front page or in sections with themes such as local or national news; news programs do the same through the order in which they present stories on the air.
Legal experts are perplexed about why the court chose to take this case. Appellate courts haven’t differed on the specific issue of algorithmic recommendations, so there is no so-called circuit split of the sort that would warrant the Supreme Court’s stepping in to resolve a conflict.
Nor has the process of recommending content to users been in question. “The entire purpose of Section 230 was to give the platforms discretion as to how they present user content,” Kosseff told me.
More worrisome, however, may be this Supreme Court’s tendency to legislate on its own. “The court has shown consistently that it doesn’t care about other sources of power,” Goldman told me. There appear to be few grounds for the justices to drastically narrow Section 230, but given this court’s overreach on principles as well-established as abortion rights, Goldman says, “all bets are off.”
There is little to suggest that tampering with Section 230 would address all the issues the public has with the state of online speech today. The real danger is that almost nothing the court could do would make the issues swirling around online content moderation better, only worse.
A world in which platforms lose their ability to exercise their own judgment about content, or in which that ability is constrained by a court decision, will be indistinguishable from an open sewer, which wouldn’t be healthy for anyone. A Supreme Court decision in that direction will be hard for Congress to undo.
What keeps advocates of Section 230 up at night is the possibility that the same Supreme Court justices who overturned the right to abortion and narrowed the application of the Voting Rights Act might see the potential for partisan advantage in removing the immunity enjoyed by online services for more than a quarter-century.
But the issues raised by Section 230 are so novel, and furor over the behavior of social media so widespread, that it’s hard to gauge how the nine justices will vote. “This isn’t like guns or abortion, where you can predict the partisan divide,” Kosseff says.
That just magnifies the nervousness permeating the legal community. “We’ve now put power into the hands of nine justices who have embraced the culture wars,” Goldman says, “and they’re going to decide how we talk to each other.”