Common Assumptions and Misunderstandings about Reforming Section 230 and Online Content Moderation

Wikimedia Policy · Nov 28, 2023 · 7 min read


This is the second installment in our three-part series about Section 230. (Read the first blog post and the third post.)

A “wrong way” sign on Cactus Forest Drive in Saguaro National Park, US. Image by Minh Nguyen, CC BY-SA 4.0, via Wikimedia Commons.

Written by Stan Adams, the Wikimedia Foundation’s Lead Public Policy Specialist for North America

Section 230 of the Communications Decency Act (CDA) was once an obscure law from the late 1990s. In more recent years, it has become famous — or infamous, depending on who you ask — due to its association with “Big Tech” companies like Google, Meta, and X (formerly Twitter).

Politicians across the political spectrum have blamed the statute for shielding social media companies from being held accountable, and have characterized Section 230 as a “gift” that no other industry enjoys. Some members of the US Congress have proposed reforms to the statute, claiming that removing the law’s liability shield would encourage companies to moderate user speech and content according to the legislators’ wishes. Other members imply that, but for Section 230, tech companies could be held accountable in court, forced to answer and pay for their purported misdeeds.

In this blog post, we will examine some of the assumptions and misunderstandings behind the legislative and regulatory rhetoric about Section 230. We encourage members of Congress, their staff, and their constituents to consider whether their own ideas on Section 230 reform are based on any of these assumptions or misunderstandings. If so, then any proposed reforms based on them may not deliver the intended results.

Assumption: Online providers would do a better job of moderating content if they could not “hide behind” Section 230

Critics of Section 230 often assert that the statute creates no incentives for online providers to remove or block content. While it is true that Section 230 provides legal protection even if a provider does not take steps to moderate content, the statute was never intended to be a legal “stick” that imposes conditions on providers’ behavior. Instead, it offers a legal “carrot” to providers who do take steps to moderate content, shielding them from liability for their moderation efforts. That is, Section 230 protects providers who try to remove defamatory statements or other illegal content, even when they do so imperfectly. The statute is structured this way to avoid punishing “good Samaritan” curation efforts that, without Section 230, could become a source of liability.

As for incentives, social and financial pressures already motivate providers to curate the spaces they offer online. The last few months of the saga at X (formerly Twitter) have demonstrated these incentives: both users and commercial partners move away from websites that deprioritize content moderation. Other websites have additional reasons to curate and moderate content. For example, Wikipedia’s volunteer editors work to preserve the project’s encyclopedic nature, which means that many forms of content are simply not appropriate. Wikimedia projects (hosted by a nonprofit) and social media platforms (supported primarily by targeted advertising) may have very different business models and content formats, but they have this in common: both need to manage user speech and content to align with their intended purposes and the expectations of their users. Section 230 encourages this content management by protecting the providers and users who engage in content moderation. The statute is a shield, but that shield creates a safe space for moderators who could otherwise be held liable for trying to keep illegal or unwanted content off their platforms.

Misunderstanding: Harmful content is illegal and online providers can determine the legality of user speech and content

One of the more prominent complaints about providers’ moderation practices centers on content and speech that can be harmful, such as hate speech and disinformation, but that is not illegal as such: often called “lawful but awful.” Proponents of Section 230 reform often point to events in which hateful or misleading posts circulated across one or more social media platforms, blaming the platforms’ providers for not doing enough to stop their spread. However, in the US, most speech, including many forms of false or hateful speech, is protected by the First Amendment to the Constitution and is therefore lawful, even if it may be harmful. Furthermore, governmental actions that require or coerce providers to remove speech and content protected by the First Amendment run afoul of the Constitution.

Online providers are not well suited to determine whether users’ online speech and content falls into one of the few, narrow categories of speech not protected by the First Amendment, and hence whether it is legal. Such determinations depend on multiple, complex factors, including context and the relationship between a speaker and their audience. Since even judges and juries struggle to weigh these factors, it is not reasonable to expect moderators to determine the legality of users’ posts consistently or accurately.

Finally, increasing the liability risk that providers face for hosting users’ speech and content, especially when reforms target certain types of content, impacts marginalized voices first. As the potential cost of hosting user speech rises, speech with lower commercial value (for example, voices of dissent and those who criticize governments or political leaders) is more likely to be removed from a provider’s service than mainstream content, because the potential costs of leaving it online outweigh the value it brings to the platform.

Misunderstanding: Without Section 230, lawsuits against online providers will succeed

Whether this is a misunderstanding, an assumption, or something else entirely, there seems to be an unfounded expectation that removing Section 230 protections would result in providers having to pay damages to injured plaintiffs. This is certainly implied by talk of holding providers “accountable,” but reform proponents rarely follow the thought beyond the “take them to court” stage. As we have explained, recent US Supreme Court opinions in Twitter, Inc. v. Taamneh and Gonzalez v. Google LLC illustrate that bringing successful claims against providers based on users’ speech and content is not easy.

In Gonzalez, the Court found that it did not even need to determine whether defendants could use Section 230 as a defense, since plaintiffs had not alleged enough facts to convince the Justices that the claim should move forward. Specifically, the Court found it very unlikely that plaintiffs would be able to show that the defendant, Google’s YouTube, had enough of a relationship with the allegedly harmful content — i.e., recruitment videos produced by the terrorist organization ISIS — to satisfy the legal elements for “aiding and abetting.” Although other cases with different facts could be decided differently, in many cases plaintiffs will not be able to show that providers had sufficient knowledge or intent to be held liable for claims based on content posted by users.

For claims based on a provider’s treatment of third-party content, such as lawsuits over the suspension of a user’s account, defendant providers are very likely to win because their actions are protected by the First Amendment. Thus, for a wide variety of potential claims against providers, plaintiffs are unlikely to prevail. While rolling back Section 230 protections might let plaintiffs get further along in litigation, it would by no means guarantee that they could collect damages. In effect, many lawsuits brought in the absence of Section 230 protections would amount to an administrative fee that both plaintiffs and defendants pay into the legal system. (One important feature of the US legal system to remember: unlike most of the rest of the world, which uses a “loser pays” rule, in most US litigation each party pays its own legal fees.)

Misunderstanding: The First Amendment will protect speech in the absence of Section 230

Some may wonder why Section 230 is necessary given the broad protections of the First Amendment to the US Constitution, which covers private speech, including the speech and editorial decisions of online providers. It is true that online providers could use the First Amendment as a shield to defend their content moderation decisions, but this approach overlooks the costs of litigation in US courts. Section 230 allows providers to have cases dismissed at an early stage of litigation, costing them “only” thousands of dollars, which makes for a relatively inexpensive lawsuit. In contrast, claims that proceed further become increasingly expensive, easily climbing into the six-figure range. A small or medium-sized online provider could go bankrupt winning cases: paying lawyers to prove, again and again, that its editorial decisions are protected by the First Amendment.

Section 230 reform proponents should consider what kinds of incentives this creates for both plaintiffs and defendants. Should providers fight to protect their constitutional rights? Should they settle, ceding their rights and their money to plaintiffs? Should they shape their content moderation practices to avoid lawsuits, even if that means decreasing the value of their websites or services? What kinds of plaintiffs would be motivated to bring lawsuits they are likely to lose? How many plaintiffs would use the high cost of litigation to push for settlements, knowing their case would likely fail at some point?

Conclusion

With all of the legislative and regulatory rhetoric around Section 230, it can be hard to sort fact from fiction. And with enough repetition, it becomes easy to accept or overlook the misunderstandings and assumptions embedded in talking points. We urge lawmakers, their staff, and constituents to remember the wide variety of online providers who depend on Section 230, and to think critically about the different impacts and incentives that reforms would create for each of them. Finally, we ask those proposing reforms to examine carefully the complex ecosystem that has grown up around Section 230, as well as any assumptions or misunderstandings they may hold about the impact of those reforms.

For our thoughts on ways to improve the internet for its users without reforms to Section 230, please be sure to check out our third and final blog post in this series.

Wikimedia Policy
Stories by the Wikimedia Foundation's Global Advocacy team.