Bills that claim to advance safety online will cause more online harm — it’s time to pay attention.

Published in Wikimedia Policy · Feb 4, 2022

An introduction to the series on online safety bills.

Puzzle-globe symbol for Wikipedia 20, illustrated by Jasmina El Bouamraoui and Karabo Poppy Moletsane, CC0, via Wikimedia Commons

Written by the Wikimedia Foundation’s Leighanna Mixter, Lead Counsel; Tina Butoiu, Legal Counsel; and Franziska Putz, Advocacy Community Manager.

Policymakers around the world are putting fundamental human rights at risk by pushing ill-conceived proposals to address online harms. Among these recent proposals, the Proposed Approach to Address Harmful Content Online in Canada, the Draft Online Safety Bill (OSB) in the United Kingdom (UK), and the Basic Online Safety Expectations (BOSE) that are part of the Australian Online Safety Act of 2021 are particularly concerning.

These proposals share the same objective: to ensure people’s safety on the Internet by holding large online platforms accountable for illegal content, as well as legal but harmful content, that spreads on their sites. Efforts to curtail online harms have gained renewed momentum in light of revelations from whistleblowers who exposed how for-profit platforms ignored real-life harms experienced by young adults and vulnerable communities. The most well-known of these harms include the mental health strains that young people face as a result of unrealistic body image expectations, as well as the rampant medical misinformation that has spread throughout the COVID-19 pandemic. More recently, a hired moderator exposed the psychological trauma that individual moderators suffer under the content moderation standards that large online platforms endorse.

Facilitating safe online environments where everyone is encouraged to participate is a crucial step towards creating a digital ecosystem in which diverse information and experiences can be freely shared. Yet this wave of online safety proposals may do the opposite. They address the symptoms rather than the causes of online harms and threaten human rights as well as the existence of smaller platforms, particularly those with community-led models of content moderation like Wikipedia.

User-driven online sites cannot thrive if all websites are treated as if they were large social media sites with the business models and staff of corporate giants. Community-governed spaces like Wikipedia, Reddit subreddits, and message boards with volunteer moderators contribute to the rich diversity of knowledge available to the public online. Current legislative proposals in Canada, Australia, and the UK threaten to erode precisely the diversity that these moderation models empower. That’s why the Wikimedia Foundation (WMF), the non-profit that hosts free knowledge projects like Wikipedia, has submitted comments on the UK Online Safety Bill and the BOSE. We now also want to share those comments here.

Today, we’re launching a series of blog posts that highlight the threat these proposals pose to human rights and to the diversity of our online spaces. Our policy experts will dive into each proposal and discuss its impacts on freedom of expression, privacy, and access to free knowledge.

I. Content moderation requirements designed for corporate platforms threaten community-governed spaces.

A one-size-fits-all approach to regulation is both ineffective at curbing online harms and a threat to platforms that serve the public interest. The current wave of regulation is designed with large, for-profit platforms in mind. Yet the small handful of sites that are the targets of these regulations are just that: a small handful. Their designs are not representative of the myriad of diverse websites that make up information exchange and communication online.

The requirements outlined in these bills may force platforms to over-remove content, rely on automated tools rather than humans to make those decisions, and incur expenses that are difficult for smaller sites to bear. Short removal windows combined with strict penalties, such as fines or potential jail time for senior leadership, create exactly this incentive structure. The result could dismantle alternative content moderation systems and degrade the quality and variety of content available on the Internet.

First, the use of automated filtering tools often does more harm than good. These tools perpetuate the societal biases encoded in the datasets used to train them, which overwhelmingly leads to a negative impact on the speech of marginalized groups. Automated content detection and removal also often leads to the over-removal of entirely legal, legitimate content. When automated tools are deployed, it is therefore essential to allow time to review each piece of content in context and assess whether it qualifies as “harmful,” because such judgments are subjective and automated tools have been shown to miss the nuances and contextual variations of human speech. The removal deadlines these proposals require do not allow for that essential review.

Second, the proposed content moderation processes would compromise the effectiveness of community-based governance systems. On Wikipedia, for example, content standards and review mechanisms are developed by our community of volunteer editors. Especially sensitive conduct or content can be escalated via the Volunteer Response Team, just one of a series of internal mechanisms, and all content policies are developed by the community through transparent, decentralized decisions. These community content moderation processes have proven so effective that most edits that do not meet Wikipedia’s content standards are addressed within six minutes. Nor is Wikipedia’s community governance model unique: online message boards have relied on volunteer moderators for decades.

To preserve an Internet in which multiple governance models can exist, regulatory efforts need to reflect the diversity of content moderation models that are currently operating.

II. Business models that amplify harmful content online should be the target of regulation.

Online harm proposals that focus only on content moderation, and especially on content removal, will not lead to a safer Internet. The algorithms that amplify content, such as targeted advertising and recommender systems built on troves of personal data and viewing habits, are also part of the equation. Frances Haugen’s testimony clearly exposed how the algorithms that drive ad-placement profits are at the root of the problem these online harm regulations intend to address. Regulation that recognizes the economic factors driving platform design is the only way toward a safer Internet.

Community-led systems allow non-profits like the Wikimedia Foundation to run a safe, global website with a smaller staff. The decentralized nature of this work makes it possible to operate Wikipedia, an independent source of free knowledge for the world, with many fewer instances of harmful content while using far fewer resources than profit-oriented platforms.

These shared responsibilities are a feature and a strength, not a bug.

Wikipedia is designed to maximize the quality of article content, which relies on high levels of participation by editors rather than on content engagement that meets revenue targets. As a result, content on Wikipedia is not amplified via recommender systems like those used by social media platforms. In light of this key contrast, non-profit platforms that serve the public interest should not be expected to implement the same kind of safety measures as engagement-based platforms. We ask regulators to consider the nature of a variety of platforms when designing new rules.

III. Privacy and access to knowledge are human rights that need to be protected by regulations.

The neglect of individual privacy is a disturbing pattern across these regulations. The following requirements pose potential violations of the fundamental human right to privacy:

  • Requests to share user information are permissible under broad and vague conditions. In the Australian Online Safety Act, for example, end-user identity information can be shared “on reasonable grounds that the information is, or the contact details are, relevant to the operation of this Act.”
  • Mandates to store user information, or to collect information about users’ ages, are currently included in both the UK and Australian legislation in order to protect children. However, age is a protected characteristic in several countries, and collecting this information could violate international human rights standards by undermining protections like encryption.
  • Automated filtering tools threaten privacy. Former UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, David Kaye, deemed proactive filtering tools as “inconsistent with the right to privacy and likely to amount to pre-publication censorship.” As mentioned above, these types of tools are incentivized by the short takedown windows and harsh penalties often seen in these proposals.

User privacy is essential to enable diverse voices and information to be represented online. Pseudonymity can create a shield of safety that is particularly important for people who live in regions where human rights, including freedom of expression, are not protected, or for those who contribute or moderate content on sensitive topics. Moreover, privacy is a cornerstone of international human rights law.

Laws that fail to consider globally accepted standards for human rights or data protection are antithetical to any initiative to promote online safety.

IV. Consistent enforcement measures should be based on existing human rights standards, not unaccountable regulatory structures.

All three regulations grant broad interpretive powers to various regulatory authorities. The Canadian proposal, for example, creates a new role, a Digital Safety Commissioner, with the power to hold hearings (including some in secret) on any issue they believe is in the public interest and to assess whether the AI tools a platform uses are sufficient, among other things. Similarly, the UK proposal grants broad discretion to the Secretary of State for Digital, Culture, Media and Sport (“the Secretary”) and to OFCOM, and the Australian Online Safety Act grants broad powers to the eSafety Commissioner.

The three proposals on which our series focuses are just the latest manifestations of a global trend toward one-size-fits-all regulation of online spaces, making it all the more urgent to address this trend now. Notable past examples include the Network Enforcement Act in Germany and the United States’ controversial SESTA-FOSTA law. These examples illustrate that such regulation does more harm than good to the digital spaces it seeks to regulate.

The legacy of these initiatives looms large over new online safety bills. Their enforcement mechanisms have the potential to create additional regulatory structures that are unaccountable to the public and unclear about how compliance with new requirements will be secured. For a digital society, transparency in decision-making about what should or should not be available on the Internet is paramount. Legislators should look to the expertise of international organizations that are already working to address online harms under their specific mandates. Greater standardization is preferable to a patchwork of similar but different national enforcement rules that seek to govern platforms with global reach of their own. Only a handful of platforms will actually be able to navigate the labyrinth of international rules and content moderation requirements they will be asked to comply with.

The diversity of our online ecosystems needs to be considered and protected by lawmakers. It is too precious to be considered an afterthought.

Mission-driven platforms that provide a public good, of which Wikimedia is but one, should not bear an overwhelming burden as regulators join the critique of big, for-profit platforms. A lack of careful consideration of the Web’s diverse operating models, knee-jerk reactions to platform economics, and broad strokes that disregard individual privacy make a sorry recipe for effective regulation.

Wikimedia Policy: Stories by the Wikimedia Foundation’s Global Advocacy team.