Early impressions of the UK Online Safety Bill

Forcing all platforms to track their users and automatically remove vaguely defined “harmful content” will not create a safer online environment.


The Grenadier Guards line up outside Parliament ahead of the Queen’s arrival at the State Opening in October 2019. Copyright House of Lords 2019 / Photography by Roger Harris (CC BY-NC 2.0).

Written by the Wikimedia Foundation’s Tina Butoiu, Legal Counsel, and Franziska Putz, Advocacy Community Manager.

The UK Government formally introduced its long-awaited Online Safety Bill on Thursday, March 17, stating that it would bring internet users “one step closer to a safer online environment.” However, the current draft of the Bill only protects a select group of individuals, while likely exposing others to restrictions of their human rights, such as the right to privacy and freedom of expression.

The Bill introduces accountability mechanisms that only suit large for-profit platforms. Platforms that are treated as an afterthought in the Bill are expected to conform to a new regulatory framework that was not designed with their operations in mind. For community-governed platforms like the Wikimedia projects, the required changes to platform governance include increased collection of data on users, locking children out of certain information, and centralizing power away from the global community of volunteers and toward the Wikimedia Foundation, reversing a governance structure that has enabled Wikipedia to become “one of the last best places on the Internet.”

The specific needs of platforms like ours are often ignored by policymakers when they propose new online regulations. In the following paragraphs we will highlight the most important shortcomings of the Government’s proposal so that UK lawmakers can address them to truly ensure people’s safety and rights on the internet.

I. Protecting children from harmful content and limiting people’s exposure to illegal content, while protecting freedom of speech?

The current draft of the Bill threatens users’ right to privacy under international human rights law. To fulfill the duties to protect users set out in Clause 11(3), platforms will likely be required to collect personal data, such as age, from all visitors.

This requirement is disproportionate to the Government’s aim of protecting children from harmful content. Organizations like the Wikimedia Foundation, which collect as little user information as possible in order to encourage diverse and safe participation, would have to abandon their privacy-respecting practices and start collecting and storing sensitive user data from everyone.

The mandate to collect user information to protect users is both counterintuitive and potentially ineffective, as even the best age assurance tools have been shown to be inaccurate. States also have a duty to protect children’s right to form and express their opinions without interference from automated processes of information filtering and profiling. As such, the use of these tools would not protect freedom of expression and the right to receive information as the Government claims, nor would the Government be shielding children from harmful content or abusive interactions online.

II. Problematic definitions and disregard for community-led content moderation.

Under Clause 187 of the Bill, the definition of “harm” from which children ought to be protected is broad enough to shelter children from criticism of dictators as well as reliable health information.

This would run counter to some of the Rights of the Child in the Digital Age:

“States are required to ensure that all children are informed about, and can easily find, diverse and good quality information online, including content independent of commercial or political interests.”

Under Clauses 9, 11, and 13, service providers will likely be forced to develop expensive automated content moderation tools for age verification as well as for detecting illegal and harmful content, which would threaten children’s right to non-discrimination enshrined in the Digital Age charter. Under the new law, the Wikimedia Foundation would have to invest in compliance measures rather than in processes that empower individuals to effectively address the needs of their communities.

Many smaller platforms will not be able to bear the high cost of compliance while others will have to abandon effective content moderation processes, including community-driven processes. The pressure to deploy automated content moderation tools and institute constant surveillance is further increased by the threat of financial penalties and the potential imprisonment of senior managers under Clauses 87 and 96.

III. A powerful regulator that is not really independent.

The Office of Communications (Ofcom) can limit the privacy rights of users and impose fines on organizations of up to 10% of their qualifying worldwide revenue or £18 million, whichever is greater, if they fail to comply with their duties under the Bill. Yet Ofcom will only be accountable to the Secretary of State for Digital, Culture, Media and Sport (“the Secretary”). Additionally, potential jail time for executives may allow the Government to claim it is getting tough on tech, but the threat of financial penalties and the imprisonment of senior managers creates an incentive structure for service providers to over-remove anything that could be categorized as “harmful” by the Government.

The draft Bill empowers the Secretary to set Ofcom’s strategic priorities and to issue guidance about Ofcom’s exercise of their functions. The Secretary can even modify Ofcom’s code of practice “if the Secretary believes that modifications are required…for reasons of public policy.”

In light of the expansive scope of the Bill, Ofcom’s limited resources could very well be directed to protect politicians rather than the vulnerable users the Bill purports to protect.

For digital rights groups in the UK that protect privacy and free speech, these new powers mirror the online-censorship tactics of authoritarian governments.

The Bill, unfortunately, does not herald a safer digital age. As we wrote in a previous blog post, this legislative effort is part of a global trend that attempts to hold internet platforms accountable for harmful content that is spread via their services. Due to the approach they take, these initiatives are poised to cause more harm online. The UK Bill is misguided both in terms of the users it claims to protect and the platforms it supposedly holds accountable.

Our mission to create a digital space in which everyone can freely share in the sum of all knowledge — without being tracked, targeted by ads, or blocked by paywalls — is threatened by this Bill. We look forward to working with the UK Parliament to ensure that Wikipedia is protected and that people can safely participate in sharing knowledge.
