
JLD essay winner: Is social media the greatest arbiter of free speech?

Read the winning essay from the 2021 Junior Lawyers Division essay competition, by Matthew Unsworth.



Social media platforms are attracting new users at a remarkable rate. Over four-fifths of adult internet users in the UK now have a social media presence, up from two-thirds a decade ago, while the proportion of EU and US adults with at least one account increased by half over roughly the same period. What is more, we are no longer using the likes of Facebook and Twitter for purely social purposes. They have become an important source of news, especially in countries where the local press can be unreliable. They are also increasingly a key site for political self-expression and an invaluable tool for those running for elected office.

It is no exaggeration to describe social media as an “attention utility”: the vital infrastructure we rely on to hear and be heard. Some have gone as far as to argue that our access to this infrastructure is so important that it should be guaranteed as a human right.

Against this backdrop, platforms’ role as content moderators has come under scrutiny. Their ability to promote and demote our posts, delete our comments and images and, in severe cases, block our accounts entirely gives them huge control over our free speech. Yet just how legitimately are these decision-making powers exercised?

It is suggested here that a great arbiter in the legal sense is characterised by four main qualities. The first is independence. An arbiter should not be swayed by the potential political, economic or social consequences of his decision. Judgment should be given, in the words of the English and Welsh judicial oath, “without fear or favour, affection or ill will”. The second is clarity; we need to know what rules the arbiter will apply and be able to predict what sanctions might be imposed. The third is that an arbiter should be properly resourced. This is often thought of purely in financial terms, but it extends to time and expertise too. Finally, an arbiter’s decision should be susceptible to appeal.

Since Roman times, the possibility of appeal has been recognised as a central ‘safeguard of liberty’, allowing for erroneous or unfair verdicts to be challenged and overturned. It is against these four qualities that social media falls to be assessed.

Independence
An arbiter can only be independent if he is entirely disinterested in the decisions that he makes. In other words, he must not favour one possible outcome over another on the basis that he stands to benefit from it. In this regard, it is problematic that social media platforms are for-profit entities. For a start, they exhibit a “bias towards virality”; they have an incentive not to censor “genuinely harmful speech that brings [in] users” and boosts their advertising revenue. YouTube, for example, was criticised back in 2018 for actively promoting conspiracy theory videos in an effort to keep people viewing content for longer. A follow-up study last year found that this was still a significant issue. 

Separately, there is an irresistible temptation for platforms to interfere where users’ free speech “conflicts with corporate profits”. Some of this interference is trivial; take the rumours that Facebook was removing unflattering GIF images of founder Mark Zuckerberg after acquiring the GIPHY library or the allegation that Twitter blocked the account of a British journalist purely because he criticised one of the platform’s business projects.

Other instances are more troubling. It is at the very least suspicious that a host of platforms only moved to ban former US President Donald Trump “at the dawn of a Democrat-controlled Washington”, having ignored his incendiary tirades for years. Equally, TikTok has been accused of suppressing content related to the Tiananmen Square massacre to curry favour with Beijing. 

Arguments about platforms’ commercial interests cut both ways. Certainly, they could not permit rampant hate speech, no matter how popular it was in certain circles, without doing long-term damage to their user numbers. Indeed, overly lenient policies on hate speech are thought to have contributed to Twitter’s growth stalling around 2015. Similarly, any extra views generated by allowing Covid-19 disinformation to spread unchecked would be outweighed by the loss of users’ trust.

However, in most cases, social media platforms can promote some types of speech and censor others with impunity. If we do not like the biases inherent in one platform’s decision-making, it is not as if we can easily switch to another; the platforms are complementary rather than substitutable, so absent a compelling reason to leave, we will stay put. The conclusion, then, must be that there is a real risk that social media platforms do not behave independently as arbiters and, generally speaking, it does not benefit them to do so.

Clarity
In terms of clarity, it is essential that an arbiter sets out in advance the principles on which he will base his decisions. As a minimum, this means social media platforms drawing up detailed and precise acceptable use policies. If there is the possibility of users’ free speech being restricted by way of their posts being taken down or their accounts suspended, they deserve to be able to “predict with reasonable certainty” when and why this might happen.

Credit where it is due, the major social media platforms all publish lengthy ‘community guidelines’ on the content that they do not permit. These guidelines are typically broken down into distinct categories, such as illegal activity, discrimination and nudity, each of which is accompanied by a brief rationale as to why exactly it is prohibited. 

Furthermore, platforms have also started to release data on how their guidelines are enforced, including, for example, how many individual pieces of content have been removed each year. Taking such steps was proactive on the part of platforms; they have, interestingly, pre-empted two of the core requirements of the proposed EU Digital Services Act. 

Having said this, a lot hinges on the way in which content rules, however clearly stated, are interpreted in practice. There is a worry that platforms rely on ‘public outrage’ as an interpretative aid, severely undermining the predictability of the enforcement system; ironically, we have to wait for the response to what we have said before we know for sure whether we were allowed to say it. There are other consequences as well.

Interpreted in line with majority opinion, platforms’ guidelines end up offering inadequate protection to those who hold nonconformist views. In addition, in countries where platforms do not face significant public relations pressure, interpretation of the rules becomes far too generous; Facebook’s lax regulation of hate speech against Rohingya Muslims in Myanmar is a case in point. Overall, to ensure complete clarity, platforms need to be more transparent about how they apply their rules to particular fact patterns.

Proper resources

An arbiter will struggle to deliver just outcomes without the proper resources at his disposal. Social media platforms, then, should ensure that their content moderation teams are well funded, capable and not unduly rushed so that decisions to restrict users’ free speech are made as accurately and carefully as possible. Of these three elements, funding is the most difficult to discuss. 

Though we know that the major platforms are billion-dollar corporations, they reveal very little about their content moderation budgets. In a rare statement in 2019, Facebook said that “safety and security” were expected to cost it north of $3.7 billion that year, but without any context, it is impossible to say with any certainty whether this figure is sufficient or, as some argue, too low. What is needed is regular disclosure of the money spent on content moderation, perhaps in platforms’ financial reports, accompanied by a description of how this money is allocated.

As for whether social media content moderators are capable, one of the problems for platforms is that there is such a vast amount of speech to regulate that they have to turn to algorithms for help. The results vary. Algorithms have been responsible for erroneously removing a breast cancer awareness image as a nudity policy violation; suspending the page of Ville de Bitche, a French town, on grounds of profanity; and imposing a 12-hour ban on any account which posted the word “Memphis” for reasons wholly unknown.

These mishaps are less than ideal; even a small error rate “can result in a significant deprivation of speech”. However, algorithms are highly effective at catching some of the most egregious content, such as that related to child abuse, and they are constantly evolving to minimise the risk of over-moderation. There is no perfect moderation system and platforms could certainly do a lot worse.

What about time; are moderators rushed into reaching decisions on content? Increasingly, the answer is “yes”, although this has more to do with regrettable legislation than social media platforms themselves. In 2017, Germany passed its Network Enforcement Act (‘NetzDG’), requiring platforms to block unlawful content within seven days of receiving a complaint and manifestly unlawful content within just 24 hours of receiving a complaint on penalty of a fine of up to €50 million. This leaves a very small window for decision-making, with the consequence that most platforms will choose to remove “even slightly questionable posts” to avoid sanctions. Free speech is “collateral damage”.

It is disappointing to see that the UK Government’s Online Safety Bill goes one step further than NetzDG, obliging platforms to swiftly remove content which is not even illegal but merely “harmful”.

Social media platforms’ decisions on content moderation are, therefore, supported by a mixed bag of resources; cash and capability are not in short supply but, in some countries, the time pressure is enormous.

Possibility of appeal

Inevitably, an arbiter will not always make the right decision. It is crucially important, then, that those affected by his judgment have a real chance to challenge it. Though most social media platforms have systems in place for requesting reviews of post takedowns and account suspensions, their inner workings are opaque. Often appeals can “go into a black hole”, leaving users with no recourse.

Facebook’s response to these shortcomings is a new 20-member ‘Oversight Board’, which has the power to reverse decisions on content and suggest ways to improve future decision-making in similar cases. Its impact should not be overstated. The Oversight Board has been branded “painfully slow”, having only heard 10 cases this year. Indeed, this is not altogether surprising given that its remit only extends to a “selected number” of “highly emblematic” decisions. The charge has also been levelled that the Oversight Board is “toothless”; it can force Facebook to restore content but not to implement its general suggestions.

Moreover, panels of external experts do not have a strong track record in relation to social media platforms; Twitter established a Trust and Safety Council in 2016 but this appears to have been neglected. In summary, though the idea of a detached supervisory body is a novel one, it makes little difference to individual users who are trying to contest a restriction of their free speech. 

Conclusion
Social media, today, wields huge power as an arbiter of free speech by virtue of its importance to how we communicate. However, this is not a role to which it is well suited. A combination of platforms’ commercial biases, the influence of public outrage on how they interpret their rules, increasing statutory time pressure to remove inappropriate content and severely limited appeals processes means that certain users’ speech will be restricted too little; that of others, too much.

Some see the solution to these problems in forcing social media platforms to abandon their advertising-based business models in favour of charging subscription fees or, more radically, breaking them up altogether. These people might be in for a long wait.

In the meantime, a few small steps could have a substantial impact. Platforms should be more open about how their rules are applied to specific cases, regularly disclose what they spend on moderating content, continue to develop their moderation algorithms and engage constructively with their external advisory panels. This will not make social media the greatest arbiter but it will certainly make it a better one.
