The internet is the most powerful tool yet devised for social and political discourse. At its best it is enormously valuable in support of a healthy democracy. But the web, and particularly social media, is awash with abusive and offensive speech, much of it discriminatory. The result is that marginalised groups are deterred from engaging in the online public space, moderate voices are drowned out by the more colourful (and click-friendly) speech of extremists, and the free flow of opinions suffers. Just as worrying, the proliferation of extreme views on social media may itself be having a radicalising effect, and even causing violence.

The law must adapt accordingly, but the law must be principled and properly structured, and before hard law is deployed at all, less drastic measures must be considered. The picture is complicated by the novel commercial circumstances in which much of this speech is taking place. Several giant multinational, indeed truly transnational, companies (notably Google and its parent Alphabet, as well as Facebook and Twitter) dominate the landscape, and there is uncertainty as to which, if any, jurisdictions are prepared to take them on.

Here I will try to sketch out the legal issues in the context of international and domestic law and standards, and offer a glimpse into a possible way forward.

Competing rights

It may be convenient to begin by isolating the primary rights engaged in the discussion. A right to freedom of opinion and expression is protected in all major international and regional human rights treaties, although it is usually qualified (with any qualifications required to be construed narrowly and assessed against strict necessity). At the same time, international human rights law guarantees equality and non-discrimination for all people, including freedom from discrimination or any other form of prejudice in the enjoyment of fundamental rights, particularly where that is based on any protected characteristic.

Defining ‘hate speech’

‘Hate speech’ has played an important role in human rights jurisprudence (see for example the Grand Chamber of the European Court of Human Rights’ decision in Delfi AS v Estonia (64569/09) in which the term and concept were instrumental in the reasoning) but there is considerable uncertainty as to its meaning. There is no uniform definition of ‘hate speech’ in English, US, or international law, and so difficult has the phrase proved to pin down that a convention has arisen amongst commentators whereby it is usually placed in speech marks to indicate that it is being used as a phrase of convenience, rather than as a legal term of art.

A structured approach which seeks to describe and then combine the two foundational elements of ‘hate’ and ‘speech’ results in a very broad definition that captures a range of conduct much wider than that caught by any existing code of criminal law, even the most repressive. ‘Speech’ for these purposes is any form of expression, and is not limited to verbal expression. Hatred is of course a strong emotion, linked to opprobrium, enmity, even detestation; but it does not necessarily connote a threat of harm. ‘Hate speech’ is usually considered to indicate expression which targets a particular person or group of people having or perceived as having one or more protected characteristics, but there is considerable room for disagreement as to which characteristics should be protected. Particularly contentious is whether or not immigrant status should be a protected characteristic, or if it is, whether it should enjoy the same level of protection as other features such as race or sexuality. It is argued that immigration is a topic of such political importance that its discussion should not be fettered by the fear of committing ‘hate speech’.

The European Court of Human Rights uses the following definition adopted by the Council of Europe’s Committee of Ministers in 1997 (Recommendation No. R(97)20): ‘Hate speech’ includes all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including intolerance expressed by extreme nationalism and ethnocentrism, discrimination and hostility towards minorities, migrants and people of immigrant origin.

Facebook has very recently revamped its content moderation policies, and in theory at least, ‘hate speech’ is emphatically not allowed. It defines ‘hate speech’ as ‘a direct attack on people based on protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity and serious disease or disability’. It also provides some protection for immigration status, although there is a specific saving for criticism of immigration policies. An ‘attack’ is defined as violent or dehumanising speech, statements of inferiority, or calls for exclusion or segregation, and attacks are separated into three tiers of severity, all of which are said to be prohibited.

Not all ‘hate speech’ should be criminal

In his annual report to the General Assembly in September 2012, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression (‘the Special Rapporteur on FOE’) proposed dividing ‘hate speech’ into three categories: (i) ‘hate speech’ that must be prohibited in order to comply with a state’s duty to prevent certain kinds of harm; (ii) ‘hate speech’ that may be prohibited provided states nevertheless comply with free speech standards; and (iii) ‘hate speech’ which should be protected from restriction, but nevertheless merits a critical response from states short of criminalisation. These distinctions were adopted by the group of international human rights lawyers and experts who came together in a UN-sponsored workshop in Rabat, Morocco in October 2012 to grapple with the problem of how the prohibition of incitement to national, racial or religious hatred is to be reconciled with other human rights in the modern era, and they have informed much of the policy in this area since then.

UK government’s political response

In early July 2016 the House of Commons Home Affairs Committee announced an inquiry into hate crime and its violent consequences. The inquiry was a direct consequence of the murder of Jo Cox MP in June 2016, in the lead-up to the EU referendum. In its report dated 1 May 2017, the Committee noted that it had received a large volume of written evidence and had taken evidence on a wide range of issues, including Islamophobia, misogyny, far-right extremism, the role of social media in hate crime, and the particular issues faced by MPs. The Committee voiced concerns regarding the commercialisation of extremist content, noting that advertising revenue was effectively being generated – particularly by Google through its YouTube platform and by Facebook – from hosting extremist videos and other content.

"The law must adapt accordingly, but the law must be principled and properly structured, and before hard law is deployed at all, less drastic measures must be considered... Several giant multi- and truly trans-national companies dominate the landscape, and there is uncertainty as to which, if any, jurisdictions are prepared to take them on"

In relation to the domestic legislative framework, the Committee noted that most of the relevant legal provisions predate the internet, or at least the rise of social media. It recommended that the government review the entire legislative framework governing online hate speech. The Committee’s work was interrupted by the announcement on 18 April 2017 of a snap general election, but it has since resumed, and the Committee continues to hear evidence.

The domestic legislative landscape

Not only are domestic legislative provisions dealing with ‘hate speech’ largely outdated, as noted by the Committee, but they are an unwieldy mess. There is an array of legal provisions which may – depending upon the circumstances – be deployed in cases involving ‘hate speech’. Cases involving social media are most usually dealt with either under s 127 of the Communications Act 2003, whereby it is an offence to make improper use of a public electronic communications network, including by sending grossly offensive, indecent, obscene, or menacing messages, or as harassment contrary to s 2 of the Protection from Harassment Act 1997.

In December 2016, for example, Joshua Bonehill-Paine was convicted by a jury of the racially aggravated criminal harassment of Luciana Berger MP. He was sentenced to two years in prison for that offence, to be served consecutively to the sentence of 40 months he was already serving for offences contrary to s 127 of the 2003 Act. His conduct involved an extensive campaign of anti-Semitic online abuse waged against Ms Berger, and his attempt at trial to argue that he was legitimately exercising his right of free speech was roundly rejected in the judge’s sentencing remarks.

Whilst the primary domestic legal provisions are certainly labyrinthine, their navigation and application are assisted considerably by the CPS guidance on prosecuting crimes involving elements such as ‘hate crime’, guidance which gives this area of the law a coherence it would otherwise lack. The result is a relatively sophisticated and multi-layered approach to prosecuting ‘hate speech’ in practice, one which expressly acknowledges and seeks to balance the competing rights involved.

A truly radical approach

In her book ‘Hate: Why We Should Resist It with Free Speech, Not Censorship’ (OUP, 2018), Nadine Strossen, Professor of Constitutional Law at New York Law School and the first woman President of the American Civil Liberties Union (ACLU), argues powerfully that there is no proper basis upon which to conclude that the further legal regulation of ‘hate speech’ (if it can be defined) would have any positive effect, and good reason to believe that any such censorship would in fact be counterproductive. She and other US social justice advocates propose instead that ‘hate speech’ be countered not by censorship but by vigorous ‘counterspeech’ and activism.

Perhaps the most vivid example of this free speech absolutism in practice is the ACLU’s involvement in the landmark case concerning neo-Nazi marches in the town of Skokie, Illinois. Skokie had a large Jewish population, including many Holocaust survivors, but when local laws were enacted in 1977 for the specific purpose of preventing such demonstrations, the ACLU fought them, eventually securing victory in the US Supreme Court. The ACLU’s highly principled stand is all the more impressive when it is recalled that its executive director at the time was Aryeh Neier, who had personally fled Nazi Germany as a child with his immediate family, whilst his extended family were slaughtered.

Platform regulation

In his most recent report the Special Rapporteur on FOE proposes ‘a framework for the moderation of user-generated online content that puts human rights at the very centre’. He argues that content moderation as it is currently carried out by private companies lacks many of the qualities of natural justice, notably transparency and accountability, and calls upon states to implement ‘smart regulation, not heavy-handed viewpoint-based regulation… focussed upon ensuring company transparency and remediation to enable the public to make choices about how and whether to engage in online forums.’ It is an attractive approach, which seeks to encourage proper standards in the processes adopted in content moderation by private companies, whilst hopefully deterring states from legislating too heavily (or at all) as to the content itself. It remains to be seen how such an approach might be implemented at UK and/or EU level, but it has been argued that EU law has already taken great strides towards platform regulation in the GDPR, the relevant mechanisms of which might be appropriately adapted to deal with ‘hate speech’.

As I write this, reports are emerging that the UK government is preparing a white paper which proposes the establishment of a new domestic internet regulator similar to Ofcom, with a code to regulate content, and power to sanction companies for non-compliance. Content targeted would apparently include terrorist-related material, child abuse images, and ‘hate speech’.

Jonathan Price is a barrister at Doughty Street Chambers specialising in media, privacy and human rights law.