The inside pages included photographs which illustrated how each of these types of content was shared by Facebook users. By way of example, the articles stated that when the newspaper reported a group which required videos of children under 13 years of age to be posted before members could join, Facebook moderators declined to act and said that the group ‘doesn’t go against any of our specific community standards.’

This reflects some of the matters relating to the internet and social media which have been the subject of widespread criticism and concern for a number of years. As the New Zealand Law Commission stated in News Media Meets New Media: Rights, Responsibilities and Regulation in the Digital Age back in December 2011: ‘The idea of restraining, or delaying, free speech, in order to protect other human rights, is an anathema to many internet users. Free speech values and an abhorrence of censorship have been hardwired into the architecture of the internet and are deeply embedded in its culture.’ Yet this approach may be inimical not only to countervailing rights but also to the right to free speech, as the Commission went on to point out: ‘However, censorship is not the only enemy of free speech. Those who exercise their free speech to intimidate, bully, denigrate and harass others on the internet lessen the credibility of free speech arguments. Even though the web provides those who are harmed by abusive speech the opportunity to exercise their right of reply, not all have the courage or the standing to exercise it. In effect, those who exercise their free speech rights to cause harm may inhibit others from participating freely in this vital new public domain. The practical anonymity afforded abusers, and the lack of real-life consequences can create an environment where such abusive behaviour can thrive.’

It is partly for these reasons that Google, Facebook, Twitter and other intermediaries not infrequently find themselves facing a broadly united front of protest or condemnation comprising not only individuals whose rights are adversely affected by abuses of free speech, but also the traditional media, as well as persons drawn from across the political spectrum.

Out of step, out of control

The Art 10 right to freedom of expression is not unqualified. On the contrary, it is qualified by (among other things) the need to protect the rights of others. Further, Art 10(2) states that the exercise of the right to freedom of expression ‘carries with it duties and responsibilities’. One problem with the internet and social media is that those who exercise their rights of free speech abusively are often, in practical terms, beyond the reach of available legal remedies: for example, because they are anonymous or outside the jurisdiction, or because there are so many of them that it is impracticable to obtain effective relief against them all.

Another problem is that identification and development of the arsenal of procedures, remedies and sanctions that may be available to be deployed against intermediaries has not kept pace with their activities. In addition, unlike the print and broadcast media, and indeed (at least in the UK) most providers of services of comparable economic and social significance, intermediaries are not subject to any form of regulation that is material in this regard.

Accordingly, so far as concerns the internet and social media, the effective protection of the ‘rights of others’ and ensuring compliance with the important concepts of ‘duty and responsibility’ are left in large part to the voluntary actions of intermediaries. This is unlikely to be satisfactory, not least because attaining these ends may be contrary to the commercial interests of intermediaries: revenues and profits are influenced by content availability and traffic volumes, and there are costs associated with investigating and acting on complaints.

Problems with self-regulation

The pros and cons of reliance on voluntary actions are illustrated by the Leveson Inquiry Report into the culture, practices and ethics of the press, which states: ‘In many circumstances ISPs and others have cooperated with law enforcement and other agencies to remove illegal content or block access to it. The Internet Watch Foundation (IWF) is an example of this self-regulatory approach. The IWF works closely with ISPs to ensure that webpages, including those hosted outside of the UK, which provide access to potentially criminal content and, specifically, images of child abuse, are reported and removed or blocked at source.’

This suggests that a measure of effective removal is accomplished with regard to child pornography. However, it also suggests not only that participation is limited to ISPs (and so, for example, would not extend to Facebook) but also that the IWF scheme confines itself to removal or blocking of individual webpages, such that entire websites, no matter how egregious, will never be removed or blocked.

Can application of existing law rise to the challenge?

Criminal law: real problems

Criminal prosecutions may be effective against individual abusers and, perhaps, have a chilling effect on other individuals. But their deployment against individuals, let alone against intermediaries, may be problematic. The Leveson report states: ‘… successful prosecution relies on considerable cooperation across a number of agencies, not least the ISPs and content providers, and is most effective where the alleged act is also clearly criminal in the host country… the ability of the UK to exercise legal jurisdiction over content on internet services is extremely limited and dependent on many things… which are rarely aligned. These include: the location of the service provider; the location of the servers on which material is held; and international agreements and treaties.’

Data law: some success

In Google Spain SL and Google Inc v Agencia Espanola de Proteccion de Datos and Mario Costeja Gonzalez Case C-131/12, the Court of Justice of the European Union (CJEU) had to consider the Data Directive and what obligations are owed by operators of search engines to protect the personal data of individuals who do not wish certain information (which is published on third parties’ websites and contains personal data relating to them that enable that information to be linked to them) to be located, indexed and made available to internet users indefinitely. The CJEU observed that: ‘The answer to that question depends on the way in which [the Directive] must be interpreted in the context of these technologies, which appeared after [the Directive’s] publication.’ It ruled that a search engine operator engages in ‘processing of personal data’ and must be regarded as the ‘controller’ in respect of that processing.

The CJEU further ruled that a search engine operator is within the territorial scope of the Directive if it sets up in a member state a branch or subsidiary which is intended to promote and sell advertising space offered by that engine and which orientates its activity towards the inhabitants of that member state. In deciding whether the data subject had a right that the information in question relating to him personally should no longer be linked to his name by a list of results displayed following a search made on the basis of his name, the CJEU reasoned, among other things, that the processing of data that arises from the use of search engines has a particular propensity to affect an individual’s fundamental rights. The CJEU thus provided European data subjects with an effective means of asserting their rights against search engine operators, which they have used in later cases (see, for example, Vidal-Hall v Google Inc [2015] EWCA Civ 311).

The protection of European data subjects will be enhanced when the General Data Protection Regulation comes into effect on 25 May 2018. The Recitals to that Regulation state that technological developments and globalisation ‘require a strong and more coherent data protection framework in the Union, backed by strong enforcement’, and the sanctions which may be imposed under it have real force (including fines of up to the higher of 2% of worldwide turnover and EUR10m in respect of some breaches and up to double those figures in respect of others).
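The ‘higher of’ structure of those caps means the fixed euro floor matters only for smaller undertakings, while the turnover percentage drives the exposure of large companies. A minimal sketch of the two tiers (the function name `gdpr_max_fine` is a hypothetical helper, not anything defined in the Regulation):

```python
def gdpr_max_fine(worldwide_turnover_eur: float, higher_tier: bool = False) -> float:
    """Illustrative maximum administrative fine under the GDPR's two tiers.

    Lower tier: the higher of 2% of worldwide annual turnover and EUR 10m.
    Higher tier: double those figures (4% of turnover or EUR 20m).
    """
    pct, floor = (0.04, 20_000_000) if higher_tier else (0.02, 10_000_000)
    return max(pct * worldwide_turnover_eur, floor)

# A company with EUR 2bn turnover: the 2% figure (EUR 40m) exceeds the floor
print(gdpr_max_fine(2_000_000_000))        # 40000000.0
# A company with EUR 50m turnover: 2% is only EUR 1m, so the EUR 10m floor applies
print(gdpr_max_fine(50_000_000))           # 10000000.0
```

For a large intermediary, in other words, the percentage-of-turnover limb makes the potential sanction scale with the size of the business rather than being capped at a fixed sum.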

Defamation law: a possible way forward

In Tamiz v Google Inc, Eady J held at first instance that Google Inc was not a publisher of blogs posted on Blogger.com, and that even if it was a publisher it would have a defence under s 1 of the Defamation Act 1996 (DA 1996) (which creates a defence in defamation proceedings for a person who shows that (a) he was not the author, editor or publisher of the statement complained of, (b) he took reasonable care in relation to its publication, and (c) he did not know, and had no reason to believe, that what he did caused or contributed to the publication of a defamatory statement). The claimant’s appeal against this decision was dismissed ([2013] EWCA Civ 68). However, the Court of Appeal held, first, that it was arguable that Google Inc was a publisher of the blogs in question after notification by the claimant, and, second, that Google Inc would not have an unassailable defence under s 1 of the DA 1996 after notification.

Private information: more problems

In other areas, such as misuse of private information, dissemination of material via the internet and social media may damage or in an extreme case even destroy rights which the court has held to be legitimate and has sought to uphold by an injunction, and threaten to undermine the rule of law. In PJS v News Group Newspapers Ltd [2016] UKSC 26 such dissemination, which an interim injunction had been ineffective to prevent, substantially damaged one aspect of the claimant’s Art 8 rights (unwanted access to private information). That was not only a wrong in itself; it also jeopardised his prospects of retaining interim protection against publication in the traditional media. However, the claimant succeeded in retaining the injunction on the basis that it still served a useful purpose by providing protection in respect of another aspect of his Art 8 rights (unwanted access to, or intrusion into, personal space).

The judgments in the case recognise the need to face up to the argument that such injunctions have no sensible place in the age of the internet. That argument was, in effect, accepted by the Court of Appeal which, in April 2016, discharged an interim injunction which it had granted in January 2016, because, in the intervening period, the story, including the names of those involved, had been published in the USA, Canada and Scotland, on websites and on social media. On the second occasion, Jackson LJ accepted that ‘the court should not set aside an injunction merely because it has met widespread disobedience or defiance. Such an approach would be contrary to the rule of law’, but went on to state that: ‘There is an important difference between succumbing to disobedience or defiance on the one hand, and accepting that there has been and is likely to be extensive dissemination of private material on the other’, that this was not a case of disobedience by the media, and that one of the difficulties about the submission that the case involved defiance was that ‘the internet and social networking have a life of their own’.

The claimant appealed to the Supreme Court, which, by a majority, allowed the appeal and ordered the continuation of the interim injunction until trial or further order. Lord Mance said at [45]: ‘At the end of the day, the only consideration militating in favour of discharging the injunction is the incongruity of the parallel – and in probability significantly uncontrollable – world of the internet and social media, which may make further inroads into the protection intended by the injunction.’
Lord Neuberger said at [65] that: ‘The publication of the story and the identification of PJS in the electronic media since January 2016 has undoubtedly severely undermined (and probably, but not necessarily, demolished) PJS’s claim for an injunction in so far as he relies on confidentiality. However, I am unconvinced, on the basis of the evidence and arguments we have heard, that it has substantially reduced the strength of his claim in so far as it rests on intrusion’ and at [70], as one of his concluding remarks, that: ‘I also accept that, as many commentators have said, the internet and other electronic developments are likely to change our perceptions of privacy as well as other matters – and may already be doing so. The courts must of course be ready to consider changing their approach when it is clear that that approach has become unrealistic in practical terms or out of touch with the standards of contemporary society’.
Lord Toulson, dissenting, observed at [80] that, in addition to other forms of publication discussed by Lord Mance, ‘there have been numerous Twitter hashtags of a fairly obvious kind leading to material identifying PJS in connection with the injunction’; and at [89] he said with regard to the requirement under s12(4)(a)(i), Human Rights Act 1998 for the court, when considering an injunction, to pay particular regard to ‘the extent to which the material has, or is about to, become available to the public’ that ‘If the information is in wide, general circulation from whatever source or combination of sources, I do not see that it should make a significant difference whether the medium of the intended publication is the internet, print journalism or broadcast journalism. The world of public information is interactive and indivisible.’

Balancing rights in the traditional media

European and domestic case law has worked out sophisticated ways of balancing competing rights so far as the traditional media are concerned. Cases such as Mosley v United Kingdom [2011] ECHR 774 recognise a distinction between two kinds of publication. On the one hand, there is ‘reporting facts capable of contributing to a debate of general public interest in a democratic society’, in respect of which ‘the pre-eminent role of the press in a democracy and its duty to act as a “public watchdog” are important considerations in favour of a narrow construction of any limitations on freedom of expression’. On the other hand, there are ‘press reports concentrating on sensational and, at times, lurid news, intended to titillate and entertain, which are aimed at satisfying the curiosity of a particular readership regarding aspects of a person’s strictly private life’. The latter category ‘does not attract the robust protection of Art 10 afforded to the press and in respect of which freedom of expression requires a more narrow interpretation’; moreover, ‘the public interest cannot be reduced to the public’s thirst for information about the private life of others, or to the reader’s wish for sensationalism or even voyeurism’ (see Couderc v France [2015] ECHR 992).

In Times Newspapers Ltd v Flood [2017] UKSC 33, part of the analysis proceeded on the assumption that there is a rule that, where a claim involves restricting the freedom of expression of a defendant such as a newspaper or broadcaster, it would, as a matter of domestic law, normally infringe the defendant’s Art 10 rights to require it to reimburse the success fee and ATE premium for which the claimant is liable under the regime established by the Access to Justice Act 1999. At [63], Lord Neuberger expressed the view that, bearing in mind that the rationale of the rule is to the effect that ‘the most careful scrutiny on the part of the court is called for when measures taken by a national authority are capable of discouraging the participation of the press in debates over matters of legitimate public concern’, that rule could not properly be invoked by a media defendant which had engaged in persistent, pervasive and flagrant hacking and blagging, and where the information which that would be expected to and did reveal lacked any public significance.

Sorting out the future

These intellectually refined and nuanced approaches do not apply to the internet and social media, either as a matter of language or as a matter of practice. A system of law which subjects some media and not others to a regime gives rise to problems of both fairness and utility: why is it fair to subject the traditional media alone to such a regime; and what purpose does a regime serve if it applies to only some media? The possible solutions of voluntary action, criminal proceedings, substantive claims and amenability to injunctions were also considered by the New Zealand Law Commission in Rights, Responsibilities and Regulation in the Digital Age. The discussion above suggests that although the rights in play are clear and important, the relevant responsibilities are not being shouldered: in these circumstances, is it not time to consider whether regulation is required?

Contributor Richard Spearman QC practises from 39 Essex Chambers. He has a wide ranging chancery, commercial, and common law practice. His many reported cases include those concerning freezing injunctions, letters of credit, civil fraud, tracing, judicial review, insurance, banking, defamation, copyright, confidence, private information and data protection.

Profit before safety? Inquiry calls for action

Social media company inaction on illegal and abusive content has been slammed by the Home Affairs Committee. In its report published on 1 May, the committee noted that quick action is taken to remove content found to infringe copyright rules, but not when the material involves hate speech or dangerous content.

The snap election has curtailed the inquiry, but its recommendations for the next Parliament include the following:

  • A government review of the entire legal framework around online hate speech, abuse and extremism.
  • Much stronger enforcement, with a system of fines for companies that fail to remove illegal content.
  • Social media companies that fail to proactively search for and remove illegal material should pay towards the costs of the police doing so instead.
  • Regular reports should be published on safeguarding activity including the number of staff, complaints and action taken.

London mayor Sadiq Khan launched the Online Hate Crime Hub in April 2017, a specialist police unit to help tackle online hate crime and provide better support for victims in London.