Tag Archives: Internet

Ian Cram: Penalising the googling juror? – Reflections on the futility of Part 3 of the Criminal Justice and Courts Bill (2013-14)

The hotchpotch of measures that comprises the Criminal Justice and Courts Bill is about to reach Report Stage in the House of Lords. The Bill sets out a panoply of new and controversial measures to deal with dangerous offenders, young offenders, drugs-testing in prisons, wilful neglect or ill-treatment by care workers, reforms to criminal proceedings (including the use of cautions), the possession of extreme pornographic images, civil proceedings involving judicial review (B. Jaffey & T. Hickman), personal injury cases and challenges to planning decisions. The adequacy of this miscellaneous approach to law reform will doubtless come under the fuller scrutiny that it deserves elsewhere. This blog takes as its focus the provisions in Part 3 of the Bill which seek to put on a statutory footing offences connected with private research by jurors. I suggest that resort to the criminal law constitutes a clumsy, impractical and unnecessarily punitive attempt to regulate the extra-curial activities of the modern, online juror. It is incumbent on our lawmakers to explore more imaginative responses to the undoubted problem of jurors’ access to untested internet materials – responses that might be more obviously premised upon an appreciation of jurors’ dutiful efforts to arrive at just verdicts.

Whilst illicit, private research by jurors long pre-dates the Internet (recall Sidney Lumet’s classic 1957 film Twelve Angry Men), the ability of jurors to seek out materials concerning events and personnel at the centre of criminal proceedings is considerably enhanced in the electronic era. A 2010 survey by Thomas for the Ministry of Justice, which at the time was reckoned to have underestimated the extent of online research, revealed that 12% of jurors in ‘high profile’ cases and 5% of jurors in standard (non-high profile) cases confessed to doing private research into the cases they were trying. (C Thomas, ‘Avoiding the Perfect Storm of Juror Contempt’ [2013] Crim L Rev 483) Despite some well-publicised convictions of jurors in 2011 and 2012 for online research during deliberations (Fraill [2011] EWHC 1629 and Dallas [2012] EWHC 156) resulting in custodial sentences, it would be surprising in 2014 if actual instances of jurors’ private research had not increased beyond the levels reported in 2010.

The legal basis of convictions such as those in Fraill and Dallas remains unclear. Is the offence committed merely when the juror intentionally disobeys a judicial instruction or does it also need to be shown that he/she has acted in a way calculated to create a real risk of prejudice to the administration of justice? Dallas is currently awaiting the outcome of her application to Strasbourg, arguing that the trial judge’s warning to jurors not to conduct private research lacked the requisite degree of clarity needed to make clear both what was prohibited and what the legal consequences of any breach might be.

It is against this somewhat uncertain background that the Law Commission recommended in 2013 the creation of new statutory offences concerning private research by jurors (and its dissemination), as well as giving trial judges the power to order jury members to surrender electronic communications devices for a limited period. To be fair to the Commission, it is intended that these new offences operate alongside non-penal measures, such as declarations of good behaviour and an amended oath, that will reinforce the importance of trying the case solely upon the evidence presented by the parties.

Research in the US, where the ‘google mistrial’ in both criminal and civil jury trials is a recognised phenomenon, indicates two main reasons why jurors engage in prohibited online searches. (G Lacy, ‘Untangling the Web: How Courts should respond to Juries using the Internet for Research’ (2012) 1 Reynolds Court and Media Law Journal 169; D Aaronson & S Patterson, ‘Modernizing Jury Instructions in the Age of Social Media’ (2013) 27 Crim Just 4) The first is that some jurors do not understand what forms of conduct are prohibited. They thus fail to see that private inquiry into the meaning of legal/medical terms (such as ‘negligence’ or ‘Van der Woude syndrome’) constitutes ‘research’. In other cases, a warning not to do private research is couched in general, technologically non-specific terms and is misconstrued. These sorts of misunderstanding are, or ought to be, fairly easily remedied through clearer instructions from the bench. The second reason behind juror online searches is altogether more troublesome, however. Even in the face of unambiguous instructions which helpfully make explicit the rationale for restrictions, some jurors refuse to comply, believing that the lawyers are trying to conceal something that is relevant to the proceedings. (T Hoffmeister, ‘Google, Gadgets and Guilt: Juror Misconduct in the Digital Age’ (2012) 83 U Colo L Rev 409) Other empirical research from the Australian state of Victoria refers to a phenomenon of ‘juror reactance’ in which, notwithstanding a judicial direction to the contrary, jurors are unable to discard ‘information’ that is considered relevant to the case before them. (J Johnston et al, Juries & Social Media – A Report prepared for the Victorian Department of Justice (2013) available at http://epublications.bond.edu.au/law_pubs/600/) On this basis, it may be predicted that the proposed new criminal restrictions in England and Wales will make jurors more likely to conceal the fact of their illicit research from fellow jurors. The new restrictions are unlikely to stop the research in the first place. What if, in any given criminal trial, there are four or five jurors who have separately conducted private research and conceal this fact from their co-jurors?


‘Indeed, the internet has made the commission of many criminal offences much easier. It would be absurd to suggest that such conduct should no longer be criminalised on account of the ease with which such offences can now be committed.’ – Rt Hon Attorney General Dominic Grieve QC MP (February 2013)

The insistence of the previous Attorney General on using the full force of the criminal law against googling jurors is understandable, even laudable (the costs of retrials, ordeals for witnesses and delayed justice are not insignificant reasons for taking a serious view of this conduct) but, for the reasons advanced above, likely to fail in its primary objective of halting the practice. The empirically documented phenomenon of ‘juror reactance’, linked concerns that the adversarial process is keeping relevant material from jurors, and an overriding desire to do justice to all parties will continue to prompt a certain (possibly rising) proportion of jurors to engage in online research. The supporters of the new measures have yet to explain satisfactorily how illicit internet use will be policed and detected. If, as seems likely, few cases of online research will be detected, it would be interesting to hear from the Bill’s supporters precisely how the law will (i) bolster the fairness of criminal proceedings and (ii) not fall into general disrepute. (Interestingly, in the US there are few instances of criminal proceedings against jurors who engage in private research: D Bell, ‘Juror Misconduct and the Internet’ (2010) 38 Am J Crim L 81)

It may be that part of the problem will take care of itself in the aftermath of the European Court of Justice’s ruling this May in Google Spain v Gonzalez (and another). Well-counselled defendants may now instruct Google to remove links to webpages that mention them. In this way, ‘googling’ will yield up little of any prejudicial effect. But this incidental form of protection for adversarial justice can hardly be said to offer a coherent way forward. At bottom, the way in which our legal system signals its appreciation of jurors’ sincere efforts to arrive at justice may not be best served by a punitive response to ‘fact-gathering’. A more imaginative response to the realities of jurors’ online research may be to explore, within certain defined limits, ways of accommodating jurors’ desire to be more informed about the case before them. At present, the practice of allowing jurors’ questions varies from Crown Court to Crown Court.

Whisper it quietly for fear of upsetting the legal profession’s control over adversarial proceedings – a better response to the problem of the googling juror may necessitate affording ordinary citizens a more active role in establishing the truth of the kind their 18th century predecessors enjoyed.

Ian Cram is Professor of Comparative Constitutional Law at the University of Leeds.


Suggested citation: I. Cram, ‘Penalising the googling juror? – Reflections on the futility of Part 3 of the Criminal Justice and Courts Bill (2013-14)’ U.K. Const. L. Blog (2nd October 2014) (available at http://ukconstitutionallaw.org).


Filed under Judiciary

Liz Fisher: Gov.Uk?

As some of you may have noticed, the UK government has a new website www.gov.uk which not only replaces the old Direct.gov site but also the individual government department and public body websites. Since late last year the 24 major government departments have been moving to this new platform. That move is now complete, and smaller agencies and public bodies are now undertaking the shift. The new website won a Design of the Year Award in mid-April, with one of the judges describing it as ‘the Paul Smith of websites’ and another noting that ‘it creates a benchmark for which all international government websites can be judged on’ (BBC Report, last accessed 9 May 2013).

I do not pretend to have any expertise in web design, information technology or anything like that, but I do think the new website is something that public lawyers should be thinking carefully about, for three reasons. The first reason is that the shift to the new website raises a practical problem that many of us as scholars know too well – a consequence of the move is that some old web addresses are now defunct. In some cases, new links have been provided (and work seamlessly), but not in all. This is not a new problem and applies across the public and private sectors. It is a particular problem in relation to government websites because many government documents are web-based and websites are now being cited as the way to find them. As a lecturer, author, and journal editor, I have found that the shift to the new platform has caused all kinds of issues. Government websites are a major scholarly resource, and yet there has been little discussion of that fact among public lawyers. Part of the debate of course needs to be about how these websites are stored and archived. Many documents have been shifted to the National Archives, and a welcome development is that the British Library has since early April begun to harvest websites in the UK domain (British Library, last accessed 9 May 2013). But part of the debate also needs to be about how we as scholars cite and deploy such websites. Thus, for example, style guides for journals and scholarly works don’t often provide guidance on how to manage the fact that websites are likely to disappear. We as scholars need to have a conversation about these issues.

The second issue raised by the new website is about transparency. As I have written before on this blog (last accessed 9 May 2013), the Coalition government has had, and still has, a major policy about transparency, but I’m afraid I haven’t found the new website very transparent at all. On the old website it was relatively easy to find documentation in relation to topics – that required clicking through a series of subheadings. As you did so, you not only found documents but also explanations of how the documentation fitted into the bigger legal and institutional perspective. These frameworks were not always perfect, but generally speaking they provided a good map of the activities of a government department. The new website is focused around ‘policies’ which don’t seem to have any logical order. The search tool works quite well, but provides no context for the documents you find. Thus you can produce a list of documents, but no explanation of how they relate to each other. Again, there are some exceptions to this (the page on biodiversity protection on the DEFRA website springs to mind (accessed 9 May 2013)).

This relates to the third issue that the new website raises, and perhaps the most significant. As I have argued elsewhere (accessed 9 May 2013), the creation of an administrative transparency mechanism is really about building the architecture of public administration, and a website is no exception. To paraphrase Harlow and Rawlings, behind every government department website is a theory of public administration. The theory behind this website is very much a ‘rational-instrumental’ one (Elizabeth Fisher, Risk Regulation and Administrative Constitutionalism (2007)). The website’s focus on ‘policies’ and subsequent ‘actions’ taken pursuant to such policies means that a government department is largely conceptualised as a conduit for delivering an agenda set by the political party in government. Some of these policies are about specific reforms (planning, for example), and other ‘policies’ are a continuance of a long-entrenched, complex regime (nature conservation). The overall impression, however, is that the role of public administration is to deliver the government’s particular strategy. This approach raises an interesting question of how the website will need to evolve with a change in government. It also gives very little impression of the institutional structure of a government department or the way in which some policy areas develop incrementally over time from a variety of sources. The rational-instrumental model of public administration has of course come to dominate understandings of UK public administration in the last three decades (David Faulkner, ‘Government and Public Services in Modern Britain? What Happens Next?’ (2008) 79 Political Quarterly 232), so the structure of the new website is not surprising. With that said, we should not let this website narrow our vision, and thus debate, about the nature and role of public administration.

I do appreciate that my response could be seen as akin to that of people who get annoyed when the supermarket is rearranged and they can’t find where the eggs are anymore. Likewise, it is also clear that tweaks and adjustments are being made. My overall point is not that change is bad, but that, in the information technology age, a government website really matters. It is a resource we regularly use, and it frames our understanding of what public administration does and what we should expect of it. The website may be a marvel of design, but I do wonder what kind of ‘benchmark’ it is that other ‘government websites can be judged on’. Whatever the case, we as public lawyers should be taking a keen interest in this new site and thinking about its role and nature, and its implications for the practice and study of public law.

Liz Fisher is Reader in Environmental Law at Oxford University.

Suggested citation: L. Fisher, ‘Gov.Uk?’ U.K. Const. L. Blog (9th May 2013) (available at http://ukconstitutionallaw.org).


Filed under UK government

Paul Bernal: The Trial of Lady Chatterley’s Online Lover?

Britain has often had a very confused relationship with obscenity and pornography, one that has been played out in the law a few times over the years. We don’t quite know what to think about it, let alone what to do about it. Mervyn Griffith-Jones expressed the kind of attitude that many have in his notorious statement from the Lady Chatterley’s Lover trial in 1960, when he asked if it were the kind of book ‘you would wish your wife or servants to read’.

That kind of attitude seems to be in play again as another old idea about pornography re-emerges: the idea of blocking all ‘pornography’ on the internet, forcing people to ‘opt in’ to get access to pornography.

Details of how the idea might work have yet to be made clear. The stimulus this time around is that Iceland is actively considering proposals to block access to ‘violent and degrading’ content. Where Iceland leads, the UK, some have said, would like to follow. But could it? And should it? And would such a ban be legal or proportionate? Perhaps even more pertinently, could the introduction of such a scheme have disturbing side effects?

Article 10 of the ECHR

Article 10 of the European Convention on Human Rights says that:

“Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”

The ‘receive’ part of this right is often neglected – we not only have the right to express ourselves, but also to receive information and ideas without interference, subject, of course, to Part 2 of the Article, which says, amongst other things,

 “…as are prescribed by law and are necessary in a democratic society… …for the protection of health or morals…”

So, if we are to suggest a block on online pornography, it would require it to be prescribed by law, and somehow for the ‘protection of health or morals’, which brings us back to the complex relationship we have with the whole idea of obscenity and pornography – back to Lady Chatterley’s lover.

Prescribed by Law

It is worth remembering that child sexual abuse images (sometimes described as child pornography) are already illegal pretty much everywhere in the world, and indeed on the internet. In the UK they are fairly effectively blocked, through the activities of the Internet Watch Foundation – an organisation that though it has been criticised in a number of ways (see for example this excellent article by my colleague Dr Emily Laidlaw) does seem to keep a tight rein on access to child abuse images.

There is consensus on this issue – and that consensus is reflected in both law and practice, not just in the UK but also in most other parts of the world. The level of success of the legal ban and its enforcement is another matter – but any ‘new’ ban on online pornography would not be about child sexual abuse images, no matter what campaigners say. Rather, it would be about a choice to ‘regulate’ access to content that is considered, in the UK at least, to be legal, at least for adult consumption.

Pornography is legal…

Of course there are some who may want porn not to be legal – but there are many others who would consider that an infringement of their Article 10 rights. One big question is how to define pornography – a question that the law has struggled with over the years. Section 63 of the Criminal Justice and Immigration Act 2008 made an attempt to define ‘extreme pornographic images’ but it has been subject to serious criticism (e.g. by Professor Andrew Murray in his article for the MLR, available online here). The not-guilty verdicts in R v Peacock and R v Simon Walsh in 2012 further muddied the waters – at the very least suggesting that there is no convincing consensus against pornography in our society. Taking it another step further, what about books like Fifty Shades of Grey? Not only is that not censored, but it is for sale in pretty much every bookshop in the country – boldly and brightly on display, without any kind of age-barrier. Is it pornographic? If not, how would pornography be defined so as to exclude it?

As well as the difficulties in defining pornography, there is the issue of enforcement. Who is going to trawl through the web looking at websites to try to classify them according to any standards that do get agreed? The IWF largely relies on sites being reported to them – to extend such a system to cover pornography in general would require huge resources, and the establishment of some kind of a censorship body.

Opt-in to porn?

But, those proposing such a system might say, we’re not banning anything, or making anything illegal – people could just ‘opt in’ to pornography, exercising their choice. In the digital world it isn’t as simple as that. First of all, there’s the question of whether it would work at all – and many commentators (for example Dr Brooke Magnanti in the Telegraph) have significant doubts. Blocking anything on the internet is much more easily said than done.

Secondly, bringing in an ‘opt-in’ system would have further implications. Signing yourself up as someone who opts in to porn will put you on a database – a database of people who want access to pornography. What will that database be used for? It is a short step down a slippery slope for a database of those who want access to adult content to become a database of people who are worth further investigation for other reasons.

It would also be a highly vulnerable database – the possibilities of using such information against the people on it are significant. How many people would want their families, their employers or their friends to know that they had ‘opted in’? Effectively forcing a sign-up could have a direct chilling effect: many people may not want to sign up for fear of the implications of doing so.


That, indeed, may well be the prime motivation behind some of these ideas: simply to discourage people from viewing pornography. That might be a laudable aim, but it rests on the bold claim that all pornography is ‘bad’, and some claim otherwise: Leslie Green in particular suggests that for the gay community pornography can be empowering and expressive rather than exploitative and oppressive. For the claim to be accepted, it would need good, strong, empirical evidence to back it up. Is there such evidence? Even if there is, is such an aim one that either could or should be achieved through the law and, ultimately, through censorship? That, ultimately, is what this kind of system would result in.

It may in some ways be a laudable form of censorship, with laudable aims, at least insofar as some of the more extreme pornography is concerned – but it would be censorship nonetheless. What is more, it would be censorship of content that the law currently views as legal. If a porn-block is desirable, the first stage should surely be to designate the content as illegal – and not just on the internet. As mentioned above, the law has had great difficulty with that – R v Peacock and R v Walsh being recent examples – and it is hard to imagine that it wouldn’t have similar difficulties in the future.

There are echoes of the paternalism of the past in the current proposals, and, one suspects, some similar motivations. To update Mervyn Griffith-Jones’s notorious statement referred to above, there are some who might wish to ask:

“Is this the sort of website you would wish your husband or son to view?”

The answer to this question may very well be no – but it is worth remembering that Penguin won the Lady Chatterley’s Lover trial. Would the trial of Lady Chatterley’s Online Lover be any different?

Dr. Paul Bernal is a lecturer in the UEA Law School and a member of media@UEA. He blogs at: http://paulbernal.wordpress.com/ and tweets as @paulbernalUK.

Suggested citation: P. Bernal, ‘The Trial of Lady Chatterley’s Online Lover?’ UK Const. L. Blog (27th February 2013) (available at http://ukconstitutionallaw.org).


Filed under Human rights

Paul Bernal: Internet Anonymity: A Very British Dilemma

Andy Smith, a senior security official at the Cabinet Office, caused quite a stir at the Parliament and Internet Conference last month when he suggested that people should use false names and provide false information on the internet – and in particular when using social networking sites. Reaction was explosive in both directions: Labour MP Helen Goodman called his comments ‘totally outrageous’, while security expert Alec Muffett, in a wonderfully strident blog, expressed strong support, calling Smith an ‘epic hero’. Both sides have strong reasons for their beliefs – and the disagreement is one that has been echoed over the years. It reflects an issue that seems to have particular interest for the British: when, how, and where do we have the right to anonymity – and who, when and where has the right to demand our real names and details from us?

Do we have a right to anonymity online?

There is a famous cartoon from the New Yorker back in 1993, with the caption ‘On the Internet, nobody knows you’re a dog’. It reflected the idea that your online ‘persona’ could be something you create, something that has no real connection to your ‘real world’ identity. In those seemingly long-lost days, the internet was a bit of a wilderness – a kind of ‘wild west’ where there was no place for law or governments. John Perry Barlow’s famous ‘Declaration of the Independence of Cyberspace’ in 1996 included the suggestion that ‘Your legal concepts of property, expression, identity, movement, and context do not apply to us.’ The inclusion of the word ‘identity’ was crucial. Identity, and the ability and right to either determine it or conceal it – to mask it – was fundamental to the way that the pioneers of the internet saw their world. The adoption of the Guy Fawkes mask as the symbol of the current hacker group ‘Anonymous’, and indeed their very name, shows how that ideal has continued as a key part of what is considered to be ‘freedom’ on the internet.

Bringing in the law

As time has passed, however, things have changed online. Businesses have come in, and turned the place into a substantially commercial environment. Governments have come in, and tried to take a grip – to bring some kind of law and order to the online world. Both businesses and governments have seen the anonymity that characterised the internet in the early days as something threatening – and have sought to deal with it. From a government perspective, if we are to enforce the law we need real names – to catch terrorists and paedophiles, and to stop cyber-bullying and nasty anonymous commentators – which is why Helen Goodman found Andy Smith’s comments so outrageous.

From a business perspective, the more information a company can find out about its customers the better – and ‘real names’ and ‘real’ information are the best of all. That’s why the sharpest intakes of breath when Andy Smith made his remarks at the Parliament and Internet Conference were from the Facebook delegation. Facebook’s policy is that we should all use only our real names on Facebook. Anonymity and pseudonymity are not only frowned upon but actually against its terms and conditions. Without real names, Facebook’s data – what it gathers from us – would be far less valuable, and hence Facebook itself would be far less valuable.

An honourable British tradition?

There is, however, an honourable British tradition in our right to withhold our name and identity even from the authorities. Looked at from the perspective of our European neighbours, Britain does not have a particularly good record in terms of privacy – our loving embrace of CCTV cameras is considered quite extreme, and we were the drivers of the Data Retention Directive, considered by Peter Hustinx, the European Data Protection Supervisor, to be ‘the most privacy-invasive instrument ever adopted by the EU’ – and yet when it comes to identity cards, we are firmly opposed. Most European nations, even those keenest on privacy in other ways, accept identity cards without much complaint. We don’t – and have not, since the famous case of Willcock v Muckle ([1951] 2 KB 844), in which the whole idea of identity cards in peacetime was considered almost un-British.

The National Registration Act 1939 had allowed the police to ask for identity cards, for security purposes, during the Second World War. The police continued to use this power after the war, and Willcock, a noted liberal, refused to produce his card when asked by police constable Muckle, and was prosecuted under the Act. Willcock appealed, and still lost, but Lord Goddard commented that:

“From what Mr. Gattie [a prosecution lawyer] has told the court it is obvious that the police now, as a matter of routine, demand the production of national registration cards whenever they stop or interrogate a motorist for whatever cause. Of course if they are looking for a stolen car or have reason to believe that a particular motorist is engaged in committing crime, that is one thing: but to demand production of the card from all and sundry, for instance, from a woman who has left her car outside a shop longer than she should, or on some trivial occasion of that sort, is wholly unreasonable. This Act was passed for security purposes; it was never passed for the purposes for which it is now apparently being used.”

The inference is clear: a regular and trivial requirement for proof of identity is ‘wholly unreasonable’. Lord Goddard went on to say that relations between people and the police were something that we in this country are proud of – and that for police regularly to demand that people prove their identity would damage that relationship. People’s identities are their own business, unless there is a drastic or emergency need for that identity to be revealed.

This is a tradition that has continued. Jacob Rees-Mogg MP, commenting on the potential expansion of police powers to demand identities contemplated in the London Local Authorities Bill, in December 2011, evoked another very British source: P.G. Wodehouse.

“Members will remember that Bertie Wooster, when arrested for pinching a policeman’s helmet on boat race night—I think wines had been taken—gave a false name when arrested. I cannot remember what name he gave, but I think he said that he lived in Acacia avenue. It might be a good address to give if you are ever caught doing things you should not do. There was no additional fine for giving a false name and Bertie Wooster paid the fine handed down at the magistrates court in London—five guineas, which was a lot of money in those days—but got away with giving a false name. There is a great tradition, from Odysseus to Bertie Wooster, of being allowed to hide one’s name from people who do not necessarily have the full authority to request it.”

There are current politicians who are known for using ‘false’ names: Conservative Party Chairman Grant Shapps is believed to have used at least three in addition to his own (Michael Green, Sebastian Fox and Chuck Champion); the revelation of his use of those identities, whilst criticised, has not been suggested to be illegal, and his party has not chosen to discipline him in any way.

There is of course a difference between anonymity and pseudonymity. Withholding your name and details, as Willcock did, is different from assuming a ‘false’ name, as Grant Shapps did and as Andy Smith advocated. However, in a practical sense, when operating on the internet, simply withholding your name is not an option. Online services require usernames and other user information – so the protection and rights that anonymity would provide can only be obtained by following Bertie Wooster’s approach and adopting a false name. Where anonymity is impossible, pseudonymity is the next best thing.

Rights to anonymity and pseudonymity?

The feeling at the Parliament and Internet Conference when Andy Smith made his statement may have been mixed, but there were sufficient numbers of people in the room who supported him – some just as vehemently as Alec Muffett – for it to be something that needs to be taken seriously. There are risks attached to the approach, and Helen Goodman’s concerns do have a real basis, but those risks are neither as great nor as insoluble as they might seem. Even when a pseudonym is used, where damage is caused it can be ‘broken’ – and the use of Norwich Pharmacal orders can help to reveal the person behind the problem. The respective rights can be held in some kind of balance. For most of us, for ordinary people, as Andy Smith suggested, pseudonymity can provide protection and reduce the risk of our data being misused, hacked or lost. What is more, in suggesting that we should all use false names when needed, or not disclose our names at all, he seems to have been tapping into a long-standing British tradition, one supported not only in convention but in law.

Perhaps, if the Bill of Rights Commission really wants to look at specifically British rights, a right to use whatever name or details you choose unless there is a genuine, urgent and important reason not to, should be one of the rights that it considers. If it does, it will face considerable opposition, both from people like Helen Goodman who are concerned about the risks and dangers of crime online and from lobby groups representing the likes of Facebook, whose business models might seem to be under threat. Whether the words – and the spirit – of P.G. Wodehouse and of Lord Goddard are strong enough to defend against them is another matter. I would like to think so.

Dr. Paul Bernal is a lecturer in the UEA Law School and a member of media@UEA. He blogs at: http://paulbernal.wordpress.com/ and tweets as @paulbernalUK.

Suggested citation: P. Bernal, ‘Internet Anonymity: A Very British Dilemma’ UK Const. L. Blog (6th November 2012) (available at http://ukconstitutionallaw.org).


Filed under Constitutional reform, Human rights

Paul Bernal: Between a European Rock and an American Hard Place?

Europe and the US have had very different approaches to privacy – and in particular data privacy – for a very long time. Data protection, the centrepiece of European data privacy law, is currently undergoing a reform – and that reform is highlighting the differences in attitude, approach and understanding of privacy and its place in the delicate balance with free expression and business.

The issue that is causing the most contention is the much discussed ‘right to be forgotten’, one of the central planks of the suggested new Data Protection Regulation. It’s being strongly pushed by Commissioner Viviane Reding – but isn’t exactly getting a good press in the US. Apocalyptic pronouncements like “the right to be forgotten could close the internet” and that it is the “biggest threat to free speech on the internet” have appeared in such august journals as the Stanford Law Review.

What is perhaps just as interesting to UK people is the distress that the whole affair is causing to the UK government. They don’t seem to know what to do, or where they stand.

The right to be forgotten

The central thrust of the so called ‘right to be forgotten’ is the idea that people should be able to delete information about them held on the internet. One of the key reasons for its development was the difficulty that people have had in deleting their accounts from social networking sites like Facebook – and the sense that the data being held about people is in some senses ‘theirs’, and that as a consequence they should have the right to delete it. Exactly what the right would mean in practice is somewhat unclear. What kind of data would be covered by the right, and who the right could be enforced against – and how it would or could be enforced in practice – still seems very much up for discussion, and will probably remain so for some time.

From the perspective of the proponents of the right, it is a logical extension of the existing principles of data protection. People already have rights to access information held about them and to correct it when it is erroneous – and to ask for it to be removed if it is being held inappropriately. The ‘right to be forgotten’ takes this a step further – changing the balance so that unless there is a ‘good’ reason for data to be held, the data subject should have the right to delete it. Looked at from this perspective, it is a right that empowers people against the ‘big players’ of the data world – challenging the establishment, and helping to shift the balance of power back towards the individual.

The US perspective

From the US perspective there’s something very different going on: the right to be forgotten seems to be seen primarily as a threat to free speech. The very name ‘the right to be forgotten’ raises a spectre of censorship, or of the rewriting of history – and when Americans look across the Atlantic and back into history and see figures from Stalin and Hitler to the likes of Berlusconi, that impression might be reinforced. It’s for that reason that I’ve been arguing for a while that it would be better to call it the ‘right to delete’ rather than the right to be forgotten – but the latter seems to be what we’re stuck with.

Does the right to be forgotten really threaten free speech? European Commissioner Viviane Reding has done her best to reassure audiences on both sides of the pond that it doesn’t. There are exemptions, she has said, for the media, and for free expression:

“It is clear that the right to be forgotten cannot amount to a right of the total erasure of history. Neither must the right to be forgotten take precedence over freedom of expression or freedom of the media.”

Those words haven’t reassured many American writers. Jeffrey Rosen in the Stanford Law Review is one of the most often quoted: he has gone into the detail of what has been presented about the right so far, and found enough ammunition to be able to suggest that it might be used precisely as a tool of censorship. Is he right? Well, the way it looks at the moment, at the very least we are in for some protracted arguments from both sides.

What about business?

All of this, however, may well be somewhat beside the point. Some of the more cynical of privacy advocates – myself included – suspect that the US position isn’t quite as principled as it might appear. Free speech is of course fundamental to the US constitution, and prioritised over almost everything else – but free enterprise is in some ways every bit as fundamental to the US, and when looked at in detail the right to be forgotten is far more challenging to free enterprise than it is to free speech. Businesses all over the world – but in the US in particular – have built business models relying upon the gathering, holding and using of vast quantities of personal data. It is those business models that are under threat. Not only might they have to build in mechanisms to allow people to see and then delete the data held about them, but the potential they have for exploiting this data might be much reduced. Those businesses are not likely to be unhappy to have the much-respected advocates of free expression do the hard work of opposing the right to be forgotten for them…

And the UK?

The UK seems to have neither Europe’s enthusiasm for privacy nor the US’s passion for free speech. What it does have is a desire to support business – and not to let anything else get in the way of the freedom for businesses to find ways to make money.  Back when the proposal for the right to be forgotten first started doing the rounds, UK politicians were doing their best to oppose it.

In May 2011 Justice Secretary Ken Clarke gave a speech to the British Chamber of Commerce in Belgium, counselling against too much data protection. He suggested that the right to be forgotten was effectively unworkable, and implied that it should be abandoned. His words weren’t heeded – Viviane Reding in particular has continued to push and push for the right to be forgotten – and the UK government looks as though it’s been squirming ever since.

It’s not the first time that the UK Government has been put in a position of confusion over digital issues, trying to ‘support business’. Back in November 2010, Ed Vaizey came out first against the idea of net neutrality, thinking he was supporting business, and then almost immediately in favour of it when he saw the reactions his first statements produced. In a similar vein, the confusion shown by the Information Commissioner’s Office over the notorious ‘cookies’ directive has been rumbling on for many months and shows no sign of real resolution.

This time, though, the UK Government has taken it a step further. It appears that the UK Government would much rather the ‘right to be forgotten’ disappeared. The Ministry of Justice is undertaking a consultation, ostensibly a ‘Call for Evidence on EU Data Protection Proposals’. The language used is nicely neutral, but the purpose appears clear. Hawktalk, the blog of Amberhawk, the leading information law training provider, headlined its report on the consultation:

“MoJ asks for arguments to oppose the European Commission’s Data Protection Regulation”

Amberhawk suggested that by the nature of their call for evidence – the questions asked, the information provided, and the groups to which the call for evidence was sent – the MoJ was setting up a ‘numbers game’, wanting to say that the vast majority of respondents are opposed to the changes.

Will it work? Will the UK be able to block the regulation, or at least water it down in such a way as to neuter it? Given the persistence with which Commissioner Reding has pushed for the right so far, it seems unlikely. US opposition appears more likely to have an effect, not just because of the power of the US in the internet as a whole, but because their stance is more consistent and principled. Even that, however, cannot be taken for granted, as the US is now taking baby steps towards recognising the importance of privacy on the internet, with Obama putting forward his new  ‘Consumer Bill of Rights’ for privacy on the net.

The UK looks distinctly out of step – seemingly unable to influence Europe and unwilling to accept the views that are coming out of Brussels. For this author at least, the European view is distinctly more palatable, putting the rights of individuals at the heart of their proposals. It would be good if the UK Government began to do the same – and they might find their way out of the awkward position they now find themselves in.

Paul Bernal is a lecturer in the UEA Law School and a member of media@UEA. He blogs at The Symbiotic Web Blog (link to http://symbioticweb.blogspot.com/) and tweets as @paulbernalUK.


Filed under Comparative law, Human rights

Paul Bernal: To block or not to block is not the question…

On the 26th October, the subject of website blocking was in the news in two apparently very different ways. Firstly, as a result of the ‘Newzbin2’ court case in July ([2011] EWHC 1981 (Ch)), BT was given 14 days to block access to the Newzbin website ([2011] EWHC 2714 (Ch)), a membership-based website that provides access to potentially copyright-infringing material such as ‘pirated’ movies, music and games. Secondly, the Internet Watch Foundation (IWF), the organisation that provides lists of websites containing child sex abuse content so that Internet Service Providers (ISPs) can block them, celebrated its 15th anniversary. On the surface these may simply sound like two good things, scarcely related to each other, and nothing much to do with constitutional law – but are they? Or, more pertinently, should they be?

For some of us who work in what is loosely described as ‘cyberlaw’, neither event is particularly to be celebrated, and the links between them are clear and significant. In their different ways they highlight the need for more thought – and more action – in how we consider the internet from a legal perspective, and how we consider the rights of those increasing numbers of people who use the internet. What’s more, as the internet is now effectively intrinsic to how most of us function in our society, the rights that we have online are becoming critical supports to our rights in the ‘real’ world – so to look properly at our real rights, we need to consider our online rights more carefully.

The Internet Watch Foundation

The IWF is a registered charity, which works in partnership with ISPs, the police, the government and the public. As set out on their website, the “…IWF was established to fulfil an independent role in receiving, assessing and tracing public complaints about child sexual abuse content on the internet and to support the development of website rating systems.” A hotline was set up so that people could report websites to the IWF, which were then ‘assessed’ by the IWF: effectively “a ‘notice and takedown’ service to advise ISPs in partnership with the Police Services in the UK to effect its removal.”

The precise nature of the service provided by the IWF has been a subject of much debate by academics and others. From a pragmatic perspective, however, they do just what they say: receive tips, assess websites, and if they believe the websites infringe the law in respect of child sexual abuse content, they ‘blacklist’ the sites. This blacklist is provided to UK ISPs – and almost all the UK ISPs use it. BT, one of the biggest of the ISPs, implements the blacklist using a system called ‘cleanfeed’. From a user’s perspective, using normal browsing methods they cannot get access to blacklisted sites – thus preventing, as intended, users from getting access to child sexual abuse content.
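The mechanics of the blocking described above can be sketched in outline. BT’s actual cleanfeed system is reported to be a two-stage hybrid (IP-level redirection of suspect traffic followed by URL matching at a proxy), and its details are not public, so the following Python sketch is a deliberately simplified, hypothetical model of blacklist filtering – the hostnames and function names are invented for illustration, not BT’s implementation:

```python
# Illustrative sketch of URL-blacklist filtering of the kind an ISP-side
# proxy might perform. This is a simplified, hypothetical model; the real
# cleanfeed system is a two-stage design and its details are not public.
from urllib.parse import urlparse

# A hypothetical blacklist of (hostname, path) pairs, standing in for the
# list the IWF supplies to ISPs.
BLACKLIST = {
    ("example-blocked.test", "/album/cover.jpg"),
}

def is_blocked(url: str) -> bool:
    """Return True if the requested URL matches a blacklisted host/path pair."""
    parsed = urlparse(url)
    return (parsed.hostname, parsed.path) in BLACKLIST

def handle_request(url: str) -> str:
    """Answer a request, silently refusing blacklisted URLs."""
    # A blocked request is typically answered with a generic error rather
    # than an explicit blocking notice, so the user may never know that
    # filtering has occurred at all.
    if is_blocked(url):
        return "HTTP/1.1 404 Not Found"
    return "HTTP/1.1 200 OK"
```

The point of the sketch is that, from the user’s side, a block is indistinguishable from an ordinary broken link – which is one root of the transparency and accountability concerns about the IWF’s process.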


Newzbin
The original Newzbin website appeared to be intended to facilitate the sharing of copyright-infringing material – and in March 2010 Newzbin Ltd was found liable for copyright infringement through its deliberate indexing of copyrighted content (Twentieth Century Fox Film Corporation & Anor v Newzbin Ltd [2010] EWHC 608 (Ch)). Newzbin initially shut down as a result, but relaunched as ‘Newzbin2’ in June 2010, this time hosted in the Seychelles. Further legal action followed, resulting, eventually, in the order in July 2011 for BT (in its ISP role) to block access to the Newzbin site. The method by which BT should implement this block was suggested to be the cleanfeed system through which, as noted above, it currently implements website blocking for the blacklist provided by the IWF.

So what could be wrong?

The IWF, though it performs what looks like a simple public service, has been criticised for a lack of accountability, transparency and consistency. If a website is blocked, the provider of that website is not automatically informed and their opportunities to challenge that blocking are very limited. The best-known example of this happened in 2008 when the Wikipedia page for the rock band the Scorpions’ 1976 album ‘Virgin Killer’ was reported to the IWF because of the image of an apparently pre-pubescent and near naked girl on the album cover. The IWF added it to the blacklist. Wikipedia complained, and found the response limited to say the least. As they put it:

 “When we first protested the block, their response was, ‘We’ve now conducted an appeals process on your behalf and you’ve lost the appeal.’ When I asked who exactly represented the Wikimedia Foundation’s side in that appeals process, they were silent.”

After significant pressure from Wikipedia and others, the IWF reversed their stance and allowed access to the page again – but at the date of writing this blog, there is still no transparent system of appeals, and no apparent sign of one being considered.

When the Newzbin case is added to the equation, the issues raised by this lack of accountability and transparency become more significant. To move from blocking assessed child sexual abuse content to blocking for copyright infringements is quite significant – and the potential chilling effect of the judgment should not be discounted. The BPI took little time in trying to start this ball rolling by suggesting to BT on November 4th that they should block The Pirate Bay on the basis of Newzbin2. When the contentious Digital Economy Act (currently under Judicial Review) and the even more controversial Anti-Counterfeiting Trade Agreement (ACTA) (which has worldwide scope and is currently finding its way through the European Parliament) are added to the equation, it’s not just legally proven copyright infringements that could trigger blocking, but suspected copyright infringements.

What else might people want to block? Sites associated with ‘terrorism’? Certainly. Sites that might incite violence or unrest? The response to the summer riots makes that seem entirely possible – but we should remember the uncomfortable parallels between the methods of control attempted by the now-ousted governments of Tunisia and Egypt – and indeed the current governments of the likes of Syria and China – and the policies suggested by David Cameron and others in the immediate aftermath of the summer riots.

To block or not to block is not the question

There are not many who would argue against the need for rights-holders to be able to defend their rights, any more than there are many arguing in favour of a right to publish or consume child sexual abuse content. To block or not to block is not the question – it’s more a question of when and how to block. The process is crucial. We need transparency and accountability. We need due process. We need a proper balancing of rights. In order that we are able to find a proper balance, and a ‘proportionate’ way to protect the rights-holders from having their rights infringed, and indeed to protect children (and adults) from extremes such as child sexual abuse content, more coherent, intelligent and clear thought needs to be put in.

The starting point has to be to get a greater understanding of the nature and role of the internet in today’s society. The internet has changed significantly in the fifteen years since the foundation of the IWF – and so, importantly, has the way that people use it. We’re no longer either ‘users’ or ‘providers’ of information on the internet: we’re contributors, collaborators, discriminators – and we’re conduits for content ourselves. How many twitter users have tweeted interesting links or content to others? We don’t use the internet just as a system of communication, a source of information or as a method of self-publicising – pretty much every activity we do in the real world can be integrated with the online world, from shopping to interaction with government, to work and our social lives. The trend towards that integration is unlikely to slow any time soon – if anything, it appears to be accelerating with the increased prevalence of smartphones and the ubiquity of social networks.

Rights on the internet

This trend has been effectively acknowledged by the increasing acceptance of a ‘right to access’ the internet – the UN, for example, pushed the idea in the report in August 2011 by UN Special Rapporteur Frank La Rue. If we have the right to access the net, then we need to think about what rights we have once we’re on the net. There are many related rights that need to be considered, from privacy and freedom of expression to access to information – and such rights as freedom of association and assembly and even freedom of religion have their online aspects too.

These rights can be complex and apparently in conflict – but isn’t it time that we thought about them in a more integrated and coherent way? We ‘need’ the internet in more ways than before, ways that make it a much bigger thing to cut off internet access or to censor or control what we can see. In the opinion of many – myself included – rights to access, to free expression and to privacy need to be given more weight than they currently are given, particularly in relation to intellectual property rights, and even, contentious as it may seem, in relation to the need to combat the producers and consumers of child sexual abuse content.

Paul Bernal is a lecturer in the UEA Law School and a member of media@UEA. He blogs at The Symbiotic Web Blog (link to http://symbioticweb.blogspot.com/) and tweets as @paulbernalUK.

1 Comment

Filed under Human rights, Judiciary

Jacob Rowbottom: Do electoral attack sites need taming?

Much is made of the democratising effect of the internet. However, the freedom to communicate online has costs, which are sometimes felt by the politicians that are the subject of internet communications. These costs were highlighted by the Labour MP Mike Gapes, who presented a Ten Minute Rule Motion on Tuesday calling for greater regulation of websites that make negative attacks on candidates during election campaigns. While there are already some controls on third party activities in elections, Mr Gapes told the House of Commons that these rules are not working:

“An enormous number of groups, local and national, do not register, and some organisations have websites. During the last general election, a group called the Muslim Public Affairs Committee UK targeted Labour MPs and candidates, with downloadable leaflets and other material attacking people whom they were trying to get out of Parliament. It was able to put out tens of thousands of leaflets without any restriction – negative material arguing against sitting Members of Parliament and encouraging people to vote for other candidates. In some constituencies where the result changed, Liberal Democrats or other people were elected having been beneficiaries of this negative campaigning. We should tighten up the rules to regulate what can be put on the internet. We could also prosecute people who have downloadable material that does not have imprints. We need to ask the Electoral Commission to take these matters far more seriously, and that is why I am proposing this Bill.”

From this statement, it is not entirely clear whether Mr Gapes is complaining about gaps in the law or that the current law is not being enforced properly. However, there are several areas of concern relating to attack websites.

The first is the anonymity of the attack sites, which Mr Gapes raises when referring to material without ‘imprints’ (ie details of the printer and promoter on the material). Anonymity can be a problem for the reader of the website, as not knowing the source of the negative statement means visitors have fewer means of assessing the credibility of the claims being made. The audience will not know whether the anonymous author has inside knowledge about the politician or is merely a front organisation with a vested interest. Of course, anonymous sources are commonly used in the mainstream media, but at least there an established media entity will vouch for the credibility of the information being published. From the perspective of the politician being attacked, the problem of anonymity lies in not knowing to whom to address the rebuttal (or in other cases, whom to sue for a defamatory statement).

In the current election laws, there are already controls on printed leaflets, requiring the identity of the printer and promoter to be included in any election material. While the Electoral Commission advises people to include such details on internet material, this is not a legal requirement. The imprint requirement applies only to printed material. There are statutory powers to extend the imprint requirements beyond printed materials and to web-based materials, but so far these powers have not been used. One of the problems lies in deciding how these requirements could apply to the various types of digital communication – for example a text message or tweet could not include such full disclosure in the text itself. Alternative transparency requirements (if any are needed) will need to be formulated for internet communications.

Whether anonymity is a problem is open to debate. To many, the ability to speak anonymously is regarded as a strength of internet communications. However, whether there is true anonymity online is questionable – there are various ways (including court orders) in which it is possible to discover the identity of a speaker on a website or blog. These means often require either some technical skill or resources. To go further and require websites to disclose the identity of the person speaking or moderating would open this information up to the whole audience. The danger with such a requirement lies in creating a chilling effect on some speakers (for example, those who fear the consequences of making their political views public), or at the very least it may impose a bureaucratic hurdle which may discourage some election speech (a point made by Robert Halfon MP in his reply to the House of Commons).

A second problem raised by Mr Gapes’ statement is that the material published online is often negative. However, there is nothing wrong with a negative attack in itself.  If a negative statement is true and relevant to the election campaign, then it can provide valuable information to voters. The problem is that people may be more willing to make unfounded attacks on a politician if they remain anonymous and thereby unlikely to be held accountable for the statements. Some transparency may discourage more reckless claims being made online and would require the speaker to stand by his statement (the argument along these lines being that a chilling effect is not always a bad thing, as long as it chills the right type of speech).

Aside from the two points above, there is also a broader issue of what third parties should be forced to disclose. People may be interested to know who is financing a website. Under the current legislation, third parties spending over a certain amount on election material are required to register with the Electoral Commission. Despite an increasing number of political websites commenting on election issues, most did not register as third parties in the 2010 General Election (an exception being the campaign group 38 Degrees). As I told the Committee on Standards in Public Life last year, it is not clear why websites are not being caught by the election regulations. In many cases, this will be because the costs of running the website do not meet the threshold for registering as a third party. However, in the case of larger scale websites that employ staff, that threshold is more likely to be met. This is a matter that requires further investigation.

Third party activities during elections have long been the subject of regulations, but it is not clear at present how these controls are or should be applied to internet communications. I can certainly see hazards in some regulatory strategies and would not want to see any heavy-handed controls.  The complexities of the issue provide a strong incentive to ignore these questions. However, the role of third party activities online needs to be considered if elections are to remain fair, and Mike Gapes’ contribution on Tuesday is important in acknowledging the potential problems attack sites can pose.

Jacob Rowbottom is a Lecturer at the University of Cambridge and author of Democracy Distorted.


Filed under UK Parliament