Who wouldn’t trade a little privacy for a lot of security?

Posted: 20 December 2017

If privacy is defined as the ‘right to be left alone’, whether by governments, private companies or both, then very few of us currently enjoy that right.

We have already traded our privacy for technology. The webcam on your laptop can be remotely activated. Voice assistants such as Siri or Alexa are constantly switched on, listening, and recording. How else would they hear the wake words 'Hey Siri' or 'Alexa'? The new BMW has seven different SIM cards inside it, all of them sending data back to headquarters. Smart fridges know whether or not you are in the house. Facial recognition technology is becoming ubiquitous: it can unlock your smartphone, but it could also be used to discern your sexual orientation [1], or to automatically alert the police if you attend a football game or the Notting Hill carnival.

Whilst most of these technologies are created and owned by private companies, the state can use them to monitor, collect data, or gather information on its citizens, all in the name of security. Governments can already listen in to any phone call you make, and can ask phone operators to decrypt any text messages sent through their network. Law enforcement agencies can study your wifi-enabled appliances to track your movements. Intelligence services can force Internet Service Providers to provide them with a user’s browser history, and in some cases, directly access it themselves - without anyone else knowing about it.

None of this would be too worrying if it were underpinned by, and framed within, a robust regulatory framework detailing the circumstances in which these technologies can be accessed, and by whom. As things currently stand, however, this is not the case. The state surveillance apparatus can access most devices, most of the time, without a warrant or other written justification. And even if this isn't too alarming in Western democracies, it is easy to imagine how this access could be abused in more authoritarian countries, where political dissent is a criminal offence that entails a lengthy spell in prison, or worse.

The Law Society and the University of Essex's Human Rights Centre organised a research seminar on 11 December to start sketching out answers to the human rights questions posed by these technologies. The audience was a mixture of representatives from technology firms, human rights lawyers, politicians and academics. Conducted under the Chatham House Rule, participants discussed the limits to state surveillance using private companies' technology to collect data about individuals - or rather the lack thereof.

Even in Western developed countries, the regulatory framework lags far behind technological advances. When the FBI tried to force Apple to unlock the smartphone of the San Bernardino terrorist, it relied on a law from 1789, the All Writs Act.

Assuming that it is not already too late, at the very least, it should be made transparent to all citizens what their own - if not other - governments can and cannot do to access 'private' information, and what the obligations of private companies are when governments want to use their technology to collect data and information. Vodafone, for example, publishes a report discussing the legal frameworks of the countries in which it operates with regard to government data collection. The situation varies greatly from country to country, and in many the law governing disclosure is unclear.

Yet there appears to be little demand from citizens or consumers for regulators, parliaments or governments to publish their own transparency reports, outlining in a meaningful and reliable way the nature and number of requests made by various agencies.

While it may seem difficult for legislators and regulators to respond to, let alone anticipate, technological development, many of the conundrums it poses have been apparent for the best part of a decade, if not more: artificial intelligence, facial recognition software, big data and algorithms have all been part of the public discourse since the early 2000s, if not longer.

It is important not to fall into a trap of dystopian pessimism. Properly harnessed, technology can enhance transparency and accountability. The EyeWitness Project, for example, has created technology that enables human rights defenders to capture verifiable footage related to international atrocities and document human rights abuses. The pictures, taken through an anodyne-looking camera app, capture the metadata needed to ensure that images can be used in investigations or trials. Each picture is immediately and securely stored in a remote online storage facility.
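The core idea behind verifiable capture - binding an image to its metadata so that later tampering with either can be detected - can be illustrated with a minimal sketch. This is not the EyeWitness Project's actual implementation; the function names, the choice of SHA-256, and the metadata fields are all assumptions made for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def seal_capture(image_bytes: bytes, latitude: float, longitude: float) -> dict:
    """Bundle an image digest with capture metadata (hypothetical fields)."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "location": {"lat": latitude, "lon": longitude},
    }
    # Hash the metadata record itself; lodging this digest with a trusted
    # third party at capture time lets investigators verify the bundle later.
    record["record_digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Return True only if neither the image nor its metadata has changed."""
    if hashlib.sha256(image_bytes).hexdigest() != record["sha256"]:
        return False
    unsigned = {k: v for k, v in record.items() if k != "record_digest"}
    expected = hashlib.sha256(
        json.dumps(unsigned, sort_keys=True).encode()
    ).hexdigest()
    return expected == record["record_digest"]
```

In a real system the digest would be cryptographically signed and transmitted to independent storage at the moment of capture, so that the chain of custody does not depend on the device remaining in the defender's hands.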

Private companies have an increasingly important role to play in shaping the debate and legal framework around human rights and technology.

For more information on any of the issues discussed in this blog, please contact Olivier Roth, domestic human rights policy advisor on Olivier.Roth@lawsociety.org.uk


[1] https://www.washingtonpost.com/news/morning-mix/wp/2017/09/12/researchers-use-facial-recognition-tools-to-predict-sexuality-lgbt-groups-arent-happy/?utm_term=.3b7a0ebe95ce