William McSweeney is the Law Society’s technology and law policy adviser, working on the impact of key emerging technologies on the law.
Facial recognition technology is gaining a foothold in many industries, and over the years to come, could affect us all. But where will we see the benefits, and what are the issues to look out for from a legal and ethical perspective?
AI and facial recognition mean that a photo can be quickly scanned against thousands of others for a possible match, drawn from sources as varied as border checkpoints and human trafficking investigations. The technology can also predict how a person's appearance will change over time. This has already been used to staggering effect in India, where 3,000 missing children were located in just four days after photos provided by parents were compared with those taken in Child Help Centres around the country.
Many genetic disorders come with subtle facial traits, and algorithms are beginning to identify these with a simple facial scan. More sophisticated trials have even managed to pinpoint the specific genetic mutation that has caused a particular syndrome. The hope is that this technology eventually rivals traditional genetic testing in terms of accuracy, while besting it on both speed and cost.
The 'Internet of Things'
The Internet of Things (IoT), an increasingly common catch-all term for the extension of internet connectivity into everyday objects, is becoming ever more available in our homes, from doorbells and security cameras to fridges and thermostats. Essentially, commonplace domestic devices can now be 'smart'.
The Internet of Things can make our lives easier and more convenient in some ways, but with it comes an increased ability to create, store and share new data, leaving that data vulnerable to hacking. With stories ranging from security systems compromised through smart lightbulbs to coffee machines used to infect computers with ransomware, the possibilities may seem absurd, but should we be taking them seriously? Some IoT devices, such as smart doorbells, harness facial recognition technology, and there are many concerns surrounding its use and application.
Facial recognition technology – the concerns
The use of facial recognition technology raises questions around efficiency, bias, impact on human rights, and legislative basis. Facial recognition technology has the ability to militarise policing in public spaces. It has been used in other countries to target vulnerable communities and curtail legal and legitimate protest.
Campaigners, including Liberty UK and Big Brother Watch, have stressed that mass surveillance of innocent people in public violates three articles of the European Convention on Human Rights: article 10 (the right to freedom of expression), article 11 (the right to freedom of assembly and association) and article 8 (the right to a private life). There is a real worry that the indiscriminate use of facial recognition technology in the public realm stifles non-conformist modes of appearance and expression. Facial recognition technologies and their use have normalised pervasive surveillance practices in public spaces and, in doing so, have undermined several inalienable rights.
Facial recognition software is beginning to be used by police forces around the UK, yet the number of false positives remains high in many use cases. When it was used at the 2017 UEFA Champions League final to identify people who had previously caused trouble, 92% of the matches it generated were incorrect, with only 173 people correctly identified. In addition, human checks and balances tend to fall away when computer-generated decisions are accepted as accurate, leaving limited or no human oversight.
Facial recognition software learns from the data it is trained on, and because of the demographic profile of the people developing the technology, training sets have skewed towards Caucasian male subjects. This leads to lower identification rates for women and for anyone with darker skin: one trial misidentified darker-skinned women as men 31% of the time, and in March 2017 the US Government Accountability Office found that these technologies were 15% less accurate for women and ethnic minorities. When the technology is used in high-stakes scenarios, such as identifying criminal suspects, it is not difficult to see how this could perpetuate existing racial bias within the law.
While biometric identification requires you to consciously submit something to match, such as an iris or fingerprint scan, facial recognition software merely needs to capture an image of your face, which can be done as you walk through public spaces. This can result in a form of 'perpetual line-up', where our images are constantly matched against those of potential criminals.
In February 2017, the government gave unconvicted individuals the right to ask police forces to delete their images from the custody image database. A year later, only 67 applications for deletion had been made, of which just 34 were successful (Press Association investigation based on figures from 37 of the 43 police forces in England and Wales, obtained through freedom of information requests, 2018). This suggests that the current processes for storing and deleting these images are not fit for purpose.
The open nature of this kind of technology has consequences far beyond law enforcement and other regulated bodies. Invasion of privacy can have a serious effect when it's committed by a company, but it can have a devastating one if it's used by individuals to commit crimes like stalking or harassment. It remains to be seen if lawmakers around the world can keep up with the speed of innovation from the likes of Amazon and other large product developers.
Any facial recognition technology developed for use by, or on behalf of, public agencies should be open, transparent and accountable. It is vital that the design, development and deployment stages of technological innovation and adoption are open to both internal and external evaluators, so that results can be validated independently. Technology should only be deployed when it:
- complies with data protection and human rights laws, ethical considerations, and administrative law
- is tied directly to a long-standing and ethical policy
- operates in line with its initial problem statement.
Views expressed in our blogs are those of the authors and do not necessarily reflect those of the Law Society.
Read more and download the Algorithms in the justice system report
Explore our map showing what we know today about where complex algorithms are being used in the justice system in England and Wales.
Visit the Technology and Law Policy Commission for details of the experts and to watch the evidence sessions
Read blogs by the co-commissioners:
- Professor Sofia Olhede: Can algorithms ever be fair?
- Professor Sylvie Delacroix: Will data + algorithms change what we can expect from law (and lawyers)? and Ask an AI: what makes lawyers "professional"?
Our Lawtech Report highlights key developments and what this means for the work of the profession and the business of law
Read Rafie Faruq, CEO of Genie AI, on 10 factors to consider before procuring legal AI