Christina Blacklaws’ keynote speech at the Artificial Intelligence in Legal Services Summit

4 June 2019

Law Society president Christina Blacklaws delivered her keynote speech this morning at the Artificial Intelligence in Legal Services Summit to launch the Technology and the Law Policy Commission's report on algorithms in the criminal justice system.

**check against delivery**

Response to the lord chancellor's announcement

I am delighted to be able to follow on from the lord chancellor, and I welcome his announcement that the government will provide funding for the lawtech sector. As chair of the LawTech Delivery Panel I know that this funding will be crucial to achieving our aim of ensuring the UK becomes a world leader in lawtech, and is able to reap the economic benefits it will provide.

I would like to thank the lord chancellor warmly for acting as a champion for UK lawtech, and for all the support his Ministry has provided.

Introduction

It is both a pleasure and a privilege to be able to address you today as a keynote speaker.

It is a pleasure because understanding the ways that technology is changing, and will continue to change, the way we do law is something that I have always had a passion for.

And it is a privilege because this topic has become one of the most interesting and one of the most pressing questions facing the legal profession.

As the lord chancellor described so eloquently just before me, artificial intelligence is no longer the preserve of science fiction - it is all around us and permeates every aspect of our lives.

The criminal justice system is no different.

The rapid growth in the use of artificial intelligence and algorithms across all areas of public and private life has prompted plenty of debate, and no little concern. While new technologies can, and will, revolutionise the way we live our lives, for obvious reasons a lot of focus has been placed on the potential dangers posed by a technologically integrated future.

The speed of technological innovation and adoption in recent years has posed new ethical, legal and social challenges, and prompted many questions: Is artificial intelligence a force for good or an unacceptable risk to human rights? Is it right to allow machines to take over judging functions previously reserved for humans? Do we need to regulate, or indeed even restrict, the use of algorithms in public institutions?

Amidst an often febrile atmosphere, we at the Law Society wanted to separate the facts from the fiction. So we set up the Technology and the Law Policy Commission to examine the use of algorithms in the justice system, and to make recommendations on how policy-makers should respond to the development of these technologies.

Overview of the Commission

I was delighted to have the opportunity to chair the Commission, alongside Professor Sylvie Delacroix from Birmingham Law School and Professor Sofia Olhede from UCL.

We chose to focus our inquiry on the criminal justice system as a whole, as it is in this theatre that the risks associated with AI are at their most acute. Erroneous decisions here could threaten human rights and undermine the rule of law.

Our year-long inquiry saw us:

  • hold four public evidence sessions
  • interview over 75 experts
  • and read over 80 written submissions of evidence

We received input from a wide range of disciplines and sectors, including computer science, regulation, ethics, human rights and civil liberties.

I am delighted to be able to launch the report of the Commission today. I will share with you our main findings, top recommendations and next steps.

Main findings of the report

There were three main findings from our research.

First, there is widespread use of algorithms in the criminal justice system, and significant variety in this use.

Our study found that algorithms were in widespread use across the justice system by police forces, crime labs, courts, lawyers, parole officers and more. The ways that algorithms are deployed are impressively varied, with current applications encompassing:

  • photographic and video analysis, including facial recognition
  • DNA profiling
  • individual risk assessment and prediction
  • predictive crime mapping
  • mobile phone data extraction tools
  • data mining and social media intelligence

Across the country, officials and authorities in the criminal justice system are using all or some of these technologies, at different scales and with different applications. We have produced an interactive map to demonstrate the variety in the use of algorithmic systems across England and Wales.

Some of the drivers behind the use of algorithms in criminal justice are:

  • political pressure to be more proactive in anticipating and reducing crime, rather than simply reacting to crime after it happens. This has driven the use of predictive policing tools across England and Wales
  • a determination to improve efficiency and relieve pressure on our creaking criminal justice system
  • new financial incentives, such as the Police Transformation Fund. Amounting to almost £43 million across 2018/19 and 2019/20, the Fund is creating strong incentives for forces to adopt technological solutions to funding pressures

There are significant benefits to be gained from the use of algorithms in the criminal justice system. For a start, automation of rote services, such as form-filling, checking, information retrieval and dissemination, can significantly improve efficiency - a valuable outcome, particularly for often straitened public bodies.

More granular technologies can also augment existing systems, as well as facilitate a greater degree of scrutiny of existing processes and outcomes. Machine learning algorithms are able to analyse cases, objects or individuals on a deeper level, and open up the possibility of individually tailored interventions.

For example, machine learning can be used to match individuals with the rehabilitative services best suited to their circumstances, to tailor the training courses they receive while in detention, or to identify the leverage points in criminal networks most likely to disrupt their functioning.

Relatedly, algorithms can help to ensure a minimum level of consistency in decision-making across public bodies. They can also improve performance monitoring and evaluation procedures, which often lack rigour across the justice system.

Secondly, we found a lack of explicit standards, and of a clear lawful basis, for the use of algorithms in the criminal justice system.

This was concerning, as the high stakes in the criminal justice system demand careful assessment of any new systems before deployment.

During the course of our study, the Commission became deeply concerned that some systems and databases operating today, such as facial recognition or some uses of mobile device extraction, lack a clear and explicit lawful basis.

Opaquely designed algorithms deny individuals the ability to assess whether an algorithmic decision is legitimate, justified or legal. This opacity can stem from secrecy in the development stage, from a desire by private developers to protect their intellectual property, or simply from the technical complexity of the algorithm.

These issues need to be addressed urgently.

And thirdly, algorithms are not being critically assessed, and are creating risks to the justice system and the rule of law.

Our research found that while many pieces of technology are in pilot or experimental stages, these technologies are not so technically novel that they cannot be critically assessed. In other words, the algorithms can and should be subject to scrutiny.

This scrutiny should be led by experts from different disciplines and should focus on how well algorithms meet the real challenges they are intended to address, and on their potential for unintended and undesirable side effects - particularly the possibility that they might prioritise some goals or values to the detriment of others.

The lack of scrutiny of new algorithms is generating significant risks:

Risks to fairness

Algorithmic systems encode assumptions and systematic patterns which can result in discriminatory outputs. The way input data is labelled, measured and classified is subjective and can embed bias.

Training data itself is almost certain to be biased. There is no way truly to measure crimes committed in society; only proxies such as convictions or, more problematically, individuals arrested or charged can be measured. If, as is commonly accepted, the justice system does under-serve certain populations or over-police others, these biases will be reflected in the data.

Risks to human rights

Algorithms rely on data identifying shared characteristics and patterns to reveal insights. In so doing, an algorithm will naturally categorise individuals into groups, with little consideration of their personal circumstances.

Machine learning also presents significant risks to privacy. There are examples of machine learning systems that can infer private data or behaviours from seemingly non-sensitive data.

Algorithmic systems might also be used to retrieve information from media that are considered private, such as mobile phones and social media, or through facial recognition and 'smart' CCTV analysis.

Risks to the effective delivery of justice and the rule of law

Witnesses also expressed concerns about the possibility of dehumanised justice. On this point, chief constable Michael Barton of Durham Constabulary noted that human decision-makers may lack the confidence and knowledge to question or override an algorithmic recommendation.

Another critical issue that emerged was concern that the way algorithms work, by analysing past data rather than looking forward, could threaten the organic evolution of jurisprudence and risk stagnation.

The use of algorithms in the justice system may soon have to be examined by the courts if there is potential conflict, for instance with the right to a fair trial under article 6 of the European Convention on Human Rights. The European Court of Human Rights has previously ruled that cumulative defects in the judicial process could infringe this right, and it is likely that the court will have to re-examine this precedent if a case involving algorithmic justice is brought before it.

Top recommendations

So, what recommendations do we make to ensure we maximise the benefits of algorithms in the criminal justice system, and mitigate the risks arising from their use?

Firstly, there is a need for a range of new mechanisms and institutional arrangements to improve the oversight of algorithms used in the justice system. This should be done through:

  • enhancing the capacity and role of the Information Commissioner's Office to perform this oversight function. We recommend that the government ensure that the ICO is adequately resourced to allow it to examine algorithmic systems proactively, rather than merely reactively
  • creating a statutory code of practice for algorithms in the justice system under section 128 of the Data Protection Act 2018
  • creating a national register of algorithms in the justice system, which includes data on their characteristics, relevant audits and the datasets used to train them
  • and giving the Centre for Data Ethics and Innovation a statutory footing as an independent, parliamentary body with a statutory responsibility for examining and reporting on the capacity for public bodies, including those in criminal justice, to analyse and address emerging challenges around data

Secondly, we recommend the clarification and strengthening of protections relating to algorithms.

  • this should be done through Part 3 of the Data Protection Act 2018. It could include mandating the publication of data protection impact assessments and provisions for ensuring meaningful human intervention in algorithmic decision-making
  • looking beyond data protection, existing regulations concerning the fairness and transparency of activities in the justice system should be strengthened to take full account of algorithmic decision-making. For example, given the need to counter discrimination and bias in algorithms, a formal requirement to carry out equality impact assessments should be introduced for algorithms in the public sector

Thirdly, consideration must also be given to the procurement of algorithmic systems to ensure that at all stages they are subject to appropriate control, and that due consideration is given to human rights concerns. This should include:

  • a statutory procurement code for algorithms in the criminal justice system with an enforceable duty on relevant actors to adhere to it
  • a review into policy options for mandating human rights considerations in technological design within different sectors. This review should consider how and where human rights impact assessments should be required in public procurement processes
  • and explanation facilities for algorithms in the criminal justice system designed to allow individuals to understand how a decision has been reached and assess whether they should seek a remedy through the courts

Fourthly, our report also makes clear that all algorithms used in the justice system must have a clear and explicit basis in law. To ensure this, the government should:

  • strengthen the powers of the Biometrics Commissioner, and ensure the role is adequately resourced
  • task an appropriate body with establishing a working group to assess the use of technologies to search seized electronic devices and ensure that such searches are legitimate
  • and ensure facial recognition systems operate clearly under the rule of law, with their lawful basis made explicit and publicly available. This also applies to datasets used in facial recognition, which should adhere to conditions of strict necessity and with categories of individuals clearly split as required under the Data Protection Act 2018

And finally, significant investment is needed to allow public bodies to develop the in-house capabilities necessary to understand, scrutinise and coordinate the appropriate use of algorithms.

These recommendations are as ambitious as they are comprehensive. We are confident that they map out the basis of a framework for the ethical and proportionate use of algorithms in the criminal justice system which would allow the public to reap their benefits, while preventing some of the greatest dangers.

Conclusion and next steps

The technologies that are being deployed in the justice system at present are novel. Yet I think our Commission has demonstrated that they are not beyond critical assessment.

Our Commission has sought to approach this area dispassionately, to uncover the real value of algorithmic systems, and identify those areas in which controls are needed to prevent abuses.

Going forward, it will be critical that public bodies are themselves able to carry out these kinds of rigorous assessments to ensure that all algorithms are deployed in a safe and proportionate way, and in that respect developing in-house expertise across the justice system is essential.

Still, in-house capacity is only one piece of the puzzle. Ultimately leadership is needed to ensure that emerging and future issues are analysed on a cross-disciplinary basis.

Our Commission has finished its work, and it is now up to the government and the relevant stakeholders to assess our work and take forward the necessary recommendations. I would like to thank the members of the Commission, expert witnesses and everyone else involved for their work.

We believe that the United Kingdom has a window of opportunity to become a beacon for a justice system that is trusted to use technology well, with a social licence to operate, and in line with the values and human rights underpinning criminal justice. It must take proactive steps now to seize that opportunity.

Thank you.
