Our one-year mission: the Technology and Law Policy Commission on the use of algorithms in the justice system
What is the Technology and Law Policy Commission?
The Law Society's Technology and Law Policy Commission launched in summer 2018 to help us understand the critical issues involved with algorithms being used in the justice system. The Commission set out to examine, through a cross-disciplinary method, the use of algorithms in the justice system in England and Wales and what controls, if any, are needed to protect human rights and trust in the justice system.
We held four public evidentiary sessions, interviewed over 75 experts and read over 80 submissions of evidence. These engagements spanned disciplines and sectors, with expertise in computing, regulation, political science, ethics, the rule of law, public policy, human rights and civil liberties, from the public and private sectors.
What is an algorithm?
An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.
In traditional systems, the relationships between inputs and outputs are crafted by hand. These are often rule-based systems, like flowcharts.
More advanced algorithmic systems, which include machine learning approaches, are used with problems where pre-existing rules or theories do not capture the desired input–output relationships well. In these cases, machines craft the relationship between inputs and outputs backwards from the data.
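The contrast can be sketched in a few lines of code. This is a deliberately toy, hypothetical example (the inputs, thresholds and "flagging" task are invented for illustration and do not reflect any real justice-system tool): the first function encodes a rule written by hand, while the second works a rule out backwards from labelled past cases.

```python
# Hand-crafted, rule-based system: a person writes the relationship
# between inputs and outputs explicitly, like a flowchart.
# (Inputs and thresholds here are invented for illustration only.)
def rule_based_flag(prior_offences: int, age: int) -> bool:
    """Flag a case for review using a hand-written rule."""
    return prior_offences >= 3 and age < 25

# Machine-learning style: the relationship is derived backwards from
# data rather than written by hand. This tiny "learner" searches for
# the threshold on prior offences that best reproduces the labels in
# a set of historical (made-up) cases.
def learn_threshold(cases: list[tuple[int, bool]]) -> int:
    """cases: (prior_offences, was_flagged) pairs; returns the best threshold."""
    best_t, best_correct = 0, -1
    for t in range(0, 11):
        correct = sum((offences >= t) == flagged for offences, flagged in cases)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

history = [(0, False), (1, False), (2, False), (4, True), (5, True)]
print(learn_threshold(history))  # the learned rule: flag when prior_offences >= 3
```

In the first case a human can point to the exact rule being applied; in the second, the rule is whatever the data produced, which is part of why questions of scrutiny and explanation recur throughout the Commission's findings below.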
Are algorithms currently being used in the justice system?
Yes. Police forces, crime labs, courts, lawyers and parole officers are using algorithmic systems in many ways. We have created a map showing what we know today about where complex algorithms are being used in the justice system in England and Wales.
What did we find?
During our evidence-gathering stage we heard about the benefits of algorithms, including how they enable people to carry out their jobs more efficiently. We also heard about the unintended consequences and risks associated with new technology that has not been tested or piloted.
What are the benefits?
Algorithmic systems are considered to bring a range of benefits to different sectors and functions, although not all have been realised yet. We explore this further in our Commission report. Some of the potential benefits in the context of criminal justice are:
- Efficiency from automation
- Access to justice
- Consistency and control
- Monitoring performance
What are the dangers?
There are many approaches to understanding dangers from algorithmic decision-making in the justice sector. Our final Commission report breaks these down into:
- Deployment concerns, considering issues around their use, and the consequences of misuse or errors, such as:
  - Bias and discrimination
  - Oversimplification of complex issues
  - Changing nature of law
  - Vulnerability to attacks and failure
- Human rights / rule of law concerns, considering the threat to individual human beings being respected as whole, free persons:
  - Individuals not treated as such
  - Dehumanised justice
  - Loss of autonomy
- Legitimacy concerns, relating to questions of legitimacy and procedural justice around decisions made and supported by algorithmic systems:
  - Opacity preventing scrutiny of justification
  - Rule-making without scrutiny
  - Power and function creep from information infrastructures
We have published the final Commission report which includes a detailed review of our evidence on the current use of algorithms and forecasts how these are likely to develop, and how they should be regulated. It also provides our recommendations for immediate actions to safeguard human rights and maintain public trust in the justice system.
- A legal framework for the use of complex algorithms in the justice system. The lawful basis for the use of any algorithmic system must be clear and explicitly declared.
- A national register of algorithmic systems used by public bodies.
- The public sector equality duty must be applied to the use of algorithms in the justice system.
- Public bodies must be able to explain what human rights are affected by any complex algorithm they use.
- There must always be human management of complex algorithmic systems.
- Public bodies must be able to explain how specific algorithms reach specific decisions.
- Public bodies should own software rather than renting it from tech companies, and should manage all political design decisions.
Views expressed in our blogs are those of the authors and do not necessarily reflect those of the Law Society.
Explore our new map showing what we know today about where complex algorithms are being used in the justice system in England and Wales
Visit the Technology and Law Policy Commission for details of the experts and to watch the evidence sessions
Read blogs by the co-commissioners:
- Professor Sofia Olhede: Can algorithms ever be fair?
- Professor Sylvie Delacroix: Will data + algorithms change what we can expect from law (and lawyers)? and Ask an AI: what makes lawyers "professional"?
Our Lawtech Report highlights key developments and what they mean for the work of the profession and the business of law
Read Rafie Faruq, CEO of Genie AI, on 10 factors to consider before procuring legal AI