
Algorithms in the justice system

4 June 2019

The map shows what we know today about where complex algorithms are being used in the justice system in England and Wales. There is no centralised, publicly available information on this topic, so our picture is probably not yet complete.

Complex algorithms help public bodies manage and make use of the vast quantities of data they hold about people, places and events.

Facial recognition technology, predictive policing and risk assessment tools are all examples of algorithms being used in the justice system. They help police forces make decisions about all sorts of things – from where to deploy officers on the beat, to who is at risk of being a victim or perpetrator of domestic violence, to who to pick out of a crowd or release on parole.

Human and civil rights may be infringed by algorithms used today in the justice system. In some cases their use may not be lawful. We are calling on government and public bodies to be transparent and take responsibility for how algorithms are being used by police, prison and border forces. Human rights and equality must be intrinsic to the design of any such system.

We have created this map using information from a variety of sources: evidence provided to the Law Society Technology and Law Policy Commission, Liberty’s report Policing by Machine and media reports. If more information comes into the public domain we hope to be able to include it here to build a more complete picture of complex algorithms being used in the justice system.


Human rights at risk


The Human Rights Act 1998 sets out the fundamental rights and freedoms that everyone in the UK is entitled to. It incorporates the rights set out in the European Convention on Human Rights (ECHR) into British domestic law. The Human Rights Act came into force in the UK in October 2000.

Algorithms in our justice system put some key human rights at risk, including rights protected by the ECHR.

Predictive policing


Predictive policing algorithms usually use historical crime data to predict where future crimes might occur.

Concerns over predictive policing include:

  • Historic police data does not always provide an accurate picture of crime in an area. For example, there may be crimes which go unreported, or areas which are historically over-policed.
  • Predictive policing algorithms can easily become biased. For example, people from a mixed ethnic background are twice as likely to be arrested. Because the algorithms are built on this data, they risk inheriting these biases.
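The feedback loop behind the second concern can be sketched in a few lines of Python. Everything here is hypothetical – two areas with identical underlying crime rates, and a simple rule that sends patrols wherever recorded crime is highest:

```python
import random

random.seed(0)

# Toy feedback-loop sketch (hypothetical, not any force's real system):
# patrols go to the area with the most *recorded* crime, and patrolling
# an area increases how much of its crime gets recorded.
true_crime = {"A": 10, "B": 10}   # identical underlying weekly crime rates
recorded = {"A": 12, "B": 8}      # area A starts slightly over-policed

for week in range(20):
    # "Predictive" step: patrol wherever past records are highest.
    patrolled = max(recorded, key=recorded.get)
    for area, rate in true_crime.items():
        detection = 0.9 if area == patrolled else 0.5  # patrols detect more
        recorded[area] += sum(random.random() < detection for _ in range(rate))

print(recorded)  # area A's recorded total pulls further ahead of area B's
```

Because patrols detect more of what happens in front of them, the over-policed area's records keep growing, and the "prediction" confirms itself even though the two areas are equally crime-prone.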

For more information, read our report.

Facial recognition technology


Facial recognition technology scans faces in public spaces – on the street, or at large events like music concerts and sports matches – to identify people on a watch list, such as suspected criminals or vulnerable people.

The use of facial recognition technology must operate within the rule of law, with the lawful basis explicitly and openly defined and the assessment made publicly available.

For more information, read our report.

Related podcast: Facial recognition technology - Who is watching us?

Individual risk assessment


Individual risk assessment algorithms are used to predict people’s behaviour and circumstances to determine who is likely to be a perpetrator of a crime or a victim. Risk assessment tools are used to predict:

  • likelihood of re-offending
  • likelihood of being a gang member or a victim of gang violence
  • likelihood of committing a serious crime
  • likelihood of being reported missing
  • likelihood of becoming a perpetrator or victim of domestic violence
  • likelihood of becoming the victim of a serious crime
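As an illustration only – the factors, weights and formula below are invented, not those of any deployed tool – a risk assessment of this kind typically combines yes/no facts about a person into a single "likelihood" score:

```python
import math

# Hypothetical risk-assessment sketch. The factors and weights are
# invented for illustration; real tools' inner workings are rarely public.
WEIGHTS = {
    "previous_offences": 0.8,
    "age_under_25": 0.5,
    "known_associates_flagged": 0.6,
}
BIAS = -2.0

def reoffending_risk(person):
    """Combine yes/no factors into a 0-1 'likelihood of re-offending'."""
    score = BIAS + sum(w * person.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-score))  # logistic squashing to the 0-1 range

p = {"previous_offences": 1, "age_under_25": 1, "known_associates_flagged": 0}
print(round(reoffending_risk(p), 2))
```

The concern raised above enters through the inputs: if a factor such as "known associates flagged" is itself recorded unevenly across racial or social groups, the score inherits that unevenness however neutral the formula looks.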

The use of these algorithms to assess victimhood has been controversial and some people are concerned that – much like predictive policing programs – these algorithms contain inherent racial and social biases.

For more information, read our report.

Gangs matrix


The gangs matrix is currently only in use in London and was developed by the Metropolitan Police following the 2011 London riots. The matrix flags gang members, those considered susceptible to joining gangs, and potential victims of gang violence.

The matrix has received considerable criticism, and in December 2018 the London mayor's office ordered reform after finding that the matrix was potentially discriminatory.

For more information, read our report.

Nationwide: offender management


The Ministry of Justice ‘digital reporting tool’ sorts and analyses live data on prison inmates' conduct during their time behind bars, providing an overview to inform offender management decisions – such as which prison or wing someone is placed in and which activities they can take part in.

The data that is used includes things like involvement in assaults, disorder and seizures of contraband such as drugs and mobile phones - as well as demographic information and location history.

Details about new incidents are logged on the database shortly after they take place, which can result in new scores being generated regularly. Inmates' "scores" can change to take into account improvements or deteriorations in their behaviour.
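A minimal sketch of that kind of incident-driven rescoring, with invented incident weights (the actual scoring rules are not public):

```python
from datetime import date

# Hypothetical incident weights for illustration only; the real tool's
# categories and scoring are not in the public domain.
INCIDENT_WEIGHTS = {"assault": 5, "disorder": 3, "contraband": 4}

def updated_score(score, incidents):
    """Raise an inmate's score for each newly logged incident."""
    return score + sum(INCIDENT_WEIGHTS.get(kind, 1) for _, kind in incidents)

new = updated_score(10, [(date(2019, 6, 1), "assault"),
                         (date(2019, 6, 3), "contraband")])
print(new)  # 10 + 5 + 4 = 19
```

A real system would also let scores fall as behaviour improves; the point here is only that each logged incident feeds directly into the number used for placement decisions.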

Nationwide: visa processing


The Independent Chief Inspector of Borders and Immigration reported in 2018 that “Since 2015, UKVI has been developing and rolling out a ‘streaming tool’ to assess the perceived risks attached to each [visa] application.

“Managers explained that the streaming tool was fed with data of known immigration abuses (for example, data relating to breaches of visa conditions after entry to the UK).

“The tool streams applications green, amber or red.

“The tool uses a ‘decision tree’ approach with a series of yes/no questions to determine an applicant’s risk rating. There are three possible ratings:

  • Green: Low risk applications, more likely to have known positive attributes and evidence of compliance
  • Amber: Medium risk applications, with limited evidence (or equally balanced evidence) of negative and positive attributes so potential for refusal
  • Red: High risk applications, appearing to have a greater likelihood of refusal because of the individual’s circumstances”
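Based only on the description above, the streaming logic can be sketched as a short decision tree. The specific yes/no questions below are hypothetical, since the real rules are not public:

```python
# Sketch of the red/amber/green 'decision tree' streaming described above.
# The questions are invented examples, not UKVI's actual criteria.
def stream_application(app):
    """Walk a series of yes/no questions and return a risk rating."""
    if app.get("prior_visa_breach"):         # known negative attribute
        return "red"
    if app.get("previous_compliant_visit"):  # known positive attribute
        return "green"
    return "amber"                           # limited or balanced evidence

print(stream_application({"prior_visa_breach": True}))        # red
print(stream_application({"previous_compliant_visit": True}))  # green
print(stream_application({}))                                  # amber
```

Even this toy version shows where bias can hide: whatever historical "abuse" data feeds the questions determines who lands in red, and that mapping is invisible to applicants.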

Little information about this tool is in the public domain. This raises concerns about transparency and in-built bias, and the possibility that, without proper frameworks and oversight, applicants from certain groups may be discriminated against.


We have published a report on the use of algorithms in the criminal justice system.