Speaking notes for Christina Blacklaws at London Technology Week 2018.
Good afternoon, welcome to the Law Society of England and Wales.
I would like to start by thanking you for attending the launch of our Public Policy Commission today as part of London Technology Week 2018.
My name is Christina Blacklaws and I am the vice president of the Law Society.
The opportunities and the challenges presented by technology are among the biggest issues facing our global society. They force us to address deep ethical and jurisprudential questions, and spark fierce, often polarised debate.
And nowhere is this debate more important, intense or difficult than around the areas our commission aims to investigate - human rights and the justice system!
So let me tell you about:
- This new Commission, and the issues it will be examining
- How it will unfold and the format it will take
- Some of the complexities and intricacies linked to the use of algorithms in the justice system.
2. Public Policy Commission
Today, I’m delighted to announce that we are launching a ground-breaking initiative of a year-long exploration of the impact of technology and data on human rights and the justice system, focusing on the use of algorithms by various actors within the justice system.
The justice system is a core pillar of the effective operation of a democratic nation. It is vital that, in dispensing justice, the system is both efficient and effective, but most importantly that it retains the trust and confidence of society.
As we see technology developing at a phenomenal rate, people are rightly exploring how these tools can be used to support our justice system.
Innovation pushes at the boundaries of norms - that is to be expected. But at the same time, we should ensure that as a society we understand the choices we make, that there is space for public discussion about the pros and cons, that we give voice to all those with a stake, and that we then emerge at a point of informed consensus.
It is in pursuit of this debate - one which will be cross-disciplinary, and which will involve the voices of citizens, law enforcement, academia, computer scientists, ethicists and the legal profession - that we are today launching this Commission.
I have the privilege of chairing the Commission, along with co-commissioners Sofia Olhede and Sylvie Delacroix - who I will introduce properly later on.
Yes, I know three women on a tech issue - this is not going to be your standard Commission!
However, we are modelled on a traditional Commission format in that we will be taking oral and written evidence from a range of experts from the diverse fields of tech, government, commerce and human rights, to explore the various implications, good and bad, of using these algorithm-based technologies in our justice system from a human rights perspective. Do we need greater protection? Is the current human rights framework fit for the modern age? Amongst many, many more questions.
This evidence will inform our report, which we will make public. And this is key: not only will our report be public, but so will the process. Our sessions will be open to the public, and we will publish all evidence provided - we aim to lead the charge in having a public debate about these issues.
3. Evidence sessions
The Commission will meet three times over the rest of this year to inform our final report, which is provisionally scheduled to be launched in February next year.
The first evidence session, which is scheduled for late July, will consider the use of algorithms in the justice system today and how we foresee their future use. This opening session will provide us with an insight into the current state of play with regard to the development, sale and use of algorithms in the justice system of England and Wales.
Following on from that in September, our second session will consider the future of algorithms in the justice system - where we aim to understand what is on the horizon, and to anticipate future developments.
The third and final evidence session for the Commission will examine what a framework for algorithms in the justice system should look like.
4. Horizon scanning report on AI
To kick start this discussion, I am also thrilled to launch our new horizon scanning report on Artificial Intelligence (AI) and the Legal Profession.
This report explores the developments of the use of AI in legal practice, such as document analysis and delivery, legal advisor support and case outcome prediction. It will also consider the likely implications on legal jobs, the types of legal work and the impact on fee structures and costs.
The report also examines the legal issues arising from the increased use of AI systems in society generally, including issues around transparency, ethics and liability.
Over the next few years there can be little doubt that AI will start to have a noticeable impact on the legal profession.
Change is coming regardless, and law firms will have opportunities to explore and challenges to address.
We believe we are in a unique position to lead the debate in society about the difficult ethical questions around AI. And we see this report and the launch of this Commission as one of the first steps towards debating and making decisions about these issues.
5. Nature of Algorithms
So what are we actually talking about here? What is an algorithm? If we 'dissect' an algorithm we will find that it is just a set of instructions for solving a problem or completing a task: a set of rules and procedures that gets us from where we are now to where we want to be. But with the increase in computing power, and the capability to go beyond the expert system, we are faced with a whole host of potential uses that many may never have dreamt possible.
Algorithms are not confined to elite computing and machine-learning applications in the computing labs of the world's elite universities; they are completely embedded in our lives. They are the brains behind our devices, curating our world of endless options. They point us in the right direction for what we want and need - because we have wanted and needed it before - and they help to optimise our time by giving us the information we want quickly and accurately.
There is nothing wrong with algorithms in and of themselves. If we step back and take a close look, it’s clear that algorithms do what humans do:
- they take data,
- they process the data through a series of 'conditions', and
- they arrive at an answer.
Whether you are a judge, a lawyer or a custody officer, this is what you do when you have a decision to make.
Sometimes these decisions are made consciously, sometimes subconsciously - but the process is the same.
Algorithms are no different, except that they can process a lot more data, and do so much more efficiently. They are powerful and ubiquitous, and they have even evolved to write their own programmes without human intention or intervention. If we don't know which data an algorithm has been exposed to, or which data it has based its actions or decisions on, we are right to ask questions about ethics and the law.
The question isn’t whether algorithms are right or wrong - they are here to stay. Rather, our endeavour is to understand how best to use this technology for good, to help us with our problems whilst avoiding the creation of new ones.
The real issues are:
- When is their use appropriate?
- What kind of oversight do we need?
- How do we ensure that the data they use is correct and free from bias?
- How do we ensure the system is trusted and trustworthy?
- What are the rights of those whose data is being used?
And just like with decisions from individuals in the justice system, how do we ensure those decisions can be reviewed, appealed, or understood?
6. Algorithms in the justice system
I am conscious of not pre-empting or pre-judging the conclusions of this Commission before it has even begun to take evidence on the issue of algorithms in the justice system.
However, I thought it would be useful to offer some reflections on some of the issues which will no doubt arise in the course of our evidence gathering.
There are issues around ethics, practicalities, oversight and commercial sensitivities in this area.
There are many examples of algorithms already being used today.
In policing, algorithmic data or intelligence analysis is generally used for three reasons:
i. predictive policing on a macro level incorporating strategic planning, prioritisation and forecasting;
ii. operational intelligence linking and evaluation which may include, for instance, crime reduction activities; and
iii. decision-making or risk-assessments relating to individuals.
Durham Constabulary have used an artificial intelligence system to inform decisions about whether or not to keep a suspect in custody. They use an algorithm to assess suspects as low, medium or high risk of reoffending, so that steps can be taken to stop high-risk offenders from going back to prison.
Their Harm Assessment Risk Tool (HART) is one of the first algorithmic models to be deployed by a UK police force in an operational capacity.
HART underwent a two-year trial period to monitor the accuracy of the tool. Over the trial period, predictions of low risk were accurate 98 per cent of the time, whilst predictions of high risk were accurate 88 per cent of the time, according to media reports.
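To make concrete what a figure like '98 per cent accurate for low-risk predictions' means, here is a brief sketch of how per-category accuracy is computed. The tallies below are invented for illustration and are not Durham's actual trial data - they are simply chosen so the resulting percentages match the reported figures:

```python
# Hypothetical tallies, NOT the real HART trial data: for each risk
# band, count how many predictions later proved correct, then divide
# by the total number of predictions in that band.
predictions = {
    "low":  {"correct": 490, "total": 500},   # invented counts
    "high": {"correct": 440, "total": 500},   # invented counts
}

for band, tally in predictions.items():
    accuracy = tally["correct"] / tally["total"]
    print(f"{band}-risk predictions: {accuracy:.0%} accurate")
```

Note that accuracy is measured separately per band: a tool can be highly accurate on low-risk cases while performing quite differently on high-risk ones, which is why both figures are reported.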
The tool has also been used to enable those individuals identified as moderate risk to be eligible for the Constabulary’s Checkpoint programme - an intervention currently being tested in the Constabulary and an ‘out of court disposal’ (a way of dealing with an offence not requiring prosecution in court) aimed at reducing future offending.
Mathematicians and social scientists in the US have also developed a crime prediction tool in collaboration with the Los Angeles Police Department.
After proving effective in reducing the incidence of property crimes such as burglary in California, ‘PredPol’ has been used by Kent Police to conduct their own crime prediction hotspot mapping.
After one year of operation, a review carried out by Kent Police in 2014 found that the software produced a hit rate of 11 per cent, making it ‘10 times more likely to predict the location of crime than random patrolling and more than twice as likely to predict crime [than] boxes produced using intelligence-led techniques’.
The Metropolitan Police and South Wales Police are also using facial recognition technology at public events, music festivals and demonstrations to cross-reference people already on ‘watch-lists.’
Some in policing see this as the next big leap in law enforcement, akin to the revolution brought about by advances in DNA analysis. Privacy campaigners see it as the next big battleground for civil liberties, as the state effectively asks for a degree of privacy to be surrendered in return for a promise of greater security.
But so far this technology has not worked.
The Met used facial recognition at the 2017 Notting Hill carnival, where the system was wrong 98 per cent of the time, falsely telling officers on 102 occasions it had spotted a suspect.
South Wales Police were also given £2.1 million by the Home Office to test the technology, but so far it gets it wrong 91 per cent of the time.
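As an arithmetic aside, a 'wrong 98 per cent of the time' figure describes the share of the system's alerts that were false, not the share of all faces scanned. Working back from the Notting Hill figures quoted above - 102 false alerts at a 98 per cent error rate - implies roughly 104 alerts in total; that total is an inference, not a quoted figure:

```python
# Derived from the figures quoted above: 102 false alerts and a
# "wrong 98 per cent of the time" error rate together imply
# roughly 102 / 0.98 ≈ 104 alerts in total (an inference only).
false_alerts = 102
error_rate = 0.98

total_alerts = round(false_alerts / error_rate)
print(total_alerts)
print(f"{false_alerts / total_alerts:.0%} of alerts were false")
```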
Algorithmic bias will be amongst the many key questions the Commission will explore in detail over the course of the year.
I invite you to join us in this exciting, stimulating and challenging endeavour!
I’d now like to hand you over to my co-commissioners Sofia and Sylvie.
Sylvie Delacroix is Professor in Law and Ethics at the University of Birmingham Law School. She takes a particular interest in machine learning and agency, and has recently been researching the design of both decision-support and ‘autonomous’ systems, and the effect of personalised profiling and ambient computing on ethical agency.
Our next speaker this afternoon is Sofia Olhede, a Professor at the Department of Statistical Science, an Honorary Professor at the Department of Computer Science, and also an Honorary Research Associate at the Department of Mathematics at University College London.
Thank you very much indeed.