When, if at all, might computers replace humans and in what capacities? And how can we grasp the reach and depth of the transformations that are already well underway?
New tools commonly end up changing not just the nature of the problems they are meant to solve, but also the tool-users themselves.
Mobile, connected devices have not only changed the way we make friends. They are also changing our very understanding of what friendship stands for, what we can expect from our friends, and what they can expect from us. Could the same be said of the way computer systems are increasingly being deployed in professional contexts? Are these systems about to change our very understanding of what the legal profession stands for, what we can expect from it, and what it can expect from us?
Improving prediction accuracy, reducing uncertainty
In 2016 a team of researchers designed a system to predict the outcome of cases tried by the European Court of Human Rights (ECHR); it attracted considerable attention for its impressive 79 per cent accuracy. The researchers wrote:
"Recent advances in Natural Language Processing and Machine Learning provide us with the tools to build predictive models that can be used to unveil patterns driving judicial decisions. This can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions."
Similarly, a tool developed in 2017 to predict US Supreme Court decisions over nearly two centuries (despite changes in the court's composition and socio-cultural context) managed to anticipate whether the court would 'reverse' the status quo with 70.2 per cent accuracy.
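The systems described above learn statistical patterns from the text of past cases and use them to score new ones. The following toy sketch conveys the core idea only: it is not the researchers' actual model (which used far richer Natural Language Processing and Machine Learning techniques), and all case summaries and labels here are invented for illustration.

```python
# A toy illustration of text-based outcome prediction: represent each case
# as a bag of words, then label a new case after the most similar past case
# (a crude 1-nearest-neighbour classifier). All texts below are invented.

def bag_of_words(text):
    """Lower-case a text and return the set of words it contains."""
    return set(text.lower().split())

def predict_outcome(past_cases, new_text):
    """Return the outcome of the past case whose wording overlaps most
    with the new case description."""
    new_words = bag_of_words(new_text)
    best_label, best_overlap = None, -1
    for text, label in past_cases:
        overlap = len(bag_of_words(text) & new_words)
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

# Invented 'past cases': (summary, outcome), 1 = violation found, 0 = none.
past_cases = [
    ("applicant detained without judicial review for months", 1),
    ("prompt hearing held and applicant released on bail", 0),
]

print(predict_outcome(past_cases, "applicant held without review for months"))  # -> 1
```

Even this crude sketch makes the conservatism risk discussed below visible: the model can only ever echo the outcomes of the cases it was trained on.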
The risk of inherent conservatism
While these court case predictions bring benefits, particularly for those businesses whose risk models in part depend on the outcome of such cases, they also come with dangers. The most evident risk is of inherent conservatism: cases with a low success prediction are unlikely to be heard in court, in turn making organic changes within case law less likely.
Organic changes often depend upon an accumulation of previous, unsuccessful cases that trigger a growing number of dissenting voices within and without the judiciary. Now, there may be ways of developing tools that not only predict the chances of success in court, but also the likelihood that a particular case will eventually contribute to some organic evolution within case law. But commercial interest in such tools is likely to be low.
Side-effects of the "science of judicial predictions"?
The other type of risk inherent in such prediction tools is less tangible, and could lead to a shift in the aspirations we associate with law. Those who deem prediction accuracy to be the most promising aspect of recent technological advances within the legal profession often assume that the success of a legal system can and ought to be measured according to the extent to which such a system reduces uncertainty.
From the latter perspective, if technological advances ultimately allow us to successfully automate, rather than merely predict, court rulings, we should embrace them: what better way to foster 'the rule of law' than by substituting algorithmic predictability for fickle human judgments?
Yet besides the reduction of uncertainty, there are other values that one may promote through law. The success of a legal system may, for instance, be rated according to the extent to which it preserves the voice of minorities, or peacefully absorbs disagreement. The balancing of these demands and aspirations reflects different political choices, and it can shift as political realities shift. It is necessarily a matter of ongoing political negotiation, and needs to remain so, no matter how alluring the promises of some efficiency-maximising technologies.
What do we want law for?
It is up to us whether we allow an efficiency-driven logic to take over, or whether the aims that preside over the data mining necessary to build better prediction or enforcement tools are made to reflect what we want law for. The latter necessarily reflects changing aims and aspirations.
Do we want a justice system that alleviates, rather than reinforces, existing biases and inequalities? If so, we may want to look twice at the traits and characteristics taken into account by automated law enforcement systems.
Similar questions can be asked of increasingly sophisticated prediction systems, customer-facing apps and the like.
There is little doubt that computer systems will play an essential role within the legal profession, and that this could transform it for the better. Today we are still a long way from harnessing the full potential of the data now available. Everything hangs on exactly how we harness that potential.
For further information on this topic see Computer systems for the legal profession?
Views expressed in our blogs are those of the authors and do not necessarily reflect those of the Law Society.
Join us for our third Technology and Law Policy Commission evidence session on Thursday 14 February, 15:00-18:00, in London with our president Christina Blacklaws, Royal Statistical Society fellow Sofia Olhede (UCL Big Data) and Sylvie Delacroix (University of Birmingham and Alan Turing Institute). This session will focus on what controls, if any, are needed to protect human rights and trust in the justice system. Law Society Hall, 113 Chancery Lane, London WC2A 1PL.
Call for evidence: We are calling for written evidence from all interested parties on the topic of algorithms in the justice system. We are looking to hear from practitioners, academics, tech professionals, civil liberties organisations, companies that make algorithms, public bodies that use them, and anyone who has an interest in technology, the rule of law and human rights.
Research paper from 2017: A general approach for predicting the behavior of the Supreme Court of the United States