
Ask an AI: what makes lawyers “professional”?

27 June 2017

AI has the potential to augment our ability to deliver affordable, quality, professional services. Does it also have the potential to leave you out of a job? An AI could make legal decisions, but what about ethical decisions?


"By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones." (Daniel Kahneman)

Transforming our professions 

Only 15 years ago, the possibility of replacing professionals in medical diagnosis or legal representation was science fiction. Not anymore. Machine learning is a methodology that is well suited to automating tasks that rely heavily on experience-based "tacit knowledge". This will drastically change the way we practise law, among other professions. And that might be a good thing.

Computer systems have the potential to augment our ability to deliver affordable, quality professional services. Do they also have the potential to leave you out of a job? Absolutely. So far the debate over the desirability of ultimately replacing particular professions with computer systems has mostly been framed in straightforward terms: if automation can maximise the affordability, accountability and quality of our services, we should not allow short-sighted (or self-interested) objections to hamper its deployment.

Yes, whatever professional work is left may become duller as a result (live with it). No, we do not necessarily need to preserve face-to-face interaction to deliver quality professional services that deserve the public's trust. Is there any difference, in this respect, between the services provided by mountain guides, say, and professionals? If we let this outcome-focused logic run its course, there soon won't be.

Drawing a line: professional responsibility as a constraint to wholesale AI replacement

To think long and hard about what, if anything, distinguishes us as "professionals" from other expert service providers is not something that we lawyers (nor, for that matter, doctors or even philosophers) tend to be very keen on. This is in part because the notion of professionalism is historically entangled with rather vague notions of "public interest" (who wants to argue that the services of mountain guides are not in the public interest?) or social engineering ambitions that are not that palatable today.

Most people will intuitively associate a professional's particular status with a concomitant responsibility, but the grounds of that responsibility remain elusive. I argue that the specific responsibility of professionals lies in the distinct nature of the lay-professional relationship.

At the heart of this relationship is a vulnerability that is different in kind from the one at play when our life is at stake on the side of the mountain. Indeed, the role of the mountain guide does not affect the development of those interests or concerns that are closest to our sense of self. In most cases, the role of the professional does: whether we are struggling to preserve our health or our social standing and recognition (which divorce, sudden poverty or prosecution can all endanger), our sense of owning the way we project ourselves, both socially and physically, is typically weakened.

Because, and to the extent that, in our society, educators, bankers, lawyers and doctors are all in a position to significantly alter our sense of self, they are endowed with a particular type of responsibility. That responsibility simply cannot be met by a computer system. Hence a line needs to be drawn between those occupations that may eventually lend themselves to wholesale AI replacement on consequentialist grounds, and those that should not.

Now let's imagine I've convinced you, and that as a professional body lawyers successfully manage to constrain the deployment of professional AI applications: computers become our indispensable partners. To play that role, they'll have to be able to take into account the whole range of moral values and concerns that permeate professional practice. How do they do that? Or, more precisely, how do we do that?

Computer-enabled moral holidays? Beware what you wish for

Computer scientists talk of the so-called value-alignment problem. They do so in a way that worries me, because they tend to think of moral values as given: provided we can somehow identify them, all we have to do is include these values as constraints within the operation of the system. This assumption is both naïve and dangerous. It is naïve because, even in the most harmonious societies, values will always be the subject of controversy and disagreement. It is dangerous because ethics cannot but be a work in progress.

If we start thinking of moral values as static, lending themselves to some neat inclusion into systems designed to simplify our practical reasoning, the danger is that we'll not only stop being the authors of those values, we'll also stop being capable of "ethical effort": the critical engagement that is at the root of the messy but nevertheless precious value system we share today. My worry is that computers may become so very good at simplifying our practical reasoning that we may find ourselves in never-ending 'moral holidays'. These might look attractive at first, until we find ourselves incapable of mobilising atrophied moral muscles.

You don't need to learn to code to contribute to design choices

Our quest to develop artificial intelligence has already taught us much about our own, eminently fallible intelligence. Now that AI applications prepare to revolutionise the way we professionals operate, we stand to learn something important, both about the nature of our work and the nature of our responsibility as professionals. The latter could be bolstered (rather than hampered) by the deployment of professional AI on one condition: that we actively engage as a profession with the strategic choices that are being made today, both in terms of policy and in terms of system design.

For a fuller version of this argument that Dr Delacroix gave at our London Tech Week event, read Drawing a Non-Consequentialist Line: Augmenting v. Replacing the Professions with Computer Systems.

Dr Sylvie Delacroix was one of four speakers at our free 2017 London Tech Week event: Does your machine mind? Ethics and potential bias in the law of algorithms.


Get the latest in cyber regulation, guidance and emerging technologies affecting the legal sector at our Legal services in a data-driven world event, 27 September.

Tags: technology | artificial intelligence

About the author

Dr Sylvie Delacroix is a reader in legal theory and ethics, UCL Laws and UCL Computer Science. 

