How in-house lawyers can (and should) use AI and ChatGPT

In-house teams need to get on board with generative AI: that's the message from Lizzy Lim, legal counsel at Trustpilot. She shares how ChatGPT and other forms of AI can enhance productivity of in-house departments, and what risks legal professionals should be aware of.

Everyone is talking about ChatGPT and AI, and how these systems will change the way we work forever.

Most people seem to be scared of what AI can do, the risks associated with it and whether it might replace lawyers.

This is surprising given that AI has been used by both private practice and in-house organisations through lawtech software for some time.

I work in-house at Trustpilot, an online review platform whose mission is to be a universal symbol of trust. As an in-house lawyer in the technology sector, at an organisation focused on trust, I'm in a unique position to consider AI.

Although there are risks associated with using AI, which should be recognised and considered, AI is here to stay, and in-house organisations should utilise this software or risk being left behind.

How AI can be used in-house

In-house lawyers are sometimes underappreciated by organisations, because assessing legal risks and converting that analysis into a simple explanation of the best way forward takes a significant amount of time.

So, it’s important that in-house lawyers undertake work as efficiently and effectively as possible.

1. Increase efficiency

AI is great for increasing an in-house team’s efficiency. You can use systems such as ChatGPT to create a starting base to work from, such as basic legal research on a topic or creating templates, which can then be probed, modified or used in a way that best suits your organisation.
Sometimes it's difficult to write the first sentence, and having a starting point can really boost productivity.

2. Deal with admin tasks

AI software can also be used for admin tasks, such as creating a spreadsheet or presentation. This is great for those pesky tasks that are repetitive or time-intensive, or for those who are new to in-house life.

By using AI, in-house teams can focus on what really matters and carry out more legal-focused tasks.
With Google's recently announced generative AI features in Google Workspace, this is likely to be the area that sees the most development in the near future.

3. Simplify legal concepts

Most in-house colleagues, such as sales teams, won’t have a legal background. As such, a big part of an in-house lawyer's job is to provide understandable summaries of complex legal concepts.

This is something that many in-house lawyers, especially those at the beginning of their in-house careers, struggle with.

ChatGPT is great for this: you can paste in your original draft of what you want to say and simply ask, "Can you simplify this for me?" It then provides a summary of the information for you.

It’s important that all lawyers are able to explain complex legal concepts, and junior lawyers in particular need to develop this skill, but having an aid to get you going doesn’t hurt.

What AI can’t replace

We hear a lot about all the great things that AI can do, but what can AI not do?

Speaking to other in-house lawyers, I've found this to be a common concern, and one that is particularly disheartening to aspiring lawyers who worry that organisations may replace training contracts with AI software.

However, AI won’t ever be able to replace the service that in-house lawyers provide.

AI will never be able to balance the interests of an organisation to come up with the most commercial decision.

This is a task that in-house lawyers undertake regularly, and although AI can produce information on how to proceed, it won't be able to judge the most sensible way forward in light of your organisation's mission, values and goals.

The risks

Before diving into the risks, it's worth bearing in mind that all software comes with risks that need to be considered before implementation.

There are specific legal risks depending on the software you use, but there are some general AI risks to be mindful of.

1. Data security

Most notably, there is a data security risk. There have been well-reported data breaches involving several AI programmes; for example, OpenAI confirmed a data breach of ChatGPT in March 2023.

These reports highlight the potential dangers of using any software provider with which you don't have a contractual relationship (and, importantly, the necessary data protection arrangements), particularly in relation to personal data.

2. Intellectual property

It's yet to be established who owns the intellectual property in outputs produced by AI.

While we wait for this to be stress tested in the courts, ask yourself: do you want your organisation to be the test case?

3. Discrimination

As with most software, there is a potential discrimination risk when using AI.

Software can only produce output based on the information that it is fed, and if this information doesn’t cover all possibilities, then AI systems may not consider all the options.

This is notable in AI systems that don't account for people's diverse characteristics: for example, it's well known that facial recognition software misidentifies people with darker skin tones more often.

4. It’s not always right

Although AI does a great job of fetching information, there is no guarantee that the information is correct.

To test this, I asked ChatGPT to put together summaries of several legal concepts, and it typically missed out key legislation or legal points.

Although AI is great as a starting point, it's important that its outputs are checked for accuracy; otherwise, you run the risk of producing advice based on incorrect information.

The future

About 80% of in-house lawyers that I spoke to work at organisations that have either restricted or blocked access to ChatGPT.

However, many of the lawyers I spoke to continued to use ChatGPT for work on their personal devices, and they were well aware that their colleagues outside of the legal team were still regularly using it.

Restricting or blocking access to AI/ChatGPT is not the answer.

AI should be used to optimise an in-house team's efficiency by automating repetitive or admin tasks, allowing the team to focus on the matters that are most important to your organisation.

However, organisations should consider their individual risks of using AI.

Some general suggestions on how best to prepare to use AI are:

  • consider how AI may be used across your organisation, and the potential risks associated with this
  • consider how these risks may affect your organisation
  • explain the potential risks of using AI to colleagues across your organisation
  • provide colleagues with guidelines on how they can and can’t use AI software
I want to know more

Read more from InsideOut, our quarterly e-magazine for in-house lawyers

Explore our in-house resources, designed to offer support and advice on key issues facing all in-house lawyers working in the corporate and public sectors, not-for-profit organisations and charities.

Find out more about our In-house Network
