1. Introduction

The emergence of generative artificial intelligence (AI) tools has opened up new possibilities to increase technology adoption in the legal sector.

However, the use of generative AI can create technology and data risks, some of which may not be fully understood, particularly as the technology is developing.

While it is uncertain how generative AI may impact the legal profession in the long-term, many solicitors and law firms are using and investing in tools as part of their practice, with the aims of improving service provision, reducing costs and meeting new client demands.

This introduction to generative AI is designed to be a primer for solicitors and firms, particularly small and medium-sized firms (SMEs), who want to understand more about the technology.

Aspects of this guidance will also be relevant for solicitors working in-house.

It provides a broad overview of both the opportunities and risks the legal profession should be aware of to make more informed decisions when deciding whether and how generative AI technologies might be used.

This guide does not constitute legal advice. 

1.1 Glossary of terms

Similar to many evolving technologies, there is often no universally agreed definition of terms such as 'artificial intelligence'.

This glossary provides context for how relevant terms have been defined in this guide.

Artificial intelligence

The theory and development of computer systems able to perform tasks that usually require human intelligence, such as visual perception, speech recognition and decision-making.

The data needed to train an AI system is referred to as the input data, and the results produced by the system as the output.

Generative artificial intelligence

Generative AI is a subcategory of AI that uses deep learning algorithms to generate new outputs based on large quantities of existing or synthetic (artificially created) input data.

These outputs can include multiple modes such as text, images, audio or video.

Chatbot

A digital tool or software application designed to simulate conversation with users, primarily via text or synthesised speech.

While some chatbots operate based on predefined responses, advanced versions use AI (including generative AI techniques) to provide more dynamic and contextually relevant interactions, reducing the need for immediate human intervention.

Large language model

A large language model (LLM) is an AI system trained on an exceptionally large amount of data. There is no consensus on what constitutes 'large'.

LLMs specifically work with language and are built on a type of machine learning called deep learning to understand how characters, words and sentences function together.

An LLM is one underlying technology that enables generative AI.

The term ‘foundation model’ is often used synonymously with LLM but could refer to systems with broader functions as developed in the future.

Lawtech

Technologies that aim to support, supplement or replace traditional methods for delivering legal services or that improve the way the justice system operates.

Machine learning

A subset of AI that sees computer algorithms evolve and refine their performance by processing and learning from data over time.

1.2 Who should read this?

All solicitors and members of their staff who use, or are looking to use, generative AI tools as part of their practise.

This guidance may be particularly useful for SMEs and in-house solicitors seeking to understand the opportunities and risk landscape of generative AI tools.

AI and generative AI vendors may find this guidance helpful for understanding the types of considerations and risks solicitors need to be aware of when using these tools as part of their practice.

1.3 What is the issue?

While the technology that underpins generative AI is not new, the increased accessibility, affordability and sophistication of these tools have allowed for widespread professional and public adoption.

Like all technologies, AI and generative AI have the potential to enable increased efficiency and cost-savings where processes can be automated and streamlined, with minimal human intervention.

Such opportunities also come with both new and longstanding risks such as:

  • intellectual property risks: potential infringements of copyright, trade marks, patents and related rights, and misuse or disclosure of confidential information
  • data protection and privacy risks: concerns related to the unauthorised access, sharing or misuses of personal and sensitive data
  • cybersecurity risks: vulnerabilities to hacking, data breaches, corruption of data sources and other malicious cyber activities
  • training data concerns: the use or misuse of data to train generative AI models, which could result in biases or inappropriate outputs
  • output integrity: the potential for generative AI to produce misleading, inaccurate or false outputs that can be misconstrued or misapplied
  • ethical and bias concerns: the possibility of AI models reflecting or amplifying societal biases present in their training data, leading to unfair or discriminatory results. There may also be environment, social and governance (ESG) considerations
  • human resources and reputation risks: if the use of generative AI may result in negative consequences for clients, there may be reputational and brand damage

The UK government published a white paper, a pro-innovation approach to AI regulation, which proposes taking a risk-based and principles-based approach to regulating AI.

We have responded to the consultation on the white paper, calling for a balanced approach to AI regulation.

This balanced approach would enable the profession to capitalise on the benefits of deploying AI technologies while establishing a clear delineation of the human role and accountability within the AI lifecycle.

Our response also stressed the importance of ensuring the continuing respect for the rule of law.

The AI landscape has shifted since the publication of the white paper with the exponential emergence of generative AI.

Given the legal uncertainty regarding whether and how AI and generative AI may be regulated in the future, it is important to recognise the risks and available protections under existing regulations.

1.4 Checklist when considering generative AI use

In summary, the following points should be considered when considering generative AI use:

  • define the purpose and use cases of the generative AI tool
  • outline the desired outcome of using the generative AI tool
  • follow professional obligations under the SRA Code of Conduct, Standards and Regulations, and Principles
  • adhere to wider policies related to IT, AI, confidentiality and data governance
  • review the generative AI vendor’s data management, security and standards
  • establish rights over generative AI prompts, training data and outputs
  • establish whether the generative AI tool is a closed system within your firm’s boundaries or also operates as a training model for third parties
  • discuss expectations regarding the use of generative AI tools for the delivery of legal services between you and the client
  • consider what input data you are likely to use and whether it is appropriate to put it into the generative AI tool
  • identify and manage the risks related to confidentiality, intellectual property, data protection, cybersecurity and ethics
  • establish the liability and insurance coverage related to generative AI use and the use of outputs in your practice
  • document inputs, outputs, and any errors of the generative AI tool if this is not automatically collected and stored
  • review generative AI outputs for accuracy and factual correctness, including mitigation of biases and factchecking

2. What is generative AI?

Generative AI is a subcategory of AI that uses deep learning algorithms to generate new outputs based on large quantities of existing or synthetic (artificially created) input data.

These outputs can include multiple modes such as text, images, audio or video.

Given the significant quantities of data and computational power used by these systems, their outputs are considered to mimic human-like responses.

There is a large variety of generative AI tools which can produce text, audio, visualisations and code. Generative AI tools can be proprietary or open-source.

A wide range of paid, free and freemium services are available. Some popular examples include:

  • OpenAI’s ChatGPT and DALL-E
  • Google’s Bard
  • Anthropic’s Claude
  • GitHub Copilot

In simple terms, traditional AI recognises, while generative AI creates.

Historically, traditional AI encompasses a broad spectrum of computer-driven tasks that mimic human intelligence, from pattern recognition to decision-making.

Generative AI differs from traditional AI due to its ability to create new content based on its training data, rather than just analysing or categorising existing information.

It is important to note that generative AI, like all forms of AI, lacks the capability to understand its output and meaning in the same way that humans do.

This means that generative AI cannot autonomously validate or audit the accuracy of its results. It may even create false outputs.

To counteract this, certain tools such as recitation checkers have been developed to address this concern and can be built into the AI process.

As generative AI typically relies on a predefined dataset and cannot automatically be trained using user input and prompts, providing clearer prompts (known as ‘prompt engineering’) can help achieve results more in line with user intentions.

Within the legal profession, an early example of generative AI is Harvey AI. This is a multi-tool platform that can assist lawyers in every practice area with their daily workflows in almost any language.

Harvey has been reported to enhance aspects of legal work such as:

  • contract analysis
  • due diligence
  • litigation
  • regulatory compliance

Dentons has also launched fleetAI, a client-secure version of ChatGPT.

While Harvey and fleetAI are designed and developed with the legal practice and profession as its primary audience, using legal data for training as well as with legal expertise, most freely available tools are not for exclusive use in the legal sector.

2.1 Generative AI in the legal profession

Generative AI can be used in a variety of ways. These include:

  • analysing contracts
  • drafting or summarising documentation
  • facilitating e-discovery
  • powering client chatbots
  • enhancing internal knowledge databases
  • predicting case outcomes

A survey of over 1,000 UK lawyers and legal professionals conducted by LexisNexis found the vast majority (95%) of respondents believe generative AI will have a noticeable impact on the law, with 38% believing it will be significant and 11% transformative.

Two-thirds (67%) of survey participants said they feel mixed about the impact of generative AI on the practice of law, admitting that they can see both the positives and the drawbacks.

In September 2023, the deputy head of civil justice Lord Justice Birss, while commenting publicly on the potential of tools like ChatGPT at our dispute resolution conference, revealed he used the tool to provide a summary of an area of law, which he subsequently included in his judgment.

This is the first known use of an AI chatbot by a British judge, demonstrating that generative AI tools can and already have been used within our legal system.

Generative AI is an emerging market. How and to what extent these technologies are used within the legal profession, as well as what tools become available, may rapidly change.

3. Things to consider and risk management

As solicitors, you must understand your regulatory and professional responsibilities in context of generative AI.

The principles of the Solicitors Regulation Authority's (SRA) Standard and Regulations continue to apply to your provision of legal services.

Even if outputs are derived from generative AI tools, this does not absolve you of legal responsibility or liability if the results are incorrect or unfavourable. These remain subject to the same professional conduct rules if the requisite standards are not met.

In an example from the US, a judge imposed sanctions on two New York lawyers who submitted a legal brief that included six fictitious case citations generated by AI chatbot ChatGPT.

The production of outputs that are presented as accurate but are illogical, nonsensical, factually incorrect or do not reflect the training data is often called ‘hallucination’.

In the same way that generative AI does not understand its outputs, generative AI is not sentient. Hallucinations do not refer to generative AI’s capacity for human-like imagination or perception.

Whether you work in-house or in a SME, are a public or private practice solicitor, similar considerations need to be made regarding the use of generative AI in relation to your business and organisational needs, including the responsibilities of and to the board and key affected stakeholders.

In addition to risk management, this could include:

  • business alignment
  • purpose and scope
  • stakeholder communications
  • cost and ROI analysis
  • pricing
  • ongoing training

If you are an in-house solicitor or work in a SME, you may wish to consider whether you can work in consortium with similar organisations who may also be looking to procure, use and review generative AI tools.

Working collectively may be useful if you have share similar purpose and objectives as well as potentially provide greater value for money.

To support SMEs that have responsibility for deciding how generative AI could be used and adopted within your practice, we created a document that provides an overview of considerations across the generative AI lifecycle from initial exploration, through to procurement, use and review.

Skip to download the PDF

3.1 Current regulatory landscape

If you use generative AI tools as part of legal service provision, it is important that you maintain effective, professional quality control over their output and use.

You should:

  • carefully factcheck its products and authenticate the outputs
  • carry out due diligence, including supplier due diligence, on the AI tools you use and consider the often-limited warranties offered by providers and contained in their terms of use
  • ensure that appropriate staff protocols and guidance are provided around employees’ use of such tools if they are permitted

Where applicable to procurement, you should carefully negotiate key contractual terms of warranties, indemnities and limitations on liability with vendors. This includes any relevant source code agreements.

When assessing the market, it may be useful to examine vendor’s attitudes to research and development of their tool to make sure future innovation is in line with your expectation and objectives.

Consider whether you need or will have long-term support from a vendor, as well as an exit plan should a generative AI tool be adopted but the vendor exits the market.

It is important that you comply with any existing internal policies throughout the process of generative AI planning, from considering the potential use of the tool, to possible procurement, risk management, and decommissioning where relevant.

At present, there are no statutory obligations on generative AI technology companies to audit their output to ensure they are factually accurate.

Consequently, the use of these tools by legal professionals could result in the provision of incorrect or incomplete advice or information to clients.

Additional risk may also occur where automated decisions are made using generative AI outputs.

As there is currently no AI- or generative AI-specific regulation in the UK, it is important you understand the capacities of the generative AI tool you plan to use.

Although you do not have to have full knowledge of the inner workings of a tool, consider the claims the provider is making and assess the evidence and benchmarks they use to demonstrate the tool’s capabilities.

Currently, the SRA does not have specific guidance on generative AI related to use or disclosure of use for client care.

It is advisable that you and your clients decide on whether and how generative AI tools might be used in the provision of your legal advice and support.

While it is not a legal requirement to do so, clear communication on whether such tools are used prevents misunderstandings as to how information is produced and how decisions are made.

If a generative AI tool is used and the tool does not provide a history of use, it is advisable that you document all inputs, outputs, and system errors to ensure that the use of the tool can be monitored as appropriate.

If you have decided to use or procure a generative AI tool, ensure that you regularly assess the tool’s relevance and value addition to your practice.

When assessing the tool, it is important that a holistic view is taken across the tool’s lifecycle.

All reviews should be outcome- and objective-led, with specific measurements taken to assess the tool’s performance.

If your initial or updated requirements are no longer met, consider how you can transition away and extricate your organisations from the tool if necessary, including data removal and deletion within the generative AI system, as well as source code transfer if relevant.

While generative AI introduces new risks, existing risk management processes such as cybersecurity and insurance may already be in place to mitigate risk.

The issues below are specific to generative AI technologies but may overlap with your wider risk management processes.

3.2 Intellectual property

Ownership of input and output data

One key concern when using generative AI tools is determining the ownership of both the input and output data.

In addition to data over which you have control and to which you have rights, data scraping is often performed in the training of generative AI tools and can cause concerns.

Data scraping is the process of extracting data from various sources, such as databases, files or websites (web scraping), often using automated tools.

While this is a common technique for data gathering and is not exclusive to AI or generative AI, its use in training generative AI tools can lead to intellectual property challenges.

There is also potential for copyright infringement if input data containing copyright works results in recognisable outputs.

To mitigate potential data scraping infringements, you may wish to speak to your web service provider to rate limit website requests and disallow generative AI bots from scraping your website by limiting its accessibility from scraping tools through editing the robots.txt file where possible.

Ownership clauses in terms of services

Generative AI tool agreements may contain provisions allowing the AI vendor to reuse input data to refine their system.

Additionally, some agreements note that the AI provider and vendor retains ownership of the output data.

It is imperative that such clauses do not conflict with professional standards concerning the ownership and control of legal advice or compromise data confidentiality.

Procurement considerations

If considering the procurement of an AI tool, clarity on who will own the intellectual property rights over input and output data is essential.

It is important to review the terms of service and privacy policy of each tool you use to check the usage rights, ownership of outputs and any disclaimers. 

3.3 Data protection and privacy

You should be cautious that generative AI companies may be able to see your input and output data.

As many generative AI companies are located outside of the UK, data may be transferred outside of UK borders and international data processing may occur.

Personal data may be knowingly or unknowingly included in the datasets that are used to train generative AI systems.

This could raise data protection concerns both regarding what personal data was used as well as whether such personal data may be present in the outputs.

As generative AI tools are trained using large volumes of data, it may be possible that confidential or sensitive information is exposed.

Generally, it is advisable that you do not feed confidential information into generative AI tools, especially if you lack direct control and oversight over the tool’s development and deployment.

If you are using a free, online generative AI service where you have no operational relationship with the vendor other than use, do not put any confidential data into the tool.

If you are procuring or working with a vendor to develop a personalised generative AI product for internal use contained solely within your firm’s legal environment, you may wish to consider if and how you want to put confidential data into the tool, subject to the terms of use.

Caution should also be taken when using tools and features that are built on top of generative AI platforms.

Metadata and information such as document authorship, websites accessed, and file names and downloads might be shared with the main technology providers, not only with the specific software and vendor you are using.

You can find more information on data protection in our GDPR guide for law firms.

3.4 Cybersecurity

Using generative AI tools can introduce cybersecurity risks, including the potential for malicious actors to exploit vulnerabilities.

For example, one such method is ‘prompt injections’, where certain commands are subtly inserted into the tool to manipulate or bypass tool restrictions on data outputs to perform previously restricted activities.

Other cybersecurity risks include data source corruption that may compromise the generative AI outputs.

Generative AI tools can also be weaponised by adversaries to produce more sophisticated phishing and cybersecurity attacks in greater volume.

The National Cyber Security Centre has written about the cybersecurity issues related to generative AI.

The Institute of Chartered Accountants of England and Wales (ICAEW) has produced its own generative AI guidance. 

3.5 Ethical considerations

There are ethical considerations on the use of generative AI.

In 2021, we published research on lawtech principles and ethics to find out how some of the largest firms in the UK assessed solutions and navigated ethical considerations around lawtech.

Within an overarching client care principle, the five main principles identified to inform lawtech design, development and deployment are:

  1. compliance
  2. lawfulness
  3. capability
  4. transparency
  5. accountability

These ethical principles around lawtech are relevant to generative AI.

More recently, the UK government’s principles outlined in its AI white paper may be useful when thinking about wider ethical considerations regarding the use of generative AI tools. These principles are:

  1. safety, security, and robustness
  2. appropriate transparency and explainability
  3. fairness
  4. accountability and governance
  5. contestability and redress

Public sector and public bodies need to consider generative AI in context of the Public Sector Equalities Duty (PSED), which applies to any AI systems that public bodies are already using or that others may be developing or using on their behalf.

Even for organisations in the private sector, consideration of PSED requirements may help support ethical and responsible practices when using generative AI tools.

The UK Government Digital Service and the Office for Artificial Intelligence have also published joint guidance on how to build and use AI in the public sector.

More broadly on the creation of generative AI systems, it is useful to consider what labour was used and what values were considered in the design, development and deployment process.

When generative AI tools are used, it would be advisable to ask the generative AI vendors for more information about:

  • what datasets they used
  • how such datasets were acquired and fed into their systems for training
  • who is tasked with data labelling and training the generative AI system

As the layers of data and generative AI platforms increase, computational and bureaucratic systems become increasingly complex.

This means that, when harm does occur, figuring out why and who is responsible becomes more challenging, with further complication as the technology continues to evolve.

Finally, access to justice considerations should be taken into account when deciding on the design, development and deployment of generative AI tools.

This includes thinking critically about potential biases, discriminatory or exclusionary practices and outcomes that may manifest from generative AI and AI systems in general.

Our 21st century justice green paper explores the problems facing justice, including challenges related to rapid digitalisation and the rise of AI.

4. Common questions

What do I need to think about if I am looking to use or procure a generative AI tool?

When considering the use or procurement of generative AI tool or system, start by clearly defining its intended purpose and use cases, and by determining whether it aligns with your desired identified outcome.

Outline your requirements before exploring market options, ensuring that the tool can improve your current practices.

This process might require extensive internal discussions to reach a consensus on quality benchmarks, answering the question of “what does ‘good’ look like?”

While cost considerations are crucial, they should not overshadow the tool’s effectiveness and alignment with your need.

In addition, explore tools that may be already available within your organisation, as they, or a combination of them, might be more suitable for your needs. This step may also include seeking the expertise of your colleagues in different teams.

Before you commit, explore the market extensively. Try to obtain free demonstrations from a variety of providers to gain a comprehensive view of available options.

It is important to evaluate the claims made by the providers regarding the tool’s technical capabilities.

In particular, you need to understand how your input data will be used by both the provider and the AI tool itself.

Always review the terms of service and privacy policy of each tool to understand the usage rights, output ownership and any associated disclaimers.

Ideally, trial the generative AI tool first, preferably in isolation of other technical systems, before committing to its use.

Trials are useful for market research and any demonstrations received should focus on holistic outcomes and objectives of your proposed use, not only on the efficiency of the system or the specific scenarios showcased by the vendor.

If your organisation has an AI or generative AI policy, you should adhere to those guidelines.

Consult with your management and IT teams regarding integration with any existing systems within your organisation and refer to the relevant IT or AI policy that may cover these requirements.

Can we share our client data with a generative AI business?

Careful consideration needs to be taken when client data is to be shared with a third-party AI vendor.

Although the data itself may not be confidential, you should be clear on how your input data is being used by the generative AI tool.

This is often covered in the tool’s terms of service. If it is not, you should get confirmation from the tool provider.

This includes data protection considerations but may also require agreements between you and your client. It may be possible that the generative AI provider can see all your inputs.

For example, you should be able to answer questions about:

  • where the data is being processed
  • who is processing the data
  • how and where the data is stored
  • who has access to the data
  • what are you using the generative AI output for
  • whether an alternative, more appropriate means can be used to achieve this
Can I use client confidential and personal data in my organisation to create templates or test a generative AI system?

It is important to consider:

  • your professional obligations under the SRA Code of Conduct
  • the client engagement agreement between you and your client with regards to how you can use their data and whether they can be input into a generative AI system

While there is no legal obligation to disclose the use of generative AI tools as part of legal practice, clear communication between you and your client on whether such tools are used can help prevent misunderstandings.

Data protection and intellectual property considerations should be reviewed and adhered to.

As a general rule, you should not permit the use of personal data or client confidential information in any testing, templating or similar context on generative AI. Always create and use fictional data.

Lifecycle considerations for SMEs

This overview sets out considerations across the generative AI lifecycle for small and medium-sized firms (SMEs) from initial exploration, through to procurement, use and review.

This guide does not constitute legal advice.

Preliminary considerations:
  • Purpose identification: determine the primary goal or need for the tool in your practice – use cases can be helpful
  • Research and vendor diligence: investigate reputable AI tool providers; speak to others in the industry to determine their experience with the provider and consider whether they can meet your requirements
  • Stakeholder engagement: involve IT, management and legal teams in discussions to make sure the use of the tool is in alignment with the firm’s strategy and various existing policies
  • Collective use: consider whether you can work in consortium with other SMEs who may also be looking to procure, use and review AI tools in their practice. Working collectively may be useful if you have shared purpose and objectives
Data and training:
  • Data use: be very careful about the data you use to train any AI tools. Consider whether the use of anonymised data is more appropriate. If you are unsure, seek advice as disclosure of confidential information or personal data can lead to serious consequences
  • Data protection compliance: confirm the use and storage of data complies with data protection laws, including the UK GDPR, if applicable
Procurement:
  • Trials and demos: request demos or trial versions to evaluate the tool’s effectiveness and relevance to your purpose
  • Contractual terms: review the terms of service, especially regarding data rights, intellectual property, the geographic location of data and any liability clauses
  • Cost analysis: understand the total cost of the tool, including support and computational charges. It is advisable that you agree to a fixed cost to avoid any unexpected expenditure
  • Long-term planning: consider the long-term viability and support of the vendor and put in place a regular review process to establish business need and purpose, as well as an exit plan is required
Implementation and usage:
  • Training: ensure proper training sessions for staff on how to use and benefit from the tool are carried out, including effective prompts. Technical training in the use of such tools is separate from the training in the ethical use of such tools and both are equally important
  • Data input management: clearly define what data can be fed into the tool considering both legal and ethical constraints
  • Review mechanism: implement a consistent process for checking and verifying the AI’s outputs for accuracy, reliability and liability
  • Feedback loop: establish channels for users to provide feedback on the tool’s performance to ensure it continues to meet your requirements
Risk management:
  • Legal and regulatory compliance: ensure that you comply with existing laws and regulations, particularly around data privacy and cybersecurity. Note that the legal landscape is fast-changing in this area, so keep abreast of any new legal and regulatory standards that may come into force following the implementation of the tool that may be relevant to your jurisdiction
  • Cybersecurity measures: ensure robust security protocols to protect data from breaches. Understand how data is processed on the vendor side and the security measures they have in place to protect your data
  • Liability and insurance: assess the liability and insurance cover related to the use of the tool. Speak to your insurer to determine whether the usage of the tool will be covered under your existing policy
  • Business continuity: ensure that there is a plan in place in case of system failure or outage of the service
  • Ethical considerations: consider the potential ethical implications and biases of the AI’s output and establish responsible AI use guidelines that consider the public interest from the user’s perspective
Review and evolution:
  • Regular assessment: periodically assess the tool’s relevance and value addition to your practice. Ensure that it continues to meet your requirements
  • Business continuity: understand any business continuity measures you need to have in place if the tool fails or is not accessible
  • Exit strategy: consider how you can transition away from the tool if necessary. Understand whether you can transfer your data, source code, or any existing training on the tool, if required
Communication:
  • Client communication: clearly communicate to clients when and how AI tools are used in their matters, where appropriate to do so
  • Internal awareness: keep the firm’s staff informed about the tool’s capabilities, benefits, and limitations, as well as their own professional responsibilities to ensure it is being used appropriately
I want to know more

This guide is a living document and will be updated to reflect ongoing regulatory, policy and technological developments.

To provide feedback, email our data and technology law policy adviser Janis Wong

Download the guidance:

Explore our pages on AI and lawtech