Mitigating AI risks and preventing deepfake attacks

In collaboration with Beale & Co, Paragon’s Claire Revell and Harriet Sensier discuss how law firms can navigate AI risks – and how deepfake content could be used to facilitate data leaks and fraud.

There is vast opportunity for new AI technologies to be used within the legal sector – from supporting risk identification to drafting contracts.

But while AI has the ability to positively impact the legal profession, its adoption introduces risks such as:

  • The potential for infringements of copyrights, trademarks, patents and related rights if tools are being trained on protected material without permission
  • The misuse or disclosure of confidential, personal or sensitive information which can result in breaches of legislation
  • The risk of hacking, data breaches and malicious cyber activities such as deepfakes
  • The risk that generative AI will produce misleading, inaccurate or false outputs – including ‘hallucinated’ outputs, where AI produces highly plausible case law or statute law that is fabricated
  • The risk that AI models reflect social bias, producing output that is discriminatory or unfair

It is important to remember that solicitors using AI remain responsible for the work produced using those tools, and must ensure any information or documents are accurate and from genuine and verifiable sources.

To comply with the SRA Principles and Code of Conduct, practitioners should supervise AI use for quality control, as improper reliance on technology risks violating professional obligations to uphold the rule of law, maintain public trust, and act with integrity.

How can firms manage the risks around AI use?

Firms should consider how they can mitigate the risks surrounding AI usage. They should:

  • Choose AI systems carefully: ensure that they meet the firm’s needs and familiarise teams with the terms and conditions of use. Firms should be clear about when errors are the responsibility of the provider and when they fall to the user.
  • Identify the risks of using AI platforms: consider risks to confidentiality, intellectual property, data protection, cyber security and ethics, and establish that insurance policies in place cover the intended use of AI.
  • Implement a clear AI policy: ensure this is aligned with existing laws and regulations on data protection, copyright, human rights and equality.
  • Train and supervise staff on the use of AI systems: make it clear what is acceptable under your AI and IT policies. Firms should ensure that staff who use AI understand how it operates, use approved tools for their intended purpose, and check that the data being input into these platforms is appropriate.
  • Have a robust process in place for reviewing AI outputs for accuracy and fact checking.
  • Ensure that staff are aware of and adhering to professional obligations in their use of AI.
  • Be transparent with clients about AI: this should include details of when an AI tool is being used in their cases, which tool is being used, what the firm is asking it to do, and how it operates.
  • Be aware that AI has the potential to help cyber criminals carry out illegitimate activities: AI can be used to create highly realistic ‘deepfake’ images and videos which, combined with AI-assisted voice imitation, can make phishing scams difficult to recognise.
Deepfakes

Deepfakes involve the use of AI to create convincing forgeries of images, videos and audio recordings. These can be indistinguishable from genuine content, making it difficult to tell whether what you are seeing or hearing is real.

Cyber criminals may create deepfake audio or video messages to deceive employees into divulging sensitive information or authorising fraudulent financial transactions.

Criminals may transform existing content by swapping one person for another, or create entirely original content in which a person appears to say or do something that they did not.

Key warning signs of deepfakes include:

  • Audio issues: odd distortion in background noise or voice quality
  • Sync problems: disconnection or delay between speech and mouth movement
  • Visual anomalies: pixelation or lack of visual clarity, or an absence of blinking or unusually patterned blinking

In the LexisNexis Cybersecurity and AI 2025 Report, 24% of legal professionals cited AI-generated threats such as deepfakes and synthetic email scams as their second biggest concern after phishing.

Why are solicitors at risk?

Although technical controls may be in place to prevent cyber-attacks, deepfakes target human trust. Deepfake technology is also evolving rapidly, making it essential for businesses to continuously monitor and improve their detection capabilities.

Law firms are particularly vulnerable to attacks as they often manage substantial sums of client money. Advances in deepfake technology are a particular threat in conveyancing and property transactions.

Criminals can convincingly impersonate sellers or agents using deepfake technology, resulting in solicitors unwittingly facilitating fraudulent transactions. Additionally, the nature of conveyancing transactions provides cyber criminals with both the method for committing fraud and the means to launder stolen funds effectively.

How can firms manage the risk of deepfake crime?

To manage the threat of deepfakes, law firms should implement a robust, multi-faceted security strategy:

  • Raise staff awareness: train all staff on potential deepfake threats and their warning signs
  • Strengthen authentication: implement measures like multi-factor authentication (MFA) and conditional access to sensitive documents
  • Adopt defence-in-depth: employ multiple layers of protection across IT systems and processes
  • Establish breach protocols: ensure additional safeguards and alerting mechanisms are in place for when a control is bypassed
  • Audit security regularly: conduct frequent assessments of security measures to confirm their effectiveness

This article is written by Paragon as a hosted feature on the Law Society website. Views expressed are Paragon’s.

This article is provided for general information only. It is not intended to amount to advice on which you should rely. You should obtain professional or specialist advice before taking, or refraining from, any action on the basis of the content in this article. See our website legal notice here.

Paragon is a partner of the Law Society. This article is written in collaboration with Beale & Co, who offer support to Paragon’s LawSelect clients when claims or circumstances arise.

If you would like to discuss this article or how Paragon can help with professional indemnity now or in the future, contact their team.