Deepfakes and the legal sector

Gallagher explores what deepfakes are, how cybercriminals are deploying them, why legal practices are particularly vulnerable – and how a cyber insurance policy could apply in the event of an attack.

Artificial intelligence (AI) continues to reshape the cyber landscape, and one of the most disruptive threats emerging in professional services is the rise of deepfakes.

For law firms, which deal with high-value transactions, sensitive data and time-critical decisions on a daily basis, deepfakes present a material and rapidly evolving risk.

What are deepfakes?

Deepfakes are AI-generated images, audio clips or videos that appear genuine – but are in fact entirely fabricated.

Using complex machine-learning models, cybercriminals can now convincingly clone a person’s voice, replicate their appearance and even mimic their behaviour with an accuracy that would have been unthinkable only a few years ago.

This technology is increasingly being weaponised in cyber attacks, eroding traditional trust signals and making it significantly harder for organisations to distinguish between authentic communications and fraudulent ones.

How deepfakes are used in cyber attacks

Deepfakes are gaining traction as a tool for cybercriminals. They are typically deployed for two reasons:

1. To facilitate fraudulent transfer of funds

Deepfakes have been used to impersonate senior executives, clients or even public officials to trick people into making payments to fraudulent third parties.

A widely reported example involved a finance worker transferring $25m after attending a video call with someone who appeared to be the company’s CFO, only to later discover the individual was an AI-generated fake.

2. Social engineering to gain system access

Deepfakes are also being used to deceive service providers and internal support teams to ultimately gain access to computer systems.

Wiz, a cloud security provider, was targeted by threat actors who used deepfaked audio of its CEO in an attempt to gain unauthorised access to the company's systems.

Incidents like this demonstrate how deepfakes bypass typical red flags in fraud prevention, because the ‘person’ making the request looks and sounds legitimate.

Why law firms should be concerned

Law firms are uniquely exposed to this class of threat due to the nature of their operations and the high-value information they handle.

Key risk factors include:

1. Frequent handling of client funds

Across conveyancing, M&A, litigation and wider commercial practice areas, solicitors routinely authorise transfers of significant sums of money.

A convincing deepfake request from a client, buyer, lender or counterparty could easily lead to a mistaken payment.

2. Holding large amounts of personally identifiable information

Law firms store sensitive personal and corporate information, making them attractive targets for threat actors, who may deploy ransomware and demand payment in exchange for not releasing the data on the dark web.

3. High-pressure working environments

Transactional and contentious teams often work under intense time pressure.

Fee-earners may feel compelled to act quickly on what appears to be an urgent instruction, especially when the deepfake presents as a senior colleague, officer of the court or key client contact.

How could a cyber policy apply?

It is important to note that nothing in this article should be taken as an affirmation or guarantee of coverage.

Every claim turns on its own facts, the policy wording (including relevant limits) and the circumstances of the loss. Depending on how the deepfake attack manifests, several sections of a cyber policy may be relevant.

The following coverage points are taken from Gallagher’s exclusive Pen Cyber Wording, available to Law Society members.

Section C – cyber crime (usually an optional extension)

Given the nature of deepfake-enabled fraud, ‘Section C – cyber crime’ is often the most relevant area of coverage, particularly for law firms that regularly transfer funds.

The main insuring clauses that may be relevant are summarised below, although this is not exhaustive.

  • Crime loss – direct financial loss resulting from the theft of funds through a number of means, including dishonest manipulation of an insured’s computer systems and fraudulent electronic messages
  • Phishing loss – the inability to collect funds owed to you by a third party due to a threat actor impersonating you via electronic communications
  • Corporate identity theft loss – financial loss as a result of fraudulent use of the insured’s electronic identity

Section A – cyber liability, breach response and extortion

If deepfakes are used not for financial manipulation but to gain system access, for example by impersonating staff to an IT helpdesk, section A may apply.

For instance, breach response costs could apply where a deepfake-enabled intrusion results in a data breach, triggering notification obligations, forensic investigation and legal support.

Exclusions to keep in mind

Although losses arising from AI are not explicitly excluded, firms should be aware of relevant exclusions that may affect cover, such as failure to verify payments – which may apply if appropriate payment verification protocols were not followed.

It is also important to follow the controls detailed in your proposal form governing the transfer of funds, such as the number of staff who must authorise a transfer before it can proceed.

Given the rise of sophisticated social engineering techniques, clear internal procedures and evidence that they are followed are increasingly important.

Practical measures to reduce deepfake risk

A robust cyber policy is only one part of an overall defensive strategy. Law firms should also consider the following controls.

1. Enhanced employee training

Traditional phishing training is no longer sufficient.

At Gallagher, for example, we have rolled out dedicated deepfake training to help employees recognise AI-generated voices and videos.

This should extend to third parties, including outsourced service desk providers, as attackers may attempt to impersonate staff.

2. Strengthened verification procedures

Employees handling payment authorisations or system access requests should be thoroughly familiar with procedures – particularly those detailed in the proposal form for your cyber policy.

Robust, documented verification steps and increased staff awareness reduce the risk of falling victim to manipulated instructions.

3. Foster a culture of professional curiosity

Staff should feel comfortable questioning unexpected or unusual requests, especially urgent requests from individuals they do not normally deal with.

Firms should actively reward, not punish, employees who take time to verify instructions. This shifts the culture from ‘speed above all’ to ‘secure by default’.

Protecting your firm

Deepfakes represent a new frontier in social engineering, eroding long-trusted human intuition and placing new pressure on verification processes.

For law firms, the combination of high-value transactions and sensitive data creates a particularly fertile environment for attacks.

A well-structured cyber policy, including the optional cyber crime section, can offer crucial protection – but it must operate alongside robust internal controls, informed employees and an organisational culture that encourages careful verification.

Law firms that prepare now, through training, procedural reinforcement and insurance review, will be better positioned to withstand this emerging AI-driven threat landscape.

This article is written by Arthur J. Gallagher (UK) Limited (Gallagher) as a hosted feature on the Law Society website. Views expressed are Gallagher’s.

Partner information

Gallagher offers cyber insurance for the legal sector – helping you protect your law firm from cyber attacks.

About Gallagher

Gallagher is a partner of the Law Society.

If you wish to find out more about cyber insurance, please visit Gallagher’s Law Society page.

The information provided in this article is for general informational purposes only and does not constitute legal, financial, or professional advice.

While Gallagher has made every effort to ensure the accuracy and reliability of the information presented, readers are encouraged to consult with qualified legal, insurance, or risk management professionals to obtain advice tailored to their individual needs and circumstances.

Pen Underwriting is another company in the Arthur J. Gallagher global group which acts on behalf of one or more insurers.