
Commercial Question

AI in the legal landscape

updated on 21 October 2024

Question

What role does AI play in facilitating corporate fraud?

Answer

The use of AI in the legal landscape has developed significantly within the past five years and is increasingly being incorporated into the everyday work of lawyers. When it comes to administrative tasks such as document review, contract analysis and due diligence, lawyers and clients alike have come to value its efficiency and cost-saving capabilities.

However, AI is a dual-use technology. As we discover more ways of utilising it to make our lives easier, bad actors devise new ways to exploit the same technological advances to facilitate corporate crime.

Risks posed by electronic signatures

One example of AI being used for fraudulent purposes is FraudGPT, a large language model (LLM) marketed exclusively as a tool for crime. Users can prompt the model to write malicious code, create undetectable malware or draft highly personalised and convincing phishing emails, eliminating the tell-tale signs we've become alert to, such as poor grammar or clumsy sentence construction.

One threat posed by these malicious AI models is the manipulation of the metadata attached to electronic signatures. Although the use of electronic signatures has increased significantly since the Law Commission endorsed them in September 2019, the threat of malicious AI leaves them vulnerable to attack.

Electronic signatures are verified through their metadata, which records details such as the relevant IP address and the identity of the signatory. This data ensures the signature can be correctly interpreted and trusted; however, bad actors may attempt to alter it for fraudulent purposes using generative AI tools. For example, routine processing steps such as consistency checks and formatting clean-up could be exploited to manipulate metadata, leaving it unreadable or incorrect. Being unable to verify electronic signatures could have significant ramifications for both litigators and the wider legal landscape, especially in disputes where the validity of a signature is contested.
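To illustrate why metadata integrity matters, here's a minimal sketch in Python (purely illustrative, not modelled on any particular e-signature platform; the field names are hypothetical) showing how a signing record can be sealed with a cryptographic hash so that any later alteration becomes detectable:

```python
import hashlib
import json

def seal_metadata(metadata: dict) -> str:
    """Compute a tamper-evident digest over a signing record.
    Serialising with sorted keys makes the digest deterministic."""
    canonical = json.dumps(metadata, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical signing record of the kind described above.
record = {
    "signatory": "A. Example",
    "ip_address": "203.0.113.10",
    "timestamp": "2024-10-21T09:30:00Z",
}
digest_at_signing = seal_metadata(record)

# A bad actor later edits the record...
record["ip_address"] = "198.51.100.7"

# ...and the recomputed digest no longer matches the sealed one.
assert seal_metadata(record) != digest_at_signing
print("Tampering detected: metadata digest mismatch")
```

In practice, e-signature platforms rely on public-key infrastructure and certified audit trails rather than a bare hash, but the principle is the same: if the record and its seal no longer agree, the signature can't be trusted.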

Is blockchain the solution?

Blockchain technology offers a potential solution to metadata and IP address tampering. A blockchain operates as a 'decentralised ledger': each transaction is verified by multiple computer nodes, under a consensus mechanism such as proof of work, before being recorded permanently. Because no single entity controls the ledger, it's extremely difficult for unauthorised changes to be made, and once added, transactions can't be altered or deleted, which ensures transparency and keeps the information verifiable over time. Although theoretically possible, it's unlikely that AI in its current form could gain control of the majority of a network's computational power in order to alter the blockchain record, and this resilience may encourage the use of smart contracts executed and recorded on the blockchain.
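The tamper-evidence described above comes from each block embedding a hash of the block before it. The simplified Python sketch below (a toy illustration that omits consensus and networking entirely, not a production blockchain) shows why editing one historical record invalidates every block that follows:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def build_chain(records: list) -> list:
    """Link records so that each block commits to the one before it."""
    chain, prev = [], GENESIS
    for record in records:
        block = {"record": record, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edit to history breaks the hashes downstream."""
    prev = GENESIS
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["signature A recorded", "signature B recorded"])
assert verify_chain(chain)

chain[0]["record"] = "signature A (forged)"  # tamper with history
assert not verify_chain(chain)  # the alteration is immediately detectable
```

In a real network this check is performed independently by every node, which is why rewriting the record would require controlling the majority of the network's computational power.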

Rise of deepfake technology

Another, perhaps more chilling, misuse of AI is through deepfakes and voice cloning, tactics increasingly seen in authorised push-payment fraud. Instead of seeing a familiar name in an email, we may now hear a familiar voice or see a familiar face, so we can no longer rely on the usual identifiers of fraudulent scams, which makes them harder to detect. In addition, the speed at which AI is developing and the low level of expertise required to use it have made this type of technology significantly more accessible. It's therefore essential that individuals and businesses educate themselves on the capabilities of AI and remain vigilant when acting on payment instructions.

What’s the impact on legal processes?

To counter deepfake scams, corporations will now collect more biometric identification, which is significantly more difficult for AI to mimic, as part of revised know-your-customer (KYC) checks; however, this may only be a temporary solution to a long-term and constantly evolving problem. Given the rapid development of AI, we must consider the possibility of it eventually being able to recreate unique biometric data such as fingerprints and facial features. Alternatively, one approach to mitigating the risks posed by the malicious use of AI is to rely less on technology for certain tasks: more traditional methods of identity authentication, such as wet ink signatures, aren't vulnerable in the same way as their digital counterparts.

An additional risk identified by the Guidance for Judicial Office Holders is that of AI tools being used to produce fake material, including text, images and videos, which is then disclosed as evidence. Consequently, we may see a revival of oral evidence being given in person to avoid the challenges posed by generative AI and deepfake technology. The emergence of a new category of 'expert witness' is also anticipated, tasked solely with distinguishing legitimate documents from fabricated ones. An example of why this may be needed is the recent case of Crypto Open Patent Alliance v Wright [2024], in which it was found that the defendant had manufactured a significant number of documents, some of them using ChatGPT, to support his false assertion that he was the creator of Bitcoin.

It's also interesting to consider the potential impact on CPR 32.19, under which a notice to prove the authenticity of a disclosed document must be served by the later of the latest date for serving witness statements or seven days after disclosure of the document. Meeting that deadline would be particularly challenging if a significant number of disclosed documents are believed to be AI-generated or enhanced.

An evolving story

AI creates enormous and exciting opportunities, the scope of which we're only just beginning to understand, and it can itself be used very effectively to help detect and reduce fraud risk. However, it's important that opportunity and risk are considered hand in hand, and that anti-fraud measures are prioritised alongside evolving technology. With the EU AI Act having come into force in August 2024, it'll be interesting to see how the law in the UK develops to address the malicious use of AI to perpetrate fraud.

Luke Harrison is a solicitor apprentice in disputes and investigations and Meshah Kuevi is a solicitor apprentice in technology, IP and information at Taylor Wessing