
Commercial Question

Generative AI use and client confidentiality

updated on 08 April 2025

Question

By using AI, am I breaching client confidentiality?

Answer

Generative AI has been perhaps one of the most ground-breaking developments of our generation. It’s baffling to think that just 80 years after the creation of ENIAC (the Electronic Numerical Integrator and Computer, built by John Mauchly and J. Presper Eckert and broadly accepted as a landmark in the birth of computer science), we now live in a world where computer science has become so advanced that algorithms are capable of creating their own content, processing huge quantities of data in a fraction of the time it would take their biological creators.

While the benefits of AI's application to the legal sector are plain to see, is wide-scale use of these algorithms truly acting in a client's best interests? For legal practitioners, the duty to act in a client's best interests is an imprinted professional and ethical obligation. Indeed, one of the Solicitors Regulation Authority’s (SRA) Principles, which are regarded as the fundamental tenets of ethical behaviour in legal practice, is to act in the best interests of each client. While there may be some cost-saving benefits to clients from using generative AI, do we inadvertently risk undermining client trust?

Client confidentiality

Sitting neatly alongside the principle of acting in a client's best interests is perhaps the most sacrosanct of a solicitor's obligations – confidentiality. Rule 6.3 of the SRA Code of Conduct for Solicitors prescribes that "you keep the affairs of current and former clients confidential unless disclosure is required or permitted by law or the client consents". The duty of confidentiality is further underpinned by common law. In fact, what is often cited as the first regulation of lawyers' conduct was enshrined in 1275 in the Statute of Westminster I, which imposed an obligation on serjeants (early barristers) to avoid conflicts of interest.

It's been widely acknowledged by the courts that for information to be classed as ‘confidential’ it must:

  • have the necessary quality of confidence;
  • be imparted in a “situation imposing an obligation of confidence”; and
  • be used in an unauthorised way to the detriment of the owner.

Have the necessary quality of confidence

This principle was distilled by Lord Greene MR in the Court of Appeal in Saltman Engineering Co Ltd and Others v Campbell Engineering Co Ltd [1963]. Lord Greene stated: “The information, to be confidential, must, I apprehend, apart from contract, have the necessary quality of confidence about it, namely, it must not be something which is public property or public knowledge.” While on the face of it this seems glaringly obvious, it’s perhaps more nuanced than it appears. At what point does the disclosure of a company's name breach confidentiality? Many company names are in the public domain under the incorporation requirements imposed by the Companies Act 2006. However, if a name is disclosed in the context of a matter, it becomes an issue of confidentiality.

Be imparted in a “situation imposing an obligation of confidence”

While this principle was first conceived in Saltman v Campbell, it was further explored by Megarry J in the High Court in Coco v AN Clark (Engineers) Ltd. Megarry J caveated his interpretation on the basis that legal precedent offered no definitive test to answer this question. Instead, he relied on principles of logic, considering what a reasonable person would realise. Some of you may note that the basis of the test aligns closely with the standard to which a tortfeasor is held in negligence.

Megarry J stated: "It seems to me that if the circumstances are such that any reasonable man standing in the shoes of the recipient of the information would have realised that upon reasonable grounds the information was being given to him in confidence, then this should suffice to impose upon him the equitable obligation of confidence." His comments cut to the ethical bone of the doctrine: any principled individual should be able to plainly identify when information has been shared with them in confidence.

Be used in an unauthorised way to the detriment of the owner

The final element of the test goes to how the doctrine of confidence is breached. Implied in the sharing of confidential information is an attached ‘authorised purpose’. For example, a divorce lawyer may receive confidential financial information for the purpose of advising on the merits of their client's case or in anticipation of matrimonial proceedings. In legal work, the authorised purpose is to further our clients' instructions with a view to acting in their best interests. In that respect the principle and the rule walk hand in hand, and you should always be mindful of whether you’re using a client's confidential information for the purpose for which they’ve shared it.

The risk posed by generative AI

With a heavy workload, competing deadlines and mounting client expectations, solicitors look for every opportunity to maximise the efficiency of each vital minute. When faced with writer's block or a looming drafting deadline there can be a real temptation to ask for help from AI. You may consider that you’re acting in the client's best interests by asking AI to compose your letter or to draft your settlement agreement, as surely it means that you’ll incur less time on the file and therefore provide a much more cost-effective service to your client. While the motive is pure, the act may be rash.

Much will depend on the nature of the task you’re asking AI to undertake. General research tasks on points of law will usually be low risk but, in activities where you’re having to supply specific details for the composition of a letter or an agreement, you risk breaching the sacred rule of confidentiality.

What happens to data input to generative AI?

Public generative AI systems typically retain user inputs. Put plainly, whenever you ask a public generative AI tool a question or ask it to complete a task, it may save the information you put into it, using it for training and potentially to formulate responses to others' queries. The obvious question, then, is: if you input confidential information into a public AI system, are you breaching confidentiality? In light of Saltman v Campbell, arguably yes. Referring to the case, confidential information must "not be something which is public property or public knowledge". By sharing client information with a public generative AI tool, you’re effectively making non-public information public, albeit in an indirect manner. The act, by its very nature, is a breach of confidence.

Indeed, the Law Society guidance, titled Generative AI: the essentials and published on 7 August 2024, advises practitioners, under Section 3.3, that it’s "generally advisable that you do not feed confidential information into generative AI tools, especially if you lack direct control and oversight over the tool's development and deployment". The guidance goes on to refer specifically to free online generative AI, stating that "where you have no operational relationship with the vendor other than use, do not put any confidential data into the tool".

Common principles for the safe use of AI in the legal industry

It should therefore be clear that, on the face of it, asking free generative AI to rework a letter or statement of case is inadvisable and a breach of your ethical obligations as a practising solicitor. However, that shouldn’t stop you from reaping some of the benefits of generative AI.

Generally speaking, if you’re conducting legal research, or asking AI to summarise a legal point or a judgment, then you aren’t breaching your ethical obligations. Likewise, if your firm is working with a vendor to provide a ringfenced system designed to protect the integrity of the data it processes, then exploiting the efficiencies afforded by AI is certainly less risky, though it should still be approached with forethought.

To help you navigate an increasingly complex digital landscape, I offer the checklist prepared by the Law Society in its guidance on the use of generative AI:

  • Define the purpose and use cases of the generative AI tool.
  • Outline the desired outcome of using the generative AI tool.
  • Follow professional obligations under the SRA Code of Conduct, SRA Standards and Regulations and SRA Principles.
  • Adhere to wider policies (including your firm’s policies) related to IT, AI, confidentiality and data governance.
  • Review the generative AI vendor's data management, security and standards.
  • Establish rights over generative AI prompts, training data and outputs.
  • Establish whether the generative AI tool is a closed system within your firm's boundaries or also operates as a training module for third parties.
  • Discuss expectations regarding the use of generative AI tools for the delivery of legal services between you and your client.
  • Consider what input data you are likely to use and whether it’s appropriate to put into the generative AI tool.
  • Identify and manage the risks related to confidentiality, intellectual property, data protection, cybersecurity and ethics.
  • Establish the liability and insurance coverage related to generative AI use and the use of outputs in your practice.
  • Document inputs, outputs and any errors of the generative AI tool if these aren’t automatically collected and stored.
  • Review generative AI outputs for accuracy and factual correctness, including mitigation of biases and fact-checking.

In conclusion, while AI will likely at some point form part of the manner in which we manage our clients' matters, you should tread with caution so as not to inadvertently erode your client's confidence.

Laurence Platt is a graduate solicitor apprentice at Michelmores LLP.