
LCN Says

Wrestle with PESTLE: impact of AI on the legal field

updated on 20 February 2025

Reading time: five minutes

Artificial intelligence (AI) describes the ability of a computer system to perform tasks typically associated with human intelligence, including perception, recognition, decision-making and learning.

AI tools are predominantly classed into two categories, generative and predictive, both of which can significantly impact the legal field. Generative AI uses large amounts of input data to create new content and can be used to draft novel legal documents and advice. Predictive AI instead analyses existing documents to estimate the likelihood of their relevance to a specified context, making it suited to large-scale e-discovery and document review tasks.
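To make the predictive side more concrete, here's a minimal sketch (not a real e-discovery product) of the relevance-ranking step such tools perform, using the open-source scikit-learn library; the documents and query are invented placeholders.

```python
# Minimal sketch: ranking documents by likely relevance to a query,
# the core step behind predictive e-discovery tools.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical document snippets standing in for a review set.
documents = [
    "Share purchase agreement between Alpha Ltd and Beta plc.",
    "Minutes of the marketing team's weekly stand-up meeting.",
    "Email chain discussing warranties in the Alpha/Beta deal.",
]
query = "warranties in the share purchase agreement"

# Convert the text to TF-IDF vectors, then score each document
# by its cosine similarity to the query.
vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform(documents + [query])
scores = cosine_similarity(vectors[-1], vectors[:-1])[0]

# Print documents from most to least likely relevant.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```

Real predictive coding systems are far more sophisticated, but the principle is the same: the system never "reads" a document the way a lawyer does; it scores it against patterns learned from the input.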

The scale of AI’s impact continues to rise, with around 40% of lawyers using AI in their daily work and more than 78% of law firms adopting it in some form. AI's future potential is rising in parallel: a recent Goldman Sachs study concluded that more than 44% of tasks performed in the legal industry could be automated by AI and noted that this subset of automatable tasks accounts for upwards of 66% of the hourly billable work done by the average law firm.

Given this substantial potential, it's key for legal professionals to understand the factors that influence how AI can impact the legal field.

Political

The EU AI Act 2024 was the world’s first comprehensive regulation of AI by a major regulator. It organises AI applications and systems into three categories depending on risk, assigning more stringent legal requirements to those carrying higher risk levels. Note, however, that applications that don't reach the 'high-risk' threshold are largely left unregulated.

The UK AI Bill was proposed in 2023 and sets out to create an AI authority to enforce regulations on the use of AI, monitor economic risks from AI and introduce regulatory principles governing the development and use of AI. Given the current infancy of AI, the bill seeks to foster innovation and work alongside these developing technologies. However, the effect of the bill largely relies upon voluntary commitments from key developers rather than binding legal requirements.

Economic

With legal fees rising and legal aid funding declining at the same time, law firms' increased use of expensive technology systems can be seen as contributing to the cost crisis.

However, while the initial set-up costs can be high, the use of AI can dramatically streamline legal processes. When trained effectively, AI can complete tasks in comparatively little time, reducing the hours billed on a matter. A recent MIT study demonstrated that employees using AI-powered tools experience a 40% increase in performance (with speed and accuracy among the main indicators) compared with traditional methods, and a further case study from KPMG evidenced that companies using such AI tools could save up to 20% in overall workload and associated costs.

The use of AI can achieve more efficient and cost-effective results, especially in legal matters involving large-scale document review and redaction, and in general administrative filing and billing procedures.

Sociological 

Society's use of technology has increased exponentially since the late 20th century, with approximately 90% of the EU population using internet-based applications daily. This regular usage suggests a general appetite for effective and safe technology, and therefore for the innovation and development of new and improved software such as AI.

However, there’s a rising social trend of distrust towards AI and its output. AI algorithms and their outputs are shaped entirely by the datasets they're trained on. If the input data is tainted by errors, inaccuracies or biases, the AI's output will reflect those prejudices every time. This can exacerbate social biases, with one study demonstrating how AI regularly, but unintentionally, produces racist decisions due to tainted input data.

Technological

AI has undergone a comparatively rapid evolution over the past few decades, moving from theoretical concepts to basic algorithms (eg, satellite navigation) to sophisticated, fully autonomous systems (eg, ChatGPT). By the time the executive and legislature have had the opportunity to review, draft and implement regulations to monitor certain AI technologies, the very systems those regulations target are often already obsolete. As a result, legal regulations are often outdated and inapplicable to the AI systems most prevalent at any given time. If the legislature can’t keep pace with AI's technological development, regulations risk both failing to account for new AI technologies and unnecessarily limiting their innovation.

Legal

There are numerous legal factors to consider when evaluating AI’s impact on the legal field, particularly around client confidentiality, accuracy and accountability.

Client confidentiality concerns stem from the data initially input to train the AI. For an AI system to be effective at a given task, it needs to be trained on a wide and highly relevant dataset. However, by inputting client information to train the system, there’s a risk that confidential information could leak into outputs given to other clients.

Another key concern is that the accuracy of AI’s output depends entirely on the data used to train the system. Training on inaccurate or unclear data results in inaccurate output at scale, because the system absorbs the flawed information and replicates it in the same way across every relevant output.
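As a minimal illustration (with invented documents and labels, not any real firm's data), a toy scikit-learn classifier trained on a set in which one document type has been systematically mislabelled will repeat that mistake on every similar document it later encounters:

```python
# Minimal sketch of "garbage in, garbage out": a toy classifier trained
# on systematically mislabelled data repeats the same error at scale.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training data: every document mentioning "indemnity"
# has been wrongly marked "not relevant" by the human reviewers.
train_texts = [
    "indemnity clause in the supply contract",   # mislabelled
    "indemnity cap negotiated by the parties",   # mislabelled
    "warranty breach claim under the agreement",
    "catering invoice for the summer party",
]
train_labels = ["not relevant", "not relevant", "relevant", "not relevant"]

vectorizer = CountVectorizer()
model = MultinomialNB()
model.fit(vectorizer.fit_transform(train_texts), train_labels)

# The model faithfully repeats the reviewers' error on every new
# indemnity document it sees.
new_doc = ["draft indemnity provision for the share sale"]
print(model.predict(vectorizer.transform(new_doc)))  # -> ['not relevant']
```

The model isn't malfunctioning here; it's doing exactly what it was trained to do, which is why errors in the input can't be corrected downstream.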

Given the highly regulated nature of the legal profession, there are also issues regarding AI accountability. While the Solicitors Regulation Authority (SRA) regulates the advice generated by solicitors and holds them duly accountable, there’s no precedent for how society would regulate and monitor output generated by AI systems. This raises a range of questions. Should the initial legal data input be regulated? Should AI programmers be legally trained and/or bear accountability responsibilities? Should the final output adopted by firms be regulated?

For more on how law firms are adopting AI, check out our guide to the legal profession 2024/25.

Environmental

The computational resources required to train and operate AI systems result in a large carbon footprint, with these technologies already accounting for approximately 1.8% to 3.9% of annual global greenhouse gas emissions. Even compared with conventional search engines (which themselves require significant amounts of energy), a single ChatGPT query uses around 10 times as much electricity as a standard Google search.
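For context, the International Energy Agency has estimated that an average Google search uses around 0.3 watt-hours of electricity, while a ChatGPT request uses around 2.9 watt-hours, which is broadly where the tenfold figure comes from.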

Further, even the physical creation of these systems, via the mining and production of specialised metals for AI hardware, raises significant pollution concerns.

While these concerns are undeniable, the environmental impact of AI could be largely alleviated by a large-scale switch from fossil fuels to renewable energy sources. So, while AI is energy intensive, a poor environmental impact isn't inherent to the technology.

The verdict

In conclusion, AI has the potential to transform the legal field. The verdict centres on exactly how the technology is implemented, including how heavily it’s legislated, how it’s priced, how accurate the original input is, how confidentiality and accountability mechanisms are maintained and how the systems are powered.

Check out this Oracle for advice on using AI in your law firm applications.

Mary-Kate Hubbard is a paralegal.