updated on 07 December 2023
Reading time: 12 minutes
This article uses AI to help write one of the sections – read on to find out whether it was successful. Can you spot the difference?
This LCN Says is part of LawCareers.Net’s ‘Wrestle with PESTLE (WWP)’ series, which looks at various business case studies using the PESTLE technique.
Prefer to listen? You can listen to our brand new COMMERCIAL CONNECT podcast series on Spotify, Soundcloud, Apple Podcasts or any of your favourite podcast platforms, or on the LawCareers.Net Podcast hub.
PESTLE stands for:
- Political
- Economic
- Social
- Technological
- Legal
- Environmental
This technique involves using these six external factors to analyse their impact on a business and/or industry.
On 1 and 2 November 2023, world and market leaders gathered at the historic Bletchley Park, home of the code-breaking heroes of the Second World War, to discuss a new existential threat. The unlikely band – including the likes of Rishi Sunak, Elon Musk, Kamala Harris, Nick Clegg, Wu Zhaohui and King Charles III – was linked by a mutual interest in AI and the impact it could have on humanity.
‘AI’ is an indeterminate term and generally refers to a machine capable of performing a task that typically requires human intelligence. However, with the arrival of machine learning (ML), natural language processing (NLP) and strides forward in robotics, the capabilities of such machines are increasing rapidly. Most notably, the technology has reached the public forum through the availability of large language models (LLMs), in the form of OpenAI’s ChatGPT and Google’s Bard. These astounding pieces of software can produce original writing, art and research on an endless range of topics, in coherent and natural language.
Amid a range of opinions, from the techno-optimist to the sceptical, the summit resulted in 28 countries – from across Europe, the Americas, the Middle East, Africa and Asia – signing the Bletchley Declaration. The declaration solidified the intent of the states involved to take collective action to ensure compliance with “human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection” and safeguards against “the capability to manipulate content or generate deceptive content”.
The potential political impacts of AI and the summit are both democratic and geopolitical.
The capacity of AI to have a direct impact on the political process was a topic of central discussion at the summit. AI can produce deepfakes (videos digitally altered so that a person realistically appears to be someone else). Deepfakes have been weaponised in the past to disparage political figures, and have most prominently been used to target groups already marginalised in the political process, such as women and people of colour.

AI also rapidly increases the ability to astroturf, both virtually and physically. ‘Astroturfing’ is a technique by which manufactured support for a policy or group is used to misrepresent a high level of grassroots support. This can take place through letters to local political representatives or posts on social media, and is made all the easier by AI, which can generate accounts, posts and discussion easily and at little cost. The effect of this is incalculable: it has the potential to alter policy direction or swing voters through their misapprehension of a measure or group’s popularity. In April 2023, the Republican National Committee (RNC) released an AI-generated anti-Biden attack video immediately following the launch of his re-election campaign. Concerningly, due to the high level of protection afforded to political speech, the RNC was under no duty to divulge the source of the video. There’s a plausible future in which voters are consuming and engaging with propaganda that was generated by, and is being advocated for by, AI.
The emergence of AI has also presented a new battleground on which the significant geopolitical actors can compete for dominance. Governments are jostling to lead the way in regulation (or a lack thereof) in order to attract the major tech entities to their countries. Rishi Sunak has frequently proclaimed that he wants Britain to be at the forefront of the AI revolution, and Britain’s departure from the EU gives him room for the unilateral movement required to fashion the UK into a global hub for AI usage and development. Corporations gravitate to wherever they have the most freedom, and this places governments in a sensitive position: they must mediate between creating a climate that facilitates technological innovation and the desires of ordinary citizens. Movement on AI has also clearly reignited simmering tensions between the US and China; the Chinese presence at the summit was likely one reason for Biden’s absence. Kamala Harris’ decision to give a speech on the subject in London, overlapping with the summit and drawing some of the personnel away, as well as the US’ use of the same week to announce its own AI Safety Institute, were arguably veiled attempts to hijack the summit’s regulatory impetus.
Besides this, there was a weighty silence throughout the summit on whether the populace of member states would have any say on the use of AI. This is transformative technology, about which there are shared and substantiated fears. Professor Noortje Marres, among others, has therefore objected to the summit on the grounds that it was profoundly undemocratic, omitting any mention of whether citizens would have a meaningful voice on one of the biggest technological shifts of our lifetimes. The omission carries the undertone that they will not: ‘big picture’ AI policy will be left to the political and technological elite.
AI, and its regulation, will have significant impacts on productivity and equality.
Some estimates suggest that AI can increase productivity by up to 40%, enabling workers to ‘shortcut’ the mundane daily tasks required of them and allowing more efficient management of human intelligence. A McKinsey Global Institute report has estimated that AI could add $4.4 trillion to the global economy and create a new ‘virtual workforce’ that requires zero downtime and is capable of both learning and problem solving. This is possible through a virtuous cycle: an ever-increasing volume of data generates ever more accurate AI analysis, in a fraction of the time and at a fraction of the cost that such an outcome previously required.
However, there’s a strong chance that this effect will be ‘lopsided’: adopting AI requires a high pre-existing level of technological development that only certain nations have reached. The imbalance will likely be exacerbated by the stronger incentive to substitute AI for labour in those countries, given the higher cost and non-manual nature of that labour. A similar effect is likely with regard to wages and employment. A forecast by the think tank Bruegel warns that as many as 54% of jobs in the EU are at risk of computerisation within 20 years. Low-skilled non-manual jobs seem to be at the highest risk of replacement, closely followed by low-skilled manual work such as long-distance driving. This would drive demand for high-skilled workers, both to maximise the benefit drawn from AI and to oversee the growth that results from the process. That might drive their wages up, while pushing lower-skilled workers’ wages downwards, weakening unions or even generating unemployment. MIT economist Simon Johnson illustrates the potential effect of AI on employment and wages by comparing it to innovations of the past: the advent of the railway in Britain, which enabled faster transport of food and labour, was a democratic innovation, with its benefits felt equally throughout society. In contrast, modern innovations such as the automated self-checkout provide no benefit to low-income citizens; groceries are no cheaper and there’s a resultant net decrease in employment. It’s easy to see that the AI revolution could follow either trajectory.
AI, even in its current form, has the potential to wholly democratise healthcare and education, as well as improve the efficiency of both.
In healthcare, AI could revolutionise the personalisation and responsiveness of the industry. AI’s pattern-recognition capabilities – the ability to analyse a whole pool of data at once and draw conclusions in the blink of an eye – give providers valuable insights into, and information on, people’s health. This effect would be compounded by the everyday presence of fitness wearables, such as Apple Watches or Fitbits, which collect massive amounts of biological data. An infrastructure of fitness wearables with AI tools incorporated could help identify abnormalities early and prevent subsequent illness.
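To make that concrete, here’s a minimal, illustrative sketch of the kind of baseline-deviation check such a system might run. The data, threshold and method are simplified assumptions on my part; real wearable products use far richer, clinically validated models:

```python
from statistics import mean, stdev

def is_abnormal(baseline: list[float], new_reading: float, z: float = 3.0) -> bool:
    """Flag a reading that deviates sharply from the wearer's own baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and abs(new_reading - mu) / sigma > z

# Illustrative resting heart rates over a week, then a sudden spike.
week = [62, 61, 63, 60, 62, 61, 63]
print(is_abnormal(week, new_reading=94))  # True: worth prompting an early check-up
```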
AI also has the potential to transform education, particularly for neurodivergent individuals, who typically struggle with orthodox learning techniques and socialisation. One use case, deriving from Autism Glass (a Stanford research project), involved using AI to automate the recognition of emotions and social cues, massively expediting the typically drawn-out process by which children with autism learn to interact with their social environment. Therapists in this sector are perpetually over capacity, so a viable AI alternative could substantially improve affected children’s ability to integrate socially and excel academically. On top of this, for all students, AI can be used to automate marking and identify learning gaps, enabling more targeted education.
AI's ability to automate complex tasks has brought about a paradigm shift in industries, fostering efficiency and unlocking new potentials. ML, an integral facet of AI, empowers systems to learn, adapt and innovate autonomously. From predictive analytics in healthcare to algorithmic trading in finance, AI is revolutionising how we approach data, uncover patterns and derive meaningful insights.
Looking ahead, the future applications of AI promise even greater technological marvels. The convergence of AI with the Internet of Things (IoT) is creating smart ecosystems where devices communicate and collaborate seamlessly. Smart cities, driven by AI technologies, will optimise resource management and enhance the quality of urban living through intelligent infrastructure.
Natural Language Processing and computer vision are reshaping how we interact with technology. Voice-activated AI assistants are becoming our virtual companions and language translation services are bridging global communication gaps. Computer vision, with applications from augmented reality to autonomous vehicles, is propelling us towards a world where machines not only understand, but also interpret and respond to our visual surroundings.
Perhaps the biggest showcase of AI’s technological capability is that its output can be indistinguishable from human speech and writing, an ability that was previously thought impossible to truly duplicate. Illustrating this, the three preceding paragraphs of this section were generated with AI. I used one prompt and a couple of minor manual edits on ChatGPT, meaning this section was at least five times faster to write than the others.
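For the curious, a one-prompt workflow like the one described above can be reproduced in a few lines of code using OpenAI’s Python SDK. The model name and prompt below are illustrative assumptions, not the ones actually used for this article:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Illustrative prompt - not the actual prompt used for this section.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Write three short paragraphs on the technological impact of AI, "
            "in the style of a commercial-awareness article for law students."
        ),
    }],
)
print(response.choices[0].message.content)
```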
AI and its regulation will impact both the legal industry and the law itself.
At present, due to the fast-moving nature of technology and the comparatively slow-moving nature of regulation and legislation, providers of AI services submit their products to regulators on a voluntary basis only, prior to releasing them. This needs to change: issues with transparency, bias, accountability and privacy demand regulatory measures and enforcement.
In the realm of intellectual property, the works of generative AI are derivative of the human works it’s been trained on. In essence, if you ask it for a piece of art, it’ll pirate parts and ideas from works produced by humans. It’s difficult to interrogate this process and identify which works were drawn upon. The issue is compounded by the archaic nature of copyright law, which requires the determination of who actually came up with the idea for a piece of art – and this ‘who’ must be a natural, human person.

This circles back to the general issue of liability. The law demands an agent to hold accountable for a given action: if somebody is unjustly harmed, someone must be identified as the responsible party. AI muddies these waters, as highlighted by the recent incident involving GM’s ‘Cruise’ robotaxi. A pedestrian was struck and dragged by a Cruise vehicle after first being hit by a human-driven car in a hit-and-run. If the victim seeks redress for the latter injury, who can they pursue?
AI is already assuming a major role in the legal industry; services such as Ironclad, Luminance and Casetext are being used across all strata of firms as cost-effective ways to sift through documents and fast-track legal research. Firms that effectively leverage AI tools are offering services at lower cost, with higher efficiency and with better odds of favourable outcomes in litigation. The influence of AI is so pervasive that the judge in Cass v Ontario (2018) reduced the costs recoverable for legal research on the basis that counsel hadn’t used AI for tasks of which it was capable. However, an outstanding question remains: if AI develops in such a way that it can replace paralegals and junior lawyers, how will we develop competent senior professionals, given that these junior roles provide the formative experience on which senior expertise is built?
The jury is still out on AI’s net environmental impact, and it’s likely that regulation will have a major role to play in shaping it.
One study estimates that training GPT-3 may have consumed 700,000 litres of freshwater. The water used to prevent data centres from overheating is usually evaporated, meaning it can’t be reused and is therefore entirely lost. Similar statistics surround AI training in general; researchers at the University of Massachusetts found that the process of training a single AI model can emit more than 626,000 pounds of carbon dioxide – around the same as 63 gasoline-powered passenger vehicles driven for a year, or 125 round-trip flights between New York and Beijing. Data centres, which are instrumental in the data-heavy training of AI, consume 10 to 50 times more energy than similar-sized commercial properties. Furthermore, AI technology generates ‘e-waste’ – discarded hardware containing hazardous chemicals, including lead, mercury and cadmium, that can damage biodiversity and agriculture.
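As a rough sanity check on the car comparison (assuming the widely cited US EPA figure of about 4.6 metric tons of CO2 per typical passenger vehicle per year – my assumption, not a number from the study itself):

```python
# Back-of-the-envelope check on the '63 cars driven for a year' comparison.
LB_PER_METRIC_TON = 2204.62
CO2_PER_CAR_PER_YEAR_T = 4.6  # assumed EPA estimate, in metric tons

training_emissions_t = 626_000 / LB_PER_METRIC_TON   # ~284 metric tons
car_years = training_emissions_t / CO2_PER_CAR_PER_YEAR_T
print(f"{training_emissions_t:.0f} t CO2 ≈ {car_years:.0f} cars driven for a year")
```

This lands at roughly 62 vehicle-years, in line with the figure quoted above.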
On the other hand, AI has also been deployed to sort through the numerous satellite images researchers use to monitor climate change, to identify underperforming sectors in electricity grids and optimise their efficiency, and to find ways to recycle and reuse water by identifying contaminants. More strikingly, Google, American Airlines and Breakthrough Energy recently used AI to develop maps that forecast and reduce aeroplane contrails, which account for roughly 35% of the aviation sector’s global-warming impact. AI also powers robotics systems that can find and collect recyclable materials at twice the speed of a human, identify wildfires before they spread, and detect illegal logging in vulnerable forest areas by analysing audio-sensor data. AI’s uses in environmental conservation and emissions reduction are diverse and growing, and may well outweigh the direct downsides of increased emissions, waste and water usage.
AI may become the most transformative technology of the modern age. It has the capacity to revolutionise the way we work, socialise, distribute social benefits and interact with the world around us. As emphasised throughout this article, there’s a plethora of potential tripping hazards that could prevent AI from producing a net benefit for humanity: potential unemployment and changing labour relations, environmental damage, invasions of privacy and political interference, to name a few. For this reason, diligent and responsive regulation will be essential in navigating these pitfalls. The Bletchley Summit’s conclusion that “AI should be designed, developed, deployed, and used […] in such a way as to be human-centric, trustworthy and responsible”, alongside the resultant declaration of intent for international cooperation on the issue, are therefore both important steps forward.
Joshua Masson (he/him) is a University of Oxford philosophy, politics and economics graduate and is currently enrolled on the Solicitors Qualifying Exam preparation course at BPP University Law School.