Matthew Biggerstaff
01/05/2024
Reading time: four minutes
The AI Act (the act) was first proposed by the European Commission in 2021 in response to the risks posed by AI technology. Although the act hasn’t yet come into force in Europe, it’s likely to do so within the coming months.
Of course, the UK is no longer required to follow European law after leaving the European Union. However, with the act set to be the first comprehensive piece of legislation regulating AI tools, it may indicate what rules are likely to follow, both inside and outside the UK.
The act proposes many rules intended to make AI tools safer and more transparent and to prevent misuse. The European Commission has stated that different tools pose different levels of risk to people, and that two categories of risk will most often be considered: ‘unacceptable risk’ and ‘high risk’.
‘Unacceptable risk’ tools will always be banned under the legislation. These will most often be tools that track and identify people using facial recognition, or tools designed to manipulate the population. Any exception to these rules must be authorised by a court, such as the police using facial recognition software to identify suspects in serious crimes.
‘High risk’ tools will not always be banned, but must be reviewed before being allowed onto the market. These tools will also be open to complaints and subject to removal by national authorities. They will most often be tools used in areas such as:
- critical infrastructure, such as transport and utilities
- education and vocational training
- employment and the management of workers
- access to essential private and public services
- law enforcement
- migration, asylum and border control
- the administration of justice and democratic processes
The act will require all generative AI, such as ChatGPT, Sora and other tools that generate text, images or video on demand, to comply with strict transparency requirements. All content made by these tools must be clearly disclosed as AI-generated, and the tools must be prevented from creating illegal content, such as the explicit images of Taylor Swift I discussed in a previous blog. Any advanced generative AI model must also be reviewed and evaluated before being made publicly accessible.
The act will be brought into European law, likely within the next two months. However, the rules will become applicable gradually over the following two years. Certain rules, such as the ban on ‘unacceptable risk’ tools, will apply within six months, while the transparency requirements will not become applicable until after 12 months.
While any form of AI regulation is surely welcomed by many, the incredible rate at which AI has developed means that, by the time the full rules apply in two years, the capabilities of generally accessible tools could be extraordinary.
While the UK is no longer under European law, any country looking to bring in its own AI regulations outside the EU is likely to look to the AI Act and evaluate its proposals against it. Prime Minister Rishi Sunak has stated that the UK isn’t looking to rush any legislation into effect and is content to allow AI tools to grow and develop for now. While innovation and creation should of course be encouraged, allowing something with seemingly unlimited potential to grow unconstrained certainly carries risk. The chief executive of the Competition and Markets Authority, the UK’s competition watchdog, has raised concerns with the government over the potential harms of AI and some businesses’ use of AI models.
In the short term, the UK is relying on existing legislation, such as the Online Safety Act, to regulate the use of AI, alongside voluntary agreements with companies and other governments.
Any change in the UK’s stance is unlikely in the short term, especially with an election impending; serious discussion is unlikely until the next government is confirmed.