As artificial intelligence (AI) becomes established in our everyday lives and across all industry sectors, new laws and guidelines are quickly emerging to manage the risks of this fast-evolving technology. The European Union has led the way in creating an overarching AI regulation, the EU AI Act, which sets uniform regulatory standards for all sectors and industries. But Dr Asress Gikay, an AI and law academic at Brunel University London, explains why the UK should not rush to follow the EU, while acknowledging the flaws in the UK’s current regulatory policy and trajectory.
- The EU AI Act is problematic
- Sector-led AI regulation is better than the EU’s blanket regulation
- The UK’s current incremental approach to establishing AI regulation is preferred
- The UK’s non-statutory five principles for AI regulation are not enough
- A central AI authority with robust statutory powers should coordinate the enforcement of AI regulation across sectors
In his new paper, published in the International Journal of Law and Information Technology, Dr Asress Gikay unpacks the UK’s incremental approach to AI regulation and compares it with the EU’s comprehensive approach. “The European Union has taken a comprehensive approach to regulation, through the EU AI Act, but the UK has favoured sector-led regulation and does not believe that a blanket AI-specific regulation would be appropriate,” explained Dr Gikay.
In contrast to the EU, the UK government has not yet passed any AI-specific legislation. Instead, it has set out five non-statutory principles for independent sector regulators to consider and apply to AI systems used within their remit. These and additional principles have been incorporated into the UK AI Regulation Bill, but Parliament has taken no significant steps on the bill so far.
“The UK is set to create regulatory standards that respond to the particular needs of specific sectors based on evidence, whereas the EU sets uniform regulatory standards for AI development, deployment and governance across all sectors,” explained Dr Gikay. “Implementing comprehensive laws, as seen in the EU, makes subsequent parliamentary amendment extremely difficult and can also threaten technological advancement.”
The EU AI Act classifies AI systems into unacceptable-risk (banned), high-risk and low-risk categories, which Dr Gikay highlights as a key concern. “The Act’s classification of high risk has the potential to over-regulate or under-regulate AI systems, and it does not consistently take context into consideration, which is problematic.
“A delivery robot used within a controlled environment such as a warehouse should be classified as low-risk, but if it is deployed in an urban area, it could increase the risk of accidents and be considered a high-risk use case,” explained Dr Gikay. “Because the EU AI Act treats this AI-powered robot as a high-risk AI system, it imposes the same regulatory standards regardless of the context in which it is used.”
Because the EU AI Act relies on a closed list of high-risk AI systems, Dr Gikay believes that some AI use cases that pose a serious risk to the public interest escape high-risk classification. “AI systems used in migration and border management and anti-money laundering are currently not considered high-risk systems, although they could be used in a manner that discriminates against people based on race, religion or geographic origin,” explained Dr Gikay.
The AI and law expert argues that this tendency of the EU AI Act to apply excessive regulatory standards to AI systems that pose little risk, while allowing genuinely risky systems to slip outside the scope of high-risk regulation, provides a strong reason for the UK’s more cautious approach to AI regulation.
“If risk-based regulation is adopted in the UK, the categories of risk should not be fixed in a closed list; instead, they should be defined through adaptable principles that regulators and judges can apply to different contexts,” he said.
The Brunel University London academic believes that the UK’s incremental approach to AI regulation will prove more pragmatic if it is appropriately fine-tuned. “Although the existing five principles allow for adaptability, the framework takes flexibility to the extreme by advocating for a non-statutory approach in which regulators implement the principles without any statutory duty,” he said.
“With a proper principle-driven risk classification system and a strong commitment to coordinating sectoral legislation and enforcement through a central authority, the UK could implement an AI regulatory framework that better balances the need to encourage innovation against the potential risks presented by AI.”
The paper, by Asress Gikay, is published in the International Journal of Law and Information Technology.
Reported by:
Nadine Palmer,
Media Relations
+44 (0)1895 267090
nadine.palmer@brunel.ac.uk