Artificial intelligence (AI) has been permeating all aspects of our lives for a while. AI underpins several of the digital services we use and, perhaps less known to most of us, an increasing number of public services. However, only recently, and on the back of questionable claims about existential AI threats, has AI regulation started to grab mainstream headlines, permeate public discourse, and rise quickly to the top of the political agenda. Before this recent flurry of AI regulation discourse, in late March 2023, the UK Government published a much-awaited white paper setting out its ‘pro-innovation approach to AI regulation’ (the AI White Paper). Much has happened in the short period since the AI White Paper was published, including the launch of a £100m Foundation Model Taskforce, the appointment of its Chair, and the announcement that, in a bid to lead the global discussion on AI guardrails, the UK will convene a global AI safety summit.
With developments taking place at breakneck speed, the AI White Paper might seem to have become obsolete even before the end of its public consultation on 21 June 2023. However, the Government has not abandoned the AI White Paper and claims that more recent initiatives on AI safety are a major plank of its approach to AI. As the AI White Paper outlined the direction of travel the current UK government intends to follow, we took the time to analyse it in detail. We found quite a few shortcomings and pitfalls that, in our view, the UK Government should fix and avoid if it seriously aims to become a global leader in AI regulation. Our full submission is available here, and this blog post summarises our main findings.
The regulatory model described in the AI White Paper
The AI White Paper seeks to address regulatory gaps and challenges posed by the emergence of AI, which it defines as a set of technologies characterised by their ‘adaptivity’ and ‘autonomy’. The AI White Paper is premised on ‘an initial assessment of AI-specific risks and their potential to cause harm, with reference … to the values that they threaten if left unaddressed. These values include safety, security, fairness, privacy and agency, human rights, societal well-being and prosperity.’
The AI White Paper stresses that ‘some AI risks arise across, or in the gaps between, existing regulatory remits’. It adds that there is a ‘risk of inconsistent enforcement across regulators [and] also a risk that some regulators could begin to dominate and interpret the scope of their remit or role more broadly than may have been intended in order to fill perceived gaps in a way that increases incoherence and uncertainty.’
The AI White Paper also echoes concerns expressed by industry that ‘conflicting or uncoordinated requirements from regulators create unnecessary burdens and that regulatory gaps may leave risks unmitigated, harming public trust and slowing AI adoption’, and that ‘regulatory incoherence could stifle innovation and competition by causing a disproportionate amount of smaller businesses to leave the market.’
To tackle those regulatory challenges, the AI White Paper foresees a regulatory regime characterised as ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’. The AI White Paper claims to set out an approach that ‘is proportionate, adaptable, and context-sensitive to strike the right balance between responding to risks and maximising opportunities.’ The approach is based on an agile (iterative) model that leverages the capabilities and skills of existing regulators, as ‘[c]reating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators.’
The AI White Paper claims to set out ‘a clear and unified approach to regulation [that] will build public confidence, making it clear that AI technologies are subject to cross-cutting, principles-based regulation’ to improve upon the current ‘complex patchwork of legal requirements’.
The five values-focused cross-sectoral principles are:
- Safety, security and robustness;
- Appropriate transparency and explainability;
- Fairness;
- Accountability and governance; and
- Contestability and redress.
Our assessment of the regulatory model
In short, the AI White Paper claims to advance a ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’ model that leverages the capabilities and skills of existing regulators to foster AI innovation. This model, we are told, would be underpinned by a set of principles providing a clear, unified, and flexible framework improving upon the current ‘complex patchwork of legal requirements’ and striking ‘the right balance between responding to risks and maximising opportunities.’
We challenge such claims in the AI White Paper, arguing that:
- The AI White Paper does not advance a balanced and proportionate approach to AI regulation, but rather, an “innovation first” approach that caters to industry and sidelines the public. The AI White Paper primarily serves a digital industrial policy goal ‘to make the UK one of the top places in the world to build foundational AI companies’. The public interest is downgraded and building public trust is approached instrumentally as a mechanism to promote AI uptake. Such an approach risks breaching the UK’s international obligations to create a legal framework that effectively protects fundamental rights in the face of AI risks. Additionally, in the context of public administration, poorly regulated AI could breach due process rules, putting public funds at risk.
- The AI White Paper claims to embrace an agile regulatory approach, but instead engages in active deregulation. The AI White Paper stresses that the UK ‘must act quickly to remove existing barriers to innovation’ without explaining why any of the existing safeguards would no longer be required in view of the heightened AI risks it identifies. Coupled with the “innovation first” mandate, this deregulatory approach risks eroding regulatory independence and the effectiveness of the regulatory regimes the AI White Paper claims to seek to leverage. A more nuanced regulatory approach that builds on, rather than threatens, regulatory independence is required.
- The AI White Paper builds on shaky foundations, including the absence of a mapping of current regulatory remits and powers. This makes it near impossible to assess the effectiveness and comprehensiveness of the proposed approach, although there are clear indications that regulatory gaps will remain. The AI White Paper also presumes continuity in the legal framework, which ignores reforms currently promoted by the Government and further, repeatedly floated reforms of the overarching legal regime. It seems clear that some regulatory regimes will soon see their scope or stringency limited. The AI White Paper does not provide clear mechanisms to address these issues, which undermines its core claim that leveraging existing regulatory regimes suffices to address potential AI harms. This is perhaps particularly evident in the context of AI use for policing, where the regulatory and procurement framework for the use of AI is fragmented and unclear.
- The AI White Paper does not describe a full, workable regulatory model. The lack of detail on the institutional design to support the central function is a crucial omission. Important tasks are assigned to this central function without clarifying its institutional embedding, resourcing, accountability mechanisms, and so on.
- The AI White Paper foresees a ministerially-dictated approach that further risks eroding regulatory independence, especially given the “innovation first” criteria to be used in assessing the effectiveness of the proposed regime.
- The principles-based approach to AI regulation suggested in the AI White Paper is undeliverable due to lack of detail on the meaning and regulatory implications of the principles, barriers to translation into enforceable requirements, and tensions with existing regulatory frameworks. The minimalistic legislative intervention entertained in the AI White Paper would not equip regulators to effectively enforce the general principles. Following the AI White Paper would also result in regulatory fragmentation and uncertainty and not resolve the identified problem of a ‘complex patchwork of legal requirements’.
- The AI White Paper does not provide any route towards sufficiently addressing the digital capabilities gap, or towards mitigating new risks to capabilities such as deskilling, which create significant constraints on the likely effectiveness of the proposed approach.
Our full submission is available as A Charlesworth, K Fotheringham, C Gavaghan, A Sanchez-Graells and C Torrible, ‘Response to the UK’s March 2023 White Paper “A pro-innovation approach to AI regulation”’ (June 19, 2023) https://ssrn.com/abstract=4477368.