Can the government just go and ‘confidently and responsibly’ buy artificial intelligence?

by Albert Sanchez-Graells, Professor of Economic Law and Co-Director of the Centre for Global Law and Innovation (University of Bristol Law School).

On 29 March 2023, the UK Government published its much-awaited policy paper ‘AI regulation: a pro-innovation approach’ (the ‘AI White Paper’). The AI White Paper made clear that the Government does not intend to create new legislation to regulate artificial intelligence (‘AI’), or a new AI regulator. AI regulation is to be left to existing regulators based on ‘five general principles to guide and inform the responsible development and use of AI in all sectors of the economy’, including accountability, transparency, fairness, safety, and contestability.

The Government’s approach will entrench the regulatory vacuum in which the public sector is adopting AI, as there is no existing regulator with a remit clearly comprising the public sector. Some instances of public sector AI use will fall under the powers of the Information Commissioner’s Office or the Equality and Human Rights Commission, and others could fall within the remit of lesser-known regulators, such as the Biometrics and Surveillance Camera Commissioner. However, many AI deployments will not be necessarily or clearly caught by existing regulatory regimes. Moreover, given the level of secrecy with which the public sector is adopting AI—as clearly denounced and demonstrated by the Public Law Project—even existing regulators will struggle to monitor and effectively intervene to protect existing rights. Rights that, at the same time, are at very real risk of being watered down under the Data Protection and Digital Information (No. 2) Bill.

While the AI White Paper is cognisant of this situation, the Government does not seem to think this is a problem and, rather, that there are alternative mechanisms to control the adoption of AI by the public sector. On that, the AI White Paper retains the approach in the earlier July 2022 policy paper ‘Establishing a pro-innovation approach to regulating AI’. This emphasised how the 2021 National AI Strategy tackled the issue by highlighting the importance of the public sector’s role as a buyer and stressing that the Government had already taken steps ‘to inform and empower buyers in the public sector, helping them to evaluate suppliers, then confidently and responsibly procure AI technologies for the benefit of citizens’. Such steps primarily involve the publication in 2020 of a Guide to using AI in the public sector and Guidelines for AI procurement.

The UK Government thus seems to think it can just go and ‘confidently and responsibly’ buy trustworthy AI. In research funded by the British Academy, I critically assessed whether that is the case. The answer is a resounding ‘no’.

Public buyers are inadequately resourced and wrongly placed

My research shows that the first reason why public buyers are not adequately placed to ‘confidently and responsibly procure AI technologies for the benefit of citizens’ is that they suffer a significant digital skills gap and are under a structural conflict of interest that prevents them from acting as gatekeepers of fundamental and individual rights, as well as broader social interests.

Given the large number of unfilled civil service data and data science vacancies, public buyers are inadequately staffed to confront and negotiate with much more knowledgeable and well-resourced tech companies—and at risk of being captured or deceived by companies selling snake oil. Public buyers’ excessive reliance on external consultants to plug the digital skills gap further erodes their ability to act as effective gatekeepers.

Moreover, even if the public buyers were adequately resourced, they would be conflicted by their own operational (and political) interest in the deployment of the AI, which can easily trump consideration of broader public interests.

This can be exacerbated by Government interventions such as the recently announced launch of the Foundation Model Taskforce—which foresees the investment of £100 million ‘in foundation model infrastructure and public service procurement, to create opportunities for domestic innovation’. Procurement geared towards the delivery of digital industrial policy goals is at odds—or at the very least hard to square—with the expected role of public buyers in responsibly procuring AI technologies, especially where guardrails and regulatory constraints are seen as mechanisms that stifle innovation and place the UK at a disadvantage in the global race to (generative) AI. A ‘pro-innovation’ narrative and ethos will reduce the level of scrutiny of the compatibility of (some) AI deployments with fundamental and individual rights, as is clearly the case with the emerging approach towards the adoption of generative AI (such as ChatGPT) by the public sector.

Public buyers do not have the right tools

Further to that, my research also shows that public buyers do not have the right tools to ‘regulate AI by contract’. A first constraint derives, again, from the gap in public sector digital skills, which prevents the public buyer from adequately understanding the technologies it seeks to buy. The public buyer risks procuring AI it does not understand, which is already a widespread phenomenon in the private sector. A lack of understanding can make it impossible to set regulatory requirements to be embedded through procurement tools—assuming such requirements could be formulated with a sufficient level of precision at all, which is not the case for all principles or attributes of trustworthy AI, eg explainability.

Even if digital skills and AI understanding were not a barrier, procurement law (both current EU-derived rules and the Procurement Bill) constrains the exercise of administrative discretion to ensure meaningful competition and value for money in the award of public contracts. This limits the technical prescriptiveness allowed to the public buyer, which means that it is not easy, or even possible at all, to use procurement tools to embed defined and mandatory regulatory requirements in the tender procedure and in public contracts.

Procurement rules require consideration of equivalent solutions, an evaluation that is extremely complicated in the absence of generally accepted methodologies and metrics. And there is a significant gap in methodologies and metrics tailored to most of the regulatory requirements ideally applicable to public sector AI adoption.

Moreover, procurement tools and mechanisms are highly likely to end up being a conduit for the direct or indirect application of commercially determined technical standards and industry practices. This creates a significant regulatory problem because industry-led standards are inadequate tools to protect the fundamental and individual rights jeopardised by AI adoption in the public sector.

Privatising AI regulation by contract

Given the inadequate position public buyers find themselves in, and the inadequacy of their tools, the current approach to the ‘regulation by contract’ of AI adoption by the public sector cannot work. Relying on public buyers to ‘confidently and responsibly procure AI technologies for the benefit of citizens’ is naïve at best, and disingenuous at worst. Public buyers can be overpowered, captured, or both, by technology providers. Technology providers are also likely to set the rules of the game, either through direct negotiation of ‘their’ contracts, or through the development of industry-friendly standards that procurement simply adopts. This is not conducive to guaranteeing that the public sector only adopts trustworthy AI, because the process of public sector digitalisation risks being driven by commercial and industrial policy interests.

More controls and more rules are needed

My analysis shows that, contrary to the position in the AI White Paper, more controls and more rules are needed. I have thus formulated a proposal for the creation of a new ‘AI in Public Sector Authority’ (AIPSA). AIPSA would be an independent authority with the statutory function of promoting overarching goals of digital regulation, and specifically tasked with regulating the adoption and use of digital technologies by the public sector, whether through in-house development or procurement from technology providers.

AIPSA’s role in setting mandatory requirements for public sector digitalisation would be twofold.

First, through an approval or certification mechanism, it would control the process of industry-led standardisation to neutralise risks of regulatory capture and commercial determination. Where no industry-led standards were suitable for approval or certification, AIPSA would develop them.

Second, through a permission or licensing process, AIPSA would ensure that decisions on the adoption of digital technologies by the public sector are not driven by ‘policy irresistibility’, that they are supported by clear governance structures and draw on sufficient resources, and that adherence to the goals of digital regulation is sustained throughout the implementation and use of digital technologies by the public sector and subject to proactive transparency requirements.


Contrary to the UK Government’s position in the AI White Paper, without more controls and more rules, public buyers cannot ‘confidently and responsibly procure AI technologies for the benefit of citizens’. The creation of those controls and those rules is urgent, as the rapidly accelerating pace of development of AI will translate into an equally accelerating adoption of those technologies by the public sector. While this process of public sector digitalisation remains unregulated, there is a clear risk that fundamental and individual rights will be impinged upon, and that broader social interests will be compromised. There is also a risk of creating a stock of untrustworthy AI in the public sector that will then be difficult and costly to dismantle. A change of policy approach is urgently needed.

If you want to know more about this, and join the discussion, please consider attending the end-of-project public lecture ‘AI in the public sector: can procurement promote trustworthy AI and avoid commercial capture?’, to be held in Bristol on 4 July 2023.
