By Professor Albert Sanchez-Graells, Co-Director of the Centre for Global Law and Innovation (University of Bristol Law School).
The Digital Constitutionalist (DigiCon) has recently hosted a symposium on ‘Safeguarding the Right to Good Administration in the Age of AI’, co-edited by Dr Simona Demková (Leiden), Dr Melanie Fink (Leiden) and Dr Giulia Gentile (Essex). Professor Sanchez-Graells contributed his thoughts on the need to extend good administration requirements to the phases of decision-making that are not yet directly relevant to the individual, as well as on the need to broaden good administration guarantees to a collective dimension, to account for the new risks arising in the AI-driven administrative context. In this post, first published in the DigiCon symposium, Albert looks at ways to achieve this, whether through an expansive interpretation of Article 41 of the Charter of Fundamental Rights of the European Union or through European legislative reform.
Resh(AI)ping good administration
Much like in every other area of socio-economic activity, the ‘covid digital shift’ and the mainstreaming of advances in artificial intelligence (AI) have prompted discussion of how the public sector could harness the advantages of digital technologies and data-driven insights. AI brings the promise of a more efficient, adaptable, personalisable, and fairer public administration (Esko and Koulu, 2023). States are thus experimenting with AI technology, seeking more streamlined and efficient digital government and public services—giving rise to what has been termed ‘New Public Analytics’ (Yeung, 2022), or a third wave of digital era governance (Dunleavy and Margetts, 2023). However, it is also clear that such ‘digital transformation’ poses significant risks (Kaminski, 2022). Generative AI, for example, has been shown to generate risks of unreliability, misuse, and systemic harm (Maham and Küspert, 2023)—risks which are particularly acute in public sector automated decision-making (ADM) supported by AI (Finck, 2019; Kuziemski and Misuraca, 2020).
Current legal frameworks seem ineffective in tackling some (or most) of these risks, whether because the technology pushes the limits of the GDPR, or even those of the new instruments of EU digital law (on which there is a burgeoning literature, including contributions by the symposium organisers; see eg Demková, 2023a and 2023b; Fink and Finck, 2022; Gentile, 2023), or because the risks concern discrimination not (directly) linked to currently protected characteristics (Wachter, 2022). There are also broader risks of erosion of the functioning of the public sector, of the values traditionally assigned to it in the protection of social interests (Smuha, 2021), and of the values to which the public administration should aspire (Ranchordás, 2022). Overall, it is an open question whether current and emerging legislative frameworks can adequately deal with the impacts of the public sector’s digital transformation.
This symposium offers an opportunity to reflect on whether Art 41 of the EU Charter of Fundamental Rights (CFR) provides a suitable framework for safeguarding the right to good administration in the AI context. In this contribution, I will sketch the argument that ensuring the right to good administration in the AI context requires both an extension and a broadening of the guarantees currently encapsulated in Art 41 (and 47) CFR. The extension requires regulating phases of administrative (self)organisation not (yet) of direct and immediate relevance to the individual, while the broadening requires embedding a collective dimension within a framework until now focused on individual rights and redress. I will also briefly consider whether such extension and broadening can be achieved through the interpretation of the umbrella clause in Art 41 CFR, or whether legislative reform is needed to achieve such reshaping of the right to good administration in the AI context (for some discussion, see Laukyte, 2022 and the contributions to Chevalier and Menéndez Sebastián, 2022). Some ideas are tentative or expressed in relatively informal terms, with the purpose of fostering discussion. Other issues exceed the possibilities of this contribution, such as the need to rethink doctrines like the presumption of legality in administrative decision-making, or legitimate expectations arising from automated decision-making. Those are saved for another time.
Simplified logic underpinning Art 41 (and 47) CFR
The proper functioning of the public administration is of crucial relevance to the rule of law and the functioning of constitutional democracies (eg Lock, 2019: 2205). However, the CFR does not encapsulate a general or social right to a good public administration. Rather, in simplified terms, the right to good administration in Art 41 CFR follows an individualistic logic, geared towards the protection of the interests of those at the receiving end of administrative decision-making. Art 41 CFR needs to be considered in coordination with Art 47 CFR, which provides access to (judicial) remedies when Art 41 CFR protection has been ineffective in avoiding individual harm. Art 47 CFR follows an equally individualistic logic. The overall logic is thus one of empowering individuals in their relationships with the public administration, both through procedural guarantees seeking to promote adequate decision-making, and through the last-resort possibility of enforcing those guarantees against the public administration and/or obtaining redress for defective decision-making. This individualistic logic leaves two important issues outwith the scope of Art 41 and 47 CFR protection.
First, protection is only triggered at the point where an individual situation becomes susceptible of administrative decision-making. (Earlier) techno-organisational decisions adopted in preparation for such decision-making are only taken into account to the extent that they impinge on specific guarantees given to the individual in relation to the specific (potential) decision—eg in terms of the objectivity which the organisational setting is capable of ensuring. However, techno-organisational decisions are not open to pre-emptive challenge, or to challenge in abstracto.
Second, it is irrelevant whether the situation triggering a (potential) breach of the right to good administration is unique to the individual facing administrative action. Individual rights and remedies do not vary with the number of individuals (potentially) affected by the decision-making, as each of them remains capable of individually enforcing their own rights.
I argue that these implications of the individualistic logic underpinning Art 41 and 47 CFR are of great importance in the AI context for two reasons.
On the one hand, this is important because techno-organisational decisions will have a much more direct bearing on individual decision-making in the AI context than in other settings, as the deployment will in some ways almost invariably (pre)determine the decision (eg by excluding all discretion). The impossibility of challenging such techno-organisational decisions before they are implemented can thus create a situation where existing Art 41 CFR rights offer ‘too little, too late’ by way of protection.
On the other hand, this is important because individual redress may be nearly impossible to obtain in a context of mass decision-making generating masses of individual claims, such as that enabled by AI, thus rendering Art 47 CFR protection ineffective where tribunal protection is delayed or unobtainable. It is also important because redress for the social interest in the proper functioning of the public administration may not be (pragmatically) actionable under current mechanisms. I address each of these issues in turn.
Regulating organisational risk-taking
It is increasingly accepted that regulating AI use requires a precautionary or anticipatory approach (see eg the working draft for a framework convention on AI and human rights)—even if the contours and implications of such an approach, or any (new) rights within it, are heavily contested (for discussion, see eg Abrusci and Mackenzie-Gray Scott, 2023). This stems from the realisation that AI deployment can generate mass effects that are very difficult or simply impossible to correct for. Experience has already shown that the implementation of defective or discriminatory algorithms by the public sector can generate massive harms, blighting the lives and opportunities of very many citizens—oftentimes the most vulnerable and marginalised. Techno-organisational decisions can thus irretrievably translate into breaches of the right to good administration (as well as of other fundamental rights) of many citizens at once, all ‘with a simple click of the mouse’, so to speak.
In this context, given the potential mass effects of discrete techno-organisational decisions, it is not acceptable to expect large numbers of citizens—or specific minorities—to have to rely on individualised ex post challenges to the implementation of those techno-organisational decisions. The right to good administration—or the mirroring duty of good administration incumbent on the public sector—must encompass a proactive and thorough ex ante assessment of the likely impact of techno-organisational decisions on the ability of the public sector user to respect individual rights when deploying AI. Such assessment needs to take place at the point of organisational risk-taking; in other words, ahead of, and in anticipation of, the technological deployment.
In my view, such assessment of the likely (in)compatibility of a planned technological deployment with individual rights needs to be undertaken by an institution with sufficient independence and domain expertise—which rules out a self-assessment by the public sector user and/or its technology providers. It thus needs to be implemented through a system of licensing or permissioning of public sector AI use—and I have developed elsewhere a proposal for the system to be managed by an ‘AI in the Public Sector Authority’ (AIPSA) (along similar lines, see Martín Delgado, 2022). To foster the effectiveness of such a system, the right to good administration needs to encompass a right to enforce the licensing mechanism against any planned or implemented AI deployment by the public sector—an alternative, but complementary, approach to disclosure-based proposals (see eg Smuha, 2021; Laux, Wachter and Mittelstadt, 2023). The right would be framed in negative terms, as an individual right not to be affected by administrative decisions resulting from unlicensed systems, or from systems violating the terms of the relevant licence. This would be a variation on the right not to be subjected to automated decision-making, as it would challenge not the what, but the how, of public sector AI deployment.
‘Automating’ and collectivising redress
It is also increasingly accepted that the automation of decision-making, and the mass effects that can result from a single techno-organisational decision, pose significant challenges to existing systems of remedies (eg Benjamin, 2023). It is easy to see how tribunals and courts could quickly become overwhelmed and ineffective if they had to deal with thousands or even hundreds of thousands of claims arising from a single techno-organisational decision (eg the implementation of a faulty algorithm in a core digital government service to do with taxation or social security). It is also increasingly clear that the outputs of a techno-organisational solution can ‘snowball’ through an increasingly interconnected and data-driven public administration, further increasing the volume and variety of harms, damages and complaints that can arise from a single AI deployment (see eg Widlak, van Eck and Peeters, 2021).
Equally, it is increasingly accepted that there are social interests (eg in the proper functioning of the public administration as a crucial element in citizens’ assessments of the functioning of the State and the underlying constitutional settlement) that are not amenable to the current system of individual redress (Smuha, 2021). This is either because the related incentives do not operate in favour of enforcing any existing checks and balances (eg where the individual interest is relatively small and would thus not ‘activate’ individual claims), or because the erosion of social interests results from compounded techno-organisational processes with interactive long-run effects that cannot be separately challenged effectively (Yeung, 2019: 42 and 75). This poses a major difficulty.
While ex ante controls on the adoption of AI by the public sector (as above) should reduce the likelihood or frequency of such mass, collective and social harms, these harms would not be altogether excluded. It is thus necessary to think about ways to tackle the issue. In my view, a broadening of the right to good administration to encompass a proactive duty on the public administration using an AI deployment to undo the harms arising from techno-organisational decisions would go some way in that regard (similarly, Widlak, van Eck and Peeters, 2021). A public administration put on notice of a (potential) harm arising from an AI deployment would immediately become duty-bound to: (a) suspend or discontinue the use of the AI; and (b) proactively redress the situation for everyone affected, without the need for any individual claims. It would also be under (c) a duty to report to the licensing or permissioning authority (AIPSA), so as to trigger the relevant duties to revisit the assessment of equivalent or compounded AI deployments potentially affected by the same problem. Finally, all public authorities using such AI deployments would be under (d) a duty to collaborate in the efforts to proactively undo the damage and to ‘fix the system’ going forward.
Statutory fit
I have argued for an extension and a broadening of the specific rights encompassed by the broader right to good administration under Art 41 (and 47) CFR. The umbrella clause in Art 41(1) CFR includes some flexible elements that could accommodate such extension and broadening. It has been authoritatively argued that it is open to claimants ‘to rely on Article 41(1) for aspects of the right to good administration that do not readily fall within the more specific parts of Article 41(2)’, and that an expansive interpretation of Art 41(1) would include many analogous extensions put forward by the EU Ombudsman (Craig, 2021: 1128). However, it seems to me that there are some barriers to the full operationalisation of the right to good administration in the digital administrative space solely through the interpretation of Art 41 (and 47) CFR, if nothing else due to their individualistic logic (above; more generally, see Wolswinkel, 2022a and 2022b).
Concluding thoughts
Overall, I think the time is ripe for a reconsideration of the right to (a) good administration under EU (digital) law. The international instruments likely to emerge, eg from the Council of Europe’s work on AI, seem unlikely to impose stringent regulatory controls capable of addressing the growing deficit and imbalance resulting from the public sector’s digital transformation. Hopefully, the proceedings of this symposium will offer a strong foundation on which to develop a fuller proposal for a set of rights and duties of good administration in the context of AI within the European Union.