AWARD-WINNING LEGAL & REGULATORY COMPLIANCE CONSULTANTS

Handley Gill Limited

Our expert consultants at Handley Gill share their knowledge and advice on emerging data protection, privacy, content regulation, reputation management, cyber security, and information access issues in our blog.

Public F(AI)rness

When public authorities, or entities exercising public functions, deploy artificial intelligence (AI) technologies, compliance with their public and administrative law obligations effectively requires them to comply not only with data protection, human rights, health and safety and other applicable laws, but also with concepts of fairness, transparency and the elimination of bias, and to be alive to specific public sector applications and AI use cases that present unique challenges. The government and public bodies should be under no illusions: unleashing AI across the UK to deliver the AI Opportunities Action Plan requires significant due diligence to be undertaken by each organisation, in respect of each use case, before roll-out, if claims and regulatory challenges are not also to be unleashed.
— Handley Gill Limited

One of the three goals of the AI Opportunities Action Plan published in January 2025 was that “The public sector should rapidly pilot and scale AI products and services”. The government responded by committing to adopt all 50 recommendations made by its author, Matt Clifford CBE, “Backing AI to the hilt” with “Artificial intelligence… unleashed across the UK to deliver a decade of national renewal”.

Shortly thereafter, the Secretary of State for Science, Innovation and Technology, Peter Kyle, announced that his department would become the digital centre of government to “overhaul digital services” and “put AI to work”, committing to the publication in summer 2025 of a cross-government digital and AI roadmap. At the same time, the government announced the launch of ‘Humphrey’ (affectionately named after the senior civil servant Sir Humphrey Appleby GCB KBE MVO in the BBC political satire ‘Yes, Minister’), a suite of AI tools to be made available to civil servants comprising: Parlex, to search and analyse records of Parliamentary debates; Minute, a secure transcription service; Redbox, a generative AI tool to summarise policy and support briefing preparation; and, Lex, to support legal research by summarising and analysing relevant law. Other measures announced included the establishment of a Responsible AI Advisory Panel, rules requiring public sector organisations to publish their application programming interfaces (APIs) and the creation of a Technical Design Council.

There is also a risk of civil servants utilising unauthorised AI models in their work, which may not be subject to governance and assurance measures but to which public law obligations will apply equally.

When any organisation deploys artificial intelligence (AI) systems and tools in the UK, while there are currently no AI-specific laws, it must comply with its legal obligations under existing law and regulation: to process personal data fairly and lawfully, to prevent unlawful discrimination, to comply with health and safety regulations, to meet any duty of care and to protect against unlawful interference with human rights.

When public authorities (or private entities exercising public functions) deploy AI, however, they are subject to additional public and administrative law obligations. As well as complying with the Public Sector Equality Duty (PSED) under section 149 Equality Act 2010, they must also comply with their public law obligations, including to act lawfully and within their powers, fairly, reasonably, proportionately and transparently.

How does the use of AI in public sector policy making and delivery affect compliance with public law principles?

Legality

Issues of illegality can arise as a consequence of unlawful delegation of powers, failure to meet the Gunning principles in the context of carrying out consultations, failure to comply with the European Convention on Human Rights and the Equality Act 2010 and/or failure to meet legitimate expectations.

When utilising AI in the context of public decision making, particular regard needs to be had to the extent of the delegated power and to the identity of the decision maker. Where the relevant legal framework specifies the decision maker, effectively delegating that decision to an AI tool, whether because the tool makes the decision itself or because its recommendation is adopted without meaningful human intervention, is likely to be deemed ultra vires. Wherever AI is proposed to be used to support decision making, it would be advisable to ensure not only that staff are trained in AI literacy and in the specific limitations of the relevant tool, but also that records are maintained of the decision maker's own independent determination having regard to the AI tool's recommendation, and that monitoring is undertaken by the relevant AI governance mechanism to ensure that meaningful intervention is maintained and systemic unlawfulness avoided. Decision makers must not fetter their own discretion, and AI models with binary outputs, or subsequent human reviews which are themselves binary in nature, may therefore not pass muster.
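By way of illustration only, such a record might be structured along the following lines. This is a minimal Python sketch; the field names and the crude monitoring check are our own assumptions rather than any prescribed template.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Illustrative only: the field names are hypothetical, not a prescribed government template.
    @dataclass
    class AssistedDecisionRecord:
        case_reference: str
        decision_maker: str        # the person identified by the legal framework
        ai_tool: str               # name and version of the AI tool consulted
        ai_recommendation: str     # the tool's output, recorded verbatim
        human_determination: str   # the decision maker's own conclusion
        independent_reasons: str   # reasons for agreeing with or departing from the tool
        recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

        def shows_meaningful_intervention(self) -> bool:
            # Crude governance check: an empty reasons field suggests the AI
            # recommendation may simply have been adopted without independent review.
            return bool(self.independent_reasons.strip())

A governance function could review such records periodically to monitor whether independent determinations are in fact being made.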

Where irrelevant factors are taken into account, or relevant factors are disregarded or given inappropriate weight, this may also lead to a conclusion of illegality. So-called deep learning algorithms, which are used in generative AI models, comprise multiple layers of artificial neurons, with the model according different weights to the connections between them; these weights determine how a prompt is transformed as it travels through the neural network to predict an output. Even model developers may lack a full understanding of the operation of multi-layered neural networks and, because these models are stochastic/probabilistic in nature, their reliability and consistency are not guaranteed: they may produce different outputs in response to the same prompt. Where such models are used in decision making, their 'black box' nature would make it difficult to prove which factors were considered and what weight they were given.
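The point about inconsistent outputs can be shown with a deliberately simplified sketch. The probabilities below are invented for illustration; the essential feature is that the model samples its output from a probability distribution rather than selecting it deterministically, so an identical prompt can yield different results on different occasions.

    import random

    # Hypothetical next-token probabilities for a single, fixed prompt.
    next_token_probabilities = {"grant": 0.55, "refuse": 0.35, "defer": 0.10}

    def sample_output(probabilities: dict) -> str:
        # Sample one token in proportion to its probability (stochastic, not deterministic).
        tokens, weights = zip(*probabilities.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Running the "same prompt" five times can produce different recommendations,
    # e.g. ['grant', 'grant', 'refuse', 'grant', 'defer']; results vary from run to run.
    print([sample_output(next_token_probabilities) for _ in range(5)])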

Even at the policy making stage, the use of AI in the context of consultations is capable of breaching public law. The Gunning Principles, established in R v Brent London Borough Council ex parte Gunning & others (1985) 84 LGR 168, require that consultations should (i) occur while proposals are at a formative stage; (ii) provide sufficient information to enable the public to give the consultation intelligent consideration; (iii) be conducted so as to provide consultees with sufficient time to consider and respond; and, (iv) give conscientious consideration to responses before decisions are made. In so far as AI tools may be used to review and summarise consultation responses, this has the potential to fail the fourth Gunning principle. AI tools may support overarching analysis and a summary of sentiment in terms of whether respondents favour, reject or are neutral as to particular proposals, but may not be adept at identifying nuance and alternative propositions or at examining their viability.

Section 6 Human Rights Act 1998 makes it unlawful for a public authority to act in any way which is incompatible with a Convention right. This includes Article 14 European Convention on Human Rights, which prohibits discrimination “on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status”. Section 149 Equality Act 2010 requires public authorities to go further by implementing the Public Sector Equality Duty (PSED), building on the obligation that public authorities not discriminate to require them to have due regard to the need to eliminate discrimination, harassment, victimisation and other prohibited conduct, advance equality of opportunity between those sharing a protected characteristic and those who do not and, foster good relations between those sharing a protected characteristic and those who do not.

Bias can arise in the context of AI tools as a consequence of the data used to train, validate and test the underlying model, the data used to fine-tune an AI tool, the labelling of data, the machine learning methodology applied, the algorithm itself, the input, the AI output and human interpretation and application of the output. The US National Institute of Standards and Technology categorises these biases into systemic biases, statistical and computational biases, and human biases in its paper ‘Towards a Standard for Identifying and Managing Bias in Artificial Intelligence’ (2022). It is imperative that public authorities utilising AI tools develop, at the procurement stage, their own understanding of how those tools were developed and how they work, identify whether and how biases may arise and how they have been managed, and determine whether further measures are necessary to comply not only with the prohibition on discrimination but also with the PSED. This may be addressed by undertaking an Equality Impact Assessment or incorporated into a wider Algorithmic Impact Assessment, and will need to be conducted prior to, or in conjunction with, completion of the Algorithmic Transparency Recording Standard report. As the PSED is an ongoing duty, however, it is insufficient merely to carry out these assessments upon initial use; they must be kept under review throughout the period of deployment.
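As a purely illustrative example of the kind of ongoing monitoring that might feed an Equality Impact Assessment, the sketch below compares rates of favourable outcomes across groups. The records and the indicative 0.8 disparity threshold are assumptions for the purposes of the example, not a prescribed legal test.

    from collections import defaultdict

    # Hypothetical decision log: each entry records a group and whether the outcome
    # was favourable. Real monitoring would use the authority's own decision data.
    decisions = [
        {"group": "A", "favourable": True},
        {"group": "A", "favourable": True},
        {"group": "A", "favourable": False},
        {"group": "B", "favourable": True},
        {"group": "B", "favourable": False},
        {"group": "B", "favourable": False},
    ]

    totals, favourable = defaultdict(int), defaultdict(int)
    for record in decisions:
        totals[record["group"]] += 1
        favourable[record["group"]] += record["favourable"]

    # Favourable-outcome rate per group, and the ratio of the lowest to the highest rate.
    rates = {group: favourable[group] / totals[group] for group in totals}
    disparity_ratio = min(rates.values()) / max(rates.values())

    print(rates)            # e.g. {'A': 0.67, 'B': 0.33}
    print(disparity_ratio)  # flag for further review if this falls well below ~0.8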

For central government departments currently subject to the Algorithmic Transparency Recording Standard, and for other public bodies as they are mandated to comply with it, failure to comply could result in breach of a legitimate expectation.

Fairness

Any determination of civil rights and obligations is required to be conducted in accordance with Article 6 European Convention on Human Rights and, where Article 6 is not engaged, established public law principles of fairness apply.

In either case, the law requires that the decision maker be impartial and unbiased, which may raise similar issues to those identified in connection with illegality above.

Furthermore, in each case transparency and a reasoned decision are likely to be required. Both may prove difficult to deliver where deep learning algorithms are utilised, due to their 'black box' nature as described above.

The concept of fairness in AI has been the subject of international guidance on best practice, which extends beyond Article 6 and public law concepts of fairness:

  • UNESCO’s Recommendation on the Ethics of Artificial Intelligence (AI) established 10 core principles for a human rights-centred approach to AI ethics, the 10th of which is ‘Fairness and Non-Discrimination’, providing that “AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all”.

  • The Organisation for Economic Co-operation and Development’s (OECD’s) updated Principles for Trustworthy AI (Artificial Intelligence) identifies 5 principles for responsible stewardship of trustworthy AI, including “Respect for the rule of law, human rights and democratic values, including fairness and privacy”, which it explains as follows: “AI actors should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle. These include non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and internationally recognised labour rights. This also includes addressing misinformation and disinformation amplified by AI, while respecting freedom of expression and other rights and freedoms protected by applicable international law”. The OECD suggests that in order to meet the requirements of this principle, “AI actors should implement mechanisms and safeguards, such as capacity for human agency and oversight, including to address risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse in a manner appropriate to the context and consistent with the state of the art”.

  • The former UK Conservative Government’s 2023 AI Regulation White Paper identified 5 principles to guide and inform the responsible development and use of AI in all sectors of the economy, one of which was fairness. While no concise definition of fairness in the context of AI was proposed, it was noted that fairness requires that “AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes” and that this would require compliance with the Human Rights Act 1998, the Equality Act 2010, data protection legislation (including the UK GDPR and Data Protection Act 2018), consumer and competition law, including rules to protect vulnerable consumers such as the Financial Conduct Authority’s (FCA’s) Consumer Duty or the Competition and Markets Authority’s (CMA’s) approach to vulnerable consumers and individuals, and sector-specific regulation such as the FCA’s Handbook.

  • The Council of Europe (CoE) Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the AI Treaty), which the UK has signed but not yet ratified, imposes several obligations which could be considered relevant to an assessment of fairness, including: the adoption of measures to promote the reliability of artificial intelligence systems and trust in their outputs (Article 12); the adoption or maintenance of measures that seek to ensure that adverse impacts of artificial intelligence systems on human rights, democracy and the rule of law are adequately addressed (Article 16(3)); ensuring that activities within the lifecycle of artificial intelligence systems respect equality, including gender equality, and the prohibition of discrimination, as provided under applicable international and domestic law (Article 10(1)); and, overcoming inequalities to achieve fair, just and equitable outcomes, in line with applicable domestic and international human rights obligations, in relation to activities within the lifecycle of artificial intelligence systems (Article 10(2)).

What fairness means in the context of AI in the UK specifically has been addressed by the Digital Regulation Co-operation Forum, which recognised that “Fairness can arise in a variety of contexts, and what is defined as “fair” in AI differs depending on the situation” and that, while fairness requires the elimination of bias, the concept is far wider than that. Regulators are beginning to form their own views of what fairness in the context of AI requires, with the Information Commissioner’s Office’s (ICO’s) guidance stating that, in connection with data protection, “fairness means you should only process personal data in ways that people would reasonably expect and not use it in any way that could have unjustified adverse effects on them. You should not process personal data in ways that are unduly detrimental, unexpected or misleading to the individuals concerned”. In the context of personalised pricing in the communications industries, which could be based on AI, Ofcom has indicated that “the exploitation of behavioural biases would concern us where customers are adversely affected, for example where this impairs a customer’s ability to make well-informed decisions or means they make decisions which are not in their best interests”.

For private sector organisations, since the UK has so far refrained from introducing specific legislation to regulate AI, fairness is, outside of regulators’ application of the concept under existing law, observed only as a matter of best practice; for public authorities obliged to act fairly and reasonably, however, these somewhat nebulous principles warrant closer examination.

Reasonableness and proportionality

The public law concept of Wednesbury unreasonableness imposes a standard which requires the courts to intervene where action or inaction is “so unreasonable that no reasonable authority could ever have come to it”, essentially that it is perverse. This is a high threshold, but one which may be relied upon with greater frequency and success in the context of AI decision making, particularly if the necessary records pertaining to the operation of AI models and the wider context in which they are deployed are not maintained. Unreasonableness can arise either as a consequence of the outcome itself or of the route by which it was arrived at, through procedural impropriety.

Where Convention rights are engaged, public authorities are required to consider proportionality, having regard to whether the aim of the relevant measure is legitimate, whether the measure is a suitable means of achieving the aim, whether the measure is the least intrusive means of achieving the aim and whether the measure strikes a fair balance between the rights of the individual and the interests of the community.

Transparency

As set out above, public law obligations may require that reasons be given for decisions. The applicable standard was set out by Lord Brown in South Buckinghamshire District Council v Porter (No 2) [2004] 1 WLR 1953 at [36]: the reasons relied on must enable the reader to understand why the matter was decided as it was and what conclusions were reached on the 'principal important controversial issues', disclosing how any issue of law or fact was resolved and giving rise to no substantial doubt as to whether the decision-maker erred in law. The Administrative Court in R (on the application of Gare) v Babergh District Council [2019] EWHC 2041 (Admin) recognised that “Meeting the required standard of clarity is likely to be more difficult where the reasons relied on are not contained in a single document but have to be pieced together from two sources which do not agree with each other in the outcome”. Consideration will be required as to how decisions, and the reasons for them, are to be presented to affected individuals, as the presentation of an outcome, supplemented by details of prompts, an Algorithmic Transparency Recording Standard report, applicable policies, copies of impact assessments and the like, may well fail to meet the standard.

Other legal obligations may also require a measure of transparency: for example, where personal data is processed in the context of AI, data subjects will be entitled to be provided with meaningful information about the logic of the algorithm.

Central government departments are currently mandated to comply with the Algorithmic Transparency Recording Standard (ATRS) in relation to their use of algorithmic tools, including artificial intelligence (AI), which either have a significant influence on a decision-making process with public effect (in the sense that they meaningfully assist, supplement, or fully automate it) or which directly interact with the general public. This obligation is to be extended to other public bodies in due course. The ATRS requires a spreadsheet to be completed and submitted for publication on the Algorithmic Transparency Recording Standard hub detailing information relating to, among other things, the procurement of the tool, data sharing, the development of the tool (including its training), its operation and performance, the risks associated with it and how it is being used. Failure to comply where the ATRS is mandated may therefore breach a legitimate expectation.

The government has compiled a repository of the ATRS records in respect of current AI applications.

Transparency is a key feature of all international best practice on AI:

  • The CoE AI Treaty requires that member states ensure that: adequate transparency and oversight requirements tailored to the specific contexts and risks are in place in respect of activities within the lifecycle of artificial intelligence systems, including with regard to the identification of content generated by artificial intelligence systems (Article 8); and, as appropriate for the context, persons interacting with artificial intelligence systems are notified that they are interacting with such systems rather than with a human (Article 15(2)).

  • The concept of appropriate transparency and explainability was one of the five principles proposed by the former UK Conservative Government in its 2023 AI Regulation White Paper ‘A pro-innovation approach to AI regulation’;

  • The UNESCO Recommendation on the Ethics of AI identifies transparency and explainability as “essential preconditions to ensure the respect, protection and promotion of human rights, fundamental freedoms and ethical principles”; and,

  • Paragraph 1.3 of the OECD Principles for Trustworthy AI  similarly requires that “AI Actors should commit to transparency and responsible disclosure regarding AI systems”, providing “meaningful information” intended to “foster a general understanding of AI systems, including their capabilities and limitations”, “make stakeholders aware of their interactions with AI systems”, “where feasible and useful, to provide plain and easy-to-understand information on the sources of data/input, factors, processes and/or logic that led to the prediction, content, recommendation or decision, to enable those affected by an AI system to understand the output” and, “provide information that enable those adversely affected by an AI system to challenge its output”.

We anticipate courts will recognise the use of AI as being a circumstance in and of itself in which transparency and reasons for decisions are required, particularly where AI is heavily relied upon.

Enforcement

Section 84 Criminal Justice and Courts Act 2015 amended section 31 of the Senior Courts Act 1981 so as to require the High Court to refuse relief on an application for judicial review where “it appears to the court to be highly likely that the outcome for the applicant would not have been substantially different if the conduct complained of had not occurred”, unless it considers it appropriate to disregard that requirement for reasons of “exceptional public interest”. Since an application for judicial review requires the High Court’s permission, in determining whether to grant permission the Court is entitled of its own motion to consider whether the outcome for the applicant would have been substantially different had the conduct complained of not occurred, and is obliged to do so upon the defendant public body’s request. In practice, therefore, in the event of a dispute, public bodies will need to be prepared to disclose significant information relating to the AI tool, its operation and the wider processes in which it is used at the pre-action stage.

The interaction of judicial review of AI decisions and usage with the rebuttable common law presumption that computer systems operate correctly, which is currently the subject of consultation, is likely to be the subject of dispute.

In relation to third-party AI models, while the completion of the Algorithmic Transparency Recording Standard will go some way to addressing disclosure obligations, it may not be sufficient for the purposes of proceedings, and public authorities will want to ensure that the provision of information by, and the co-operation of, the AI developer in such circumstances is secured through contractual provisions.

Watch our official UK AI Safety Summit Fringe webinar ‘practical mAgIC’ and download our accompanying free Helping Hand checklist on deploying AI responsibly, safely and ethically.

Find out more about our responsible and ethical artificial intelligence (AI) services.