AIME high
In the previous government’s response to its consultation on the White Paper ‘A pro-innovation approach to AI regulation’, the government committed to consult on making compliance with the forthcoming AI Management Essentials Scheme a mandatory requirement for public sector procurement.
In November 2024, the new Labour government published its Artificial Intelligence (AI) Management Essentials Scheme (AIME) tool for consultation until 29 January 2025. The tool is stated to be “a self-assessment tool that aims to help organisations assess and implement responsible AI management systems and processes” through the evaluation of “the organisational processes that are in place to enable the responsible development and use of these products”.
The AIME tool addresses the following issues:
AI policy
AI record keeping
AI fairness, although no definition of fairness is proffered and the concept is inherently subjective
AI impact assessments
AI monitoring
AI remediation
AI risk management
AI training
AI bias
Data protection in AI
Reporting and complaints
AI transparency
At first blush, that might appear to be a fairly comprehensive overview of the considerations relevant to the lawfulness of an AI model and the developer’s/operator’s commitment to responsible, safe and ethical AI. Consideration is being given to whether the AIME tool should be embedded in government procurement frameworks; the AIME tool would thus be a precursor to compliance with the Algorithmic Transparency Recording Standard.
The AIME tool does not currently identify where questions relate to legal requirements, for example by referring to the protected characteristics established by section 4, Equality Act 2010 in the context of questions pertaining to bias, or to the requirements of Article 28 UK GDPR in relation to the requirement for agreements to be in place between controllers and processors of personal data.
Nor does it provide a mechanism for assessing responses to the tool: whether the responses indicate compliance with laws and regulations, or the risk that might be associated with the binary responses provided, whether viewed individually or cumulatively.
There are several areas that the AIME tool fails to address, including the lawfulness and fairness of the collection and use of training data, staff training on AI, whether and how AI is used in decision-making, and consultation with affected individuals.
As such, the mere completion of the AIME tool is not an indication of a commitment to responsible AI. The implications of the responses may not be well understood, and even where positive responses are provided, that does not mean the development or deployment of the tool is lawful or compliant with ethical principles; further information and analysis are required. The risk of incorporating the AIME tool in its current form into government procurement processes is that its completion becomes a tick-box exercise, with no proper evaluation of AI models carried out until the Algorithmic Transparency Recording Standard (ATRS) report is completed. The AIME tool in isolation is inadequate to assess responsible AI.
Matters that the AIME tool covers, but could address in more detail, include whether measures are in place to implement any AI policy, the approach to fairness (including in connection with processes and not merely outcomes), and risk mitigation beyond the context of bias. Greater use of narrative responses could support this.
Handley Gill submitted its response to the consultation online on 27 January 2025.
Access and download Handley Gill’s Helping Hand checklist on deploying artificial intelligence (AI) safely, responsibly and ethically.
Find out more about our responsible and ethical artificial intelligence (AI) services.