AWARD-WINNING LEGAL & REGULATORY COMPLIANCE CONSULTANTS

Handley Gill Limited

Our expert consultants at Handley Gill share their knowledge and advice on emerging data protection, privacy, content regulation, reputation management, cyber security, and information access issues in our blog.

Frame by Frame

For organisations wishing to understand and demonstrate their compliance not merely with strict legal obligations in the development and deployment of AI systems but with wider human rights and social rights, the Council of Europe’s HUDERIA methodology proposes a mechanism for AI governance through the conduct of a risk assessment to meet the requirements of the Framework Convention on AI.
While we anticipate the development and publication of the accompanying HUDERIA Model, Handley Gill has prepared its own bespoke modular AI impact assessment/conformity assessment model which can be flexed according to jurisdiction, ESG demands and the AI system specifics.
— Handley Gill Limited

The US, UK, EU and Israel are all signatories to the Council of Europe Framework Convention on Artificial Intelligence (AI) (CETS No. 225), which opened for signature in September 2024. No signatory has yet ratified the Convention. The Convention is not yet in force; it will enter into force on the first day of the month following a three (3) month period after ratification by five (5) signatories, at least three (3) of which must be members of the Council of Europe.

The Convention requires state signatories to take measures to address AI systems and activities within the lifecycle of AI systems (whether undertaken by or on behalf of public authorities or by private actors) that have the potential to interfere with human rights, democracy and the rule of law. AI systems are defined as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments”, recognising that “Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment”.

In particular, state parties to the AI Convention must take measures in relation to activities within the lifecycle of AI systems to:

  • ensure they are consistent with domestic and international obligations to protect human rights (Article 4, AI Convention)

  • protect against their use to undermine the integrity, independence and effectiveness of democratic institutions and processes (Article 5(1), AI Convention)

  • protect democratic processes, in particular individuals’ fair access to and participation in public debate and ability to freely form opinions (Article 5(2), AI Convention)

  • respect human dignity and individual autonomy (Article 7, AI Convention)

  • deliver adequate transparency and oversight, tailored to the specific contexts and risks (Article 8, AI Convention)

  • provide accountability and responsibility for adverse impacts (Article 9, AI Convention)

  • respect equality and the prohibition of discrimination (Article 10, AI Convention)

  • protect the privacy rights and personal data of individuals with effective guarantees and safeguards (Article 11, AI Convention)

  • promote the reliability of AI systems and trust in their outputs, potentially including quality and security requirements (Article 12, AI Convention)

  • enable controlled environments for supervised development, experimentation and testing (Article 13, AI Convention)

  • ensure the availability of accessible and effective remedies for human rights violations arising, including the effective possibility for persons concerned to lodge a complaint with public authorities (Article 14, AI Convention)

  • require the recording, reporting to relevant authorities and (where appropriate) disclosure sufficient to enable decisions or the system itself to be contested in respect of AI systems with the potential to significantly impact human rights (Article 14, AI Convention)

  • make available effective procedural guarantees, safeguards and rights in respect of AI systems which significantly impact upon the enjoyment of human rights (Article 15(1), AI Convention)

  • require the notification to individuals that they are interacting with an AI system (Article 15(2), AI Convention)

  • adopt or maintain graduated and differentiated measures for the identification, assessment, prevention and mitigation of risks posed by artificial intelligence systems by considering actual and potential impacts to human rights, democracy and the rule of law (Article 16, AI Convention)

  • publicly consult on important questions relating to AI systems (Article 19, AI Convention)

  • encourage and promote adequate digital literacy and skills (Article 20, AI Convention)

  • establish effective independent and impartial mechanisms to oversee compliance and the powers, expertise and resources to fulfil their tasks (Article 26, AI Convention)

In addition, State parties are required to report on the state of their compliance within the first two (2) years of becoming a party to the Convention and periodically thereafter.

To supplement the Convention and aid its practical implementation, in late November 2024 a methodology was adopted by the Council of Europe's Committee on Artificial Intelligence (CAI) to provide a structured approach to help identify and address risks and impacts to human rights, democracy and the rule of law throughout the lifecycle of AI systems. HUDERIA (the Human Rights, Democracy and the Rule of Law Impact Assessment) is the methodology for the risk and impact assessment of artificial intelligence systems from the point of view of human rights, democracy and the rule of law. It is intended that the HUDERIA methodology will be supplemented by the HUDERIA Model in 2025.

The HUDERIA methodology comprises four (4) elements, which will be broadly familiar to those who have carried out Data Protection Impact Assessments (DPIAs):

  1. Context-Based Risk Analysis (COBRA);

  2. Stakeholder Engagement Process (SEP);

  3. Risk and Impact Assessment (RIA); and,

  4. Mitigation Plan (MP). 

1. Context-Based Risk Analysis (COBRA)

The context-based risk analysis element of the HUDERIA methodology itself comprises four phases:

Preliminary scoping to identify and outline:

  • the purpose of the AI system;

  • key components of the AI system;

  • the context(s) in which it is intended to be used;

  • the area/domain(s) in which it will operate;

  • the degree of human intervention;

  • the nature and amount of data it will process and on which it will be trained (noting any checks that may have already been done to assess bias in the dataset or model);

  • persons or groups who may be affected by, or may affect, the AI system, focusing on the relevant contextual characteristics of identified persons and groups, including protected characteristics and vulnerability factors;

  • a preliminary scoping of potential adverse impacts on human rights, democracy and the rule of law by exploring the illustrative areas of concern [to be reflected in the HUDERIA Model]; and,

  • an initial mapping of roles and responsibilities across the AI system’s lifecycle.

Analysis of risk factors having regard to the AI system’s:

  • application context, i.e. the system’s application sector and domain, the legal and regulatory environments in which the system is being developed and used, the system’s intended purpose, and other relevant details of the system’s application context, such as any known legacies of bias or discrimination;

  • design and development context, i.e. the relevant technical characteristics of the system, which may include known limitations of the system, considerations related to data collection, enrichment, storage, use and retirement, and considerations related to the algorithm or model itself; and,

  • deployment context, i.e. factors that govern how potential risks may manifest and be managed in practice, such as steps that will be taken to protect privacy and personal data, mitigate harmful bias, ensure proper training, guard against unintended uses, and ensure accountability and legal compliance.

Mapping of potential impacts on human rights, democracy and the rule of law by identifying potentially affected persons or groups and conducting an initial assessment of the key risk variables: severity (scale, scope, reversibility), which may arise from low-scope/high-gravity effects as well as high-scope/low-gravity effects, and probability of potential or actual impacts on human rights, democracy and the rule of law, detailing the nature of each.

Triage to identify AI systems that pose significant risks and to enable an initial determination of whether the AI system should be developed or deployed, balancing its anticipated benefits against the risks.

2. Stakeholder Engagement Process (SEP)

The Stakeholder Engagement Process is intended to incorporate the views of identified potentially affected persons, including those in vulnerable situations, and comprises five (5) phases:

  • Stakeholder Analysis to identify those who are at disproportionate risk, who are particularly vulnerable to potential harms, or who are particularly limited in their ability to influence the system.

  • Positionality Reflection to consider how the organisation and individual(s) undertaking the assessment influence their ability to identify, understand and reflect the views of and impacts on stakeholders, and how any gaps might be remedied.

  • Establishment of Engagement Objectives to facilitate the inclusive, informed and meaningful involvement of potentially affected persons.

  • Determination of Engagement Method through the evaluation and accommodation of the needs of potentially affected persons, having regard to the criteria of engagement, equality and the prohibition of discrimination, empowerment, transparency, and accountability.

  • Implementation of the chosen engagement method, recording its outcomes.

3. Risk and Impact Assessment (RIA)

Particularly in respect of those AI systems assessed as posing significant risks to human rights, democracy and the rule of law, the scale, scope, reversibility and probability of the potential adverse impacts are evaluated, having regard to any particular vulnerabilities of affected populations and the wider context in which the AI system is deployed, including any cumulative impact where it could be deployed together with other AI systems.
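To make the RIA's key risk variables concrete, the sketch below shows one way an assessor might record and combine them in a risk register. This is purely illustrative: HUDERIA does not prescribe any numerical scoring model, and the 1–5 scales, the field names and the aggregation formula are all assumptions introduced here for the purposes of the example.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """Hypothetical risk-register entry for one potential adverse impact.

    The 1-5 scales and the scoring arithmetic are illustrative assumptions,
    not part of the HUDERIA methodology itself.
    """
    impact: str         # description of the potential adverse impact
    scale: int          # gravity of harm to each affected person (1-5)
    scope: int          # breadth of affected persons or groups (1-5)
    reversibility: int  # difficulty of remediating the harm (1-5)
    probability: int    # likelihood of the impact occurring (1-5)

    def severity(self) -> int:
        # Severity aggregates scale, scope and reversibility. Note that a
        # low-scope/high-gravity effect and a high-scope/low-gravity effect
        # can both produce a significant severity total.
        return self.scale + self.scope + self.reversibility

    def score(self) -> int:
        # A simple severity x probability product, usable for triage.
        return self.severity() * self.probability

entry = RiskEntry(
    impact="Discriminatory outcomes in automated eligibility decisions",
    scale=4, scope=2, reversibility=3, probability=3,
)
print(entry.severity())  # 9
print(entry.score())     # 27
```

In practice any such scoring would only support, never replace, the qualitative judgement and stakeholder input that the methodology requires.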

4. Mitigation Plan (MP)

The Mitigation Plan prioritises and implements mitigations to the identified potential or actual adverse impacts by reference to a mitigation hierarchy of harms to be avoided, mitigated, restored and compensated and, where necessary, implements redress mechanisms to afford restoration and/or compensation.

Finally, the HUDERIA methodology advocates an iterative approach to ensure effectiveness throughout the AI lifecycle, with periodic “re-assessment, reconsideration, and amendment”.

Handley Gill’s specialist AI governance consultants have developed a bespoke modular artificial intelligence (AI) risk assessment framework that can be tailored to the specific AI system and its risks and to the relevant jurisdiction(s), for example incorporating human rights, data protection, environmental, equality and other legal and regulatory compliance issues, drawing on the requirements of applicable laws, regulations and guidance, including the Algorithmic Transparency Recording Standard and HUDERIA.

If your organisation requires support in understanding the requirements of responsible AI development and deployment, conducting an algorithmic impact assessment, an artificial intelligence (AI) impact / risk assessment, conformity assessment or related human rights, equality or community impact assessments, please contact us.

Find out more about our responsible and ethical artificial intelligence (AI) services.