Poster Child
“The 5Rights Foundation’s Children and AI Design Code sets out a framework for the responsible management and governance of AI systems throughout their lifecycle, much of which is applicable whether or not they impact children. While the Code itself is not binding and goes beyond legal and regulatory obligations, elements of the Code reflect the requirements of the EU AI Act, for example, and may therefore be, or in time become, binding on relevant entities. The Code needs to be read alongside relevant legal and regulatory obligations and understood in the context of the obligations applicable in the relevant jurisdiction.”
2025 marks 35 years since the United Nations Convention on the Rights of the Child (UNCRC) entered into force. It also saw the launch, on 18 March 2025, of the Children and AI Design Code by the 5Rights Foundation, which describes itself as “an international NGO working with and for children for a rights-respecting digital world”. The Code offers “A protocol for the development and use of AI systems that impact children”.
Children’s Rights and the Status of the UN Convention on the Rights of the Child (UNCRC)
The UNCRC identifies a child as anyone under the age of 18 and comprises 54 articles, of which Articles 2-42 establish various rights including, at Article 3(1), that “In all actions concerning children, whether undertaken by public or private social welfare institutions, courts of law, administrative authorities or legislative bodies, the best interests of the child shall be a primary consideration”. It is significant that the best interests of the child are required to be “a” primary consideration and not “the” primary consideration: other weighty factors can and should be taken into account, with competing rights balanced and children’s rights being outweighed where warranted.
Other rights of particular relevance in the context of AI include Article 12(1) UNCRC, which affords to “the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child”, and Article 2(2), which establishes the right not to be discriminated against. Furthermore, Article 17(e) requires that States shall “Encourage the development of appropriate guidelines for the protection of the child from information and material injurious to his or her well-being”.
The Children and AI Design Code correctly states that “The UNCRC applies to every child across the globe”. In practice, however, the ability to enforce those rights is limited.
While the UK ratified the UNCRC in 1991, it has not incorporated the UNCRC wholesale into UK domestic law, nor has legislation been enacted to give effect to the UNCRC, and it is therefore not generally binding on UK courts, public bodies or private sector organisations (see J.H. Rayner Ltd. v. Dept. of Trade [1990] 2 A.C. 418, at 476H-477A). The UNCRC may, however, provide an aid to interpretation of legislative provisions where the domestic measure is ambiguous (see R (on the application of JC & RT) v Central Criminal Court and others [2014] EWCA Civ 1777, [32]). In such cases “the interpretation chosen should be that which better complies with the commitment to the welfare of children which this country has made by ratifying the United Nations Convention on the Rights of the Child” (see Smith v Secretary of State for Work and Pensions & Anor [2006] 1 WLR 2024 per Baroness Hale, [78]). In addition, legislative provisions which post-date the ratification of a treaty and which deal with the same subject matter should be construed as if they were intended to carry out the treaty obligation (Garland v British Rail Engineering Ltd [1983] 2 AC 751 [771] and A v Secretary of State for the Home Department (No 2) [2006] 2 AC 221, [27]).
Where rights under the European Convention on Human Rights (ECHR) are engaged, Article 53 ECHR makes clear that the Convention is not to be construed as limiting or derogating from any of the human rights and fundamental freedoms which may be ensured under the domestic law of Contracting States or as a consequence of their treaty obligations. While Convention rights will not necessarily be engaged where children are impacted by AI, in relevant cases the European Court of Human Rights would look to other international human rights instruments, such as the UNCRC, as an aid to interpretation. Moreover, the Human Rights Act 1998 was enacted after the UK signed and ratified the UNCRC, which may permit reliance upon it (see R (on the application of T) v the Secretary of State for Justice [2013] EWHC 1119 (Admin) [29]), although where data protection rights are the foundation of the claim that Convention rights are engaged, this could be affected by The Data Protection (Fundamental Rights and Freedoms) (Amendment) Regulations 2023.
Status of the Children and AI Design Code
Unlike, for example, the UK Information Commissioner’s Age-Appropriate Design Code, often referred to as its Children’s Code, which is a statutory code of practice binding on those subject to the UK GDPR/DPA 2018, the 5Rights Foundation’s Children and AI Design Code is entirely voluntary and non-binding, despite stating that it “demands engagement”. We nevertheless expect efforts will be made to persuade regulators to adopt at least elements of the Code, not least as the ICO has indicated its support for the introduction of a new obligation in the Data Protection Act 2018 to produce a new statutory code of practice on data protection and AI.
The Children and AI Design Code is stated to be intended to build on The Alan Turing Institute’s work to map themes across 13 transnational frameworks relating to AI, children’s rights and wellbeing, and to be compatible with the UNCRC, General Comment 25 on children’s rights in relation to the digital environment, the EU AI Act (Regulation (EU) 2024/1689), the Council of Europe AI Treaty, the US National Institute of Standards and Technology (NIST) AI Risk Management Framework and draft US AI legislation.
The Code therefore goes beyond strict legal obligations and may impose requirements that do not apply in particular jurisdictions, or which apply only to certain types of AI systems.
The Code does not replace or consolidate, but would need to be read in conjunction with, relevant applicable laws and regulations.
The Children and AI Design Code
The Code provides contextual information regarding the development of children at ages 0-5, 6-9, 10-12, 13-15 and 16-17, and identifies common risks to children from AI systems as including: unfairness; harmful content and activity; privacy; security; and capture.
Circumstances identified in the Code as being when AI will impact children, and when the Code should apply, are when:
Children’s data is included in training data;
AI systems shape children’s experience of a product or service;
Children are likely to directly/indirectly engage with the AI system;
Outputs or outcomes are likely to impact children;
Decisions impacting children are influenced by AI systems.
The Code identifies the following key considerations throughout the lifecycle and deployment of an AI system:
Supply chain;
AI lifecycle;
Context;
Testing and metrics;
Stakeholder engagement;
Children’s rights and capacities;
Diversity and inclusion;
Proportionality; and
Role of parents or carers.
Throughout the lifecycle of an AI system, the Code specifies nine criteria against which to assess AI systems’ impact on children and to guide decision making under the Code:
Developmentally appropriate;
Lawful;
Safe;
Fair;
Reliable;
Provide redress;
Transparent;
Accountable; and
Uphold rights.
The Code identifies responsible individuals within an organisation, specifically: a Senior Accountable Leader; Project Manager; AI Systems Expert; AI Risks Expert; Age Appropriate Expert; Child Rights and Voice Expert; AI Testing Expert; Data Set Expert; Privacy Expert; Security Expert; Transparency Expert; and Design Lead. In practice, few organisations would have access to, or would engage, such resources, and multiple roles could adequately be undertaken by the same individual with the appropriate skillset; for example, one individual might take on the roles of AI Risks Expert, Age Appropriate Expert, Privacy Expert and Transparency Expert.
The Code itself comprises nine stages. At each stage, the Code sets out the purpose and outcomes, together with guidance on conforming with the Code:
Stage 1: Preparation
Establish a process for making decisions, including when and by whom;
Create a project plan that conforms with the requirements of the Code;
Provide a realistic estimate of resourcing needs (money, time, and people) that has been approved;
Assemble a project team with the necessary skills, experience, and competencies;
Assign roles and responsibilities to team members and the Executive Leadership for all tasks;
Make a written record of the whole Preparation stage that has been reviewed and signed (in writing) by the Executive Leadership.
Stage 2: Intentions
Carry out an initial exploration of what you want your AI system to do and why (problem statement);
Assess your intentions against the criteria to identify and evaluate risk of non-conformity;
Revise any aspects of your intentions that do not conform with the Code;
Test your revised intentions to ensure they now conform with the Code;
Make a written record of your assessment process and the changes you have made in response that has been reviewed and signed (in writing) by the Executive Leadership;
Ensure your project plan aligns with your intentions.
Stage 3: Data
Carry out an audit of your proposed or existing data sources/inputs;
Assess your data inputs against the criteria to identify and evaluate the risk of non-conformity, including using appropriate testing if necessary;
Revise any aspect of your data inputs that does not conform with the criteria;
Test your revised data inputs to ensure they now conform with the criteria;
Make a written record of your assessment process and the changes you have made in response that has been reviewed and approved (in writing) by the Executive Leadership;
Provide in your project plan for ongoing monitoring of your data inputs, including ensuring that data generated by your AI system also conforms with the criteria.
Stage 4: Development
Be clear on the instructions that will drive your AI system;
Assess the instructions against the criteria to identify and evaluate the risk of non-conformity using appropriate testing and consultation methods;
Revise any aspect of your instructions that does not conform with the criteria;
Test your revised instructions to ensure they now conform with the criteria;
Make a written record of your assessment process and the changes you have made in response.
Stage 5: Deployment
Complete all conformity assessments and testing;
Prepare a launch report for the Executive Leadership;
Conduct a launch review;
Receive Executive Leadership approval, or revert to an earlier stage to address issues;
Make a written record of the launch review process;
Launch your AI system (if agreed).
Stage 6: Monitoring
Have a plan and capacity for the continued monitoring of your AI system that has been approved by the Executive Leadership;
Have systems and processes to respond to issues identified through monitoring;
Run operational and team tests at regular intervals to ensure systems and processes continue to work effectively, and that personnel understand their roles and responsibilities;
Log monitoring outcomes, including incidents.
Stage 7: Transparency
Develop a comprehensive transparency strategy that has been approved by the Executive Leadership;
Develop all aspects of your transparency strategy collaboratively with relevant stakeholders, including children;
Take account of the needs and capacities of children at different stages of development and those with additional vulnerabilities;
Identify ways in which you can provide users with key information about your AI system upfront and throughout the user journey (if your AI system is public facing);
Continually review and update your transparency strategy to ensure it is as user-friendly and as useful as possible.
Stage 8: User Reports and Redress
Prepare a comprehensive user reporting strategy that takes account of the needs and capacities of children at different stages of development and those with additional vulnerabilities;
Create a way for parents, carers, and teachers to report on behalf of children that does not require being logged into or registered to your product or service;
Co-create your user reporting strategy with relevant stakeholders including children;
Sign off your user reporting strategy with the Executive Leadership;
Put a protocol in place to inform the relevant authorities, regularly or in extremis, about emerging risks or incidents;
Have a plan to periodically review and update your user reporting strategy.
Stage 9: Decommissioning
Agree the criteria against which you will assess life expectancy and the cadence at which it will be reviewed;
Carry out a preliminary review of your AI system’s life expectancy;
Conduct a decommissioning impact assessment for a planned and emergency retirement of your AI system;
Be clear what steps you will need to take to retire your AI system, and the resources (people, time, and money) required to complete the process;
Have emergency protocols in place in the event that it becomes necessary to retire your AI system at short notice;
Secure written approval from the Executive Leadership of the retirement protocols, assessment, and planning.
If your organisation requires support in understanding the requirements of responsible AI development and deployment, establishing an AI governance framework, or conducting an algorithmic impact assessment, an artificial intelligence (AI) impact / risk assessment, conformity assessment or related human rights, data protection, equality or community impact assessments, please contact us.
Find out more about our responsible and ethical artificial intelligence (AI) services.