
Another bauble for the tree?

With the UK’s signature of the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the world’s first legally binding global AI treaty, the government is signalling its intent to establish the legal framework necessary to comply with the AI Treaty’s requirements, which go further than the promised legislation targeting only the most powerful AI models.
— Handley Gill Limited

The Labour Party’s expression of its intentions for the regulation of artificial intelligence in the UK has been consistent: it will introduce legislation, but only to regulate the companies developing the most powerful AI models, which has been suggested to mean so-called frontier AI models. Indeed, the Department for Science, Innovation and Technology is currently advertising for a new Head of Frontier AI Regulatory Framework to lead the team responsible for delivering on the Government’s commitment in the King’s Speech 2024 to “establish binding regulation on a small number of companies responsible for the most powerful AI systems in order to enhance safety”.

In its 2024 manifesto, the Labour Party committed to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes”, while simultaneously ensuring its “industrial strategy supports the development of the Artificial Intelligence (AI) sector”, including by removing planning barriers to new datacentres and creating a “National Data Library to bring together existing research programmes and help deliver data-driven public services, whilst maintaining strong safeguards and ensuring all of the public benefit”. This was followed by a commitment in the King’s Speech 2024 to introduce “appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”, although specific AI legislation wasn’t included in the supporting list of priority legislation in the Background Briefing, albeit the proposed product safety legislation would appear to be intended to apply to AI. The focus on the most powerful AI models was re-emphasised by the Secretary of State for Science, Innovation and Technology, Peter Kyle MP, when he met with tech bosses and promised that the proposed legislation would not become a “Christmas tree bill”. This is despite his own position during the passage of the previous Conservative government’s Data Protection and Digital Information Bill, when he supported calls for stronger safeguards in relation to automated decision-making and high-risk processing activities, both of which are clearly relevant to the processing of personal data in the context of AI, and despite the Labour Party’s adoption at its 2023 Conference of a motion to “ensure that a legal duty on employers to consult trade unions on the introduction of invasive automated or artificial intelligence technologies in the workplace is enshrined in law”.

Prior to the General Election, at the May 2024 meeting of the Committee of Ministers of the Council of Europe, the UK’s representative, Minister of State (Minister for Europe) in the Foreign, Commonwealth and Development Office Nusrat Ghani, voted in favour of the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the first international legally binding treaty on AI regulation. Having secured a two-thirds majority of eligible voters present at the Committee of Ministers, the treaty was adopted, and it opened for signature today, 05 September 2024. Signature is only a statement of intent to become a party to a treaty in the future and not a binding legal commitment, which arises only when the signatory ratifies the treaty and becomes a state party to it, subject to any permitted reservations.

The UK, the US and the EU, among other countries, have each signed the AI Treaty, with the UK government stating that it would “work closely with regulators, the devolved administrations, and local authorities as the Convention is ratified to ensure it can appropriately implement its new requirements” and that “Once the treaty is ratified and brought into effect in the UK, existing laws and measures will be enhanced. For example, aspects of the Online Safety Act will better tackle the risk of AI using biased data and producing unfair outcomes”.

How do the UK’s current and proposed laws stack up against the obligations on state parties to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the ‘AIHRDRL Convention’), and what baubles might have to be hung on the proposed AI legislation, and what amendments made to other legislation, to enable the UK to achieve compliance with its requirements?

Scope

The application of the Convention is not limited to frontier models but extends to all “artificial intelligence systems”, defined as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.” This effectively adopts the same definition of an AI system as that adopted by the OECD in its Recommendation of the Council on Artificial Intelligence and by the EU at Article 3(1) EU AI Act.

The Convention applies to the lifecycle of such AI systems which “have the potential to interfere with human rights, democracy and the rule of law”, comprising not merely their development but also their deployment, and to public authorities as well as private actors, with carve-outs for national security and national defence (Article 3, AIHRDRL Convention).

The AIHRDRL Convention requires state parties to adopt or maintain measures addressing the impact of AI systems on human rights and democracy, as well as measures on transparency, liability and redress, accuracy, reliability, safety, security, risk management and public participation.

Human Rights

The AI Treaty requires that state parties:

  • ensure activities in the AI system lifecycle are consistent with domestic and international obligations to protect human rights (Article 4);

  • respect human dignity and individual autonomy in relation to activities within the lifecycle of artificial intelligence systems (Article 7);

  • ensure activities within the lifecycle of artificial intelligence systems respect equality, including gender equality, and the prohibition of discrimination, as provided under applicable international and domestic law (Article 10(1));

  • overcome inequalities to achieve fair, just and equitable outcomes, in line with its applicable domestic and international human rights obligations, in relation to activities within the lifecycle of artificial intelligence systems (Article 10(2));

  • ensure that, with regard to activities within the lifecycle of artificial intelligence systems, privacy rights of individuals and their personal data are protected, including through applicable domestic and international laws, standards and frameworks, and that effective guarantees and safeguards have been put in place for individuals, in accordance with applicable domestic and international legal obligations (Article 11);

  • implement the provisions of the Convention without discrimination on any ground (Article 17); and,

  • take due account of any specific needs and vulnerabilities in relation to respect for the rights of persons with disabilities and of children (Article 18).

The Human Rights Act 1998 already requires public authorities and, indirectly, private actors to comply with relevant provisions of the European Convention on Human Rights, including the right to respect for private and family life (which can in appropriate circumstances include the protection of personal data) and the prohibition of discrimination, and affords a mechanism for claims to be brought and redress obtained.  

The Equality Act 2010 prohibits direct discrimination by public and private entities on the grounds of the protected characteristics of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and/or sexual orientation, and indirect discrimination on the grounds of the same protected characteristics save for pregnancy and maternity, and imposes obligations to make reasonable adjustments for disabled persons in certain circumstances. The Act also imposes an obligation on public authorities and those exercising public functions to have due regard to the need to eliminate discrimination, harassment, victimisation and any other conduct prohibited by or under the Equality Act 2010, to advance equality of opportunity between persons who share a relevant protected characteristic and persons who do not, and to foster good relations between such persons (‘the Public Sector Equality Duty’ or ‘PSED’).

The restrictions on the territorial and material scope of the UK GDPR and Data Protection Act 2018 could mean that entities within the AI lifecycle, in particular entities engaged in the gathering of personal data for the training of AI models and developers of AI models, are not covered by the legislation, issues considered by the First-tier Tribunal (Information Rights) in Clearview AI v Information Commissioner [2023] UKFTT 00819 (GRC), which is currently the subject of an application for permission to appeal by the ICO.

In so far as the UK GDPR and Data Protection Act 2018 are applicable, they require, inter alia, that data controllers comply with obligations of data protection by design and by default in their processing activities (Article 25 UK GDPR); conduct a data protection impact assessment in respect of high-risk processing activities (Article 35 UK GDPR) and consult the ICO in the event that high risks cannot be mitigated (Article 36 UK GDPR); inform data subjects of the existence of automated decision-making and provide meaningful information about the logic determining such decisions (Article 13(2)(f) UK GDPR); and comply with the right not to be subject to solely automated decision-making producing legal or similarly significant effects (Article 22 UK GDPR) and the right to object to processing based on legitimate interests (Article 21 UK GDPR).

The previous Conservative government’s Data Protection and Digital Information Bill would have eased the restrictions on solely automated processing (clause 14 Data Protection and Digital Information Bill as amended in Grand Committee). The list of priority legislation published to accompany the King’s Speech 2024 included a new Digital Information and Smart Data Bill to “enable new innovative uses of data to be safely developed and deployed”, but it is anticipated that safeguards will be maintained.

While the AI Treaty doesn’t explicitly refer to the environmental impact of AI systems, the recent decision of the European Court of Human Rights in Verein KlimaSeniorinnen Schweiz and Others v Switzerland (application no. 53600/20) demonstrates that in appropriate circumstances the impact of climate change can engage Article 8 ECHR. Consideration could be given to transparency requirements in relation to the environmental impact of AI systems, and to measures to ensure that the lifecycle of AI systems doesn’t undermine climate change targets or the government’s positive duty to combat climate change.

Democracy

The AI Treaty requires that state parties:

  • ensure artificial intelligence systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including the principle of the separation of powers, respect for judicial independence and access to justice (Article 5(1)); and,

  • seek to protect its democratic processes in the context of activities within the lifecycle of artificial intelligence systems, including individuals’ fair access to and participation in public debate, as well as their ability to freely form opinions (Article 5(2)).

The background briefing notes to the King’s Speech 2024 identified a new Product Safety and Metrology Bill, which would apply to artificial intelligence (AI), and either this or the “appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models” could, by expanding the scope of what is considered a high-risk AI system, be directed at AI systems which have the potential to be used to undermine democracy, and not merely so-called frontier AI models.

While its duties are not yet in force, the Online Safety Act 2023 imposes obligations on certain user-to-user and search services to implement proportionate systems and processes to prevent users from encountering fraudulent advertising and to enable such content to be promptly removed, although these apply to paid-for content rather than organic content. It also imposes obligations on certain user-to-user services to enable functionality which would permit users to filter out, for example, content from non-verified users under the so-called user empowerment duties. These could serve to limit the reach of bots which seek to influence the public through the spread of disinformation, including on behalf of hostile states, as the US has recently alleged against Russia, but their effectiveness and users’ appetite for such functionality remain to be seen.

The Act also creates an offence of knowingly conveying false information intending to cause non-trivial psychological or physical harm to its likely audience, but this is unlikely to apply to the use of AI to share false information in the context of elections.

Recent prosecutions in relation to riots in the UK, and social media posts considered to have incited such violence and disorder, relied upon pre-existing legislation including the Public Order Act 1986, which creates offences of using threatening, abusive or insulting words with the intention of causing a person harassment, alarm or distress and thereby causing such harassment, alarm or distress; of publishing or distributing written material which is threatening, abusive or insulting, either with the intention of stirring up racial hatred or where racial hatred is likely to be stirred up by it; and of publishing or distributing written material which is threatening with the intention of stirring up religious hatred.

In relation to the risk posed by AI-generated deepfakes, in so far as these utilise real people, the UK GDPR and Data Protection Act 2018 would render such output unlawful as inaccurate and unfairly processed personal data, granting individuals the right to demand its removal and to seek compensation. The obligation on data controllers to ensure data protection by design and by default should also serve to encourage the implementation of mechanisms to identify and detect such content, but the practicality of such measures and the existence of any regulatory incentive to comply are in doubt.

Transparency

The AI Treaty requires that state parties:

  • ensure that adequate transparency and oversight requirements tailored to the specific contexts and risks are in place in respect of activities within the lifecycle of artificial intelligence systems, including with regard to the identification of content generated by artificial intelligence systems (Article 8); and,

  • ensure that, as appropriate for the context, persons interacting with artificial intelligence systems are notified that they are interacting with such systems rather than with a human (Article 15(2)).

While the UK GDPR requires that data subjects are informed of the existence of the automated processing of personal data, UK law does not currently impose any general obligation to notify individuals that they are engaging with AI systems. Such an obligation could be incorporated into the forthcoming Product Safety and Metrology Bill, or the “appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”, but it isn’t currently clear whether the government would propose to limit this obligation to interactions with high-risk AI systems or whether it would go beyond the requirements of the Convention to apply the obligation to all AI systems, such as chatbots, or what the extent of the obligation would be.

While many reputable news organisations and social media companies, including the BBC, Meta and TikTok, have implemented their own policies on the identification and labelling of AI content, the UK does not currently impose any obligation to identify AI content. Such obligations have already been enacted by the EU at Article 50(2) EU AI Act and are in the process of being passed in California under Bill AB-3211, the California Digital Content Provenance Standards.

Since such obligations would be required to apply more widely than the services covered by the Online Safety Act 2023, we anticipate that they could be addressed either under the Product Safety and Metrology Bill or under the “appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.

Liability & Redress

The AI Treaty requires that state parties:

  • ensure accountability and responsibility for adverse impacts on human rights, democracy and the rule of law resulting from activities within the lifecycle of artificial intelligence systems (Article 9);

  • ensure the availability of accessible and effective remedies for violations of human rights resulting from the activities within the lifecycle of artificial intelligence systems, including: measures to ensure that relevant information regarding artificial intelligence systems which have the potential to significantly affect human rights, and their relevant usage, is documented, provided to bodies authorised to access that information and, where appropriate and applicable, made available or communicated to affected persons; measures to ensure that such information is sufficient for the affected persons to contest the decision(s) made or substantially informed by the use of the system and, where relevant and appropriate, the use of the system itself; and an effective possibility for persons concerned to lodge a complaint to competent authorities (Article 14); and,

  • ensure that, where an artificial intelligence system significantly impacts upon the enjoyment of human rights, effective procedural guarantees, safeguards and rights, in accordance with the applicable international and domestic law, are available to persons affected thereby (Article 15(1)).

The requirement in the Human Rights Act 1998 that courts and tribunals act compatibly with the relevant rights under the European Convention on Human Rights ensures that individuals are able to seek redress, including in respect of the actions of private actors.

Expansion of the Online Safety Act 2023, or provisions to be incorporated into the Product Safety and Metrology Bill or the “appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”, would be necessary to afford redress in relation to adverse impacts on democracy and the rule of law.

In the UK, the Equality and Human Rights Commission has a statutory mandate to advise government and Parliament on matters relating to equality and human rights, and to promote and protect equality and human rights across Britain. It was one of the regulators required by the previous Conservative government to provide an update on its strategic approach to implementing the 2023 AI Regulation White Paper, but warned that “Our ability to scale up and respond to the risks to equality and human rights presented by AI is… limited. While we do have an important role in regulating AI, we have to prioritise our work” and “Given our current resourcing we are unable to increase our capacity to regulate AI or to introduce the technical roles that we might wish”. There is clearly scope for a greater role for the EHRC in supporting the implementation of the AI Treaty through an expanded remit, and an argument for a more cohesive approach to regulation by formally including the EHRC in the Digital Regulation Co-operation Forum.

Accuracy, Reliability, Safety, Security & Risk Management

The AI Treaty requires that state parties:

  • adopt measures to promote the reliability of artificial intelligence systems and trust in their outputs, which could include requirements related to adequate quality and security throughout the lifecycle of artificial intelligence systems (Article 12);

  • enable, as appropriate, the establishment of controlled environments for developing, experimenting and testing artificial intelligence systems under the supervision of its competent authorities with a view to fostering innovation while avoiding adverse impacts on human rights, democracy and the rule of law (Article 13);

  • adopt or maintain graduated, differentiated and iterative measures, as appropriate throughout the AI lifecycle, for the identification, assessment, prevention and mitigation of risks posed by artificial intelligence systems, considering actual and potential impacts on human rights, democracy and the rule of law, and having regard to the context and intended use of artificial intelligence systems (in particular as concerns risks to human rights, democracy and the rule of law), the severity and probability of potential impacts, and the perspectives of relevant stakeholders, in particular persons whose rights may be impacted; such measures are to include monitoring for risks and adverse impacts to human rights, democracy and the rule of law, documentation of risks, actual and potential impacts and the risk management approach, and requiring, where appropriate, the testing of artificial intelligence systems before making them available for first use and when they are significantly modified (Article 16(1)-(2));

  • adopt or maintain measures that seek to ensure that adverse impacts of artificial intelligence systems to human rights, democracy, and the rule of law are adequately addressed (Article 16(3)); and,

  • assess the need for a moratorium or ban or other appropriate measures in respect of certain uses of artificial intelligence systems where it considers such uses incompatible with the respect for human rights, the functioning of democracy or the rule of law (Article 16(4)).

Currently, AI developers may seek to exclude liability for the accuracy and reliability of AI systems and their outputs in their standard terms.

While obligations of accuracy in relation to personal data are already well established under the UK GDPR, these have not served to prevent AI systems being developed and made available which involve unlawful data processing. There have been numerous well-publicised examples of AI models released to the public which have presented false and even dangerous information. In the UK, the tort of negligence is capable of applying to such instances, but we anticipate that the proposed Product Safety and Metrology Bill or the “appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models” will seek to put obligations of safety, security and reliability on a statutory footing.

The UK Artificial Intelligence Safety Institute (AISI), established in 2023, is tasked with developing and conducting evaluations of advanced AI systems, driving foundational AI safety research and facilitating information exchange, including by establishing information-sharing channels between the Institute and other national and international actors, such as policymakers, international partners, private companies, academia, civil society and the broader public. This would include driving forward the work on AI safety made possible by the AI safety commitments secured at the AI Seoul Summit. The Labour government has committed to putting the AI Safety Institute on a statutory footing and to strengthening its role as part of its forthcoming AI legislation, and this is likely to include establishing binding obligations on AI developers to submit relevant AI systems to safety testing prior to initial release and any subsequent material modification. Again, however, the current narrow focus of the AI Safety Institute would warrant a closer working relationship with, and an expanded remit for, the EHRC to address the wider risks to human rights and democracy presented by AI systems.

Categories of prohibited AI systems and minimum standards could also be established, coupled with a power for the Secretary of State, on the recommendation of a relevant regulator, to ban or restrict the provision and use of AI systems deemed not to meet the minimum standards and to pose an unacceptable risk. The EU AI Act, for example, bans exploitative AI systems, those which deploy subliminal techniques, certain profiling and predictive policing, and certain biometric technologies, including the use of emotion recognition in the workplace and AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

Public participation

The AI Treaty requires that state parties:

  • ensure that important questions raised in relation to artificial intelligence systems are, as appropriate, duly considered through public discussion and multistakeholder consultation in the light of social, economic, legal, ethical, environmental and other relevant implications (Article 19); and,

  • encourage and promote adequate digital literacy and digital skills for all segments of the population, including specific expert skills for those responsible for the identification, assessment, prevention and mitigation of risks posed by artificial intelligence systems (Article 20).

The previous Conservative government had already consulted on its AI Regulation White Paper, and it has been reported that the Labour government intends to consult on the content of its proposed AI legislation in the coming weeks. As regulators develop their emerging strategies, we anticipate that they will continue to consult on their approaches, as the Information Commissioner’s Office has been doing in relation to the regulation of data protection in the context of generative AI.

Many pieces of legislation include obligations on statutory regulators to promote public education and literacy within their remit, which would include how their powers apply to artificial intelligence, but there are wider opportunities for the Department for Education to establish baseline AI skills across the population.

The Treaty also provides for state parties to exchange, as appropriate, relevant and useful information between themselves concerning aspects of artificial intelligence which may have significant positive or negative effects on the enjoyment of human rights, the functioning of democracy and the observance of the rule of law, including risks and effects that have arisen in research contexts and in relation to the private sector, and encourages them to involve, as appropriate, relevant stakeholders and States that are not Parties to the Convention in such exchanges of information (Article 25(2)).

If your organisation requires support in developing or deploying AI lawfully, to ensure that you are at the forefront of using AI safely, responsibly and ethically, or to understand how new laws and regulations could affect you, please contact us.

Find out more about our responsible and ethical artificial intelligence (AI) services.

Access Handley Gill Limited’s proprietary AI CAN (Artificial Intelligence Capability & Needs) Tool, to understand and monitor your organisation’s level of maturity on its AI journey.

Download our Helping Hand checklist on using AI responsibly, safely and ethically.

Check out our dedicated AI Resources page.

Follow our dedicated AI Regulation Twitter / X account.