Handley Gill Limited

A Herculean Labour?

While somewhat speculative, we anticipate that a Labour AI Bill suggested in the King’s Speech 2024 (but not detailed in the list of immediate forthcoming legislation) would contain provisions protecting workers, and would be broader than the TUC’s proposed Artificial Intelligence (Employment and Regulation) Bill but perhaps not as wide-ranging as the EU’s AI Act.
— Handley Gill Limited

It is reported that as part of the King’s Speech on 17 July 2024, during which the new Labour government will set out its legislative priorities, the government will commit to the introduction of an AI Bill.

In its 2024 manifesto, the Labour Party committed to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes”, while simultaneously ensuring its “industrial strategy supports the development of the Artificial Intelligence (AI) sector”, including by removing planning barriers to new datacentres and creating a “National Data Library to bring together existing research programmes and help deliver data-driven public services, whilst maintaining strong safeguards and ensuring all of the public benefit”.

But what might a Bill to regulate artificial intelligence (AI) address? And with AI models already having been trained, developed and proliferated into the mainstream, are efforts to rein in AI futile?

We have previously analysed the private members’ bill introduced in the House of Lords by the Conservative peer Lord Holmes of Richmond, a member of the House of Lords’ Science and Technology Select Committee: the Artificial Intelligence (Regulation) Bill (HL Bill 11 2023-24). That Bill fell when Parliament was dissolved after the General Election was called, but it would have established a specialist regulator, the AI Authority, introduced a requirement for AI Responsible Officers and independent AI auditing, secured the creation of AI Regulatory Sandboxes, imposed AI transparency obligations on quoted companies and introduced AI health warnings.

We anticipate that Labour’s Artificial Intelligence (AI) Bill will go further than Lord Holmes’ private members’ bill.

At the 2023 Labour Party conference, the Unite union proposed Composite Motion 6 on Technology and AI in the workplace, which was seconded by the Communication Workers Union (CWU), urging delegates to call on the Labour Party to “develop a comprehensive package of legislative, regulatory and workplace protections to ensure that when in government the positive potential of technology is realised for all including the fair distribution of productivity gains”. Specific provisions included “amendments to the UK General Data Protection Regulation (UK GDPR) and Equality Act to guard against discriminatory algorithms”, “protections of workers’ data, human characteristics, acquired knowledge and experience, as the intellectual property of the worker”, and “a legal right for all workers to have a human review of decisions made by AI systems that are unfair and discriminatory so they can challenge decisions”. The motion was passed, with Conference resolving that “the next Labour government should ensure that a legal duty on employers to consult trade unions on the introduction of invasive automated or artificial intelligence technologies in the workplace is enshrined in law” and that “Labour should commit to working with trade unions to gain an understanding of the unscrupulous use of technology in the workplace and campaign against it”. Both unions sit on the Labour Party’s National Executive Committee and are Labour Party donors.

In April 2024, the Trades Union Congress (TUC), of which both Unite and the CWU form part, published its proposed Artificial Intelligence (Employment and Regulation) Bill, which is targeted at the use by employers, or those acting on their behalf, of artificial intelligence systems for ‘high-risk decision making’. The definition of high-risk decision making draws on the prohibition of automated decision-making under Article 22 GDPR and UK GDPR: decisions (including profiling) taken or supported by AI systems, even those with an element of human input, which have the capacity or potential to produce legal effects in the context of the rights and responsibilities of workers and jobseekers, or similar significant effects, with certain decision making pre-designated as being high-risk. The proposed Bill is intended to be enforceable in the Employment Tribunal, with rights to compensation.

The proposed Bill would require employers to maintain a register of AI systems used for high-risk decision making and to conduct a Workplace AI Risk Assessment prior to undertaking high-risk decision making using AI systems, and every 12 months thereafter. The assessment must address the purpose, logic, data and monitoring arrangements of the AI system, as well as the risks and mitigations, and must be shared at least a month in advance for the purposes of consultation with workers and their representatives, including trade unions. Consultations must address the risks to the rights encapsulated in the Equality Act 2010, the Human Rights Act 1998, the Health and Safety at Work etc. Act 1974, the Data Protection Act 2018, and the UK General Data Protection Regulation.

It would also establish a right to a personalised explanation of a high-risk decision for an affected individual and a right to human reconsideration.

The proposed Bill would ban high-risk decision making using emotion recognition technology which would be detrimental to workers, and would also amend s.39 Equality Act 2010 to prohibit discrimination in identifying and advertising to jobseekers and make employers liable for high-risk decision making they deploy or which is deployed on their behalf.

The proposed Bill would grant trade unions the right to be provided, in anonymised form, with all data used or proposed to be used by an employer for AI decision making, unless individuals object; where trade union members agree, the data need not be anonymised.

The proposed Bill seeks to make provision for innovation by suspending the obligations under the Act in so far as AI systems are deployed in a Regulatory Sandbox, and also makes provision for an exemption to be implemented for microbusinesses.

Finally, the proposed Bill also includes a statutory right to disconnect, which is supported by Deputy Prime Minister Angela Rayner.

We anticipate that while Labour’s new AI Bill may draw from the TUC’s Artificial Intelligence (Employment and Regulation) Bill, it is likely to have a broader focus more akin to the EU’s AI Act, which became law on 12 July 2024 and enters into force on 1 August 2024, with the majority of its provisions becoming applicable from 2 August 2026.

If your organisation requires support in developing or deploying AI lawfully, to ensure that you are at the forefront of using AI safely, responsibly and ethically, or to understand how new regulations could affect you, please contact us.

Find out more about our responsible and ethical artificial intelligence (AI) services.

Access Handley Gill Limited’s proprietary AI CAN (Artificial Intelligence Capability & Needs) Tool, to understand and monitor your organisation’s level of maturity on its AI journey.

Download our Helping Hand checklist on using AI responsibly, safely and ethically.

Check out our dedicated AI Resources page.

Follow our dedicated AI Regulation Twitter / X account.