The Bill with the Holes?
On 22 November 2023, the Conservative peer Lord Holmes of Richmond, a member of the House of Lords’ Science and Technology Select Committee, introduced the Artificial Intelligence (Regulation) Bill (HL Bill 11 2023-24), described in its long title as “A Bill to make provision for the regulation of Artificial Intelligence; and for connected purposes”, in the House of Lords as a private member’s bill.
The government’s position on the Bill is not yet clear, although it has been reported that Prime Minister Rishi Sunak was considering establishing a global AI authority in London, modelled on the International Atomic Energy Agency (IAEA). The central planks of the Bill are:
To define AI as technology which enables the programming or training of a device or software to perceive environments through the use of data, interpret data using automated processing designed to approximate cognitive abilities and make recommendations, predictions or decisions with a view to achieving a specific objective;
To require the Secretary of State for Science, Innovation and Technology to make regulations establishing the AI Authority;
To designate the functions of the AI Authority, which include the co-ordination of existing regulators in relation to AI; the conduct of a review of the existing legal and regulatory framework and its effectiveness; the accreditation of AI auditors; and taking a leading role in the identification of risks, horizon scanning, international liaison, education and awareness, and public engagement and consultation in relation to the opportunities and risks of AI;
To identify the principles according to which the AI Authority must fulfil its functions, including: that any burdens or restrictions are proportionate to the benefit of AI, taking into account the nature of the tool or service, the nature of the risk to consumers, the cost of implementation proportionate to the risk, and international competitiveness, and having regard to the principles that AI should deliver safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress; that AI and its applications should comply with equalities legislation, be inclusive by design, be designed so as neither to discriminate unlawfully among individuals nor, so far as reasonably practicable, to perpetuate unlawful discrimination arising in input data, meet the needs of those from lower socio-economic groups, older people and disabled people, and generate data that are findable, accessible, interoperable and reusable; and that businesses developing or using AI should do so transparently, undertake thorough testing, and comply with applicable laws, including in relation to data protection, privacy and intellectual property;
To require the AI Authority to collaborate with existing regulators to create Regulatory Sandboxes for AI;
To require the Secretary of State for Science, Innovation and Technology to make regulations requiring any business which develops, deploys or uses AI to have a designated AI Responsible Officer, whose role is to ensure the safe, ethical, unbiased and non-discriminatory use of AI, and that the data used in AI are, so far as reasonably practicable, similarly unbiased;
To impose an obligation on quoted companies to include within their strategic report information about any development, deployment or use of AI by the company, and the name and activities of the AI Responsible Officer;
To require the Secretary of State for Science, Innovation and Technology to make regulations obliging any person involved in training AI to supply to the AI Authority a record of all third-party data and intellectual property used in that training, together with an assurance that all such data is used with informed consent (explicit or implicit) and complies with applicable IP law;
To require the Secretary of State for Science, Innovation and Technology to make regulations requiring any person supplying a product or service involving AI to give customers clear and unambiguous health warnings, labelling and opportunities to give or withhold informed consent in advance; and
To require the Secretary of State for Science, Innovation and Technology to make regulations requiring any business developing, deploying or using AI to submit to independent third-party audit by an AI Auditor accredited by the AI Authority.
For what is described as a Bill to regulate AI, it is remarkably light on regulation, albeit that this is consistent with the government’s approach, recently reiterated by Viscount Camrose in the House of Lords on 14 November 2023, who stated that the government was not committed to new legislation “at this stage” but had “not ruled out legislative action in future as and when there is evidence of substantial risks, where non-statutory measures would be ineffective”.
The cumulative definition of artificial intelligence in the Bill is a narrow one, which appears more restrictive than the OECD definition, for example, and which might exclude AI models already available on the market and in wide use.
The proposed AI Authority would appear to be in addition to the AI Safety Institute announced by Prime Minister Rishi Sunak at the AI Safety Summit on 2 November 2023, whose intended functions are to develop and conduct evaluations of advanced AI systems, drive foundational AI safety research, and facilitate information exchange between the Institute and other national and international actors, such as policymakers, international partners, private companies, academia, civil society and the broader public.
Certain provisions of the Bill apply only to businesses, and not to public authorities, charities and other organisations which might use or deploy AI. They are therefore too narrow in their application, and could be avoided by businesses through sophisticated corporate structures such as that used by OpenAI, which comprises a non-profit AI research organisation and the for-profit subsidiary OpenAI Global, LLC.
While the Bill does not make specific provision creating offences or imposing penalties for failure to comply with obligations under the Act, such as the obligation on businesses to notify the AI Authority of their training data, it does give the Secretary of State discretion to create offences and to require the payment of fees, penalties and fines.
The drafting of the Bill is such that, other than in relation to the notification and audit provisions, the AI Authority would not itself have direct regulatory oversight of the developers and users of AI, for example to require that AI models are, and are used in a manner that is, safe, secure, robust, fair, transparent, explainable, inclusive by design, non-discriminatory, and compliant with equalities and other legislation. Instead of imposing any direct or even secondary liability, the Bill would impose tertiary liability by requiring the AI Authority to have regard to these factors, as well as to the benefits of AI, when carrying out its functions, whether in considering whether further legislation may be necessary or in overseeing existing regulators to ensure that they take account of AI. This would certainly appear to offer the light-touch approach to AI regulation preferred by the government, as set out in its White Paper ‘A pro-innovation approach to AI regulation’.
The obligation to notify the AI Authority of the use of third-party data and IPR in training data appears not to be intended to extend to personal data within that third-party data, and an obligation to provide assurances of compliance with data protection legislation would also be welcome. The notification obligation would support the fulfilment of the commitment to increase transparency by private actors envisaged in the Bletchley Declaration.
The application of the UK’s current data protection, IP and other legislation to AI developers and users outside the UK whose activities involve the personal data or IPR of persons in the UK should be the subject of urgent review.
The obligation on businesses developing, deploying or using AI to submit to audit by third-party accredited auditors will no doubt come as welcome news to the Big Four consultancy firms, but in practice, given the breadth of current and anticipated use of AI, any such power could not be enforced at scale and, in any event, should surely be restricted to developers of AI and to organisations deploying or using AI in ways deemed to present a high risk. This would, however, require the AI Authority to work with its co-regulators to establish an AI risk assessment framework and a taxonomy of AI harms, incorporating intellectual property, data protection, human rights, equality, ESG and wider societal risks.
If the government is serious about instituting AI guardrails, any Bill to regulate AI should apply to all organisations developing or using AI, and should seek to embed the principles of transparency, explainability and the like, which are not necessarily addressed by existing legislation and regulation.
Should your organisation require support in understanding and complying with its legal obligations in relation to the development or use of AI, or in conducting an AI Risk Assessment, or wish to engage an outsourced AI Responsible Officer, contact us.
Find out more about our responsible and ethical artificial intelligence (AI) services.
Access Handley Gill Limited’s proprietary AI CAN (Artificial Intelligence Capability & Needs) Tool, to understand and monitor your organisation’s level of maturity on its AI journey.
Download our Helping Hand checklist on using AI responsibly, safely and ethically.
Check out our dedicated AI Resources page.
Follow our dedicated AI Regulation Twitter / X account.