Opportunity Knocks
Ahead of publishing Matt Clifford CBE’s AI Opportunities Action Plan, commissioned in July 2024, the government announced that it had agreed to take forward all 50 of his recommendations to “shape the AI revolution”.
The Plan’s terms of reference were to “set out a roadmap for government to capture the opportunities of AI to enhance growth and productivity and create tangible benefits for UK citizens”, including by considering how the UK could:
1. build a scalable and globally competitive AI sector;
2. adopt artificial intelligence to enhance growth and productivity, supporting delivery of the government’s five stated missions;
3. use artificial intelligence in government to transform citizens’ experiences of interacting with the state and boost take-up in all parts of the public sector and the wider economy; and,
4. strengthen the enablers of artificial intelligence adoption, such as data, infrastructure, public procurement processes and policy, and regulatory reforms.
The Plan and its recommendations are split into three sections:
1. Invest in the foundations of AI;
2. Push hard on cross-economy AI adoption; and,
3. Position the UK to be an AI maker, not an AI taker.
As well as addressing wider infrastructure and compute requirements, international co-operation, and training and the skills gap, the AI Opportunities Action Plan includes recommendations affecting data protection, intellectual property and copyright, AI regulation, and the environment and ESG.
The report calls for the government to embed AI in the delivery of public services, which will require public sector bodies not only to comply with existing law and regulation, including the Algorithmic Transparency Reporting Standard, but also to consider how they meet their public law obligations when deploying artificial intelligence.
In this post we highlight some of the key recommendations of the AI Opportunities Action Plan impacting data protection, intellectual property, AI regulation, ESG and online safety.
What are the implications of the AI Opportunities Action Plan for data protection?
The AI Opportunities Action Plan makes several recommendations that involve either collecting new data with a view to making it available for the development of artificial intelligence (AI) models, or repurposing existing datasets, together with mechanisms to facilitate this.
Recommendation 7 of the AI Opportunities Action Plan calls on the government to “Rapidly identify at least 5 high-impact public datasets it will seek to make available to AI researchers and innovators”, suggesting that these should be prioritised having regard to factors including the potential economic and social value of the data, as well as public trust, national security, privacy, ethics, and data protection considerations.
In addition, under recommendation 7 it is proposed that the government should “explore use of synthetic data generation techniques to construct privacy-preserving versions of highly sensitive data sets”.
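To illustrate what such techniques involve (this sketch and its invented columns are ours, not the Plan’s): synthetic data generation typically means fitting a statistical model to a sensitive dataset and then sampling entirely new records from that model, so that aggregate structure is preserved without releasing any real individual’s record. A minimal Python sketch, assuming a toy two-column dataset:

```python
# Illustrative only: a toy sketch of privacy-preserving synthetic data
# generation. The "sensitive" dataset, its columns and all figures are
# invented; real-world synthesis (e.g. with differential privacy) is
# considerably more involved.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a sensitive dataset: age and income for 1,000 individuals.
real = np.column_stack([
    rng.normal(45, 12, 1000),        # age
    rng.normal(35_000, 8_000, 1000)  # income
])

# Fit simple aggregate statistics to the real data...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and sample a synthetic dataset from the fitted distribution. No record
# in `synthetic` corresponds to a real individual, but means and correlations
# are approximately preserved.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("real means:     ", np.round(mean, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```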
Recommendation 8 of the AI Opportunities Action Plan calls for the government to “Strategically shape what data is collected, rather than just making data available that already exists”.
Recommendation 9 of the AI Opportunities Action Plan calls on the government to “Develop and publish guidelines and best practices for releasing open government datasets which can be used for AI, including on the development of effective data structures and data dissemination methods”.
Recommendation 10 of the AI Opportunities Action Plan calls for the government to “Couple compute allocation with access to proprietary data sets as part of an attractive offer to researchers and start-ups choosing to establish themselves in the UK and to unlock innovation.”
Recommendation 27 of the AI Opportunities Action Plan calls for the government to establish “A data-rich experimentation environment including a streamlined approach to accessing data sets, access to language models and necessary infrastructure like compute”.
Recommendation 50 calls on the government to “Create a new unit, UK Sovereign AI, with the power to partner with the private sector to deliver the clear mandate of maximising the UK’s stake in frontier AI” and goes on to recommend that “UK Sovereign AI should lead the delivery of a government offer to new and existing frontier AI companies that includes:… Packaging and providing responsible access to the most valuable UK-owned data sets and relevant research.”
Where additional data is collected specifically so that it can be used for the development of AI models, this could create dual purposes for processing, each requiring its own lawful basis. Where processing would not be necessary for the performance of a task carried out in the public interest, public bodies would, in the absence of the data subject’s consent, lack a lawful basis for processing. Implementing this recommendation would therefore require the government to establish new legal obligations or public duties on public sector bodies in connection with the creation of datasets in the public interest.
The Data (Use and Access) Bill already proposes to amend the UK GDPR to ease the restrictions on the re-purposing of personal data. Its proposals include a presumption of compatibility where the new purpose is for historical or scientific research, archiving or statistical purposes (the so-called RAS purposes); a new definition of scientific research broad enough to include AI model training and development; and an extension of the exemptions from the obligation to provide transparency information to data subjects. These proposals have already attracted concern, and proposed amendments, throughout the House of Lords Grand Committee’s scrutiny of the Data (Use and Access) Bill, and their significance becomes clearer with the publication of the AI Opportunities Action Plan.
Download our free comprehensive briefing on the Data (Use and Access) Bill, including its implications for artificial intelligence (AI), and our unofficial Data (Use and Access) Bill Keeling Schedules illustrating the changes the Data (Use and Access) Bill (as introduced) would make to the UK GDPR, the Data Protection Act 2018 and the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR).
What are the implications of the AI Opportunities Action Plan for AI regulation in the UK?
The AI Opportunities Action Plan does not make any specific recommendations as to the nature of UK AI regulation that would inform the government’s promised AI Bill to “place requirements on those working to develop the most powerful artificial intelligence models”. Instead, it calls for regulators to be better equipped to oversee AI, to focus on supporting the growth of AI, and to promote and socialise best practice.
Recommendation 23 of the AI Opportunities Action Plan emphasises that it is essential for the government to “act quickly to provide clarity on how frontier models will be regulated” and suggests that “A top priority of any such regulation should be preserving the capability, trust and collaboration that the AISI [AI Safety Institute] has built up since its creation”.
Several recommendations relate to support for and requirements on regulators, including:
Recommendation 25, which calls on the government to “Commit to funding regulators to scale up their AI capabilities, some of which need urgent addressing”;
Recommendation 26, which calls on the government to “Ensure all sponsor departments include a focus on enabling safe AI innovation in their strategic guidance to regulators”;
Recommendation 27, which calls on the government to “Work with regulators to accelerate AI in priority sectors and implement pro-innovation initiatives like regulatory sandboxes”; and,
Recommendation 28, which calls on the government to “Require all regulators to publish annually how they have enabled innovation and growth driven by AI in their sector”.
Recommendation 29 of the AI Opportunities Action Plan calls on the government to “Support the AI assurance ecosystem to increase trust and adoption” including by “Investing significantly in the development of new assurance tools” and “Building government-backed high-quality assurance tools that assess whether AI systems perform as claimed and work as intended”.
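By way of illustration only (the function, the data and the 90% claim below are invented, not drawn from the Plan), an assurance tool of the kind described might, at its simplest, check a supplier’s performance claim against independent test data:

```python
# Illustrative only: a toy "assurance check" in the spirit of recommendation
# 29, verifying that an AI system performs as claimed on held-out test data.
# The model, data and claimed accuracy are all invented for demonstration.
from dataclasses import dataclass

@dataclass
class AssuranceResult:
    claimed_accuracy: float
    measured_accuracy: float
    passed: bool

def assure(predict, inputs, labels, claimed_accuracy, tolerance=0.02):
    """Check a vendor's accuracy claim against independent test data."""
    correct = sum(predict(x) == y for x, y in zip(inputs, labels))
    measured = correct / len(labels)
    return AssuranceResult(claimed_accuracy, measured,
                           measured >= claimed_accuracy - tolerance)

# Toy usage: a trivial "model" that flags values above 0.5.
inputs = [0.1, 0.4, 0.6, 0.9, 0.7, 0.2]
labels = [0, 0, 1, 1, 1, 0]
result = assure(lambda x: int(x > 0.5), inputs, labels, claimed_accuracy=0.9)
print(result)  # measured 1.0 here, so the (invented) 90% claim passes
```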
In addition to formal action, the AI Opportunities Action Plan calls on the government to shape the market through standard public sector procurement practices, suggesting that “Procurement contract terms should set standards (e.g. quality), requirements, and best practice (e.g. performance evaluations)” (Recommendation 43, AI Opportunities Action Plan).
Recommendation 45 calls on the government to “Publish best-practice guidance, results, case-studies and open-source solutions through a single ‘AI Knowledge Hub’ accessible to technical and non-technical users across private and public sectors as a single place to access frameworks and insights”.
What are the implications of the AI Opportunities Action Plan for intellectual property law and copyright?
The Plan essentially recommends that the government adopt the proposals it is currently consulting on to relax copyright law in favour of AI developers, as well as identifying specific copyright-protected datasets for release to the sector.
Recommendation 13 of the AI Opportunities Action Plan calls on the government to “Establish a copyright-cleared British media asset training data set, which can be licensed internationally at scale”, suggesting that this could utilise data drawn from cultural organisations including the BBC and the British Library.
Recommendation 24 of the AI Opportunities Action Plan calls on the government to “Reform the UK text and data mining regime so that it is at least as competitive as the EU”, arguing that “The current uncertainty around intellectual property (IP) is hindering innovation and undermining our broader ambitions for AI, as well as the growth of our creative industries”. The EU Digital Single Market Directive (Directive (EU) 2019/790) (‘the DSM Directive’) creates exceptions to copyright law for text and data mining, provided certain conditions are met, where this is carried out by research organisations and cultural heritage institutions for the purposes of scientific research (Article 3, DSM Directive), and in respect of lawfully accessible works and other subject matter where the rights have not been expressly reserved (Article 4, DSM Directive).
It will perhaps be a matter of concern that, notwithstanding the government’s open consultation on ‘Copyright and artificial intelligence’, published less than a month ago on 17 December 2024, the government may be perceived to have pre-empted the outcome of that consultation by indicating that it will take forward all of the Plan’s recommendations, including recommendation 24. In that consultation, the government proposes to create a copyright exception under the Copyright, Designs and Patents Act 1988 for data mining for any purpose, including commercial purposes, where there is lawful access to the works and the rights holder has not reserved their rights through an agreed mechanism.
What are the implications of the AI Opportunities Action Plan for the environment?
Recommendation 4 of the AI Opportunities Action Plan calls on the government to “Establish ‘AI Growth Zones’ (AIGZs) to facilitate the accelerated build out of AI data centres”.
We have previously highlighted the environmental impact of data centres in our post on adapting data protection compliance to support ESG goals, ‘Can data protection save the planet?’ Data centres, which provide network, compute and storage infrastructure, are estimated to account for around 1% of worldwide electricity use. Water consumption attributable just to US data centres in 2014, both direct through cooling and indirect through electricity generation, was estimated at some 626 billion litres, with one commercial data centre provider reporting that the majority of its water consumption was from potable water in each of the years 2017 to 2019.
The consumption of energy and natural resources by data centres is exacerbated by AI. As we highlighted in our AI Bootcamp Part II post, one study, ‘Estimating the carbon footprint of Bloom, a 176B parameter language model’, while noting that the available data sources for measuring AI carbon emissions were inadequate, estimated the likely carbon emissions of various LLMs, suggesting that the training of GPT-3 would involve CO2eq emissions of 502 tonnes, or 552 tonnes taking into account data centre emissions, and this was before it was re-trained or anyone entered a prompt to use GPT-3. According to the US Environmental Protection Agency’s Greenhouse Gas Equivalencies Calculator, this is the equivalent of more than 62,000 gallons of petrol being consumed. OpenAI has suggested, however, that once trained, models can be energy efficient, claiming that “even with the full GPT-3 175B, generating 100 pages of content from a trained model can cost on the order of 0.4 kW-hr, or only a few cents in energy costs”. It has also been suggested that responding to a single prompt could consume more than 100 times the energy of a Google search.
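As a back-of-the-envelope cross-check of the EPA equivalence cited above (our own arithmetic; the factor of roughly 8.887 kg CO2 per US gallon of petrol is EPA’s published figure, not taken from the Plan):

```python
# Back-of-the-envelope check of the EPA Greenhouse Gas Equivalencies figure
# cited above. EPA's published factor is roughly 8.887 kg CO2 emitted per
# US gallon of petrol (gasoline) burned.
KG_CO2_PER_GALLON = 8.887

training_emissions_tonnes = 552  # GPT-3 estimate incl. data centre emissions
training_emissions_kg = training_emissions_tonnes * 1_000

gallons_equivalent = training_emissions_kg / KG_CO2_PER_GALLON
print(f"{gallons_equivalent:,.0f} gallons")  # ~62,000, matching the figure above
```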
While not recommended in the report, the location of AIGZs should be prioritised to take into account the availability of clean energy, dovetailing with the government’s Clean Power 2030 Action Plan announced in December 2024. In addition, guidance and upskilling on the deployment of AI could address which AI use cases are appropriate having regard to their environmental impact, and when alternatives are more suitable.
What are the implications of the AI Opportunities Action Plan for online safety?
While not the subject of any specific recommendations, one use case given for the deployment of artificial intelligence models was “Automated threat and anomaly detection… to clean up social media”.
Find out more about our responsible and ethical artificial intelligence (AI) services.