LEGAL, REGULATORY & COMPLIANCE CONSULTANTS

Handley Gill Limited

Our expert consultants at Handley Gill share their knowledge and advice on emerging data protection, privacy, content regulation, reputation management, cyber security, and information access issues in our blog.

AI acceleration

The House of Lords’ Communications and Digital Committee rejects the government’s focus on the existential risks of artificial intelligence and urges the government to take action - at pace - to address the real and immediate risks presented by AI, including copyright infringement, discrimination and bias, data protection, cyber security and disinformation. Whether the government will take heed when it publishes its response to the consultation on its proposed ‘Pro-innovation approach to AI regulation’ remains to be seen.
— Handley Gill Limited

The House of Lords’ Communications & Digital Committee has this morning published its report following its inquiry on Large Language Models and Generative AI.

The Committee calls on the Government to deploy “a more deliberate focus on near-term risks” & to give “urgent attention” to the risk of “regulatory capture by vested interests”.

It goes on to raise “even deeper concerns about the Government’s commitment to fair play around copyright” & warns that the status quo with regard to the use of AI training data, obtained through unauthorised web scraping, fails to “reward creators for their efforts, prevent others from using works without permission & incentivise innovation”.

Notwithstanding the government’s focus on the existential risks posed by AI, including at the AI Safety Summit, the report states that “Wider concerns about existential risk (posing a global threat to human life) are exaggerated and must not distract policymakers from more immediate priorities”.

Headline recommendations from the report include:

  • Guard against regulatory capture

  • Make market competition an explicit AI objective

  • Adopt a nuanced approach to the comparative benefits of open and closed AI models, and review their respective security implications

  • Ensure any new rules support rather than stifle competition

  • Rebalance focus away from a narrow approach to AI safety

  • Explore sovereign LLM capability

  • Boost computing power, infrastructure, skills and support for academic spin-outs

  • Invest in large high quality training datasets

  • Encourage use of licensed material for training data

  • Update copyright legislation if necessary to resolve copyright disputes

  • Implement faster mitigations for immediate AI risks around cyber security, counter-terror, CSAE/CSAM and disinformation

  • Require improved assessments and guardrails to tackle societal harms around discrimination, bias and data protection in AI

  • Develop intelligence capacity and capability around catastrophic AI risks

  • Implement mandatory safety tests for high-risk, high-impact AI models

  • Conduct a legal review of AI liability

  • Empower existing regulators with investigatory and sanctioning powers

  • Introduce cross-sector guidelines

  • Develop accredited standards and common auditing methods

Organisations developing or deploying AI that wish to comply with existing regulatory obligations, or to be at the forefront of using AI safely, responsibly and ethically, should conduct an AI risk assessment, establish an AI governance programme and appoint an AI Responsible Officer. If Handley Gill can support you with these, please contact us.

Find out more about our responsible and ethical artificial intelligence (AI) services.

Access Handley Gill Limited’s proprietary AI CAN (Artificial Intelligence Capability & Needs) Tool, to understand and monitor your organisation’s level of maturity on its AI journey.

Download our Helping Hand checklist on using AI responsibly, safely and ethically.

Check out our dedicated AI Resources page.

Follow our dedicated AI Regulation Twitter / X account.