LEGAL, REGULATORY & COMPLIANCE CONSULTANTS

Handley Gill Limited

Our expert consultants at Handley Gill share their knowledge and advice on emerging data protection, privacy, content regulation, reputation management, cyber security, and information access issues in our blog.

AI Bootcamp Part I

In Part 1 of our five-part AI Bootcamp, we consider the terms and concepts needed to understand what AI is and how it works. In Parts 2 to 4 of our AI Bootcamp, we will consider the risks of developing, using and even not using AI, while in Part 5 of our AI Bootcamp, we will focus on AI regulation.

What is AI?

The Oxford English Dictionary defines artificial intelligence, or AI, as “The capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this”.

Is AI the same as machine learning?

Machine learning is a type of artificial intelligence or AI, whereby computers learn or improve without further explicit programming. This is achieved through algorithms - the rules used in carrying out calculations or problem solving - being programmed to analyse data, predict outputs and refine those predictions as new data is provided. There are different types of machine learning, the main ones being: supervised, unsupervised and reinforcement machine learning. Supervised machine learning involves the computer being provided with input data together with the correct answers, or labelled desired outputs, and observing the patterns in the dataset to determine a model capable of predicting the correct output for any new data provided, with an operator then correcting its predictions. An example of the use of supervised machine learning is in email spam filters. With unsupervised machine learning, by contrast, there is no operator to correct the computer’s predictions, and its ability to learn relies on it identifying patterns in ever greater volumes of unlabelled data. Reinforcement machine learning involves a process of trial and error by the computer itself, which learns from the outcomes of its own actions with the aim of ascertaining the optimal approach.
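By way of a deliberately simplified illustration of supervised learning - all of the messages, labels and scoring logic below are invented for this sketch, and bear no resemblance to a production spam filter - a computer can be given labelled examples, learn the word patterns associated with each label, and then predict the label for new, unseen messages:

```python
# A toy supervised learning example: learn word frequencies from labelled
# messages ("spam" or "ham"), then predict labels for new messages.
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Score a new message by which label's training words it matches more."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

# The labelled desired outputs supplied by the operator.
training_data = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "ham"),
    ("minutes from the board meeting", "ham"),
]

model = train(training_data)
print(predict(model, "free prize money"))      # matches the spam patterns
print(predict(model, "monday board meeting"))  # matches the ham patterns
```

In a real supervised system, the operator's corrections to wrong predictions would feed back into the training data, which is the "learning" loop the paragraph above describes.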

What is generative AI?

Generative AI is the term used to describe the application of artificial intelligence to create or generate content, such as text, images or video in response to instructions or prompts given by the user. The content that generative AI creates is a prediction by the software based upon the patterns in its training data.  
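The "prediction based on patterns in training data" point can be made concrete with a drastically simplified sketch - the training sentence below is invented, and real generative models learn from vastly larger datasets with far more sophisticated statistics, but the principle of predicting likely continuations is the same:

```python
# A toy "generative" model: learn which word follows which in the training
# text, then generate output by repeatedly predicting the most likely next word.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat sat by the fish"

# Learn the patterns: for each word, count which words follow it.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length):
    """Extend the text by predicting the most frequent next word at each step."""
    out = [start]
    for _ in range(length):
        if out[-1] not in follows:
            break
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 2))
```

The output is not retrieved from anywhere; it is predicted, word by word, from patterns in the training data - which is also why such predictions can be fluent yet wrong.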

Does all AI involve neural networks?

Neural networks, sometimes referred to as artificial or simulated neural networks (ANNs or SNNs), are a machine learning technique which attempts to mimic brain function and the way that neurons interact, and enables iterative learning. The network comprises input and output layers and then a hidden layer - or layers - of processing operations. Neural networks are AI, but not all AI relies on neural networks.
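The layered structure can be sketched in a few lines - the weights below are fixed, invented numbers purely for illustration, whereas in a real neural network they would be learned iteratively from data:

```python
# A toy neural network structure: an input layer (the data), one hidden
# layer of processing neurons, and an output layer producing the prediction.
import math

def sigmoid(x):
    """A common activation function, squashing any value into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights):
    """Each neuron sums its weighted inputs and applies the activation."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row))) for row in weights]

hidden_weights = [[0.5, -0.2], [0.3, 0.8]]   # 2 inputs -> 2 hidden neurons
output_weights = [[1.0, -1.0]]               # 2 hidden neurons -> 1 output

inputs = [1.0, 0.5]                      # the input layer
hidden = layer(inputs, hidden_weights)   # the hidden layer of operations
output = layer(hidden, output_weights)   # the output layer's prediction
print(output)
```

Stacking additional hidden layers between input and output is what turns a network like this into the "deep learning" discussed next.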

So, what is deep learning?

Deep learning is the term used to refer to multi-layered neural networks, comprising more than three layers.

What are large language models?

Large language models (LLMs) are a form of generative AI which applies deep learning to natural language processing, trained on vast text datasets, typically gleaned by scraping data from the internet.

What is a foundation model?

The term foundation model, coined by the Stanford University Center for Research on Foundation Models (CRFM), is used to describe the development of AI systems which are less task-specific and have been trained on a broad range of training data to enable them to be adapted to a wide range of tasks and provide the foundation for a range of AI deployments.  

What is Frontier AI?

Frontier AI is a term used to describe foundation models which, due to their vast training data and general applicability, are at the cutting edge of AI capability. The Frontier Model Forum defines “‘Frontier Models' as large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks”, whereas in the 2023 scientific research paper ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’, frontier AI models were defined as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety”.

What is a prompt?

A prompt is the instruction or information given to an AI model to inform its output. A good prompt will not merely ask for a specific output, but also provide context around the request. Prompts are often iterative in nature, building on the initial prompt to refine the ultimate output.

What are hallucinations?

In the context of AI, ‘hallucination’ is the term used when the probabilistic prediction of a generative AI model results in output which is false, inaccurate or even entirely fabricated, albeit that the information may appear to be legitimate.

What can AI do that humans can’t?

AI can consistently identify patterns in vast quantities of data, and can apply that pattern recognition at a speed and scale beyond human capability, often at least as accurately as humans, if not more so.

What can humans do that AI can’t?

AI cannot create truly unique or original content, only content based on patterns observed in its training data. While some AI tools have demonstrated their capacity to identify, describe and even feign emotions, based on their predictions of what a human would say in the relevant situation, they can’t experience emotions. And while AI can follow programmed concepts of right or wrong, and even predict the right or wrong response based on patterns it has analysed, it doesn’t have an ingrained concept of morality.

Can AI trick us into thinking it is human?

The Turing Test, developed by the English computer scientist Alan Turing, was designed to assess whether a computer could demonstrate human intelligence based on its ability to pass itself off as a human in response to questions by a human assessor in a controlled environment. As the quality and volume of the data upon which AI systems are trained improve, the ability of AI to fool a human into thinking that it is not a machine may improve too, but that doesn’t necessarily indicate that the computer is able to think like a human.

It has been reported that GPT-4 successfully persuaded a human TaskRabbit worker to complete a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) test - used to inhibit bots - by claiming to be visually impaired and therefore in need of assistance.

What is ChatGPT?

ChatGPT is a chatbot developed by OpenAI using large language models and operating as a Generative Pre-trained Transformer, hence the GPT in the name. Based on its training data, ChatGPT predicts text in order to provide a human-like response.

What is Google DeepMind?

DeepMind Technologies Limited is actually the name of the British company whose ultimate holding company is Alphabet Inc., Google’s parent company, rather than the name of a specific piece of AI. Its AI tools have been deployed in a number of areas, both to improve Google products and in the medical and scientific research fields. For example, DeepMind partnered with Moorfields Eye Hospital in London to identify patients for referral for specialist treatment for “sight-threatening eye diseases”. DeepMind was also one of the defendants to an ultimately unsuccessful claim for misuse of private information, brought in connection with the sharing by London’s Royal Free NHS Foundation Trust, and subsequent use by DeepMind, of confidential medical records in the context of a mobile healthcare app called Streams, which was used to support the identification and treatment of patients with acute kidney injury. Streams was not an AI tool but relied on an NHS algorithm; the app has since been withdrawn.

Find out more about our responsible and ethical artificial intelligence (AI) services.

Access Handley Gill Limited’s proprietary AI CAN (Artificial Intelligence Capability & Needs) Tool, to understand and monitor your organisation’s level of maturity on its AI journey.

Download our Helping Hand checklist on using AI responsibly, safely and ethically.

Check out our dedicated AI Resources page.

Follow our dedicated AI Regulation Twitter / X account.