UK and US join forces to strengthen global AI safety

Article by Aniqah Majid

The UK and US AI safety institutes plan to create a cooperative programme to share research and approaches to AI risks

The UK and US have signed a memorandum of understanding (MoU) to collaborate on research and policy that will bolster international AI safety and security.

Following the AI Safety Summit in November, where 29 countries pledged to identify and build risk-based policies on AI, the UK and US’s respective AI safety institutes plan to create a cooperative programme to achieve their shared safety goals.

The institutes will share information across the breadth of their activities, in accordance with national laws, regulations, and contracts.

Michelle Donelan, the UK secretary of state for science, innovation, and technology (DSIT), said: “The work of our two nations in driving forward AI safety will strengthen the foundations we laid at Bletchley Park in November.

“I have no doubt that our shared expertise will continue to pave the way for countries tapping into AI’s enormous benefits safely and responsibly.”

Future international collaboration

On top of sharing research and approaches, the UK and US plan to work with other governments on international standards for AI safety testing.

Donelan added: “We have always been clear that ensuring the safe development of AI is a shared global issue. Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

The safety institutes expect to collaborate on at least one joint testing exercise on a publicly accessible model. The exercise will test several scenarios in which frontier AI – highly capable models that match or exceed the capabilities of today’s most advanced systems – could be used to facilitate real-world harm.

The UK’s commitment to AI

The safety initiative is part of the UK’s aim to “seize the opportunities” of advanced AI systems, which it expects will become a major tool across industries, from identifying early signs of breast cancer to tackling cybercrime.

In its October “Frontier AI: capabilities and risks” discussion paper, the government stated that AI systems could be used by bad actors to mount cyber-attacks and disinformation campaigns, and to create biological and chemical weapons.

Alongside the report, the UK launched the Frontier AI Taskforce, to investigate AI risk and build technical expertise on how the government can safeguard against it. The safety institute will be used to evaluate and test new and emerging AI systems and make its research publicly available to the world.

In November, the UK announced the development of two new supercomputers, marking a huge step forward in AI capabilities. The government is investing £225m (US$285m) into the first of these, Isambard-AI, which is expected to be ten times faster than the UK’s current fastest supercomputer. A second, Dawn, is slated to follow and will be connected by a cloud service to Isambard-AI.

Aniqah Majid is a staff reporter at The Chemical Engineer.
