New supercomputers will put the UK 'first in the queue' when it comes to R&D and managing AI risks

Article by Adam Duckett

Joe Bishop / Cambridge Open Zettascale Lab
Paul Calleja (left), director of Dawn AI Service, and Richard McMahon, UKRI Dawn principal investigator, stand in front of the Dawn supercomputer

UK INDUSTRIAL researchers have been promised a boon on the fringes of the Bletchley AI Summit thanks to new supercomputers called Isambard-AI and Dawn being built in Bristol and Cambridge. Their proponents say the machines will enable a huge step forward in AI and in the simulation capabilities needed to accelerate the development of fusion power and new drugs, while also being used to test the risks posed by powerful new AI models.

Once Isambard-AI is installed at the National Composites Centre at the University of Bristol next year, it will be ten times more powerful than the UK’s current fastest supercomputer. The government is investing £225m (US$278m) in the supercomputer which, at around 200 petaflops, would rank in the world’s top ten according to figures from the TOP500 supercomputer league table.

To put this performance into context, a single petaflop is equal to performing one million billion (10¹⁵) calculations every second. To further bolster this computing resource, Isambard-AI will be connected by a cloud service to another supercomputer called Dawn that is being installed in the Open Zettascale Lab at the University of Cambridge.
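For a rough sense of these scales, here is a minimal back-of-envelope sketch in Python. It uses only the figures quoted in this article (20 and 200 petaflops); the 10²¹-calculation workload is an arbitrary illustration, not a real job size:

```python
# Illustrative back-of-envelope only: the FLOPS prefixes mentioned in this
# article, and the two machines' quoted figures. Not official benchmarks.

PREFIXES = {
    "megaflop": 1e6,   # threshold passed in 1964
    "gigaflop": 1e9,
    "teraflop": 1e12,  # 1996
    "petaflop": 1e15,  # 2008
    "exaflop":  1e18,  # 2022 (Frontier)
}

dawn = 20 * PREFIXES["petaflop"]       # Dawn, ~20 petaflops as benchmarked
isambard = 200 * PREFIXES["petaflop"]  # Isambard-AI, ~200 petaflops projected

print(f"Isambard-AI vs Dawn: {isambard / dawn:.0f}x")  # -> 10x

# How long would an arbitrary 10^21-calculation workload take on each?
work = 1e21
for name, flops in (("Dawn", dawn), ("Isambard-AI", isambard)):
    print(f"{name}: {work / flops / 3600:.1f} hours")
```

At these quoted figures, the same workload that would occupy Dawn for roughly 14 hours would take Isambard-AI under an hour and a half.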

Dawn is set to be fully operational by the end of the year and is being built through a partnership involving the university, the UK government R&D funding outfit UKRI, tech firms Intel and Dell, and the UK Atomic Energy Authority (UKAEA), which is heading up the country’s development of fusion power.

Paul Calleja, director of research computing services at the University of Cambridge, told TCE that Dawn’s computing power has recently been benchmarked at around 20 petaflops, putting it on a par with the 30th most powerful supercomputer in the world.
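Figures like this come from timed benchmarks: supercomputer rankings are conventionally based on the HPL (LINPACK) benchmark, which times a machine solving an enormous dense linear system and divides the known operation count by the elapsed time. The toy sketch below applies the same principle at desktop scale to a dense matrix multiply; it illustrates how a FLOPS figure is derived, not the HPL methodology itself (the matrix size is arbitrary):

```python
import time
import numpy as np

# Toy illustration of how a sustained FLOPS figure is measured: run a
# kernel with a known floating-point operation count, time it, and divide.
# (Rankings use the far more involved HPL benchmark; same principle.)

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                    # dense n x n matrix multiply
elapsed = time.perf_counter() - start

ops = 2 * n**3               # ~2n^3 floating-point operations in a matmul
print(f"~{ops / elapsed / 1e9:.1f} gigaflops sustained")
```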

The announcement of extra investment in supercomputing power was made by the UK government on the sidelines of the AI safety summit hosted at Bletchley Park this week. Government and industry representatives gathered to discuss how the risk of AI can be managed through international collaboration and how to take advantage of the opportunities it offers.

Rory Arnold / No 10 Downing Street / CC BY 2.0 DEED
Representatives from the tech sector included Twitter owner Elon Musk (left), who cofounded the startup that created hit AI chatbot ChatGPT, and former Google CEO Eric Schmidt (right). Musk warned on the sidelines that, given the pace of development, AI is potentially the most pressing existential threat humanity faces. Sam Altman, CEO of ChatGPT developer OpenAI, was also at the meeting.

Simon McIntosh-Smith, director of the Isambard National Research Facility at Bristol, said: “Isambard-AI will offer capacity never seen before in the UK for researchers and industry to harness the huge potential of AI in fields such as robotics, big data, climate research, and drug discovery.”

Rob Akers, director of computing programmes and senior fellow at UKAEA, said: “Dawn will form an essential part of a diverse UKRI supercomputing ecosystem, helping to promote high-fidelity simulation and AI capability, ensuring that UK science and engineering is first in the queue to exploit the latest innovation in disruptive high-performance computing hardware.”

Exascale and the industrial metaverse

Charting the history of computing power, the megaflop threshold (1 million, or 10⁶ calculations per second) was surpassed in 1964. The US Department of Energy then broke through the teraflop barrier (10¹² calculations per second) in 1996. In 2008, the US Los Alamos National Laboratory achieved a petaflop (10¹⁵) and then last year the Frontier supercomputer at the US Oak Ridge National Laboratory was the first to cross the exascale, achieving more than 10¹⁸ calculations per second.

Last month, the UK government selected the University of Edinburgh as its preferred location for an exascale supercomputer, with installation expected to begin in 2025. On the promise of exascale computing, Akers said: “Fusion has long been referred to as an ‘exascale grand challenge’. The exascale is finally upon us and I firmly believe that the many collaborations coalescing around Dawn will be a powerful ingredient for extracting value promised by the exascale – for the UK to deliver fusion power to grid in the 2040s, to realise net zero more generally, to seed high value UK jobs in AI and ‘digital’ and to drive economic growth across the entire United Kingdom.”

Earlier this year, partners in Cambridge’s Dawn project said that AI will be necessary to accelerate the development of fusion power, and that they expect the engineering designs for the prototype STEP reactor to be developed in a highly immersive and connected virtual environment known as the metaverse. Powered by AI, this would allow engineers to collaborate on designs and perform faster simulations and iterations.

Calleja said: “UKAEA’s moon-shot mission to put clean fusion energy on the UK grid in the 2040s is a hugely ambitious goal, needing equally ambitious advanced computing and AI technologies to fuel the virtual engineering effort to create a complete digital reality of the power plant that can be developed and tested in silico, which greatly accelerates the process.”

Chemical engineers are working at the forefront of fusion design, with members of the UKAEA team developing the fuel cycle, and the power and cooling systems.

AI safety summit

Isambard-AI and Dawn will also be used to support the UK government’s efforts to position itself as a leader in AI risk management.

This week, government leaders, AI industry executives and representatives from civil society met at Bletchley Park, the site where World War Two codebreakers pioneered computing techniques to decipher enemy messages. Leaders from more than 25 countries, including China, the EU, and the US, signed a declaration recognising the substantial risks posed by AI and the need to work together to manage them.

During the summit, the UK government announced the launch of a new AI Safety Institute that will test new types of frontier AI – a term used to describe the most powerful, cutting-edge AI models – before and after they are released, checking for potentially harmful outcomes ranging from the spread of misinformation to the existential risks posed if humanity loses control of the technology.

Kirsty O'Connor / No 10 Downing Street / CC BY-NC-ND 2.0 DEED
Prime Minister Rishi Sunak speaks at a plenary session on day two of the AI Summit at Bletchley Park

The government said new frontier models are expected next year and the first task of the institute will be to quickly put in place the processes and systems to test AI models before they launch. The institute will be formed from the existing Frontier AI Taskforce and the government says researchers who are already in place will be given access to the supercomputers at Bristol and Cambridge to support their assessments.

The Bletchley Declaration notes the pressing need to ensure the safe development of AI given that the technology is already “deployed across many domains of daily life, including housing, employment, transport, education, health, accessibility, and justice,” and that new, more powerful models are on the way. It notes action is needed to ensure AI inclusively delivers “transformative opportunities” in health, clean energy, and climate.

“There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent,” the declaration says.

The agreement is light on specific detail about what more will be done, neatly encapsulating the challenge that slow, deliberative international policymaking faces when trying to negotiate and agree safeguards needed to manage the risks of fast-evolving technologies.

Delegates have agreed it’s important to work together to identify and manage risks; that AI developers should act responsibly and transparently; and that follow-up summits will be held every six months, starting in South Korea and then France. It was agreed that Yoshua Bengio, a Turing Award-winning AI academic, will lead the publication of a “State of the Science” report on frontier AI to inform future work on AI safety.

Reactions

Marc de Kamps, associate professor at the University of Leeds’ school of computing, said the AI Safety Institute could be a positive step in fostering a broader discussion about the societal impacts of AI, and said it was the right decision not to be prescriptive about whether certain types of research are off limits. However, he warned: “The communique is unspecific about the ways in which its goals will be achieved and is not explicit enough about the need for engagement with the public.”

Rashik Parmar, CEO of BCS, The Chartered Institute for IT, said he was pleased to see a focus on AI issues that are a problem today – “particularly disinformation, which could result in ‘personalised fake news’ during the next election”.

He added: “We would like to see government and employers insisting that everyone working in a high-stakes AI role is a licensed professional and that they and their organisations are held to the highest ethical standards. It’s also important that CEOs who make decisions about how AI is used in their organisation are held to account as much as the AI experts; that should mean they are more likely to heed the advice of technologists.”

Stephanie Baxter, head of policy at The Institution of Engineering and Technology (IET), said: “[With] emerging technologies, like AI, being fundamental to sector growth, it’s important to recognise that education and training is key to the safe use of AI and we should look at upskilling and reskilling the current workforce. Employers are telling us that there is a lack of skills in industry to take advantage of AI so we need to be agile and offer options for rapid training, such as micro credentials, to adapt and make best use of new technologies. Government plays a key role in supporting initiatives that enable employers to stay competitive and innovative in this space.”

Natasha McCarthy, associate director of policy at the Royal Academy of Engineering, said: “Engineers throughout history have been vital for embedding appropriate safeguards in technologies that we use every day in areas as broad as transport, medicine, and banking. We know that the engineering profession’s expertise in creating ever-safer technologies and infrastructure, including tools and techniques designed for safety, as well as the institutions that accredit and certify people, skills, and education to build responsible practice, will be incredibly valuable as part of a cross-sector and cross-discipline approach to mitigating the risks associated with AI today. We hope that engineering will be well represented in the forums and advisory bodies that take the discussions from the summit forwards.”

Article by Adam Duckett

Editor, The Chemical Engineer
