Tim Duignan looks at how AI-accelerated simulation will transform chemical engineering, freeing chemical engineers to tackle more complex challenges
PICTURE yourself as an ancient bridge builder. You’ve studied under masters, memorised rules of thumb, and learned from countless failures. Yet each new bridge is still a leap of faith – will your design hold, or will it collapse? This was engineering before modern simulation tools. In contrast, today’s bridge builders can model every beam, bolt, and load with a precision that would seem magical to their predecessors.
Now imagine that same revolutionary leap forward in chemical engineering. That’s where we stand today with artificial intelligence and molecular simulation. After decades of running experiments, making educated guesses and learning through trial and error, we’re on the cusp of being able to simulate almost any chemical system from first principles with unprecedented accuracy. It’s a turning point similar to the introduction of computational fluid dynamics or finite element analysis. Those tools didn’t replace chemical engineers – they amplified our capabilities and freed us to tackle more complex challenges.
AI-accelerated simulation will do the same again, but on a far greater scale. Chemical engineering will transition from a reliance on educated trial and error to an era of precise prediction and design. The only question is whether you’ll be leading this transformation or trying to catch up.
Companies that embrace AI simulation will gain a formidable advantage. Those who don’t adapt risk becoming as irrelevant as slide rule manufacturers in the age of calculators. The transition won’t be instant or easy, however, and there will be real challenges along the way.
But the direction is clear. Just as no one today would design a bridge without computer simulation, in ten years, no one will develop chemical processes without AI-accelerated simulation assistance.
I first glimpsed this future through my work simulating electrolyte solutions for battery systems. Just a few years ago, modelling even basic ion interactions in these solutions required months of processing time on supercomputers, and we could only handle the simplest cases. The computational demands were so intense that practical questions about real-world battery systems remained frustratingly out of reach.
Then came the breakthrough. Today, we can simulate complex, practically important systems like real battery electrolytes overnight on a regular desktop computer. What once seemed impossibly complex – modelling the intricate dance of ions through channels, predicting electrolyte behaviour under different conditions, understanding the subtle interactions that determine battery performance – has become routine. This isn’t just an incremental improvement in speed; it’s a fundamental shift in what’s possible.
Think about how computer-aided design changed architecture and manufacturing. Architects can now test thousands of designs virtually before breaking ground. Soon, chemical engineers will have the same capability with molecules and processes.
This shift transforms how we approach chemical engineering problems at their core. Where we once relied on trial-and-error experimentation – mixing different compounds and hoping for desired results – we can now increasingly predict outcomes before entering the lab. Take catalyst design, for instance. Instead of synthesising dozens of potential catalysts and testing each one, we can simulate their performance virtually, understanding how different molecular structures will likely interact with reactants.
For complex formulations like pharmaceutical co-crystals or battery electrolytes, we can predict stability, solubility, and performance characteristics, potentially reducing the time-consuming cycle of make-test-iterate. This capability could reshape multiple industries: in pharmaceuticals, accelerating aspects of drug development through better prediction of molecular interactions; in energy storage, helping optimise battery materials; in materials science, supporting the design of membranes for separations; and in consumer products, informing the development of new formulations. Molecular mechanisms that were once guesswork – whether a reaction proceeds through a concerted or stepwise pathway, how solvent effects influence reaction rates – can now be explored through simulation. This suggests a future of faster development cycles, lower costs, and deeper understanding of the chemistry we’re working with.
To understand how AI is revolutionising molecular simulation, we need to grasp a fundamental innovation: neural network potentials. Let me explain this breakthrough using an analogy that chemical engineers will instantly recognise.
Think about how you learned to predict chemical behaviour in your early training. You started with basic principles – electronegativity, atomic radii, bonding rules. With experience, you developed an intuition for how molecules would behave. You learned to recognise patterns: certain functional groups consistently react in particular ways, specific molecular arrangements lead to predictable properties.
Neural network potentials work similarly, but with superhuman precision and scope. Traditionally, when we wanted to simulate molecular systems, we had to write explicit mathematical rules for how atoms interact – like creating a massive cookbook of every possible chemical reaction. These classical force fields were limited by our ability to mathematically describe complex quantum mechanical interactions.
The revolutionary insight was this: instead of trying to write rules by hand, we could teach AI to understand the underlying patterns in quantum mechanical data. We start by performing extremely accurate but computationally expensive quantum calculations for a diverse set of molecular configurations. The neural network then learns to recognise patterns in this data – much like how you learned to recognise patterns in chemical behaviour, but with the ability to handle millions of examples and thousands of variables simultaneously.
The result is remarkable: a simulation method that approaches quantum mechanical accuracy but runs at speeds closer to classical molecular dynamics. It’s like having a savant-level chemical intuition that can be applied to any molecular system.
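To make the training idea concrete, here is a minimal, illustrative sketch in Python using PyTorch. Everything in it – the network sizes, the descriptors, and the synthetic data standing in for expensive quantum mechanical reference calculations – is invented for illustration, not taken from any production model:

```python
# Minimal sketch of how a neural network potential is trained: a network
# maps each atom's local-environment descriptor to a per-atom energy,
# per-atom energies are summed to a total energy, and the sum is fitted
# to reference (quantum-mechanics-style) energies. Synthetic data only.
import torch
import torch.nn as nn

N_ATOMS, N_FEATURES = 10, 8  # atoms per configuration, descriptor size

class AtomicEnergyNet(nn.Module):
    """Per-atom energy network; total energy is the sum over atoms."""
    def __init__(self, n_features):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, 32), nn.SiLU(),
            nn.Linear(32, 32), nn.SiLU(),
            nn.Linear(32, 1),
        )

    def forward(self, descriptors):  # (batch, n_atoms, n_features)
        per_atom_energy = self.mlp(descriptors)   # (batch, n_atoms, 1)
        return per_atom_energy.sum(dim=(1, 2))    # (batch,) total energies

# Synthetic "training set": descriptors plus stand-in reference energies.
torch.manual_seed(0)
X = torch.randn(256, N_ATOMS, N_FEATURES)
y = X.sum(dim=(1, 2)) + 0.1 * torch.randn(256)

model = AtomicEnergyNet(N_FEATURES)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```

Real neural network potentials are also trained on forces – the gradients of the energy with respect to atomic positions – which greatly improves data efficiency, but the essential structure is the same: learn per-atom energies from local environments, sum, and fit to quantum mechanical reference data.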
The real magic lies in how these neural networks represent molecular environments. They don’t just look at pairs of atoms like classical force fields do. Instead, they consider the entire local environment around each atom – how many neighbours it has, their types, their arrangements, and how these factors influence each other. This holistic view allows them to capture subtle quantum effects that emerge from the complex interplay between electrons and nuclei.
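As an illustration of what such a local-environment representation can look like, the toy descriptor below is loosely in the spirit of Behler–Parrinello radial symmetry functions: for each atom, it sums smooth Gaussian contributions from neighbours inside a cutoff. All the parameter values are arbitrary, and real descriptors also encode angular information and chemical species:

```python
# Toy radial descriptor: each atom gets a feature vector summarising how
# many neighbours sit at various distances, smoothly damped at a cutoff
# so the features vary continuously as atoms move.
import numpy as np

def radial_descriptor(positions, centres=(1.0, 2.0, 3.0), eta=4.0, cutoff=5.0):
    """positions: (n_atoms, 3). Returns (n_atoms, len(centres)) features."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)            # pairwise distances
    np.fill_diagonal(dist, cutoff + 1.0)            # exclude self-interaction
    # Smooth cosine cutoff function, zero beyond the cutoff radius.
    fc = np.where(dist < cutoff,
                  0.5 * (np.cos(np.pi * dist / cutoff) + 1.0), 0.0)
    features = [(np.exp(-eta * (dist - mu) ** 2) * fc).sum(axis=1)
                for mu in centres]
    return np.stack(features, axis=1)

atoms = np.random.default_rng(0).uniform(0, 4, size=(10, 3))
print(radial_descriptor(atoms).shape)  # (10, 3): one feature vector per atom
```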
Consider our earlier example of electrolyte solutions. Classical models struggled because they couldn’t capture how the presence of ions subtly influences the behaviour of nearby water molecules, which in turn affects other ions – a cascading series of interactions that determines everything from ion channel function to battery performance. Neural network potentials can capture these intricate quantum mechanical effects while running fast enough to simulate realistic systems over meaningful timescales.
Neural network potentials aren’t restricted to simulating every atom in a system, though. Recent exciting developments have shown that they can also be used to build coarse-grained simulations that focus only on the most important parts of the system – eg the solute atoms in a solvent or the backbone atoms of a larger molecule – as shown in Figure 1.
These simulations can access dramatically longer timescales and larger length scales than all-atom molecular simulations, and have already been used to model processes like protein folding and crystal nucleation, as sketched below.
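For concreteness, the sketch below shows just the mapping step of coarse-graining: collapsing groups of atoms into bead coordinates, here taken as centres of mass. The grouping and the numbers are invented for illustration; a coarse-grained neural network potential would then be trained to drive the dynamics of these beads directly, never touching the discarded atoms:

```python
# Minimal coarse-graining map: groups of atoms become beads located at
# the group centre of mass, reducing the number of degrees of freedom
# that the simulation (and the potential) must handle.
import numpy as np

def coarse_grain(positions, masses, groups):
    """positions: (n_atoms, 3); masses: (n_atoms,);
    groups: list of index arrays, one per bead. Returns (n_beads, 3)."""
    beads = []
    for idx in groups:
        m = masses[idx]
        beads.append((positions[idx] * m[:, None]).sum(axis=0) / m.sum())
    return np.array(beads)

rng = np.random.default_rng(1)
pos = rng.uniform(0, 10, size=(12, 3))                         # 12 atoms
mass = rng.uniform(1, 16, size=12)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]  # 3 beads
print(coarse_grain(pos, mass, groups))  # one coordinate per bead
```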
This means that, although it may take longer, these tools won’t be limited to molecular-scale processes. They may enable predictive multiscale models of macroscopic phenomena built on nothing but fundamental physical laws. Chemical engineers may soon be able to predict which crystal structures will form when cooling a complex mixture of solutes, capturing everything from the initial nucleation events to the final crystal morphology and size distribution, by using coarse-grained neural network potentials to neglect unimportant parts of the system, such as solvent molecules that do not directly participate in the key processes.
The first and most significant impact of neural network potentials will be their ability to act as a computational microscope. For too long, understanding molecular-scale processes has been like trying to identify objects in a dark room using only touch, smell, and taste. We’ve relied on indirect experimental measurements and complex interpretations, piecing together clues about what might be happening at the molecular level. Nuclear magnetic resonance signals, spectroscopic data, and vapour pressure are all valuable indicators, but ultimately indirect evidence of the molecular dance we’re trying to understand.
AI-powered molecular simulation is like finally being able to flip on the lights in that dark room. We can now directly observe the critical time and spatial scales where chemistry actually happens – watching bonds break and form, seeing how ions navigate through channels, understanding exactly how catalysts interact with their substrates. This computational microscope reveals molecular processes with unprecedented clarity, showing us not just what happens, but how and why it happens. We’re no longer making educated guesses about molecular behaviour; we’re watching it unfold before our eyes.
Crucially, this microscope can be applied to new molecules and systems without having to order in new chemicals or carry out challenging synthesis procedures – merely by changing some parameters in a digital input file.
Neural network potentials can also provide data to parameterise and validate models of larger-scale processes, for example by predicting activity coefficients and diffusivities. This capability is particularly valuable for process engineers, as accurate flow modelling and process simulation depend critically on reliable thermodynamic and transport properties.
Currently, working with new chemicals or unusual conditions often means costly and time-consuming experimental measurements to obtain these essential parameters, or making do with rough estimates. In many cases, the lack of reliable data can be a major bottleneck in process design. With neural network potentials, engineers could generate this fundamental data computationally for any chemical system and operating condition, making accurate process modelling possible even for novel compounds or extreme conditions where experimental data would be difficult or impossible to obtain. Additionally, with sufficient computational speed, these tools could enable rapid virtual screening of thousands of potential molecules or materials, automatically identifying promising candidates that meet target property requirements before any physical synthesis is required.
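As a concrete example of extracting one such transport property from simulation output, the sketch below recovers a self-diffusion coefficient from a trajectory’s mean squared displacement via the Einstein relation, D = slope(MSD(t))/6 in three dimensions. A synthetic random walk stands in here for real molecular dynamics output driven by a neural network potential:

```python
# Estimate a self-diffusion coefficient from a trajectory using the
# Einstein relation: MSD(t) ~ 6*D*t in 3D, so D is the fitted slope / 6.
# The trajectory is a synthetic Brownian random walk with known D.
import numpy as np

rng = np.random.default_rng(42)
n_steps, n_particles, dt, true_D = 2000, 100, 1.0, 0.5
# Brownian steps with variance 2*D*dt per dimension.
steps = rng.normal(0.0, np.sqrt(2 * true_D * dt),
                   size=(n_steps, n_particles, 3))
traj = np.cumsum(steps, axis=0)             # (n_steps, n_particles, 3)

disp = traj - traj[0]                       # displacement from t = 0
msd = (disp ** 2).sum(axis=2).mean(axis=1)  # MSD averaged over particles
t = np.arange(n_steps) * dt
slope = np.polyfit(t, msd, 1)[0]            # linear fit: MSD = 6*D*t
print(f"estimated D = {slope / 6:.3f} (true value {true_D})")
```

The same post-processing applies unchanged whether the trajectory comes from a cheap toy model or an overnight neural-network-potential simulation of a real electrolyte.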
The core idea of neural network potentials has been around for some time, but the field has recently experienced a seismic shift, mirroring what we’ve seen in large language models (LLMs). Hardware custom-built for AI models, combined with new architectures and larger datasets, is spurring a rapid acceleration in capabilities. Big tech firms and startups are now building massive, universal neural network potentials trained on unprecedented amounts of quantum mechanical data – models that can handle virtually any combination of elements in the periodic table with remarkable accuracy. It’s a bit like moving from having a different dictionary for each language to having a universal translator that works across all languages simultaneously, but one that can learn new dialects from just a few conversations. These universal models represent a fundamental shift from the traditional approach of building specialised potentials for specific systems. The implications are profound: chemical engineers can now tackle previously intractable problems without needing to develop system-specific models first, and can rapidly adapt these models to specific systems.
Just as in the field of LLMs, the race is on to have the most capable model, with different architectures, modelling choices, and data generation strategies being explored. For example, the company I work for, Orbital, has recently introduced Orb, an innovative model that combines diffusion models with a highly efficient architecture – achieving breakthrough accuracy with far higher speed than alternatives.
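In practice, such pretrained potentials are typically driven through a common calculator interface like that of the Atomic Simulation Environment (ASE). The sketch below uses ASE’s built-in EMT toy calculator purely as a stand-in, since exact import paths differ between models; a universal neural network potential such as Orb would be dropped in at the marked line via its own ASE-compatible calculator:

```python
# Sketch of the user-facing workflow: build a structure, attach a
# calculator, and query energies/forces or run an optimisation. ASE's
# toy EMT calculator stands in for a universal neural network potential.
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.optimize import BFGS

atoms = bulk("Cu", "fcc", a=3.6)      # a small copper crystal
atoms.calc = EMT()                     # <- swap in a universal NNP calculator here

energy = atoms.get_potential_energy()  # total energy in eV
forces = atoms.get_forces()            # forces on each atom
print(f"energy: {energy:.3f} eV")

# The same interface drives geometry optimisation and molecular dynamics.
opt = BFGS(atoms, logfile=None)
opt.run(fmax=0.01)
```

The appeal of this design is that swapping a classical force field for a state-of-the-art universal potential changes two lines of a script, not the whole workflow.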
Combining these tools with LLMs, such as chatbots, is particularly powerful, as the simulations can provide ground-truth information, even for new systems, that would be hard to obtain otherwise. Orbital is working on building agents that help automate and streamline the process of setting up, running, and analysing these simulations, to close the expertise gap and increase the scale and scope of problems they can be efficiently applied to.
The implications stretch far beyond simple optimisation of existing processes.
The future is clear: chemical engineering is becoming a computational science. Those who embrace this change will write it. Those who don’t will read about it in retirement.