David Jamieson believes AI can revolutionise process safety, but says there is still some way to go before it can be trusted for HAZOPs
IN THE whirlwind of the late 1990s dotcom boom, a bright star named Pets.com emerged. Their vision was bold: revolutionise the pet supply industry by bringing everything online. From dog food to cat toys, they aimed to deliver it all directly to pet owners’ doors.
Investors were smitten. They poured millions into the company, dazzled by its aggressive marketing and the allure of the booming e-commerce sector. In February 2000, Pets.com went public, raising a staggering US$82.5m. Their launch parties were the talk of the town, and their sock puppet mascot became an unofficial emblem of the dotcom era.
But behind the scenes, storm clouds were gathering. The company’s business model, built on offering deep discounts and often free shipping, was bleeding money. The cost of shipping bulky items like pet food quickly ate into their profits. And though they were spending lavishly on advertising, customer loyalty proved elusive.
As the dotcom bubble began to waver, Pets.com’s vulnerabilities became painfully apparent. By November 2000, a mere nine months after their glittering public offering, the company announced its closure.
The rise and fall of Pets.com is a perfect example of how hype, without substance, is unsustainable.
Like many of you, I have asked myself if the hype around artificial intelligence, specifically large language models (LLMs) such as ChatGPT, is real or if we are set for another Pets.com. When it launched in November 2022, ChatGPT was met with astonishment as computer code, articles, and social media posts could be written in seconds. It appeared to have a universal understanding of almost everything. Ask it to explain quantum physics to a ten-year-old or write a poem about climate change, and it is difficult not to be impressed with the results.
But can this technology be used for something more sophisticated than writing Facebook posts? Ultimately, we wanted to determine whether we could replicate the value-add of products such as GitHub Copilot, which uses AI to write computer code, and where nine in ten users report performance improvements, saving an average of 50% of coding time.
The team at Salus Technical and I were curious to find out. Over a five-day hackathon, our software and process safety team aimed to create a high-quality, reliable AI product. Here is our story.
The main objective was to replicate the expertise of the best engineer in the room, ensuring maximum preparedness, focus, momentum, and effective engagement.
HAZOP.AI was developed as a web application where users could input information on a HAZOP node, including the equipment, operating conditions, chemicals used, and location, mirroring the inputs typically provided in a HAZOP study. This information triggered three HAZOP pipelines to assist the attendees. The first pipeline provided a list of past incidents related to the study, along with preventive questions. The second generated questions likely to be raised during the session, acting as prompts for the HAZOP chair or assisting the organiser in ensuring all necessary information was available. The third offered a partially pre-populated HAZOP table, promoting effective thinking from the start.
To achieve this functionality, each pipeline relied on a database that helped the AI model understand how different types of equipment functioned, their potential failure modes, and the consequences. This database incorporated past HAZOPs, incident data, and best practices. Importantly, the AI model relied solely on user-entered information and the database, and did not have access to any external sources.
To guide the AI model, prompt engineering methods were employed, breaking the work into simple tasks so the model followed a focused and effective thought process. Instead of providing complex instructions, a series of prompts guided its reasoning. For instance, it listed the equipment provided by the user, summarised the chemicals used, retrieved relevant incidents from the database, identified the underlying causes for each incident, and suggested HAZOP questions for prevention. Each of these was performed as an individual step, with an independent check on the accuracy of the results returned.
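The article does not disclose how these chained prompts were implemented, but the step-by-step flow it describes can be sketched as follows. Everything here is illustrative: `llm()` stands in for a real model call (in practice an API request), and is stubbed with canned responses so the chaining of simple, single-purpose prompts can be shown end to end.

```python
# Hypothetical sketch of chained single-purpose prompts, as described
# in the text. llm() is a stub standing in for a real model API call.

def llm(prompt: str) -> str:
    """Stub model call: returns a canned answer keyed on the prompt."""
    canned = {
        "list equipment": "condensate storage tank; export pump",
        "summarise chemicals": "hydrocarbon condensate (flammable)",
        "retrieve incidents": "tank overfill during import operations",
        "identify causes": "level instrument failure; alarm ignored",
        "suggest questions": "How is high level detected and alarmed?",
    }
    for key, answer in canned.items():
        if key in prompt:
            return answer
    return ""

def run_pipeline(node_description: str) -> dict:
    """Chain simple prompts, feeding each step's output into the next."""
    equipment = llm(f"list equipment in: {node_description}")
    chemicals = llm(f"summarise chemicals in: {node_description}")
    incidents = llm(f"retrieve incidents for: {equipment} / {chemicals}")
    causes = llm(f"identify causes of: {incidents}")
    questions = llm(f"suggest questions to prevent: {causes}")
    return {"equipment": equipment, "chemicals": chemicals,
            "incidents": incidents, "causes": causes,
            "questions": questions}

result = run_pipeline("Condensate storage tank with export pump")
```

The design point is that each prompt asks for one small, checkable thing, which is what makes the independent accuracy check at each step possible.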
While HAZOP.AI offered three templates for common processes (condensate storage tank, offshore well test, and three-stage separator), users had the freedom to input their own specific HAZOP node information into the application.
Prompt engineering proved to be an effective method for enhancing the accuracy of HAZOP.AI. Over the course of the week, the quality of the results improved dramatically as the model instructions were fine-tuned. However, no matter how finely tuned the model or series of prompts, a minor degree of hallucination always remained. For example, the model might produce a failure mode that is not credible or, conversely, conclude something that, given the information provided, it should not have been able to determine.
The response to HAZOP.AI has been overwhelming, with over 70 companies scheduling demonstrations and more than 150 companies signing up for a free trial. This strong interest highlights the enormous potential and demand for an AI-powered tool that can assist in HAZOP studies.
During our testing and implementation phase, we encountered several noteworthy observations and factors to consider for further improvement.
One clear drawback that users pointed out is the reliance on text inputs. Having to type each piece of equipment into HAZOP.AI takes considerable time, is open to errors, and can be difficult to troubleshoot. Users reported that this was the single biggest blocker to using HAZOP.AI. Ideally, piping and instrumentation diagrams (P&IDs) would be scanned, but the technology to do that reliably and at scale simply isn’t available.
Although feedback following testing was largely positive, users remained sceptical, and many would only start using HAZOP.AI once they had seen evidence that it was working to the required standard across a range of applications.
We will continue to refine HAZOP.AI to address these observations and meet the expectations of our users. Prompt engineering alone cannot address all the challenges associated with LLMs. If we expect LLMs to exclusively handle all the thinking, there will always be risks of hallucinations and other performance issues. It is more reliable to task LLMs with understanding inputs, retrieving data from a database, and then presenting the answer. Combining LLMs with knowledge graph databases and vector-embedded databases will substantially improve the quality of results.
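To illustrate the vector-embedded database idea mentioned above, here is a minimal retrieval sketch. Real systems embed text with a learned model and use a dedicated vector store; the `embed()` function below is a toy bag-of-words embedding over a tiny vocabulary, chosen only so the retrieval step can be shown with no external dependencies. All names and data are hypothetical.

```python
# Toy sketch of retrieval from a vector-embedded incident database.
# embed() is a stand-in for a real embedding model.
import math
from collections import Counter

VOCAB = ["tank", "overfill", "pump", "seal", "leak", "level", "alarm"]

def embed(text: str) -> list[float]:
    """Toy embedding: word counts over a small fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# "Database" of past incidents stored alongside their embeddings.
incidents = [
    "tank overfill after level alarm failure",
    "pump seal leak during startup",
]
index = [(text, embed(text)) for text in incidents]

def retrieve(query: str) -> str:
    """Return the stored incident most similar to the query."""
    q = embed(query)
    return max(index, key=lambda item: cosine(q, item[1]))[0]
```

The point of this architecture is the division of labour the text argues for: the database, not the LLM, holds the facts, and the model's job is reduced to understanding the input and presenting what retrieval returns.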
I believe in the long-term potential of AI in process safety, and do not consider it to be mere hype. However, we must acknowledge that it is not perfect. While LLMs already find widespread use in tasks like drafting emails, social posts, and trip planning, it will take significant effort and time before we can rely on them for automating more complex problems like HAZOP studies.
Our commitment to developing HAZOP.AI remains steadfast. We plan to introduce a graphical input feature that allows users to build process flow diagrams (PFDs) using drag-and-drop functionality. Additionally, we aim to incorporate additional pipelines such as creeping change HAZOPs, chemical reactions, design code checks, and more. We are also working on implementing a chatbot that can provide information and answer questions related to the input data.
While AI cannot replace human expertise when it comes to safety, it has the potential to enhance the role of individuals to be the best HAZOP participants that they can be.
It is crucial to remember that while many companies like Pets.com failed during the dotcom bubble, successful giants like Google, Amazon, and Netflix emerged and transformed their respective sectors. Similarly, I believe that at least one AI product will emerge that revolutionises process safety.
By acknowledging the limitations of AI, combining it with human expertise, and learning from past successes and failures, we can work towards harnessing AI’s potential to make significant advancements in the field of process safety.
David Jamieson will be presenting this work at Hazards 33 alongside colleague Craig Paterson in a presentation titled “Artificial Intelligence in Safety, the Future or a Recipe for Disaster?” Find out more at https://www.icheme.org/training-events/hazards-process-safety-conference/
It’s hard to say with any certainty. The technology is developing rapidly and we have been unable to find a study focused on generative AI’s predicted impacts on chemical and process engineering.
For a wider analysis, the UN’s International Labour Organization published a study in August that concluded that only clerical workers are highly exposed to being fully replaced. For other occupations, the likely impact is the automation of some tasks, leaving time for other duties, as opposed to becoming fully automated.
It specifically lists chemical engineering technicians and plant operators as having “high augmentation potential” while “retaining an important human component”.
Read the report: https://bit.ly/45TN1Wl