Is AI a ‘Ghost in the Machine’?

Article by John Challenger FIChemE

John Challenger explores how IChemE is addressing the risks of artificial intelligence in contracts and practice, balancing opportunity with the ethical and legal safeguards engineers cannot afford to ignore

IN 1967, Arthur Koestler published The Ghost in the Machine, a psychological and philosophical work that tried to explain humanity's self-destructive tendencies, culminating in the nuclear arms race. Since then, many eminent scientists and philosophers such as Max Tegmark and Stephen Hawking have suggested that artificial intelligence (AI) may pose an even greater existential threat.

This may sound an excessive way to introduce reservations about the use of AI technology, but it reflects deep underlying concerns about the control and application of a rapidly developing technology, concerns often masked by a faith that AI will resolve all of humanity's problems.

Of great concern is the fact that the internet has evolved away from its originally intended aims and is now populated by unverified information. The lack of robust legislation and codes of practice is worrying enough, but with AI self-learning techniques that can draw on unsubstantiated information, the risks increase substantially, putting individuals, companies and even governments at considerable peril. If AI is built into an engineering project without clear disclosure from the technology provider, users may face hidden risks. New software should therefore be carefully examined to understand how it learns and mines data and, crucially, to ensure those processes have been properly validated.
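To make the point concrete, the short Python sketch below shows one way a project team might record and screen the disclosures requested from an AI technology provider before relying on its output. It is illustrative only: the field names and the pass/fail rule are hypothetical assumptions, not drawn from IChemE guidance or any published standard.

  from dataclasses import dataclass

  # Illustrative only: a minimal record of the disclosures a project team
  # might request from an AI technology provider before relying on its output.
  # The fields and the screening rule are hypothetical, not a published standard.
  @dataclass
  class AIDisclosureRecord:
      component: str                   # the AI component under review
      training_data_sources: list     # where the system learns and mines data
      provenance_verified: bool       # has the provider substantiated those sources?
      validation_evidence: str        # reference to independent validation, if any

      def passes_screen(self) -> bool:
          # A component passes only if its data provenance is verified
          # and some independent validation evidence has been supplied.
          return self.provenance_verified and bool(self.validation_evidence)

  # Example: a component trained on unverified internet data fails the screen.
  record = AIDisclosureRecord(
      component="design assistant",
      training_data_sources=["public web crawl"],
      provenance_verified=False,
      validation_evidence="",
  )
  print(record.passes_screen())  # False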

Testing the untestable

As systems grow more complex, it becomes increasingly difficult for users to assess the accuracy and reliability of the technology. System validation may ultimately require a specialised AI system to undertake a "third-party peer review", because software developed by AI may be too complex for humans to audit unaided. Established engineering practices such as hazard identification (HAZID) and hazard and operability (HAZOP) studies may also need to be reconsidered when applied to AI systems. These methods, long proven as cornerstones of safety in the chemical industry, could require thorough re-evaluation in this new context.
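Purely as a thought experiment, the Python sketch below shows how classical HAZOP guidewords might be re-read as review prompts for an AI subsystem. The deviations and questions are hypothetical examples of my own framing; they are not taken from IChemE guidance or from any published HAZOP methodology for AI.

  # Illustrative only: classical HAZOP guidewords re-interpreted as review
  # prompts for an AI subsystem. These mappings are hypothetical examples.
  GUIDEWORD_PROMPTS = {
      "NO/NONE":    "What happens if the model returns no answer or an empty result?",
      "MORE":       "Could the model extrapolate beyond the range of its training data?",
      "LESS":       "Could the model silently ignore relevant inputs or constraints?",
      "OTHER THAN": "Could the model produce a plausible but wrong value?",
      "REVERSE":    "Could a learned correlation act in the opposite direction in service?",
  }

  def hazop_worksheet(node: str) -> None:
      # Print a blank HAZOP-style worksheet for one AI "node" under study.
      print(f"Node under study: {node}")
      for guideword, prompt in GUIDEWORD_PROMPTS.items():
          print(f"  [{guideword}] {prompt}")

  hazop_worksheet("relief-sizing assistant (machine-learning model)")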

On a practical level for engineers, I consulted Paul Buckingham KC, a member of IChemE's Contracts Committee and chair of its Disputes Resolution Committee, to establish the latest thinking in the legal profession. In 2024, the Bar Council, the professional body for barristers in England and Wales, issued guidance titled Considerations when using ChatGPT and generative artificial intelligence software based on large language models.

It cautioned that legal submissions in some cases had already included AI-generated "facts" that collapsed under scrutiny, with serious consequences for justice. That prompted IChemE's Contracts Committee to consider the same risks in engineering projects. It was agreed that for all IChemE-published contracts, guidance should be incorporated to make users of AI aware of the risks and ethics associated with the technology. To strengthen the work, IChemE sought external input from the Nuffield Council on Bioethics and invited John Appleby (Lancaster University) to peer review the draft. The guidance will be included in all future contracts, but in the meantime an addendum has been issued online to make users aware of our concerns on this matter. The guidance can be found here: bit.ly/forms-of-contract-addenda

IChemE is not alone in acting. The EU Artificial Intelligence Act prohibits certain high-risk applications, including systems that manipulate decisions or exploit vulnerabilities, evaluate individuals based on social behaviour or personal traits, or predict the likelihood of criminal activity. Several UK government departments are also drafting AI guidance, while responsible AI governance, information security management and impact assessment are addressed by international standards (ISO/IEC 42001:2023, ISO/IEC 27001:2022 and ISO/IEC 42005:2025).

Opportunities alongside risks

There are clearly significant advantages in using AI to help develop potential solutions to difficult problems. One of the prominent and beneficial discoveries facilitated by deep learning has been the work carried out at MIT on new antibiotics that can kill certain drug-resistant bacteria. The compounds have been subjected to testing which has shown very low toxicity, making them good drug candidates.1

More traditional research methods might have taken years to identify new medicinal products, but MIT is understood to have identified a number of potential candidates in a relatively short timescale. Wisely, the research team has stated that optimisation and proof of efficacy and safety will be required before the regulatory authorities can approve new drugs for use.

What is unclear at present is who exactly owns the intellectual property, a matter which will, no doubt, exercise the minds of patent lawyers. European patent offices have already ruled that only humans, not AI systems, can be named as inventors.

Ethics and cooperation

Clearly AI offers the potential to resolve many intractable scientific, technical, social and commercial issues. However, history is littered with examples of developments that began with the aim of improving the human condition and understanding, only to be corrupted by uncontrolled use. In all IChemE-published contracts, one of the core principles has been cooperation and fairness:

  • The parties shall co-operate with each other in the discharge of their respective obligations under the Contract
  • The parties shall deal fairly, openly and in good faith with each other. Each party shall disclose information which the other might reasonably need in order to exercise its rights and to perform its obligations under the Contract

These are ethical principles which engineers will understand and recognise when undertaking complex process plant design and construction. But AI will not naturally adhere to the same standards. That makes it imperative for the profession to adopt a responsible stance, and for governments to legislate where necessary. Some countries are more advanced than others, but for now much depends on professional integrity and corporate governance. IChemE's decision to publish guidance reflects this reality: AI will not regulate itself, so those who use it must take the lead in ensuring its safe, ethical and transparent application.


John Challenger CEng FIChemE is an engineering and project management consultant and chairman of IChemE’s Contracts Committee


Reference

1. F Wong, E Zheng et al, "Discovery of a structural class of antibiotics with explainable deep learning", Nature, 2023: https://www.nature.com/articles/s41586-023-06887-8
