Ethics and AI: Concepts and Relevance for Chemical Engineers

Article by John McDermid OBE FREng

John McDermid considers the ethical implications of using AI in a chemical engineering setting

CHEMICAL engineers have long been concerned with safety, and many modern engineering practices, including the use of HAZOP studies and safety cases, have their roots in the industry. Safety is, of course, an ethical consideration, although not the only one. But why would chemical engineers need to consider wider ethical issues? And how might artificial intelligence (AI) come into the picture?

Safety is concerned with physical harm to individuals. Taking an ethical viewpoint, we would also consider a wider set of individual factors, including mental health, quality of life, and economic well-being. Work can have a positive impact – not just economically, but through a feeling that what is being done is worthwhile and through camaraderie with colleagues, both of which contribute to a sense of well-being. Work can also have a negative impact – stress from long hours and demanding schedules, and concerns about job security.

Ethical factors go wider still. What are the impacts of the activity on the environment? A topical concern is the contribution to global warming through emissions of greenhouse gases, but there are other effects, such as discharging effluent into rivers and oceans.

These concerns sit in a social context. There are environmental harms and individual safety risks from the extraction and processing of oil and gas – but also benefits to individuals, in terms of the ability to heat and cool their homes, to travel, and to obtain a wide range of manufactured goods, as well as benefits to society in terms of the wider economy. Thus, ethical considerations lead us to make informed trade-offs across individual, societal and environmental concerns, addressing both harms and benefits. These trade-offs should seek to achieve fairness, or justice: that those exposed to risk are not without benefit, and that no section of society is unfairly exposed to risks that others do not have to bear.

But what about AI? At its simplest, the use of AI, for example in process automation, can alter the risk-benefit trade-offs. Does automating well-head management, or introducing so-called autonomous vehicles (AVs) that use AI for perception and navigation, improve matters overall or make them worse? This remains a trade-off: removing workers from harm’s way through automation (AI-based or not) reduces, or even removes, safety risks, but may impair their livelihoods. But this is really the effect of automation itself – the use of AI in such systems raises deeper questions, which depend on the nature of those systems.

One of the powers of AI is that it learns from training data – and what it has learnt can be used in different situations. So, an AI-based system might learn how to optimise a chemical process, thereby reducing the energy consumption and environmental impact without degrading the quality of the end-product. But the AI algorithms are (generally) opaque – we can’t inspect them to see how they work. So, if we do move a successful algorithm from the site where it was developed to control a new plant, how do we know it will give similar benefits? How do we know that it won’t exacerbate hazard risks (or even cause new ones) because it is not well suited to the different plant configuration?

What if a commercially available AV is introduced onto a chemical process plant to transport goods around the site? As well as the likely impact on jobs, what about the risks to other workers on the site? The AV will have been trained on image data to recognise pedestrians, to predict their motion, and thus to decide on a course (trajectory) that avoids them. How will object detection and avoidance cope with workers in hi-viz clothing? This may seem an odd question – but what is highly visible to people is not necessarily easy for a machine to detect. It is unlikely that such clothing will have been in the initial training data set. And how would we know whether it was, and whether the algorithms work well regardless?

One approach to answering these questions is through extending the idea of safety cases to ethics assurance arguments.1 Such arguments address benefits and harms and support transparency so that, for example, we would know whether the AI algorithms had been trained on appropriate data (eg people wearing hi-viz clothing) and are likely to be safe in the intended operating environment. The approach also gives us a basis for reasoning about human autonomy – the ability to have meaningful control over the use of such technologies – and about overall fairness, in terms of the distribution of benefits and risks. While the concepts are quite abstract, they can be made concrete by reasoning about specific situations and discussing the advantages and disadvantages of different design alternatives.2


AI can be used “standalone” as well as embedded in physical systems. Here, generative AI might be used to produce safety documentation. In experiments by the author, ChatGPT produced quite a good procedure for isolating a valve – the benefit being that it has been trained on the many such procedures available online. However, for the use of such a procedure to be ethical, it should be checked by a competent engineer to ensure it is consistent with the operating procedures on the site and the specific regulations in the country – or perhaps it would be better to use the ChatGPT-generated procedure only as a checklist. More worryingly, when ChatGPT was asked to produce a RIDDOR (Reporting of Injuries, Diseases and Dangerous Occurrences Regulations) report, it made up information about the cause of the accident and the remedial action that had been taken, even though this information was not in the prompt. Using generative AI in this way could mean that the opportunity to learn from the incident is missed, and that claimed remedial actions are never implemented.

In general, when considering the use of AI, engineers should1:

  • identify the benefits and the beneficiaries
  • understand the harms and identify the risk-affected, taking a broad view of risks
  • ensure meaningful human control over the standalone or embedded AI system
  • ensure transparency of the AI system itself and the assurance evidence
  • seek to balance risks and benefits across the beneficiaries and the risk-affected so that, for example, all risk-affected individuals also receive some benefit, thereby ensuring fairness

While not simple to implement, the above gives a structured set of principles for the ethical introduction of AI.

References

1. Zoe Porter, Ibrahim Habli, John McDermid and Marten Kaas, A principles-based ethics assurance argument pattern for AI and autonomous systems, AI and Ethics, pp1–24, 2023.
2. John McDermid, Simon Burton and Zoe Porter, Safe, ethical and sustainable: framing the argument, in The Future of Safe Systems: Proceedings of the 31st Safety-Critical Systems Symposium, pp297–316, Safety-Critical Systems Club, 2023.
3. For more details, see: https://nationalpreparednesscommission.uk/2023/07/safety-of-artificial-intelligence-prelude-principles-and-pragmatics/

Article by John McDermid OBE FREng

Professor of Software Engineering at the University of York, UK, and a non-executive director at the Health and Safety Executive
