Navigating the AI Frontier: Bias, Ethics, and the Vital Collaboration Between Engineers and Policymakers

Article by Graham Herries CEng FIET

Graham Herries on the guardrails that must be established to ensure the fair and responsible integration of AI into our society

THE PERFORMANCE of AI-powered technologies is doubling every six to 12 months. Right now, ChatGPT can do a reasonable, if imperfect, imitation of an engineering student in an exam setting – as long as maths is not involved. The next generation of sector-specific large language models (LLMs) with added functions will be increasingly difficult to distinguish from the real thing. This is the point at which critical contribution and assessment by engineers is required, to ensure that the generalised treatment of bias and ethics in policy development does not leave significant threats to the more technical applications of AI unaddressed.

Bias in AI: The unintended consequences

Bias in AI is a glaring issue that has drawn widespread attention and, often, broad generalisation. AI systems learn from data, and when this data is skewed or contains historical biases, AI models can perpetuate and even exacerbate those biases. This can lead to unfair and discriminatory outcomes in areas such as hiring, lending, and law enforcement.
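
As a minimal sketch of how this happens, consider the following Python example. The data, feature names, and figures are entirely hypothetical, invented purely for illustration: even when a protected attribute is excluded from training, a model can reproduce historical bias through a correlated proxy feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical hiring data: labels reflect historically biased decisions.
rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)                  # genuine qualification signal
postcode = group + rng.normal(0, 0.3, n)     # proxy feature correlated with group

# Historical decisions favoured group 1 regardless of skill.
hired = (skill + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

# The protected attribute is deliberately left out of the features,
# yet the model learns the bias through the correlated proxy.
X = np.column_stack([skill, postcode])
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"Predicted hire rate, group {g}: {rate:.2f}")
```

Running this shows markedly different predicted hire rates for the two groups, even though the model never saw the group label: the skew in the historical data has been baked in.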

In an applied engineering context, we must recognise that AI systems are founded on mathematical and statistical principles, and that the quality of training data is of fundamental importance in minimising bias. This involves diverse and representative data collection, along with ongoing testing and validation, to ensure that AI models do not discriminate against any group or data category. The remarkable capability of AI systems to approximate from a small training dataset can open huge opportunities to address unforeseen situations. But this capability also poses a significant threat if what is being approximated is not correctly understood. For instance, imagine the safety issues involved if the stiffness of a material, given by Young's modulus, were approximated rather than calculated. To address bias in AI, engineers must not only acknowledge its existence but actively work to mitigate it.
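
To make the Young's modulus point concrete, here is a minimal sketch using invented stress-strain figures for a notional steel sample. A data-driven model fitted only within the elastic region will happily extrapolate into the plastic region, where Hooke's law no longer holds, and report a stress value that is physically wrong:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical, noisy stress-strain measurements for a notional steel
# sample within its elastic region (true Young's modulus ~200 GPa).
rng = np.random.default_rng(42)
strain = np.linspace(0.0005, 0.002, 20)            # dimensionless
stress = 200e9 * strain + rng.normal(0, 2e6, 20)   # Pa

# Calculation: Young's modulus is the slope of the elastic region
# (Hooke's law, sigma = E * epsilon).
E = np.polyfit(strain, stress, 1)[0]
print(f"Calculated E: {E / 1e9:.0f} GPa")

# Approximation: a model trained only on the narrow elastic range.
model = LinearRegression().fit(strain.reshape(-1, 1), stress)

# Query far outside the training distribution. Beyond the yield point
# the material behaves plastically, but the model extrapolates the
# elastic relationship regardless, giving no warning that it has left
# the regime it understands.
predicted = model.predict(np.array([[0.05]]))[0]   # 5% strain
print(f"Extrapolated stress at 5% strain: {predicted / 1e6:.0f} MPa")
```

The engineer knows the second answer is meaningless; the model does not. That gap between statistical approximation and physical understanding is precisely where unrecognised risk accumulates.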

Ethics in AI: The moral imperative

Ethics are the moral compass that guides AI development. As AI systems become increasingly integrated into our daily lives, ethical considerations become paramount. Issues such as privacy, consent, and the impact of AI on society must be carefully considered.

Engineers must adhere to ethical principles when designing AI systems, prioritising human well-being and avoiding harm. This involves a commitment to transparency, fairness, and accountability, and to addressing the potential misuse of AI technologies. DeepMind's AlphaFold AI has made significant advances in the field of protein folding, but should ethical constraints be applied to minimise the threat that artificially created proteins and compounds pose to biological life and the environment? Policymakers, in turn, must establish ethical guidelines and frameworks to ensure that AI development aligns with societal values and norms.

Guardrails: Navigating the AI frontier

Guardrails are essential to prevent AI from careering out of control. They serve as a protective mechanism, ensuring that AI systems are developed and deployed responsibly and safely. These guardrails can take the form of regulations, industry standards, and best practices.

Engineers play a pivotal role in establishing these guardrails by adhering to strict ethical guidelines and proactively seeking ways to make AI systems safer and more reliable. However, they cannot do it alone. Policymakers must work hand in hand with engineers to create a regulatory environment that encourages responsible AI development while discouraging reckless experimentation and exploitation.

The crucial collaboration: Engineers and policymakers

Engineers are the architects of AI systems, responsible for designing and building these powerful tools. However, without independent ethical guidance and regulatory oversight, the potential for unintended consequences and misuse looms large.

Policymakers, on the other hand, provide the necessary framework to ensure that AI benefits all of society. They set the rules of the game, establish accountability mechanisms, and protect the rights and interests of citizens. Without effective policies, AI development can become a race in which corners are cut and ethics are compromised in the pursuit of technological dominance.

The interaction between engineers and policymakers is a dynamic one. Engineers need to engage with policymakers to understand the societal impact of their creations, while policymakers must collaborate with engineers to craft regulations that are both effective and practical. This partnership ensures that AI development is not only cutting-edge but also ethically sound and accountable.

A good example of where this has worked well is the development of PAS (publicly available specification) pre-standards with the British Standards Institution (BSI). One such is PAS 1192, which focused on Building Information Modelling (BIM) standards for the built environment and benefited from significant engineering co-authoring, input, and review.

To date, the governmental AI focus in the UK has drawn heavily on startups, academia, and policy advisors, with very little applied engineering input. That balance now appears to be shifting towards commercially interested large tech businesses, and we should ensure the voices of AI engineering developers and engineering practitioners are heard, perhaps through the membership of professional engineering institutions such as IChemE and the Institution of Engineering and Technology.

We must also work to prevent large tech businesses from constraining innovation through the creation of heavyweight regulation and certification, which could become prohibitively expensive for smaller AI developer and adopter businesses. The UK has a long history of regulatory development, and we must ensure the voice of engineers is taken into account in this key next phase of AI regulation, to secure a future where innovation and ethics walk hand in hand.

AI in pharma

AI for generative design

Life sciences and the chemical industries have begun using generative AI to accelerate the development of new drugs and materials. In a report published in July, McKinsey pointed to biotech company Entos, which has combined generative AI with automated synthetic development tools to design small-molecule drugs. It notes that the same principles can be applied to large-scale physical products: https://bit.ly/3PoHIXQ

AI suggests synthesis routes

Xia Ning and colleagues at Ohio State University have developed a generative AI method called G2Retro that has been trained on 40,000 chemical reactions to suggest synthesis routes for drugs. The model accurately predicted synthesis routes for established drugs and suggested alternative options: https://doi.org/gr9ttv

Graham Herries CEng FIET is Chair of the Innovation and Skills Panel at the Institution of Engineering and Technology
