Unpacking the Hype Around AI

Stuart Prescott provides some background to the emergence of AI as we kick off our look at how generative AI can shape the future of chemical engineering

ARTIFICIAL INTELLIGENCE (AI) tools have been available in specialist forms for many years, but they were limited in scope. The programmer’s toolbox included utilities that each performed a single task on text (eg sentiment analysis, keyword extraction), images (classification, feature extraction), or data (black-box prediction). When linked together, these simple tools could perform more sophisticated tasks, such as harvesting social media data to track reaction to a marketing campaign.

As end-users, we interacted with AI somewhat differently. The most common AI tools we encountered were the chatbots that attempted to provide customer helpdesk functions for anything from banking to hardware supply. These were still highly specialised tools, trained on a bespoke body of documentation suited to the deployment, and they were frequently frustrating.

ChatGPT captures the imagination

The chemical engineer might recognise those programmers’ tools as unit operations that do only one thing and do it well. They require a specialist to design, deploy, optimise, and maintain them. They transform input into output, usually in a reductive manner such that there is less output than input, and they can be combined into larger processes.

What made tools such as ChatGPT, Galactica, and Bard stand out when they stormed the public consciousness in late 2022 and early 2023 was that they were general tools that could be applied to seemingly any task. These new tools were also generative: while a sentiment analysis tool reduces a sentence of text to a single binary assessment (eg like/dislike), the output generated by ChatGPT can be vastly larger than the input prompt given by the user. Likewise, image generation tools such as DALL-E, Midjourney, and Stable Diffusion turned short text prompts into entire images.

Robots have hallucinations too

One feature that quickly became evident is that generative AI tools are built on the statistics of word correlations and do not have discipline knowledge as we would describe it. The large language models (LLMs) underneath them are only statistical models of how groups of words frequently appear together. These tools are not like a smart friend: they know nothing other than how words are often grouped, and they most certainly do not understand anything you say to them, or anything they say in reply.

As the AI is based on word statistics, not facts, the text generated can be a wildly incorrect mishmash of concepts. While the creators of the tools call this phenomenon “hallucination”, critics have dubbed the models themselves “stochastic parrots”. Text generated by a hallucinating LLM still sounds confident, and we are not always good at seeing through that. The equivalent problems in AI image generators are easier to spot: we quickly notice that people have strangely distorted features and signs bear odd symbols, even when the rest of the image looks well polished.
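To see how word statistics alone can produce fluent text, consider the toy Python sketch below. It is purely illustrative and vastly simpler than any real LLM: it records which word followed which in a tiny training text, then chains statistically plausible next words together. Nothing in the process checks whether the result is true.

import random
from collections import defaultdict

# Toy training text; a real LLM is trained on billions of documents
text = ("the reactor feed enters the heat exchanger and "
        "the heat exchanger warms the reactor feed")

# Record which words have been seen following each word
follows = defaultdict(list)
words = text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Generate text by repeatedly picking a statistically plausible next word
word = "the"
generated = [word]
for _ in range(8):
    word = random.choice(follows[word])
    generated.append(word)

print(" ".join(generated))
# eg "the heat exchanger warms the reactor feed enters the"
# -- fluent-sounding, yet nothing checks whether it is true

The same mechanism, scaled up enormously and with far richer statistics, is what lets an LLM write convincing prose while remaining entirely indifferent to its accuracy.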

Professional interactions with AI tools

As AI utilities become part of our professional lives, we must take responsibility for their output in the same way that we take professional responsibility for the output of a calculator, spreadsheet, or multiphysics model. Excellent knowledge of the subject matter will still be needed; otherwise the user will blindly believe things that are untrue.

In the following set of articles, authors from across the profession explore some of the strengths and weaknesses of existing AI tools in professional practice, safety, ethics, and education.

AI Glossary

Generative AI: a type of artificial intelligence that generates new outputs, in contrast to systems that perform functions such as classifying data, grouping data, and choosing actions. Its outputs, which include text, images, and audio, are based on the data on which it was trained.

Large language model (LLM): a model trained on huge amounts of data to carry out language-related tasks. Following a prompt from a user, LLM chatbots such as ChatGPT work by predicting the next word in a sequence. They can be prompted for summarisation, problem-solving, and calculations, though the results can be unreliable and need verifying.

Prompt: the natural-language question or instruction, typically typed into a chatbox, that tells the AI what task to perform.

Prompt engineering: the technique of crafting and refining prompts to improve the output of generative AI, for example by adding context, specifying the audience, or requesting a particular output format.

Article by Stuart W Prescott

Deputy head of school at the School of Chemical Engineering, UNSW Sydney, Australia
