Stuart Prescott explores the use of AI tools to support students practising and developing their skills in self-assessment and in giving feedback within a project team
ENGINEERING degrees start trainee engineers on the path of developing reflective practice. Educators have long known that students can struggle with these “soft” skills, where advice and feelings intersect with their more familiar engineering knowledge. Significant cultural and emotional hurdles sometimes need to be overcome: admitting weaknesses, or offering well-meaning criticism to others, can be confronting. A scaffolded approach slowly builds these skills throughout the degree.
In our classes on chemical product engineering, [associate professor] Patrick Spicer and I bring together students with food science, product engineering, and process engineering backgrounds. Working in project teams, the students progress towards three milestones: (a) a literature-based summary of the nano- and micro-structures in the product; (b) literature and experimental descriptions of the flow behaviour of the product; and (c) connections between microstructure, flow, and product performance from the consumer’s perspective.1
For each of these milestones, the project teams create a short video to present their findings. The students watch the videos of other groups to learn about a broader range of products, to provide additional feedback to each team, and to develop their reflective practice.
Our class contains students from different degrees, stages of study, and educational backgrounds. These diverse teams are excellent for the technical parts of the project work. For the individual steps of feedback and reflection, however, the students arrive with very different levels of preparation, and many are surprisingly unprepared.
Over the years, we have made many attempts to instruct students about the expectations for giving peer feedback, offering suggestions on content, length, and style. It will surprise few educators that we have had little success in this effort. Students would often only summarise the video and state something good about it; meaningful suggestions on how the work could be improved were rare, and the responses were almost always too short to contain useful feedback.
A strength of large language models (LLMs) is generating text in a given style; they can also offer stylistic suggestions about a piece of text. Importantly, we are not asking the LLM to produce technically correct text, only to evaluate the style. This plays to the models’ strengths while sidestepping their potential weaknesses of “hallucinations” and factual errors.
The “feedback sandwich” is a common format: say something good about the work, provide constructive criticism on how it could be improved, and finish with a positive summary or another positive aspect.2
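To make this concrete, the sandwich format can be written out as explicit instructions for an LLM to check a draft against. The wording below is an illustrative sketch, not the exact prompt we used in class:

```python
# Illustrative wording only: one way to phrase the feedback-sandwich
# expectations as instructions for an LLM "trainer".
FEEDBACK_SANDWICH_RUBRIC = """\
You will be shown a student's draft peer feedback on another team's video.
Comment ONLY on style, structure, and length -- never on technical accuracy.
Check that the draft:
1. opens with a specific, genuine positive about the work;
2. offers constructive criticism with concrete suggestions for improvement;
3. closes with a positive summary or another positive aspect.
Point out which parts are missing or too vague, and suggest how the student
could improve the draft.
"""
```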
In this investigation into the use of LLMs as a training assistant, we modified the online submission tool that collects the peer feedback, inserting an extra section above the “submit” button. Our addition invited students to get some comments on their draft feedback from an “AI trainer”. After receiving the comments, students could edit their feedback and get further comments from the AI trainer before submitting their work. The blue loop in Figure 1 shows how we added the AI trainer to the peer feedback process.
The algorithm for the AI trainer was relatively simple: take the student’s draft feedback, ask the LLM to assess it against our style expectations, and return those comments to the student for another round of editing.
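A minimal sketch of that step, assuming an OpenAI-style chat API (the openai Python client) and reusing the rubric string above, might look like the following. The model name and function are illustrative choices, not our classroom implementation:

```python
# A sketch of the trainer call, assuming the OpenAI Python client (v1) and
# the FEEDBACK_SANDWICH_RUBRIC string sketched earlier.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def trainer_comments(draft_feedback: str) -> str:
    """Return stylistic comments on a student's draft peer feedback."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative: any capable chat model would do
        messages=[
            {"role": "system", "content": FEEDBACK_SANDWICH_RUBRIC},
            {"role": "user", "content": draft_feedback},
        ],
    )
    return response.choices[0].message.content
```

Because the prompt asks only about structure, tone, and length, a wrong “fact” from the model is unlikely to mislead the student: the draft’s technical content remains entirely the student’s own.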