Putting the AI in Feedback TrAIning

Article by Stuart W Prescott

Stuart Prescott explores the use of AI tools to support students in practising and developing their skills in self-assessment and in providing feedback within a project team

Quick read

  • Development of Reflective Practice: Engineering education emphasises building reflective practices in students, which can be challenging due to cultural and emotional barriers, necessitating a scaffolded approach throughout their studies
  • AI-Assisted Peer Feedback: Implementing an AI trainer to provide personalised critiques of peer feedback significantly improved the quality and quantity of peer feedback, allowing students to learn the “feedback sandwich” technique effectively
  • Scalable Educational Support: Generative AI can enhance educational practices by offering scalable, personalised training and feedback, complementing traditional teaching methods and enabling instructors to manage larger volumes of student work efficiently

ENGINEERING degrees start trainee engineers on the path of developing reflective practice. Educators have long known that students can struggle with these “soft” skills, where advice and feelings intersect with their more familiar engineering knowledge. Significant cultural and emotional hurdles sometimes need to be overcome, and there are potentially confronting situations in admitting weaknesses or offering well-meaning criticism to others. A scaffolded approach slowly builds these skills throughout the degree.

Practising critique of project outputs

In our classes on chemical product engineering, [associate professor] Patrick Spicer and I bring together students with food science, product engineering, and process engineering backgrounds. Working in project teams, the students progress through three milestones: (a) a literature-based summary of nano- and micro-structures in the product; (b) literature and experimental descriptions of the flow behaviour of the product; (c) connections between microstructure, flow, and product performance from the consumer perspective.1

For each of these milestones, the project teams create a short video to present their findings. The students watch the videos of other groups to learn about a broader range of products, to provide additional feedback to each team, and to develop their reflective practice.

Our class contains students from different degrees, stages of study, and educational backgrounds. These diverse teams are excellent for the technical parts of the project work. However, for the individual steps of feedback and reflection, the students arrive with very different levels of preparation, and many are surprisingly unprepared.

Writing to a genre

Over the years, we had made many attempts to instruct students about the expectations for giving peer feedback, offering suggestions on content, length, and style. It will surprise few educators that we have had little success in this effort. Students would often only summarise the video and state something good about it; meaningful suggestions on how the work could be improved were rare. The responses were almost always too short to contain meaningful feedback.

A strength of AI Large Language Models (LLMs) is generating text in a given style. LLMs can also offer stylistic suggestions about a piece of text. Importantly, we’re not asking the LLMs to produce technically correct text, only to evaluate the style. This plays to their strengths while avoiding their potential weaknesses: “hallucinations” and factual errors.

A matter of style

The “feedback sandwich” is a common format: say something that is good, provide constructive criticism about how to improve, finish with a positive summary or another positive aspect.2

In this investigation into the use of LLMs as a training assistant, we modified the online submission tool that collected the peer feedback to insert an extra section above the “submit” button. Our addition invited students to get some comments on their draft feedback from an “AI trainer”. After receiving the comments, students could edit their feedback and get further comments from the AI trainer before submitting their work. The blue loop in Figure 1 shows how we added the AI trainer to the peer feedback process.

The algorithm for the AI trainer was relatively simple:

  • If the feedback written by the student was too short (< 200 characters), the student was told to write more, with a suggestion to use the feedback sandwich format. This check was done without calling the LLM
  • For longer feedback, the LLM was asked to critique the feedback for the student. The prompt asked the LLM to write about 200 words that started with what the student had done well, offered suggestions on what was missing or how to improve the specificity of what they had written, and finished on a positive note about how to improve
Figure 1: Flow for peer assessment of student videos with AI trainer shown in blue. Students can get help from the AI trainer on whether their written peer feedback or self-reflection is meeting stylistic expectations prior to submission
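The two-branch routing described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ actual implementation: the `trainer_comment` function, the prompt wording, and the injected `ask_llm` callable are all assumptions made for the sake of the example; the 200-character threshold and the shape of the prompt follow the description in the text.

```python
# Sketch of the AI trainer's routing logic: short drafts get a canned
# hint (no LLM call); longer drafts are sent to the LLM for critique.
# The LLM call is injected as a callable so the logic is testable
# without any model behind it.

MIN_LENGTH = 200  # characters, per the rule described in the article

SANDWICH_HINT = (
    "Your feedback is too short to be useful. Try the 'feedback "
    "sandwich': note something done well, suggest a specific "
    "improvement, then close on a positive point."
)

TRAINER_PROMPT = (
    "In about 200 words, critique the following peer feedback. Start "
    "with what the student has done well, suggest what is missing or "
    "how the feedback could be more specific, and finish on a positive "
    "note about how to improve.\n\nFeedback:\n{draft}"
)


def trainer_comment(draft: str, ask_llm) -> str:
    """Return a trainer comment on the student's draft peer feedback."""
    if len(draft) < MIN_LENGTH:
        # Too short: respond immediately without using the LLM.
        return SANDWICH_HINT
    # Long enough: ask the LLM for a stylistic critique of the draft.
    return ask_llm(TRAINER_PROMPT.format(draft=draft))
```

In a deployment, `ask_llm` would wrap a call to whichever chat-completion API the submission tool uses; the sketch deliberately leaves that binding open.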

Stylistic success stories

Stuart W Prescott is deputy head of school at the School of Chemical Engineering, UNSW Sydney, Australia
