TRANSFORMING FEEDBACK WITH AI: A SMARTER WAY TO SUPPORT STUDENT LEARNING
N. Anderson, J. Browning, D. Stewart, A. McGowan, L. Galway
Queen's University Belfast (UNITED KINGDOM)
Providing high-quality, learner-friendly feedback is a cornerstone of effective education, but it often demands significant time and effort from academic staff. To address this challenge, we developed a simple AI-powered feedback tool that transforms raw feedback points into polished, student-focused text. The tool allows markers to select pre-written feedback statements tailored to common performance areas, which are then restructured by AI into cohesive, professional feedback that is clear, actionable, and engaging.
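
As a purely illustrative sketch (the abstract does not name a model, provider or prompt), the polishing step can be pictured as sending the marker's selected statements to a large language model with an instruction to rewrite them as one cohesive, student-friendly paragraph. The function, model name and prompt wording below are assumptions rather than the tool's actual implementation.

    # Hypothetical sketch of the polishing step; the model and prompt are
    # illustrative assumptions, not the authors' implementation.
    from openai import OpenAI

    client = OpenAI()  # assumes an API key is available in the environment

    def polish_feedback(selected_points: list[str]) -> str:
        """Turn raw, pre-written feedback points into one cohesive paragraph."""
        prompt = (
            "Rewrite the following marking notes as clear, actionable, "
            "student-friendly feedback, avoiding repetitive language:\n- "
            + "\n- ".join(selected_points)
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

In the tool itself the generated text remains subject to the marker's review, in keeping with the balance between automation and human judgment described below.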

Furthermore, the tool includes a scoring mechanism in which each feedback statement is assigned a score reflecting its quality or relevance. These scores are aggregated to calculate a proposed average score for each section of the marking rubric. This feature promotes consistency and fairness across evaluations while enabling educators to focus on the qualitative aspects of assessment. By aligning scores with feedback, the tool provides transparency and coherence between student outcomes and marking criteria.
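
A minimal sketch of this aggregation, assuming each pre-written statement carries a numeric score and belongs to a single rubric section (the data structure, section names and example values are hypothetical):

    # Minimal sketch of the score aggregation described above; the data
    # structure and example values are illustrative assumptions.
    from collections import defaultdict
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class FeedbackStatement:
        section: str   # rubric section, e.g. "Technical performance"
        text: str      # the pre-written feedback point
        score: float   # score attached to the statement

    def proposed_section_scores(selected: list[FeedbackStatement]) -> dict[str, float]:
        """Average the scores of the selected statements within each rubric section."""
        by_section: dict[str, list[float]] = defaultdict(list)
        for stmt in selected:
            by_section[stmt.section].append(stmt.score)
        return {section: round(mean(scores), 1) for section, scores in by_section.items()}

    chosen = [
        FeedbackStatement("Technical performance", "Good use of unit tests.", 72.0),
        FeedbackStatement("Technical performance", "Error handling is incomplete.", 58.0),
    ]
    print(proposed_section_scores(chosen))  # {'Technical performance': 65.0}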

Research highlights the importance of well-constructed, actionable feedback in improving students’ engagement with their learning and promoting a sense of fairness in assessments. This tool addresses these needs while also significantly reducing the time spent on marking, allowing academic staff to focus on other essential teaching tasks. By automating the polishing of feedback and embedding consistent scoring, the system bridges the gap between efficiency and personalization.

The system integrates seamlessly into existing marking workflows, offering a library of customizable feedback points across multiple categories, such as introductions, technical performance, and areas for improvement. These categories are designed to align with marking rubrics, so the tool can be adapted to any assessed course. The AI component ensures the final feedback is well-structured and free of repetitive language, resulting in a more professional and thoughtful tone. Staff also retain the flexibility to add custom feedback, balancing automation with the nuances of individual student assessments.
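
For illustration only, the statement library can be pictured as a mapping from rubric-aligned categories to reusable statements, with any marker-written custom comments merged in before the text is polished; the category names and helper below are hypothetical.

    # Hypothetical sketch of the customizable statement library; category names
    # and contents are illustrative, not taken from the tool itself.
    FEEDBACK_LIBRARY: dict[str, list[str]] = {
        "Introduction": [
            "The report opens with a clear statement of aims.",
            "The introduction would benefit from more context on the problem.",
        ],
        "Technical performance": [
            "The implementation is well structured and tested.",
            "Several edge cases are not handled.",
        ],
        "Areas for improvement": [
            "Referencing should follow the required citation style.",
        ],
    }

    def gather_points(selections: dict[str, list[int]], custom: list[str]) -> list[str]:
        """Collect the chosen library statements plus any free-text custom comments."""
        points = [FEEDBACK_LIBRARY[cat][i] for cat, idxs in selections.items() for i in idxs]
        return points + custom  # custom comments preserve marker flexibility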

Initial testing has shown high satisfaction rates, with users reporting improved marking efficiency without compromising academic rigor. By combining structured feedback selection, scoring functionality, and AI-generated refinement, the tool ensures a balance between automation and human judgment. This approach preserves our ability to provide meaningful assessments while maintaining clarity, consistency, and alignment with pedagogical best practices.

This paper outlines the development process, functionality, and initial outcomes of the tool, showcasing its ability to enhance both the quality and efficiency of feedback. Future iterations will focus on expanding its adaptability to other disciplines, improving its scoring and analytics capabilities, and exploring its impact on student outcomes and satisfaction. By integrating advanced AI capabilities with a user-friendly interface, this tool represents a significant step forward in improving the feedback and assessment process for both academics and learners.

Keywords: AI-powered feedback, Student assessment, Feedback automation, Learning engagement.

Event: EDULEARN25
Session: Personalized Learning
Session time: Monday, 30th of June from 17:15 to 19:00
Session type: ORAL