A. Sharma, S. Sharifi
Feedback is widely recognised as one of the most powerful drivers of student learning, yet its implementation in higher education often falls short of its potential. Students frequently receive vague or poorly timed comments, while educators face increasing assessment loads and institutional demands for efficiency. This project introduces NEXUS, a feedback system that uses Large Language Models (LLMs) to generate, structure, and visualise feedback in ways that explicitly support feedback literacy: the ability to interpret, act on, and reflect meaningfully on feedback.
Grounded in feedback literacy theory, which centres on appreciating feedback, making judgements, managing affect, and taking action, the study explores how feedback can shift from isolated remarks to sustained learning dialogue. The research followed three design and evaluation phases.
In Phase 1, a structured feedback format using Strength, Weakness, and Next Steps headings was developed through prompt engineering. A blind preference study with final-year engineering students showed that 100 percent preferred LLM-generated structured feedback over unstructured human comments. Participants cited improved clarity, alignment with criteria, and greater usability, all of which support decoding and actionable use. They also found the structured format less discouraging and easier to engage with, especially when presented separately from grades, reinforcing the emotional and motivational dimensions of feedback literacy.
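The structured format from Phase 1 can be sketched as a simple prompt template. The function name, heading markup, and criteria argument below are illustrative assumptions, not the project's actual implementation:

```python
# Illustrative sketch of a prompt template for structured feedback.
# The headings mirror the Strength / Weakness / Next Steps format
# described above; everything else here is a hypothetical example.

HEADINGS = ("Strength", "Weakness", "Next Steps")

def build_feedback_prompt(submission: str, criteria: list[str]) -> str:
    """Assemble an LLM prompt that requests feedback under fixed headings."""
    criteria_block = "\n".join(f"- {c}" for c in criteria)
    heading_block = "\n".join(f"## {h}" for h in HEADINGS)
    return (
        "You are a tutor. Give feedback on the submission below, "
        "organised strictly under these headings:\n"
        f"{heading_block}\n\n"
        f"Assessment criteria:\n{criteria_block}\n\n"
        f"Submission:\n{submission}"
    )
```

Fixing the headings in the prompt, rather than asking the model for free-form comments, is what makes the output consistent enough to compare against unstructured human feedback in a blind study.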
Phase 2 examined educator trust and workflow. Tutors from multiple programmes trialled a low-fidelity dashboard that generated LLM-based draft feedback in under two minutes. While some raised concerns about domain nuance and voice, 71 percent considered the tool a useful scaffold for drafting. The structured templates and inline rubric tagging were especially valued, offering consistent models that support novice markers and reduce cognitive load. These features contribute to more equitable and reliable development of feedback literacy across student cohorts.
Phase 3 focused on addressing the fragmentation of feedback across modules through a student-facing dashboard. This interface aggregated historical feedback, summarised recurring strengths and weaknesses, and proposed actionable next steps. Post-trial surveys showed strong effects on student attitudes: 75 percent said the system helped them focus more on learning than grades, while 85 percent found it useful for academic planning. Students described the dashboard as a "memory prosthetic" that enabled them to see feedback as a longitudinal narrative, a critical factor in fostering self-regulated learning and feedback literacy over time.
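The Phase 3 summarisation of recurring strengths and weaknesses can be sketched as a frequency count over tagged feedback entries. The data shape, field names, and module codes below are assumptions for illustration only:

```python
# Hypothetical sketch: surface recurring themes across a student's
# feedback history. Each entry is assumed to carry lists of tagged
# strengths and weaknesses; the real system's schema may differ.

from collections import Counter

def recurring_themes(entries: list[dict], key: str, top_n: int = 3) -> list[tuple[str, int]]:
    """Return the top_n most frequent themes under `key` across all entries."""
    counts = Counter(theme for entry in entries for theme in entry.get(key, []))
    return counts.most_common(top_n)

# Example feedback history spanning three modules (invented data).
history = [
    {"module": "EE301", "strengths": ["clear structure"], "weaknesses": ["weak referencing"]},
    {"module": "EE305", "strengths": ["clear structure"], "weaknesses": ["shallow analysis"]},
    {"module": "EE310", "strengths": ["good diagrams"], "weaknesses": ["weak referencing"]},
]
```

A count like this is the minimal version of the "longitudinal narrative" students described: themes that recur across modules rise to the top, while one-off comments fall away.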
Overall, the study demonstrates that carefully designed AI systems can act as scalable scaffolds for feedback literacy rather than simply automating assessment comments. NEXUS shows how LLMs, when aligned with pedagogical goals, can improve clarity, reduce marking effort, and support reflective practice. The system provides a replicable model for embedding ethical and pedagogically grounded AI tools into feedback processes, promoting a shift from marking to mentoring in contemporary higher education.
Keywords: Feedback Literacy, AI in Education, Learning Technologies.