PERSONALIZING FORMATIVE FEEDBACK THROUGH GENERATIVE ARTIFICIAL INTELLIGENCE: AN EXPERIMENT IN HIGHER EDUCATION
H. Akhasbi1, N. El Kamoun1, F. Lakrami1, J.L. Gilles2, J.M. Rigo3, E. Aliss4
In higher education, providing personalized formative feedback remains challenging. Generative Artificial Intelligence (GAI) offers an innovative way to enhance the quality and relevance of student feedback. Formative feedback is crucial for fostering student autonomy and self-regulated learning, yet tailoring feedback to each student's individual needs creates significant workload pressures for educators, especially in large-scale contexts.
Generative AI models, particularly Large Language Models (LLMs) like ChatGPT, enable automatic generation of detailed, contextually relevant feedback aligned with educational goals. By analyzing students' responses, these tools effectively identify strengths and areas for improvement, significantly easing educators' workloads while maintaining high feedback quality.
A key factor influencing these AI tools' effectiveness is the clarity and precision of the instructions given to the model, known as prompts; the practice of crafting them is called prompting. The specificity of these prompts directly affects the feedback's relevance, accuracy, and appropriateness. By carefully refining these instructions, educators can more effectively tailor feedback to learners' individual needs.
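The role of prompt specificity described above can be illustrated with a minimal sketch. The function below assembles a structured feedback prompt from an assessment item; the function name, fields, and instruction wording are illustrative assumptions, not the actual Evalviz-Dashboard prompt template.

```python
def build_feedback_prompt(question: str, correct_answer: str,
                          student_answer: str, course_context: str) -> str:
    """Assemble a structured prompt for an LLM feedback generator.

    A more specific prompt (course context, the expected answer, and an
    explicit output structure) tends to yield more relevant feedback than
    a generic "explain this" request.
    """
    return (
        f"You are a tutor for the course '{course_context}'.\n"
        f"Question: {question}\n"
        f"Correct answer: {correct_answer}\n"
        f"Student answer: {student_answer}\n"
        "Provide formative feedback that: (1) states whether the student's "
        "answer is correct, (2) explains the underlying concept with a "
        "practical example, and (3) recommends one targeted next step."
    )

# Example use with a hypothetical Smart Grid MCQ item
prompt = build_feedback_prompt(
    question="Which device regulates voltage along a distribution feeder?",
    correct_answer="On-load tap changer",
    student_answer="Circuit breaker",
    course_context="Smart Grid module",
)
```

The resulting string would then be sent to the chosen LLM; the API call is omitted here since it depends on the provider.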
Our research explored integrating GAI into formative assessment via the Evalviz-Dashboard platform, including an assistant called the Personalized Feedback Generator. This assistant analyzes student responses, generates targeted feedback, and enables educators to review and validate feedback before dissemination. Additionally, a participatory feature allows students to report inaccuracies or suggest improvements, supporting continuous refinement of feedback quality.
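The human-in-the-loop workflow described above (generation, educator validation, student reporting) can be sketched as a simple state model. The class and method names below are hypothetical illustrations, not the platform's actual code.

```python
from dataclasses import dataclass, field


@dataclass
class FeedbackItem:
    """One piece of AI-generated feedback awaiting educator validation."""
    student_id: str
    text: str
    status: str = "pending"  # feedback is only released once approved
    reports: list = field(default_factory=list)  # student-reported issues

    def approve(self) -> None:
        """Educator validates the feedback before dissemination."""
        self.status = "approved"

    def report_issue(self, note: str) -> None:
        """Participatory feature: a student flags an inaccuracy."""
        self.reports.append(note)


# Example lifecycle of one feedback item
item = FeedbackItem("s01", "Revisit the difference between active and reactive power ...")
item.approve()
item.report_issue("The worked example uses the wrong unit for reactive power.")
```

Student reports accumulate on the item, supporting the continuous refinement loop mentioned above.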
The Personalized Feedback Generator is part of a broader initiative to improve evaluation quality, and belongs to a suite of digital tools supporting educators throughout the Construction and Quality Management Cycle for Standardized Tests (CGQTS). This initiative aligns with the Erasmus+ CORETEV project's goals of integrating quality-driven approaches into educational assessments.
The study involved 34 Master's students enrolled in a Smart Grid module who completed five formative assessments composed of multiple-choice questions (MCQs). Feedback included detailed explanations, practical examples, and targeted recommendations to enhance concept understanding.
Results indicated high student satisfaction with the AI-generated feedback: 46% reported being very satisfied, 28% satisfied, and only 10% expressed dissatisfaction. Moreover, overall feedback accuracy was high, with errors reported in only 5% of instances.
Educators noted that integrating GAI significantly reduced their workload, freeing time for more targeted pedagogical interventions. A suggested improvement is to provide educators with direct interaction with the GAI so they can further refine feedback for specific educational contexts.
Nevertheless, challenges remain, notably the ongoing need for human oversight to ensure feedback relevance and mitigate potential algorithmic biases. Incorporating Generative AI into feedback personalization represents a promising advancement for enhancing formative assessment quality in higher education.
Keywords: Formative assessment, generative artificial intelligence, personalized feedback, higher education.