Abstract No. 2395

PROMPT ENGINEERING AS FEEDBACK ASSISTANT: A CASE STUDY IN SOFTWARE ENGINEERING SUBJECTS
M.A. Bacallado (1), L.M. Infante-Fernández (2)
(1) Universidad de La Laguna (SPAIN)
(2) Universidad de Extremadura (SPAIN)
The use of Generative Artificial Intelligence (GAI) marks a turning point in higher education, offering new opportunities to enhance teaching and learning processes. Among the most promising applications is Prompt Engineering (PE): the strategic design of instructions, known as prompts, that guide Large Language Models (LLMs) through specific tasks. In education, this technique can automate the generation of feedback, complementing lecturers' work and saving time without compromising quality. However, questions remain about the reliability of these tools, their pedagogical alignment, and their integration into real academic contexts.

This study analyses Prompt Engineering as a feedback assistant in two courses of the Computer Engineering degree at the University of La Laguna: Software Systems Analysis (SSA) and Risk Management in Software Engineering (RMSE), both part of the Software Engineering specialisation. The main objective is to evaluate the ability of LLMs to generate useful, coherent feedback aligned with learning objectives, using prompts tailored to tasks ranging from practical reports to final projects, and to determine whether this technology improves the students' feedback experience while reducing assessment workload without undermining quality.

The methodology follows an experimental, descriptive approach in two phases. In the first, prompts were designed and refined to analyse simple tasks, assigning the LLM the role of a subject-matter expert. Each prompt specified assessment criteria such as clarity of ideas and technical accuracy, and required a section-by-section evaluation, a global grade, and two feedback components, "Observations" and "Suggestions for Improvement", to ensure actionable guidance. Responses were produced in two formats: a formal message for students and a structured JSON file storing all results. The second phase, focused on final projects, adapted the prompt to include the JSON file from the first phase, so the model could verify whether students had applied the earlier feedback before producing the final grade and comments.
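The abstract does not publish the prompts themselves. As a rough illustration, the following Python sketch reconstructs the structure described above: the expert role, the explicit criteria, the section-by-section evaluation with a global grade and the two feedback components, and the JSON output that phase two feeds back to the model. The function names, template wording, field names, and the 0-10 grading scale are assumptions for illustration, not the study's actual materials.

```python
import json

# Criteria named in the abstract; the full rubric used in the study is an
# assumption here.
CRITERIA = ["clarity of ideas", "technical accuracy"]

# Hypothetical phase-1 prompt template following the structure the abstract
# describes (expert role, criteria, section-by-section review, dual output).
PHASE1_TEMPLATE = """\
You are a subject-matter expert in {course}.
Evaluate the student submission below, section by section, against these
criteria: {criteria}.
Return two outputs:
1. A formal feedback message addressed to the student.
2. A JSON object with the fields "sections" (per-section comments),
   "global_grade" (assumed 0-10 scale), "observations" and
   "suggestions_for_improvement".

STUDENT SUBMISSION:
{submission}
"""


def build_phase1_prompt(course: str, submission: str) -> str:
    """Assemble the phase-1 prompt for a practical report."""
    return PHASE1_TEMPLATE.format(
        course=course,
        criteria=", ".join(CRITERIA),
        submission=submission,
    )


def build_phase2_prompt(project: str, phase1_results: dict) -> str:
    """Phase 2: embed the stored phase-1 JSON so the model can check whether
    the earlier suggestions were applied before grading the final project."""
    return (
        "You are a subject-matter expert. Earlier feedback for this student "
        "(JSON):\n"
        + json.dumps(phase1_results, indent=2)
        + "\n\nVerify whether the suggestions above were applied in the final "
        "project below, then return a final grade and comments in the same "
        "JSON format.\n\nFINAL PROJECT:\n"
        + project
    )
```

Under these assumptions, storing results as structured JSON rather than free text is what makes the phase-two check mechanical: the second prompt can point at concrete earlier suggestions instead of a prose summary.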

Results using ChatGPT show a high degree of alignment between the AI-generated responses and the evaluated tasks, with grades and feedback often suitable for direct use without modification. However, assigning the model an "expert evaluator" role occasionally produced overly strict grading that treated students as professionals rather than learners. In such cases, lecturers retained the AI-generated feedback but adjusted the final grades. Despite this limitation, Prompt Engineering significantly reduced evaluation time and produced comments that lecturers rated as useful.

This work demonstrates the potential of Prompt Engineering as a complementary tool to optimise assessment in higher education while maintaining human oversight. It also highlights pedagogical and ethical implications and suggests future research on integrating these technologies into hybrid teaching models to ensure quality, equity, and transparency.

Keywords: Prompt Engineering, Generative AI, Higher Education, Computer Science.

Event: ICERI2025
Track: Innovative Educational Technologies
Session: Generative AI in Education
Session type: VIRTUAL