EXPLORING THE INTEGRATION OF AN AI-ASSISTED MIDTERM EXAM IN A THERMAL ENGINES COURSE
E. Cano-Pleite, A. Anca-Couce, N. García-Hernando, L.M. García-Gutiérrez, J.F. Guil-Pedrosa, A. Soria-Verdugo
University Carlos III of Madrid (SPAIN)
This work explores the results of including a midterm examination in which students were allowed to use Artificial Intelligence (AI), specifically ChatGPT. The exam was administered in the Thermal Engines course, part of the Industrial Technologies Engineering degree program at Universidad Carlos III de Madrid (uc3m).

The Thermal Engines course is divided into two parts: internal combustion engines and turbomachinery. Typically enrolling around 25 students, the course allows for a dynamic and innovative class environment. A significant proportion of the grade for the turbomachinery part is devoted to practical problems and deliverables. As a result, students have tended to focus more on assignments than on theoretical understanding, often dividing tasks within their groups, which led to knowledge of only parts of the course content and relatively poor performance in the exam covering this section. To address this issue, and to encourage students to build a solid foundation for the final exam, a midterm exam allowing the use of generative AI tools was proposed. During the exam, students could query ChatGPT a limited number of times for each exam question, always under the supervision of the course instructors. This restriction encouraged concise, focused prompts and a critical evaluation of the AI-generated responses. Students who used ChatGPT received a partial score penalty on those specific questions. The aim of this work was therefore to assess to what extent students relied on their own knowledge versus AI assistance, given the specificity and complexity of the exam questions. Furthermore, this midterm was held the day before the exam covering this section of the course, also aiming to strengthen the students' comprehension of key theoretical aspects.

In a pilot run, 22 students took an exam comprising 10 multiple-choice and 3 written questions, with 73% of students using ChatGPT to some extent. The average score (out of 10) was 6.2 for students not using AI and 5.0 for those who did, indicating an over-reliance on AI during exam preparation. All the students who used ChatGPT believed they performed better because of the AI assistance. However, every student who queried ChatGPT about written question #3, a very specific question, received an incorrect answer, showing the need to interact with AI tools carefully and critically and to have a deeper understanding of the subject.

Overall student satisfaction with the course has traditionally been high, with ratings of 4.5/5 in 2021/22 and close to 4/5 in 2022/23. However, the feedback indicated a need for improved evaluation methods. The introduction of this additional midterm exam increased student satisfaction and involvement, given their willingness to use new technologies and to apply them directly to their learning. In fact, satisfaction with this section of the course reached 5/5 in 2023/24, highlighting the students' positive reception of this innovative proposal.

The experience developed in this work demonstrated that AI may not always provide accurate answers at first, highlighting the importance of the students' own knowledge, not only for critically assessing the answers provided by the AI, but also for formulating the prompts. Additionally, this innovative exam format increased student engagement and motivation. From the instructors' perspective, the project revealed the students' level of dependency on AI technology.

Keywords: Education, exam, artificial intelligence, technology.