ADAPTIVE TESTS AS AN EVALUATION METHOD IN THE STEM CONTEXT: AN EXPERIENCE IN THE ENERGY TECHNOLOGY DOMAIN
J.J. Serrano-Aguilera, J. Prieto, J.P. Jiménez-Navarro, C. Martin, A. Tocino
Evaluation is a fundamental element of the student learning process. It serves two key purposes: providing feedback to students on their acquisition of competences, skills and knowledge [1] and verifying whether they have attained the expected competencies. Assessment based on multiple-choice tests covers a wide range of concepts, reduces correction time, and enhances the perception of objectivity during the review process, among other benefits [2]. This study proposes adaptive testing, an assessment methodology widely used in the field of language learning [3], for subjects in STEM education.
We implemented adaptive tests to assess students’ skill acquisition, with the aim of gaining experience to determine:
(i) how this technique affects the learning and evaluation process and
(ii) the suitability of the criteria followed to tailor these tests.
The adaptive tests were structured around two key criteria:
(a) the level of difficulty of the multiple-choice question and
(b) the lesson (within the subject syllabus) to which it belongs.
The adaptive tests were developed using the SIETTE tool, created at the University of Malaga. SIETTE is a web-based system (https://www.siette.org/siette/) that allows the creation and maintenance of question banks, including adaptive tests. The adaptive tests were implemented in the Energy Technology course of the Master of Industrial Engineering program at the University of Malaga. In this course, students took an adaptive test consisting of multiple-choice questions selected from a database of 80 questions. Their response times were recorded, and their perceived difficulty of each question was rated on a discrete scale from 1 to 4. The questions were automatically selected by a specific decision tree applied to each of the four thematic blocks that made up the course. This decision tree was structured on the premise that a correct answer to a difficult question reduces the likelihood that the student will have difficulty with simpler questions. This approach enables students to answer fewer questions and spend less time on the test, particularly if their competency levels are adequate. In a second stage, students answered all 80 questions in the database to establish a comparison between the adaptive and non-adaptive tests.
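As an illustration only, the following is a minimal sketch of one possible reading of the per-block selection premise described above. The Question record, the run_adaptive_block function and the hardest-first ordering are assumptions made for this sketch; they do not reproduce the SIETTE implementation or the actual decision tree used in the course.

from dataclasses import dataclass

@dataclass
class Question:
    block: int        # thematic block (1-4)
    difficulty: int   # teacher-assigned difficulty (1 = easiest, 4 = hardest)
    text: str

def run_adaptive_block(questions, ask):
    """Ask the questions of one thematic block from hardest to easiest.

    `ask` is a callback that presents a question and returns True if the
    student answers correctly. Under the premise stated in the abstract,
    a correct answer at a given difficulty skips the remaining easier
    questions in the block, shortening the test for competent students.
    """
    answered = []
    for q in sorted(questions, key=lambda q: q.difficulty, reverse=True):
        correct = ask(q)
        answered.append((q, correct))
        if correct:
            break  # assume easier questions would also be answered correctly
    return answered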
Considering the comparison and the collected results, a consistent correlation was observed between the difficulty level assigned by the teacher and that perceived by the students, reinforcing the idea that the teacher’s subjective judgment is adequate for tailoring adaptive tests. Similar conclusions were drawn from the relationship between answer time and difficulty level, except for the most challenging questions. Thanks to the adaptive technique, the reduction in the number of questions answered exceeded 50% in all cases.
The compilation of all students’ answers to the 80-question database opens new opportunities to explore, via simulation, the construction of new decision trees, which can be further optimized to improve the question selection criteria of adaptive tests.
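A hedged sketch of such a simulation follows, assuming the full response matrix from the second stage is available as a per-student dictionary of answers; the data layout and the policy interface are assumptions introduced for illustration and are not part of the study.

def simulate_policy(policy, full_answers):
    """Estimate the average test length a selection policy would have produced.

    `full_answers` maps student -> {question_id: True/False} over all 80
    questions (the non-adaptive stage). `policy` is a function that, given
    the answers revealed so far, returns the next question_id or None when
    the test should stop. Recorded answers are simply replayed.
    """
    lengths = []
    for student, answers in full_answers.items():
        revealed = {}
        while True:
            qid = policy(revealed)
            if qid is None:
                break
            revealed[qid] = answers[qid]  # replay the recorded answer
        lengths.append(len(revealed))
    return sum(lengths) / len(lengths)  # average number of questions asked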
References:
[1] C. Evans, “Making Sense of Assessment Feedback in Higher Education,” Review of Educational Research, 83(1), 70–120, 2013.
[2] M.G. Simkin, W.L. Kuechler, “Multiple-choice tests and student understanding: What is the connection?,” Decision Sciences Journal of Innovative Education, 3(1), 73–98, 2005.
[3] M.M. Hicks, “The TOEFL Computerized Placement Test: Adaptive Conventional Measurement,” ETS Research Report Series, i–29, 1989.
Keywords: Adaptive tests, STEM Education.