METHODICAL APPROACH FOR ANALYZING LEARNING PATH FITNESS IN AN AI-BASED ADAPTIVE LEARNING SYSTEM
P. Schoppel1, J. Haug2, J. Manz1, D. Bigler2, I. Hock1, J. Abke1, G. Hagel2
Learning paths are a cornerstone of many adaptive learning systems, particularly those focusing on adaptive navigational techniques. Evaluating and analyzing these paths is therefore crucial to ensure they effectively support both learners and instructors. For eLearning, this process must be highly scalable despite minimal oversight and little to no control over learners’ behavior. Consequently, learning path evaluation should be automated, user-friendly, and precise. However, current research on this topic often emphasizes simulations, performance metrics, or mathematical models, without fully considering the broader, learner-centered aspects necessary for meaningful adaptation. The authors’ prior findings also indicate that approaches to assessing the suitability of learning paths must be optimized. To address these gaps, this paper presents a potential methodological approach for comprehensive learning path evaluation, aiming to enhance both the precision of adaptive learning systems and the overall learning experience.
Three different algorithms, derived from learning style tendencies and a lecturer recommendation, were analyzed as an illustrative example, although the method itself is not constrained by the form or data basis of these algorithms. The adaptive learning system utilizes various measures to gauge the suitability of a learning path, all gathered through real-time learner feedback. These measures include the correlation between students’ preferred path and each algorithm’s generated path, referred to as its fitness; the alignment between students’ actual adherence to a generated path, their own perception of their study behavior, and their satisfaction with the path; and the connection between algorithm fitness and both actual performance and perceived performance.
To collect data, students were asked to create their own preferred learning paths by digitally arranging the provided learning elements after receiving an introduction to the respective categories. Once they had completed a topic with a generated learning path, they rated their satisfaction with it and indicated whether they had followed its sequence. They also estimated whether their knowledge level had changed. Learning analytics were then employed to compare these self-reports with students’ actual study behavior. Performance was measured using a rating system, while Spearman’s rho and Kendall’s tau served as the main correlation metrics for data analysis.
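The fitness measure described above can be illustrated with a short sketch: treating a student’s preferred arrangement of learning elements as a reference ranking and computing Spearman’s rho and Kendall’s tau against an algorithm’s generated ordering. The element names, the `path_fitness` helper, and both example paths are hypothetical and not taken from the paper’s system; only the choice of correlation metrics follows the method described.

```python
# Hypothetical sketch of learning path "fitness" as rank correlation between
# a student's preferred ordering of learning elements and a generated path.
# Element names and paths are invented for illustration.
from scipy.stats import kendalltau, spearmanr

def path_fitness(preferred, generated):
    """Return (Spearman's rho, Kendall's tau) between two orderings
    of the same set of learning elements."""
    # Rank of each element in the student's preferred path ...
    pref_rank = {elem: i for i, elem in enumerate(preferred)}
    # ... and the generated path expressed in those ranks.
    gen_ranks = [pref_rank[elem] for elem in generated]
    ideal = list(range(len(generated)))  # ranks of a perfectly matching path
    rho, _ = spearmanr(ideal, gen_ranks)
    tau, _ = kendalltau(ideal, gen_ranks)
    return rho, tau

preferred = ["video", "example", "text", "quiz", "exercise"]
generated = ["video", "text", "quiz", "example", "exercise"]
rho, tau = path_fitness(preferred, generated)  # rho ≈ 0.7, tau ≈ 0.6
```

Both metrics return 1.0 for an identical ordering and −1.0 for a fully reversed one, so a higher value indicates a closer match between the generated path and the student’s stated preference.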
The results indicate that all three algorithms produce paths more closely aligned with students’ preferred learning paths than the lecturer recommendation, although no single algorithm demonstrated clear dominance. Student satisfaction showed some correlation with the fitness of the generated learning path. Additionally, student ratings appeared to have a slight positive correlation with learning path fitness, whereas self-perceived performance showed no discernible correlation. Analysis of the link between actual student behavior and their feedback suggested that students were not reliable in judging whether or not they had followed a learning path.
These findings are consistent with the authors’ earlier work suggesting the potential effectiveness of the learning path algorithms examined, thus supporting this new methodological approach to analyzing learning paths. The study also provided valuable insights for further development; however, its limited sample size remains a challenge for validation.
Keywords: Learning Path Evaluation, AI-driven Learning Path Generation, Learning Analytics, Adaptive Learning System, Higher Education.