EXPLORING THE IMPACT OF PERSONALIZED SCAFFOLDS ON SELF-REGULATED LEARNING AND LEARNING EFFECTIVENESS WITH LARGE LANGUAGE MODELS
F.H. Wang
The integration of Large Language Models (LLMs) into education enables personalized learning support and real-time feedback. This study explores how LLM-based scaffolds shape students’ problem-solving strategies and learning outcomes in Object-Oriented Programming (OOP) in higher education. Twenty-nine participants were assigned to three experimental groups: one with no scaffolding, one with scaffolding, and one with scaffolding plus interactive agents. Using a mixed-methods approach, data were collected through questionnaires, code analysis, and interviews. Results show no significant between-group differences in cognitive load, motivation, self-regulation, trust, or performance, but notable variations in learning behaviors and individual-level changes in cognitive load, motivation, self-regulated learning, and human-machine trust. These findings inform the optimization of AI-driven learning systems and the design of AI-agent interactions for personalized scaffolding in education.
Introduction:
Generative AI, particularly LLMs, facilitates personalized learning through dynamic content generation and interactive feedback. This study examines the impact of LLM-powered scaffolding on self-regulated learning and overall learning effectiveness, focusing on AI-driven peer learning and problem-solving support.
Method:
The study was conducted in an OOP course using a web-based learning platform integrated with AI-driven scaffolding. Learning modules were structured according to their interdependencies, requiring mastery of prerequisites before progression. ChatGPT assessed students’ work and sequenced learning activities. The AI-assisted system featured peer and tutor agents, as well as an SRL-scaffolding agent supporting goal setting, progress monitoring, and reflection. Personalized scaffolding was driven by a student model constructed from behavioral indicators and dialogue interactions.
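To make the prerequisite-gated sequencing concrete, the following is a minimal Python sketch assuming a simple dependency map; the module names and the unlocked_modules helper are illustrative assumptions, not the platform's actual data model.

# Minimal sketch of prerequisite-gated module sequencing (the dependency
# map and helper below are illustrative assumptions, not the platform's code).

PREREQUISITES = {
    "classes_and_objects": [],
    "encapsulation": ["classes_and_objects"],
    "inheritance": ["encapsulation"],
    "polymorphism": ["inheritance"],
}

def unlocked_modules(mastered):
    """Return modules whose prerequisites have all been mastered."""
    return [
        module
        for module, prereqs in PREREQUISITES.items()
        if module not in mastered and all(p in mastered for p in prereqs)
    ]

# A student who has mastered only the first module may proceed to
# encapsulation, while inheritance and polymorphism remain locked.
print(unlocked_modules({"classes_and_objects"}))  # ['encapsulation']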
Student Model and Scaffolding:
The student model included six factors: cognitive load, emotion, thinking levels, engagement, creativity, and proficiency. AI-derived insights from click events, chat history, and coding activities informed tailored scaffolding strategies. A BERT model was employed to encode problem contexts and match them with relevant topics, updating student models dynamically. Behavior-based modeling analyzed engagement patterns, coding errors, and learning progress to refine scaffolding interventions.
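As a rough illustration of the context-to-topic matching step, the sketch below encodes a student's problem description with a BERT-family sentence encoder and selects the most similar course topic by cosine similarity. The sentence-transformers library, the model name, and the topic list are assumptions made for illustration; the paper does not specify the actual pipeline.

# Minimal sketch of matching a problem context to course topics with BERT
# sentence embeddings (library, model, and topics are illustrative assumptions).

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # compact BERT-family encoder

TOPICS = ["class definition", "encapsulation", "inheritance", "polymorphism"]
topic_embeddings = model.encode(TOPICS, convert_to_tensor=True)

def match_topic(problem_context):
    """Return the course topic most similar to the student's problem context."""
    query = model.encode(problem_context, convert_to_tensor=True)
    scores = util.cos_sim(query, topic_embeddings)[0]
    best = int(scores.argmax())
    return TOPICS[best], float(scores[best])

topic, score = match_topic("My subclass will not call the parent class constructor")
print(topic, round(score, 2))  # expected to match 'inheritance'

In the actual system, the matched topic would feed into the dynamic student-model update and the choice of scaffolding strategy; only the matching step is shown here.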
Results and Discussion:
Preliminary findings revealed no significant differences in overall performance but indicated distinct learning behaviors across groups. Personalized scaffolding influenced individual cognitive load, motivation, and self-regulation. Students in scaffolded conditions demonstrated improved engagement and adaptability, while some exhibited dependency on AI support. These insights underscore the need for adaptive scaffolding strategies that balance guidance with learner autonomy.
Conclusion:
The findings suggest that AI-driven scaffolding can support self-regulated learning and problem-solving in programming education, even where group-level performance differences are not significant. Personalized interventions should consider cognitive, emotional, and behavioral factors to optimize learning experiences. Future research should refine AI-driven scaffolding frameworks to foster deeper learning while mitigating over-reliance on AI support.
Keywords: Large Language Model, personalized learning, reciprocal peer agent, scaffolding support, self-regulated learning.