UNDERSTANDING LARGE LANGUAGE MODELS: A PRACTICAL APPROACH FOR SECONDARY STUDENTS TO LEARN AND CRITICALLY EVALUATE AI LANGUAGE TOOLS
G. Monrós-Andreu, R. Martínez-Cuenca, S. Torró, S. Chiva
This study explores how Large Language Models (LLMs), such as GPT and BERT, can be introduced into secondary education to help students grasp the basic, intuitive principles behind these AI tools. The objective is to demystify how LLMs work, showing that they rely on probability and training on large datasets rather than on anything mysterious. Through clear, classroom-friendly explanations, this paper demonstrates how LLMs predict text, use embeddings to represent data, and apply attention mechanisms to generate coherent language across different contexts. In hands-on activities, students engage with core ideas such as how data volume and probability-based prediction give rise to the patterns that enable these models to function. This approach aims to provide students with a basic understanding of AI language tools, allowing them to critically assess the capabilities and limitations of LLMs in everyday applications.
Keywords: Education, Large Language Models, AI tools.
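The probability-based text prediction described in the abstract can be illustrated at classroom scale. The sketch below is not from the paper itself; it is a hypothetical, minimal example that replaces a neural network with simple bigram counts over a toy corpus, showing how "predict the next word" reduces to counting and estimating probabilities from data.

```python
# Minimal sketch of probability-based next-word prediction.
# Real LLMs learn token probabilities from huge corpora with neural
# networks; here we only count word pairs (bigrams) in a toy corpus,
# an illustrative simplification for the classroom.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, prob = predict_next("the")
print(word, prob)
```

With more text, the counts sharpen and predictions improve, which mirrors the abstract's point that data volume and probability, not mystery, drive these models.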