T. Vieira1, E.M. Silva Martins2, N.J. da Silva Farias2, R. da Silva Barboza1, V. Ferreira de Lucena Jr2
The growing impact of Large Language Models (LLMs) in industrial, educational, and social contexts demands a pedagogical reformulation that prepares professionals to engage with these technologies critically and practically. Teaching LLMs is no longer optional; it has become essential, especially in the context of Industry 4.0, where intelligent automation, decision support, and natural language interfaces are becoming standard. This paper presents a comprehensive pedagogical plan designed to train high-performing undergraduate students, selected on the basis of their academic records and awarded incentive scholarships to participate in the course.
LLMs are not merely theoretical concepts but tools that require practical experimentation. Students need to evaluate, make mistakes, and reflect on the results generated by the models. To enable a dynamic and high-quality learning experience, an initial investment was made in high-performance laptops equipped with Intel Core i9-12900HX processors and NVIDIA RTX 4060 GPUs. This setup allows students to run models locally, with fast responses and smooth simulations, overcoming the most time-consuming stage in AI learning: model processing and fine-tuning.
The course's learning objectives are twofold. From a technical perspective, it aims to provide a solid foundation on how LLMs work, including their architecture and prompt-based interactions. From a pedagogical perspective, it seeks to foster critical thinking, ethical reflection, and autonomy in problem-solving. Students learn to formulate effective prompts, utilize strategies such as prompt chaining and few-shot or zero-shot prompting, and critically evaluate model outputs, considering risks including hallucinations, bias, and the improper use of generative AI.
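To illustrate the few-shot strategy mentioned above, the sketch below assembles a prompt from labeled examples before appending the new query. The classification task, example sentences, and function name are hypothetical illustrations, not material from the course itself:

```python
# Minimal sketch of few-shot prompt construction.
# The sentiment task and examples are hypothetical, for illustration only.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: an instruction, labeled examples,
    then the unlabeled query for the model to complete."""
    parts = ["Classify the sentiment of each sentence as positive or negative."]
    for text, label in examples:
        parts.append(f"Sentence: {text}\nSentiment: {label}")
    # The query repeats the same pattern but leaves the label blank.
    parts.append(f"Sentence: {query}\nSentiment:")
    return "\n\n".join(parts)

examples = [
    ("The interface is intuitive and fast.", "positive"),
    ("The update broke my workflow.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Setup took only two minutes.")
print(prompt)
```

Zero-shot prompting would omit the examples entirely, and prompt chaining would feed one model output into the next prompt; both are variations on this same template-building pattern.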
The course structure combines theory and practice across multiple modules. Students begin with foundational topics such as Python and Artificial Intelligence before progressing to the LLM module, which is the focus of this paper; by that stage, they are expected to have prior knowledge in these related areas. The LLM content is distributed across 12 lessons. It covers the foundations of word-embedding models such as Word2Vec, GloVe, and FastText, the Transformer architecture, prompt engineering, Retrieval-Augmented Generation (RAG), agent development using LangChain, integration with external data sources, and best practices for using APIs such as OpenAI and Hugging Face, as well as local execution with Ollama.
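As a rough sketch of the retrieval step students implement in the RAG lessons, the toy example below ranks documents against a query. To stay self-contained it uses bag-of-words cosine similarity in place of the learned embeddings (Word2Vec, GloVe, FastText, or API-provided) used in practice; the document texts and function names are illustrative assumptions:

```python
# Toy sketch of the retrieval step in RAG. Real courses would use learned
# embeddings; bag-of-words cosine similarity stands in here for simplicity.
from collections import Counter
import math

def bow(text):
    """Represent text as a bag-of-words frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) \
        * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

documents = [
    "Ollama runs language models locally on a laptop GPU",
    "LangChain chains prompts and tools into agents",
    "RAG retrieves documents to ground model answers",
]

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

# The retrieved passage is then injected into the prompt as grounding context.
context = retrieve("how do agents chain prompts together", documents)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: how do agents chain prompts together"
```

Swapping `bow` and `cosine` for an embedding model and a vector store recovers the standard RAG pipeline taught with LangChain.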
The training includes not only the creation of functional solutions but also critical reflection on the ethical and responsible use of AI. Students are evaluated through practical activities, technical deliverables, and a final project that includes the development of prototypes, technical documentation, an oral presentation with a functional demonstration, and a reflective report in which they discuss what they learned, the challenges they faced, real-world applicability, and the ethical considerations involved.
By the end of the course, students are expected to be capable of developing LLM-based solutions applicable to industrial contexts, with a critical awareness of their social impacts. This integrative approach offers a replicable teaching model that aligns technological relevance with educational responsibility, fostering a new generation of professionals prepared to apply generative AI in creative, practical, and ethical ways within Industry 4.0.
Keywords: Industry 4.0, LLMs applied to industry, teaching LLM, LLM courses.