C. Ford
As large language models (LLMs) become increasingly embedded in creative, academic, and professional domains, there is a growing need for Artificial Intelligence (AI) literacy beyond computer science. However, teaching LLM foundations to non-technical students poses unique challenges: abstract model behaviours, inaccessible tools, and low programming confidence create barriers to understanding. This submission presents a pedagogical approach designed to make technical LLM concepts concrete and accessible to social science postgraduates through guided experimentation with a real model.
The author developed and taught a 10-week module, Language Models and Methods, to 61 postgraduate students across four programmes (Human-Computer Interaction, Information Systems, Social Data Science, and Information and Communication Studies). A custom Hugging Face Space interface, powered by OpenAI’s GPT-3.5 and accessed via a personally funded API key, enabled controlled, guided hands-on interaction with a live LLM. Despite regular use across the semester, total API costs remained under $15, making this a sustainable and scalable approach.
Beginning in Week 4, students participated in weekly lab sessions that complemented the lecture component of the module. These labs used a custom interactive interface, aligned with weekly learning goals, to explore core model behaviours such as temperature effects, system prompts, and reasoning strategies. As the semester progressed, new tabs introduced additional NLP tasks, including sentiment analysis, summarisation, named entity recognition, and error detection. Two major assignments also incorporated the interface, allowing students to systematically investigate LLM behaviour and critically apply their learning to real-world use cases.
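To give a flavour of what each lab tab did behind the scenes, the sketch below assembles the kind of chat-completion request students manipulated when comparing temperature settings and system prompts. Function and variable names here are illustrative assumptions, not the module's actual source code; only the request fields (model, temperature, messages) follow the OpenAI Chat Completions API.

```python
def build_request(system_prompt, user_prompt, temperature=0.7):
    """Assemble a chat-completion request body for a lab exercise."""
    return {
        "model": "gpt-3.5-turbo",
        # Near 0: almost deterministic output; above 1: more varied sampling.
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

# A typical exercise: run the same prompt at a low and a high temperature
# and compare the outputs.
low = build_request(
    "You are a concise research assistant.",
    "Explain in one sentence what sampling temperature does.",
    temperature=0.0,
)
high = {**low, "temperature": 1.2}

# With an API key configured, each request would be sent via the OpenAI SDK:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(**low)
```

Exposing only these few parameters through interface controls, rather than raw code, is what kept the exercises accessible to students with low programming confidence.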
Labs were accompanied by plain-language walkthroughs with reflection questions to support learning and prompt discussion. While no formal evaluation data were collected, students reported high engagement and demonstrated critical insight during in-class discussions. The source code was made available, allowing technically inclined students to explore the interface further.
This approach demonstrates how technical AI topics can be made accessible through thoughtful pedagogical design and interactive tooling. By integrating a live, production-grade LLM into a guided interface tailored for non-technical learners, the module offered an alternative to purely conceptual or pre-scripted teaching tools. It fostered both conceptual understanding and ethical reflection, and its modular design supports adaptation across disciplines. The core innovation lies in the use of an open, extensible interface for hands-on experimentation with LLM behaviour, bridging the gap between technical AI systems and the critical digital literacies needed to engage with them.
Keywords: AI literacy, large language models, pedagogical interface design, interactive learning.