NAVIGATING THE CHALLENGES AND OPPORTUNITIES OF GENERATIVE AI IN EDUCATION: A CASE STUDY ON PERSONALIZED ASSESSMENTS
V. Allemann-Ravi, P. Bachmann, R. Moist
Many lecturers are skeptical about generative AI (GAI) because it challenges established teaching and examination methods. These concerns are not unfounded: sociology has long warned about the risks of technological advancement. Beck (1986), for instance, argued that technological progress, despite its clear benefits, carries inherent risks that often remain imperceptible in everyday experience. Nevertheless, integrating new technologies into educational settings is crucial. AI literacy equips students for the workforce (e.g., Long & Magerko, 2020; Laupichler et al., 2023), which makes it necessary to explore the new opportunities that GAI presents in education.
Our exploration involved using ChatGPT to create personalized tasks as alternatives to traditional exams. Drawing on the literature and on Giddens' (1984, 1990) theories of structuration and late modernity, we defined generative AI literacy as an individual's agency to act on and reflect on diverse problems that involve, or are facilitated by, GAI. This includes both routine behavior and the ability to adapt reflexively to varying situations.
The essence of our teaching and grading approach is the personalization of tasks based on students' interests, while maintaining consistent links to the semester's curriculum for all. Our goal was to balance individual customization with general class standards. The process involved several steps, which were communicated transparently to the students. First, students completed an online questionnaire to identify their career interests. Using their responses, and mindful of data protection, we employed ChatGPT with structured prompts to generate fictitious company descriptions tailored to these interests. From the structured prompts, ChatGPT created many original organizations and brands, such as a candy manufacturer, a tea shop, and a media company, and the product names were also highly original.
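To illustrate the idea, the following is a minimal sketch of how a structured prompt could be turned into a tailored company description programmatically via the OpenAI Python API. The model name, prompt wording, temperature, and questionnaire fields (career_interest, sector) are illustrative assumptions; the original prompts were used interactively in ChatGPT and are not reproduced here.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def company_description(career_interest: str, sector: str) -> str:
    """Generate a fictitious company description tailored to one student's interests."""
    # Hypothetical prompt structure; the course's actual prompts are not published.
    prompt = (
        "Invent a fictitious company with an original name and product line.\n"
        f"Sector: {sector}\n"
        f"The description should appeal to a student interested in: {career_interest}\n"
        "Write 150-200 words covering the company's history, products, and market position.\n"
        "Do not reference any real organization or brand."
    )
    response = client.chat.completions.create(
        model="gpt-4o",      # assumed model choice for this sketch
        messages=[{"role": "user", "content": prompt}],
        temperature=0.9,     # higher temperature to encourage original names and brands
    )
    return response.choices[0].message.content


# Example: one questionnaire entry
print(company_description("marketing in the food industry", "confectionery"))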
Throughout the semester, each student received three tasks related to the course content. The tasks were identical for all students except for the personalized salutations and company details. Because every solution had to be derived from the student's own tailored company description, the format encouraged knowledge application and deterred plagiarism: an answer copied from a peer would not fit the copier's company.
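Since the tasks differ only in the salutation and the company description, assembling them can be reduced to filling a shared template. The sketch below assumes a single hypothetical task text with two placeholders; the actual task wording from the course is not reproduced here.

from string import Template

# Hypothetical task template: identical wording for every student, with
# placeholders for the personalized salutation and company description.
TASK_TEMPLATE = Template(
    "Dear $salutation,\n\n"
    "$company_description\n\n"
    "Task: Based on the company described above, apply the concepts covered\n"
    "in this part of the semester and justify your recommendations.\n"
)


def personalized_task(salutation: str, company_description: str) -> str:
    """Merge one student's personal details into the shared task template."""
    return TASK_TEMPLATE.substitute(
        salutation=salutation,
        company_description=company_description,
    )


# Example with an invented student and company
print(personalized_task(
    "Ms. Example Student",
    "SweetNova AG is a fictitious candy manufacturer known for its experimental flavors...",
))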
The response from the students (N = 13) was overwhelmingly positive. In the evaluation, students were asked whether this assessment method should be continued; the average rating was an impressive 4.9 out of 5. Students appreciated the individualization and the progressive approach to GAI, and they particularly liked the flexibility of completing the assessments in stages throughout the semester.
This teaching and assessment method is adaptable to other courses. However, it is important to note that, despite some degree of automation, refining the company descriptions and assessing student performance still require manual effort.
Keywords: Generative AI, teaching, assessment, theory of structuration.