Abstract No. 2346

COGNITIVE DETECTION OF AUTHENTIC STUDENT AUTHORSHIP WITH AI
W. Khan1, C. Semmler2, E. Palmer2, A. Grandison3, D. Gooch3, P. Lally3, D. Lee2, C. Perry2, C. Pope2
1 Auth+ (UNITED KINGDOM)
2 University of Adelaide (AUSTRALIA)
3 University of Surrey (UNITED KINGDOM)
With OpenAI reporting 500 million Weekly Active Users (WAUs), and a recent Times Higher Education survey finding that 88% of 1,041 students surveyed admitted to using AI in submitted assessments, the rise of Generative AI tools has been rapid and remarkable. However, the technology’s capacity to produce ‘original’ text at unprecedented speed has raised ethical questions about the nature of authorship and originality, and about how best to support AI use in student learning within Higher Education (HE).

Empirical studies indicate that educators often struggle to accurately differentiate between student-written and AI-generated texts. Moreover, existing research on AI detection tools reveals inconsistent effectiveness, highlighting the need for a multifaceted strategy to address academic integrity concerns related to AI use in HE. Indeed, studies have shown that 78% of educators felt they needed more training and support from their institutions when it comes to AI. A recent study from the Massachusetts Institute of Technology (MIT) found that, compared with ‘brain-only’ and ‘search-engine-only’ users, participants who used AI to write their essays reported feeling less ownership over their work, exhibited less detailed memory recall, and struggled to accurately quote their own writing.

The current study examines Auth+, a novel tool that leverages Large Language Models (LLMs) to generate, deliver, and assess personalised quizzes based on students’ own assessment submissions. This approach to academic integrity enables scalable, post-submission questioning that emphasises authorship verification through reflective engagement, and targets some of the known cognitive risks of heavy dependence on Generative AI tools noted above.

Grounded in the premise that “writing is thinking”, the tool treats the cognitive processes embedded in writing, such as drafting, revising, and self-reflection, as indicators of authentic learning. Rather than focusing on detecting AI-generated content, Auth+ aims to diagnose the presence of episodic memory traces formed during the writing process, thereby identifying whether students genuinely engaged with their work. In doing so, the tool not only helps mitigate risks associated with generative AI use but also fosters deeper learning and reinforces responsible academic practices.

This paper presents early findings from a randomised controlled trial evaluating the efficacy and student experience of Auth+ in a live classroom setting. In the first task, five students completed authorship tests under three conditions: ‘true’ (a quiz on their own writing), ‘rote’ (a quiz on memorised content they did not write), and ‘blind’ (guessing answers to a quiz on writing that was not their own), generating a total of 25 authorship tests. In a separate task, Auth+ was deployed in live class settings across five courses, and feedback was collected from 118 students.

Overall, the study explores AI-generated authorship quizzes as a scalable alternative to traditional AI detection methods. Early findings highlight the potential of Auth+ as a pedagogical tool to promote responsible use of generative AI in the learning process.

Keywords: AI detection, artificial intelligence, academic integrity.

Event: ICERI2025
Track: Innovative Educational Technologies
Session: Generative AI in Education
Session type: VIRTUAL