MITIGATING MISINFORMATION IN AI-POWERED EDUCATIONAL SYSTEMS: THE FORT VERIFICATION APPROACH
B.J. Spiteri
University of Malta (MALTA)
The increasing integration of Artificial Intelligence (AI), particularly Large Language Models (LLMs), into educational systems offers the potential to revolutionise personalised learning. However, the susceptibility of LLMs to generating misinformation and hallucinations necessitates robust verification techniques to ensure student safety and trust. This paper presents FORT Verification, a novel multi-stage verification framework designed to mitigate these risks in AI-powered educational systems. FORT Verification targets known weaknesses of LLMs in educational settings, such as hallucinations, irrelevant information, and potential misuse, in order to enhance the accuracy, relevance, and appropriateness of AI-generated content. The approach combines the strengths of Generative Adversarial Networks (GANs), Retrieval Augmented Generation (RAG), and online cross-referencing, ensuring the trustworthiness of educational content while safeguarding against common pitfalls. The efficacy of FORT Verification was evaluated through a combination of benchmark testing and real-world user testing and feedback. Using a modified GSM8K benchmark, the FORT framework correctly identified hallucinations and incorrect information with an accuracy of 90%. These results highlight the superior performance of FORT Verification compared with alternative approaches, particularly on mathematical reasoning tasks and in ensuring the delivery of factually accurate information to students.
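To make the multi-stage idea concrete, the sketch below shows one plausible shape for such a pipeline: a RAG-style grounding check, an online cross-referencing stage, and a GAN-style adversarial critic, with an answer delivered to the student only if every stage passes. This is a minimal illustrative sketch, not the published FORT implementation; the abstract does not disclose FORT's actual interfaces, scoring rules, or thresholds, so every function, threshold, and score here is an assumption.

```python
# Hypothetical sketch of a multi-stage verification pipeline in the spirit of
# the abstract (RAG grounding, online cross-referencing, adversarial critic).
# All stage logic, names, and thresholds are illustrative assumptions, not
# FORT's published design.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class VerificationResult:
    stage: str    # which stage produced the verdict
    passed: bool  # True if the answer survived this stage
    score: float  # stage-specific confidence in [0, 1]


def rag_grounding_check(answer: str, passages: List[str]) -> VerificationResult:
    """Stage 1 (assumed): flag answers unsupported by retrieved curriculum text.
    Support is approximated here by token overlap; a real system would use a
    trained entailment or similarity model."""
    answer_tokens = set(answer.lower().split())
    best = 0.0
    for passage in passages:
        passage_tokens = set(passage.lower().split())
        if answer_tokens:
            best = max(best, len(answer_tokens & passage_tokens) / len(answer_tokens))
    return VerificationResult("rag_grounding", best >= 0.5, best)


def cross_reference_check(answer: str, snippets: List[str]) -> VerificationResult:
    """Stage 2 (assumed): cross-reference against independent online sources,
    stubbed as a count of corroborating snippets."""
    hits = sum(1 for s in snippets if answer.lower() in s.lower())
    return VerificationResult("cross_reference", hits >= 1, min(1.0, hits / 2))


def adversarial_critic(answer: str, critic: Callable[[str], float]) -> VerificationResult:
    """Stage 3 (assumed): a GAN-style discriminator estimates the probability
    that the answer is hallucinated; low estimates pass."""
    p_hallucinated = critic(answer)
    return VerificationResult("adversarial_critic", p_hallucinated < 0.3, 1 - p_hallucinated)


def verify(answer, passages, snippets, critic):
    """Run all stages; the answer reaches the student only if every stage passes."""
    results = [
        rag_grounding_check(answer, passages),
        cross_reference_check(answer, snippets),
        adversarial_critic(answer, critic),
    ]
    return all(r.passed for r in results), results


if __name__ == "__main__":
    # Toy critic standing in for a trained discriminator.
    toy_critic = lambda text: 0.1 if "Paris" in text else 0.9
    ok, trace = verify(
        "The capital of France is Paris",
        passages=["Paris is the capital and largest city of France."],
        snippets=["...the capital of France is Paris..."],
        critic=toy_critic,
    )
    print(ok, [(r.stage, r.passed, round(r.score, 2)) for r in trace])
```

A staged design of this kind fails closed: any single stage can veto an answer, which matches the abstract's emphasis on safeguarding students over maximising answer throughput.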

Keywords: AI in Education, Large Language Models, Hallucinations, Mitigation Techniques.

Event: INTED2025
Track: Innovative Educational Technologies
Session: Generative AI in Education
Session type: VIRTUAL