A. Peláez1, H. Bernal2, D. Santo3
The accelerated integration of artificial intelligence (AI)-based tools into educational environments is transforming how teaching, learning, and the academic experience are managed at all levels, from primary school to university. However, this process poses significant challenges for the privacy and data protection of students and teachers. The use of AI systems—such as virtual assistants, adaptive learning platforms, automated assessment systems, or content generators—involves collecting and processing large volumes of personal data. This data includes academic results, behaviour patterns, learning styles, online interactions, and, in some cases, particularly sensitive information. Teachers likewise generate data through their interaction with AI tools, exposing pedagogical content, teaching patterns, and methodological preferences that, in many cases, lie beyond their effective control. Despite regulatory frameworks such as the General Data Protection Regulation (GDPR), the reality in educational environments reveals a worrying gap between technological adoption and privacy literacy: many students and teachers are unaware of what data is collected, for what purposes, who accesses it, and whether it is used to train commercial models. This paper offers a critical reflection on the impact of AI on data protection at different educational levels, on the emerging risks of opaque learning profiles, and on the possible erosion of the right to privacy. It also makes recommendations for moving towards an ethical and transparent use of AI in education, one that safeguards both pedagogical innovation and the fundamental rights of members of the educational community.
Keywords: Artificial intelligence, Education, Privacy, Data protection, GDPR, Digital ethics, Digital educational tools.