SAFE HAVENS OR SILENT TRAPS? A CRITICAL QUEST FOR SECURITY POLICIES, MEASURES AND PRACTICES FOR GENERATIVE AI IN EDUCATIONAL SETTINGS
M. Artsın1, A. Bozkurt2, S. Sani-Bozkurt2
Generative AI applications have grown rapidly in recent years, and it is becoming difficult to keep up with developments from OpenAI, Gemini, DeepSeek, Grok and other applications. Every day brings news of a new feature or a new record being broken, and it often seems that only the advantages of these technologies are examined. Headlines such as 'OpenAI introduces GPT 4.5', 'Google introduces Gemini 2.0', 'Chinese AI assistant DeepSeek rocks stock markets' and 'Elon Musk's Grok-3 introduced' appear with increasing frequency. Even users with only superficial knowledge of these technologies can apply them in many areas of daily life.
Studies on generative AI examine issues such as trust, reliability and transparency, as well as ways to increase student engagement in educational environments. Generative AI can contribute to the creation of higher-quality educational environments in line with sustainable development goals. However, while creating these environments, an important question is what reliability concerns arise and what measures should be taken to address them. While generative AI delivers information to users through complex algorithms, it provides no clear information about the data processing that occurs in the background.
Ensuring the reliability of generative AI requires examining the security debates surrounding the use of these applications in educational environments. It is essential to determine both the concerns users have about this complex technology and the security policies these applications offer. To this end, a descriptive review of studies on security in generative AI will be conducted, and publications on generative AI and security will be analyzed through the Web of Science. The review will reveal the measures and actions needed to increase user security.
This study will seek answers to questions such as which security measures generative AI applications offer and which security measures users should take when using generative AI in educational environments. We believe the study will provide useful findings for all stakeholders using generative AI in educational settings.
Keywords: Artificial intelligence, generative AI, education, security policies, security measures, security practices.