M.A. Vicente, C. Fernández, D. García-Torres
Recent advances in artificial intelligence (AI), particularly in text-to-image generation, have opened up new possibilities in digital creation, education, creativity, and visual communication. However, these technologies may replicate or reinforce gender biases embedded in their training data, posing a significant challenge for educational contexts committed to gender equity.
Previous research has shown that AI-based image generation platforms such as DALL·E, Midjourney, and Flux tend to associate specific professions, social roles, and physical traits with particular genders, perpetuating stereotypical representations. We address this issue under the umbrella of the LENA Project.
The main goal of the LENA Project is to develop a robust methodological framework for identifying and assessing gender bias in AI-generated images, with a specific focus on its implications for gender and equality in education.
LENA’s approach combines theoretical analysis with empirical research. The project began with a comprehensive literature review on gender bias in generative AI. Building on this, a series of carefully designed prompts related to professions, social roles, and everyday situations has been input into the most widely used AI-based image generation platforms. The resulting images are being examined using visual content analysis techniques to identify recurring gender-based patterns. In addition, a blind evaluation phase is underway, in which external reviewers assess the images without knowing their source, allowing an unbiased appraisal of perceived gender stereotypes. The insights gained from this ongoing process will inform the creation of a generalizable protocol for detecting gender bias in AI-generated content across educational and research settings.

Our preliminary findings confirm the consistent presence of gender bias in AI-generated imagery: caregiving, teaching, and support roles are noticeably feminized, while prestigious, high-authority professions such as engineering, executive leadership, and surgery are masculinized. Furthermore, women are more frequently portrayed with stereotypical features such as hypersexualization and a constant smile, while men are depicted in positions of authority with serious expressions.
The LENA Project highlights that generative AI platforms are far from neutral. Instead, they replicate and amplify preexisting societal biases, which can have profound implications for education. If unaddressed, such biases risk reinforcing harmful stereotypes among students and educators alike, undermining efforts to promote gender equality in learning environments.
Keywords: Educational technology, gender bias, generative AI, artificial intelligence.