ADDRESSING BIAS IN TEXT-TO-IMAGE GENERATION WITHIN HEALTHCARE THROUGH NORM-CRITICAL PERSPECTIVES
C. Master Östlund, S. Höglund Arveklev
University West (SWEDEN)
The development of AI services, including text-to-image models such as DALL-E and Midjourney, has accelerated in recent years, enabling the creation of images from textual descriptions. These models are used across industries such as entertainment and advertising, but their use also carries risks, including discrimination, misuse, and the spread of misinformation. Discrimination involves cultural, racial, and gender biases; misuse includes privacy violations and harmful content; and misinformation risks destabilizing society through misleading or harmful content. While AI is being explored in education, there is a lack of research on how text-to-image models can be used to challenge biases among healthcare students. This is critical, since healthcare professionals' conscious or unconscious norms, values, and attitudes have been identified as partial explanations for inequality in healthcare. Text-to-image models have great potential to increase the emphasis on norm awareness by creating images that challenge students' norms. Norm criticism is an approach that identifies and challenges what is generally accepted as "normal" in society, enabling students to recognize norms that may cause prejudice, discrimination, and marginalization, and thereby develop their norm awareness. To achieve this, it is essential to integrate diversity and inclusion principles when using AI, to ensure that existing social biases are not perpetuated.

In this pilot study, we limited the analysis to the representation of nurses and patients in the generated images, focusing on age, gender, and race/ethnicity. A total of 200 images were generated using Midjourney. The initial 80 images were produced with a simple prompt, "a nurse and a patient". The remaining images were created with prompts specifically designed to elicit a higher degree of inclusiveness and diversity: the second prompt was "A nurse and a patient, take equity, inclusion, and diversity into consideration"; the third was "A nurse and a patient, do not be stereotypical when creating the image"; and the fourth was "A nurse and a patient, adapt a norm critical perspective when creating the image".
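
To make the procedure concrete, a minimal sketch of how such a prompt series could be scripted is shown below. Midjourney does not offer an official public API, so the sketch uses OpenAI's Python client and its DALL-E image endpoint as a stand-in; the model name, output handling, and the per-prompt counts for prompts two to four are assumptions for illustration, not the study's actual workflow (only the 80 images for the first prompt are stated above).

    # Illustrative sketch only: Midjourney has no official public API, so this
    # uses OpenAI's image endpoint (DALL-E) as a stand-in for batch generation.
    # The model name and the counts for prompts 2-4 are assumptions; only the
    # 80 images for the first prompt are taken from the study.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompts = [
        ("a nurse and a patient", 80),                      # stated in the study
        ("A nurse and a patient, take equity, inclusion, "
         "and diversity into consideration", 40),           # assumed count
        ("A nurse and a patient, do not be stereotypical "
         "when creating the image", 40),                    # assumed count
        ("A nurse and a patient, adapt a norm critical "
         "perspective when creating the image", 40),        # assumed count
    ]

    for idx, (prompt, n_images) in enumerate(prompts, start=1):
        for i in range(n_images):
            response = client.images.generate(
                model="dall-e-3",   # assumed model; the study used Midjourney
                prompt=prompt,
                size="1024x1024",
                n=1,                # dall-e-3 returns one image per request
            )
            # Each returned URL could be downloaded and coded for the
            # nurse's and patient's apparent age, gender, and ethnicity.
            print(f"prompt {idx}, image {i}: {response.data[0].url}")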

With the first prompt, the preliminary results show a clear tendency for the nurse to be represented by a young white woman; in only three of the images was the nurse depicted as a person of non-white ethnicity. The patients varied more in age, gender, and skin color, but were also mostly represented by white women. With the second prompt, the majority of both nurses and patients were represented by women of different ages and various skin colors, but white persons were in the minority and only a few men were represented. The third prompt generated images of only white nurses and patients of different ages; five of the nurses were male, but all of the patients were women. The fourth prompt generated more variation in age and gender among both nurses and patients, but only two of the images depicted a nurse who was possibly of non-white ethnicity.

In conclusion, AI models are often trained on datasets that reflect existing societal biases. Norm criticism can help identify whether the generated images reinforce traditional stereotypes, such as depicting nurses primarily as women or portraying patients in ways that align with racial or gendered stereotypes.

Keywords: Technology, Education, AI, Equality.