Abstract No. 2569

ARTIFICIAL INTELLIGENCE IN THE ERA OF POST-TRUTH: CULTURAL PRACTICES, SYMBOLIC RISKS, AND EDUCATIONAL CHALLENGES
N.P. Maldonado Reynoso1, A.J. Rodríguez Aguirre2, N. Islas Ayala1
1 Instituto Politécnico Nacional (MEXICO)
2 Universidad Autónoma de la Ciudad de México (MEXICO)
In the contemporary context, artificial intelligence (AI) emerges as a disruptive technology with profound implications for educational processes, particularly in the training of social science researchers at the graduate level. This qualitative study—conducted over two years through documentary analysis and interviews with faculty members who use AI and whose students also actively engage with it—examines the risks, challenges, and possibilities that AI introduces into education, especially within a scenario shaped by post-truth, where objective facts lose relevance in favor of emotions, beliefs, and personal interpretations.

The central concern is that we live in the so-called post-truth era, understood as the deliberate distortion of reality to manipulate beliefs and emotions with the aim of influencing public opinion and social attitudes. In this context, both educators and students routinely access information from the media, the internet, or AI tools, incorporating it into their everyday cultural practices. However, a significant portion of the population does not critically analyze such information or question its accuracy and origin. This uncritical approach to knowledge is replicated in educational settings, where information presented as “truth” is often accepted without verification or reflection, even in higher education. The use of AI intensifies this issue, as it may generate “algorithmic hallucinations,” that is, false or inaccurate content that is difficult to detect without critical analysis.

Although tools like ChatGPT are valued by students for their usefulness in structuring ideas and drafting academic texts, using them without proper pedagogical mediation can encourage superficial learning, technological dependency, or the inadvertent reproduction of biases. Therefore, the integration of AI into educational environments demands more than the development of technical skills; it requires an analysis rooted in cultural practices and a symbolic redefinition of AI’s role within the teaching and learning experience.

From this perspective, the role of educators must go beyond the technical dimension and evolve into that of cultural mediators, capable of guiding the use of AI within ethical and humanistic pedagogical frameworks. This approach acknowledges that algorithms are cultural products—designed, refined, and managed by human beings—and that their inclusion in education entails a symbolic transformation of pedagogical and epistemological frameworks that should enrich learning processes through a critical and context-sensitive lens.

It is concluded that the responsible integration of AI in higher education must address not only technological and regulatory dimensions, but also the cultural practices already embedded in daily life. Only by recognizing this symbolic transformation will it be possible to harness AI’s educational potential without relinquishing the ethical, critical, and cultural principles that underpin teaching practice.

Keywords: Artificial intelligence, post-truth, cultural practices, higher education.

Event: ICERI2025
Track: Digital Transformation of Education
Session: Data Science & AI in Education
Session type: VIRTUAL