Abstract No. 1806

AXIOLOGICAL ALIGNMENT FOR LEARNING: DESIGNING ETHICAL AI-ENHANCED EDUCATIONAL ENVIRONMENTS
R. Fernández Concha, M. Solórzano
Pontificia Universidad Católica del Perú (PERU)
As Generative Artificial Intelligence (GAI) systems, such as ChatGPT, Claude, and Gemini, become embedded in higher education, their impact extends far beyond the automation of tasks. They increasingly shape how students access knowledge, engage in learning, and are assessed. More importantly, they mediate value structures, influencing which forms of reasoning, participation, and evaluation are privileged. In this sense, GAI systems are not merely technical tools but axiological agents whose underlying value orientations frame pedagogical interactions. This paper examines how the axiological foundations of GAI systems align with core educational values such as autonomy, fairness, inclusivity, and critical thinking.

The research employs a conceptual-analytical methodology, supported by illustrative scenarios from higher education. It proceeds in two steps. First, alignment techniques in AI are examined and then mapped against educational value frameworks, revealing key gaps. While technical alignment promotes consistency and safety, it often fails to capture the normative complexity of classrooms where plural epistemic traditions, learner agency, and critical debate are essential. Second, the study develops a structured model for axiological alignment in educational contexts.

The analysis identifies three tensions. First, while GAI systems increase efficiency and scalability, they risk diminishing learner agency by privileging pre-generated responses. Second, by optimizing for dominant perspectives, they may narrow epistemic diversity and marginalize non-Western knowledge traditions. Third, in striving for consistency, they risk reinforcing prevailing ideologies and reducing space for dissent, ambiguity, and critical questioning, all of which are vital features of higher education.

To address these risks, the paper proposes the Framework for Axiological Alignment for Learning (FAAL). This framework draws from AI ethics, educational philosophy, and humanistic pedagogy. It advances practical strategies such as: (a) value-audit rubrics for evaluating AI tools against educational values; (b) ethics-informed curricular design embedding reflection on AI-mediated learning; and (c) inclusive cross-training experiences that prepare educators and learners to critically engage with GAI outputs. These mechanisms guide responsible adoption and empower institutions to align AI use with broader educational missions.
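To make strategy (a) concrete, the sketch below shows one way a value-audit rubric could be operationalised as a lightweight scoring instrument. This is a minimal illustrative sketch, not the published FAAL instrument: the value dimensions are taken from the abstract (autonomy, fairness, inclusivity, critical thinking), while the criteria, weights, 0-4 scale, tool name, and all identifiers (ValueCriterion, ValueAuditRubric, overall_score, flagged_values) are hypothetical assumptions introduced here for illustration.

```python
# Hypothetical sketch of a value-audit rubric (strategy (a) above).
# Value dimensions come from the abstract; criteria, weights, and the
# 0-4 reviewer scale are illustrative assumptions, not part of FAAL.

from dataclasses import dataclass, field


@dataclass
class ValueCriterion:
    value: str       # educational value being audited
    question: str    # prompt the reviewer answers about the AI tool
    weight: float    # relative importance within the audit (assumed)
    score: int = 0   # reviewer rating on an assumed 0-4 scale


@dataclass
class ValueAuditRubric:
    tool_name: str
    criteria: list[ValueCriterion] = field(default_factory=list)

    def overall_score(self) -> float:
        """Weighted mean of reviewer scores, normalised to the 0-1 range."""
        total_weight = sum(c.weight for c in self.criteria)
        if total_weight == 0:
            return 0.0
        return sum(c.score / 4 * c.weight for c in self.criteria) / total_weight

    def flagged_values(self, threshold: int = 2) -> list[str]:
        """Values with at least one criterion below the assumed acceptability threshold."""
        return sorted({c.value for c in self.criteria if c.score < threshold})


# Example usage with illustrative criteria and reviewer scores.
rubric = ValueAuditRubric(
    tool_name="ExampleGAITutor",  # hypothetical tool name
    criteria=[
        ValueCriterion("autonomy",
                       "Does the tool invite learners to reason before revealing an answer?",
                       1.0, score=3),
        ValueCriterion("fairness",
                       "Are assessment-related outputs consistent across learner profiles and languages?",
                       1.0, score=2),
        ValueCriterion("inclusivity",
                       "Does the tool surface non-Western and minority knowledge traditions when relevant?",
                       1.0, score=1),
        ValueCriterion("critical thinking",
                       "Does the tool present competing viewpoints and acknowledge uncertainty?",
                       1.0, score=2),
    ],
)

print(f"{rubric.tool_name}: overall {rubric.overall_score():.2f}")
print("values needing attention:", rubric.flagged_values())
```

In such a rubric, the flagged values (here, inclusivity) would indicate where an institution's adoption of a GAI tool conflicts with its stated educational mission and where the curricular and cross-training strategies (b) and (c) should focus.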

Embedding values into the design and deployment of GAI should not be treated as a peripheral concern or technical afterthought. Axiological alignment must be recognized as a foundational principle for sustainable, values-driven innovation in education. By centering ethical considerations, institutions can foster learning environments that are equitable, transparent, and human-centered. Moreover, the framework provides a conceptual foundation for future empirical research, enabling systematic evaluation of GAI behavior in ethically salient scenarios such as classroom assessment, advising, or collaborative problem-solving.

This paper contributes to the emerging field of AI Pedagogical Ethics by articulating a theoretical model, preliminary evaluative tools, and research pathways. Its central claim is that alignment of GAI in education must extend beyond technical robustness to include the explicit integration of pedagogical and ethical values.

Keywords: Axiological Alignment, Generative AI in Education, AI Pedagogical Ethics, AI Ethics.

Event: ICERI2025
Session: Challenges in Education and Research
Session time: Tuesday, 11th of November from 15:00 to 18:30
Session type: POSTER