PERCEIVED FAIRNESS AND ACCEPTANCE OF AI-MEDIATED DECISIONS IN EDUCATION: A THEORETICAL MODEL, RESEARCH AGENDA, AND TACTICAL RECOMMENDATIONS
R.A. Turner
As artificial intelligence (AI) becomes more integrated into educational institutions, it will increasingly affect critical student outcomes such as grades, admissions decisions, and scholarship awards. Such decisions have profound consequences for students, and the extent to which students perceive them as fair can significantly shape trust in their schools. Yet how students evaluate the fairness of AI-mediated decisions that affect them is not well understood. On the one hand, AI may bolster perceptions of fairness by increasing consistency in decision processes and mitigating the cognitive biases that distort human judgments. On the other hand, AI lacks empathy, struggles to account for nuance, and generally operates as a ‘black box’ with little transparency. These limitations may undermine perceptions of fairness, particularly in high-stakes situations. Addressing this gap is essential to ensure that educators and administrators can harness the benefits of AI-mediated academic decision-making while mitigating its potential pitfalls.
This research advances a model to examine how including AI in educational decisions affects students’ perceptions of fairness and, in turn, their acceptance of those decisions and trust in the institutions responsible for them. The central theory underpinning this model is organizational justice, a framework widely used in organizational research to understand fairness evaluations along three critical dimensions: the equitability of the outcomes received (distributive justice); the transparency and consistency of the process determining those outcomes (procedural justice); and the quality of communication and respect shown during interactions (interactional justice). The model illuminates areas where AI systems excel, such as enhancing consistency and reducing errors from cognitive bias, which should strengthen perceptions of procedural justice, while identifying challenges, such as AI’s limitations in addressing nuance or demonstrating respect for the individual, which may weaken perceptions of interactional justice.
The model also integrates individual and cultural factors likely to affect students’ fairness perceptions. For example, students more familiar with AI may view its decisions as more impartial and thus more credible, while those less familiar may perceive its processes as opaque, cold, and mechanical. Students from cultures emphasizing acceptance of authority may place greater value on consistent and predictable processes, whereas those from cultures prioritizing equality may focus more on open communication and opportunities for input. Similarly, students from cultures centered on individual achievement may evaluate fairness by how well a decision serves their personal interests, while those from community-oriented cultures may emphasize the broader impact of decisions. Such factors underscore the importance of designing systems with differences in knowledge and cultural values in mind.
This paper also presents a research agenda to validate the proposed model and concludes with recommendations for effective AI implementation. These include tactics such as making decision criteria clear from the outset, integrating human oversight, designing culturally responsive systems, and instituting robust appeals processes. In sum, this work marks a meaningful step toward understanding how educational institutions can realize the efficiencies of AI while maintaining, and even bolstering, students’ trust.
Keywords: Impact of AI on education, fairness in decision-making, trust in educational institutions, organizational justice.