UNINTENDED CONSEQUENCES OF TECHNOLOGY-INFORMED ASSESSMENTS: MITIGATING THE RISK OF DISCRIMINATORY EMPLOYMENT DECISIONS
B. Barrett1, C. Miller2
Technological innovation continues to advance at a rapid pace, and the accompanying promises of increased efficiency and competitiveness are alluring to organizations hoping that early adoption of new technologies will yield a high long-term return on investment. At the same time, ensuring the ethical and equitable implementation of these technologies has become a key concern for many, spurring dialogue across industry and academia alike. One of the better-known examples of a problematic implementation is Amazon's experimental AI recruiting tool, which systematically downgraded female applicants. Incidents such as these are an early warning of the damage that can be inflicted if appropriate processes, policies, and safeguards are not put in place. Of particular concern is the fact that these newer technologies, including artificial intelligence (AI) and extended reality (XR), are growing in popularity for training and assessment purposes, yet they often present built-in barriers to a variety of end users. Many of the end users who may find themselves at a disadvantage in these scenarios belong to protected classes, including, but not limited to, employees with disabilities (EWD), international employees, older employees, and employees with limited access and exposure to new technologies due to socioeconomic factors.
The threat lies in the fact that, as the world becomes more digitalized, these individuals may face an increased risk of economic inequality should employment decisions, such as hiring, promotion, or performance management, be based on assessments skewed by unexpected and uncontrollable variables: an employee's physical, mental, or psychological limitations that impede the ability to use the technologies; an employee's lack of experience with the technologies, which puts them at a disadvantage relative to their reference group; or the technology's limitations in assessing employees who are not represented in its training data. The goal of this paper is to analyze existing academic publications, business literature, social media, regulations, and current trends in order to provide recommendations for mitigating the risk of such unintended consequences. A secondary goal is to promote awareness and proactive measures that reduce the potential for discrimination through early intervention as these technologies and related policies continue to evolve.
Keywords: Artificial intelligence, extended reality, training, assessment, discrimination, economic equity, employee protection.