Artificial intelligence (AI) is rapidly transforming the hiring process, with companies such as Amazon, Hilton, and AT&T adopting AI tools to streamline recruitment and selection.[1] While these tools promise increased efficiency and reduced bias, they also raise significant concerns about discrimination and unequal access to employment opportunities.[2]
A key issue with AI hiring tools is their susceptibility to biases embedded in the data used to train the algorithms. Historical hiring data may contain intentional or unintentional prejudices that get encoded into the AI system, perpetuating and even amplifying discrimination.[3] This can occur through several mechanisms, such as the choice of target variables, the labeling of training data, and unrepresentative data collection.[4] For example, if the target variable is defined in terms of characteristics that are systematically less common in a particular group, the algorithm may consistently score candidates from that group lower.[5] Similarly, training data that underrepresents certain groups or contains biased performance evaluations can lead to discriminatory outcomes.[6]
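To make the mechanism concrete, the sketch below is a deliberately simplified simulation, not any vendor's actual system: it trains an off-the-shelf logistic regression on synthetic "historical" hiring labels that penalized one group. Group membership is never given to the model; a correlated proxy feature (a hypothetical keyword or activity signal) is enough for the model to reproduce the disparity. All feature names, coefficients, and magnitudes are illustrative assumptions.

```python
# Toy illustration of bias transfer from historical labels to a trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)            # 0 = historically favored group, 1 = disfavored group
skill = rng.normal(0.0, 1.0, n)          # job-relevant ability, identically distributed across groups
proxy = rng.normal(group * 1.5, 1.0)     # e.g. a resume keyword/activity feature correlated with group

# Historical labels: past decisions rewarded skill but also penalized group 1.
hired = (skill - 1.2 * group + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, proxy])      # note: group itself is never a feature
model = LogisticRegression().fit(X, hired)
selected = model.predict(X)

for g in (0, 1):
    print(f"predicted selection rate, group {g}: {selected[group == g].mean():.2f}")
# Group 1's rate comes out lower even though skill is identical across groups:
# the proxy feature lets the model reconstruct the bias baked into the labels.
```

In this toy run, the two groups are equally qualified by construction, yet the learned model selects the disadvantaged group at a noticeably lower rate, which is the dynamic the sources above describe.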
A prominent example of AI discrimination in hiring is Amazon’s experimental candidate scoring and ranking system. Trained on resumes submitted over a 10-year period, the tool developed a bias against female candidates, penalizing resumes containing the word “women’s” and downgrading graduates of all-women’s colleges.[7] Although Amazon ultimately abandoned the project, it highlights the potential for AI to reinforce historic prejudices and further ingrain discrimination into the workforce.[8]
Under the current legal framework, victims of AI discrimination may face challenges in seeking redress through Title VII of the Civil Rights Act of 1964. The lack of transparency in AI hiring algorithms can make it difficult to prove the elements of a disparate treatment claim, such as discriminatory intent or knowledge, or to identify the specific employment practice responsible for a disparity in a disparate impact claim.[9] This uncertainty, coupled with the reactive nature of Title VII, puts society in the position of addressing harms only after they have occurred.[10]
To proactively protect job applicants and promote a more equitable workforce, some jurisdictions are considering or implementing regulations on AI hiring tools. In 2021, New York City passed Local Law 144, which requires employers to conduct bias audits of automated employment decision tools and to notify candidates that such tools are being used.[11] The law, set to take effect on January 1, 2023, aims to increase transparency and accountability in the use of AI hiring systems.[12]
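A bias audit of this kind typically turns on comparing selection rates across demographic categories. The sketch below is a hedged illustration of the sort of calculation an audit might report, not the methodology the law or its implementing rules prescribe: it computes each category's selection rate and its impact ratio against the highest-rate category, flagging ratios below the EEOC's four-fifths (0.8) benchmark. The category names and outcome records are hypothetical.

```python
# Illustrative selection-rate / impact-ratio calculation for a bias audit.
from collections import defaultdict

# (demographic category, selected?) pairs a screening tool might have produced
outcomes = [
    ("category_a", True), ("category_a", True), ("category_a", True), ("category_a", False),
    ("category_b", True), ("category_b", False), ("category_b", False), ("category_b", False),
]

counts = defaultdict(lambda: [0, 0])     # category -> [number selected, number scored]
for category, selected in outcomes:
    counts[category][0] += int(selected)
    counts[category][1] += 1

rates = {c: sel / total for c, (sel, total) in counts.items()}
highest = max(rates.values())

for category, rate in sorted(rates.items()):
    ratio = rate / highest
    flag = "  <-- below the four-fifths (0.8) benchmark" if ratio < 0.8 else ""
    print(f"{category}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

Publishing numbers like these is what gives candidates and regulators a way to see, rather than guess at, how a tool treats different groups.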
Illinois took a similar step in 2019, enacting the Artificial Intelligence Video Interview Act, which requires employers to notify candidates when AI is used to analyze video interviews, to explain how the AI works, and to obtain consent before using it.[13] The law also restricts the sharing of interview videos and requires employers to destroy an applicant’s videos upon request.[14]
While these regulations represent important steps towards addressing the risks of AI hiring tools, more comprehensive measures may be needed. Scholars have argued for ex ante regulations that require employers to actively monitor and audit their AI systems for bias and take corrective action when necessary.[15] This could involve mandating the use of disinterested committees to review algorithms for disparate treatment or impact, promoting greater transparency in AI decision-making, and encouraging collaboration among stakeholders to anticipate and prevent discrimination.[16]
Furthermore, employers should strive to create AI hiring tools that are designed with inclusion in mind from the outset. This may involve carefully selecting target variables and class labels, ensuring representative training data, and regularly auditing algorithms for biases.[17] By proactively addressing these issues during the development process, employers can harness the power of AI to promote diversity and inclusion rather than perpetuate historical inequities.[18]
In addition to regulatory efforts, it is crucial to address the lack of diversity within the technology industry itself. The underrepresentation of women and minorities in tech roles contributes to the development of biased algorithms, as the perspectives and experiences of these groups are often overlooked.[19] Employers should prioritize diversity initiatives and create inclusive workplace cultures that attract and retain talent from underrepresented backgrounds.[20]
Ultimately, the ethical use of AI in hiring demands a multi-faceted approach that combines proactive regulation, ongoing monitoring, and a commitment to diversity and inclusion. By redistributing the costs and benefits of AI hiring technologies through ex ante regulation, promoting transparency and accountability, and fostering diverse teams, employers can leverage AI to create a more efficient and equitable hiring process for all.[21]
References:
[1] Jackson, A. (2019, January 1). Popular Companies Using AI to Interview & Hire You. Glassdoor. https://www.glassdoor.com/blog/popular-companies-using-ai-to-interview-hire-you/
[2] Raub, M. (2018). Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices. Arkansas Law Review, 71(2), 529-570.
[3] Kodiyan, A. A. (2019, November 12). An overview of ethical issues in using AI systems in hiring with a case study of Amazon’s AI based hiring tool. arXiv. https://arxiv.org/abs/1911.04422
[4] Barocas, S., & Selbst, A. D. (2016). Big Data’s Disparate Impact. California Law Review, 104, 671-732.
[5] Barocas, S., & Selbst, A. D. (2016). Big Data’s Disparate Impact. California Law Review, 104, 671-732.
[6] Kim, P. T. (2017). Data-Driven Discrimination at Work. William & Mary Law Review, 58, 857-936.
[7] Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
[8] Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
[9] Barocas, S., & Selbst, A. D. (2016). Big Data’s Disparate Impact. California Law Review, 104, 671-732.
[10] Biggers, B. E. A. (2020, January 19). Curbing Widespread Discrimination by Artificial Intelligence Hiring Tools: An Ex Ante Solution. Boston College Intellectual Property & Technology Forum. http://bciptf.org/2020/01/curbing-widespread-discrimination-by-AI-hiring-tools
[11] New York City Council. (2021). Int. No. 1894-A. https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9
[12] Turley, M. (2022). New York City Enacts Law Regulating Use of Artificial Intelligence in Employment Decisions. National Law Review. https://www.natlawreview.com/article/new-york-city-enacts-law-regulating-use-artificial-intelligence-employment-decisions
[13] H.B. 2557, 101st Gen. Assemb., Reg. Sess. (Ill. 2019).
[14] H.B. 2557, 101st Gen. Assemb., Reg. Sess. (Ill. 2019).
[15] Houser, K. A. (2019). Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making. Stanford Technology Law Review, 22, 290-339.
[16] Biggers, B. E. A. (2020, January 19). Curbing Widespread Discrimination by Artificial Intelligence Hiring Tools: An Ex Ante Solution. Boston College Intellectual Property & Technology Forum. http://bciptf.org/2020/01/curbing-widespread-discrimination-by-AI-hiring-tools
[17] Kelan, E. K. (2018). Algorithmic inclusion: Shaping the predictive algorithms of artificial intelligence in hiring. Human Resource Management Journal, 1-14.
[18] Raub, M. (2018). Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices. Arkansas Law Review, 71(2), 529-570.
[19] Yao, M. (2017, May 1). Fighting Algorithmic Bias and Homogenous Thinking in AI. Forbes. https://www.forbes.com/sites/mariyayao/2017/05/01/dangers-algorithmic-bias-homogenous-thinking-ai/
[20] Houser, K. A. (2019). Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making. Stanford Technology Law Review, 22, 290-339.
[21] Raub, M. (2018). Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices. Arkansas Law Review, 71(2), 529-570.



