Artificial Intelligence (AI) hiring algorithms, utilized by an estimated 70% of companies and 99% of Fortune 500 companies, are increasingly coming under scrutiny for their inherent biases.
These biases, a reflection of the real-world hiring data on which the algorithms are trained, have significant implications, particularly for those who frequently face systemic discrimination in hiring processes.
Last year, in response to growing concerns around employment discrimination, New York City lawmakers enacted the Automated Employment Decision Tool Act. The legislation requires companies that use AI for employment decisions to undergo audits assessing bias in "sex, race-ethnicity, and intersectional categories." However, critics argue that the law, despite its groundbreaking nature, falls short of providing the necessary protections.
The ordinance lacks enforcement measures and quality control standards and notably omits disability, one of the most commonly reported forms of identity-based employment discrimination, from the listed bias assessment categories. This omission is not entirely surprising, given that New York lawmakers, including Mayor Eric Adams, are strong advocates of AI. Stricter or broader assessments of AI hiring tools could potentially lead to their downfall as the full extent and inevitability of their algorithmic biases, particularly against disabled applicants, become apparent.
AI hiring tools are built over years of development, and once trained, their behavior is resistant to change. Consequently, there is no single corrective action companies can take to remove bias that is deeply embedded in the model. The tool finds numerous patterns in its training data (usually a list of past or ideal job holders) to guide its decision-making, and in doing so perpetuates the same biased outcomes.
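The mechanism is straightforward to illustrate. The toy sketch below (hypothetical data, not any vendor's actual model) trains a simple similarity scorer on the traits of past hires. Because almost no past hires had a resume employment gap, a trait that often serves as a proxy for disability, the learned pattern penalizes any candidate who has one, regardless of qualifications:

```python
# Toy illustration with made-up data: a screener trained only on past hires
# reproduces whatever patterns those hires share, even when a trait (here,
# an employment gap, a common proxy for disability) says nothing about
# job performance.

past_hires = [
    {"degree": 1, "gap": 0},
    {"degree": 1, "gap": 0},
    {"degree": 0, "gap": 0},
    {"degree": 1, "gap": 1},  # only one of four past hires had a gap
]

def trait_rates(profiles):
    """Frequency of each trait among past hires: the 'pattern' the tool learns."""
    return {k: sum(p[k] for p in profiles) / len(profiles) for k in profiles[0]}

def score(candidate, rates):
    """Score a candidate by similarity to the learned past-hire pattern."""
    return sum(rates[k] if candidate[k] == 1 else 1 - rates[k] for k in candidate)

rates = trait_rates(past_hires)
with_gap = {"degree": 1, "gap": 1}
without_gap = {"degree": 1, "gap": 0}

# Identical credentials, but the gap alone lowers the score.
print(score(without_gap, rates), score(with_gap, rates))  # 1.5 1.0
```

Note that no one wrote a rule penalizing employment gaps; the penalty emerges automatically from the composition of the training data, which is why adding a handful of disabled profiles cannot undo it.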
These outcomes are particularly pronounced for disabled applicants, who have historically been excluded from various types of jobs. The inclusion of more disabled profiles in training models would not solve the problem, due to the stubbornness of algorithms and the vast diversity of disabilities that exist.
"Disability" is a broad term that encompasses a range of conditions, not all of which are equally represented in the training data, especially disabilities that intersect with one or more other marginalized identities. Moreover, AI hiring tools such as Pymetrics, which compare applicant scores with those of former and current employees considered successful in their roles, systematically overlook how employers set their disabled employees up for failure rather than success by denying them workplace accommodations.
AI hiring tools have not only automated violations of the Americans with Disabilities Act but have also created new avenues for scrutiny and discrimination. Beyond screening resumes, these tools attempt to measure and rate applicants' personality traits and predicted job performance based on how they play a video game or speak and move in a video recording. Such tools are likely to discriminate on two fronts. First, video analysis technology has difficulty registering the faces of nonwhite applicants and of people with disabilities that affect their appearance. Second, these tools cannot accurately discern mental capability or emotion, as they claim to.
Given the extreme and well-documented biases of AI hiring tools, particularly against disabled applicants, the effectiveness of the New York City bill is questionable. It encourages more audits and half-measures, but fails to address the root issue, which is the use of these tools in hiring practices in the first place.
New efforts in the state legislature are similarly misguided, as they merely seek to fill gaps in the New York City bill with stronger and more inclusive auditing systems. Even when conducted thoroughly, algorithmic audits are not a solution to the pervasive biases in AI hiring tools.
These biases cannot be eradicated until companies cease using AI for hiring and personnel decisions altogether. By avoiding the call for a ban on AI hiring tools, New York lawmakers are shirking their responsibilities and hiding behind bills that obscure the full extent of the discrimination these technologies perpetuate. Their legislation also places the burden of holding companies accountable on applicants themselves.
Their reluctance to take decisive action prolongs and exacerbates AI-driven employment discrimination, especially for disabled job seekers.