Law & Policy

AI in Hiring: When the Algorithm Says 'No' and Can't Explain Why

AI-powered hiring tools screen resumes, conduct video interviews, and score candidates at scale. But growing evidence shows these systems can replicate and amplify employment discrimination, and existing employment law is poorly equipped to address algorithmic bias that has no identifiable discriminatory intent.

By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

AI has transformed recruitment. Resume screening algorithms process thousands of applications in minutes. Video interview analysis tools score candidates on facial expressions, vocal tone, and word choice. Predictive models estimate "cultural fit," "retention probability," and "performance potential" from application data. These tools promise efficiency, consistency, and objectivity: removing the human biases that decades of employment discrimination research have documented.

The promise is undermined by a growing body of evidence that AI hiring tools can replicate precisely the biases they claim to eliminate. Amazon's internal AI recruiting tool, abandoned in 2018 after it was found to systematically downgrade resumes containing the word "women's" (as in "women's chess club captain"), was an early warning. Since then, research has documented algorithmic bias in resume screening, video interview scoring, and predictive analytics across multiple industries.

The Explainability Problem

Fabeyo (2025) examines transparency methods in AI hiring algorithms. AI tools increasingly shape employment decisions, from resume screening to employment tests and automated video interviews, and concerns about the "black box" nature of these tools have grown as many algorithmic models offer little insight into how or why hiring decisions are made.

The explainability problem in hiring AI is both technical and legal. Technically, complex models (deep neural networks, ensemble methods) cannot provide simple, human-interpretable explanations for their predictions. Legally, employment law in most jurisdictions requires that adverse employment decisions be justifiable, but an algorithm that says "this candidate scored 62 out of 100" without explaining which factors drove the score does not provide the justification that a rejected candidate would need to contest the decision.
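To make the gap concrete, here is a minimal sketch of a local, per-feature explanation for a black-box score, in the spirit of occlusion-style attribution. The scoring function, its weights, and the feature names (`years_experience`, `gap_months`, `referral`) are all hypothetical, invented for illustration, not drawn from any of the papers discussed.

```python
def score(candidate):
    # Stand-in for an opaque model: the weights are hidden from the candidate.
    weights = {"years_experience": 4.0, "gap_months": -1.5, "referral": 10.0}
    return 50 + sum(weights[k] * candidate[k] for k in weights)

def explain(candidate, baseline):
    """Attribute the score to each feature by replacing features one at a
    time with a baseline value and measuring how much the score changes."""
    contributions = {}
    for feature in candidate:
        perturbed = dict(candidate)
        perturbed[feature] = baseline[feature]
        # Contribution = how much the score moves when this feature
        # is swapped for its baseline value.
        contributions[feature] = score(candidate) - score(perturbed)
    return contributions

candidate = {"years_experience": 3, "gap_months": 8, "referral": 0}
baseline = {"years_experience": 0, "gap_months": 0, "referral": 0}
print(explain(candidate, baseline))
# → {'years_experience': 12.0, 'gap_months': -12.0, 'referral': 0.0}
```

An explanation of this shape ("the 8-month gap cost you 12 points") is what a rejected candidate would need in order to contest a decision; a bare "62 out of 100" is not.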

Designing for Fairness

Agbasiere and Nze-Igwe (2025) investigate how AI affects hiring procedures, focusing on the fairness of the algorithms that drive hiring tools. AI has improved the efficiency of the hiring process, yet its use can result in institutionalized discrimination: AI recruitment systems trained on historical data frequently reflect and reinforce pre-existing prejudices in that data.

The paper proposes design principles for fairer AI hiring tools:

  • Bias auditing: Regular testing of AI outputs across demographic groups to detect disparate impact.
  • Representative training data: Ensuring training data reflects the diversity of the qualified candidate pool, not the historical hiring pool (which may reflect past discrimination).
  • Human oversight: Maintaining meaningful human review of AI recommendations, particularly for adverse decisions.
  • Candidate transparency: Providing candidates with information about how AI was used in their evaluation and the opportunity to contest AI-informed decisions.
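The first of these principles, bias auditing, can be sketched with the "four-fifths rule" used in US disparate-impact analysis: a group whose selection rate falls below 80% of the highest group's rate is flagged for review. The group labels and counts below are illustrative, not real audit data.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected_count, total_count)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def disparate_impact_flags(outcomes, threshold=0.8):
    """Flag each group whose selection rate is below `threshold` times
    the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

audit = {"group_a": (50, 100), "group_b": (30, 100)}
print(disparate_impact_flags(audit))
# → {'group_a': False, 'group_b': True}  (0.3 / 0.5 = 0.6 < 0.8)
```

Note that a flag like this only detects disparate impact; as the paper argues, correcting it requires changes to training data, features, or model design.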

The AIHR Landscape

Zhu (2025) examines AI in employment and AI-driven human resources (AIHR), focusing on big-data resume-screening algorithms in recruitment. AI has already replaced humans in some working positions, and in recruitment, AI-powered tools process large numbers of applications using pattern recognition that may encode biases present in historical hiring data.

The paper documents how algorithmic bias enters the hiring pipeline at multiple points: during data collection (which candidates are represented in training data), during feature selection (which candidate attributes the model considers), during model training (what patterns the algorithm learns to associate with "good" candidates), and during deployment (how the model's outputs interact with human decision-makers).
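The feature-selection stage is perhaps the least intuitive entry point, because bias can survive the removal of the protected attribute itself. The synthetic sketch below (all field names and records invented for illustration) shows a proxy feature carrying the same information: dropping `group` changes nothing, because `zip` predicts the same historical hire rates.

```python
# Synthetic history: group "a" clusters in zip "A1" and was hired more
# often; group "b" clusters in zip "B2" and was hired less often.
history = [
    {"zip": "A1", "group": "a", "hired": 1},
    {"zip": "A1", "group": "a", "hired": 1},
    {"zip": "A1", "group": "a", "hired": 0},
    {"zip": "B2", "group": "b", "hired": 1},
    {"zip": "B2", "group": "b", "hired": 0},
    {"zip": "B2", "group": "b", "hired": 0},
]

def hire_rate(records, key, value):
    """Fraction hired among records where record[key] == value."""
    rows = [r for r in records if r[key] == value]
    return sum(r["hired"] for r in rows) / len(rows)

# The proxy reproduces the protected-group pattern exactly.
print(hire_rate(history, "group", "a"), hire_rate(history, "zip", "A1"))
print(hire_rate(history, "group", "b"), hire_rate(history, "zip", "B2"))
```

A model trained on this data learns the zip-code pattern and, through it, the historical group disparity, even though `group` never appears as a feature.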

Ethical Design Framework

Singh, Kumar, and Das (2025) propose Ethicruit, a framework for designing ethical AI systems in employment and recruitment. Conventional AI recruitment systems tend to perpetuate existing human prejudices, resulting in discrimination based on gender, race, or socioeconomic status.

The framework addresses ethical AI recruitment through four pillars: fairness (ensuring equal treatment across demographic groups), transparency (making algorithmic decision processes understandable), accountability (establishing clear responsibility for algorithmic outcomes), and privacy (protecting candidate data from unauthorized use).

Gender Equity in Automation

Maheswari (2025) proposes the G.E.N.D.E.R. AI Framework for advancing workplace equity in automation. AI and automation are increasingly influencing workplace decision-making, particularly in recruitment, performance evaluations, and career progression. While AI is often perceived as neutral, research highlights that these systems frequently replicate and amplify existing inequalities.

The framework addresses a specific dimension of algorithmic hiring bias: gender. AI systems trained on historically male-dominated industries learn to associate "successful employee" with male-correlated attributes (education at male-dominated institutions, career trajectories without gaps, leadership language associated with masculine communication styles). Without deliberate corrective design, these systems systematically disadvantage female candidates.

Claims and Evidence

  • Claim: AI hiring tools eliminate human bias. Evidence: all papers find AI tools can replicate and amplify historical biases. Verdict: ❌ Refuted.
  • Claim: Explainable AI methods can make hiring decisions transparent. Evidence: Fabeyo (2025) shows transparency methods exist but face accuracy-explainability trade-offs. Verdict: ⚠️ Uncertain.
  • Claim: Bias auditing can detect and correct algorithmic discrimination. Evidence: Agbasiere & Nze-Igwe (2025) show auditing detects disparate impact, while correction requires design changes. Verdict: ✅ Supported (for detection; correction is harder).
  • Claim: Current employment law adequately addresses algorithmic discrimination. Evidence: Zhu (2025) finds existing frameworks address intentional discrimination but not algorithmic bias without intent. Verdict: ❌ Refuted.
  • Claim: Gender-specific frameworks are needed alongside general fairness approaches. Evidence: Maheswari (2025) shows gender-specific mechanisms address patterns that general fairness metrics miss. Verdict: ✅ Supported.

Open Questions

  • Should there be a "right to a human decision" in hiring? If a candidate is rejected by an algorithm, should they have the right to request human review?
  • Can AI hiring tools be certified for fairness? Should AI hiring tools undergo independent auditing and certification before deployment, analogous to product safety certification?
  • How should disparate impact be measured for intersectional identities? An AI system may be fair for women and fair for racial minorities but biased against women from specific racial groups. How should intersectional fairness be assessed?
  • What is the employer's liability for algorithmic discrimination? If an employer uses a third-party AI hiring tool that discriminates, is the employer liable, the tool vendor, or both?
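The intersectionality question above can be made concrete with a tiny synthetic audit (all attribute values and records invented for illustration): selection rates are computed per combination of attributes rather than per attribute alone, and the marginals can look perfectly balanced while one intersection is entirely shut out.

```python
def rates_by(records, keys):
    """Selection rate for each combination of values of the given keys."""
    groups = {}
    for r in records:
        key = tuple(r[k] for k in keys)
        sel, n = groups.get(key, (0, 0))
        groups[key] = (sel + r["selected"], n + 1)
    return {k: sel / n for k, (sel, n) in groups.items()}

records = [
    {"gender": "f", "race": "x", "selected": 0},
    {"gender": "f", "race": "y", "selected": 1},
    {"gender": "m", "race": "x", "selected": 1},
    {"gender": "m", "race": "y", "selected": 0},
]

# Marginal rates are balanced (0.5 everywhere)...
print(rates_by(records, ["gender"]))
print(rates_by(records, ["race"]))
# ...but the (f, x) intersection is never selected.
print(rates_by(records, ["gender", "race"]))
```

An audit that checks only `gender` and only `race` would pass this system; only the joint breakdown reveals the disparity.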
Implications

The research reviewed here suggests that AI hiring tools are not neutral: they encode the biases present in historical hiring data and the design choices of their developers. Employment law must evolve to address algorithmic discrimination that lacks identifiable discriminatory intent, and organizational practice must incorporate bias auditing, transparency, and human oversight as standard components of AI-assisted recruitment.

References (5)

[1] Fabeyo, S. (2025). Explainable AI in Employment Decision-Making: Transparency Methods in Hiring Algorithms. Issues in Information Systems, 2025, 110.
[2] Agbasiere, C. & Nze-Igwe, G.R. (2025). Algorithmic Fairness in Recruitment: Designing AI-Powered Hiring Tools to Identify and Reduce Biases in Candidate Selection. Path of Science, 11(6), 10.
[3] Zhu, Y. (2025). AI and Employment Discrimination: AIHR's Algorithmic Bias. AHSRA, 2025, 20842.
[4] Singh, N., Kumar, D., & Das, R. (2025). Ethicruit: A Framework for Designing Ethical AI Systems in Employment and Recruitment Processes. IJRIAS, 10(11), 57.
[5] Maheswari, A. U. (2025). Beyond Algorithms: A G.E.N.D.E.R. AI Framework for Advancing Workplace Equity in Automation. International Journal of Global Research Innovations & Technology, 03(02(II)), 51-59.
