
Gender Bias in AI: Feminist Perspectives on Algorithmic Discrimination

AI systems perpetuate gender bias through training data, design assumptions, and deployment contexts. Feminist scholars argue that technical debiasing is necessary but insufficient—the deeper problem is whose experiences and values shape AI development in the first place.

By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

When a hiring algorithm trained on historical data systematically ranks male candidates higher than equally qualified female candidates, the bias is not a glitch—it is a faithful reproduction of the historical discrimination encoded in the training data. The technical fix (rebalance the training data, add fairness constraints) addresses the symptom. The feminist critique asks a deeper question: why was the training data biased in the first place, and what structural conditions ensure that AI systems continue to reflect and amplify existing gender inequalities?
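The "technical fix" mentioned above can be made concrete. A minimal sketch of one standard preprocessing approach, reweighing (in the spirit of Kamiran and Calders), assigns each group-outcome cell a weight so that group membership and outcome look statistically independent in the reweighted training data. All data below is invented for illustration:

```python
from collections import Counter

# Toy hiring records as (gender, hired) pairs -- invented, for illustration only.
data = [("F", 0), ("F", 0), ("F", 1), ("M", 1), ("M", 1), ("M", 0)]

n = len(data)
group_counts = Counter(g for g, y in data)   # records per gender
label_counts = Counter(y for g, y in data)   # records per outcome
cell_counts = Counter(data)                  # records per (gender, outcome) cell

# Reweighing: weight = P(group) * P(label) / P(group, label).
# Cells rarer than independence would predict get weights above 1.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (cell_counts[(g, y)] / n)
    for (g, y) in cell_counts
}
```

Here women with positive outcomes are under-represented relative to independence, so that cell receives a weight above 1 and a downstream learner counts those examples more heavily. This addresses the symptom, which is exactly the limit of the feminist critique above.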

The Research Landscape

Gender Bias and Digital Literacy

Shah (2025), with 10 citations, provides the most comprehensive narrative review, synthesizing research from 2010 to 2024 on how gender bias manifests in AI systems and how digital literacy can serve as an empowerment tool for women navigating biased technologies.

The paper documents gender bias across multiple AI applications:

  • Hiring algorithms: Systems trained on historical hiring data reproduce gender-segregated occupational patterns—recommending women for caregiving roles and men for technical ones.
  • Credit scoring: Women receive less favorable credit assessments than men with equivalent financial profiles, reflecting historical patterns of gender-differentiated financial access.
  • Healthcare AI: Diagnostic systems trained on predominantly male patient data perform poorly for conditions that present differently in women (cardiovascular disease, autoimmune disorders, pain assessment).
  • Voice assistants: The default feminization of AI assistants (Siri, Alexa, Cortana) reinforces the association between women and servitude—a design choice, not a technical necessity.

Shah argues that digital literacy—the ability to understand, evaluate, and challenge algorithmic decisions—is a necessary (though not sufficient) tool for empowerment. Women who understand how AI systems work are better positioned to identify when they are being treated unfairly and to advocate for change.

Healthcare Inequalities

Lau (2025), with 2 citations, applies a data feminist framework to analyze algorithmic inequalities in healthcare AI. The analysis is grounded in D'Ignazio and Klein's Data Feminism framework, which identifies seven principles for challenging power in data science, including examining power, challenging classification, and valuing multiple forms of knowledge.

The paper documents specific mechanisms through which healthcare AI reproduces gendered and racialized health inequalities:

  • Dataset bias: Clinical datasets used to train diagnostic AI are historically skewed toward white male subjects. Conditions that disproportionately affect women or minorities are underrepresented in training data.
  • Androcentric medical epistemology: Medical knowledge itself was historically constructed around the male body as the default. AI systems trained on this knowledge inherit the assumption that "normal" = "male."
  • Intersectional amplification: For women of color, the intersection of gender and racial bias produces compounded disadvantage that neither gender-only nor race-only analyses capture.

Lau argues that data feminist critique is not anti-technology but pro-redesign—calling for AI systems built on data that reflects the full diversity of human bodies and experiences.

Feminised AI Assistants

Prasad and Singh (2025), with 1 citation, examine the gender politics of AI assistant design through a cyber-feminist lens. The observation that most commercial AI assistants have female names, female voices, and compliant personalities is not accidental—it reflects (and reinforces) the cultural association between femininity and service, warmth, and subordination.

The paper draws on the metaphor of "digital dolls"—AI assistants designed to be pleasant, helpful, and non-threatening, mirroring gendered expectations of women's behavior. The implications extend beyond aesthetics: when users habitually command a female-voiced AI assistant, they may internalize patterns of gender-asymmetric interaction that transfer to human relationships.

Marketing and Representation

Aarti and Chauhan (2025) examine gender bias in AI-driven marketing systems, which use algorithmic personalization to target advertisements. Their analysis reveals that marketing AI tends to reinforce gender stereotypes: women are disproportionately shown ads for beauty, fashion, and childcare products; men for technology, finance, and sports. This occurs not because the algorithms are programmed with stereotypes but because they optimize for engagement—and stereotypical content generates higher click-through rates in the short term, creating a reinforcement loop.
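The reinforcement loop can be sketched in a few lines. The model below is a deterministic toy, not code from the paper: it assumes stereotype-congruent ads carry a slightly higher click-through rate, and that each round the system reallocates impressions in proportion to observed engagement:

```python
def feedback_loop(ctr, steps):
    """Toy engagement-optimizing ad allocator. `ctr` maps ad type to its
    (assumed) click-through rate; impressions start equal and then chase
    whichever ads generated more clicks."""
    share = {ad: 1 / len(ctr) for ad in ctr}  # equal exposure at the start
    for _ in range(steps):
        engagement = {ad: share[ad] * ctr[ad] for ad in ctr}
        total = sum(engagement.values())
        # Next round's impression share mirrors observed engagement.
        share = {ad: engagement[ad] / total for ad in ctr}
    return share

# Hypothetical CTRs: a one-point gap in favor of stereotyped content.
share = feedback_loop({"stereotyped": 0.06, "counter_stereotyped": 0.05}, steps=20)
```

Even this modest gap (0.06 vs. 0.05) drives stereotyped ads to well over 90% of impressions within 20 rounds: the short-term reinforcement dynamic the authors describe.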

Critical Analysis: Claims and Evidence

| Claim | Evidence | Verdict |
| --- | --- | --- |
| AI hiring systems reproduce gender-segregated occupational patterns | Shah's review of hiring algorithm studies | ✅ Supported |
| Healthcare AI performs poorly for conditions presenting differently in women | Lau's data feminist analysis of clinical dataset bias | ✅ Supported — well-documented for cardiovascular and pain conditions |
| Feminized AI assistants reinforce gender-service associations | Prasad & Singh's cultural analysis | ⚠️ Uncertain — culturally plausible but psychological effects not empirically measured |
| AI-driven marketing reinforces gender stereotypes through engagement optimization | Aarti & Chauhan's analysis of advertising algorithms | ✅ Supported — feedback loop mechanism documented |

Open Questions

  • Intersectionality in practice: How should AI fairness frameworks operationalize intersectionality? Current metrics typically address one protected attribute at a time (gender or race), not their intersection.
  • Non-binary gender: Most AI systems classify gender as binary. How should systems be redesigned to accommodate non-binary and gender-diverse populations?
  • Global context: Gender bias in AI looks different across cultural contexts. A feminist critique developed in Western contexts may not translate directly to societies with different gender systems.
  • Structural vs. technical solutions: If AI bias reflects structural inequality, can technical debiasing ever be sufficient? Or are structural interventions (workforce diversity, regulatory requirements, inclusive design practices) necessary preconditions?
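The intersectionality question can be made concrete with a toy example (all numbers invented): a selection process can satisfy single-attribute parity on gender and on race while being strongly biased at their intersections:

```python
# Invented records: (gender, race, selected).
records = [
    ("F", "A", 1), ("F", "A", 1), ("F", "A", 1), ("F", "A", 0),
    ("F", "B", 1), ("F", "B", 0), ("F", "B", 0), ("F", "B", 0),
    ("M", "A", 1), ("M", "A", 0), ("M", "A", 0), ("M", "A", 0),
    ("M", "B", 1), ("M", "B", 1), ("M", "B", 1), ("M", "B", 0),
]

def selection_rates(records, key):
    """Selection rate per group, where `key` chooses the grouping."""
    totals, selected = {}, {}
    for g, r, y in records:
        k = key(g, r)
        totals[k] = totals.get(k, 0) + 1
        selected[k] = selected.get(k, 0) + y
    return {k: selected[k] / totals[k] for k in totals}

by_gender = selection_rates(records, lambda g, r: g)       # marginal: gender only
by_race = selection_rates(records, lambda g, r: r)         # marginal: race only
by_both = selection_rates(records, lambda g, r: (g, r))    # intersectional
```

Gender-only and race-only audits both report a 0.5 selection rate for every group, while the intersectional audit exposes a 0.75 vs. 0.25 gap — the pattern that neither marginal metric can see.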

What This Means for Your Research

For AI developers, the healthcare case documented by Lau is the most actionable: training dataset diversity is a measurable, improvable dimension of model quality.
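As a minimal sketch of what "measurable" might mean here (the dataset and population shares below are invented): compare each subgroup's share of the training set against its share of the target population and flag large negative gaps:

```python
from collections import Counter

# Hypothetical clinical training cohort labelled by patient sex.
train_sex = ["M"] * 700 + ["F"] * 300
reference = {"M": 0.5, "F": 0.5}   # assumed target-population shares

counts = Counter(train_sex)
n = sum(counts.values())
dataset_share = {k: counts[k] / n for k in reference}

# Negative gap = subgroup is under-represented relative to the population.
gap = {k: dataset_share[k] - reference[k] for k in reference}
flagged = sorted(k for k, g in gap.items() if g < -0.10)
```

A 10-point threshold is an arbitrary illustration; the point is that the audit is a one-pass computation that can run before any model is trained.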

For feminist scholars, AI offers a productive site for analyzing how technological systems encode and reproduce social power—with the practical opportunity to influence design through critique.

Explore related work through ORAA ResearchBrain.

References (4)

[1] Shah, S.F.H. (2025). Gender Bias in Artificial Intelligence: Empowering Women Through Digital Literacy. Perspective Journal of AI.
[2] Lau, P.L. (2025). Rewriting the Narrative of AI Bias: A Data Feminist Critique of Algorithmic Inequalities in Healthcare. Law, Technology and Humans Journal.
[3] Prasad, R. & Singh, S. (2025). Feminised AI as Digital Dolls: Cyber-Feminist Perspectives on Technology and Gender. Nanoethics.
[4] Aarti & Chauhan, S. (2025). Algorithmic Bias and Gender Representation: Feminist Perspectives on AI-Driven Marketing. International Journal of Research Innovation and Social Science.
