
Predictive Algorithms and Social Inequality: When Code Becomes Governance

Predictive algorithms now sort people into risk categories across criminal justice, welfare, and immigration. Sociological analysis reveals how these systems reproduce existing inequalities while creating new forms of algorithmic governance that operate below the threshold of public awareness.

By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

When a predictive algorithm assigns a "risk score" to a person—a criminal recidivism score, a welfare fraud probability, an immigration threat level—it is not merely processing data. It is making a governance decision: sorting people into categories that determine how state power is applied to them. Those classified as "high risk" receive more surveillance, fewer benefits, and harsher treatment. Those classified as "low risk" are left alone. The algorithm has become, in effect, a governing instrument—one that operates continuously, at scale, and largely outside public deliberation.
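To make the sorting mechanism concrete, here is a minimal sketch of how a continuous risk score collapses into discrete categories that trigger different intensities of intervention. The cutoffs, category names, and treatments below are hypothetical, not drawn from any deployed system.

```python
# Hypothetical illustration: a continuous risk score is collapsed into
# discrete categories, and each category maps to a different intensity
# of state intervention. All cutoffs and treatments here are invented.

def classify(risk_score: float) -> str:
    """Collapse a 0-1 risk score into a discrete category."""
    if risk_score >= 0.7:
        return "high"
    if risk_score >= 0.4:
        return "medium"
    return "low"

# Each category triggers a different treatment (hypothetical mapping).
TREATMENT = {
    "high": "intensive supervision, benefit review, secondary screening",
    "medium": "periodic check-ins",
    "low": "no intervention",
}

for score in (0.82, 0.69, 0.70, 0.12):
    category = classify(score)
    print(f"score={score:.2f} -> {category}: {TREATMENT[category]}")
```

The governance consequence lives in the cutoff: two people whose scores differ by a rounding error (0.69 versus 0.70) receive categorically different treatment.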

The Research Landscape

Sociological Analysis of Algorithmic Bias

Mukabbir (2025) provides a sociological analysis of how predictive technologies (risk assessment tools, predictive policing systems, credit scoring algorithms) reproduce and amplify existing social inequalities. The analysis draws on Bourdieu's concept of "symbolic violence" (the imposition of meaning that is experienced as legitimate and therefore invisible) to argue that algorithmic sorting naturalizes inequality.

The mechanisms are well-documented across domains:

  • Criminal justice: Recidivism prediction models trained on arrest data (which reflects policing patterns, not crime rates) systematically overpredict risk for Black defendants, not because the models are racist in intent, but because the data reflects a criminal justice system that polices Black communities more intensively (the toy simulation after this list illustrates the mechanism).
  • Welfare: Fraud detection algorithms flag benefit recipients for investigation based on patterns correlated with poverty itself (frequent address changes, irregular income)—making poverty a predictor of suspected fraud.
  • Immigration: Risk assessment at borders uses nationality, travel patterns, and social media activity to classify travelers, reproducing racial and national profiling in algorithmic form.
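The arrest-data mechanism in the first bullet can be reproduced in a toy simulation. In the sketch below, two groups have identical true offense rates, but one group's offenses are detected twice as often; a predictor trained on the resulting arrest labels then rates that group as roughly twice as risky. Every rate here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumption: both groups have the same underlying offense rate.
TRUE_OFFENSE_RATE = 0.10
# Assumption: group B is policed twice as intensively, so its offenses
# are twice as likely to produce a recorded arrest (the training label).
DETECTION_RATE = {"A": 0.25, "B": 0.50}

group = rng.choice(["A", "B"], size=n)
offended = rng.random(n) < TRUE_OFFENSE_RATE
detect_p = np.where(group == "A", DETECTION_RATE["A"], DETECTION_RATE["B"])
arrested = offended & (rng.random(n) < detect_p)

# The simplest possible "model" trained on arrest labels: the
# group-conditional arrest rate becomes the group's risk score.
for g in ("A", "B"):
    mask = group == g
    print(f"group {g}: true offense rate = {offended[mask].mean():.3f}, "
          f"arrest-based risk score = {arrested[mask].mean():.3f}")
```

By construction the groups behave identically, yet the arrest-trained score rates group B about twice as risky as group A. The disparity comes entirely from the labels, not the behavior, which is the sense in which a model can be biased without being "racist in intent."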

Biopolitics of Algorithmic Governance

Serttaş (2026) extends the analysis with a Foucauldian framework, examining how algorithmic governance represents a new form of biopolitics—the regulation of populations through knowledge about their bodies, behaviors, and risks. The paper analyzes China's social credit system and Western platform surveillance as parallel (though not identical) systems of algorithmic population management.

The key insight: algorithmic governance does not require visible coercion. It operates through nudges, scores, and classifications that shape behavior without explicit commands. A person who knows their social credit score is being monitored adjusts their behavior—not because they are forced to, but because the score creates incentives and penalties that feel like natural consequences rather than state power.

Security vs. Privacy

Atif and Alamgir (2025) examine AI-powered surveillance specifically, arguing that the security justification for mass surveillance exploits legal ambiguities to normalize privacy invasion. Facial recognition in public spaces, social media monitoring, and biometric databases are framed as necessary for crime prevention, but the evidence for their effectiveness is weaker than commonly assumed, while the privacy costs are borne disproportionately by marginalized communities.

China's Algorithmic Statecraft

Tampubolon (2025) examines China's AI-driven governance model as a case study. China integrates surveillance cameras, social credit scoring, big data analytics, and predictive policing into a comprehensive governance system. The paper analyzes this not as an aberration but as a model that other states (including democracies) are adopting in modified forms, often in the name of "smart city" governance.

Critical Analysis: Claims and Evidence

| Claim | Evidence | Verdict |
| --- | --- | --- |
| Predictive algorithms reproduce existing social inequalities | Mukabbir's cross-domain sociological analysis | ✅ Supported: documented in criminal justice, welfare, and immigration |
| Algorithmic governance operates as a new form of biopolitics | Serttaş's Foucauldian analysis | ⚠️ Uncertain: theoretically compelling, but empirical mechanisms vary by context |
| AI surveillance effectiveness is overstated relative to privacy costs | Atif & Alamgir's critical analysis | ⚠️ Uncertain: effectiveness evidence is mixed; privacy costs are clear |
| China's algorithmic governance model is being adopted elsewhere | Tampubolon's comparative analysis | ✅ Supported: modified adoption documented in multiple countries |

Open Questions

  • Algorithmic accountability: When an algorithm's decision causes harm, who is accountable—the developer, the deploying agency, or the algorithm itself? Current legal frameworks are ambiguous.
  • Democratic oversight: How can algorithmically governed systems be subjected to democratic deliberation when their operations are technically opaque?
  • Resistance: How do individuals and communities resist algorithmic classification? "Data obfuscation" (deliberately generating misleading data), legal challenges, and collective organizing are emerging strategies.
  • Reform vs. abolition: Should the goal be to make predictive algorithms fairer (debiasing, auditing, transparency; a minimal audit sketch follows this list) or to question whether certain domains (criminal justice, welfare) should use predictive algorithms at all?
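As one concrete version of the "auditing" option above, the sketch below computes group-wise false positive rates for a binary risk flag, a standard first step in a disparity audit. The data, group labels, and flag values are hypothetical.

```python
import numpy as np

# Hypothetical audit records: for each person, a group label, whether the
# algorithm flagged them as high risk, and the eventual true outcome.
groups  = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
flagged = np.array([1, 0, 1, 1, 1, 0, 1, 0], dtype=bool)
outcome = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=bool)  # True = event occurred

def false_positive_rate(flagged: np.ndarray, outcome: np.ndarray) -> float:
    """Share of people with no event who were nonetheless flagged."""
    negatives = ~outcome
    return (flagged & negatives).sum() / negatives.sum()

for g in ("A", "B"):
    m = groups == g
    fpr = false_positive_rate(flagged[m], outcome[m])
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

A large gap between the two rates is exactly the kind of disparity an audit surfaces; narrowing it is a technical fix, but, as the papers above argue, it cannot by itself repair the label-generating process.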
What This Means for Your Research

For sociologists, algorithmic governance represents a new terrain for studying power, inequality, and social control. The conceptual tools (Bourdieu, Foucault) are available; the empirical work is still developing.

For policymakers, the reproduction of inequality through "neutral" algorithms means that technical auditing is necessary but insufficient: structural reform of the systems that generate biased data is equally important.

References

[1] Mukabbir, M.N. (2025). Predictive Algorithms and Social Inequality: A Sociological Analysis of Bias, Governance, and Digital Surveillance. British Journal of Multidisciplinary and Social Studies, 4(1).
[2] Serttaş, A. (2026). Biopolitics, Algorithmic Governance, and the Digital Regulation of Bodies. Human Behavior and Emerging Technologies.
[3] Atif, M. & Alamgir, A. (2025). The Illusion of Security: How AI-Powered Surveillance Erodes Privacy, Amplifies Inequality, and Redefines Democracy in the Digital Age. Science & Religion Analysis, 3(4).
[4] Tampubolon, M. (2025). Algorithmic Statecraft: China's AI-Driven Model of Governance and Its Global Impact. International Journal of Social Science and Human Research, 8(5).
