India is conducting one of the world's largest experiments in AI-powered surveillance. The National Automated Facial Recognition System (AFRS), for which the National Crime Records Bureau (NCRB) issued a call for tenders in 2019, has been progressively deployed since and aims to create a centralized database linking CCTV footage and passport photos to enable real-time identification across the country. (The NCRB has formally stated that the AFRS will not be integrated with the Aadhaar biometric database.) State-level systems are already operational: Delhi Police used facial recognition during the 2020 protests, Telangana police have deployed it for routine identification, and airports across the country have implemented DigiYatra facial recognition for boarding.
This deployment occurs within a legal framework that is still under construction. India's Digital Personal Data Protection Act (DPDPA), enacted in 2023 after years of deliberation, provides the first comprehensive data protection legislation for the country. But the Act contains significant gaps when it comes to surveillance—most critically, broad exemptions for government agencies acting in the interest of "sovereignty and integrity of India" and "public order."
The result is a governance asymmetry: India has the technological infrastructure for comprehensive surveillance and the legal infrastructure for data protection, but the two are not aligned. The surveillance apparatus operates under exemptions that the data protection framework provides.
The DPDPA and Facial Recognition
Rangari (2025) examines the relationship between AI facial recognition technologies and the DPDPA specifically. The analysis focuses on how India's data protection framework addresses—or fails to address—the unique privacy challenges posed by biometric surveillance.
The DPDPA establishes general principles for personal data processing: consent, purpose limitation, data minimization, and data subject rights. But facial recognition in public spaces tests each of these principles:
- Consent: Citizens walking through a public space equipped with facial recognition cameras cannot meaningfully consent to biometric data collection. The DPDPA's consent requirement is effectively nullified when data is collected passively and at scale.
- Purpose limitation: Facial recognition data collected for one purpose (airport security) can potentially be used for others (protest monitoring, movement tracking) without the data subject's knowledge or consent—particularly under the government exemption provisions.
- Data minimization: Facial recognition systems are inherently data-maximizing. Their effectiveness increases with the size of the reference database, creating an institutional incentive to collect and retain biometric data from as many individuals as possible.
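The scale incentive in the last bullet can be made concrete with a standard back-of-envelope identification model: enrolling more people raises the chance that a person of interest is in the gallery, but under the usual independence simplification it also raises the chance that an innocent probe false-matches someone. A minimal sketch, where the per-comparison false match rate is an assumed illustrative value rather than a figure from any Indian deployment:

```python
# Hedged sketch: how 1:N face identification statistics scale with gallery
# size N. The per-comparison false match rate (fmr) is an illustrative
# assumption, not a measurement of any deployed system.

def false_match_prob(per_comparison_fmr: float, gallery_size: int) -> float:
    """Probability that a probe of someone NOT enrolled still false-matches
    at least one gallery identity, assuming independent comparisons
    (a standard simplification in biometric error modeling)."""
    return 1.0 - (1.0 - per_comparison_fmr) ** gallery_size

fmr = 1e-6  # assumed per-comparison false match rate
for n in (10_000, 1_000_000, 100_000_000):
    print(f"N={n:>11,}  P(at least one false match) = "
          f"{false_match_prob(fmr, n):.3f}")
```

Even under this optimistic assumed error rate, the false-match probability for a non-enrolled probe climbs from well under a percent at ten thousand identities toward near-certainty at a hundred million, which is why data maximization and reliability pull in opposite directions.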
The Digital Transformation Paradox
Verma, Mittal, and Bala (2025) situate facial recognition within India's broader digital transformation. India stands at a pivotal moment where the rapid integration of AI into governance—through facial recognition, Aadhaar's biometric framework, and predictive analytics—promises administrative efficiency and security but simultaneously raises significant privacy concerns.
The paradox is that India's digital governance infrastructure—Aadhaar, UPI, DigiLocker—has been remarkably successful at financial inclusion and service delivery. The same infrastructure that enables a rural farmer to receive government subsidies directly into a bank account also creates the surveillance capability to track that farmer's location, transactions, and social connections. The tools of inclusion and the tools of surveillance are architecturally identical.
This dual-use character makes the governance challenge qualitatively different from surveillance debates in Western democracies. In the US or EU, surveillance infrastructure is typically distinct from service delivery infrastructure. In India, they are the same infrastructure—which means that constraining surveillance capabilities may also constrain the government's capacity to deliver services to the populations that most need them.
Ethical Dimensions of AI Surveillance
Thumma et al. (2025) examine the ethical considerations in AI-driven surveillance systems. Surveillance technology is increasingly deployed in both private and public spaces with growing capability, raising grave ethical concerns including invasion of privacy, discriminatory algorithms, and insufficient legal accountability mechanisms.
The paper identifies three dimensions of the ethical challenge:
Accuracy and bias: Facial recognition systems perform unevenly across demographic groups, with discriminatory algorithms disproportionately endangering marginalized communities. The paper develops a measurable Ethical Risk Score (ERS) model and finds that facial recognition in policing carries the highest ethical risk score at 77.6%, compared to retail consumer tracking at 45.0%. The broader literature on facial recognition bias (e.g., Buolamwini & Gebru, 2018) has documented higher error rates for darker-skinned individuals and women—populations that are also disproportionately subject to police surveillance.
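The paper's ERS mechanics are not reproduced in detail here, but a weighted-factor score of this general kind can be sketched as follows. The factor names, weights, and input scores below are hypothetical illustrations, not the parameters Thumma et al. actually use:

```python
# Hedged sketch of a weighted ethical-risk scoring model in the spirit of
# the Ethical Risk Score (ERS) discussed by Thumma et al. (2025).
# All factor names, weights, and scores are hypothetical illustrations.

RISK_FACTORS = {                      # assumed factors, weights sum to 1.0
    "privacy_invasion": 0.30,
    "algorithmic_bias": 0.25,
    "legal_accountability_gap": 0.25,
    "scope_of_deployment": 0.20,
}

def ethical_risk_score(scores: dict) -> float:
    """Weighted average of per-factor risk scores, each on a 0-100 scale."""
    return sum(RISK_FACTORS[f] * scores[f] for f in RISK_FACTORS)

# Illustrative inputs for a policing deployment (hypothetical values).
policing = {"privacy_invasion": 85, "algorithmic_bias": 80,
            "legal_accountability_gap": 75, "scope_of_deployment": 70}
print(f"policing ERS = {ethical_risk_score(policing):.2f}%")
```

The design point such a model captures is that domain-specific inputs, not the technology alone, drive the risk ranking: the same recognition pipeline scores differently in policing than in retail because the accountability and harm factors differ.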
Mission creep: Surveillance systems deployed for specific, limited purposes (counterterrorism, missing persons) tend to expand their scope over time. The technical capability exists to use facial recognition for traffic enforcement, tax compliance, social behavior monitoring, and political dissent tracking—and the institutional pressures to expand use typically overwhelm the governance mechanisms designed to constrain it.
Chilling effects: The knowledge that one is being surveilled—even if no action is taken—affects behavior. Citizens who know they may be identified at a protest, a religious gathering, or a political meeting may choose not to attend. The chilling effect on assembly, expression, and association is a form of harm that does not require any individual to be misidentified or punished.
AI in Criminal Justice
Kumar (2025) extends the analysis to AI's role in India's criminal justice system, from surveillance and investigation through prosecution and sentencing. The adoption of AI technologies is accelerating, particularly in response to the growing prevalence of cybercrime and the need for more efficient criminal investigation.
The paper examines several AI applications in Indian criminal justice: facial recognition for suspect identification, predictive policing for patrol allocation, digital forensics for cybercrime investigation, and natural language processing for case analysis. For each application, the analysis identifies a gap between the technology's capability and the legal framework governing its use.
India's criminal procedure code and evidence law were designed for a pre-digital era. The admissibility of AI-generated evidence, the standards for algorithmic identification, and the rights of defendants when AI systems are used against them are largely unaddressed in current legislation. The result is a system where AI tools are deployed operationally while the legal framework for evaluating their reliability, challenging their outputs, and holding their operators accountable remains undeveloped.
Lokeshwari and Saravanan's Broader Framework
Lokeshwari and Saravanan (2025) provide a broader examination of the social and ethical challenges of AI in surveillance and security systems. AI is enabling real-time facial recognition, anomaly detection, behavior prediction, and predictive policing, offering improvements in public safety and operational efficiency. But the paper argues that these capabilities require governance frameworks that do not yet exist at adequate scale.
The analysis identifies a governance deficit across three dimensions: technical governance (standards for system accuracy, testing, and validation), legal governance (legislation defining permissible uses, rights of affected individuals, and accountability mechanisms), and democratic governance (public participation in decisions about where and how surveillance is deployed).
Claims and Evidence
| Claim | Evidence | Verdict |
|---|---|---|
| India's DPDPA adequately regulates facial recognition | Rangari (2025): consent, purpose limitation, and minimization principles are undermined by public deployment and government exemptions | ❌ Refuted |
| AI facial recognition is equally accurate across demographic groups | Thumma et al. (2025): highest ethical risk score (77.6%) for policing applications; discriminatory algorithms disproportionately affect marginalized groups | ❌ Refuted |
| Surveillance infrastructure and service delivery infrastructure can be separated | Verma et al. (2025): in India's digital architecture, they are the same infrastructure | ❌ Refuted |
| India's criminal justice framework is prepared for AI evidence | Kumar (2025): admissibility, reliability standards, and defendant rights are largely unaddressed | ❌ Refuted |
| AI surveillance improves public safety | Lokeshwari & Saravanan (2025): operational capability documented; safety impact not rigorously evaluated | ⚠️ Uncertain |
Implications
India's experience with AI surveillance is consequential far beyond its borders because it is occurring at a scale (1.4 billion people), pace (rapid deployment with minimal regulatory constraint), and institutional context (strong digital infrastructure, developing legal framework, democratic but contested governance) that together make it a test case for the global governance of AI surveillance.
The lesson from the Indian case is that data protection legislation and surveillance deployment must be developed in coordination, not sequentially. Building the surveillance infrastructure first and the legal framework second creates facts on the ground—installed cameras, populated databases, trained algorithms—that are difficult to constrain retroactively. The governance challenge is not to catch up with the technology but to shape its deployment from the outset.