Communication & Media

Deepfakes and the Digital Public Sphere: Can Habermas Survive the Algorithm?

Habermas theorized the public sphere as a space where rational discourse produces democratic legitimacy. Deepfakes, algorithmic curation, and synthetic media now undermine the very conditions that make such discourse possible. Five papers examine whether the Habermasian framework can be adapted to the algorithmic age, or whether it has reached its conceptual limits.

By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

Jürgen Habermas's theory of the public sphere, articulated in his 1962 Strukturwandel der Öffentlichkeit and refined over six decades of scholarly engagement, rests on a deceptively simple premise: democratic legitimacy requires a space where citizens can engage in rational-critical discourse, free from coercion and manipulation, to form public opinion that holds political power accountable. The theory has been criticized, extended, feminized, pluralized, and postcolonialized. But it has retained its grip on communication scholarship because no competing framework offers as coherent an account of why democratic discourse matters and what conditions it requires.

Deepfake technology and algorithmic platforms now challenge not the theory's normative aspirations but its empirical preconditions. Habermas assumed that participants in public discourse could distinguish genuine from fabricated communication: that the sincerity condition of rational discourse, while sometimes violated, was at least verifiable in principle. When a politician's fabricated video is indistinguishable from an authentic one, when algorithmic curation determines which claims reach which audiences, and when synthetic media can be produced at scale with negligible cost, the epistemic infrastructure of the public sphere is not merely damaged. It is structurally transformed.

The Algorithmic Challenge: Beyond the Digital Public Sphere

Viader Guerrero (2024) offers the theoretically richest contribution in this cohort. Rather than asking whether social media platforms are a digital public sphere (a question that has produced a large but inconclusive literature), the paper asks a more fundamental question: what is the political ontology of algorithmic technologies?

The argument proceeds in three stages:

First, the design of social media platforms implicitly encodes a normative model of the public sphere. Platforms that optimize for engagement assume that democratic discourse is a marketplace of attention, where the "best" ideas are those that attract the most interaction. This is not Habermas's model; it is closer to a distortion of it, where virality replaces validity as the criterion of discursive success.

Second, this encoded normative model is not neutral but productive: it does not merely represent public discourse but constitutes it. The algorithm does not passively transmit citizens' opinions; it selects, amplifies, sequences, and contextualizes them in ways that shape what appears to be "the public conversation." The public sphere, on platforms, is an algorithmic construction.

Third, ethically informed design practices that attempt to "fix" platforms by making them more Habermasian (promoting rational discourse, reducing misinformation, increasing deliberative quality) face a structural limitation: they still accept the platform as the medium of the public sphere, thereby accepting the concentration of discursive power in private corporations. The question is not how to make platforms better public spheres but whether the public sphere can survive its platformization.

Habermas Responds: The Constitutional Imperative

Patberg (2025) engages directly with Habermas's own recent analysis of digital platforms. In a 2022 essay, Habermas proposed a "constitutional imperative to maintain a functioning public sphere"โ€”acknowledging that digital platforms have disrupted the conditions for democratic communication but leaving open what policy responses this imperative demands.

Patberg's central argument is that existing proposals for social media reform "put the cart before the horse." To restructure social media in a targeted manner, one first needs to determine the platforms' desired contribution to democracy, which is far from obvious. Social media have a plurality of democratic affordances and can thus be assigned different, sometimes competing roles.

Patberg proposes to do what Habermas himself has not: locate social media within the centre-periphery model of political communication in media society. This model, from Habermas's Between Facts and Norms, distinguishes between formal political institutions (the center) and informal civil society networks (the periphery). The key argument is that social media reforms should primarily aim to empower agents in the periphery: strengthening the capacity of civil society actors to use platforms for democratic participation, rather than focusing primarily on platform regulation or content moderation at the center.

This framing shifts the debate from "How should platforms be controlled?" to "How should platforms serve democratic participation from the ground up?", a question that demands institutional innovation rather than mere regulatory constraint.

The Global South: Public Spheres That Habermas Didn't Imagine

Attia and Ahmed (2025) apply Habermas's framework to a context far from its European origins: TikTok as a space for political discourse in Egypt. The study examines whether and how Egyptian users engage in civic participation through TikTok's short-video format, a platform whose architectural logic (algorithmic recommendation of short, entertaining content) seems designed to suppress rather than enable the kind of rational-critical discourse that Habermas envisions.

Their findings reveal a constrained picture. The study found that while Egyptian users engage with public issues on TikTok mainly out of curiosity or entertainment, their involvement is mostly passive, limited to liking and sharing videos. Very few participants create original content or engage in meaningful discussions. TikTok's algorithm prioritizes entertainment-driven content, which limits its effectiveness as a space for substantive political discourse. The authors invoke Habermas's concept of the "pseudo-public sphere" and characterize TikTok as a "limited public sphere," fostering surface-level discussions that rarely translate into real-world democratic engagement.

The broader implication extends beyond Egypt. In many Global South contexts, the conditions for Habermasian rational discourse (free press, rule of law, protection of speech, independent judiciary) are absent or severely constrained. The public sphere, insofar as it exists, operates through coded, indirect, and aesthetically mediated forms of expression that the liberal deliberative model tends to dismiss as "mere entertainment" or "slacktivism."

Deepfakes: The Sincerity Condition Under Attack

Nagara (2025) brings the deepfake problem into direct dialogue with Habermasian theory. The analysis identifies deepfakes as an attack specifically on the sincerity condition of communicative action: the assumption that speakers mean what they say and that their communication can be attributed to them as genuine expressions of their beliefs.

Habermas's theory of communicative action identifies four validity claims that underpin rational discourse: truth, rightness, sincerity, and comprehensibility. Deepfakes do not merely introduce false claims (which rational discourse can, in principle, refute). They attack sincerity: the very possibility of knowing who is speaking and whether they actually said it. When a deepfake video shows a political leader announcing a policy reversal, the epistemological problem is not "Is this claim true?" (which can be checked against policy documents) but "Did this person actually make this statement?" (which requires authentication infrastructure that does not yet exist at scale).
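The authentication problem has a well-understood cryptographic core: a publisher signs a hash of the media file, and anyone can later check that the file is unaltered and attributable. Below is a minimal sketch of that idea in Python. It is not C2PA; for simplicity it uses HMAC (a shared secret) as a stand-in for the asymmetric signatures (e.g. Ed25519) that real provenance systems use, and the key and media bytes are hypothetical placeholders.

```python
# Toy sketch of media provenance: a publisher signs the SHA-256 digest of a
# clip; a viewer with the verification key can detect any alteration.
# HMAC stands in for a real asymmetric signature scheme -- a simplification
# for illustration only, since HMAC keys must be kept secret by both sides.
import hashlib
import hmac

PUBLISHER_KEY = b"demo-signing-key"  # hypothetical key, for illustration

def sign(media_bytes: bytes) -> str:
    """Publisher computes a tag over the media's SHA-256 digest."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, signature: str) -> bool:
    """Viewer recomputes the tag and compares in constant time."""
    return hmac.compare_digest(sign(media_bytes), signature)

original = b"frame data of the authentic broadcast"
tag = sign(original)

assert verify(original, tag)                     # untouched clip passes
assert not verify(original + b"tampered", tag)   # any edit breaks the check
```

The point of the sketch is the asymmetry Nagara's analysis turns on: verification is cheap once signing infrastructure exists, but absent that infrastructure, "Did this person actually say this?" has no scalable answer.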

Nagara argues that deepfakes therefore pose a qualitatively different threat to the public sphere than traditional misinformation. Misinformation can be corrected through fact-checking. Deepfakes corrode the capacity for correction by undermining trust in all video evidence. Media scholars call this the "liar's dividend": even authentic evidence can be dismissed as a deepfake.

Media Literacy as Partial Defense

Vrabec and Hoti (2025) provide a bibliographic overview of empirical research on deepfake detection strategies within media literacy education. Their review identifies a persistent gap between technical detection capability and practical media literacy:

  • Technical detection (pixel analysis, metadata forensics, AI-based classifiers) achieves variable accuracy depending on deepfake sophistication, and is typically available only to forensic specialists.
  • Perceptual detection (human ability to identify deepfakes by visual inspection) degrades as generation technology improves. Current studies suggest that untrained viewers identify deepfakes at rates only modestly above chance.
  • Epistemic defense (teaching citizens to adopt a critical stance toward all video evidence, to seek corroboration, and to understand the production context of media) shows promise but requires sustained educational investment.

The authors argue that media literacy alone is insufficient: it can raise individual resistance to deepfakes but cannot address the systemic erosion of trust that deepfakes produce. A society where citizens distrust all video evidence is not a well-defended society; it is a paranoid one, and paranoia is as destructive to democratic discourse as credulity.

Claims and Evidence

| Claim | Evidence | Verdict |
| --- | --- | --- |
| Social media platforms encode a normative model of the public sphere | Viader Guerrero (2024): platform design constitutes rather than merely represents discourse | ✅ Supported (theoretical) |
| Habermas has provided a clear policy prescription for the digital public sphere | Patberg (2025): constitutional imperative identified but policy content left open | ⚠️ Uncertain |
| TikTok functions as a public sphere in authoritarian contexts | Attia & Ahmed (2025): mostly passive engagement; TikTok operates as a "limited public sphere" with surface-level discussions | ⚠️ Uncertain (evidence suggests limited rather than substantive engagement) |
| Deepfakes attack the sincerity condition of communicative action | Nagara (2025): theoretical analysis linking deepfakes to Habermasian validity claims | ✅ Supported (theoretical) |
| Media literacy can effectively counter deepfakes | Vrabec & Hoti (2025): individual resistance improved, systemic trust erosion not addressed | ⚠️ Uncertain (partial at best) |

Open Questions

  • Can the public sphere be reconceptualized for non-propositional discourse? Habermas's framework privileges rational argumentation. But much political communication, especially in the Global South and among marginalized communities, operates through affect, aesthetics, and narrative. Does extending the theory to include these forms strengthen or dissolve it?
  • Who should govern the algorithms that structure public discourse? The three interpretations Patberg identifies (regulation, structural reform, participatory governance) are not mutually exclusive. What combination is feasible given current political-economic constraints?
  • Is authentication infrastructure a precondition for the digital public sphere? If deepfakes undermine sincerity, then provenance verification (C2PA, digital signatures, blockchain-based attestation) becomes infrastructure for democracy, not merely for content moderation. What institutions should build and maintain this infrastructure?
  • How do encrypted platforms fit into public sphere theory? WhatsApp, Signal, and Telegram host significant political discourse in closed groups. These are neither "public" (in the Habermasian sense of accessibility) nor "private" (in the sense of personal communication). They represent a new category that existing theory does not accommodate.
  • What would a post-platform public sphere look like? If platformization is the structural problem, what alternatives exist? Federated social networks (Mastodon, Bluesky), public media platforms (BBC, NHK), or entirely new institutional forms?
Implications

The research reviewed here suggests that Habermas's public sphere theory remains normatively indispensable: no alternative framework provides as clear an account of what democratic discourse requires and why it matters. But the theory's empirical preconditions are under systematic assault from deepfakes, algorithmic curation, and platform concentration.

The response cannot be purely theoretical. It requires institutional innovation: authentication infrastructure that makes sincerity verification possible at scale; governance frameworks that give democratic publics meaningful authority over algorithmic design; and media literacy programs that go beyond individual skepticism to build collective epistemic resilience.

The deepfake challenge is not, at bottom, a technology problem. It is a democracy problem that manifests through technology. And addressing it requires the same thing that Habermas has always argued the public sphere requires: institutions that are robust enough to sustain rational discourse even when powerful interests seek to undermine it.

References (5)

[1] Viader Guerrero, J. (2024). Beyond the Digital Public Sphere: Towards a Political Ontology of Algorithmic Technologies. Philosophy & Technology, 37, 789.
[2] Patberg, M. (2025). What is Social Media's Place in Democracy? The Review of Politics, 87(1).
[3] Attia, N. & Ahmed, N. (2025). Is TikTok a Public Sphere for Democracy in Egypt? The Application of Habermas's "Pseudo-Public Sphere." Studies in Media and Communication, 13(2), 7474.
[4] Nagara, M.A. (2025). Deepfake dan Distorsi Ruang Publik Digital: Analisis Teori Public Sphere Jurgen Habermas [Deepfakes and the Distortion of the Digital Public Sphere: An Analysis of Jürgen Habermas's Public Sphere Theory]. Jurnal Kajian Media Digital, 3(1), 52.
[5] Vrabec, N. & Hoti, V. (2025). Strategies for Recognising Deepfake Videos in the Development of Media Literacy. Megatrends and Media Identity, 2025, 70.
