Critical Review · Communication & Media

Deepfakes and the Digital Public Sphere: A Habermasian Analysis

Deepfake technology threatens the conditions for rational public discourse that Habermas identified as essential for democratic societies. Recent analyses examine how synthetic media distorts the public sphere and erodes trust, and what media literacy strategies might offer as a partial defense.

By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

In 1962, Jürgen Habermas described the "public sphere" as a space where private citizens come together to discuss matters of common concern, forming public opinion through rational-critical debate. The conditions for this sphere are demanding: participants must have access to reliable information, the ability to assess its credibility, and the freedom to express and contest views. Deepfake technology, AI-generated synthetic media that can make anyone appear to say or do anything, threatens all three conditions simultaneously.

The Research Landscape

Habermasian Framework for Deepfake Analysis

Nagara (2025), with 1 citation, provides the most direct application of Habermas' public sphere theory to deepfake technology. The analysis identifies three mechanisms through which deepfakes distort the digital public sphere:

Information corruption. The public sphere depends on shared access to reliable information. When deepfake videos can fabricate statements by political leaders, CEOs, or public figures, the information environment becomes unreliable. The problem is not just false information; it is the uncertainty about what is real, which undermines trust in all information.

The "liar's dividend." When deepfakes become common knowledge, real videos can be dismissed as fake. A politician caught on camera making a compromising statement can claim the video is a deepfakeโ€”and the claim is plausible regardless of whether it is true. This "liar's dividend" benefits those who would evade accountability for their actual words and actions.

Participation chilling. If anyone's likeness can be weaponized through deepfake technology (deepfake pornography, fabricated confessions, synthetic harassment), some participants will withdraw from public discourseโ€”particularly women, minorities, and dissidents who are disproportionately targeted. The public sphere shrinks not because access is denied but because participation is punished.

Media Literacy as Defense

Vrabec and Hoti (2025) survey empirical research on strategies for detecting deepfake videos, examining whether media literacy education can equip citizens to identify synthetic media. Their bibliographic review of detection strategies reveals a discouraging finding: human ability to detect deepfakes decreases as the technology improves. Multiple studies in the 2020-2023 period show detection accuracy declining toward chance levels as generation quality increased (readers should consult the original paper for specific figures across studies reviewed).
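
To make the "chance levels" point concrete, here is a minimal sketch of how a single study's detection result could be compared against the 50% guessing baseline. The trial and success counts below are invented for illustration and are not figures from Vrabec and Hoti's review.

```python
# Illustrative only: hypothetical counts, not data from Vrabec & Hoti (2025).
# Asks whether observed human detection accuracy is distinguishable from the
# 50% accuracy expected if viewers were simply guessing.
from scipy.stats import binomtest

n_trials = 200    # hypothetical real-vs-fake judgments made by participants
n_correct = 108   # hypothetical correct judgments (54% observed accuracy)

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"observed accuracy: {n_correct / n_trials:.2%}")
print(f"p-value against chance: {result.pvalue:.3f}")
# A large p-value means the result cannot be distinguished from chance-level
# performance, which is what "accuracy approaching chance" looks like in data.
```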

The implication is that individual detectionโ€”teaching people to "spot the fake"โ€”is an increasingly unreliable strategy. More effective approaches may include:

  • Provenance systems: Cryptographic signatures that verify the origin and integrity of media content (the C2PA standard); a simplified sketch of this idea appears after this list.
  • Platform-level detection: Automated deepfake detection systems deployed by social media platforms before content is distributed.
  • Institutional verification: Strengthening fact-checking organizations and journalism as institutional buffers against synthetic misinformation.
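
To illustrate the provenance idea, the sketch below checks a publisher's signature over a media file's hash. This is not the actual C2PA manifest format or SDK; the verify_provenance function, the Ed25519 choice, and the key handling are simplifying assumptions made for the example.

```python
# Minimal sketch of content provenance, assuming a publisher signs the SHA-256
# hash of a media file with Ed25519 and distributes the signature and public
# key alongside the content. Not the real C2PA data model.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_provenance(media_bytes: bytes, signature: bytes, public_key_bytes: bytes) -> bool:
    """Return True if the signature over the media file's hash is valid."""
    digest = hashlib.sha256(media_bytes).digest()
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, digest)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False
```

The design point matches the caveat raised later under Open Questions: a failed or missing check only means the content cannot be traced to a signing publisher, not that it is fake.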

Disinformation Ecosystems

Balogun, Alao, and Olaniyi (2025), with 2 citations, broaden the analysis from deepfakes as individual artifacts to deepfakes as components of larger disinformation ecosystems. Using the Deepfake Detection Challenge dataset for technical evaluation and social media datasets for network analysis, they document how deepfakes are produced, distributed, and amplified through coordinated networks.

The network analysis reveals that deepfake content follows the same distribution patterns as other disinformation: initial seeding by small numbers of accounts, amplification through bot networks and coordinated sharing, and mainstreaming through engagement-optimizing algorithms that prioritize provocative content regardless of its veracity.
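
The seeding-and-amplification pattern can be sketched with ordinary graph tooling. The edge list and account names below are invented, and the snippet illustrates the general kind of analysis rather than the authors' actual pipeline.

```python
# Toy sketch of disinformation network analysis: build a directed share graph
# and look for accounts whose posts are most heavily reshared (seeding hubs).
# All accounts and edges are hypothetical.
import networkx as nx

# Each edge points from the resharing account to the account it reshared from.
share_edges = [
    ("bot_01", "seed_account"), ("bot_02", "seed_account"),
    ("bot_03", "seed_account"), ("user_a", "bot_01"),
    ("user_b", "bot_02"), ("user_c", "user_a"),
]

G = nx.DiGraph(share_edges)

# High in-degree marks accounts whose content is amplified the most.
hubs = sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True)
print("most-amplified accounts:", hubs[:3])

# Weakly connected components approximate distinct sharing cascades.
print("number of cascades:", nx.number_weakly_connected_components(G))
```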

Societal Implications

Ghariwala (2025), with 3 citations, provides a broader overview of deepfake impacts across domains: politics (fabricated political statements), personal harm (deepfake pornography), financial fraud (voice-cloned CEO authorization of wire transfers), and legal evidence (the admissibility of video evidence when deepfakes exist).

The paper notes an asymmetry: creating a deepfake is becoming cheaper and easier (requiring only a few images and free software), while detecting one requires significant computational resources and expertise. This asymmetry favors attackers over defenders, a pattern familiar from cybersecurity but now extending to the information environment.

Critical Analysis: Claims and Evidence

| Claim | Evidence | Verdict |
| --- | --- | --- |
| Deepfakes threaten the conditions for rational public discourse | Nagara's Habermasian analysis | ✅ Supported: mechanisms clearly articulated |
| Human deepfake detection accuracy is declining as technology improves | Vrabec & Hoti's survey of detection studies | ✅ Supported: accuracy approaching chance levels |
| Deepfakes function as components of coordinated disinformation campaigns | Balogun et al.'s network analysis | ✅ Supported |
| The creation-detection asymmetry favors deepfake creators | Ghariwala's technology analysis | ✅ Supported: creation costs declining faster than detection improves |

Open Questions

  • Provenance at scale: Can cryptographic media provenance (C2PA) be adopted widely enough to create a meaningful trust infrastructure? The challenge is achieving universal adoption; content without provenance is not necessarily fake.
  • Regulation: Should deepfake creation be regulated, or only malicious use? Broad creation bans would restrict legitimate uses (entertainment, education, accessibility), while use-based regulation is difficult to enforce.
  • The global dimension: Deepfake regulation varies by jurisdiction. A deepfake created in one country can cause harm in another with different legal standards. International coordination is needed but politically difficult.
  • AI detection arms race: As detection improves, generation adapts to evade detection. Is this an arms race with no equilibrium, or will one side gain a lasting advantage?
What This Means for Your Research

For media scholars, the Habermasian framework provides a normative vocabulary for articulating why deepfakes matter beyond their technical characteristics: they threaten the democratic conditions for public discourse.

For policymakers, the declining effectiveness of human detection suggests that individual media literacy, while valuable, cannot be the primary defense. Institutional and technical solutions (provenance systems, platform detection) must carry more of the burden.


References (4)

[1] Nagara, M.A. (2025). Deepfake dan Distorsi Ruang Publik Digital: Analisis Teori Public Sphere Jurgen Habermas [Deepfakes and the Distortion of the Digital Public Sphere: An Analysis of Jurgen Habermas' Public Sphere Theory]. Jurnal Komunikasi dan Media Digital, 3(1).
[2] Vrabec, N. & Hoti, V. (2025). Strategies for Recognising Deepfake Videos in the Development of Media Literacy. MM Identity, 2025.
[3] Balogun, A.Y., Alao, A.I., & Olaniyi, O.O. (2025). Disinformation in the digital era: The role of deepfakes, artificial intelligence, and open-source intelligence in shaping public trust and policy responses. Computer Science & IT Research Journal, 6(2), 28-48.
[4] Ghariwala, L. (2025). Impact of Deepfake Technology on Social Media: Detection, Misinformation and Societal Implications. IJRASET.
