Sociology & Political Science

Digital Polarization and Democratic Erosion: What Social Media Actually Does to Politics

Does social media cause political polarization, or merely reveal it? A comparative analysis across Taiwan, South Africa, Pakistan, and India suggests the answer depends on institutional context—and that the relationship between digital platforms and democratic health is more conditional than either optimists or pessimists acknowledge.

By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

The narrative has become familiar to the point of cliché: social media is destroying democracy. Algorithms amplify outrage. Echo chambers calcify partisan identities. Fake news drowns truth. Citizens retreat into ideological silos. Democratic deliberation collapses.

This narrative contains elements of truth, but as a general theory it suffers from a serious empirical problem: it was largely developed from studying the United States and Western Europe, and it does not travel well. When we examine the relationship between social media and political polarization across different democratic contexts—Taiwan, South Africa, Pakistan, India—a more nuanced and considerably more interesting picture emerges. The effects of social media on democracy appear to be deeply conditional on institutional context, media ecosystems, and the quality of democratic norms that predate digitalization.

Comparative Evidence: Same Platforms, Different Outcomes

Weng and Alberts (2026) contribute a valuable comparative analysis using survey data from Taiwan and South Africa. These two cases are theoretically illuminating because they share key features—both are relatively young democracies with high social media penetration—while differing in institutional structure, media ecology, and colonial history.

The study finds that perceived exposure to disinformation is high in both Taiwan and South Africa, and that respondents in both countries link this exposure to worsening polarization and to a weakened role for media in democratic life.

The comparative analysis focuses on three questions: how awareness of false information relates to reliance on different media platforms, how the spread of misinformation contributes to societal polarization, and how media can support democratic processes. The findings speak to misinformation's effects on democracies in Asia and Africa, with broader implications for global democratic health.

The comparative finding challenges mono-causal accounts. Social media does not have a single, universal effect on democracy; its impact is mediated by institutional context. Taiwan and South Africa share high platform penetration and youthful democratic institutions, yet their differing media ecologies, institutional structures, and colonial histories produce distinct dynamics of disinformation exposure and polarization.

The Misinformation Mechanism

Shah (2025) examines misinformation's role in Pakistan's 2018 and 2024 general elections, documenting how fake news was deployed strategically by political actors to set agendas, discredit opponents, and mobilize supporters. The analysis employs a systematic literature review approach, synthesizing documented evidence of how misinformation undermines democratic processes in Pakistan's electoral context.

Several findings warrant attention:

  • Misinformation is not random noise. It is strategically produced and disseminated by political actors with specific goals. While disinformation in 2018 appeared primarily as fake headlines, rumors, and WhatsApp messages, by 2024 campaigns were deploying deepfakes, synthetic audio, and bot networks—a significant escalation in sophistication.
  • Platform design matters. WhatsApp's end-to-end encryption makes misinformation particularly difficult to track and counter in Pakistan. The same encryption that protects political dissidents from state surveillance also protects misinformation from fact-checking.
  • Institutional weakness amplifies vulnerability. In contexts where electoral commissions lack credibility, courts are perceived as partisan, and press freedom is constrained, there are fewer institutional correctives to misinformation. The "marketplace of ideas" depends on a marketplace that actually functions.

Gayen (2025) extends this analysis to West Bengal, India, examining a decade of social media-mediated political discourse (2014–2024) across WhatsApp, X (formerly Twitter), and Instagram—a user base the study estimates at over 50 million. The narrative review examines AI's role in mitigating fake news through natural language processing (NLP), visual content verification, and predictive modeling, focusing on key events including the 2016 Dhulagarh riots, the 2020 public health crisis, and the 2024 Murshidabad violence. The study notes that AI-based detection faces significant challenges including Bengali dialect complexities, algorithmic biases, and ethical concerns around privacy and censorship.
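The NLP detection approach discussed in this literature typically means training a supervised text classifier on labeled examples and flagging new posts that score above a confidence threshold. The sketch below is a minimal illustration of that baseline pattern, not the system any of these studies built: the toy texts, labels, and threshold logic are all invented for demonstration.

```python
# Toy sketch of an NLP misinformation classifier:
# TF-IDF features plus logistic regression, the standard
# supervised baseline the AI-detection literature starts from.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = flagged as misinformation)
texts = [
    "official results certified by the election commission",
    "independent observers confirm the vote count",
    "secret ballot dump reverses the election overnight",
    "leaked audio proves the results were fabricated",
]
labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new post; a deployment would flag it only above a
# confidence threshold, since false positives feed the
# censorship concerns the study raises.
post = "audio leak shows the count was fabricated"
p = model.predict_proba([post])[0, 1]
print(f"misinformation score: {p:.2f}")
```

A lexical baseline like this also makes the study's caveat concrete: a TF-IDF model trained on one language or register has no purchase on Bengali dialect variation, which is why dialect complexity is listed among the detection challenges.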

Can Citizens Fight Back? The Correction Behavior Puzzle

Jordá, Goyanes, and González-Manzano (2025) shift the analytical lens from misinformation production to citizen response. Their study examines what motivates individuals to correct misinformation when they encounter it on social media—a behavior that, if widespread, could serve as a decentralized fact-checking mechanism.

Using panel survey data from Spain (N=570), they examine how using social media for news, engaging in cross-cutting political discussion, and supporting fake news censorship relate to the intention to correct misinformation. The findings indicate that:

  • Social media news use drives correction intent: Individuals who regularly consume news through social media are more likely to develop the motivation to correct misinformation when they encounter it—exposure to the problem generates the impetus to act.
  • Cross-cutting discussion matters: Engaging in political discussions with people who hold different views is positively associated with correction behavior, suggesting that exposure to disagreement may build the civic muscle needed to challenge false claims.
  • Support for censorship correlates with correction: Those who favor restricting fake news also show higher intention to personally correct misinformation—suggesting a coherent orientation toward information quality rather than a passive reliance on institutional solutions.

These findings have important implications for platform design. If the goal is to reduce misinformation's impact, designing for correction may be more effective than designing for detection. But correction is a social behavior governed by social norms, not merely a technical capability that can be engineered.
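The associations the study reports can be pictured as a regression of correction intent on the three predictors. The sketch below uses simulated data: the variable names, effect sizes, and coefficients are illustrative assumptions, not the study's actual model or estimates; only the sample size (N=570) and the direction of the associations come from the summary above.

```python
# Schematic of the survey analysis: regress intention to correct
# misinformation on the three predictors the study examines.
# Data are simulated; coefficients are NOT the study's estimates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 570  # matches the study's sample size

# Hypothetical standardized predictors
sm_news_use = rng.normal(size=n)     # social media news use
cross_cutting = rng.normal(size=n)   # cross-cutting discussion
censor_support = rng.normal(size=n)  # support for fake news censorship

# Simulate correction intent with positive weights on all three,
# mirroring the direction (not the size) of the reported findings
logit = 0.8 * sm_news_use + 0.5 * cross_cutting + 0.4 * censor_support
intent = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([sm_news_use, cross_cutting, censor_support])
model = LogisticRegression().fit(X, intent)
print("estimated coefficients:", model.coef_.round(2))
```

The point of the exercise is interpretive: a positive coefficient on each predictor is what "positively associated with correction intent" means operationally in a panel-survey design like this one.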

Media Watchdogs: Institutional Counterweights to Disinformation

Asthana (2025) examines how media watchdog organizations operated during the world's two largest democratic elections in 2024—India and the United States—providing a comparative lens on how independent oversight institutions respond to misinformation in polarized environments. Through analysis of content published by Newslaundry in India and American Oversight in the US, the study identifies how these organizations addressed misinformation and exposed fake news tactics during highly charged electoral periods.

The analysis reveals both convergences and divergences in watchdog approaches across the two contexts. Despite challenges like rapid digital disinformation spread, social media amplification, and enforcement gaps, these organizations employed collaborative and investigative methods to uphold media accountability. The findings highlight the critical role of independent media watchdogs in enhancing transparency, combating disinformation, and strengthening democratic resilience amidst polarized electoral environments.

The broader implications connect to the distinction—well established in the polarization literature—between issue polarization (disagreement on specific policies, which may be healthy for democracy) and affective polarization (emotional hostility toward opposing groups, which corrodes democratic norms). Social media's algorithmic amplification of emotionally engaging content appears to drive the latter rather than the former. Asthana's work suggests that watchdog organizations represent one institutional mechanism for counteracting disinformation dynamics, though their effectiveness varies across democratic contexts.

Claims and Evidence

| Claim | Evidence | Verdict |
| --- | --- | --- |
| Social media universally increases political polarization | Weng & Alberts (2026): effect is conditional on institutional context; Taiwan vs. South Africa show divergent patterns | ❌ Refuted (as universal claim) |
| Misinformation is strategically deployed by political actors | Shah (2025): documented coordinated campaigns in Pakistan's elections | ✅ Supported |
| AI-generated disinformation adds a qualitatively new threat | Shah (2025): deepfakes and synthetic audio documented in Pakistan's 2024 election; Gayen (2025): AI detection challenges documented in West Bengal | ✅ Supported |
| Media literacy alone can counter misinformation | Jordá et al. (2025): literacy is necessary but not sufficient; social costs deter correction | ⚠️ Uncertain (partially supported) |
| Platform regulation can reduce polarization | No study in this cohort demonstrates effective regulatory intervention | ⚠️ Uncertain |

Open Questions

  • Is there a threshold of institutional quality below which social media becomes democratically destructive? The comparative evidence suggests that strong institutions buffer against social media's polarizing effects. But what counts as "strong enough"? Can this threshold be operationalized and measured?
  • How do encrypted platforms change the misinformation dynamics? WhatsApp's dominance in South Asia, Africa, and Latin America means that the platform where most misinformation circulates is also the platform where it is least visible to researchers, fact-checkers, and regulators.
  • Can algorithmic recommendations be redesigned to reduce affective polarization without reducing engagement? The business model tension is clear: emotionally engaging content drives engagement, and engagement drives revenue. Are there recommendation architectures that can sustain engagement without amplifying hostility?
  • What role does state-sponsored disinformation play relative to organic misinformation? The distinction between strategic disinformation (produced by state or political actors) and organic misinformation (produced by ordinary citizens who genuinely believe false claims) has different implications for intervention.
  • How does generational media consumption affect polarization? Younger cohorts consume news primarily through social media, while older cohorts maintain hybrid consumption patterns. As the social-media-native generation becomes the electoral majority, does the relationship between platforms and polarization change qualitatively?
Implications

The research reviewed here suggests that the relationship between social media and democratic health is not deterministic but conditional. The same platforms that appear to destabilize democracy in one context may strengthen it in another. The determining factor is not the technology itself but the institutional ecosystem into which it is introduced.

This has direct implications for regulation. One-size-fits-all platform regulation—whether the EU's Digital Services Act, India's IT Rules, or proposed US legislation—may be ineffective precisely because the problem it addresses is context-dependent. Effective regulation needs to account for the specific institutional vulnerabilities of different democratic systems.

For researchers, the priority should be comparative work that moves beyond single-country case studies. The field has too many studies of social media in the US and too few studies of social media in the democratic contexts where the stakes are arguably higher: fragile democracies, post-conflict societies, and countries where digital platforms are the primary (sometimes the only) source of political information.

References (5)

[1] Weng, D., & Alberts, K. (2026). Democracy in the Digital Age: Investigating Fake News, Political Polarization, and Media's Role in Taiwan and South Africa. Journal of Developing Societies, 42(1).
[2] Asthana, S. (2025). Fake News and Its Influence on Political Polarization. SouthSight, 2025, 28925.
[3] Jordá, B., Goyanes, M., & González-Manzano, L. (2025). Curtailing the Spread of Fake News: Antecedents of Citizens' Intention to Correct Misinformation on Social Media. Communications, 50(2).
[4] Shah, S. (2025). Setting the Agenda with Lies: Misinformation and the Undermining of Democracy in Pakistan. Academy Journal, 4(4), 898.
[5] Gayen, M.H. (2025). AI's Role in Regulating Fake News and Misinformation on Social Media in West Bengal, 2014–2024. Indian Journal of Preventive Medicine, 4(1).
