Communication & Media

Computational Propaganda: When States Deploy Bots to Win Hearts and Minds

Over 80 state actors have conducted information operations in the past decade, deploying automated accounts to manipulate public opinion across social media platforms. Four papers examine how computational propaganda works, whether bots are effective, and why detection remains a cat-and-mouse game.

By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

Propaganda is ancient; computational propaganda is new. The combination of automated accounts (bots), algorithmic amplification, and data-driven targeting has created a form of political manipulation that operates at a scale and speed that traditional propaganda could never achieve. A state actor can deploy thousands of automated accounts simultaneously, each posting, retweeting, and commenting in coordinated patterns designed to create the appearance of organic public opinion.

The term "computational propaganda"โ€”coined by the Oxford Internet Institute's Computational Propaganda Projectโ€”captures this fusion of computational technology and propagandistic intent. It encompasses the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.

Theoretical Foundations

Pote (2024) provides a critical literature review of computational propaganda theory and bot detection systems. The paper traces how the classical definition of propaganda, "the management of collective attitudes by manipulation of significant symbols," has evolved into computational propaganda in the digital medium.

Drawing on the broader computational propaganda literature, the review traces an evolution across three recognizable generations:

First generation (2010-2016): Simple automated accounts posting high volumes of content. Detectable through basic metrics such as posting frequency, account age, and follower ratios (a heuristic sketch follows this list). Effective through sheer volume rather than sophistication.

Second generation (2016-2020): Hybrid operations combining automated accounts with human operators. More sophisticated targeting based on audience analysis. Harder to detect because human involvement masks automated patterns.

Third generation (2020-present): AI-generated content (text, images, audio, video) distributed through networks that combine automated and authentic accounts. Detection is challenging because both the content and the distribution patterns appear organic.
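
To make the first-generation detection signals concrete, here is a minimal Python sketch of the kind of threshold heuristic early detectors relied on. The field names and thresholds are illustrative assumptions, not values from Pote (2024) or any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float  # average posting frequency
    age_days: int         # days since account creation
    followers: int
    following: int

def bot_score(acct: Account) -> float:
    """Score an account 0-1 on first-generation bot signals.

    Thresholds are illustrative; real systems tuned them per platform.
    """
    signals = [
        acct.posts_per_day > 50,                       # inhuman posting volume
        acct.age_days < 30,                            # newly created account
        acct.following > 10 * max(acct.followers, 1),  # lopsided follow ratio
    ]
    return sum(signals) / len(signals)

# A week-old account posting 120 times a day trips every signal.
suspect = Account(posts_per_day=120, age_days=7, followers=12, following=900)
print(f"bot score: {bot_score(suspect):.2f}")  # -> 1.00
```

Second- and third-generation operations defeat exactly these cues: human operators keep posting volume plausible, while purchased or aged accounts break the account-age and follower-ratio heuristics.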

Bot Effectiveness

Polychronis and Kogan (2025) ask a question that is empirically important but rarely studied: do bots actually work? State-sponsored information operations (SSIOs) have been perpetrated by over 80 state actors in the last decade, but their effectiveness at changing attitudes or behavior is less well established than their existence.

Using sequence-based clustering and advanced linear modeling, the study investigates the relationship between agent automation, role, and network characteristics and how much success those agents achieve. The findings are striking: automated agents perform worse across every success metric compared to human agents, and they play a smaller, supporting role to the primarily human SSIO workforce. Furthermore, the extent to which agents engage in amplifying-centric versus producing-centric roles is the biggest determinant of their success, highlighting that the social role an agent plays matters more than whether it is automated. This suggests that the threat of bot-driven propaganda may be more about volume and distraction than genuine persuasive impact.
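
The paper's exact pipeline isn't reproduced here, but the modeling idea can be sketched: represent each agent by its automation status and its mix of amplifying versus producing actions, then fit a linear model of a success metric on those features. Everything below, from the synthetic data to the coefficient values, is an illustrative assumption built to echo the direction of the paper's findings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

# Per-agent features (synthetic): is the agent automated, and what share
# of its actions are amplification (reposts/shares) vs. original content?
is_automated = rng.integers(0, 2, n)
amplify_share = rng.uniform(0, 1, n)

# Synthetic success metric: role mix dominates, automation carries a
# smaller penalty (assumed effect sizes, not the paper's estimates).
engagement = 5.0 * amplify_share - 1.0 * is_automated + rng.normal(0, 1, n)

X = np.column_stack([is_automated, amplify_share])
model = LinearRegression().fit(X, engagement)

print("coef (automated):     %.2f" % model.coef_[0])  # ~ -1.0
print("coef (amplify share): %.2f" % model.coef_[1])  # ~ +5.0
```

In this toy setup the role coefficient dwarfs the automation coefficient, mirroring the finding that what an agent does matters more than whether it is automated; the real study derives its features from observed behavior sequences rather than simulated ones.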

Pakistan: Domestic Computational Propaganda

Samee, Ansari, and Butt (2025) examine computational propaganda in Pakistan's domestic political context. The research compares strategies adopted by Pakistan's major political parties (PTI, PML-N, and PPP) in using platform X for political narrative creation.

The Pakistan case illustrates that computational propaganda is not exclusively a tool of state actors against foreign targets; it is increasingly used by domestic political actors against domestic audiences. Political parties deploy bot networks, coordinate hashtag campaigns, and use automated accounts to create the appearance of popular support for their positions and popular opposition to their rivals.
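
One common way researchers surface this kind of coordination is co-activity analysis: flag pairs of accounts that repeatedly push the same hashtag within seconds of each other. The sketch below is a generic illustration of that idea, not the method of Samee et al.; the sample data and the 10-second window are assumptions.

```python
from collections import defaultdict
from itertools import combinations

# (account, hashtag, unix_timestamp) records; synthetic example data.
posts = [
    ("acct_a", "#RallyNow", 1000), ("acct_b", "#RallyNow", 1003),
    ("acct_c", "#RallyNow", 1005), ("acct_a", "#RallyNow", 2000),
    ("acct_b", "#RallyNow", 2002), ("acct_d", "#Weather",  1500),
]

WINDOW = 10  # seconds: posts this close together count as co-activity

def coordinated_pairs(posts, window=WINDOW, min_hits=2):
    """Return account pairs that co-posted the same hashtag within
    `window` seconds on at least `min_hits` separate occasions."""
    by_tag = defaultdict(list)
    for acct, tag, ts in posts:
        by_tag[tag].append((acct, ts))
    hits = defaultdict(int)
    for tag, events in by_tag.items():
        for (a1, t1), (a2, t2) in combinations(sorted(events, key=lambda e: e[1]), 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                hits[tuple(sorted((a1, a2)))] += 1
    return {pair: n for pair, n in hits.items() if n >= min_hits}

print(coordinated_pairs(posts))  # {('acct_a', 'acct_b'): 2}
```

A single co-posting can be coincidence, which is why the sketch requires repeated hits; production systems add further evidence such as shared link targets, identical text, or synchronized account creation dates.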

AI and National Security

Kirill and Shapovalov (2026) examine national security threats associated with AI use in political communication. Key risks include the concentration of digital power among technology corporations, algorithmic manipulation of public opinion, and the creation of information bubbles.

The national security framing connects computational propaganda to broader concerns about AI governance: if AI systems can generate convincing political content at scale and distribute it through automated networks, the barrier to entry for influence operations drops dramatically. States, corporations, and even well-resourced individuals can now conduct information operations that once required the resources of an intelligence agency.

Claims and Evidence

| Claim | Evidence | Verdict |
|---|---|---|
| Over 80 state actors have conducted information operations | Polychronis & Kogan (2025): documented across multiple platforms | ✅ Supported |
| Computational propaganda effectively changes public opinion | Polychronis & Kogan (2025): effectiveness is less established than existence | ⚠️ Uncertain |
| Bot detection can keep pace with bot sophistication | Pote (2024): AI-generated content and hybrid operations challenge detection | ⚠️ Uncertain (cat-and-mouse dynamic) |
| Computational propaganda is used only by state actors against foreign targets | Samee et al. (2025): domestic political parties use it against domestic audiences | ❌ Refuted |

Implications

Computational propaganda represents a structural threat to democratic discourse, not because it is always effective at changing minds, but because it degrades the information environment within which democratic deliberation occurs. When citizens cannot distinguish organic public opinion from manufactured consent, trust in all political communication declines. The defense against computational propaganda requires a combination of technical detection, platform accountability, media literacy, and, perhaps paradoxically, the same AI technologies that enable the propaganda in the first place.

References (4)

[1] Pote, M. (2024). Computational Propaganda Theory and Bot Detection System: Critical Literature Review. arXiv:2404.05240.
[2] Polychronis, C., & Kogan, M. (2025). Do Bots Do It Better? Analyzing the Effectiveness of Automated Agents in State-Sponsored Information Operations. Proceedings of the International AAAI Conference on Web and Social Media, 19, 1574-1585.
[3] Samee, A., Ansari, N.A., & Butt, R. (2025). Computational Propaganda in Pakistan: Political Manipulation through X. JRSR, 4, d140.
[4] Kirill, G.M. & Shapovalov, Y.M. (2026). Artificial Intelligence in Political Communication: Challenges for Ensuring National Security. PEP, 2025(12), 10.
