The COVID-19 pandemic made the relationship between science and the public visible in ways it had not been for decades. Scientists became daily presences in media, their recommendations shaping policies that affected billions of lives. Public trust in science—never as universal as scientists assumed—became a matter of life and death: vaccine acceptance, mask compliance, and social distancing adherence all correlated with trust in scientific expertise.
But the pandemic also revealed that "trust in science" is not a simple variable that people either have or lack. It is a complex, multidimensional construct that varies across scientific domains, institutional levels, and cultural contexts. People may trust climate science but distrust nutrition science, trust academic researchers but distrust pharmaceutical companies, or trust scientists' competence while questioning their motives. Understanding these dimensions is essential for science communication that builds rather than erodes trust.
Measuring Trust: A Multidimensional Scale
Reif, Taddicken, and Guenther (2024) make the most significant methodological contribution by developing and validating the Public Trust in Science (PuTruS) Scale. The scale recognizes trust in science as a multidimensional perception that operates across different levels of trust objects—individual scientists, scientific institutions, and science as a system of knowledge production.
The scale integrates prior research on trust dimensions across these levels, capturing competence, integrity, and benevolence alongside transparency and openness to dialogue. By distinguishing these dimensions and levels, the PuTruS Scale enables researchers to identify precisely where trust is strong and where it is fragile. A person who trusts scientists' expertise but doubts their benevolence presents a very different communication challenge than one who doubts scientific competence itself.
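The practical payoff of a multidimensional scale is that it yields a trust *profile* rather than a single score. A minimal sketch, using hypothetical items and an invented item-to-dimension mapping (the actual PuTruS items and scoring differ):

```python
from statistics import mean

# Hypothetical item-to-dimension mapping; the real PuTruS instrument
# uses its own validated items and may weight or reverse-score them.
ITEM_MAP = {
    "q1": "competence",  "q2": "competence",
    "q3": "integrity",   "q4": "integrity",
    "q5": "benevolence", "q6": "benevolence",
    "q7": "transparency","q8": "dialogue",
}

def score_profile(responses: dict[str, int]) -> dict[str, float]:
    """Average the Likert ratings (e.g. 1-5) within each trust dimension,
    producing a per-dimension profile instead of one aggregate score."""
    by_dim: dict[str, list[int]] = {}
    for item, rating in responses.items():
        by_dim.setdefault(ITEM_MAP[item], []).append(rating)
    return {dim: mean(vals) for dim, vals in by_dim.items()}

profile = score_profile({"q1": 5, "q2": 4, "q3": 2, "q4": 2,
                         "q5": 3, "q6": 3, "q7": 4, "q8": 4})
# This respondent rates competence highly but integrity poorly,
# a pattern a single averaged "trust" score would mask.
```

The profile makes the diagnostic claim in the text concrete: two respondents with identical overall means can call for entirely different communication strategies.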
Trust as a Network: The Centrality of Sincerity
De Almeida and Pilati (2025) reconceptualize trust in scientists as a network rather than a latent variable, using network psychometrics on responses from 71,922 individuals across 68 countries. This is the largest-scale study of trust in science to date, and the network approach reveals structural relationships between trust components that traditional factor analysis cannot capture.
The most striking finding: sincerity emerges as the most central node in the trust network. Not competence, not expertise, not institutional prestige—sincerity. When people perceive scientists as genuinely motivated by truth-seeking rather than career advancement, funding acquisition, or political alignment, other dimensions of trust strengthen. When sincerity is doubted, no amount of demonstrated competence compensates.
This finding has profound implications for science communication. The traditional approach—demonstrating expertise through credentials and data—addresses a peripheral rather than central node of the trust network. Communicating sincerity—transparency about uncertainty, acknowledgment of limitations, honesty about conflicts of interest, and visible commitment to truth over advocacy—may be more effective at building trust than communicating authority.
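The centrality claim can be illustrated with a toy network. In network psychometrics, a common centrality index is node strength: the sum of the absolute weights of a node's edges. The edge weights below are purely illustrative, not the values estimated by de Almeida and Pilati:

```python
# Toy trust network: nodes are trust dimensions, weighted edges stand in
# for estimated associations between them. Weights are invented for
# illustration only.
EDGES = {
    ("sincerity", "integrity"):   0.45,
    ("sincerity", "benevolence"): 0.40,
    ("sincerity", "competence"):  0.30,
    ("sincerity", "openness"):    0.25,
    ("competence", "integrity"):  0.20,
    ("benevolence", "openness"):  0.15,
}

def strength_centrality(edges: dict[tuple[str, str], float]) -> dict[str, float]:
    """Node strength = sum of absolute weights of incident edges."""
    strength: dict[str, float] = {}
    for (a, b), w in edges.items():
        strength[a] = strength.get(a, 0.0) + abs(w)
        strength[b] = strength.get(b, 0.0) + abs(w)
    return strength

central = strength_centrality(EDGES)
most_central = max(central, key=central.get)  # "sincerity" in this toy network
```

A highly central node is one whose change propagates most widely through the network, which is why targeting sincerity, rather than a peripheral node like demonstrated competence, is predicted to move the whole trust structure.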
Audience-Centered Quality
Taddicken (2025) defines what constitutes "good science communication" for fostering public trust through an audience-centered quality perspective. The essay argues that because scientific information reaches the public through intermediaries—journalists, social media, influencers, institutional press offices—communication design must translate complex knowledge while managing ambiguity and uncertainty.
The audience-centered perspective shifts the evaluation criterion from accuracy (is the communication scientifically correct?) to quality (does the communication serve the audience's epistemic needs?). Accurate communication can still fail if it is inaccessible, dismissive of legitimate concerns, or unable to convey uncertainty without undermining confidence. Quality science communication balances precision with accessibility, certainty with honesty about limitations, and authority with humility.
Evolving Models of Science Communication
Tsurkan (2025) provides historical and theoretical context by examining the three dominant models of science communication: deficit, dialogue, and participation. The deficit model assumes the public has a knowledge gap that scientists must fill through one-way transmission. The dialogue model assumes two-way exchange between scientists and publics. The participation model assumes publics should actively shape research agendas and knowledge production.
Each model implies different assumptions about the social position of scientists: the deficit model positions scientists as teachers, the dialogue model as conversation partners, and the participation model as collaborators. Tsurkan argues that contemporary science communication requires elements of all three models, deployed strategically according to context: deficit approaches for technical information transfer, dialogue for policy-relevant science where values and evidence intersect, and participation for research that directly affects communities.
Dimensions of Trust in Science
| Dimension | Definition | Central to Trust? | Communication Strategy |
|---|---|---|---|
| Expertise | Scientists have knowledge to produce reliable findings | Moderate (Reif et al.) | Credentials, track record, data presentation |
| Integrity | Scientists follow ethical standards and report honestly | High | Transparency about methods, data sharing |
| Benevolence | Scientists act in public interest | High | Visible public service, pro-bono engagement |
| Sincerity | Scientists are genuinely motivated by truth | Highest (de Almeida & Pilati) | Acknowledge uncertainty, disclose conflicts, show intellectual humility |
| Openness | Science is transparent and accessible | Moderate | Open access, plain language, engagement |
What To Watch
The rise of AI-generated scientific content—from chatbot-provided health information to AI-authored preprints—introduces a new variable into the trust equation. If sincerity is the central node of trust in scientists, what happens when the "scientist" communicating is an AI system? Early evidence suggests that audiences apply different trust heuristics to AI-generated scientific information, relying more heavily on source institution credibility and less on perceived personal sincerity. This may create an environment where AI-mediated science communication is trusted more by audiences who distrust individual scientists (seeing AI as objective) and less by audiences who value the human sincerity that AI cannot authentically perform.