Decades of psychological research suggest that authoritarian leaders and their admirers consistently have one thing in common: they twist the truth.
To accomplish this, such leaders frequently follow a common playbook: attacking truth-tellers and truth-telling institutions as a prelude to controlling information infrastructure and decimating scientific programs. Authoritarian leaders also spread unambiguous disinformation with harmful consequences, such as the conspiracy theory that the 2020 election was stolen, which led to a violent and deadly insurrection at the U.S. Capitol on Jan. 6, 2021.
To manipulate the truth, authoritarian leaders cultivate those who, willingly or inadvertently, defend the authoritarian agenda by undermining research on disinformation and misinformation. This undermining typically takes the form of a predictable variation on one of the following arguments: “misinformation cannot properly be defined because facts are subjective or uncertain,” “misinformation is a distraction because the real problem is something else,” and “fighting misinformation amounts to bias and censorship.”
This is why facts matter now more than ever, and claiming otherwise endangers public health and democracy. Too often, we excuse misinformation as a matter of opinion or free speech and subsequently face dire consequences.
An unvaccinated child with no underlying health conditions recently died in Texas from measles. Measles doesn’t care about your opinion, and misinformation about vaccination can kill you. Luckily, research suggests we can reliably address misinformation without any kind of censorship.
A common critique of systems that hold liars to account, such as the fact-checking industry, is that they are biased. In January, Mark Zuckerberg announced that he is ending Meta’s third-party fact-checking program in the U.S. because he determined that third-party moderators are “too politically biased.” Although any individual fact-checker may well be biased, empirical research strongly refutes the idea that the fact-checking industry as a whole is. A 2024 study of members of the U.S. Congress found that Republicans and Democrats are fact-checked at equal rates and that what predicts being fact-checked is not partisanship but prominence: the more prominent the politician, the more likely they are to be fact-checked. Moreover, much research shows that independent fact-checkers’ ratings of claims correlate strongly with one another. One could take that correlation as evidence that fact-checkers all share the same ideological bias. But when you ask politically balanced crowds of ordinary people to rate claims, the crowdsourcing technique that underlies initiatives such as community notes, their ratings correlate highly with those of expert fact-checkers. When large, diverse groups of regular people and experts converge on the same answer, there is a ground truth: some claims are simply false or misleading.
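To make that convergence concrete, here is a minimal, purely illustrative sketch in Python. It uses simulated data, not ratings from any actual study: the number of claims, the rating scale, the noise levels, and the crowd size are all assumptions chosen for illustration. The point it demonstrates is simply that individual lay raters are noisy, but averaging many of them tracks the same underlying veracity that expert fact-checkers rate.

```python
# Illustrative simulation (hypothetical data): why a balanced crowd's average
# rating can correlate highly with expert fact-checkers' ratings.
import numpy as np

rng = np.random.default_rng(0)

n_claims = 200
# Hypothetical underlying veracity of each claim (1 = clearly false, 7 = clearly true).
truth = rng.uniform(1, 7, n_claims)

# Expert fact-checkers: relatively low-noise ratings around the underlying veracity.
experts = np.clip(truth + rng.normal(0, 0.5, n_claims), 1, 7)

# Crowd: many noisier individual raters per claim, then average them.
crowd_size = 25
crowd = np.clip(
    truth[:, None] + rng.normal(0, 2.0, (n_claims, crowd_size)), 1, 7
).mean(axis=1)

# Pearson correlation between the crowd average and the expert ratings.
r = np.corrcoef(crowd, experts)[0, 1]
print(f"crowd-expert correlation: r = {r:.2f}")
```

With enough raters per claim, the crowd average and the expert ratings land on essentially the same verdicts, which is the kind of convergence the studies describe.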
The perception of bias arises because far-right echo chambers objectively spread more misinformation and are therefore affected more, even under politically neutral platform policies. One reason is that conservatives are more susceptible to misinformation, as a recent study of over 65,000 people across 24 countries concluded. Why is this the case? One factor is that many conservatives are bombarded with disinformation from far-right elites.
Unfortunately, community notes initiatives alone have proven insufficient to reduce the virality of misinformation on social media. Community notes can’t fix this problem because bad actors cannot be counted on to prefer truth over a chance to amplify their own ideology. We therefore need multiple independent means of holding social media companies accountable for their role in sharing misinformation.
A related objection to treating misinformation as an objective problem is that, because science comes with uncertainty, we cannot really say what is likely to be true or false. The weaponization of uncertainty is a classic disinformation technique, exhibited and perhaps perfected by the tobacco industry and later adopted by the fossil fuel industry. Leaders in both sectors figured out that when you persistently cast doubt on the scientific consensus, people become less inclined to act on an issue. Indeed, the tobacco industry was found guilty of deliberately misleading the public about the health consequences of smoking for over 50 years.
This trend can also be seen in some inconsistent cultural attitudes toward facts and misinformation. Conservative pundits critique postmodern rhetoric on the grounds that the left unfairly depicts facts as subjective. Jordan Peterson, for example, absurdly claims that universities are dominated by biased “postmodern neo-Marxists” and expresses his disdain for using people’s preferred gender pronouns. Yet the same thinkers who maintain that facts cannot be subjective turn around and say that what counts as misinformation is entirely subjective. Peterson claims that misinformation and disinformation are “Soviet era” terms meaning “opinions that run contrary to mine.” Elon Musk has similarly echoed that “one person’s misinformation is another person’s information.” But empirical research strongly refutes the idea that misinformation is simply in the eye of the beholder.
We can clearly determine what counts as misinformation based on the presence of markers of falsehood and manipulation. For instance, highly negative emotional language is a common marker of falsehood because misinformation frequently exploits outrage. We often don’t even need to know the context, or have specific knowledge of the topic, because centuries-old manipulation techniques (including falsely presenting opinion as fact, scapegoating groups, leaving out crucial context, impersonating experts, logical fallacies, and conspiracy theories) can help us identify potential bullsh-t.
The presence of several such cues leads to an accuracy rate of over 80% in correctly classifying a claim as misinformation (across a wide variety of datasets and sources). This is perhaps unsurprising given that lies and truthful communication differ systematically in their characteristics, and people can learn to identify lies with good accuracy using just a few simple cues.
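As a rough illustration of how cue-based classification works, here is a minimal Python sketch that flags a claim when several manipulation cues co-occur. The specific cue list, keyword patterns, and threshold below are simplified stand-ins invented for this example; the research cited above relies on far richer, validated features and labeled datasets to reach the reported accuracy.

```python
# Toy cue-based flagging of potentially misleading claims (illustrative only).
import re

# Hypothetical, highly simplified patterns for a few classic manipulation techniques.
CUES = {
    "outrage_language": r"\b(outrage(?:ous)?|disgusting|traitor|evil|destroy(?:ing)?)\b",
    "scapegoating": r"\b(they|them|those people) (are|is) (to blame|behind|responsible)\b",
    "fake_authority": r"\b(doctors|scientists|experts) (don't want you to know|are hiding)\b",
    "conspiracy_framing": r"\b(cover[- ]?up|wake up|secret plan)\b",
    "opinion_as_fact": r"\b(everyone knows|it's obvious that|undeniably)\b",
}

def manipulation_cues(text: str) -> list[str]:
    """Return the names of manipulation cues detected in a claim."""
    text = text.lower()
    return [name for name, pattern in CUES.items() if re.search(pattern, text)]

def looks_like_misinformation(text: str, threshold: int = 2) -> bool:
    """Flag a claim when at least `threshold` independent cues co-occur."""
    return len(manipulation_cues(text)) >= threshold

claim = "Wake up! Doctors are hiding the truth and those people are to blame."
print(manipulation_cues(claim), looks_like_misinformation(claim))
```

The design idea is the one the research describes: no single cue proves a claim false, but the co-occurrence of several independent markers of manipulation is a strong signal, regardless of the topic.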
So why would people argue that misinformation isn’t something you can define non-politically? One reason: if you cannot define a concept, you cannot act on its consequences. And the consequences of misinformation are brutal. Another way to downplay the misinformation problem is to perform a sleight of hand and suggest that something else is the real or bigger problem. Distrust. Polarization. Economic inequality. You name it. Misinformation becomes the symptom, not the disease.
But this way of thinking misunderstands how social forces interact in the real world. Misinformation has both direct and indirect consequences for society. Misinformation can quite literally kill. Patients have refused life-saving treatment because they believed anti-vaccination conspiracy theories. Economists have isolated the causal impact of misinformation broadcasts on death rates during a genocide. Fake rumors on social media have led to mob lynchings in India and nationwide riots in the UK.
At other times, the impact of misinformation is more subtle and indirect, building over time by lowering trust in the electoral process, mainstream media, and official institutions. Misinformation harms society through a vicious cycle: misinformation breeds distrust, and existing distrust leads people to seek out, believe, and share more misinformation.
So what can we do? There are many evidence-based solutions to misinformation that have absolutely nothing to do with censorship. In fact, the “censorship industrial complex” narrative is a fantasy.
Meanwhile, evidence-based interventions such as psychological inoculation, or “prebunking,” have proven effective in empowering audiences around the world to identify manipulation. Psychological inoculation rests on a vaccine analogy: exposing people to a weakened dose of the techniques used to produce falsehoods, and deconstructing and refuting them in advance, helps cultivate greater cognitive immunity to future misinformation. Such inoculations have been deployed as ads on social media to prebunk harmful content and have been scaled by public health authorities and tech companies to reach many millions of people globally.
Empowering people to spot manipulation, irrespective of the issue, is the opposite of censorship. It’s not about taking content down or reducing its visibility. Prebunking, debunking, and fact-checking are forms of “counter-speech”: they allow more speech to happen, and critical thinking helps people make up their own minds about specific issues. Democracy doesn’t function well when speech is exploited and manipulated to advance falsehoods. But fighting misinformation doesn’t require censorship. Quite the opposite: it requires that we all speak out against it.
In the words of Supreme Court Justice Louis Brandeis: “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”