Misinformation poses a smaller threat to democracy than you might think
A new article in Nature by prominent misinformation researchers is badly argued, misunderstands basic issues, and misrepresents critics.
I have written many highly critical essays on how the topic of “misinformation” is approached within social science and popular discourse. Briefly, I think:
Misinformation research often rests on shaky theoretical and empirical foundations.
Anti-misinformation initiatives are often politically biased.
If misinformation is defined narrowly (i.e. as very clear-cut falsehoods and fabrications), conventional wisdom greatly exaggerates its prevalence and harms.
If misinformation is defined expansively (i.e. including any content that is somehow misleading), misinformation researchers tend to be naïve about the challenges of detecting it reliably and impartially.
This week, Nature, one of the world’s top scientific journals, published a short commentary by a group of leading misinformation researchers. Titled “Misinformation poses a bigger threat to democracy than you might think”, it addresses three criticisms of misinformation research:
“[1] that the threat has been overblown; [2] that classifying information as false is generally problematic because the truth is difficult to determine; and [3] that countermeasures might violate democratic principles because people have a right to believe and express what they want.”
They argue that these criticisms are “based on selective reading of the available evidence”. They also imply that certain critics are deploying the same tactics “used in the decades-long campaigns led by the tobacco and fossil-fuel industries to delay regulation and mitigative action.”
I respect the commentary's authors. Moreover, I share their commitment to the importance of science, evidence-based public discourse, and democracy. However, the commentary is badly argued, misunderstands basic issues, and misrepresents critics' views.
Issue #1: How much alarm is warranted?
Nobody has ever claimed that misinformation has zero harmful consequences. Rather, critics of alarmist narratives about misinformation correctly identify a profound mismatch between (a) established scientific findings about its prevalence and impact and (b) the sheer amount of panic about it (see here, here, here, here, here, here, and here).
Misinformation is routinely identified as one of the world’s most (if not the most) serious threats and a leading cause of major sociopolitical events and trends. Moreover, this alarmism appears to be shared by ordinary citizens. The authors of this commentary deny that this panic is overblown. In fact, their title—“Misinformation poses a bigger threat to democracy than you might think”—implies they think the average reader underestimates the harms of misinformation.
Given this, it is surprising that they provide almost no evidence that misinformation has any effect on people’s real-world behaviour. Instead, they primarily cite other reviews they have written, correlational evidence, and a handful of experimental studies that (interpreted charitably) support very weak effects. (For example, they cite a study showing that “correcting election-fraud misinformation in the United States has been shown to positively affect trust in electoral processes”. They do not mention that the study is based on survey results, the effects are very small, there was no change in voting behaviour, and the corrections had no effect on Republicans.)
More generally, the article implicitly treats the prevalence of misperceptions (false or unsupported beliefs) as evidence of misinformation’s impact. As many scholars have pointed out (e.g., here, here, here, and here), this is misleading. For example, Republican election denial (a misperception) has complex psychological, social, and political causes, including general psychological biases, intense political polarisation, and institutional distrust. Misinformation undoubtedly plays a role, but estimating the nature and magnitude of that role is complicated. This means the presence of alarmingly popular misperceptions does not vindicate alarmism about misinformation.
Admittedly, when it comes to the harms of misinformation, one should not equate the absence of evidence with evidence of absence. However, given that absence of evidence, one cannot claim that existing alarmism about misinformation is supported by scientific research, and it is wrong to accuse critics of alarmist narratives of using the propaganda tactics of tobacco companies. Moreover, it is especially odd to make this accusation in an article endorsing “the promotion of social norms… such as not making claims without evidence.”
Issue #2: Strawmanning
The authors of the commentary criticise scholars who raise issues about how the concept of misinformation is defined and operationalised by misinformation researchers. Against such critics, they offer claims such as:
“The Holocaust did happen. COVID-19 vaccines have saved millions of lives. There was no widespread fraud in the 2020 US presidential election.”
“There are many incontrovertible historical and scientific facts”
“Scientific knowledge cannot be understood as absolute, but this does not imply that scientific findings are arbitrary or unreliable, or that there are no valid standards for adjudicating scientific claims.”
Perhaps there are critics of misinformation research who deny the Holocaust, think scientific findings are arbitrary, and believe one should never classify information as true or false. However, I have never encountered them, and they are not found among any of the critics the authors cite in the commentary.
Consider an article cited as a source of problematic views. Written by Joseph Uscinski, Shane Littrell, and Casey Klofstad, the article is a thoughtful, much-needed critique of the epistemological naivety prevalent within much misinformation research. Although Uscinski and colleagues recognise that there are difficult philosophical issues concerning what truth is and whether we can ever fully overcome subjectivity, they never deny that truth or facts exist. Nor do they claim that science is arbitrary or unreliable; they are, after all, scientists. Instead, their article concludes:
“Investigating truth claims is a tough slog, and even scientists and trained fact checkers are bound to get things wrong. As such, misinformation researchers should proceed with caution, taking care not to stigmatize certain ideas as “false” that might not be demonstrably false.”
By treating their critics as anti-science subjectivists, the authors invent a strawman instead of engaging with legitimate critiques of how the concept of misinformation is defined and operationalised. For example:
There is no mention of cases—or even the possibility of cases—where the concept of misinformation is or has been misapplied.
There is no mention of the fact that their focus on unambiguously false claims will miss the most consequential forms of misleading content, and no mention of difficulties in defining or operationalising the concept of misleadingness.
There is no mention of the risk of political and ideological bias in how the concept of misinformation is applied in misinformation research. Contrary to the commentary’s assumption, even if classifications by misinformation researchers are typically accurate, they can nevertheless be extremely—and problematically—biased if they are one-sided in which examples of misinformation they focus on. As if to illustrate this, the commentary does not mention a single example of prominent misinformation spread by educated liberals or progressives.
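To see how classifications can be individually accurate yet collectively skewed, here is a minimal illustrative sketch with entirely made-up numbers (nothing below comes from the commentary or from any real dataset): two political camps are stipulated to spread misinformation at identical rates, every individual classification is correct, but researchers draw their examples overwhelmingly from one camp.

```python
# Hypothetical illustration of one-sided sampling; all numbers are invented.
# Suppose camps A and B each produce 100 false claims (identical underlying rates),
# and every claim a researcher examines is classified correctly.
false_claims = {"camp A": 100, "camp B": 100}

# Now suppose researchers draw 90% of the examples they study from camp A.
sampling_share = {"camp A": 0.9, "camp B": 0.1}
examples_studied = 50

flagged = {camp: round(examples_studied * share) for camp, share in sampling_share.items()}
print(flagged)  # {'camp A': 45, 'camp B': 5}

# Every one of the 50 classifications is accurate, yet the resulting corpus of
# "misinformation" implies camp A is nine times worse than camp B, even though the
# two camps were stipulated to be identical. Accuracy does not guarantee representativeness.
```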
More generally, one of the striking things about the commentary—and, again, most misinformation research—is that the entire framing takes for granted that misinformation is entirely other people’s problem. Whereas “the public” is vulnerable to misinformation spread by corporations and right-wing, populist politicians, the authors of the article are implicitly depicted as objective, impartial, and omniscient. It is this epistemological naivety and lack of intellectual humility that frustrates so many observers of misinformation research.
Issue #3: The difference between citizens and technocrats
According to the commentary, those who criticise misinformation research support “acquiescence in the face of widespread misinformation” and deny “that information can ever be confidently classified as true or false.” Against such critics, they argue that we must “promote evidence-based information and stand firm against false or fraudulent claims, unafraid to call them out as such”.
This framing rests on a basic confusion. It fails to distinguish between:
(1) People (whether journalists, scientists, pundits, writers, or ordinary voters) participating in public debate in their capacity as democratic citizens.
(2) Misinformation experts advising governments, international organisations, and major technology companies based on allegedly objective scientific findings about misinformation.
(1) and (2) are completely different. When people criticise misinformation research or policies, they raise worries about (2), not (1). Given this, it is inaccurate to depict such critics as rejecting the importance of truth, reason, and evidence in public debate. For example, I am a critic of (2) in many cases, and yet I devote a lot of my spare time to writing essays in which I try to communicate evidence-based claims and criticise popular misperceptions.
To establish the legitimacy of (2), which is what the commentary sets out to achieve, it is therefore not sufficient to establish that truth exists, that misinformation can often be unambiguously identified, or that science is reliable. Instead, you need to establish that misinformation research qualifies as an objective science of a sort that ought to inform technocratic (i.e. expert-driven) policy guidance.
Maybe there are good arguments that can establish this. However, because the authors fail to understand this point, most of the arguments offered in the commentary are irrelevant and—again—end up simply misrepresenting critics.
Issue #4: Misrepresenting misinformation interventions
One of the central points made in the commentary is that expert-driven interventions against misinformation do not violate democratic principles or infringe on freedom of speech. This is true in many cases. Moreover, I support many such interventions. For example, public information campaigns launched by governments on topics like vaccines are important, and specific fact-checking and prebunking initiatives are often worthwhile and legitimate, at least if the focus is exclusively on very clear-cut falsehoods and fabrications.
Nevertheless, two problems remain. First, the commentary completely omits any mention of the widespread censorship of content deemed “misinformation” on social media platforms based on expert classifications. Moreover, even if the authors oppose censorship, decisions by experts about what to classify as “misinformation”—for example, the claim by one of the authors that even true claims about people dying after being vaccinated should be labelled “misinformation”—can indirectly influence decisions about how governments, international organisations, and technology companies regulate the internet.
Second, even when misinformation interventions do not involve censorship, they can still be problematic. For example, the authors champion “logic-based inoculation”, which aims to “vaccinate” the public against misinformation by teaching them to identify alleged “misinformation techniques” through short videos and interactive games. This is an extremely influential and well-funded intervention, deployed or supported by numerous tech companies, governments, and international organisations. The authors tell us:
“As well as proving successful in the laboratory, large-scale field experiments (on YouTube, for example) have shown that brief inoculation games and videos can improve people’s ability to identify information that is likely to be of poor quality.”
They fail to mention any of the serious critiques of this whole idea. Elsewhere, I have identified how subjective and confused the theoretical foundations of the intervention are (for example, its treatment of emotional language as inherently manipulative). Moreover, its laboratory “proof” is extremely dubious. Not only does the intervention often just make people more likely to rate all content (true or false) as false, but experiments—including the YouTube “field experiment” they cite—use “synthetic” (i.e. made-up) examples of misinformation, which means there is no reason to think success in the experiments will generalise to the real world. Further, even if the interventions were successful, the fact that most of the information people encounter is not misinformation means it is highly likely the intervention will backfire, causing people to reject much more true information than false information.
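To make the base-rate worry concrete, here is a back-of-the-envelope calculation with entirely made-up numbers (the shares and rates below are illustrative assumptions, not estimates from any study):

```python
# Hypothetical base-rate arithmetic; every figure here is an assumption for illustration.
feed_size = 1000          # items a person encounters
misinfo_share = 0.05      # assumed share of genuinely false or misleading items
added_scepticism = 0.10   # assumed uniform increase in rejection rate from the intervention

false_items = feed_size * misinfo_share          # 50
true_items = feed_size * (1 - misinfo_share)     # 950

extra_false_rejected = false_items * added_scepticism   # 5 false items newly rejected
extra_true_rejected = true_items * added_scepticism     # 95 true items newly rejected

print(f"Extra false items rejected: {extra_false_rejected:.0f}")
print(f"Extra true items rejected:  {extra_true_rejected:.0f}")
```

On these assumptions, a uniform increase in scepticism discards nineteen true items for every false one; only an intervention that discriminates very sharply between true and false content avoids that trade-off.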
This illustrates why it is so important to have an accurate model of misinformation’s prevalence and dangers when designing interventions. If you think people are gullible, misinformation is rampant, and misinformation is the leading cause of troubling beliefs and behaviour in society, it makes sense to design interventions that teach people to be more paranoid about misinformation. However, if you think—as seems to be the case—that people are already highly suspicious of manipulation and low-quality misinformation is relatively rare in their information diet, you will realise there is a high chance such interventions will backfire, exacerbating the distrust that lies at the root of many profound epistemic problems in society.
It also illustrates why it is appropriate to hold misinformation researchers and misinformation interventions to very high standards. Even if expert classifications and research are not being used to censor, there is a risk that faulty and highly subjective assumptions will shape popular, well-funded interventions that either achieve little of value or worsen the problems they aim to fix.