How Dangerous is Misinformation?
The problem with alarmism about "misinformation" is not that it is too pessimistic about the state of media and public discourse. The problem is that it is not pessimistic enough.
Misinformation panic
Since 2016, many people have been greatly concerned about the insidious effects of “misinformation” in society. Of course, concerns about falsehoods, lies, propaganda, bullshit, distortive ideologies, and so on long predate 2016—they have been recognised as social evils as long as humanity has been reflecting on politics and society—but the beliefs (i) that such problems are usefully framed around the concepts of “misinformation” and “disinformation” and (ii) that misinformation and disinformation are uniquely bad today are recent developments. They emerged in response to two populist revolts of 2016—Brexit and Trump’s election—and, if anything, they have only gained in popularity since then.
I have been very critical of this focus on “misinformation”. This is for many reasons:
Prominent misinformation research often rests on shaky theoretical and scientific foundations;
Much of this research is clearly biased in terms of which examples of false and misleading content it focuses on;
The whole post-2016 focus on misinformation often rests on a preposterous image of a pre-2016 golden age of truth and objectivity;
Most analyses of misinformation (both by researchers and journalists) mistakenly treat human beings as gullible rubes who believe whatever claims appear on their social media feeds.
In addition, I think alarmist narratives about misinformation tend to greatly exaggerate its prevalence and effects. At least in Western democracies, empirical research suggests that misinformation is pretty rare and largely symptomatic of other problems. Given this, I have (drawing on the work of numerous others) referred to the post-2016 preoccupation with misinformation as a “moral panic.”
I obviously do not think misinformation is mythical or harmless. The point of the “moral panic” framing is rather to highlight that existing scientific evidence does not support popular alarmist narratives about the prevalence and dangers of misinformation. Specifically, this alarmism neglects the following facts, all of which are well-supported by extensive scientific research:
Most people do not pay close attention to politics or current affairs at all. They do not encounter much news of any kind, let alone fake news.
When people do follow news or current affairs, they overwhelmingly tune into mainstream media. Indeed, television is five times more popular as a source of news than online sources, and the importance of social media as a source of political or scientific information for most ordinary people is greatly exaggerated.
Those who engage with fringe media outlets where misinformation is prevalent are not otherwise ordinary people who slipped on a banana peel and fell down a “rabbit hole.” They are people who are strongly predisposed to engage with fringe content based on their pre-existing worldviews, traits, and identities—for example, because they are political fanatics who want to demonise their political enemies; disaffected trolls who want to sow chaos in society; or segments of the population that distrust (and often despise) the establishment and elites. Because misinformation overwhelmingly preaches to the choir this way, its behavioural impacts tend to be limited.
Under-estimating misinformation?
Admittedly, this research on the prevalence and impact of misinformation generally measures misinformation by focusing on fringe, low-quality websites. Although this greatly overestimates the prevalence of misinformation in some ways—for example, it classifies everything (100% of content) from sites like The Daily Wire and Breitbart as misinformation—it also misses some misinformation. For example, if, say, CNN truthfully reports something Trump says, and what Trump says is insane, this would be coded as non-misinformation in most studies.
In some ways, this classification decision makes sense. Initially, the term “misinformation” was widely used to refer to the coverage of media outlets that made stuff up (i.e., published fake news). Politicians have always lied, bullshitted, and been careless with the truth; fake news, in contrast, was supposed to be a media phenomenon that was genuinely novel in terms of its scale and dangers. Moreover, if a media outlet accurately reports a politician’s false claims, there is an important sense in which audiences have not been misled: they have acquired accurate information about a feature of the world (i.e., what certain politicians think).
Nevertheless, some people understandably want to include blatantly false claims from politicians and pundits in their estimation of the prevalence of misinformation. Once such claims are included, does the resulting picture justify current alarmism about misinformation?
Not really. Although including them suggests that misinformation is more prevalent than many estimates in the scientific literature imply, the same lessons detailed above apply.
First, most people do not pay much attention to politics. For example, the number of people attending closely to what Trump says is a very small percentage of the population. In general, people who are very interested in politics—pundits, journalists, activists, political hobbyists, and so on—are a (statistically) weird minority of the population who tend to greatly underestimate the average voter's extraordinary ignorance of, and indifference towards, politics.
Of course, most people are dimly aware of Trump’s very general views, including his misinformed claim that the 2020 presidential election result was fraudulent. However, this leads to a second point: what predicts whether people agree with Trump’s views is not exposure to those views but whether they support Trump. (This is true even among Republicans.) In this sense, election denial is downstream of—symptomatic of—this support. The idea that people support Trump because they have been duped by his misinformation is simply not a plausible theory of public opinion formation.
Misinformation ≠ misperceptions
More generally, there is a pervasive tendency among misinformation researchers and commentators to conflate two very different things: (1) the impact of misinformation and (2) the prevalence of misperceptions (i.e., false beliefs). For example, people often treat the fact that most Republicans (63%) believe the “big lie” as proof that misinformation is extremely dangerous.
This rests on a confused picture of how people form their political beliefs. The origins of political misperceptions are complex. For example, in understanding the popularity of election denial among Republican voters, the following factors are a good place to start:
Conspiracy theories are for losers. Roughly 30-40% of supporters of the losing party in US elections tend to think they were cheated.
Republican voters’ trust in institutions (government, mainstream media, universities, etc.) is extremely low and has been falling in recent years. Hence, their tendencies towards biased and self-serving interpretations of political events—e.g., that they only lost an election because they were cheated out of it—are far less constrained by corrections from establishment institutions than they were in the past.
There is intense and growing affective polarisation in the US, with many Republicans having very negative attitudes towards Democrats and viewing them as a threat (and vice versa). Hence, many treat the idea that Democrats would do something diabolical like rig an election as much more plausible than they would have in the past.
These and numerous other factors conspire to make election denial appealing and attractive to many Republican voters. The story many liberals have in their heads—that such voters are being brainwashed en masse by a sophisticated disinformation campaign—is, therefore, simplistic and implausible. As the social scientist Dan Kahan points out, in most cases, “misinformation is not something that happens to the mass public but rather something its members are complicit in producing.”
In response to this kind of reasoning, people often generate a counterfactual test: “Are you saying that the same number of Republicans would support election denial if Trump had not pushed the narrative?!”
This counterfactual is misleading, however.
First, we do not know the answer. Consider the case of vaccines. Trump actually did push the idea that COVID-19 vaccines are good (because he wanted to take credit for them). This generated a hostile response from his base, so he stopped doing that. If he had come out and said, “The election was completely legitimate, and I lost fair and square”, the reaction might have been the same. We do not know.
More importantly, Trump would never have acknowledged the legitimacy of the election result. A person who did that would not be Trump. If you want to understand the real world of American politics, you must understand why so many people actively support Donald Trump, not an imaginary person who somehow rose to power within the modern Republican Party whilst conforming to ordinary pro-establishment political norms. We know what happened to such people within the Republican Party; they were outcompeted by Trump so spectacularly over the past decade that the party now effectively functions as the Trump Party.
None of this should be understood as apologetics for Trump. I think he is a truly terrible human being, far worse than the average political leader, and a genuine threat not just to American democracy but to the world. The point is rather that concepts like “misinformation” and “disinformation” do not provide a useful framework for understanding his popularity or the threats he poses. Indeed, in some ways, I think the whole “dis/misinformation” framing that is so popular today is simply a confused, roundabout way of condemning Trump and other right-wing populists in seemingly “objective”, “non-partisan”, technical language. It would be better to just drop the pretence and be honest about the real game here.
Further, I am also not claiming that misinformation—whether from Trump or anyone else—has zero harmful effects in the world. I think fake news is bad. I think blatant lies are bad. They undoubtedly have negative consequences in politics. However, an annoying motte-and-bailey dynamic arises in this context. Alarmists treat dis/misinformation as an extreme social danger, the top global threat, the “coin of the modern realm”, and so on. When one points out that these concerns are simplistic, unsupported, and greatly exaggerate the dangers of misinformation, alarmists retreat to a very different view: “Are you claiming that misinformation has zero effects on the world and has no negative consequences?!” “Are you claiming that scholars and policymakers should pay zero attention to misinformation?!”
Of course not. Nobody is claiming those things or has ever argued for them.
Too much optimism?
One objection to this deflationary view about the dangers of misinformation is that it implies an absurdly optimistic analysis of the quality of political debate, popular punditry, media coverage, and mass belief systems. If misinformation is not a great societal threat, does that mean that most media coverage and contributions to political discourse are honest, truthful, reliable, evidence-driven, and so on? That seems absurd.
This is how a group of leading misinformation researchers frame the controversy in a recent (not-yet-published) article titled “Why misinformation must not be ignored”. The article begins:
“Misinformation has received much public and scholarly attention in recent years. The fundamental question of how big a concern misinformation should be, however, has become a hotly debated topic. On the one hand, some scholars highlight the potential harms of misinformation and the dangers to public health and democracy. On the other hand, critics of the concern over misinformation offer reassurance that we need not worry about the health of public discourse.”
In other words, people who criticise alarmism surrounding “misinformation” are claiming that the state of public discourse is healthy.
This is mistaken. In my view, the problem with alarmism about “misinformation” is not that it is too pessimistic about the health of public discourse. The problem is that it is not pessimistic enough. Specifically, such alarmism typically rests on two extremely optimistic—and extremely implausible—beliefs: (1) that misinformation is the main form of bad information in society and (2) that bad information is easy to identify, at least for misinformation researchers.
Misinformation is not the main form of bad information in society
The term “misinformation” is mired in definitional controversy and confusion. Some suggest that it boils down to little more than “ideas with which misinformation researchers happen to personally disagree.” Nevertheless, in practice, most research on misinformation focuses on what misinformation researchers and experts consider to be extremely clear-cut—that is, “unambiguous”—falsehoods and fabrications.
More specifically, they focus on information that explicitly contradicts the consensus judgements of people deemed to be experts (e.g., scientists, public health authorities, and established fact-checking organisations). For example, if someone fabricates a news story (e.g. “The Pope endorses Donald Trump for president”), contradicts overwhelming scientific consensus (e.g. “vaccines cause autism”), or asserts a claim that is debunked by a consensus of fact-checkers (e.g. “the software used in voting machines was created in Venezuela at the direction of Hugo Chavez”), that constitutes misinformation. (When such claims are made deliberately with the intention of deceiving audiences, they are typically classified as “disinformation”).
Given this, it should be obvious that criticising alarmist narratives about misinformation, so defined, does not amount to claiming that the state of media coverage, political rhetoric, and public debate is “healthy”. It just means that people exaggerate the dangers of that specific kind of content. There are countless ways in which communication can be—and is—highly biased, propagandistic, and misleading, even if it never involves misinformation in this sense.
First, because “misinformation” is overwhelmingly identified by focusing on information that contradicts the consensus judgements of experts and elites within society’s leading knowledge-generating institutions, the focus on misinformation ignores how such institutions can themselves be deeply dysfunctional and problematic. This includes science, intelligence agencies, mainstream media, and so on.
In fact, it is noteworthy that the most spectacular epistemic fuck-ups in recent decades overwhelmingly occurred within leading establishment institutions. The US, the UK, and other countries invaded Iraq partly based on falsehoods about WMDs and Saddam Hussein’s connections to Al Qaeda. These falsehoods were spread, supported, and amplified by most establishment politicians (e.g. those who now complain about “mis/disinformation”), intelligence agencies, a wide range of experts, and much of mainstream media. Likewise for the popular—again, establishment, mainstream—beliefs about financial stability and economic risk that led to the 2007-2008 financial crisis, the worst economic crisis since the Great Depression.
More generally, the research of Philip Tetlock and others shows that putative experts and intellectuals—namely, credentialed elites who exert considerable and disproportionate influence on policy-making through their advising, punditry, and commentary—are often extremely unreliable, dogmatic, and ideological.
Second, there are countless ways in which communication can be highly biased, propagandistic, and misleading, even if it never involves “misinformation.” In fact, it is a consensus view within media research that publishing outright falsehoods and fabrications constitutes the least common mechanism of media bias. One reason is that publishing misinformation in this sense is almost always unnecessary: even if your explicit goal is to mislead audiences, you can just cherry-pick, omit context, consult with congenial experts, situate facts in specific interpretative and explanatory frameworks, and so on. Another reason is that publishing outright “misinformation” is reputationally and financially hazardous. Given this, the media very rarely makes things up.
These same lessons generalise to politicians and political pundits. Politicians like Donald Trump and Marjorie Taylor Greene are exceptional not just in the quantity of falsehoods they assert but in their mode of deception and bullshittery: because they define themselves as anti-establishment politicians, they do not mind making claims that get debunked by establishment institutions. Indeed, insofar as their supporters often despise the establishment, saying things that outrage elites within the establishment is probably very often a feature, not a bug, when it comes to their rhetoric. Most politicians in liberal democracies are not like this. However, the fact that they try to refrain from asserting outright misinformation obviously does not mean that the claims they make are therefore thoughtful, reasonable, honest, and well-supported by evidence. Instead, they engage in spin, propaganda, insinuation, cherry-picking, framing, and so on.
These remarks barely scratch the surface of this complex issue. The basic point, however, is that there are countless ways in which public debate and media coverage can be “unhealthy” that have nothing to do with “misinformation” as it is typically understood. Given this, the idea that misinformation is the only threat to the quality of public debate and deliberation is extremely naive.
This was demonstrated in a recent study by Jennifer Allen and colleagues on how vaccine-related content on Facebook affected vaccination intentions in the US. Although there is lots of nuance in their methodology, their headline finding is that factually accurate information that nevertheless cast doubt on the safety of vaccines (e.g., reports of rare vaccine-correlated deaths) was MUCH more prevalent and impactful—roughly 46 times more consequential—on the platform than outright misinformation about vaccines:
“…[W]e found that flagged misinformation URLs received 8.7 million views during the first 3 months of 2021, accounting for only 0.3% of the 2.7 billion vaccine-related URL views during this time period. In contrast, stories that were not flagged by fact-checkers but that nonetheless implied that vaccines were harmful to health—many of which were from credible mainstream news outlets—were viewed hundreds of millions of times.
“We find that flagged misinformation does causally lower vaccination intentions, conditional on exposure. However, given the comparatively low rates of exposure, this content had much less of a role in driving overall vaccine hesitancy compared with vaccine-skeptical content, much of it from mainstream outlets, that was not flagged by fact-checkers.”
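As a quick sanity check, the quoted figures are internally consistent. A minimal sketch, using only the two numbers given in the passage (the 46x impact estimate comes from the authors' causal model and cannot be derived from these figures alone):

```python
# Sanity check of the proportions quoted from Allen and colleagues' study.
# Figures are taken directly from the quoted passage.
flagged_views = 8.7e6   # views of flagged misinformation URLs (first 3 months of 2021)
total_views = 2.7e9     # total vaccine-related URL views in the same period

share = flagged_views / total_views
print(f"Flagged misinformation share of views: {share:.2%}")  # ≈ 0.32%, i.e. the "only 0.3%" reported
```

The point of the calculation is simply that flagged misinformation was a rounding error of total vaccine-related viewership, which is why the unflagged, vaccine-skeptical mainstream content dominated the aggregate effect.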
Bad information is not easy to identify
At this point, many misinformation researchers are tempted to broaden their definition of misinformation. If factually accurate information can be misleading, then why not just define “misinformation” to include this kind of content?
I think this idea is terrible for reasons I have elaborated on elsewhere. Roughly, I think that the concept of <misleading content> is so expansive (it encompasses nearly all of media and political communication), amorphous (it is genuinely difficult to even say what it means for content to be misleading), and value-laden (what content you judge to be misleading is heavily shaped by your priors, values, and political allegiances) that it is not suitable for scientific classification. Determining whether claims are misleading, and in which context they should be situated, is a task for political debate among the democratic citizenry. We should not delegate this job to a class of experts.
In the present context, the relevance of this point is as follows: Misinformation researchers and those spreading alarmism about misinformation are often naive and overly optimistic in their view that bad information is easy to identify, at least by highly educated, credentialed experts and journalists with the “right” ideology and values (i.e., establishment liberal and progressive politics).
This attitude might be defensible regarding misinformation in a very narrow sense—that is, as clear-cut contradictions of expert consensus. However, once you broaden your focus to the realm of misleading communication (where even true claims can be misleading), this perspective is deeply implausible. Whether a given act of communication qualifies as misleading will only be assessable in an extremely context-sensitive way, drawing on a suite of values and a worldview largely acquired from exposure to the media outlets one would presume to judge.
Consider one of the allegedly “misleading” headlines in Allen and colleagues’ study. Viewed by nearly fifty-five million people on Facebook, it came from the Chicago Tribune and reads:
“A ‘healthy’ doctor died two weeks after getting a COVID-19 vaccine; CDC is investigating why.”
According to misinformation researchers Sander van der Linden and Yara Kyrychenko,
“This headline is misleading because the framing falsely implies causation where there is only correlation (i.e., there was no evidence that the vaccine had anything to do with the death of the doctor).”
I find this reasoning difficult to understand. The headline simply states a fact—a healthy doctor died two weeks after being vaccinated—and accurately reports that the CDC was investigating the reason why this happened. Van der Linden and Kyrychenko confidently declare that “there was no evidence that the vaccine had anything to do with the death of the doctor”. But presumably it was the job of the CDC to evaluate the evidence there—hence why they were investigating and why the Chicago Tribune reported that they were investigating.
What exactly is the underlying principle here? Is it always misinformation if a media outlet reports on rare events that organisations in society are investigating? Is it always misinformation if they report on rare events? (Should media outlets never report rare events?!). Or is it only misinformation if an outlet implies the possibility of a causal relationship in the absence of strong positive evidence?
I am confident that nobody in the world—and not a single misinformation researcher—would consistently endorse any of these definitions of what constitutes misinformation.
To make this clear, let me pick the most incendiary example possible: Consider the enormous media coverage of the killing of George Floyd. It might be one of the most intensely covered events in the history of mass media. Many more people saw more headlines, news stories, and commentary about that one event than saw the Chicago Tribune headline.
Well, that was—statistically speaking—a rare event that was going to be investigated by an official organisation. So, was it misinformation when every media outlet in the world covered it extensively? Just like many people greatly overestimate the risks of vaccines, evidence suggests that many people greatly overestimate how common it is for unarmed Black men to be killed by the police. A conservative might argue, therefore, that the media coverage—in both its content and intensity—was highly misleading, painting an extremely unrepresentative image of police behaviour in the US.
Perhaps one could respond that it is fine to cover rare events; it only becomes misinformation if causation is implied without strong positive evidence. But this looks even more problematic. Whereas the Chicago Tribune headline does not assert anything about causation, there was an enormous amount of media coverage and commentary that directly asserted (and in many cases simply assumed) that George Floyd’s killing was motivated by racism. That was a very strong causal claim. Indeed, conservatives were quick to point out that a white man named Tony Timpa was killed by police in 2016 in extremely similar circumstances to George Floyd. At the very least, that complicates any direct inference of racism as a cause of Floyd’s death. Insofar as media outlets failed to note this in covering Floyd’s death, does that mean that they were omitting relevant “context” in such a way that their coverage amounted to “misinformation”?
Let me be very clear about what I am not saying: I am absolutely NOT saying that the media coverage of George Floyd’s killing was misinformation. I think it is extremely important to shine a spotlight on police misconduct, even if such misconduct is rare and it might lead some people to form misperceptions. I also find it plausible that racism played a role in what happened to Floyd. Moreover, although I am less confident about this judgement and I would never label it as “misinformation”, I think the Chicago Tribune’s headline was irresponsible: Given that we had strong independent evidence at the time that vaccines were safe, and given my belief that public health authorities are generally trustworthy, the newspaper should have taken more care not to give lots of attention to an extremely rare and unrepresentative occurrence.
But here is the thing: My judgement calls here are driven by an entire background worldview, ideology, and set of values. You will never be able to reduce these judgements to simple, mechanical principles like “Don’t report on rare events” or “Don’t imply causality in the absence of strong positive evidence”. A large range of reasonable judgment calls will arise from the different priors, experiences, interests, values, and biases people bring to public debate. Given this, although we can be confident that lots of communication in the public sphere is misleading, we should not be confident in anyone’s ability to detect it with high reliability and impartiality.