Misinformation researchers are wrong: There can't be a science of misleading content.
Clear-cut cases of misinformation are rare and largely symptomatic of other problems. Subtler forms of misinformation are widespread and harmful - but not suitable for scientific study.
Summary
To address objections that modern worries about misinformation are a moral panic, researchers have broadened their focus to include true but misleading content. However, there can’t be a science of misleading content. It is too amorphous, too widespread, and judgements of misleadingness are too subjective.
The emergence of misinformation studies
According to a popular narrative, democratic societies are plagued by misinformation, which is driving the public to accept falsehoods about election fraud, conspiracies, climate change, vaccines, and more. Given the dangers associated with such falsehoods, misinformation threatens democracy, social harmony, the environment, and public health. As Joe Biden declared recently, it is literally “killing people.”
This narrative first gained popularity in 2016 after Brexit and the election of Donald Trump, and it gave birth to the field of modern misinformation studies, which researches the nature and causes of misinformation and designs interventions to limit its harms.
The field has been extremely influential, both scientifically and politically. It attracts big grants. It has fuelled a steady stream of publications in elite scientific journals. And its researchers are consulted by governments, international organisations, and corporations (Google, Meta, etc.) in their efforts to combat misinformation.
Can there be a science of misinformation?
The influence of misinformation studies is rooted in its reputation as a scientific field composed of experts generating scientific findings. For example, researchers measure the amount of misinformation people are exposed to, sometimes to several decimal places; they make generalisations about misinformation (e.g., that it is associated with “fingerprints”) and about people’s “susceptibility” to it (e.g., that conservatives are more susceptible to misinformation than liberals); and they quantify the effects of interventions designed to combat misinformation.
For any of this to make sense, we must know what misinformation is – that is, how to define the concept. Joan Donovan, an influential misinformation researcher, argued in 2021 that providing such a definition is straightforward: “Misinformation,” she told a journalist, is simply “false information that’s being spread.”
This is a common definition. Charitably, what it means is something like “demonstrably false information.” There are many false claims that nobody is in a position to debunk because the truth is difficult to obtain and we lack evidence. However, there are some claims – “Vaccines cause autism”, “Climate change is a hoax”, “The 2020 US presidential election was invalidated by extensive voter fraud”, etc. – that can be debunked with a very high degree of reliability and certainty.
Sticking to such a narrow definition has obvious attractions. Although there will inevitably be edge cases and mistakes, demonstrably false content is relatively easy to identify, both by experts and algorithms. Moreover, because the examples of misinformation are so clear-cut, misinformation researchers can avoid accusations of bias and overreach.
A moral panic
Nevertheless, this narrow definition also confronts a problem: at least in Western democracies, most citizens do not encounter much misinformation in this sense, and its existence is largely symptomatic of other problems.
Consider some general facts about misinformation on this narrow definition:
First, it is relatively rare. Extensive empirical research shows that the average person consumes very little of it, especially when contrasted with information from mainstream sources (e.g., the BBC, CNN, the NYT, etc.). In fact, typical estimates are likely to greatly exaggerate the amount of clear-cut misinformation because they measure it at the source level. That is, they classify 100% of the content from low-quality websites and outlets as misinformation. However, even extremely unreliable outlets generally refrain from publishing demonstrable falsehoods.
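To make the source-level point concrete, here is a minimal, purely illustrative sketch in Python. The domain names and article counts are invented for the example (they come from me, not from any study); the only point is that counting every article from a flagged outlet as misinformation can yield a far higher prevalence estimate than counting demonstrable falsehoods directly.

```python
# Purely illustrative: how source-level measurement can inflate
# misinformation estimates. All domains and counts are hypothetical.

# Toy corpus: (domain, is_demonstrably_false) pairs.
articles = (
    [("lowquality.example", False)] * 95    # true but arguably misleading
    + [("lowquality.example", True)] * 5    # demonstrable falsehoods
    + [("mainstream.example", False)] * 900
)

LOW_QUALITY_DOMAINS = {"lowquality.example"}

# Source-level measure: every article from a flagged outlet counts.
source_level = sum(domain in LOW_QUALITY_DOMAINS for domain, _ in articles)

# Claim-level measure: only demonstrably false articles count.
claim_level = sum(is_false for _, is_false in articles)

total = len(articles)
print(f"Source-level estimate: {source_level / total:.1%}")  # 10.0%
print(f"Claim-level estimate:  {claim_level / total:.1%}")   # 0.5%
```

On these toy numbers, the source-level estimate is twenty times the claim-level one; nothing hangs on the particular values.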
Second, engagement with misinformation is heavily concentrated in certain groups. Most people share and consume very little, but a relatively small minority of very active social media users engage with quite a lot. Call this the “misinformation minority.”
Third, the misinformation minority is not a cross-section of the population. They are people with very specific traits, such as strong conspiratorial worldviews, high partisan animosity (i.e., they actively hate political and cultural enemies), anti-establishment attitudes, and—most importantly—institutional distrust. The causes of this distrust are complex, but exposure to narrow misinformation seems to play a minor role. People seek out narrow misinformation because they distrust institutions (science, public health, mainstream media, etc.), not vice versa.
Given this, some people (myself included) argue that the post-2016 preoccupation with misinformation among journalists, policy makers, social scientists, documentary makers, and so on is a moral panic. Narrow misinformation is not mythical, and it can be harmful. Nevertheless, the degree of alarmism surrounding it seems totally unsupported by the scale of the threat.
Broadening the definition?
An obvious response to this kind of critique is to broaden the definition of misinformation. There are many ways in which communication can be highly misleading without expressing demonstrable falsehoods. If people are strategic in how they select, omit, frame, package, contextualise, de-contextualise, and explain facts – genuine facts of the sort that “fact-checkers” endorse – they can mediate reality to audiences in ways that are extremely misleading.
Perhaps, then, misinformation should be defined broadly as misleading information. Unlike clear-cut examples of misinformation, misleading content is widespread. Moreover, it seems highly consequential. In fact, it is so widespread that it is difficult to see how it could fail to be consequential.
To illustrate, consider a recent study on vaccine misinformation, in which Jennifer Allen and colleagues examined the impact of Facebook content on vaccination intentions among US citizens. Their basic finding is striking:
“[T]he impact of misinformation was 50X less than that of content not flagged by fact-checkers that nonetheless expressed vaccine skepticism.”
In other words, content that was not demonstrably false – for example, true reports of rare vaccine deaths that get widely amplified on social media – was much more prevalent and much more impactful than demonstrable falsehoods.
A similar point could be made about highly partisan media outlets like Fox News. Although such outlets rarely communicate direct lies or outright fabrications, research consistently shows that their highly selective reporting, framing, and packaging of facts shape the attitudes and behaviours of their audiences in important ways.
Given this, broadening the definition of misinformation to focus on true but misleading content has obvious advantages for misinformation researchers. It also provides a natural way of addressing the charge that they have fuelled - and benefited from - a moral panic. So defined, misinformation is not rare and symptomatic. It is common and consequential.
There can’t be a science of misleading content
But here is the problem: although misleading information is widespread and harmful, there can’t be – more precisely, there shouldn’t be – a science of misleading content.
By this, I don’t mean that scientists should refrain from studying the ways in which communication can be misleading. I also don’t mean that scholars should avoid studying specific disinformation campaigns and the manipulative tactics they use. What I mean is that misleading information is not the kind of thing that lends itself to scientific detection, measurement, and generalisation. That is, it is misguided - inappropriate, even - to pretend to scientifically measure the amount of misleading content people are exposed to, their “susceptibility” to it, or what percentage of the informational ecosystem it constitutes. And it is extremely misguided to delegate the task of determining which true claims are nevertheless misleading to a class of misinformation experts.
There are three related reasons for this.
First, what is misleading content? The concept is amorphous and value-laden, more akin to concepts like cowardice or ugliness than to technical scientific concepts. The mere fact that communication is selective cannot be sufficient: the number of possible facts to report is infinite, so communication must be selective. Perhaps it is selective reporting that leads to false beliefs? But that can’t be quite right either. Much misleading reporting doesn’t lead directly to false beliefs. Cherry-picking, for example, leads audiences to form true beliefs about the cherries being revealed.
Few people are more knowledgeable about the events of 9/11 than 9/11 conspiracy theorists. Few people are more knowledgeable about statistics that reflect unfavourably on minorities than racists. The problem in such cases is not straightforwardly that people have false beliefs (many of their beliefs might be true) or that their beliefs are partial (beliefs are necessarily partial) but that they infer inappropriate conclusions from biased evidence or in some sense have the “wrong” beliefs. But determining whether communication leads indirectly to inappropriate conclusions or the wrong beliefs in this sense is a complex, highly context-sensitive, and often value-laden task. It doesn’t look like the kind of job we should simply assign to a class of misinformation experts.
Second, on any understanding of the concept, misleading information is not just widespread; it is so widespread that the concept loses all scientific value. Even setting aside innumerable subtle partisan, ideological, and economic biases, mainstream news media – which misinformation researchers often simply define as reliable – report on a highly selective, non-random sample of everything bad happening in the world. This in turn leads to pervasive false beliefs among their audiences. Does that mean that most mainstream news should be classified as misinformation?
Similar points could be made about lots of scientific research (including misinformation research) and communication by public health authorities. The concept of misinformation was intended to pick out an aberration in the informational ecosystem. However, the kinds of tactics that underlie misleading content - cherry-picking, removing context, subtle framing devices, selectively consulting congenial experts, hyperbole, and so on - are so pervasive that any attempt to distinguish between misleading content and non-misleading content will end up looking hopelessly arbitrary.
Finally, it is difficult to see how assessments of which acts of communication are misleading could ever be unbiased. Unlike the identification of clear-cut examples of misinformation, determining which content is misleading – problematically selective, stripped of relevant context, and so on – seems highly vulnerable to the kinds of biases, prejudices, and partiality that constitute ineradicable features of human cognition.
Suppose, for example, that researchers include the accurate reporting and amplification of rare vaccine-related deaths in their classification of misinformation. Does accurate, wall-to-wall media coverage of rare police shootings of unarmed black citizens in the USA also get included? What if, as might be the case, such coverage leads people to systematically overestimate how frequent such shootings are? This is just one (admittedly incendiary) example, but one could pick a million.
All communication across all contexts - news, opinion journalism, science, misinformation studies, political debate, and so on - involves countless decisions about what information and context to include, what to exclude, how to present information, which narratives and explanatory frameworks to embed the information in, and so on. Any attempt to divide this communication into a misleading bucket and a non-misleading bucket will inevitably be biased by pre-existing beliefs, interests, and allegiances.
The (Human, All Too Human) Marketplace of Ideas
Let me be clear what I am not saying:
I am not endorsing a weird postmodernist view where there are no differences in the reliability, honesty, and objectivity of different outlets, institutions, and communicators. The BBC is more reliable than Infowars. Astronomy is more reliable than astrology. Public health authorities give more reliable coverage of vaccines than highly conspiratorial anti-vaxxers. My point is rather that a concept like misleadingness is so amorphous, value-laden, and subjective that it inevitably requires misinformation researchers to make far more contentious judgements than these.
I am not claiming that we are never in a position to accurately determine when content is misleading or to distinguish between communicators in terms of how misleading their content is. My point is rather that this is much more challenging than most misinformation researchers seem to appreciate in ways that should weaken our confidence that such a project could ever be truly scientific.
Finally, I am not claiming that people should refrain from calling out misleading content when they encounter it. My point is rather that these judgements should be left to democratic citizens - fallible, biased, partial citizens - hashing it out within the marketplace of ideas. They should not be delegated to a class of misinformation experts pretending to occupy a neutral scientific vantage point outside of it.
Further reading
A reader asked me to include reading recommendations at the end of my essays, so here are some excellent articles and books that I recommend for those interested in exploring this general topic in more depth:
Walter Lippmann, Public Opinion - a classic, extremely insightful statement of the epistemic challenges facing democracies and the inevitable biases in journalism and media.
Joe Uscinski, What Are We Doing When We Research Misinformation? - a brilliant analysis of the challenges of defining misinformation and the problems of subjectivity in misinformation research.
Scott Alexander, The Media Very Rarely Lies - the clearest, most accessible case that misleading content is (1) ubiquitous and (2) rarely takes the form of demonstrable falsehoods.
Ruxandra Teslo, The Road to (Mental) Serfdom & Misinformation Studies - an insightful critique of misinformation research and a useful distinction between “brute misinformation” and “haute bourgeois propaganda.”