Should we trust misinformation experts to decide what counts as misinformation?
Science, truth, bias, ideology, postmodernism, expertise, and more.
I recently published an essay arguing that modern misinformation research confronts a dilemma when it comes to defining ‘misinformation’:
On the one hand, if researchers define the concept so that it only includes clear-cut falsehoods, misinformation appears to be relatively rare in the media ecosystem and largely symptomatic of other problems, at least in Western democracies.
On the other hand, if researchers define the concept to include subtler ways in which communication can be misleading even when it’s not demonstrably false, the concept becomes so expansive, amorphous, and value-laden that we shouldn’t trust misinformation experts to decide what counts as misinformation.
Unsurprisingly, many people, including numerous leading misinformation researchers, disagreed with the essay. In this post I will respond to several critiques. First, however, I will re-state my argument, hopefully in a clearer and more persuasive form.
What is misinformation?
The term ‘misinformation’ is everywhere. We’re told that we live in the “misinformation age” (see also “disinformation age” and “post-truth era”); that misinformation has “reached crisis proportions”; that misinformation is (along with disinformation) the “top global threat over the next two years”; that misinformation is a major cause of all sorts of troubling social and political trends; and so on.
Accompanying such claims, there is also a vast body of scientific research on misinformation that purports to establish various findings about it.
For example, misinformation experts tell us that conservatives are more “susceptible” to misinformation than liberals; that misinformation has certain surface-level “fingerprints”; that people can be “inoculated” against misinformation by learning these fingerprints; that misinformation spreads differently to reliable information; that the prevalence of misinformation can be quantified; that misinformation is more common in right-wing media than in centrist or left-wing media; and so on.
For any of this to make sense, we must know what misinformation is. Moreover, whatever it is - however we define the concept - it must be the kind of thing that lends itself to scientific classification, measurement, and generalisation by experts.
So what is it?
False information?
Some critics of misinformation research argue that the term is just a deceptively technical-sounding way of dismissing - or, worse, censoring - content that powerful people dislike.
I think that’s too strong. At least when it comes to really clear-cut cases of false content - absurd conspiracy theories, clear contradictions of overwhelming and reliable expert consensus (e.g., “the Earth is flat”, “vaccines cause autism”, etc.), and so on - I think it’s fine for experts to classify such content as misinformation. (I’m against censorship in such cases, but that’s a different issue).
In my essay I referred to such content as “demonstrably false content.” Admittedly there are still many complications that arise even for this simple definition. Experts can be wrong, for example. (In practice, wouldn’t this definition have classified Galileo’s and Einstein’s ideas as misinformation?) And who counts as an expert, anyway? And what degree of expert consensus is necessary? And so on.
Moreover, I think there are legitimate worries that misinformation researchers and Big Disinfo more broadly are not very even-handed in which examples of demonstrably false content they focus on.
Nevertheless, at least on a very narrow, conservative interpretation of “demonstrably false content”, I don’t have much of a problem when misinformation researchers classify such content as misinformation. In this sense I’m different from critics who want to do away with the concept altogether.
Misleading Information?
The problem is that many misinformation researchers do not want to stick to a narrow definition. Instead, they want to use the term ‘misinformation’ to classify content that isn’t demonstrably false but is nevertheless in some sense misleading - for example, because it’s cherry-picked or lacking appropriate context.
I think this is a bad idea.
I don’t deny that misleading communication exists and is often harmful. Rather, my view is that it is so ubiquitous in politics, media, culture wars, and public debate that any attempt by misinformation experts to use the concept of misleadingness to sort communication into buckets labelled ‘misinformation’ and ‘not misinformation’ will inevitably be extremely - and problematically - selective.
For example, there’s an enormous amount of misleading content associated with mainstream media, opinion pieces in elite liberal media outlets (the NYT, The Guardian, etc.), expert commentary, social science, and more. In fact, there’s an enormous amount of misleading content within misinformation research itself. And yet it’s very unlikely that misinformation experts would ever classify any such content as misinformation.
To see this, consider just one case.
The spread of true and false news online
In 2018, the journal Science (one of the world’s most prestigious scientific journals) published an article titled ‘The spread of true and false news online’. The article is extremely influential. It has almost certainly shaped how policy makers think about misinformation and social media. As of February 2024, it’s been cited over 8,000 times, making it one of the most highly-cited social science articles in the last decade.
Here is the headline finding of the study, stated in the abstract of the article:
“Falsehood [i.e., false rumours on Twitter during the period within which data was collected] diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.”
That’s a striking finding, and it has been repeated many, many times in mainstream media. Here is just a small sample of headlines:
‘Why fake news on social media travels faster than the truth’ (The Guardian)
‘Falsehood flies and the truth comes limping after’ (The Financial Times)
The finding is also constantly cited as an established result in scientific research on misinformation. Again, here is a small sample (just the tip of the iceberg):
A review article on the psychology of misinformation cites the article as justification for the claim that “the internet is an ideal medium for the fast spread of falsehoods at the expense of accurate information”.
An article on health misinformation cites it as justification for the claim that “falsehoods have been shown to spread faster and farther than accurate information”.
An article on social media cites it as justification for the claim that “false rumors spread farther and faster on Twitter than true ones, especially in the domain of politics”.
An article on the fingerprints of misinformation cites it as justification for the claim that “misinformation spreads six times faster than factual information”.
Here is the problem: The article provides no justification for these claims!
Remarkably, the study attempts to make generalisations about the spread of true and false claims on Twitter by studying the spread of claims that are classified as true or false by six fact-checking organisations. This is extremely dubious. Even granting the implausible assumption that fact-checking organisations are infallible, the methodology has an obvious sampling bias.
What false claims are likely to get fact-checked by six independent fact-checking organisations? Those that go viral! Fact-checkers won’t bother to check false claims that people don’t pay attention to. So the study tries to establish generalisations about the virality of false claims by focusing specifically on viral false claims.
But it’s even worse than that. In addition, fact-checking organisations won’t bother to check most of the really popular true stories - for example, that the Queen of the United Kingdom died last year - because they’re universally accepted and hence not contested. So the study tries to make generalisations about the virality of true news with data that excludes most of the biggest viral true news stories.
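To see how strong this selection effect can be, here is a minimal simulation sketch. Everything in it - the distribution, the virality cutoff, the sample sizes - is made up for illustration; it is not the study’s data or methodology. The point is just that even when true and false claims spread identically by construction, a sample assembled the way fact-checked samples are (viral falsehoods in, uncontested viral truths out) makes falsehood look far more viral than truth.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical world: true and false claims are drawn from the SAME
# heavy-tailed virality distribution, so neither genuinely spreads
# farther or faster than the other.
def cascade_size() -> int:
    # Most claims fizzle; a few go viral (Pareto tail).
    return int(random.paretovariate(1.5))

true_claims = [cascade_size() for _ in range(100_000)]
false_claims = [cascade_size() for _ in range(100_000)]

VIRAL = 50  # arbitrary threshold for "went viral"

# Selection effect 1: fact-checkers mostly check false claims that
# attracted attention, i.e. the viral ones.
checked_false = [s for s in false_claims if s >= VIRAL]

# Selection effect 2: the biggest true stories are universally accepted
# and uncontested, so they rarely get fact-checked and drop out.
checked_true = [s for s in true_claims if s < VIRAL]

print(f"population means:   true={mean(true_claims):7.1f}  false={mean(false_claims):7.1f}")
print(f"fact-checked means: true={mean(checked_true):7.1f}  false={mean(checked_false):7.1f}")
# Despite identical population distributions, the fact-checked sample
# shows "falsehood" spreading far more widely than "truth".
```

Nothing hinges on the particular distribution or cutoff here; any sampling rule with this shape will manufacture a “falsehood travels farther and faster” pattern out of nothing.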
Importantly, the original study flags this as a potential concern inside a technical discussion in the article (mostly confined to an appendix) and says something confusing and unconvincing in response. So strictly speaking, the article itself - if read in its entirety - is careful not to explicitly declare anything that is demonstrably false. Nevertheless, the article and the established “finding” that is now repeated constantly in mainstream media and misinformation research is extremely misleading.
Here is the question: Is there a single study on misinformation, or a single misinformation researcher, that would characterise any of this as “misinformation”? Of course not.
Admittedly this is just one example. However, one could easily pick a million more. Misleading communication is not an aberration in our informational ecosystem. It is the norm. Given this, once misinformation experts start classifying content as misinformation on the grounds that it’s misleading, they will inevitably be picking and choosing content in ways that are highly biased and frequently self-serving.
To me, this is obvious. To many other people, it’s not. I’ll now turn to the following objections that have been advanced to my views on this topic:
That my arguments presuppose a weird postmodernist rejection of science and truth.
That there are in fact legitimate scientific research programmes centred on misleading information.
That misinformation researchers aren’t interested in classifying content as misinformation.
That I’m too optimistic about a marketplace of ideas unguided by the expert classifications of misinformation researchers.
Objection #1: Am I a postmodernist?
Sander van der Linden and Stephan Lewandowsky are two of the most influential misinformation researchers, and they seem to think I endorse “full-on postmodernism”. When I asked van der Linden to elaborate on his specific disagreements with my essay, this is what he said:
I respect both van der Linden and Lewandowsky as people and scholars, but I disagree with their assessment.
First, I’ve never written of “evil” epistemic elites. I don’t believe in evil, and I don’t think misinformation researchers have bad intentions. I just think they’re human beings - not a special class of human beings in unmediated contact with the Truth, but not a class of human beings with unusually sinister motives either.
Second, I’m not a “postmodernist” on any possible interpretation of that term. I think there’s such a thing as reality and that the best way of learning about it is through science, broadly construed.
Nevertheless, I think science has taught us that human beings are ineradicably biased, fallible, and groupish; that experts are often unreliable and overconfident; that much of science (especially social science) is untrustworthy; that any given person’s access to reality is typically mediated by countless layers of testimony, interpretation, and abstraction; that the truth is not self-evident although we’re strongly disposed to think it is; and that science is a social institution often powerfully shaped by human biases and broader cultural, political, and economic forces.
I think these things not because I endorse a weird postmodernist rejection of reality but because of what I think science has taught us about reality and our biased, partial, and limited access to it.
Let me be concrete. In my essay, I argued that we should not trust a class of experts in determining which content qualifies as misinformation if misinformation is defined as misleading information. As I’ve noted, one reason I think this is because misleading information is so pervasive that any attempt to deploy this definition will inevitably involve researchers choosing content to focus on in highly selective ways.
Let me be more concrete. Misinformation experts, like social-scientific experts in academia more broadly, are not politically or intellectually diverse. To pick just two examples: they are overwhelmingly left-leaning politically, and they overwhelmingly belong to the same social class of highly-educated, cosmopolitan professionals marked out by distinctive cultural values, identities, tastes, preferences, and worldviews. If they start classifying true but misleading content as misinformation, I’m extremely sceptical that their classifications will be impartial.
To the extent that misleading content supports right-wing or culturally conservative views, they will be good at identifying it. To the extent that it supports moderately left-wing, socially liberal, or progressive views, they will not. They might not notice the content is misleading because they endorse the relevant misperceptions themselves. But even if they do notice, they will likely ignore it, either because they think it supports the right “ground truth” or because treating it as misinformation would be awkward within their social milieu.
Again, this isn’t because they’re “evil”. It’s because they’re human.
When it comes to governments, mainstream media, and international organisations like the World Economic Forum, the issue is even worse. I think they will be highly selective in focusing on misleading content they find threatening. Insofar as they’re a major source of funding and support for misinformation research, I worry that such research will end up being biased towards the legitimisation of pro-establishment narratives.
Error and Partiality
More generally, in thinking about bias, it’s helpful to distinguish error from partiality. When people worry about bias in this area, they typically think of cases where legitimate content (e.g., the lab leak theory of SARS-CoV-2 or stories about Hunter Biden’s laptop) is incorrectly classified as misinformation. But an equally serious worry - in many cases a more serious one - concerns partiality, not error.
For example, suppose that misinformation researchers focus primarily on right-wing misinformation and counter-establishment misinformation. Even if all the content they focus on really is misinformation (i.e., they make few errors), their project is still extremely biased.
That’s why it’s not very persuasive when Lewandowsky, van der Linden, and Lee McIntyre, in a recent article in Scientific American, attempt to address worries about bias in misinformation research by arguing that Republican politicians are bad and that right-wing disinformation can be “unambiguously identified.” At best, this addresses worries about error, not partiality.
Van der Linden, Lewandowsky, and many others disagree with my assessment. Maybe they’re right. However, our disagreement doesn’t arise because they “believe in science” and I’m a postmodernist. It arises because we disagree over what our best science tells us about human psychology, bias, science, the complexity of social and political reality, and the nature and limits of expertise.
Objection #2: Science is possible
Joe Bak-Coleman
Joe Bak-Coleman, another leading misinformation researcher, responded as follows:
I agree with this. I should have been much clearer in my essay about its scope. For example, some people understandably interpreted me as claiming that any kind of research into psychological, social, and institutional factors that generate misperceptions is bad.
In fact, my argument is simply that when it comes to expansive definitions of misinformation, we should be extremely sceptical of a technocratic (i.e., expert-driven) project that sorts communication into misinformation/non-misinformation buckets and then deploys such classifications in the service of establishing scientific findings and guiding technocratic interventions.
Here is what I wrote:
…although misleading information is widespread and harmful, there can’t be – more precisely, there shouldn’t be – a science of misleading content.
By this, I don’t mean that scientists should refrain from studying the ways in which communication can be misleading. I also don’t mean that scholars should avoid studying specific disinformation campaigns and the manipulative tactics they use.
In addition, I didn’t mean to rule out the kind of project Bak-Coleman mentions. In fact, in much of my own research, I’ve sought to explore the ways in which distinctive kinds of social conditions and practices inevitably distort how people view the world.
Gordon Pennycook
The distinction here is important. For example, Gordon Pennycook, another leading misinformation researcher and someone whose work I follow closely and respect, objected that the difficult and fuzzy nature of misleading content makes it more important to study it scientifically, not less. (“Imagine if scientists restricted inquiry to only straightforward questions”).
If by “scientific research”, Pennycook means we should study the ways in which communication can be misleading, and the kinds of psychological processes and social institutions, incentives, and informational architectures that are conducive to forming accurate beliefs, I agree this research is important and worthwhile (and try to undertake it myself).
If by “scientific research,” he means a project in which a class of experts surveys the informational landscape and makes decisions about which misleading communication qualifies as misinformation, I disagree.
To appreciate why this distinction matters, consider a study that Pennycook identified as an example of valuable scientific research into misleading content. According to Pennycook, the study demonstrates that the cognitive processes underlying how people deal with different kinds of misleading content are similar.
However, this study focuses on fake news (completely fabricated news stories) and hyper-partisan news (e.g., Breitbart). Given this, there is a vast amount of misleading content - in mainstream media, in partisan (but not hyper-partisan) media, in opinion pieces in elite liberal media outlets, in social science, and so on - that is not featured in the study. This means the study can’t establish anything about the cognitive processes that underlie how people handle misleading content in general.
More importantly, no study could achieve this unless we trusted misinformation researchers to collect a sample of content that is representative of, and hence can license inference about, misleading content. But what would it even mean to have a representative sample of misleading content?
There are extremely difficult issues here, and, with a few exceptions, I don’t think misinformation researchers take them seriously enough.
David Pinsof
David Pinsof, a social scientist and one of my favourite essayists, also objected to the essay.
Pinsof argued that his own theory of political ideology, Alliance Theory, provides a kind of science of misleading content. The theory proposes that political claims and belief systems emerge as people deploy propagandistic tactics to promote their interests and the interests of their allies - for example, by exaggerating their victim status, minimising the threats and harms they inflict on others, interpreting the world in self-serving ways, and so on.
I think Alliance Theory gets a lot right about political psychology and communication (see here and here). Nevertheless, there’s an important difference between detailing the psychological and social mechanisms that give rise to political beliefs and communication, which is what Alliance Theory does very well, and making first-order classifications about which communication is misleading.
In fact, if Alliance Theory is correct - if our brains are as propagandistic as it suggests - it seems highly likely that such classifications will be biased by the kinds of propagandistic tactics it identifies. Alliance Theory is therefore the kind of scientific theory that makes me sceptical about trusting a class of experts to classify misleading content as misinformation in a balanced and impartial way.
Objection #3: Misinformation researchers are not interested in classifying content as misinformation
My essay argued that we should not trust misinformation experts to decide what misleading content qualifies as misinformation. Some people responded that misinformation researchers aren’t interested in doing that. This objection came in two basic forms.
Outsourcing
First, some argued that misinformation experts themselves don’t make decisions about whether content qualifies as misinformation. Instead, they outsource such decisions to domain-specific scientific experts (e.g., epidemiologists, climate scientists) or independent fact-checking organisations.
This misses the point. Of course it’s true that misinformation researchers aren’t themselves researching vaccines, climate change, election integrity, or Joe Biden’s rate of cognitive decline. Nevertheless, they’re still the ones making decisions about which content qualifies as misinformation in their research.
When they assume fact-checking organisations are reliable and impartial, they’re making a decision. When they assume certain people qualify as domain-specific experts, or that a certain degree of expert consensus is sufficient for a claim to qualify as true, or that certain claims are in contradiction with expert consensus, they’re making decisions.
These decisions might not be very troubling when it comes to classifying very clear-cut examples of demonstrable falsehoods. However, once researchers start focusing on subtler ways in which communication can be misleading (e.g., through cherry-picking), they become much more contentious.
For example, cherry-picking is ubiquitous. By its very nature, all news organisations do it. “If it bleeds, it leads” is quite literally a recipe for cherry-picking, and one that systematically distorts audience perceptions of the world.
Similarly, pretty much all political communication involves cherry-picking because it aims to be persuasive, which means selecting - cherry-picking - the evidence and reasons that support a preferred conclusion.
The bottom line is that once misinformation researchers start classifying content as misinformation on the grounds that it involves cherry-picking, they will inevitably be making highly selective and biased decisions about which examples of misleading communication to focus on.
Avoiding Classification Completely
Second, some argued that misinformation researchers aren’t in the business of classifying content as misinformation at all. For example, philosopher Lee McIntyre claimed that misinformation experts are not “trying to classify content. They’re doing empirical work to offer tools to everyone so that we can all become better at recognizing and fighting disinformation.”
As I’ve noted already, it’s true that there is valuable research studying the kinds of psychological, social, and institutional factors that distort how individuals form beliefs about the world and communicate with each other. Moreover, some research simply aims to give people better tools with which to evaluate information in general, such as media literacy training.
Nevertheless, the fact is that a vast amount of misinformation research does involve classifying content as misinformation.
Consider, for example, some of the alleged findings within misinformation research referenced above:
Misinformation spreads differently to reliable information.
Conservatives are more “susceptible” to misinformation.
Misinformation is more prevalent in right-wing media than in centrist and left-wing media.
Misinformation has distinctive surface-level “fingerprints”.
People can be inoculated against misinformation by learning these fingerprints.
There is no way of establishing any of these or countless other alleged findings without first classifying content as misinformation and then using such classifications to draw inferences.
Objection #4: Am I too optimistic about the marketplace of ideas?
If we shouldn’t trust misinformation experts to decide whether communication is sufficiently misleading to qualify as misinformation, who should make such decisions? In my essay, I argued that it should be left to democratic citizens arguing, debating, and deliberating within the marketplace of ideas, not to a class of experts who pretend to occupy a neutral scientific vantage point outside of it.
This left me vulnerable to two objections: that I’m anti-expertise, and that I’m overly optimistic that truth will automatically “win out” in the free and open exchange of ideas and arguments.
Both objections misrepresent my argument.
Expertise
First, I’m not anti-expertise. Expertise is real and “doing your own research” is typically a terrible idea for most citizens. If you want to understand biology, you should defer to biologists. If you want to understand vaccines, you’re typically much better off trusting medical experts than trying to work your way through technical, scientific papers yourself.
Given this, experts should give their opinions about things, and citizens should trust those experts who are trustworthy. Admittedly this is extremely complicated because experts often give opinions way outside their area of expertise, experts often disagree, and self-proclaimed experts in some areas are absurdly overconfident, partisan, and untrustworthy. But still, the ideal of reliable experts informing a public that trusts their expertise is a good one.
Nevertheless, this is completely different from saying that citizens should trust a class of experts to decide which examples of misleading communication count as misinformation. And it is also completely different from saying that such classifications can form the basis of a science capable of establishing reliable findings about misinformation.
In fact, I would go further. In my view, many of the problems associated with misinformation are symptomatic of the fact that many citizens mistrust institutions such as mainstream media, science, public health, and so on. Given this, the most important thing for leaders in such institutions to do in this area is to try to win public trust - most importantly by becoming more trustworthy.
As I’ve argued elsewhere, the highly partisan nature of the liberal establishment’s post-2016 panic about misinformation, fake news, “post-truth”, and so on, which frames dissent from establishment views as a form of brainwashing by viral misinformation and disinformation campaigns, is likely to have the very opposite effect.
Similarly, to the extent that experts pick and choose which misleading communication counts as misinformation in highly selective ways, I suspect that this will simply further exacerbate institutional mistrust.
The Marketplace of Ideas
Second, I’m the last person who should be accused of being optimistic about the marketplace of ideas. Here is what I wrote:
“I am not claiming that people should refrain from calling out misleading content when they encounter it. My point is rather that these judgements should be left to democratic citizens - fallible, biased, partial citizens - hashing it out within the marketplace of ideas. They should not be delegated to a class of misinformation experts pretending to occupy a neutral scientific vantage point outside of it.”
I’m strongly against censorship, but that’s not because I think the free and open exchange of ideas will lead automatically to truth.
For public deliberation and debate about complex social and political issues to promote collective understanding, it must be scaffolded by social norms and institutions of various kinds, and it must be informed by reliable scientific findings and experts. At present, every democracy in the world fails to meet these conditions, and the result is collective epistemic dysfunction.
However, it is precisely this collective epistemic dysfunction that makes me sceptical about a project in which experts deploy highly expansive definitions of misinformation.
As I’ve already noted, selective and biased communication is the norm in modern democracies. It’s not confined to online conspiracy theories, fake news, or right-wing populists. It’s pervasive, including within many of our elite epistemic institutions.
Citizens and policy makers should be taking many steps to improve this situation. For the reasons I’ve outlined here, trusting experts to decide - in an inevitably selective and biased way - which examples of misleading communication count as misinformation should not be one of them.
You are characteristically generous in your acknowledgement of various criticisms and in your efforts to think them through carefully. But I'm wondering where you would situate the more basic objection Eric Funkhouser made a while ago, about conflating absence of cause with lack of influence - that just because the primary driver of harmful beliefs is not misinfo but motivated reasoning and signaling, it doesn't follow that genuine misinfo isn't prevalent and very influential in its own right. See:
https://twitter.com/RealFunkhouser/status/1751366593433555048
[Edit]: I now see that you did eventually chime in at the bottom of the chain, so I will review. But it would be great to know for the record where you eventually landed on that particular disagreement (if you still disagree at all), since his objections were somewhat different from what you discuss above. Among other things, it raises a question of when and where it's most appropriate to employ counterfactuals in making these determinations.
Not a very persuasive rebuttal.
I get that you do not like a group of experts calling the shots on misinformation, but you seem unaware that in times of information warfare, sowing doubt about institutions and attacking any notion of "objective truth" is part of the authoritarian playbook.
The idea of objective relativism and of outsourcing the shot-calling to citizens is not only practically unworkable at a time when democratic societies should be developing better immune systems to defend themselves against targeted exploitation of the info sphere; it is also sure to backfire when information flows and attention are directed with respect to power.
It reflects a naive understanding of the mechanisms that drive public discourse today and of why they are broken; it is even counterproductive to invoke a vox populi of enlightened citizens that never existed except as a myth while bad actors shape our perceptions of reality with manipulative garbage that we fallible humans cannot resist.
We need mis/disinformation researchers, journalists, educators, and other "information" experts to call the shots, so to speak, just as we need doctors to develop medicines, if we want any hope of preserving the public good against the assault of myth, manipulation, and magical thinking.
Maybe the tent of those who call the shots is too small, too elite, and too socially restrictive at the moment; but then the solution is to make the tent bigger, not to rip off the roof and leave citizens alone to figure out what is real or true, thereby exposing them to the most convincing charlatans rather than to genuinely enlightened discourse and an evidence-based worldview.