Criticising misinformation research doesn't make you a Trump supporter
It's frustrating that this even needs to be said.
The good news is that a recent article in Time magazine cited two of my essays. The bad news is that it alleges I’ve been “cultivated” by authoritarian leaders like Donald Trump to “defend their authoritarian agenda.”
The article by Sander van der Linden and Lee McIntyre claims that “authoritarian leaders and their admirers consistently share one thing in common: they twist the truth.” (Apparently, we have “decades of psychological research” to thank for this insight.)
It then argues that, as part of their truth-twisting, these leaders “cultivate those who either willingly or inadvertently defend the authoritarian agenda by undermining research on disinformation and misinformation.” (That’s where they cite me.)
This undermining involves “predictable variations” on the following arguments: “misinformation cannot properly be defined because facts are subjective or uncertain,” “misinformation is a distraction because the real problem is something else” (here, they cite me again), and “fighting misinformation amounts to bias and censorship.”
Van der Linden and McIntyre then respond to these arguments. Along the way, they liken criticisms of misinformation research to the propaganda tactics used by the tobacco and fossil fuel industries to manufacture doubt about their products’ dangers.
I disagree with this analysis, which rests on a set of misrepresentations, omissions, and exaggerations. Because I’ve addressed many of these before, here I will restrict my focus to the topic where they explicitly cite me as a defender of authoritarian politics: my criticisms of expansive definitions of “misinformation”.
First, though, I’ll note some points of agreement:
I agree that Trump and other authoritarian populist leaders and movements are dangerous. That’s why it’s so important to understand their widespread appeal and political success in an intellectually rigorous way, instead of embracing self-serving copium that exaggerates the role of “misinformation”.
I agree that Trump and other authoritarians engage in large amounts of lying, propaganda, and bullshitting, which have bad consequences. One can describe this with terms like “disinformation” and “misinformation” if one wants.
I agree that the intellectual standards among right-wing populist politicians, pundits, and media are much worse than those within traditional knowledge-generating institutions dominated by highly educated, socially liberal professionals (science, universities, elite mainstream media, etc.).
I agree that professional fact-checking organisations are broadly reliable on relatively narrow matters of fact. Although I think they’re also politically biased in which facts they check and in the direction of their mistakes (the evidence van der Linden and McIntyre cite to rebut this view isn’t persuasive), they’re much less biased than those on the populist right allege.
I agree that we should trust established scientific consensus (e.g., about the safety and efficacy of vaccines, the reality of human-driven climate change, etc.).
I agree that we should “fight misinformation” and that this can be achieved without censorship. Anyone who participates in public debate in good faith—arguing for views they think are true, presenting evidence and arguments, addressing objections, and so on—understands themselves as “fighting misinformation” in a broad sense. The world didn’t need to wait for professional misinformation researchers or anti-misinformation policies to argue against bad ideas.
On defining “misinformation”
Here’s where they cite my work:
“In order to manipulate the truth, authoritarian leaders cultivate those who either willingly or inadvertently defend the authoritarian agenda by undermining research on disinformation and misinformation.”
One immediate problem with this claim is that it’s too coarse-grained. “Research on disinformation and misinformation” is diverse. If understood very broadly—for example, as any study of the causes of false or unsupported beliefs and narratives—I conduct such research.
Moreover, criticisms of misinformation research are also diverse. In fairness, some criticisms are biased and of low quality. I would place much commentary about a “censorship industrial complex” by figures like Matt Taibbi, Michael Shellenberger, and Mike Benz in this category. It’s not that these critics have nothing valuable to say. However, the occasional grains of truth in their reporting are buried under vast mountains of slop, alarmism, and partisan conspiracy theorising. If van der Linden and McIntyre had cited their work, I wouldn’t have a problem with this part of their article. But they didn’t. They cited one of my essays.
So, what does my essay say? Here’s what it doesn’t say:
It doesn’t say misinformation can’t be defined. (It lists as clear-cut examples of misinformation “vaccines cause autism”, “climate change is a hoax”, and “the 2020 US presidential election was invalidated by extensive voter fraud”.)
It doesn’t say fact-checkers are unreliable. (It explicitly says that they can identify demonstrable falsehoods with high reliability.)
It doesn’t say there’s no valuable research into misinformation or disinformation. (It cites much of this research in support of my argument.)
It doesn’t say misinformation has no harmful consequences. (It explicitly says that clear-cut misinformation can be harmful and that misleading communication is widespread and highly impactful.)
Instead, the article presents a definitional dilemma for misinformation researchers and policymakers.
The definitional dilemma
The dilemma is this:
Misinformation can be defined either narrowly (i.e., as clear-cut falsehoods) or broadly (i.e., as any information that misinforms audiences).
If misinformation is defined narrowly, research suggests it’s relatively rare and largely symptomatic of other problems, at least in Western democracies.
If misinformation is defined broadly, the concept becomes unsuitable for objective, expert classification.
I will briefly review this dilemma before explaining why this argument and my other writings don’t defend authoritarian politics.
The limited reach and impact of clear-cut misinformation
If researchers and policymakers define misinformation narrowly as clear-cut falsehoods, such content appears to be relatively rare in Western democracies and largely symptomatic of other problems.
For example,
Clear-cut examples of fake news make up a relatively small portion of most people’s overall information diet.
The narrow fringe of active social media users who engage with large amounts of clear-cut misinformation mostly consists of individuals with strong pre-existing attitudes and traits, such as distrust of institutions, conspiratorial mentalities, and hyper-partisanship. These dispositions lead them to seek out and engage with content that aligns with their existing beliefs.
Summarising the state of scientific research into these topics, a team of social scientists write,
“In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information.”
As I stress in the article, such content “is not mythical, and it can be harmful.” Nevertheless, it doesn’t seem to be a significant factor in driving large-scale changes in public opinion.
Moreover, none of this should come as a surprise. One of the oldest lessons in propaganda and media studies is that publishing outright fabrications is the least common mechanism of propaganda and media bias.
There are two reasons for this.
First, making things up is rarely necessary to mislead audiences. If you’re sufficiently creative in which facts you select and omit, and how you frame, contextualise, and comment on those facts, you can advance highly misleading claims and narratives without ever publishing fake news.
Second, publishing fake news is the riskiest propaganda tactic precisely because it’s the easiest to detect and punish. Hence, the media—even incredibly unreliable media—rarely makes things up.
Why, then, do clear-cut falsehoods and fabrications exist at all? One reason is that much of this content is spread by people who don’t believe in it—for example, by extremists or conspiracy theorists aiming to demonise their enemies in hyperbolic ways, create chaos, troll normies, or simply have a good laugh.
Another reason is that there are segments of the population who actively distrust establishment institutions, so they are unbothered when such institutions fact-check fake news. However, this simply underscores that the more fundamental problem in such cases is the mistrust, not the misinformation. If you don’t address that, no amount of fact-checking, content moderation, or censorship will make much of a difference.
In recent years, many researchers and policymakers have come to appreciate these lessons. In response, a common suggestion is to expand the meaning of the term “misinformation” so that it doesn’t just capture clear-cut falsehoods and fabrications but “anything that leads people to be misinformed”.
In other words, even if claims are true and well-supported by evidence (e.g., reports of rare vaccine-related deaths), researchers and policymakers should classify them as “misinformation” if they misinform audiences (e.g., about the dangers of vaccines).
The challenges of identifying misleading information
This leads to the second horn of the dilemma: although misinformation, so defined, is widespread and harmful, experts and policymakers are not well-positioned to apply such an expansive definition objectively. Determining which communication is misleading in this broad sense is simply part of first-order political debate. It’s not the sort of task we should delegate to a class of social scientists or technocrats.
Three problems with expansive definitions of misinformation
1. Misleading information is ubiquitous
First, misleading communication isn’t just widespread; it’s so pervasive in media, political communication, and social science that the concept of misinformation will either become useless or be applied in highly selective ways.
For example, elite mainstream media notoriously report on a highly non-random sample of attention-grabbing events (mostly ones connected to threats and dangers) in ways that systematically mislead audiences about broader statistics and trends. Should all mainstream news reporting be classified as misinformation?
Similarly, tactics such as cherry-picking, framing, and the omission of inconvenient context are endemic to almost all political communication and punditry. Which politicians or pundits present the “whole truth” about an issue? Should all selective, biased, or one-sided political speech be classified as misinformation? What political speech wouldn’t count as misinformation on such an expansive definition?
Notably, these tactics are also ubiquitous within the social sciences. The badly-named “replication crisis”—the fact that vast amounts of putatively established scientific findings don’t replicate—is largely rooted in phenomena like publication bias and p-hacking, which essentially involve cherry-picking and selectively reporting studies and results. (It’s badly named because the failure to replicate is only one of many problems that undermine the reliability and generalisability of vast amounts of scientific research.) And yet—unsurprisingly—shoddy, low-quality, non-replicable, and ungeneralisable social science is almost never cited as an example of “misinformation” in misinformation research.
All these vices also apply to much misinformation research. For example, some of the canonical “findings” in the field, such as that false news travels faster than true news on social media (reported a vast number of times in mainstream media and scientific research), are extremely misleading. Similarly, van der Linden frequently touts the alleged evidence for his “inoculation” intervention against misinformation without acknowledging the large body of evidence and arguments against its efficacy. Does that constitute misinformation?
2. Evaluating the entire informational ecosystem
Second, determining whether communication is misleading often involves extremely complex, holistic judgments that situate the communication in a broader context.
Consider an analogy: If you only heard arguments from the prosecution in a legal case, they would be highly one-sided and misleading. However, judges and juries don’t only hear such arguments. They hear from both sides. And in this broader context, one-sided arguments can serve a valuable informational purpose.
A similar observation applies to much political communication and debate. Sometimes, coverage and punditry that seem highly biased or one-sided when considered individually can play a beneficial role within the broader political culture. The quality and impact of communication must, therefore, be evaluated holistically, which makes it very difficult to classify content as misleading without forming expansive judgments about the overall political culture.
Would our society be better off without highly partisan media outlets? Do passionate advocates for specific viewpoints who argue in lawyerly, one-sided ways harm the overall informational health of society or counteract systemic biases in mainstream media coverage?
Why should we think misinformation researchers and policymakers are better placed than anyone else to answer these extremely complex questions?
3. The role of values
Finally, determining whether content is misleading is often influenced by our values and underlying belief systems. Consider intense media coverage of vaccine injuries, police killings of unarmed black citizens, immigrant crime, hate crimes, natural disasters, fake news, or terrorist attacks. Does such reporting misleadingly focus on statistically rare events or draw much-needed attention to urgent social and political problems? When determining whether an event is statistically “rare” or “representative”, what’s the appropriate reference class? What’s the appropriate context for a given claim or report, and how much context is needed?
These kinds of questions are the lifeblood of political debate and argument. Misinformation researchers and policymakers don’t possess unique expertise when it comes to answering them.
Why does this matter?
There’s a lot more to say about each of these points. Nevertheless, the bottom line is that it’s difficult to see how misinformation researchers or policymakers could apply a broad definition of misinformation with high reliability and impartiality. And if a definition can’t be applied objectively, this poses a big problem for misinformation science, which aims to establish sweeping generalisations about misinformation’s properties, prevalence, causes, and consequences. It also implies that broad definitions shouldn’t be used as the basis for policymaking by governments, international organisations, or large technology companies.
Does this argument support an authoritarian agenda?
There are many ways one could object to the arguments I’ve just outlined. For example:
Maybe the dilemma presents a false dichotomy. For example, it might overlook definitions of misinformation broader than “demonstrably false information” but narrower than “misleading information”.
Maybe my estimates of the prevalence and impact of clear-cut misinformation are wrong. For example, studies on fake news typically measure people’s exposure to fringe fake news websites, which misses most demonstrable falsehoods expressed by politicians and pundits.
Maybe estimates of the mass public’s direct exposure to fake news (which suggest it is minimal) miss important pathways by which such content can still be politically influential. For example, it might shape the attitudes and behaviour of extremely online political elites (e.g., Elon Musk, J.D. Vance) who wield enormous influence in politics.
These are all legitimate objections. They engage with my arguments on their merits. They respond to what I’ve actually written.
In contrast, van der Linden and McIntyre’s accusation that I’ve been “cultivated” by authoritarian leaders to defend their agenda is an evidence-free smear.
The explicit content of my writings is philosophically liberal and anti-authoritarian. I support a thriving, pluralistic public sphere with robust debate, substantial viewpoint diversity, and respect for reasonable disagreement. That’s why I think we should be highly sceptical when a narrow fringe of experts and policymakers claims authority to determine what constitutes legitimate and illegitimate contributions to public debate.
Even if such determinations don’t result in censorship (and we shouldn’t forget that they have often resulted in censorship and “content moderation”), the “scientific” findings premised on such determinations influence policymaking in numerous ways. To the extent that these findings rely on expansive definitions of misinformation applied subjectively and selectively, that’s a serious problem.
Moreover, the main reason I think it’s important to push back against overreach by researchers and policymakers on the topic of “misinformation” is precisely that I think it exacerbates the very problems they’re motivated to combat.
Elsewhere in their article, van der Linden and McIntyre cite another one of my essays as an example of someone trying to “minimize the misinformation problem” to prevent people from combating its brutal consequences. In this essay, I argue that misinformation should often be understood not as a societal disease but as a symptom of deeper problems like institutional distrust, political sectarianism, and anti-establishment worldviews.
They write,
“But this way of thinking completely misunderstands the complex nature of how social forces interact in the real world. Misinformation has both direct and indirect consequences for society.”
This gives the impression that my article denies the harms of misinformation. But here’s what I wrote:
“In some cases, exposure to misinformation manifestly does have harmful consequences. Powerful individuals and interest groups often propagate false and misleading messages, and such efforts are sometimes partly successful. Moreover, evidence consistently shows that the highly biased reporting of influential partisan outlets such as Fox News has a real-world impact.”
Nevertheless, the reason I think the “symptom” perspective is helpful in many cases is precisely that it directs our attention to the kinds of interventions likely to be effective in addressing problems associated with “misinformation”.
The simple fact is that many of these problems are downstream of collapsing trust in institutions among conservative Americans. Although it’s tempting to pathologise this collapsing trust or explain it away entirely by appeal to manipulation and misinformation, it’s partly a rational response to the fact that highly educated, socially liberal, Democrat-voting professionals dominate these institutions, which have become increasingly progressive and politicised over recent decades.
Given this, if you want to address the “misinformation problem”, you need to focus on how to rebuild trust in these institutions among Americans who view them as highly partisan. From this perspective, letting overwhelmingly liberal, left-leaning misinformation researchers apply expansive definitions of misinformation seems much more likely to exacerbate this problem than help solve it.
Maybe this viewpoint is wrong. But if so, van der Linden and McIntyre should address it on its merits, not treat intellectual disagreements as a simplistic morality tale where they’re the heroes defending democracy against attacks from their critics.