I understand your frustration. To me, there is something uniquely annoying about the misinfo researchers. Since I was in grad school in the 90s, I've been listening to -- and disagreeing with -- the pomo critique that objectivity is impossible and that all scholarly research is inevitably biased. In contrast, I believe that researchers/academics should strive for objectivity as much as possible, and that the institutions and norms of the academy are meant to assist with that.
Then, around 2016, after the Trump election and supported by a huge amount of US federal funding (see: https://liber-net.org/federal-awards-project/ ), misinfo studies comes along with a research program that obviously, almost comically, doubles as a partisan political campaign (for the Democrats or more broadly for the laptop class).
But, when someone calls them out on their contradictions, they whine that your pomo critique of objectivity undermines reason and truth. It's as if a 19th century scientist were to complain that you can't criticize phrenology b/c no one will believe in science anymore.
I appreciate your critique b/c I think that the misinfo researchers are not only especially annoying but also especially corrosive to the scientific project. Insiders who misappropriate the institutions of science often can do more damage than those who lob bombs from the outside.
This, to me, is the key point: “In this essay, I argue that misinformation should often be understood not as a societal disease but as a symptom of deeper problems like institutional distrust, political sectarianism, and anti-establishment worldviews.” I would offer a friendly amendment to your statement “The simple fact is that many of these problems are downstream of collapsing trust in institutions among conservative Americans.” That is, collapsing trust is not confined to conservatives, and I would add that, particularly on the most contentious issues, we as liberals could stand to view sources we continue to trust with a much more skeptical eye.
Allied with that, this essay also seems to me to dovetail nicely with those where you examine the conundrum of making intelligent judgments in a complex society where we all, no matter how assiduous in assessing the available information, can only attain partial knowledge.
I am reminded, too, of recent exchanges I’ve observed on one contentious issue in which the two participants were equally credible, but each had personal experiences that were diametrically opposed. Neither was misinformed; rather, the problems seemed to me to arise in efforts to generalize from those personal experiences.
I come back to what you have discussed in many essays here: we must take on board that none of us have full information, and that our personal experiences, while often valid, are not enough basis on which to generalize. We must therefore be vigilant about these possible pitfalls, be open to new or better information, and be ready to reassess. And it is OK to make mistakes!
My complaint with your writing is you don't have a theory of propaganda, and one could be forgiven for thinking you don't think anyone changes their mind about anything ever. I also think your opponents are just finding you, like, obtuse -- denying not just the existence of misinformation, but the existence of the bad effects assumed to be caused by misinformation. To be clear, van der Linden needs to be pissed off, and there's nothing like a good letter-penning fight between academics, but yeah I think you have gaps in your theory you paper over.
For instance, we should move toward disagreement as a starting point in persuasion conversations and as the primary object of academic research https://misinforeview.hks.harvard.edu/article/disagreement-as-a-way-to-study-misinformation-and-its-effects/. This is consistent with your point that academics can't really classify misinformation accurately because the boundary between just disagreeing is too fuzzy. It also encompasses the thesis that resistance to reliable information is a bigger cause of disagreement than misinformation.
But you throw your hands up and say misinformation is dead as a concept. They say let's remodel how we study misinformation. I see how that might be interpreted as bad faith by a misinfo researcher.
Regarding gullibility in general, this is from your van der Linden review: "In general, people are highly skeptical of information they encounter: if a message conflicts with their preexisting beliefs, they demand arguments that they find persuasive from sources that they judge to be trustworthy. Otherwise, they typically reject the message."
OK so just find the 1% of the time people are not skeptical and do that thing a lot. That's what "right-wing standup comedy" (fka left-wing) and the Joe Rogan show are. People lower their guard during comedy that just "makes you think." Instead of asking "when are people gullible?" you said "people are broadly not gullible and that's good enough for my purposes."
Also you're fully ignoring flood-the-zone propaganda. The mechanism of such misinformation is not that people believe it; it's that it makes truth-seeking harder, and that it produces the perception that other people are believing it. This even applies to narrow misinformation: even narrow misinformation that I assume other people believe severely affects how I try to persuade them. Like yeah, propaganda is a thing, it relies on misinfo, and it works.
There's also the Steve Bannon quote that went something like, "just say outrageous shit and people will meet you 75% of the way there." For instance, "maybe vaccines don't have microchips in them, but other toxins sound plausible." Your theory of narrow misinformation would just call the "microchips" thing failed misinfo only received by a narrow few conspiracy theorists. You wouldn't consider how it might fuel a broader "vaccines are unsafe" disagreement that isn't itself narrow misinformation.
BTW your writing has been extremely informative and helpful. The best thesis you've espoused is that trust in reliable info has fallen and that produces strong misinfo effects. I read a bunch of that Harvard misinfo review and I think the above paper on disagreement is extremely applicable in both research and normal-person persuasion. But all this stuff surrounding propaganda and hyper-gullibility has emerged as a gap in your writing and theory.
This was a great response, but these kinds of progressive-authoritarian “a liberal society with free debate is a tool of oppression” screeds don’t deserve it.
A shorter response would be “hey, did you miss the memo that about two or three years ago we decided that we can again handle open debate of ideas, and that accusing academics of supporting oppression doesn’t work to get them cancelled anymore?”
Even shorter would be “hey, 2018 Twitter called, they’re reporting you missing”.
I find myself less impressed by Van Der Linden with every encounter! The social sciences have really disgraced themselves in recent years and they don’t seem to be getting the memo.
One challenge is that you're engaging in a dialogue in a slightly broadened academic (Substack+) context with the general aim of exploring/discovering the elusive truth about a complicated, multi-layered issue. Yet your approach intersects with (or intrudes into, or is intruded upon by) a highly politicized world of political debate in which "winning the argument" is the primary goal. People with overtly polemic aims read you (or anyone) not to try to understand what you are saying, but rather to rebut what they believe or claim (for self-interested reasons) you have said. They sometimes do this by means of deliberate misrepresentation, other times in less willful ways.
Feels like a collision of worlds more than anything, and one I'm quite familiar with.
For example, even I can see how a highly nuanced and "objective" analysis and investigation of misinformation of the kind you conduct might be used to buttress the perspective of those with a diametrically opposed aim--that is, not to discover the truth, but to propagate a given political position. Hence, echoing you, they argue that in such a complicated world as ours, where we really can't "know" anything, the truth is elusive, and people believe what they believe for a host of complex and often contradictory reasons, there really is (and here they part ways with you without telling anyone) no difference between true and false. BS and so-called reality are the same. And if you say they're different, that means you're lying.
It seems like this could have been easily avoided if they had bothered to hold a 30-minute Zoom call with you. Sadly, that would have prevented them from having an easy target to strawman. They would have had to make their piece more fair and reasonable, and thus less inflammatory and clickbaity.
Well, it's hard to be a somewhat centrist independent thinker, which means to be shunned by most "camps". Thanks for finding the patience to so thoroughly clarify your own position, it made me think as well.
You are as thorough as ever rehashing your usual lines of argument on this topic, and carefully responding to the latest criticisms with those familiar points. Which is all well and good, if repetitive (which I guess was your point, that they are beating the same dead horse and forcing you to rescue the same dead horse). But you guys have already been through this debate several times, and I don't see this going anywhere different.
What surprises me is that you haven't addressed the most central criticism, which in my opinion is key to all of this. Have your own writings and ideas in fact gotten lots of uptake from bad-faith actors and those with ideological agendas, who are exploiting your arguments to justify and perpetuate toxic or problematic narratives? Is there evidence these researchers can point to that influential people (or a critical mass of regular people) were specifically inspired by your words, and have taken this and run with it in questionable directions? If multiple people in the Trump administration are citing you approvingly and using your work to justify bad policies, or popular figures are spreading distorted versions of your ideas around in the service of things you would never endorse, or your stuff has gone viral on YouTube in a really awful context - then I would worry that maybe you're becoming a useful idiot for Trump etc., and you have a responsibility to counter that. But if all of this is based on hypothetical concerns and not in real developments on the ground, then I would say, the proof of harm is in the harm.
With all due respect, I just don't think most people are that interested in what some nerdy philosopher has to say. Let alone, fuel a right-wing movement with it. (And I suspect you're humble enough to agree). But focusing exclusively on the merits of the various arguments may be missing the point. If you want to rebut charges that you are enabling the bad guys, what ultimately matters is whether your words have actually enabled the bad guys. I really doubt it.
Thanks - good points. Well, I point out that it's an evidence-free smear. Beyond that, I don't know what to rebut - they didn't offer any evidence. And of course you're right that it's extremely unlikely my blog would make any material difference to anything in politics anyway. Beyond that, I agree these debates are tired now - I didn't mean to write about it again but was triggered by what felt like an unfair smear.
Wow — didn’t expect to see a defence of the “academics can’t pursue lines of research that might be used by ‘toxic’ actors” line here.
Academics do not have a responsibility to never enable ‘the bad guys’. They have a responsibility to seek the truth in whatever their line of research is. Sure, I’d say that they shouldn’t work directly for institutions they see as doing evil, but the research itself should be brought to light regardless. The loss of this guiding light is what’s currently destroying public trust in the academy!
I mean — you’re telling on yourself referring to ‘the bad guys’. That’s the whole problem with the field of misinformation research — it’s being used to ‘counter the bad guys’ implying that the misinformation researchers have a privileged understanding of right and wrong on any number of complex, widely-debated issues.
I think it's less that "misinformation researchers have a privileged understanding of right and wrong", but more that the enormous right-wing lying machine has a morally monstrous misunderstanding of right and wrong. Yes, these are "bad guys". I have zero patience these days for anyone on this topic who will not full-throatedly denounce the anti-vaccine insanity, where we have an anti-vax nutcase as Secretary Of Health. This Is Bad! This badness should not be "debatable" any more than Holocaust Denial is "debatable". If right-wing propagandists ever embrace a post-1945 version of antisemitism, I fully expect we'd be inundated with flood-the-zone talking-points about how historians have lost the trust of the public by not respecting the questioning of Establishment narratives.
I don’t think you want a religion of vaccines, as opposed to an environment of objective examination of each particular instantiation of the vaccine concept, any more than the next guy does. Your argument seems to be free-loading off the measured care of the past, and those fumes won’t do much for you with new products.
“Cars are safe!” “Sure, generally speaking, but let’s still crash-test, and then rate, this particular make and model! And also, I’d still rather not drive in Car X, based on the crash-test results!” “But it passed! Are you anti-engineer?”
Ultimately, what you’re saying is that the political project of trying to draw lines (how much risk is acceptable, given the benefits) is an entirely factual exercise, and that’s an impermissible attempt to steal the high-ground in a policy discussion.
Policy discussions are always about risk balancing: How much risk should citizens be allowed, or forced, to take on? What happens when you get the short end of the stick after taking a vaccine? What is the legal relationship between the parties in a vaccination event (patient, doctor, manufacturer, community)? And so forth. Unfortunately, none of those questions can be answered either objectively or with science, and you will get nowhere speaking with people who disagree with you as if they do.
The fact that you have some answers to these questions that have convinced you, and that your motivations come from a place of harm-minimization, doesn’t actually have as tight a connection with “my side good, opponents bad” as you think it does. Consider: “I think we should allocate more of the budget to reforestation.” “But we need to rebuild this bridge before it collapses.” “Yes, and we also need more trees.” “You’re evil and you are indifferent to the lives that will be killed when the bridge collapses!”
If you jump from “I balance risks and benefits, or two competing risks in a finite world, differently than you do” to “You’re evil because my policy proposal obviously reduces harm, which means that you’re automatically indifferent to that harm unless you adopt it,” you’ve committed a false moral dichotomy: the fallacy of presuming that any disagreement with your preferred harm-minimization policy is equivalent to willful indifference to harm. But in complex policy tradeoffs, rational actors can both acknowledge the same harms and still differ on priority, mechanism, or acceptable collateral effects.
Mistaking disagreement on policy for moral depravity is not argument; it’s moral narcissism masquerading as ethical clarity. The invocation of “You’re evil because you disagree with a harm-minimization strategy I support” effectively politicizes empathy, converting a spectrum of ethical judgments into a binary of good and evil. You’re not engaging in moral reasoning at that point; you’re using moralistic language to short-circuit it.
Dan, after your insightful analysis of the difficulties of political knowledge, how can you give Seth’s impatient diatribe absolutely loaded with emotional straw men a “like”? Consider: “enormous right-wing lying machine,” “morally monstrous misunderstanding of right and wrong,” “full-throatedly denounce the anti-vaccine insanity,” “anti-vax nutcase,”…These sorts of “thinkers” will never respond charitably or insightfully, since their political paradigm is their only measure of truth.
Tim, "insightfully" is in the eye of the beholder, though I'll try to respond relatively "charitably" to your criticism of me. Key question - does every position, no matter what, get treated with respect? In my life (from free-speech activism) I have interacted with bonafide, literal, hardcore, Holocaust Deniers - must I be nice about their views? How about prominent lawyers who want to legalize torture? Fervent racists who argue extensively about how the dark-skinned "race" is less intelligent and more violent than light-skinned? Please do not rush to tell me that there's a logical problem of falling into being close-minded. I know that, you don't need to enlighten me. My question to you, is how to deal with the reverse problem, having one's brains fall out? (n.b., I regard that as just good writing, after the aphorism "It pays to keep an open mind, but not so open your brains fall out").
Personally, I have a red line on vaccine denial. I will not apologize for this. We are awash with demands that anyone with any liberal-leaning inclinations thoroughly denounce a whole list of views, "to regain trust", or some such. I am going to hold conservatives' "feet to the fire" about the insanity of having a Secretary Of Health who is anti-vaccination. I don't feel bad about the term "insanity" there, even though it's an emotional term. See "brains fall out" above. Again, if you think I'm wrong, tell me what you recommend for dealing with the hordes of lies of politics (another phrase I think is merely descriptive).
I agree that this badness *shouldn't* be debatable, but there are times when the badness *must* be debated - again - anyway.
There is no other way. Point all the fingers you want, and I think you'll mostly be right, but it doesn't change the reality that a significant fraction of the population won't be swayed by arguments from authorities they no longer trust.
My intention certainly wasn't to defend "“academics can’t pursue lines of research that might be used by ‘toxic’ actors!" Quite the contrary. That's precisely why I specified, "But if all of this [criticism] is based on **hypothetical concerns** and not in real developments on the ground, then I would say, the proof of harm is in the harm." Which is to say, even if someone thinks that an academic's arguments lend themselves easily to misappropriation by cynical parties, "might be used" doesn't cut it - I would want to see proof that it's being abused at scale (and leading to downstream effects) to start worrying about it. And even then, this doesn't mean they must fall silent or that the problem was necessarily in the substance of what they said. But maybe it signals a concerning development that they ought to acknowledge, and address in whatever way they can. Anyway, in Dan's case I'm not concerned about this realistically happening, which was part of my point.
As for "bad guys" - this was intended with implicit scare quotes, as in: whomever a responsible citizen or reader of this blog, including yourself, might personally deem to be misappropriating problematically. No a priori assumptions (though I'm fine namechecking Trump here). Surely there are problematic agendas and narratives, even if we might disagree on which ones or which people.
But maybe my comment was worded too bluntly - I should have added that I mostly *agree* with Dan's broader arguments! And I have always respected his work and personal character, and look forward to his posts even when I sometimes disagree. (I assume he knows that, which is why I didn't bother with niceties this time). And his arguments certainly aren't irrelevant. It's just that I'm not sure arguments based on substance alone are enough to get at the larger concern (however exaggerated) raised by people like van der Linden et al. - which has to do with supposed harms from someone like Dan. To which I'm saying: show me the proof, and then maybe we can talk.
There is an old and tested way to battle misinformation, and it is called fostering critical thinking, skepticism, and open-mindedness. The problem for people like van der Linden is that they don't want to lose the ability to spread their messages without criticism. He doesn't like it if people ask questions such as:
Why do black people do well in sports?
Why do Koreans score high on SAT tests?
Why can you be transgender but not transracial? (They had to fire a professor just for asking this.) What is a woman? Etc.
So his solution is to highlight information as untrustworthy beforehand, while dismissing anything uncomfortable with 'trust the experts' (who couldn't give a definition of the word woman without falling into logical fallacies despite having gender studies degrees; even amateur bloggers/youtubers still in college did better).
I was recently watching a debate between 4 women about sexism in academia. The pro-male-advocacy ladies gave some arguments citing data. The pro-female-advocacy side tried to counter by accusing them (falsely, at least in Hoff Sommers's case) of having voted for Trump.
How is accusing someone of siding with a political party supposed to persuade people? Is it scientific? Impugning the morality of the person making a claim adds no academic substance; it is purely optics, of the kind used by populist politicians such as the people your accusers blame.
Postmodernism is a quasi-religious cult. As a postmodernist you never have to argue your point, because to the enlightened it's just self-evident truth, and those who disagree with the cult are "N*zis". This is not a bug of postmodernism but a "feature", because they pride themselves on being anti-rational, since "rationality is a construct of white supremacy". End of discussion! These people are pathetic. I'm a traditional leftist btw and I always try to argue my point, but that concept is entirely alien to these cultists. And since you can simply denounce your opponents with no need to justify what your reasoning is, it's super popular with a lot of young people who are not particularly bright and are conformists.
I’m a bit late to the party, but I’ve been thinking about your misinformation dilemma. On the one hand, I think it neglects a type of misinformation that is both clear-cut (narrow definition) and pervasive, thus escaping the two horns of the dilemma (e.g. vaccines cause autism; 9/11 was an inside job; the world is 10,000 years old). On the other hand, your dilemma is strikingly similar to an argument I once made myself in a paper about logical fallacies, which we called the ‘Fallacy Fork’. https://maartenboudry.be/2017/06/the-fallacy-fork-why-its-time-to-get.html
In a nutshell: either you define fallacies as formally and deductively invalid inferences, but then you will find very few real-life examples and you will have to resort to silly toy examples that are easy to knock down (first horn); or you define fallacies in a non-deductive way that captures real-life arguments, but then you can no longer claim that every instance of that argument is invalid, as it will all depend on context, probabilistic factors, intended force, etc. (second horn). It works similarly to your dilemma, because the underlying premise is that arguments/misbeliefs that are blatantly false will persuade very few people and thus have little importance in real life. So am I being inconsistent in rejecting your dilemma?
But here’s the interesting difference, and I wonder what you make of it: the logical invalidity of an argument is often immediately transparent. When it comes to misinformation, however, that’s not always the case. Sure, stuff like “Pope endorses Trump” is very easy to debunk, but “Vaccines cause autism” or “the Jews orchestrated 9/11” not so much. I agree with you that concerns about “fake news” as traditionally defined are overblown, because it reaches only very few people and has a negligible impact on public discourse.
But still you can have misinformation that is both unambiguously and 100% false, but that still manages to fool a lot of people, because its falsehood is not immediately transparent. I’m thinking mostly about systems of misbelief, such as pseudoscience and conspiracy theories. These misbeliefs are hard to refute because the world is just complex and also because they are often shielded from refutation with ad hoc explanations and conspiracy theories. In that sense your dilemma is weaker and less “destructive” than the fallacy fork. Does that make sense?
Hi Maarten. Thanks - that does make sense. The connection with your fallacy fork argument is very interesting. I agree with you in general, I think. There are prominent examples of clear-cut misinformation that go beyond fake news, as you say, and they're much less easily debunked. Then again, it's important to distinguish misperceptions (the false beliefs in these cases) from misinformation, both because the causes of misperceptions are complex (going beyond mere exposure to misinformation, however defined) and because they are often supported and rationalised by true but misleading information, at which point the dilemma I've raised emerges. Nevertheless, I still think your basic point is pretty much correct. Thanks for the comment.
What a great post. Much of my work over the last 4-5 years has addressed political extremism and authoritarianism. Here is one a priori standard in the real world for figuring out who is and is not engaging in authoritarian-like behaviors:
Is Person X trying to shut up those with whom X disagrees, especially by demonizing, denouncing, or ostracizing them?
It is pretty clear that van der Linden fits this bill far better than do you.
I want to comment before I read the article, but suffice to say I think you’re pulling a “Helen Pluckrose”, as in you’re making your stances *even* clearer to people who might not even bother to give a charitable interpretation.
I do understand that there is utility in engaging with criticism, particularly signalling to those who already buy into your ideas (me and all your subscribers). However, I do think you can find better constructive criticism elsewhere.
1. The argument seems to rest on the assumption that if you question the definition of a concept, it implies that the concept doesn’t exist, and the questioning can be used by authoritarian regimes for nefarious purposes. I can understand the logic if this were applied to climate science. I’m not sure it's true of misinformation.
Perhaps because misinformation in a way is a meta concept, one can argue that often people are “misinformed” about “misinformation”.
2. However, for the sake of argument, let’s assume that because one questions its definition, the concept ceases to exist. The next step is to specify the mechanism by which the questioning of a definition can be used for harm. And I don’t think people think about this thoroughly.
It speaks to a very old technical problem in psychology, namely the relationship between belief and behavior, or knowledge and behavior. The relationship between the two is not a one-to-one correspondence.
It also speaks to another problem, which is how much accuracy you need to navigate the world. That is, how misinformed can you be and still survive and reproduce?
3. If the problem with “misinformation” is actually about “mistrust”, then why don't misinformation researchers debate that? Why do people end up trusting political figures with authoritarian tendencies? How do we build better institutions given existing elites?
4. Personal note: Overall, it’s a good post, but if your critics are so skilled in their incompetence at making good criticism, maybe you should write one yourself.
Take this example from the Time article: "Fake rumours on social media led to [] national riots in the UK".
A report published on 7 May 2025 (https://hmicfrs.justiceinspectorates.gov.uk/publication-html/police-response-to-public-disorder-in-july-and-august-2024-tranche-2/) flatly contradicts Time authors Slander van der Libel and Liar McIntyrely, asserting "no single issue [] caused the disorder ... [the extensive interviews] do not support [] the prevailing narrative that emerged from the riots which was subsequently accepted: that online misinformation [was] to blame". Perhaps Slander and Liar need to look in the mirror for misinformation :-)
Whatever the truth behind the Southport riots, the point is that accusations of "misinformation" and the nasty ad hominem culture that goes with them will only increase if the broad definition is adopted.
That's why I agree with the supremely Millian conclusion that Sander van der Linden and Lee McIntyre come to: "the remedy to be applied is more speech, not enforced silence".
Where do you find the patience to keep responding to these people?
Ha. I left this topic for a while but I felt pretty annoyed by this latest smear. But yes, there are significant diminishing returns here...
I’ve been feeling like such an alien sharing these precise sentiments and it’s so refreshing to hear them in the wild.
Beautifully put
My complaint with your writing is you don't have a theory of propaganda, and one could be forgiven for thinking you don't think anyone changes their mind about anything ever. I also think your opponents are just finding you, like, obtuse -- denying not just the existence of misinformation, but the existence of the bad effects assumed to be caused by misinformation. To be clear, van der Linden needs to be pissed off, and there's nothing like a good letter-penning fight between academics, but yeah I think you have gaps in your theory you paper over.
For instance, we should move toward disagreement as a starting point in persuasion conversations and as the primary object of academic research https://misinforeview.hks.harvard.edu/article/disagreement-as-a-way-to-study-misinformation-and-its-effects/. This is consistent with your point that academics can't really classify misinformation accurately because the boundary between misinformation and mere disagreement is too fuzzy. It also encompasses the thesis that resistance to reliable information is a bigger cause of disagreement than misinformation.
But you throw your hands up and say misinformation is dead as a concept. They say let's remodel how we study misinformation. I see how that might be interpreted as bad faith by a misinfo researcher.
Regarding gullibility in general, this is from your van der Linden review: "In general, people are highly skeptical of information they encounter: if a message conflicts with their preexisting beliefs, they demand arguments that they find persuasive from sources that they judge to be trustworthy. Otherwise, they typically reject the message."
OK so just find the 1% of the time people are not skeptical and do that thing a lot. That's what "right-wing standup comedy" (fka left-wing) and the Joe Rogan show are. People lower their guard during comedy that just "makes you think." Instead of asking "when are people gullible?" you said "people are broadly not gullible and that's good enough for my purposes."
Also you're fully ignoring flood-the-zone propaganda. The mechanism of such misinformation is not that people believe it, it's that it makes truth-seeking harder, and that it produces the perception that other people are believing it. This even applies to narrow misinformation. Even narrow misinformation that I assume other people believe severely affects how I try to persuade other people. Like yeah propaganda is a thing, relies on misinfo and it works.
There's also the Steve Bannon quote that went something like, "just say outrageous shit and people will meet you 75% of the way there." For instance, "maybe vaccines don't have microchips in them, but other toxins sound plausible." Your theory of narrow misinformation would just call the "microchips" thing failed misinfo received only by a narrow few conspiracy theorists. You wouldn't consider how it might fuel a non-narrow-misinformation "vaccines are unsafe" disagreement.
BTW your writing has been extremely informative and helpful. The best thesis you've espoused is that trust in reliable info has fallen and that produces strong misinfo effects. I read a bunch of that Harvard misinfo review and I think the above paper on disagreement is extremely applicable in both research and normal-person persuasion. But all this stuff surrounding propaganda and hyper-gullibility has emerged as a gap in your writing and theory.
Fair enough. Interesting thoughts. Thanks for the pushback.
This was a great response, but these kinds of progressive-authoritarian “a liberal society with free debate is a tool of oppression” screeds don’t deserve it.
A shorter response would be “hey, did you miss the memo that about two or three years ago we decided that we can again handle open debate of ideas, and that accusing academics of supporting oppression doesn’t work to get them cancelled anymore?”
Even shorter would be “hey, 2018 Twitter called, they’re reporting you missing”.
I find myself less impressed by Van Der Linden with every encounter! The social sciences have really disgraced themselves in recent years and they don’t seem to be getting the memo.
One challenge is that you're engaging in a dialogue in a slightly broadened academic (Substack+) context with the general aim of exploring/discovering the elusive truth about a complicated, multi-layered issue. Yet your approach intersects with (or intrudes into, or is intruded upon by) a highly politicized world of political debate in which "winning the argument" is the primary goal. People with overtly polemic aims read you (or anyone) not to try to understand what you are saying, but rather to rebut what they believe or claim (for self-interested reasons) you have said. They sometimes do this by means of deliberate misrepresentation, other times in less willful ways.
Feels like a collision of worlds more than anything, and one I'm quite familiar with.
For example, even I see how a highly nuanced and "objective" analysis and investigation of misinformation of the kind you conduct, might be used to buttress the perspective of those with a diametrically opposed aim--that is, not to discover the truth, but to propagate a given political position. Hence, echoing you, they argue that, in such a complicated world as ours where we really can't "know" anything and the truth is elusive and people believe what they believe for a host of complex and often contradictory reasons, (and now parting ways with you without telling anyone) there really is no difference between true and false. BS and so-called reality are the same. And if you say they're different, that means you're lying.
Keep up the good work.
Thanks - good points
It seems like this could have been easily avoided if they had bothered to hold a 30-minute Zoom call with you. Sadly, that would have prevented them from having an easy target to strawman. They would have had to make their piece more fair and reasonable, and thus less inflammatory and clickbaity.
Well, it's hard to be a somewhat centrist independent thinker, which means to be shunned by most "camps". Thanks for finding the patience to so thoroughly clarify your own position, it made me think as well.
You are as thorough as ever rehashing your usual lines of argument on this topic, and carefully responding to the latest criticisms with those familiar points. Which is all well and good, if repetitive (which I guess was your point, that they are beating the same dead horse and forcing you to rescue the same dead horse). But you guys have already been through this debate several times, and I don't see this going anywhere different.
What surprises me is that you haven't addressed the most central criticism, which in my opinion is key to all of this. Have your own writings and ideas in fact gotten lots of uptake from bad-faith actors and those with ideological agendas, who are exploiting your arguments to justify and perpetuate toxic or problematic narratives? Is there evidence these researchers can point to that influential people (or a critical mass of regular people) were specifically inspired by your words, and have taken this and run with it in questionable directions? If multiple people in the Trump administration are citing you approvingly and using your work to justify bad policies, or popular figures are spreading distorted versions of your ideas around in the service of things you would never endorse, or your stuff has gone viral on YouTube in a really awful context - then I would worry that maybe you're becoming a useful idiot for Trump etc., and you have a responsibility to counter that. But if all of this is based on hypothetical concerns and not in real developments on the ground, then I would say, the proof of harm is in the harm.
With all due respect, I just don't think most people are that interested in what some nerdy philosopher has to say. Let alone, fuel a right-wing movement with it. (And I suspect you're humble enough to agree). But focusing exclusively on the merits of the various arguments may be missing the point. If you want to rebut charges that you are enabling the bad guys, what ultimately matters is whether your words have actually enabled the bad guys. I really doubt it.
Thanks - good points. Well, I point out that it's an evidence-free smear. Beyond that, I don't know what to rebut - they didn't offer any evidence. And of course you're right that it's extremely unlikely my blog would make any material difference to anything in politics anyway. Beyond that I agree these debates are tired now - I didn't mean to write about it again but was triggered by what felt like an unfair smear.
Wow — didn’t expect to see a defence of the “academics can’t pursue lines of research that might be used by ‘toxic’ actors” line here.
Academics do not have a responsibility to never enable ‘the bad guys’. They have a responsibility to seek the truth in whatever their line of research is. Sure, I’d say that they shouldn’t work directly for institutions they see as doing evil, but the research itself should be brought to light regardless. The loss of this guiding light is what’s currently destroying public trust in the academy!
I mean — you’re telling on yourself referring to ‘the bad guys’. That’s the whole problem with the field of misinformation research — it’s being used to ‘counter the bad guys’ implying that the misinformation researchers have a privileged understanding of right and wrong on any number of complex, widely-debated issues.
I think it's less that "misinformation researchers have a privileged understanding of right and wrong", and more that the enormous right-wing lying machine has a morally monstrous misunderstanding of right and wrong. Yes, these are "bad guys". I have zero patience these days for anyone on this topic who will not full-throatedly denounce the anti-vaccine insanity, where we have an anti-vax nutcase as Secretary Of Health. This Is Bad! This badness should not be "debatable" any more than Holocaust Denial is "debatable". If right-wing propagandists ever embrace a post-1945 version of antisemitism, I fully expect we'd be inundated with flood-the-zone talking-points about how historians have lost the trust of the public by not respecting the questioning of Establishment narratives.
I don’t think you want a religion of vaccines, rather than an environment of objective examination of each particular instantiation of the vaccine concept, any more than the next guy. Your argument seems to be free-loading off the measured care of the past, and those fumes won’t do much for you with new products.
“Cars are safe!” “Sure, generally speaking, but let’s still crash-test, and then rate, this particular make and model! And also, I’d still rather not drive in Car X, based on the crash-test results!” “But it passed! Are you anti-engineer?”
Ultimately, what you’re saying is that the political project of trying to draw lines (how much risk is acceptable, given the benefits) is an entirely factual exercise, and that’s an impermissible attempt to steal the high-ground in a policy discussion.
Policy discussions are always about risk balancing: How much risk should citizens be allowed, or forced, to take on? What happens when you get the short end of the stick after taking a vaccine? What is the legal relationship between the parties in a vaccination event (patient, doctor, manufacturer, community)? And so forth. Unfortunately, none of those questions can be answered either objectively or with science, and you will get nowhere speaking with people who disagree with you as if they do.
The fact that you have some answers to these questions that have convinced you, and that your motivations come from a place of harm-minimization, doesn’t actually have as tight a connection with “my side good, opponents bad” as you think it does. Consider: “I think we should allocate more of the budget to reforestation.” “But we need to rebuild this bridge before it collapses.” “Yes, and we also need more trees.” “You’re evil and you are indifferent to the lives that will be killed when the bridge collapses!”
If you jump from “I balance risks and benefits, or two competing risks in a finite world, differently than you do” to “You’re evil because my policy proposal obviously reduces harm, which means that you’re automatically indifferent to that harm unless you adopt it,” you’ve committed a false moral dichotomy: the fallacy of presuming that any disagreement with your preferred harm-minimization policy is equivalent to willful indifference to harm. But in complex policy tradeoffs, rational actors can both acknowledge the same harms and still differ on priority, mechanism, or acceptable collateral effects.
Mistaking disagreement on policy for moral depravity is not argument; it’s moral narcissism masquerading as ethical clarity. The invocation of “You’re evil because you disagree with a harm-minimization strategy I support” effectively politicizes empathy, converting a spectrum of ethical judgments into a binary of good and evil. You’re not engaging in moral reasoning at that point; you’re using moralistic language to short-circuit it.
An "anti-vaxxer" is someone willing to point out that slapping a vaccine label on a vial of sulfuric acid doesn't magically make it safe to inject.
Dan, after your insightful analysis of the difficulties of political knowledge, how can you give Seth’s impatient diatribe absolutely loaded with emotional straw men a “like”? Consider: “enormous right-wing lying machine,” “morally monstrous misunderstanding of right and wrong,” “full-throatedly denounce the anti-vaccine insanity,” “anti-vax nutcase,”…These sorts of “thinkers” will never respond charitably or insightfully, since their political paradigm is their only measure of truth.
Tim, "insightfully" is in the eye of the beholder, though I'll try to respond relatively "charitably" to your criticism of me. Key question - does every position, no matter what, get treated with respect? In my life (from free-speech activism) I have interacted with bonafide, literal, hardcore, Holocaust Deniers - must I be nice about their views? How about prominent lawyers who want to legalize torture? Fervent racists who argue extensively about how the dark-skinned "race" is less intelligent and more violent than light-skinned? Please do not rush to tell me that there's a logical problem of falling into being close-minded. I know that, you don't need to enlighten me. My question to you, is how to deal with the reverse problem, having one's brains fall out? (n.b., I regard that as just good writing, after the aphorism "It pays to keep an open mind, but not so open your brains fall out").
Personally, I have a red line on vaccine denial. I will not apologize for this. We are awash with demands that anyone with any liberal-leaning inclinations thoroughly denounce a whole list of views, "to regain trust", or some such. I am going to hold conservatives' "feet to the fire" about the insanity of having a Secretary Of Health who is anti-vaccination. I don't feel bad about the term "insanity" there, even though it's an emotional term. See "brains fall out" above. Again, if you think I'm wrong, tell me what you recommend for dealing with the hordes of lies of politics (another phrase I think is merely descriptive).
I agree that this badness *shouldn't* be debatable, but there are times when the badness *must* be debated - again - anyway.
There is no other way. Point all the fingers you want, and I think you'll mostly be right, but it doesn't change the reality that a significant fraction of the population won't be swayed by arguments from authorities they no longer trust.
My intention certainly wasn't to defend "“academics can’t pursue lines of research that might be used by ‘toxic’ actors!" Quite the contrary. That's precisely why I specified, "But if all of this [criticism] is based on **hypothetical concerns** and not in real developments on the ground, then I would say, the proof of harm is in the harm." Which is to say, even if someone thinks that an academic's arguments lend themselves easily to misappropriation by cynical parties, "might be used" doesn't cut it - I would want to see proof that it's being abused at scale (and leading to downstream effects) to start worrying about it. And even then, this doesn't mean they must fall silent or that the problem was necessarily in the substance of what they said. But maybe it signals a concerning development that they ought to acknowledge, and address in whatever way they can. Anyway, in Dan's case I'm not concerned about this realistically happening, which was part of my point.
As for "bad guys" - this was intended with implicit scare quotes, as in: whomever a responsible citizen or reader of this blog, including yourself, might personally deem to be misappropriating problematically. No a priori assumptions (though I'm fine namechecking Trump here). Surely there are problematic agendas and narratives, even if we might disagree on which ones or which people.
But maybe my comment was worded too bluntly - I should have added that I mostly *agree* with Dan's broader arguments! And have always respected his work and personal character, and look forward to his posts even when I sometimes disagree. (I assume he knows that, which is why I didn't bother with niceties this time). And his arguments certainly aren't irrelevant. It's just that I'm not sure arguments based on substance alone are enough to get at the larger concern (however exaggerated) raised by people like van der Linden et al. - which has to do with supposed harms from someone like Dan. To which I'm saying: show me the proof, and then maybe we can talk.
They are just trying to change the subject because they know that they cannot defend their arguments with logic and facts.
Also, they can smell Dan's weakness, i.e., willingness to go along with the "misinformation experts" in every specific instance.
There is an old and tested way to battle misinformation, and it is called fostering critical thinking, skepticism and open-mindedness. The problem for people like van der Linden is that they don't want to lose the ability to spread their messages without criticism. He doesn't like it if people ask questions such as:
Why black people do well in sports?
Why Koreans score high on SAT tests?
Why you can be transgender and not transracial? (they had to fire a professor just for asking this) What is a woman? etc.
So his solution is that he can highlight information as untrustworthy beforehand, while dismissing anything uncomfortable by saying 'trust the experts' (who couldn't give a definition of the word woman without falling into logical fallacies despite having gender studies degrees; even amateur bloggers/youtubers still in college did better).
I was recently watching a debate between 4 women about sexism in academia. The pro-male-advocacy ladies gave some arguments citing data. The pro-female-advocacy side tried to counter by accusing them (falsely, at least in Hoff Sommers's case) of having voted for Trump.
How is accusing someone of siding with a political side supposed to persuade people? Is it scientific? Appealing to the morality of the person making a claim doesn't add any academic substance; it is purely optics used by populist politicians, such as the people your accusers blame.
Postmodernism is a quasi-religious cult. As a postmodernist you never have to argue your point because to the enlightened it's just self-evident truth, and those who disagree with the cult are "N*zis". This is not a bug of postmodernism but a "feature", because they pride themselves on being anti-rational because "rationality is a construct of white supremacy". End of discussion! These people are pathetic. I'm a traditional leftist btw and I always try to argue my point, but that concept is entirely alien to these cultists. And since you can simply denounce your opponents with no need to justify what your reasoning is, it's super popular with a lot of young people who are not particularly bright and are conformists.
Err, what do you think is the answer to such questions as:
"Why black people do well in sports?
Why Koreans score high on SAT tests? "
I’m a bit late to the party, but I’ve been thinking about your misinformation dilemma. On the one hand, I think it neglects a type of misinformation that is both clear-cut (narrow definition) and pervasive, thus escaping the two horns of the dilemma (e.g. vaccines cause autism; 9/11 was an inside job; the world is 10,000 years old). On the other hand, your dilemma is strikingly similar to an argument I once made myself in a paper about logical fallacies, which we called the ‘Fallacy Fork’. https://maartenboudry.be/2017/06/the-fallacy-fork-why-its-time-to-get.html In a nutshell: either you define fallacies as formally and deductively invalid inferences, but then you will find very few real-life examples and you will have to resort to silly toy examples that are easy to knock down (first horn); or you define fallacies in a non-deductive way that captures real-life arguments, but then you can no longer claim that every instance of that argument is invalid, as it will all depend on context, probabilistic factors, intended force, etc. (second horn). It works similarly to your dilemma, because the underlying premise is that arguments/misbeliefs that are blatantly false will persuade very few people and thus have little importance in real life. So am I being inconsistent in rejecting your dilemma? But here’s the interesting difference, and I wonder what you make of it: the logical invalidity of an argument is often immediately transparent. When it comes to misinformation, however, that’s not always the case. Sure, stuff like “Pope endorses Trump” is very easy to debunk, but “Vaccines cause autism” or “the Jews orchestrated 9/11” not so much. I agree with you that concerns about “fake news” as traditionally defined are overblown, because it reaches only very few people and has a negligible impact on public discourse.
But still you can have misinformation that is both unambiguously and 100% false, but that still manages to fool a lot of people, because its falsehood is not immediately transparent. I’m thinking mostly about systems of misbelief, such as pseudoscience and conspiracy theories. These misbeliefs are hard to refute because the world is just complex and also because they are often shielded from refutation with ad hoc explanations and conspiracy theories. In that sense your dilemma is weaker and less “destructive” than the fallacy fork. Does that make sense?
Hi Maarten. Thanks - that does make sense. The connection with your fallacy fork argument is very interesting. I agree with you in general, I think. There are prominent examples of clear-cut misinformation that go beyond fake news, as you say, and they're much less easily debunked. Then again, it's important to distinguish misperceptions (the false beliefs in these cases) from misinformation, both because the causes of misperceptions are complex (going beyond mere exposure to misinformation, however defined) and because they are often supported and rationalised by true but misleading information, at which point the dilemma I've raised emerges. Nevertheless, I still think your basic point is pretty much correct. Thanks for the comment.
What a great post. Much of my work over the last 4-5 years has addressed political extremism and authoritarianism. Here is one a priori standard in the real world for figuring out who is and is not engaging in authoritarian-like behaviors:
Is Person X trying to shut up those with whom X disagrees, especially by demonizing, denouncing, or ostracizing them?
It is pretty clear that van der Linden fits this bill far better than do you.
Thanks Lee!
I want to comment before I read the article, but suffice to say I think you’re pulling a “Helen Pluckrose”, as in you’re making your stances *even* clearer to people that might not even bother to give a charitable interpretation.
I do understand that there is utility in engaging with criticism, particularly signalling to those that already buy into your idea (me and all your subscribers). However, I do think you can find better constructive criticism elsewhere.
I’ve finished reading it. Few thoughts:
1. The argument seems to rest on the assumption that if you question the definition of a concept, it implies that it doesn’t exist, and can be used by authoritarian regimes for nefarious purposes. I can understand the logic if this is used in climate science. I’m not sure this is true with misinformation.
Perhaps because misinformation in a way is a meta concept, one can argue that often people are “misinformed” about “misinformation”.
2. However, for the sake of argument, let’s assume that because one questions its definition, the concept ceases to exist. The next step is to specify the mechanism by which the questioning of a definition can be used for harm. And I don’t think people think about this thoroughly.
It speaks to a very old technical problem in psychology: the relationship between belief and behavior, or knowledge and behavior. The relationship between the two is not a one-to-one correspondence.
It also speaks to another problem, that is how much accuracy do you need to navigate the world? That is how much “misinformed” can you get away with to survive and reproduce?
3. If the problem with “misinformation” is actually about “mistrust”, then why don’t misinformation researchers debate that? Why do people end up trusting political figures that have authoritarian tendencies? How do we build better institutions given existing elites?
4. Personal note: Overall, it’s a good post, but if your critics are so skilled in their incompetence at making good criticism, maybe you should write some for yourself.
A blistering counter-attack.
Take this example from the Time article: "Fake rumours on social media led to [] national riots in the UK".
A report out on 7 May 2025 by https://hmicfrs.justiceinspectorates.gov.uk/publication-html/police-response-to-public-disorder-in-july-and-august-2024-tranche-2/ flatly contradicts Time authors Slander van der Libel and Liar McIntyrely, asserting "no single issue [] caused the disorder ... [the extensive interviews] do not support [] the prevailing narrative that emerged from the riots which was subsequently accepted: that online misinformation [was] to blame". Perhaps Slander and Liar need to look in the mirror for misinformation :-)
Whatever the truth behind the Southport riots, the point is that accusations of "misinformation" and the nasty ad hominem culture that goes with them will only increase if the broad definition is adopted.
That's why I agree with the supremely Millian conclusion that Sander van der Linden and Lee McIntyre come to: "the remedy to be applied is more speech, not enforced silence".