Reflections on conversations with Jesse Singal and Fin Moorhouse
Misinformation, political psychology, social media, cultural evolution, mind viruses, luxury beliefs, the marketplace of rationalisations, political persuasion, and artificial intelligence.
I’m working on several ambitious essays, including one on why I’m sceptical that a “censorship industrial complex” exists, one on the psychological and social roots of paranoia and conspiracy theories, and one on why I’ve become less conventionally “left-wing” over the past decade or so. This week, however, I’m just going to highlight and briefly reflect on two recent podcasts I did.
For the past several years, I’ve said no to most invitations to give talks, participate in public events, or go on podcasts. There are two main reasons for this: (i) I’m much better at writing than talking, and (ii) I’ve been dealing with periods of severe brain fog since 2020 caused by a mixture of insomnia and (what I think is) the long aftermath of a Covid-19 infection. Recently, I’ve had a stretch where this brain fog is less of an issue, and it has also dawned on me that the only way to get better at public speaking is by doing lots of it, so I’ve been saying yes to more invitations.
Blocked and Reported
I was grateful that Jesse Singal invited me on Blocked and Reported, the podcast he co-hosts with Katie Herzog, to talk about “misinformation” and other things. I’m a fan of both Jesse’s work and the podcast. I especially like his book, The Quick Fix, which criticises highly influential fad social science that seeks to solve complex social problems with simple psychological interventions.
After doing the podcast, I heard that some people would not be impressed that I went on it because Jesse is controversial for his writings about transgender issues and youth gender medicine. I received similar messages when I agreed to have my article on the UK riots published in France’s centre-right magazine, Le Point, and when I agreed to join a panel at an event on misinformation organised by The Academy of Ideas, which included conservatives, populists, and contrarians of various kinds.
I’m not surprised by such responses, but I am depressed by them. I think it’s not just fine but important to talk to and engage with people who hold a diverse range of views. The opposite attitude, namely that one should refuse to engage with anyone who doesn’t endorse the bundle of orthodoxies that are identity-defining for progressives at the current moment, reflects intellectual arrogance and treats politics as a counter-productive, cult-like exercise in maintaining ideological purity. I would think this even if I endorsed all such progressive orthodoxies, which I don’t.
Jesse and I talked about a range of issues, including:
What is misinformation?
How impactful is misinformation (and information in general)?
What role does social signalling play in giving rise to political beliefs?
Does misinformation have simple, surface-level “fingerprints”?
Can people be “inoculated” against misinformation?
A few observations:
Sometimes, I’m characterised as someone who criticises alarmism about misinformation. However, that’s only true if misinformation is defined very narrowly as extremely clear-cut falsehoods and fabrications (e.g., fake news). On a broader understanding of misinformation (e.g., as any communication that might be misleading), I think it is widespread and impactful in shaping attitudes and beliefs. For this reason, my main objection to the liberal establishment’s post-2016 panic about misinformation is not that it’s too pessimistic about the health of public discourse but that it’s not pessimistic enough. Specifically, it imagines that society’s epistemological problems involve a simple, discrete pathology (“misinformation”) that is easily identifiable by society’s intellectual elites. I disagree.
These philosophical issues about what we mean by “misinformation” have knock-on implications for countless other questions. For example, one issue that came up in the conversation is whether misinformation is more prevalent on the left or the right. Clearly, any answer depends on what “misinformation” refers to. If it denotes things like fake news or absurd, easily debunked conspiracy theories, I think it’s more prevalent today on the right, at least in the USA. However, if “misinformation” means “misleading information”, there’s an enormous amount of highly misleading communication in progressive and liberal spaces from highly educated academics, journalists, pundits, and politicians. It’s just that the construction of misleading narratives among this segment of the population rarely takes the form of outright fake news. Moreover, this complication is exacerbated by the fact that it’s typically this segment of the population that decides what counts as “misinformation”. Unless one assumes that this segment of the population is infallible, there will inevitably be many forms of misleading content whose misleadingness they cannot detect (i.e., the misleading content they themselves endorse).
To make such complications concrete, consider, say, Marxism. Does Marxism constitute misinformation? In one important sense, I would argue that it does. Insofar as the core ideas of Marxism involve gross simplifications, distortions, and misrepresentations of reality, it misinforms people about the world. If so, it would constitute one of the most influential and harmful forms of misinformation in human history, inasmuch as regimes and movements heavily influenced by a Marxist understanding of society, economics, and history are responsible for tens of millions of deaths. Nevertheless, I suspect the overwhelming majority of misinformation researchers would never classify it as “misinformation”, either because it doesn’t involve fake news in a straightforward sense or because of an incoherent distinction between “facts” (about which people can be wrong) and “opinions” (about which they allegedly can’t be). Notice, however, that this classification rests on countless assumptions about how one decides to define and measure “misinformation”. One could make similar points about any number of other classification decisions. The bottom line is that once one moves away from focusing on very clear-cut falsehoods and fabrications, what looks like a simple definitional question (“What is misinformation?”) quickly expands into fundamental questions about politics, epistemology, truth, and much more. Although I am repeatedly smeared as a “postmodernist” for making these observations, I have yet to encounter any persuasive response to them.
One of the main things I talked about with Jesse is my scepticism that misinformation comes with simple surface-level “fingerprints” like emotional language or polarising rhetoric. Given this, I’m sceptical that the massively influential, well-funded interventions that try to “inoculate” people against misinformation by teaching them to identify such fingerprints are likely to succeed. To be clear, I’m not sceptical that fake news as it exists online today has such fingerprints. Rather, I think fake news is so rare that trying to detect it on the basis of such fingerprints would likely produce more false positives than true positives. Moreover, this whole approach misunderstands why people engage with fake news in the first place, which is rooted in things like polarisation, institutional distrust, and trolling, none of which would be fixed by short games instructing people to be more suspicious of emotional language.
I’ve written extensively about these points and others before, and I touch on some of them in the podcast, so I won’t repeat my arguments here. Instead, I will just make a very simple point. Recently, some prominent misinformation researchers have been pushing to expand the meaning of “misinformation” to encompass a wide range of content beyond fake news. Even perfectly true content, they argue, should qualify as misinformation if it misleads people. (As always, they assume they are well positioned to reliably and impartially detect which examples of true content fit within this category.) At the same time, literally in the same article, they argue that misinformation has “fingerprints” on the basis of studies that look at the fingerprints of fake news. The problem here is simple: if you think the concept of “misinformation” is far broader than fake news, you can’t draw on findings about the latter to establish sweeping generalisations about the former.
Hear This Idea
I was also grateful that Fin Moorhouse invited me on his excellent podcast, Hear This Idea. (I was on the podcast several years ago when it was just starting out.) Fin is an extremely smart researcher, currently working at Longview Philanthropy. He’s a member of the effective altruist community. (Although I’m not part of this community, it’s one of my favourite social movements.) You can find out more about Fin in this really interesting conversation he had on Dwarkesh Patel’s podcast.
We talked about a wide range of things, including:
If reasoning is so useful, why are we often so bad at it?
Do some bad ideas really work like ‘mind viruses’? Is the ‘luxury beliefs’ concept useful?
What’s up with the idea of a ‘marketplace for ideas’? Are people shopping for new beliefs, or to rationalise their existing attitudes?
How dangerous is misinformation, really?
Will AI help us form more accurate beliefs, or will it persuade more people of unhinged ideas?
Does fact-checking work?
Under transformative AI, should we worry more about the suppression or the proliferation of counter-establishment ideas?
A couple of comments:
The conversation provided an opportunity to touch on my views about Dan Sperber’s and Hugo Mercier’s social-interactionist theory of reasoning. Nobody in the social sciences has been more influential in how I think about human minds and culture than Sperber and Mercier. Nevertheless, I disagree with their theory of reasoning. Roughly, their theory holds that the capacity to reason is unique to human beings, evolved to perform social functions of persuasion and reputation management, and is subject to a built-in “myside bias”. In contrast, I think the capacity to engage in reasoning is shared with many other animals (although it is far more advanced and complex in our species), and that its operation is highly sensitive to specific goals and contexts. Given this, I don’t think reasoning is subject to a built-in “myside bias”. Rather, I think that whether we reason in lawyerly ways depends on our goals: when our goal is to persuade or to protect our reputation, we reason like advocates; when our goal is to form accurate beliefs or make good decisions, we tend to reason in broadly disinterested, objective ways. For this reason, how we reason is heavily shaped by the social norms that prevail in our environment, which affect our goals. Needless to say, there is much more to say about this. One of the things I have in my “drafts” section of this blog is an extended essay where I explore these issues in greater depth.
Fin asked me about my claim that Joseph Henrich’s book The Secret of Our Success is one of the most important works of conservative political thought in recent decades. There wasn’t a chance to also outline my disagreements with the book and more broadly with the field of “cultural evolution” Henrich endorses, which is rooted in the contributions of Robert Boyd and Peter Richerson. I have many such disagreements. Briefly, I think research on cultural evolution both dramatically underestimates the sophistication and complexity of individual cognition and the strategic nature of human social life, and greatly exaggerates the importance of cultural evolutionary forces like “cultural group selection”. This is a big topic that I will be writing about more in the future.
I wonder how much the prevalence of clear-cut falsehoods, and our inability to agree on them, drives the misinformation debate on everything else.
One could imagine a world in which intellectual duellists were more willing to concede that their ideas are just one plausible perception of reality among others, so long as everyone abided by a shared baseline of facts. Because we don't have alignment on the basics, we're unwilling to concede that our interpretations of reality are merely interpretations. It's almost a defensive posture in the face of such factual fracturing.
E.g. it's much easier to rethink your opinions on free trade when you don't also have to debate Obama's birthplace.
Thanks Dan, really enjoyed our conversation! Now keen to read about your disagreements with Henrich / Boyd & Richerson…