The confused crusade against online misinformation
(Mis)information: the battle for free speech online
This week (later today, in fact), I’m a panellist at an event in London organised by the Academy of Ideas. My panel is on “(Mis)information: the battle for free speech online”. You can find out more about it here. The other speakers are Timandra Harkness, Fraser Myers, Nico Perrino, and Eli Vieira.
As part of the event, each panellist gives a six-minute speech. For multiple reasons, I really struggled with drafting mine. At public events like this, it’s necessary to simplify complex issues to get core points across. However, that necessity can also become an excuse for over-simplifying them. In short, such events raise tricky ethical and epistemic questions.
In preparing, I asked myself, “For this specific event with this specific audience, is my contribution likely to improve people’s understanding of the relevant issues?”. That’s more complicated than “Is my contribution true?” because it depends on the audience’s specific preconceptions and on getting one’s points across in a memorable way. The problem is that there’s more room for self-deception in answering the former question than the latter.
Moreover, at this specific event, I’m especially concerned that there will be a bit of an echo chamber on the panel and maybe in the audience. Anyone who reads this blog knows I’m sceptical of modern alarmism about misinformation, think many anti-misinformation initiatives are politically biased, and am strongly anti-censorship. I suspect most other panellists hold similar views.
Given this, I had initially planned to use my speech to do something unpopular: to criticise how the professional contrarian class talks about a “censorship industrial complex”, a paranoid discourse which typically involves even more alarmism (not to mention conspiracy theorising and low-quality analysis) than the liberal establishment’s moral panic over online misinformation. However, I couldn’t find a way to make my core points clearly and concisely, and I worried the topic might be too niche, so I decided to draft a different speech. Nevertheless, I intend to bring up the issue if I can, and it will form the basis for one of my next essays here, which will criticise the views of people like Matt Taibbi, Michael Shellenberger, Glenn Greenwald, and Jacob Siegel.
With that throat-clearing out of the way, here is my speech. Thoughts, feedback, and criticisms are all welcome!
The confused crusade against online misinformation
Since 2016, the year of two populist revolts—Brexit and then Trump—members of the liberal establishment throughout Western societies have been gripped by an intense panic about online misinformation.
According to the narrative driving this panic, false and incendiary content online, fuelled by sinister politicians, pundits, and algorithms, is infecting minds, amplifying division and distrust, and driving people to make damaging decisions, like voting for demagogues and rejecting public health advice.
This narrative was on full display earlier this year when the World Economic Forum drew on advice from nearly 1500 experts worldwide to list misinformation and disinformation as the number one global risk over the next two years, ahead of nuclear war and economic catastrophe.
Like most popular narratives, this one contains a grain of truth.
It would be absurd to deny that online misinformation sometimes creates real problems.
It’s easy to think of examples, such as the awful race riots over the summer, where online misinformation was a contributing factor.
Nevertheless, this grain of truth is buried under a mountain of alarmism, fuzzy thinking, and self-serving spin.
First, although online misinformation can be a problem, misinformation—people being wrong about things, intentionally or accidentally—has always been a problem.
Lies, propaganda, dogmatism, self-serving ideologies, unfounded conspiracy theories, and plain old human fallibility are as old as humanity, and there’s no reason to think social media has made them worse.
Moreover, the most dangerous misinformation comes from the powerful—from political, economic, and cultural elites—who have every incentive to paint themselves as objective truth-tellers in contrast to the misinformed masses on social media.
This obviously doesn’t mean that social media has no effect on public discourse.
Democratising the public sphere and loosening the grip of elite gatekeepers has inevitably produced a complex mix of benefits and costs.
To take only one example, the much greater scope for dissent from establishment orthodoxies that social media provides can be a blessing and a curse. It depends on the quality of those orthodoxies and the dissent.
The point is simply that contrasting a mythical golden age of objectivity with a post-truth digital apocalypse doesn’t illuminate such effects. It obscures them in a way that serves the interests of elites and establishment institutions.
Second, contrary to conventional wisdom, research suggests that clear-cut, unambiguous examples of online misinformation—for instance, actual fake news or deep fakes—are relatively rare and not a significant driver of large-scale social and political trends.
For example, a 2020 study in Science Advances, a leading scientific journal, estimates that online fake news makes up only 0.15% of Americans’ daily media diet, a finding replicated across other research in Western countries, including this one.
Moreover, another robust finding is that engagement with such content online is highly concentrated among a narrow fringe of social media users who seek it out because it aligns with their pre-existing beliefs.
That sounds very abstract and academic, but the essential point should be obvious.
Which people were eager to spread a baseless rumour that the killer in Southport was a Muslim asylum seeker? Far-right fanatics and racists who distrusted mainstream institutions and already harboured intense hostility towards Muslims, asylum seekers, and immigration.
As the riots illustrate, the fact that online misinformation overwhelmingly preaches to the choir in this way—that is, to people highly receptive to it—doesn’t mean it’s harmless. Nevertheless, it suggests it’s often symptomatic of much deeper societal problems.
Given this, trying to solve such problems by targeting online misinformation is a bit like trying to cure a brain tumour by taking painkillers to deal with the headaches it produces.
Of course, that’s part of the attraction of modern misinformation alarmism.
It replaces deep-rooted social and political pathologies, many caused or exacerbated by establishment failures, with a simple, discrete, technocratic problem: “combating online misinformation.”
In fact, it’s even worse than that because many proposed solutions to the problem of online misinformation involve aggressive censorship. And aggressive censorship, inevitably enforced by biased and fallible human beings, seems likely to exacerbate the social divisions and institutional distrust that produce a market for online misinformation in the first place.
It would be nice if combating online misinformation could return us to a pre-digital golden age of truth and objectivity. But not only is that golden age a myth, most of the genuine problems in modern politics and political discourse are merely reflected on social media, not rooted in it.
This means that the modern crusade against online misinformation will, at best, have minimal benefits and—at least if those calling for much greater top-down censorship get their way—might even make things worse.
Comments

I agree with everything you said, and yet I still feel there must be something we can do, short of censorship, to stem the harmful kind of misinformation.
For instance, on Substack everyone can say whatever they want, and the most outrageous things someone says may be read by the people who follow them, but they won’t reach the rest of us who aren’t in that sphere. If they do pop into our spheres somehow, they are easily blocked or reported by readers. Most likely, that community will stay small or be blocked by people who don’t want to see it, and thus it won’t gain traction the way outrageous things do on other platforms. We can self-regulate our own little communities without needing someone above us to make that choice for us. This seems like a better way to design a platform.
As for presidential candidates, I’m not sure what the right answer is. Again, it’s not censorship, but perhaps there is something else: independent writers who have earned trust for taking a bipartisan view, who can research the answers and share their own independent fact-checking with an audience that doesn’t trust Fox News or CNN to do it. Less partisan media sources might be a good solution.
I agree that misinformation has always been an issue, but it has also adapted to our times, and that means adapting how we handle it. Censorship is not the answer, but I think there are other things that could be, and it’s worth coming up with those solutions.
We are in some sort of epistemic crisis. The “experts” claim that misinformation is our greatest threat while, at the same time, public trust in many institutions is declining. For me, the issue is that there is little (if any) accountability when these institutions mislead the public, much as there is little accountability for scientific fraud. Unfortunately, this lack of accountability opens the door for misinformation. People want to go back to a world where these institutions are trusted again, but they don’t deserve our trust, and they don’t seem willing to change or even acknowledge that they are part of the problem.
We are getting to the point where there simply are no universally trusted sources of truth in our society.