Misinformation and disinformation are not the top global threats over the next two years
The World Economic Forum's ranking of top global threats is either wrong or not even wrong.
Last Wednesday, I published an essay about the alarmism surrounding misinformation that has gripped experts, social scientists, and policymakers since 2016. As with much of my work on this topic, the essay is critical of modern misinformation research. It argues that this research confronts a dilemma:
On the one hand, if researchers focus on clear-cut cases of misinformation, misinformation is rare and largely symptomatic of other problems, at least in Western democracies.
On the other hand, if researchers focus on subtler ways in which communication can be misleading even when it is not demonstrably false, the concept of misinformation becomes so broad, amorphous, and value-laden that we should not delegate the task of identifying misinformation to a class of experts.
On the same day I published that essay, the World Economic Forum - the organisation that hosts the annual Davos meeting - published its Global Risks Report 2024. Reporting on the views of “1,500 global experts from academia, business, government, the international community and civil society”, the report identified the top global threats over the next two years. In a perfect encapsulation of the panic surrounding this topic since 2016, it placed misinformation and disinformation at the very top of the list:
Responses to the ranking were polarised. The influential American writer and pundit Nate Silver, who linked to my article in support of his views, responded as follows:
Unsurprisingly, many leading misinformation researchers disagreed with this assessment. They argued (e.g., here, here, here, and here) that the ranking is appropriate because misinformation and disinformation cause or exacerbate all other threats, including war.
Although I don’t think experts are idiots, my own view is closer to Silver’s. Specifically, I think that the ranking is either wrong or so confused and ill-defined that it is not even wrong.
On technical, narrow definitions of dis/misinformation, the ranking is wrong
The most obvious problem with the ranking is this: On most widely accepted technical definitions of these terms, misinformation and disinformation are not very prevalent, at least in Western democracies. Among those who pay attention to politics at all - many citizens don’t - the overwhelming majority get their information from mainstream, broadly reliable sources.
There are some citizens who engage with a large amount of very low-quality, deceptive, or conspiratorial content online, but this engagement is driven by deeper attitudes and characteristics such as institutional distrust, anti-establishment worldviews, or extremist political identities.
On narrow definitions of the terms “dis/misinformation,” then, they are relatively rare and largely symptomatic of other problems - most fundamentally, of the fact that a significant chunk of the population distrusts establishment institutions.
Importantly, that doesn’t mean that disinformation campaigns and false information are mythical or never harmful. It also doesn’t mean that citizens are generally well-informed or make good political decisions. Many people pay little attention to politics or current events, extreme ignorance and political misperceptions are widespread, and people are often highly biased and tribal in how they process political information.
However, such things have always been true in democracies, and there is little reason to believe they have generally gotten worse in recent years. Moreover, the idea that citizens only hold bad beliefs or make bad political decisions because they have been duped by dis/misinformation is one of the great myths of our time. As Dietram Scheufele and colleagues put it, the idea that dis/misinformation
“distorts attitudes and behaviors of a citizenry that would otherwise hold issue and policy stances that are consistent with the best available scientific evidence… [has] limited foundations in the social scientific literature.”
That’s putting it mildly.
Given this, at least on narrow definitions of dis/misinformation, there is - despite widespread misconceptions about this topic - simply not good evidence for the claim that they are greatly impactful in shaping political events. That’s certainly true within Western democracies, where most of the research on this topic exists, but it seems even more misguided to think that dis/misinformation are the greatest near-term threats that countries in the Global South confront.1 The World Economic Forum is therefore wrong to list them as the most serious global threat over the next two years.
As Brendan Nyhan (a leading and very rigorous political scientist) put it on Bluesky:
On expansive definitions of dis/misinformation, the ranking is not even wrong
At this point, one might respond as follows:
“Maybe that conclusion follows from very narrow definitions of dis/misinformation, but on broader definitions - for example, referring to whatever causes human error - they are much more worrying. After all, if everyone had completely accurate beliefs and made good decisions, most other global threats - war, climate change, economic disasters - either wouldn’t arise or would be much more manageable.”
Here, for example, is the response by Gordon Pennycook, a psychologist I really respect:
In an important sense, I agree with Pennycook here. The truth matters. We can generally only solve political problems if we have an accurate understanding of reality (including the reality of our own limitations when it comes to solving complex problems). Given this, even though accurate beliefs are not sufficient to address global threats - merely knowing the truth doesn’t give people good intentions, for example - they are plausibly necessary.
Perhaps, then, the term “misinformation” should just refer to whatever factors cause people - whether politicians, experts, policymakers, voters, corporate CEOs, or whatever - to hold bad beliefs about the world. In support of this, one might point out that babies don’t enter the world with bad beliefs about elections, vaccines, immigrants, climate change, international relations, and so on. People acquire such beliefs from others because they are misinformed by others - and hence, ultimately, because of misinformation. Misinformation, then, really is the top global threat.
This is an attractive, plausible-sounding line of reasoning. I can see why it grips very smart, thoughtful people. However, on closer inspection I think it turns out to be confused. The problem is not that it is wrong, but that it is - to quote the physicist Wolfgang Pauli’s famous put-down - not even wrong. There are three reasons for this.
Reason #1: Are the world’s elites in the best position to determine the Truth?
Once we turn our focus away from clear-cut cases of misinformation to include subtler ways in which partial, misleading, or otherwise bad worldviews arise, there is little reason to think that advisors to the World Economic Forum are in a privileged position to identify misinformation. Although the organisation might not spread fake news, a critic might point out that its existence and agenda function to promote the interests of elites and legitimise political arrangements from which they benefit. They might also argue that the organisation’s endorsement of the post-2016 moral panic about online misinformation as a leading driver of the world’s problems conveniently diverts blame away from elites, establishment institutions, and deeper social and economic factors.
The basic point is that once we start focusing on how narratives, ideologies, and systems distort people’s priorities and perceptions of reality in harmful ways, our attention should probably turn towards organisations like the World Economic Forum rather than to the examples of “misinformation” they are most concerned about.
Of course, at this point one might respond as Pennycook does: if the World Economic Forum or experts or mainstream media peddle harmful misinformation (in the broadest sense of that term), that just illustrates how harmful misinformation is! This is a “heads I win, tails you lose” kind of response, however. The question is whether misinformation as classified by specific groups or organisations is the most serious global threat, not whether human error is.
Reason #2: When defined expansively, ‘misinformation’ is not a useful concept.
The reasons why people - whether the ultra-rich and powerful at Davos or the hoi polloi - see the world in partial or distorted ways are extremely complex. They involve many factors and many subtle interactions between them. These include the information (and misleading information) that people encounter, but they also include human nature, cognitive biases, pre-scientific intuitions, self-deception, ignorance, bad reasoning, who people trust and distrust, and the extreme complexity of reality. They also include the social, economic, political, and institutional context within which people acquire beliefs and make decisions.
Using the term “misinformation” to refer to this complex web of factors does not help us to understand the world or address global threats. It is like placing “bad things happening” at the top of a list of dangers. It creates a comfortable illusion of explanation, but all it really achieves is to redescribe the very thing we need to explain: namely, why people often hold bad beliefs and make bad decisions.
Reason #3: There is no reason to believe that human error and fallibility have become more dangerous
For a list of global threats to be useful, it must refer to specific events (e.g., military conflicts, invasions, economic disasters) or trends (e.g., climate change, increasing polarisation) that we should be directing our attention towards. This is why it wouldn’t make sense to include, say, “human nature” on such a list, even though most of the problems we confront are in some sense downstream of flaws in human nature.
Given this, the expert advisors to the World Economic Forum must think that misinformation is - or is at risk of being - worse than is usually the case. But on an extremely expansive understanding of ‘misinformation’, this is dubious. Human ignorance, error, and bias are ancient features of the human condition, just as propaganda, cartoonish political narratives, and distortive ideologies are constant features of political systems. There is little reason to think these things have recently gotten worse - in some ways they have probably gotten better - or that they are likely to become worse in the near future. Given this, it is unclear why it makes sense to rank them as top global threats now. To the extent that “misinformation” is a placeholder for whatever factors explain the fact that people hold bad beliefs and make bad decisions, it should presumably always be at the top of the list, and hence there is not much point including it on the list at all.
Interestingly, the World Economic Forum does think something has changed recently. It writes:
The growing concern about misinformation and disinformation is in large part driven by the potential for AI, in the hands of bad actors, to flood global information systems with false narratives.
As I will write in next week’s essay, this concern about AI-based disinformation looks like another moral panic. So far, there is little evidence it has had harmful political effects. This is because people are extremely difficult to influence in politics, most people (at least in Western democracies) get their news from mainstream sources, and the minority of the population that distrusts these sources and seeks out counter-establishment content is already well-served by human-generated misinformation.
It could be that this will all change radically over the next two years, or that AI-based disinformation will play a much bigger role in the Global South. These are currently speculations based on little evidence, however. And the fact that people have been so willing to greatly exaggerate the prevalence and harms of online misinformation over the past few years doesn’t exactly inspire confidence.
When it comes to the world outside of Western democracies, there is much less systematic research on the prevalence and harms of dis/misinformation, but amidst all the problems such countries often confront - high rates of corruption, low (or non-existent) rates of democratic participation, high levels of poverty (including extreme poverty), higher rates of violence and crime, sectarian conflict, etc. - it is difficult to see why or how dis/misinformation could be their biggest problem, except in the extremely expansive and confused definition of these terms explored in the second half of this essay.
Good essay. Perhaps what scares Davos and its experts - and explains their scoring - is the loss of their science-mediated monopoly on truth. Dan Sarewitz and I wrote that much here: https://www.thenewatlantis.com/publications/reformation-in-the-church-of-science.
This is a nice extension of your points from last week's post. It also raises the broader question of how to characterize, let alone compare, problems of very different types and causal structure. When I read the WEF list, I see a bunch of apples and oranges reflecting different ontologies, levels of analysis, and positions upstream or downstream from the source or outcome of concern. "Extreme weather events" highlights what is most visible in terms of suffering - catastrophic outcomes enabled by a warming planet - but to call it the *target* problem rather than what caused it is almost incoherent. (Meanwhile, "involuntary migration" will increasingly be driven by extreme weather events.) "Cyber insecurity" is not an event but an enabling condition, and it impacts infrastructure, not physical health like pollution (which doesn't cause violent death like war). Misinformation can reflect "societal polarization," but perhaps these are better conceived as two sides of an overarching phenomenon such as breakdown of trust (which comes from....?)
So any of these listed problems and their framings is laden with interpretive choices. But what stands out with "misinformation" is its intuitive appeal as a static noun, a reified *thing* we imagine is directly responsive to intervention. A hammer always looks for a nail, and "misinformation" is that perfect nail - something to be eliminated. The other problems cited by WEF, even phrased as nouns, describe broad overdetermined processes and conditions. Misinformation in theory is reducible to individual propositional statements, replaceable by "this-information."