Why do people believe true things?
Ignorance and misperceptions are not puzzling. The challenge is to explain why some people see reality accurately.
[Note: Sometimes, I try to write rigorous, well-argued essays based on ideas I have thought a lot about and feel confident in. Other times, I write up whatever inchoate ideas are floating around my consciousness to figure out what I think—and should think—about a topic. This falls in the latter category. You were warned.]
Deep questions
A large part of being well-informed involves the ability to generate plausible explanations. However, acquiring a good sense of which things demand explanation—which things are genuinely surprising or puzzling—is equally important.
To illustrate, I recently watched an interview with the British politician Bridget Phillipson. At the end of the interview, one of the interviewers, Rory Stewart, was exasperated at her shallow, boilerplate answers. Barely concealing his irritation, he asked,
“Can you think of an intriguing question that you’ve ever had that allowed you to show a different side of yourself or go in a direction that interested you as opposed to being kind of hit in the normal way?”
Clearly taken aback and annoyed, Phillipson replied that it is the kids she encounters on school visits who ask the toughest and most “on-point” questions. For example, kids ask deep questions like, “Why is there poverty in the world?”
Why is there poverty in the world?
I am not surprised a child would ask this question. It is a childish question, one which betrays a naive, uninformed understanding of economics and history.
The reason is that nothing is puzzling about poverty. It is humanity's default state. Until very recently, everyone lived in what would now be regarded as shocking poverty. Even the elites—monarchs, aristocrats, and so on—lived in conditions that were appalling relative to the prosperity enjoyed by most people in affluent societies today.
Here is a graph of inflation-adjusted global GDP over the last two thousand years:
The long, flat line that characterises most of human history? That is poverty: crushing, subsistence-level, Malthusian poverty.
And it is entirely unsurprising. It is easy to achieve poverty. As Adam Smith realised, the more profound—the genuinely puzzling—economic question concerns the origins of wealth, not poverty.
Wealth is surprising. In recent centuries, certain parts of humanity have created unprecedented abundance through consistent increases in human productivity. This is amazing. It cries out for a deep explanation.
Wealth creation depends on complex, improbable, and fragile institutions and incentives for coordinating human activity and the division of labour. Minimally, this includes robust property rights, a culture that rewards science and innovation, the impartial rule of law, and free, competitive, and open markets. There is nothing “natural” or “automatic” about such conditions.
Of course, poverty seems strange and surprising in a modern, wealthy society such as the UK. That is why it is understandable it would strike a child as puzzling. And in some ways, it is puzzling: against the backdrop of sustained economic growth and material affluence, it is reasonable to ask why pockets of relative poverty still exist.
Nevertheless, unless you understand that the real puzzle—the deep question—of economics concerns wealth, not poverty, you will be fundamentally confused about the world around you. You will think poverty is an aberration that demands a special explanation—most commonly, someone or some group of people to blame—rather than treating it as the default state humanity will revert to in the absence of improbable and precarious institutional arrangements.
Explanatory inversions
As Joseph Heath points out in Cooperation and Social Justice, the ability to appreciate “explanatory inversions” is one of the things that most sharply distinguishes a scientifically informed worldview from a “commonsense” one:
“One of the clearest points of demarcation between specialist discourses and everyday commentary and debate is that the former are often structured by what might be thought of as “explanatory inversions.” These arise as a consequence of discoveries or theoretical insights that have the effect of changing, not our specific explanations of events, but rather our fundamental sense of what needs to be explained.”
To illustrate, Heath gives the example of social deviance in criminology:
“Common sense tells us that most people, most of the time obey the law. Crime is an anomaly, and as such, stands in need of explanation. Common sense provides us with a wealth of explanations, which seek to identify the motives that impel people toward criminal acts. But if one stops to examine these motives… the most striking thing about them is how ordinary and ubiquitous they are. For every angry person who commits an assault, or greedy person who steals from others, there are hundreds of equally angry, equally greedy people who refrain from doing so.”
In other words, the “root causes” of crime are simply the benefits of crime. Nothing is deeply puzzling about why people cheat, steal, fight, rape, and murder. People rob banks because that is where the money is.
“This is what prompted the realization… that it is not crime that cries out for explanation, but rather law-abidingness.”
Why do most people fail to appreciate this?
“Common sense is wrong on this point because we are all reasonably well-socialized adults, living in a well-ordered society, and so we take for granted the institutional arrangements that secure our compliance with the rules. But the underlying mechanisms are ones that we do not really understand, as a result of which it is difficult to explain why more people do not break the law more often (since it is so often in their interest to do so).”
As Heath observes, the crime example manifests a more fundamental explanatory inversion concerning human cooperation.
The puzzles of cooperation
At the most abstract level, human societies are complex systems of cooperation. And to many of us, at least much of the time, cooperation feels effortless, frictionless, self-evident. We join cliques, clubs, and teams. We form long-lasting friendships and relationships. We help others when they need help. And so on.
And yet cooperation is fundamentally puzzling. The fruits of cooperation can often be stolen in ways that undermine any incentive to cooperate, and one of the great discoveries of modern social science is a subtler obstacle: the problem of free-riding.
It seems obvious—unremarkable—that people will automatically work together when they have a shared interest. But this assumption is mistaken. In many contexts, individuals benefit from cooperation but benefit even more from free-riding on the hard work of others. When that happens, people often forgo the fruits of collaboration in ways that leave everyone worse off.
Cooperation is therefore inherently challenging. Human history is saturated not just with competition, domination, exploitation, and conflict but with recurrent failures to achieve win-win cooperation even when it would seem possible.
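The free-rider logic can be made concrete with a toy public goods game. All the numbers below are my own illustrative assumptions, not anything drawn from the cooperation literature: each player starts with an endowment and chooses whether to contribute it to a common pot, which is multiplied and split equally among everyone, contributors and free-riders alike.

```python
# Toy public goods game: N players each hold 10 units and choose whether
# to contribute them to a common pot. The pot is multiplied by 1.6 and
# split equally among everyone, contributors and free-riders alike.
# All parameters are invented purely for illustration.

def payoffs(contributes, endowment=10, multiplier=1.6):
    """Return each player's payoff given a list of True/False contribution choices."""
    n = len(contributes)
    pot = sum(endowment for c in contributes if c) * multiplier
    share = pot / n
    # Contributors give up their endowment; free-riders keep theirs.
    return [share + (0 if c else endowment) for c in contributes]

# Everyone cooperates: each ends with 16, better than the 10 they started with.
all_in = payoffs([True] * 4)

# One player free-rides: they end with 22, strictly more than a contributor's 12.
one_out = payoffs([True, True, True, False])

# Everyone free-rides: each keeps only the original 10, worse for all than full cooperation.
all_out = payoffs([False] * 4)
```

With these parameters, universal contribution beats universal free-riding (16 versus 10 per head), yet whatever the others do, each individual gains by withholding: the lone defector ends with 22 while the contributors get 12. That is the collective action problem in miniature.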
Once you understand the challenges of cooperation, it becomes clear that these failures are not puzzling. Instead, the truly puzzling fact is that humanity has—sometimes, in some places—achieved spectacular systems of large-scale cooperation that pressure, cajole, persuade, encourage, incentivise, and tempt a species of competitive apes to work together, dampening down immediate self-interest and overcoming collective action problems.
This fact is surprising from an evolutionary point of view: humans are famously unique in the degree to which we cooperate with those beyond our close genetic relatives. It is also surprising from any social-scientific perspective that treats individuals as somewhat rational, somewhat self-interested agents.
Unfortunately, many people do not understand this. In their analysis of the social world, they take cooperation for granted, treating it as the default mode of social interaction that arises automatically in response to its anticipated benefits.
Many even become outraged at the suggestion that cooperation—helping, generosity, collaboration, and so on—is puzzling. For example, the philosopher Susan Neiman criticises evolutionary psychology for being concerned with “the problem of altruism”. The very idea that altruism poses an explanatory problem, she argues, betrays an ugly, misguided view of the world.
Neiman’s analysis is mistaken but understandable. It reflects “common sense”. However, common sense is prescientific and, hence, frequently wrong. We do not leave physics to the intuitions of common sense. We should not leave our understanding of society in its hands either.
Social epistemology needs an explanatory inversion
And now to the main point of this post: Social epistemology—very, very roughly, the study of phenomena such as knowledge, belief, and understanding in society—similarly needs an explanatory inversion.
Many people in social epistemology are concerned with the following question: Why do people believe false things?
For example, this question is central to Marxist analyses of “ideology” and their descendants in critical theory, feminist epistemology, and so on, which ask why people (*other people, i.e., the hoi polloi) endorse false beliefs and social theories (i.e., beliefs and social theories not endorsed by left-wing intellectuals).
Similarly, in modern social science, there is a vast body of research on “misinformation,” “post-truth,” and so on, which asks why people (*other people, i.e., the hoi polloi) believe in misinformation (i.e., beliefs that deviate from the expert judgements endorsed by establishment liberals).
Despite my snarky parentheses, these are not absurd questions. Nevertheless, there is an important sense in which they are shallow and reflect a misguided understanding of the relationship between belief and reality. The deep question of social epistemology—the genuine puzzle—is not why people hold false beliefs. It is why people sometimes form true beliefs.
The truth is not the default
“The truth about distant or complex matters,” writes Walter Lippmann, “is not self-evident.” Given this, “The pictures inside people’s heads do not automatically correspond with the world outside.”
These points are obvious in some ways. But I think they are greatly underappreciated in how many people instinctively think about topics like “misinformation,” “ideology,” and “science denial.”
In complex, modern societies, the relationship between reality and our representations of reality—between what Lippmann called the “real environment” and the “pseudo-environments” that make up our mental models of the real environment—is heavily mediated by complex chains of trust, testimony, and interpretation.
Think of the economy, society-wide crime trends, vaccines, history, climate change, or any other possible focus of “public opinion.” Not only is the truth about such topics typically complex, ambiguous, and counter-intuitive, but almost everything you believe about them is based on information you acquired from others—from the claims, gossip, reports, books, remarks, opinion pieces, teaching, images, video clips, and so on that other people communicated to you.
Moreover, to organise all that socially acquired information, you relied on simplifying categories, schemas, and explanatory models that reduce reality's complexity to a tractable, low-resolution mental model.
In this heavily mediated process, there are countless sources of error and distortion. This is true even if you are ideally rational. But of course, you are not; you are human. Not only is the construction of your pseudo-environment twisted and distorted by prescientific intuitions and innumerable cognitive biases, but you are not a disinterested truth seeker. Instead, your beliefs are biased by motives and interests like self-aggrandisement, status-seeking, tribalism, and social conformity.
Just as importantly, the people from whom you have acquired your information about the world are similarly flawed, fallible, and biased. In some cases, they were outright liars and propagandists, but most were simply influenced by the same mundane sources of motivated reasoning as you.
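A crude way to see how such mediated chains degrade accuracy is to model testimony as a noisy relay. In this hypothetical sketch (the per-link reliability figure is invented, not an empirical estimate), a binary fact passes through a line of intermediaries, each of whom relays what they heard correctly with probability p and inverts it otherwise.

```python
# Toy model of testimony chains: a true/false fact passes through a line of
# intermediaries (reporters, teachers, friends), each relaying what they heard
# correctly with probability p. The standard recurrence
#   a_n = p * a_{n-1} + (1 - p) * (1 - a_{n-1}),  a_0 = 1
# solves to the closed form below, which decays toward 0.5 -- a coin flip --
# as the chain lengthens. The 0.95 reliability figure is purely illustrative.

def chain_accuracy(p: float, links: int) -> float:
    """Probability that the belief at the end of the chain matches the fact."""
    return (1 + (2 * p - 1) ** links) / 2

# Even with quite reliable links, long chains drift toward chance:
for links in (1, 5, 10, 50):
    print(links, round(chain_accuracy(0.95, links), 3))
```

Even with 95%-reliable links, accuracy decays geometrically toward chance as the chain lengthens; and this model is optimistic, since it assumes errors are random rather than systematically biased in one direction.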
For these reasons, the truth is not the default when people form beliefs about the world beyond their immediate material and social environment.
Of course, in some sense, this should be obvious. Just as poverty is humanity’s default state throughout history, so are ignorance and misperceptions. At least relative to a modern scientific worldview, almost everything people have ever believed about the world they are not in close perceptual contact with has been completely wrong.
Nothing is puzzling about this. The puzzle is that humans sometimes overcome the countless sources of error and illusion that distort beliefs and form accurate perceptions of how things are.
The sources of error
Why, then, do many approach social epistemology as if the deep question, the real puzzle, is why some people “believe in misinformation”?
One answer is what psychologists call “naive realism.” Although our access to reality is heavily mediated, most people do not instinctively appreciate this. They treat their mental model of reality as an unproblematic mirror of nature, endorsing the attitude that
“I see entities and events as they are in objective reality… [M]y social attitudes, beliefs, preferences, priorities, and the like follow from a relatively dispassionate, unbiased, and essentially “unmediated” apprehension of the information or evidence at hand.”
If you see things this way, then (what you take to be) false beliefs really do seem puzzling. If the truth is self-evident, why do people not hold true beliefs? They must be crazy, deluded, or evil.
However, there is another, deeper reason: Just as citizens of modern, affluent societies take wealth—an extraordinarily fragile achievement—for granted, many take knowledge for granted.
Today, educated people in affluent, liberal societies are beneficiaries of the scientific revolution, the Enlightenment, and centuries of cultural and institutional development designed to overcome the many sources of ignorance and misperception in human judgement.
Norms and institutions
Norms
Part of this inheritance involves a profound normative change. One of the revolutionary ideas of the Enlightenment was that we should apply standards of evidence and reason to our beliefs, even when the beliefs concern the distant past or future, the broad cosmos, or the nature of social reality.
Many educated people in affluent, liberal societies take this norm for granted today. Even though motivated reasoning is widespread, people are typically embarrassed by it. They think letting self-interest or tribalism bias judgement is shameful and strive to present themselves as disinterested, rational truth-seekers.
This attitude and its associated social norms are neither universal nor the default way people treat their deepest convictions. For most people in most places, there is little embarrassment in the fact that their worldview, religion, or ideology is designed not for truth but for things like identity formation, celebrating the glory of their tribe, achieving cooperation, and demonising enemies.
When Enlightenment philosophers celebrated reason and the rational pursuit of knowledge (“Sapere aude!”), they were calling for a profound—a radical—norm change. Even today, a surprisingly large minority of people in Western societies openly acknowledge that they do not think their beliefs should be based on evidence.
Similarly, the idea that people should be expected to provide rational justifications for their beliefs is another rare social norm. When Bertrand Russell observed that “it is undesirable to believe a proposition when there is no ground whatsoever for supposing it is true,” he was not expressing a boring platitude. He was observing, correctly, that most people, most of the time, violate this norm. And he was suggesting—and relative to much of human history, this suggestion is revolutionary—that this should be treated as a bad thing. (This simple idea, Russell suggested, “would completely transform our social life and our political system.”)
Just as importantly, the norm that people should base their convictions on evidence and reason applies not just to individuals but to how people should resolve disagreements and differences of opinion. For example, rather than massacring those who hold different views—as C.S. Peirce observes, this was historically treated as “a very effective means of settling opinion in a country”—the norm encourages you to try to persuade them rationally instead.
Institutions
In recent centuries, radical institutional developments have occurred alongside these profound normative changes.
Most obviously, science itself is an institution.
On one level, science is a status game, one in which ambitious scientists compete for social and professional rewards by making novel discoveries and advancing knowledge. This is already a radical departure from much of human history, in which bold, novel ideas were often more likely to result in punishment or death unless they aligned with the preferred vision of society’s elites.
Nevertheless, some people have always been motivated to figure out the nature of reality and generate bold hypotheses about it. Science's unique feature—the source of its unprecedented power as an engine of knowledge and discovery—concerns how it certifies reliable discoveries and adjudicates disagreements. For example, rather than relying on abstract philosophical considerations to settle arguments, it focuses obsessively—and counterintuitively—on empirical evidence as the sole means of adjudicating between competing hypotheses and theories.
More generally, science has developed numerous subtle institutional norms, procedures, and practices for governing the generation, communication, and revision of scientific information. These include professional journals and societies, universities and research centres, organised forms of training, specialisation, and collaboration, peer review, funding mechanisms, and so on.
Of course, the replication crisis and other issues with science that have come to light over the past decade or so demonstrate how imperfect these institutional mechanisms are. But that simply illustrates the general point: Even extraordinarily complex institutions designed, refined, and shaped over centuries with the explicit goal of generating knowledge—institutions that constitute humanity’s best and most successful attempt at generating knowledge—still often fall short.
The same is true of other knowledge-generating institutions in liberal societies, from the legal system to media outlets that conform to professional norms of journalistic objectivity. As imperfect as such institutions might be, their flaws should not detract from the amazing fact that they often succeed in generating and distributing broadly reliable information in society.
Taking knowledge for granted
Just as many people in affluent societies take wealth for granted today, many also take knowledge for granted. As a result of centuries of complex, cumulative normative and institutional changes, many people living within liberal societies with high rates of institutional trust enjoy something extraordinary and unprecedented: reliable knowledge about distant, complex facts.
For example, many of us are in a position to form accurate beliefs about vaccines, macroeconomic trends, the evolutionary history of our species, the misdeeds of powerful political elites, the micro-scale and large-scale structure of the cosmos, and much more.
From this position, phenomena such as flagrant lies, misinformation, ignorance, and misperceptions seem surprising. In an era of unprecedented epistemic wealth, it can seem odd that so many people endorse non-scientific—indeed, pre-scientific—beliefs about ghosts, astrology, paranormal activity, and supernatural forces and agents; that evidence-free conspiratorial narratives are so influential; that people embrace stick-figure, one-sided, biased ideological narratives; that political and economic elites lie in obvious ways to audiences who do not punish them for it; and so on.
And in some sense, these things are puzzling. Relative to the possibilities of knowledge and understanding in modern society, it is striking—and regrettable—that shocking forms of ignorance, misinformation, and misperceptions are so pervasive.
However, in a deeper sense, this situation is not surprising. The more puzzling feature of modern societies is that many people reliably form accurate beliefs about distant, complex issues. Unless we understand this, we will fail to appreciate how impressive—and fragile—this achievement is.
We will think—mistakenly—that ignorance and misperceptions are aberrations that require deep explanations, when they are instead the default state humanity will revert to in the absence of improbable and precarious norms and institutions.
Why does any of this matter?
As readers of this blog will know, I am not a fan of the way in which so many journalists, social scientists, and commentators today use concepts like “misinformation” and “post-truth” to diagnose epistemic problems in society.
There are many reasons for this. In connection with the themes of this essay, it is partly because the modern panic about these things is historically illiterate. There never was a “truth” era. The dominant world religions are vast repositories of fake news and rumours; conspiracy theories are as old as humanity; and false, cartoonish, and biased narratives and ideologies are the norm throughout human history.
However, it is also because I think locating modern epistemic problems in “misinformation” and related buzzwords is explanatorily shallow. Once you appreciate that the truth is not the default—that it is an exceptional, fragile, improbable achievement—it should shift how you approach social epistemology.
First, it should encourage a conscious rejection of naive realism. The truth is not self-evident. Given this, those who claim to know the truth, including misinformation researchers, should have more intellectual humility than they often have. Moreover, they should acknowledge that there are many sources of error, bias, and partiality in human judgement distinct from misinformation.
Second, it should make us understand that lies, conspiracy theories, misinformation, bias, pseudo-science, superstition and so on are not alien perversions of the public sphere. They are the epistemic state of nature that society will revert to in the absence of fragile—and highly contingent—cultural and institutional achievements.
Given this, the real epistemic challenge for the twenty-first century is not to combat misinformation, except insofar as doing this helps us achieve a deeper, more fundamental goal: maintaining and improving our best epistemic norms and institutions, and winning trust in, and conformity to, them.
The point that poverty is a normal state is missing the obligatory Heinlein quote:
“Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded — here and there, now and then — are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty.
This is known as ‘bad luck.’”
Great post. Another puzzle I’d add is why we care about distant abstract things at all. It’s not obvious why we should have *any* beliefs about such things, given their lack of relevance to everyday decisions. Why do we even care? Why do we form beliefs about them at all? I genuinely don’t know the answer, but I suspect it has something to do with the social functions such beliefs serve, and the need to segregate these social functions from the epistemic functions of our normal beliefs.