The harder it is to find the truth, the easier it is to lie to ourselves
A simple observation with complex implications
If you look at humanity, both today and throughout history, you can’t help but notice that people believe a lot of things that seem stupid and irrational. Pick your favourite example: conspiracy theories, religion, prejudice, ideology, pseudoscience, ancestor myths, people who hold different political opinions from your own, and so on.
This observation provokes a central question for the social sciences. Why do broadly rational people, people who often seem intelligent and competent in most aspects of their lives, sometimes believe highly irrational things?
One classic answer is that people are not disinterested truth seekers. In some contexts, our practical interests conflict with the aim of acquiring accurate, evidence-based beliefs. For example, we might want to believe things that make us feel good, that impose a satisfying order and certainty on a complex world, that help us persuade others that we’re noble and impressive, or that win us status and approval from our friends and allies.
Famously, when our goals come into conflict with the pursuit of truth in this way, the truth often loses out. We lie to ourselves, bury our heads in the sand, and engage in elaborate mental gymnastics. Less colloquially, we engage in what psychologists call “motivated cognition”: we—or our minds, at least—direct cognitive processes toward favoured conclusions, not true ones. For example, we instinctively seek out evidence that confirms those conclusions (confirmation bias), shield ourselves from evidence against them (motivated ignorance), insist on higher standards for arguments we dislike than for those we like (biased evaluation), and remember and forget information in convenient patterns (selective forgetting).
Throughout most of history, scholars had little doubt that this tendency was a central and destructive feature of the human condition.
For Adam Smith, for example, it “is the fatal weakness of mankind” and “the source of half the disorders of human life.” For Socrates in the Cratylus, “the worst of all deceptions is self-deception.” And of course, thinkers such as Freud and Nietzsche placed motivated cognition at the centre of their understanding of human psychology.
Against Motivated Cognition
This consensus carried over into scientific psychology and persisted until relatively recently. In the last decade or so, however, some researchers have become increasingly sceptical that motivated cognition is a significant force in human affairs. There are many reasons for this, including reinterpretations of experimental findings, failures to replicate certain findings, and a growing body of evidence that people are broadly rational in how they process information, even in domains like politics.
I’m not very convinced by these sources of scepticism. I think they often rest on naïve assumptions about how to interpret psychological findings and how to understand motivated cognition. When properly understood, motivated cognition is consistent with the finding that people update their beliefs when presented with corrective information.
I also think that, as with human cognition more broadly, most widespread and consequential forms of motivated cognition are distributed and socially scaffolded. They are a “team project” involving complex systems of social norms, incentives, and coordination that function to promote and protect favoured narratives and belief systems. So, if you want to understand how we lie to ourselves, you must move beyond the lone thinker in decontextualised psych experiments and focus on how humans co-construct social worlds optimised for scaffolding self-deception.
There is much more to say about all of this, obviously. But here, I want to focus on a different, more “philosophical” source of scepticism about motivated cognition, one which draws attention to what the philosopher Jeffrey Friedman calls “epistemic complexity”.
Epistemic complexity
“Epistemic complexity” is a bit of jargon for the simple idea that it’s often really hard to figure out what’s true.
Partly, this is because reality itself is often complex, but it’s also due to the highly fallible ways in which we access that reality. We rarely have “direct”, perceptual access to the facts we form beliefs about, especially in domains like politics and religion. Our access is mediated by other people and institutions—priests, teachers, writers, journalists, pundits, scientists, social media feeds, etc.—and by our pre-existing beliefs (“priors”), which, given the world's scale and complexity, typically involve highly selective, low-resolution compressions of reality. Of course, these representations were also primarily acquired from others who are in exactly the same situation.
This is what Walter Lippmann meant when he observed that the modern world is “out of reach, out of sight, and out of mind”, and that public opinion “deals with indirect, unseen, and puzzling facts, and there is nothing obvious about them.”
To make this concrete, consider your beliefs about climate change. Maybe you think it’s our most pressing political problem, an urgent crisis and existential risk, or maybe you think the whole thing is an overblown, leftist moral panic. But whatever you believe, take a moment to reflect on where your beliefs came from.
Reality didn’t just imprint itself directly on your brain, whatever that would mean. You learned about climate change in the same way that you learn about almost everything else: through a highly path-dependent process in which, at every stage of encountering new information (testimony, news reports, articles, education, political commentary, etc.), you filtered it through your priors about the world and about which sources were trustworthy.
Through this process, you arrived at your current opinions, which inevitably compress an extremely complex geophysical and political reality into a manageable, understandable, low-resolution form. Indeed, unless you are someone with significant expertise in this area, your “opinions” probably involve little more than socially-learned slogans and soundbites. (To test yourself, open a blank document and write out your current understanding of the topic exclusively from memory.)
It doesn’t take a philosophy PhD to appreciate that this process is highly fallible. Once you realise that the pictures inside people’s heads aren’t simple reflections of reality but the output of complex and fragile processes of interpretation and social learning, you should recognise that there are countless reasons why those pictures might distort or misrepresent that reality.
And yet, most of us don’t intuitively think this way. When we compare our beliefs against the facts, we always find a comforting 1:1 correspondence. Unless we force ourselves to reflect, there doesn’t seem to be a highly fallible process mediating between reality and our representations of it. Reality just is whatever we represent it to be.
The truth often seems obvious, self-evident, so much so that we are frequently baffled when people don’t share our understanding of the truth. The idea that rational people could have encountered the same reality and come away with different opinions doesn’t even register as a serious possibility in many cases. In the language of modern psychology, we are instinctive “naïve realists”. As Karl Popper characterised this intuition, we believe that the truth is “manifest”. If others don’t see the truth, they must, therefore, be deeply irrational, if not outright psychotic.
Given this, epistemic complexity is not merely a feature of our situation that we must grapple with. It is a feature that most people don’t instinctively appreciate, let alone reflect on. That is, it seems much easier to become “informed”—to figure out what’s true—than it really is.
Back to Motivated Cognition
This is where these reflections on epistemic complexity become relevant to questions about motivated cognition.
Historically, scholars have invoked motivated cognition to explain why people hold mistaken beliefs that appear highly irrational. But if people confront genuine epistemic complexity, this appearance of irrationality may simply be an illusion produced by naïve realism. That is, once we appreciate that the truth is not self-evident and that it’s extremely challenging to acquire knowledge, we should realise that there is nothing deeply puzzling about why people hold mistaken beliefs. Even perfectly rational individuals will form such beliefs if the challenges of forming accurate ones are sufficiently severe. Perhaps, through no fault of their own, they have simply been exposed to misleading evidence or unreliable sources.
If so, the motivation for positing motivated cognition evaporates. There is no irrationality to explain.
Although this move takes various forms, I think one can find versions of it in the writings of many recent scholars, even when it is not stated explicitly, including Jeffrey Friedman, Neil Levy, C. Thi Nguyen, and Cailin O’Connor and James Owen Weatherall. The core idea is that theorists have traditionally been too quick to jump from observing false beliefs to inferring motivated irrationality. Once we recognise epistemic complexity, we can see that there are countless ways in which individually rational thinkers can acquire false beliefs.
In most cases, these theorists advance alternative explanations that focus on features of the social environment, including how social-informational networks of trust and testimony are corrupted by malicious actors. Hence, this move typically goes hand in hand with the idea that to understand why people hold mistaken beliefs, we should turn our attention away from individual rational failings and toward “structural” and “systemic” pathologies in our society. (Friedman is an exception here, inasmuch as he seems to think that epistemic complexity is so severe that theorists shouldn’t even make judgements about which beliefs are true or false in the first place.)
Motivated Cognition and Epistemic Complexity
It’s an interesting and insightful line of reasoning, but I think it draws the wrong lesson from a recognition of epistemic complexity. Although such complexity opens the possibility that false beliefs can result from rational belief formation, its existence should actually increase our confidence in the likely impact of motivated cognition. This is because epistemic complexity exacerbates motivated cognition, making it easier for us to become convinced of desired conclusions.
In plain terms: The more challenging it is to figure out what’s true, the easier people will find it to lie to themselves.
To see why, think about the factors that determine whether people will engage in motivated cognition. It’s tempting to think that the only relevant variable is the strength of motivations that conflict with the pursuit of truth, such that the stronger those motivations, the greater the propensity to engage in motivated cognition.
However, a moment’s reflection suggests this can’t be the whole story. There are severe limits on what we can convince ourselves of, and these limits are largely independent of the strength of our motives. As Ziva Kunda puts it, “People do not seem to be at liberty to conclude whatever they want to conclude merely because they want to.” One might add: and no matter how much they want to. That is, there is no amount of money (or status, sex, etc.) that could induce me to believe that 2+2=5 or that the moon is made of cheese. These beliefs simply don’t fall within my cognitive grasp.
The reason is simple: For motivated cognition to be possible, we must be capable of providing some justification for the relevant belief. Elsewhere, I have called this a “rationalisation constraint”. But in some cases, we can satisfy it not by explicitly constructing or seeking post hoc rationalisations, but simply by insulating ourselves from disconfirming evidence. (This is captured by the “burying one’s head in the sand” metaphor.)
Whatever we call it, however, the point is the same: our ability to become convinced of desired conclusions depends on our ability to feel that they are in some sense justified. That’s why the psychological acrobatics associated with motivated cognition—confirmation bias, biased evaluation, selective forgetting, etc.—are necessary in the first place.
For this reason, the extent to which motivated cognition biases belief depends not only on incentives but also on how easily individuals can satisfy this constraint. To be clear, this isn’t an original point; it’s one of the oldest observations about motivated cognition. The observation that I want to make here, however, is simply that when it comes to justifying desired beliefs, epistemic complexity is a help, not a hindrance. That is, as it becomes increasingly difficult to determine what’s true, it becomes correspondingly easier to convince ourselves of desirable untruths.
This suggests that many people are drawing the wrong lesson from epistemic complexity. Although such complexity implies that even disinterested, rational truth seekers could acquire inaccurate beliefs, its existence should increase our confidence that people are not behaving as disinterested truth seekers.
Of course, it is still ultimately an empirical question to what extent motivated cognition is operative in specific cases. There may be other reasons to think it is less prevalent than many have traditionally assumed. The point is just that the fact of epistemic complexity is not one of them.
So what?
Why does any of this matter? There are potentially many reasons, I think, but I’ll end with two.
First, it’s plausible that elites often benefit when target audiences engage in motivated cognition. Politicians who spread self-serving lies, for instance, benefit when their supporters prioritise political tribalism over accuracy: supporters will be more likely to believe that an election was stolen from their side if they’re motivated to embrace and signal tribal beliefs. This means that many elites have an incentive to do whatever they can to increase a domain’s epistemic complexity—for example, by manufacturing uncertainty, flooding the zone with shit, recruiting congenial “experts”, and so on.
In some ways, of course, this is a familiar lesson from research on propaganda, but reflecting on the interactions between motivated cognition and epistemic complexity casts it in a new light.
Second, many studies of the role of motivated cognition in belief formation provide participants with corrective evidence and measure the extent to which they update their beliefs. If they update in a rational direction, this is taken as evidence against the importance of motivated cognition.
One way to understand such experiments is that they artificially and temporarily reduce epistemic complexity. By presenting strong evidence against desired conclusions, they momentarily weaken people’s ability to subjectively justify those conclusions. To the extent that the real-world contexts in which people think involve much higher levels of epistemic complexity—for example, greater choice over which media and political sources to consult, heightened exposure to conflicting viewpoints and arguments, and greater contact with like-minded friends and colleagues—such experiments might be limited in what they can tell us about the real-world significance of motivated cognition.

