Political animals
What role do bias and irrationality play in shaping people's political opinions?
Politically motivated reasoning
Politics does not seem like a domain where rationality abounds. Instead, people often appear tribal, dogmatic, and unreasonable. As Joseph Schumpeter observed,
"The typical citizen drops down to a lower level of mental performance as soon as he enters the political field. He argues and analyses in a way which he would readily recognise as infantile within the sphere of his real interests."
Much research in the social sciences seems to vindicate this impression.
For example, voting appears to be mostly rooted in group allegiances. Rather than choosing which leaders or parties to vote for based on a fair-minded, rational calculation of their merits, citizens typically inherit stable political loyalties or acquire them relatively early in life. These allegiances then shape their political opinions. In other words, political opinions often seem to follow political allegiances, not determine them.
Moreover, extensive psychological research suggests that people’s understanding of the political universe is heavily biased by motivated reasoning. Instead of approaching politics as disinterested truth seekers, people appear to be motivated to endorse whatever conclusions align with their political preferences and partisan allegiances. Given this, they seek out, interpret, and process information in ways conducive to rationalising those conclusions, not discovering the truth.
This bias seems to explain many things, including why people frequently endorse political misinformation, why political disagreements are so difficult to resolve, and why partisans tend to endorse a whole range of “rationally orthogonal beliefs”.
For example, why do someone’s beliefs about climate change often predict their beliefs about abortion, the Israel/Palestine conflict, and gender identity? The existence of politically motivated reasoning provides a ready answer: because they are tribal. More carefully: because they are motivated to embrace the arbitrary bundle of identity-defining beliefs socially rewarded within their political ingroup.
The “rationality” critique
Recently, numerous social scientists and philosophers have challenged this analysis of political psychology (e.g., here, here, here, here). Although they acknowledge that group allegiances influence people’s political beliefs, they deny that this involves motivated reasoning.
Truth-seekers can polarise and form misperceptions
First, they point out that members of different groups (e.g., supporters of different parties) have different experiences, inhabit different social networks, trust different sources, and encounter different information, including different false and misleading information. Given this, groups can embrace polarised and often mistaken belief systems even if their members are disinterested truth seekers.
Replication problems
In addition, critics highlight that numerous findings that seemed to support the existence of politically motivated reasoning have failed to replicate.
For example, some findings suggested that rational persuasion involves a “backfire effect”, whereby presenting people with evidence against strong political views causes them to—very irrationally—double down, increasing their confidence in those views. Subsequent studies suggest this is very rare, if it exists at all. Instead, people typically update their beliefs toward corrective information. They are not nearly as dogmatic as conventional wisdom suggests.
Similarly, one of the most influential findings in recent psychology suggested that more “cognitively sophisticated” (e.g., numerate) people simply use their superior smarts to rationalise identity-defining convictions. Given this, such people end up even more polarised than their less sophisticated co-partisans.
It is an exciting finding, cited seemingly everywhere, including in the early published research of a certain British philosophy blogger. However, it has generally not held up in subsequent replication attempts.
Observational equivalence
Finally, even findings that do seem to replicate (for now…) can be explained without appealing to motivated reasoning.
Most studies on motivated reasoning use “matched information designs”. These present participants with information that is identical in every way except for its implications for their political preferences. Because people seem to process information differently based solely on its consequences for these preferences, it is inferred that they are biased—their political preferences distort their judgement.
For example, studies demonstrate that participants
are more likely to endorse policies just because their party supports them
judge identical protest footage differently depending on whether the protesters’ goals align with their own political commitments
evaluate the quality of otherwise identical studies more favourably when they support their political views.
Although many infer the existence of motivated reasoning as the cause of such findings, this is too quick.
The reason is that people’s preferences—the conclusions they are assumed to be motivated to reach—strongly correlate with their pre-existing beliefs (“priors”). This means that when you vary the implications of information for people’s preferences, you also vary the priors they draw upon in interpreting the information. However, people’s priors should and do influence how they interpret information for reasons that have nothing to do with motivated reasoning. Given this, the findings of such studies are consistent with purely “cognitive” (i.e., non-motivational) processes.
For example, people might prefer policies supported by their party simply because they believe their party is trustworthy; they might form varied interpretations of protesters based solely on preconceptions about the likely character of different kinds of protesters; and they might evaluate studies supporting their beliefs as more reliable simply because they assume studies that reach correct conclusions are more reliable than studies that reach false ones.
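This confound can be made concrete with a small worked example. The sketch below (illustrative only; all numbers are made up) shows two perfectly unbiased Bayesian agents receiving the same piece of evidence—a party endorses a policy—but holding different priors about how well that party’s endorsements track policy quality. They end up with divergent judgements about the policy, with no motivated reasoning anywhere in the process:

```python
def posterior(prior_good, p_endorse_given_good, p_endorse_given_bad):
    """Bayes' rule: P(policy is good | party endorses it)."""
    numerator = p_endorse_given_good * prior_good
    evidence = numerator + p_endorse_given_bad * (1 - prior_good)
    return numerator / evidence

# Both agents start agnostic about the policy itself (prior = 0.5).

# Agent A trusts the endorsing party: it mostly endorses good policies.
agent_a = posterior(prior_good=0.5, p_endorse_given_good=0.9, p_endorse_given_bad=0.2)

# Agent B distrusts it: its endorsements weakly track *bad* policies.
agent_b = posterior(prior_good=0.5, p_endorse_given_good=0.3, p_endorse_given_bad=0.6)

print(f"A: P(good | endorsement) = {agent_a:.2f}")  # 0.82
print(f"B: P(good | endorsement) = {agent_b:.2f}")  # 0.33
```

Because a matched-information design varies only the endorsement, the resulting divergence is observationally equivalent to motivated reasoning even though both agents update impeccably—which is exactly the inferential problem the critics press.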
This image, taken from a recent article by Peter Ditto and colleagues that reviews this issue, illustrates the problem:
Moving forward
How should we resolve this disagreement?
One possibility is to develop better experimental designs that discriminate between the effects of preferences and priors. Some recent studies have tried this and found compelling evidence for politically motivated reasoning.
However, experiments are not a panacea. Decades of research in psychology show that people are ingenious at coming up with non-motivational explanations of findings alleged to vindicate the existence of motivated reasoning.
Moreover, it is important to remember that the goal of scientific explanation is not to develop stories that are consistent with experimental results. It is to develop plausible theories that explain the world—in this case, people’s political psychology. Experimental findings are relevant to that task, but so are many other things. For example, we want theories that are parsimonious, that cohere with well-established theories and ideas elsewhere in science, and that unify diverse findings and subtle patterns within them.
In other words, the question should not be whether purely cognitive explanations of political psychology are consistent with certain experimental results and coarse-grained descriptions of phenomena (e.g., polarisation, political misperceptions, and so on). The question is whether such explanations are plausible.
Anyone who reads this blog or my published research will know that I do not find them plausible. Although I think it is good that brilliant people are trying to develop the “people are disinterested truth seekers” model as far as it can go, I am not—at least not yet—persuaded.
Viewing political beliefs and ideologies through a purely epistemological lens—neglecting the distorting role of power, status, reputation, intergroup conflict, identity, and self-interest in political psychology—strikes me as an interesting project, but not a very promising one.
Understanding politically motivated reasoning
One problem with this debate is that “politically motivated reasoning” is often underspecified. Given this, it is not clear how to understand the phenomenon or what findings its existence predicts. For example, does it predict that people will be insensitive to evidence and arguments? Does it predict that they will “backfire”? Does it predict that more intelligent people will be worse at reasoning about hot-button political topics in psychological experiments?
If you read the scientific literature on this topic, there is a lot of confusion.
In my article The case for partisan motivated reasoning, I draw on a wide range of ideas from the social sciences to develop a very schematic account of politically motivated reasoning that avoids these issues. The account rests on two simple claims:
People are disposed to internalise claims and narratives they are motivated to advocate for.
People are coalitional animals disposed to advocate for the interests of coalitions they support.
Advocacy-biased cognition
First, it is a familiar idea in psychology that people often think and reason in ways that are lawyerly. That is, when it suits our interests to promote or spread a specific conclusion, we tend to think and reason in ways aimed at justifying that conclusion and defending it from counter-evidence and criticism.
In such cases, it is tempting to think there must be a sharp distinction between what people privately believe and what they advocate for. However, although this gulf sometimes exists, people tend to internalise their propaganda.
The reasons for this are not yet clear. For example, it could be that internalising our self-serving advocacy has positive adaptive value by making us more persuasive advocates. However, it could also be that self-persuasion is simply a byproduct of advocacy: in the process of recruiting evidence and arguments designed to persuade others, it is difficult to avoid persuading oneself.
Nevertheless, whatever the cause, the phenomenon itself seems to be moderately well-supported by experimental research. Of course, this is experimental psychology we are talking about here, so perhaps we will discover that none of these findings replicate. However, I also think the tendency illuminates a wide range of beliefs, which often seem to track people’s propagandistic incentives.
For example, at the individual level, at least within highly individualistic, WEIRD (Western, Educated, Industrialised, Rich, and Democratic) societies, individuals are susceptible to numerous familiar self-serving and self-aggrandising biases. They form flattering self-images, take responsibility for good outcomes but externalise responsibility for bad ones, depict their actions as norm conforming, minimise the harms of their behaviours and exaggerate the harms inflicted on themselves, and diminish the qualities and achievements of rivals.
This makes sense if people’s beliefs are biased by advocacy: in highly individualistic societies in which individual success depends strongly on impression management, people benefit from presenting themselves in the most attractive light they can get away with.
Coalitional psychology
People do not just function as self-serving advocates. Humans are also notoriously “groupish” in some sense. That is, when it comes to groups of highly diverse kinds (bands, tribes, sects, subcultures, political parties, unions, religions, nations, etc.), people often form strong group attachments and then exhibit a familiar cluster of groupish psychological tendencies. For example, we draw sharp ingroup/outgroup distinctions between us and them. We exhibit various forms of ingroup favouritism, including greater trust in and empathy for insiders over outsiders. We often derogate rival outgroups. We experience emotions of pride, shame, and anger connected to group-level outcomes. And so on.
What explains these tendencies?
Describing people as “tribal” does not explain these tendencies; it merely redescribes them. Neither do social-psychological theories that account for them by positing “psychological needs”. (Social identity “theory”, for example, holds that humans view their ingroup favourably because they have a deep psychological need to think well of their ingroup.)
Instead, they are better understood in the context of our evolved coalitional psychology. Coalitions are teams, not groups. They are bounded collectives that cooperate and coordinate to promote shared interests and achieve shared goals. Although many such coalitions are highly stable, they can also be fluid, and individuals treat them not as irrational shadows of a tribal past but as instruments for advancing their interests and values.
Although there is much more to say about coalitional psychology, the important aspect for our purposes is that once people join and support a coalition they tend to align their interests to some degree with those of the coalition. Given this, they internalise costs and benefits to the coalition as costs and benefits to themselves. (This is why individual emotions can map onto group outcomes). Just as people care about their own power, status, and reputation, they care about the power, status, and reputation of allies.
This has an important consequence: people tend to instinctively behave as advocates for the coalitions they support. In fact, this advocacy role arises not just from the alignment of individual and group interests. It also arises because coalitions tend to reward those members who perform this role—the true believers who proselytise, evangelise, and propagandise on behalf of group narratives—with heightened status within the community.
When combined with the natural human tendency to internalise the claims and narratives we are motivated to advocate for, the result is politically motivated reasoning: a disposition—especially among strong partisans—to internalise claims and narratives that promote and justify the interests of their political coalition in competition with rival coalitions.
The case for partisan motivated reasoning
Why is this story about political psychology more plausible than purely cognitive explanations?
First, it is integrated with a significant body of relatively well-established ideas elsewhere in science. If you accept that people are disposed to internalise their propaganda and advocate for the interests of coalitions they support, the theory falls out automatically. More generally, it also coheres better with an understanding of human beings as strategic political animals that evolved not to pursue abstract ideals like Truth and Knowledge but to achieve more down-to-earth material, political, and social goals.
Second, this simple account of political psychology illuminates a wide range of relevant findings and subtle patterns in those findings. For example, it does not just predict that partisans will sometimes polarise over their beliefs and endorse misperceptions, which is consistent with purely cognitive explanations. Instead, it predicts the direction of partisan belief systems—that they will be biased in the direction of promoting and justifying the interests of relevant political coalitions. Lots of evidence seems to support this prediction. Partisans’ errors are not random; they are what one would expect if partisans function as instinctive coalitional lawyers.
As David Pinsof and colleagues have documented, the idea that partisans instinctively deploy propagandistic tactics to defend their allies (the groups their party supports) against their rivals (the groups they oppose) also illuminates a wide range of apparent inconsistencies in political belief systems. For example, partisans do not consistently endorse abstract principles (equality, freedom, tolerance); instead, they strategically deploy such principles in ways suited to mobilising support for their coalition, burnishing its reputation, and discrediting and demonising rival coalitions.
Importantly, this account also diverges from traditional theories of politically motivated reasoning, which assume that partisans are motivated to believe what they want to be true. (This is central to how Peter Ditto and colleagues frame the issue in the image above).
Once you understand that biased beliefs reflect advocacy, you can see that this is wrong: partisans will be disposed to advocate for (and hence internalise) claims they do not want to be true—and might even find extremely distressing—if such claims promote their coalition’s interests. Again, this is supported by evidence. For example, partisans inflate the dangers their coalition claims to combat, including the power of rival parties and ideological enemies. (Republicans do not desire an epidemic of pet-eating Haitian immigrants any more than Democrats want Republicans to be demonic fascists, yet both claims suit their propagandistic purposes).
Third, notice what this account of political cognition does not predict. It does not predict that people will be impervious to evidence or reason, or “backfire” when confronted with uncomfortable facts. Such tendencies would undermine advocacy. To be effective advocates, people must be responsive to reality and capable of integrating novel information.
Nevertheless, the account does predict that when partisans are forced to acknowledge unfavourable facts, they will creatively interpret and contextualise such facts in ways that align with their advocacy goals. Once again, there is some evidence for this. This tendency would also illuminate why people rarely abandon deeper political attitudes even when they accept politically uncongenial information. Again, the analogy with lawyers is helpful: do defence lawyers quit when confronted with persuasive arguments from the prosecution?
Finally, this account of political cognition explains why the same pattern of biased beliefs recurs across a wide variety of different intergroup contexts. That is, it is not just in democratic party politics that one finds the embrace of coalition-serving, coalition-aggrandising belief systems. They seem to emerge whenever intergroup conflict arises, whether between nations, races, religions, or sects. Purely cognitive explanations must assume, implausibly, that disinterested truth-seekers somehow recreate the same pattern of belief-forming tendencies across these diverse contexts.
Clarifications
This account must nevertheless be clarified in various ways.
First, although I think the tendency to function as an instinctive propagandist for coalitions one supports is a very general feature of psychology, this tendency is likely influenced in various ways by individual and environmental factors. For example, communities differ in the social-epistemic norms they enforce, which can either counteract or harness biased belief-forming tendencies. This is a topic for future research.
Second, advocacy-biased cognition does not necessarily influence belief updating, the process by which people revise their beliefs when confronted with novel evidence. Although there is still much that we do not know, research suggests that this process is automatic and cannot be influenced (and hence biased) by practical goals like advocacy. Nevertheless, the psychological processes that underlie beliefs encompass much more than belief updating in this sense. For example, advocacy goals might bias how one searches through memory, which evidence and sources one attends to, which hypotheses and explanations one generates, and how long one spends reasoning about a topic.
Third, advocacy-biased cognition is not the only possible source of politically motivated reasoning. Elsewhere, I have explored how beliefs and belief systems can also be biased by motivations to signal one’s characteristics and allegiances. This signalling likely interacts with advocacy-biased cognition in interesting ways. For example, behaving as a passionate advocate can signal group allegiance, as can endorsing beliefs that serve clear propagandistic purposes.
Finally, motivated cognition is not a purely individual-level phenomenon. I am increasingly of the view that one cannot understand motivated cognition without focusing on the ways in which it is socially scaffolded. Communities motivated to spread or embrace shared belief systems coordinate and cooperate in ways conducive to promoting, protecting, and justifying such beliefs.
I have written about this elsewhere, both in connection with the idea of a marketplace of rationalisations and with a focus on the ways in which community norms and incentives regulate how groups bound by shared narratives communicate and self-censor.
This is yet another reason why purely cognitive theories of political psychology are implausible: they miss how group allegiances do not just bias individual cognition but shape the broader social environment within which cognition occurs.
FURTHER READING
This post draws on ideas from a much longer article. I will be developing them at much greater length in a forthcoming book on delusions to be published by Oxford University Press.
I would also highly recommend the following excellent articles for anyone interested in reading more about this topic:
David Pinsof et al., “Strange Bedfellows: The Alliance Theory of Political Belief Systems”
Elise Woodard, “What’s Wrong With Partisan Deference”
Ben Tappin et al., “Thinking Carefully About Causal Inferences of Politically Motivated Reasoning”
Adam Gibbons, “Rational and Radical Ignorance”