Are people too flawed, ignorant, and tribal for open societies?
A deep dive into four factors that prevent members of open societies from understanding political reality: complexity, invisibility, rational ignorance, and politically motivated cognition.
This week and the next, I am on the faculty for a two-week summer school in Budapest on “The Human Mind and the Open Society”. Organised by Thom Scott-Phillips and Christophe Heintz, the summer school focuses “on how understanding the human mind as a tool for navigating a richly social existence can inform our understanding and advocacy of open society, and the ideals it represents”:
“The notion of open society is an attempt to answer the question of how we can effectively live together in large and modern environments. Its ideals include commitments to the rule of law, freedom of association, democratic institutions, and the free use of reason and critical analysis. Arguments in favour of these ideals necessarily depend on assumptions—sometimes hidden and unexamined—about the human mind.”
I agreed to take part in the summer school because it would allow me to interact with a group of fantastic researchers and because it brings together two of my favourite things: (1) evolutionary social science and (2) the ideals of open, liberal societies—ideals that I regard as some of humanity’s most important and most fragile achievements.
In my role, I am giving two lectures on “The epistemic challenges of open societies”. The first lecture explores four factors that distort the capacity of citizens within open societies to acquire accurate beliefs about the world: complexity, invisibility, rational ignorance, and politically motivated cognition.
The second criticises the optimistic idea that the “truth” will emerge from a thriving, free “marketplace of ideas”. Against this, I explore how the public sphere within open societies often functions more like a marketplace of rationalisations in which pundits, intellectuals, and media outlets churn out justifications for the actions, policies, and narratives favoured by society’s competing political and cultural tribes. The lecture also gestures—speculatively and unsatisfyingly—at some ways of attempting to address these challenges.
In this post, I will walk through some of the main ideas in the first of these lectures.
What are open societies?
It is not always clear what the “open society” is supposed to refer to. In my lectures, I focus on two of its uncontroversial ideals.
First, open societies are democratic. They involve political equality among citizens at essential stages of collective decision-making, typically in the form of “one person, one vote” during elections of political representatives.
Second, they encourage the free and open exchange of ideas and arguments. In contrast with societies that feature either substantial top-down informational control and censorship or powerful bottom-up pressures of social conformity and political correctness, open societies strive to embody J.S. Mill’s endorsement of radical freedom of thought, expression, and debate.
Traditionally, theorists have argued that both these features of open societies have positive epistemic consequences—that is, positive effects on the social production and distribution of knowledge and understanding. For example, at least since Aristotle, theorists have argued that democracies benefit from “wisdom of crowds”-style phenomena, and a classic argument for a free, thriving public sphere is that it is more likely than the alternatives to generate truth, or at least to minimise costly errors.
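The “wisdom of crowds” argument has a standard formalisation in Condorcet’s jury theorem: if voters judge independently and each is even slightly more likely than chance to be right, majority verdicts become extremely reliable as the group grows. Here is a minimal simulation of that logic (the competence levels and group sizes are invented for illustration). Note that the theorem cuts both ways: voters slightly worse than chance make large majorities reliably wrong.

```python
import random

def majority_correct_rate(n_voters, p_correct, trials=2_000):
    """Estimate how often a simple majority of independent voters, each
    correct with probability p_correct, reaches the right answer."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

# Slightly-better-than-chance voters: large majorities approach certainty.
# Slightly-worse-than-chance voters: large majorities approach certain error.
for p_correct in (0.51, 0.49):
    for n_voters in (11, 101, 1001):
        print(f"p={p_correct}, n={n_voters}: "
              f"{majority_correct_rate(n_voters, p_correct):.3f}")
```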
Nevertheless, these arguments might be too optimistic.
Complexity
First, giant, modern societies must navigate highly complex problems and constraints. Think of macroeconomic questions, redistribution, immigration, crime, climate change, international trade agreements, geopolitics, and so on. In open societies, ordinary citizens are expected to play an important role in identifying such problems and choosing solutions to them. It is not obvious that they are up to the job.
As Walter Lippmann wrote in Public Opinion, one of the most insightful critiques of democracy ever written,
“There are few big issues in public life where cause and effect are obvious at once. They are not obvious to scholars who have devoted years, let us say, to studying business cycles, or price and wage movements, or the migration and assimilation of peoples, or the diplomatic purpose of foreign powers. Yet somehow we are all supposed to have opinions on these matters.”
Considerations like this led Lippmann to advocate for a kind of technocracy (i.e., rule by experts) over democracy.
Invisibility
Second, and relatedly, public opinion in modern societies inevitably deals with problems that are not just complex but—to quote Lippmann again—“out of reach, out of sight, out of mind.” That is, unlike in small-scale societies, public opinion in large, modern societies concerns phenomena and activities that citizens are almost never in direct perceptual contact with. Have you ever met Joe Biden? Can you perceive GDP, inequality, or society-wide crime rates and trends? Can you observe planet-wide climate change, let alone its causes?
Of course not. In these cases and countless more, everything we believe is based on socially mediated information—the reports, testimony, gossip, images, and cherry-picked video clips others have shared. Describing this situation, the philosopher John Dewey noted that in modern democracies,
“The local face-to-face community has been invaded by forces so vast, so remote in initiation, so far-reaching in scope and so complexly indirect in operation that they are, from the standpoint of the members of local social units, unknown. . . . They act at a great distance in ways invisible to [them]”.
This has an important implication: In ordinary social life (and in the small-scale societies that characterise almost all of human evolution), humans can typically check what people tell them against reality. That is, although our species has always depended on communication and social trust, we can often test people’s trustworthiness. If Harry tells me that Sally is dull, and I learn firsthand that she is exciting and interesting, I will decrease my trust in Harry.
Nothing like this is true in democratic politics. We can rarely directly verify the information we acquire from others. We can cross-check it against the competing testimony of others, but we cannot directly verify that testimony either. This complicates the optimistic vision of a “marketplace of ideas.” As Jeffrey Friedman puts it,
“Ultimately, a ‘market’ for enlightenment cannot work in the way that ideal-typical consumer-goods markets work because in the latter, the ultimate guarantor of efficacy is supposed to be the feedback consumers get from the products they buy... There is no such feedback with most political knowledge… If people have been politically misinformed, how would they know it? If they were capable of knowing it on their own, they would not need the division of epistemic labor to enlighten them.”
Rational Ignorance
Perhaps the problems of complexity and invisibility are not fatal to open societies. Even if they make it challenging for citizens to acquire informed beliefs, people can overcome these challenges if they are sufficiently careful, rational, and hard-working in seeking out and evaluating political information.
However, this response runs into a third challenge for open societies: incentives.
In most forms of collective decision-making, the contributions of individuals involved in the decision-making process influence the outcome. My input matters when deciding where to go for dinner with friends. Similarly, when hunter-gatherer bands choose how to allocate food and which social rules to enforce, each individual granted political authority will influence the ultimate decisions.
The situation is very different in complex, modern democracies. In this context, individual voters have almost no impact on political decision-making. Even if one stays home on voting day, it will make approximately no difference to the outcome.
“One person, one vote” sounds good. However, as Anthony Downs pointed out in the 1950s, it creates a deep problem of incentives. On the one hand, the costs of becoming politically informed—learning about politicians, issues, policies, and relevant social science—are very high for individual voters. It takes a lot of time and energy, which could be spent on other important—or simply more fun—activities. On the other hand, the negligible impact of individual votes means that being informed has little benefit. Given this, political ignorance is rational.
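A toy calculation makes the structure of Downs’ argument vivid. Every number below is invented purely for scale; the argument only requires that the probability of casting the deciding vote is minuscule relative to the cost of getting informed.

```python
# Toy version of Downs' cost-benefit argument (all numbers are invented
# assumptions, chosen only to illustrate relative magnitudes).
p_pivotal = 1e-7                  # rough chance one vote decides a national election
value_of_better_outcome = 10_000  # personal value of the "right" result winning
cost_of_becoming_informed = 500   # time and effort, in the same units

expected_benefit = p_pivotal * value_of_better_outcome
print(f"Expected benefit of an informed vote: {expected_benefit:.4f}")
print(f"Cost of becoming informed:            {cost_of_becoming_informed}")
# The expected benefit (~0.001) is dwarfed by the cost, so remaining
# politically ignorant is individually rational, whatever its collective costs.
```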
Of course, what is rational for individuals is often disastrous at the level of collectives. Even if voters do not benefit from becoming politically informed, democracies depend on an informed electorate to make good decisions. The result is… what we observe in the real world: widespread political ignorance and frequently uninformed—often bad, sometimes disastrous—democratic decision-making.
Of course, this is a problem for all democracies, including illiberal ones, but it is unclear how encouraging a free, open marketplace of ideas helps. If it is rational to be ignorant in democratic politics, opting out of participating in public debate and deliberation is equally rational. And it would seem positively irrational to express bold, daring positions out of step with the opinions of one’s friends, family, and colleagues.
Not everyone is ignorant
As a model, “rational ignorance” seems to explain the behaviour of many voters. Still, it struggles with a significant minority (roughly 15-20%) of the population in modern democracies who tend to be highly politically engaged. Such people follow politics, news, and current affairs quite closely. They are also much more likely to get involved in political activities beyond voting (e.g., activism, campaigning, local governance, etc.), including participation in public debate, deliberation, and political argument.
Perhaps this politically active group is the saviour of open societies?
Unfortunately, although some people are highly interested in and engaged with politics, they (ok, we) tend to view the world in ways distorted by self-serving goals and tribal allegiances. In other words, those who take the time and effort to get involved in the political process in open societies are—unsurprisingly—not disinterested truth seekers. They are biased.
In my view, the best way to understand this bias is in terms of four big ideas:
Motivated cognition
Coalitional psychology
Advocacy-biased cognition
Social signalling
Motivated cognition
Ordinarily, when we form beliefs, we—or at least our unconscious cognitive mechanisms—aim at accuracy. The reason is simple: In navigating the world and attempting to achieve our goals, having an accurate model of reality is typically beneficial. When rats navigate their environments, they need accurate mental maps of the spatial layout of those environments. When chimps navigate their local dominance hierarchy, they must track dominance and alliance relationships between other chimps. The same is true of most human beliefs: to figure out what to eat, let alone whom to marry and what career to go into, we need mental models that track reality.
Nevertheless, it is a familiar idea of commonsense psychology that our beliefs are sometimes distorted by practical motives and interests distinct from—and sometimes sharply in conflict with—accuracy. Consider common phrases and sayings like “self-deception”, “burying your head in the sand”, “wishful thinking”, “drinking your own Kool-aid”, and “it’s difficult to get a man to understand something when his salary depends on his not understanding it”.
“Motivated cognition” is simply the technical scientific term for this phenomenon. That is, when people form beliefs in ways distorted by motives or goals distinct from accuracy—for example, emotion regulation, showing off, fitting in, and so on—they succumb to “motivated cognition.” Because these goals “direct” cognition towards favoured conclusions (rather than conclusions best supported by available evidence), they are typically called “directional goals”.
Here is what “motivated cognition” is not:
Motivated cognition is not about choosing to believe something. In general, it is impossible to decide to believe something voluntarily. Instead, biased cognitive processes mediate between practical motives and beliefs in ways heavily constrained by evidence: People only believe things they can subjectively rationalise. In this respect, the common-sense description of self-deception—namely, “convincing yourself of what you want to believe”—contains an important insight: Unless you can muster sufficient rational support for a favoured conclusion—unless you can become convinced of it—you cannot believe it.
Motivated cognition is not—and does not predict—the “backfire effect”, a hypothesised phenomenon in which people increase their confidence in a belief when they encounter evidence against it. Contrary to popular belief, the scientific evidence for this phenomenon is very weak. In almost all cases, people update their beliefs when presented with contrary information from trusted sources, even if only slightly.
Motivated cognition is (probably) not about belief updating at all. Belief updating—that is, revising one’s beliefs when presented with novel evidence—is (probably) automatic and “impenetrable” by practical goals and interests. If you open your eyes and notice no beer in the fridge, you will abandon your belief that beer is in the fridge, irrespective of what you might like to believe. The same is true if a trusted friend informs you that no beer is left. (You might not wholly abandon your belief in this case because we trust testimony less than perception, but you will automatically lower your confidence in it).
Nevertheless, “belief updating” is just one part of belief formation, and there are many other ways directional goals can bias cognition: for example, by biasing memory search and the acquisition and avoidance of evidence; by biasing which hypotheses one considers to explain (or explain away) evidence; and by biasing how long one spends thinking and reasoning about a topic.
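The last of these mechanisms, a biased stopping rule, is concrete enough to simulate. In the sketch below (all parameters invented for illustration), two agents watch flips of a genuinely fair coin and update their belief that it is heads-biased in a perfectly Bayesian way. The only difference is that the motivated agent stops gathering evidence the moment the posterior reaches the conclusion they want, while the other examines everything.

```python
import random

# Likelihood ratios for "heads-biased (0.6)" vs "tails-biased (0.4)"
LR_HEADS, LR_TAILS = 0.6 / 0.4, 0.4 / 0.6

def final_belief(motivated, max_flips=100, stop_at=0.7):
    """Flip a fair coin up to max_flips times, tracking the Bayesian
    posterior that it is heads-biased. A motivated reasoner stops as
    soon as the posterior reaches the conclusion they favour."""
    odds, belief = 1.0, 0.5  # prior odds 50:50
    for _ in range(max_flips):
        odds *= LR_HEADS if random.random() < 0.5 else LR_TAILS
        belief = odds / (1 + odds)
        if motivated and belief >= stop_at:
            break  # "I've seen enough."
    return belief

def conviction_rate(motivated, trials=10_000):
    return sum(final_belief(motivated) >= 0.7 for _ in range(trials)) / trials

# Same coin, same update rule; only the stopping rule differs.
print("Convinced after motivated search: ", conviction_rate(True))
print("Convinced after exhaustive search:", conviction_rate(False))
```

The motivated agent ends up “convinced” far more often, despite never misreading a single piece of evidence: the bias lives entirely in when they stop looking.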
As Ziva Kunda puts it,
“[T]here is considerable evidence that people are more likely to arrive at conclusions that they want to arrive at, but their ability to do so is constrained by their ability to construct seemingly reasonable justifications for these conclusions.”
This fact—that motivated cognition is subject to a rationalisation constraint—will be important when I turn in the second post to the concept of a marketplace of rationalisations.
Politically motivated cognition
Motivated cognition is widespread in everyday social life. For example, in Western, individualistic societies where people compete intensely to be liked and admired by others, the average person believes herself to have better-than-average traits. She takes credit for her successes but blames external factors (or others) for her failures. She minimises her responsibility for wrongdoing and exaggerates her victimisation. She frames her prospects in an overly optimistic light. She subtly and sometimes not-so-subtly interprets her rivals in a negative light. And so on.
However, with the possible exception of religion, motivated cognition is more intense in politics than in any other domain. This is true of political cognition within open societies, which provide highly fertile ground for motivated cognition:
Politics touches on almost every hot-button, highly charged, emotional issue imaginable.
Public opinion concerns complex, ambiguous, and invisible truths where it is easy to interpret evidence in preferred directions.
There is an abundance of evidence (and of reporters, pundits, and propagandists) from which to acquire congenial rationalisations of whatever one is motivated to claim or believe.
Of course, the fact that people are not exactly paragons of intellectual virtue in politics is not a new (or deep) insight. The question is: How should politically motivated cognition be understood?
Here is my best guess.
Coalitional Psychology
Although simple self-interest is sometimes relevant in democratic politics, especially among powerful political and economic elites, citizens cannot accomplish much independently. To win power and influence policy, they must join and support alliances. As a result, democratic politics involves subtle mixtures of cooperation and competition between groups of multiple kinds and sizes.
It is a platitude that human beings are “groupish”. That is, group attachments of otherwise diverse kinds—religious, national, tribal, ethnic, sports fandom, and so on—often seem to evoke a familiar cluster of motives and dispositions in people: for example, sharp ingroup/outgroup distinctions, ingroup favouritism, prejudice towards outsiders, greater trust in and empathy for insiders relative to outsiders, sending markers of group identity, and internalising costs and benefits to the group as costs and benefits to oneself.
Why? The popular idea that people are “tribal” does not explain such tendencies; it redescribes them. Likewise for social identity “theory”, an old idea in social psychology according to which people “identify” with groups and then succumb to ingroup biases as a way of boosting their “self-image”. Although this feels explanatory, it merely replaces one puzzle (people are groupish) with another (people have an inexplicable desire to maintain a positive group identity).
A better, deeper analysis appeals to coalitions and coalitional psychology. As John Tooby points out, coalitions are teams, not “groups”. They are bounded collectives that harness within-group cooperation and coordination to promote shared interests and achieve shared goals. Given this, people can belong to a group without feeling allegiance to it—without treating it as part of their team (e.g., white progressives and white people)—and they can treat groups they do not belong to as part of their team (e.g., white progressives and racial minorities).
So understood, coalitions come in multiple sizes and varieties, ranging from small-scale, fleeting alliances organised around narrow goals—for example, the football teams people break into when kicking a ball around—to durable, large-scale communities such as sects, religions, nations, and political parties.
However, most coalitions confront similar problems: achieving high levels of cohesion, coordination, and cooperation; sending and monitoring signals of coalitional allegiance; recruiting new members and mobilising support; and defending the coalition’s actions and social reputation.
Within politics, many different kinds of coalitions exist. For example, over the past couple of centuries, nation-states and corresponding feelings of nationalism have significantly shaped political organisation and conflict, as have forms of coalitional competition rooted in race, ethnicity, and class. Within modern democracies today, the most salient coalitions are typically political parties. However, such parties exist alongside—and interact in complex ways with—broader social and political movements (e.g., Black Lives Matter, Extinction Rebellion, etc.), unions, and coalitions organised around support for specific leaders. (For example, in the US, many Republican voters feel more allegiance to Donald Trump than to the Republican Party.)
The question, then, is this: Why do coalitional allegiances drive motivated cognition? More concretely, why would feeling part of a political team—a political tribe—distort how one understands the world?
Coalitional Propagandists
Here is one part of the answer: When we support a coalition, we are often motivated to advocate for its interests. That is, just as we instinctively behave as lawyers, press secretaries, and propagandists for our individual interests, we apply these tendencies to the political and cultural tribes we support. We are motivated to push claims and narratives that promote our tribe’s interests, paint its actions and goals in an attractive light, defend its reputation, and discredit—and often demonise—its rivals. Such advocacy is rooted in our evolved coalitional psychology and the fact that coalitions often shower members who function as effective advocates with approval and status.
It is tempting to think that such advocacy can co-exist with objectivity. On this view, people would maintain a distinction between what they privately believe, which is oriented towards accuracy, and the claims and narratives they are motivated to spread as a way of promoting and justifying their tribe’s interests.
In practice, this rarely happens. Instead, it is a general feature of human psychology that people’s beliefs—including their deepest, most heartfelt beliefs—tend to shift in the direction of the claims and positions they advocate for. That is, humans tend to believe their propaganda. This is true at the level of individual self-serving biases; it is equally true regarding claims and narratives designed to mobilise support for one’s coalition, defend its reputation, and discredit its rivals.
In many contexts, this is obvious. Nationalists—those who treat their nation-state as a coalition they support and invest in—are notoriously biased in how they view reality, forming beliefs that paint their nation in an attractive light, exaggerate its virtues, deny or minimise its responsibility for crimes and wrongdoing, inflate its victim status, and so on. The same applies to those who treat their race or ethnicity as a salient coalitional identity.
However, similar tendencies also shape ordinary, run-of-the-mill partisan cognition. Partisanship is a “perceptual screen through which the individual tends to see what is favourable to his partisan orientation.” As I have argued elsewhere, the tendency to instinctively behave as a press secretary and defence lawyer for one’s political tribe explains why partisans tend to form beliefs that:
reflect favourably on their party
inflate their party’s successes and rival parties’ failures
exaggerate the threats and dangers their party seeks to address
exaggerate the degree to which they are victimised
As David Pinsof shows, engaged partisans also tend to instinctively apply propagandistic biases to defend the groups their party supports and attack the groups their party opposes.
Importantly, such advocacy-biased cognition is not completely insensitive to reality. (Remember the “rationalisation constraint”). Instead, just as good lawyers juggle extreme bias—they are quite literally paid to rationalise predetermined conclusions (e.g., “my client is innocent!”)—with the ability to integrate and respond to evidence, partisans are creative in selecting, framing, and interpreting reality in ways that support favoured partisan narratives. For example,
When partisans acknowledge bad economic conditions, they polarise over who to blame.
When partisans converge in their perception of economic indicators (e.g., the direction of the stock market), they polarise over their interpretation.
When partisans are forced to acknowledge bad actions by their party or leader, they frame continued support as the “lesser of two evils”.
In one interesting experiment, when Democrats and Republicans acknowledged the number of American casualties and the failure to find WMDs during the Iraq War, they polarised in how they interpreted the numbers (e.g., as “high” or “low”) and in their explanations of why weapons were not found.
Coalitional signalling
Successful coalitions require high levels of cooperation. In fact, in the context of intense intergroup competition (i.e., all of politics), achieving higher levels of coordination and cohesion than rival coalitions is typically necessary. However, cooperation is inherently challenging and fragile due to the perennial attractions of free riding. In many forms of group-based cooperation, individuals benefit from cooperating but benefit even more from free-riding on the hard work of others. If this strategy becomes possible, it rapidly spreads through the group, and cooperation collapses, leaving everyone worse off.
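The collapse described in this paragraph is easy to see in a minimal public goods simulation. In the sketch below, the payoffs, group size, and imitation rule are all invented for illustration: full cooperation pays everyone best, but once a handful of free riders appear and people copy whoever earned more, defection spreads to fixation and everyone ends up worse off.

```python
import random

ENDOWMENT, MULTIPLIER = 10, 1.6  # invented payoff parameters

def play_round(strategies):
    """Cooperators pay their endowment into a common pot; the pot is
    multiplied and split equally among everyone, free riders included."""
    share = MULTIPLIER * ENDOWMENT * sum(strategies) / len(strategies)
    return [share if cooperates else share + ENDOWMENT for cooperates in strategies]

def simulate(n=100, initial_free_riders=5, rounds=50):
    strategies = [i >= initial_free_riders for i in range(n)]  # mostly cooperators
    for _ in range(rounds):
        payoffs = play_round(strategies)
        # Imitation dynamics: each agent copies a randomly chosen other
        # agent whenever that agent earned more in the last round.
        picks = [random.randrange(n) for _ in range(n)]
        strategies = [strategies[j] if payoffs[j] > payoffs[i] else strategies[i]
                      for i, j in enumerate(picks)]
    return strategies

final = simulate()
payoffs = play_round(final)
print(f"Cooperators left: {sum(final)}/100")
print(f"Average payoff:   {sum(payoffs) / len(payoffs):.1f} "
      f"(full cooperation would pay {MULTIPLIER * ENDOWMENT:.1f} each)")
```

Changing the payoffs so that detected free riders are punished reverses this dynamic, which is one way to read the strategies discussed next.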
Coalitions have many strategies for solving this problem, including incentive structures—systems of norms, surveillance, rewards, and punishments—that encourage cooperation and discourage free riding. One such strategy involves signalling. To distinguish loyal, hard-working coalition members from fickle free riders, coalitions often invest substantially in sending and monitoring signals of coalitional allegiance and support. Are you a good progressive—a true progressive, really devoted to the cause—or merely a faker trying to virtue signal without advancing the progressive agenda?
Coalitional signalling uncontroversially shapes people’s outward behaviours, speech, and dress. However, it also plausibly shapes and distorts people’s beliefs and ideologies. This is true outside of politics. For example, much religious “belief” seems rooted in motivations to signal coalitional identity and conformity to group-specific norms. However, it is also plausibly the case in politics. Partisans are not just motivated to display bumper stickers and profile pictures that broadcast their political allegiances; they are also often encouraged to embrace and advertise whatever package of identity-defining beliefs, narratives, and positions signal their coalitional identity and allegiances.
I am less confident in this proposal (that political beliefs are biased by social signalling) than in the hypothesis that advocacy goals bias political beliefs. More research is needed. (This was my best attempt to make sense of the general phenomenon of belief signalling). Nevertheless, it illuminates a range of phenomena. For example,
Partisans often seem emotionally attached to beliefs and narratives and proud to affirm them.
Partisans often adopt semi-random (“rationally orthogonal”) packages of beliefs, policies, and narratives associated with specific political tribes. (Why do your beliefs about gender identity predict your beliefs about climate change, “capitalism”, and the Israel/Palestine conflict, for example?). This is puzzling from the perspective of being even mildly thoughtful about politics but intelligible from the perspective of signalling support for a political coalition.
Partisans are often eager to advertise their sincere commitment to specific beliefs—to display that they are true believers.
Adoption and abandonment of beliefs often seem to track social incentives (although the evidence here, such as it is, mostly concerns religious beliefs; I suspect the same is true of politics, but as far as I know, it is not much studied).
People often adopt a strong “orthodoxy mindset” in politics that makes sense in terms of signalling but is difficult to understand in terms of the pursuit of truth.
Summary
The ideals of open societies sound good. I think they are good. Nevertheless, such societies confront considerable epistemic challenges that must be acknowledged. Given the existence of complexity, invisibility, rational ignorance, and politically motivated cognition, it seems unlikely that citizens—the ultimate locus of decision-making within such societies—will form responsible, rational, accurate beliefs about reality. And without such beliefs—without informed public opinion—it is difficult to see how open societies can make good collective decisions.
In the next post, I will argue that the situation is actually even bleaker than the picture I have painted here and suggest some ways of attempting to improve it.