The marketplace of good and bad ideas
On why the free exchange of ideas is a complex breeding ground for truth, appealing falsehoods, and self-serving rationalisations.
“Societally corrosive beliefs can persist when they are intuitively appealing or they serve some believers’ agendas” - Manvir Singh
I. The Marketplace of Ideas
According to some people, the truth will ultimately emerge from free and open debate within the “marketplace of ideas”. This is too optimistic. However, it contains an important grain of truth: when people are free to exchange information in domains where they care about accuracy and are capable of testing ideas, they tend to share reliable information. Moreover, there are quasi-economic reasons—reasons to do with demand, exchange, competition, and incentives—why this is so.
In this essay, I will sketch a simple economic model that explains why human communication and the division of cognitive labour are often organised around the transmission of reliable information. I will then identify two systematic reasons why the marketplace of ideas often falls short of this ideal. The first is that in evaluating untestable ideas, people often rely on systematically inaccurate intuitions. The second is that people frequently seek out information not to get at the truth but to rationalise self-serving beliefs, narratives, and decisions.
I will conclude with some broader lessons about censorship, the necessity of trustworthy knowledge-generating institutions, and the importance of rewarding better contributions to public debate.
II. The Marketplace of Good Ideas
The puzzle of epistemic cooperation
Humans are epistemically cooperative. We acquire—and depend on acquiring—a vast amount of information (concepts, beliefs, ideas, skills, etc.) from others. Moreover, we acquire most of this information because others deliberately share it with us through communication, instruction, and argument.
This epistemic cooperation is unusual. Although many other species engage in social learning, we are unique in the massive amount of information we share and in our reliance on that information for survival and success. From an evolutionary point of view, this extensive communication is puzzling. Like all sharing, information sharing looks like a form of altruism, a tendency to place another organism’s interests above one’s own. Such tendencies are normally filtered out of the gene pool. A Darwinian analysis would therefore seem to imply a picture of human beings as information scroungers, eager to extract information from others but reluctant to share it. This is rarely what we find. If anything, people are often more eager to talk than to listen.
The economics of communication
As with much of human social life, the solution to this puzzle rests on incentives. People generally want accurate information relevant to their interests. Given this, they tend to reward those who give them such information with trust, sympathy, and gratitude. Likewise, they punish those who are deceptive, unhelpful, or just boring. For example, think of the anger you might direct at someone who deceives you, or the motivation to avoid those who are uninteresting and self-absorbed.
These reactions help to align individual self-interest with honest and helpful communication. However, as with most human cooperation, they are also scaffolded by broader social incentives involving norms, reputations, and gossip. If you deceive someone and the deception is discovered, gossip ensures that your reputation suffers in the eyes of many others. Moreover, because of community norms against dishonesty, you have not just wronged the person you deceived; you have wronged the community.
Of course, gossip is itself a form of communication, and people are encouraged to transmit it for the same reason they share other information: because audiences are strongly motivated to acquire “juicy” and “delicious” gossip, and so will reward those who share it with them.
Humans depend on social approval and a good reputation. By tying people’s self-interest to honest and helpful communication, this simple system of incentives tends to favour the transmission of reliable information. In our deep-rooted desire for and dependence on a good reputation, we are typically led—as if by an invisible hand—to communicate in accurate and diligent ways.
The economics of cognitive labour
So far, this analysis focuses on cases where people share information they happen to be in possession of. For example, I can tell you tomorrow’s weather forecast if I was motivated to learn it for self-interested reasons. However, human beings sometimes invest time, energy, and other resources into generating or acquiring information for the purpose of sharing it with others. That is, we engage in cognitive labour.
In the modern world, the prime examples of cognitive labour are things like journalism and science. However, cognitive labour is not a uniquely modern phenomenon and it is not essentially tied to money. It is motivated and coordinated by an ancient and universal currency: social status. As sociologist Robert Merton pointed out, this is true even of modern institutions like science, within which scientists compete to get “credit” (i.e., recognition) for originating new ideas, discoveries, insights, and so on. It is also true of journalists and writers, who are often far more motivated by prestige than by financial rewards.
An epistemic bazaar
Stepping back, an extremely simplified, quasi-economic model of epistemic cooperation therefore takes the following form:
People (i.e., consumers) seek out information that is relevant to their interests in much the same way that people seek out other goods and services.
Given this, they are motivated to reward (via approval, sympathy, respect, trust, etc.) those who share such information with them. Equally, those who share dishonest or unhelpful information are punished either directly or reputationally.
As a consequence, a system of social profits and losses encourages people to invest time and resources into generating (via cognitive labour) and sharing (via communication) information that other people value.
Because people tend to value reliable information relevant to their interests, these incentives generally favour the social transmission of accurate and helpful information.
Of course, this model is cartoonish and misses many complexities and subtleties. Nevertheless, it provides a simple and illuminating explanation of an otherwise puzzling fact that too many social scientists and philosophers take for granted: that human communication and the division of cognitive labour are often highly successful. Moreover, its simplicity helps to highlight systematic reasons why this success will sometimes fail to materialise.
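To make these quasi-economic dynamics a little more concrete, here is a minimal toy simulation of the model in Python. It is only a sketch built on arbitrary assumptions of my own (the sizes of the reputational rewards and penalties, the probability that a claim is ever tested), not an empirical estimate of anything. Its point is simply that when audiences can sometimes test what they are told, and reward or punish communicators accordingly, attention and trust flow towards honest communicators; when nothing can be tested, that advantage disappears.

```python
import random

# Toy simulation of the "epistemic bazaar": one communicator shares accurate
# information (honest), the other shares appealing falsehoods (dishonest).
# Audiences listen to whoever has the better reputation, occasionally test
# what they are told, and adjust the speaker's reputation accordingly.
# All parameter values are arbitrary assumptions chosen for illustration.

def simulate(p_test, rounds=2000, seed=0):
    rng = random.Random(seed)
    reputation = {"honest": 1.0, "dishonest": 1.0}
    for _ in range(rounds):
        # Attention is allocated in proportion to reputation.
        total = reputation["honest"] + reputation["dishonest"]
        speaker = "honest" if rng.random() < reputation["honest"] / total else "dishonest"
        accurate = speaker == "honest"
        if rng.random() < p_test:
            # Testable claim: accuracy is rewarded, deception is punished.
            reputation[speaker] += 0.1 if accurate else -0.2
        else:
            # Untestable claim: the audience can only reward the telling itself,
            # which favours neither strategy.
            reputation[speaker] += 0.05
        reputation[speaker] = max(reputation[speaker], 0.1)
    return reputation

print(simulate(p_test=0.5))  # testable world: the honest communicator dominates
print(simulate(p_test=0.0))  # untestable world: no systematic advantage for honesty
```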
III. The Marketplace of Bad Ideas
There are many reasons why the processes just described can fail to generate good ideas. Here, I will focus on just two:
Untestable ideas and inaccurate intuitions.
The search for self-serving justifications.
Untestable ideas and inaccurate intuitions
The paradox of information economies
There is a paradox at the heart of a marketplace of ideas. If I learn something from you, I am not in a position to know whether the relevant idea is true or false. If I were in such a position, I would not need to acquire the idea from you to begin with. This is why the advice “Trust, but verify” is typically incoherent. If you are in a position to verify, you do not need to trust. Likewise, if those “buying” ideas are in a position to verify the ideas, they do not need to buy them. But then how do buyers evaluate ideas?
One answer to this question is that people do not evaluate ideas directly. They evaluate the trustworthiness of those selling ideas. However, this merely kicks the can down the road. What grounds could you have for thinking a source is trustworthy if you are never in a position to evaluate the veracity of the ideas they share?
Of course, sometimes we can rely on the fact that we have overlapping interests with a source, but this alignment of interests is rarely absolute, even when it comes to family members. Further, sometimes we can rely on testimony about the source’s trustworthiness from other people we trust, but—again—this merely postpones the puzzle. How can we evaluate whether that testimony is trustworthy?
Plausibility checking
I think there are two basic solutions to this paradox. One is that even though we cannot know the truth of ideas that are communicated to us, we can and do evaluate their plausibility. If someone tells me something that aligns with my general worldview, I am more likely to think it is true. Likewise, I am less likely to accept counterintuitive claims. This process of “plausibility checking” is central to human communication. Moreover, it is supplemented by the process of argumentation. Even when I deem a message to be implausible, its source might be able to present me with an argument that reveals its coherence with other things I believe.
Testable ideas
However, there is another factor at play in the evaluation of communication: most mundane ideas transmitted among people are testable. If Sally tells me that Bob is a horrible bastard, and I later spend time with Bob and discover he is a lovely guy, I will lose all trust in Sally. I might even be angry and spread gossip about her. This process, whereby people can ultimately test the information they receive from others, gives communicators a strong incentive to be honest. It also helps us to escape the conservatism of plausibility checking. I will be more willing to accept counterintuitive ideas if the ideas are testable because I know I will be able to hold the communicator accountable if they mislead me.
Importantly, this dynamic applies even if I will not personally be in a position to test an idea, but others that I trust will. To the extent that ideas are directly testable, we have much greater resources at our disposal when it comes to evaluating them and hence evaluating the trustworthiness of those who spread them.
Untestable ideas
Unfortunately, many ideas are not testable in this sense. In fact, in complex, modern democracies, we are rarely in a position to directly test the information we acquire from others within the public sphere, either because it concerns phenomena that are too distant in space and time, or because the phenomena are too complex. (Think of the difficulty of testing policy or medical interventions without the capacity to run randomised controlled trials.) Under these conditions, we are therefore forced to rely mostly on subjective plausibility assessments in evaluating ideas.
It should be clear why this is a problem: In many domains—in fact, in most domains outside of those we evolved to deal with (i.e., middle-sized physical objects and small-scale social dynamics)—our subjective evaluations of plausibility are systematically unreliable. In areas as diverse as medicine, cosmology, biology, economics, sociology, and politics, people often have strong intuitions that are completely misaligned with the truth. For example, many people:
have a strong instinctive aversion to vaccines;
have a strong intuition that disease is produced by toxic substances that should be removed from the body (e.g., through bloodletting);
conceptualise economic transactions as inherently zero-sum (i.e., win-lose);
think that minds can be separated from bodies;
believe that complex social and political problems have obvious solutions;
are disposed to interpret politics in a paranoid, conspiratorial way.
Although the origins of these and many other kinds of intuitions and interpretative tendencies are complex, they lead people’s subjective understanding of reality to be systematically distorted. Given this, when people draw on them to evaluate ideas, they end up making bad evaluations. In the competition to win social and financial rewards, this in turn encourages people—pundits, media outlets, journalists, writers, and so on—to generate and share ideas in response to these bad evaluations. That is, it leads to a marketplace of subjectively appealing but unreliable ideas.
I think this helps to explain a lot of the low-quality information circulated in the public sphere. That is, many bad ideas emerge not through top-down attempts to manipulate audiences but from competition to win attention, respect, and trust from audiences who reward content that aligns with mistaken intuitions. Moreover, because the ideas are fundamentally untestable, the audiences involved are never in a position to learn that the information they are acquiring is of such low quality.
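A small simulation makes this selection pressure vivid. In the sketch below (again, built entirely on invented numbers), many producers compete for audience rewards, each publishing either accurate but counter-intuitive content or intuitively appealing but unreliable content, and low-earning producers imitate the strategies of high-earning ones. When claims cannot be tested, the population of producers drifts towards appealing falsehoods; when they can, it drifts towards accuracy.

```python
import random

# Toy "attention market" with many competing producers. Each publishes either
# accurate-but-counterintuitive content or appealing-but-unreliable content.
# Audiences reward whatever matches their intuitions unless they can test the
# claim, and producers imitate better-rewarded rivals, so the mix of content
# tracks whatever audiences happen to reward. All parameters are arbitrary.

def content_mix(p_test, producers=100, generations=50, seed=1):
    rng = random.Random(seed)
    strategies = ["accurate" if i < producers // 2 else "appealing" for i in range(producers)]
    for _ in range(generations):
        rewards = []
        for s in strategies:
            if rng.random() < p_test:
                rewards.append(1.0 if s == "accurate" else -1.0)  # tested: accuracy pays
            else:
                rewards.append(1.0 if s == "appealing" else 0.0)  # untested: appeal pays
        # Each producer copies a randomly chosen rival if that rival earned more.
        new_strategies = []
        for i in range(producers):
            j = rng.randrange(producers)
            new_strategies.append(strategies[j] if rewards[j] > rewards[i] else strategies[i])
        strategies = new_strategies
    return strategies.count("appealing") / producers

print(content_mix(p_test=0.0))  # untestable domain: appealing content takes over
print(content_mix(p_test=0.7))  # testable domain: accurate content takes over
```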
Overcoming unreliable intuitions
To get around this problem, modern societies have designed various epistemic institutions—most obviously the institution of science—which function to produce reliable and often counter-intuitive knowledge via the rigorous application of scientific and statistical methodologies. At least when these institutions function effectively, ideas win acceptance not because they are intuitively appealing—in fact, some of modern science (e.g., quantum mechanics) is incomprehensible—but because they satisfy rigorous criteria of predictive success, coherence, precision, and so on.
Those of us who recognise science (or at least certain parts of science) as a uniquely reliable source of knowledge are therefore willing to replace our pre-scientific intuitions with scientific information. However, the problem is that large numbers of people do not trust science, either in general or in specific contexts (e.g., public health), in some cases for good reasons. As I have argued elsewhere, this suggests that many problems associated with “misinformation” in modern society are ultimately symptomatic of this lack of trust.
The marketplace of rationalisations
The game of giving, asking, and paying for reasons
Setting aside things like jokes, art, and entertainment, for the most part when we seek out information from other people, we are interested in truth. More precisely, we are motivated to learn about those features of reality relevant to our interests. I follow the news to learn what is happening in the world. I watch film reviews to learn what films are worth watching. I read gossip magazines to learn who Taylor Swift is dating. And so on.
However, sometimes we seek information not to figure out the truth but to acquire justifications for our preferred beliefs, narratives, and behaviours. Human social life involves what the philosopher Wilfrid Sellars called a “game of giving and asking for reasons”. If you want to persuade others of something—that a claim is true, a narrative is accurate, a decision is moral, and so on—you must often provide them with reasons. In this sense reasoning is frequently a social activity, one bound up with social processes of argument, persuasion, and reputation management.
In many cases, the task of coming up with reasons falls on individual reasoners. If I want to persuade you that my beliefs are true and decisions rational, the task of coming up with justifications for these beliefs and decisions will likely fall on me. However, sometimes we can outsource this task to others and leave the hard work of generating effective justifications to them. In a way, this is what lawyers, press secretaries, and public relations teams do for people and companies. They deploy resources and ingenuity in the service of selecting, framing, and packaging facts in ways designed to rationalise predetermined conclusions—most commonly, that their client's actions are good and just. In return for their services, they are often handsomely rewarded.
As I have argued elsewhere, much communication within the public sphere functions very similarly. It involves a marketplace of rationalisations, an informational economy in which people compete to produce high-quality justifications of favoured beliefs, narratives, and decisions in society. In return for their cognitive labour, such rationalisation producers sometimes receive financial rewards. However, as with human communication and cognitive labour more broadly, the more fundamental currency in such markets involves social rewards such as attention and status.
The consumer demand for self-serving justifications
In politics, there is no such thing as a disinterested or impartial observer. Humans are self-serving, status-seeking, hypocritical, sectarian apes. When it comes to politics, we are therefore motivated to act in ways that promote our interests and—more commonly—the interests of political and cultural tribes that we identify with. Similarly, we are strongly motivated to push narratives that advance our individual and tribal interests.
These motivations create a strong demand for rationalisations—that is, for evidence and arguments that justify self-serving and tribal beliefs, narratives, and decisions. For example:
People seek out arguments for why their interests align with the public good. (Cutting taxes on the wealthy stimulates economic growth. Social arrangements that benefit elites are “natural”. Etc.).
Low-status, anti-establishment groups seek out arguments for why elites are evil and the establishment is bad.
People who benefit from establishment arrangements seek out arguments for why such arrangements are good and their critics are bad.
Loyal members of competing tribes (e.g., ethnicities, nations, political parties, ideological groups, factions in culture wars, etc.) seek out arguments for why their side deserves power, status, and resources; why their side’s actions are good and just; and why the opposing side is bad.
People seek out arguments for why the trendy beliefs that maximise social status within their social milieu are rational.
And so on.
The production of self-serving rationalisations
Because people are motivated to acquire evidence and arguments that rationalise their preferred beliefs, decisions, and narratives, they will attend to those who are capable of generating such rationalisations and reward them with respect, admiration, and trust. Given this, producers in the marketplace of ideas compete to win such social rewards. This competition selects for the production of high-quality rationalisations and coordinates cognitive labour in such a way that people devote time, resources, and ingenuity to the task of generating and sharing ideas that justify whatever pre-determined conclusions happen to be favoured in society.
In my view, this simple idea helps to explain a lot of the highly biased and misleading communication you find in the public sphere. For example, there is extensive evidence that much of media bias is demand-driven and that partisan media skew their reporting in ways that favour the political sympathies of audiences. I think the role of rationalisation production is even more obvious when it comes to successful pundits, opinion journalists, commentators, and so on, many of whom behave much more like lawyers or press secretaries than thoughtful, balanced, and reasonable analysts.
Are rationalisation markets bad?
Importantly, in some cases, rationalisation markets are not necessarily bad from the point of view of truth. Just as the legal system can benefit from adversarial arguments in which lawyers seek the most persuasive arguments for opposing sides, society can sometimes benefit from competition between the self-serving rationalisations sought out by different groups. However, I think it is clear that if you look at actually existing rationalisation markets within society, they generally fall short of this ideal.
I am not entirely sure why this is, but I suspect it has to do with the fact that the media is highly fragmented, meaning that the arguments of different sides are rarely brought into contact. Moreover, there are also profound asymmetries in the power, resources, and influence of different interest groups within society, which arbitrarily favour some narratives over others. Unlike an ideal courtroom, where the prosecution and defence have access to equally capable lawyers and arguments are exchanged in an orderly and rule-governed fashion, in modern democracies the most powerful and politically engaged possess much more skilful lawyers, and they often do not bother engaging with or responding to the arguments of outsiders.
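As a rough illustration of why contact and symmetry matter, here is a toy sketch in which each side's best argument improves with the resources it can spend, being right confers only a modest bonus, and a correct verdict requires an audience that actually compares the two sides. The numbers are invented assumptions; the sketch is only meant to show how fragmentation and asymmetric resources can erase whatever edge the truth would otherwise enjoy in an adversarial contest.

```python
import random

# Toy sketch of an adversarial rationalisation market. Each side produces
# arguments whose persuasiveness is mostly noise plus a small bonus for being
# right; more resources mean more drafts, of which the best is kept. An
# audience that compares both sides adopts the more persuasive one; a
# fragmented audience never compares and simply keeps the view of its own
# side. All numbers are arbitrary assumptions chosen for illustration.

def best_argument(is_right, resources, rng):
    return max(rng.random() + (0.1 if is_right else 0.0) for _ in range(resources))

def share_correct_verdicts(res_right=1, res_wrong=1, fragmented=False, trials=10000, seed=2):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        if fragmented:
            # No contact between the sides: being right confers no advantage.
            correct += rng.random() < 0.5
        else:
            right_case = best_argument(True, res_right, rng)
            wrong_case = best_argument(False, res_wrong, rng)
            correct += right_case > wrong_case
    return correct / trials

print(share_correct_verdicts(1, 1))             # symmetric contest: truth has an edge
print(share_correct_verdicts(1, 5))             # wrong side better resourced: the edge erodes
print(share_correct_verdicts(fragmented=True))  # fragmented audiences: a coin flip
```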
IV. Summary
The idea that truth will ultimately emerge victorious in a free marketplace of ideas is incorrect. Although there are principled reasons why the free exchange of ideas often favours the transmission of reliable information, this breaks down both (i) when the information is untestable and concerns counter-intuitive truths and (ii) when people seek out information to justify self-serving narratives and decisions. Unfortunately, both conditions—that is, the combination of untestable ideas and unreliable intuitions, and the search for rationalisations—are central features of free and open debate within complex, modern societies. Even setting aside deliberate propaganda, then, we should expect this free and open debate to frequently favour highly misleading ideas, which is of course exactly what we find.
Sometimes people draw on the concept of a naturally truth-seeking marketplace of ideas to argue against censorship. Given this, you might think the arguments given here are arguments in favour of censorship. That would be a mistake. Market failures do not justify state interference unless state interference is immune from even worse failures. When it comes to censorship, state failures are likely to be worse. Moreover, many arguments against censorship—including some I have developed elsewhere—have nothing to do with truth anyway.
Nevertheless, if the arguments developed here are correct, they do demonstrate something that should have been obvious anyway: in complex domains where people are not disinterested truth seekers, there is no natural tendency for the truth to win out. This is why, throughout history, our species has wallowed in ignorance, superstition, misperceptions, and myths in its beliefs about those parts of the world beyond immediate material and social existence. As Robert Edgerton puts it in Sick Societies,
“The bulk of available evidence suggests that people in all societies tend to be relatively rational when it comes to the beliefs and practices that directly involve their subsistence… The more remote these beliefs and practices are from subsistence activities, the more likely they are to involve nonrational characteristics.”
Two important lessons can be drawn from these considerations.
First, science and expertise are indispensable in modern democracies. To the extent our untutored intuitions are often unreliable, we must invest in knowledge-generating institutions that overcome our natural limitations and generate counter-intuitive knowledge about distal and complex domains. For democracies to be successful, these institutions must therefore win public trust, which means they must be trustworthy—much more trustworthy, and much more reliable, than they currently are.
Second, to the extent that many pundits, writers, opinion givers, and so on effectively function as lawyers for different factions and interests in society, we should recognise that this often undermines the collective pursuit of knowledge and therefore treat such behaviour with contempt. Likewise, we should strive to reward those who make better, more thoughtful, and less biased contributions to public debate with admiration and respect. Humans will never be disinterested truth seekers, but it is possible to construct norms and incentives that channel the human desire for status and social approval into collectively beneficial ends.
Further reading
“The Enigma of Reason” (by Hugo Mercier and Dan Sperber) and “Not Born Yesterday” (by Hugo Mercier) are in my view the best books on human communication, social learning, and reasoning.
I explore some of the ideas in this essay in much greater depth (and with greater academic rigour) elsewhere, including here, here, and here.
Lionel Page has an excellent series on the marketplace of ideas, including on the marketplace of rationalisations.
For a somewhat different but in many ways complementary take on the concept of a marketplace of ideas, Jonathan Rauch’s The Constitution of Knowledge is worth reading.