A deep dive into four factors that prevent members of open societies from understanding political reality: complexity, invisibility, incentives, and politically motivated cognition.
Interesting stuff. Another dimension....
Pre-internet: information (and disinformation) about the world beyond the individual's direct experience was essentially a scarce resource. Demand exceeded supply.
The internet has changed this information 'landscape' in profound ways that we as a civilisation have hardly begun to come to terms with. The gigantic 'supply' of information/disinformation now exceeds the demand for it. Keeping sane and 'centred' as a citizen now involves (amongst other things) learning how to partially shield oneself against this great noisy digital wind. It has probably become more of a boon to the less sane and centred than the other way round.
Yep - plausible. Although there are benefits to the internet as well, so it's difficult to make too-strong generalisations.
Did you think I was saying there are no benefits to the internet?....Surely not! (In fact I think the internet has been the best (non-personal) thing to happen in my lifetime.)
I was simply talking about another dimension to the kind of changes that you were discussing in your own post. What do you mean by "plausible"? Was my observation not self-evidently true? It had no implicit agenda of any kind.
Much of the information before the Internet was far beyond our immediate experiences as well, and even more so, I would claim, because information and narratives were untouchable by the people, who had no means to create counter-narratives and arguments. Today, everyone with a digital device can produce information.
You seem to have entirely missed my point. Yes of course "Much of the information before the Internet was far beyond our immediate experiences" and I didn't say it wasn't.
My point was about ease of access to such information....you had to invest effort in getting it and so be selective (i.e. demand exceeded supply). What the internet has done is deluge people with such information (and disinformation) without much (or any) intellectual investment on their part (i.e. supply exceeds demand).
To be fair, this was made perfectly clear in my original comment.
Fantastic insights, Dan!
There are several points of overlap with my upcoming positions, one of which relates to noise more generally, which is why I am replying here to include Graham.
First and most generally, I want to point out the analogy between "Rational Ignorance" and what I consider to be the distribution problem of Cynicism, approaching some point of saturation. If Cynicism in effect reduces trust, effort and sensitivity to the efforts of others, then it becomes increasingly rational for each (marginal) person to be cynical on a similar cost/benefit basis. Your points were so well presented that I feel I need not say more to clarify what I mean.
What I most appreciated about this post was your care in addressing what motivated cognition is not. Echoing this, one of the points I wish to make in my writing is that "confirmation bias", in its typical use, is a rhetorical accusation that, ironically, is all the more likely to arise through the confirmation bias of the accuser. What's strange is that the real, passive bias gets replaced with a prisoner's dilemma that can easily result in people who agree defecting at the slightest sign of disagreement. I won't expound too much on it here, but I view confirmation bias and "conformation" bias (the proposed active ingredient in mob mentality, groupthink, coalition psychology, etc.) as offering a kind of dialectical "binocular vision."
Finally, your four factors (complexity, invisibility, ignorance and tribalism) are similar to what I call, tongue in cheek, "The Four Horsemen of Cynicism": Noise, Urgency, Exhaustion and Blame, which, by their powers combined, make us susceptible to what I call "Saliencentrism." Essentially, each of the four compoundingly and cyclically saps the resources and capacities we need for managing our own attention. When there is an available "breadcrumb trail" of dopamine hits punctuated by noise, with people selling us on urgency, showing us low-hanging fruit to blame and high-hanging fruit that is out of our reach, our attentional resources struggle to recover from perpetual exhaustion and susceptibility to repeat cycles.
Cheers! Which I guess is a weird way to sign off after that description...
Very interesting observations. Especially the connection to the idea of four horsemen of cynicism - I need to think more about that…
I have a higher-resolution description if you would like it, but I also know that sometimes I prefer to chew on something before seeing someone else's answer. Otherwise I feel I might become overly anchored by someone else's take, after which exploring the territory feels subtly guided by someone else's map.
What I might add to your considerations is the potential stakes, given the shift from algorithms that we already suspect learn to exploit attentional vulnerabilities to AI that we can fully expect will be trained for combinatorial efficacy. Among many other things, AI holds huge leverage over noise. People already project onto inkblots and clouds, and I don't want to find out what the AI Slop equivalent of 3D posters might be.
This comes back to whether Cynicism is meant to perform a "Safe Mode" function for cutting through noise, getting back to the basics, and troubleshooting what the naive of yore have done with the place. But what happens if the noise and complexity keep pace? What will the next generation of naïfs do after being starved of meaning for an extended period? It feels to me like we could get caught between a critical mass and an escape velocity.
Also, apologies if I am coming across as something more than urgent. I'm not sure how to navigate epistemic humility, not-quite-alarmism and generally niche considerations.
Ha...your thoughts are both deep and timely. The leap from traditional algorithms to advanced AI is significant and could increase the noise we're already dealing with. The way you compare our reactions to AI-generated content to seeing shapes in inkblots or clouds is spot on. It shows how easily we can get caught up in personalized noise, making it harder to find real insights.
I find the idea of cynicism as a "Safe Mode" to cut through the noise and get back to basics interesting. But if the complexity keeps growing, the next generation might struggle to find meaning. It's a real challenge to be cautious without sounding alarmist.
I think your concerns are understandable, and we need to work together to foster critical thinking and resilience. Navigating this new landscape will require both individual and collective efforts to truly understand these technologies. Thanks for sharing such a thought-provoking perspective!
I appreciate your thoughtful response!
Re: new generations struggling to find meaning, there's a sense in which the common "rebel phase" of youth (and beyond) offers a safety valve. Cynics often turn into lazy gatekeepers and settle for purity tests, trials by fire and other means of weeding out naivety. The joke ends up being on them when the majority of people who "pass" their filters are those that rebel against them and simply "go around," naivety fully intact. Lazy flavors of cynicism quickly turn into new forms of naivety!
Lazy flavors of cynicism quickly turn into new forms of naivety?
Graham, you’ve brought up a crucial point about the impact of the internet on our information landscape.
I find that the shift from scarcity to an overwhelming abundance of information has indeed changed how we navigate the world. It’s true that staying sane and centered now requires us to learn how to filter and manage this constant influx of 'sucky fluff'.
Thanks for highlighting this important aspect.
Thanks. I think you would find this an interesting read. https://grahamcunningham.substack.com/p/take-me-to-your-experts
This is a strong bid to wrestle with the forces that often underlie and undermine our 'rational' pursuit of whatever we have defined as 'good' (or have accepted someone else's definition of). Thank you. The matter is crucial if we wish to move from wishful thinking to acting in good faith to good effect (and also want to be sure we are not lying to ourselves or virtue signalling to ourselves/others/God Above etc).
I was reminded of the Elephant/Mahout distinction Jonathan Haidt draws in The Righteous Mind.
Thanks Christopher. And yes - I'm a big fan of that book.
Thought provoking throughout, thank you!
Despite the challenges and shortcomings outlined, and the dire concluding note ("the situation is actually even bleaker than the picture I have painted here and suggest some ways of attempting to improve it."), one must also consider the enormous good that open societies and institutions in large nation states have already done and continue to do. Just a handful of random examples: eradication of many deadly pathogens, exploration of outer space, harnessing the energy of the atom, long healthy lives, wealth and riches, and countless other things that small hunter-gatherer societies cannot even comprehend.
Completely agree - that is why I am pro open societies!
Nice piece, Dan. I found the analogy between lawyers in an adversarial legal system (being "quite literally paid to rationalise predetermined conclusions”) and coalitional groups in society particularly compelling. In the legal system, each side vigorously argues its case, with the belief that the truth will emerge through this process. If this serves as a microcosm for the broader problem you’re discussing, then might we not also expect truth to emerge when members of various coalitional groups in society advocate for their perspectives?
I guess there are multiple reasons to be sceptical. For one thing, just as some parties in legal cases can afford much more effective representation than others, not all groups in society have equal access to platforms for expression; so more powerful groups dominate the discourse. And a disanalogy with the legal system is that in broader society there is no oversight from an impartial judge. So mechanisms to ensure fair and evidence-based debate are critical.
Thanks Ryan. Good questions - I genuinely don't know what to think. A few thoughts:
- It's not obvious to me that the truth does reliably emerge even in the ideal adversarial conditions of the legal system. Moreover, I think Nina Strohminger has some interesting work suggesting that advocacy even biases the judgements of lawyers - even lawyers who explicitly think it doesn't and shouldn't bias their judgement - in ways that lead to worse decision-making on their part.
- Yes, completely agree re. the issues about power differences between groups and the absence of a judge. I think there are also issues to do with trust that make things complicated. In the US, for example, the narratives promoted by different sides seem to be less and less constrained by a shared basis in reality, at least in part because of Republican distrust of institutions.
- Another disanalogy with the legal system is that we have set it up so that lawyers are incentivised to generate arguments even for socially stigmatised conclusions (e.g., acting as a defence lawyer for someone who many think is guilty). However, for various reasons nobody is encouraged to generate - and in many cases people are actively discouraged from generating - arguments for specific conclusions in the context of public debate and deliberation.
Just some half-baked (and not very satisfying) thoughts!
Regarding "Partisans often adopt semi-random (“rationally orthogonal”) packages of beliefs, policies, and narratives associated with specific political tribes. (Why do your beliefs about gender identity predict your beliefs about climate change, “capitalism”, and the Israel/Palestine conflict, for example?)":
This might be post-hoc rationalization but to me the common denominator is oppression/domination. Non-cis persons are often oppressed minorities, Palestinians have lived under an apartheid occupation for decades, capitalists exploit and dominate workers, and (over)exploit nature.
It could be - but I’m sceptical. Are the people who fall on the other side of the divide “pro-oppression”? Seems implausible - the divide seems rather to concern what constitutes oppression in specific cases.
I suspect that they're pro-"status quo": not shaking up the gender identities that European societies have taken for granted for the last couple hundred years, not changing the economic system that's been running for the last two hundred, not siding with the insurgents against a state actor. Their priorities are different: not upsetting the status quo, which they assume is in existence for a reason (obviously I'm speculating here), is more important to them than alleviating cases of oppression.
For me the question is actually one of leaving certain beliefs out: I don't get leftists that are anti-trans (or anti-feminist, for that matter), I don't understand vegans that are not anti-capitalist, etc. (well, except for the "health"-guided ones). There's an organizing principle there that I haven't found yet.
I do wonder if this is rationalization. Your last sentence seems to convey a directional certainty on some subjects that are gray areas to anyone approaching them objectively and individually, and that direction is aligned with a foundational belief (intersectionality & power dynamics) of a specific coalition (what some would call "woke").
I suspect that the idea that anyone who supports one of these causes should support all of them, despite, say, the inherent contradictions of organized labor vs. degrowth climate change advocacy (as an example), is exactly the kind of idea that's strongly influenced by coalitional psychology. In fact, the union of a large number of inherently contradictory causes vis-à-vis a theoretical abstraction (oppressor/oppressed) exclusively defined and promoted by a coalition that's highly motivated by identity would seem to be a textbook example of what's being discussed.
Not a criticism, just an observation.
First off: criticize to your heart's content, wooden beam in the eye and all that.
Second: as I wrote in my original comment, and as anybody reading this blog probably bears in mind, this might indeed very well be rationalization. I work in ML research so I always try to find the patterns, and it is quite possible that humans don't come to their beliefs like this. But it's also possible that they have one or several core values, which inform which beliefs they pick up, no?
My veganism as a case in point: I don't have an emotional reaction to animal suffering, as many vegans I've met do, but I could not be in favor of slowing global warming and against oppression and continue to eat meat and dairy.
Finally: I find it interesting that you think organized labor and "degrowth" have "inherent contradictions". Degrowth, as I understand it, means producing less surplus and using less energy, neither of which actually means fewer jobs, just different ones, hopefully less tedious ones. A great example of degrowth, for instance, would be stopping the generative AI idiocy right now.
I think the question of core values vs. political beliefs is an interesting one, and I wholly agree that one influences the other. For example, the desire to protect and improve the position of the marginalized and oppressed in the world comes from core values of fairness and equality. To the extent that "core values" are taken as personal priors or biases, these are good ones to have.
However, I think we can differentiate between core values, and political beliefs.
Whereas a core value, once taken, can be held axiomatically, taking on a political belief requires us to rationalize and assess whether that particular position or philosophy is good or not relative to the alternatives, considering pros, cons, tradeoffs, etc. I suggest that it is in this process where coalitional reasoning may be at work (to the extent that anyone does this instead of just accepting a belief that their coalition hands to them).
For example, I believe most people hold as a core value that killing a fellow human is bad, and we ought not to do it. However, people with that same core value could reasonably argue either that 1) pacifism is just because all violence is bad, or 2) a strong military is just because we need to protect our people from external violent actors. Neither violates that core value; they are just different positions arrived at using some sort of reasoning.
Do people truly arrive at different positions because their rationality has led them to select different modes of reasoning, like different ethical systems or mental frameworks? Or is it that they find a mode of reasoning that allows them to arrive at the position that's held amongst their coalition? We see the latter all the time. Clarence Thomas is an Originalist so long as it suits the conservative agenda.
And so, returning to the core values of equality and fairness - I would argue that most people hold these values. But it's not obvious at all that someone with these values would necessarily embrace intersectionality and power dynamics as the de facto framework for assessing the world and forming political positions. Someone with these values could be stridently meritocratic. They could be radically against all organized religion. They could be a neoliberal. In each case, they've applied some reasoning to get there, and I'd wager that reasoning is certainly influenced by coalitional psychology.
Anyways these are just my opinions.
I agree with most of what you're writing, especially since I've seen this shift in myself over the years, particularly w.r.t. what's often called "meritocracy". I was all for it when I was younger, before I started reading texts pointing out that what you start out with - family situation and income, educational level of parents, personal health, gender, etc. - impacts so strongly on one's development that "we give everyone the same starting point and assessment" tends to produce unfair outcomes.
Just two quibbles: 1) neoliberalism in practice means the state intervening on the side of corporations and at the expense of workers. I've some trouble seeing how this aligns with core values of equality and fairness.
2) I don't think that most people hold those two values. At best, this might be true for people who look like them but "I'm inherently better so I deserve more" underlies so many belief systems that a lot of people get exposed to it as children and probably never unlearn that.
I think you've defined signalling a bit too narrowly. It's not merely signalling to demonstrate that one is a coalition member; it is usually signalling about values, with the desire to be perceived a certain way then creating strong pressure for motivated reasoning. Indeed, there are interesting studies showing that people make different choices when they think their 'vote' is one of a tiny number or one of many.
For instance, consider an urban liberal who cares a great deal about people who can't afford housing and is deciding whether to support a policy that requires developers and landlords to offer a certain amount of affordable housing. If they were ideally rational, they'd just evaluate whether such a policy increases or decreases affordable housing, independently of their value judgement.
The problem is that they see that people who say such policies make housing less affordable get accused of being shills for landlords, because you might reach that view because you really care about affordability, or you might reach it because you care about landlords and it's a convenient argument. OTOH, the only salient reason to advocate for said policy is because you do care (or want to be perceived as caring) about affordability.
And yes, people don't choose to believe things, but social pressure is very strong and they really want to be approved of and be part of the group that shares their values. That creates strong pressure to say the things that are evidence of sharing those values, not the things that are ambiguous.
Yes, very good points - I agree.
Beautifully insightful as always, Dan. Minor question: you write that it’s generally “impossible” to choose one’s beliefs. Do you disagree then with Van Leeuwen’s argument that some beliefs (e.g. religious beliefs) are voluntary? It does seem like a lot of religious people talk of belief as a choice, and though you don’t see it as much in secular culture, you do see it occasionally (e.g., “I choose to believe that people are fundamentally good.”). I’m writing a post on the apparent voluntariness of some bullshit and am curious what your thoughts are on the topic.
Yeah, good question. I'm inclined to think that in those cases, the psychology is very different - that there is a sense in which they don't really believe it, although they use the word "belief". (In a way, this would be somewhat similar to van Leeuwen's view that the "beliefs" in these cases are a different kind of mental state, more akin to imagining). However, I genuinely don't know. Maybe people always have a strong preference for justifications of their views, but if there are strong enough incentives to hold a view and little reputational cost from not being able to justify them, people can simply "choose" to believe them. Maybe there's also something else going on in some of these cases where people think the available evidence doesn't force any *specific* conclusion, so they in some sense choose one among a set of possible options. Like, if someone said - "I choose to believe that my arm will grow back" - that would seem incredibly bizarre in a way that "I choose to believe that people are fundamentally good" (which is incredibly vague and bull-shitty and consistent with all sorts of evidence if you interpret it creatively) doesn't. But I'm just really not sure - it's a good question. Looking forward to your post!
This seems to relate to the difference between "faith" and belief. The whole point of faith is that you *don't* rely on evidence - it is the willingness to assume the attitude of belief in the absence of evidence, or without technically believing yet. I know among Orthodox Jews, it is considered less important that you "believe" deep down than that you pray and observe the Sabbath regardless. So in that case it would amount to a voluntary behavioral commitment. But maybe it need not be behavioral; William James implies in Will To Believe that you can choose to believe without evidence, while making this sound like a cognitive matter. I checked and Van Leeuwen doesn't cite Will To Believe (only Varieties of Religious Experience). Either way, what would make it voluntary is the express removal of evidence as a consideration, as opposed to motivated reasoning, where there is biased use of evidence.
Of course, it's still possible to bullshit yourself at the meta-level: believing that you believe just because you go to church or pray, when really you don't even have faith.
Great post, Dan. I had been struggling with a similar essay for some months now, and am genuinely relieved that I don't have to write it now - you said everything I had wanted to say, and then some.
The question that remains, I think, is why, given all these limitations, open societies do so well compared to closed ones. Is it for the reasons the advocates of open societies claim - the free contest of ideas - or is it, as Hayek claimed in The Road to Serfdom, an effect of free markets?
Yes, very good questions. I do think open markets (including of ideas) have very positive qualities relative to real-world alternatives; the point here was just to identify some of their challenges.
I would underline Jeffrey Friedman's point that when you form a political opinion you often cannot find out when you are wrong. We don't learn from our mistakes.
This is sometimes true in consumer decisions (how do I know whether I am saving too much or too little?), but usually I can learn from repeat decisions and from polling other consumers.
The link to the summer school in Budapest is a dead end. ("Page not found.")
thanks for flagging that - fixed now.
As long as people find purpose in cultural identities and group-belonging, tribalism will stay put and a truly open society will be hard to achieve. Individualism is the answer, as it allows people to find meaning in what objectively is a meaningless life. This will not solve the entire problem, of course, since the deepest drive in human nature is status, recognition, a feeling of being important; but it will do away with the (imagined) status attached to identities, nationalism, and religions, which are some of the greatest sources of tribalism, us-and-them thinking, and bloodbaths.
Yep - I'm sympathetic to that.
David Pinsof's version of the summary: open societies are bullshit.
Ha.
I struggled for some time to come to terms with the fact that my coalition was not, in fact, the paragon of reason I had always believed it to be. Call it growing up. I'm no longer surprised that in 2024, the people calling an assassination attempt a "false flag" are on the Left.
I wish I had this essay and mental model back then to help me make sense of this.
To be clear - thank you for writing this, it rocks!
Is it purposeful irony that this is held in Hungary? 😬😄
It's purposeful :)
What might appear to be a rationally orthogonal package of beliefs at first glance can sometimes have a common thread upon closer observation. As an example, “capitalism” and the Israel/Palestine conflict make sense if your worldview is oppressor vs oppressed.
Hmm. Maybe, although I’m sceptical..
My point is that anti-capitalists view business owners as oppressing workers and Israelis as oppressing Palestinians {not my view}.
Where - if at all - would Timur Kuran's preference falsification fit in this?
"That is, humans tend to believe their propaganda."
Great question. Need to think more about this (am an enormous fan of Kuran’s work).