46 Comments

I think there is an understandable yet misplaced desire for parsimony among these “everything is Bayesian” folks (for lack of a better term). It is admirable that they are trying to minimize assumptions. But they are forgetting that parsimony is not just about minimizing the number of positive assumptions, but about minimizing the number of positive and negative assumptions (what is *not* the case) multiplied by their plausibility in light of pre-existing knowledge. The assumption that humans have no status or coalitional instincts, or that these instincts have no effect on our cognition or reasoning, is so wildly implausible in light of everything we know about human evolution that it reduces the parsimony of the Bayesian account to near zero. What the Bayesians are striving for is the illusion of parsimony, not the genuine article. Also, as an aside, I am now kicking myself for not using the phrase “advocacy biases” instead of “propagandistic biases.” Probably would have gone down more easily for folks. Alas…


I’m not sure I understand the question, William, but I’ll give it a shot. I see Bayesian idealism as asserting a set of independent, unstated assumptions about how coalitional/status motives either do not exist or are irrelevant to human cognition. Those unstated assumptions are, in my view, very implausible. If we view parsimony as the minimality and plausibility of your assumptions, then Bayesian idealism is a low-parsimony theory masquerading as a high-parsimony theory. The masquerade comes from refusing to acknowledge the full evolutionary implications of Bayesian idealism (e.g., that humans evolved to be pure, impartial truth-seekers whose thinking is uninfluenced by self-interest or group-interest). I think all psychological theories necessarily have evolutionary implications, because the psyche is a product of evolution. That’s why it’s good to state those evolutionary implications explicitly and scrutinize them, so you can ensure they are plausible. Merely going through the exercise of thinking about evolutionary implications allows you to rule out a lot of psychological theories as being very implausible. It’s an incredibly underrated tool for theory building. It’s why I’m such a strong proponent of evolutionary approaches to social science.


Very well stated, and I could not agree more.

Perhaps I can restate my question with regard to your take on parsimony (which you are free to ignore if it is still unclear):

1. Did “parsimony” evolve?

2. Did the capacity/capability for discovering parsimony evolve?

3. Is the process of evolution itself approximately parsimonious relative to some set of constraints?

#3 is a roundabout way of asking whether “efficiency” and “parsimony” can be made distinct in your conception, or if parsimony reduces to “efficiently plausible.”

I'm not sure that all Bayesian Idealists make evolutionary computationalist assumptions. If, for example, we evolved the capacity to understand that Bayes is bae, then their idealism could just lead to prescribing that we assimilate all presumptions onto its framework, which is, in their telling, maximally parsimonious if we did it correctly.

Which I still think is bullshit, but with a more subtle fragrance.

Cheers!


Ah, I see. No, parsimony itself did not evolve, but the capacity to discover it did. No, evolution is not subject to any parsimony constraint. Biological structures are incredibly complex and heterogeneous, shaped by distinct selection pressures and evolving through a messy process of descent with modification from pre-existing structures, and there is no reason to expect anything in biology, other than evolution itself, to be neat and tidy or easily reducible to simple and elegant principles. I suppose efficiency is one kind of constraint, but that is merely another kind of selection pressure, for using resources (like time and energy) efficiently rather than profligately. So evolution creates “efficient” adaptations in the sense that they do not waste resources, but not in the sense that they are simple or homogeneous. And I think parsimony and efficiency are different things. You can have a parsimonious theory with many different interlocking assumptions (a kind of inefficiency), but so long as each assumption is on rock-solid ground, it can still be a relatively parsimonious theory overall.


I think the problem here is that not everyone agrees that those obvious instances of irrationality are actually irrational. Not because they doubt the facts or behavior but because they’d describe it differently.

To give a really simple illustration, suppose that every morning I check to see if my headphones are on their charger next to the bed to make sure they get charged before I leave. At that level of description it sure seems rational.

But take a day where I left my headphones at the office. I just checked if my headphones were next to my bed despite having clearly seen convincing evidence they were at the office (me putting them on my desk and not picking them up). That's clearly irrational. Heck, even if I could remember that my headphones were at the office if I really thought about it, at the level of a general heuristic it sure seems rational to always check.

So the problem is that even obvious instances of irrationality can be described as instances of a perfectly rational but fallible heuristic where these are the failing cases.

That's why it's been so hard to develop anything like a formal model of limited rationality. We understand full logically omniscient update by conditionalization, but it's not clear that more limited notions offer principled judgements about what is rational at all.
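
For readers who don't know the jargon, here is a minimal sketch of what "update by conditionalization" amounts to, using the headphones case above; every hypothesis name and number below is invented purely for illustration.

```python
# A toy Bayesian agent deciding where its headphones are.
# Priors and likelihoods are made-up numbers, not data.
prior = {"by the bed": 0.9, "at the office": 0.1}

# How likely the evidence ("I saw myself leave them on my desk") is under each hypothesis.
likelihood = {"by the bed": 0.05, "at the office": 0.95}

# Conditionalization: posterior(h) is proportional to likelihood(evidence | h) * prior(h).
unnormalized = {h: likelihood[h] * prior[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)
# {'by the bed': ~0.32, 'at the office': ~0.68}
# The fully Bayesian agent stops checking by the bed today; the cheap
# "always check" heuristic ignores this update and occasionally misfires.
```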


I'm glad someone said it!

I agree with your diagnosis of "Bayesian Idealism," David. However, might I inquire how your telling of Occam's Razor survives your follow-up evolutionary points that detract from the Bayesian telling?

Importantly, I do not mean this in a bad-faith, drag-you-into-the-epistemological-quagmire sense. Rather, given that I am writing about parsimony, or more specifically, the presumptions required for Occam's Razor to indeed govern parsimony, it would be a kindness to share if you have a sufficient extension from evolutionary principles to some level of abstraction (and perhaps back).

For a bit of context, I anticipate that were I to steelman the Bayesian conception, parsimony would essentially reduce to the "fit" with Bayes' theorem, with biases and heuristics measured according to their "underfitting" to it, and with no real means of addressing what "overfitting" would look like.

From your telling, and from the diagram of the Bayesian update model, what I hear (effectively) addresses this blind spot in light of the evolutionary expectation of deviation as the rule rather than the exception. This frames their modeling as one of overfitting to Bayes (or equivalently, underfitting Bayes to the deviation).

However, I find that my steelmanning of what I hear to be your take is currently identical, if perhaps inverted. I'm not sure how to interpret "plausibility" except as some magnitude of fit or deviation from Occam's Razor at two levels: the "fit" to pre-existing knowledge (as if built upon the Razor) and the ongoing fit to the Razor.

Is there such a thing as "overfitting" to Occam's Razor in your telling? And if not, would the results of this take differ from those of a Bayesian Idealist who treats Occam's Razor as the immutable portion of any base rate or prior?


Well, I guess I need to update my priors regarding how probable Bayesianism is. Let's see, I'll take your result that Bayesian parsimony is near zero as the likelihood, multiply that by my prior, and ... ah, yes, just as I expected: near zero times zero is zero!
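
Spelled out with toy numbers (invented here just to make the arithmetic explicit), the update looks something like this:

```python
# The joke's update, with made-up numbers purely for illustration.
prior = 0.0        # "my prior" on Bayesianism
likelihood = 0.01  # "Bayesian parsimony is near zero", treated as the likelihood

posterior = likelihood * prior  # proportional to the posterior, before normalizing
print(posterior)                # 0.0 - near zero times zero is, indeed, zero
```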


People HATE uncertainty. But uncertainty is the only rational path. In addition, people just don't give a damn about the grey, nuanced truth. People only care about their team winning. Or they don't have the bandwidth to seek truth. "Tribalism is destiny. Humanism is optional." - Jamie Wheal


Yep - lots of truth in that. (Although it raises the question of why people hate uncertainty, given that it is often the rational path.) I have another post on this: https://www.conspicuouscognition.com/p/in-politics-the-truth-is-not-self


I'm going to wager a guess that evolutionary psychology says that we're terrified of uncertainty because uncertain times lead to death. I'm totally uncertain if this is true. Bahdumtssss


I love your reply, so I have added it to my extensive personal "Quote Bank".

Intellect is a voluntary tax on our more instinctive needs and desires.

Those *unable* to pay that tax, such as those working most of their waking hours, *must* spend it on our more instinctive *needs*.

Most all who are *able* to pay that tax, such as those with free time, typically prefer to spend it on more instinctive *desires*. Such as dating / lust masquerading as love / sex / often subsequent child rearing, socializing, audio-visual stimulation (entertainment), physical fitness, spectator sports, hobbies, etcetera.

So in place of intellect, we typically substitute often impoverished and destructive heuristics. Such as groupthink / tribalism / herd mentality, tradition, emotion, violence, apathy, evasion, denial, instinct, and willful ignorance.


Great point. We're all on autopilot all the time, unless we carve out time to take a detailed look at ourselves. Lockdown was the first time I ever experienced not having to focus on daily survival, and it completely changed my life.

Decision fatigue is very real, and some suffer it very easily.

Throw in the fact that all media is now in the business of confirming beliefs instead of informing, and the tribalism is even worse. Many people can't even bear to speak to someone who asks questions about their beliefs, much less someone who contradicts them.

Then you've got the fact that secular materialism has people grasping at ideological straws to try to find some structure for their lives.

It's a fascinating time to be alive.


While I suspect there are exceptions to the "...all..." in your "...all media is now in the business of confirming beliefs instead of informing...", my first impression of you is that you are much closer to cognitive bullseyes than most of society.

It is likely socially unacceptable for you to brag that about yourself.

But *I* can. 😉 Cheers.


I guess I should say "legacy media". Plus social media algorithms feed cognitive biases.


My reply was poorly phrased, so I have withdrawn it.


Okay, I read your original paper for the more thorough treatment of the argument.

I like a lot of your overview and reasoning, and the model of coalitional press secretaries is useful for strengthening the argument for motivated cognition.

That being said, I am a bit less convinced that the dispute between "non-motivational" explanations and "motivated cognition" is actually characterized by such hardened fronts; but this is not my area of expertise, so take this comment with a lot of salt.

I will offer some hopefully useful criticisms:

- you focus on highly partisan people, and this might make "motivated cognition" as a phenotype much stronger in these examples than it would be in people without strong partisan affiliation. This is not necessarily bad, as phenotypes are often best explained at these extremes; but a more comprehensive theory may need to qualify how much weight motivated cognition has as an explanatory model, or whether mixed-model versions with "non-motivational" cognition make more sense when talking about less partisan people. In other words, maybe "non-motivational" cognition is more prevalent in light partisans, and "motivated cognition" is stronger in strong partisans.

- what is the role of people participating in many different tribes/groups that hold contradictory views? Does it shift their motivated cognition errors towards more non-motivational errors?

- what is the role of highly active/high-status leaders within these groups versus affiliated bystanders? In other words, do elites that shape the perception of others by having stronger incentives for motivated cognition also have stronger motivated cognition than low-status individuals/bystanders? Do the latter find themselves more easily falling into "non-motivational" cognition errors?

Just a few of my immediate thoughts. Might add more at a future point.

Thanks for the paper, very good food for thought.


Thanks. These are good points and questions. I completely agree about the limitations. The point of the paper (and post) is to establish the existence of a very general tendency that arises in certain contexts. But quantifying the strength of that tendency, and exploring how it is influenced (e.g. amplified or attenuated) by individual and environmental factors is very much a task for future research. I’m not yet sure exactly what to think about these questions.


After reading your comment here with impressive phraseology, I then read your Substack bio:

"Scientist-turned-science writer. Trying to equip citizens against anti-science conspiracy myths plaguing our information age. Also arguing for the human dimension in our technological future. Usually trying something new.

Science journalism, scicomm, and science fiction prototyping to bring science and society closer together."

My first impression? I know this is unseemly, but:

Intellectual swoon. Your scientific goals and lingua franca enchant me. Smelling salts, please.


I found this post very hard to follow. Is it just me, or are some aspects of political behavior just too difficult to pin down? Question: is there such a thing as objective political truth? Is it reasonable to expect psychologists wearing their political science cap to develop tests that reveal, with predictable results, the "if and when" people vote in addition to the "how" people vote? Are there some questions about human nature that defy scientific explanations? Randomness and chance, as explanations, can be hard to accept.


Fair enough - all good questions!


Seems like bias comes in many forms- motivated reasoning, peer pressure conformity, virtue signaling, psychological epistemological bias, etc.. If one accepts that implicit bias is universal, it’s the same as acknowledging that being human is being biased, isn’t it? So, in terms of asking what is truth, in a political context, it follows that to define truth, bias is built into the definition. Doesn’t mean a group can’t agree upon shared truths. Just means truth will always reflect or incorporate some aspect of bias. And group affiliations will incorporate and reflect some sort of bias. I don’t see how one measures individual or group bias in a useful, accurate manner, to help explain political power contests. And Darwin does come to mind as a pro-bias argument - certain types of extreme bias may play a role in group competition.


>For example, voting appears to be mostly rooted in group allegiances.

You know, when mass democracy was invented, serious people (Condorcet, Hegel) worried that people would not vote, because one vote among millions changes nothing.

Turned out, people do vote, because it is a ritual to reaffirm their allegiances. But I think it is not simply group membership. Rather, the allegiance says something about you: if you are left, you are smart and compassionate; if you are right, you are hard-nosed and loyal to your nation.

However, in practice it does not matter. The reality is expertocracy. Addictologists came up with a way of curing drug addicts that is 95% therapy and 5% coercion. So no matter how much politicians bullshit, acting compassionate vs. tough, almost all of them will just let the experts do that, because it works. Only the truly extreme ends of the political spectrum don't. One extreme is San Fran, where even 5% coercion is intolerable.


Where is the opening for the argument that people can reason together and find agreement, can evolve their thinking, or even have revolutionary changes in their beliefs, Dan? I read in this post only one such opening: when they are presented with "novel evidence".

When the partisanry or coalition or community has a leadership contest, how do its members make a sensible or even any decision? How can you explain intra-group conflict?

Does your explanation pass your own plausibility test? Indeed, can anyone logically say that bias and irrationality are the order of the day without contradicting themselves?

How soon will you start propounding Critical Theory that Truth is Lived Experience, and advocating for Identity Politics? :-)

I appreciate that we've recently had a General Election here in Britain, but are we as humans condemned forever to be political bigots?


Ha - good questions. To be clear I'm just trying to identify and illuminate one general tendency in this paper and post. I'm not trying to develop a full model of political psychology, which is obviously much more complex and should feature an important role for truth-seeking, individual differences, values, etc. - I'm not a *complete* cynic.


Understood; in trying to identify the tendency, do you think the point made by @philippmarkolin and @hgrumpy about degrees of political zealousness has some bearing?

Never thought you were a cynic at all, just nihilist in this article alone, despite your upcoming book Why it’s OK to be Cynical :-)


While I am a bit skeptical of some of your questions, if the blog author answers, that may help elucidate your questions for me.


I wonder to what degree "citizens typically inherit stable political loyalties or acquire them relatively early in life" depends on the party choices available and the electoral system. Western Europe has seen big changes over the last two decades, with parties that had stable 40ish percent dropping by twenty or more percentage points, other parties newly appearing and bumping up quickly, again others apparently assembling big coalitions before falling apart again. And in many cases, there's a lot of documented voter movement.

So I wonder whether what you describe, and what one finds in the research, has to do with the atypical US and UK systems, where the electoral conditions push things towards a party binary and political coalitions are largely absent.


Very good question. I'm inclined to think the role of inter-coalitional conflict and coalitional instincts is still central in these alternative political systems, but you're right the whole picture is much more complicated when people don't feel strong party allegiances (although ofc it's not intended to be restricted just to parties, but to any coalitions - classes, movements, ideological communities, etc. - people might support).


FYI, you linked Neil Levy's book twice in a row at the top of the "Rationality Critique" section. I like to think this is further evidence you love disagreeing with him. (I wonder if that's another source of motivated reasoning, where this fuels disagreement: the pure joy of arguing).

In terms of deciding between the two paradigms: it's kind of the chicken or the egg problem, isn't it? We can go with "Less filling!" (boring idealized truth-seeking) or "Tastes great!" (naughty motivated reasoning), but they're both describing Miller Lite - and we all know that Miller Lite tastes like shit. What is the practical difference between going astray in spite of not wanting to, and going astray because we want to?

The biggest difference I see is that the epistemic-cognitive account locates the problem outside the individual, with our natural cognitive limits, and views this problem in terms of error or failure. Whereas motivated reasoning locates the problem both within the individual and in the relations among individuals and/or affiliated groups, and sees this problem as a successful feature, not a bug. The latter is much more compelling to me intuitively, but only if it can accommodate some elements of the former (that we still care about truth and have some rational faculties for reaching it, that epistemology matters). It's harder to envision how the former could accommodate elements of the latter.


Ha - thanks for catching that re. the double Neil reference. I do love disagreeing with him! But also very much respect his work.

Completely agree with your characterisation here - although I think the motivational picture can still place a lot of "blame" on factors outside the individual. And yes, any picture must make room for the fact that we still care about the truth - I think we are often biased, but we also care about accuracy, and about appearing reasonable.


I think this analysis is truer of political organizers than of ‘people’ more generally. It fits very well with what I’ve seen of organizers on the left: a founder of Black Lives Matter saying of intersectionality, “I don’t have any use for academic wordplay, it’s about building a political coalition.” Constant mentions of solidarity. Frequent workshops on ‘allyship’ - practices intended to build and maintain trust among the different oppressed identity groups. The practices include always describing the different factions as a cohesive whole and avoiding mention of any frictions. But these are consciously employed tactics, taught to organizers and designed to counter underlying tendencies to factionalism. It is not obvious that they themselves spring from the would-be organizers’ psychological make-up.

More typical voters, with less extreme levels of political involvement and partisanship, seem to have less need to assert that everything in their party’s platform is great, just that the pros outweigh the cons. And to the extent that they buy into more of the platform than they would absent their party affiliation, this could be because they realize complexity and invisibility limit their knowledge on many issues. On these, they default to trusting the party’s experts.


Ignore. Started reading your ‘much longer piece’ that looks like it covers most of this. Looking forward to the book.


I’m actually somewhat skeptical that there is any genuine empirical difference between the sides. I suspect that much of the argument really cashes out to a verbal dispute and a question of attitude: do you want to adopt a more positive-sounding or a more critical-sounding account of people’s behavior?

Everyone recognizes that people aren't logically omniscient* so that model isn't even on the table. The argument is whether to describe someone as applying generally sound -- but imperfect -- heuristics or fallacious ones.

But once you make that shift it's no longer clear there is a fact of the matter or these are incompatible categories. For instance, you can describe someone who resists accepting some negative conclusion about the champion of their party either by saying they are engaging in motivated reasoning or saying they are applying the heuristic of giving substantial weight to the judgement of their community (perhaps even as to reasoning as well as probability). Virtually any kind of widely held irrational disposition can be described as a reasonable heuristic by choosing to describe it at a different level of generality or as an application of some more general rule.

Ultimately, my sympathies do rest with the irrationality side in some sense, because I think it's more illuminating/explanatory. But I think it's easy to believe there is a clear question of fact here, only to find that both sides will often predict the same behavior in the same circumstances but differ in how they describe the rule being applied, and thus in whether it can be regarded as irrational or as a fallible but good rule of thumb (e.g. by considering it as part of a broader rule).

--

*: Genuinely consistent priors require assigning probability 1 to all provable facts, whether they’ve been proved or not (so probability 1 to P=NP if it's provable), because probability is description-independent.


While reading this post I was struck by how consistently your theory fits with my experience with religious converts (or "Baalei Teshuva" in Judaism).

They will slowly adopt positions & beliefs consistent with the religious community that they've joined. This includes both religious and (unconnected) political beliefs.

This might be a fruitful area to explore as a way of testing your theories.


Interesting - thanks!


You should write a book about cognition and psychology. It might take a lot of time, but it's a good investment - you are very talented and knowledgeable.


Thanks for this. I’ve forwarded it to friends with whom I have ongoing political differences and discussions. I suggested it provides a mirror into which we can gaze and see not only our own reflections but also background reflections of other points of view.


To what extent is political tribalism endogenous to the political system? In Proportional Representation systems, there is far more real political choice. Do we observe less opinion clustering?

https://forum.effectivealtruism.org/posts/uW77FSphM6yiMZTGg/why-not-parliamentarianism-book-by-tiago-ribeiro-dos-santos


V. good question - thanks for the rec.


Your blog illuminates endless paths to error--which humanity seems to utilize as a trail map to reach it.


Ha - maybe I focus too much on such paths...


Ultimately, whether you focus too much on such paths depends on the value and viability of any alternatives you see. Personally, I do not see *enough* of humanity doing so.

Our species nearly unanimously focuses on either apathy or action, and thus commits typically destructive errors.

The world has a shortage of those identifying, studying, and illuminating error. Such as through books, blogs, social media, and universities. These are paths to correcting error.

How do we know there is a paucity of your type?

Because of all the errors - many enormous - that we continue to make, repeat, and often worsen. As individuals, societies, and as a species.


Excellent answer. Now, if I might throw a small wrench into the works, how then would you treat "heuristics" in this case? Their typical attributions are "simple and efficient," but you also say "evolution creates 'efficient' adaptations in the sense that they do not waste resources, but not in the sense that they are simple or homogenous."

At what level, if any, are the attributions of "simple and efficient" correct if the biological substrate is complex, the information being processed is complex, and the simplicity must be interactive with other forms/modes of simplicity? And how does "parsimony" relate to what is heuristical and what is not?

Again, this is genuine curiosity and not meant in bad faith, despite my expectation that even those in the field will struggle to round this out with a consistent use of concepts such as simplicity, efficiency, parsimony, and heuristics, which you have been able to do so far.

I anticipate that this amounts to a consistency/completeness tradeoff. If not, you have spared me a great deal of work (but not heartache!) if you can complete the perspective.

Again, you are welcome to ignore me if I am being unclear, or if this just seems like a hassle.

Thank you for engaging at all!
