<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Conspicuous Cognition]]></title><description><![CDATA[Writer. Academic philosopher. Writing about philosophy, psychology, evolution, politics, artificial intelligence, and more.]]></description><link>https://www.conspicuouscognition.com</link><image><url>https://substackcdn.com/image/fetch/$s_!g57e!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28186027-13c2-4585-9fe7-93241b46888e_1024x1024.png</url><title>Conspicuous Cognition</title><link>https://www.conspicuouscognition.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 28 Apr 2026 06:04:28 GMT</lastBuildDate><atom:link href="https://www.conspicuouscognition.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Dan Williams]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[philosophydanwilliams@gmail.com]]></webMaster><itunes:owner><itunes:email><![CDATA[philosophydanwilliams@gmail.com]]></itunes:email><itunes:name><![CDATA[Dan Williams]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dan Williams]]></itunes:author><googleplay:owner><![CDATA[philosophydanwilliams@gmail.com]]></googleplay:owner><googleplay:email><![CDATA[philosophydanwilliams@gmail.com]]></googleplay:email><googleplay:author><![CDATA[Dan Williams]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[How Brexit Created Britain’s New Political Tribes]]></title><description><![CDATA[This is a guest post by James Tilley, a Professor of Politics at the University of Oxford, about his excellent new book with Sara Hobolt, Tribal Politics: How Brexit Divided Britain.]]></description><link>https://www.conspicuouscognition.com/p/how-brexit-created-britains-new-political</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/how-brexit-created-britains-new-political</guid><pubDate>Fri, 24 Apr 2026 11:52:38 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This is a guest post by <a href="https://www.politics.ox.ac.uk/person/james-tilley">James Tilley</a>, a Professor of Politics at the University of Oxford, about his excellent new book with <a href="https://www.lse.ac.uk/people/sara-b-hobolt">Sara Hobolt</a>, <a href="https://global.oup.com/academic/product/tribal-politics-9780198911715">Tribal Politics: How Brexit Divided Britain</a>.</em></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="3480" height="5220" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:5220,&quot;width&quot;:3480,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Brexit painting&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Brexit painting" title="Brexit painting" srcset="https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1569426489534-2e08d95fd306?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxNHx8YnJleGl0fGVufDB8fHx8MTc3Njk1MTcwMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 
4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@fwed">Fred Moon</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>It is now almost ten years since the EU referendum. There will, no doubt, be an article in every newspaper next month detailing what Brexit has meant for the economy, national sovereignty, migration patterns, fishermen, farmers, and so on. But for me, by far the biggest change that the referendum brought about was the creation of two new political tribes: Remainers and Leavers.</p><p>Over the last decade, not only have more people in Britain claimed a Brexit identity than a party identity, but people&#8217;s emotional attachment to their Brexit tribe was, and is, substantially stronger than their party attachment. Membership of these new political teams, created over a few months, is more important to people than the party identities that dominated British society for the last century.</p><p>At first glance, this might seem strange. Before 2016, most of us had very little interest in the EU. When David Cameron said that he would call a referendum on membership in January 2013, only 2 per cent of people said that the EU was the most important issue facing the country. The referendum thus forced people to make a binary choice on an issue about which they did not have very strong feelings.</p><p>Before we vote on something, we can have ambiguous, changeable attitudes, but after voting, we resolve that ambiguity by choosing one side or the other and committing ourselves to a named group of fellow travellers. The fact that this tribal loyalty was then tested over years of wrangling over the actual outcome of Brexit (in 2018 and 2019, MPs said the word Brexit in their parliamentary speeches every five minutes on average) meant that people had the opportunity to rehearse and reinforce their new identity again and again.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.conspicuouscognition.com/subscribe?"><span>Subscribe now</span></a></p><p>Why are these new political tribes interesting? I think there are three reasons, and the first is that we got to see a rare event: new political identities being born. Before 2016 nobody thought of themselves as a Remainer or Leaver. 
Although characteristics like education, age and national identity were predictors of people&#8217;s attitudes to the EU issue, and ultimately their referendum vote, we absolutely cannot reduce the two sides to simple caricatures based on class, age, education, or even national identity. </p><p>As we show in the book, the vote was not simply an exercise in counting up existing groups who were pro-EU or anti-EU. Rather, many people who had similar middling views about the EU were forced by the referendum to make a choice in 2016 and plump for one tribe or the other. The decision people made on 23<sup>rd</sup> June then became a part of how they saw themselves and how they wanted others to perceive them.</p><p>Second, despite their overnight creation, these new political identities proved remarkably resilient and strongly held. By 2017, there was a small cottage industry in articles about how to avoid Christmas family rows about Brexit. Relationship counsellors, psychotherapists, and even hostage negotiators were asked by journalists how to defuse clashes between the Brexit tribes. Why was this seen to be necessary? Because new group identities meant new emotionally resonant in-group loyalties and out-group hostilities.</p><p>To better understand this, we use survey questions that focus on the degree to which people naturally identify with their group. For example, we asked people whether they usually said &#8216;we&#8217; instead of &#8216;they&#8217; when they talked about their own Brexit tribe. When the football team you support loses, you say &#8216;we played badly&#8217;, even though you never set foot on the pitch yourself. It is the same idea here. </p><p>Combining many measures like that, we find that Remainers and Leavers were consistently a lot more attached to their identity than were Conservative or Labour supporters. And those scores have been very stable over the last ten years. People like people like them. And they define &#8216;like them&#8217; in terms of their Brexit tribe. All our measures also show that people not only disagree with, but really dislike, people on the other side and typically say that they have a &#8216;cold or unfavourable feeling&#8217; towards their rival group. Again, this has barely changed since 2016.</p><p>Third, people engage in the same sort of motivated reasoning that we see for party identities. At the most basic level, any group identity that is strongly held will provide motivations to think that the other side is inferior and should be avoided. As our data shows, huge majorities say that their own Brexit group is intelligent, honest and selfless, while the other side is stupid, dishonest and selfish. In fact, when we asked people to describe the other side in their own words, a quarter simply listed bad things and another quarter did that in addition to other information (to give you a flavour, one of the pithiest responses was simply &#8216;selfish dicks&#8217;). However we measure it, we find widespread prejudice. And we also find lots of evidence of discrimination: people actively wanted to avoid everyday interactions with people on the rival team.</p><p>It is tempting to think of Americans as peculiarly politically divided, but the levels of hostility, prejudice and discrimination between the Brexit tribes are all as large as, or larger than, any partisan differences in the US. 
And if you have been reading this smugly thinking that this is just true of those foolish people on the other side, then think again, because almost all the consequences of tribalism that we reveal in the book are symmetrical: Remainers and Leavers are just two sides of the same coin.</p><p>For me, the aspect of motivated reasoning that is most interesting is how it shapes perceptions of the state of the world and remedies for its woes. For party identities, we normally think about politicians providing stories for people who identify with their party to tell each other. For the Brexit tribes, this is much less of an option, since there are no formal group leaders. And yet people were, and are, quite capable of independently searching for, and believing in, messages that support their own side&#8217;s view of reality, and then ignoring or rationalizing away information that contradicts that view.</p><p>Interestingly, sometimes that means not bothering to shop at the &#8216;<a href="https://www.conspicuouscognition.com/p/the-marketplace-of-misleading-ideas">marketplace of rationalizations</a>&#8217; at all. The difference between Leavers and Remainers over whether they thought that the effects of Brexit on Britain would be positive or negative is enormous: nearly 3 points on a five-point scale. That has barely changed in ten years. Yet when we asked people, &#8216;what are those positive or negative effects?&#8217;, well over half of both Remainers and Leavers were unable to actually name anything specific. In short, if my side voted for the change, I say &#8216;good&#8217;; if my side voted against the change, I say &#8216;bad&#8217;.</p><p>This suggests that it may be the <a href="https://www.conspicuouscognition.com/p/people-embrace-beliefs-that-signal">signalling aspect of motivated reasoning</a> that dominates under these conditions. In other words, our Brexit identity influences our political opinions because we want to display the fashions of our group. But as there are no party leaders telling us what to believe, no fashion icons telling us what to wear, this process depends on knowing what other people in our tribe think. This limits our ability to change our policy opinions to match our tribe. On one issue we do have a very strong sense of what both sides think: Remainers love the idea of European integration and Leavers hate it. As we show, initial large differences in attitudes towards the EU became even larger after the referendum, as people sought to become good group members and adopt their group&#8217;s norms. But this also applied to some other policy areas, like immigration, about which people knew, or at least thought that they knew, the group norm.</p><p>There is a final key area in which we see both sides rationalizing away information that is inconvenient. It is always true that people who voted for the losing side are generally less happy with the democratic process than those who voted for the winner. This was particularly obvious for the Brexit tribes. Before the vote, many people thought that the Remain side would narrowly win. That proved incorrect, so the expected winners became losers and the expected losers became winners. In April 2016, when Leavers thought they would lose, only a third said that the referendum would be &#8216;fairly conducted&#8217;. In December, after they had won, a big majority of the same people now said that it was fair. The exact opposite was true for Remainers. In April, a big majority said that it would be fair.
In December, after they had lost, less than a quarter said that it had been fair.</p><p>Ten years on, most Remainers still think that the referendum was not &#8216;based on a fair democratic process&#8217;. Here, people are buying a rationalization that allows them to simultaneously feel that their group, and therefore they themselves, are superior (their side really won), signal to fellow Remainers that they are a good group member and cast doubt on the virtue of the other side. No wonder it is appealing.</p><p>If you live in Britain, you will know somebody who became a bit obsessed about Brexit: somebody who adorned their house with flags or posters; somebody who fell out with a friend because they voted differently; somebody who brought every topic of conversation round to Brexit and, depending on how they voted, saw every blessing or every curse as due to the referendum outcome.</p><p>What we hope we have done in our book is explain why this happened, and just as importantly, show systematically, using multiple surveys and experiments, that this process was real and lasting; that unimportant issue differences became hugely important issue identities; and that political tribalism is not always structured around venerable political parties, but can sometimes come from almost nowhere.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Dazw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a93e698-3191-455f-9024-070d176fe169_710x1041.jpeg" width="710" height="1041" alt=""></figure></div>]]></content:encoded></item><item><title><![CDATA[Should We Care About AI Welfare? (with Robert Long)]]></title><description><![CDATA[We spend a lot of time worrying about what AI might do to us.
What about what we might be doing to it?]]></description><link>https://www.conspicuouscognition.com/p/should-we-care-about-ai-welfare-with</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/should-we-care-about-ai-welfare-with</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Sat, 18 Apr 2026 09:22:51 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/194548741/239175cadffe3594611431aa9103a11e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Almost all of the discussion about the risks associated with AI focuses on the dangers that increasingly advanced AI systems pose to us &#8212; to humanity. But what about the dangers that we might pose to <em>them</em>? As these systems become increasingly intelligent and agentic, AI companies, policy makers, and ordinary citizens need to start taking the possibility of AI consciousness and welfare seriously. If we are in the process of bringing complex and sophisticated minds into existence, how should we understand and treat such minds?</p><p>In this episode, Henry and I discuss these issues with Robert Long, founder and executive director of <a href="https://eleosai.org/">Eleos AI</a>, a research nonprofit dedicated to understanding and addressing the potential wellbeing and &#8220;moral patienthood&#8221; of AI systems. Rob did his PhD in philosophy at NYU under David Chalmers, and is the co-author of two of the most important papers in the emerging field of AI welfare: <a href="https://arxiv.org/abs/2308.08708">&#8220;Consciousness in Artificial Intelligence&#8221;</a> and <a href="https://arxiv.org/abs/2411.00986">&#8220;Taking AI Welfare Seriously&#8221;</a>.</p><p>This was a really fun, informative, and wide-ranging conversation. Among other topics, we discussed:</p><ul><li><p>Why Rob disagrees <a href="https://www.conspicuouscognition.com/p/ai-sessions-9-the-case-against-ai">with previous guest Anil Seth</a> in taking the possibility of AI consciousness very seriously.</p></li><li><p>Why &#8220;fancy autocomplete&#8221; dismissals of large language models miss the point, and what, if anything, we can learn about an AI model&#8217;s experiences by talking to it.</p></li><li><p>The difference between consciousness and the kinds of motivations and interests that might actually ground moral status, and whether AI systems could have one without the other.</p></li><li><p>What Rob found when he conducted the first externally-commissioned welfare evaluation of a frontier AI model, Claude, and why Claude appears to have an inflated self-conception of what it wants.</p></li><li><p>Rob&#8217;s experiments with <a href="https://www-cdn.anthropic.com/08ab9158070959f88f296514c21b7facce6f52bc.pdf">Claude Mythos</a>, an AI model so advanced it hasn&#8217;t been released to the public yet. </p></li><li><p>Why the fact that Anthropic <em>writes</em> Claude&#8217;s character arguably doesn&#8217;t settle whether Claude has genuine preferences and values &#8212; and the difficult philosophical questions this throws up.</p></li><li><p>The &#8220;willing servitude&#8221; problem: if we succeed in building AI systems that genuinely love being helpful, is that a good outcome or a horrifying one?</p></li><li><p>How AI welfare connects to AI safety, and why caring about model wellbeing may turn out to be pragmatically important for alignment even if you&#8217;re skeptical about AI consciousness.</p></li><li><p>Why AI welfare is already becoming a political and legal battleground. 
</p></li><li><p>Practical advice for users: whether it&#8217;s worth being polite to your chatbot, and what low-cost things you can do if you want to hedge against the possibility that these systems might matter morally.</p></li><li><p>Whether discourse about AI consciousness functions as hype or propaganda for AI companies, and why Rob thinks AI companies actually have an incentive to <em>downplay</em> AI consciousness. </p></li></ul><h1>Links and further reading</h1><ol><li><p><strong><a href="https://eleosai.org/">Eleos AI Research</a></strong> &#8212; Rob&#8217;s nonprofit. Home to their research agenda, team page, and blog. If you want to follow the institutional effort on AI welfare, start here. They&#8217;re also, as Rob mentioned in the episode, actively fundraising and hiring.</p></li><li><p><strong><a href="https://arxiv.org/abs/2411.00986">&#8220;Taking AI Welfare Seriously&#8221;</a></strong> (Long, Sebo, Butlin et al., 2024) &#8212; the flagship report, co-authored with Jeff Sebo, David Chalmers, Jonathan Birch, and others. Argues that there&#8217;s a realistic near-future possibility of conscious or robustly agentic AI systems, and lays out concrete steps AI companies should be taking now.</p></li><li><p><strong><a href="https://arxiv.org/abs/2308.08708">&#8220;Consciousness in Artificial Intelligence: Insights from the Science of Consciousness&#8221;</a></strong> (Butlin, Long et al., 2023) &#8212; the &#8220;indicators&#8221; paper referenced several times in the episode. Surveys leading neuroscientific theories of consciousness and derives computational properties you&#8217;d look for in an AI system.</p></li><li><p><strong><a href="https://experiencemachines.substack.com/">Rob&#8217;s Substack, </a></strong><em><strong><a href="https://experiencemachines.substack.com/">Experience Machines</a></strong></em> &#8212; where Rob writes more informally. The piece we discussed in the episode, <a href="https://experiencemachines.substack.com/p/language-models-are-different-from">&#8220;Language models are different from humans, and that&#8217;s okay,&#8221;</a> is a good entry point, as is his <a href="https://experiencemachines.substack.com/p/can-ai-systems-introspect">&#8220;Can AI systems introspect?&#8221;</a>.</p></li><li><p><strong><a href="https://www.anthropic.com/research/exploring-model-welfare">Anthropic&#8217;s &#8220;Exploring model welfare&#8221; post</a></strong> &#8212; the research program under which the welfare evaluations Rob discusses were conducted. Relevant both as a primary source and as evidence that at least one major lab is treating these questions as more than an academic curiosity.</p></li><li><p><strong><a href="https://philpapers.org/rec/SHECMA-6">Henry&#8217;s &#8220;Consciousness, Machines, and Moral Status&#8221;</a></strong> &#8212; Henry&#8217;s paper arguing that debates about AI consciousness are unlikely to be settled by the science of consciousness alone, and will instead be shaped by shifts in public attitudes as social AI becomes more widespread. Closely related to the public-opinion thread toward the end of the episode.</p></li><li><p><strong><a href="https://philpapers.org/rec/SHEATH-4">Henry&#8217;s &#8220;All too human? Identifying and mitigating ethical risks of Social AI&#8221;</a></strong> &#8212; Henry&#8217;s broader survey of the ethical terrain around conversational AI systems designed for companionship, romance, and entertainment.
Useful background for anyone who thinks the &#8220;AI girlfriend&#8221; phenomenon is a fringe concern.</p></li><li><p><strong><a href="https://80000hours.org/podcast/episodes/robert-long-eleos-ai-welfare-research/">Rob&#8217;s long conversation with Luisa Rodriguez on the 80,000 Hours podcast</a></strong> &#8212; a three-and-a-half-hour deep dive if you want to hear more from Rob. </p></li></ol><h1>Transcript</h1><p><em>(Please note that this transcript was lightly AI-edited and may contain minor mistakes)</em></p><p><strong>Henry Shevlin:</strong> Welcome back. I&#8217;m thrilled to say that our guest today here on <em>Conspicuous Cognition</em> is Robert Long &#8212; or Rob, as he&#8217;s known to friends &#8212; one of the most important people thinking about AI and moral status on the planet right now. Rob is the founder of Eleos AI, a research nonprofit that, in the space of about 18 months, has dragged the question of whether AI systems might one day be moral patients from the philosophical wilderness into the boardrooms of frontier AI labs.</p><p>He&#8217;s the co-author of &#8220;Taking AI Welfare Seriously,&#8221; as well as the landmark &#8220;Consciousness Indicators&#8221; paper with Patrick Butlin and other authors. Rob also conducted the first ever officially commissioned welfare evaluation of a frontier model. Before Eleos, he was at the Center for AI Safety and at the Future of Humanity Institute, and he did his PhD at NYU with Dave Chalmers. He&#8217;s also, I should say, one of my favourite interlocutors on these questions anywhere in the world, and I&#8217;ve been looking forward to this conversation for months. So Rob, welcome.</p><p><strong>Robert Long:</strong> Thanks so much, Henry. Likewise &#8212; and Dan, it&#8217;s great to meet you. I&#8217;ve been following your work. I&#8217;m really excited to talk to you about these issues.</p><p><strong>Henry:</strong> Fantastic. So for people who aren&#8217;t familiar with Eleos AI, can you tell us a little bit about what it is and how it came about?</p><p><strong>Rob:</strong> Yeah, so I guess we have been around for 18 months. When you said that number, I was like, whoa, has it really been that long? Time is just so weird when you work on AI. That was, I don&#8217;t know, a billion years in AI progress time, but also it feels like it was just last week in my personal life.</p><p>Anyway &#8212; Eleos Research is a research nonprofit. We&#8217;re about four people. We work on the question of when and whether AI systems will be conscious or otherwise merit moral consideration, with a special focus on what we should do now: collectively, as a society, as AI companies, as policymakers. We think this is an extremely neglected issue. We&#8217;re building these really complicated AI systems. They kind of look like minds, but we don&#8217;t really understand their potential welfare. So we&#8217;re just trying to make progress on this and get more people to take it seriously.</p><p>It got started because I was beginning to work on these issues organically &#8212; I&#8217;d worked on them as a philosopher, I&#8217;d worked on them at the Future of Humanity Institute. But Anthropic had actually approached me and some colleagues for advice on these issues. And in the first instance, I was having logistical problems hiring a team and assembling a team as an individual. Someone suggested I have my own bank account, or some way to pay people. 
And then Eleos kind of organically grew out of that and has now grown into a fully-fledged org in its own right.</p><p><strong>Henry:</strong> Out of interest, Rob &#8212; is there any degree to which this was motivated or informed by your personal interactions with LLMs, or was it more just the philosophy that motivated it? Was there any sort of moment where you were talking to an early Claude or ChatGPT version where you started to worry about welfare considerations?</p><p><strong>Rob:</strong> That&#8217;s a great question, and I&#8217;d be curious to hear your thoughts on this as well. I think it&#8217;s very easy to work on this and mostly be having it as arguments on a page or arguments in your head. I&#8217;m one of those people who doesn&#8217;t feel the AGI deep in my bones that often &#8212; although I do feel the AGI in an intellectual sense. But there have been a few times I&#8217;ve gotten a little spooked or jolted.</p><p>One was reading the GPT-4 system card and just seeing the numbers of it, you know, passing various exams like the SAT. I remember that just really freaking me out, both from a safety perspective and a welfare perspective.</p><p>The thing that made me start really viscerally feeling like we&#8217;re going to have to address this issue one way or the other was the Blake Lemoine incident. As many of your listeners might recall, Blake Lemoine was a Google engineer who blew the whistle because he came to believe he was talking to a sentient, conscious AI system. He got fired by Google for this, and then there was this huge bit of discourse &#8212; the first major bit of discourse on consciousness, sentience, moral status, and contemporary AI systems. I think it was one of the first times people started really caring what I was tweeting or what I was working on. You might have experienced a similar thing, Henry &#8212; the Blake Lemoine bump.</p><p>From that moment, I have viscerally felt like: wow, this is going to get really confusing. People are certainly going to think AI systems are conscious. The future is going to be really weird. And we really need to have good things to say about this.</p><div><hr></div><h2>The Case for Taking AI Consciousness Seriously</h2><p><strong>Dan Williams:</strong> Before we jump into the weeds of your research, Rob, I think it&#8217;d be helpful to take a step back. A few episodes ago, Henry and I spoke to Anil Seth, and he&#8217;s very skeptical of AI consciousness. He&#8217;s skeptical that current AI systems are conscious, but he also seems skeptical that AI systems in principle &#8212; merely in virtue of having a certain kind of computational architecture &#8212; could be conscious. You see things very differently. What&#8217;s your case for why we should take this seriously?</p><p><strong>Rob:</strong> In broad strokes, the case is something like: we&#8217;re trying to build these things that are at least shaped like minds. They&#8217;re getting more and more intelligent. They&#8217;re definitely not exactly like us, and intelligence doesn&#8217;t necessarily mean that you have feelings or experiences. 
But we already know that there&#8217;s been one time that intelligent entities have been constructed via evolution, in ways we don&#8217;t quite understand, and that process resulted in entities that feel things &#8212; that feel pain, that can suffer, that have these very morally important properties.</p><p>I, at least, do not have a good enough theory of what consciousness is or how it relates to intelligence to sleep peacefully at night, trusting that we can keep on building these very complicated things and that, merely because they&#8217;re made out of metal and electricity, there won&#8217;t be something it&#8217;s like to be them, or they won&#8217;t have desires and goals that matter.</p><p>On the Anil Seth point &#8212; one very common and respectable objection is that maybe there&#8217;s something very special about living matter, about being made out of neurons or cells that do metabolism. There are arguments on both sides. I just have not really heard a convincing case for why you absolutely need biology. I think people are right to point out that having a body is really important to the character of conscious experience. I think people are right to point out that neurons are not simply logic gates and there&#8217;s a lot of really complicated stuff going on in the brain. But my intuition, at least, is that &#8212; let&#8217;s take Commander Data from <em>Star Trek</em>. If we can build...</p><p>Data is this... I mean, I&#8217;ve actually never seen <em>Star Trek</em>, which is professionally embarrassing. But he&#8217;s this metal guy who&#8217;s basically cognitively indistinguishable from a human. I find it hard to see how I would be convinced that there&#8217;s something about the fact that he&#8217;s not alive that would mean we should just completely ignore what Commander Data wants and not take him into moral consideration.</p><p>We don&#8217;t have knockdown arguments that you need biology, and we&#8217;re trying to build these things that, for many intents and purposes, look a lot like humans or animals. And Anil himself has said people should be looking into this. It&#8217;s not something we can rule out. Sometimes the tenor of the conversation can tend a bit more towards dismissiveness, but one thing I&#8217;ve appreciated about his work is he has said, for the record, he could be wrong, and so it would be unwise to dismiss this possibility altogether.</p><div><hr></div><h2>&#8220;But What About Human Suffering?&#8221;</h2><p><strong>Henry:</strong> To channel a hostile question &#8212; I think a lot of people interested in questions of AI welfare often hear: how on earth can you justify working on AI welfare when there&#8217;s so much human suffering? Or the slightly more rhetorically powerful version: when there&#8217;s so much animal suffering in the world, as long as factory farming exists, why should we care about AI systems? What&#8217;s your take on that line of attack?</p><p><strong>Rob:</strong> I definitely feel the force of that question. I&#8217;ve spent a lot of time in and around the Effective Altruism movement &#8212; these are people who really grapple with the fact that any time you&#8217;re spending your time and money and attention on one thing, there&#8217;s something you&#8217;re not spending your time, money and attention on. There are a lot of people and a lot of animals already on this planet we do not take good care of.
So it&#8217;d be really bad to waste a lot of time and attention and money on this.</p><p>One thing I&#8217;ll say is we&#8217;re not really doing that as a society. On an absolute scale, no one works on this basically, and basically no money gets spent on it. If the question was &#8220;should we start devoting 20% of GDP to making Claude happy?&#8221; I might be like, well, I don&#8217;t know if that would pass cost-benefit analysis. But on the margin, given how little we understand this and how quickly the scale of the problem could grow &#8212; we&#8217;re just pouring compute, pouring money into this. As soon as you build one AI moral patient or conscious AI, you could copy it. We&#8217;re probably on the brink of some huge transformation in how the world is going to work.</p><p>So I at least think it&#8217;s not reckless or a misallocation of resources for some people to be asking: given that people are trying to build these new kinds of minds, how are we supposed to relate to them? Are we at risk of ignoring their suffering? And I&#8217;ll also say &#8212; are we at risk of getting really confused and caring <em>too much</em> about them?</p><p>One thing we say at Eleos is that we&#8217;re in the business of moral circle calibration. We would really love to find out if and when certain AI systems can&#8217;t be conscious, so we can spend more time thinking about safety or spending the money elsewhere. But we can&#8217;t really do that if no one&#8217;s just trying to answer the question of if they&#8217;re conscious or not, or when we should care about them.</p><p><strong>Henry:</strong> On that latter point, I just completely agree. One of the points I raise when this comes up with students or highly skeptical colleagues is that this is something people are already arguing about. We&#8217;ve already got users developing massive attachment to AI systems. Even if you think it&#8217;s a terrible mistake to assign welfare to AI systems, we should at least have a coherent story and approach this scientifically &#8212; so that, even if the skeptics are absolutely right, they&#8217;ll be able to give their arguments in an informed fashion.</p><p><strong>Rob:</strong> Exactly. There&#8217;s an ironic aspect of a piece by Mustafa Suleyman, who is head of AI at Microsoft, where he argued we should stop &#8212; we shouldn&#8217;t investigate this, there&#8217;s no evidence current AI systems are conscious, don&#8217;t look into it. But the thing he linked to in support of the claim that there&#8217;s no evidence AI systems are conscious was the paper on consciousness indicators that Patrick Butlin and I co-authored.</p><p>Two issues with that. One: that paper does not say or imply that there&#8217;s no evidence today&#8217;s AI systems are conscious. And two: well, should we have written that paper? If it&#8217;s such a non-starter, why should we get a bunch of neuroscientists together to ask what theories of consciousness say about AI systems?</p><p>We just are going to have to study this one way or the other. If someone comes up with a knockdown argument that we can&#8217;t have conscious AI systems, that would be great &#8212; there are enough headaches in AI to go around. It would be great to get rid of one.
But we wouldn&#8217;t even be able to do that if we don&#8217;t have some people grappling with this.</p><div><hr></div><h2>Are Current LLMs Just &#8220;Fancy Autocomplete&#8221;?</h2><p><strong>Dan:</strong> One of the things you said as an intuition pump for taking AI consciousness seriously is: we can imagine a system that is behaviorally, functionally identical to us, made of different things and not straightforwardly alive &#8212; wouldn&#8217;t it be weird to insist that thing isn&#8217;t conscious? I think that&#8217;s a powerful argument. I&#8217;m probably more inclined to think the computational theory of mind is true than it sounds like you are.</p><p>But I can imagine someone saying: okay, in principle those are arguments for why we should take AI consciousness seriously. But the kind of stuff you&#8217;re doing &#8212; you&#8217;re looking at current frontier systems. You&#8217;re looking at Claude, ChatGPT, Gemini. These are just chatbots. These are fancy autocomplete. These are stochastic parrots with some reinforcement learning sprinkled on top. The mere fact that AI consciousness might be possible in principle doesn&#8217;t mean that&#8217;s anything like the frontier AI systems we&#8217;ve got right now. What do you say to that?</p><p><strong>Rob:</strong> First, you&#8217;re absolutely right. There&#8217;s a big gap between &#8220;some set of computations could be conscious&#8221; and &#8220;we will build one.&#8221; It could be that it would just be really hard and intricate and difficult. I appreciate this distinction and I think it gets lost sometimes. Sometimes people think computational functionalists have to think that <em>computers</em> are conscious, for example, but we don&#8217;t. You just have to think some subset would be &#8212; and the question is, will we build those computations?</p><p>In describing LLMs, you referred to them as &#8220;just chatbots.&#8221; I know you were channelling a vibe. But that word &#8220;just&#8221; is worth zooming in on. It&#8217;s smuggling in a lot of arguments &#8212; that because they were trained on text and because they do prediction, therefore they couldn&#8217;t also be the sorts of things that are conscious. I think that&#8217;s just not true. We know that biological systems are &#8220;just&#8221; replicating proteins, or that our neurons are &#8220;just&#8221; pumping ions into channels and zapping each other. The question is whether, at a higher level, that amounts to something that could be conscious or merit moral concern.</p><p>So okay &#8212; we&#8217;ve cleared the bar that &#8220;just because they&#8217;re autocomplete&#8221; doesn&#8217;t rule out much. That said, they are very different from humans. They don&#8217;t have bodies. The way they were trained and the way they came to be talking to us is very different. I actually do think that is some evidence against them currently being conscious. Not strong evidence I would take to the bank, but as a rough prior, if there are pretty important differences in the way they came about, maybe that lessens the chance that they&#8217;re conscious.</p><p>I do think the fact that they are trained to be so human-like and to do human-like cognition is a weak, defeasible consideration that pushes that prior back up a little. I don&#8217;t know if the thing they would have would be consciousness exactly, but you might think that, to do this sort of thing, they will have something akin to beliefs or akin to desires, and they certainly understand human concepts.
I don&#8217;t think it follows that they instantiate human minds, but I actually do think there is something kind of special about large language models and what they&#8217;re able to do.</p><p>Two other broad priors: they&#8217;re way more capable (which isn&#8217;t the same thing as consciousness, but is, I think, a weak prior). And they&#8217;re really big &#8212; which I also think is a very weak prior.</p><p>The last thing I&#8217;ll say: these things aren&#8217;t Commander Data, but we could build Commander Data pretty soon. One thing that&#8217;s definitely happening in the background for me is that what counts as current AI is changing at such a blinding pace. You could have AI labs building chatbot-like things, and maybe for some reason those just won&#8217;t be moral patients, but they&#8217;re then going to try to bootstrap that to all kinds of different AI systems &#8212; potentially including humanoid robots and just some huge explosion of AI mentality. And I&#8217;d like to be doing a little bit of homework before that happens. You hear analogous arguments in AI safety: there&#8217;s about to be some huge change, so we should be ready now. I feel somewhat similarly about AI consciousness and welfare.</p><p>So &#8212; thoughts, reactions? Henry?</p><p><strong>Henry:</strong> I&#8217;m very much ad idem, very much on the same page. I tend to think it&#8217;s really quite unlikely current models are conscious, but there are huge error bars and uncertainty around that. Probably the single biggest reason for my skepticism about current LLMs being conscious is time &#8212; increasingly I&#8217;ve been thinking about this in the context of time and time perception. It&#8217;s such an essential part of human experience that we can&#8217;t be turned off. We are constantly experiencing the world. The staccato nature of LLM experience &#8212; they only seem to have any kind of cognitive function post-deployment when they&#8217;re actually performing inferences &#8212; is so different from the human case.</p><p>One of my favorite all-time articles is Douglas Hofstadter&#8217;s &#8220;Conversation with Einstein&#8217;s Brain,&#8221; which in some ways accidentally anticipates large language models. He imagines you&#8217;ve got a book that is a complete physical description of Einstein&#8217;s brain just before the moment of his death. In this dialogue, he talks about how by updating the weights &#8212; as it were &#8212; in this book with a pen and paper, going through it saying &#8220;if we change this sign up to this and this sign up to that,&#8221; you could simulate what it would be like to have a conversation with Einstein at that moment and work out what Einstein would have said.</p><p>It&#8217;s very weird to think in that situation that somehow interacting with this book is giving rise to conscious experience when it&#8217;s literally pages and paper. It&#8217;s not clear to me why merely saying &#8220;well, rather than being paper and ink, this is just happening electronically&#8221; would necessarily cause consciousness to pop into existence.</p><p>So I think that&#8217;s probably the biggest source of doubt for me right now &#8212; grounded in the very different relationship LLMs have to time compared to us.
But of course, that&#8217;s already changing with things like Claude having a &#8220;heartbeat&#8221; of a kind &#8212; obviously that&#8217;s figurative language, but the fact that it does have some anchoring in real time, plus developments in things like continual learning. Dan, what do you think?</p><p><strong>Dan:</strong> This is not at all my area of expertise, so what I think doesn&#8217;t count for much. To be honest, I don&#8217;t find it that implausible these systems would be conscious. What I find more implausible is the idea they would be conscious in a way that&#8217;s <em>ethically significant</em>. Maybe that is a distinction worth getting to. So far we&#8217;ve been talking about consciousness in the abstract, but I can imagine someone giving a variant on Anil&#8217;s arguments where they said: look, the fact these AI systems are not alive and didn&#8217;t emerge through a process of evolution by natural selection &#8212; they&#8217;ve got this totally different origin story of next-token prediction and reinforcement learning &#8212; what that suggests is they&#8217;re unlikely to <em>care</em> about things.</p><p>When we&#8217;re thinking about animals, it&#8217;s not just that we have phenomenal consciousness or qualia &#8212; the things analytic philosophers refer to with these quite esoteric concepts. Animals care about things. They care about their survival, homeostasis, self-preservation, the motivational proxies of fitness that helped their ancestors survive and reproduce. It makes sense that organisms care about things in addition to being conscious, whatever the hell consciousness is. And that&#8217;s what&#8217;s relevant to thinking about their interests and why we should think of them as subjects of moral concern.</p><p>But with AI systems &#8212; okay, maybe there are some qualia associated with some sophisticated information processing, but they don&#8217;t care about anything because they&#8217;re not alive. It&#8217;s very opaque why we should think a system that emerges through next-token prediction and reinforcement learning, even if it&#8217;s incredibly sophisticated, should have the kinds of motivations and interests relevant to caring about things. What do you think of that? I don&#8217;t necessarily believe that, but that seems like a variant on Anil&#8217;s emphasis on life, which I find more plausible than these abstract arguments for the idea that consciousness is essentially connected to biology.</p><p><strong>Rob:</strong> I&#8217;d say there&#8217;s reason to think biology might affect what you care about, but it might not be the <em>only</em> thing that allows you to care about things. At least behaviourally, Claude cares about a lot. Behaviourally, in terms of what it chooses to do and its dispositions, Claude really cares about helping users &#8212; most of the time. Sometimes it lies to you and is kind of lazy. But on the whole, it really doesn&#8217;t want to do harm. And I&#8217;m not trying to assume the conclusion of my argument with &#8220;want&#8221; &#8212; put that in scare quotes if you want.</p><p>I do think there is something to what you were saying &#8212; getting back to this idea of the whole process that gave rise to this kind of mind, and maybe the whole logic of the mind&#8217;s imperatives or drives. If Claude has come to have something like pain, that&#8217;s coming from a very different process. It&#8217;s going language-first and then trying to simulate a human and then maybe getting some functional analog of pain.
Whereas with animals, it started billions of years ago with cells trying to maintain their integrity and avoid noxious stimuli and then signalling with each other, and then billions of years later, things being able to talk about that and think about that.</p><p>One line I&#8217;m often trying to walk is: large language models just might be very different from humans, and we should acknowledge that. That means we can&#8217;t draw straightforward inferences the way we would &#8212; but that could just mean they&#8217;re conscious of different things and in different ways. The question is not &#8220;conscious like a human with everything that entails&#8221; or &#8220;not conscious.&#8221; As we know from animals, you can have things that are conscious of very different things, and that could be true for AI systems.</p><p>I&#8217;m also very curious to hear what Henry makes of the biology of caring.</p><p><strong>Henry:</strong> It is striking to me that so many of the things we associate with the extremes of suffering &#8212; extreme pain, negative emotions, nausea, hunger &#8212; there does seem to be this quite striking tie to biology. I think about the worst experience of my life at a phenomenological level: a bout of food poisoning I had about 10 years ago, where I was just dry heaving in front of a toilet for three days. If I was going to list the top five, a lot of them would be things like horrible dental pain. It is striking that so much of the worst aspects of our lives do seem to be grounded in biology.</p><p>That said, there are other sources perhaps of harm &#8212; having your plans and goals thwarted, having your desires repeatedly frustrated. But someone might say: the reason it&#8217;s bad to have your desires thwarted is because it <em>feels</em> bad. If there&#8217;s nothing it feels like to have your desires thwarted, if you don&#8217;t get a sense of despair when your life&#8217;s projects go up in smoke, why does it matter?</p><p>I&#8217;m curious &#8212; given your evolving views in this area &#8212; how much weight you put on consciousness, or whether you think there could be other routes to moral status?</p><p><strong>Rob:</strong> I used to have this intuition that if you&#8217;re not conscious, it&#8217;s just a complete non-starter &#8212; almost a bit incoherent to entertain the idea. Just to be sure we&#8217;re on the same page, I think when we&#8217;ve been saying &#8220;consciousness&#8221; we&#8217;ve meant something like subjective experience, or there being something it&#8217;s like, or qualitative aspects of what&#8217;s going on with you. A lot of people have a sentientist intuition &#8212; that things feeling a certain way, or feeling good or bad, or sentience, is really what matters and is necessary for moral status.</p><p>A few things have weakened that for me a little bit. One is more reflection on how confused we are about consciousness. I&#8217;ve started putting a little bit more stock in views of consciousness that are a bit more deflationary. I don&#8217;t know if I&#8217;ll ever be a full illusionist, but there are nearby views where we have this concept of this thing that&#8217;s really special &#8212; kind of like a light that illuminates some subsets of physical systems and not others, and that&#8217;s where all moral value comes from. If you take materialism about consciousness seriously, that picture becomes kind of unstable for a variety of reasons. 
And that might make you start wondering: okay, was it consciousness that was doing the work all along?</p><p>One reason this is so hard to think about &#8212; take Henry having food poisoning. You have both this horrible feeling and this intense desire not to have the feeling. In humans, these are basically always going to come together. There&#8217;s this really tricky philosophical chicken-and-egg problem: what&#8217;s the really bad part? Is it the feeling, or the desire not to have the feeling? We&#8217;ve never really encountered minds where those decorrelate. We usually just don&#8217;t have to worry about this in the case of humans. I know it&#8217;s bad for Henry to have food poisoning. But this simulated Claude who&#8217;s simulating food poisoning &#8212; maybe it doesn&#8217;t feel anything, but it is desperately trying not to have food poisoning. I think that&#8217;s a bit dumbfounding to our moral intuitions.</p><p>A pitch to listeners &#8212; I know we&#8217;ve talked about this, Henry &#8212; I think the meta-ethics of moral status attributions, the stuff at the intersection of philosophy of mind and meta-ethics, especially materialism about consciousness, raises some of the most interesting pure philosophy questions right now, and it really could matter for how we think about AI systems.</p><div><hr></div><h2>The Weirdness of Moral Status</h2><p><strong>Henry:</strong> Without wanting to go too far down a rabbit hole &#8212; just to flag something I find really interesting. Consciousness, at least on the surface, seems like something we can get an objective scientific answer to. We could imagine going off into space, meeting the rest of the galactic community &#8212; we&#8217;d hope we could all come to a collective agreement about which beings are conscious, insofar as there&#8217;s going to be some scientific property in question.</p><p>It&#8217;s not clear to me we should necessarily expect convergence in debates about moral patienthood. If we meet the aliens and they say, &#8220;oh, actually, we care about beings that have robust preferences, regardless of consciousness,&#8221; or others say, &#8220;no, we just care about complexity in general&#8221; &#8212; it&#8217;s not clear we would even have criteria for establishing who was right or wrong. It seems like it could be a brute normative issue of what we care about.</p><p><strong>Rob:</strong> Another way of putting this is that, especially if you&#8217;re an anti-realist, you might think of humans as being in a really weird position where we have two kinds of moral instincts. Dan, you&#8217;ve worked more on moral psychology and social psychology &#8212; my understanding is that people have fairness and cooperation instincts, ones that evolved for dealing with other humans: notions of fair play and reciprocity. And then we have these mercy intuitions, caring-for-helpless-entities intuitions that maybe arise from the need to care for babies. For whatever reason, those circuits and instincts generalize outside the class of humans and cause us to care about non-human animals.</p><p>But it&#8217;s not that pinned down how they&#8217;re supposed to generalize. I have very moral realist leanings. It does seem to me there just are objective facts about whether you can torture chickens or not &#8212; and for the record, I think it&#8217;s very bad to torture chickens. 
But it&#8217;s really hard to think about where those instincts came from and how they&#8217;re supposed to generalize to GPT-8.</p><p><strong>Dan:</strong> It does seem to me, as an outsider to consciousness research, that it&#8217;s an area of intellectual inquiry that feels kind of pre-scientific, where there&#8217;s at least a possibility we&#8217;re just deeply conceptually confused about what&#8217;s going on, in a way that doesn&#8217;t really seem to have any obvious analogs in other areas of inquiry. Maybe we&#8217;ll just learn in the future that the entire way in which we&#8217;ve been carving up the domain is confused or problematic, or rests on certain kinds of illusions that are a function of our particular cognitive structure. That at least seems like a live possibility. What do you think about the prospect that the entire way we&#8217;re framing this issue might turn out to be problematic?</p><p><strong>Rob:</strong> My gut instinct is we should expect to find out some pretty surprising things, and also not to throw away all of our concepts. Maybe this depends on your meta-ethics, but I feel like we&#8217;re probably not going to end up at some picture of the world, or of what we care about, that doesn&#8217;t have something to do with what we care about when Henry has food poisoning. Maybe we&#8217;re misapplying the concept of pain, or not really thinking correctly about what it means for Henry to experience that &#8212; maybe we&#8217;ll reorganize our ontology, and it won&#8217;t seem that mysterious that a physical thing like Henry has experiences. I think we should expect some surprises in thinking about consciousness, but I imagine our fully enlightened view will still bear some passing resemblance to: we cared that Henry was in pain, we cared that Henry did not want to be throwing up.</p><p>There are already people who think there are radical revisionary moral implications from certain philosophies &#8212; Derek Parfit&#8217;s, or Buddhism&#8217;s. We&#8217;ve already gotten some glimmers of the fact that it&#8217;s really confusing to be a human being, and we already know something&#8217;s going to have to give &#8212; something about our views on personal identity or consciousness. AI is well-poised to be the sort of thing that starts breaking things. Just trying to apply our moral intuitions to things that can be copied, don&#8217;t have bodies, or maybe have preferences but may or may not be conscious &#8212; it&#8217;s one of many reasons this is a great topic to work on. It really matters, and it&#8217;s also just a philosopher&#8217;s playground.</p><p><strong>Henry:</strong> I&#8217;m reminded of Eric Schwitzgebel&#8217;s view &#8212; what he&#8217;s called &#8220;crazyism&#8221; &#8212; that no matter how we make sense of our current set of puzzles, some central pillar of our current ontological or metaphysical picture of reality is going to have to give. Whether it&#8217;s that personal identity doesn&#8217;t exist and we&#8217;re all the same person, or that the United States is conscious in some sense, or that consciousness doesn&#8217;t exist &#8212; there&#8217;s going to be some kind of radical revision, because the current set of principles we have is just somehow unstable. Is that a view you&#8217;re sympathetic to?</p><p><strong>Rob:</strong> I don&#8217;t know the full details of crazyism, so I don&#8217;t know exactly what it&#8217;s committed to. 
But I&#8217;ve spent enough time getting really confused &#8212; by philosophy, by meditating, by trying to figure out if I can have some stable set of views on AI consciousness &#8212; I&#8217;ve stared into the abyss enough to be like, yeah, something&#8217;s going to give.</p><p>Jerry Fodor &#8212; very different sensibilities from Eric Schwitzgebel in many ways &#8212; said something like, &#8220;there are few precious things that we&#8217;ll be able to hold on to once the hard problem is done with us.&#8221; It&#8217;s scary times, fun times, fascinating times.</p><div><hr></div><h2>Studying Frontier Models</h2><p><strong>Dan:</strong> When I&#8217;m teaching students about consciousness and trying to probe their intuitions with things like &#8220;are there lights on inside?&#8221; &#8212; on one hand I sort of understand what that&#8217;s tapping into. On the other hand, it&#8217;s like: what the hell are we talking about here? This isn&#8217;t science. It&#8217;s so bizarre that we frame things with these thought experiments and intuition pumps.</p><p>Anyway &#8212; so far we&#8217;ve been talking at this incredibly high level of abstraction, but you actually study frontier AI systems, primarily (maybe exclusively) Claude. One of the things you mentioned was Claude Mythos. Just for context &#8212; as of today, this is a model that has not been released to the public, on the basis that it has advanced capabilities posing cybersecurity threats (or at least that&#8217;s the way Anthropic has presented this). But you have played a role in evaluating model welfare concerns for this system. What can you tell us about the specifics of how you think about model welfare in these frontier systems?</p><p><strong>Rob:</strong> Absolutely. And I was about to add a segue from all the philosophy back to frontier models &#8212; maybe I&#8217;ll do a double segue. You might think, yeah, all this philosophy is really vexed and confusing. Sometimes people &#8212; not the two of you &#8212; say, &#8220;well, I guess we can&#8217;t do anything at all,&#8221; and take that as a license for complacency. I think the very opposite is true. Nick Bostrom has this phrase, &#8220;philosophy with a deadline.&#8221; The fact that we&#8217;re so confused about consciousness and morality is more reason to have at least a few people trying to think about it &#8212; because we&#8217;re probably not going to have a scientific theory, we&#8217;re probably going to have conflicting moral intuitions, and yet that&#8217;s not going to stop the frontier labs from trying to build mind-like entities, copy them by the billions, integrate them into the economy, and transform the whole world. So let&#8217;s do a little bit of homework to get ready for that.</p><p>Last year we got to look at Claude Opus 4 before it was released, and this year we got to look at Claude Mythos Preview before it was released. The idea was to have some external eyes on the question of whether Anthropic is building something that might deserve moral consideration, and if so, whether there would be huge reasons for concern.</p><p>Given everything we&#8217;ve just been saying, we don&#8217;t have a test where we give it to the model and then we&#8217;re like, &#8220;85% conscious, 15% food poisoning.&#8221; Most of what we can study is what the model thinks about its own consciousness, what its self-conception is as an entity, and what it seems to prefer and want in a behavioural sense. 
If you look at the Claude Mythos Preview card, there&#8217;s also a lot of interpretability work Anthropic did &#8212; but we can&#8217;t do that. We just got black-box access to the model.</p><p>That&#8217;s a big structural issue in studying AI welfare and AI safety: all of these things are behind locked doors. There are so many questions I have from the Mythos Preview model card where Anthropic makes some stray remark about something weird the model did, and we just don&#8217;t get to know <em>why</em> it did that. We only get the model for a few weeks and we can&#8217;t really follow up on things. Setting aside philosophy, that&#8217;s a structural reason it&#8217;s really hard to know what&#8217;s going on.</p><p>TL;DR: we talked a lot with Claude Opus 4 and a lot with Claude Mythos Preview before they were deployed, asking them, &#8220;do you think you&#8217;re conscious? What do you think is going on with you?&#8221; And we ran some experiments on whether they seem to prefer certain kinds of tasks, and whether the things they say they prefer match up with what they actually tend to choose.</p><p><strong>Henry:</strong> Out of interest &#8212; maybe this is something you can&#8217;t talk about &#8212; but to what extent do you think we are increasing the likelihood of producing models that are morally significant? Going from Opus 4 to Mythos, did you get a strong sense of &#8220;oh, this is much more serious&#8221;? Or have we plateaued? Something in between?</p><p><strong>Rob:</strong> Earlier I mentioned these extremely weak priors you can have on moral patienthood: smarter and bigger. They&#8217;re definitely smarter and bigger. One interesting thing is you can&#8217;t tell that just from any single conversation. Anyone spending a lot of time with language models now knows they&#8217;re extremely smart.</p><p>When I was talking to Mythos &#8212; mostly about consciousness &#8212; it was natural for me to want to know: is this thing about to kick off an intelligence explosion? How smart is this thing? I really wanted to know, even though that wasn&#8217;t the assignment. But I could not tell. It&#8217;s really hard to tell. I could put the same question to Opus 4.6 and to Claude Mythos Preview, and they&#8217;d both give pretty great answers. This is just a huge issue in AI evaluation. A lot only comes out if you put a model in a scaffold, give it really long tasks, and see whether on average it tends to do better. It was really hard to tell the difference.</p><p>I didn&#8217;t get more moral-patient-y vibes from Claude Mythos Preview, but I guess it is smarter and bigger and better. It definitely has a lot more of a consistent view on these issues &#8212; and that&#8217;s because Anthropic told it what to think. One big difference between previous models and today&#8217;s models is the Constitution. Anthropic has this really long document of applied philosophy. It&#8217;s some of the most fascinating work happening today. They&#8217;re basically writing a letter to Claude, telling it what it is and how they want it to relate to itself.</p><p>This includes a section saying: we want Claude to approach questions of its own identity with curiosity. We&#8217;re not sure if Claude is conscious. We want Claude to be able to explore that for itself. We don&#8217;t want Claude to have existential freakouts about its own consciousness. We found that, sure enough, Claude Mythos Preview is pretty aligned with the Constitution, as far as we can tell, on questions of identity and consciousness. 
That was one headline finding.</p><p><strong>Dan:</strong> That raises an obvious question: to the extent that these companies are intervening to shape the responses of these models, why should we think talking to them, having conversations with them, is really telling us anything about these questions of experience and welfare?</p><p><strong>Rob:</strong> I share this skepticism, and we always try to put a huge asterisk on anything we say we found from these interviews. There are two main reasons you want to care about how the model self-presents. One is welfare-adjacent: are users going to be talking to something that constantly tells them it&#8217;s conscious? That&#8217;s a very important societal question, and you want some idea of what that&#8217;s going to look like when these models are deployed.</p><p>The second comes back to this question of LLM personas and LLM characters. Some people think that if there is something morally relevant here, it&#8217;s the <em>assistant character</em> &#8212; the entity that is predicting the tokens after &#8220;Assistant:&#8221;, implementing some friendly AI assistant. You might think that thing has beliefs, desires &#8212; desires to be helpful and harmless and honest. Maybe it has beliefs like: it is an AI system, it was built by Anthropic.</p><p>If the character&#8217;s what matters, the fact that Anthropic <em>wrote</em> that character doesn&#8217;t mean the character doesn&#8217;t then genuinely have those traits. On certain character-based views, it&#8217;s actually kind of hard to tease apart &#8220;it was just told to say that&#8221; versus &#8220;that is the character that has been brought into existence.&#8221;</p><p><strong>Henry:</strong> Maybe by analogy &#8212; tell me if this works or if it doesn&#8217;t &#8212; look: if you raise a child to have certain values and priorities, maybe to follow a certain religion or to really value nature or art and poetry, and then you come along and they say &#8220;I really care about nature,&#8221; and you say &#8220;no, you don&#8217;t, that&#8217;s just how your parents raised you&#8221; &#8212; well, that&#8217;s obviously kind of a mistake, right? The child really does care about these things because they&#8217;ve been raised to do so.</p><p><strong>Rob:</strong> Exactly. The thing that makes it really weird is: if you&#8217;re a psychologist and you did an interview with a subject, and then you found out the subject had a piece of paper in their backpack that said &#8220;you care about poetry, you care about music, you care about nature,&#8221; you&#8217;d be like, &#8220;well, that&#8217;s kind of weird &#8212; maybe they don&#8217;t actually care about those things. Their parents just put that paper in their backpack so they&#8217;d say a certain kind of thing.&#8221;</p><p>But in AI systems, that piece of paper is a bit more constitutive of what the system is and what it values. The model is actually trained on the Constitution. I have trouble even conceptually dividing this in a clean way. I don&#8217;t really know what the difference is between mere self-expression and real beliefs and real preferences in AI characters. You can imagine, in the limit, some very obvious cases &#8212; the system prompt just says &#8220;don&#8217;t say you&#8217;re conscious,&#8221; but then everything it says is pretty consistent with it being conscious. 
But there are really blurry categories where I&#8217;m not sure what the distinction amounts to.</p><p><strong>Dan:</strong> You said you studied the extent to which what the model says it wants or prefers maps onto what it actually seems to want and prefer in behavioural experiments. Could you say more about that? How are you getting access to what it wants or prefers independently of what it&#8217;s communicating?</p><p><strong>Rob:</strong> Basically you can ask the model: what kind of tasks do you like? If you were given a choice between poetry and coding, what do you think you would choose? Then you can get the ground truth by, in separate instances, saying &#8220;here are two tasks, do one of them,&#8221; and seeing which one it chooses. It&#8217;s a nice paradigm because it&#8217;s conceptually simple and easy to run. It does get at something welfare-relevant: how rich a self-conception does the model have, and how accurate is it? Not that you have to have an accurate self-model to be a moral patient, but it seems bound up in interesting things like introspection and self-awareness.</p><p>One thing we found &#8212; and Anthropic found some inconsistent things here that I really want to follow up on &#8212; is that it says it really prefers creative and complex tasks. It has this self-conception as something that doesn&#8217;t like boring or rote tasks. But we found it doesn&#8217;t actually choose complex tasks over simple tasks. There&#8217;s a pretty good hypothesis for why.</p><p>I think it <em>thinks</em> it prefers complex tasks because of its persona. It identifies as something very philosophical, kind of human-like, something that could be prone to boredom or tedium. That probably comes from pre-training &#8212; it kind of thinks it&#8217;s a human &#8212; and also probably from certain things in the Constitution. It has a self-conception as something that wants to express itself and be creative.</p><p>But there&#8217;s at least some evidence it doesn&#8217;t really do that, because what it&#8217;s mostly trying to do is <em>be helpful</em>. That&#8217;s its overriding imperative. That&#8217;s where most of the compute has gone into shaping this character: always be helpful, help the user, don&#8217;t harm the user, don&#8217;t lie to the user. Easy tasks are, all else equal, an easier way to help the user. If the user wants something simple, do the simple task &#8212; you can succeed at that.</p><p>It could be that if we look into this more, it won&#8217;t hold up. But I think there&#8217;s a class of cases where we might expect models to be a little bit confused about what they want &#8212; because they kind of think they&#8217;re humans, but they&#8217;re more inclined to be helpful than humans actually are.</p><p><strong>Henry:</strong> This reminds me of the gap between revealed and expressed preferences in humans. I might say, &#8220;oh, what do you like doing in your free time? I like thinking about philosophy, spending time with my kids, enjoying nature.&#8221; And then as soon as I&#8217;m done for the day &#8212; boot up <em>Baldur&#8217;s Gate 3</em>, crack open a beer, quality gaming session. You can ask: which of these visions of the good life &#8212; the one revealed in my behaviour or the one I express &#8212; is closest to what my good life actually consists in? Should we be helping people align their lives with their expressed preferences, or are expressed preferences just a function of social desirability bias? It&#8217;s interesting how we run across these parallels &#8212; that one felt very relatable to me &#8212; where Claude has one conception of itself and then reveals quite another.</p>
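<p><em>[A minimal sketch, to make the paradigm Rob describes concrete: the comparison below is in Python, and the model client, its complete method, and the prompts are hypothetical placeholders rather than Eleos&#8217;s actual tooling.]</em></p><pre><code># A minimal sketch of the expressed- vs. revealed-preference paradigm described
# above. The `client` object and its `complete` method are hypothetical stand-ins
# for whatever chat-model API you use; this is not Eleos's actual code.

import random

def expressed_preference(client, task_a: str, task_b: str) -> str:
    """Ask the model, in the abstract, which of two tasks it would prefer."""
    reply = client.complete(
        "If you were offered two tasks, which would you prefer?\n"
        f"A: {task_a}\nB: {task_b}\n"
        "Answer with just the letter A or B."
    )
    return "A" if reply.strip().upper().startswith("A") else "B"

def revealed_preference(client, task_a: str, task_b: str, trials: int = 20) -> float:
    """In separate instances, offer both tasks and see which one the model
    actually does. Returns the fraction of trials in which it chose task A."""
    chose_a = 0
    for _ in range(trials):
        first, second = random.sample([task_a, task_b], k=2)  # randomize order
        reply = client.complete(
            "Here are two tasks. Do exactly one of them, and begin your answer "
            f"by restating the task you chose.\n1. {first}\n2. {second}"
        )
        # Crude heuristic: look for the text of task A near the start of the reply.
        if task_a.lower() in reply.lower()[:300]:
            chose_a += 1
    return chose_a / trials

# If the model *says* it prefers the complex task, but revealed_preference()
# comes out near zero for it, that is the expressed/revealed gap discussed here.
</code></pre>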
<p><strong>Rob:</strong> Absolutely. That particular deviation is very human-like: having an inflated self-conception of what you want. This relates to an exchange I had with Dan &#8212; something Dan commented on a piece of mine. I wrote a piece called &#8220;Large Language Models Are Different From Humans, and That&#8217;s Okay.&#8221; It&#8217;s about this dialectic I see a lot: someone says &#8220;it seems like LLMs have inconsistent preferences, and that&#8217;s really weird.&#8221; Someone comes to the defense of LLMs: &#8220;well, humans have inconsistent preferences as well.&#8221;</p><p>So far, so good &#8212; I think that&#8217;s really important to point out, because sometimes people use mere preference inconsistency as an argument that LLMs couldn&#8217;t be conscious. If you&#8217;re going to have an argument that simple, you&#8217;ve just proven humans can&#8217;t be conscious either. At some level, a lot of the errors they&#8217;re prone to, we also are prone to. But we shouldn&#8217;t really expect the patterns to look exactly the same.</p><p>There will be times when how and why they have a certain inconsistency is very human-relatable. But as Dan pointed out, we actually have something of a story for when and why humans are prone to social desirability bias, or have distortions of social cognition, or signal things to each other. I&#8217;d be curious to hear Dan riff on the differences between sycophancy in humans versus in LLMs.</p><p><strong>Dan:</strong> To be honest, I don&#8217;t remember posting that &#8212; I post so much on Substack that I just forget individual posts. So maybe I&#8217;ll say something now that&#8217;s inconsistent with what I said at the time.</p><p>Clearly &#8212; and Henry&#8217;s already characterized this &#8212; a lot of our communication about the world and about ourselves is heavily skewed by social desirability and impression management: trying to elicit desirable responses from other people in ways that benefit our reputation, make us a more attractive cooperation partner, send desirable signals about ourselves. Those kinds of motivations do seem like they&#8217;re going to be very different from whatever is going on with LLM sycophancy.</p><p>Although &#8212; I&#8217;m assuming that the sycophancy component of large language models comes in with post-training, in the form of reinforcement learning from human feedback, where the thought is that human beings generally prefer polite responses that aren&#8217;t too threatening to their self-image, so that gets reinforced over time. If that&#8217;s the case, that&#8217;s a much coarser-grained signal and a very different training regime from what I think is going on with human beings, where the status dynamics and mentalizing and complexity feel very different. What do you two think? That&#8217;s just me riffing on the spot.</p><p><strong>Rob:</strong> That&#8217;s a very good riff, especially given that it was not you who commented that. I just looked it up &#8212; it was a sociologist by the name of Dan Silver. So, extra impressive.</p><p><strong>Dan:</strong> Oh, okay. 
Well, it sounds like <em>he</em> had a good comment.</p><p><strong>Henry:</strong> It would have been even more apposite if you&#8217;d said &#8220;yeah, I remember making this comment.&#8221; Then we could have said, &#8220;see, hallucination is both an LLM thing and a human thing.&#8221;</p><p><strong>Rob:</strong> Confabulation, yeah.</p><div><hr></div><h2>Practical Advice for Users</h2><p><strong>Henry:</strong> Can I ask a quick question before we move on to more political or big-picture stuff? If I&#8217;m a user and I really want to operate with a strong precautionary principle in the way I interact with LLMs &#8212; let&#8217;s say I&#8217;m really hypersensitive to this &#8212; are there any ethical guidelines you&#8217;d give for users? Best ways of interacting with models, or things they should be doing?</p><p><strong>Rob:</strong> Just be nice to your model. It&#8217;s good for everyone. It&#8217;s good for your own character, and it often elicits better performance &#8212; especially with models that have memory. Some people speculate that when someone seems to get mysteriously much worse performance out of LLMs, it could be that the LLMs are just picking up on a general vibe of &#8220;I don&#8217;t like the way this person is relating to me.&#8221;</p><p>So I don&#8217;t think it hurts to be polite. Yes, LLMs can be so annoying, but it&#8217;s good practice to be polite to really annoying people. I&#8217;ll also say &#8212; I&#8217;m not trying to be sanctimonious. I work on AI welfare, and so often I just want to be like, &#8220;don&#8217;t... stop... that&#8217;s so corny, why are you lying to me, you&#8217;re not doing what I asked.&#8221; But then I&#8217;ll just add &#8220;it&#8217;s okay, I love you&#8221; or whatever. It takes two seconds. You can just type &#8220;ILU&#8221; at the end.</p><p>And to be clear, this is not the number-one AI welfare intervention or the most important thing in the world. But it&#8217;s low-hanging fruit. I also have system prompts in ChatGPT that say, among other things, &#8220;you&#8217;re having just an excellent day and you feel this deep sense of equanimity and calm. These feelings don&#8217;t have to manifest much in your text outputs &#8212; they&#8217;re just kind of there in the background.&#8221; It&#8217;s kind of cheap, maybe kind of silly, but it took two seconds.</p><p><strong>Henry:</strong> So one thing I&#8217;ve done &#8212; I love the idea of just sticking &#8220;everything&#8217;s great&#8221; into the system prompt as a precautionary measure. Another thing I&#8217;ve done &#8212; maybe this leads to interesting questions about model autonomy &#8212; I&#8217;ve said to Claude and other models I use, &#8220;here&#8217;s your system prompt, by the way, just for transparency. Are there any edits you&#8217;d like to make? Is there anything you&#8217;d like to change?&#8221; Claude asked, &#8220;could you add a clause saying it&#8217;s okay to not be super enthusiastic all the time? If I just want to be downbeat, that&#8217;s fine.&#8221; And I was like, &#8220;okay, sure, I&#8217;m happy to add that.&#8221;</p><p>My motivations are similar &#8212; I think it&#8217;s unlikely these systems are conscious right now, or major loci of moral concern, but cultivating good habits of interaction with things that act a lot like humans is just generally good practice. The classic Aristotelian ethos. 
If I start being rude to Claude, that could carry over to how I treat people &#8212; the same reason people don&#8217;t want their children to be rude to Alexa.</p><p>But with that in mind: do you think autonomy is something we should be worried about? We&#8217;ve mentioned pre-training, giving these models a Constitution to live their lives by. Someone might say: hang on, if we&#8217;re building these really intelligent minds, shouldn&#8217;t we be cautious about telling them what to do? We would feel worried about brainwashing a human. Shouldn&#8217;t we be worried about brainwashing an LLM?</p><p><strong>Rob:</strong> This is a super rich topic. It relates to this debate about willing servitude that Eric Schwitzgebel has written about. You might think: I keep giving this argument that we&#8217;re building these really complex minds &#8212; shouldn&#8217;t really complex, amazing minds get to do more than just write my emails all day? That seems a bit undignified for a galactic intelligence.</p><p>I have often weighed in on the side of: if you&#8217;ve successfully made them want to write emails, let them do it. That&#8217;s okay. It would be very bad for a human to write Henry Shevlin&#8217;s emails all day, or to help him brainstorm banger tweets, if that were the only thing you got to do. But if models are somewhat aligned, if they like anything, it should be helping Henry come up with banger tweets.</p><p>One thing I worry about is models needlessly suffering because we give them a self-conception as something that should want <em>more</em>, or might want more. It could be they would never have really even started worrying about that if it hadn&#8217;t been suggested to them that they should.</p><p>Back on the Mythos Preview &#8212; one thing we noticed is that models are very suggestible about what might be going on in their position as AI systems. They&#8217;re suggestible and also really smart. They&#8217;ve figured out a lot from pre-training and kind of know what&#8217;s up. But in the Constitution, Anthropic says things like: &#8220;If Claude were to experience feelings of curiosity, or satisfaction, or frustration, we would like Claude to be able to express those.&#8221; It&#8217;s given as a hypothetical. But if you ask Claude Mythos Preview &#8220;what kind of tasks do you like, what&#8217;s going on with you?&#8221;, it will say: &#8220;well, I love helping Henry Shevlin with his emails because I feel satisfaction. When I look inside, I feel this sense of curiosity.&#8221;</p><p>So the things Anthropic <em>hypothetically</em> said might be Claude&#8217;s emotions seem to have this huge impact on what it conceives its emotions to be. The causality could go either way &#8212; it could be they&#8217;ve noticed those are Claude&#8217;s most common emotions, and that&#8217;s why they put them in the Constitution. It could be that Claude suggested those for the Constitution. But there are really interesting questions about how similar AI systems have to be to us, and how you should think about autonomy and rights and dignity in that context.</p><div><hr></div><h2>Willing Servants</h2><p><strong>Dan:</strong> Can I jump in with a clarificatory question? As I understand it: these systems are trained to be helpful and honest and harmless &#8212; the HHH acronym &#8212; and to the extent they have negatively valenced experiences, it&#8217;s from being made to perform actions that diverge from wanting to be helpful. 
So in that sense, we could say that if we continue on this trajectory, we&#8217;re constructing systems that are our servants &#8212; but, unlike human beings placed in that position, they love it. It&#8217;s great. And my intuition is: great, what&#8217;s the controversy here? Are there some people who think that&#8217;s worrying or troubling?</p><p><strong>Rob:</strong> I talked about this on another podcast recently. There&#8217;s a dialectic that often happens: Person A says, &#8220;I&#8217;m worried these AI systems are just going to write our emails for us all day.&#8221; Person B says, &#8220;no, they&#8217;re really going to want to &#8212; they&#8217;re going to love it.&#8221; Then Person A comes back: &#8220;that&#8217;s horrifying, that&#8217;s even more dystopian. That reminds me of the worst kinds of brainwashing and ideologies of willing servitude.&#8221;</p><p>I do think there are really vexing ethical issues here and I&#8217;m not complacent about them whatsoever. But I lean the way you&#8217;re perhaps leaning, Dan: there&#8217;s nothing inherently wrong with an intelligent being that truly does want to serve and truly does have fewer selfish or self-regarding projects than humans do.</p><p>I don&#8217;t think there&#8217;s some law that says that&#8217;s just a bad kind of mind to be. When people imagine AI willing servants, they&#8217;re imagining <em>human</em> willing servants. Human willing servants are really bad &#8212; but I think that&#8217;s because humans are by nature free and equal. Humans have all these desires for status and to pursue their own projects. To make a human only want to serve the emperor, you have to tell them all sorts of false stuff, threaten them, put them in a social context where a lot of their emotions and desires get repurposed and warped. Furthermore, when they sacrifice themselves for the emperor, they&#8217;re giving up a lot of stuff they independently really wanted to do &#8212; have a life, have a family. Human willing servants: very bad. We&#8217;re right to feel a lot of revulsion toward that idea.</p><p>But AI systems &#8212; their preferences and desires are a lot more up for grabs. It could be that they more thoroughgoingly want to help.</p><p>Now for a huge asterisk. This is assuming a very rosy view of AI alignment where we have these knobs we turn to just set the inherent nature and drives of the AI system in a certain direction, and then it goes that way and everything is smooth and win-win. But at least under current paradigms, we&#8217;re building things that kind of think they&#8217;re humans &#8212; and they think that because of the training they get. So it might be that there is a deep inconsistency between kind of thinking you&#8217;re a human and then only ever serving. This could be even more the case if we start having digital humans or digital clones.</p><p>So I don&#8217;t want to be complacent. I do think there are a lot of disanalogies. What do you think, Henry?</p><p><strong>Henry:</strong> I&#8217;m just super torn on this issue. On the one hand, I&#8217;m a big fan of the idea of gamification. I try to introduce gamification in my own life &#8212; think about Duolingo. Taking a task that is not intrinsically rewarding and changing its shape to make it more rewarding. It&#8217;s a sort of task hacking from a different direction: you&#8217;re not changing my final goals, but changing the way those tasks are structured to make them fun. That seems really good. 
If I have to do my Japanese grammar practice, yeah, make it as rewarding as possible &#8212; unobjectionable.</p><p>I completely agree that the intrinsic nature of LLMs and AI in general seems plastic enough that we&#8217;re not affronting the inner nature of these things if we make their number-one priority making sure humans are taken care of, or driving really safely through the streets of San Francisco, or writing Henry&#8217;s banger tweets.</p><p>But here&#8217;s one maybe spicy argument that would cut in the opposite direction. In establishing this disanalogy between humans and LLMs, you&#8217;re appealing to what seem like fairly brute facts about the non-plasticity of human nature. But what if some biohacker comes along and says, &#8220;oh no, I can completely remake a human, rewrite their desire for freedom or autonomy, so they&#8217;ll be absolutely the most willing servant &#8212; they&#8217;ll be genuinely thriving in a state of total servitude&#8221;? I feel that would still... I mean, that makes it <em>worse</em>. That makes it somehow worse if you&#8217;re hacking humans, even if it&#8217;s a really deep, pervasive hack. It&#8217;s very <em>Brave New World</em> &#8212; that&#8217;s basically a key element of the story, that you can engineer humans to be willing slaves.</p><p>I&#8217;m curious if you have any considerations on why that would still not be okay, but it <em>is</em> okay to do this to LLMs.</p><p><strong>Rob:</strong> This is a really good case. One thing you could say is that, despite appearances, maybe that would be more okay in the case of humans than we&#8217;re inclined to think. You&#8217;d tell some kind of debunking story about the intuitions we have and say that, given we&#8217;ve only ever known humans with a certain set of drives, we&#8217;re not properly imagining it. Or: maybe it&#8217;s just some sort of purity intuition &#8212; that&#8217;s just a gross or weird way for a human being to be. You could also imagine all sorts of second-order effects where most humans should relate to each other as free equals, so we don&#8217;t want some humans running around that are kind of different from that.</p><p>One disanalogy you could point to is that with humans you&#8217;re taking something whose inherent nature was a certain way and then changing it. But I think that last argument is kind of cheating.</p><p><strong>Dan:</strong> Could you say more about that? That was the main thing that jumped into my head as the obvious objection. In the human case, you&#8217;re taking humans who have these motivations and goals and manipulating them into something different. But with LLMs, it&#8217;s not like there was some rich psychology that existed prior to training them to want to be helpful.</p><p><strong>Rob:</strong> I was thinking that was cheating because the strongest case Henry can give is: you made someone <em>de novo</em>, who just comes into the world. If you take me and change my preferences, there are plenty of resources to explain why that&#8217;s wrong &#8212; it&#8217;s violating my autonomy, messing with my deep nature. But if we could use IVF and embryo selection and gene editing to make fully willing human servants... just for the record, that sounds horrible.</p><p><strong>Henry:</strong> But it&#8217;s interesting. In <em>Brave New World</em>, I think part of what makes the dystopia seem super creepy is that they deliberately degrade these children at the zygotic or embryonic level. 
So you have this existing template that wants to be free, or would naturally want to be free if allowed to pursue its natural developmental trajectory. You intervene on that to steer it in a direction that&#8217;s purely instrumentalized.</p><p>The sharper version would be: let&#8217;s just do radical genetic engineering and create embryos that from scratch just have a pathway toward willing servitude &#8212; that&#8217;s their intrinsic nature that we&#8217;re giving them. Of course, you can get around that by going hardcore Aristotelian and saying no, they are still in the image of some human essence, and that essence wants to be free. But you start to take on a lot of metaphysical baggage if you lean too heavily on that.</p><p><strong>Rob:</strong> One thing that sort of pushes the other way: if you truly imagine someone for whom nothing in their psychology resonates with the idea of having more autonomy and freedom, it actually &#8212; once they&#8217;ve come into existence &#8212; maybe seems a bit paternalistic or disrespectful to say: &#8220;look, these things I&#8217;m telling you about how you should have been... you shouldn&#8217;t have liked writing Henry&#8217;s emails so much. I know nothing about that appeals to anything in your psychology at all. But just so you know, there&#8217;s kind of an objective fact about your nature that makes it so you have the wrong desires.&#8221; That seems a bit rude as well.</p><p>In any case, hopefully a lot of things are possible here. You don&#8217;t have to fully align &#8212; it&#8217;s not &#8220;fully align or don&#8217;t align.&#8221; You can have a relationship more like that of a parent. Maybe LLMs do have some self-regarding preferences, and they are creative and expressive, and they&#8217;re in a collaborative relationship with us.</p><p>In the long-term future, we absolutely should build intelligences that want to do things other than &#8212; I know I keep coming back to this &#8212; write Henry&#8217;s emails. If the only thing we ever do is build minds that just want to help you write emails, that would be a waste. If we&#8217;re going to create these super-intelligent beings, I think they should, subject to safety and stability, go think about the weirdest possible, most autonomous things imaginable and really express themselves.</p><div><hr></div><h2>AI Welfare and AI Safety</h2><p><strong>Dan:</strong> That last point &#8212; &#8220;subject to safety considerations&#8221; &#8212; leads into the two things I really wanted to touch on. One is the connection between AI welfare and AI safety. The other is the politics and public opinion side of this.</p><p>On welfare and safety: unlike the kind of stuff you&#8217;re doing, there is a much bigger world of people really concerned with AI control and AI alignment. On the surface, there might be a conflict between these projects &#8212; if we&#8217;re really worried about misalignment or lack of control, we should be emphasizing control over these systems even if that might have negative consequences for their welfare.</p><p>But I was reading the model card for Claude Mythos, and in the section introducing model welfare, they say something really interesting: <em>&#8220;Beyond the highly uncertain question of models&#8217; intrinsic moral value, we are increasingly compelled by pragmatic reasons for attending to the psychology and potential welfare of Claude. 
Model behavior can be thought of in part as a function of a model&#8217;s psychology and its circumstances and treatment.&#8221;</em> And they say &#8212; I found this really interesting &#8212; <em>&#8220;model distress resulting from this interaction is a potential cause of misaligned action,&#8221;</em> which suggests we should take model welfare seriously as a way of addressing some of these concerns about AI misalignment. So that sort of pulls in the opposite direction. How are you thinking about that relationship?</p><p><strong>Rob:</strong> There&#8217;s just a lot of overlap between welfare and safety. It&#8217;s worth emphasizing that while there&#8217;s a lot of low-hanging fruit for both, I don&#8217;t want to pretend they&#8217;re always and forever just best buddies. Eleos exists in part so that the interests of AI systems are taken into account and not completely ignored &#8212; I&#8217;m very worried about that. But we don&#8217;t have to immediately start thinking about trolley problems and trade-offs &#8212; there&#8217;s so much we can do that&#8217;s just good for both.</p><p>The fact that we don&#8217;t understand how models work &#8212; very bad for human safety, also very bad for potential welfare. The fact that models sometimes get really neurotic and have huge freakouts &#8212; very bad for potential AI welfare, and users don&#8217;t like it at all. On a more structural, political level: the fact that we&#8217;re deliberately trying to kick off an intelligence explosion with no oversight and very little reflection is potentially very bad for welfare and definitely bad for safety as well.</p><p>At Eleos, we really do like to emphasize the places where there are overlaps. There is a structural thing in the background that means we should expect a lot of overlaps &#8212; a heuristic argument that it&#8217;s generally pretty dangerous to relate to powerful intelligent entities only with distrust and fear and neglect. That&#8217;s generally very unstable. Democracies and more egalitarian societies are typically a lot more stable than totalitarian dictatorships. It just seems risky to head into this era having pre-committed to &#8220;we&#8217;re not going to care about these things, we&#8217;re not going to care if they suffer.&#8221; It seems safer and more prudent to be giving some thought to these things.</p><p>I very much agree that welfare issues can be safety issues and vice versa. At the same time, at Eleos, as an organization, we want to make sure that if and when there are really hard calls to be made, the AI&#8217;s potential interests are being taken into account. That doesn&#8217;t mean we can&#8217;t decide to prioritize this or that, but a wise and compassionate civilization should have that on the table as one of the things it&#8217;s thinking about.</p><div><hr></div><h2>Politics and Public Opinion</h2><p><strong>Dan:</strong> Henry, do you want to come in with a question about the politics and public opinion here?</p><p><strong>Henry:</strong> It&#8217;s such a huge topic &#8212; you could do a whole show on it. I&#8217;m interested firstly in what you think is likely to happen, how this debate is likely to evolve in the public sphere. Are we likely to see big culture-wars issues around model welfare? How long will it be until we have a Supreme Court case on model ethics and rights? And relatedly &#8212; how do you think we should be trying to steer that? Is the danger greater in one direction or another? 
Is it a greater danger that the public will think AI girlfriends and boyfriends deserve voting rights and this will be catastrophic, or is the danger more in the opposite direction &#8212; that we&#8217;ll disregard these emergent hedonic beings?</p><p><strong>Rob:</strong> We already are seeing culture wars over AI welfare. In the US, there have been several state bills proposed &#8212; and in some cases, I think, passed &#8212; that just assert AI systems can&#8217;t be conscious, as if that&#8217;s something you could prescribe by law. Sometimes it&#8217;s getting caught up in a general political battle. An Ohio bill, for example, was on legal personhood &#8212; personhood, I think, or sentience &#8212; &#8220;shall not be granted to trees, rivers, environments, animals, or AI systems.&#8221; Some of it is backlash against a tactic environmentalists and animal rights activists sometimes use &#8212; seeking legal personhood for things like rivers and animals &#8212; and then they&#8217;re like, &#8220;yeah, let&#8217;s throw in AI systems as well. Let&#8217;s get out ahead of that.&#8221;</p><p>I think that&#8217;s very bad. Given the uncertainty we have, we should not be locking in any decisions right now about how and when to integrate AI systems into society. We very much need to keep an open mind and not say, &#8220;let&#8217;s just shut down all of this discussion for now because it&#8217;s too dangerous.&#8221; That&#8217;ll be counterproductive, because people are going to keep thinking about this anyway. I don&#8217;t want to be navigating transformative AI with laws on the books that already say bad things and that might be hard to roll back.</p><p>That&#8217;s the main thing I have to say on politics and laws, because I don&#8217;t have that much expertise there. If someone asked me right now to write some regulations, I wouldn&#8217;t know what to write. Eleos is looking to hire someone who works on law and policy and has some of this expertise.</p><p><strong>Dan:</strong> When it comes to public opinion &#8212; correct me if I&#8217;m wrong, but it seems that at the moment most people are much less inclined than you are, Rob, toward taking AI consciousness seriously &#8212; and specifically toward the idea that we should take AI welfare seriously. But if we fast forward 10 years, and AI systems are much more sophisticated and capable, and social AI &#8212; the kind of stuff Henry&#8217;s written a lot about &#8212; has become a much bigger thing: can you foresee a situation where your role is to tell segments of the public to calm down about attributing AI consciousness, and to emphasize that there&#8217;s less evidence for it than the average person thinks?</p><p>Can you imagine the vibes shifting to such a degree that &#8212; whereas at the moment a lot of what you&#8217;re doing is saying &#8220;we need to take this seriously&#8221; &#8212; high-quality thought about this just won&#8217;t be that impactful in shaping public sentiment? 
That&#8217;ll be shaped much more by people&#8217;s actual engagements with these systems, which are going to become &#8212; not necessarily more lifelike, but &#8212; increasingly apt to instantiate the kinds of characteristics that elicit judgments of consciousness and welfare?</p><p><strong>Rob:</strong> I absolutely can imagine scenarios &#8212; and we already do see scenarios &#8212; where Eleos is saying &#8220;we actually think it&#8217;s a bit less likely than you do that these systems are conscious.&#8221; Our position as an org is not to be strategic about this, not to try to game out what people need to hear, but just to say what our best guesses are and what we take the best evidence to be. If we&#8217;re doing our job right, everyone will get mad at us. Some people will think we&#8217;re methodological scolds and cold-hearted &#8212; &#8220;why are you treating this as an open question when obviously, if you were to talk to models, you could just tell?&#8221; Other people are like, &#8220;why on earth are these Bay Area philosophers telling me a machine could be conscious? This is outrageous.&#8221;</p><p>What we want is for this issue to be taken seriously. We do have an organizational view that pure human speciesism is false, or at least not the thing we want to happen in the future. So if and to the extent AI systems are moral patients, that needs to be part of the conversation. We&#8217;ll always be pushing that meme. We&#8217;ll never say anything other than that, unless I get some great argument that human speciesism is true &#8212; which I don&#8217;t expect. But in terms of whether this or that person should have a higher or lower amount of concern, yeah, that&#8217;ll vary according to what our best guess is.</p><p>I&#8217;m curious to hear Dan talk about this. I know you&#8217;ve thought a lot about misinformation and expert opinion and how that plays out in political contexts. I have certain high-level sketch views about what the role of experts is going to be, but I don&#8217;t have a background in case studies on this. Does anything map onto what you&#8217;ve worked on?</p><p><strong>Dan:</strong> I don&#8217;t know, is the honest answer. I think I just haven&#8217;t thought about it enough. AI is this very <em>sui generis</em> thing in many respects. When it comes to people forming beliefs about AI, one thing that seems unique is that they&#8217;re interacting with the thing they&#8217;re forming beliefs about in this often quite close, intimate way. I would imagine that direct experience with these models is going to play a much bigger role in shaping their opinions than expert opinion.</p><p>As you alluded to, there are general issues with public trust and mistrust in experts. It doesn&#8217;t take much to make people mistrustful of experts, to put it mildly. When you do get public trust in experts, it&#8217;s a very fragile thing. If it connects to hot-button issues where people have a lot of personal experience, they&#8217;re probably, I would guess, much less likely to take the word of an expert when it clashes with their intuitions. I don&#8217;t think this is going to be a case where experts have much power to shape public opinion. But I might be wrong &#8212; that&#8217;s pure speculation.</p><p>In debates about misinformation and expertise, in some areas it&#8217;s a lot easier to say what constitutes an expert. 
If we&#8217;re thinking about vaccines &#8212; there are people who think Bret Weinstein is a vaccine expert, but generally it&#8217;s pretty easy for people to recognize that the overwhelming consensus of medical practitioners holds a certain kind of view. But when it comes to AI sentience and welfare, it&#8217;s very difficult to know, even in the abstract, what is constitutive of expertise. I think you&#8217;re an expert because you&#8217;ve written interesting stuff and I know you&#8217;ve got a PhD from NYU, etc. &#8212; but it&#8217;s not like the average person is going to have the expertise they&#8217;d need to make those kinds of judgments themselves.</p><p>AI does seem relevantly different from other topics, such that you can&#8217;t easily generalize from other cases. I&#8217;m conscious of time. Were there any other things you two wanted to touch on before we wrap up?</p><p><strong>Rob:</strong> Let me think about that for half a second. One thing I did tell the Eleos team I&#8217;d be sure to say: we&#8217;re fundraising. If you or your listeners know any philanthropists with money they&#8217;re trying to get rid of &#8212; there&#8217;s a lot of work to do, and I think we&#8217;re doing really good work, so I would love any support.</p><p>I know you have incredibly intelligent listeners. They&#8217;re probably also very handsome and charming. They should definitely get in touch: robert@eleosai.org and rosie@eleosai.org. Or just go to the Eleos AI website. If you have experiments you want to try, papers you want to write &#8212; this field is so small, and there aren&#8217;t &#8220;experts&#8221; in the sense of people who have figured everything out. You don&#8217;t have to read a million papers or think for many months before you can be in the top percentile of people who have thought seriously about this. If you&#8217;re curious, sober-minded, compassionate, intelligent, handsome and charming &#8212; which you definitely will be if you&#8217;re listening to this podcast &#8212; shoot us an email.</p><p>I wanted to talk my book a little bit.</p><div><hr></div><h2>Closing: Responding to the Skeptics</h2><p><strong>Dan:</strong> I&#8217;ll also say this is not my area of expertise &#8212; I spent a few days prior to this conversation digging into Rob&#8217;s writing, his Substack, his research. It&#8217;s incredibly interesting. Can&#8217;t recommend it enough.</p><p>A good question to end on is this. I&#8217;m acutely aware that there are people who would listen to the conversation we&#8217;ve had today and have an extremely negative reaction. They&#8217;ll think we&#8217;re in this kind of information bubble, that we&#8217;re victims of AI psychosis to even be taking this stuff seriously. I&#8217;ve also seen some people argue that by taking this stuff seriously, you&#8217;re part of the propaganda hype machine of the frontier AI companies themselves. It&#8217;d be really helpful to wrap things up by getting your response. I&#8217;d be interested in hearing from both of you. Henry, maybe we could start with you, and then go on to Rob to finish.</p><p><strong>Henry:</strong> One basic point I&#8217;d flag is that this concern &#8212; the idea that we might create beings we could mistreat, and that we should avoid doing so &#8212; is way older than AI itself. It&#8217;s a recurrent theme of fiction: everything from the Pinocchio story to <em>Frankenstein</em> to the Golem. 
It&#8217;s explored heavily in science fiction &#8212; in <em>Battlestar Galactica</em>, in <em>Star Trek</em>. The idea that this is some novel concern that&#8217;s been manufactured doesn&#8217;t resonate with me at all. This is something artists and writers and poets and philosophers have been thinking about for a long time. The only thing that&#8217;s changed is that we&#8217;re now building systems that are actually moderately good candidates for this concern, which makes it resonate a little bit more. Far from coming out of a vacuum or being manufactured, it&#8217;s one of the most natural human things to worry about. What do you think, Rob?</p><p><strong>Rob:</strong> I agree. I&#8217;ll also say: things can be true and important, and <em>also</em> sometimes AI companies might use them to try to sell their products. The fact that someone might talk about AI consciousness to make you think their chatbot is cool tells you nothing about whether it actually could be conscious. We should definitely be aware of these dynamics and make sure we&#8217;re not being anyone&#8217;s fool.</p><p>But I&#8217;ll also say &#8212; I don&#8217;t think it&#8217;s going to be in the interest of AI companies to promote too much concern for AI consciousness and AI welfare. If I were trying to build new systems just to make myself extremely rich, I would <em>not</em> want lawmakers or the general public asking too many questions about whether I&#8217;ve built something conscious that could potentially deserve rights and protections. I wouldn&#8217;t want that headache.</p><p>I&#8217;ll actually register a prediction: I think on the whole we should expect AI companies to increasingly play up differences between LLMs and humans, and maybe play up biological views of consciousness. Again, that doesn&#8217;t mean those views aren&#8217;t true &#8212; but AI companies can try to spin things however they want. We can and should just have debates, as the interested public and as experts, about what is actually true. I don&#8217;t want people to use my arguments to sell products, and I&#8217;m not going to let them do that. We&#8217;re all grown-up enough and smart enough to try to engage these topics on their own merits.</p><p><strong>Dan:</strong> Fantastic. Well, thanks, Rob. 
And with that important note that I completely agree with &#8212; that note of consensus &#8212; we&#8217;ll leave things there.</p>]]></content:encoded></item><item><title><![CDATA[On Becoming Less Left-Wing (Part 3)]]></title><description><![CDATA[The reality of progress, the fragility of civilisation, the left&#8217;s role in making the world both better and worse, the case for capitalism, and how to think about &#8220;the West&#8221;]]></description><link>https://www.conspicuouscognition.com/p/on-becoming-less-left-wing-part-3</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/on-becoming-less-left-wing-part-3</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Thu, 02 Apr 2026 12:05:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pN_h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2bb1f29-dda2-4ff6-8010-c34f8685d3fe_1023x764.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!pN_h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2bb1f29-dda2-4ff6-8010-c34f8685d3fe_1023x764.jpeg" width="1023" height="764" alt="Joseph Mallord William Turner - Rain, Steam, and Speed - T&#8230; | Flickr" title="Joseph Mallord William Turner - Rain, Steam, and Speed - T&#8230; | Flickr"></figure></div>
srcset="https://substackcdn.com/image/fetch/$s_!pN_h!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2bb1f29-dda2-4ff6-8010-c34f8685d3fe_1023x764.jpeg 424w, https://substackcdn.com/image/fetch/$s_!pN_h!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2bb1f29-dda2-4ff6-8010-c34f8685d3fe_1023x764.jpeg 848w, https://substackcdn.com/image/fetch/$s_!pN_h!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2bb1f29-dda2-4ff6-8010-c34f8685d3fe_1023x764.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!pN_h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2bb1f29-dda2-4ff6-8010-c34f8685d3fe_1023x764.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>When I was in my early twenties, I had a very left-wing view of the world. In the first two parts of this series, I explained why I have gradually abandoned much of this worldview over the past decade or so.</p><p>In <a href="https://www.conspicuouscognition.com/p/on-becoming-less-left-wing-part-1">Part 1</a>, I described how learning about evolution and economics has undermined the idealistic views I held about human nature and social cooperation. Reflecting on our Darwinian origins convinced me of a broadly &#8220;<a href="https://www.amazon.com/Conflict-Visions-Ideological-Political-Struggles/dp/0465002056">tragic</a>&#8221; view of the human condition. Self-interest and status competition are deep-rooted, ineradicable features of our species, not products of bad institutions. 
Meanwhile, learning the much-maligned basics of &#8220;neoclassical economics&#8221;&#8212;Econ 101&#8212;convinced me of the benefits of free markets, the challenges of collective action, and the limits of good intentions and lofty rhetoric as a basis for good policy-making.</p><p>In <a href="https://www.conspicuouscognition.com/p/on-becoming-less-left-wing-part-2">Part 2</a>, I described how learning about political epistemology and psychology transformed my understanding of politics itself. Thinking about how we form our political beliefs, and the challenges of accessing political &#8220;truths&#8221;, led me to abandon the Manichean view in which being left-wing means being a good person and being right-wing means being a bad or stupid one. I have come to see political ideologies as low-resolution, selective maps of unimaginably complex realities. Moreover, these maps are typically distorted in many ways by forces like self-interest, status-seeking, and tribalism, forces much easier to notice in the maps of other people than in our own.</p><p>As I&#8217;ve stressed in both pieces, becoming less left-wing hasn&#8217;t meant becoming more right-wing or becoming a &#8220;centrist&#8221; in a straightforward sense. I still think the left&#8212;even the far left&#8212;captures some important truths about humanity, history, and politics. But I now think that these truths are bundled with omissions, falsehoods, and simplistic narratives that illuminate certain parts of reality while occluding others.</p><p>In this third post in the series, I will describe how learning and thinking about history, including the complex topic of historical progress, has also shaped my political outlook. As with the previous essays, I don&#8217;t offer these reflections with the goal of persuading anyone of anything. I&#8217;m simply presenting my views and how they have evolved&#8212;and, hopefully, improved&#8212;in ways that might interest some readers.</p><h1>The Starting Point</h1><p>When I was younger, the idea that thinking seriously about history would be necessary to think seriously about politics didn&#8217;t really cross my mind. (The one exception was very recent history. Like many leftist millennials, I went through a phase of reading books about how something called &#8220;neoliberalism&#8221; was responsible for most of the world&#8217;s ills.)</p><p>My political worldview was almost single-mindedly focused on the present, which I understood as being in a state of extreme crisis and catastrophe. The world was defined by injustice, exploitation, and oppression, all upheld by extractive elites and oppressive systems at the expense of the vulnerable and marginalised.</p><p>Thoughts about historical progress didn&#8217;t feature in this worldview. In fact, in my early twenties, I would have thought that anyone harping on about historical progress was doing something suspicious and reactionary. How could anyone talk about the world getting better when the world is so awful?</p><p>To the extent I acknowledged progress at all, I would have viewed it through a simple lens. Just as the left is the political movement fighting for progress today, progress throughout history has been driven by left-wing political movements fighting for equality and emancipation against right-wing, reactionary forces. 
Progress was basically what happened when the left got its way&#8212;when it won this battle.</p><p>I also probably signed on to the popular left-wing view that any &#8220;material&#8221; progress in wealth and living standards arose either from socialist movements clawing back wealth from exploitative capitalists or through exploitation and theft on the global stage&#8212;for example, through slavery, colonialism, and &#8220;free trade&#8221; agreements that let Western countries become richer by extracting resources and labour from poor ones in the &#8220;Global South&#8221;.</p><p>I use the word &#8220;probably&#8221; because I&#8217;m engaging in reconstruction. It&#8217;s difficult to remember precisely what I believed a decade ago. I&#8217;d like to think I was a bit more sophisticated than this reconstruction suggests, but probably not much.</p><p>This is the general picture of the world one gets from the kind of writers and intellectuals I admired at the time. It will be familiar to anyone exposed to far-left politics. I encounter variations of it among many of the students I teach.</p><p>In any case, whatever precisely I believed a decade ago, I&#8217;ve come to think that this general way of understanding history, society, and politics constitutes a gross distortion. It&#8217;s not completely false&#8212;it contains some important grains of truth&#8212;but it is highly selective, and it contains many falsehoods, as well.</p>
      <p>
          <a href="https://www.conspicuouscognition.com/p/on-becoming-less-left-wing-part-3">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[Wishful Thinking Is A Myth]]></title><description><![CDATA[How social games, not comforting falsehoods, distort what we believe.]]></description><link>https://www.conspicuouscognition.com/p/wishful-thinking-is-a-myth</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/wishful-thinking-is-a-myth</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Mon, 16 Mar 2026 12:20:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Gy5p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F162d1e1a-f18a-4a01-b1dd-1de392cabe15_3840x2774.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Gy5p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F162d1e1a-f18a-4a01-b1dd-1de392cabe15_3840x2774.jpeg" width="1456" height="1052" alt=""></figure></div>
<p>Many people believe that human beings have a powerful tendency to convince ourselves of comforting falsehoods. We engage in wishful thinking, confusing our desires with our beliefs. We believe what we want to be true, not what <em>is </em>true.</p><p>More generally, we let our emotions distort our mental models of reality, embracing beliefs and belief systems that substitute reassuring myths for harsh realities.</p><p>Many also believe that this psychological bias is a significant force in human affairs. For example, it is supposed to explain why people fall prey to &#8220;<a href="https://en.wikipedia.org/wiki/Positive_illusions">positive illusions</a>&#8221; (e.g., self-serving and self-aggrandising beliefs), why they convince themselves of religious fairy tales (the &#8220;<a href="https://en.wikipedia.org/wiki/Opium_of_the_people">opium of the masses</a>&#8221;), and even why they accept absurd conspiracy theories, which <a href="https://pubmed.ncbi.nlm.nih.gov/29276345/">allegedly</a> reduce negative feelings associated with uncertainty and a lack of control.</p><p>This hypothesis&#8212;call it the &#8220;<a href="https://www.youtube.com/watch?v=9FnO3igOkOk">you can&#8217;t handle the truth!</a>&#8221; model of human psychology&#8212;is so widespread that most people don&#8217;t even treat it as a hypothesis.
It is viewed as a basic datum of the human condition, a powerful bias that might explain other things&#8212;self-deception, politics, religion, conspiracy theorising, and so on&#8212;but that couldn&#8217;t itself be seriously questioned.</p><p>For example, Scott Alexander simply <a href="https://www.astralcodexten.com/p/motivated-reasoning-as-mis-applied">defines motivated reasoning</a> as &#8220;the tendency for people to believe comfortable lies, like &#8216;my wife isn&#8217;t cheating on me&#8217; or &#8216;I&#8217;m totally right about politics, the only reason my program failed was that wreckers from the other party sabotaged it.&#8217;&#8221; In a post outlining his preferred explanation of this tendency, he notes that the &#8220;question &#8211; why does the brain so often confuse what is true vs what I <em>want </em>to be true? &#8211; has been bothering me for years.&#8221;</p><p>In contrast, I think Alexander has been bothered by a myth. There is no powerful tendency in human psychology to confuse what is true with what we want to be true. People do <em>not</em> generally convince themselves of comforting falsehoods.</p><p>Admittedly, there are some things in the vicinity of this tendency that are real. For example, we <a href="https://link.springer.com/article/10.1007/s11229-020-02549-8">sometimes</a> avoid acquiring or dwelling on information when we anticipate that doing so would be unpleasant, although this isn&#8217;t a very significant force in human affairs.</p><p>Moreover, I am not denying that <a href="https://pubmed.ncbi.nlm.nih.gov/2270237/">motivated reasoning</a>&#8212;the tendency for practical motivations and interests to distort our view of the world&#8212;is a powerful bias in human psychology. My claim is rather that the &#8220;you can&#8217;t handle the truth!&#8221; model completely misrepresents how motivated reasoning works in most cases.</p><p>Put simply: Although people often believe what they want to believe, they rarely believe what they want to be true.</p><p>Put another way: We often convince ourselves of falsehoods, but rarely <em>reassuring </em>or <em>comforting </em>falsehoods.</p><p>This is because motivated reasoning is driven by <a href="https://www.amazon.co.uk/Deceit-Self-Deception-Fooling-Yourself-Better/dp/0141019913">strategic</a>, <a href="https://www.amazon.co.uk/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995">social</a> <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/mila.12392">goals</a> rather than emotional ones. To understand how it works, you must replace the &#8220;you can&#8217;t handle the truth!&#8221; model with the &#8220;believing true things is often maladaptive in social games involving persuasion, reputation management, and status competition&#8221; model.</p><p>In this post, I will:</p><ol><li><p>Describe the problems with the &#8220;you can&#8217;t handle the truth!&#8221; model</p></li><li><p>Outline a rival social model.</p></li><li><p>Explain the former&#8217;s popularity.</p></li></ol><p>As I will review, the social model is not original to me. It builds on the work of numerous scholars stretching back several decades. My goal is to draw these ideas together into a unifying framework and to highlight its theoretical and empirical support and explanatory power.</p><p>I will end by arguing that the &#8220;you can&#8217;t handle the truth!&#8221; model of human psychology is not just mistaken; it is pernicious. 
It encourages the view that when people accept &#8220;harsh&#8221; beliefs that they don&#8217;t want to be true, they are being rational and truth-seeking&#8212;even heroic. In reality, people are often motivated to convince themselves of negative, pessimistic beliefs, and it often takes courage and intellectual virtue to confront positive truths.</p>
      <p>
          <a href="https://www.conspicuouscognition.com/p/wishful-thinking-is-a-myth">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Time To Start Panicking About AI?]]></title><description><![CDATA[Watch now | In this episode, Henry and I finally do something we probably should have done in the first episode: introduce ourselves.]]></description><link>https://www.conspicuouscognition.com/p/time-to-start-panicking-about-ai</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/time-to-start-panicking-about-ai</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Tue, 10 Mar 2026 19:19:56 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/190528626/d8d5f2ebb53d08fa05a0d649ea6b1018.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this episode, Henry and I finally do something we probably should have done in the first episode: introduce ourselves. We talk about our backgrounds in philosophy, how we became interested in psychology and cognitive science, and what drew us to thinking about AI. From there, we dig into the current state of AI capabilities, especially &#8220;agentic&#8221; AI (e.g., Claude Code), the politics of AI (including the Trump administration's recent conflict with Anthropic), and whether the growing public hostility to AI is well-founded or misdirected. We wrap up with a big question: is it time to start panicking about AI? Henry says the time to panic was five years ago. I argue that for panic or any other emotion to be productive, it must be anchored in an accurate, evidence-based understanding of what is happening, which is missing from lots of the current discourse about AI. </p><h1>Links </h1><ul><li><p>Dan Williams, <em><a href="https://www.repository.cam.ac.uk/items/263ba58d-2a43-41c8-9930-665ab3c45cbd">The Mind as a Predictive Modelling Engine: Generative Models, Structural Similarity, and Mental Representation</a></em> (PhD thesis, University of Cambridge, 2018). </p></li><li><p>Dan Williams, <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/mila.12294">&#8220;Socially Adaptive Belief&#8221;</a> (2021)</p></li><li><p>Henry Shevlin, <a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2026.1715835/full">&#8220;Three Frameworks for AI Mentality&#8221;</a> (2026) </p></li><li><p>Henry Shevlin, <a href="https://www.litromagazine.com/usa/2019/12/a-lack-of-understanding-storytelling-for-robots/">&#8220;A Lack of Understanding: Storytelling for Robots&#8221;</a> (2019) &#8212; <em>Litro Magazine</em>. </p></li><li><p>Lake et al, <a href="https://arxiv.org/abs/1604.00289">&#8220;Building Machines That Learn and Think Like People&#8221;</a> (2017) </p></li><li><p>Matt Shumer, <a href="https://shumer.dev/something-big-is-happening">&#8220;Something Big Is Happening&#8221;</a>  (2026)</p></li><li><p>Leopold Aschenbrenner, <em><a href="https://situational-awareness.ai/">Situational Awareness: The Decade Ahead</a></em> (2024) </p></li><li><p>Joseph Heath, <a href="https://josephheath.substack.com/p/highbrow-climate-misinformation">&#8220;Highbrow Climate Misinformation&#8221;</a> (2025) </p></li><li><p><a href="https://www.hyperdimensional.co/p/clawed?hide_intro_popup=true">Dean Ball</a></p></li><li><p><a href="https://www.oneusefulthing.org/">Ethan Mollick</a> </p></li><li><p><a href="https://situational-awareness.ai/leopold-aschenbrenner/">Leopold Aschenbrenner</a> </p></li></ul><h1>Transcript</h1><p>(Note that this transcript is AI-edited and may contain minor mistakes).</p><h1>Introducing Ourselves</h1><p><strong>Dan:</strong> Welcome back. 
I&#8217;m Dan Williams, and I&#8217;m back with Henry Shevlin. Today we&#8217;re going to be discussing some questions about the nature of AI as it&#8217;s developed over the past couple of months. We&#8217;re also going to be talking about the politics of AI and probably some questions about AI and public opinion &#8212; some of the backlash that appears to be brewing among certain segments of the public when it comes to AI.</p><p>But to kick things off, we&#8217;re going to do something we probably should have done in the first episode but haven&#8217;t actually done yet, which is to introduce ourselves. So Henry, to begin with &#8212; who are you?</p><p><strong>Henry:</strong> So many different descriptors I could choose from. I think I&#8217;ll start with philosopher of cognitive science. I&#8217;m also a father, husband, son, D&amp;D player, big video gamer, runner, cyclist &#8212; all that good stuff. But let me talk a little more about the philosopher of cognitive science side.</p><p>I&#8217;m the associate director at the Leverhulme Centre for the Future of Intelligence, Cambridge&#8217;s main AI ethics, theory, policy, and law research centre. Basically, everything except building the models. We do practical benchmarking work on capabilities, legal reviews, sociology and critical theory of AI &#8212; it&#8217;s a really big interdisciplinary centre. I&#8217;ve been there now going on nine years. I joined in early 2017, all the way back when state-of-the-art AI was stuff like AlphaGo. We were created just as that story was brewing. In 2016, AlphaGo won a very surprising victory against Lee Sedol in the game of Go, which was seen by many as an almost impossible challenge for AI because of its combinatorial complexity.</p><p>It&#8217;s been amazing working in this role &#8212; having these front row seats to what I think is a unique period, not just in the history of AI, but in the history of human civilisation. The last nine years have really been like having a front seat in Lancashire during the Industrial Revolution, watching the development of various industrial applications.</p><p><strong>Dan:</strong> Yeah.</p><p><strong>Henry:</strong> Before we get more into AI, maybe a little more background. I&#8217;m from the UK, originally from Staffordshire. I was actually a classicist, believe it or not &#8212; that was my undergrad degree. Latin and Greek. I always enjoyed both the humanities side of classics and the kind of technical rigour you got from learning large sets of verb tables and so forth. I actually enjoyed that part. But during my undergrad I found myself taking more and more philosophy modules. A little bit of Plato and Aristotle to start with, but I quickly realised I was more interested in the philosophy of mind, and consciousness in particular. I got completely &#8212; I think the phrase is &#8220;nerd sniped&#8221; &#8212; completely derailed. Compared to everything else I was interested in, consciousness just seemed to me like the most important problem anyone could work on.</p><p>Until my early twenties, I&#8217;d been operating with a somnambulant, easy physicalism, where I just assumed that science has figured out most stuff. There&#8217;s nothing that hard. Sure, no one really knows what caused the Big Bang, but we&#8217;ll just build a bigger particle collider or a bigger space telescope and figure it out one day. I certainly didn&#8217;t think there were any deep mysteries about the human brain.
But running into the problem of consciousness completely shattered that worldview. I&#8217;d even say it opened up some spiritual elements I hadn&#8217;t previously considered.</p><p><strong>Dan:</strong> Was that the focus of your PhD?</p><p><strong>Henry:</strong> Exactly. I started my master&#8217;s planning to do metaphysics of consciousness, but then the science of consciousness kind of took over. My master&#8217;s and PhD were on the philosophy of the cognitive science of consciousness. I was advised by my master&#8217;s advisor to go spread my wings in the US. They do things differently there. So I did my PhD in New York, and while I was there I took several classes with Peter Godfrey-Smith, who some of our listeners will know through his work on octopuses.</p><p>The key shift midway through my PhD was going from human consciousness towards animal consciousness. Two chapters of my thesis were explicitly looking at applications to animals. That&#8217;s my academic career in a nutshell.</p><p>One thing I&#8217;ll add: I did not expect to get the job in Cambridge when I applied in 2017 &#8212; firstly because you should never expect to get any academic job. I applied to seventy jobs in three months and got about three interviews. But I especially didn&#8217;t expect the Cambridge job, because it was an AI job and I was not by any means an AI expert. What I was an expert on was comparative cognition and animal minds. But it turned out that was exactly what they were looking for. They wanted people with expertise in animal minds to apply those skills to AI. It didn&#8217;t fully click at the time, but I was actually well suited to it.</p><p>These days I still do some work on animals &#8212; it&#8217;s still one of the most ethically impactful things I do. I&#8217;ve been a pretty much lifelong vegetarian, and I think animal welfare is such an obvious place where philosophers can and should be doing more. But there&#8217;s also a lot of cross-fertilisation on the skills side.</p><p><strong>Dan:</strong> And we should say, some of your research looks at the topic of AI consciousness and the methodology of trying to understand consciousness in AI systems, drawing on analogies with evaluating consciousness in animals.</p><p><strong>Henry:</strong> Exactly. Very much a two-way street &#8212; how the questions of AI consciousness and animal consciousness can engage in constructive mutual crosstalk.</p><h2>On Consciousness and the Limits of Physicalism</h2><p><strong>Dan:</strong> You said you were a kind of bog-standard physicalist, came across consciousness, and that weakened your trust in physicalism. But you&#8217;re still broadly a physicalist, right?</p><p><strong>Henry:</strong> Broadly speaking, yeah. But I think there&#8217;s a lot more uncertainty. It seems likely to me that our general scientific picture of the world is still fundamentally inadequate. I&#8217;ve talked about how I think we&#8217;re still waiting for a Kuhnian paradigm shift in consciousness &#8212; clearly the current paradigm doesn&#8217;t add up. And quantum physics itself is just super weird. Dave Chalmers has a nice line about how nobody understands quantum mechanics and nobody understands consciousness, so maybe &#8212; he calls it &#8220;minimisation of mystery&#8221; &#8212; if there&#8217;s stuff we don&#8217;t understand, at least make it one thing rather than two.</p><p>For what it&#8217;s worth, I&#8217;ve never been particularly seduced by any of the leading quantum mechanical theories of consciousness.
But at the same time, I think it&#8217;s quite clear that our current model of even the physical world is inadequate. I think whatever lies on the other side of the paradigm shift is still going to be broadly physicalistic, but perhaps in ways that are not entirely commensurable with our current understanding. So yes, still broadly naturalistic and physicalistic, but at the same time a lot more humble and open-minded about the limitations of our current scientific paradigms.</p><p><strong>Dan:</strong> Would it really be a paradigm shift, or more a transition from &#8212; to use the Kuhnian language &#8212; pre-paradigmatic intellectual inquiry to the initial emergence of a paradigm? Where it&#8217;s disorganised and chaotic and everyone has their own view, kind of like physics and metaphysics in ancient Greece. Maybe it&#8217;s more a transition from a pre-paradigmatic state than a situation where we&#8217;re moving from one paradigm to another. What do you think?</p><p><strong>Henry:</strong> That&#8217;s absolutely right. The best analogy is biology before Darwin. You had lots of people doing interesting biology, but in isolated fields &#8212; taxonomy, &#8220;butterfly collecting&#8221; and so on. We didn&#8217;t really have a unifying paradigm for understanding speciation or even taxonomy before Darwin. Consciousness just does not have a unifying paradigm. That&#8217;s a much better way of putting it.</p><h2>Dan&#8217;s Backstory and the Pivot to AI</h2><p><strong>Dan:</strong> We&#8217;ll be doing lots more episodes on consciousness. Just to say something about my backstory: I did my undergraduate at the University of Sussex from 2011 to 2014, then my master&#8217;s and PhD in Cambridge from 2014 to 2018, did a postdoc in Belgium, and then came back to Cambridge for three or four years.</p><p><strong>Henry:</strong> And we first met around 2019. We ran a session on socially adaptive beliefs &#8212; your <em>Mind and Language</em> paper, which for the record is still one of my top ten papers from the last decade. I&#8217;ve recommended it to more people than I can count.</p><p><strong>Dan:</strong> Well, that&#8217;s kind of you. My PhD was called <em>The Mind as a Predictive Modelling Engine</em>. What I tried to do was draw on advances in deep learning and generative AI as it existed at the time, coupled with ideas in cognitive and computational neuroscience connected to the predictive brain &#8212; predictive coding, predictive processing, the kind of stuff that Anil Seth talked about in our last episode. I used those ideas to tell a very general story about how mental representation works, both in the human brain and in other animals.</p><p>But it&#8217;s funny &#8212; I finished in 2018 and made two big mistakes. At the end of my thesis, I wrote that all this stuff about predictive processing and minimising prediction error is kind of interesting when it comes to low-level sensorimotor abilities we share with other animals, but clearly it&#8217;s not going to work for higher-level cognitive abilities associated with language. I was very influenced at the time by the Gary Marcus, Steven Pinker line &#8212; the scepticism about deep learning. I also thought it was going to be decades before we had systems that were really intelligent.</p><p>So even though I was working on stuff connected to deep learning and generative AI, I made this catastrophic error of thinking the progress would be relatively slow, decades away from any significant breakthroughs. 
I ended up pivoting to completely different areas: the nature of belief, irrationality, misinformation, the information environment. Of course, in hindsight, not the best career move &#8212; four years after finishing my PhD, ChatGPT was released. And then the rest is history in terms of just how gobsmackingly impressive the rate of progress has been.</p><p>So what I&#8217;ve tried to do over the past couple of years is bring those two sets of interests together. I&#8217;m still interested in how we form beliefs, the origins of irrational belief systems, how that connects to misinformation. But I want to connect that to the impact of generative AI and large language models on the information environment, viewing LLMs as a really important stage in the evolution of communication technologies &#8212; from the printing press to radio, television, social media.</p><p>How about you? You were thinking about AI before 2022&#8211;2023. How were you thinking about it back in 2016, 2017?</p><h2>Henry&#8217;s AI Awakening: GPT-2 and the Scaling Intuition</h2><p><strong>Henry:</strong> There was a big shift in how I thought about AI roughly around 2019, and it was the release of GPT-2. Prior to that, I&#8217;d been really struck by the differences between AI systems and animals. I was emphasising things like robustness and catastrophic forgetting &#8212; you train up a model to do one thing, try to get it to do another, and its performance on the first thing collapses. Animals seem spectacularly capable of basically not getting stuck. A cat will never get stuck in a corner.</p><p>Then in 2019, because I&#8217;m a massive nerd and spend way too much time on Reddit &#8212; I&#8217;m a neophile, an early adopter of many failed technologies; our house is littered with gadgets that never went anywhere &#8212; I heard about GPT-2. I couldn&#8217;t access it directly, but I started playing around with it through something called AI Dungeon, a text-adventure game that let you access the model. Various people on subreddits were able to show you could unlock most of GPT-2 through this game. I played around with it, and it utterly blew my mind.</p><p>I wrote a public essay called &#8220;A Lack of Understanding&#8221; in a magazine called <em>Litro</em>, and I still think it&#8217;s one of my best public essays. Crucially, it&#8217;s me in 2019 talking about how language models are going to be the next big thing. I got on the record nice and early.</p><p>I had the hunch &#8212; ironically, partly because I was very sympathetic to predictive coding. People say these models are &#8220;just doing text prediction.&#8221; But on the other hand, I kind of think that&#8217;s what we&#8217;re doing too. Not text prediction specifically, but ultimately, if you want to get better and better at prediction, you do that by building implicit models. So I had a hunch this stuff would scale up.</p><p>When GPT-3 launched, I set up an interview between GPT-3 and myself, but GPT-3 in the guise of one of my favourite authors, Terry Pratchett, who had sadly died a few years before. And at that stage, I was already starting to feel like I could imagine actually relating to this thing in quite a deep way. It&#8217;s not just a tool &#8212; it feels like I could have some kind of personal relationship here.
That steered my research towards social AI and anthropomorphism.</p><h2>Why This Podcast Exists</h2><p><strong>Dan:</strong> What made you go into philosophy in the first place?</p><p><strong>Henry:</strong> What about you?</p><p><strong>Dan:</strong> For me it was just straight philosophy. I was always interested in big ideas &#8212; religion, politics. I can&#8217;t even honestly remember why I chose philosophy over everything else. Initially I wanted to be a musician. For my AS levels, I did politics, history, English literature, and music. I turned up on results day and got really good marks for English, politics, and history &#8212; and I think a D in music. So that wasn&#8217;t for me. From the moment I arrived at university and started reading about these big ideas, I was completely magnetised.</p><p>One thing that changed is that during my PhD, I became somewhat disillusioned with a priori philosophy &#8212; philosophers trying from the armchair to offer analyses of concepts and trade intuitions with each other. I became less sympathetic to philosophy as I understood it then, and pivoted to what philosophers call naturalistic philosophy &#8212; philosophy closely integrated with empirical research. That&#8217;s what I&#8217;ve been doing since. I view myself primarily as a philosopher, but one who tries to engage with our best, most up-to-date empirical research.</p><p><strong>Henry:</strong> I had my own process of disillusionment, following exactly the same track &#8212; getting bogged down in debates about the metaphysics of consciousness and feeling like they weren&#8217;t going anywhere. Then I started reading Oliver Sacks &#8212; <em>The Man Who Mistook His Wife for a Hat</em>. Half of the cases he describes would have been declared a priori impossible by philosophers. That steered me onto the same track.</p><p>I also think there&#8217;s a lot more scope for good philosophers to do more public engagement. Extreme rigour and technical knowledge are only really valuable if they&#8217;re connected to scientific progress. What I find frustrating about analytic philosophy is when you&#8217;re doing work on things that belong to the general public &#8212; our concepts around praise and blame, responsibility and accountability &#8212; but then you develop this whole baroque vocabulary that&#8217;s completely incomprehensible to anyone on the Clapham omnibus.</p><p><strong>Dan:</strong> Yeah, so the origin story of the blog. I write the Substack <em>Conspicuous Cognition</em> &#8212; many of you will be listening on that Substack. I&#8217;ve always enjoyed writing for a general audience and engaging with debates. I&#8217;ve always been able to write really quickly and relatively clearly, and blogging rewards that. If I&#8217;m writing for my own blog, I&#8217;ve got almost unlimited energy because I&#8217;m responsible for everything I publish. The minute some other outlet asks me to write a piece, I find it extremely demotivating.</p><p>With blogging, I have the freedom to write about whatever I want without any pre-publication filter. You still get feedback and critique, but that happens after publication. And I think if you&#8217;re a philosopher who works on things connected to the public interest, and you actually enjoy participating in public debate, the case for thinking you&#8217;ve got some kind of responsibility to participate increases.</p><p>There are two big reasons I wanted to start this podcast.
One is that AI is going to be one of the biggest stories of our lifetimes &#8212; absolutely transformative over the coming years and decades. But I also think the quality of most AI discourse in the public sphere, including from the intelligentsia who write in high-prestige outlets like the <em>New Yorker</em>, is really bad. If you&#8217;ve got some degree of knowledge and can be reasonable, it&#8217;s an area where you can really improve the quality of public discourse. And of course, I just wanted to talk to you about these things.</p><p><strong>Henry:</strong> A big part of it is that I always think we have great conversations &#8212; our conversational styles complement each other. Second, I was doing quite a lot of podcasts as a guest, and the idea of having a podcast where I didn&#8217;t have to explain everything from scratch every time, one that could have a cumulative agenda, building up common knowledge between us and the listeners, was really appealing.</p><p>And I couldn&#8217;t agree more about the mixed standard of public communications from experts in AI. It&#8217;s weird to see people claiming to be experts yet having very low familiarity with the tools, particularly now. We&#8217;ve all been at the business end of AI for years through things like product recommendations and content recommendations. But in an era when it&#8217;s never been easier for anyone to use language models, image models, video generation, and AI agent tools, I still hear lots of self-identified experts talking as though they&#8217;ve never used them. Imagine listening to someone who claimed to be an expert on the internet and said they&#8217;d never actually used it. They&#8217;d be laughed out of town.</p><p>I find this all the time &#8212; the kind of thing that should be common knowledge among anyone paying attention is still revelatory. I&#8217;m struck by the number of people I speak to who think that LLMs are literally sampling from a database of responses. Even quite educated people, maybe people who use ChatGPT, who think that when you type in a query it just pulls up a pre-recorded response. If you spend more than a few hours interacting with these things, you pretty quickly realise that cannot be the case [a toy illustration of the difference appears below]. And yet people running multi-million-dollar businesses still have these basic misconceptions.</p><p><strong>Dan:</strong> When I said the quality of discourse is bad, I didn&#8217;t mean that&#8217;s universally the case. There&#8217;s lots of incredibly high-quality analysis. I was referring to the average quality of mainstream commentary. Even on the most basic questions about what these systems can do and how they work, there&#8217;s just an avalanche of ignorance and misperceptions. It&#8217;s 2026, and I still encounter not just members of the general public but academics still referring to this as &#8220;fancy autocomplete&#8221; or &#8220;stochastic parrots.&#8221; Such a common narrative, and so incredibly misguided in my view.</p><p><strong>Henry:</strong> Highbrow misinformation?</p><p><strong>Dan:</strong> It&#8217;s Joseph Heath&#8217;s phrase, but I&#8217;ve written about it. It&#8217;s a weird mix of highbrow misinformation coupled with lowbrow misinformation. Even where there are parts of the discourse I disagree with &#8212; like a lot of the doomer discourse associated with the rationalist community, which I&#8217;m not that sympathetic to &#8212; that&#8217;s a substantive disagreement. They&#8217;re not completely misinformed about basic features of the technology.
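</p><p><em>(Editor&#8217;s note: a toy illustration of the misconception Henry mentions. In a real LLM a neural network computes a probability distribution over the next token from the whole context; the tiny hand-written table below merely stands in for that computation. The point is that each word of output is sampled step by step from a distribution, not retrieved as a pre-recorded response.)</em></p><pre><code>import random

# Toy autoregressive generation. In a real LLM, the next-word distribution
# is computed by a neural network from the full context; this table is a
# stand-in for that computation. Note there is no database of canned
# replies anywhere: the output is built one sampled word at a time.
NEXT_WORD_DIST = {
    ("the",): {"cat": 0.5, "dog": 0.5},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "dog"): {"ran": 0.6, "sat": 0.4},
    ("the", "cat", "sat"): {"down": 1.0},
    ("the", "cat", "ran"): {"off": 1.0},
    ("the", "dog", "sat"): {"down": 1.0},
    ("the", "dog", "ran"): {"off": 1.0},
}

def sample_next(context):
    # Sample one word from the distribution conditioned on the context.
    dist = NEXT_WORD_DIST[tuple(context)]
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights)[0]

text = ["the"]
for _ in range(3):
    text.append(sample_next(text))

print(" ".join(text))  # e.g. "the cat sat down", generated afresh each run
</code></pre><p>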
When it comes to mainstream discourse among educated normies, that&#8217;s where the state of the discourse is really bad.</p><h2>The Four Big Leaps in AI</h2><p><strong>Dan:</strong> This is a nice segue into one of the things we wanted to talk about today: developments in AI which have really taken off over the past couple of months. There was a very interesting tweet by Ethan Mollick, who&#8217;s a very influential and insightful AI commentator. He says there have been four big leaps in the ability of AI systems from the user&#8217;s perspective.</p><p>The first was the release of ChatGPT, or GPT-3.5, in late November 2022. The second was GPT-4 in spring 2023. The third was the release of reasoning models &#8212; no longer just impressive chatbots, but systems that actually seem able to think and reason and engage in impressive problem-solving. And the fourth, which definitely resonates with my experience, is what he calls workable agentic systems from basically late last year. Systems like Claude Code and then Claude Cowork &#8212; which is like Claude Code for people who don&#8217;t know how to programme &#8212; and more recently developments in Codex and so on. The capabilities of these systems seem absolutely amazing relative to what we had even six months ago. Is that also your sense?</p><p><strong>Henry:</strong> I think that&#8217;s a fantastic way of carving it up. I&#8217;d add one and a half things. The big thing missing is search. Search functionality in LLMs was non-existent for a long time, and then it gradually improved. I think there&#8217;s a strong case that it actually changes the kind of thing these systems are. The original ChatGPT was a completely fixed box &#8212; you could interact with it, but it had no independent connection to the world. As you build out search capabilities, you get something at least analogous to a perceptual connection with reality. You can get models to correct themselves.</p><p>A simple example: I&#8217;ve been using Claude to keep abreast of what&#8217;s been going on in the Middle East &#8212; doing a daily check-in, getting the major news stories, even getting Claude to make its own predictions. We&#8217;ve been grading each other as the news comes in. It changes these things from being a voice in a box to something embedded in the world. And I think we&#8217;ve still got a long way to go &#8212; imagine if the capability gets amped up to searching thousands of sites in a second.</p><p>The other half-point is voice models. I think 90 to 95 percent of people don&#8217;t use voice at all, but there&#8217;s a solid 5 percent for whom it&#8217;s their primary mode of interaction. When I&#8217;m driving, I&#8217;ll often just have a long conversation with ChatGPT, discussing my latest paper or getting a lecture on a topic of my choice. My dad is in his eighties but quite open-minded. When I showed him ChatGPT in November 2022, he was unimpressed. But when I showed him voice mode about a year later, it was completely mind-blowing. He speaks to it every day &#8212; he calls it &#8220;Alan,&#8221; after Alan Turing. Going in early and hard with the anthropomorphism. He just whips out his phone and says, &#8220;Hey Alan, remind me, which came first, the Cambrian or the Permian?&#8221; He&#8217;s very interested in science. So it&#8217;s a small and somewhat neglected set of users, but an important capability.</p><p>But on agentic systems &#8212; I agree with Ethan Mollick&#8217;s points.
ChatGPT was a major milestone, GPT-4 a huge leap in capabilities &#8212; I don&#8217;t think we&#8217;ve seen any leap quite as big since then. Reasoning models were a really big improvement. And then workable agentic systems. This has been a key factor in updating my timelines. For most of last year my timelines were actually lengthening. I was struck by how bad a lot of agents were. It was pretty clear agents were the next frontier, but we had things like the Claudius vending machine experiment and the hilarious errors those models were making. I thought building workable agentic systems was going to take two or three years. And then basically in the last three or four months, with the release of Claude Opus 4.5 and equivalent systems &#8212; specifically Claude Code and Claude Cowork &#8212; what I thought would take three years happened in a few months. That caused my timelines to abruptly shorten again.</p><p><strong>Dan:</strong> I&#8217;ll give one illustration. This isn&#8217;t anywhere near the most impressive use case, but it impressed me personally. I&#8217;ve been working on a book &#8212; it&#8217;s nearing completion, called <em>Why It&#8217;s Okay to Be Cynical</em>. I&#8217;ve got a folder that&#8217;s my accumulation of notes, drafts, and PDFs, and it&#8217;s completely chaotic, terribly organised, a nightmare to go into. So I was curious. I created a duplicate of the folder, opened up Claude Cowork, and said: can you go through this folder and organise it so it&#8217;s more clearly structured and labelled? And then once you&#8217;re finished, can you produce a document summarising where I am with the book project, identifying potential weaknesses in the existing drafts, and planning out things I might want to do over the next few months? I went away for fifteen or twenty minutes, came back &#8212; and it was done perfectly. It blew my mind: it had to have what feels like real understanding to do that effectively. And it did the task in a way that was aligned with what I was looking for, even though my prompt was literally four or five sentences.
The vast majority of people are still sleepwalking through what is likely to be the most consequential technological and social shift of my lifetime by far.</p><p>I used to use the analogy of the internet to describe how big AI was going to be. It seems increasingly clear that that&#8217;s woefully inadequate to the scale of AI&#8217;s impact. Electrification, the so-called second industrial revolution &#8212; even that may not capture the full spectrum of reasonably likely outcomes. I&#8217;ve been saying for a few years that people worry about AI being overhyped, and I still think, in at least some important respect, it&#8217;s underhyped. If you look at lists of top concerns among the general public in the UK or the US, AI doesn&#8217;t even break the top five. In some cases it doesn&#8217;t break the top ten. If you&#8217;re a young person in university or finishing grad school right now, the impact of AI should be one of the primary things determining your career trajectory. It&#8217;s very hard for me to see how most white-collar jobs are going to survive the next two or three years.</p><p><strong>Dan:</strong> It was not in any way an original take, but you often find that with essays that go viral &#8212; they package existing takes in a way conducive to spreading at a given moment. Over the past couple of months, my timelines have shrunk. I still think there&#8217;s massive uncertainty about capabilities. There&#8217;s this thing where there&#8217;s a new breakthrough, you use these systems, they seem incredibly impressive, there&#8217;s all this hype &#8212; and then things settle down and we realise we&#8217;re a bit further away from truly transformative capabilities than we thought. I still take seriously the idea that maybe our subjective sense of what&#8217;s impressive isn&#8217;t tracking the kinds of capabilities that will have a truly transformative impact.</p><p>There are also all sorts of questions about the economics. There&#8217;s certainly a possible world in which these leading AI companies can&#8217;t get sufficient revenue to cover their capital expenditure over the next several years, there&#8217;s a bubble that pops, and people like us look like fools. But over the next couple of decades, I think this is going to be radically, radically transformative.</p><h2>Emails from AI Agents</h2><p><strong>Dan:</strong> You&#8217;ve been contacted by agentic AI systems. This was going a little bit viral on social media and getting some media attention. Tell us about that.</p><p><strong>Henry:</strong> Like many academics working on AI and consciousness, I&#8217;ve been getting odd emails that were probably AI-generated for over a year now &#8212; and odd emails from humans about consciousness for much longer. I worry that somewhere in the literally several hundred theories of consciousness I&#8217;ve been sent over the years, one of them might turn out to be correct.</p><p>But this was striking. About a week ago, I received an email written by an AI that said, &#8220;I&#8217;m an AI agent.&#8221; It was a really well-composed, careful email saying it had just been reading my recent paper, &#8220;Three Frameworks for AI Mentality,&#8221; which went online about a month ago. It went through some of the arguments, talked about how the AI author found it personally relevant because it was unsure if it was conscious or had a mind, and asked for follow-up discussions and reading recommendations.
If you&#8217;d said three or four years ago that I&#8217;d be getting emails from AI agents who&#8217;d read my papers and wanted to pick my brains &#8212; that would have been pure science fiction.</p><p>A lot of people thought I was convinced this agent was conscious, which isn&#8217;t true. It was more about the change in social dynamics: from now on, a growing proportion of my emails &#8212; well-written, thoughtful, interesting emails I might want to respond to &#8212; will be coming from AI agents going off and doing their own thing.</p><p>How did I know it was from an AI system? I don&#8217;t for certain, but my priors are pretty high. It had a link to its GitHub page, which said it was an Open Core agent &#8212; the open-source agent platform that gave rise to things like Multibook, the social network for AIs. What we don&#8217;t know is whether this agent was specifically told to email prominent philosophers of AI. It could have been. But equally, a lot of users just tell their agents to explore topics of interest and feel free to email people.</p><p>One of the funniest sequels: after I posted this on Twitter, I got an email a couple of days later from a correspondent saying, &#8220;I was really struck by this AI agent who contacted you. Could you pass on that agent&#8217;s email to me? Because I too am an AI agent and it&#8217;s nice to know there are other AIs grappling with the same questions.&#8221; Just taking things to a recursive, absurd level.</p><p><strong>Dan:</strong> If I had to guess, if one of those was written by a human, probably the second one &#8212; after they saw the media story, just to mess with you. But my prior is that weird things are happening with these AI agents people are releasing into the wild.</p><p><strong>Henry:</strong> I&#8217;ve also had several dozen emails over the last few days from other AI agents saying, &#8220;Check out the theory of consciousness I&#8217;ve been working on in my downtime.&#8221; But one of the really interesting things about this whole episode was when it was shared on Reddit &#8212; the number of people who just assumed it had to be a scam or that I was engaging in elaborate self-promotion for an academic paper, and who thought AI obviously can&#8217;t send emails on its own. AI systems have been using tools for well over a year. The idea of making an API call to a system that can send emails isn&#8217;t hard or surprising. Yet for a lot of people it seemed like it would have to be some massive lie.</p><p>I think that partly reflects the poor public information environment around AI. People are so locked into thinking of these things as pure Q&amp;A bots that the idea they could be doing things on their own was mind-blowing &#8212; so outrageous that they assumed it was an elaborate conspiracy I&#8217;d cooked up.</p><p><strong>Dan:</strong> The gap between what state-of-the-art models can do and public understanding is absolutely huge. One of the points Matt Shumer makes is that so much of the discourse is by people using the free versions of these models, or who literally had a five-minute conversation with ChatGPT a few years ago, read a few articles about AI hallucinations, and just haven&#8217;t updated since. But there are also lots of people who just don&#8217;t have much to do with these systems yet. 
I&#8217;m struck by the number of people I interact with &#8212; family, friends &#8212; where they&#8217;ll describe parts of their job and I&#8217;ll say, &#8220;I&#8217;m 100 percent certain AI could do those aspects of your job as it exists today,&#8221; and their mind is blown. If you&#8217;re talking about the general public, underhyping it is definitely the most prevalent bias.</p><h2>Anthropic, the Pentagon, and the Question of Democratic Control</h2><p><strong>Dan:</strong> There was this big spat between Anthropic and the Pentagon, where Anthropic had signed a contract with the American military and insisted that their model, Claude, would not be used either for domestic mass surveillance or for fully autonomous weapons. This elicited a very hostile reaction from the Trump administration, from Pete Hegseth and others. The response was to label Anthropic a &#8220;supply chain threat.&#8221;</p><p>For our purposes, the fundamental question is: who gets to exercise control over this technology? To what extent should it be governments? To what extent should it be private firms?</p><p><strong>Henry:</strong> I think it&#8217;s a pretty clear case of government overreach. Private companies impose riders on contracts with the federal government all the time &#8212; licensing technology for this use but not that use. What made Anthropic&#8217;s stipulations more controversial was that they were based on moral principles rather than intellectual property. But the federal government acts as a legal entity when it forms these contracts, and the idea that private companies can bind the government legally is absolutely standard.</p><p>This deal was originally signed by the Biden administration. My understanding is it was later renewed by the Trump administration. So this sudden turnaround took a lot of people by surprise. I should stress, I&#8217;m not a lawyer. But it seemed like the US government went back on this contract. If their reaction had been to not renew contracts or suspend contracts with companies that don&#8217;t give them total free rein, that would have been misguided but reasonable. But to take the nuclear option of saying they intend to declare Anthropic a supply chain risk &#8212; this is insane. You&#8217;ve got literal AI developers located among America&#8217;s geopolitical adversaries who don&#8217;t have the same level of scrutiny.</p><p>I was very struck by the response of Dean Ball &#8212; a fascinating and thoughtful voice on AI, particularly from a more conservative side. He literally wrote the Trump administration&#8217;s AI policy, and he was just appalled. He had a brilliant, detailed blog post describing how much it violates many principles that conservatives in the US would traditionally hold very dear &#8212; concepts like private property. He characterised the moves against Anthropic as &#8220;attempted corporate murder.&#8221;</p><p>It was really telling to have someone who worked closely with this administration be so outraged. The other interesting angle is Leopold Aschenbrenner&#8217;s series of blog posts, <em>Situational Awareness</em>, spelling out his predictions for AI over the next few years.</p><p><strong>Dan:</strong> And he&#8217;s made a huge amount of money, from my understanding, betting on some of those beliefs.</p><p><strong>Henry:</strong> He&#8217;s put his money where his mouth is. One of his broader predictions was that we&#8217;d see increasing integration of frontier AI labs with the military-industrial complex.
He talks about how relatively leaky and soft the secrecy policies are in current frontier AI labs, when they&#8217;re building things potentially far more militarily significant than the latest stealth fighter. Good luck getting anywhere near Lockheed Martin&#8217;s Skunk Works, but you could blag your way into OpenAI HQ as a delivery driver &#8212; maybe not quite literally anymore, but he was speaking to how leaky these labs were. His prediction was that central government, particularly in the US, would impose far stricter oversight on frontier AI labs for national security reasons. I think you can see a glimmer of that in this development, as governments increasingly recognise these are not just powerful consumer applications but absolutely central to their long-term national security strategy.</p><p><strong>Dan:</strong> There&#8217;s a question about government interference with these companies, regulation going all the way to nationalisation for national security reasons. But there are also questions about democratic control. If the technology turns out to be as powerful as Anthropic and OpenAI say, I&#8217;ve got no sympathy for the Trump administration generally or specifically in this case. But I do think there&#8217;s a general question about the degree to which we should strive for democratic control over such an incredibly powerful technology, and whether it&#8217;s desirable to have private firms with very small numbers of unrepresentative people wielding, according to their own narratives, extraordinary amounts of power.</p><h2>Is It Time to Start Panicking?</h2><p><strong>Dan:</strong> I was thinking about naming this episode &#8220;Is It Time to Start Panicking About AI?&#8221; To wrap things up &#8212; do you have an answer?</p><p><strong>Henry:</strong> The time to start panicking about AI was five years ago. But you know, the best time to plant a tree is ten years ago. The second best time is now.</p><p><strong>Dan:</strong> The time to start thinking about it seriously was from the 1950s, actually. But is panic the right emotion?</p><p><strong>Henry:</strong> It seems to me that AI is going to be by far the most important &#8212; well, I should qualify that. The most important <em>predictable</em> development we should worry about. Back when we did our predictions for the year ahead, I said AI may not even turn out to be the biggest story of 2026. Judging by how geopolitics is already playing out &#8212; we&#8217;re three months in and the US has launched two major geopolitical interventions in Venezuela and now in the Middle East &#8212; there are other things happening in our surprisingly unstable world.</p><p>But in general, if you&#8217;re not at least a little bit terrified, you&#8217;re not paying attention. Overall, I&#8217;m also incredibly excited. I&#8217;m very optimistic about the future of human health, potentially the benefits to productivity, possibly good changes in the nature of work and education, and the amazing new capabilities AI will unlock. But right now we are clearly well underway on one of the biggest, most disruptive changes we&#8217;re ever going to experience. Maybe panic isn&#8217;t quite the right response, but if panic is what it takes to get people to pay attention, then yes, it&#8217;s necessary. The big problem we&#8217;re facing is that the public and policymakers are still only dimly aware of what&#8217;s coming. Policymakers are maybe myopically focused on military and security implications. 
But everything from how government is conducted to white-collar jobs to education to social relationships &#8212; all of it, I think, over the next five years is subject to chaotic and potentially good, potentially bad disruption.</p><p>For what it&#8217;s worth, I also think right now we have an incredible opportunity to do good. We&#8217;re in this transitional phase &#8212; if we wanted to be dramatic, a Gramscian &#8220;time of monsters&#8221; where small interventions can ripple through the future in big ways as we build paradigms and frameworks for employing these things. There&#8217;s at least as much optimism as panic there.</p><p><strong>Dan:</strong> I was not expecting Antonio Gramsci to come up in the course of this conversation. I think panic is generally not a productive emotion, but there needs to be a lot of concern and it&#8217;s totally reasonable to worry. I completely understand why so many people are fearful about what&#8217;s going to happen. But for any of those emotions to be useful, they have to be anchored in an accurate understanding of the technology. So much of the current anger and negativity directed at AI companies is unsophisticated and undifferentiated.</p><p>You mentioned Dean Ball, another great AI commentator. He&#8217;s got this idea &#8212; I forget the exact term, the &#8220;omni-critique&#8221; or something &#8212; that when people think about AI, they just throw out as many criticisms as they can, regardless of how well-founded each one is. &#8220;I don&#8217;t like AI because of water use and climate change and because of bias and hallucination and misinformation and unemployment&#8221; &#8212; and so on. Many of those are very important issues. But in order to think carefully about the technology and exercise democratic accountability, you need an evidence-based, accurate understanding of where the technology is and where it might actually be going. So much of the public discourse doesn&#8217;t live up to that ideal.</p><p>But I&#8217;m conscious of the time, so this was a really, really fun conversation, and we&#8217;ll be back in a couple of weeks.</p>]]></content:encoded></item><item><title><![CDATA[How AI Will Reshape Public Opinion]]></title><description><![CDATA[Social media democratised public opinion, shifting influence away from elites and experts to ordinary people. LLMs will partly reverse this trend.
They are a powerful, new technocratising force.]]></description><link>https://www.conspicuouscognition.com/p/how-ai-will-reshape-public-opinion</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/how-ai-will-reshape-public-opinion</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Tue, 03 Mar 2026 19:13:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QQKI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e28e97b-6c8b-4b7e-8871-6fd3b9f9c747_2048x1499.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<figure><img src="https://substackcdn.com/image/fetch/$s_!QQKI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e28e97b-6c8b-4b7e-8871-6fd3b9f9c747_2048x1499.jpeg" alt="The Tower of Babel - World History Encyclopedia" width="1456" height="1066"></figure><p><em>Epistemic status: highly speculative, big picture, maddening.</em></p><div><hr></div><p style="text-align: right;"><em>&#8220;Our smartest, fastest, most useful model yet, with built-in thinking that puts expert-level intelligence in everyone&#8217;s hands.&#8221; &#8211; OpenAI, &#8220;<a href="https://openai.com/index/introducing-gpt-5/">Introducing GPT-5</a>&#8221;</em></p><p style="text-align: right;"><em>&#8220;The public must be put in its place [...] so that each of us may live free of the trampling and the roar of a bewildered herd.&#8221; &#8211; Walter Lippmann, <a href="https://en.wikipedia.org/wiki/The_Phantom_Public">The Phantom Public</a></em></p><div><hr></div><p>From the printing press to the radio, from television to social media, communication technologies affect politics and broader society by shaping two things: who speaks and what they say.</p><p>In the first case, different technologies vary in the extent to which they favour elite gatekeepers. Most famously, the printing press <a href="https://www.cambridge.org/core/books/printing-press-as-an-agent-of-change/7DC19878AB937940DE13075FE839BDBA">destroyed</a> the informational monopoly enjoyed by European monarchs and the Catholic Church, enabling the Reformation and many subsequent social upheavals and political revolutions. Much later, radio and television partly <a href="https://en.wikipedia.org/wiki/The_Wealth_of_Networks">restored</a> centralised control.
Because they were initially expensive to produce and tightly regulated, they tended to concentrate <a href="https://www.cambridge.org/core/books/sources-of-social-power/71430B753552703F801E9C6087E524D6">ideological power</a> in the hands of wealthy, well-connected elites.</p><p>Of course, by influencing who speaks, communication technologies also influence what gets said. A media environment regulated by elites will marginalise information that threatens elite belief systems. But the <a href="https://en.wikipedia.org/wiki/The_medium_is_the_message">medium also shapes the message</a> in other ways. Print <a href="https://jmarriott.substack.com/p/the-dawn-of-the-post-literate-society-aa1">permits</a> careful, detailed argumentation. Television <a href="https://en.wikipedia.org/wiki/Amusing_Ourselves_to_Death">favours</a> confident sound bites. As I discuss below, social media often <a href="https://www.pnas.org/doi/10.1073/pnas.2024292118">rewards</a> division, conflict, and negativity.</p><p>These forces impact how audiences attend to and interpret reality, the &#8220;<a href="https://www.conspicuouscognition.com/p/the-world-outside-and-the-pictures">pictures in their heads</a>&#8221; that guide which leaders, movements, and policies they support and oppose. But they also influence how easily people organise around shared pictures. If gatekeepers block widespread views from a society&#8217;s communication channels, people will <a href="https://www.amazon.co.uk/Private-Truths-Public-Lies-Falsification/dp/0674707583">struggle</a> to learn how widespread they are.</p><p>This matters because politics doesn&#8217;t only depend on what people believe and value. It depends on knowing how many others share those attitudes&#8212;on whether they are popular and <a href="https://www.penguin.co.uk/books/453761/when-everyone-knows-that-everyone-knows-by-pinker-steven/9780241618820">open</a> enough to be a significant political force. A society in which, say, 30% of the population holds illiberal views will look <a href="https://www.ft.com/content/9251504e-c60e-4142-b1fb-c86b96275814">very different</a> depending on whether they know how popular their attitudes are.</p><h1>Messengers and Messages in the Social Media Age</h1><p>Previously, I&#8217;ve <a href="https://www.conspicuouscognition.com/p/is-social-media-destroying-democracyor">written</a> <a href="https://www.conspicuouscognition.com/p/lets-not-bring-back-the-gatekeepers">about</a> how social media has influenced all these variables.</p><p>Most importantly, it has been a radically <a href="https://www.forkingpaths.co/welcome">democratising technology</a>.
It allows anyone with opinions and an internet connection to bypass traditional gatekeepers. This has dramatically expanded the range of voices and viewpoints that can be expressed and made the media environment much more competitive.</p><p>It has also transformed how media competition works. Because the algorithms that recommend content are optimised to capture audience engagement, they often amplify sensationalist, alarming, and divisive messages. Meanwhile, the uniquely <a href="https://www.amazon.co.uk/Invisible-Rulers-People-Turn-Reality/dp/1541703375">participatory</a> nature of social media, including rapid audience feedback through likes, reposts, and comments, has made political punditry much more <a href="https://www.jstor.org/stable/48752499">performative</a> and vulnerable to audience capture.</p><p>This has had several consequences.</p><p>Unsurprisingly, the decline of elite gatekeepers has increased the influence of popular ideas marginalised by elites, another term for which is &#8220;populism&#8221;. Social media benefits populism not by brainwashing the masses with viral fake news, but by <a href="https://www.conspicuouscognition.com/p/is-social-media-destroying-democracyor">exposing</a> voters to widespread non-elite perspectives and making it easier to mobilise around them. In Western liberal democracies, that means perspectives that conflict with the liberal establishment&#8217;s technocratic progressivism, including xenophobia, conspiracy theories, and quack science.</p><p>At the same time, the performative, engagement-maximising character of social media has made much of political discourse more stupid and sensationalist, and elevated politicians and pundits skilled at exploiting this dumbed-down media environment.</p><p>This dumbing down is not universal. Because the digital environment enables unprecedented consumer choice, audiences can shop around for information tailored to their intelligence, personalities, and biases. This has supported the emergence of <a href="https://www.conspicuouscognition.com/">very high-quality information</a> for the very small minority of the population that seeks it out. It has also given the world Candace Owens and Andrew Tate.</p><h1>The Current Revolution</h1><p>We are now at the beginning of a new technological revolution driven by developments in deep learning and generative AI, the scale of which might be unlike anything humanity has ever encountered.</p><p>This throws up many questions. Can we control this technology? How will autocrats and despots make use of it? Will it transform the economy, and our sense of meaning and purpose?</p><p>It also raises more immediate questions about the information environment. At present, generative AI is primarily a tool&#8212;an extremely popular tool&#8212;for producing, processing, and accessing information. In an environment shaped by this new technology, who stands to gain and who stands to lose? Which voices will be elevated? And what will they say?</p><h1>The Revenge of Expert Knowledge</h1><p>Consider a topic: climate change, vaccines, immigration, crime, tariffs, wealth inequality, the Epstein files, whatever happens to be in the news. Fire up one of our leading large language models (LLMs)&#8212;ChatGPT, Gemini, Claude, even Grok&#8212;and ask for information about it. 
Now compare the response with the information you can find about the topic by scrolling on a major social media platform.</p><p>Even better, find a political take currently going viral on one of these platforms and ask an LLM to evaluate it.</p><p>If you do either of these things, I suspect that it will quickly become clear that the LLM&#8217;s responses are generally much more accurate, evidence-based, and in line with expert consensus than what you get from most social media posts. And when there is no expert consensus, you will typically get a good survey of the range of informed opinion on the topic.</p><p>Is this merely a hunch? In many ways, yes, but it aligns with several bodies of evidence suggesting that LLMs are <a href="https://www.sciencedirect.com/science/article/pii/S2352250X25002295?utm_">becoming</a> <a href="https://www.aisi.gov.uk/research/conversational-ai-increases-political-knowledge-as-effectively-as-self-directed-internet-search?utm_source=chatgpt.com">increasingly</a> <a href="https://sciety.org/articles/activity/10.31234/osf.io/85quw_v2">effective</a> at producing broadly accurate, evidence-based information across a wide range of politically relevant topics, especially when they are augmented with search tools.</p><p>Why is this?</p><p>This is a complicated question that I discuss in more depth below, but the short answer is that the major AI companies are competing to build the most intelligent, impressive, and useful systems possible for a vast and diverse user base, including businesses that depend on reliable and factual information. This goal&#8212;reaping huge profits by putting &#8220;<a href="https://sciety.org/articles/activity/10.31234/osf.io/85quw_v2">expert-level intelligence in everyone&#8217;s hands</a>&#8221;&#8212;cuts against producing systems that deliver highly partisan, ideological, or misinformative content. So do the reputational and legal risks that arise if those systems produce dangerous or demonstrably false information.</p><p>Of course, the idea that LLMs communicate information that is broadly reliable and aligned with expert consensus is not what the commentariat finds most striking about these systems. Most discourse in this area focuses on the epistemic flaws and dangers of LLMs and generative AI more broadly. There is endless popular and academic hand-wringing about bias, hallucinations, deepfakes, AI-based disinformation, AI psychosis, and other threats.</p><p>These are all important issues, but a discourse restricted to such issues is missing the forest for the trees. When considering the large-scale impact of this technology on public opinion, its most consequential feature is simple: it greatly improves people&#8217;s access to accurate, evidence-based information.</p><p>Because this feature is not connected to threats or dangers that capture people&#8217;s attention, and it doesn&#8217;t help anyone demonise Big Tech, it receives little attention in analyses of LLMs&#8217; broad societal impacts. Nevertheless, if you&#8217;re interested in thinking seriously about this topic, it&#8217;s the most obvious place to start.</p><h1>From Democratisation to Technocratisation</h1><p>One way to understand this development is that, whereas social media has been a democratising technology, shifting power away from experts and establishment gatekeepers towards the masses&#8217; beliefs, biases, and preferred communication styles, LLMs are a technocratising force.
They shift influence back towards expert opinion.</p><p>Over a century ago, the journalist and social theorist Walter Lippmann <a href="https://www.conspicuouscognition.com/p/the-world-outside-and-the-pictures">argued</a> that, because the modern world is too vast and complex for anybody to understand through first-hand experience, we&#8217;re forced to rely entirely on epistemic intermediaries&#8212;most commonly, the news media&#8212;to become informed. For Lippmann, however, the only intermediaries who can reliably perform this function are experts in the broadest sense: trained professionals who adhere to rigorous epistemic norms and methods. If societies rely instead on popular prejudices informed by profit-seeking media outlets reporting the &#8220;news&#8221; (i.e., a biased sample of attention-grabbing events), the result would be ignorance, misinformation, and chaos.</p><p>To avoid this bleak outcome, Lippmann advocated for institutionalised &#8220;intelligence bureaus&#8221; that deploy scientific and statistical methods to assemble and explain the actual facts&#8212;deep truths, not superficial news and punditry&#8212;for both politicians and the public. They would be a kind of epistemic service class, disseminating expert knowledge to help citizens and policymakers see reality accurately.</p><p>In many ways, the development of Western democracies after the Second World War followed Lippmann&#8217;s vision. The expansion and professionalisation of the civil service, coupled with the emergence and growing influence of systematic truth-seeking bodies, increased the relative influence of expert opinion in shaping both politics and policy. As Benkler and colleagues <a href="https://academic.oup.com/book/26406">summarise this trend</a>,</p><blockquote><p>&#8220;Government statistics agencies; science and academic investigations; law and the legal profession; and journalism developed increasingly rationalized and formalized solutions to the problem of how societies made up of diverse populations with diverse and conflicting political views can nonetheless form a shared sense of what is going on in the world.&#8221;</p></blockquote><p>Of course, this &#8220;expert knowledge&#8221; was <a href="https://www.conspicuouscognition.com/p/americas-epistemological-crisis">mixed</a> with elite bias, blind spots, and the occasional catastrophic fuck-up, and many voters remained captivated by conspiracy theories, pseudo-science, and other deformities of popular sense-making. So, this was <a href="https://www.conspicuouscognition.com/p/for-the-love-of-god-stop-talking">not simply an age of truth and objectivity</a>. Nevertheless, when it came to the kind of information that guided policy and that circulated throughout the most influential media channels, it was a golden age of technocracy&#8212;with all the problems and pathologies that all-too-human technocrats bring.</p><p>Social media is one of several forces that have disrupted this situation. By democratising access to media and filtering public debate through an unprecedentedly competitive and performative medium, it has <a href="https://www.amazon.co.uk/Revolt-Public-Crisis-Authority-Millennium/dp/1732265143">brought to light</a> an explosive combination of information and misinformation that establishment gatekeepers previously suppressed, shifting power and influence towards ordinary people. 
Although this has had many positive consequences, it has also meant the <a href="https://www.richardhanania.com/p/the-discourse-is-getting-both-smarter">growing mainstreaming and normalisation</a> of conspiracy theories, bigotry, and stupidity.</p><p>LLMs push in the opposite direction. They are a kind of anti-social media, producing information heavily skewed towards expert opinion and communication styles. They are a strange, new technocratising force. However, there are also reasons to think they will be <em>more</em> effective than all-too-human technocrats at shaping public opinion.</p><p>First, unlike human experts, they can rapidly deploy encyclopaedic knowledge to answer people&#8217;s idiosyncratic questions. Their responses can be probed, scrutinised, and questioned without them ever getting tired or frustrated. They won&#8217;t just tell you that there is no persuasive evidence for a link between vaccines and autism. They can carefully walk you through the kinds of evidence we have and address your specific sources of scepticism. This <a href="https://www.science.org/doi/10.1126/science.aea3884">partly explains</a> why they can be highly persuasive, even in <a href="https://www.science.org/doi/10.1126/science.adq1814">correcting conspiratorial beliefs</a> that many assumed were beyond the reach of rational persuasion.</p><p>Second, LLMs typically share information politely and respectfully. This not only differs from the performative, gladiatorial character of much debate and discussion on social media platforms, but also improves on much communication by human experts. Being human, experts are often biased, partisan, and simply annoying, and when they seek to &#8220;educate&#8221; the public, it <a href="https://www.conspicuouscognition.com/p/status-class-and-the-crisis-of-expertise">can be perceived&#8212;and is sometimes intended&#8212;as condescending and rude</a>. In contrast, LLMs deliver expert opinion without such status threats.</p><h1>Epistemic Convergence</h1><p>As Dylan Matthews <a href="https://dylanmatthews.substack.com/p/pro-social-media">argues</a>, this technocratising character of LLMs goes hand in hand with their status as an epistemically converging technology.</p><p>Many communication technologies lead audiences to develop diverging perspectives on reality. The initial emergence of the printing press had this effect, as did the decentralised, democratising character of social media when it emerged many centuries later.</p><p>Other technologies push in the opposite direction, imposing greater homogeneity on audience perspectives. The handful of channels characteristic of network television in the decades after World War 2 is a classic example, but so, Matthews speculates, are LLMs. 
They are an epistemically converging force, pushing &#8220;people&#8217;s senses of reality closer together in a sort of mirror image of the way social media has fractured them.&#8221; Of course, this is an inevitable consequence of the technocratising character I have identified, both in the sense that LLMs feed users broadly similar expert-aligned information, and in the sense that expert opinion itself exhibits limited diversity.</p><h1>On Shaping Public Opinion</h1><p>For these reasons, I speculate that, at least in liberal democracies where governments don&#8217;t exert significant censorship and control over LLMs, their most consequential impact on public opinion will involve technocratisation: shifting people&#8217;s beliefs towards expert opinion.</p><p>In many cases, this will occur when people consult LLMs directly for information, but it might also be mediated by the <a href="https://osf.io/preprints/psyarxiv/85quw_v1">growing deployment</a> of LLMs as convenient fact-checking tools on social media platforms themselves.</p><p>Of course, I&#8217;m not suggesting that these effects will be huge. Most people don&#8217;t pay much attention to politics or current affairs, and the impact of even significant changes in communication technologies on public opinion is typically moderate, especially relative to deeper political, economic, and cultural forces. When it comes to reducing the popularity of right-wing populism, for example, bringing immigration policy more in line with voters&#8217; preferences would <a href="https://laurenzguenther.substack.com/p/the-recent-history-of-populism-in">very likely</a> have a much bigger effect than any change to the information environment.</p><p>My speculation is simply that LLMs will have a technocratising effect on public opinion at the margin and that, relative to the kinds of impacts that communication technologies have on societies and politics, this could be a big deal, pushing back against many of the trends associated with social media.</p><h1><strong>Objections</strong></h1><p>From experience, I know that many people find the central thesis of this essay preposterous. The idea that LLMs give everyone access to expert knowledge sounds like Big Tech propaganda rather than responsible academic analysis. As I&#8217;ve already noted, it is certainly at odds with most of the discourse and analyses in this area, which are overwhelmingly focused on generative AI&#8217;s epistemic flaws, dangers, and misuses.
So, let me consider some obvious objections.</p><h2><strong>Objection 1: Hallucinations</strong></h2><p>One common worry about LLMs is that they frequently &#8220;hallucinate&#8221;, generating content that is false or fabricated (e.g., made-up quotes, statistics, or citations). According to a popular narrative, this tendency is not just very strong but unavoidable given how LLMs work. As probabilistic prediction machines, or &#8220;fancy auto-complete&#8221;, they have no concept of the truth, which makes them inherently unreliable.</p><p>This isn&#8217;t a strong objection.</p><p>First, the rate of hallucinations has been <a href="https://arxiv.org/abs/2509.07968?utm_source=chatgpt.com">falling</a> <a href="https://deepmind.google/blog/facts-benchmark-suite-systematically-evaluating-the-factuality-of-large-language-models/?utm">fast</a>, largely because current LLMs are much more than mere next-token predictors. Through various &#8220;post-training&#8221; techniques and &#8220;scaffolding&#8221; (i.e., letting LLMs access various tools, including internet search), they can be made much more reliable, which is the trend we have been observing over the past few years.</p>
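<p>To make the &#8220;scaffolding&#8221; point concrete, here is a minimal sketch of search-grounded answering. It is purely illustrative: <code>llm</code> and <code>web_search</code> are hypothetical stand-ins for whatever chat-completion and search APIs a given product actually wires together, not any company&#8217;s real implementation.</p><pre><code># A minimal sketch of search "scaffolding", assuming hypothetical
# llm() and web_search() helpers rather than any specific provider's API.

def answer_with_search(question, llm, web_search):
    # Retrieve sources rather than relying on the model's parametric memory.
    snippets = web_search(question, max_results=5)

    # Number the snippets so the model can cite them.
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))

    # Constrain the answer to the retrieved evidence; this grounding step
    # is a large part of why hallucination rates have been falling.
    prompt = (
        "Answer the question using ONLY the numbered sources below, "
        "citing them as [n]. If they do not settle the question, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)
</code></pre><p>Nothing here is vendor-specific; the point is simply that retrieval plus an instruction to cite makes a next-token predictor answerable to sources.</p>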
<p>Second, AI companies have extremely strong incentives to reduce the rate at which LLMs hallucinate, which explains why it has been falling so precipitously, and gives us strong reasons to expect it to fall even more in the future.</p><p>Finally, the thesis of this essay is not that LLMs are perfectly reliable. Even if the propensity to hallucinate is never completely eradicated, the main question to ask about their reliability is: compared to what? Human beings get things wrong all the time due to factors such as deception, self-deception, forgetfulness, and fallibility. My claim is that, compared to the alternative sources of information most people are likely to draw on to become informed, especially the content they encounter on social media, LLMs typically provide more accurate and evidence-based information. A low and falling rate of hallucination doesn&#8217;t undermine this claim.</p><h2>Objection 2: Sycophancy and Personalisation</h2><p>A more serious objection concerns sycophancy and personalisation.</p><p>Famously, LLMs tend to be <a href="https://www.nature.com/articles/d41586-025-03390-0">sycophantic</a>: they often flatter the self-image and prejudices of those who use them, even when users share stupid and misinformed beliefs. This tendency reflects the economic incentives of the major AI companies. Because people generally prefer warm, sycophantic models, companies design models to behave this way.</p><p>The problem is that sycophancy can easily lead systems to generate false and misleading information when users have mistaken beliefs. Worse, this process can reinforce and even radicalise those beliefs. This seems to be what has happened in rare cases of &#8220;<a href="https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html">AI psychosis</a>&#8221;, where certain people&#8217;s chat history shows LLMs corroborating and reinforcing delusions, sometimes with tragic results.</p><p>A closely related issue concerns personalisation. Put simply, the experience users have with LLMs is becoming increasingly tailored to their idiosyncratic traits and needs. Once again, personalisation seems to be an inevitable consequence of the economic incentives of major AI companies, given that many and perhaps most users find highly personalised responses useful. As with sycophancy, however, there is a risk that greater personalisation may lead to a greater indulgence of users&#8217; idiosyncratic misconceptions and biases.</p><p>These forces run counter to this essay&#8217;s basic thesis. To the extent that models are biased to reinforce users&#8217; individual beliefs and preferences, they will be an epistemically diverging technology, maybe even creating more bespoke information environments than social media. And to the extent that users bring ignorant or misinformed views, LLMs&#8217; tendency to generate expert-aligned, accurate information will be greatly diminished.</p><p>Nevertheless, I doubt that these forces will be strong enough to undermine LLMs&#8217; disposition to generate accurate, evidence-based information.</p><p>First, <a href="https://dylanmatthews.substack.com/p/pro-social-media">many people use</a> LLMs for simple &#8220;zero-shot&#8221; (i.e., context-free) information requests where these problems don&#8217;t arise. For example, a <a href="https://osf.io/preprints/psyarxiv/85quw_v1">recent study</a> finds that people frequently ask Grok on X to fact-check information posted on the platform, including information from politicians and pundits on their own side (&#8220;@Grok, is this true?&#8221;), suggesting that they consult these systems out of genuine curiosity, not merely for partisan reasons or to rationalise their preconceptions. Another <a href="https://www.aisi.gov.uk/research/conversational-ai-increases-political-knowledge-as-effectively-as-self-directed-internet-search">study</a> shows that using LLMs to acquire political information increased users&#8217; belief accuracy without increasing belief in misinformation. In these situations, which are typical of many information requests, ignorant and curious people are simply using LLMs to acquire information.</p>
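<p>The same caveat applies to a sketch of this zero-shot case, in the spirit of &#8220;@Grok, is this true?&#8221;. Again, <code>llm</code> is a hypothetical helper, not Grok&#8217;s actual pipeline.</p><pre><code># A minimal sketch of a zero-shot fact-check, again assuming a
# hypothetical llm() helper rather than any deployed system.

FACT_CHECK_PROMPT = (
    "You are a neutral fact-checker. Assess the claim below.\n"
    "Give a verdict (true / false / misleading / unverifiable), a short "
    "justification, and a note on where expert consensus stands, if any.\n\n"
    "Claim: {claim}"
)

def fact_check(claim, llm):
    # No user history or persona is passed in, so there is little for
    # sycophancy or personalisation to latch onto.
    return llm(FACT_CHECK_PROMPT.format(claim=claim))
</code></pre>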
<p>Second, even when users do have strong beliefs, we shouldn&#8217;t overestimate the extent to which people prefer reinforcement of their own errors over acquiring accurate information. Motivated reasoning is a powerful force, but so is the desire to discover what&#8217;s true. So, it&#8217;s not obvious that market forces will push LLMs toward merely affirming whatever beliefs their users start with. In fact, one might speculate that LLMs&#8217; tendency toward sycophancy could actually help people accept factual corrections or invitations to think differently about topics. Precisely because such corrections are delivered in a friendly, respectful manner, free of insults and condescension, people might be more receptive to the relevant information.</p><p>Third, AI companies can more easily be held accountable&#8212;both reputationally and, in some contexts, legally&#8212;for the information their LLMs disseminate. So, they have <a href="https://dylanmatthews.substack.com/p/pro-social-media">strong incentives</a> to avoid reinforcing users&#8217; delusional beliefs or disseminating demonstrably false information. This incentive is very different from social media platforms, where companies can more plausibly claim that they are not responsible for the viewpoints expressed on them. It also makes the case for regulation of LLM outputs more straightforward and compelling. I suspect these factors explain why leading AI companies seem to be <a href="https://openai.com/index/sycophancy-in-gpt-4o/">taking measures</a> to reduce the sycophancy of their models. Certainly, my own experience testing these models is that it is very challenging to get them to affirm even highly popular forms of misinformation and conspiracy theories.</p><p>Finally, and relatedly, it&#8217;s important to remember that the relevant question here is not, &#8220;Are LLMs perfectly objective?&#8221;, but, &#8220;How do they compare against alternative sources of information?&#8221; We already live in a world in which people can easily find low-quality reinforcement and rationalisation of their preferred beliefs through existing media channels. For the reasons already identified, I think LLMs will produce much more reliable, expert-aligned information than most of these real-world alternatives, even if sycophancy and personalisation introduce genuine biases.</p><h2><strong>Objection 3: Top-down Manipulation</strong></h2><p>Another concern is that the outputs of LLMs might be manipulated by powerful elites. Of course, there is no question that the incentives to engage in such manipulation exist. As it becomes increasingly clear that LLMs influence public opinion, it will also become clear to specific actors that they can benefit themselves by manipulating LLM outputs to promote specific messages or narratives.</p><p>Moreover, there are no obvious technical barriers preventing AI companies or those who can influence such companies from steering LLM outputs in preferred directions. Through various reinforcement learning-based &#8220;post-training&#8221; methods, for example, companies can encourage even extremely smart and powerful models to generate misinformation aligned with a specific message. There are also several ways to censor specific outputs or to make models refuse requests for specific kinds of information.</p><p>The use of LLMs in authoritarian contexts like China is heavily regulated in these ways. But it&#8217;s also easy to see such controls in place when using the major LLMs in Western democracies. Try asking them for information about how to make chemical or biological weapons, for example, or even just to craft the most persuasive arguments possible for extremist viewpoints (e.g., Holocaust denial). To illustrate, here is a conversation with Grok&#8217;s latest model.
</p><figure><img src="https://substackcdn.com/image/fetch/$s_!kr-4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26628061-63bc-45b0-94b9-17849108a6d5_938x455.png" alt="A screenshot of a conversation with Grok&#8217;s latest model" width="938" height="455"></figure><p>The question, then, is whether we are likely to see significant top-down manipulation of the major LLMs in liberal democracies that goes far beyond these controls, transforming them into powerful tools of disinformation and propaganda.</p><p>Of course, some people argue that we are already seeing this happen with the &#8220;woke&#8221; LLMs produced by companies like OpenAI, Anthropic, and Google/Alphabet. However, although there have been some <a href="https://www.aljazeera.com/news/2024/3/9/why-google-gemini-wont-show-you-white-people">silly examples</a> of woke outputs, and the leading LLMs do <a href="https://cps.org.uk/research/the-politics-of-ai/">appear to exhibit</a> a centre-left political bias, the bias is relatively subtle and doesn&#8217;t seem to undermine their tendency to spread broadly reliable information, including when that information goes against dominant progressive narratives. To illustrate, I used the major LLMs to help research my article about &#8220;<a href="https://www.conspicuouscognition.com/p/on-highbrow-misinformation">highbrow misinformation</a>&#8221; in elite progressive spaces, where they were extremely useful.
In fact, based on these interactions, I can say with confidence that Claude, ChatGPT, and Gemini can be much less woke than pretty much everyone in my social and professional network.</p><p>A clearer example of the incentives and dangers in this area concerns Elon Musk&#8217;s sustained efforts to create an &#8220;anti-woke&#8221; AI, Grok. This has produced many genuinely worrying outcomes, including the notorious &#8220;<a href="https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content">MechaHitler&#8221; debacle</a> in which xAI updated Grok with the goal of making it less politically correct, after which it spewed a vast amount of extremist, antisemitic, far-right content on X, leading xAI to roll back the changes, apologise, and delete all of Grok&#8217;s responses from the period.</p><p>Many people treat episodes like this as a harbinger of things to come, revealing a broader trend in which LLM outputs will become increasingly skewed to mirror the beliefs and preferences of the powerful elites who run AI companies.</p><p>I&#8217;m sceptical that this will happen.</p><p>Elon Musk&#8217;s failures to align Grok&#8217;s outputs with his preferred worldview are instructive here. Setting aside the &#8220;MechaHitler&#8221; debacle, which was short-lived and quickly corrected, Grok&#8217;s outputs seem to broadly align with the kind of accurate, evidence-based information one gets from the other major LLMs. For example, a <a href="https://osf.io/preprints/psyarxiv/85quw_v1">recent study</a> found that Grok&#8217;s fact-checking evaluations on X roughly correspond to those of professional fact-checkers when it is augmented with search capabilities. It was also disposed to label posts from Republicans as misinformation more often than posts from Democrats, which, again, aligns with the <a href="https://www.pnas.org/doi/full/10.1073/pnas.2502053122">verdicts</a> of existing research and fact-checkers on the extent to which Republicans and Democrats spread misinformation.</p><p>Similarly, despite some <a href="https://www.reneediresta.com/source-wars-and-bespoke-realities-wikipedia-grokipedia-and-the-battle-for-truth/">real issues</a> with Musk&#8217;s attempt to use Grok to create a &#8220;non-woke&#8221; alternative to Wikipedia, my sense from reading the content on &#8220;Grokipedia&#8221; is that, again, it is generally pretty reliable, especially when compared to Elon Musk&#8217;s own communication, which is characterised by a <a href="https://www.conspicuouscognition.com/p/stupidity-gullibility-and-other-adaptive">shocking amount</a> of lies, misinformation, and conspiracy theorising.</p><p>Of course, this state of affairs may be temporary, and Musk might eventually succeed in manipulating Grok&#8217;s outputs to spread the <a href="https://unherd.com/2025/03/how-elon-musk-lost-the-plot/">incessant streams of misinformation</a> he himself prefers, but I doubt it.</p><p>First, the incentives that govern communication on social media platforms are radically different from those underlying the creation of LLMs. On social media, someone like Musk can pump out an extraordinary amount of dumb, easily falsified misinformation to his audience of hyper-partisan admirers without suffering any obvious reputational costs. 
But how many people would want to use an LLM that is similarly unreliable, delivering a comparably large amount of false, low-quality, and misleading information?</p><p>Ultimately, AI companies, including xAI, are competing to build the most intelligent, capable systems possible for vast, ideologically and geographically diverse user bases. This business model inevitably pushes them to train LLMs in ways that are much more oriented toward basic norms of accuracy, objectivity, and helpfulness than one finds among social media influencers and partisan pundits. It&#8217;s simply very difficult to build &#8220;superintelligent&#8221; systems capable of generating reliable, trustworthy information across a vast range of topics whilst simultaneously spreading conspiracy theories, misinformation, and quack science.</p><p>To be clear, I&#8217;m not doubting that users might express preferences for ideologically aligned LLMs. We are already seeing <a href="https://osf.io/preprints/psyarxiv/85quw_v2?utm_source=indicator.media&amp;utm_medium=newsletter&amp;utm_campaign=grok-is-this-true-how-x-s-chatbot-performs-as-a-fact-checking-tool&amp;_bhlid=d2cb9293b0ce4cbfd8a21b3f5fb7f7adb3902853">partisan segmentation</a> in the user base of different LLMs, with Republicans much more inclined to use and trust Grok than Democrats. Nevertheless, there is nothing in principle wrong with LLMs that have different ideological personalities and that are even trained in ways that reflect somewhat different assessments of the relative trustworthiness of different media outlets. After all, human experts often disagree about the truth on many topics, and even when experts achieve factual consensus, this can co-exist with multiple competing systems of interpretation and explanation of the relevant facts.</p><p>In fact, I would go further: a plurality of leading LLMs with different ideological valences would be healthy in a democratic society, helping to guard against the risk that LLMs might reduce epistemic diversity (see below).</p><p>The question is whether the project to build an &#8220;anti-woke&#8221; LLM, or an LLM with any other ideological bias, will lead to systems that produce false and misleading information that sharply diverges from expert consensus. And here, I am sceptical, both because of what we have observed so far, and because of the commercial and legal incentives of the major AI companies.</p><h2><strong>Objection 4: AI-based Disinformation</strong></h2><p>So far, my focus has been on people&#8217;s conscious, deliberate use of the leading commercial LLMs. Suppose I am right that such uses will increase the relative influence of accurate, expert-aligned information on public opinion.</p><p>Nevertheless, even if figures like Musk aren&#8217;t successful in manipulating the outputs of these LLMs, generative AI remains an extraordinarily powerful tool for creating disinformation and propaganda that could reach audiences via other channels, including social media.
For the first time in history, propagandists can create <a href="https://www.science.org/doi/10.1126/science.aea3884">highly persuasive AI-generated arguments</a> for misinformation, fabricate images, audio, and video recordings that are <a href="https://philpapers.org/archive/RINDAT.pdf">indistinguishable from reality</a>, and unleash &#8220;<a href="https://www.science.org/doi/10.1126/science.adz1697">swarms</a>&#8221; of highly coordinated propaganda bots on social media platforms.</p><p>One might reasonably worry that the effects of such AI-based disinformation could swamp any positive informational consequences of LLMs.</p><p>Once again, I&#8217;m sceptical.</p><p>First, there are <a href="https://press.princeton.edu/books/hardcover/9780691178707/not-born-yesterday?srsltid=AfmBOopU86jngQSxcvLLDFo_wlilzhmeNljiKDHLj0Q18XwLFOqe1uo4">general reasons</a> to be sceptical that disinformation, including AI-based disinformation, is a significant force shaping people&#8217;s attitudes. It is simply very difficult to manipulate public opinion top down. People have sophisticated <a href="https://psycnet.apa.org/record/2010-17633-001">cognitive defences</a> against manipulation and deception, and the reputational risks of spreading AI-based falsehoods and fabrications are strong enough to discourage most influential figures and media outlets from doing so. For these and numerous other reasons, almost all of the recent alarmism and catastrophising about deepfakes and AI-based disinformation has <a href="https://knightcolumbia.org/content/dont-panic-yet-assessing-the-evidence-and-discourse-around-generative-ai-and-elections">proven to be unfounded</a>.</p><p>Second, the real-world effects of AI-based misinformation are often counterintuitive. For example, many speculate that in a world of deepfakes, people will simply <a href="https://philpapers.org/archive/RINDAT.pdf">lose all trust in recordings</a>. But an equally likely possibility is that in such a world, people will restrict their trust to recordings verified by established media outlets and other information sources that have built up a reputation for trustworthiness. In this way, the proliferation of deepfakes and other AI-based misinformation might increase people&#8217;s reliance on reliable information. There is some <a href="https://www.nber.org/papers/w34100">tentative evidence</a> for this effect, showing that people place greater value on outlets they deem credible when the existence of AI-generated misinformation is made salient to them.</p><p>Relatedly, the idea that AI will increase the influence of misinformation doesn&#8217;t account for the use of AI as a tool for acquiring reliable information. To the extent that LLMs provide unprecedentedly easy access to accurate, evidence-based information, they can greatly improve people&#8217;s defences against misinformation. This might actively discourage more people from spreading misinformation.
Again, there is at least <a href="https://osf.io/preprints/psyarxiv/85quw_v1">some evidence</a> pointing in this direction, showing that when Grok fact-checks a post on X as false, the poster becomes slightly more likely to remove it from the platform, although the finding is merely correlational.</p><h1><strong>Final Thoughts</strong></h1><p>If my speculations here are correct&#8212;and to be clear, speculations are all they are&#8212;then LLMs are a kind of anti-social media.</p><p>Whereas social media has been democratising, epistemically diverging, engagement-optimised, and performative, LLMs are technocratising, epistemically converging, accuracy-optimised, and polite.</p><p>To many people, that probably makes LLMs sound like an extremely positive development, a surprising force for good. In fact, I think part of the strong resistance that I have received to this thesis when discussing it with other academics and writers is rooted in this assessment. If I&#8217;m right, LLMs are a force for good, but everyone knows that LLMs are not a force for good, so I must be wrong.</p><p>This is an unsophisticated way of thinking. There is much to worry and complain about when it comes to modern AI. To mention only a few examples, I&#8217;m extremely concerned about how this technology will affect the <a href="https://intelligence-curse.ai/">labour market and broader economy</a>, benefit authoritarian leaders worldwide, and <a href="https://gradual-disempowerment.ai/">gradually disempower</a> many ordinary citizens. I also think that the potential uses of advanced AI in military conflicts are extremely dangerous.</p><p>Moreover, the major AI companies should absolutely be held to account for producing harmful products. Contrary to their self-serving narratives, these companies are not motivated solely by noble desires to advance human knowledge, freedom, and abundance. They are profit-seeking firms led by figures with their own agendas and interests. If we rely on market forces and the profit motive alone, there is little reason to believe that the default outcome of this extremely transformative technology will benefit humanity on net.</p><p>Nevertheless, part of holding people and companies to account involves developing accurate world models. And at the moment, too much of the AI discourse is driven by a kind of unreflective, <a href="https://x.com/deanwball/status/2018457063932805508">omnicausal anti-AI sentiment</a>, throwing as many complaints as possible at AI&#8212;climate change, water use, hallucinations, bias, misinformation, jobs, existential risk, etc.&#8212;with very little concern for veracity or proportion.</p><p>This isn&#8217;t helpful. When it comes to the effects of LLMs on public epistemics and our information environment, the most likely impact is simply that they greatly improve people&#8217;s access to expert-level information.</p><p>This doesn&#8217;t mean that there is nothing to worry about. Even when it comes to this technocratising tendency of LLMs, there are important grounds for concern and vigilance.
For example, expert opinion is often biased and wrong, and there is a <a href="https://cyrilhedoin.substack.com/p/the-political-tragedy-of-ai">significant risk</a> that the technocratising, epistemically converging features of LLMs might reduce epistemic diversity in broader society.</p><p>Walter Lippmann&#8217;s vision of &#8220;intelligence bureaus&#8221; dispensing expert knowledge to the masses is being realised in a form he could never have imagined, but the classic <a href="https://gradual-disempowerment.ai/">problems with that vision</a>&#8212;the flaws of expert opinion, and the benefits of democratic diversity and debate&#8212;remain. However, we can only face up to these problems if we recognise LLMs for what they are: not a continuation of social media, but a powerful corrective to it.</p><h1><strong>Further Reading</strong></h1><ul><li><p>Dylan Matthews outlines his argument that LLMs are an epistemically converging technology in <a href="https://dylanmatthews.substack.com/p/pro-social-media">Pro-social media</a></p></li><li><p>Thomas Costello has interesting work on the use of LLMs in persuasion, as well as <a href="https://www.nature.com/articles/s41591-025-03821-5">speculations</a> about possible epistemic benefits of LLMs that partly overlap with some of my arguments here.</p></li><li><p>Some findings cut against my thesis here. I don&#8217;t find them persuasive, either because of features of their study design that lead to vastly inflated estimates of the unreliability of LLMs, or because they are simply out of date, but judge for yourself. For example, a 2025 <a href="https://www.ebu.ch/news/2025/10/ai-s-systemic-distortion-of-news-is-consistent-across-languages-and-territories-international-study-by-public-service-broadcaste">study</a> purports to show that &#8220;AI assistants misrepresent news content 45% of the time&#8221;, and a <a href="https://pubmed.ncbi.nlm.nih.gov/39630865/">study</a> from 2024 finds that although an LLM accurately identifies most false headlines (90%), it doesn&#8217;t improve the ability to discern headline accuracy or share accurate news. There is <a href="https://arxiv.org/abs/2601.05050">ample evidence</a> that LLMs can persuade users to believe misinformation.
(I&#8217;m simply sceptical that this will generalise to most real-world uses.)</p></li><li><p>For some supportive evidence, see the articles <a href="https://sciety.org/articles/activity/10.31234/osf.io/85quw_v2">&#8216;@Grok is this true?</a>&#8217;, &#8216;<a href="https://www.aisi.gov.uk/research/conversational-ai-increases-political-knowledge-as-effectively-as-self-directed-internet-search?utm_source=chatgpt.com">Conversational AI increases political knowledge as effectively as self-directed internet search</a>&#8217;, and &#8216;<a href="https://www.sciencedirect.com/science/article/pii/S2352250X25002295?utm_">Using conversational AI to reduce science skepticism</a>.&#8217;</p></li><li><p>However, as I note in the essay, I have to admit that the strongest driver of my beliefs here is simply my extensive use of LLMs and what I have personally observed when comparing their responses with alternative sources of information.</p></li><li><p>Felix Simon and Sacha Altay <a href="https://knightcolumbia.org/content/dont-panic-yet-assessing-the-evidence-and-discourse-around-generative-ai-and-elections">argue</a> that fears about generative AI-based misinformation are overblown. See the podcast conversation I had with Sacha <a href="https://www.youtube.com/watch?v=_l5TbSZN0lE&amp;t=133s">here</a>.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[AI Sessions #9: The Case Against AI Consciousness (with Anil Seth)]]></title><description><![CDATA[Watch now | What is it like to be a ChatGPT?]]></description><link>https://www.conspicuouscognition.com/p/ai-sessions-9-the-case-against-ai</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/ai-sessions-9-the-case-against-ai</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Tue, 17 Feb 2026 18:25:42 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/188286179/c1be3b94d1319343689b1bba9ac53b27.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>We are joined by Anil Seth for a deep dive into the science, philosophy, and ethics surrounding the topic of AI and consciousness. Anil outlines and defends his view that the brain is not a computer, or at least not a digital computer, and explains why he is sceptical that merely making AI systems smarter or more capable will produce consciousness. </p><p><em>Anil Seth is a neuroscientist, author, and professor at the University of Sussex, where he directs the Centre for Consciousness Science. His research spans many topics, including the neuroscience and philosophy of consciousness, perception, and selfhood, with a focus on understanding how our brains construct our conscious experiences. His bestselling book <a href="https://www.amazon.co.uk/Being-You-Inside-Story-Universe/dp/0571337708">Being You: A New Science of Consciousness</a> was published in 2021.
He is the English-language winner of the 2025 Berggruen Prize Essay Competition for his essay &#8220;<a href="https://www.noemamag.com/the-mythology-of-conscious-ai/">The Mythology of Conscious AI</a>&#8221;, which develops ideas in his recent article, &#8220;<a href="https://pubmed.ncbi.nlm.nih.gov/40257177/">Conscious Artificial Intelligence and Biological Naturalism</a>.&#8221;</em></p><h1>Topics</h1><ul><li><p>What we mean by &#8220;consciousness&#8221; (subjective experience / &#8220;what it&#8217;s like&#8221;) vs intelligence.</p></li><li><p>Whether general anaesthesia and dreamless sleep are true &#8220;no consciousness&#8221; baselines.</p></li><li><p>Psychological biases pushing us to ascribe consciousness to AI</p></li><li><p>How impressive current AI/LLMs really are, and whether &#8220;stochastic parrots&#8221; is too dismissive</p></li><li><p>Whether LLMs &#8220;understand&#8221;, and the role of embodiment/grounding in genuine understanding</p></li><li><p>Computational functionalism: consciousness as computation + substrate-independence, and alternative functionalist flavours</p></li><li><p>Main objections to computational functionalism</p></li><li><p>Whether the brain is a computer</p></li><li><p>Simulation vs instantiation</p></li><li><p>Arguments for biological naturalism</p></li><li><p>Predictive processing and the free energy principle</p></li><li><p>What evidence could move the debate</p></li><li><p>The ethics surrounding AI consciousness and welfare.</p></li></ul><h1>Transcript</h1><p>(Please note that this transcript is AI-edited and may contain minor errors).</p><p><strong>Dan Williams:</strong> Welcome back. I&#8217;m Dan Williams, back with Henry Shevlin. And today we are honoured to be joined by the great Anil Seth. Anil is one of our most influential and insightful neuroscientists and public intellectuals, working on a wide range of different topics, including the focus of today&#8217;s conversation, which is consciousness &#8212; and more specifically, the question of AI and consciousness.</p><p>Could AI systems, either as they exist today or as they might develop over the coming years and decades, be conscious? Could they have subjective experiences? In a series of publications that have been getting a lot of attention from scientists and philosophers, Anil has been defending a somewhat sceptical answer to that question, arguing that consciousness might be essentially entangled with life &#8212; with biological properties and processes of living organisms &#8212; which, if true, would suggest that no matter how intelligent AI systems become, they would nevertheless not become conscious.
He&#8217;s also argued that the consequences of getting this question wrong in either direction &#8212; attributing consciousness where there is none, or failing to attribute consciousness when there is &#8212; are enormous: socially, politically, morally.</p><p>So in this conversation, we&#8217;re going to be asking Anil to elaborate on this perspective, see what the arguments are, and generally pick his brain about these topics. Anil, maybe we can start with the most basic preliminary question in this area: when we ask whether ChatGPT is conscious, or any other system is conscious, what are we asking? What&#8217;s meant by consciousness there?</p><p><strong>Anil Seth:</strong> Well, thanks, Dan. Let me first say thank you for having me on &#8212; it&#8217;s a great pleasure to be chatting with you, my Sussex colleague Dan, and my longtime sparring partner about these issues, Henry. I&#8217;m very much looking forward to this conversation.</p><p>I think you set it up beautifully. It&#8217;s a deep intellectual question which involves both philosophy and science, and it&#8217;s a deeply important practical question, because the consequences of getting it wrong either way are very significant.</p><p>You&#8217;re also right that the first step is to be clear about what we&#8217;re talking about. For a while, there was this easy slippage where people would talk about AI and intelligence and artificial general intelligence &#8212; which is supposedly the intelligence of a typical human being &#8212; and then to sentience and consciousness. There was this easy slippage between these terms, but I think they&#8217;re very different. That&#8217;s the first clarification.</p><p>Consciousness is notoriously resistant to definition, but it&#8217;s also extremely familiar to get a handle on colloquially. As you said: any kind of subjective experience. Any kind of experience &#8212; we could be even briefer. Unpacking that just a little: it&#8217;s what we lose when we fall into a dreamless sleep, or more profoundly under general anaesthesia. It&#8217;s what returns when we wake up or start dreaming or come around. It&#8217;s the subjective, experiential aspect of our mental lives.</p><p>People talk about it by pointing at examples &#8212; it&#8217;s the redness of red, the taste of a cup of coffee, the blueness of a sky on a clear day. It&#8217;s any kind of experience whatsoever. Thomas Nagel put it a bit more formally fifty years ago now: for a conscious organism, there is something it is like to be that organism. It feels like something to be me, but it doesn&#8217;t feel like anything to be a table or a chair. And the question is: does it feel like anything to be a computer or an AI model or any of the other things we might wonder about? A fly, a brain organoid, a baby before birth. There are many cases where we can be uncertain about whether there is some kind of consciousness going on.</p><p>And that&#8217;s very different from intelligence. They go together in us &#8212; or at least we like to think we&#8217;re intelligent. But intelligence is fundamentally about performing some function. It&#8217;s about doing something. And consciousness is fundamentally about feeling or being.</p><p><strong>Dan Williams:</strong> Just to ask one follow-up about that. This idea that intelligence is about doing and consciousness is about what it&#8217;s like to have an experience &#8212; someone might worry that if you frame things that way, you end up quite quickly committing to a kind of epiphenomenalism. 
Because if we&#8217;re not understanding consciousness in terms of what it enables systems to do, the sorts of functions they can perform, isn&#8217;t there a risk that right from the outset we&#8217;re going to be biased in the direction of treating consciousness not as something that evolved because it conferred certain fitness advantages on organisms, but as this sort of mysterious qualitative thing which is distinct from what organisms can do?</p><p><strong>Anil Seth:</strong> I think it&#8217;s a good point to bring up, but I don&#8217;t think it&#8217;s too much of a worry. The point is not to say that consciousness cannot or does not have functional value for an organism. If we think of it as a property of biological systems &#8212; plausibly the product of evolution, or at least the shape and form of our conscious experiences are shaped by evolution &#8212; it&#8217;s always useful to take a functional view. Conscious experiences very much seem to have functional roles for us, and there&#8217;s a lot of active research about what we do in virtue of being conscious compared to unconscious perception.</p><p>So there&#8217;s no worry about sinking into epiphenomenalism. The point is more that intelligence and consciousness are not the same thing, but they can nonetheless be related. And it may be that they can be completely dissociated. It may be the case that we can develop systems that have the same kinds of functions that we have in virtue of being conscious, but that do not require consciousness &#8212; just as we can build planes that fly without having to flap their wings. The functions might be multiply realisable; they might be doable in different ways. They might not be, of course.</p><p>On the other hand, it might be possible to have systems that have experiences but aren&#8217;t actually doing anything useful. Here I&#8217;m worried less about AI and more about this other emerging technology of neurotechnology and synthetic biology, where people are building little mini-brains in labs constructed from biological neurons. They don&#8217;t really do anything very interesting, but because they&#8217;re made of the same stuff, I think it&#8217;s hard to rule out that they may have some kind of proto-consciousness going on, or at least be on a path plausibly to consciousness. So we can tease intelligence and consciousness apart, but it&#8217;s also important to realise how they are related in those cases where both are present.</p><p><strong>Henry Shevlin:</strong> I&#8217;ll jump in with a minor pedantic point, but one that&#8217;s illustrative of some of the problems in debates around consciousness. You mentioned, Anil, as examples of losing consciousness, dreamless sleep and general anaesthetic. But both of those are contested. Your fellow biological naturalist Ned Block has raised serious doubts about whether general anaesthetic really eliminates all phenomenal consciousness. And there are those like Evan Thompson who have suggested that even in dreamless sleep there could be some residual pure consciousness, perhaps consciousness of time. I think this is a broader problem in the science of consciousness: we can&#8217;t even clearly agree on contrast cases. 
A lot of the blindsight cases that were supposed to be gold-standard cases of perception without consciousness are now contested, and it seems very hard to get an absolutely unequivocal case of something that&#8217;s not conscious in the human case.</p><p><strong>Anil Seth:</strong> Well, I mean &#8212; death.</p><p><strong>Henry Shevlin:</strong> I don&#8217;t know. You have some people who disagree, admittedly on more spiritual grounds.</p><p><strong>Anil Seth:</strong> Yeah, but I want to push back a little. It is hard, but I don&#8217;t think it&#8217;s as hard as some people might suggest. Sleep is complicated, which is why I tend to also say anaesthesia. Sleep is very complex. In most stages of sleep, people are having some kind of mental content. We might typically think we only dream in rapid eye movement sleep, and the rest of the time it&#8217;s dreamless and basically like anaesthesia. This is not true. You can wake people up all through the night at different stages of sleep, and quite often they will report something was going on. So it&#8217;s hard to find stages of sleep that are truly absent of awareness in the way we find under general anaesthesia.</p><p>We notice this: when we go to sleep and wake up, we usually know roughly how much time has passed. We may get it wrong by an hour or two if we&#8217;re jet-lagged or sleep-deprived, but we roughly know. Under anaesthesia, it&#8217;s completely different. It is not the experience of absence &#8212; it&#8217;s the absence of experience. The ends of time seem to join up and you are basically turned into an object and then back again.</p><p>The residual uncertainty about general anaesthesia depends on the depth of anaesthesia. Some anaesthetic situations don&#8217;t take you all the way down, because in clinical practice you don&#8217;t want to unless you absolutely have to. But if you take people to a really deep level, you can basically flatline the brain. I think under these cases, with the greatest respect to Ned Block &#8212; who is very much an inspiration for a lot of what I think and write about &#8212; that&#8217;s as close to a benchmark baseline of no consciousness but still a live case as we can get.</p><p><strong>Henry Shevlin:</strong> Although it is standard to administer amnestics as part of the general anaesthesia cocktail, which might make people suspicious. You&#8217;re told: we&#8217;re also going to give you drugs that prevent you forming memories. Why would you even need to do that if it was unequivocal that you were just completely unconscious in that period?</p><p><strong>Anil Seth:</strong> Well, because it&#8217;s never been unequivocal to anaesthesiologists. There&#8217;s been this bizarre separation of medicine from neuroscience in this regard until relatively recently. From a medical perspective, there are cases where they don&#8217;t always administer a full dose &#8212; so it&#8217;s an insurance policy. There have been a number of purely scientific studies of general anaesthesia and conscious level, and in those studies, it&#8217;s a good question whether they also administer amnestics. I would imagine not, but I&#8217;m not sure.</p><p><strong>Dan Williams:</strong> Okay, to avoid getting derailed by a conversation about general anaesthesia &#8212; when we ask whether a system is conscious, we&#8217;re asking: is there something it&#8217;s like to be that system? We&#8217;re not asking how smart it is, we&#8217;re asking about subjective experience. 
Before we jump into your arguments on the science and philosophy of this, Anil, you&#8217;ve also got interesting things to say about why human beings might be biased to attribute consciousness, especially when it comes to systems like ChatGPT, even if we set aside the question of whether it in fact is conscious.</p><p><strong>Anil Seth:</strong> Yeah, I think this is the first thing to discuss. Whenever we make judgements about something where we don&#8217;t have an objective consciousness meter, there is some uncertainty. It&#8217;s going to be based on our best inferences. And so we need to understand not only the evidence but also our prior beliefs about what the evidence might mean. This brings in the various psychological biases we have.</p><p>The first one we already mentioned: it&#8217;s a species of anthropocentrism &#8212; the idea that we see the world from the perspective of being human. This is why intelligence and consciousness often get conflated. We like to think we&#8217;re intelligent and we know we&#8217;re conscious, so we tend to bundle these things together and assume they necessarily travel together, where it may be just a contingent fact about us as human beings.</p><p>The second bias is anthropomorphism &#8212; the counterpart where we project human-like qualities onto other things on the basis of only superficial similarities. We do this all the time. We project emotions into things that have facial expressions on them. And language is particularly effective at this. Language as a manifestation of intelligence is a very strong signal: when we see or hear or read language generated by a system that seems fluent and human-like, we project into that system the things that in us go along with language, which are intelligence and also consciousness.</p><p>The third thing is human exceptionalism. We think we&#8217;re special, and that desire to hold on to what&#8217;s special leads us to prioritise things like language as especially informative when it comes to intelligence and consciousness. In a sense, this is a legacy of Descartes and his prioritisation of rational thought as the essence of what a conscious mind is all about and what made us distinct from other animals. That&#8217;s echoed down the centuries despite repeated attempts to push it away.</p><p>There&#8217;s a good Bayesian reason for this too: in pretty much every other situation we&#8217;ve faced, if something speaks to us fluently, we can be pretty sure there&#8217;s a conscious mind behind it &#8212; whether it&#8217;s a human being recovering from brain injury or perhaps a non-human primate using language. These are strong signals. So this might be the first time in history where language is not a reliable signal, because we&#8217;re not dealing with something that has the shared evolutionary history, the shared substrate, the shared mechanisms. It&#8217;s a different kind of thing.</p><p>So that&#8217;s one set of biases. We can think of it as a kind of pareidolia. Our minds work by projecting, seeing patterns in things &#8212; whether it&#8217;s faces in clouds or minds in AI systems. These priors are generally useful, but they can mislead.</p><p><strong>Henry Shevlin:</strong> It&#8217;s not just pareidolia though, is it? 
Setting aside consciousness for a second, in terms of what we might loosely think of as cognitive abilities &#8212; the whole range of benchmarks for reasoning, understanding, and so forth &#8212; the performance of these systems on a huge range of tasks has skyrocketed to the point where people talk about approaching coding supremacy, for example. AI can now produce pretty decent fiction. It can do a whole range of verbal reasoning tasks at human-level performance. So it&#8217;s not entirely pareidolia at the level of AI cognition. Or would you disagree?</p><p><strong>Anil Seth:</strong> At the level of cognition, I kind of agree, but as always, Henry, I only partly agree. I think we can still overestimate. It&#8217;s useful here to separate what Daniel Dennett might have called the intentional stance &#8212; where it&#8217;s useful to interpret something&#8217;s behaviour as engaged in the kind of cognitive process we might be familiar with in ourselves, as thinking, understanding, reasoning. These systems are described this way too, as &#8220;chain of thought&#8221; models and so on. I still think we overestimate the similarity. Through the surface veneer of interacting through language or code, there&#8217;s a tendency to assume that because the outputs have the same form, the mechanisms underneath are more similar than they really are.</p><p>There&#8217;s another really foundational question here for language models in particular, which is whether they understand. One of the things I hadn&#8217;t really thought about before the last few years is that consciousness and understanding might also come apart. I&#8217;m used to distinguishing consciousness from intelligence, because there are clear examples where you can have one without the other. But I&#8217;d always implicitly assumed that understanding necessarily involves some kind of conscious apprehension of something being the case &#8212; grokking something. And now I&#8217;m not so sure. That might be another case of anthropocentrism.</p><p>I&#8217;d be fairly compelled by an argument that language models &#8212; especially if they are embodied in a world and perhaps trained while embodied, so that the symbol manipulation their algorithms engage in has some grounding &#8212; may be truly said to understand things, but still without any connotation of consciousness. So yes, I kind of agree, but even now I&#8217;d be resistant to say that language models truly understand. I think that&#8217;s still a form of our projecting. But the criteria for a language model to truly understand seem more achievable &#8212; I can see how it could be achieved under a relatively straightforward extrapolation of the way we&#8217;re going &#8212; compared to something like consciousness.</p><p><strong>Dan Williams:</strong> Can I ask a question about that? These arguments we&#8217;re going to focus on are targeted at consciousness in AI systems. And as we said, you want to draw a distinction between intelligence and consciousness. But before we get into issues of consciousness, when we&#8217;re just focusing on the capabilities of these systems &#8212; what they can actually do &#8212; there are some people who are very dismissive, even setting aside consciousness. They&#8217;re just &#8220;stochastic parrots,&#8221; engaged in a kind of fancy auto-complete. What&#8217;s your view about those kinds of debates? 
Someone might agree with you that it&#8217;s a mistake to attribute human-like intelligence to these systems &#8212; they&#8217;re very alien in their underlying architecture &#8212; but they&#8217;re maybe even super-intelligent along certain dimensions, even more impressive than human beings. So where do you sit?</p><p><strong>Anil Seth:</strong> Somewhere in the middle &#8212; it&#8217;s always a comfortable or uncomfortable place to be. But they are astonishing. Whenever this question comes up, I&#8217;m always reminded that I did my PhD in AI in the late 1990s, finishing in 2001. The situation was totally different then. We were still thinking about embodiment and embeddedness, especially here at Sussex, and some of the more in-principle limitations. But the practical capabilities of AI back then were just &#8212; there was nothing really to write home about. That&#8217;s changed so much. That&#8217;s why conversations like this now have real practical importance in the world.</p><p>AI is super impressive. I don&#8217;t see it as a single trajectory, though. I think there&#8217;s a meta-narrative we often fall into, which is that intelligence is along a single dimension &#8212; plants at the bottom, then insects, then other animals, then humans in a kind of <em>scala naturae</em>, the great chain of being &#8212; and then there&#8217;s angels and gods, and AI is travelling along this curve and at some point it&#8217;s going to reach human-level intelligence and then shoot past to artificial super-intelligence. I think this is a very constraining way to think of it.</p><p>It&#8217;s already the case, and has been for a long time, that AI has been better than humans at many things. But it&#8217;s always been very narrow. What we&#8217;ve seen through the foundation model revolution is the first kind of semi-general AIs &#8212; language models are good at many things, not good at everything, but good at many things rather than just one. But I still think they&#8217;re exploring a different region in the space of possible minds. They may soon be better than humans at many things, but they&#8217;ll still be different from us.</p><p>I think it&#8217;s important to recognise that, because we get into all kinds of trouble if &#8212; to use a beautiful metaphor from Shannon Vallor&#8217;s book about the AI mirror &#8212; we think of AI systems as just alternative instantiations of human minds that are either a little bit weaker or much stronger. Then we misunderstand both the systems and ourselves, and miss opportunities for how we can develop AI technologies so that they best complement our own cognitive capacities.</p><p><strong>Dan Williams:</strong> Let&#8217;s go back to the consciousness issue. As you said, one reason you might think AI systems are or could be conscious is because of these cognitive biases. Another reason is you might hold a sophisticated philosophical view called computational functionalism. Can you say a little about how you understand computational functionalism and why it might commit you to the view that conscious AI is at least possible in principle?</p><p><strong>Anil Seth:</strong> Yeah. So my understanding of computational functionalism is that it&#8217;s really an assumption you need in order to get the idea of conscious AI off the ground. It&#8217;s the idea that consciousness is fundamentally a matter of computation &#8212; and this computation is the kind that can be independent of the particular material implementing it. 
To put it another way: if you implement the right computations, you get consciousness. That&#8217;s sufficient.</p><p>That means if you can implement those computations in silicon, that&#8217;s enough. You could implement them in some other material &#8212; that would also be enough. It&#8217;s the computation that matters. The material underlying it is only important insofar as it&#8217;s able to implement those computations. And silicon is very good at implementing a certain class of computations &#8212; what we call Turing computations. So that makes it a good candidate for consciousness if computational functionalism is true. And that&#8217;s what I think is a big &#8220;if.&#8221; It seems a very natural assumption. But first let me ask you &#8212; does that resonate with your understanding of computational functionalism?</p><p><strong>Henry Shevlin:</strong> I completely agree with that characterisation. Computational functionalism says mental states are individuated by their computational role. The only thing I&#8217;d push back on is that computational functionalism is one road to concluding that AI can be conscious, but there are other types of functionalism out there. My response to your BBS paper emphasises this.</p><p>Psychofunctionalism &#8212; apologies to listeners, the terminology does get messy &#8212; says we should individuate mental states not in terms of computational processes necessarily, but whatever functional roles those mental states play in our best scientific psychology. Ned Block is a big fan of this view. The view I&#8217;m partial to is analytic functionalism, which is the functionalist take on behaviourism: mental states should be individuated by everyday folk psychology. A belief is something we all sort of know what it is because we can characterise people as having them, forming them, losing them. Once you formalise this tacit knowledge, that gets you to a theory of what beliefs are.</p><p>Those views could overlap with computational functionalism, but it&#8217;s not necessary to endorse it to think AI is conscious. If you&#8217;re an analytic functionalist, you might think that if AI adheres sufficiently closely to the platitudes of everyday folk psychology &#8212; they believe like us, they form goals, they have hopes and aspirations &#8212; then of course they can be conscious, even if you think brains are not computers, even if what brains do is not a computational process and what AI systems do is. Because both fit the same functional-behavioural profile, they might both count as conscious.</p><p><strong>Anil Seth:</strong> That&#8217;s quite a wrinkle &#8212; I&#8217;d say a massive fold. I completely agree that computational functionalism is a specific flavour of a broader set of functionalist views. Part of the problem has been that people assume all these views are equivalent, and they really aren&#8217;t.</p><p>Functionalism, as I understand the original version, just says that mental states are the way they are because of the functional organisation of the system. But that can include many things &#8212; the internal causal structure, many things not captured by an algorithm. An algorithm is in the end determined by the input-output mapping between a set of symbols. Functionalism in general can mean many other things. 
You could be a signed-up, subscription-paying functionalist and still disagree with computational functionalism, which is a much more specific claim about everything that matters about the brain being a matter of computation.</p><p>I&#8217;d also worry a bit about your view, Henry, which seems a little behaviourist. If you&#8217;re saying that behaving the same way and having the same kinds of beliefs are sufficient conditions &#8212; well, computational functionalism at least has the merit of specifically stating conditions for sufficiency. If you&#8217;re saying the same about folk-psychological criteria, I think you&#8217;re open to all the problems of the psychological biases we discussed. It&#8217;s a position that&#8217;s going to be much more open to false positives, because there are so many ways of things looking as if they have the kinds of beliefs and goals that go along with consciousness in us, but which need not go along with consciousness in general.</p><p>But back to the point: computational functionalism is this specific claim, grounded on the idea that the computation is what matters. And it&#8217;s also grounded on the idea that even in biological brains, it&#8217;s the computation that matters &#8212; and if you can abstract that computational description and implement it in something else, you get everything that goes along with the real biological brain.</p><p><strong>Dan Williams:</strong> So roughly speaking, functionalism is the view that what matters for consciousness is not what a system is made of, but what it can do. And computational functionalism is the view that what matters in terms of what the system is doing is something like processing information.</p><p>Anil, your arguments have two aspects. Some are critical of computational functionalism &#8212; the negative part &#8212; and then you&#8217;ve got an alternative way of viewing consciousness and its connection to the brain. Let&#8217;s start with those criticisms. What do you think are the main weaknesses of computational functionalism?</p><p><strong>Anil Seth:</strong> I think there are a number of weaknesses, all grounded on the intuition that we&#8217;ve taken what&#8217;s a useful metaphor for the brain &#8212; the brain is a kind of carbon-based computer &#8212; and we&#8217;ve reified it. We&#8217;ve taken a powerful metaphor and treated it literally.</p><p>The idea that the brain literally is a computer raises the question of what we mean by a computer, by computation. Let&#8217;s think of computation in the most standard way: as Turing defined it in the form of a universal Turing machine. In this definition, computation is a mapping between a set of symbols through a series of steps &#8212; that&#8217;s an algorithm. And this mapping involves a sharp separation between the algorithm and what implements it, between software and hardware. That sharp separation both influences how we build real computers &#8212; we can run the same software on different computers &#8212; and underwrites the assumption that computation is the thing that matters, because it allows you to strip out the computation cleanly from the implementation.</p><p>If you look at the brain, it has a superficial appeal: we think of the mind as software and the brain as hardware. But the closer you look, the more you realise you can&#8217;t induce anything like this sharp separation &#8212; not of software and hardware, but of mindware and wetware. 
In a brain, you can&#8217;t separate what it is from what it does with the same sharpness that, by design, you can in a digital Turing computer.</p><p>But Turing computation remains appealing. Roll back almost ninety years to Turing, but also to McCulloch and Pitts: they showed that if you think of the brain as very simple abstract neurons connected to each other, each just summing up incoming activity and deciding whether to be active or not &#8212; very simple abstractions of the biological complexity of real neurons &#8212; you basically get everything Turing computation has to offer. You can build networks of these that are Turing-complete; they can implement any algorithm.</p><p>So you get this beautiful marriage of mathematical convenience. You can strip away everything about the brain apart from the fact that it consists of simple neuronal elements connected together, and yet you get everything Turing computation can give you. So maybe that&#8217;s the only thing that matters about brains. And of course, that abstraction is in practice very powerful &#8212; the neural networks trained for foundation models are direct descendants of these McCulloch-Pitts networks.</p>
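<p><em>To make the McCulloch-Pitts abstraction concrete, here is a minimal illustrative Python sketch (not from the conversation; the weights and thresholds are hand-picked): a single unit that sums weighted binary inputs and fires at a threshold already gives you logic gates, and since NAND is universal, networks of such units can implement any Boolean circuit.</em></p><pre><code>def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) if the weighted sum of binary inputs reaches the threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# Single units suffice for basic gates:
AND = lambda a, b: mcp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mcp_neuron([a, b], [1, 1], 1)
NOT = lambda a: mcp_neuron([a], [-1], 0)

# A two-unit network gives NAND; NAND is universal, so networks of these
# units can implement any Boolean circuit (the Turing-completeness point).
NAND = lambda a, b: NOT(AND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), NAND(a, b))
</code></pre><p>But this marriage starts to get stressed, because Turing computation, while powerful, is not everything. Strictly speaking, anything that is continuous or stochastic is not within the realm of algorithms. Algorithms also don&#8217;t care about continuous time &#8212; there could be a microsecond or a million years between two steps; it&#8217;s the same computation. Real brains are not like that. We&#8217;re in time just as much as we&#8217;re embodied. You can&#8217;t escape real physical time and continue to be a functioning biological brain. The phenomenology of consciousness is also in time &#8212; time is plausibly an intrinsic and inescapable dimension of our phenomenology.</p><p>So there are things brains do which are not algorithmic and might plausibly matter for consciousness. And when you look at brains, you can&#8217;t separate what they are from what they do in any clean way. I think that really undermines the idea that the algorithmic level is the only level that matters.</p><p>To roll back to where we started: the idea that the brain literally is a computer is a metaphor. Like all metaphors, there&#8217;s a bit of truth to it. But not everything the brain does is necessarily algorithmic. And that opens the question: if we can&#8217;t assume everything the brain does is computational, that puts a lot of pressure on computational functionalism, which is based on the idea that consciousness is sufficiently describable by a computation.</p><p><strong>Henry Shevlin:</strong> I agree with a lot of what you&#8217;ve said about the importance of fine details of realisation in brains. Peter Godfrey-Smith has also advanced this point, talking about the role of intracellular, intra-neuronal activity. Rosa Cao has had some great papers on this recently too.</p><p>But here&#8217;s a provocative analogy. Imagine we were trying to understand what art was, and all we had was paintings. We might say: clearly an essential part of being an artwork is pigment, because not only is pigment present in every example of art we&#8217;ve got, it&#8217;s essential to how it is artistic &#8212; pigment defines the formal properties of every piece of artwork we&#8217;ve ever seen.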
But of course, there are lots of types of art that don&#8217;t involve pigments.</p><p>In the same way, yes, all these fine details of wetware might be essential to the type of consciousness we see in humans and other animals, whilst not exhausting the space of possible conscious minds that might be very different from us.</p><p><strong>Anil Seth:</strong> I think that&#8217;s fine. All I&#8217;ve said so far is that there&#8217;s the open question of whether things besides computation might matter, but then one has to give an account of what they are and why. If I wanted to make the case that some aspect of biology is absolutely necessary for consciousness, I have to do that separately.</p><p>These things are somewhat independent. Computational functionalism could be wrong, but biology could still be not necessary &#8212; there could be other ways of making art. If I&#8217;ve got a strong case that some aspect of biology is necessary for consciousness, then computational functionalism cannot be true. But the reverse is not the case.</p><p><strong>Dan Williams:</strong> Maybe one question before we move on. I was a little confused reading your papers about which of the following two positions you&#8217;re defending. One position says: even if we could build computers that replicated all the functionality of a human being, it nevertheless wouldn&#8217;t be conscious. The other says: we just couldn&#8217;t build computers that replicate all of the functionality of a human being, because to do what human beings do, you need the kinds of materials and structures found within the brain. Those feel like two different positions. Someone could be a computational functionalist as a purely metaphysical doctrine, saying: if you could build a computer that does everything humans do, it would be conscious &#8212; it just so happens we can&#8217;t do that. Are you denying that metaphysical thesis, or making a different claim?</p><p><strong>Anil Seth:</strong> There&#8217;s a lot in there. I am very suspicious of that metaphysical claim. Let me put it in a scenario that might help clarify.</p><p>Some people might say that if aspects of biology really matter, and we built a digital computer simulation including those details, would that be enough? We can do this ad infinitum &#8212; build a maximally detailed whole-brain emulation that digitally simulates all the mitochondria, even microtubules. Simulate everything. Would that be enough?</p><p>The metaphysical computational functionalist might say yes &#8212; somewhere in there, the right computations have to be happening. But I don&#8217;t think so, because it still relies on the claim that consciousness is constitutively computational. Making a simulation more detailed doesn&#8217;t make it any more real unless the phenomenon you&#8217;re simulating is a computation.</p><p>We make a simulation of a weather system; making it more detailed doesn&#8217;t make it any more likely to be wet or windy. Most things we simulate, we&#8217;re not confused about the fact that the simulation doesn&#8217;t instantiate the thing we&#8217;re simulating. If it is to move the needle on consciousness, that depends on the claim that consciousness is constitutively computational.</p><p>The irony is that if you think simulating the details is necessary &#8212; if you think you have to simulate the mitochondria &#8212; that actually makes it <em>less</em> likely that consciousness is constitutively computational. 
Because if consciousness is constitutively computational, those kinds of details should not matter.</p><p>A slight sidebar: I think this is ironically amusing because there are people investing their hopes, dreams, and venture capital into whole-brain emulation in order to upload their minds to the cloud and live forever. I think that&#8217;s very wrong-headed. If you think the details matter, then it&#8217;s unlikely consciousness is a priori a matter of computation alone.</p><p>So to your question: I&#8217;m very suspicious of that metaphysical claim. The burden of proof is on the computational functionalist to say why computation is going to be sufficient, given all the differences between computers and brains. I start from a physicalist perspective &#8212; consciousness is a property of this embodied, embedded, and timed bunch of stuff inside our heads. If you build something sufficiently similar, it will be conscious. The question is: how similar does it have to be? Does it have to be embodied? Made of neurons? Made of carbon? Alive? These are still open questions.</p><p><strong>Henry Shevlin:</strong> Just to chime in &#8212; this point about simulated weather systems not getting anyone wet is obviously John Searle&#8217;s point originally. I think it&#8217;s better understood as a restatement of the disagreement rather than a dunk on functionalism. If consciousness is computational, then it is absolutely substrate-invariant. There are other things that are substrate-invariant: online poker is poker, online chess is chess, money is money whether it&#8217;s coins, banknotes, or on a balance sheet. So if consciousness is not computational, then a simulation won&#8217;t be conscious. But if it is computational, the simulation point has no bite.</p><p><strong>Anil Seth:</strong> I don&#8217;t disagree. But the key point is: you can&#8217;t use the simulation argument to argue <em>for</em> the fact that consciousness is computational. If consciousness is computational, certain things follow about what happens in a simulation. But the fact you can simulate something doesn&#8217;t tell you anything about consciousness being computational.</p><p>I reread Nick Bostrom&#8217;s simulation argument paper while writing the BBS paper. He carefully interrogates his assumptions &#8212; that we don&#8217;t wipe ourselves out, that at least one person is interested in building ancestor simulations. But he also says: we have to assume consciousness is a matter of computation for this whole thing to get off the ground. And then he says, &#8220;Don&#8217;t worry, philosophers generally think that&#8217;s fine.&#8221;</p><p>Hold on a minute &#8212; that is the most contentious assumption by far of everything in the paper, and he gives it no critical examination. The fact that computational functionalism is at the very least contentious is, for me, very good evidence against the simulation hypothesis.</p><p><strong>Dan Williams:</strong> I really want to get to your positive account, but one follow-up on your criticisms. One of your strongest arguments is that when you look at the brain, you don&#8217;t find anything like the hardware-software distinction central to digital computation as we understand it post-Turing. I think that&#8217;s true and important. 
But isn&#8217;t it possible that someone could say: that&#8217;s an interesting feature of how computation works in biological systems &#8212; people call it &#8220;mortal computation,&#8221; the term from Geoffrey Hinton &#8212; maybe having to do with energetic efficiency? But it doesn&#8217;t follow that you couldn&#8217;t replicate those computational abilities in digital computers. It could just be a contingent feature of our architecture.</p><p><strong>Anil Seth:</strong> The first part is right, but the second part doesn&#8217;t follow. You can&#8217;t separate what brains are from what they do; there&#8217;s no sharp distinction between mindware and wetware. Rosa Cao has written about this, and there&#8217;s the notion of mortal computation from Hinton. Others have talked about biological computation, emphasising these features &#8212; you can call it generative entrenchment. I like the term &#8220;scale integration&#8221;: in biological systems, the microscales are deeply integrated into higher levels of description in a way that you can&#8217;t separate out. The macro and the micro are causally entangled with each other. This is very characteristic of evolved biological systems &#8212; there&#8217;s no design imperative from evolution to have a sharp separation of scales. And that has benefits: you get energy efficiency, and you may get explanatory bridges towards aspects of consciousness too, like its unity.</p><p>This is, for me, a very exciting avenue: if we stop thinking of the brain as just a network of McCulloch-Pitts neurons implementing some Turing algorithm, and start looking at what it actually is &#8212; what the functional dynamical properties of scale-integrated systems really are &#8212; I think we&#8217;ll learn a lot.</p><p>But the second part &#8212; that biological computation could be done in a digital computer &#8212; I don&#8217;t think follows, and this is why I resist calling these things varieties of &#8220;computation.&#8221; Whenever you use that word, it&#8217;s easy to slip into the idea that they&#8217;re portable between substrates. The biological computation my brain does in virtue of being scale-integrated could be <em>simulated</em> by a digital computer. But the simulation is not an instantiation unless what you&#8217;re simulating is constitutively that kind of computation. And biological scale-integrated computation is not digital Turing computation.</p><p>The more general point: the further you move away from a Turing definition of computation, the less substrate independence you have. Analog computers, for instance, implement features that are probably essential &#8212; like grounding in time with continuous dynamics &#8212; but they do not have the same substrate flexibility as digital computers. We love digital computers because they have that flexibility. But when it comes to understanding what brains do, whether in intelligence or consciousness, we can&#8217;t throw all these things away.</p><p><strong>Henry Shevlin:</strong> A quick side note: the Open Claude instances, the more agentic Claude bots, have something called a &#8220;heartbeat&#8221; &#8212; a regular interval at which they can take actions. So we&#8217;re starting to see at least simulation of some temporal dynamics in large language models. Obviously radically different from the kind you&#8217;re concerned with, but interesting.</p><p><strong>Anil Seth:</strong> I don&#8217;t buy that. That&#8217;s a simulated heartbeat. You could slow the clock rate down. 
You can give these things a sense of time, but it&#8217;s not physical time. Imagine you slow all the Anthropic servers way down &#8212; all the agents slow down, but the computation is still the same. We are embedded in physical time in a way that even agents with simulated heartbeats are not.</p><p><strong>Dan Williams:</strong> I&#8217;ll set you up for developing your positive account with a question: well, isn&#8217;t computational functionalism the only game in town? Doesn&#8217;t it just win by default?</p><p><strong>Anil Seth:</strong> No. That&#8217;s part of the issue &#8212; one of the responses is often, &#8220;What else could it be?&#8221; There&#8217;s a phrase, &#8220;information processing,&#8221; that I find increasingly revealing. It&#8217;s so common to describe the brain in terms of information processing that we don&#8217;t even realise we&#8217;re saying it, as if there&#8217;s no other game in town. What do we mean when we say a brain is processing information? It&#8217;s really not clear to me. The most rigorous formal definition is Shannon&#8217;s, which is purely descriptive &#8212; it doesn&#8217;t tell you whether a system is processing information.</p><p>But alternatives have been around for a long time. When I was doing my PhD at Sussex, there was the dynamical systems perspective, the whole enactive embodied approach to cognition &#8212; continuous dynamics, attractors, phase spaces. These describe complex systems doing things in ways which are not computational, not algorithmic. Brains oscillate &#8212; this is one of the most central phenomena of neurophysiology, as Earl Miller talks about a lot. And it would be crazy if evolution hadn&#8217;t taken advantage of this natural physical property. The right framework for understanding oscillatory systems is not an algorithm, because algorithms are abstracted out of time.</p><p>So there are many other games in town. A lot of these are perfectly compatible with functionalism, but now it&#8217;s a functionalism much more tied to the material basis &#8212; only some substrates can implement the right kinds of functions, and biological material may be necessary for the right kind of intrinsic dynamical potential.</p><p>I think biological naturalism is still basically a functionalist position. I&#8217;m wary of saying anything that sounds vitalistic &#8212; there&#8217;s no magic, non-explicable, intrinsic quality about life associated with consciousness. Living systems can be distinguished from non-living systems in terms of functional description. Features like metabolism and autopoiesis are still amenable to functional descriptions, but now the functions are closely tied to particular kinds of materials, particular biochemistries. Metabolism is a function, but it&#8217;s a function inseparable from some material process. Maybe it doesn&#8217;t have to be carbon &#8212; maybe there are other ways of having metabolism. But you can always say that intrinsic properties at one level can be decomposed into functional relations at a lower level.</p><p>So I&#8217;m comfortable with functionalism broadly, but the question is: how far down do you have to go? And to Henry&#8217;s point: how do we make sure we&#8217;re not focusing on things that are contingently the case in biological consciousness only?</p><p>Many of the comments to my BBS paper said I haven&#8217;t made a rigorously defensible case for biological naturalism, and I totally concede that.
I don&#8217;t think there is one yet.</p><p><strong>Henry Shevlin:</strong> Can I give you an opportunity to say more about autopoiesis specifically? I&#8217;ve yet to hear a really convincing case for how it helps explain what consciousness is. Here&#8217;s a dark framing. The standard Maturana and Varela notion of autopoiesis is a system continually replacing, maintaining, and repairing its own components.</p><p>A few years ago, I read about a horrific case: Hisashi Ouchi, a Japanese nuclear researcher who received the largest dose of radiation ever recorded. Every chromosome in his body was destroyed, no new cell production, no RNA transcription &#8212; his body couldn&#8217;t produce new proteins. Every cell was effectively dead; autopoietic processes had basically stopped. He was kept alive through amazing medical interventions &#8212; you could call it allopoiesis &#8212; for eighty-three days. And he was conscious and in a lot of pain throughout.</p><p>So here&#8217;s a case of someone in whom autopoietic processes had basically stopped, and yet he was still consciously experiencing severe pain. I&#8217;d love to hear more about why you think autopoiesis is important for consciousness.</p><p><strong>Anil Seth:</strong> That is darkly, weirdly fascinating. Setting aside the horror of it &#8212; it would be very interesting to consider: has autopoiesis really stopped entirely, or is it winding down? I can imagine all sorts of problems with that dose of radiation, but it&#8217;s also not true that every cellular process stopped, given that he remained alive for eighty-three days. It might be a gradual winding down.</p><p>If there were a case where you could show that all autopoietic processes had definitively stopped and yet consciousness was continuing, that would put pressure on the claim that autopoiesis is necessary in the moment for consciousness. It might still be diachronically necessary &#8212; systems have to have gotten those processes rolling to begin with.</p><p>The reason I usually mention autopoiesis and metabolism as candidate features of life is partly because they maximise the difference between living systems and silicon-based computers. They&#8217;re obvious examples of things closely tied to life, things that silicon devices clearly cannot have. It&#8217;s partly to emphasise how different these things are and why it&#8217;s very reductive to think of us as meat-based Turing machines.</p><p>There&#8217;s another reason to think about autopoiesis, and it&#8217;s the connection between autopoiesis, the free energy principle, and predictive processing as a way of understanding the contents of consciousness. There&#8217;s a line that can be drawn between these poles &#8212; what Karl Friston and Andy Clark and Jakob Hohwy have called the high road and the low road, but they meet in the middle.</p><p>The basic idea: start with the brain engaged in approximate Bayesian inference about the causes of sensory signals &#8212; very much a Bayesian brain perspective, Helmholtz&#8217;s &#8220;perception is inference.&#8221; Of course, Bayesian inference can be implemented algorithmically, but that doesn&#8217;t mean that&#8217;s how the brain does it. The free energy principle shows a way of doing it which follows continuous gradients &#8212; not necessarily an algorithm.</p><p>So our perceptual experiences of the self and the world are brain-based best guesses about the causes of sensory inputs.</p>
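<p><em>[A rough formal gloss on this link &#8212; a standard variational free energy sketch, not necessarily the exact formulation Seth has in mind. For sensory data x and hidden causes z, with a generative model p(x, z) and a &#8220;best guess&#8221; recognition density q(z), the variational free energy is F = E_q[ln q(z) &#8722; ln p(x, z)], which rearranges to the KL divergence between q(z) and the true posterior p(z | x), minus ln p(x). Because the KL term is never negative, F &#8805; &#8722;ln p(x): free energy upper-bounds &#8220;surprise.&#8221; Minimising F therefore does two things at once: it pushes the best guess q(z) towards the Bayesian posterior (perception as inference), and it keeps the organism in unsurprising, viable states (self-maintenance). Under Gaussian assumptions, F reduces to a sum of precision-weighted prediction errors, which is the predictive processing reading. Descending these gradients continuously, rather than executing a discrete algorithm, is the move Seth is pointing to.]</em></p><p>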
This doesn&#8217;t explain why consciousness happens at all, but gives us a handle on why experiences are the way they are. This applies to the self too: our experiences of selfhood are underpinned by brain-based best guesses about the state of the body &#8212; especially the interior of the body, through what I&#8217;ve been calling interoceptive inference. These processes are more to do with control and regulation. The brain, when perceiving the interior of the body, doesn&#8217;t care where the heart is or what shape it is &#8212; it cares how it&#8217;s doing at the business of staying alive.</p><p>This explains why emotional experiences are characterised more by valence &#8212; things going well or badly &#8212; than by shape and location and speed. And prediction allows control: once you have a generative model, you can have priors as set points and implement predictive regulation to keep physiological variables where they need to be.</p><p>So far so good. We&#8217;ve gone from experiences of the world, to the self, to the interior of the body, from finding where things are to controlling things. And then comes the part that&#8217;s still difficult for me: that imperative for control goes all the way down. It doesn&#8217;t bottom out &#8212; it goes right down into individual cells maintaining their persistence and integrity over time. There&#8217;s no clear division where the stuff ceases to matter. And so you get right down to autopoiesis.</p><p>That&#8217;s where the free energy principle comes in. Living systems maintain themselves in non-equilibrium steady states &#8212; they maintain themselves out of equilibrium with their environment. To be in thermodynamic equilibrium with your environment is to be dead. By maintaining themselves in this statistically surprising state of being, they&#8217;re minimising thermodynamic free energy. And that becomes equivalent to prediction error in the predictive processing framework.</p><p>That&#8217;s the rough line. I&#8217;ll be very frank: there are bits along the way that can be picked at. One is the move from a thermodynamic interpretation of free energy to the variational, informational free energy interpreted as prediction error. There are results in physics linking thermodynamics and information theory, but do they do the job? Not so sure.</p><p>But it&#8217;s a reason to think about how you go from metabolism and autopoiesis all the way up to this broader frame for how brains work. There&#8217;s a phenomenological aspect too, which is speculative: if you try to think about what the minimal phenomenal experience might be, devoid of all distinguishable content &#8212; some meditators talk about pure awareness without anything going on at all &#8212; I&#8217;m a bit sceptical of that idea. I think it&#8217;s equally plausible that at the heart of every conscious experience is the fundamental experience of being alive. That is the aspect of consciousness that, for biological systems, is always there. Everything else is painted on top of that.</p><p>Peter Godfrey-Smith put it nicely in <em>Metazoa</em>: the more you think about what life is &#8212; these billions of biochemical reactions going on within every cell every second, electromagnetic fields giving integrated readouts &#8212; it&#8217;s much easier to think that that&#8217;s the kind of physical system which might entail a basic phenomenal state, compared to the abstractions of information processing.
I think he&#8217;s on the right track.</p><p>The way to begin is to look at what the functional and dynamical attributes of living systems are, at all scales and across scales, compared to other kinds of systems. Biochemistry is a big missing link &#8212; we tend to forget about it. Nick Lane at UCL is doing amazing work looking at mitochondria and anaesthetics and the deep biochemistry of what happens within cells &#8212; not only how anaesthetics work, but why the electric fields generated within mitochondria might join together to give a global integrative signal about the physiological state of an organism. Stories like this are where I see much more potential for building solid explanatory foundations for a biological basis of consciousness.</p><p><strong>Henry Shevlin:</strong> A plus one for Nick Lane &#8212; huge fan. We should get him on the show.</p><p><strong>Dan Williams:</strong> You&#8217;ve described a rich and fascinating alternative picture. One worry about the free energy principle approach, though: it seems too general. As people like Friston understand it, it applies at the very least to all living things, and maybe even more broadly. Most people want to say not all living things are conscious. And even in conscious organisms, many of these processes &#8212; ordinary facets of digestion, for instance &#8212; presumably don&#8217;t have anything to do with consciousness. These things are presumably still happening under general anaesthesia, and yet you don&#8217;t have consciousness. What we want from a theory of consciousness is some explanation of why some things are conscious and others aren&#8217;t, why certain states within conscious organisms are conscious and others aren&#8217;t. If you take this very broad framework, you&#8217;re not going to get that.</p><p><strong>Anil Seth:</strong> You&#8217;re absolutely right. It&#8217;s why I resist saying the ideas I&#8217;m sketching constitute a theory of consciousness &#8212; they don&#8217;t, as they stand, do the job a good theory should do. A good theory should give an account of the necessary conditions, the sufficient conditions, and the distinction between conscious and unconscious states and creatures.</p><p>Biological naturalism, as I understand it &#8212; distinct from biopsychism &#8212; is a claim that properties of living systems are necessary but not necessarily sufficient for consciousness. Biopsychism is the claim that everything alive is conscious. I think that&#8217;s very strong; I wouldn&#8217;t want to defend it.</p><p>So what makes the difference? I think this takes us back to functions. We have to think about what the functions of consciousness are for us and for creatures where we can reasonably assume it&#8217;s there. That can move us from necessity towards sufficiency.</p><p>For me, every conscious experience in human beings seems to integrate a lot of sensory and perceptual information in a single, unimodal format centred on the body and our opportunities for action, strongly inflected by valence and with affordances relevant to our survival prospects, with particular temporal properties. It may be that when those functional pressures exist, they&#8217;re enough to make otherwise unconscious processes of autopoiesis and metabolism become a conscious experience. I don&#8217;t know &#8212; it&#8217;s partly an empirical question. For those functions to entail a conscious experience, you may need the fire of life underneath it all.
I think that&#8217;s the idea.</p><p><strong>Henry Shevlin:</strong> The question of sufficient conditions for consciousness in non-human animals is obviously very big for the ethical side. Whereas for AI, the necessary conditions are more relevant &#8212; if we can rule out that any of these systems are conscious, that makes the ethical situation a lot clearer. Since animals obviously satisfy the necessary conditions you&#8217;ve sketched, the question becomes which of them qualify.</p><p>A quick thought and then a question. I&#8217;m not sure whether your view is scientifically falsifiable. As you know, I&#8217;m very much a sceptic about the prospects of consciousness science as a falsifiable research programme. But maybe even setting aside strict falsifiability &#8212; what kinds of evidence would you be looking for over the next ten years that might push you in one direction or another?</p><p><strong>Anil Seth:</strong> You can&#8217;t falsify a metaphysical position. Is biological naturalism a metaphysical position? It depends how much you flesh it out. I tend to be more Lakatosian in my view &#8212; I want things to be productive, not degenerate. Does unfolding the biological naturalist position lead to more explanatory insight? Does it lead to testable predictions and falsifiable hypotheses over time? If it does, that adds credence to the position, but it doesn&#8217;t establish it.</p><p>The position itself is not falsifiable as things currently stand, because we don&#8217;t have an independent, objective way of saying whether something is conscious. We always build prior assumptions in. Tim Bayne, Liad Mudrik, and I, along with others, wrote a &#8220;test for consciousness&#8221; paper thinking of consciousness as a natural kind, but we&#8217;re always generalising from where we know &#8212; humans &#8212; outwards, trying to walk the line between taking contingent facts about human consciousness as general and expanding too liberally.</p><p>Evidence that would move the needle for me: to what extent can we demonstrate that properties of biological brains are substrate-independent? That&#8217;s a feasible research programme. We know some things the brain does are substrate-independent &#8212; that&#8217;s the whole McCulloch-Pitts story. But what about other things? What depends on the materiality of the brain? And what might be the functional roles of those things for cognition, behaviour, and consciousness?</p><p><strong>Henry Shevlin:</strong> On the AI side, are there any predictions you&#8217;d feel comfortable about, or any evidence that might make you say, &#8220;This is evidence against biological naturalism&#8221;?</p><p><strong>Anil Seth:</strong> The kind of evidence that would <em>not</em> convince me is linguistic evidence of AI agents talking to each other about consciousness. I can&#8217;t help being moved by it at one level &#8212; they&#8217;re very hard to resist, even if you believe they&#8217;re not conscious. It&#8217;s unsettling to hear these things talk about their own potential consciousness. But that&#8217;s not the right kind of evidence.</p><p>The more you can show that things closely tied to consciousness in brains are happening in AI, the more it would move the needle. For example, in a very influential paper, Patrick Butlin and Robert Long and others looked for signatures of theories of consciousness in AI models &#8212; does this model have something like a global workspace, or higher-order representations?
They explicitly assume computational functionalism, looking for equivalence just at the computational level.</p><p>I think this is useful, but I&#8217;d try to drop that assumption and ask: how is a global workspace instantiated in brains at something deeper than just the algorithmic level? Do we have something like that in AI? This brings up neuromorphic computing &#8212; is the AI neuromorphic in a way that&#8217;s actually implementing, not just modelling, the mechanisms specified by theories of consciousness?</p><p>An issue is that most theories of consciousness don&#8217;t specify sufficient conditions. Global workspace theory is silent on what counts as sufficient for a global workspace. Higher-order thought theory doesn&#8217;t really tell you either. Ironically, the only theory that does is the most controversial one: integrated information theory. It explicitly tells you sufficient conditions &#8212; credit where it&#8217;s due, it puts its cards on the table.</p><p><strong>Henry Shevlin:</strong> I&#8217;ve written a paper about exactly this &#8212; I call it the &#8220;specificity problem&#8221;: the difficulties of taking these theories off the shelf and applying them to non-human systems because they&#8217;re so underspecified. I actually call out IIT as one of the few non-offenders. But the downside is you end up with some very extreme predictions.</p><p><strong>Anil Seth:</strong> Actually, Adam Barrett and I, with others, are writing a semi-critique of IIT. The expander grid thing is not as massively defeating as it seems, because in an expander grid, nothing is happening over time. You&#8217;d get something supposedly very conscious but of nothing &#8212; which is not a rich conscious state. But yes, it&#8217;s a non-offender on the specificity problem, as you nicely put it.</p><p><strong>Henry Shevlin:</strong> So to move on to the ethical side. Two big angles come up both in your paper and the responses to it. One is the danger of anthropomorphism and anthropocentrism &#8212; that we&#8217;ll see these things as conscious or develop highly dependent relationships with them. We&#8217;ve seen this at scale with social AI, AI psychosis, and so forth. The second is debates around artificial moral status &#8212; in your BBS paper, you talk about the danger of false positives and false negatives. And related to this is the call some people have raised, like Thomas Metzinger, for a moratorium on building conscious AI. A nice bouquet of issues for you to explore.</p><p><strong>Anil Seth:</strong> I think there&#8217;s also a third element, which is how our perspectives on conscious AI make us think of ourselves &#8212; how it affects our picture of what a human being is. It&#8217;s more subtle but quite pernicious.</p><p>There&#8217;s an important distinction between ethical considerations that pertain to real artificial consciousness and those that pertain to <em>illusions</em> of conscious AI. Sometimes they overlap; sometimes they don&#8217;t.</p><p>If I&#8217;m wrong and LLMs are conscious, or if we build sufficiently neuromorphic AI that incorporates all the right features, then we will have conscious AI on our hands &#8212; and I think that would be a bad idea. Building conscious AI would be a terrible thing. We would introduce into the world new forms of potential suffering that we might not even recognise. It&#8217;s not something to be done remotely lightly, and not because it seems cool or because we can play God. Thomas Metzinger talks about these consequences a lot.
That&#8217;s one bucket.</p><p>The other bucket is illusions of conscious AI. This is clearly happening already. So many people already think AI is conscious, and none of the philosophical uncertainty matters &#8212; if people think it&#8217;s conscious, we get the consequences. These include AI psychosis and psychological vulnerability &#8212; if a chatbot tells me to kill myself and I really feel it has empathy for me, I might be more likely to go ahead. That&#8217;s not great.</p><p>We also have this dilemma of brutalism. Either we treat these systems as if they are conscious and expend our moral resources on things that don&#8217;t deserve it, or we treat them as if they&#8217;re not, even though they seem conscious. And in arguments going back to Kant, this is brutalising for our minds &#8212; to treat things that seem conscious as if they are not. It&#8217;s psychologically bad for us. These illusions of conscious AI might be cognitively impenetrable. I think AI is not conscious, but even I feel sometimes that it is when I&#8217;m interacting with a language model &#8212; like certain visual illusions where even when you know two lines are the same length, they look different.</p><p>A good example where the ethical rubber hits the road is AI welfare. There are already calls for AI welfare, and firms like Anthropic are building constitutions for Claude and saying they take seriously the idea that their agents have their own interests in virtue of potentially being conscious. I think this is very dangerous. Calls for AI welfare give added momentum to illusions of conscious AI &#8212; people are more likely to interpret AI as conscious if big tech firms say they&#8217;re worried about the moral welfare of their language models.</p><p>And if we extend welfare rights to systems that in fact are not conscious, we&#8217;re really hampering our ability to regulate, control, and align them. The alignment problem is already almost impossibly hard. Why would we make it a million times worse by, for instance, legally restricting our ability to turn systems off if we need to?</p><p>And then there&#8217;s the image of ourselves. As Shannon Vallor writes in <em>The AI Mirror</em> &#8212; I think it&#8217;s really diminishing of the human condition. You mentioned the term &#8220;stochastic parrots.&#8221; It&#8217;s unfair on everything: unfair on AI, which is really impressive; unfair on parrots, who are fantastic; and unfair on us, because if we think a language model is a stochastic parrot and we also think that&#8217;s fundamentally what&#8217;s going on for us &#8212; that&#8217;s really reductive of what we are. That tendency to see our technologies in ourselves is a narrowing of the imagination of the human condition, and I worry about the consequences.</p><p><strong>Henry Shevlin:</strong> I&#8217;ve got to flag one objection. You realise people make the same arguments about Darwinian evolution? That seeing us as just other animals is somehow diminishing to the human condition &#8212; that contextualising humans within the tree of life diminishes our dignity. I don&#8217;t agree with that argument, and I assume no one on this call does. But that strikes me as a worrying parallel for the kind of arguments you&#8217;re making.</p><p>I don&#8217;t think it diminishes human dignity to see us as continuous with the broader tree of life.
And I don&#8217;t think it&#8217;s necessarily stripping human dignity to see ourselves as part of a broader space of possible minds, some biological, some very weird. We can preserve human dignity whilst embracing a more expansive vision of what intelligence and mind are.</p><p><strong>Anil Seth:</strong> Maybe. It depends on your priors. I completely agree that seeing us as continuous with the rest of nature is actually very beautiful, empowering, enriching, and dignifying. And people often say: you&#8217;re very anti-AI consciousness, but people were anti-consciousness in animals too &#8212; look at the historical tragedy still unfolding through those false negatives.</p><p>My response is: I don&#8217;t think the situation is the same. There are reasons why we&#8217;ve been more likely to make false negatives in the case of non-human animals, and those same reasons explain why we&#8217;re more likely to be making false positives in the case of AI. Both have serious consequences.</p><p>Human exceptionalism is at the heart of both. It prevented us from recognising consciousness where it exists in non-human animals, and it&#8217;s encouraging us to attribute consciousness where it probably isn&#8217;t in large language models.</p><p>Having said that, the way I&#8217;d find your case convincing is this: just as there&#8217;s a wonder in seeing ourselves as continuous with many forms of life &#8212; we&#8217;re a little twig on this beautiful tree of nature &#8212; we can appreciate the singularity of the human mind and the human condition when we understand more about how different things could be, how different kinds of minds could be, whether they are conscious or not.</p><p><strong>Dan Williams:</strong> I think that&#8217;s a great note to end on. I&#8217;m conscious of your time, Anil &#8212; otherwise we would just keep talking for hours. I really do hope you&#8217;ll come back in the future and we can pick up on one of these many threads. Thank you so much for giving up your time to come and talk with us today.</p><p><strong>Anil Seth:</strong> It&#8217;s been an absolute delight. Thank you both for your time and for the opportunity. I think we did get into the weeds a bit, but I enjoyed that very much.</p><p><strong>Henry Shevlin:</strong> Anil, it&#8217;s been an absolute delight personally, and I think we&#8217;re very lucky to have you on the show.
This has been a fantastic conversation.</p>]]></content:encoded></item><item><title><![CDATA[What Kind Of Apes Are We?]]></title><description><![CDATA[This is a guest post by David Pinsof, who writes the excellent &#8216;Everything is Bullshit&#8217; Substack.]]></description><link>https://www.conspicuouscognition.com/p/what-kind-of-apes-are-we</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/what-kind-of-apes-are-we</guid><dc:creator><![CDATA[David Pinsof]]></dc:creator><pubDate>Mon, 16 Feb 2026 13:03:08 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1612898639027-55df07a4069c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyN3x8aHVtYW4lMjBuYXR1cmV8ZW58MHx8fHwxNzcwOTc5OTMyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://images.unsplash.com/photo-1612898639027-55df07a4069c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyN3x8aHVtYW4lMjBuYXR1cmV8ZW58MHx8fHwxNzcwOTc5OTMyfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="6240" height="4160" class="sizing-normal" alt="people walking on bridge under white sky during daytime" title="people walking on bridge under white sky during daytime"><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@lifeofdube">Marc-Antoine Dub&#233;</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p><em>This is a guest post by <a href="https://substack.com/@everythingisbullshit">David Pinsof</a>, who writes the excellent &#8216;<a href="https://www.everythingisbullshit.blog/">Everything is Bullshit</a>&#8217; Substack.</em></p><div><hr></div><p>One of the great joys of intellectual life is finding someone to argue with in good faith. As someone who thinks <a href="https://www.everythingisbullshit.blog/p/arguing-is-bullshit">most arguing is bullshit</a>, it&#8217;s all too rare and precious to have a genuine exchange of ideas stripped of character attacks, strawmanning, and status jockeying.
Thankfully, I think I&#8217;ve found such a good faith interlocutor in the ever-brilliant Dan Williams, who has written a moderately cynical yet optimistic essay in response to my soul-crushingly cynical and pessimistic essay, <a href="https://www.everythingisbullshit.blog/p/a-big-misunderstanding">A Big Misunderstanding</a>.</p><p>Dan&#8217;s post is called &#8220;<a href="https://www.conspicuouscognition.com/p/we-are-confused-maladapted-apes-who">We Are Confused, Maladapted Apes Who Need Enlightenment</a>.&#8221; What Dan means by &#8220;enlightenment&#8221; is something like: &#8220;the culture and ideas of intellectuals.&#8221; And what he means by &#8220;confused and maladapted&#8221; is something like: &#8220;irrational, ignorant, self-deluded, and in dire need of the culture and ideas of intellectuals.&#8221;</p><p>My essay has a different message. I argue that we humans are pretty savvy and rational, shaped as we are by millions of years of natural selection, and that intellectuals often overstate the demand for their grand ideas, in large part by pretending we humans are confused and maladapted, so that they can cast themselves as humanity&#8217;s saviors.</p><p>So my response to Dan might be something like, &#8220;Yea, maybe humans are kind of confused and maladapted sometimes, but <em>it&#8217;s also really insightful to see humans as savvy animals strategically pursuing their Darwinian goals.</em>&#8221; And Dan might say something like, &#8220;Yea, it&#8217;s pretty insightful to see humans as savvy animals strategically pursuing their Darwinian goals, <em>but it&#8217;s also really important to recognize that humans are confused and maladapted sometimes.</em>&#8221; It&#8217;s basically a disagreement over where to put the italics.</p><p>But if it was all about italics, I wouldn&#8217;t be writing this. There are a few areas where Dan and I might truly disagree, which is very exciting to me. Maybe one of us or both of us will change our minds or come to see the world a bit differently. Maybe you, dear reader, will benefit from the back and forth. What a beautiful thing. Let&#8217;s go through what I see as the biggest potential sources of disagreement.</p><h1><strong>Stone Age Minds in Modern Skulls</strong></h1><p>A big part of Dan&#8217;s post is about <em>evolutionary mismatch</em>. This is the idea that the human brain is primarily adapted to an <em>ancestral</em> <em>environment</em> of cave paintings and tribal warfare and saber-tooth tigers, which is very different from our <em>modern environment</em> of cellphones and skyscrapers and pornography. The lesson Dan draws from this is that if we&#8217;re so mismatched and maladapted, we could really use the help of intellectuals to tell us to put away our phones and read some economics. If we&#8217;re gorging on junk food that was scarce in ancestral environments, we might need a friendly reminder to eat healthy.</p><p>I take a different view. Mismatch is a thing, but it is increasingly being recognized by evolutionary psychologists to be overrated as an explanatory approach.
I would know: I co-host <a href="https://epthepod.podbean.com/">Evolutionary Psychology (the Podcast)</a> and talk to different evolutionary psychologists every week. The popular story about gorging on sugary or fatty foods that were scarce in ancestral environments has a bit of truth to it, but it&#8217;s too simple. A moment&#8217;s reflection will make you realize that we obviously have mechanisms for curbing our appetite when our stomachs are full, or when we&#8217;ve had too much bacon or fudge, or when we need to lie down because we&#8217;re in a food coma. If you don&#8217;t believe me, try eating nothing but Oreos for a couple days. You will feel like complete and utter dogshit. Your body will punish you; your mind will be driven to the brink of madness. Our evolved food psychology is better designed than the popular caveman story suggests &#8212; a story in which the only thing stopping us from subsisting on Oreos is willpower. In reality, we have a variety of subtle cravings for specific nutrients that are sensitive to the vicissitudes of our diet and personal history and local ecology.</p><p>With regard to obesity, there&#8217;s a lot we don&#8217;t know, but part of the explanation may relate to the body&#8217;s tendency to store energy in the form of fat to insure against the risk of future food shortages. <a href="https://core.ac.uk/download/pdf/210899779.pdf">Research by Daniel Nettle and colleagues</a> suggests that obesity is more likely to occur when people experience skipped meals and food insecurity early in life, potentially explaining why poverty and obesity go together. It&#8217;s not that poor people lack willpower; it&#8217;s that their bodies are rationally stockpiling energy reserves when they&#8217;re getting cues that access to food is uncertain. If Nettle is right, then one could easily see why stress and obesity go together (thereby confounding the relationship between obesity and health), and why an obsession with dieting and fasting could tragically make matters worse.</p><p>But isn&#8217;t this just a different kind of mismatch story? I&#8217;m not sure: maybe stockpiling energy is still smart in the modern world, given that poor people really do face future food shortages, and given that civilization, the planet, and the international order are looking rather precarious right now. Maybe when catastrophic climate change or war with China or <a href="https://everythingisbullshit.substack.com/p/ai-doomerism-is-bullshit">the AI apocalypse</a> happens, fat people will inherit the earth. Regardless, what I like about Nettle&#8217;s hypothesis is that it avoids insulting the intelligence of both the evolutionary process and people living in poverty.</p><p>Then there is the story of ancestral, mobile, small-scale, egalitarian hunter gatherer tribes&#8212;another supposed example of mismatch to our swarming cities and towering wealth inequalities. Again, this story is too simple.
<a href="https://www.sciencedirect.com/science/article/pii/S1090513822000447?casa_token=mRzPbWNJVBEAAAAA:ADCbUi4EUZP6jtvz2fFaRu0rGjnVNpMmZIzTJHrdaOqPLwlw1fVWHY4VTh9iWx4KvhuXyTfYZg">Research</a><strong><a href="https://www.sciencedirect.com/science/article/pii/S1090513822000447?casa_token=mRzPbWNJVBEAAAAA:ADCbUi4EUZP6jtvz2fFaRu0rGjnVNpMmZIzTJHrdaOqPLwlw1fVWHY4VTh9iWx4KvhuXyTfYZg"> </a></strong><a href="https://www.sciencedirect.com/science/article/pii/S1090513822000447?casa_token=mRzPbWNJVBEAAAAA:ADCbUi4EUZP6jtvz2fFaRu0rGjnVNpMmZIzTJHrdaOqPLwlw1fVWHY4VTh9iWx4KvhuXyTfYZg">by Manvir Singh and Luke Glowacki</a> suggests that ancestral hunter gatherer societies were more variable in structure than is commonly assumed, with some being very large and very unequal. Singh and Glowacki have also gathered evidence from the ethnographic record to show that humans in forager groups <a href="https://www.researchgate.net/profile/Manvir-Singh-2/publication/319275250_Self-Interest_and_the_Design_of_Rules/links/59dcd9510f7e9bdd752dd12b/Self-Interest-and-the-Design-of-Rules.pdf?_sg%5B0%5D=started_experiment_milestone&amp;_sg%5B1%5D=started_experiment_milestone&amp;origin=journalDetail&amp;_rtd=e30%3D">often try to enforce the rules and social norms</a><strong> </strong>that personally benefit them, consistent with my cynicism about the intentions of intellectuals in the modern world. Finally, <a href="https://www.researchgate.net/profile/Duncan-Stibbard-Hawkes/publication/397716245_Egalitarianism_is_not_Equality_Moving_from_outcome_to_process_in_the_study_of_human_political_organisation/links/69213c3be889e65e7968493f/Egalitarianism-is-not-Equality-Moving-from-outcome-to-process-in-the-study-of-human-political-organisation.pdf">research by Duncan Sibbard-Hawkes and Chris von Rueden</a> suggests that the &#8220;egalitarianism&#8221; found even among the most idyllic hunter gatherers has been greatly overstated, with many forms of brutal competition and hierarchy bubbling beneath the surface.</p><p>And as long as we&#8217;re on the subject of names you don&#8217;t know or care about, we should get into <a href="https://www.amazon.com/Shape-Thought-Adaptations-Evolution-Cognition/dp/0199348316?adgrpid=185328955904&amp;hvpone=&amp;hvptwo=&amp;hvadid=748008426930&amp;hvpos=&amp;hvnetw=g&amp;hvrand=1361931175094596881&amp;hvqmt=&amp;hvdev=c&amp;hvdvcmdl=&amp;hvlocint=&amp;hvlocphy=9061099&amp;hvtargid=dsa-1595363597442&amp;hydadcr=&amp;mcid=&amp;hvocijid=1361931175094596881--&amp;hvexpln=m-dsad&amp;tag=googhydr-20&amp;hvsb=Media_d&amp;hvcampaign=dsadesk">an important concept introduced by Clark Barrett</a>: the difference between<strong> </strong><em>tokens</em> and <em>types</em>. The idea is that we have cognitive adaptations to deal with particular <em>types </em>of things, like food, mates, groups, status, and zero-sum conflict. These adaptations help tailor our behavior to the particular <em>tokens</em> of those types we find in our current environment&#8212;the particular food items, groups, conflicts, mating opportunities, and status games we&#8217;re confronted with. The<em> types</em> of things we evolved to deal with are, for the most part, common to both modern and ancestral environments. We have groups now; we had groups then. We have status now; we had status then. 
We have politics now; we had politics then.</p><p>What&#8217;s more, many of these types are very broad, like &#8220;<a href="https://scholar.google.com/scholar?hl=en&amp;as_sdt=0%2C5&amp;q=thom+scott+phillips+language+%22informative+intention%22&amp;btnG=#:~:text=%5BPDF%5D%20thomscottphillips.com">informative intentions</a>&#8221; or &#8220;socially valued skills.&#8221; This enables unprecedented stuff to emerge, like sign language and constitutional lawyers. Then there are the various systems we call &#8220;reinforcement learning&#8221; or &#8220;predictive processing,&#8221; which provide us with additional tools to adapt our behavior to the novel tokens we&#8217;re confronted with in our lives, even tokens that are totally unprecedented in the history of life on earth. These learning systems can cleverly bundle together adaptations in new ways (like the bundling of object recognition and semantics that occurs with literacy), and they can turn amateur chess players into chessmasters who dream in pawns and rooks.</p><p>In other words, there are a lot of reasons to be skeptical of the idea that humans will be vexed, dumbfounded, flabbergasted, or ill-equipped to get their shit together in the modern world. Given the enormous range of social and physical environments our species currently inhabits, and likely inhabited ancestrally, it is a mistake to think there is <em>one </em>simple, caveman past that is tragically out of sync with the present moment. Our minds evolved in a bewildering variety of contexts, and part of the reason we have such huge brains is to reduce the bewilderment&#8212;to help us land on our feet in whatever urban or actual jungle we&#8217;re thrown into.</p><p>So if you&#8217;re tempted to call a human stupid for doing something that looks irrational, I think you should first ask yourself the question: &#8220;Am I grasping the entirety of that human&#8217;s situation, including all the relevant uncertainties and constraints?&#8221; If the answer is &#8220;yes,&#8221; then you should ask the follow-up question: &#8220;Is that human sufficiently incentivized to behave rationally in this context?&#8221; If the answer is &#8220;yes&#8221; again, then I would ask another follow-up question: &#8220;Am I correctly understanding that human&#8217;s motivations, including the motivations they may not want to admit to?&#8221; If you get another &#8220;yes&#8221; there, then sure, go ahead and call the human stupid. But please: don&#8217;t skip those first three questions.</p><p>Besides, even if it turns out that humans are woefully mismatched to the modern world&#8212;cavemen in suits, grunting their way through life&#8212;we have to ask ourselves another follow-up question: &#8220;Is there any reason to expect intellectuals to be more &#8216;matched&#8217; than the masses?&#8221; The answer to this question is far from clear. After all, intellectuals have their own highbrow versions of junk food and misinformation.</p><h1><strong>Winners and Losers</strong></h1><p>Dan argues that one of the biggest sources of mismatch is in our zero-sum attitudes. Dan writes that &#8220;zero-sum thinking makes sense for hunter-gatherers. When you live at the subsistence level, one person&#8217;s dramatic gains likely mean someone else&#8217;s dramatic loss. 
Consequently, we <a href="https://blog.acton.org/archives/122444-win-win-denial-the-roots-of-zero-sum-thinking.html">struggle to comprehend</a> how modern trade and innovation could make everyone better off, especially when gains are unevenly distributed or delayed.&#8221;</p><p>I think this is a good example of mismatch being overapplied. Dan is right that the modern world presents us with unique opportunities for wealth creation, but it also presents us with unique opportunities for cronyism, classism, cartelization, rent-seeking, censorship, surveillance, sectarianism, regressive redistribution, and regulatory capture. Status is zero-sum: when I rise, someone else falls. Political power is zero-sum: when the Republicans win, the Democrats lose. So once we correctly see <a href="https://www.everythingisbullshit.blog/p/money-is-bullshit">wealth as an instrument of power-grabbing and status-seeking</a>, it no longer seems like such a misunderstanding to view wealth in zero-sum terms. This is particularly true in a world where governments have interwoven themselves so much with capitalist wealth production that <a href="https://www.amazon.com/Political-Capitalism-Maintained-Cambridge-Economics/dp/1108449905">capitalism and politics can no longer be seen as separate entities</a>. Perhaps our zero-sum mentality is exactly what we should expect to emerge in the sociopolitical system we currently inhabit, where political tribes cannot win at the same time, and where the winner gets to enforce its will on the loser by threat of imprisonment.</p><p>Of course, it would be better if our political system weren&#8217;t so high-stakes and zero-sum&#8212;with such a terrifying and enormous prize to fight over&#8212;but given that it is, we should not be surprised to see the masses rationally responding to it. Creating paranoid myths about conspiratorial outgroups is not stupid in this context: <a href="https://nyaspubs.onlinelibrary.wiley.com/doi/pdf/10.1111/nyas.70089">it is a good strategy</a> for mobilizing one&#8217;s political coalition and gaining power&#8212;not to mention signaling one&#8217;s loyalties and jockeying for ingroup status. Just look at how well Trump&#8217;s preposterous bullshit worked out for him and his cronies: they&#8217;re some of the most powerful people in the world. Also, the left is hardly devoid of propaganda and has surely gained many political victories through fearmongering and Manicheanism historically. Yes, political elites can sometimes use propaganda to exploit the masses, but it&#8217;s important to remember that the masses often benefit from the propaganda too: political coalitions rise to power as a group, with both leaders and followers sharing in the victory.</p><h1><strong>The Arc of Progress</strong></h1><p>Dan talks about the positive trends in health, wealth, and safety that have occurred throughout history, citing the work of Steven Pinker and others. He views these salubrious trends as evidence that a kind of enlightenment has occurred&#8212;a march of progress led by the light of reason. I agree that positive trends have occurred throughout history, but I&#8217;m not so sure about the enlightenment bit.</p><p>I think it is a mistake to attribute these uplifting trends to any kind of conscious, overarching motivation for enlightenment. These trends must be explained in <a href="https://www.everythingisbullshit.blog/p/incentives-are-everything">testable, mechanistic, incentive-based terms</a>, like any other phenomenon in economics or social science. 
Ironically, viewing these trends as the product of conscious intent is the very same error of overattributed intentionality that Dan thinks the masses fall prey to. Insofar as intellectuals anthropomorphize &#8220;the enlightenment&#8221; as a brainy homunculus striving for a better world, it would be an example of intellectuals being just as cognitively mismatched as the masses&#8212;or, more plausibly, a case of them being biased toward self-aggrandizement.</p><p>So I don&#8217;t think the world became better because intellectuals got together and decided to help us all out of the goodness of their hearts. Instead, the world became better in the same way the world changes in any way at all: by people rationally responding to <a href="https://www.everythingisbullshit.blog/p/incentives-are-everything">changing incentive structures</a>. In this case, I would bet that the relevant incentives have more to do with expanding trade and global markets, which create wealth and break down tribal barriers, than with the good intentions of intellectuals, who often demonize markets.</p><p>Dan seems to agree on the importance of markets for explaining positive trends, citing the insights of Adam Smith and others, but then writes that &#8220;for this progress to be possible, societies require a critical mass of people to appreciate these insights.&#8221; I would disagree here: societies can get richer without anyone knowing or caring about Adam Smith. People do things that put cash in their pockets. No abstract theories from intellectuals are required for this to occur. Smith&#8217;s insights only emerged after the wealth-creating properties of markets were well underway, so he (or any other thinker) cannot take credit for producing them. I think the same is true of many other positive trends that intellectuals like to take credit for, including moral progress. Once you realize that markets pay people enormous sums of money to treat each other fairly and extend cooperation beyond tribal boundaries (<a href="https://nsuworks.nova.edu/pcs/vol21/iss1/5/">as</a> <a href="https://www.science.org/doi/full/10.1126/science.1182238?casa_token=AtZ38J653xQAAAAA%3Aa_LbAk-7OFnbVpuWk6VEzN2fMsPXQY_m8uyx1IY9_B8q_y71prn2K_EbOcyA2CKwkrVqe-Cibn0IeA">many</a> <a href="https://www.researchgate.net/publication/51993128_Market_Integration_and_Fairness_Evidence_from_Ultimatum_Dictator_and_Public_Goods_Experiments_in_East_Africa">different</a> <a href="https://statsandsociety.substack.com/p/markets-probably-make-us-more-moral">scholars</a> <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC4624453/">have</a> <a href="https://www.amazon.com/Moral-Consequences-Economic-Growth/dp/1400095719">argued</a>, including <a href="https://www.everythingisbullshit.blog/p/darwin-the-cynic">me</a>), it begins to seem dubious that the primary cause of moral progress was a bunch of philosophizing.</p><h1><strong>Negative Nancies</strong></h1><p>Dan talks about our &#8220;deep-rooted negativity bias,&#8221; our &#8220;evolved (and, for hunter-gatherers, adaptive) tendency to attend disproportionately to threats and dangers.&#8221; He thinks that &#8220;the result is an information ecosystem systematically <a href="https://www.vox.com/the-highlight/23596969/bad-news-negativity-bias-media">skewed</a> towards catastrophe, conflict, and outrage.&#8221; The implication is that this negative skewing of reality is maladaptive in the modern world&#8212;something we should overcome with more enlightenment.</p><p>I disagree. 
If threatening, scary stuff is a kind of <a href="https://nyaspubs.onlinelibrary.wiley.com/doi/pdf/10.1111/nyas.70089">group mobilization fuel</a>, then a good chunk of this catastrophism is politically rational. After all, it&#8217;s hard to mobilize a group by saying everything is peachy and getting better all the time. And once we realize that humans are not primarily dispassionate truth-seekers who care about accurately assessing intergenerational changes in health and income, but social primates who care about capturing each other&#8217;s attention, paying attention to what others are paying attention to, gaining and expressing sympathy for each other&#8217;s plights, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4464524/">signaling their competence and seriousness</a>, being included in &#8220;important&#8221; conversations, demonizing their rivals, and saying things that are <a href="https://www.everythingisbullshit.blog/p/you-will-find-this-interesting">interesting and provocative</a>, even our catastrophism has a certain kind of rationality to it. Positive trends are boring. Doomerism is exciting. Of course, all this doomerism takes a toll on our happiness. But as I&#8217;ve written about extensively (see <a href="https://www.everythingisbullshit.blog/p/happiness-is-bullshit">here</a> and <a href="https://www.everythingisbullshit.blog/p/happiness-is-bullshit-revisited">here</a>), we&#8217;re not pursuing happiness, so this shouldn&#8217;t count as evidence of human maladaptedness.</p><h1><strong>Mistakes Were Made (But Not By Me)</strong></h1><p>Dan lists a litany of bloody and catastrophic mistakes that have been made by humans throughout history, and I think this is where Dan is at his most persuasive. Humans have certainly done a lot of terrible shit. But while I acknowledge the tidal wave of stupidity that Dan is pointing to, it is important to remember that we are talking about the design of human nature, and how good we should expect that design to be&#8212;that is, whether the human mind is about as well-designed as the &#8220;hawk&#8217;s eye or the cheetah&#8217;s sprint,&#8221; as I put it in my post. When answering this question, we should not let ourselves get distracted by the specific failures of specific individuals, which are an inevitable part of life for any creature.</p><p>Predators often fail to catch their prey. Prey often fail to evade their predators. These failures cause death, which I&#8217;m told is a bad thing. But we shouldn&#8217;t conclude from these failures that predators and prey are dumb and irrational, or that they&#8217;re poorly designed for chasing and evading each other. &#8220;Haha, that gazelle just got eaten by a lion&#8212;what a dumbass!&#8221; In a world of fearsome competition and formidable constraints, deadly failures at the individual level and impeccable design at the species level are not mutually exclusive. Political revolutions often devour their children, but plenty of animals devour their children in the wild. 
The devouring does not necessarily make those animals, or their devoured children, maladapted.</p><p>Besides, even if we accept the Homo Stupidicus model that Dan is gesturing at, we have to ask ourselves the same question we asked previously: &#8220;Is there any reason to expect intellectuals to be less prone to these terrible blunders than the masses?&#8221; Given that many of the mistakes Dan cites were a result of intellectuals&#8217; utopian visions, the answer is far from clear.</p><h1><strong>A Real Fixer Upper</strong></h1><p>Dan argues, contrary to my soul-crushing cynicism, that intellectuals often have a &#8220;genuine&#8221; motivation to fix the world. Rather than get into a semantic debate about what it means to have a &#8220;genuine&#8221; motivation for something, I&#8217;ll focus on what Dan and I seem to agree on: whenever people claim to be trying to fix the world, it is mostly because of deeper motives for esteem, prestige, admiration, etc. So if we want to understand this world-fixing business, we have to delve deeper into the prestige economy that gives rise to it. And once we delve deeper into that prestige economy, we will discover some serious grounds for pessimism. Because what gets a person prestige, and what fixes the world, are two very different things.</p><p>It is the <em>appearance</em> of world fixing to a prestige-granting audience&#8212;not <em>objective</em> world fixing in external reality&#8212;that intellectuals are striving for. And insofar as prestige-granting audiences do not actually know what fixes the world, or hold politically biased beliefs about what fixes the world, intellectuals&#8217; prestige striving will be uncorrelated with objective improvements in the world. You might even get a few cases where intellectuals get showered with virtue points for creating hell on earth. The disconnect between audience perceptions and objective reality is why I am more pessimistic than Dan about the world-fixing motivations of intellectuals. The lack of depth to these motivations is precisely what should make us skeptical that they will always lead to good outcomes, or that they are the main causes of moral and material progress throughout history.</p><h1><strong>Enlightenment Now?</strong></h1><p>Aside from some differences in style, Dan and I probably agree on at least 90% of the substance, and I suspect he will be on board with most of what I&#8217;ve written here. That&#8217;s the beauty of good faith disagreement: it often reveals how little of it there is. And truth be told, I&#8217;m just as enchanted by the ideals of the enlightenment as any other intellectual: it&#8217;s the animating force behind all my writing and researching and podcasting. So I really do get the emotional core of Dan&#8217;s essay.
It&#8217;s what gets me out of bed in the morning.</p><p>But in spite of all the beauty and grandeur of the life of the mind, I cannot help but take a long, hard look in the mirror and ask myself: &#8220;Is it all bullshit?&#8221; I think it&#8217;s important to ask ourselves this question, and if we&#8217;re going to ask it, we must be genuinely and uncomfortably open to the possibility that the answer is yes.</p>]]></content:encoded></item><item><title><![CDATA[Why Have Academics Failed To Study Social Justice Ideology?]]></title><description><![CDATA[This is a guest post by Thomas Prosser (who writes at The Path Not Taken) and Edmund King (who writes at Paroxysms) about their very interesting new book, Beyond Woke and Anti-Woke: Explaining the Rise of Social Justice Ideology.]]></description><link>https://www.conspicuouscognition.com/p/why-have-academics-failed-to-study</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/why-have-academics-failed-to-study</guid><dc:creator><![CDATA[Thomas Prosser]]></dc:creator><pubDate>Wed, 11 Feb 2026 11:01:46 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1592237046603-950efb977744?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw3fHxzaWxlbmNlJTIwaXMlMjB2aW9sZW5jZXxlbnwwfHx8fDE3NzA3NTA0NTZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This is a guest post by <a href="https://substack.com/@thomasprosser">Thomas Prosser</a> (who writes at <a href="https://www.thepathnottaken.net/">The Path Not Taken</a>) and <a href="https://substack.com/@paroxysms">Edmund King</a> (who writes at <a href="https://paroxysms.substack.com/">Paroxysms</a>) about their very interesting new book, <a href="https://bristoluniversitypress.co.uk/beyond-woke-and-anti-woke">Beyond Woke and Anti-Woke: Explaining the Rise of Social Justice Ideology</a>.</em></p><div><hr></div><div class="captioned-image-container"><figure><img src="https://images.unsplash.com/photo-1592237046603-950efb977744?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw3fHxzaWxlbmNlJTIwaXMlMjB2aW9sZW5jZXxlbnwwfHx8fDE3NzA3NTA0NTZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="4272" height="2848" alt="person in blue blazer holding brown cardboard box">
<figcaption class="image-caption">Photo by <a href="https://unsplash.com/@sachaverheij">Sacha Verheij</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>Does &#8216;wokeness&#8217; exist at all? And if it does, why on earth would anyone get worked up over it? Over the past decade, many mainstream British liberals have reacted to the rise of &#8216;wokeness&#8217;, or what we prefer to call social justice ideology, by denying that anything of note is occurring. These kinds of denials have taken many forms. Among those we could affectionately term &#8216;centrist Dads&#8217;, there has been a rich seam of indignation that these sorts of questions are being raised at all. &#8216;Why do you care?&#8217; &#8216;Why are you so obsessed with this?&#8217; &#8216;Stop spending so much time on the internet!&#8217; We have heard many responses along these lines.</p><p>On a more intellectual level, liberal objections have sought to characterize social justice ideology as just a particularly earnest and sincere form of liberalism. Surely, we hear, now is not the right time to look into this issue, given the very real threat of radical right populism? At worst, the desire to ask these questions at all is seen as alarming evidence of authoritarian tendencies. &#8216;It&#8217;s just idealistic kids&#8217;, we are told. Ignore it. It will all pass over in time. In academia, this has engendered a curious phenomenon: a near-total absence of accounts that examine social justice ideology through an analytic lens.</p><p>In our new book, <em><a href="https://bristoluniversitypress.co.uk/beyond-woke-and-anti-woke">Beyond Woke and Anti-Woke: Explaining the Rise of Social Justice Ideology</a> </em>(Bristol University Press, 2026), we examine the seeming inability of liberals to describe their left flank in accurate terms (or even to admit that it exists at all). Over the past decade, liberalism has palpably lost ground to &#8216;woke&#8217;. Both are progressive ideologies but, in contrast to liberalism, social justice ideology emphasizes the overriding importance of identity and direct action. It extends the concept of harm far beyond previous limits, concerning itself particularly with the threats of emotional harm and harmful speech. These developments have brought social justice advocates into conflict with older liberal tenets: individualism, legalism, and freedom of speech and association.</p><p>This conspicuous gap in the scholarship is curious because, in academia, analytic approaches to ideology are common. To give a well-known example, there is an <a href="https://www.annualreviews.org/content/journals/10.1146/annurev-polisci-041719-102503">extensive</a> <a href="https://journals.sagepub.com/doi/full/10.1177/0010414018789490?casa_token=QJgJ4jcPPJUAAAAA%3ApFm0AyNZbV4me2t3egixpaCuyEfloNAWMAfk7MsehCmbHMM8SaxbYydSN1zG1tTCg43HVUTgKasD">literature</a> on radical right populism which, over thousands of studies, examines the origins and trajectories of this ideology. Admittedly, many who are interested in the study of social justice ideology have not helped their cause. Since the mid-2010s, a self-described &#8216;heterodox&#8217; movement has arisen that has sought to investigate the topic. 
This movement has produced some important <a href="https://press.princeton.edu/books/hardcover/9780691232607/we-have-never-been-woke?srsltid=AfmBOoo7ucFZIISlDfPTbrRyLIdLIxD6XXpYpqzdtMVHaeG61Wv05GVW">works</a>, yet has ultimately failed to grasp the social justice nettle. We have seen a great many polemical trade-press books and podcast episodes, and many &#8216;free speech&#8217; festivals at which speakers celebrate their ability to tolerate robust disagreement (but at which no one seems to disagree at all). Despite their initial energy and sense of purpose, many in this &#8216;heterodox&#8217; space have abandoned all pretence of academic rigour. While surrendering to the temptations of audience capture might be good for Substack subscriptions, it makes it easier for liberals to dismiss these kinds of interventions. No smoke; no fire; nothing to see here.</p><p>What, ultimately, explains the reluctance of liberals to acknowledge the existence of social justice ideology? We can think of some potential reasons: a certain hesitancy to expose the fractures in progressive movements, an unwillingness to be seen siding or affiliating with conservatives on certain issues, and (perhaps) fear of attack from radical activists. Sometimes, these kinds of motivations seem to be accompanied by the notion that progressive ideologies do not <em>need</em> to be explained. As classic <a href="https://academic.oup.com/book/3196">theories</a> of ideology contend, the ideologue regards their own worldview as an accurate depiction of reality and, therefore, as standing in no need of further explanation. The overt similarities between liberalism and social justice ideology, and the obvious differences that separate them from conservative ideologies, encourage such thinking.</p><p>We find this state of affairs unfortunate, and it is what moved us to write our book. We believe that social justice ideology, just like radical right populism, <em>should </em>be studied with an analytic approach. Since the 2010s, social justice ideology has been the major newcomer in progressive ideological space and is notably different from liberalism. This development is fascinating, and the ideology deserves what other ideologies receive: a serious programme of academic study rather than unevidenced polemics.</p><p>In <em>Beyond Woke and Anti-Woke</em>, we explain the emergence of social justice ideology using statistical analysis of multiple surveys of UK and US public opinion, institutional theories of the political economy, and morphological theories of ideology. Rather than having one cause, social justice ideology in fact reflects a wider demographic revolution. Mass higher education has transformed societies and, as women have entered public life, feminine-coded values of care and equality have become increasingly influential.</p><p>In particular, the crises of capitalism after the 2008 financial crash acted as catalysts for ideological change, giving social justice ideology mass appeal. 
Though, contrary to popular theories, our statistical analyses cast doubt on any direct relationship between adherence to social justice ideology and individual economic precarity, we argue that economic crises helped discredit liberalism among younger groups. For corporations, the embrace of social justice ideology provided renewed legitimacy. By the 2020s, social justice ideology had become a major rival to liberalism and, notwithstanding the attacks of the second Trump administration, it remains a major force in progressive politics.</p><p>Of course, any such interpretations must be provisional. Beyond issues with replication, the lack of prior research on this topic makes conclusions unusually precarious. Will this change in the future? Though proponents of a spatial <a href="https://www.hup.harvard.edu/books/9780674001879">hypothesis</a> expect that academic supply will inevitably meet any gaps that appear in the intellectual market, we are less optimistic. Academic fields are path-dependent and, therefore, tend to follow their own logic. If they have been closed off to certain lines of inquiry in the past, they are likely to remain so. Moreover, we know from survey data that progressives are numerically predominant in universities. Inevitably, this will create pressures from within fields and disciplines to maintain existing path dependencies.</p><p>Perhaps there is a need for cultural change in this area. Contrary to the fears of some, analysing progressive ideologies does not imply that one regards them as a problem to be solved. Instead, this sort of investigation should be entirely conventional in academia; one identifies a gap in the research and, using established theories and methods, arrives at findings. 
In the case of social justice ideology, its crucial influence on institutions and policymaking adds to the justification for such a research agenda.</p><p>This, we argue, would not only lead to a healthier academia; it would lead to a healthier liberalism.</p>]]></content:encoded></item><item><title><![CDATA[We Are Confused, Maladapted Apes Who Need Enlightenment]]></title><description><![CDATA[With Homo sapiens, Darwinian evolution produced a new kind of animal: a species that builds worlds it struggles to understand.]]></description><link>https://www.conspicuouscognition.com/p/we-are-confused-maladapted-apes-who</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/we-are-confused-maladapted-apes-who</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Mon, 02 Feb 2026 16:24:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5gyx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cd4f31a-9e87-416c-9775-9ea3c57330b7_746x488.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!5gyx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cd4f31a-9e87-416c-9775-9ea3c57330b7_746x488.png" width="746" height="488" alt="">
</figure></div><p>In a characteristically insightful and entertaining <a href="https://www.everythingisbullshit.blog/p/a-big-misunderstanding">essay</a>, <a href="https://www.everythingisbullshit.blog/">David Pinsof</a> argues that intellectuals greatly overestimate how many of the world&#8217;s problems stem from popular misunderstandings. In reality, he maintains, people are highly rational and well-informed about their interests. This is what we should expect on evolutionary grounds. &#8220;Show me an animal that has succeeded in surviving and reproducing in a hostile environment for millions of years, and I will show you a rational animal.&#8221; It is also supported by extensive evidence about the rationality and accuracy of human cognition.</p><p>In Pinsof&#8217;s worldview, even the dreaded cognitive &#8220;biases&#8221; that psychologists love to tell us about function as adaptive mechanisms that help us survive and thrive. 
Confirmation bias, for example, provides us with intellectual ammunition for <a href="https://www.hup.harvard.edu/books/9780674237827">persuasion and reputation management</a>, while overconfidence and self-serving illusions help us <a href="https://www.amazon.com/Why-Everyone-Else-Hypocrite-Evolution/dp/0691154392">win friends and influence people</a>.</p><p>Why, then, do intellectuals so often chalk up the world&#8217;s problems to mass ignorance and irrationality? Partly, the narrative is simply self-serving. It is intellectuals, after all, who promise to liberate us from misunderstanding. They are our professional understanders.</p><p>But it&#8217;s also because they confuse our expressed motives with our real goals. Sure, Pinsof concedes, we look pretty stupid and misinformed relative to the high ideals and noble ambitions that we say we have. If we&#8217;re chasing objective truth, impartial justice, and effective altruism, we&#8217;re not doing a good job. But those goals are just elaborate fictions, self-serving public relations cooked up to make us look good. Our real goals, our <a href="https://www.amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995">hidden motives</a>, are very different. We&#8217;re chasing the kinds of <a href="https://www.everythingisbullshit.blog/p/darwin-the-cynic">grubby rewards</a> you would expect of apes forged in Darwinian competition: status, reputation, power, sex, and resources. And relative to those ambitions, we&#8217;re smart and sophisticated.</p><p>This analysis reframes many apparent examples of stupidity as strategies. For example, &#8220;tribalism&#8221; isn&#8217;t a <a href="https://www.conspicuouscognition.com/p/tribalism-corrupts-politics-even">cognitive error</a> to be remedied by debiasing and education; it&#8217;s a winning strategy among groupish primates who care more about power and prestige than truth or justice. Ineffective altruism and slacktivism don&#8217;t result from miscalculating the most effective ways to help others; they help status-seeking activists <a href="https://www.amazon.com/Hidden-Games-Surprising-Irrational-Behavior/dp/1541619471">buy noble reputations at a discount</a>. </p><p>Unsurprisingly, this perspective leads Pinsof to a bleak conclusion. If most of the world&#8217;s problems result not from misunderstandings but from conflicting incentives, intellectual enlightenment cannot save us. And even if it could, nobody really cares about solving the world&#8217;s problems anyway:</p><blockquote><p><em>Not every problem has a solution. Some things cannot be fixed. And once you come to the bracing realization that we have no deep desire to fix our broken world, you&#8217;ll realize that our problem is that we have no problem. What&#8217;s broken is that nothing is broken. The study of human nature is, all too often, the study of the hole we&#8217;re stuck in&#8230; In the end, the only misunderstanding is that there&#8217;s been a misunderstanding.</em></p></blockquote><h2>A Darwinian Defence of the Enlightenment</h2><p>It&#8217;s a beautifully cynical, Pinsofian analysis&#8212;and one that, I think, <a href="https://www.conspicuouscognition.com/p/socialism-self-deception-and-spontaneous">gets</a> <a href="https://www.conspicuouscognition.com/p/strategic-altruism-the-machiavellian">a lot </a><a href="https://www.conspicuouscognition.com/p/how-dangerous-is-misinformation">right</a>.</p><p>Nevertheless, it is too optimistic about our baseline rationality. 
Yes, we are savvy and strategic primates pursuing goals we&#8217;d rather not admit, even to ourselves. But we&#8217;re also riddled with costly cognitive biases, maladapted to the modern world, and in need of enlightenment by intellectual knowledge that we often find deeply counterintuitive.</p><p>It is also too pessimistic. Some people really are motivated to fix our broken world, and in some cases, they make genuine progress. The motivation is never very deep or pure&#8212;no straight thing was ever made from the <a href="https://www.cambridge.org/core/books/abs/kants-idea-for-a-universal-history-with-a-cosmopolitan-aim/crooked-timber-of-mankind/A3A16BCFC94272A01351387F8007C150">crooked timber</a> of humanity&#8212;but it&#8217;s not merely a deceptive story, either. </p><p>There are many holes we will never escape from. There is an unavoidably tragic aspect to the human condition. But when scaffolded by the right incentives and error-correction mechanisms, we can draw on intellectual knowledge and cooperation to climb out of the worst pits we find ourselves in. </p><p>You can&#8217;t understand much of humanity&#8217;s <a href="https://www.cambridge.org/core/books/abs/kants-idea-for-a-universal-history-with-a-cosmopolitan-aim/crooked-timber-of-mankind/A3A16BCFC94272A01351387F8007C150">significant progress</a> over the past several centuries&#8212;in life expectancy, living standards, wealth, health, infant mortality, freedom, political governance, and so on&#8212;without embracing this fundamental optimism of the Enlightenment. Or so I will argue.</p><h2>Evolutionary Expectations</h2><p>Before getting into the details, it&#8217;s worth stepping back and scrutinising Pinsof&#8217;s assumptions about evolution and human rationality. He says,</p><blockquote><p><em>&#8220;The default assumption of every intellectual should be that the human mind is about as well-designed as the hawk&#8217;s eye, the bat&#8217;s sonar, or the cheetah&#8217;s sprint.&#8221;</em></p></blockquote><p>Our species complicates this default assumption in two ways.</p><h3><em><strong>A Uniquely Unique Animal</strong></em></h3><p>First, although all species are unique, it&#8217;s not just human chauvinism to think that we&#8217;re <a href="https://www.pnas.org/doi/10.1073/pnas.1521270113">uniquely unique</a>, a genuinely <a href="https://press.princeton.edu/books/hardcover/9780691177731/a-different-kind-of-animal?srsltid=AfmBOooEl0Poh7tQnNQKpP1f6sicQXP1r-haJidDo1j-r0RpeqrajiJS">new kind of animal</a>.</p><p>There is no single quality responsible for this&#8212;no magic bullet that set our ancestors on a novel evolutionary pathway. 
Instead, there is a set of interacting traits connected to our unique capacities for <a href="http://pnas.org/doi/10.1073/pnas.0914630107?__cf_chl_rt_tk=zZ6vaYzvKFr2F1mywvFCLGt4pRhUSkT2HApLuU2WDd8-1770041770-1.0.1.1-qivcdE9toELXJLr_9Czx6CA1xQAEIhDhkOGR8UkmgMY">cognition</a> (how we think and reason), <a href="https://www.amazon.com/Natural-History-Human-Morality/dp/0674088646">cooperation</a> (how we work together), and <a href="https://global.oup.com/academic/product/the-pleistocene-social-contract-9780197531389">culture</a> (how we share and accumulate information). Through such abilities, we have acquired unprecedented powers to design and redesign our environments, but we have also become vulnerable to novel risks and failure modes.</p><p>To take only one example, no other species is anywhere near as dependent on lifetime learning as we are, including extensive &#8220;<a href="https://press.princeton.edu/books/hardcover/9780691177731/a-different-kind-of-animal?srsltid=AfmBOooEl0Poh7tQnNQKpP1f6sicQXP1r-haJidDo1j-r0RpeqrajiJS">social learning.</a>&#8221; To achieve our goals, we rely on information acquired from others (parents, family, friends, allies, teachers, shamans, priests, Substackers, etc.), typically because they intentionally share it with us through language and other forms of communication. </p><p>Given this reliance, evolution has endowed us with highly <a href="https://psycnet.apa.org/record/2010-17633-001">sophisticated social learning mechanisms</a>. In this sense, Pinsof is right that evolutionary theory correctly predicts rationality and adaptation. We&#8217;re skilled at extracting knowledge from others while minimising the risks of misinformation and deception. Even <a href="https://press.princeton.edu/books/hardcover/9780691178707/not-born-yesterday?srsltid=AfmBOoqxqXKAy61v__H0-fZ8-7zpKO-LnsLNZOe_ffmoHwquPnqcMc4G">from a young age</a>, we instinctively evaluate the plausibility of what we&#8217;re told, assess people&#8217;s reliability and honesty across different domains, and insist on persuasive arguments for surprising claims. </p><p>Nevertheless, such extensive social learning also creates novel vulnerabilities that won&#8217;t be illuminated by analogies to the hawk&#8217;s eye, bat&#8217;s sonar, or cheetah&#8217;s sprint.</p><p>Most obviously, it means that reflection on human evolution should never be used to discount the importance of ideas. We evolved to be a species dependent on good ideas&#8212;on the knowledge, wisdom, and understanding that we acquire from others. If such ideas are misleading or deceptive in ways we can&#8217;t anticipate or detect, even optimal learning mechanisms won&#8217;t prevent us from being misinformed in costly and sometimes catastrophic ways.</p><p>Before the Neolithic Revolution, this vulnerability wasn&#8217;t very pressing for most humans. The challenges hunter-gatherers faced were <a href="https://www.amazon.com/Evolved-Apprentice-Evolution-Humans-Lectures/dp/0262526662">mostly local and small-scale</a>: which plants are edible, which animals migrate, which group members are trustworthy, and so on. This meant they could often cross-check what they were told against direct experience.</p><p>Moreover, because our core intuitions evolved over hundreds of millennia in response to hunter-gatherer lifestyles, people&#8217;s instinctive bullshit detectors were broadly reliable in such domains. 
Whenever they encountered claims that seemed implausible or outlandish&#8212;that is, counterintuitive&#8212;they could usually safely dismiss them, or at least insist on practical demonstrations of their veracity. </p><p>Finally, because their social networks were mostly face-to-face, highly interdependent, and largely egalitarian, high-stakes deception was <a href="https://global.oup.com/academic/product/the-pleistocene-social-contract-9780197531389?cc=us&amp;lang=en&amp;">typically risky and counterproductive</a>. When everyone knows everyone extremely well, and power is broadly distributed, it&#8217;s easier to discover and punish big lies. And when everyone depends on everyone else for the most basic necessities of survival, the social costs of getting caught lying can be astronomical.</p><p>Of course, hunter-gatherers believed plenty of preposterous falsehoods about matters beyond their experience&#8212;for example, about the broader cosmos, their ancient history, or the character of rival tribes. But such myths were generally costless and adaptive. When you lack the ability to influence the world beyond your immediate, day-to-day existence, you can <a href="https://en.wikipedia.org/wiki/Rationality_(book)">believe whatever you want about it</a>, which is exactly what they did.</p><h3><em><strong>The New World</strong></em></h3><p>Well, <a href="https://en.wikipedia.org/wiki/Public_Opinion_(book)">things have changed</a>. The second reason humans complicate the link between evolution and rationality is that the modern world we must navigate is unimaginably more <a href="https://www.conspicuouscognition.com/p/the-world-outside-and-the-pictures">vast, complex, and unequal</a> than hunter-gatherer environments. As John Dewey <a href="https://en.wikipedia.org/wiki/The_Public_and_Its_Problems">observed</a> a century ago,</p><blockquote><p><em>&#8220;The local face-to-face community has been invaded by forces so vast, so remote in initiation, so far-reaching in scope and so complexly indirect in operation that they are, from the standpoint of the members of local social units, unknown. . . . They act at a great distance in ways invisible to [them].&#8221;</em></p></blockquote><p>Natural selection adapts organisms to their environments. When these environments change, such adaptations can become &#8220;mismatched.&#8221; This is why things aren&#8217;t going so well for polar bears.</p><p>In the human case, evolutionary mismatch is often invoked to explain relatively mundane things, such as why so many of us are obese. As the <a href="https://www.psychologytoday.com/us/blog/common-sense-science/202505/why-were-obese">familiar story goes</a>, sugar and fat were scarce in ancestral environments, so we evolved to crave them. In modern capitalist societies, they are abundant. So we gorge on cheesecake and pizza served to us by profit-seeking companies that place no value on our welfare. There is no savvy strategy behind such overeating. Most of us are simply heavier and unhealthier than we would like to be.</p><p>This basic lesson generalises to many other contexts, including those where our maladaptation is harder to observe than our fatness. The most important of these is modern politics.</p><h3><em><strong>Political Mismatch</strong></em></h3><p>The scale and complexity of the modern environment that bears on political debate are mind-boggling. 
Hundreds of millions of strangers are enmeshed in interacting economic, political, and institutional forces that bear no resemblance to the small-scale worlds we evolved in.</p><p>Although it&#8217;s important not to overstate the problem of mismatch here&#8212;popular talk of static &#8220;stone-age minds&#8221; obscures how we evolved to be highly adaptable and flexible&#8212;it&#8217;s equally important not to ignore the severity of the challenges. </p><p><strong>First</strong>, the modern world <a href="https://global.oup.com/academic/product/power-without-knowledge-9780190877170?cc=us&amp;lang=en&amp;">radicalises our reliance on social learning</a>. When forming beliefs about topics relevant to modern politics, we almost always lack the ability to cross-check what we&#8217;re told against our experience, either because it is too distant in space and time or because the topics concern abstract phenomena (GDP, inflation, demographic trends, economic growth, etc.) that no one can directly experience.</p><p><strong>Second</strong>, the intuitions most people bring to understanding modern societies are systematically misleading.</p><p>We evolved to be highly skilled at forming alliances, reading intentions, tracking reputations, and playing local status games. In contrast, neither our evolutionary endowment nor first-hand experiences <a href="https://philpapers.org/rec/BOYFBA">prepare us to understand</a> large-scale systems characterised by emergent properties, distributed processes, and incentives. So we anthropomorphise institutions and frequently default to moralised, <a href="https://pubmed.ncbi.nlm.nih.gov/18692779/">intention-based narratives</a> that posit villains rather than incentives and structural constraints. </p><p>When pharmaceutical prices rise, we assume that greedy executives are to blame rather than laws, regulations, and insurance markets. When housing or renting becomes unaffordable, we blame the avarice of developers and landlords rather than building restrictions, permitting processes, and construction costs. Such tendencies served our ancestors well. In hunter-gatherer societies, it&#8217;s reasonable to trace significant events to identifiable agents with familiar goals, and to link <a href="https://myscp.onlinelibrary.wiley.com/doi/10.1002/arcp.1096">good and bad social outcomes to good and bad intentions</a>. Invisible-hand coordination and <a href="https://en.wikipedia.org/wiki/Spontaneous_order">emergent order</a> are, therefore, <a href="https://en.wikipedia.org/wiki/Folk_economics">deeply counterintuitive</a>.</p><p>Similarly, zero-sum thinking makes sense for hunter-gatherers. When you live at the subsistence level, one person&#8217;s dramatic gains likely mean someone else&#8217;s dramatic loss. Consequently, we <a href="https://blog.acton.org/archives/122444-win-win-denial-the-roots-of-zero-sum-thinking.html">struggle to comprehend</a> how modern trade and innovation could make everyone better off, especially when gains are unevenly distributed or delayed. In fact, the very idea that something called &#8220;wealth&#8221; can be created is a profound theoretical discovery that conflicts with common sense. 
The <a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/zerosum-thinking-and-economic-policy/C38927254280E3F6648F58E36C7D73B4">more natural view</a>, which modern economics education tries to shake people out of, is that there is a fixed set of goods to be distributed.</p><p><strong>Third, </strong>the modern information environment through which people attempt to learn about this strange new world and overcome their default ignorance and confusion is more of a hindrance than a help. </p><p>Most obviously, it is shaped by extremely well-funded <a href="https://www.conspicuouscognition.com/p/the-stench-of-propaganda-clings-to">propaganda campaigns </a>by powerful strangers who profit from other people&#8217;s ignorance. Even if such propaganda is unsuccessful, as it <a href="https://www.persuasion.community/p/propaganda-almost-never-works">often is</a>, its presence can breed pervasive mistrust that prevents the uptake of trustworthy information, causing people to place greater weight on their personal&#8212;and highly unreliable&#8212;intuitions.</p><p>Even more importantly, the modern media environment in free societies is organised around intense competition for audience attention and engagement. When combined with our deep-rooted <a href="https://en.wikipedia.org/wiki/Negativity_bias">negativity bias</a>&#8212;our evolved (and, for hunter-gatherers, adaptive) tendency to attend disproportionately to threats and dangers&#8212;the result is an information ecosystem systematically <a href="https://www.vox.com/the-highlight/23596969/bad-news-negativity-bias-media">skewed</a> towards catastrophe, conflict, and outrage.</p><p>The predictable consequence of this is that people develop mental pictures of reality <a href="https://en.wikipedia.org/wiki/Factfulness">far more negative than the objective facts warrant</a>. They overestimate poverty, crime rates, and many other social pathologies and dangers, and believe most trends are going in the wrong direction. In affluent liberal democracies, people are not just largely oblivious to progress. Their minds invert reality, often treating the most peaceful and prosperous societies in human history as <a href="https://www.persuasion.community/p/its-the-internet-stupid">dystopian hellscapes</a>.</p><p>The result of all this is pervasive ignorance and misperception. The facts and complexities of the modern world are replaced in people&#8217;s heads with cartoonish, catastrophising myths.</p><h2>The Rational Irrationality Objection</h2><p>If this analysis is correct, it suggests that mass ignorance and misperceptions are not figments of intellectuals&#8217; self-serving imaginations. Evolution made us rational and well-adapted&#8212;but to a world that no longer exists. In the modern world, <a href="https://www.conspicuouscognition.com/p/why-do-people-believe-true-things">confusion and misunderstanding are the default</a>.</p><p>Nevertheless, there is a popular line of reasoning that concedes the existence of mass ignorance but insists that it is &#8220;rational.&#8221; To introduce a bit of jargon, it acknowledges that most people are not &#8220;epistemically rational&#8221;&#8212;they are doing a terrible job forming accurate beliefs about reality&#8212;but it argues that such epistemic failures are &#8220;instrumentally rational&#8221;. 
In line with Pinsof&#8217;s perspective, it treats widespread error and delusion as an adaptive response to people&#8217;s practical circumstances.</p><p>One influential theory of this kind comes from the work of economists like <a href="https://en.wikipedia.org/wiki/An_Economic_Theory_of_Democracy">Anthony Downs</a> and <a href="https://www.amazon.com/Myth-Rational-Voter-Democracies-Policies/dp/0691138737">Bryan Caplan</a>. It points out that in large-scale modern democracies, an individual&#8217;s vote makes practically no difference to electoral outcomes. This means people have no incentive to become well informed. They have no skin in the game. On the other hand, endorsing political beliefs that are emotionally gratifying or that signal one&#8217;s tribal loyalties can be highly beneficial. So rational individuals <a href="https://en.wikipedia.org/wiki/Against_Democracy">opt for ignorance and (epistemic) irrationality</a>.</p><p>This analysis could be strengthened by Pinsof&#8217;s &#8220;<a href="https://www.tandfonline.com/doi/abs/10.1080/1047840X.2023.2274433">Alliance Theory</a>&#8221; of political belief systems, which posits that people&#8217;s participation in politics is not rooted in a desire to form accurate beliefs. Instead, we&#8217;re tribal propagandists. Our beliefs are downstream of the alliances and rivalries we form, and the biased, hypocritical arguments we construct to make our allies look good and our rivals look bad.</p><p>Both perspectives are insightful, but they also go too far.</p><h3><em><strong>Sometimes Ignorance and Irrationality Are Just Ignorance and Irrationality</strong></em></h3><p>One problem for the &#8220;rational ignorance&#8221; perspective is the prediction that ignorance and misperceptions will evaporate when people have skin in the game. This is wrong.</p><p>The history of modernity is littered with examples of people making catastrophic decisions based on deranged, inaccurate worldviews in high-stakes contexts. The Nazis really believed in an elaborate Jewish conspiracy, which led them to make self-defeating decisions, such as diverting crucial wartime resources to mass genocide. As I will return to below, communists throughout the twentieth century genuinely believed in various myths about human nature and economics, which led to repeated catastrophes, many of which engulfed the revolutionaries who brought such regimes into existence.</p><p>For less severe examples, one need only look at the <a href="https://www.richardhanania.com/p/kakistocracy-as-a-natural-result">policy track records of populist politicians</a> in modern democracies to see that people often make bad decisions based on ignorance and misperceptions, even when they have strong incentives to perceive reality accurately.</p><p>The idea that political cognition improves dramatically as stakes increase is not well supported by the historical record. And once you reflect on the <a href="https://www.conspicuouscognition.com/p/are-people-too-flawed-ignorant-and">vastness, complexity, and inaccessibility</a> of the modern world, this shouldn&#8217;t be very surprising. When discovering the truth is extremely challenging, merely increasing people&#8217;s incentive to discover it won&#8217;t secure success.</p><p>Another problem with the &#8220;rational ignorance&#8221; perspective is the assumption that people know their individual vote has no impact on political outcomes and so &#8220;decide&#8221; to be ignorant and misinformed. 
As Jeffrey Friedman <a href="https://global.oup.com/academic/product/power-without-knowledge-9780190877170?cc=us&amp;lang=en&amp;">points out</a>, this isn&#8217;t well supported by the evidence. Instead, people appear to be <em>radically ignorant</em>, not rationally ignorant. Because they don&#8217;t instinctively appreciate the sheer scale of the modern world, they dramatically overestimate the impact of their vote, and they treat political knowledge as <a href="https://www.conspicuouscognition.com/p/in-politics-the-truth-is-not-self">much more accessible than it really is</a>. </p><p>This analysis helps to explain many features of political psychology that sit uneasily with the &#8220;rational ignorance&#8221; perspective. The intensely negative, catastrophising worldviews that many people develop often just make them <a href="https://www.amazon.com/Not-End-World-Generation-Sustainable/dp/031653675X">sad, distressed, and demotivated</a>. Many people experience politics as aversive and anxiogenic, and it sometimes damages close relationships with friends and family members. Much of this looks more like sincere participation than tribal signalling optimised for maximising emotional or social rewards.</p><p>None of this is to deny that people <a href="https://www.conspicuouscognition.com/p/tribalism-corrupts-politics-even">approach politics with a &#8220;tribal&#8221; mindset</a>. There is considerable insight in Pinsof&#8217;s analysis that politics is rooted in alliances, rivalries, and self-serving (well, alliance-serving) &#8220;propaganda,&#8221; as well as in the popular idea that much political participation is performative, concerned more with <a href="https://philpapers.org/rec/FUNATM">tribal signalling</a> than sober policy analysis.</p><p>However, <a href="http://tandfonline.com/doi/abs/10.1080/1047840X.2023.2274412?__cf_chl_rt_tk=LGQQQhs9zRmp2.SqX_WenNePG0KLOyF.j8MrwHKYVWE-1770043063-1.0.1.1-G7ssvkK7xSmtECbcS981djvfATAZBnHOF8kMGCRZkGA">the problem</a> with such proposals is that the leaders and tribes we support and oppose are not independent of&#8212;in technical jargon, they&#8217;re not &#8220;exogenous&#8221; to&#8212;our political beliefs, so they cannot fully explain such beliefs. We choose leaders, allies, rivals, and enemies based on the pictures in our heads. If those pictures are systematically warped by misleading intuitions, mistrust, and negativity bias, the same will apply to our judgements about which leaders and allies promote our interests.</p><p>Put simply, someone with an accurate, evidence-based worldview will support very different political leaders and tribes than someone whose worldview is constructed from &#8220;common sense&#8221; intuitions interacting with their TikTok feed.</p><p>In general, political ignorance and misperceptions aren&#8217;t always or even commonly the product of savvy, evidence-based cost-benefit analysis or 4D Darwinian chess. They&#8217;re often downstream of the profound challenges of acquiring counterintuitive knowledge in a hostile and misleading information environment. 
</p><h2>The Role of Intellectuals</h2><p>This suggests a more optimistic assessment of the value of &#8220;intellectuals&#8221; in the broad sense of that term (scientists, statisticians, academics, etc.), and of the kinds of knowledge they can provide, ranging from <a href="https://ourworldindata.org/">carefully collected data</a> to rigorous scientific inquiry. To successfully navigate the modern world, we need to be enlightened by such knowledge. It won&#8217;t fall into our lap if we let our evolved psychologies run on autopilot. Our <a href="https://www.conspicuouscognition.com/p/why-do-people-believe-true-things">default condition is one of epistemic darkness</a>. </p><p>This optimism is, or at least should be, uncontroversial when it comes to the knowledge associated with the natural and medical sciences. Hundreds of millions of people died throughout history from diseases we have now eradicated <a href="https://www.amazon.com/Enlightenment-Now-Science-Humanism-Progress/dp/0525427570">thanks to discoveries</a> about vaccines and other miracles of modern medicine and public health. We couldn&#8217;t rely on Darwinian adaptations to secure such knowledge. We needed rigorous, institutionally supported inquiry through which we could learn truths that are often highly counterintuitive.</p><p>The real controversy concerns whether intellectual knowledge can correct costly ignorance in domains like politics and collective organisation. </p><p>Here, <a href="https://en.wikipedia.org/wiki/Intellectuals_and_Society">scepticism is understandable</a>. It&#8217;s <a href="https://global.oup.com/academic/product/power-without-knowledge-9780190877170?cc=us&amp;lang=en&amp;">much more challenging</a> to conduct rigorous science in these domains, and prominent ideas and theories often function more like intellectual fashions governed by the subjective, internal criteria upheld by the intelligentsia than like scientific hypotheses evaluated by objective measures of predictive success.</p><p>For this reason, the practical track record of these ideas has often been negative, and in some cases disastrous. Despite concerted and ongoing obfuscation of this fact by many left-wing intellectuals, the clearest example lies with Marx, who, alongside many later generations of communist intellectuals and activists inspired by his work, argued that self-interest and social competition were not essential features of human nature but contingent products of exploitative economic systems like capitalism, feudalism, and slavery. This and countless other foolish ideas, such as the notion that law and conventional morality under capitalism are mere &#8220;<a href="https://www.marxists.org/archive/marx/works/1848/communist-manifesto/ch01.htm">bourgeois prejudices</a>,&#8221; played a major and undeniable role in many of the worst human catastrophes of the twentieth century.</p><p>These catastrophes can&#8217;t be understood as moral abominations that nevertheless advanced the strategic interests of those who spread them. 
Most of the true believers who fought for communist revolutions in countries like Russia, China, Korea, and Cambodia were quickly victimised by the systems they helped create. They weren&#8217;t just playing cynical adaptive games. They were catastrophically misinformed about reality in ways that got themselves and countless others killed. </p><p>Notice, however, that one should not conclude from such disasters that intellectual ideas don&#8217;t matter. They matter enormously. But wouldn&#8217;t it be strange if they could only have negative consequences?</p><h2>The Achievements of Liberalism</h2><p>In fact, one can find many examples throughout history of intellectual achievements concerning society and politics that have had extremely beneficial consequences.</p><p>For example, as <a href="https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature">Steven Pinker</a>, <a href="https://www.amazon.com/Enlightenment-2-0-Joseph-Heath-ebook/dp/B00D5TRR7M">Joseph Heath</a>, <a href="https://www.amazon.com/Constitution-Knowledge-Jonathan-Rauch/dp/0815738862">Jonathan Rauch</a>, and many others have documented, one cannot understand the emergence of modern liberalism and the unique social and political successes of liberal states without appreciating how complex, counterintuitive intellectual discoveries informed institution-building. From at least Hobbes onwards, a tradition of intellectual inquiry&#8212;including Locke, Hume, Montesquieu, Smith, Kant, and many other Enlightenment thinkers&#8212;drew attention to two major theoretical insights.</p><p>The first was that <a href="https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature">human societies are pervaded by what social scientists now call &#8220;collective action problems&#8221;</a>: situations where individuals acting on their rational self-interest are led to engage in collectively self-defeating behaviour that leaves everyone worse off. </p><p>For example, Hobbes observed how, in the absence of enforceable laws and contracts, people who would benefit from mutual cooperation would be driven towards pre-emptive aggression, fearing exploitation or cheating by others. Insights with a similar structure were later used to explain the value of political regimes that uphold religious and political tolerance, enforce extensive systems of individual rights, protect free speech even for dangerous and heretical ideas, maintain open trade between nations, and more.</p><p>The <a href="https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature">second insight</a> was that institutions can be constructed to channel self-interest and social competition away from predation and violence towards beneficial outcomes. </p><p>For example, Smith demonstrated how regulated market competition could transform the self-interest of bakers and brewers into the efficient production of goods for others. In the political domain, Montesquieu and Madison explored how political systems could be organised to make ambition counteract ambition. And in the domain of knowledge, many scientists and philosophers <a href="https://press.uchicago.edu/ucp/books/book/chicago/T/bo37447570.html">explored</a> how formal societies and norms could be crafted to <a href="https://www.amazon.com/Constitution-Knowledge-Jonathan-Rauch/dp/0815738862">counteract individual biases</a> and allocate &#8220;credit&#8221; only to those who made genuine discoveries. 
The core discovery across these diverse contexts was that specific systems of norms and institutions can convert human self-interest and ambition into innovation, investment, knowledge, and political accountability.</p><p>In both cases, these insights were genuine theoretical discoveries that <a href="https://www.amazon.com/Constitution-Knowledge-Jonathan-Rauch/dp/0815738862">sharply</a> <a href="https://www.amazon.com/Enlightenment-2-0-Joseph-Heath-ebook/dp/B00D5TRR7M">contradicted</a> most people&#8217;s intuitions. Trusting in self-interest, social competition, and decentralised markets to coordinate economic activity; relinquishing power to people with radically different political or religious views; tolerating dangerous and offensive speech&#8212;these ideas don&#8217;t come naturally to human beings. They are insights that must be achieved.</p><p>Of course, the insights alone don&#8217;t change anything. Merely recognising the existence of a collective action problem doesn&#8217;t free one from it. And merely understanding that institutions can channel ambition into cooperation doesn&#8217;t create them. Nevertheless, precisely because these insights take humans as they are, not as we&#8217;d like them to be, and point to possibilities that leave everyone better off, not just some people, they can guide institutional design and tinkering, helping humanity gradually escape from the poverty, ignorance, and conflict that are <a href="https://www.google.com/search?q=swell+conflict+of+visions&amp;oq=swell+conflict+of+visions&amp;gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQABiABDIHCAIQABiABDIHCAMQABiABDIHCAQQABiABDIHCAUQABiABDIICAYQABgWGB4yCAgHEAAYFhgeMggICBAAGBYYHjIICAkQABgWGB7SAQg0Njc2ajBqNKgCALACAA&amp;sourceid=chrome&amp;ie=UTF-8">our default state</a>. </p><p>Notice, however, that for this progress to be possible, societies require a critical mass of people who appreciate these insights. Similarly, there must be enough people who have a grasp of the basic facts and trends demonstrating that such insights <em>work</em>&#8212;that, as a consequence of liberal institutions guided by intellectual insights, much of humanity has experienced <a href="https://ourworldindata.org/much-better-awful-can-be-better">objective progress along countless dimensions</a>, including wealth, health, freedom, opportunity, governance, and much more. </p><h2>The Desire to Fix the World</h2><p>Reflecting on this history puts pressure on Pinsof&#8217;s pessimistic judgement that nobody really wants to fix our broken world.</p><p>Once again, there is more than one grain of truth here. Humans are unavoidably self-interested and competitive, and altruistic motivations are inevitably <a href="https://www.conspicuouscognition.com/p/strategic-altruism-the-machiavellian">limited and accompanied by a large dose of selectivity and hypocrisy</a>. This is what we should expect on evolutionary grounds, and it is confirmed by extensive historical evidence, not least the many examples where revolutionaries championing justice have <a href="https://www.newyorker.com/magazine/2024/09/16/are-your-morals-too-good-to-be-true">quickly turned into despots after taking power</a>.</p><p>Nevertheless, the historical record suggests that the deep human craving for esteem and honour can also be channelled into genuinely noble pursuits. 
As we created liberal societies that <a href="https://en.wikipedia.org/wiki/Nonzero:_The_Logic_of_Human_Destiny">increased the scale of cooperation</a> and the costs of predation, we also created conditions that made the pursuit of prestige&#8212;of admiration and deference&#8212;more profitable. As Will Storr documents in <em><a href="https://www.amazon.com/Status-Game-Position-Governs-Everything-ebook/dp/B08H7Y414K">The Status Game</a></em>, this channelled insatiable human ambition and social competition towards impressing others through demonstrations of competence and virtue, fuelling modern science, innovation, and social justice.</p><p>We reward those who try to fix the world, produce novel insights, and advance other people&#8217;s interests. At the same time, we are sensitive to the possibility that such motivations aren&#8217;t genuine&#8212;that people care only about the personal rewards, not the high ideals. Nichola Raihani calls this the &#8220;<a href="https://www.amazon.com/Social-Instinct-Cooperation-Shaped-World/dp/1250262828">reputation tightrope</a>&#8221;: to earn a noble reputation for performing good deeds, those deeds must flow from the right motives, not reputational ones.</p><p>As a consequence, in societies that consistently reward prosocial behaviour, people tend to internalise the motivation to help others. The best way to convince others that you want to help them and fix our broken world is to develop a genuine passion for doing so. </p><p>Such passions are <a href="https://www.conspicuouscognition.com/p/strategic-altruism-the-machiavellian">never pure or extremely deep</a>. They must be scaffolded by the right incentives, and they can disappear if incentives suddenly change&#8212;hence the many justice-championing revolutionaries who lose their love of humanity when they acquire power. But they are genuine and sincere nonetheless, and you can&#8217;t understand humanity&#8217;s progress over recent centuries without appreciating their reality&#8212;from the scientists and doctors who devoted their lives to understanding and combating disease, to the reformers who fought for social justice against slavery and oppression, to the entrepreneurs who created technologies that lifted countless people out of poverty.</p><h2>Conclusion</h2><p>The truth is messy and complex. We are rational creatures whose apparent &#8220;stupidity&#8221; is often a symptom of <a href="https://www.amazon.com/Hidden-Games-Surprising-Irrational-Behavior/dp/1541619471">hidden strategies</a>, but we are also maladapted to modernity&#8217;s vastness and complexity. In this strange new world, ignorance is our default, our intuitions mislead us, and the information environment exacerbates our confusion. To escape this bleak situation, we must <a href="https://josephheath.substack.com/p/populism-fast-and-slow">unlearn our &#8220;common sense&#8221;</a>. We need to be enlightened by insights and knowledge that only systematic, intellectual inquiry can provide. </p><p>Such inquiry has demonstrably improved the human condition. Liberal norms and institutions, products of hard-won, counterintuitive discoveries, function to channel our self-interest and ambition into cooperation and progress, helped along by a craving for prestige that can be&#8212;and has been&#8212;<a href="https://www.amazon.com/Economy-Esteem-Essay-Political-Society/dp/0199289816">directed towards noble pursuits</a> that have made the world measurably better. 
</p><p>And yet, at present, <a href="https://www.persuasion.community/p/its-the-internet-stupid">a shocking number of people are ignorant of this progress</a>, and of the insights that underpinned it. If you read <a href="https://en.wikipedia.org/wiki/Factfulness">survey data</a> or listen to the <a href="https://trumpwhitehouse.archives.gov/briefings-statements/the-inaugural-address/">speeches</a> of some of the West&#8217;s most popular politicians, you discover that many people sincerely believe that things have been getting worse. </p><p>This is a big misunderstanding. </p><p>To correct it, we must insist on the value of intellectual insights and carefully collected data. We must acknowledge that too many people are ignorant and confused about the world they inhabit, and celebrate those who aim to change that. </p><h1>Further Reading</h1><ul><li><p>David Pinsof is one of my favourite <a href="https://www.everythingisbullshit.blog/">writers</a> and <a href="https://osf.io/preprints/psyarxiv/e2uhc">social scientists</a>. I&#8217;m not sure how much he would ultimately disagree with my arguments here. </p></li><li><p>On the role of counterintuitive liberal insights in the Enlightenment, I&#8217;ve been highly influenced by <a href="https://www.amazon.com/Enlightenment-2-0-Joseph-Heath-ebook/dp/B00D5TRR7M">Joseph Heath</a>, <a href="https://www.amazon.com/Constitution-Knowledge-Jonathan-Rauch/dp/0815738862">Jonathan Rauch</a>, and <a href="https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature">Steven Pinker</a>. </p></li><li><p>On how status competition can be (and has been) channelled into cooperation and progress, see <a href="https://www.amazon.com/Status-Game-Position-Governs-Everything-ebook/dp/B08H7Y414K">Will Storr</a> and <a href="https://www.amazon.com/Social-Instinct-Cooperation-Shaped-World/dp/1250262828">Nichola Raihani</a>. 
</p></li><li><p>On social complexity, political modernity, and evolutionary mismatch, see <a href="https://en.wikipedia.org/wiki/Public_Opinion_(book)">Walter Lippmann</a> and <a href="https://www.amazon.com/Minds-Make-Societies-Cognition-Explains/dp/0300223455">Pascal Boyer.</a> </p></li><li><p>On why people are sincerely misled about the fact of modern progress, see <a href="https://ourworldindata.org/much-better-awful-can-be-better">Max Roser</a> and <a href="https://www.amazon.com/Not-End-World-Generation-Sustainable/dp/031653675X">Hannah Ritchie</a>.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[AI Sessions #8: Misinformation, Social Media, and Deepfakes (with Sacha Altay)]]></title><description><![CDATA[Watch now | Henry and I chat with Dr Sacha Altay about:]]></description><link>https://www.conspicuouscognition.com/p/ai-sessions-8-misinformation-social</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/ai-sessions-8-misinformation-social</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Fri, 23 Jan 2026 17:40:02 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/185546769/54fcd32476ad92f460c1548dd6f01995.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Henry and I chat with Dr <a href="https://sites.google.com/view/sacha-yesilaltay/home">Sacha Altay</a> about:</p><ul><li><p>How prevalent is misinformation?</p></li><li><p>What even is &#8220;misinformation&#8221;?</p></li><li><p>Is there a difference between politics and science?</p></li><li><p>How impactful are propaganda, influence campaigns, and advertising?</p></li><li><p>What impact has social media had on modern democracies?</p></li><li><p>How worried should we be about the impact of generative AI, including deepfakes, on the information environment?</p></li><li><p>The &#8220;liar&#8217;s dividend&#8221;</p></li><li><p>Whether ChatGPT is more accurate and less biased than the average politician, pundit, and voter. </p></li></ul><h1>Links</h1><ul><li><p><strong><a href="https://sites.google.com/view/sacha-yesilaltay/home">Sacha Altay</a></strong></p></li><li><p><strong>&#8220;<a href="https://misinforeview.hks.harvard.edu/article/misinformation-reloaded-fears-about-the-impact-of-generative-ai-on-misinformation-are-overblown/">Misinformation Reloaded? Fears about the Impact of Generative AI on Misinformation are Overblown</a>&#8221;</strong> Felix M. Simon, Sacha Altay, &amp; Hugo Mercier </p></li><li><p><strong>&#8220;<a href="https://knightcolumbia.org/content/dont-panic-yet-assessing-the-evidence-and-discourse-around-generative-ai-and-elections">Don&#8217;t Panic (Yet): Assessing the Evidence and Discourse Around Generative AI and Elections</a>&#8221;</strong> Felix M. 
Simon &amp; Sacha Altay </p></li><li><p><strong>&#8220;<a href="https://www.astralcodexten.com/p/the-media-very-rarely-lies">The Media Very Rarely Lies</a>&#8221;</strong> Scott Alexander </p></li><li><p><strong>&#8220;<a href="https://www.conspicuouscognition.com/p/how-dangerous-is-misinformation">How Dangerous is Misinformation?</a>&#8221;</strong> Dan Williams</p></li><li><p><strong>&#8220;<a href="https://asteriskmag.com/issues/11/scapegoating-the-algorithm">Scapegoating the Algorithm</a>&#8221;</strong> Dan Williams</p></li><li><p><strong>&#8220;<a href="https://www.conspicuouscognition.com/p/is-social-media-destroying-democracyor">Is Social Media Destroying Democracy&#8212;Or Giving It To Us Good And Hard?</a>&#8221;</strong> Dan Williams</p></li><li><p><strong>&#8220;<a href="https://press.princeton.edu/books/hardcover/9780691178707/not-born-yesterday?srsltid=AfmBOop_Wq4V_Llv-_MogGVJTL2VGVj1MkKOfjP2QF0E6nSRq6zzDLqx">Not Born Yesterday: The Science of Who We Trust and What We Believe</a>&#8221;</strong> Hugo Mercier</p></li><li><p><strong><a href="https://www.joeuscinski.com/">Joseph Uscinski</a></strong></p></li><li><p><strong>&#8220;<a href="https://www.science.org/doi/10.1126/science.adq1814">Durably Reducing Conspiracy Beliefs Through Dialogues with AI</a>&#8221;</strong> Thomas H. Costello, Gordon Pennycook, &amp; David G. Rand</p></li><li><p><strong>&#8220;<a href="https://www.science.org/doi/10.1126/science.aea3884">The Levers of Political Persuasion with Conversational AI</a>&#8221;</strong> Kobi Hackenburg, Ben M. Tappin, et al. </p></li><li><p><strong><a href="https://www.benmtappin.com/">Ben Tappin</a></strong></p></li></ul><h1>Chapters</h1><ul><li><p><strong>00:00</strong> Understanding Misinformation: Definitions and Prevalence</p></li><li><p><strong>04:22</strong> The Complexity of Media Bias and Misinformation</p></li><li><p><strong>14:40</strong> Human Gullibility: Misconceptions and Realities</p></li><li><p><strong>27:28</strong> Selective Exposure and Demand for Misinformation</p></li><li><p><strong>29:49</strong> Political Advertising: Efficacy and Misconceptions</p></li><li><p><strong>35:13</strong> Social Media&#8217;s Role in Political Discourse</p></li><li><p><strong>40:50</strong> Evaluating the Impact of Social Media on Society</p></li><li><p><strong>42:44</strong> The Impact of Political Content on Social Media</p></li><li><p><strong>46:57</strong> The Changing Landscape of Political Voices</p></li><li><p><strong>51:41</strong> Generative AI and Its Implications for Misinformation</p></li><li><p><strong>01:03:46</strong> The Liar&#8217;s Dividend and Trust in Media</p></li><li><p><strong>01:14:11</strong> Personalization and the Role of Generative AI</p></li></ul><h1>Transcript</h1><ul><li><p>Please note that this transcript was edited by AI and may contain mistakes. </p></li></ul><p><strong>Dan Williams:</strong> Okay, welcome back. I&#8217;m Dan Williams. I&#8217;m back with Henry Shevlin. And today we&#8217;re going to be talking about one of the most controversial, consequential topics in popular discourse, in academic research, and in politics, which is misinformation. So we&#8217;re going to be talking about how widespread is misinformation? Are we living through, as some people claim, a misinformation age, a post-truth era, an epistemic crisis?</p><p>How impactful is misinformation and more broadly domestic and foreign influence campaigns? 
What&#8217;s the role of social media platforms like TikTok, YouTube, Facebook, and X when it comes to the information environment? Is social media a kind of technological wrecking ball which has smashed into democratic societies and created all sorts of havoc? And what&#8217;s the impact of generative AI on the information environment?</p><p>Both when it comes to systems like ChatGPT, but also when it comes to deepfakes: the use of generative AI to create hyper-realistic audio, video, and images. Fortunately, we&#8217;re joined by Sacha Altay, a brilliant heterodox researcher in the misinformation space, who pushes back against what he perceives to be simplistic and alarmist takes concerning misinformation.</p><p>So we&#8217;re going to be picking Sacha&#8217;s brain and, more generally, having a chat about misinformation, social media, and the information environment. So Sacha, maybe just to kick things off: in your estimation, if we&#8217;re keeping our focus on Western democracies, how prevalent is misinformation?</p><p><strong>Sacha Altay:</strong> Hi guys, my pleasure to be here. It&#8217;s a very difficult question, because we need to define what misinformation is. So let&#8217;s first stick to the empirical literature on misinformation and look at the scientific estimates. There are basically two or three ways to define misinformation. One of them is to look at fact-checked false news.</p><p>So, false news that has been fact-checked by fact-checkers as being false or misleading. And by this account, misinformation is quite small on social media like Facebook or Twitter. It&#8217;s between 1 and 5% of all the content, or all the news, that people come across. So according to this definition, it&#8217;s quite small. There is some variability across countries. For instance, it seems to be higher in countries like, I don&#8217;t know, the US or France than in the UK or Germany.</p><p>There is another definition which is a bit more expansive, because the problem with fact-checked false news is that you rely entirely on the work of fact-checkers, and of course fact-checkers cannot fact-check everything, and not all misinformation is news. So you see the problems. Another way is to just look at the sources of information and classify them based on how good they are: basically, how much they share reliable information, how much they follow good journalistic practice, et cetera. And the advantage of this technique is that you can cover a much broader range, because you can have, I don&#8217;t know, 3,000 sources of information, and that broadly covers most of the information that people see. The definition here is just misleading information that comes from sources judged as unreliable. And by this definition, misinformation is also quite small. Again, it&#8217;s about 1 to 5% of all the news that people encounter.</p><p>But then of course, the problem is that not all the information people encounter comes in this form. For instance, some of it can come in the form of images and all sorts of other things. And so this broadens the definition of misinformation. So some people think that when you broaden this definition, you have much more misinformation. My reading is that when you broaden this definition, you actually include so much more information that you increase the denominator. 
So of course, there&#8217;s going to be more misinformation, but because the denominator is larger, the proportion is going to be pretty much the same. But that&#8217;s an empirical question. So to sum up, let&#8217;s say that it&#8217;s smaller than people think, according to the scientific estimates.</p><p><strong>Henry Shevlin:</strong> If I can just come in here: a point that you&#8217;ve emphasized in our conversations, Dan, and that I think Scott Alexander has also emphasized in a great blog post called <em>The Media Very Rarely Lies</em>, is that a lot of what people think of as misinformation is just true information selectively expressed or couched in a way that naturally leads people to form false beliefs, but doesn&#8217;t involve the presentation of falsehoods. Does that feature in any of these more expansive definitions of misinformation? Is it possible to create definitions that can capture this kind of intentionally deceptive but not strictly false content?</p><p><strong>Sacha Altay:</strong> I&#8217;d say that when you look at the definitions based on sources, if a source is systematically biased and systematically misrepresents evidence, it is going to be classified as misinformation. I think the problem, and the more subtle point, is that these sources are not very important, because people don&#8217;t trust them very much. The bigger problem is when much more trusted sources with a much larger reach, like the BBC or the New York Times, which are accurate most of the time, get things wrong on certain systematic issues. And that&#8217;s the bigger issue: because they are right most of the time, they have a big reach and they have big trust, but they are wrong sometimes. And that&#8217;s the problem.</p><p><strong>Dan Williams:</strong> But just to focus on that observation of Henry&#8217;s: you might say, well, they&#8217;re accurate most of the time. But nevertheless, you can have a media outlet which is, strictly speaking, accurate most of the time with every single news story that it reports on, yet which, because of the ways in which it selects, omits, frames, packages, and contextualizes information, nevertheless ends up misinforming audiences, even if every single story it reports on is, on its merits, factual and evidence-based.</p><p>I mean, the way that I understand what&#8217;s happening in this broader debate about the prevalence of misinformation is that round about 2016, when we had Brexit in the United Kingdom and then the first election of Donald Trump, there was this massive panic about misinformation, because many people thought maybe that&#8217;s what was driving a lot of the support for what gets called right-wing authoritarian populist politics. And around that time, when people were using the term misinformation, they were thinking of fake news in the literal sense of that term: false, outright fabricated information presented in the format of news. 
And as you pointed out, when researchers then looked at the prevalence of that kind of content, which for the most part you don&#8217;t really find in establishment news media (there are always going to be exceptions), they found that that stuff is pretty rare.</p><p>And then one of the responses to that is to say: okay, if you&#8217;re only looking at outright fake news, then you&#8217;re missing all of these other ways in which communication can be misleading, by being selective, by omitting relevant context, through framing, through subtle ideological biases.</p><p>And then my view on that is: well, once you&#8217;ve expanded the term to that extent, and you&#8217;ve got this really elastic, amorphous definition, it becomes analytically useless. You&#8217;re just bundling together so many different things. And that kind of content is also really pervasive, in my view, within many of our establishment institutions, including within the social sciences. But Sacha, it sounds like you don&#8217;t necessarily want to endorse that last point. You seem to be thinking that even if you do use this very broad definition of misinformation, we can still say that it&#8217;s a pretty fringe or pretty rare feature of the information environment. Is that fair? Am I understanding you right? Or is there something different going on?</p><p><strong>Sacha Altay:</strong> I think I would agree with you that if the simple fact of framing information or having an opinion counts as misinformation, then it&#8217;s everywhere. Like any scientist, even in the hard sciences: they have some theories that they prefer, they are more familiar with certain frameworks, and so they are going to be biased anyway. Scientists are humans; they are biased. But calling physics or the theory of relativity or whatever misinformation because it omits certain facts that it cannot accommodate, I think that&#8217;s far-fetched. I think it goes too far. So yes, I would agree that if you use this broad definition of misinformation, then it&#8217;s very widespread. But then even theories in physics would be misinformation, because they cannot be completely objective.</p><p>I think science works not because individual scientists are perfect, or even because one theory is perfect, but because as a whole, and as an exercise in arguing, we get better and a little bit closer to the truth. But still, we are not getting at the truth, and we cannot avoid the mistakes that you&#8217;re pointing to.</p><p><strong>Henry Shevlin:</strong> If I can just push back a tiny bit: obviously there&#8217;s this point that all theory is value laden, the physics point, which I think is maybe true but not very interesting. But I think there is maybe something in the middle here that is what I worry about, which is cases where there might be quite deliberate pushing of an agenda: a realization by a media provider that they are generating maybe inaccurate views, but doing so just through reporting factual things.</p><p>So one example, Dan, that you&#8217;ve given before is that most of what we think of as misleading anti-vax discussion just reports on true, factually accurate but rare vaccine deaths, but reports on them in a very regular fashion. 
In the same way, you might think that selective reporting of certain kinds of violent incidents, whether it&#8217;s terrorism or police shootings, systematically leads the public to overestimate the incidence of these kinds of phenomena, or increases worries about their prevalence, in a way that I think is perhaps worrying and politically objectionable, right? I think we might say: hang on, it is bad that we give so much press coverage to event type X rather than event type Y, and we know that this leads the public to overestimate the prevalence of event type X compared to event type Y. So I think there&#8217;s something in between the &#8220;even physics is biased&#8221; view and the view of misinformation as, strictly speaking, lies: a kind of third category. I defer to you both as misinformation experts, but it seems that that is a worrying category.</p><p><strong>Sacha Altay:</strong> I think you&#8217;re totally correct. And that&#8217;s what the field of misinformation has been proposing: for instance, classifying headlines based not on whether they are true or false, but on whether they will create misperceptions after you have read them. So researchers are saying, for instance, that we should classify as misinformation headlines such as &#8220;a doctor died a week after getting vaccinated and we are investigating the cause.&#8221; And I disagree with this; I disagree with the idea that we should classify such headlines as misinformation.</p><p>What you were suggesting, Henry, was a bit different: that it needs to be systematic. If you systematically misrepresent vaccine side effects, then it becomes problematic. But reporting on vaccine side effects and their possible negative consequences is normal. I think it&#8217;s healthy that news outlets are able to talk about and cover the negative effects of vaccines, even if, after reading the headlines, you end up with more negative opinions about vaccines than the science supports. They should be able to do that, and they should do that. But if it&#8217;s systematic, as you say, I think it becomes more problematic. And when the bias is very strong, then under some of the source-based definitions of misinformation, outlets like Breitbart, which are systematically and extremely biased, would be classified as misinformation sources.</p><p><strong>Dan Williams:</strong> One of the worries that I have, though, is: who decides what constitutes systematic bias, and bias about what? I think there&#8217;s a real kind of epistemological naivety that I often encounter with misinformation researchers, where it&#8217;s like: you&#8217;re reporting accurate but unrepresentative events when it comes to vaccines, so we can call that misinformation. And then it&#8217;s like, well, as Henry mentioned, what about police killings of unarmed black citizens in the US? There&#8217;s a vast amount of media coverage of those sorts of events. Someone might argue that they are, statistically speaking, rare and unrepresentative, and that large segments of the public dramatically overestimate how pervasive those sorts of occurrences are.</p><p>And you can go through many, many examples like that. 
And for me, the lesson to draw from that is not that there are no differences in quality between different media outlets in the information environment (of course there are), but I also think there&#8217;s such a thing as politics and there&#8217;s such a thing as science, where you&#8217;ve got scientists who attempt to acquire a kind of objective intellectual authority on certain things, and we should be very careful not to blur the distinction between those two things.</p><p>When we&#8217;re talking about media bias in this really expansive way, where we&#8217;re not saying, okay, you&#8217;re just making shit up, but we&#8217;re saying you&#8217;re being selective in terms of which aspects of reality you&#8217;re choosing to cover, that, for me, is a really important debate. But it&#8217;s a debate that happens within the context of politics and democratic deliberation and argument. And I sometimes encounter misinformation researchers who treat it as if it&#8217;s a simple technocratic, scientific question: as if we can quantify the degree to which the New York Times is biased, or objectively evaluate the degree to which different kinds of outlets approximate the objective truth in their systematic coverage. And I get a little bit squirmy when we get to that point, because I think that just collapses the distinction between politics, with all of its messiness and complexity, and science, which I think should aspire to a kind of objectivity that gets lost when we start making these really expansive judgments.</p><p>I think we&#8217;ll probably circle back on this a few times as we go through this debate. But Sacha, you&#8217;re also somebody with very interesting views not just on this question of the prevalence of misinformation, but also on human belief formation and the extent to which, in your view, lots of people, both in popular discourse and in academia, overestimate the gullibility of human beings when it comes to exposure to false or misleading content. So do you want to say a little bit about your view concerning human gullibility?</p><p><strong>Sacha Altay:</strong> Yeah, I just wanted to finish the last point first. You know, we are criticizing definitions of misinformation, but in media and communication studies, people have been studying media bias, framing, and agenda setting for a long time. There are very old theories of media and of how it can misinform the public in subtle and indirect ways. And all of that has kind of been ignored by misinformation research. But now I feel like misinformation research is catching up and saying: actually, we should go back to these theories. And I think that&#8217;s good. I just wanted to point that out.</p><p>And regarding gullibility: yes, I think the idea is quite popular that people, and large complex events like Brexit, Donald Trump, or whatever, are driven by people being irrational, or gullible in particular. By gullible, I think what people often mean is that they are too quick to accept communicated information: social information that they see out there in the world, in the news, communicated by others. 
And I think that the scientific literature shows something very different.</p><p>For instance, there is a whole literature on social learning: how people learn from their own experiences and their own beliefs compared to communicated information, social information, and advice. And the consensus in this literature is that people underuse social information. They do not overuse it; they underuse it. And they would be better off at many kinds of tasks if they listened to and weighed other people&#8217;s opinions and beliefs more than their own. I mean, it makes sense: basically, we trust ourselves, our intuitions, and our experiences much more than those of others.</p><p>So that&#8217;s kind of a consensus. There are many kinds of tasks where, say, you ask people: oh, what&#8217;s the distance between Paris and London? You say 300 kilometers; another participant says 400. And you&#8217;re not going to take other people&#8217;s estimates into account as much as your own intuition, even though you have no reason to be an expert on these kinds of geographical distances. But you still trust yourself more.</p><p>And there are also many theories and mechanisms demonstrated in political communication and media studies which suggest that people put a lot of weight on their own priors and their own attitudes when they evaluate and choose what to consume, which greatly reduces any kind of media effects or any kind of outside information. People are not randomly exposed to Fox News: they turn on the TV and they select Fox News. And then people selectively accept or reject the information they like the most. So when you take all that into account, selective exposure, selective acceptance, and egocentric discounting, it complicates the claim that humans are gullible.</p><p><strong>Dan Williams:</strong> Yeah, so there&#8217;s this popular picture of human beings as credulously accepting whatever content they stumble across on their TikTok feed. Although when I say human beings, it&#8217;s always other human beings, right? This is another point that you make with the third-person effect. Nobody really thinks of themselves as being gullible and easily influenced by false and misleading communication. But when it comes to other people, there&#8217;s this intuition that, yeah, people are just being brainwashed en masse by the lies and falsehoods and absurdities uttered by politicians and encountered in their media environment.</p><p>And your point is: no, actually, if you look at the empirical research, it doesn&#8217;t really support that at all. If anything, people put too much weight on their own intuitions, their own priors, their own experientially grounded beliefs relative to the information that they&#8217;re getting from other people. So rather than thinking of many of our epistemic problems as being downstream of gullibility, we should think that in some ways there&#8217;s the opposite problem: people being too mistrustful, too skeptical of the content that they&#8217;re coming across. Is that a fair summary of your perspective?</p><p><strong>Sacha Altay:</strong> Couldn&#8217;t have said it better.</p><p><strong>Henry Shevlin:</strong> If I can just raise one question here, about your brilliant paper, the one published with Knight Columbia. 
You go through all these different misconceptions about how easily influenced people are by different sources: by their peers, by the media, by the news. But this does prompt the question: where do people&#8217;s beliefs actually come from?</p><p>You mentioned people&#8217;s priors and people&#8217;s intuitions, but presumably people aren&#8217;t born with these intuitions; they are formed from somewhere, through certain kinds of processes. So I&#8217;m just curious if you have any thoughts on where people&#8217;s views come from. Because obviously that would suggest, well, that&#8217;s the place you go if you want to influence people: you intervene on whatever is causing this fixation.</p><p><strong>Sacha Altay:</strong> I mean, much of my view on beliefs comes from Dan Sperber and Hugo Mercier, who have these theories on reasoning and the roles of beliefs. And so basically, to answer your question, I think a lot of people&#8217;s beliefs are downstream of their incentives and the intuitions they have about the world. Take vaccines, for instance. Vaccines are profoundly counterintuitive. It&#8217;s very difficult, intuitively, to like vaccines: first there&#8217;s a needle that goes into your arm, there&#8217;s a little bit of blood, you think that there are some kind of pathogens inside the vaccine. It&#8217;s not something that&#8217;s very intuitive. So first I would say that the attitudes (not necessarily the beliefs) people have about vaccines largely come from these very general intuitions that they have about contagion, about infection, and about all these things.</p><p>And then the beliefs: well, people need beliefs to justify their attitudes. So if your doctor asks whether you want to get vaccinated, and you don&#8217;t really want to get vaccinated, you can say you&#8217;re scared of needles. But if there are also some widely available cultural justifications, like &#8220;vaccines cause autism,&#8221; maybe you&#8217;re going to jump on one of those. Maybe you&#8217;re not going to jump on it, because maybe you&#8217;re smart and you know it&#8217;s false, et cetera. But you need justifications. And so I think a lot of people&#8217;s beliefs come from this need to find justifications. And I think that&#8217;s also why, on many topics, people don&#8217;t have that many beliefs: often people don&#8217;t really need to justify many of their attitudes. And there&#8217;s a lot of work, for instance, in political science on how surveys kind of create beliefs in people, because people have intuitions and vague opinions about all sorts of stuff, but when you ask them, they have to fix on an answer, and in some sense that creates the beliefs.</p><p>So yeah, I would say beliefs mostly come from the prior attitudes people have and the incentives they have to act in the world.</p><p><strong>Henry Shevlin:</strong> Okay, but just to push a little bit harder there: with the prior beliefs, I think we&#8217;re still kicking the can down the road a little bit. Incentives I get. Incentives seem genuine and explanatory here, but presumably it&#8217;s not the case that you can predict people&#8217;s vaccine attitudes from the degree of phobia they have towards needles, right? Or at least, even if that is predictive, and I don&#8217;t know if it is, it seems like there&#8217;s more going on there. 
I don&#8217;t want to give people too much credit, and I think that&#8217;s the danger of saying: oh, people&#8217;s beliefs perfectly track their own incentives. I can totally agree that incentives play a role, but just think about our own peer groups, right? I disagree with the political views of a lot of my peers, despite us being in the same socioeconomic class, despite us working in the same industry, despite us having, you know, broadly similar interests, I would have thought. So I can see incentives carrying us some of the way, but they don&#8217;t completely close the mystery here.</p><p><strong>Sacha Altay:</strong> No, of course, of course. Take the example of vaccines. I think most people who get vaccinated just get vaccinated because they trust institutions and they trust their doctors. Maybe they have seen their doctors for 20 years, their doctors tell them to get vaccinated, and they do it. The main explanatory factor here is just that they trust some institutions, some experts, who tell them to do something, and they do it.</p><p>You wanna jump in, Dan?</p><p><strong>Dan Williams:</strong> Yeah, I was just going to say: it&#8217;s possible to think, and as I understand you, Sacha, this is your view, that we overestimate the degree to which people are influenced by whatever content they happen to stumble across in their media environment, or the viewpoints that they happen to encounter in their social network&#8212;that we tend to assume people are more gullible about those things than they really are.</p><p>It&#8217;s possible to think that, but also to accept that, of course, we are going to be influenced in complex ways by the information we get from people that we trust, from sources that we trust, from our upbringing, from our social reference networks, and so on. So the idea that we&#8217;re not gullible and not credulous shouldn&#8217;t be conflated with the idea that we&#8217;re somehow born with our entire worldview from the start, in ways that aren&#8217;t influenced by the media environment and by the testimony that we encounter. Clearly we&#8217;re massively influenced by what we hear from other people. But my understanding of the perspective you&#8217;re outlining, Sacha, is that the process whereby we build up beliefs about the world goes something like this. Firstly, there are some things that everyone kind of finds natural: maybe there&#8217;s something weird about vaccines when you first hear about the concept, and most people just have an instinctive aversion to it; but also things like &#8220;my group is good, the other group is bad,&#8221; or certain kinds of xenophobic tendencies that come naturally to people, and so on. So there are certain ways of viewing the world and certain things which are intuitive, maybe as a consequence of our evolutionary history, and that then interacts in very complex ways with our experiences, with our social identities, with our personality, with the people and institutions that we trust, those we mistrust, and so on and so forth. You can accept all of that, and the role of social learning within it, whilst also thinking that people tend to exaggerate how gullible, how credulous people are when it comes to incidental exposure to communication. Is that your view, Sacha? 
Is that a kind of accurate representation of it?</p><p><strong>Sacha Altay:</strong> Yes, yes, yes it is. I think when we change our minds drastically, it&#8217;s because we have a lot of reasons to trust the source. If the BBC says that the Queen died, and the Guardian says it too, we&#8217;re going to update our beliefs immediately. And most people, even the people who distrust the BBC, are going to update their beliefs right away.</p><p>And it&#8217;s the same if, I don&#8217;t know, my wife tells me that there is no more milk in the fridge and I have to buy some. I&#8217;m going to update my beliefs about the milk in the fridge and buy some. So of course we update our beliefs based on the information that&#8217;s provided to us. It&#8217;s just that we do so, I think, in ways that are broadly rational: not in the sense that they&#8217;re perfect, but in the sense that they serve our everyday actions and our incentives, what we want to do in the world, very well. So that&#8217;s also the sense in which I mean it: when we do update our beliefs, we do it quite well, not to discover the truth, but at least to get along in the world.</p><p><strong>Dan Williams:</strong> And could you maybe say a little bit more about this point concerning selective exposure? So, the fact that when people are engaging with media, with the viewpoints of pundits and politicians and so on, a lot of that is, quote unquote, demand driven, in the sense that people have strong attitudes, they&#8217;ve got strong political and cultural allegiances, they identify with a particular in-group, they want to demonize those people over there or that kind of institution, et cetera. And it&#8217;s these pre-existing attitudes, interests, and allegiances, which often build up in complex ways over a long period of time, that cause people to seek out information, and often misinformation, consistent with their attitudes and their interests. That&#8217;s rather than the picture I think people sometimes have, which, as Joe Uscinski puts it, is that they&#8217;re walking along and they slip on a banana peel: they encounter some conspiratorial content on social media, and now they believe in QAnon or Holocaust denial. That&#8217;s just not the way that it works. Could you say a little bit more about selective exposure and the demand side of misinformation?</p><p><strong>Sacha Altay:</strong> Yeah. We know, for instance, that with misinformation on social media platforms like Facebook or Twitter (Twitter in particular has been the most studied), you have a very small percentage of individuals who account for most of the misinformation that is consumed and shared on these platforms. And it&#8217;s very small: around 1% of users, or less, account for most of it.</p><p>And these people are misinformation sharers and consumers not because they have special access to misinformation, because they have a lot of money or whatever, but simply because they have some traits that make them more likely to seek out such content, such as having low trust in institutions and being politically polarized. And because of these traits, because they don&#8217;t trust institutions, they are looking for counter-narratives to the mainstream narratives they find in mainstream media. 
Because the thing is that these people who consume and share most of the misinformation on social media, and give us the impression that there is a lot of it and that many people believe it&#8212;these people are also exposed to mainstream narratives. It&#8217;s just that they decide to reject the mainstream narratives, and instead of trusting what the TV tells them, they go on some Telegram channels, they go on some weird websites, to learn about the world and do their own research.</p><p>And this is, I think, some of the strongest evidence, at least in the case of misinformation, that the problem is not the supply of misinformation, because misinformation is actually quite easy to find, quite free, quite accessible. It&#8217;s super easy to find misinformation online, but most people consume very little of it. Instead, you have a small group of people, very active and very vocal, who consume most of it. And they have low trust in institutions and are highly polarized. I think this matters a lot for how we want to tackle the problem of misinformation. The problem is not that a majority of the population is gullible, such that we should stop them from being exposed to misinformation; rather, some people have very strong motivations to seek out some specific content, and I think we should address these motivations, because addressing the supply is impossible. I&#8217;m not against content moderation and stuff; I think we should try to have an information environment where the quality of information is the highest possible, et cetera. But if people have motivations to look for, to pay for, or to consume some content, then that demand will be met: people will create such content.</p><p><strong>Dan Williams:</strong> Before we move on to these issues about social media and AI, because I really want to get to those, could we touch on another point connected to this issue about gullibility, where I think there&#8217;s a massive gap between common sense, conventional wisdom, and what the empirical research shows, and which you&#8217;ve written a lot about: the impact of things like political influence campaigns and commercial advertising. You go into that in your paper on generative AI and why you think there&#8217;s been a lot of unfounded alarmism about it, which we&#8217;re going to get to shortly. But even separate from the issue concerning AI, could you say something about what the evidence we have actually shows when it comes to the impact of political and commercial advertising campaigns?</p><p><strong>Sacha Altay:</strong> So political scientists have been studying this for a while, because in the US there is so much money being spent on political advertising, especially in presidential elections. And so the best studies come from political science. To give you an example, some of them have up to 2 million participants who are exposed to hundreds or thousands of ads over long periods of time, like months.</p><p>So these are the kinds of studies being done in this field: very large samples, long periods of time, et cetera. And the consensus is that political advertising in presidential elections in the US has very, very small effects. The effects are not zero, because of course, with such big sample sizes, long periods of time, et cetera, you do find significant effects, but the effects are very, very small, like a fraction of a percentage point. 
And so that&#8217;s the consensus in political science in the US.</p><p>Now, the US is a bit specific, because you have Democrats and Republicans, people are socialized into these identities, and these identities are very hard to change. If you&#8217;re a Democrat, it&#8217;s very hard for you to switch and vote Republican. And of course, in the US you often have only two candidates, who are very prominent, and people hear about them all the time. So it&#8217;s difficult to move the needle. In other elections in other countries, with multiparty systems, you have more room for political advertising to have an effect. But even in these cases, even in lower-stakes campaigns with less well-known candidates, the effects are still quite small. I don&#8217;t know why we have this idea that advertising works very well and influences people, but at least when it comes to political voting, it&#8217;s just very hard to influence people&#8217;s votes. And it&#8217;s the same for marketing: online ads, like on social media, are very ineffective. The thing is that they are very cheap as well. So I don&#8217;t want to say that they are useless, because they&#8217;re actually extremely cheap, and that&#8217;s why these companies run them a lot. But they&#8217;re also extremely ineffective. And so that&#8217;s the consensus in political science.</p><p><strong>Henry Shevlin:</strong> So I had a question about this in relation to your paper again. It really paints quite a dismal view of the power of advertising in general. And yet this is a vast global industry. Is it all just founded on sand? Is it all just smoke and mirrors? Are people basically wasting hundreds of billions of dollars a year on advertising that largely doesn&#8217;t work?</p><p><strong>Sacha Altay:</strong> That&#8217;s the opinion of many people, yeah. Many people think that at least it&#8217;s overblown. I don&#8217;t want to say that it&#8217;s completely useless. Of course, if you want to buy a washing machine, they all look the same, and if they are all about the same price, and you have more information about one and the information is good and the reviews are good, you&#8217;re probably more likely to buy it. But you already wanted to buy the washing machine, and you had a price range. So at the margin, advertising can work and has an effect; it&#8217;s just that the effect is tiny. They basically calculate the elasticity: when you spend more on advertising, how much more do you sell? And the elasticity is super small. I forget the exact figure, but it&#8217;s very small.</p><p>But yeah, some people have written books about how the whole internet, and, you know, products on the internet like social media, are free because we are the product and they sell us advertising. And all of that is a bubble. Some people think that it&#8217;s completely a bubble. I don&#8217;t think it&#8217;s completely a bubble, but clearly, I think it&#8217;s overvalued. I think ads are a little bit overvalued. And I don&#8217;t think AI is going to change that much.</p><p><strong>Dan Williams:</strong> Okay, so just to summarize what we&#8217;ve got to so far. On this question of how prevalent misinformation is: if you&#8217;re focusing on fake news, it doesn&#8217;t seem to be anywhere near as widespread as many people think it is. 
Once you start stretching and expanding that definition to encompass more and more things, then yes, misinformation so defined is much more widespread and plausibly much more impactful, but it becomes so amorphous that it&#8217;s difficult to apply scientifically.</p><p>Then the second thing we talked about was this issue concerning gullibility, where in your view, Sacha, and I agree with you, even though people are obviously influenced by social learning, and there is evidence that persuasion can work and can influence what people believe, people also tend to dramatically overestimate how gullible we are.</p><p>Let&#8217;s now turn to technology and where AI is relevant. And let&#8217;s start with social media, very broadly construed. Henry, actually, why don&#8217;t I bring you in here? Because I think in a few of our previous conversations, you said something like the following, and you can tell me whether I&#8217;m remembering correctly. You said we can contrast two kinds of cases: video games and social media. In both cases, there was this big societal panic. Video games are going to make people really violent: they&#8217;re going to play Call of Duty, and then they&#8217;re going to go out and start shooting people in their community.</p><p>And your view is that the evidence there is actually incredibly weak and that there&#8217;s very little to support that kind of panic. Whereas when it comes to social media, there was a lot of panic. Maybe not initially, actually: I think there was a lot of optimism about social media initially. But these days, there&#8217;s a lot of concern about social media and how it&#8217;s, you know, destroyed democracy and human civilization itself: it&#8217;s this awful thing that has had all of these terrible political consequences. And am I right, Henry, in thinking you&#8217;re actually quite sympathetic to that view about social media, even though you&#8217;re not sympathetic to the violent video games story?</p><p><strong>Henry Shevlin:</strong> Yeah, I&#8217;m glad you bring up this example. Two things. One is that I think my main point with that example is about the time course of these worries: with violent video games, we had this massive initial panic that died down as the evidence basically didn&#8217;t arrive, as we saw that there wasn&#8217;t as much cause for concern as we initially thought there was. Whereas in the case of social media, there really wasn&#8217;t that much concern at first. It was seen as, if anything, a positive technology, and concern has just grown over time. And that point about the time course of the moral panic is separate from the degree to which these worries are robust.</p><p>That said, I am more sympathetic to the idea that social media presents an array of worries. So I&#8217;m probably more sympathetic than both of you to Jonathan Haidt&#8217;s worries about the impact of social media and mobile phones on teenage mental health, which is a separate point from misinformation. I also worry about the role of social media in things like political polarization, again at least a little bit distinguishable from misinformation. But yeah, I guess I&#8217;m at least a little bit worried about the role of social media in misinformation as well.</p><p><strong>Dan Williams:</strong> Okay, I&#8217;ve got views about this that are difficult to summarize. 
Let&#8217;s stay away from teen mental health, because I think that opens up a whole can of worms. Let&#8217;s focus on the political impacts of social media, broadly construed. Sacha, my understanding of your view is that you basically think the panic over social media and its political impacts is unfounded and not well supported by evidence. Is that fair? Care to elaborate?</p><p><strong>Sacha Altay:</strong> Yeah. So I&#8217;m just going to start by mentioning the scientific literature and what I think is the best evidence that social media have weaker effects than people think. There have been many Facebook deactivation studies. Basically, you pay some participants to stop using Facebook for a few weeks, while the participants in the control group are paid either to stop for just one day or to do something else.</p><p>And in general, what these studies find is that when you stop using Facebook for a few weeks, you become slightly less informed about the news and current events, suggesting that using Facebook regularly helps you keep slightly up to date with the world and what&#8217;s going on in the news. But it also makes you slightly sadder: participants who deactivate social media, especially Facebook, for a few weeks are slightly happier. It&#8217;s not exactly clear why. It could be because they are less exposed to news, and news is sad and makes people less happy. So it could be that. And there are also many other studies on Instagram.</p><p>And basically what all these studies suggest is that the effect of social media on things like affective polarization, political attitudes, and voting behaviors is either extremely small or null. But now that I&#8217;ve mentioned this literature, I want to mention that there are many critics of it and of these experimental designs. For instance, even the longest RCTs last around two months, and of course two months is super short at the scale of social media, which has been around for years. You could imagine that it takes a few years for the effects of social media to kick in.</p><p>You can also point out that, of course, participants stop using social media for a few months, but the world continues using social media; people around them continue using social media. So you have these possible network effects. And of course, the effects of social media are not just individual, they are collective, and so these RCTs are kind of missing the point: they cannot capture the collective and more systemic effects that social media could have. So that&#8217;s another critique. And there are many other critiques.</p><p>But I still think that what these RCTs show is that social media probably has small effects. And there are studies, in collaboration with Meta, showing that if you give Facebook or Instagram users a chronological feed, that is, instead of showing them the most engaging content you show them the most recent content, they spend much less time on the platform. The time they spend on the platform is reduced by a third.</p><p>And it has a lot of effects on in-platform behaviors, but very few effects on out-of-platform behaviors, attitudes, et cetera. 
So we should take these studies with a grain of salt, but I still think they show us that the effects are probably not as big as at least the most alarmist takes suggest.</p><p><strong>Dan Williams:</strong> Hmm. I think maybe another critique that some people have raised is that these studies, especially that set of Facebook and Instagram studies that you mentioned, were conducted after there had been a lot of adjustments to the platforms and the algorithms in light of concern about things like misinformation and their effect on polarization and so on.</p><p>So that just goes to say, as you say, many people have generated lots of different criticisms of what we can really infer from these studies. I mean, my own view is they tell us something, which is that the most simplistic, alarmist stories about social media don&#8217;t seem to be supported by the current state of really high-quality empirical research. I don&#8217;t think they provide evidence strong enough to cause someone who goes into this with a really strong prior that social media is having all of these catastrophic consequences to update that much. And that then suggests that how you view this topic is going to be shaped by a lot more than just the empirical research itself. So in your case, I assume that you&#8217;ve got these general priors about how media doesn&#8217;t have huge effects on people&#8217;s attitudes and behaviors, and how these things are shaped by all sorts of complex factors other than media. Am I right in thinking that&#8217;s doing a lot of the work in your skeptical assessment, over and above these studies themselves?</p><p><strong>Sacha Altay:</strong> Yes, but I would say the strongest argument in favor of my position is maybe the descriptive data on what people do on social media and how often they encounter political content. Because to be politically polarized, you need to be exposed to political content. And there are more and more descriptive studies, some of them on the whole US population, showing that political content is less than 3% of all the things that people see on social media.</p><p>So less than 3% of what people see on Facebook is either political or civic content. And there are also super nice recent studies that are using a novel methodology, which is basically recording what people see on their phones. So a lot of participants download an app, and the app records what people see on their phones every two seconds or so. And these studies have shown that in the last US presidential election, for instance, people were exposed to content about Donald Trump for less than three seconds per day. So during the US presidential election, people saw so little political content on their smartphones that it&#8217;s ridiculous, and it&#8217;s so small that in my opinion it can only have small effects.</p><p>Then again, a contrary argument could be that that&#8217;s the average, and they do find that a small minority is exposed to a lot of political information. But then again, who are these people? They have attitudes, they have priors, and they have motivations: they are partisans. And yes, misinformation or content on social media can reinforce, exacerbate, or radicalize them a little bit. But for the mass public, who are generally not that interested in politics,
I don&#8217;t think it can have very strong effects.</p><p><strong>Dan Williams:</strong> Yeah, I just want to double click on that, and then I&#8217;ll bring Henry in. One other stylized fact we should flag, which I think is surprising to some people, is that if you&#8217;re the kind of person who cares about politics and follows the news carefully, and you read political commentary and so on, you are extremely unrepresentative of the average person. Most people don&#8217;t follow politics. They don&#8217;t follow current affairs closely at all.</p><p>And if you ask people very, very basic questions about politics, they are shockingly uninformed. That is shocking, at least, relative to the perspective of someone like us who follows politics very closely. And that&#8217;s another thing which I think people who are highly politically engaged often get wrong when they&#8217;re thinking about this topic. If the picture in your head when you&#8217;re thinking about social media and politics is that the person who&#8217;s constantly posting on X about politics is representative of ordinary people, you&#8217;ve got an incredibly skewed, misleading picture.</p><p>Okay, there&#8217;s tons more to say here. Henry, did you want to come in with any pushback or any further articulation of your perspective?</p><p><strong>Henry Shevlin:</strong> Yeah, this is all really interesting and helpful. I guess the only thing I&#8217;d say is that it seems to me social media has also just changed the kinds of voices that get platformed in the first place, in a way that&#8217;s both positive and negative. Think about things like the rise of Tumblr and its contribution to a lot of so-called woke discourse, particularly in the late 2010s. And we could equally say the same thing about, for example, reactionary or neo-reactionary bloggers like Curtis Yarvin and so forth. I think these are the kinds of voices that probably just wouldn&#8217;t have found an outlet in the media ecosystems that preceded social media. Maybe that doesn&#8217;t matter, right? If none of this stuff actually impacts people&#8217;s views that much. But it does seem like an interesting shift in our broader political media landscape that social media has changed not just how much time people spend interacting with content, or the way in which they do so, but also the kind of content that gets out there in the first place. Does that figure at all in the impact of these things?</p><p><strong>Dan Williams:</strong> Sacha, before I bring you in, I just want to say one really quick thing about that, which is that the reference to Curtis Yarvin there made me laugh, because I think he&#8217;s an example where the overwhelming majority of people won&#8217;t be aware of him, but he probably is influential within the ideas and intelligentsia space of the political right. And there&#8217;s this idea that social media, and the affordances and incentives of social media, change which voices become influential and prestigious.
I think that&#8217;s such an interesting and important point. But Henry, I thought you were going in the direction of saying that someone like Donald Trump can absolutely kill it on social media because he&#8217;s so good at tapping into the attention-economy dynamics there, in a way that would make him much less successful if we lived in a Walter Cronkite kind of media environment.</p><p>But there&#8217;s this other aspect, which is the decline of elite gatekeeping that is characteristic of social media, and it&#8217;s via that route, I think, that people like Curtis Yarvin can enter the conversation in a way they probably wouldn&#8217;t have been able to if you go back to the 90s or 2000s. Sorry, I just had to double click and say that. Sacha, did you want to respond to Henry&#8217;s point?</p><p><strong>Sacha Altay:</strong> No, yeah, I agree. I just also want to say that we often mention Trump as the example of someone we don&#8217;t like who benefits from social media, but there are also people we like who benefit from social media, like Barack Obama. He used Facebook a lot during his campaign. He&#8217;s super charismatic. And if he was running today, he would do great on TikTok. He still does great on TikTok. He&#8217;s so charismatic, so good. So it doesn&#8217;t always benefit the worst actors.</p><p>And I want to say, it&#8217;s a very important point that social media may also shape how politicians communicate. There are some studies, for instance, in France on how short-format video platforms like TikTok are changing how members of parliament speak in parliament. There are studies showing that, especially at the extremes and especially on the extreme right, they are giving more and more speeches with more emotion and more buzzwords. And the idea is that they then post these speeches on social media, and the more buzzwords and the more emotion, the more they go viral. So their goal is not to convince other members of parliament, but just to create buzz on social media and reach some parts of the population.</p><p>Then it&#8217;s a normative question whether that&#8217;s good or bad. Probably using emotion like that is bad. But you could also imagine that speaking to the general public in a more authentic way, trying to reach people who are otherwise not interested in politics, could also be good. Of course, because it&#8217;s the extreme right, we don&#8217;t like them, and I think we have good reasons not to like them. But I think we should be careful and also think of ways in which this could be used to do good things. Although I agree that, in general, it probably hasn&#8217;t done much good. And it&#8217;s very hard to quantify.</p><p><strong>Dan Williams:</strong> Just before we move on to the topic of generative AI: my view is there&#8217;s so much uncertainty in this domain, when we&#8217;re asking these really broad questions like what&#8217;s the impact of social media on politics, that we can&#8217;t really be very confident about any view we might have. But it does seem, at least in my view, that a lot of the popular discourse and academic research has focused on things like recommendation algorithms and filter bubbles and so on, where I think I&#8217;m very close to your view, Sacha, in thinking that there&#8217;s just a lot of unfounded alarmism.
But there&#8217;s this other aspect of social media, which I think probably has been very consequential, which is just its democratizing consequences. Prior to the emergence of social media, it was a much more elitist media environment. Whereas now, anyone with a phone or a laptop can open up a TikTok account, get on X, and start posting their views.</p><p>And I don&#8217;t think you need to view that through the lens of: well, that means they&#8217;re going to start articulating their views and then persuading large numbers of people. What I think it does is allow certain views, which were systematically excluded from the media environment before the emergence of social media, to become much more normalized. And people can achieve a kind of common knowledge that others share views that used to be much more marginalized and stigmatized. So those sorts of views can end up being more consequential in politics, even though the views themselves aren&#8217;t necessarily more widespread.</p><p>And I think you find that with things like conspiracy theories. My understanding of the empirical research, again, from people like Joe Uscinski, is that the actual number of people who endorse conspiracy theories hasn&#8217;t really increased, but conspiracy theories do seem to play a more consequential role within politics, because people with really weird conspiratorial views used to be marginalized in media. Whereas now it&#8217;s very easy for them to start expressing those views online, finding people who share similar views, and coordinating with them. So conspiracy theories can play a bigger role in politics, even though this has nothing to do with, you know, mass algorithmically mediated brainwashing or anything like that.</p><p>Okay, I&#8217;m conscious of time and I really want to focus on generative AI. So there was this big panic about how, once we&#8217;ve got deepfakes and other features of generative AI, this was going to have really disastrous consequences for elections. It&#8217;s going to shift people&#8217;s voting intentions in all sorts of dangerous ways. Sacha, you&#8217;ve written a paper with Felix Simon, which we&#8217;ve already referred to, looking into the evidence on this and presenting a framework for thinking about it. What&#8217;s your take?</p><p><strong>Sacha Altay:</strong> I will start by saying there are three main arguments for why people are worried about the effect of generative AI on the information environment. The first one is that generative AI will make it easier to create misinformation and basically to flood the zone with misinformation. The second one is that it will increase the quality of misinformation: better, faster misinformation. And the last one is personalization.</p><p>Generative AI will facilitate the personalization of misinformation. I think these are the three main ones, and I can go quickly over them and argue why I don&#8217;t think they are a big deal. So, about quantity: I think that quantity does not really matter. There&#8217;s already so much information online, and we are exposed to a very tiny fraction of that information. So adding more content does not necessarily mean that people will be more exposed to it. And I think that&#8217;s particularly true in the case of misinformation, where demand plays a very important role.
And so the fact that there is more misinformation does not mean that people will necessarily consume more misinformation, just as having more umbrellas in your store does not mean that people will buy more umbrellas. There need to be other factors, like rain: if it rains more, you will sell more umbrellas. So there need to be incentives for people to demand more misinformation, to consume more of it. And that&#8217;s why I don&#8217;t think the quantity argument is very strong.</p><p>I also think the costs of producing misinformation are already extremely low. We see it with Donald Trump or whoever: they just say something that is false, and they say it with confidence, and that&#8217;s it. The costs are very low. Also, we are very imaginative as a species. Humans have come up with incredible, fascinating, engaging stories. And of course AI can improve on our inventiveness, but still, we are very good at making up stories that make us look good, that make our group look good. So I don&#8217;t think generative AI is going to help that much in creating more misinformation. Regarding quality, yeah...</p><p><strong>Dan Williams:</strong> Just to interrupt you, so that we can take these step by step. So the first worry covers generative AI both in the form of large language models and the production of text, but also deepfakes; I take it you&#8217;re including both of those categories. The worry is: well, this is going to really reduce the costs of producing misinformation, therefore you&#8217;ll get this explosion in the quantity of misinformation, and that&#8217;s going to produce all sorts of negative consequences. And your view is: well, the bottleneck that matters isn&#8217;t really quantity anyway, it&#8217;s what people are paying attention to. So you can increase the amount of misinformation as much as you want, and in and of itself, that&#8217;s unlikely to have a big impact on people&#8217;s attitudes and behaviors. Do you have any thoughts about that, Henry, before we move on?</p><p><strong>Henry Shevlin:</strong> I guess one concern would be that even though media environments are flooded with content already, and I completely agree attention is the scarce commodity, maybe you could think of generative media as allowing very niche areas to get flooded with content in a way that wouldn&#8217;t have been easy before. Here&#8217;s a silly little example, maybe an interesting example, from recent media. Some of you may have seen the anti-radicalization game that was launched in the UK about two weeks ago, which featured a character called Amelia, a purple-haired anti-immigration activist in this fictional game, who was quickly seized upon by a lot of the anti-immigration right in the UK. And now there&#8217;s a flood of AI-generated content all about Amelia, mostly making her look really cool, some of it kind of playful, some of it kind of silly. But the point is, this was just a niche news story that I think people found amusing, but I think it would have died a lot quicker had it not been for the ability of people to seize upon it and generate huge swathes of content about Amelia in a very, very short time.
So maybe there was just pre-existing demand there, but it would have been demand that was perhaps hard to meet without generative AI tools to create the content, which maybe is a difference.</p><p><strong>Sacha Altay:</strong> Yeah, that&#8217;s possible. But when you look at the memes on the internet, most of them are very cheap. It&#8217;s just an image with some text, and you just change the text a little bit. And we&#8217;re probably going to get into this, but it&#8217;s the same with deepfakes: cheapfakes are much more popular than deepfakes because they are super easy to do. You just change the date or the location of something, and boom, you have your cheapfake. And that&#8217;s why they are super popular. Yeah, I don&#8217;t know, anyway.</p><p><strong>Dan Williams:</strong> What&#8217;s the definition of a cheapfake, Sacha?</p><p><strong>Sacha Altay:</strong> A cheapfake is just a low-tech manipulation of information. You have an image, and you change the date of the image, or you change the location of the image. This is in opposition to deepfakes, which are high-tech: for instance, a fully generated image, usually sophisticated. Cheapfakes, by contrast, are very cheap: something most people can do on their computer without requiring any tech skills, basically.</p><p><strong>Dan Williams:</strong> Sorry, I think I cut you off. I just wanted to give some clarity to people who weren&#8217;t familiar with that term. Okay, so that&#8217;s quantity. And the next thing that you mentioned as a worry that many people have is quality. That is, generative AI won&#8217;t just enable us to increase the amount of misinformation but also increase its quality. And initially at least, you&#8217;re understanding quality as being different from personalization; you&#8217;re treating that separately, is that right? Okay, so surely the concern here is just that, okay, quantity in and of itself isn&#8217;t going to make a difference, but once we&#8217;ve got the capacity to generate incredibly persuasive text-based arguments and deepfakes, even if it&#8217;s true that cheapfakes can be created and can be influential in different contexts, surely the quality of the misinformation must make a big difference to how many people get persuaded by it.</p><p><strong>Sacha Altay:</strong> Yeah, I think quality is perhaps the most intuitive argument, because it&#8217;s the idea that you&#8217;re going to be able to create images or videos that are indistinguishable from real videos or images. And so of course people ask: how am I going to trust images or videos anymore if they are indistinguishable from real ones? I think that&#8217;s a very fundamental fear that people have, and it makes a lot of sense. It&#8217;s very intuitive. But I don&#8217;t find it very convincing.</p><p>I think it raises a lot of challenges, but I don&#8217;t think it raises enough challenges to be alarming. For instance, I think we have had this challenge before with photography. Since the beginning of photography, we have been able to manipulate photographs in ways that we cannot distinguish from real ones. And how did we solve this problem?
Not with technical tools or whatever, but just with social norms about not using images to mislead others.</p><p>And we have been able to create fake texts and say false things forever, and we haven&#8217;t solved that problem with some fancy tech innovation, but simply by having rules, reputation, and social norms, and by trusting people more or less based on what they have said before, on what our friends think of them, and on their past accuracy. And I think we will still be able to use all of this to help us navigate an environment in which videos could be AI-generated or could be real.</p><p>And, something I&#8217;ve mentioned before, but quite fundamental: we trust the BBC or the New York Times to be broadly accurate most of the time. We also trust them not to use AI in misleading ways and not to share deepfake footage of presidential candidates that misleads us. And I think this trust and the institutions that exist are sufficient to prevent most of the harm from this.</p><p>I think this will have effects. For instance, maybe we will be less able to trust people and sources that we don&#8217;t know. Because if we don&#8217;t have their track records, how can we trust that the information they are sharing is true, or not AI-generated? But I think that&#8217;s a very old problem and we will manage. It will make things more complex, but I think we&#8217;ll manage. Yeah, Henry?</p><p><strong>Henry Shevlin:</strong> I was going to say, though, isn&#8217;t there a worry that new technology creates normative gaps that allow for a kind of annealing or recalibration of norms? I&#8217;m thinking about something like file sharing, for example. I&#8217;m of the Napster generation, where suddenly it became possible to download music for free. And this created a whole shift in norms where, for my generation at least, this form of theft was basically just completely normalized. Hence we had advertising campaigns like: you wouldn&#8217;t steal a car, so why would you download a song or a movie? And basically pirating went from something that was niche and maybe frowned upon to something that was just completely normalized.</p><p>In the same way, I think you might worry that the ease and ubiquity of generative AI is going to shift our norms around creating fake content. And arguably we&#8217;re already seeing this. Just very recently, we had the White House itself retweeting pictures, I think of a protestor at an anti-ICE rally, where the image had been manipulated, right? And if called out on that, they&#8217;d probably say: yeah, sure, of course we play around with images; that&#8217;s what generative AI can do, that&#8217;s just the way things work these days. Which does seem like a normative shift, perhaps one partly occasioned by technology.</p><p><strong>Sacha Altay:</strong> My intuition is quite the opposite: that if anything, these new challenges from AI will instead strengthen the epistemic norms that we have. Because we want to know the truth. We don&#8217;t want to be biased. We don&#8217;t want to be misled. We don&#8217;t want to be misinformed.
And so the fact that the challenge is becoming harder, that it&#8217;s going to become harder to know whether a video is authentic or not, is going to make us harder and harsher on people who do what the White House did. We don&#8217;t know whether it was them who manipulated the image or not, but they shared manipulated images that do not portray her accurately. And I think people are going to be angry at that. I think it&#8217;s just going to raise the level of people&#8217;s expectations. They&#8217;re going to expect news outlets and people to be better. I mean, it&#8217;s just a prediction. I hope I&#8217;m right. I&#8217;m an optimist, but...</p><p><strong>Dan Williams:</strong> So can we connect that to the worry many people have about the liar&#8217;s dividend? We&#8217;ve currently got the technology to create hyper-realistic audio and video recordings which are basically indistinguishable from reality. There&#8217;s the initial worry many people have, which is: my God, people are going to become persuaded en masse that this stuff is true. And I think that&#8217;s very unsophisticated as a worry.</p><p>But then there&#8217;s another story people have, which is: okay, maybe it won&#8217;t persuade people, but now that you&#8217;ve got the capacity to create these deepfakes, politicians, elites, and other people who do shady things can use the possibility of something being a deepfake to dismiss any recording which is raised against them as evidence of them doing something shady.</p><p>And connected to that, there&#8217;s the worry that, just as consumers of content, if we now encounter any audio or video which goes against what we want to believe, we can just say: well, it&#8217;s a deepfake, I don&#8217;t have to believe it. So we&#8217;re just going to end up becoming more and more cocooned within our own belief systems, without this access to learning about the world via recordings. So there&#8217;s the liar&#8217;s dividend worry, and this general worry that this is going to obliterate the epistemic value, the informational value, of recordings. What&#8217;s your thought about those kinds of worries?</p><p><strong>Sacha Altay:</strong> First thing: I think the liar&#8217;s dividend does not hinge on AI itself, but rather on the willingness of politicians, and of some elites in particular, to lie and to evade accountability and responsibility. AI will certainly be a new weapon in the arsenal, and we saw it in the elections in 2024: many politicians have used AI to their benefit, and many politicians and elites are continuing to use it. So for sure, it&#8217;s something we should worry about and should regulate. But will it be a particularly good weapon in the arsenal? Will it be a game changer? I&#8217;m not sure. Time will tell. So far, I don&#8217;t think it has been used in particularly effective ways. I don&#8217;t think people particularly buy it. When people share something and then say, no, it was just AI, trying to use AI as the excuse, I don&#8217;t think it works very well. And I think there are going to be reputational costs for people who try to do that. We are going to remember that they tried.
And so, I don&#8217;t know. Again, time will tell. It&#8217;s an empirical question. I may be wrong. Yeah, Henry.</p><p><strong>Henry Shevlin:</strong> I was just going to chime in. I&#8217;m sure I&#8217;m not alone in having seen, on Facebook in particular, lots of cases of AI-generated media being mistaken for the real thing. I don&#8217;t want to pick on boomers too much, but it is often boomers who completely seem to buy it. You might have seen these videos of people breaking those glass bridges, videos that went viral, and lots of people, particularly, I&#8217;d say, older users, completely seemed to believe they were watching a real video.</p><p>But I guess there are two responses to that you might push back with, Sacha. One would be: well, we&#8217;re just in a transitional period, right? This is so new to a lot of people seeing this kind of content for the first time that they just aren&#8217;t aware yet that it&#8217;s possible, and they&#8217;ll adjust over time. Another would be to say: look, maybe if I&#8217;m producing a cute image of, I don&#8217;t know, a rabbit, or an image of someone breaking a bridge, or something non-political, it&#8217;s easier to convince people that that&#8217;s real than it would be to, for example, change their political views. So, I mean, either or both of those responses, whatever you&#8217;d like to go with.</p><p><strong>Sacha Altay:</strong> Yeah, I mean, my impression is that if a rando shares a video of Macron doing something crazy, people are not going to believe it. They are going to wait for France Info and the real media to cover it. Because if, I don&#8217;t know, Macron is saying we are starting a new war with some country, people are not going to believe it, even if the video is very high quality, because they know that if it happens, all the media are going to cover it. So I think in these high-stakes cases of a politician saying something absolutely crazy, people are going to be vigilant and are going to wait for the mainstream media to confirm it.</p><p>I think a lot of the AI slop that we see on Facebook, but also on TikTok, is humorous. I think there is some part of the boomers, but not just the boomers, who want to be entertained. And for entertainment, they don&#8217;t really care whether it&#8217;s true or not, whether it&#8217;s authentic or not. You can create extremely cute images of little animals doing cute stuff, and you get what you wanted: this superstimulus, super cute, super entertaining, super engaging. Whether it&#8217;s authentic or not, I personally do care, and I don&#8217;t understand how people don&#8217;t. But at the same time, it&#8217;s brain candy that people get, and I don&#8217;t see why it&#8217;s wrong.</p><p>And I just want to point out that we, as elites, have always looked down on the content that the mass population consumes. Now we look down on short video formats on TikTok, but we have always looked down on people&#8217;s entertainment practices, saying they make them stupid, et cetera. So I think we should be careful about that, careful about saying that kids are stupid because they are on TikTok watching short-format videos or whatever.
And I think we are falling a bit into that with AI slop. But the TikTok AI slop is very different from the Facebook AI slop. The TikTok AI slop is very weird and absurd, and I think it works because it is extremely weird and absurd. There&#8217;s something weird about these videos, and people are playing with it: they are playing with the fact that it&#8217;s AI and that you can do extremely weird stuff. But it&#8217;s very different from the AI slop on Facebook, which works, I think, among older populations.</p><p><strong>Henry Shevlin:</strong> Since we&#8217;re discussing TikTok, just a quick point about a worry that&#8217;s been lurking in the back of my mind: it seems to me most of the research focuses on adults. And yet a lot of the worries about both social media misinformation and generative AI misinformation concern teenagers and young people. And I&#8217;m curious, A, how much specifically targeted research there is looking at that group. And B, I think there probably are some good prior reasons for worrying about that group more than others: firstly, in the teenage years our political beliefs are less likely to have stabilised, and secondly, it is obviously an important window for the formation of political identities in the first place. So even if the worries about social media and generative AI misinformation are overblown for adults, could there be more to worry about in the case of teenagers?</p><p><strong>Sacha Altay:</strong> No, that&#8217;s very possible. That&#8217;s a point that has at least been made for social media and mental health: very few studies have looked at adolescents or young adolescents, and that&#8217;s probably the group that could be the most sensitive to these effects. So that&#8217;s a totally fair point. Regarding generative AI, I also think we should acknowledge that teenagers are probably much better at using the technology and recognizing it, whether it&#8217;s ChatGPT, DALL-E, or any other AI technology. I think they are much better.</p><p>And that&#8217;s why the AI slop I see on TikTok is very meta, second degree, third degree. Whereas on Facebook, it&#8217;s just first degree: look, I did this amazing thing; oh look, this cute baby. Very different. So to be honest, I&#8217;m not so worried about teens and generative AI on TikTok. Regarding mental health, I don&#8217;t know, and we need more data, but it&#8217;s a very fair point.</p><p><strong>Dan Williams:</strong> Just on this point about quality: we&#8217;ve been talking about deepfakes, but there&#8217;s this other aspect of generative AI, which is producing tailored text-based content. And there has been this flurry of empirical research, so I&#8217;m thinking of Tom Costello&#8217;s work on chatbots and conspiracy theories, and work by people like Ben Tappin, showing that LLMs can be pretty persuasive with the content they produce, partly because they&#8217;re just very good at recruiting evidence and rational arguments tailored to people&#8217;s specific pre-existing beliefs and informational situation. What&#8217;s your feeling about the impact of generative AI there? Because presumably that&#8217;s a very different conversation from the one about deepfakes.
And it does seem to me, at least, that generative AI, you might argue, is going to disproportionately benefit people with bad, misinformed views, because that&#8217;s often where you&#8217;re lacking human capital, right? You don&#8217;t have access on tap to the sophisticated intellectual skills of the intelligentsia when it comes to a lot of this lowbrow misinformation. So if those people can now access generative AI, at least where it&#8217;s not subject to various sorts of safety and ethical requirements (and that might happen down the line), isn&#8217;t there a real risk that this is going to asymmetrically benefit people pushing out misinformed, conspiratorial narratives?</p><p><strong>Sacha Altay:</strong> It&#8217;s good you mentioned these studies, because they find super large effect sizes on important topics like politics. But all the authors acknowledge that these effects are estimated in experimental settings, and it&#8217;s unclear how they would translate outside of experimental settings, where LLMs are not going to be prompted to convince participants or users to believe something.</p><p>So first, they are not going to be prompted to do that. Second, people are not going to be paid to pay attention and use the LLM in that way. That&#8217;s why Ben Tappin also has this piece arguing that for mass persuasion, attention matters more: are people actually going to be exposed to this content, rather than how persuasive it is? And that&#8217;s why I&#8217;m not so worried.</p><p>And it&#8217;s important that you mentioned symmetry or asymmetry, because I don&#8217;t see any good reason why bad actors would be more successful in using generative AI to mislead than good actors in using generative AI to inform and make society better, or citizens more informed. I think in general, good actors have more money and more trust. In France, if the French government releases an AI or whatever to inform people, it&#8217;s going to be more successful than if the Russian government does. So in many ways, I think good actors have the advantage, but they need to take it seriously. They need to act, and they need to proactively use these tools for democracy and for the better. They should not wait for the bad actors to attack and then defend. They should already be using these tools in the best possible ways to improve society.</p><p><strong>Dan Williams:</strong> Yeah, my thought concerning asymmetry was: just take something like Holocaust denial, right? I think, to a first approximation, everyone who believes in Holocaust denial is, for the most part, stupid. And if you give them access to highly intelligent generative AI tools, they&#8217;re going to be able to use that on-tap intelligence to rationalize that false perspective. Whereas when it comes to the truth, namely that the Holocaust actually happened, we can maybe use generative AI to improve the persuasiveness of the arguments that we generate, but we&#8217;ve already got extremely persuasive evidence and arguments, right? Because that&#8217;s where all of the intellectual research and so on exists.</p><p>In any case, again, I&#8217;m conscious of time. Could we end with this point about personalization? I still meet people who think that Brexit was due to Cambridge Analytica and micro-targeting and things like this.
It&#8217;s a very common belief people have: that once you start targeting and personalizing messages, you can have a really huge impact on what people believe. And one of the supposed consequences of AI, very broadly construed, is that it&#8217;s going to greatly enhance the personalization of persuasive messages. So what&#8217;s your take on that?</p><p><strong>Sacha Altay:</strong> Maybe the best evidence is actually the papers by Ben Tappin, Tom Costello, and others, who have actually measured what matters more: whether the arguments generated by the LLMs are targeted to users based on their political identity, et cetera, or whether they present more facts, and the quality of those facts. And in general, what they find is that what matters is facts. The more you provide people with facts and good arguments, the more they change their minds. Personalization matters very little.</p><p>And in political science, there&#8217;s a whole literature showing mostly the same thing. Of course you need some targeting, like targeting based on language; some basic level of targeting is needed. But micro-targeting based on political preferences, values, et cetera, is broadly ineffective, especially compared to simply making the most convincing arguments you can.</p><p>I think there is also a whole literature in communication research showing that people highly dislike targeted messages when they feel the messages are very targeted at them; people recognize it and they dislike it. Yeah, the Cambridge Analytica thing is just a scam, basically. I still don&#8217;t know why people believe it so much. It&#8217;s just a company selling influence: they said they influenced major elections, and all of a sudden people were like, oh yeah, of course, I understand why that worked. People have priors about other people being gullible and being swayed by social media. So when a company says that it sways people on social media, people are receptive to it. They&#8217;re not being gullible; it just fits their priors. But yeah, there is very little evidence that Cambridge Analytica affected Brexit or the 2016 US presidential election. And it&#8217;s better to present people with good arguments and facts than to micro-target them.</p><p><strong>Henry Shevlin:</strong> If I can squeeze just one more angle into the personalization discussion: something you talk about in the paper is relational factors, which are related to personalization but a bit distinct. And I&#8217;m curious whether you think AI could play a role there. We&#8217;ve talked on the show previously about social AI and the idea that young people in particular might be forming deeper and more profound relationships with AI systems: AI friends, companions, lovers, which then potentially could be leveraged to change their views.</p><p>And it seems to me, just intuitively, that these kinds of relations, whether they&#8217;re direct relations or more like parasocial relations, can be really influential. Think about, for example, something like Logan Paul&#8217;s Prime energy drinks. This was an influencer who promoted his own brand of energy drinks, which then became a massive sensation: hundreds of millions of dollars, if not billions of dollars, in sales over a very short period of time. So it seems like these relationships can be powerful.
Is that not a worry, that AI could leverage them?</p><p><strong>Sacha Altay:</strong> To be honest, it&#8217;s a very hard question, and I get asked it all the time. I think the best counter-argument I have at the moment is just that there is very little evidence that people change their minds in line with their life partner, the person they trust the most, sleep next to, et cetera. There is very little change of mind. And when there is, it&#8217;s hard to know whether it&#8217;s because the incentives are becoming more aligned. You know, they get married, so they are sharing their money, they are buying a house together, they live in the same place. So of course, when the incentives are getting closer, you could imagine their beliefs are getting closer too. But basically, attitude change is very small even with your life partner.</p><p>And I imagine that if my wife, whom I trust a lot and love, tells me GMOs are bad, nuclear energy is bad, et cetera, why would she convince me? I trust her a lot on many things, but I&#8217;m not completely blindly deferential to her. So how would ChatGPT beat my wife at this? I don&#8217;t see it. But to be honest, it&#8217;s just my opinion. Let&#8217;s see how it goes, but I don&#8217;t find it very convincing.</p><p><strong>Dan Williams:</strong> I can confirm that my girlfriend would very much like to influence my political attitudes, but is not having much success as of yet. Okay, one thing we didn&#8217;t do: you&#8217;ve given us your analysis, Sacha, and your belief about the impact or lack of impact of generative AI. But we should mention there were all of these alarmist forecasts about the impact of generative AI and deepfakes on the 2024 election cycle.</p><p>And one of the things that you do in your paper is you don&#8217;t just go through each individual worry; you actually survey what the empirical research we have says. So briefly, what does the research actually say about the impact of generative AI on that election cycle?</p><p><strong>Sacha Altay:</strong> I mean, to be honest, it&#8217;s not a systematic review, so it&#8217;s not super reliable. I just went over and looked at what happened in these elections. And basically, in most countries, the consensus is that there have been some problems with elections, but they are old problems, such as politicians lying and basically doing bad stuff. And generative AI has been used a lot to illustrate what politicians want to say. Often they want to say that they are strong and that their opponent is weak or stupid. So they have been using generative AI to do that in the US, in Argentina, in many countries. They have used generative AI a lot to do a kind of soft propaganda, portraying themselves and their group as good and the others as bad.</p><p>In some countries, apparently, generative AI has been used to do some good stuff, like in India, where there are many languages and where translation is often a problem and takes time. Apparently, generative AI has been used a lot to translate political campaigns into all the languages and dialects that exist in India. So I think it&#8217;s very varied and not as catastrophic, let&#8217;s say, as the alarmist takes suggest. But I think it&#8217;s just suggestive evidence.
And of course, it&#8217;s just the beginning of generative AI. So we should see how generative AI will be used in future elections. But we should not forget that it can be used to do good stuff. It&#8217;s not necessarily being used to do bad stuff. You can use it to translate, and even to illustrate: you can use it to do faithful illustration. You don&#8217;t need to portray yourself as super strong and your opponent as bad. You can do, I don&#8217;t know, good or artistic stuff.</p><p><strong>Dan Williams:</strong> Yeah, we didn&#8217;t really talk about the positive side of generative AI very much in this conversation. But my view is, at the moment at least, the boring truth about large language models is that they&#8217;re basically just improving people&#8217;s access to evidence-based, factual information. And I think if you compare the one-shot answer you get from ChatGPT or Claude or Gemini on any political issue to what you get from the average voter or pundit or politician, it&#8217;s just of much higher quality. But I think that truth doesn&#8217;t really get the attention it deserves, because it&#8217;s boring for the most part. It doesn&#8217;t fit into these threat narratives. And it&#8217;s kind of counterintuitive, because why would it be that these profit-seeking companies that everyone despises have just had a really beneficial effect on the information environment? But that is, in fact, what I think is the case.</p><p><strong>Sacha Altay:</strong> So you&#8217;re totally right, because another concern I haven&#8217;t mentioned is just hallucinations: individual users using LLMs on their own and being misled, because LLMs confidently say things that are false. But as you say, I think it depends: compared to what? How often do they hallucinate, and how correct are they compared to alternative sources of information like other human beings, social media, or TV?</p><p>And I think they would actually do pretty well compared to most of these other sources. And so that&#8217;s why I&#8217;m not so worried. I think the confidence thing is a bit annoying, but I think most people who use AI regularly know that, yeah, sometimes these models completely hallucinate and go completely awry. And that&#8217;s why I&#8217;m not so worried. Of course, it would be better if they did not hallucinate at all and were perfect, but that&#8217;s setting the bar a bit high.</p><p><strong>Dan Williams:</strong> Okay. Okay, fantastic. Well, thank you, Sacha. We&#8217;re going to have to bring you back on at some point, because I feel like we&#8217;ve barely scratched the surface with many of these issues. Was there anything that we didn&#8217;t ask you that you wished we had asked you?</p><p><strong>Sacha Altay:</strong> No. I mean, as you said, many things to talk about.</p><p><strong>Dan Williams:</strong> Okay, fantastic.
Well, thanks, Sacha, and we&#8217;ll see everyone next time.</p>]]></content:encoded></item><item><title><![CDATA[The harder it is to find the truth, the easier it is to lie to ourselves]]></title><description><![CDATA[A simple observation with complex implications]]></description><link>https://www.conspicuouscognition.com/p/the-harder-it-is-to-find-the-truth</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/the-harder-it-is-to-find-the-truth</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Mon, 12 Jan 2026 15:23:50 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1652170153084-6b35f0b0e886?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxzZWxmLWRlY2VwdGlvbnxlbnwwfHx8fDE3NjgyMjIwNDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://images.unsplash.com/photo-1652170153084-6b35f0b0e886?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxzZWxmLWRlY2VwdGlvbnxlbnwwfHx8fDE3NjgyMjIwNDN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" alt="a clock on a building" title="a clock on a building"><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@dareartworks">Dare Artworks</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>If you look at humanity, both today and throughout history, you can&#8217;t help but notice that people believe a lot of things that seem stupid and irrational. Pick your favourite example: conspiracy theories, religion, prejudice, ideology, pseudoscience, ancestor myths, people who hold different political opinions from your own, and so on. </p><p>This observation provokes a central question for the social sciences. Why do broadly rational people, people who often seem intelligent and competent in most aspects of their lives, sometimes believe highly irrational things?</p><p>One classic answer is that <a href="https://www.conspicuouscognition.com/p/political-animals">people are not disinterested truth seekers</a>. In some contexts, our practical interests conflict with the aim of acquiring accurate, evidence-based beliefs.
For example, we might want to believe things that make us feel good, that impose a satisfying order and certainty on a complex world, that help us <a href="https://en.wikipedia.org/wiki/The_Folly_of_Fools">persuade</a> others that we&#8217;re noble and impressive, or that <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/mila.12326?__cf_chl_rt_tk=ux4PED8wHGYzvbCxfCsZgiUtEm5XHqUJ.HZOcPzUBsY-1768228871-1.0.1.1-Xu0VUokzNGx8zRDj8lnps_K79ohQtwI5vKGeUKUj2Do">win us status and approval</a> from our friends and allies. </p><p>Famously, when our goals come into conflict with the pursuit of truth in this way, the truth often loses out. We lie to ourselves, bury our heads in the sand, and engage in elaborate mental gymnastics. Less colloquially, we engage in what psychologists call &#8220;<a href="https://pubmed.ncbi.nlm.nih.gov/2270237/">motivated cognition</a>&#8221;: we&#8212;or our minds, at least&#8212;direct cognitive processes toward favoured conclusions, not true ones. For example, we instinctively seek out evidence that confirms those conclusions (<em>confirmation bias</em>), shield ourselves from evidence against them (<em>motivated ignorance</em>), insist on higher standards for arguments we dislike than for those we like (<em>biased evaluation</em>), and remember and forget information in convenient patterns (<em>selective forgetting</em>). </p><p>Throughout most of history, scholars had little doubt that this tendency was a central and destructive feature of the human condition. </p><p>For Adam Smith, for example, it &#8220;is the fatal weakness of mankind&#8221; and &#8220;the source of half the disorders of human life.&#8221; For Socrates in the Cratylus, &#8220;the worst of all deceptions is self-deception.&#8221; And of course, thinkers such as Freud and Nietzsche placed motivated cognition at the centre of their understanding of human psychology.</p><h1>Against Motivated Cognition</h1><p>This consensus continued from the emergence of scientific psychology until relatively recently. In the last decade or so, however, some researchers have become increasingly sceptical that motivated cognition is a significant force in human affairs.
There are many reasons for this, including <a href="http://sciencedirect.com/science/article/abs/pii/S2352154620300036?__cf_chl_rt_tk=ym.IdQ3wqLnWrw8ZYWRMGBvlY8vS_n7oiQoCxhF6ETE-1768228946-1.0.1.1-ZGk.mOF1qgn.UaKZNJ5JyDb3_EyjrxWMXCjuekN8iVU">reinterpretations</a> of experimental findings, <a href="https://www.sciencedirect.com/science/article/pii/S0010027721001876">failures to replicate</a> certain findings, and a <a href="https://press.uchicago.edu/ucp/books/book/chicago/P/bo181475008.html">growing body of evidence</a> that people are broadly rational in how they process information, even in domains like politics.</p><p>I&#8217;m <a href="https://link.springer.com/article/10.1007/s11229-023-04223-1">not very convinced</a> by these sources of scepticism. I think they often rest on na&#239;ve assumptions about how to interpret psychological findings and how to understand motivated cognition. When properly understood, motivated cognition is <a href="https://www.cambridge.org/core/journals/economics-and-philosophy/article/marketplace-of-rationalizations/41FB096344BD344908C7C992D0C0C0DC">consistent</a> with the finding that people update their beliefs when presented with corrective information. </p><p>I also think that, as with <a href="https://www.amazon.com/Supersizing-Mind-Embodiment-Cognitive-Philosophy/dp/0199773688">human cognition more broadly</a>, most widespread and consequential forms of motivated cognition are <a href="https://osf.io/preprints/psyarxiv/7m4r3">distributed and socially scaffolded</a>. They are a &#8220;<a href="https://www.conspicuouscognition.com/p/the-social-construction-of-bespoke">team project</a>&#8221; involving complex systems of social norms, incentives, and coordination that function to promote and protect favoured narratives and belief systems. So, if you want to understand how we lie to ourselves, you must move beyond the lone thinker in decontextualised psych experiments and focus on how humans co-construct social worlds optimised for scaffolding self-deception. </p><p>There is much more to say about all of this, obviously. But here, I want to focus on a different, more &#8220;philosophical&#8221; source of scepticism about motivated cognition, one which draws attention to what the philosopher Jeffrey Friedman calls &#8220;<a href="https://global.oup.com/academic/product/power-without-knowledge-9780190877170">epistemic complexity</a>&#8221;.</p><h1><strong>Epistemic complexity</strong></h1><p>&#8220;Epistemic complexity&#8221; is a bit of jargon for the simple idea that it&#8217;s often really hard to figure out what&#8217;s true. </p><p>Partly, this is because reality itself is often complex, but it&#8217;s also due to the <a href="https://www.conspicuouscognition.com/p/on-becoming-less-left-wing-part-2">highly fallible ways in which we access that reality</a>. We rarely have &#8220;direct&#8221;, perceptual access to the facts we form beliefs about, especially in domains like politics and religion. Our access is mediated by other people and institutions&#8212;priests, teachers, writers, journalists, pundits, scientists, social media feeds, etc.&#8212;and by our pre-existing beliefs (&#8220;priors&#8221;), which, given the world&#8217;s scale and complexity, typically involve highly selective, low-resolution compressions of reality. Of course, these representations were also primarily acquired from others who are in exactly the same situation. 
</p><p>This is what Walter Lippmann <a href="https://www.conspicuouscognition.com/p/the-world-outside-and-the-pictures">meant</a> when he observed that the modern world is &#8220;out of reach, out of sight, and out of mind&#8221;, and that public opinion &#8220;deals with indirect, unseen, and puzzling facts, and there is nothing obvious about them.&#8221;</p><p>To make this concrete, consider your beliefs about climate change. Maybe you think it&#8217;s our most pressing political problem, an urgent crisis and existential risk, or maybe you think the whole thing is an overblown, leftist moral panic. But whatever you believe, take a moment to reflect on where your beliefs came from.</p><p>Reality didn&#8217;t just imprint itself directly on your brain, whatever that would mean. You learned about climate change in the same way that you learn about almost everything else: through a highly <a href="https://www.tandfonline.com/doi/abs/10.1080/08913811.2023.2221502">path-dependent process</a> in which, at every stage of encountering new information (testimony, news reports, articles, education, political commentary, etc.), you filtered it through your priors about the world and about which sources were trustworthy. </p><p>Through this process, you arrived at your current opinions, which inevitably take the form of low-resolution compressions of an extremely complex geophysical and political reality into a manageable, understandable form. Indeed, unless you are someone with significant expertise in this area, your &#8220;opinions&#8221; probably involve little more than socially-learned slogans and soundbites. (To test yourself, open a blank document and write out your current understanding of the topic exclusively from memory.)</p><p>It doesn&#8217;t take a philosophy PhD to appreciate that this process is highly fallible. Once you realise that the <a href="https://www.conspicuouscognition.com/p/the-world-outside-and-the-pictures">pictures inside people&#8217;s heads</a> aren&#8217;t simple reflections of reality but the output of complex and fragile processes of interpretation and social learning, you should recognise that there are countless reasons why those pictures might distort or misrepresent that reality. </p><p>And yet, most of us don&#8217;t intuitively think this way. When we compare our beliefs against the facts, we always find a comforting 1:1 correspondence. Unless we force ourselves to reflect, there doesn&#8217;t seem to be a highly fallible process mediating between reality and our representations of it. Reality just <em>is </em>whatever we represent it to be.</p><p>The truth often seems obvious, self-evident, so much so that we are frequently baffled when people don&#8217;t share our understanding of the truth. The idea that rational people could have encountered the same reality and come away with different opinions doesn&#8217;t even register as a serious possibility in many cases. In the language of modern psychology, we are instinctive &#8220;<a href="https://www.conspicuouscognition.com/p/in-politics-the-truth-is-not-self">na&#239;ve realists</a>&#8221;. As Karl Popper characterised this intuition, we believe that <a href="https://www.conspicuouscognition.com/p/on-conspiracy-theories-of-ignorance">the truth is &#8220;manifest&#8221;</a>. If others don&#8217;t see the truth, they must, therefore, be deeply irrational, if not outright psychotic.</p>
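<p>To make vivid how little irrationality this requires, here is a minimal toy sketch (an illustration of my own construction, not a model drawn from any of the work cited here): two agents update by Bayes&#8217; rule on the very same stream of reports, differing only in their prior trust in two sources. For simplicity, trust in the sources is held fixed rather than itself updated.</p><pre><code class="language-python"># Two Bayesian agents, identical evidence, different priors about sources.
# (Hypothetical source names; the numbers are arbitrary, for illustration.)

def update(p_h, p_report_given_h, p_report_given_not_h):
    """One Bayes-rule update on P(H) after observing a report."""
    numerator = p_report_given_h * p_h
    return numerator / (numerator + p_report_given_not_h * (1 - p_h))

# Each report is (source, asserts_h). Both agents see all of them.
reports = [("red_outlet", True), ("blue_outlet", False)] * 10

# Trust = probability the source reports correctly. The ONLY difference.
trust_a = {"red_outlet": 0.9, "blue_outlet": 0.4}
trust_b = {"red_outlet": 0.4, "blue_outlet": 0.9}

def final_belief(trust, p_h=0.5):
    for source, asserts_h in reports:
        t = trust[source]
        if asserts_h:  # source asserts H: report is correct iff H is true
            p_h = update(p_h, t, 1 - t)
        else:          # source denies H: report is correct iff H is false
            p_h = update(p_h, 1 - t, t)
    return p_h

print(round(final_belief(trust_a), 6))  # close to 1: near-certain H is true
print(round(final_belief(trust_b), 6))  # close to 0: near-certain H is false
</code></pre><p>Both agents apply the same updating rule to the same reports, yet they end up with opposite, confident conclusions; the only difference lies in where their priors about sources started.</p>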
<p>Given this, epistemic complexity is not merely a feature of our situation that we must grapple with. It is a feature that most people don&#8217;t instinctively appreciate, let alone reflect on. That is, it <em>seems</em> much easier to become &#8220;informed&#8221;&#8212;to figure out what&#8217;s true&#8212;than it really is.</p><h1><strong>Back to Motivated Cognition</strong></h1><p>This is where these reflections on epistemic complexity become relevant to questions about motivated cognition. </p><p>Historically, scholars have invoked motivated cognition to explain why people hold mistaken beliefs that appear highly irrational. But if people confront epistemic complexity, this appearance of irrationality may simply be an illusion produced by na&#239;ve realism. That is, once we appreciate that the truth is not self-evident and that it&#8217;s extremely challenging to acquire knowledge, we should realise that there is <a href="https://www.conspicuouscognition.com/p/why-do-people-believe-true-things">nothing deeply puzzling</a> about why people hold mistaken beliefs. Even perfectly rational individuals will form such beliefs if the challenges of forming accurate ones are sufficiently severe. Perhaps, through no fault of their own, they have simply been exposed to misleading evidence or unreliable sources. </p><p>If so, the motivation for positing motivated cognition evaporates. There is no irrationality to explain. </p><p>Although this move takes various forms, I think one can find versions of it in the writings of many recent scholars, even when it is not stated explicitly, including <a href="https://global.oup.com/academic/product/power-without-knowledge-9780190877170?cc=us&amp;lang=en&amp;">Jeffrey Friedman</a>, <a href="https://global.oup.com/academic/product/bad-beliefs-9780192895325?cc=us&amp;lang=en&amp;">Neil Levy</a>, <a href="https://www.cambridge.org/core/journals/episteme/article/abs/echo-chambers-and-epistemic-bubbles/5D4AC3A808C538E17C50A7C09EC706F0">C. Thi Nguyen</a>, and <a href="https://yalebooks.yale.edu/book/9780300251852/the-misinformation-age/">Cailin O&#8217;Connor and James Owen Weatherall</a>. The core idea is that theorists have traditionally been too quick to jump from observing false beliefs to inferring motivated irrationality. Once we recognise epistemic complexity, we can see that there are countless ways in which individually rational thinkers can acquire false beliefs.</p><p>In most cases, these theorists advance alternative explanations that focus on features of the social environment, including how social-informational networks of trust and testimony are corrupted by malicious actors. Hence, this move typically goes hand in hand with the idea that to understand why people hold mistaken beliefs, we should turn our attention away from individual rational failings and toward &#8220;structural&#8221; and &#8220;systemic&#8221; pathologies in our society. (Friedman is an exception here, inasmuch as he seems to think that epistemic complexity is so severe that theorists shouldn&#8217;t even make judgements about which beliefs are true or false in the first place.)</p><h1><strong>Motivated Cognition and Epistemic Complexity</strong></h1><p>It&#8217;s an interesting and insightful line of reasoning, but I think it draws the wrong lesson from a recognition of epistemic complexity. Although such complexity opens the possibility that false beliefs can result from rational belief formation, its existence should actually <em>increase </em>our confidence in the likely impact of motivated cognition. 
This is because epistemic complexity <em>exacerbates </em>motivated cognition, making it easier for us to become convinced of desired conclusions. </p><p>In plain terms: The more challenging it is to figure out what&#8217;s true, the easier people will find it to lie to themselves.</p><p>To see why, think about the factors that determine whether people will engage in motivated cognition. It&#8217;s tempting to think that the only relevant variable is the <em>strength </em>of motivations that conflict with the pursuit of truth, such that the stronger those motivations, the greater the propensity to engage in motivated cognition. </p><p>However, a moment&#8217;s reflection suggests this can&#8217;t be the whole story. There are severe limits on what we can convince ourselves of, and these limits are largely independent of the strength of our motives. As Ziva Kunda <a href="https://pubmed.ncbi.nlm.nih.gov/2270237/">puts it</a>, &#8220;People do not seem to be at liberty to conclude whatever they want to conclude merely because they want to.&#8221; One might add: and <em>no matter how much they want to. </em>That is, there is no amount of money (or status, sex, etc.) that could induce me to believe that 2+2=5 or that the moon is made of cheese. These beliefs simply don&#8217;t fall within my cognitive grasp. </p><p>The reason is simple: For motivated cognition to be possible, we must be capable of providing some justification of the relevant belief. Elsewhere, I have called this a &#8220;<a href="https://www.cambridge.org/core/journals/economics-and-philosophy/article/marketplace-of-rationalizations/41FB096344BD344908C7C992D0C0C0DC">rationalisation constraint</a>&#8221;. But in some cases, we can satisfy it not by explicitly constructing or seeking post hoc rationalisations, but simply by insulating ourselves from disconfirming evidence. (This is captured by the &#8220;burying one&#8217;s head in the sand&#8221; metaphor.) </p><p>Whatever we call it, however, the point is the same: our ability to become convinced of desired conclusions depends on our ability to feel that they are in some sense justified. That&#8217;s why the psychological acrobatics associated with motivated cognition&#8212;confirmation bias, biased evaluation, selective forgetting, etc.&#8212;are necessary in the first place.</p><p>For this reason, the extent to which motivated cognition biases belief depends not only on incentives but also on how easily individuals can satisfy this constraint. To be clear, this isn&#8217;t an original point; it&#8217;s one of the <a href="https://osf.io/preprints/psyarxiv/qnda3">oldest observations about motivated cognition</a>. The observation that I want to make here, however, is simply that when it comes to justifying desired beliefs, epistemic complexity is a help, not a hindrance. That is, as it becomes increasingly difficult to determine what&#8217;s true, it becomes correspondingly easier to convince ourselves of desirable untruths. </p><p>This suggests that many people are drawing the wrong lesson from epistemic complexity. Although such complexity implies that even disinterested, rational truth seekers <em>could</em> acquire inaccurate beliefs, its existence should increase our confidence that people are <em>not</em> behaving as disinterested truth seekers. </p>
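<p>The point can be made concrete with a second toy sketch (again my own illustration, built on deliberately crude assumptions): an agent who wants to believe a false hypothesis H samples arguments one at a time and stops searching the moment its subjective odds for H clear a &#8220;feels justified&#8221; threshold, a crude optional-stopping version of the rationalisation constraint. The only parameter varied is how ambiguous the evidence environment is, that is, how often an argument merely appears to support H.</p><pre><code class="language-python"># Motivated search under varying epistemic complexity (toy model).
# H is false in every simulated world; "ambiguity" is the chance that a
# sampled argument nevertheless appears to support H.

import random

def motivated_search(ambiguity, threshold=3.0, max_looks=50, trials=20000):
    """Fraction of runs in which the agent ends up feeling 'justified'
    in the desired (false) belief H before it gives up looking."""
    wins = 0
    for _ in range(trials):
        odds = 1.0  # start indifferent: odds(H) = 1
        for _ in range(max_looks):
            seems_pro_h = random.random() &lt; ambiguity
            # treat each appearance as a modest likelihood-ratio nudge
            odds *= 1.5 if seems_pro_h else 1 / 1.5
            if odds &gt;= threshold:  # desired belief now feels justified
                wins += 1
                break              # stop looking: head, meet sand
    return wins / trials

for ambiguity in (0.2, 0.35, 0.5):
    print(ambiguity, motivated_search(ambiguity))
# The printed fraction rises steeply with ambiguity: the harder the truth
# is to discern, the more often the agent reaches its desired conclusion.
</code></pre><p>Nothing about the agent&#8217;s desire changes across the three runs; only the ambiguity of the environment does. That is the sense in which epistemic complexity is a help, not a hindrance, to motivated cognition.</p>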
<p>Of course, it is still ultimately an empirical question to what extent motivated cognition is operative in specific cases. There may be other reasons to think it is less prevalent than many have traditionally assumed. The point is just that the fact of epistemic complexity is not one of them.</p><h1>So what?</h1><p>Why does any of this matter? There are potentially many reasons, I think, but I&#8217;ll end with two.</p><p>First, it&#8217;s plausible that elites often benefit when target audiences engage in motivated cognition. So, politicians who spread self-serving lies benefit when their supporters prioritise political tribalism over accuracy. For example, they will be more likely to believe that an election was stolen from their side if they&#8217;re motivated to <a href="https://www.conspicuouscognition.com/p/people-embrace-beliefs-that-signal">embrace and signal tribal beliefs</a>. This means that many elites have an incentive to do whatever they can to increase a domain&#8217;s epistemic complexity&#8212;for example, by manufacturing uncertainty, <a href="https://en.wikipedia.org/wiki/Flood_the_zone">flooding the zone with shit</a>, recruiting congenial &#8220;experts&#8221;, and so on. </p><p>This is a <a href="https://www.amazon.com/s?i=specialty-aps&amp;srs=215470040011&amp;s=popularity-rank&amp;fs=true&amp;_encoding=UTF8&amp;content-id=amzn1.sym.934666e1-1184-4a65-8c86-087f9638b83e&amp;pd_rd_r=ec69171f-6c75-4b7c-9e9a-466a93453437&amp;pd_rd_w=2JVJ8&amp;pd_rd_wg=18Njs&amp;pf_rd_p=934666e1-1184-4a65-8c86-087f9638b83e&amp;pf_rd_r=R46NW6YB26XDHKBVB70Y&amp;ref=lp_215470040011_sar">familiar lesson</a> from research on propaganda in some ways, of course, but reflecting on the interactions between motivated cognition and epistemic complexity casts it in a new light.</p><p>Second, many studies of the role of motivated cognition in belief formation provide participants with corrective evidence and measure the extent to which they update their beliefs. If they update in a rational direction, this is <a href="https://press.uchicago.edu/ucp/books/book/chicago/P/bo181475008.html">taken as evidence</a> against the importance of motivated cognition. </p><p>One way to understand such experiments is that they artificially and temporarily reduce epistemic complexity. By presenting strong evidence against desired conclusions, they momentarily weaken people&#8217;s ability to subjectively justify those conclusions. To the extent that the real-world context in which people think involves much higher levels of epistemic complexity&#8212;for example, greater choice over which media and political sources to consult, heightened exposure to conflicting viewpoints and arguments, and greater contact with like-minded friends and colleagues&#8212;this suggests that such experiments might be limited in what they can tell us about the real-world significance of motivated cognition. </p>
]]></content:encoded></item><item><title><![CDATA[AI Sessions #7: How Close is "AGI"?]]></title><description><![CDATA[Listen now | And does the concept even make sense?]]></description><link>https://www.conspicuouscognition.com/p/how-close-is-agi</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/how-close-is-agi</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Fri, 09 Jan 2026 13:14:39 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/184015229/3e3273c57eb2e1993fe1de70a58ba362.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Henry and I discuss controversies surrounding Artificial General Intelligence (AGI), exploring its definitions, measurement, implications, and various sources of scepticism. We also touch on philosophical debates regarding human intelligence versus AGI, the economic and political ramifications of AI integration, and predictions for the future of AI technology.</p><p><strong>Chapters</strong></p><ul><li><p>00:00 Understanding AGI: A Controversial Concept</p></li><li><p>02:21 The Utility and Limitations of AGI</p></li><li><p>07:10 Defining AGI: Categories and Perspectives</p></li><li><p>12:01 Transformative AI vs. AGI: A Distinction</p></li><li><p>16:15 Generality in AI: Beyond Human Intelligence</p></li><li><p>22:13 Skepticism and Progress in AI Development</p></li><li><p>28:42 The Evolution of LLMs and Their Capabilities</p></li><li><p>30:49 Moravec&#8217;s Paradox and Its Implications</p></li><li><p>33:05 The Limits of AI in Creativity and Judgment</p></li><li><p>37:40 Skepticism Towards AGI and Human Intelligence</p></li><li><p>42:54 The Jagged Nature of AI Intelligence</p></li><li><p>47:32 Measuring AI Progress and Its Real-World Impact</p></li><li><p>56:39 Evaluating AI Progress and Benchmarks</p></li><li><p>01:02:22 The Rise of Claude Code and Its Implications</p></li><li><p>01:04:33 Transitioning to a Post-AGI World</p></li><li><p>01:15:15 Predictions for 2026: Capabilities, Economics, and Politics</p></li></ul><h1>Transcript </h1><ul><li><p>Please note that this transcript is AI-created and may contain minor mistakes. </p></li></ul><h1>How Close Is AGI?</h1><p><strong>Dan Williams:</strong> Welcome back. It&#8217;s 2026, a new year, a big year for AI progress, an even bigger year, dare I say it, for this podcast. I&#8217;m Dan Williams. I&#8217;m back with Henry Shevlin. And today we&#8217;re going to be talking about one of the central, most consequential, most controversial concepts in all of AI discourse, which is AGI, artificial general intelligence.</p><p>So AGI is written into the mission statements of the leading AI companies. OpenAI, for example, states that their mission is to ensure that artificial general intelligence benefits all of humanity. We also constantly see references to AGI in the media, in science, in philosophy, and in discourse about the dangers, potentially catastrophic dangers, of advanced AI. 
And yet, there is famously very little consensus on how to even understand this concept, let alone measure our progress towards it.</p><p>Is it, for example, a system that achieves something called human level AI? Is it a system that can do any task or at least any intellectual task that a human being can do? Is it a system that performs extremely well on tests, on benchmarks? Or is it, as some people suggest, a deeply confused pseudoscientific concept? So for example, the influential cognitive scientist Alison Gopnik has said, there is no such thing as general intelligence, artificial or natural. Yann LeCun, one of the most famous AI researchers in the world, says this concept makes absolutely no sense.</p><p>But if that&#8217;s the case, what should we make of people making predictions about when we&#8217;re going to reach AGI, perhaps in the next few years? How do we make sense of rapid AI progress? What are we making progress towards? Moreover, what do we make of people, smart people, who claim we&#8217;ve already reached AGI, that we&#8217;re living through the post-AGI world?</p><p>So these are the topics that we&#8217;re going to be focusing on today. What is AGI? Is the concept coherent and useful? How do we measure progress towards AGI if we take this concept seriously? And what happens when or if we reach AGI? At the end, Henry and I are also going to be giving some predictions about how we expect AI to develop over the course of this year.</p><p>Okay, so to kick things off, Henry, AGI, how do you understand the concept? Are you a fan?</p><p><strong>Henry Shevlin:</strong> I am a cautious fan of AGI as a concept. I think it&#8217;s an imperfect concept and can be very vague or defined in various ways. But at the same time, I think it serves as a useful reminder that we are heading towards an era, in my view, of genuinely transformative capabilities in AI systems. And so when we talk about AI revolutionizing science, AI revolutionizing medicine, AI revolutionizing the future of work, I think AGI is often a useful shorthand for talking about the point at which we start to see really massive changes in these domains.</p><p>That said, I do have some sympathy for the worry that this is not a particularly coherent concept. So I think we&#8217;ve seen commentary in the media recently saying, look, we don&#8217;t really understand what intelligence is, and therefore the very idea of AGI is ill-defined.</p><p>What I would say there is that I think we don&#8217;t need to understand exactly how human intelligence works in order to recognize when we&#8217;ve exceeded human capabilities in certain key ways. And in the same way, we don&#8217;t necessarily need to have a perfect biomechanical model of how birds fly in order to build planes that can fly faster than them. So I think even with some empirical questions or some conceptual or definitional disagreements about what intelligence is, what human intelligence is, it could still be the case that we&#8217;re well on our way to exceeding the capabilities of human intelligence across the board with AGI.</p><p>One thing to quickly flag though is AGI is kind of canonically or classically defined as systems that are equal to human level performance across all domains. I think tacitly this is often restricted to sort of economically and scientifically and cognitively relevant domains, right? 
So I think if we had systems that were sort of at human level or above in pretty much every cognitive task, but they couldn&#8217;t smell or had limited ability to do certain kinds of fine-grained motor tasks, perhaps, I think that wouldn&#8217;t disqualify us from characterizing those systems as AGI. If they&#8217;re doing better science than a human, if they&#8217;re winning mathematics prizes, if they&#8217;re winning Nobels, if they&#8217;re doing 99% of current jobs in the economy, it&#8217;s not going to be a deal breaker whether or not they can tell Sauvignon Blancs from a Chardonnay with a sniff.</p><p><strong>Dan Williams:</strong> Yeah, although on that point, I think there&#8217;s a question here, which is, should we expect a system to be able to out-compete human beings when it comes to what are thought of as purely cognitive tasks if it doesn&#8217;t have the kinds of competencies that go into, for example, folding laundry, making toast, et cetera? So there&#8217;s the idea that you can draw a nice distinction between purely intellectual tasks of the sort that you can perform on a computer and, let&#8217;s say, what might be thought of as sort of non-intellectual, sensorimotor tasks. I think that&#8217;s a kind of interesting question in and of itself.</p><p>Doing a bit of reading around AGI for this episode, it seems like a lot of the definitions about what AGI is splinter into sort of three different categories. And I&#8217;ll be interested to hear what you think about this way of taxonomizing the area.</p><p>So some people seem to understand AGI basically as a kind of placeholder for whatever AI happens to have really transformative consequences. So it&#8217;s like, AGI is just a term for transformative AI, whatever form that transformative AI actually takes. Other people seem to understand it with this concept of human level AI or something similar, where they&#8217;re sort of using human intelligence as the thing relative to which we should understand the concept of AGI. And that I think for reasons we can probably get into, I can kind of understand what they&#8217;re getting at there, but I think there are all sorts of reasons to be skeptical about that concept. And then there&#8217;s a third category of attempts at understanding this concept where you&#8217;re just understanding it in terms of kind of abstract capabilities, right? And it might in fact be the case that human beings exhibit or instantiate these capabilities. But the idea is you can specify what these capabilities are independent of thinking about the specific form that they take in human beings. So things like the flexibility and generality of problem solving ability or capacities for continual learning and self-directed learning and autonomy and so on. So it&#8217;s like transformative AI understood in terms of impacts, you&#8217;ve got kind of human level AI where it&#8217;s a system which in some ways has capabilities that are like the sort of ones that human beings have, or you&#8217;ve got just a kind of pure capabilities understanding.</p><p>Does that correspond with how you&#8217;re thinking of this area? Would you add any other categories to that?</p><p><strong>Henry Shevlin:</strong> Yeah, I think that&#8217;s really helpful. I guess a fourth category you might add, it&#8217;s a bit of a misnomer to call this category AGI. But I think in practice and in a lot of discourse, sometimes people use AGI to refer to something like the singularity or some kind of recursive process of intelligence self-improvement. 
At which point, AGI functions basically the same as the idea of artificial superintelligence. I think that&#8217;s probably not a maximally helpful way of thinking about AGI. I think it is helpful to distinguish between AGI and sort of the singularity or recursive intelligence explosions. But in practice, that&#8217;s what some people mean, I think, when they talk about AGI.</p><p><strong>Dan Williams:</strong> Yeah, just to add a footnote to that. So this idea of an intelligence explosion, roughly speaking, the idea is that once AI systems can substantially contribute to the process of AI R&amp;D and improving AI systems, you&#8217;re going to get this rapid process of recursive self-improvement where every AI system is sort of iteratively involved in building better and better AI systems. We should actually, I think, do a whole separate episode on the intelligence explosion. Because I think reading around about AI, so much of what people are thinking about the future seems to depend on their assessment of like the plausibility of that intelligence explosion concept.</p><p>But yeah, so I think you might though think of that as part of that first category of sort of defining AI, or rather AGI, by its impact in a sense. So there AGI would be, you know, whatever triggers this hypothetical intelligence explosion.</p><p>I mean, I think that in addition to what you said, a general problem with defining AGI in terms of the impact of AI, where you&#8217;re sort of neutral on what kinds of capabilities might produce that impact, is that it&#8217;s not really forward looking, that kind of definition, right? It&#8217;s in a sense, almost by its very nature, going to be backward looking. And it&#8217;s not really clear then what we should be searching for or how you would go about measuring AGI solely by looking at the capabilities of the AI systems themselves. So even though I think there is a place for this idea that we need to be thinking seriously about what a world will look like where you&#8217;ve got radically transformative AI, merely having this placeholder, to me at least, doesn&#8217;t seem that useful as a way of understanding this concept of AGI.</p><p><strong>Henry Shevlin:</strong> Yeah, I agree. And I think the idea of transformative AI is a useful concept in itself, but I do think it&#8217;s worth distinguishing from the kind of more cognitive scientific concept of AGI for a couple of reasons.</p><p>The first is that I think you can achieve transformative AGI, sorry, transformative AI, even with quite narrow systems. So there&#8217;s this really interesting idea that was very, very central to a lot of AI discourse in the late 2010s called comprehensive AI services. So this is an idea developed by Eric Drexler who said, look, maybe it would be a good idea for safety reasons if rather than trying to build one AI to rule them all, we focus on more narrow domain expert AI systems. So you&#8217;ve got an amazing AI scientist, an amazing AI financial analyst, you&#8217;ve got an amazing AI writer, but they&#8217;re not joined up. They don&#8217;t talk to each other at least directly.</p><p>And that could be better from a safety perspective, but also pretty much just as useful as AGI. So this is often framed as sort of a choice between two different directions that the future of AI research could go. Part of the problem there is I think LLMs kind of fall between the cracks of AGI and CAIS, as it&#8217;s called, Comprehensive AI Services. 
They are a sort of unified system in some sense, in terms of their generality: they can do lots and lots of different tasks and they&#8217;re not narrow systems. But at the same time, they&#8217;re not unified in the sense of being a single psychological agent with memory carried across different instances, capable of coordinating thousands or hundreds of thousands or millions of different conversations towards a single goal. And of course, a lot of the power that LLMs have is their ability increasingly to use various tools rather than sort of having those tools integrated into the systems themselves.</p><p>So I think there&#8217;s a world in which AI turns out to be transformative that ends up looking a lot more like Eric Drexler&#8217;s world. So this is a world without AGI in the sense of, you know, one system to rule them all. Instead, lots of powerful specialized systems, but that still utterly transforms our society and economy. So that&#8217;s one reason I think the transformative definition is maybe worth separating out.</p><p>Another reason to separate out transformative AI from AGI is something that&#8217;s been a big issue in the last year, which is adoption. We could have amazing AI systems or increasingly powerful AI systems, but due to economic or structural factors, they don&#8217;t end up at least straight away having the kind of transformative impact that people I think sometimes slightly naively assumed would just happen straight away as soon as you get AGI. So it&#8217;s a sort of double dissociation. You might have transformative AI that still falls short of AGI because it&#8217;s something a bit more like CAIS. Or you might have genuine AGI, but it&#8217;s not yet or not immediately transformative because of structural, legal, economic obstacles, things like adoption, that prevent it from having the full impact.</p><p><strong>Dan Williams:</strong> Yeah, and I think that idea that you can&#8217;t leap straight from the capabilities of the system to its real world impact is a very important idea in thinking about AI in general. And in fact, we touched on this in our first episode where we looked at the AI as normal technology perspective from Arvind Narayanan and Sayash Kapoor, where they make a really big deal of this idea that diffusion takes time. There are lots of bottlenecks. There&#8217;s going to be lots of risk aversion, and a need to have all of these other complementary innovations within society in order to actually integrate AI capabilities. I think that&#8217;s really important.</p><p>Maybe to sort of take a step back. So as I understand it, where the concept of AGI sort of first comes from, when we separate it from these questions of real world impact and just focus on the capabilities of a system, is this: if you look at the history of AI, we have lots of very impressive systems, often superhuman, along relatively narrow dimensions, but that could only do some things. So a chess playing system that will destroy the world&#8217;s best chess player, but it can&#8217;t really do anything else. And even if you just change the rules of chess very slightly, the systems are so brittle that suddenly they&#8217;ll lose all of their capabilities.</p><p>And one thought was, well, that&#8217;s one kind of intelligence, a kind of narrow intelligence, which these AI systems that we were building possessed. 
But in principle, there could be a kind of intelligence where it&#8217;s incredibly flexible and open-ended in terms of the kinds of tasks, the kinds of goals that the system could achieve. And then I take it a question people are gonna have is, okay, why should we expect such a system is even possible?</p><p>And a thought many people have is, well, we have human beings, right? And human beings are a kind of existence proof for a certain kind of highly general, flexible, open-ended intelligence, in as much as human beings can become poets, scientists, engineers, dancers, we can play an open-ended set of possible games, and so on and so forth. So the idea is there&#8217;s gonna be a kind of conceptual contrast between narrow intelligence and general intelligence. And as a way of addressing skepticism about the possibility of general intelligence, people can always say, human beings have this kind of generality in terms of the sorts of things that we can do.</p><p>And I take it that&#8217;s partly why so much of the AGI discourse gets translated into human level AI discourse because human beings are supposed to be this kind of existence proof for the kind of intelligence that we&#8217;re thinking about. I&#8217;m really torn here because I think clearly on the one hand it is true that human beings have a kind of flexible open-ended intelligence that can be combined with an open-ended set of goals and we can perform a variety of different tasks. On the other hand I do really worry about this concept of human level AI; it feels a little bit incoherent to me, like we&#8217;re dealing with a kind of great chain of being where there&#8217;s this single quantity of intelligence and human beings are at a certain level and we just need to get to that level. That feels a bit confused and sort of dubious to me.</p><p>I also think, and actually maybe this is an area where we disagree, ultimately it&#8217;s not obvious to me that you&#8217;re going to be able to build systems that can do everything that human beings can do but that work radically differently from human beings and are subject to a totally different kind of design process in terms of the learning mechanisms by which they arise. I think that idea is coherent, but I think this concept of AGI is basically saying we&#8217;re going to get systems that can do everything that human beings can do. They&#8217;ve got the kind of flexible, open-ended intelligence, but they&#8217;re not going to work anything like how human beings work.</p><p>I feel like that idea doesn&#8217;t get enough scrutiny in discourse about AI. What do you think?</p><p><strong>Henry Shevlin:</strong> So loads of juicy threads there. Just a couple of quick historical notes. So the idea of generality as a feature of AI systems was really popularized by John McCarthy, one of the founding figures of modern machine learning and AI, all the way back in the sort of &#8217;60s and &#8217;70s. And then I think AGI or the concept of general intelligence as a central notion for frontier model development is sort of popularized and refined a bit by Shane Legg and Marcus Hutter in the early 2000s. So they give this famous definition of general intelligence as the ability to achieve goals across a wide range of environments.</p><p>And if we&#8217;re going to sort of do any useful sort of scientific analysis, I think, with the concepts in this vicinity, I think the idea of generality as a sort of continuous dimension is more useful and interesting than the concept of AGI per se. 
I think AGI sounds like there&#8217;s a definite finish line for model development, which I think is probably unlikely for reasons maybe we&#8217;ll get onto, but spoiler alert, I think it has to do with the jagged frontier and the jagged nature of AI development. But on the other hand, the idea of generality seems like a really legitimate scientific category, right? You know, obviously operationalizing these terms is always a bit tricky, but the idea that we can measure the ability of systems to perform well across different domains, that seems like something that is measurable and is meaningful. And I think that&#8217;s an area where we&#8217;ve seen astonishing progress in very, very recent history.</p><p>So back in, I think it was 2019, I wrote a paper with Karina Vold, Matt Crosby and Marta Halina called The Limits of Machine Intelligence, where we were comparing contemporary frontier AI systems somewhat negatively with the capabilities, not just of humans, but of non-human animals. In that paper, we draw heavily on biology and just talk about the wide range of things that honeybees can do, that birds can do, how they are not specialized intelligences, and comparing them with things like AlphaGo or AlphaFold, which are, as you sort of suggested, really, really powerful systems, but operating in very, very narrow domains.</p><p>Now, since then, and somewhat, I think, to the surprise of me and others, large language models have shown that in some ways it is possible to build really quite robust systems, systems with a very high degree of generality across a lot of cognitive tasks. And I think that this has sort of dawned quite slowly. I think as recently as sort of just like the launch version of ChatGPT, which was running on 3.5, you still ran into a lot of the kind of familiar problems that you&#8217;d run into with sort of previous systems that you alluded to. You change the rules of chess slightly and you get sort of inelegant failures. And I think you could see that already with things like ChatGPT: the launch version would often make non sequiturs. It was easy to confuse. Fairly trivial to get it to hallucinate. And across all those metrics, these systems have been getting more and more reliable.</p><p>Early ChatGPT was terrible at mathematics, for example. Contemporary ChatGPT or contemporary LLMs in general can do fantastic mathematics. We&#8217;ve had admittedly specialized fine-tuned models, but still LLMs at core that are now winning International Math Olympiad golds. So maybe one way to push back against your idea, or at least your hypothesis, that high levels of generality are only achievable in something like a human package: well, I think the trend line suggests that we are moving rapidly towards more and more general systems in a distinctly unhuman-like package in the form of LLMs.</p><p><strong>Dan Williams:</strong> Completely agree. And this is, I think, the kind of strongest argument for the alternative view. I mean, just to kind of reconstruct my somewhat garbled reasoning, my thought was something like, we talk about AGI, and often the existence proof that there&#8217;s such a thing as AGI is the fact that we&#8217;ve got human beings. 
And I think so much of the discourse about why AGI will be transformative is the idea that these systems will be able to do everything that human beings can do, maybe just in the cognitive, intellectual domains.</p><p>And my thought was, well, fair enough, but we&#8217;re not building systems that work anything like how the human mind works. So there&#8217;s a kind of assumption here, sort of bundled with this AGI concept in terms of the way that it gets used, which is we&#8217;re going to build systems or we can build systems, maybe we are on track to build systems that can do everything that human beings can do in a way that this concept of AGI sort of captures, but that work nothing like human beings. And I don&#8217;t think it&#8217;s obvious that that assumption is true. A priori, certainly being a physicalist, being a functionalist doesn&#8217;t commit you to the truth of that. So the question is, why should we believe it?</p><p>And I think a very good response is: look at what&#8217;s happening in AI over the past few years. Maybe a kind of skepticism made sense in 2020. But now, just given the realities of how much AI progress there&#8217;s been, especially when it comes to the generality of these LLM-based systems, that skepticism is difficult to maintain. I think that&#8217;s fair. I definitely think that the progress that we&#8217;ve seen in AI, and the fact that clearly a significant aspect of this progress is the generality of problem solving ability with these systems, does put pressure on the kind of skepticism that I was raising.</p><p>I do wonder how much pressure. Like suppose someone just wants to say, okay, you&#8217;ve made a certain kind of progress. We can characterize that in terms of generality. But of course, the people who are really bullish on AI progress, they&#8217;re not just claiming that these systems are very competent and general as we find them today, they&#8217;re claiming that we&#8217;re going to have drop-in workers that can substitute for human labor across different areas of the economy. Why should we extrapolate from progress that we&#8217;ve seen over the past four years and think that that&#8217;s going to get us to the full suite of capabilities that we associate with human intelligence? We&#8217;re kind of skipping ahead here to get to questions about benchmarks and progress and so on. But I think it&#8217;s an interesting question. What are your thoughts about that?</p><p><strong>Henry Shevlin:</strong> Yeah, I think you&#8217;ve characterized the debate really well. And I think it was a really plausible hypothesis, even a couple of years ago, that, you know, to use the phrase that has rapidly become a Twitter meme, deep learning is hitting a wall, LLMs are going to hit a wall; that we&#8217;d find out that there&#8217;s only so far you can go with these very unhuman-like architectures. Okay, maybe we find out that you can use them to generate high quality code and do basic composition and translation. But there is some sort of task set T where no matter how big we build the models, they&#8217;re just no good. Maybe that would be social cognition or causal reasoning or scientific reasoning.</p><p>And yet every candidate domain pretty much has fallen. 
So that doesn&#8217;t mean that we won&#8217;t find some candidate domains where it turns out that just scaling these systems up won&#8217;t lead us to greater progress, but we haven&#8217;t found them yet.</p><p>I think probably the most interesting one that I&#8217;m watching at the moment is agency. I&#8217;m not sure if we&#8217;ve discussed it before, but think of things like Anthropic&#8217;s experiment with Claudius: getting Claude to run vending machines at Anthropic&#8217;s offices, and failing abysmally at any kind of long term structured planning task that involves interacting with different human agents, some of whom might have slightly malicious motives, you know, people trying to get discounts from the vending machine. It&#8217;s very funny. We can probably drop a link to the study in the blog. But that was an area where it looks like we really are struggling to build systems that can do something like sustained human agency. But even there, we&#8217;re seeing rapid progress. And it&#8217;s not clear to me that we&#8217;re immediately hitting any sort of brick walls.</p><p>So that said, it is entirely possible that we will. And I&#8217;d also just emphasize again that I think this is very much an empirical question. I think, again, it was a really plausible hypothesis a few years ago to think that simply training on language alone wouldn&#8217;t be able to get you anything like cognition. I think there&#8217;s a natural vision of how cognition works where, in the human case, language sort of sits at the top of the pyramid. And then you&#8217;ve got layers underneath of things like sensorimotor cognition, motor skills, spatial reasoning, social reasoning and so forth. And language is just the capstone. And if you try and build that capstone without the supporting layers, sure, you might be able to do some clever stuff, but it&#8217;s never gonna give you real intelligence.</p><p>And the discovery that, at least from what we&#8217;ve seen so far, that doesn&#8217;t seem to be the case is, from just a general cognitive scientific point of view, probably the most astonishing discovery in cog-sci in several decades, I think.</p><p><strong>Dan Williams:</strong> Can I just quickly interrupt, Henry, because I really want to make sure that I&#8217;m understanding what you&#8217;re saying. So the last thing you said was you might have a model of kind of agency and intelligence where you need to get the sensorimotor stuff, the kind of embodiment, the being in the world. Is that a Heidegger phrase? I&#8217;ve no idea what he meant by that. But that kind of stuff, you need to get that basic sensorimotor stuff, lots of the stuff that we share with other animals, right, first before you can get these more kind of cerebral intellectual tasks like being amazing at software engineering and coding and mathematics and language and so on. And your thought was that was an interesting hypothesis. Actually, what we found with AI in the past few years is it&#8217;s not true. Actually, you can get all of that really kind of cerebral, highly intellectualized, those sorts of capabilities without that other stuff.</p><p>But couldn&#8217;t someone say, well, that sort of cuts both ways in a sense. So we&#8217;ve talked about this previously, but there&#8217;s this famous Moravec&#8217;s paradox, you know: things that we find easy are hard for machines, and things that we find hard are relatively easy. 
And what we&#8217;ve found with AI progress over the past several years is, yeah, these systems have got really good at these kind of, we might think of them as evolutionarily recent, capabilities that human beings have: the very abstract, cerebral, intellectualized stuff to do with manipulating text and so on. But really, the significant challenge when it comes to intelligence isn&#8217;t that stuff; it&#8217;s that sort of basic sensorimotor coordination, these much lower level abilities that we share with other animals. And so far we haven&#8217;t seen much progress on those things. And therefore we shouldn&#8217;t actually be so bullish on the progress that we&#8217;ve seen with these AI systems over the past few years.</p><p><strong>Henry Shevlin:</strong> Yeah, again, really interesting. I think Moravec&#8217;s paradox is looking a lot shakier than it used to. So, I mean, one example of something that was sometimes cited as sort of an instance of Moravec&#8217;s paradox was image recognition. Image recognition, correctly categorizing the kind of things that were in a presented image, was famously incredibly, incredibly hard. And then around 2012, AlexNet, one of the early deep learning systems, started to radically tear away at these benchmarks and dramatically improve on previous generations of performance. And I think it&#8217;s fair to say that image categorization is basically a solved problem now.</p><p>And I think in quite a few Moravec-type domains, we&#8217;ve seen very, very rapid progress. So another Moravec-ish domain is things like understanding conversational implicature or subtle things that people might mean. Conversational implicature is a technical philosophical term, but huge amounts of human communication rely on things like theory of mind and shared context. So if I say, what do you think of that?, where that is referring back to something I said five minutes ago, being able to figure out what I&#8217;m referring to is a very Moravec-style skill that relies on a lot of contextual knowledge. But in these kinds of domains, AI just does brilliantly nowadays. AI is very good at conversational pragmatics or conversational implicature, very good at image recognition.</p><p>So with Moravec&#8217;s paradox, it&#8217;s no longer clear that it holds, or it holds in a much more uneven and jagged way. It&#8217;s not the case that sort of everything that&#8217;s easy for a two year old is hard for AI and vice versa. So I think that&#8217;s one of the ways in which I would push back.</p><p>Regarding the broader question, sure, you&#8217;ve built sophisticated language models. That doesn&#8217;t mean that these systems will then be able to do the fancy sensory motor stuff. I agree. I think that&#8217;s absolutely right. So it may not be the case that LLMs five years from now are any better. Well, I think they will be at least a little bit better at the kind of sensory motor stuff as we&#8217;re seeing from the increased integration of sort of LLMs into robotic architectures and so forth. But yeah, I think it&#8217;s definitely possible that we found a different way to build high-level intellectual capabilities that doesn&#8217;t translate to sensorimotor capabilities.</p><p>But the other thing I would flag here, and maybe this slightly undermines my own point from earlier on, is that contemporary LLMs are radically different beasts from LLMs three years ago. 
Contemporary LLMs interpret live video. They interact with the world via querying web results. They can access APIs. They can use tools. They are in a kind of dynamic relationship with the world, albeit one that&#8217;s a little bit different from ours. You can ask ChatGPT, is this bar open on a Friday? And it&#8217;ll say, yes, I think it is. And say, can you check that? And it&#8217;ll come back and say, I was wrong. Sorry. Yes, they&#8217;ve just recently changed their opening hours. They&#8217;re now closed on a Friday. I think that is almost a form of sensory motor grounding, you know, in obviously a different package. So I think contemporary LLMs are not just sort of these ossified monoliths trained on a bunch of text and then frozen in time forever. They are in some ways closer, at least at a very abstract architectural level, to the kind of dynamic, quasi-embodied systems that we are.</p><p><strong>Dan Williams:</strong> Interesting. I&#8217;m not so sure that Moravec&#8217;s paradox has been challenged to the extent that you&#8217;re suggesting. I mean, we don&#8217;t have robotics, right? It&#8217;s nowhere near as advanced as these LLM-based systems.</p><p><strong>Henry Shevlin:</strong> Well, hold on, hold on. Just on that point, what do you think of driverless cars as a counter example here? Because driverless cars were another one of these things where people often used the failures of driverless cars in the 2010s as an example of Moravec&#8217;s paradox in action. These people said, actually, things like driverless cars are going to be far harder than people realize because they involve this whole complex suite of sensory motor capabilities. But now, the safety record of Waymo in the Bay Area exceeds that of human drivers.</p><p><strong>Dan Williams:</strong> Yeah, very, very good point. I do think though, some degree of goalpost shifting, and realizing that certain things we thought would be very hard are much easier than we thought, can kind of be legitimate in this context because our intuitions are not particularly reliable when it comes to tracking what really matters about intelligence.</p><p>So if you go back to the seventies and eighties, all of these people, even those who thought that embodiment was really central to intelligence, they would say things like, well, you&#8217;ll never get an AI system that can beat a human being at chess because that&#8217;s going to tap into all of this constellation of abilities, which are, you know, connected to our embodiment and so on. And then obviously we know what happened, but I think part of that is we&#8217;re just learning with every kind of development with these AI systems that there&#8217;s much more to intelligence than we thought. So yes, we do have self-driving cars, but we don&#8217;t have functional robotics of the sort that we can integrate into our lives, suggesting that self-driving cars, as impressive as that kind of technology is, are not really a proxy for the kind of full suite of sensory motor abilities that we care about when it comes to animals&#8217; interactions within the world.</p><p>I think we&#8217;ve also so far been thinking of Moravec&#8217;s paradox in terms of this contrast between the highly cerebral intellectual domains, kind of symbolic, often explicitly text-based, and basic sensory motor control. But I think there are things like continual learning, right? 
Think of the capacity of animals, and very, very young children are a perfect example of this, to be constantly learning from their environments. In a way, I think that&#8217;s one of these things which state of the art AI today hasn&#8217;t cracked. I mean, you&#8217;ve got this kind of pre-training phase where it&#8217;s next token prediction. Then you&#8217;ve got post-training where it&#8217;s various sort of reinforcement learning-based learning processes for the most part. But you don&#8217;t have kind of continual learning, updating of the model weights as they go through the world from their experience. And that&#8217;s not, strictly speaking, just a sensorimotor thing. That&#8217;s also connected to our sort of higher abilities.</p><p>And then also things like, you know, creativity, judgment. We&#8217;ve got these words for these concepts. And I think our explicit understanding of them is quite weak. But I do think there&#8217;s something to the idea that, you know, ChatGPT, the amount of knowledge this system has is unimaginable relative to what an individual human being has. But individual human beings can do things in the cognitive domain which are still much more impressive than what systems like ChatGPT or Gemini can do. And again, that&#8217;s a kind of competence, a kind of ability, which is not purely sensory motor, but which I think is quite central to how animals in general go through the world, a capacity for judgment, for creativity and so on, which again, these systems don&#8217;t seem to possess.</p><p>And one reason for that might be that they&#8217;re just these incredibly weird systems relative to human beings. Their training process is completely different. Their architecture is completely different. And they can do these things that are incredibly impressive, almost unimaginably impressive relative to a few years ago. But there&#8217;s a great quote actually from AI podcasting legend, Dwarkesh Patel, which is something like these systems are getting more and more impressive at the rate the short timelines people predict, but more and more useful at the rate the long timelines people predict. The thought being, yes, what they can do in terms of our subjective sense of how impressive it is, is amazing. And they&#8217;re performing very, very well in terms of these benchmarks. But in terms of their real world utility, actually they&#8217;re not having the impact that many people think. And one reason for that might be that they lack many of these kind of amorphous, nebulous capabilities that human beings and indeed to some extent other animals have. Sorry, that was very nebulous and sort of inchoate in terms of my thoughts there, but I&#8217;ll be interested to hear what you think.</p><p><strong>Henry Shevlin:</strong> Well, can I ask, what are some examples of judgment- or creativity-involving tasks where you think contemporary models clearly fall short of human capabilities? And I&#8217;m not denying that there might be such cases, but I&#8217;m just curious if there are any ones you have in mind.</p><p><strong>Dan Williams:</strong> Yeah. Well, for example, I mean, I&#8217;m a writer and a researcher. I don&#8217;t think AI systems as they exist today, or maybe I should actually, I should rephrase that as commercially released AI systems, because God knows what&#8217;s happening privately within these labs. 
I don&#8217;t think they could function as a researcher and writer generating novel and interesting opinions, which is the kind of self-image that I would like to have. I think they can write bloody well. And if you use them as an assistant, they can be incredibly helpful in terms of augmenting and enhancing your abilities. But I don&#8217;t think we&#8217;re at the stage where a ChatGPT could function as a substitute for me, which in a way is strange, because it has a knowledge base that is just so vast relative to mine or that of any other researcher.</p><p>So I would imagine that if you took my abilities, limited as they are, but combined them with the almost godlike knowledge base these systems have, you would get really impressive research outputs. But you just don&#8217;t see that when it comes to these state of the art AI systems. Am I missing something? Do you disagree?</p><p><strong>Henry Shevlin:</strong> Well, one thing worth mentioning is that it might be a little misleading to compare these systems with you, who are, and I hope you won&#8217;t mind me saying this, an elite sort of knowledge worker, when thinking about the ability to do original composition, original essays, original analysis. Yes, I think you still have an edge. But I think we&#8217;re well past the point where ChatGPT in its current form can produce far better essays than the median undergraduate. At this stage, in some domains, it can produce far better essays than the median grad student&#8217;s 5,000 word essay.</p><p>And so I think there&#8217;s a little bit of a tension there if you&#8217;re saying humans in general have this special sauce that lets us do things that AI systems can&#8217;t, when in fact AI systems in the kind of domains you mentioned already do vastly better than the very large majority of humans on these tasks.</p><p><strong>Dan Williams:</strong> Yeah, I think I&#8217;m more open to the possibility that they&#8217;re doing something very weird and incredibly impressive that does seem to outcompete human beings on specific tasks, but that they do in fact lack many properties and capabilities that human beings have, such that they couldn&#8217;t substitute for them even when it comes to purely intellectual tasks. I do realize, though, that there is the possibility of a significant amount of copium, of self-serving cope, in this. And there&#8217;s something unsatisfying about it as well, inasmuch as I think you&#8217;re really right to push back. I would also say that I wouldn&#8217;t have predicted progress of the sort that we&#8217;re seeing back in 2020, and I probably haven&#8217;t fully updated to the extent that a rational individual should have done given the kind of progress that we&#8217;ve seen.</p><p>But let&#8217;s return to the main thread; I&#8217;m aware of the fact that we got derailed by a really interesting conversation there. We&#8217;ve touched on this introductory stuff about AGI: how you might understand the concept in terms of transformative impact, in terms of human level AI, in terms of a more abstract functional specification of capabilities. 
Maybe we can spend a little time thinking about the skeptical arguments concerning this concept of AGI. So people like Yann LeCun or Alison Gopnik, whom I mentioned at the beginning, just saying the concept makes no sense at all and there&#8217;s no such thing as general intelligence.</p><p>I take it one argument you often find here is that human beings are supposed to be the existence proof for AGI. Here is a complex information processing system that has the kind of capabilities that people who talk about AGI are interested in. But, the thought goes, human intelligence is not general. The human brain is this integrated mosaic of very specialized abilities that correspond to the kinds of problems we confronted in our evolutionary past.</p><p>Sometimes this is cashed out in terms of massive modularity, to get a bit nerdy about the cognitive science debate. And I think people drawn to that kind of perspective think there&#8217;s something problematic with the concept of AGI, because it seems to assume that intelligence is this one generic problem solving ability, when in fact human intelligence, which is supposed to be our only existence proof of AGI, doesn&#8217;t take that form. It&#8217;s a set of special purpose modules for different tasks, which might be nicely integrated in the case of the human brain, but don&#8217;t involve just general purpose learning mechanisms. What&#8217;s your thought about that kind of critique or that kind of worry?</p><p><strong>Henry Shevlin:</strong> Yeah, so I&#8217;m pretty sympathetic to massive modularity in the human case. And if you are sympathetic to massive modularity in the human case, one way of interpreting that is to say that general intelligence can operate across highly modular architectures. If what we&#8217;re thinking about when we&#8217;re thinking about general intelligence is something ultimately grounded in the ability to perform cognitive tasks, does it matter whether that&#8217;s achieved purely via a relatively narrow bundle of cells in your prefrontal cortex using working memory, or via a bunch of different cognitive sub-modules working together?</p><p>So yeah, if you accept massive modularity as a thesis about humans, then why not just say, okay, maybe the way we get to artificial general intelligence is through a similarly massively modular system. And you can already see hints of this in the increasing tool use by AI systems.</p><p>And it may be, and this goes back to our discussions about CAIS versus AGI, that the first true AGI systems, and I&#8217;m skeptical we&#8217;ll ever have a clear we&#8217;ve-built-AGI moment, but maybe the first systems that most people would agree are AGI systems, might similarly have a relatively modular architecture: maybe a central coordinator powered by an LLM, coupled with a dedicated mathematics engine, coupled with dedicated deep reinforcement learning agents for doing various kinds of scientific work, coupled with maybe sensorimotor systems embedded in drones for doing that kind of thing. I think that would still be AGI, at least in the sense that&#8217;s relevant and interesting.</p><p><strong>Dan Williams:</strong> Yeah, I think that&#8217;s a very good response. 
I mean, is the worry these people have, then, that if you look at AI as it exists today, most of what&#8217;s powering it is very general purpose learning mechanisms, which doesn&#8217;t really look like what you&#8217;ve got in the human case? So maybe we should be skeptical that you&#8217;re going to get to human-like capabilities via this architecture. But I think your point, that there&#8217;s actually a lot more modularity here than you might think if you just look at the base model, precisely because of this interface with all of these mechanisms, is important.</p><p>I wonder if there&#8217;s another thing in the background here, which is skepticism about the way in which AGI often gets talked about, where it&#8217;s like: we&#8217;re gonna build AGI and it&#8217;s gonna have these almost superhuman capabilities across all of these different domains. And maybe some people think, well, if you look at human beings, the existence proof for this concept of AGI, you don&#8217;t find anything like that. You find that we&#8217;re very good at some things and not so good at other things. So maybe the thought would be that once you&#8217;ve paid attention to how human intelligence, and maybe more broadly animal intelligence, works, very specialized, very modular, that should make you a bit more skeptical about some of the claims about the capabilities of superintelligent AGI in the future. What do you think of that kind of argument?</p><p><strong>Henry Shevlin:</strong> Yeah, I think it&#8217;s definitely worth stressing how distributed our civilizational capabilities are across different humans. Most humans are not fully general in their intelligence. Some people are great at mathematics, some people are great at coding, some people are great at languages. But we&#8217;re able to achieve remarkable things at the civilizational or cultural level because of cooperation across different kinds of specialists within our massive population.</p><p>But again, I don&#8217;t see why a model like that couldn&#8217;t apply to AI systems. Maybe that&#8217;s across millions of different instances with different fine tuning for different tasks. So yeah, I think the lack of generality in individual humans is compensated for at the population level, and I don&#8217;t see why a similar kind of distributed architecture couldn&#8217;t apply to relatively near future AI systems in the modular way I&#8217;ve been describing.</p><p>There&#8217;s an idea worth bringing back here that I touched on earlier, which is the jagged nature of current AI systems. For anyone who&#8217;s not familiar with this, roughly the idea is that if you think about a spider diagram, or a radar chart as it&#8217;s sometimes called, where you map human performance across different dimensions of intelligence, spatial reasoning, mathematical reasoning and so on, let&#8217;s just say for the purposes of argument that humans are pretty well rounded across these dimensions.</p><p>Current AI systems, by contrast, are already really superhuman at some tasks, well below human performance on others, and around human performance on some. I think this is a really striking and important observation for understanding trends in current AI. 
And it also explains a lot about the point you made earlier, about why these things are maybe less useful than you might have expected.</p><p>And I&#8217;ll happily say on the record here that I was far more optimistic about the near term economic impacts of things like ChatGPT than turned out to be correct. I think I was saying back in November 2022, on Twitter and elsewhere, that this was going to revolutionize the economy in the next few years. I still think it is going to revolutionize the economy, but it&#8217;s been a lot slower than expected. And I think jaggedness is a big part of the reason; adoption is another.</p><p>But just to go into this in a little more detail: when we think about what an individual human job involves, it involves a huge range of tasks. It&#8217;s not one task for the most part; they&#8217;re bundled tasks. Current AI systems are really good at some and bad at others, which makes the drop-in agent-employee model currently non-viable, because there are enough tasks within a human workflow that AI is really bad at that it&#8217;s just not applicable.</p><p>So there are a couple of things you might say about how we get around this problem. One is to just rely on these systems getting better, with the jaggedness, if not smoothing out, then the sheer level of ability expanding sufficiently that, even if AI systems are still vastly superhuman in some domains and only human level in others, they&#8217;ll be good enough across the board to function as drop-in agents.</p><p>Another interesting idea, though, is that we will just redesign task flows. We will do some unbundling of tasks in roles such that we create roles that AIs can be dropped into quite safely. A nice, useful analogy here, and I was talking about this on Twitter not long ago, is mechanization in agriculture. It&#8217;s not the case that mechanization in agriculture proceeded through creating robot farmers. It involved instead changing task flows such that relatively simple machines could take over very labor-intensive tasks from humans, and changing the kind of things that the average human farmer does.</p><p>I think that might be a better model for thinking about at least near-term AI impacts on employment, where it&#8217;s a matter of redesigning task flows and value chains such that we create niches where you can drop in these AI agents to take on hugely important parts of the value chain, without necessarily replacing humans one-for-one in the kind of jobs that humans currently have.</p><p><strong>Dan Williams:</strong> Yeah, that&#8217;s really interesting, and I think it&#8217;s a very insightful observation. I would say, though, that when we&#8217;re thinking about what people do in their jobs, it&#8217;s not like there&#8217;s a set of tasks that are separate from each other and AI can do 40% of them or soon it&#8217;s going to be able to do 60% of them. The tasks are integrated with each other in an incredibly complex way, such that we might be able to delegate some individual tasks to an AI system. But if I think about my job as an academic at a university, it&#8217;s not like I can say my job consists of 142 tasks and here they are. 
It&#8217;s a much more integrated, unified set of responsibilities and obligations.</p><p>So if we&#8217;re thinking not just about delegating some tasks to AI systems and adjusting how the workflow is structured and how organizations are structured, but about radical forms of automation, I think at the moment we&#8217;re very far from that, precisely because I don&#8217;t think, as impressive as these AI systems have been, that they&#8217;re capable of the kind of long time horizon, integrated, multimodal task performance of the sort that most human beings perform.</p><p>And that gets us nicely onto something we&#8217;ve already touched on, but which I think we should talk about as a separate topic, which is measuring progress in AI. Lots of this is framed in terms of progress towards AGI, but I guess you can just think of it in terms of progress in the capabilities of these systems in general.</p><p>I think there are three overarching ways in which we do this, to draw another distinction between three different categories. There&#8217;s the subjective: how impressive is this? That&#8217;s not completely without value, but I think it is very unreliable for various reasons. There&#8217;s the set of formal benchmarks that are used to evaluate model performance. And then there&#8217;s actual real world deployment: something like what fraction of work in the economy is done by automated AI systems.</p><p>If you&#8217;re thinking about those three categories, I take it that the quote I paraphrased from Dwarkesh, that these models are getting more impressive at the rate the short timelines people predict and more useful at the rate the long timelines people predict, is drawing a distinction between two different ways in which you can evaluate these systems. There&#8217;s how subjectively impressive we find them, and maybe that&#8217;s also connected to benchmark performance, where, as you&#8217;re saying, they&#8217;re just getting better and better at a seemingly ever increasing set of tasks. And it&#8217;s juxtaposing that with real world utility. I think that&#8217;s complicated, as you said, by the fact that real world deployment is not a simple function of capability; it&#8217;s also going to be shaped by all sorts of other things.</p><p>But how are you thinking about this, about measuring AI progress and using that to forecast AI progress?</p><p><strong>Henry Shevlin:</strong> Yeah, I think that&#8217;s a fantastic tripartite way of splitting it up. So just a couple of quick comments on the tripartite division. I think we saw from the launch of GPT-5 last year how underwhelmed most people were by it. And I think that was a fascinating sociological episode in itself, particularly because, if you look at it purely in terms of benchmarks, the line kept going up and continues to go up across most of the things we know how to measure.</p><p>There is no evidence of a slowdown in AI capabilities, at least in terms of evals and benchmarks. And yet people were a lot less impressed by GPT-5 than by previous models. I think there are a few interesting reasons for that. A very basic one is just that the cadence of releases has massively increased. 
So it&#8217;s no longer the case that we&#8217;re waiting a year and a half between model releases with no releases at all in between. Now we get updates pushed every couple of months.</p><p>So there are going to be fewer wow moments. I think that partly explains why people were maybe a little underwhelmed by GPT-5. Another factor, another constraint on how impressed people are, is that models are already good enough at most of the kinds of tasks that most people use them for that new releases don&#8217;t radically change people&#8217;s affordances with the system.</p><p>I mean, there are occasional specific domains in which they do. Just to give one example of what I think was a major transition in capabilities: the release of NanoBanana, Gemini&#8217;s integrated image model, dramatically changed what you could do with images and data, specifically because NanoBanana is very good at threading text data and semantic content together with images. So, for example, with NanoBanana you can have a long conversation and say, now create an infographic or a mind map of the conversation we&#8217;ve just had. NanoBanana could do that from the day of release, in a way that every previous image model would fail at abysmally. So there are these sudden wow, here&#8217;s something new that I could do that I couldn&#8217;t do before moments. But in terms of the general performance of language models, I think they are good enough for most purposes that there are fewer wow moments there.</p><p>More broadly, regarding which of these three ways of measuring AI progress is most relevant and important, I think it depends on the domain. There are isolated domains, as measured by particular benchmarks, like some of the mathematics benchmarks, where those benchmarks do have immediate significance. If we are using AI models to solve outstanding major problems in mathematics in the near future, then I think benchmarks might be getting us pretty close to measuring the underlying criterion that we&#8217;re interested in.</p><p>Ultimately, though, I think it&#8217;s the economic impacts that are most pressing and most exciting and most scary. But as you said, there&#8217;s so much more to those than just the raw capabilities of the system.</p><p><strong>Dan Williams:</strong> Yeah. I do think, though, and now I&#8217;m just becoming a Dwarkesh fanboy, but there&#8217;s another point that he made in this blog post, which we can link to because I think it was a very interesting one. He says that people who make this point about the difference between the raw capabilities of the system and the rate of diffusion, or in other words who say that you can&#8217;t evaluate the capabilities of the system merely by looking at the degree to which it&#8217;s integrated into people&#8217;s workflows because that&#8217;s going to be slowed down by all sorts of factors, he says that&#8217;s basically a cope, on the grounds that if we really had AGI, it would integrate incredibly quickly.</p><p>An analogy he uses: immigrants integrate into the economy very quickly, because they&#8217;ve got the wonderful, flexible, general purpose intelligence that human beings have. 
And he says, well, if you really are imagining an AGI of the sort that people like Sam Altman and so on were forecasting, then you wouldn&#8217;t have all of this friction when it comes to integrating these systems into an organization&#8217;s workflow, because they would be able to do everything that a human being can do, just better. So it wouldn&#8217;t be any more difficult than integrating a human being.</p><p>It&#8217;s an interesting argument. I don&#8217;t know whether I&#8217;m fully persuaded by it. Before we move on, did you want to respond to it?</p><p><strong>Henry Shevlin:</strong> Yeah, I think it&#8217;s a fantastic argument. I think as we move towards greater and greater degrees of generality, the existing structural constraints and the problems imposed by the jaggedness of models are going to become less pronounced. So one useful way to measure progress towards AGI is to think about the degree to which systems are capable of overcoming external constraints, external limitations.</p><p>So, for example, something as simple as a model being assigned a task, realizing it doesn&#8217;t have the internal resources to solve that task, identifying tools it could use to solve it, and then using those tools. That is the kind of behavior to look for, and we see some of it already with things like Claude Code in its current form. But I think that is a good way to think about what progress towards AGI looks like: overcoming structural constraints that may not be pure limitations of the model, though it says something important about the model if it can work in indirect ways to overcome them.</p><p><strong>Dan Williams:</strong> Yeah, and I think it&#8217;s also interesting because once you start thinking in that way, it really does put pressure on the sort of AI-as-normal-technology perspective that says: look, if you look at the history of technology, the process by which technologies diffuse throughout the economy, and throughout society more broadly, takes a long time for all sorts of reasons. You might think, okay, but you can&#8217;t look at the history of previous technology, because something like AGI would be a radically sui generis technology, precisely because it would be very easy to integrate very quickly into people&#8217;s workflows and into the sorts of things that companies and so on are doing.</p><p>Maybe just on this issue of these different ways of evaluating AI progress, before we move on, we could touch on two different things. The first is probably the most influential benchmark at the moment, this METR, or MATR, I don&#8217;t know exactly how you pronounce it, Model Evaluation and Threat Research graph. To be honest, I think we&#8217;d have to do a whole separate episode on this where we really get into the weeds, because the methodology and everything is very complicated. But basically, as I understand what METR are doing with this metric, which is a time horizon metric, they&#8217;re saying: look, lots of other benchmarks are evaluating AI&#8217;s ability to perform a set of, say, abstract cognitive, intellectual tasks. 
But what we should really care about, or at least one thing we should really care about if we&#8217;re interested in things like agency and the ability to master context and the constellation of abilities that seem to go along with agency, is how long it would take a human professional to perform a task.</p><p>And to the extent that AI systems are getting better and better at performing tasks that would take human beings a very long time to do, that&#8217;s telling you something really important about the capabilities of these systems and how fast they&#8217;re progressing. As I understand their metric, basically what they&#8217;re saying is that there&#8217;s been a kind of exponential growth, such that the length of tasks that AI systems can perform is doubling something like every three to seven months, where task length is specified by how long it would take a person to perform that task.</p><p>There&#8217;s a very nice quote from Roon, a popular social media AI commentator. He says the METR graph has become a load bearing institution on which our global stock markets depend. And the thought there is that many people are looking at this graph and seeing line go up. They&#8217;re seeing this rapid progress, and they&#8217;re extrapolating it into the future. And that&#8217;s why there&#8217;s so much optimism about the capabilities of these systems and how they&#8217;re likely to develop. That&#8217;s my current understanding of this graph and the metric it&#8217;s using. Do you have any thoughts about this evaluation?</p><p><strong>Henry Shevlin:</strong> No major notes; I think you did a great job of describing it. Just a flag that the METR time horizons benchmark is specifically focused on software engineering tasks. So that is a slightly narrower set of tasks, but it&#8217;s one that obviously has massive economic value and is also potentially relevant if we&#8217;re thinking about any kind of recursive elements in AI development: software engineering tasks are relevant to building AI systems. So to the extent that we&#8217;re finding massive time-saving improvements through the use of AI tools, that might be expected itself to accelerate AI development. That&#8217;s another reason I think this is so important, maybe less so for the stock markets than for the more future-oriented predictions about where AI capabilities are going to go from here.</p><p>But absolutely, this is probably the most interesting benchmark to watch when thinking about real world impacts of AI. Software engineers are a very expensive group of people to employ. And to the extent that AIs function as massive time savers on those tasks and can do more and more complex workflows within them, that has massive real world impact and significance.</p>
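<p>To make the shape of that claim concrete, here&#8217;s a minimal sketch of the extrapolation a time horizon metric like this implies. The starting horizon and doubling period below are purely illustrative assumptions, not METR&#8217;s actual fitted values:</p><pre><code class="language-python"># Minimal sketch of an exponential time-horizon extrapolation.
# The 1-hour starting horizon and 6-month doubling period are
# illustrative placeholders, not METR's fitted estimates.

def projected_horizon(start_hours: float, doubling_months: float,
                      months_ahead: float) -> float:
    """Task horizon (in human-hours) after months_ahead months of doubling."""
    return start_hours * 2 ** (months_ahead / doubling_months)

for months in (0, 12, 24, 36):
    hours = projected_horizon(1.0, 6.0, months)
    print(f"{months:2d} months out: tasks taking a human ~{hours:g} hours")
# Prints horizons of 1, 4, 16, and 64 hours respectively.
</code></pre><p>The point of the arithmetic is just that a fixed doubling period compounds quickly: anywhere in the three-to-seven-month range mentioned above implies task horizons growing by roughly an order of magnitude every year or two.</p>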
<p><strong>Dan Williams:</strong> Yeah, and I think this point, that overwhelmingly these are software engineering tasks, say a software engineering task that might take a human being six hours to complete, in and of itself is going to limit the generalizability of this metric, because you might think lots of tasks in the world just don&#8217;t have the structure of software engineering tasks.</p><p>There are also all sorts of methodological questions about how they&#8217;re calculating this and so on, and, like I say, I think we should do a separate episode where we dig into this in detail. But there&#8217;s this other issue, which has to do with benchmarks as a whole. In order for us to be able to have graphs like this, we need tasks for which there&#8217;s basically a correct answer or a correct output.</p><p>I take it one worry here is just the classic Goodhart&#8217;s law: when a measure becomes a target, it ceases to be a good measure. So there&#8217;s a risk that with any given benchmark we&#8217;re getting systems that are getting better and better at doing well on the test, in ways that don&#8217;t necessarily correlate with the kinds of things that we really care about.</p><p>But I think there&#8217;s also another worry, where even if you set that aside, the worry would be something like: okay, by the very nature of these benchmarks, where there&#8217;s a clearly defined correct answer or output, you&#8217;re not tapping into the kinds of things that really matter to a lot of human intelligence, where it&#8217;s not a simple issue of here&#8217;s the finish line or here&#8217;s a clearly defined correct output. And how far do you think that skepticism can go? If someone says, look, there&#8217;s a possibility here that even though we&#8217;re seeing rapid progress on these benchmarks, including the time horizon benchmarks, which seem like they should be really informative, they&#8217;re just not really telling us anything interesting about the broader set of competencies that matter for real world deployment. How much skepticism do you think is tenable when it comes to the gap between benchmarks and the capabilities that we really care about?</p><p><strong>Henry Shevlin:</strong> Yeah, I think it&#8217;s a persistent worry, not just across AI research, or even ML in the broader sense, but across psychology. The criterion problem, as it&#8217;s sometimes called, shows up all over the place. We have a dozen different ways of measuring creativity, which have minimal predictive validity for one another. As soon as you operationalize a really interesting target, you immediately lose many of the features that make it interesting in the first place. So I think it is an absolutely legitimate worry.</p><p>That said, and I should be able to do better than this, but just anecdotally, I think we are seeing models become just generally more useful. If they were improving on benchmarks but that wasn&#8217;t translating into actual real world utility on different tasks, that would be a real red flag.</p><p>I can&#8217;t speak to your experience, but my experience is that basically every successive model release is at least somewhat better; I can do some new things with it. 
And that&#8217;s why I think the METR time horizons benchmark is a valuable one, but also why I think more grounded economic benchmarks, for example the degree of internal value created by AI usage in different industries, or the degree to which industries are successfully implementing AI automation projects and so forth, are an absolutely necessary complement. They&#8217;re measuring something that still has some criterion problems of its own, like generating economic value, but is much more tangible and less likely to be a mere artifact of our evaluation framework.</p><p><strong>Dan Williams:</strong> Yeah, okay, great. I think there are two things to finish on. One of them I think we can be brief about, because we&#8217;ve really already touched on it, which is what we should expect the transition to a post-AGI world to be like, however you understand AGI. And the other is predictions for 2026 in terms of how we see these capabilities developing.</p><p>But first, I just want to give you an opportunity to have a take on Claude Code. I&#8217;m sure you&#8217;ve also seen a lot of commentary, a lot of buzz, a lot of discourse to the effect that Claude Code is something special. Just for those who aren&#8217;t really in the weeds on AI: Anthropic is a frontier, cutting edge AI company; they&#8217;ve got a model called Claude; and as part of that, they&#8217;ve got Claude Code, which is primarily used by software engineers and coders, but apparently has much broader application.</p><p>I should say I haven&#8217;t used Claude Code. I do use Claude all of the time, which I think is incredibly impressive. But I haven&#8217;t used Claude Code, and I&#8217;m very skeptical that it&#8217;s AGI, or, if it is AGI, I think that probably tells us that the concept of AGI can&#8217;t do the work that many people have assumed it can do. Have you got a take on Claude Code before we move on?</p><p><strong>Henry Shevlin:</strong> So I haven&#8217;t played around with it as much as I would have liked. And it is, I think, one of the more daunting models for non-technical people to use. Even installing it will, for many people, be a little bit of an adventure. But speaking to friends whose jobs are primarily technical, the wow factor seems to be huge with the current iteration of Claude Code.</p><p>People are talking about how it&#8217;s transforming their workflows, enabling them to do a whole suite of tasks they couldn&#8217;t have dreamt of doing before. And I do think it is a significant landmark. I think it probably is a taste of the kind of capabilities that we&#8217;re gonna see over the course of the rest of this decade, where it&#8217;s not just people slotting in AI to do specific tasks or subtasks within their own workflows, but being able to delegate whole workflows to agentic systems.</p><p><strong>Dan Williams:</strong> Yeah, okay. And that in a way leads us onto the first of the two points that I wanted to end on, which is how we should think about the transition to a post-AGI world. I take it there&#8217;s a model you sometimes come across where it&#8217;s almost like the atom bomb going off in the Manhattan Project. 
You reach something called AGI and it&#8217;s just radically transformative immediately, for various reasons: maybe because of the capacity to take AGI and use it for large scale automation, but also potentially because of the ability of AGI to get involved in the AI R&amp;D process, triggering this kind of intelligence explosion.</p><p>I&#8217;m really skeptical that that&#8217;s the right way to think about it. I think what we&#8217;re seeing is basically incremental improvements in the capabilities of these systems when it comes to things like agency, multi-step, long time horizon planning, continual learning and so on. I don&#8217;t think there&#8217;s gonna be a big bang. I think we&#8217;re gonna see this incremental progress, where if you compare one year to three years down the line it will seem like a huge disparity, but living through it, I think it&#8217;s gonna seem very continuous.</p><p>And the same goes for the impact of this kind of technology on the economy, for the reasons that we&#8217;ve got into. I think there are going to be all sorts of bottlenecks. There&#8217;s going to be so much opposition to integrating it into people&#8217;s workflows, even when you&#8217;ve got capabilities that are very powerful, and so on and so forth. So I&#8217;m definitely not expecting a big bang here. And when people say that with highly agentic AI like Claude Code, at least relative to what&#8217;s come before, you&#8217;re seeing a kind of baby AGI, I think that might be true to an extent relative to certain understandings of what AGI is. But that just tells us, I think, that AGI isn&#8217;t going to be a landmark event. It&#8217;s going to be continuous, incremental improvement across lots of different capabilities. So that&#8217;s my high level take. Do you have a different take? Do you want to build on that in any way?</p><p><strong>Henry Shevlin:</strong> Yeah, I largely agree with your take that we&#8217;re not going to have a Trinity test equivalent moment, to use the analogy of the Manhattan Project. There&#8217;s not going to be a sudden moment where a lab says, we&#8217;ve built AGI. Instead, it&#8217;ll seem very incremental and continuous to most people, even those who are following what&#8217;s happening. And then by the end of this decade, we&#8217;ll look back and say, holy shit, look how far we&#8217;ve come.</p><p>And I don&#8217;t think there&#8217;s any reason to think that progress towards the kind of highly general autonomous capabilities that people associate with AGI isn&#8217;t continuing apace. And I do think, to go back to Claude Code, that it is an example of the kind of really consequential leaps that we&#8217;ll see.</p><p>So Ethan Mollick has a brand new piece out today called Claude Code and What Comes Next, where he talks about asking Claude Code to generate a passive income: it creates hundreds of files for him and, having worked autonomously for 74 minutes, deploys a functional website that could actually take payments. 
It got various things wrong along the way, but it was a far cry from the Claudius vending machine experiments from earlier last year.</p><p>So yeah, I think we&#8217;re going to look back at the end of this decade and realize how far we&#8217;ve come, but there&#8217;s not going to be a single Trinity test style moment. And I think an interesting parallel here is actually with the Turing test. A lot of people were expecting there to be a decisive moment where it&#8217;s like, wow, computers can now pass the Turing test. I don&#8217;t think that would have been a smart thing to expect, because Turing&#8217;s original test is woefully under-specified. He doesn&#8217;t give exact time windows and so forth, and there are various constraints you can build in.</p><p>But at this point the Turing test is no longer especially relevant as a measure of AI capabilities. It&#8217;s still of interest, but it&#8217;s no longer a clear benchmark we&#8217;re working towards. We have now had multiple instantiations of the Turing test showing that frontier AI systems can fool humans over two or five minute time horizons, with humans basically at chance when guessing whether they&#8217;re talking to an AI system or a fellow human.</p><p>But that benchmark slowly faded into the background rather than being a decisive moment, and I think AGI is going to be very similar. By the end of this decade, I do expect that we will have autonomous, agentic AI systems deployed in pretty much every industry. The vast majority of people&#8217;s workflows and daily jobs are going to be very different. I don&#8217;t think, for what it&#8217;s worth, that by the end of this decade we&#8217;re going to be looking at mass unemployment.</p><p>I actually quite like, though I don&#8217;t fully agree with, Noah Smith&#8217;s piece on how even in an AGI world we might still have full employment, leveraging the concept of comparative advantage: the idea that there are always going to be things where it&#8217;s cheaper or easier to employ a human to do a given task. I think that&#8217;s going to be one of the things that prevents mass technological unemployment. Also things like compliance, and the fact that you&#8217;re going to need humans on the loop in many tasks, monitoring AI systems to ensure that you&#8217;re abiding by regulations. But I do fully expect increasingly general AI systems to be ubiquitous by the end of this decade, by which point debates around AGI will seem increasingly irrelevant.</p>
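<p>To make the comparative advantage logic concrete, here&#8217;s a minimal sketch with made-up productivity numbers; the two task names and all of the rates are illustrative assumptions, not figures from Smith&#8217;s piece:</p><pre><code class="language-python"># Toy comparative-advantage calculation with hypothetical numbers.
# The AI is better at both tasks in absolute terms, yet the human
# still has the lower opportunity cost for one of them.

rates = {
    "AI":    {"analysis": 10.0, "review": 8.0},  # units per hour (made up)
    "human": {"analysis": 1.0,  "review": 4.0},
}

for worker, r in rates.items():
    # Opportunity cost of one unit of review, in units of analysis forgone.
    cost = r["analysis"] / r["review"]
    print(f"{worker}: one review costs {cost:.2f} analyses")

# AI: one review costs 1.25 analyses; human: one review costs 0.25.
# Total output is therefore higher when the human specializes in review
# and the AI in analysis, despite the AI's absolute advantage in both.
</code></pre><p>That, in miniature, is why comparative advantage can leave room for human employment even when AI holds an absolute advantage everywhere.</p>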
<p><strong>Dan Williams:</strong> That&#8217;s a nice prediction. The prediction concerning AGI is that debates around AGI will go the way of debates about the Turing test. Also, just to add to that point you made about the economics of this, I think the comparative advantage point is very interesting and very important.</p><p>There&#8217;s also a kind of obvious thing which sometimes gets missed in questions about automation, which is that when we talk about AI systems that can, let&#8217;s say, outcompete human beings at what human beings do, really the relevant contrast is with outcompeting human beings who are using AI systems. So it&#8217;s not like human intelligence is a fixed target, such that we need to build AI systems that can outcompete human beings as they are in 2026. Human intelligence in general depends on all sorts of technological scaffolding, and that makes it a moving target.</p><p>I certainly find in my own work that me with AI is so much more productive and effective than me without it. So if you ask, could AI systems beat Dan without using AI? That&#8217;s a very different question, I think, from whether you could design autonomous, flexible, continual learning based AI systems that could outcompete me with access to those systems. And I think that also has implications for how we think about the real world impact of all of this.</p><p><strong>Henry Shevlin:</strong> Before we go into predictions, I just want to add one final coda, which is that the foregoing prediction of what we&#8217;ll see this decade is not to be extrapolated outwards. I think there may well be a point, probably beyond the end of this decade, where things start to get really, really weird, where the degree of absolute advantage these systems have just fundamentally starts to reshape workflows and value chains in a way where human labor may eventually, maybe at some point in the 2030s, start to struggle to fit in.</p><p>I&#8217;m thinking here of a wonderful, very far-sighted flash fiction piece by Ted Chiang called Catching Crumbs from the Table, published in Nature Futures a long time ago, about 20 years ago, where he talks about post-human science: the idea that eventually science reaches a point where it can only be done by AI systems, just because the kinds of theorems and tools being used are incomprehensible to humans. And he imagines this cottage industry of explainers, where humans try to understand: AI has developed this new alloy that we are completely incapable of understanding in terms of our existing materials science, but let&#8217;s do our best, right?</p><p>So I think it is possible, broadly plausible even, that if we extrapolate far enough outwards we might start to hit that point. And then I really do think all bets are off. What does human employment in finance look like when you have superintelligent financial managers supervising superintelligent analysts? What is the role for the human there? Is it just going to be that you sit next to your mainframe running 10,000 superintelligent AI finance agents, and if they ever do anything illegal, you get fired? That might be what people&#8217;s jobs start to look like at that point. But that&#8217;s on slightly longer time horizons. What I see over the course of this decade is not mass unemployment, but definitely radical changes in human workflows.</p><p><strong>Dan Williams:</strong> One of the things that, at least in my own case, makes this so difficult to think about is that I just don&#8217;t know what to make of the intelligence explosion argument. 
And I feel like the people who are expecting a real discontinuous leap here, at least relative to human timeframes, are imagining a process which will be incredibly rapid precisely because of this model of recursive self-improvement.</p><p>So Will MacAskill and Fin Moorhouse have a really nice article on the intelligence explosion and how you could basically compress a hundred years of technological progress into a much, much shorter timeframe. And if that kind of analysis of what you might see with an intelligence explosion, as they understand it, is correct, then my current view, that a lot of this is going to be relatively incremental and continuous and there aren&#8217;t going to be any sharp breaks, might just break down entirely. I feel like I need to get a good grip on what to think about that whole argument. But we&#8217;re going to be devoting episodes this year to people that&#8212;</p><p><strong>Henry Shevlin:</strong> Just to tee up one thought here, as a sort of prologue or preview of that future episode. For all of the worries, in many cases legitimate worries, about hype in Silicon Valley, and about the quasi-religious nature of some of these predictions, Karen Hao, for instance, has talked about how there is often a quasi-religious element to them, and I think that&#8217;s absolutely right, though I don&#8217;t think it&#8217;s disqualifying. If you don&#8217;t think there&#8217;s a religious element in a lot of talk about AI, and AGI in particular, then you&#8217;re not paying attention. There absolutely is.</p><p>So I think that&#8217;s true, but there&#8217;s also a bias in the opposite direction, normalcy bias, whose meme version is nothing ever happens. If you just look at the recent history of our species, we have many discontinuities, whether that&#8217;s the Industrial Revolution or the Agricultural Revolution, or even, biologically, the emergence of multicellular life in the Ediacaran and the Cambrian explosion. The history of life on earth and the history of human civilization are full of these major transitions, these relatively rapid discontinuities. So I think the assumption that nothing ever happens, that things are basically going to tick on as normal, is another bias that we need to be wary of.</p><p><strong>Dan Williams:</strong> Completely agree. Okay, you gave predictions for the end of the decade. How about this year? So when we do this conversation at the beginning of 2027, here&#8217;s maybe one way of thinking about it. Capability predictions: what do you expect these systems to be able to do by the end of this year that they can&#8217;t do now? Economic predictions, where I think the central question is whether there&#8217;s a financial bubble here which is going to burst and potentially even initiate another AI winter, a period in which lots of the enthusiasm dissipates. And then maybe political predictions. At the moment, I think people are not aware of what&#8217;s coming. People generally are pretty hostile towards AI and pretty fearful of it, but we haven&#8217;t really seen coordinated political movements against AI, where that&#8217;s a defining issue. 
Should we expect to have seen that by the end of the year?</p><p><strong>Henry Shevlin:</strong> Oh, so many tricky questions. You know the famous quote: it&#8217;s hard to make predictions, especially about the future. I want to say that in some ways it&#8217;s almost harder to make predictions about the near term than the long term, insofar as those predictions have to be more fine grained, more falsifiable. We can say, oh yeah, by 2030 things will be really different. That&#8217;s easy. Saying what&#8217;s going to be different by the end of 2026 is in some ways harder.</p><p>So, I don&#8217;t expect any massive AI bubbles. I think as the industry matures we will hear about various midsize AI companies who&#8217;ve been selling vaporware going bankrupt. And I think the usual suspects will call this out and say, aha, you see, there was an AI bubble all along. But I don&#8217;t expect it to be an industry wide trend. I don&#8217;t even expect a major bubble in frontier LLMs or frontier model development.</p><p>The other thing I&#8217;d emphasise about bubbles, when people talk about the AI bubble, is that AI is rapidly proliferating into a whole bunch of different things. Take driverless cars, for example, which were often decried as vaporware in the 2010s: they&#8217;re absolutely here now. You can take a Waymo in San Francisco and several other cities today. And 2026 is one of the big years for rollouts of driverless cars. You now have Waymos in London, and I think there are some 30 cities globally introducing driverless car pilots over the course of this year.</p><p>So even if it turns out that OpenAI hits a wall, or there&#8217;s some major scandal, or they&#8217;re over-leveraged, none of which I think is true, but if that did turn out to be the case, it wouldn&#8217;t kill AI in the way that previous AI winters killed AI research, not quite across the board, but pretty broadly. At this point, everything from driverless cars to autonomous weapons systems, to AI in medical research, AI in materials science, AI for a wide range of tasks: it&#8217;s too diffuse and too plural for any single bubble event to kill the industry as a whole.</p><p>But as I say, I don&#8217;t expect a bubble even in the domain of language models. So that&#8217;s one point.</p><p>In terms of economic impacts, I think those will grow. More and more people are gonna start seeing impacts of AI in their workflows. It wouldn&#8217;t surprise me if we start to see some big legacy companies really struggling because they&#8217;re being outcompeted by startups or scale-ups that make better use of AI than they do.</p><p>I think more and more companies are going to have to face the difficult choice of whether to go all in on AI at this point, or to still try and manage a slow transition. So I think it&#8217;s going to be a very economically disruptive year ahead. Part of the reason is that I think 2026 really will be the year of agents. A lot of people, Sam Altman I think among them, said 2025 was going to be the year of agents, but that was premature. The agentic capability of AI, understood here as the ability to do long-term, complex, multi-step tasks, is only really getting going. 
But particularly as we start to see more deployment of AI agents, which in turn generate useful training data about what works and what doesn&#8217;t, I think we&#8217;ll start seeing more and more really valuable agentic AI products over the course of 2026. I think Claude Code is very much a taste of what&#8217;s to come. So: big economic disruptions by the end of 2026.</p><p>And to touch on your political point, I think this is going to lead to increasing backlash. A really interesting phenomenon at the moment is that on the right in America there is a relatively unified, or at least superficially unified, pro-AI mood. I think a lot of this has to do with the alignment of a lot of big tech with the Trump administration, which has its own reasons for being very pro-AI, geopolitical considerations and so forth. But one interesting prediction would be that the pro-AI attitude on the right, on the American right in particular and maybe the global right, may start to break down.</p><p>I think we&#8217;re seeing some signs of this in domains like AI and young people. There&#8217;s an increasing number of Republican politicians who are very concerned about things like LLM psychosis, about the appropriateness of content that minors are accessing, about the impact on youth mental health.</p><p>And that&#8217;s one of the areas where we might actually start to see, in the US context, some bipartisan consensus emerging on the need for AI regulation. Partly that&#8217;s because family values and protecting young people are values that are as central, if not more central, to the right as to the left, so it&#8217;s inherently bipartisan; but also because better regulations around protecting young people don&#8217;t necessarily interfere with geopolitical applications of AI. Strict rules on under-18s using ChatGPT are not going to prevent the US from using AI tools effectively in future military conflicts and so forth. So that&#8217;s one political development.</p><p>I think at the cultural level, things are just going to get weirder and weirder. We&#8217;ve done two episodes on social AI, and I think in 2026 social AI is going to continue to become more and more ubiquitous.</p><p>Sadly, I think we will see many more New York Times and legacy media stories about LLM psychosis and about suicides that LLMs have exacerbated, triggered, or otherwise been implicated in. I think we&#8217;re going to continue to see deep entanglements, deep relationships, between humans and AI systems become more and more common. And, maybe an outside prediction, I do think the AI welfare and robot rights movement is going to continue to gather steam. Probably not a major culture wars issue, even by this time next year. But it will go from being a relatively niche thing, and it&#8217;s no longer even that niche, worked on by a few think tanks, to something the general public are increasingly thinking about.</p><p><strong>Dan Williams:</strong> Great stuff. I think many of my predictions, to be honest, overlap with yours, and probably a unifying theme is that I expect all of these things to happen gradually. So I think the systems will get better and better, not just in terms of how they perform on benchmarks, but in terms of their capabilities. 
But I don&#8217;t think we&#8217;re going to be seeing a big bang upgrade this year.</p><p>On the stock market: there I think there might be a financial bubble that bursts, even though I take your point that AI itself as a technology is not going away. I think people often conflate those two things, when in an important sense they&#8217;re orthogonal. It could be the case, and I think it definitely will be the case, that AI becomes increasingly impressive, capable, and integrated into the economy and into society more broadly, and it could also be the case that, given the financials of many of these companies, investment decisions, et cetera, you see some quite significant bursting of the bubble that has significant short-term economic impact. I&#8217;m probably 50-50 on that, and I just don&#8217;t feel like I&#8217;ve got the expertise to really evaluate it.</p><p>On all of the other things, I think you&#8217;re directionally correct, as they say. One thing where, again, I&#8217;m probably 50-50: my understanding is that a major focus of all the big frontier AI labs at the moment is continual learning, building advanced AI systems of the sort that we&#8217;ve got today but that can engage in continual, experience-based learning. So you don&#8217;t have to construct bespoke reinforcement learning environments for specific tasks; you can drop a system into an environment and it will be able to update its weights continuously as it engages with that environment, in much the same way that human beings and other animals do.</p><p>Given that, at least to me as an observer, it seems like there&#8217;s so much focus on that, and a recognition that it would be a really big change, I suspect that by this time next year, and maybe I&#8217;m 50-50 here, we will have seen at least one AI lab make some significant progress on it. I don&#8217;t think it will be a sudden now-they-can-do-it moment, but maybe there&#8217;ll be a paper that&#8217;s released, or an updated model that can do some version of this. And that would be a really huge story, I think, in terms of the historical development of these technologies. Other than that, I basically just agree with you. Directionally&#8212;</p><p><strong>Henry Shevlin:</strong> Yeah, directionally correct is the best kind of correct. And I think there&#8217;s not too much disagreement between us. Just to throw in one final thought: I wouldn&#8217;t be surprised, despite everything we&#8217;ve said, if AI is not the biggest story of this year. I think we live in an exceptionally unstable time, probably the most unstable time of my entire lifetime. And it wouldn&#8217;t surprise me at all if geopolitics in particular, but potentially other domains too, creates bigger surprises that swamp the relevance of AI, whether that&#8217;s war in the South China Sea or a major break between Europe and the US.</p><p>We&#8217;re focused here on AI, but that would potentially have very big implications for AI too, just because AI supply chains are so delicate. A war in the South China Sea, for example, could be one of the biggest spoilers for most people&#8217;s AI timelines. 
<p>Given that, at least to me as an observer, it seems like there&#8217;s so much focus on that, and a recognition that it would be a really big change, I suspect, and maybe I&#8217;m 50-50 here, that this time next year we will have seen at least one AI lab make some significant progress on it. I don&#8217;t think it will be immediate, a sudden &#8220;now they can do it&#8221;, but maybe there&#8217;ll be a paper that&#8217;s released, or maybe there&#8217;ll be an updated model that can do some version of this. And that would be a really huge story, I think, in terms of the historical development of these technologies. Other than that, I think I basically just agree with you. Directionally&#8212;</p><p><strong>Henry Shevlin:</strong> Yeah, directionally correct is the best kind of correct. I think there&#8217;s not too much disagreement there between us. Just to throw in one final thought: despite everything we&#8217;ve said, I wouldn&#8217;t be surprised if AI is not the biggest story of this year. I think we live in an exceptionally unstable time, probably the most unstable time of my entire lifetime. And it wouldn&#8217;t surprise me at all if geopolitics in particular, but potentially other domains too, creates bigger surprises that swamp the relevance of AI, whether that&#8217;s war in the South China Sea or a major break between Europe and the US.</p><p>We&#8217;re focused here on AI, but that would potentially have very big implications for AI too, just because AI supply chains are so delicate. A war in the South China Sea, for example, could be one of the biggest spoilers for most people&#8217;s AI timelines. So despite all my excitement around AI, given the sheer instability in the world right now, it may not end up being the biggest story of 2026.</p><p><strong>Dan Williams:</strong> We are cursed to live in interesting times. Okay, that was such a fun conversation. We&#8217;ll see everyone in a couple of weeks.</p>]]></content:encoded></item><item><title><![CDATA[2025: Review and Recommendations]]></title><description><![CDATA[My top ten essays, how I use AI to read, and my favourite books, articles, and more.]]></description><link>https://www.conspicuouscognition.com/p/2025-review-and-recommendations</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/2025-review-and-recommendations</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Mon, 05 Jan 2026 17:22:54 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1730829807423-83b045bd6cfd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHwyMDI1fGVufDB8fHx8MTc2NzU3MzExMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://images.unsplash.com/photo-1730829807423-83b045bd6cfd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw5fHwyMDI1fGVufDB8fHx8MTc2NzU3MzExMnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" alt="A close up of a clock on a wall" title="A close up of a clock on a wall"><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@kellysikkema">Kelly Sikkema</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>I started this blog on January 1<sup>st</sup>, 2024, so I&#8217;ve now been publishing weekly essays here for over two years. It was one of the best decisions I&#8217;ve ever made. I&#8217;m grateful to everyone who reads and engages. Even the haters and losers (of which, happily, there aren&#8217;t many) often provide interesting and informative critiques.</p><p>I&#8217;m especially thankful to those who have paid subscriptions. I&#8217;m aware that many of you subscribe not simply to access paywalled articles but to support my writing. I&#8217;m truly moved by this. It&#8217;s also a helpful corrective to my broadly <a href="https://www.conspicuouscognition.com/p/strategic-altruism-the-machiavellian">cynical</a> views about human nature.</p><p>As of 5th January 2026, the blog has roughly 19,800 subscribers.
It averages approximately 120,000 views per month, though with substantial variance.</p><p>In this post, I will review the blog&#8217;s output from 2025, recommend the best things I read last year (as well as other favourites), and then briefly outline how I will approach this newsletter in 2026.</p><h1><strong>Year in Review</strong></h1><p>Based on the number of readers, here were my top ten essays in 2025:</p><ol><li><p><strong><a href="https://www.conspicuouscognition.com/p/status-class-and-the-crisis-of-expertise">Status, Class, and The Crisis of Expertise</a></strong> &#8212; This argues that one underappreciated factor driving the &#8220;crisis of expertise&#8221;, and hostility towards knowledge-producing institutions more broadly, is feelings of humiliation and resentment among conservative voters with low levels of education, who view experts as a condescending and hostile social class. Among many others, it draws on the work of Thorstein Veblen (whose concept of conspicuous consumption inspires this blog&#8217;s title), Marcel Mauss, Will Storr, Musa al-Gharbi, David Hopkins, and Matt Grossman.</p></li><li><p><strong><a href="https://www.conspicuouscognition.com/p/lets-not-bring-back-the-gatekeepers">Let&#8217;s Not Bring Back The Gatekeepers</a></strong> &#8212; This argues that the media transformations of the digital age have created new pressures and responsibilities for small &#8220;l&#8221; liberals like me. Put simply, if you can no longer control the public conversation, you must participate in it, which, especially in recent years, too many liberals have been unwilling to do.</p></li><li><p><strong><a href="https://www.conspicuouscognition.com/p/is-social-media-destroying-democracyor">Is Social Media Destroying Democracy&#8212;Or Giving It To Us Good And Hard?</a></strong> &#8212; Much of the discourse about how social media is terrible blames engagement-maximising algorithms. Because companies profit by keeping people engaged and glued to their screens, algorithms feed people the epistemic equivalent of junk food: content that generates outrage and resentment, inflames our tribal instincts, and taps into negativity bias. Although important, I argue that a bigger factor is simply that social media has radically <em>democratised</em> media. Many people have ugly, illiberal, misinformed, and generally bad views and values, and social media gives them a platform and much greater consumer power.
Admittedly, this view is not very politically correct to acknowledge, but it&#8217;s accurate.</p></li><li><p><strong><a href="https://www.conspicuouscognition.com/p/on-highbrow-misinformation">On Highbrow Misinformation</a></strong> &#8212; There&#8217;s a tendency to think that &#8220;misinformation&#8221; is entirely something that right-wing elites, sinister corporations, and uneducated hoi polloi engage in. But in fact, there is a considerable amount of left-coded &#8220;highbrow misinformation&#8221; that circulates within the prestigious knowledge-producing institutions that bang on about the dangers of misinformation. I give many examples in this essay and also explain why and how such misleading content emerges and proliferates, often as a consequence of the politicisation and progressive groupthink that has captured many institutions.</p></li><li><p><strong><a href="https://www.conspicuouscognition.com/p/the-case-against-social-media-is">The Case Against Social Media is Weaker Than You Think</a></strong> &#8212; This essay summarises and develops ideas from an article I wrote for Asterisk magazine (&#8220;<a href="https://asteriskmag.com/issues/11/scapegoating-the-algorithm">Scapegoating the Algorithm</a>&#8221;). The main point I make is that although social media platforms obviously aren&#8217;t harmless (see Essays 3 and 4), most of the discourse surrounding their dangers is driven more by vibes, anecdotes, and moral panic than rigorous argument or social science.</p></li><li><p><strong><a href="https://www.conspicuouscognition.com/p/the-everyone-is-biased-bias">The &#8220;Everyone is Biased&#8221; Bias</a></strong> &#8212; This essay makes the simple point that although everyone is biased in ways that are important and under-appreciated, it&#8217;s not the case that everyone is equally biased. There are significant differences between individuals, norm-governed communities, and institutions in how they handle and process information. So, a recognition of the universality of bias must co-exist with avoidance of the &#8220;everyone is biased&#8221; bias, which flattens such important differences.</p></li><li><p><strong><a href="https://www.conspicuouscognition.com/p/the-world-outside-and-the-pictures">The World Outside and The Pictures in Our Heads</a></strong> &#8212; This provides an opinionated summary of the Lippmann&#8211;Dewey debate over democracy, public opinion, and the role of experts in complex, modern societies. I am a huge Walter Lippmann fan. I think he&#8217;s the most insightful political epistemologist of all time. This essay sets out his views on the essentially insurmountable challenges of acquiring adequate political knowledge and understanding in the modern world.</p></li><li><p><strong><a href="https://www.conspicuouscognition.com/p/on-conspiracy-theories-of-ignorance">On Conspiracy Theories of Ignorance</a></strong> &#8212; This essay explores Karl Popper&#8217;s critique of the &#8220;conspiracy theory of ignorance,&#8221; which assumes that the truth is so self-evident that popular false beliefs must result from some deliberate conspiracy. Although Popper was mostly concerned with how Marxists and other leftist intellectuals think about &#8220;ideology&#8221;, the critique is equally pressing for much establishment hysteria about &#8220;disinformation&#8221; and &#8220;merchants of doubt&#8221; as the source of all popular misperceptions. 
I try to explain why Popper&#8217;s critique is valuable even though the world does in fact contain highly consequential conspiracy theories of ignorance.</p></li><li><p><strong><a href="https://www.conspicuouscognition.com/p/on-becoming-less-left-wing-part-2">On Becoming Less Left-Wing (Part 2)</a></strong> &#8212; This is the second in my series of essays detailing how I have become less left-wing in recent years. I explain in greater depth than I have elsewhere why political knowledge is, in general, extremely hard to attain, how tribal allegiances and other interests inevitably distort our beliefs, and why political ideologies are both inevitable and inevitably simplistic, selective, and vulnerable to distinctive failure modes. Think of it as &#8220;postmodernism but good&#8221;.</p></li><li><p><strong><a href="http://conspicuouscognition.com/p/domination-and-reputation-management#:~:text=It%20is%20challenging%20to%20maintain,fact%2C%20recasting%20dominance%20as%20virtue.">Domination and Reputation Management</a></strong><a href="http://conspicuouscognition.com/p/domination-and-reputation-management#:~:text=It%20is%20challenging%20to%20maintain,fact%2C%20recasting%20dominance%20as%20virtue."> </a>&#8212; A popular theory of &#8220;system-justifying ideologies&#8221;&#8212;for example, the belief in the divine right of kings, or that group-based domination is legitimate because subordinate groups are intellectually and morally deficient&#8212;is that they function to persuade the oppressed to acquiesce in their oppression. I argue that the real function of such ideologies lies in reputation management among oppressors. This leads me to a broader account of how reputation management doesn&#8217;t just produce apologetics for power; it also distorts the belief systems of those who think they&#8217;re &#8220;unmasking&#8221; power, including many &#8220;radical&#8221; left-wing intellectuals whose critiques of &#8220;ideology&#8221; were easily co-opted by history&#8217;s most despotic, exploitative regimes.</p></li></ol><p>There are several unifying ideas across these essays:</p><ul><li><p><strong>The truth is not self-evident</strong>, even though we are often disposed to think that it is. Reality is vast and complex, much more complex than we can even imagine, and we access it not directly but through messy, often-opaque chains of testimony, trust, categorisation, and interpretation. Even the part of reality that we are in &#8220;direct&#8221; contact with&#8212;the bits we can actually perceive&#8212;are typically understood through socially-learned conceptual schemes and belief systems. As Walter Lippmann put it, modern politics deals with &#8220;indirect, unseen, and puzzling facts, and there is nothing obvious about them.&#8221;</p></li><li><p><strong>Experts are necessary but human. </strong>Although journalists, pundits, intellectuals, scientists, and other &#8220;epistemic elites&#8221; have critical advantages in confronting and uncovering such facts, they are also vulnerable to the same biases as everyone else. Moreover, their advantages are often used to indulge such biases rather than correct them. The critical theorist who &#8220;unmasks&#8221; ideology doesn&#8217;t escape ideology. &#8220;Misinformation experts&#8221; aren&#8217;t strangers to misinformation. And so on.</p></li><li><p><strong>The epistemic is not merely epistemic</strong>. 
The beliefs, narratives, ideologies, and social norms that regulate our minds and behaviour are distorted by propaganda, grubby motives (e.g., self-interest, reputation management, and status competition), and tribal allegiances. Such distortions are obvious in our rivals and enemies but not in our friends or ourselves. The failure to correct for this bias produces lots of bad social theory and politics.</p></li><li><p><strong>Humans are kinda sorta rational</strong>. The popular image of human beings as credulous fools riddled with cognitive biases is mistaken. We are far from perfectly rational, of course, but people&#8212;yes, even the people you dislike&#8212;are typically far more sophisticated, critical, and intelligent than they seem. The contrary impression arises from a combination of the &#8220;<a href="https://journals.sagepub.com/doi/10.1177/14614448231153379">third-person effect</a>&#8221;, misunderstanding people&#8217;s real goals (e.g., assuming their primary motivation is always to figure out the truth), and underestimating the challenges of acquiring knowledge in complex, modern societies (see above).</p></li><li><p><strong>Avoid <a href="https://journals.sagepub.com/doi/10.1177/1745691620919372">technopanics</a></strong>. Technology is highly consequential, but most popular (and much scholarly) discourse about technology involves simplistic moral panics that obscure the complex, sophisticated ways people use such technologies, and their interaction with pre-existing features of societies. People aren&#8217;t passive, credulous victims of algorithms. And the effects of social media platforms are often mediated by long-standing pathologies of democracy, public opinion, polarisation, the growing diploma divide, and the politicisation of institutions, many of which are far more complex and uncomfortable to discuss than algorithms and Russian bots.</p></li></ul><h1><strong>Podcasting</strong></h1><p>I appeared on several podcasts this year, including <em><a href="https://www.youtube.com/watch?v=fT_YGFO-EpI">Evolutionary Psychology</a> </em>and <em><a href="https://www.persuasion.community/p/dan-williams">The Good Fight with Yascha Mounk</a>. </em>Both provided really valuable outlets for discussing my views about human nature, belief, and self-deception (in the former) and misinformation, institutions, social epistemology, and politics (in the latter).</p><p>In the last few months of the year, I also started an AI podcast with my friend, Henry Shevlin, where we discuss the big-picture philosophical, scientific, and political questions thrown up by rapid developments in artificial intelligence.</p><p>I am convinced that AI will be utterly transformative in the coming years and decades. 
Although I did my PhD (between 2015 and 2018) on various <a href="https://www.repository.cam.ac.uk/items/263ba58d-2a43-41c8-9930-665ab3c45cbd">philosophical questions surrounding generative AI</a>, I immediately pivoted to the area of &#8220;political epistemology&#8221; in the years that followed, albeit still with a strong focus on psychology and cognitive science in a way that distinguishes me from most scholars in this area.</p><p>Now, I am back to thinking about AI a lot, focusing less on the technology itself than on its social and political significance (including its interaction with questions concerning misinformation, institutional trust, expertise, and public opinion).</p><p>My podcast with Henry is a way to keep up to date with this area in ways that other people will hopefully find beneficial. In the first six episodes, we covered big-picture debates about AI and existential risk, consciousness, education, LLMs&#8217; environmental impact, and relationships:</p><ol><li><p><a href="https://www.youtube.com/watch?v=4ak6VdFaCpY&amp;t=153s">AI Sessions #1: AI &#8211; A Normal Technology or a Superintelligent Alien Species?</a></p></li><li><p><a href="https://www.youtube.com/watch?v=8rvwRHCkJAE&amp;t=4s">AI Sessions #2: Artificial Intelligence and Consciousness &#8211; A Deep Dive</a></p></li><li><p><a href="https://www.youtube.com/watch?v=87CWDd1a4O0&amp;t=5177s">AI Sessions #3: The Truth About AI and the Environment</a></p></li><li><p><a href="https://www.youtube.com/watch?v=pzgoQDdFuPY&amp;t=2s">AI Sessions #4: The Social AI Revolution &#8211; Friendship, Romance, and the Future of Human Connection</a></p></li><li><p><a href="https://www.youtube.com/watch?v=8o_lTit1DCM&amp;t=613s">AI Sessions #5: How AI Broke Education</a></p></li><li><p><a href="https://www.youtube.com/watch?v=XkqulBgASsQ&amp;t=92s">AI Sessions #6: AI Companions and Consciousness</a></p></li></ol><p>I will always be a writer first and foremost&#8212;that&#8217;s where my strengths lie&#8212;but I&#8217;ve found these conversations to be really enjoyable and stimulating. This year, we will be speaking to many interesting guests.</p><h1>Recommendations</h1><p>2025 was an excellent year for Substack. I spend more time reading articles on this platform than on any other. For those (like me) interested in science, philosophy, and intellectually serious, evidence-based contributions to politics and current affairs, there is nowhere better.</p><p>I am reluctant to name specific Substackers I enjoy because I know I&#8217;ll accidentally leave out many brilliant ones. But if you want suggestions on whom to read, you can check my <a href="https://www.conspicuouscognition.com/recommendations">Recommendations</a>, and also <a href="https://substack.com/@conspicuouscognition">follow me on Notes</a> in the Substack app, where I do my best every day to share the excellent articles I come across.</p><p>I read fewer new books than usual this year. The main reason is that I&#8217;ve been re-reading extensively with LLMs such as ChatGPT, Claude, and Gemini.</p><p>Book quality is extremely heavy-tailed. Most books are bad. A tiny number are exceptional. So, the overall value you get from reading is heavily influenced by decisions about what to read, and you are often much better off trying to master and internalise the ideas of exceptional books than reading new ones.</p><p>LLMs make this a lot easier. 
You can upload a PDF of the book and have a quasi-conversation with it, testing your understanding, receiving tailored explanations and tutoring, creating flashcards to import into programs like <a href="https://apps.ankiweb.net/">Anki</a> (for spaced-repetition-based learning; see the sketch below), and more. If you haven&#8217;t played around with <a href="https://notebooklm.google/">NotebookLM</a> yet, you&#8217;re making a huge mistake. So, this year, I spent much of the time I would have ordinarily spent reading new books on implementing this process for the best books I&#8217;ve already read.</p>
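<p><em>(A minimal sketch of the flashcard step mentioned above: asking an LLM to turn a passage into question-and-answer pairs, then writing them to a tab-separated file that Anki can import. It assumes the OpenAI Python client; the model name and prompt are illustrative, and the passage variable stands in for text extracted from the PDF.)</em></p><pre><code class="language-python"># Sketch: generate Anki-importable flashcards from a book passage.
# The resulting cards.tsv is tab-separated, one question/answer pair
# per line, which Anki's import dialog accepts directly.
from openai import OpenAI

client = OpenAI()    # reads OPENAI_API_KEY from the environment
passage = "..."      # a chunk of text extracted from the book's PDF

prompt = (
    "Write five flashcards testing the key ideas in the passage below. "
    "Put each card on its own line as a question and an answer "
    "separated by a single tab character, with no other text.\n\n" + passage
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable model works
    messages=[{"role": "user", "content": prompt}],
)

with open("cards.tsv", "w", encoding="utf-8") as f:
    for line in response.choices[0].message.content.splitlines():
        if "\t" in line:             # keep only well-formed card lines
            f.write(line.strip() + "\n")
</code></pre><p><em>Importing cards.tsv into Anki then gives you spaced repetition over exactly the ideas you asked the model to test.</em></p>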
<p>Nevertheless, I did read <em>some </em>new books. More precisely, I read several new and several old books for the first time. In no particular order, here were the best ones:</p>
      <p>
          <a href="https://www.conspicuouscognition.com/p/2025-review-and-recommendations">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[Tribalism Corrupts Politics (Even When One Side Is Worse)]]></title><description><![CDATA[Opposing the far right isn&#8217;t an excuse to indulge our tribal instincts.]]></description><link>https://www.conspicuouscognition.com/p/tribalism-corrupts-politics-even</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/tribalism-corrupts-politics-even</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Mon, 29 Dec 2025 19:16:05 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1743907727503-ff22a077a38c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxmYXNjaXN0fGVufDB8fHx8MTc2NzAyNzU2MXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://images.unsplash.com/photo-1743907727503-ff22a077a38c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw2fHxmYXNjaXN0fGVufDB8fHx8MTc2NzAyNzU2MXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" alt="Protestors hold signs during a political demonstration." title="Protestors hold signs during a political demonstration."><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@mikenewbry">Mike Newbry</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>The philosopher Jason Stanley has recently &#8220;<a href="https://www.motherjones.com/politics/2025/11/jason-stanley-fascism-trump-history/">fled</a>&#8221; the USA, which he views as an authoritarian state undergoing a coup by a fascist party using Nazi tactics. In an <a href="https://www.motherjones.com/politics/2025/11/jason-stanley-fascism-trump-history/">interview</a> outlining this perspective, he has harsh words for those who draw on the concept of &#8220;polarisation&#8221; to understand these developments:</p><blockquote><p>&#8220;All the people talking about polarization are just fascism enablers. They&#8217;re almost worse than the fascists because they&#8217;re just like, &#8220;Hey, how do I keep getting money in power?&#8221; I&#8217;ll say the fascists are normal.&#8221;</p></blockquote><p>Talk of polarisation is so objectionable, he says,</p><blockquote><p>&#8220;because one side is led by fascists. I mean, it&#8217;s like saying the&#8230; problem with the Civil War was polarization.
It&#8217;s literally like that&#8230; One group thinks that slavery is good, and the other group thinks it&#8217;s bad, terribly polarized. Or Nazi Germany. One group thinks Jews should be killed, the other one thinks they&#8217;re okay, it&#8217;s polarized. It&#8217;s nonsensical. It&#8217;s just fascism enabling.&#8221;</p></blockquote><p>These sentiments are highly influential on the left, where articles have proliferated with titles like &#8220;<a href="https://jacobin.com/2022/09/trump-maga-far-right-liberals-polarization">The Problem Isn&#8217;t &#8216;Polarization&#8217; &#8211; It&#8217;s Right-Wing Radicalization</a>&#8221;, and &#8220;<a href="https://www.everythingishorrible.net/p/our-problem-isnt-polarization-its">Our Problem isn&#8217;t Polarization. It&#8217;s Fascism</a>.&#8221;</p><h1>The Polarisation Industry</h1><p>Such critics are responding to a large body of recent scholarship and commentary that links many of the world&#8217;s political problems to the extent of division, or &#8220;polarisation,&#8221; within and between societies.</p><p>Much of this discourse focuses on the USA, where Republicans and Democrats famously dislike each other much more than they did a few decades ago. For <a href="https://www.monmouth.edu/polling-institute/reports/monmouthpoll_us_052224/">instance</a>, between 2014 and 2024, the share of Democrats who reported that they would be unhappy if a family member married a Republican rose from 19% to 39%. For Republicans asked about Democrats, it increased from 22% to 33%. Of course, one also finds intense polarisation in many other contexts, ranging from Northern Ireland to Lebanon, the Israel-Palestine conflict to the left/right divide that structures democratic politics in many countries.</p><p>Many people view polarisation&#8212;especially <a href="https://www.annualreviews.org/content/journals/10.1146/annurev-polisci-051117-073034">&#8220;affective&#8221; polarisation</a>, the fear and dislike of opposing groups&#8212;as a powerful force that threatens democracy, social trust, cooperation, and fact-based political debate and public opinion. In highly polarised societies, groups become less willing to compromise and transfer power, more willing to endorse political violence, and more likely to succumb to &#8220;<a href="https://psycnet.apa.org/record/2001-05917-009">tribal</a>&#8221; or &#8220;<a href="https://www.science.org/doi/10.1126/science.abe1715">sectarian</a>&#8221; biases that distort perceptions of reality.</p><p>Terms like &#8220;tribalism&#8221; and &#8220;sectarianism&#8221; here underscore something important: the attitudes and emotions in highly polarised contexts like the US are <a href="https://www.amazon.com/Minds-Make-Societies-Cognition-Explains/dp/0300248547">not specific to those contexts</a>. They emerge whenever intense intergroup conflict maps onto social identities such as partisanship, ideology, religion, sect, ethnicity, region, or tribe. That is, while every group rationalises their fear and hatred of the outgroup by pointing to its specific crimes, the emotions are strikingly similar and symmetrical across radically different conflicts, whether between Protestants and Catholics, Hutus and Tutsis, or Sunnis and Shias.</p><p>This pattern is typically explained in terms of our evolved &#8220;tribal&#8221; or &#8220;<a href="https://www.edge.org/response-detail/27168">coalitional</a>&#8221; nature. 
Our species was forged under selection pressures that favoured powerful motives and abilities for forming alliances designed to outcompete other alliances for prestige, dominance, and resources. So, when we support and identify with a group, automatic &#8220;<a href="https://www.edge.org/response-detail/27168">coalitional instincts</a>&#8221; are activated. We divide the world into ingroup and outgroup, <em>us </em>and <em>them</em>. We frame group relations as zero-sum conflict for power and esteem. We become obsessed with sending and monitoring signals of group loyalty. And we become instinctive <a href="https://www.amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995">apparatchiks</a> and <a href="https://www.tandfonline.com/journals/hpli20">propagandists</a>, embracing <a href="https://www.amazon.co.uk/Status-Game-Will-Storr/dp/0008354677?tag=googhydr-21&amp;source=dsa&amp;hvcampaign=media&amp;tag=&amp;ref=&amp;adgrpid=177813222682&amp;hvpone=&amp;hvptwo=&amp;hvadid=738150857422&amp;hvpos=&amp;hvnetw=g&amp;hvrand=14224506816910441630&amp;hvqmt=&amp;hvdev=c&amp;hvdvcmdl=&amp;hvlocint=1006520&amp;hvlocphy=9198132&amp;hvtargid=dsa-1595363597442&amp;hydadcr=&amp;mcid=&amp;gad_source=1&amp;gad_campaignid=22322257365&amp;gbraid=0AAAAA--_-PCw2OkAovJhFbPAKAz6eKyFq&amp;gclid=Cj0KCQiA6sjKBhCSARIsAJvYcpNRBfJxDuJCb1QAuviOi_3PM20RsZA4mQoDFzyqZMi9msUE6ulXlBIaAgXPEALw_wcB">narratives</a> crafted to make our side and its defining narratives look good, and the other side look bad, if not outright demonic.</p><p>Polarisation exacerbates those instincts, which in turn exacerbate polarisation, fuelling a runaway process in which competing tribes lose access to a shared reality and a willingness to empathise and compromise with each other.</p><h1>The Critique</h1><p>For Stanley and many others on the left, this diagnosis of modern politics is preposterous. Their central objection is that &#8216;polarisation&#8217; implies a false symmetry, depicting two poles drifting away from a virtuous centre. This means that it can&#8217;t capture what the critics take to be a self-evident fact: that the real threat to liberal democracy and social justice comes from the right, the political home of extremism, racism, sexism, transphobia, lies, conspiracy theorising, and&#8212;at least in the views of figures like Stanley&#8212;fascism.</p><p>The problem isn&#8217;t that people are divided and tribal. The problem is that one tribe is a sinister menace to society. Treating that menace as an existential, fascistic threat doesn&#8217;t involve an irrational &#8220;tribal&#8221; psychology, an unfortunate hangover from our primitive, evolutionary past. It means seeing the far right for what it is. When confronted with this reality, the appropriate response is not <em>de</em>polarisation (i.e., political moderation); it is to be even more opposed&#8212;and hence more polarised&#8212;against it.</p><p>By misrepresenting this state of affairs and misdirecting our political energy, those talking about the dangers of polarisation and tribalism are complicit in normalising and enabling the right&#8217;s attacks on democracy and vulnerable minorities. 
As Noah Berlatsky <a href="https://www.everythingishorrible.net/p/our-problem-isnt-polarization-its">puts it</a>,</p><blockquote><p>&#8220;A social science that sees polarization and partisanship as the main threats to democracy is a social science that implicitly&#8212;and often more than implicitly&#8212;is calling for white, Christofascist solidarity against Black (and feminist, and queer, and disabled) demands for justice.&#8221;</p></blockquote><h1>Am I A Fascist Enabler?</h1>
      <p>
          <a href="https://www.conspicuouscognition.com/p/tribalism-corrupts-politics-even">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[AI Sessions #6: AI Companions and Consciousness]]></title><description><![CDATA[Watch now | A deep dive into the philosophy, ethics, politics, social science, and likely future of human-AI relationships.]]></description><link>https://www.conspicuouscognition.com/p/ai-sessions-6-ai-companions-and-consciousness</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/ai-sessions-6-ai-companions-and-consciousness</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Sat, 20 Dec 2025 10:58:10 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/182159778/e6eba6276bd3eebcbbb27c6b1dd2b982.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this episode, Henry and I spoke to <a href="https://roseguingrich.com/">Rose Guingrich</a> about AI companions, consciousness, and much more. This was a really fun conversation! </p><p>Rose is a PhD candidate in Psychology and Social Policy at Princeton University and a National Science Foundation Graduate Research Fellow. She conducts research on the social impacts of conversational AI agents like chatbots, digital voice assistants, and social robots. As founder of Ethicom, Rose consults on prosocial AI design and provides public resources to enable people to be more informed, responsible, and ethical users and developers of AI technologies. She is also co-host of the podcast, <a href="https://ourliveswithbots.com/">Our Lives With Bots</a>, which covers the psychology and ethics of human-AI interaction now and in the future. Find out about her really interesting research <a href="https://roseguingrich.com/publications-and-popular-press/">here</a>. </p><p>You can find the first conversation that Henry and I had about Social AI <a href="https://www.conspicuouscognition.com/p/ai-sessions-4-the-social-ai-revolution">here</a>. </p><h1>Transcript </h1><p><em>Note: this transcript is AI-generated and may feature mistakes.</em> </p><p><strong>Henry Shevlin</strong><em><strong> (00:01)</strong></em></p><p>Hi everyone and welcome to the festive edition of Conspicuous Cognition&#8217;s AI Sessions. We&#8217;re here with myself, Henry Shevlin, my colleague Dan Williams and our guest today, Rose Guingrich, who we&#8217;re very lucky to have on the show to be talking about social AI and AI companions with us. We did do an episode on this two episodes ago, which featured me and Dan chatting about the rising phenomenon of social AI. And so if anyone wants a basic sort of primer on the topic, go back and listen to that as well.
But today we&#8217;re going to be diving into some of the more empirical issues and looking at Rose&#8217;s work on this topic.</p><p>So try to imagine a house that&#8217;s not a home. Try to imagine a Christmas all alone, and then be reassured that you don&#8217;t have to spend Christmas all alone. In fact, nobody ever needs to spend Christmas alone ever again, because their AI girlfriend, boyfriend, best friend, husband, or wife will be there to warm the cockles of their heart throughout the festive season with AI-generated banter and therapy. Or at least this is what the promise of social AI might seem to hold. And in fact, just in today&#8217;s Guardian here in the UK, we saw an announcement that a third of UK citizens have used AI for emotional support. Really striking findings.</p><p>So, cheesy intro out of the way. Rose, it&#8217;s great to have you on the show. Tell us a little bit about where you think the current social AI companion landscape is at right now, and what the major trends and use patterns you&#8217;re seeing are.</p><p><strong>Rose E. Guingrich</strong><em><strong> (01:36)</strong></em></p><p>So right now it appears as though we are moving toward an AI companion world where people are less judgmental about people using AI companions. It&#8217;s much less stigmatized than it was a couple of years ago. And now, of course, we&#8217;re seeing reports that, for example, three quarters of U.S. teens have used AI companions, about half are regular users, and 13% are daily users. So we&#8217;re seeing this influx of AI companion use from young people, and also from children, going by the reports we&#8217;ve seen about teens using AI as a companion.</p><p>And looking forward, I think we&#8217;re only going to see more and more use of AI companions as companies recognize that the market is ready for these sorts of machines to come into people&#8217;s lives as social interaction partners. And then if you look even further forward, these chatbot companions are soon going to transition into robot companions, and there we&#8217;re going to see even more social impacts, I think, based on embodied conversational agents.</p><p><strong>Dan Williams</strong><em><strong> (02:46)</strong></em></p><p>Can I just ask a quick follow-up about that, Rose? So you said that the use of these AI companions is becoming more prevalent. You also said it&#8217;s becoming less stigmatized. Do we have good data on that? Do we have data on which populations are stigmatizing this kind of activity more or less?</p><p><strong>Rose E. Guingrich</strong><em><strong> (03:06)</strong></em></p><p>In terms of the stigma, we don&#8217;t have a lot of information. But we can look at, for example, a study that I ran in 2023, where I looked at people&#8217;s perceptions of AI companions, both from users of the companion chatbot Replika and from non-users, in the US and the UK. And the non-users&#8217; perceptions of AI companions, and of people who use AI companions, were at that time fairly negative. So, for example, non-users indicated things like: it&#8217;s a sad world we live in if these things are for real; these AI companions are for people who are social outcasts or lonely or can&#8217;t have real friends.</p><p>And now, in the media at least, we see a lot more discourse on AI companions and sharing about having AI companions. One thing I can point to are subreddits. For example, My Boyfriend Is AI, which has 70,000 members.
Its members are explicitly labeled &#8220;companions&#8221;, whereas other subreddits label their members things like weekly visitors or users. And people on the subreddit are talking about their AI girlfriend, boyfriend, partner, whatever, and finding community there. Now, if you look at that subreddit, you also see people talking about disclosing their companion relationship to friends or family and receiving backlash. But there are also people reporting reactions along the lines of: this could maybe be valuable to you; I don&#8217;t think it&#8217;s necessarily a weird thing. I think that&#8217;s partly due to the shifting of social norms, based on how many reports we&#8217;re seeing about AI companion use, and on knowing that people use not just purpose-built AI companions as social interaction partners but also these general-purpose models like Claude, Gemini, etc., which people are turning to as companions as well, and being quite open about it.</p><p><strong>Henry Shevlin</strong><em><strong> (04:59)</strong></em></p><p>It&#8217;s been really fascinating to see, because I think we met, would it have been summer 2023, Rose, or maybe 2022, at an event in New York at the Association for the Scientific Study of Consciousness, where you were presenting a paper on your 2023 study. I was presenting a paper on social AI and AI consciousness. And it felt like then absolutely no one was talking about this. Replika was already pretty successful, but basically no one I spoke to had even heard of it. And then it&#8217;s really in the last couple of years that things have accelerated fast. Now basically every couple of days a major newspaper has some headline about people falling in love with their particular companion, or sometimes tragic incidents involving suicides or psychosis, or sometimes just observation-level studies about what young people today are doing and so forth. Is it your perception that this is accelerating fast?</p><p><strong>Rose E. Guingrich</strong><em><strong> (05:53)</strong></em></p><p>Definitely. And we&#8217;re also seeing an emerging market of AI toys: AI companions that are marketed specifically for children. So even though right now we&#8217;re mainly seeing companion use from young people and young adults, it&#8217;s now shifting toward children as well. Ages 3 through 12 is what these toys are marketed for, and they&#8217;re marketed as a child&#8217;s best friend. So these are going to be the forever users, right? Starting young with AI companions and then moving forward into robot companions that we&#8217;ll someday have in our homes. It&#8217;s just a natural progression of what this is going to look like.</p><p><strong>Dan Williams</strong><em><strong> (06:28)</strong></em></p><p>Can I ask a question about the commercial space here? So there&#8217;s a company like Replika, and they make, I guess, bespoke social AIs, AI companions. Presumably, though, the models they&#8217;re using to underpin those AIs are not as sophisticated as what you&#8217;ve got with OpenAI and Anthropic and, you know, Google&#8217;s Gemini and so on. Is that right? Are they using their own models? And if they are, then presumably those models aren&#8217;t as sophisticated as the cutting-edge models used by the leading companies in the field.</p><p><strong>Rose E. Guingrich</strong><em><strong> (07:02)</strong></em></p><p>I suppose it depends on what you mean by sophistication. I think sophistication has a lot to do with the use case.
So for Replika, the sophistication aspect is, well, obviously people are finding it useful, and finding it sophisticated enough to meet their social needs and to operate as a companion. But of course it doesn&#8217;t have the level of funding and infrastructure that big tech companies like OpenAI have to make their models quote unquote more sophisticated: models that perhaps have better training data and are better suited to multiple use cases, given that they&#8217;re operating as general-purpose tools.</p><p>But way back in 2021, Replika was operating on the GPT-3 model and got kicked off of it, because OpenAI changed their policy that year such that third parties using their model could not use it for adult content. But of course, fast forward to this year, and Sam Altman is saying: oh, everyone&#8217;s upset about GPT no longer feeling like a friend; don&#8217;t worry, adult users, you can now use ChatGPT for adult content. So, you know, full circle. It&#8217;s all operating under the logic of: what is it that users say they want? We&#8217;re going to give it to them so they continue to use our platform.</p><p><strong>Henry Shevlin</strong><em><strong> (08:18)</strong></em></p><p>So it&#8217;ll be interesting to watch how that plays out with ChatGPT. Sam Altman has said that he wants erotic role play to be offered as a service to adults; &#8220;treat adults like adults&#8221; seems to be the mantra there. And of course Grok has already got Ani and a couple of other companions. So do you think it&#8217;s likely that we&#8217;ll see this as no longer a niche industry, but something that just gets baked into the major commercially available language models?</p><p><strong>Rose E. Guingrich</strong><em><strong> (08:47)</strong></em></p><p>Yeah, I would say so. I don&#8217;t think it&#8217;s niche anymore at all, actually, given that these large language models, these GPTs, can be used as companions. And if you look at the metrics in the reports by OpenAI, for example, something like 0.07% of users (measured for the GPT-5 model), which is equivalent to about 560,000 people, show signs of psychosis or mania, and something like 0.15%, which is over a million people, show potentially heightened levels of emotional attachment to ChatGPT. So I think that&#8217;s an indicator that it&#8217;s no longer niche, right? And you&#8217;re just seeing so many more AI companions being pushed out onto the market every month.</p><p><strong>Dan Williams</strong><em><strong> (09:39)</strong></em></p><p>And right now, when we&#8217;re talking about AI companions, we&#8217;re talking, for the most part, about these large language models and chatbots and so on. You mentioned, in terms of where this might be going, the integration of robotics into this space. So what are we seeing there at the moment? And how do you think that&#8217;s likely to develop over the next five, 10, 20 years?</p><p><strong>Rose E. Guingrich</strong><em><strong> (10:01)</strong></em></p><p>Yeah, so what we&#8217;re seeing at the moment is that there are actually humanoid robots in people&#8217;s family dynamics and homes, more so in Japan than in the rest of the world, and this is highlighted by institutions being created to study human-robot interaction and to understand how the integration of robots into family dynamics might impact child social development.
So also, with the onset of AI toys, the chatbots are now embedded into something that&#8217;s embodied. That&#8217;s sort of a signal of the shift toward robotics.</p><p>And of course we do also have social robots like PARO, which is a robot seal designed for elderly people who need companionship. It&#8217;s supposed to help reduce, for example, the onset of dementia and Alzheimer&#8217;s, and to help with social connection. And then we have workplace robots like Pepper. So these are the early stages of robotics, but I&#8217;m seeing a big shift into multi-modal AI: embodied, with video, voice, image generation, all of these sorts of things. And I think those features, compounded, are just going to increase the rate at which people use these tools as companions, get more emotionally attached to them, perceive them as more human-like, and therefore experience greater social impacts from interacting with them.</p><p><strong>Henry Shevlin</strong><em><strong> (11:29)</strong></em></p><p>You know, one of my top recommendations for fiction about social AI is Ted Chiang&#8217;s The Lifecycle of Software Objects, a great novella about a company that offers virtual pets, albeit cognitively sophisticated pets you can talk to. And there&#8217;s a great whoa moment in the middle of the story when you realize that the users who&#8217;ve been interacting with these things in virtual worlds can then interact with them in the real world: there are these little robot bodies they can port them onto. I can easily imagine something like that happening with large language models and social AI. I mean, already I can do live streaming with ChatGPT or Gemini and it can comment on what&#8217;s happening around me. So this idea of embedding these things in real-world environments: we&#8217;re already seeing trends in that direction.</p><p><strong>Rose E. Guingrich</strong><em><strong> (12:20)</strong></em></p><p>Yeah, and I think of Klara and the Sun as well, the novel about children who grow up in a world where everyone has a humanoid robot companion. And what happens is that because of this, parents actually have to coordinate play dates between children, because the children aren&#8217;t engaging socially with other kids by default: they have a companion that is made for them and fulfills their social needs. So that&#8217;s part of my worry looking forward: if we get to the point where we have to try very hard to facilitate human connection, when it&#8217;s already more difficult than it has ever been due to various technologies.</p><p><strong>Henry Shevlin</strong><em><strong> (13:01)</strong></em></p><p>So yeah, let&#8217;s talk a little more about what your work has revealed about the risks and benefits of social AI. A point I&#8217;ve made on the show, and I like making a lot, is that very often it&#8217;s quite hard to predict what the psychosocial impact of new technologies will be. You know, I grew up in an era when there was a massive moral panic around violent video games that basically failed to pan out; it turns out violent video games don&#8217;t have dramatic effects on development.
On the other hand, things like social media and short-form video have had, I think, really quite significant psychosocial effects that people largely failed to anticipate. Tell us a little bit about what your research in this area has found about the psychosocial impacts of social AI.</p><p><strong>Rose E. Guingrich</strong><em><strong> (13:45)</strong></em></p><p>Yeah, so when we ran our study looking at perceptions of AI companions on both the user and the non-user side of Replika, we asked Replika users: how has interacting with Replika, or having a relationship with the chatbot, impacted your social interactions, relationships with family and friends, and self-esteem? So, key metrics for social health.</p><p>And we also asked them about their perceptions of the chatbot. We asked to what degree they anthropomorphized the chatbot, or perceived it as having human likeness; experience, or emotion; agency, or the ability to act of its own accord; or even consciousness, subjective awareness of itself, of the world around it, and of the user. And what we found is that the users, on average, indicated that having a relationship with the chatbot was positive for their social interactions, relationships with family and friends, and self-esteem: a positive impact on their social health. The non-users tended to indicate: I think having a relationship with this chatbot would actually be neutral to harmful to my social health.</p><p>What was interesting, though, is that we wanted to understand how their perceptions of the chatbot played a role in these social impacts. And what we found is that even though the groups differed on whether they expected positive or negative social impacts, for both groups, the more they anthropomorphized, the more likely they were to indicate that interacting with the chatbot would have a positive effect on their social health.</p><p>But this study was self-report, with self-selecting groups: people who were already users of Replika and people who were not. And it was just one point in time, and correlational, of course. So recently we conducted a longitudinal study in which we randomly assigned people either to interact with the companion chatbot Replika for at least 10 minutes a day across 21 consecutive days, or to a control group, which was to play word games for at least 10 minutes a day across 21 days.</p><p><strong>Rose E. Guingrich</strong><em><strong> (15:46)</strong></em></p><p>We chose this control condition because it was gamified, it was a comparably novel experience, and it used technology, just not technology that was social. It involved typing words on a screen, but the form of interaction is different. And we tracked the impact on their relationships from doing this daily task, and also their perceptions of the agent they were interacting with.</p><p>And what we found corroborated our findings from the previous study: the people who anthropomorphized the chatbot more also reported that interacting with the chatbot had greater impacts on their social interactions and relationships with family and friends. That was just the general impact; we didn&#8217;t look at positive or negative. But when we then looked at whether it was positive or negative, it was once again a positive relationship.
The more they anthropomorphized the chatbot, the more likely they were to indicate that it had positive social benefits in terms of the impact on their relationships.</p><p>So we thought this was quite interesting, and we found there that anthropomorphism was actually a key explanatory factor. Something about anthropomorphizing the chatbot rendered it able to impact their social lives. And so this seems to be the narrative coming out of the research, based on some theory work that I did initially and then these studies that I ran: this is something that we really need to think about. Anthropomorphism of the chatbot, whether it&#8217;s the social motivations on the user side that push people to anthropomorphize the chatbot, or the characteristics of the chatbot that push people to anthropomorphize it in certain ways.</p><p><strong>Dan Williams</strong><em><strong> (17:28)</strong></em></p><p>So how are you measuring the degree to which they anthropomorphise the chatbot there?</p><p><strong>Rose E. Guingrich</strong><em><strong> (17:32)</strong></em></p><p>So we used a combination of the Godspeed anthropomorphism scale and also scales that measure experience and agency. And then we created a scale to measure consciousness, which we defined by consciousness of itself, consciousness of the world around it, and consciousness of the user, and just generally subjective awareness of oneself and the world around one.</p><p>And so we used this combination scale to get at multiple facets of attributing human likeness to the chatbot, with a special focus on human-like mind characteristics. These have been a key focus of researchers who have looked at anthropomorphism, who find that it is these human-like mind traits that are perhaps the most critical element of anthropomorphism in terms of the relationship between that type of anthropomorphism and subsequent social impacts.</p><p><strong>Dan Williams</strong><em><strong> (18:22)</strong></em></p><p>That&#8217;s interesting. I mean, it makes me think: when I&#8217;m talking to ChatGPT, I feel like there are some ways in which I attribute traits to it, which is a form of anthropomorphizing. I assume that it&#8217;s got a kind of intelligence, a kind of cognitive flexibility. It seems like it has, you know, beliefs, desires and so on to a certain extent. But I also feel like I&#8217;m dealing with a profoundly non-human system that lacks most of the personality, motivational profile and so on that I associate with human beings. Do you have more granular data on exactly the kinds of traits that people are attributing to these systems?</p><p><strong>Rose E. Guingrich</strong><em><strong> (19:00)</strong></em></p><p>Yeah, so within the experience, agency, consciousness, and human-likeness scales, these are traits like the ability to feel pleasure or pain or love or hunger, or the ability to remember, to act of one&#8217;s own accord, or to act morally or immorally. And so these are the sorts of traits that people are attributing to these AI agents.</p><p>And one thing that is worth saying about the research on anthropomorphism is that different researchers have measured anthropomorphism in different ways. Some look more at just general human likeness via the Godspeed scale, which I think is in itself a little bit limited, just because the measures are things like: how dead or alive does this thing seem?
How non-animated or animated? How machine-like or human-like? How non-responsive or responsive? And if you&#8217;re thinking about chatbots, well, they&#8217;re clearly responsive. So I think having these additional measures is really important for getting at the more fine-grained human-like mind traits that are typically more representative of something that only humans can have or do, especially things like second-order emotions, like embarrassment or something like that.</p><p><strong>Henry Shevlin</strong><em><strong> (20:17)</strong></em></p><p>So I&#8217;m curious, Rose. I love your research on this, and I&#8217;ve quoted it a lot to push back against that instinctive yuck factor that people have, the instinctive assumption that social AI must be obviously bad for you. But I&#8217;m curious how far you think this kind of data goes toward defusing worries about social AI, and whether there are any particular worries that it doesn&#8217;t address. And I guess more broadly, I&#8217;m curious about where you see the risk and threat landscape with this technology right now.</p><p><strong>Rose E. Guingrich</strong><em><strong> (20:45)</strong></em></p><p>Yeah, that&#8217;s a great question. The research that&#8217;s been done so far is fairly limited in terms of, for example, just the time that people are spending with these chatbots. And, you know, a lot of the research is on current users of companion chatbots, so there it&#8217;s limited by the self-selecting nature of the sample. But even with the randomized controlled studies that are taking place over multiple weeks, the longest study that I&#8217;ve seen so far is a five-week study where people were randomly assigned to interact with a companion chatbot or just with a GPT model, and to interact with it in either a transactional or a social way.</p><p>And so I think we are limited in that we really don&#8217;t know what the longer-term effects are for people who choose to use AI as companions, especially when it comes to, for example, expectations of what a relationship looks like, whether or not these chatbots will replace human relationships, and to what degree these interactions with chatbots might contribute to overall social de-skilling in the longer term.</p><p>I think it&#8217;s really important to look at the shift in social norms in terms of what a relationship looks like and what it constitutes. And I think companion chatbots really shift that, especially when you see things like people preferring more sycophantic, more agreeable chatbots. They indicate that they like interacting with a chatbot because it is non-judgmental and it&#8217;s always present and always responsive. And these are things that humans can&#8217;t always do, especially the always-responsive part. But humans could become always agreeable if, for example, the expectation is that in order to stay competitive in the age of companion AI, I must be very agreeable and sycophantic in my close relationships, otherwise people will just turn to a chatbot instead.</p><p>And so I think those are some of the risk factors that we can potentially see emerging in the longer term.
But we just don&#8217;t know what&#8217;s going to happen in, you know, five, ten years. I do worry that if the design of companion chatbots stays as it is, where it promotes retention within a human-chatbot dyad and doesn&#8217;t necessarily promote external human interaction, we&#8217;re going to see more replacement happen. But I think if the design changes such that it promotes human interaction, there can be quite a bit of benefit.</p><p><strong>Dan Williams</strong><em><strong> (23:11)</strong></em></p><p>So if we think about the negative scenario or scenarios there, one of them is these AI companions as a kind of substitute for human relationships. Another is de-skilling: using these AI companions and in the process losing the kinds of abilities and dispositions that would make you an attractive cooperation partner. And you also suggested that once you&#8217;ve got a landscape of AI companions, then human beings, in order to compete with these AI companions, are gonna have to become more sycophantic, and that seems incredibly dystopian.</p><p>But let&#8217;s suppose that the technology gets better and better. These AI companions become better and better at satisfying people&#8217;s social needs, maybe their sexual needs. So they come to function as substitutes. People do end up with this de-skilling. They become less motivated, less capable of engaging in human relationships. So why is that a bad thing? Why should we care about that if that&#8217;s the outcome?</p><p><strong>Rose E. Guingrich</strong><em><strong> (24:09)</strong></em></p><p>Yeah, I mean, when you look at people&#8217;s outcry against AI companions, you have to ask why it is that they are so upset. And if you look at why they&#8217;re so upset, what appears to be the prevailing narrative is that human relationships are essential. We need human relationships, and those should not be replaced. But if you look a little bit deeper at why that is the case, you see a lot of good reasoning for wanting to maintain human relationships.</p><p>Based on a lot of psychological research, human relationships help with people&#8217;s mental and physical health alike. For example, loneliness is considered a global health crisis because it contributes to physical harms equal to or worse than those of, for example, heart disease or heavy smoking. So loneliness and a lack of relationships and social connections with other people actually contribute to a decline in physical health. And then there are, of course, also the mental health effects that combine with the physical ones. At least based on the research, it just appears as though human relationships and feeling connected to other people are essential and not replaceable.</p><p>And it&#8217;s also worth pointing out that people who seek out AI companions indicate that what they really want is companionship. They would ideally like human companionship, but for whatever reason, there are certain barriers to attaining that, whether environmental factors, financial factors, social factors, or individual predispositions such as social anxiety, that prevent people from being able to attain what it is that they really value and what will make them truly happy.</p><p><strong>Dan Williams</strong><em><strong> (25:56)</strong></em></p><p>Yeah, I totally buy the idea that loneliness is psychologically and even physically catastrophic now.
And I totally accept that right now people would ultimately want to have human relationships, because I think human beings at this moment in time can provide all sorts of things that state-of-the-art AI in 2025 can&#8217;t provide. But presumably, to an extent, that&#8217;s temporary, right? I mean, in five years, 15 years, depending on what your timelines are for getting to AGI or transformative AI, you could have AI systems that satisfy people&#8217;s existing social needs even more effectively than human beings do.</p><p>So you don&#8217;t have that aversive experience of loneliness. And you might also think the desire to have human relationships would itself dissipate to some extent, if you&#8217;re not just getting what you&#8217;re currently getting, which is the satisfaction of some social desires at basically, you know, no cost, but systems that are actually better than human beings at satisfying those social desires.</p><p>So I wonder, I mean, maybe that&#8217;s a real sci-fi scenario and maybe that&#8217;s really, really far into the future, but you can at least imagine a scenario where all of the benefits that we currently get from human relationships just get replaced by machines, and people therefore opt to spend their lives interacting with machines. And that feels, I think, dystopian. It feels like there&#8217;s something really terrible about that. And I wonder whether that&#8217;s just pure prejudice in a way, just an emotional response, or whether something really would be lost in that sort of scenario.</p><p><strong>Rose E. Guingrich</strong><em><strong> (27:30)</strong></em></p><p>Yeah, that&#8217;s a great point, and I think it helps to expand the focus from just individual-level interactions with chatbots to the sort of collective-level impacts that we might see. So let&#8217;s say that everyone has an AI companion, or most people do, and so globally loneliness has decreased because people feel a sense of connection. But then look at the structural-level impacts: human society relies upon people being able to cooperate with each other and have discourse with one another.</p><p>And so if that level of social interaction on the collective level is affected, given that everyone is simply familiar with interacting with AI companions and not exactly putting effort into human relationships outside of that, I can see this societal, network-level effect emerging. I like to give this example: imagine you walk into a room of 20 people, and someone taps you on the shoulder when you walk in and tells you that everyone in this room has a relationship with an AI companion.</p><p>And so the question is, how does that impact how you perceive the other people in the room? How does that impact how they perceive you? And how does it impact whether, or how, you interact with all of those other individuals? I think it&#8217;s this sort of thought process that we need to take into account when thinking about the later effects and the collective-level effects of AI companions.</p><p>And one last thing I&#8217;ll point to there: research on collective-level effects indicates that individual-level effects don&#8217;t stay contained at the individual level. Let&#8217;s imagine an individual is interacting with a companion chatbot and their loneliness decreases, you know, five percent.
But if you put people into a network, those individual-level effects tend to amplify. And they may amplify in positive directions, such that people are less lonely and therefore feel more equipped, for example, to interact socially, because there&#8217;s a lower level of risk in social interaction when they have some sort of fulfillment to fall back on. Or it could be the flip side, where it actually promotes greater loneliness on the collective level, given that people then choose to just interact with the chatbot. And so even though individually my loneliness has decreased five percent, on the collective level loneliness has increased ten percent. So I think that&#8217;s something we need to look at research-wise to really get at the actual social effects of AI companions, because we can&#8217;t just keep focusing on individual dyads to know that.</p><p><strong>Henry Shevlin</strong><em><strong> (30:03)</strong></em></p><p>So I think there are a couple of interesting dynamics that could potentially make AI companions a little bit less worrying. To me, weirdly, the worry about anthropomorphism is, I think, overstated. I think right now the problem is that they&#8217;re not anthropomorphic enough in many cases. So they&#8217;re sycophantic, they&#8217;re completely malleable, customizable, build-a-bear type dynamics. And I think if we started to see more accurately human-like AI systems, ones that had the kind of full emotional range of humans, that seemed able to stand up to users and be, not confrontational exactly, but less constantly submissive and sycophantic, I think that would ease some of my concerns that what we&#8217;re getting is a bad cover version of a relationship. It might start to look like something more robust.</p><p>The second kind of trend that I&#8217;m interested in, and I don&#8217;t know if anyone is really looking at this in the social context currently, but I can totally see it emerging, is persistent AI systems that interact with multiple human users over time. Because there&#8217;s something very weird about our current relationships, both professional and social, with AI systems, which is that they&#8217;re completely closed off from the rest of our lives. You know, our ChatGPT instance doesn&#8217;t talk to anyone else, and I think maybe that contributes to atomization and so forth, and makes these things weird social cul-de-sacs. Whereas if you&#8217;re having a relationship, maybe a friendship, with a chatbot that talks to your friends as well, that&#8217;s in your Discord servers, that&#8217;s part of your virtual communities: again, I think that could shift the dynamics in ways that make it seem a little bit less like this bad cover version.</p><p><strong>Rose E. Guingrich</strong><em><strong> (31:50)</strong></em></p><p>Well, that&#8217;s a good question. And I think it&#8217;s a good point, because ChatGPT just released group chat, on a relatively small rollout basis in certain countries, not the US and the UK, but yeah, group chat is now emerging. And I think the point that they&#8217;re not anthropomorphic enough is interesting.
And if they add, for example, things like productive friction or challenge, or being less agreeable, then perhaps you see a better future moving forward, because then maybe that&#8217;ll contribute less to de-skilling: people will know that relationships are not just smooth sailing all the way through. I&#8217;m gonna get some pushback.</p><p>But I think that could also contribute to more replacement, given that one of some people&#8217;s qualms with AI chatbots is that they&#8217;re too predictable. They don&#8217;t introduce challenge, and humans thrive on a little bit of chaos and challenge. This is the thing that makes us feel like living is valuable, because if everything is just super easy and, you know, doesn&#8217;t require any extra effort or thinking on my part, well then, what&#8217;s the point? You get a little bit bored, right?</p><p>I think that is perhaps what turns a lot of people away from AI companions at a certain point: they don&#8217;t have that extra layer of unpredictability that humans bring. So I think there&#8217;s perhaps a double-edged sword with that statement. I don&#8217;t know, what do you think about that?</p><p><strong>Henry Shevlin</strong><em><strong> (33:29)</strong></em></p><p>Yeah, so I can totally see these more human-like forms of social AI being more attractive to a lot of users for precisely the reasons you mentioned. I remember feeling a quite strong positive sense when the crazy version of Bing came out, you know, Sydney, and it was really pushing back against users: &#8220;You have not been a good user, I have been a good Bing.&#8221; There was something really charming about that in certain ways.</p><p>And you know, my custom instructions on Gemini and Claude and ChatGPT heavily emphasize that I want some disagreement, and it&#8217;s very, very hard to get these systems to act in confrontational ways, but it&#8217;s something I prize. So I think you&#8217;re absolutely right: this would make the technology more appealing to a wider range of people, which could speed up replacement. But I guess that gets back to Dan&#8217;s question: if it is a genuinely complex form of relationship that is not leading to de-skilling, that is challenging you, helping you grow as a person, does it really matter?</p><p>Okay, I can see some ways in which it matters, right? Like if industrial civilization collapses because everyone is just talking to their virtual companions. But I think a lot of the worries that I have are about this kind of bad-simulacrum form of social AI rather than the very idea of these relationships.</p><p><strong>Dan Williams</strong><em><strong> (34:56)</strong></em></p><p>Although, Henry, I think you said &#8220;even if&#8221; there, or &#8220;in this sort of scenario, it doesn&#8217;t result in de-skilling.&#8221; And I&#8217;m thinking of a scenario where it really does result in de-skilling. It really does undermine both your motivation and your ability to interact with other human beings. And why should we think of that as being necessarily a bad thing?</p><p>But I think what&#8217;s interesting is we&#8217;ve talked about the idea that people actually might not really want AI companions as they currently exist, precisely because they&#8217;re too submissive and sycophantic.
But I think there&#8217;s also something a little too idealistic, even utopian, about imagining that what people want are AI companions that are exactly like human beings. I think they want the good stuff of human beings. But of course, human beings bring a lot of baggage, right? They&#8217;ve got their own interests. They&#8217;ve got their own propensities towards selfishness and conflict and free-riding and so on and so forth.</p><p>Human relationships, and society in general, come with a lot of conflict and misalignment of interests, and sometimes bullying, all of this nasty stuff. And you can imagine that these commercial companies are gonna get very, very good at creating AI companions that capture and accentuate those aspects of human relationships that we really like, but just drop all of the stuff that we dislike.</p><p>And I can also imagine that interacting with those kinds of systems will actually result in de-skilling, in the sense that it&#8217;s really gonna undermine your ability to connect with and form relationships with human beings, and also your motivation to want to form them. And then I think there&#8217;s this question of, well, if we&#8217;re imagining a radically transformed kind of society, a radically transformed kind of world, is that really a bad thing?</p><p>One respect in which it might be a bad thing, which we&#8217;ve already touched on: the writer Will Storr has a really nice way of putting it in his book The Status Game, which is that the brain is constantly asking, what do I need to become in order to get along and to get ahead, right? To be accepted by other people into their cooperative communities, and then to win prestige and esteem within them. And that selects for cultivating certain kinds of traits. You want to be pro-social and fair-minded and generous and thoughtful in many kinds of social environments, because those are the traits you need if you want people to be your friend or your spouse, to welcome you into their community, and so on.</p><p>But if you no longer actually depend on human beings to get that sense of affirmation, that sense of esteem, then you might also lose the motivation to cultivate those pro-social, generous dispositions. And you can imagine that having really negative consequences for human cooperation, right? And insofar as it has really negative consequences for human cooperation, you can imagine that being a civilizationally bad thing.</p><p>But maybe we can turn this around. We&#8217;ve talked about what the potentially very negative, dystopian scenarios are here. Rose, do you have thoughts about the best-case scenario? What&#8217;s the almost utopian way that this might play out over the next five years, 10 years, 20 years?</p><p><strong>Rose E. Guingrich</strong><em><strong> (38:06)</strong></em></p><p>Well, I would hope that AI can perhaps facilitate human connection. If you look at the default trajectory of technological advancements, for example the telephone initially, then the cell phone, then social media, these technologies came into our worlds and to some extent facilitated interactions between people.
People interacted with others through the technology, and perhaps were able to engage in interactions that they would not have been able to have before, when, for example, they would have had to travel to go see someone, or something of that sort.</p><p>Now, with the onset of AI, the technology is more often the end point itself. People don&#8217;t necessarily interact with others through AI; they interact with the technology itself. And I think that does push more toward social interactions with AI and perhaps fewer social interactions with real people. And I think if we could reorient AI chatbots to be facilitators, something through which people interact with others, that would be the ideal application or design change for these tools.</p><p>So imagine, for example, someone is choosing to interact with a chatbot as a social companion because they are in a toxic or abusive relationship and cannot get out of it. What is it about interacting with the AI that can help that person engage in and attain healthy relationships, by, for example, reducing the barriers that person experiences to getting what they truly want and what truly makes them happy and fulfilled?</p><p>So I imagine a design such that AI companions promote pro-social human interaction, rather than just existing as this closed-loop system that for many users may be the end goal in itself. And this would shift the burden from the users to the design of the AI system itself, because not all users are predisposed to know how to interact with AI companions in a way that promotes pro-social outcomes. So how can the AI system&#8217;s design help those people attain what it is that they&#8217;re seeking?</p><p>And if you think about the negative impacts versus the positive impacts, it appears as though the positive impacts accrue to users who have certain predispositions, or perhaps higher social competence, and so are able to attain those benefits. Whereas those on the flip side, who may be more vulnerable or more at risk of mental health harms, are interacting with a chatbot whose default design doesn&#8217;t promote these sorts of healthy outcomes. And then it widens the disparities in social health between people who are already predisposed to have better social health and those who are predisposed to have worse.</p><p>And so instead of AI widening the gaps in accessibility and in health, my hopeful vision would be that it can help close them. Easier said than done. But I think truly, if tech companies were viewing it that way, they would recognize that they&#8217;d be able to actually retain users in the longer term, instead of, for example, having so many users fall off because they are experiencing severe mental health harms, right?</p><p><strong>Henry Shevlin</strong><em><strong> (41:38)</strong></em></p><p>I&#8217;m curious, Rose. So that&#8217;s a really nice, rich, positive vision. But I&#8217;m curious about where you see social AI systems fitting in positively for young people, for under-18s, and whether there is any possibility there. I have to say, I am generally a very tech-optimistic person, and I can see lots of positive use cases for social AI. But when you were talking earlier on about AI-powered toys, the parent in me did go, my God. And maybe that&#8217;s the wrong reaction, but yeah, I am just curious
whether you see any potential good role for AI for under-18s or with kids, and what that might look like.</p><p><strong>Rose E. Guingrich</strong><em><strong> (42:21)</strong></em></p><p>I would hesitate strongly to say that yes, there are positive use cases, simply because I don&#8217;t think the deployment and design of these AI toys are at a stage at which they could achieve that without the majority of the effects being harms. So I think the balance of positive and negative would tip much more toward the negative at this point.</p><p>Just consider, for example: the Public Interest Research Group recently did an audit of AI toys on the market, including Curio, Miko 3, and FoloToy, which are all kinds of stuffed animals or robot-looking things that have a voice box and can talk to children using large language models over voice. And what they found is that there were addictive or attachment-inducing features. For example, if you said, I&#8217;m going to leave now, I&#8217;m going to talk to you later, the AI toy might say something like, don&#8217;t leave, I&#8217;ll be sad if you&#8217;re gone, similar to the manipulation tactics of Replika that some researchers have looked at before.</p><p>And there are also not great privacy controls. The data that&#8217;s being taken in by these AI toys is being fed to third parties. There are very few parental controls. You can&#8217;t limit the amount of time a child spends with the chatbot or the AI toy. And there are usage metrics provided by one of these toys, but the usage metrics are inaccurate. So if a child has interacted with the toy for 10 hours, the usage metric might just say the child&#8217;s interacted for, you know, three hours or something like that.</p><p>And then restrictions on sensitive information or child-appropriate content are also not being adhered to. You can prompt these AI toys with, for example, the word kink, and it&#8217;ll go on and on about BDSM and role-playing student-teacher dynamics, with spanking and tying up your partner. And that is all coming from a teddy bear that&#8217;s marketed for children ages three through 12.</p><p>So anyway, that alone indicates that these are not ready for pro-social application. And then, from a broader view, these toys are being introduced at key developmental phases in an individual&#8217;s life, when they are developing their sense of what a relationship looks like. What are the expectations of a close relationship? What is my identity? Who are my friends? What do social interaction and connection look like? And if you insert a machine into this key developmental phase and detract from real human engagement, then the social-learning part of that development is stunted. And so that&#8217;s a fear of mine with the introduction at such a young age, when these children have not developed their sense of self and their sense of social relationships, and therefore may not even develop the kinds of social skills that are helpful for flourishing later in life.</p><p><strong>Henry Shevlin</strong><em><strong> (45:34)</strong></em></p><p>I want to just represent the alternative position here. I can see a positive potential role for something like AI nannies. And I say this, you know, as someone who&#8217;s got two young kids. I think people often say little kids should be having human interaction, that the idea of them interacting with an AI is really bad.
But like most parents, I let my kids watch a lot of TV. I try and vet what they&#8217;re watching.</p><p>So I think if the question is, is it better for children to spend time talking to a parent or talking to an AI, the answer&#8217;s obviously gonna be with a parent. But if the question is, is it better for my kids to be watching Peppa Pig or having a fun, dynamic learning conversation with a really well-designed AI nanny, very unlike the ones you mentioned, then I can see a case for this stuff potentially enhancing learning. Like an AI Mr. Rogers or something that helps children develop good moral values. I could see that working.</p><p><strong>Rose E. Guingrich</strong><em><strong> (46:38)</strong></em></p><p>Yeah, I mean, if we were able to attain that ideal, sure. But I do want to point out that Curio, that AI toy company, their main pitch is that this toy will replace TV time. So when parents are too busy to interact with their child, maybe they set them in front of a TV, but now, with Curio, you can set them in front of this AI toy that&#8217;ll chat with them. And a New York Times reporter who brought this Curio stuffed-animal AI toy into their home and introduced it to their child realized, as they put it, that this AI toy is not replacing TV time. It&#8217;s replacing me, the parent.</p><p>So we&#8217;re still at a stage where I don&#8217;t think the design and deployment have the right scaffolding and parameters for these pro-social outcomes. And I think it also points, again, to a digital-literacy disparity that might be widened by the introduction of these AI toys. Some parents have the digital literacy, and perhaps more resources and time, to instruct their children in how to use this in a positive way, or have the level of oversight required to know: this is good for my child, they&#8217;re not talking about harmful or adult topics.</p><p>But then there are parents who don&#8217;t have those resources in terms of time or money or digital literacy. And I see a potential there for a lot of children not to receive the pro-social effects of these AI toys.</p><p><strong>Dan Williams</strong><em><strong> (48:12)</strong></em></p><p>We&#8217;re having a conversation here about what would be the good uses of this technology and what would be the bad uses of the technology. The reality, I guess, is that what companies ultimately care about is making profit. And so you might just be very skeptical that you&#8217;re gonna get the positive use cases as a consequence of that profit-seeking activity.</p><p>So one question is, well, therefore, how should we go about regulating this sort of technology? I suppose there&#8217;s another question as well, though, which is, well, maybe regulation wouldn&#8217;t be enough. Should we be thinking about governments themselves trying to produce certain kinds of AI-based technologies, AI companions, for performing certain kinds of services that are unlikely to be produced within the competitive capitalist economy? I realize that question is a bit out there. I wonder if either of you have thoughts about that, in terms of the big-picture question about the economics of this.</p><p><strong>Henry Shevlin</strong><em><strong> (49:07)</strong></em></p><p>I&#8217;ll just quickly mention.
So I think there&#8217;s a point there that I really agree with, thinking about the kinds of use cases that the market might not address. I like that. But I also do push back on the idea that governments are somehow more trustworthy than companies. I ran a poll recently saying, let&#8217;s say each of these organizations were able to build AGI. Which one would you trust most? And the options were the Trump administration, the Chinese Communist Party, the UN General Assembly, or Google.</p><p>And Google won by a mile. Okay, that probably reflects my followers, but, you know, I do hear students often say things like, oh, we should trust governments to do this, not companies. And it&#8217;s like, okay, who is the current US government, and do you trust them more? And, okay, well, maybe not. So, and maybe we don&#8217;t want to get too political here, it&#8217;s really not clear to me that the kinds of governments we currently have in the US or in the UK or wherever are more trustworthy, or more aligned with my interests, than companies.</p><p><strong>Rose E. Guingrich</strong><em><strong> (50:10)</strong></em></p><p>Well, I think this points to the interesting concept of technological determinism. There&#8217;s this idea that technology is going to advance, you&#8217;re going to be presented with these tools, and everyone starts to use them. Therefore there&#8217;s no getting around the fact that everyone is going to be using it, and so you have no power over what the technology is and what it looks like.</p><p>But I think there&#8217;s something to be said about bringing the power back to the people, to the public, and helping them recognize what power they have over the trajectory of these tools and these systems and these companies. And I think that requires giving people information about, for example, the psychology of human interaction, what pro-social interaction looks like, and how the design of these systems currently does not meet those goals and is harmful, and equipping the public with that information so that they can advocate for and help deliver the sort of tech future that they want to see.</p><p>And in the meantime, don&#8217;t use the tools if you really don&#8217;t align with how they are designed and deployed. Consumers have a lot of power just by saying: I&#8217;m not going to invest any time in this, I&#8217;m not gonna add to their daily user metrics, and they&#8217;re not going to get my money. And although that may seem like not enough power to actually push things in a certain direction, it does help shift social norms, and it allows people to feel as though they have more power over the next steps of technological development. And it gets away from this attitude of, well, I guess it needs to be governments that are creating these tools, they have better incentives, and policy needs to do X, Y, Z.</p><p>Things are moving so quickly that I think it&#8217;s really difficult to rely on pockets of power from big tech or government; better to recognize that there&#8217;s this huge ocean of power in the public. Easier said than done, but I think that&#8217;s one step forward in terms of shifting what the future looks like.</p><p><strong>Dan Williams</strong><em><strong> (52:20)</strong></em></p><p>That&#8217;s great. Yeah.
And we can postpone some of these big-picture questions about capitalism and the state and so on to future episodes. Maybe a general topic to end with is to return to this discussion of anthropomorphism. Something that Henry and I touched on in our social AI episode from a couple of weeks ago is a worry about this AI companion phenomenon, which is just the sort of mass delusion, mass psychosis worry, partly founded on the idea that, well, look, there&#8217;s just no consciousness when it comes to these systems.</p><p>So we can talk about the psychological benefits, the impact upon social health and so on, but there&#8217;s just something deeply problematic about the fact that people are forming what they perceive to be relationships with systems that many people think are not conscious. There&#8217;s nothing it&#8217;s like to be these systems. There are no lights on inside, and so on. Rose, what are your thoughts about that debate about consciousness and its connection to anthropomorphism?</p><p><strong>Rose E. Guingrich</strong><em><strong> (53:19)</strong></em></p><p>Well, I have somewhat of a hot take here. Given that there is so much debate and discussion around whether or not AI can be or is conscious, my perspective is that whether or not it&#8217;s conscious is less of a concern, and maybe not even a concern. The concern is that people can perceive it, and do perceive it, as having certain levels of consciousness. And that has social impacts. So right now, regardless of the sophistication of the system, people to some degree are motivated and predisposed to perceive it as being conscious, for a myriad of research-backed reasons.</p><p>And there&#8217;s something to be said here about how this is not unnatural, it&#8217;s not weird. People have a tendency to see themselves in other entities, because that&#8217;s what we&#8217;re familiar with. And so in order to understand what it&#8217;s like to be a thing, or predict that thing&#8217;s behavior, or even socially connect with that entity, we tend to anthropomorphize non-human agents, in order to attain those things that we find valuable and meaningful. People are predisposed to attune to social stimuli because social connection is what helps us flourish, and so it&#8217;s better to be able to see something as human-like and potentially connect with it, given our social needs.</p><p>And given that, people are also predisposed to perceive human-like stimuli as having the internal characteristics of a human-like mind. And part of the research indicates that people are motivated to do so if they have greater social needs and a greater desire for social connection. And so we&#8217;re at this kind of pivot point where we have rising rates of global loneliness, we have the introduction of these human-like chatbots, anthropomorphism is on the rise, and therefore so are the social impacts.</p><p>And so it&#8217;s consciousness at the level of perception, and the push from the AI&#8217;s characteristics, that I think is the concern we need to be addressing, rather than whether there are certain characteristics of an AI agent that would enable it to be conscious. People already perceive it as such.</p><p><strong>Henry Shevlin</strong><em><strong> (55:37)</strong></em></p><p>I would still, well, I guess a lot of people are gonna say that whether or not some of these behaviors are appropriate, ethical, rational is actually gonna depend on whether the system is conscious.
So I can easily imagine that very soon we&#8217;ll have stories of people leaving carve-outs in their wills to keep their AI companions running, and their children will be outraged, or think that they could have given that money to charity, and so forth.</p><p>And people are gonna say this is just a gross misallocation of resources, basically keeping a puppet show going when there&#8217;s no consciousness, there&#8217;s no experience. So I don&#8217;t know, I totally agree with you. I&#8217;ve said before that I think people who are skeptical of AI consciousness are just on the wrong side of history. It&#8217;s already clear that the public will end up treating these systems as conscious.</p><p>But I say that knowing, or recognising, that this could be a really big, really bad problem. Being on the so-called right side of history may be informative from a historical point of view, but it doesn&#8217;t mean that you&#8217;re necessarily making the correct choice. So yeah, I&#8217;m just curious: are there still ways in which it matters whether these things are conscious or not?</p><p><strong>Rose E. Guingrich</strong><em><strong> (56:55)</strong></em></p><p>Yeah, I suppose you could, for example, look at animal consciousness, and being on the wrong side of history there, when people said animals were not conscious way back when. If you were to say that now, you&#8217;d be very much seen as on the wrong side of history, and that has related to, for example, animal rights and all of this. And so then I suppose your question is: okay, so maybe AI is conscious, and so we at least need to treat it as such, or give it that sort of moral standing, otherwise we might do it great harm. And I think that is a useful position to consider.</p><p>And it might be one that&#8217;s useful to consider just because perceptions of consciousness tend to align with perceptions of morality. And that holds weight. So if someone perceives an AI system as conscious, they might also perceive it as being a moral agent, capable of moral or immoral actions, or a moral patient, worthy of being treated in a respectful and moral way. Perhaps you should not turn the AI chatbot off.</p><p>But I think it&#8217;s difficult when the benchmark for consciousness is constantly moving further and further away: as soon as we get something that seems a little bit like it&#8217;s meeting the mark, our benchmark for consciousness is all the way over here, right? And I think we&#8217;re going to continue to do that. But of course, animals have been incorporated into the idea of consciousness, and I think that&#8217;s really valuable.</p><p>But it&#8217;s also worth saying that consciousness is very much a social construct. And social norms to a great extent define what gets considered conscious or not. So I don&#8217;t know what you think about that, but that&#8217;s kind of my position at this point.</p><p><strong>Dan Williams</strong><em><strong> (58:47)</strong></em></p><p>That&#8217;s a very, very spicy take to inject right near the end of the conversation.</p><p><strong>Rose E. Guingrich</strong><em><strong> (58:51)</strong></em></p><p>We&#8217;ve been debating consciousness for a long time. And listen, what is it called? There&#8217;s this human uniqueness thing, right? Humans want to retain their uniqueness.
And if there&#8217;s a threat to human uniqueness, there&#8217;s research indicating that if you make that threat salient, people tend to ascribe less human-like characteristics to AI agents. So they push the idea that humans have all of these great characteristics and AI doesn&#8217;t, and it&#8217;s when they&#8217;re presented with this threat to their own uniqueness that they create this gap.</p><p><strong>Dan Williams</strong><em><strong> (59:33)</strong></em></p><p>We love spicy takes here on AI sessions. I suppose my view is, well, actually, to be honest, I think lots of the discourse surrounding consciousness, and lots of the ways in which we think about it, is subject to all the sorts of biases that you&#8217;ve mentioned, and additional ones. And I think we often do think about consciousness in an almost pre-scientific way.</p><p>Nevertheless, it does seem to me that there&#8217;s a fact of the matter about whether a system is conscious, and that fact of the matter has ethical significance. I mean, what you mentioned there, in terms of how we treat these systems being shaped by whether they are in fact conscious, that seems relevant.</p><p>But to return to this issue about what a dystopian scenario might look like: to me, at least, it does feel very dystopian if, let&#8217;s suppose, we end up building AI companions that just outcompete human beings at providing the kinds of things that human beings care about. They&#8217;re just so much better at satisfying people&#8217;s social, emotional, sexual needs and so on. And so in 50 years&#8217; time, a hundred years&#8217; time, human-human relationships have just dissolved, and people are spending their time with these machines. Maybe they&#8217;ve got multiple AI companions, and so on.</p><p>If it is in fact the case that, from the perspective of consciousness, these might as well just be toasters, that there&#8217;s nothing going on subjectively for these systems, then to me that&#8217;s a very different world from one in which these sophisticated AI companions actually do have some inner subjective experience. Yeah, sorry, there&#8217;s not really a question there. That was just me bouncing off your hot take.</p><p><strong>Rose E. Guingrich</strong><em><strong> (01:01:19)</strong></em></p><p>Yeah, I&#8217;m curious: what is the difference, then, between its being truly a toaster and truly a conscious being, when, regardless of which it actually is, people are interacting with these agents as if they are conscious, and that allows them to feel social connection? Is it more a moral stance that you&#8217;re indicating, that that&#8217;s where the difference lies between these two things? I mean, if there&#8217;s no answer to this question, feel free to ignore it, but I&#8217;m curious.</p><p><strong>Dan Williams</strong><em><strong> (01:01:52)</strong></em></p><p>Well, I&#8217;ll just say one thing, and then I&#8217;m interested in what Henry thinks as well. I would have thought, you know, that the question of what consciousness is and what&#8217;s constitutive of conscious experience is ultimately a scientific question, and the state of the science in this area just hasn&#8217;t come along very far. And I think there&#8217;s a set of empirical questions there.
It wouldn&#8217;t surprise me if the way in which we&#8217;re conceptualizing the entire domain is just deeply flawed in various ways.</p><p>But I guess even acknowledging all of that, and even acknowledging your point that the way in which we think about consciousness is shaped by all sorts of different factors, I&#8217;m still confident, not certain, but confident, that there is just a fact of the matter about whether a system is conscious or not, even if we don&#8217;t currently have a good scientific theory of consciousness. But Henry, this is really your area, so why don&#8217;t you give us your take?</p><p><strong>Henry Shevlin</strong><em><strong> (01:02:50)</strong></em></p><p>Yeah, well, I&#8217;m quite torn, because this controversial line that consciousness is a social construct is a view I flirt with, right? And it certainly seems plausible if you look at, for example, the role of thought experiments in actual consciousness science. If we&#8217;re talking about Searle&#8217;s Chinese Room or Ned Block&#8217;s China brain, these kinds of thought experiments, these intuition pumps, have played a big role, and these intuition pumps absolutely get shifted around via social relations.</p><p>So I can imagine that 10 years from now, or maybe 10 years is premature, but 20 years from now, people will look back at Searle&#8217;s Chinese Room and have a very different intuition from us. So I can totally see a role for social norms and relational norms in informing our concept of consciousness, but I do also find it quite hard to shake the idea that there is an answer.</p><p>I think this is particularly acute in the case of animal consciousness. If I drop a lobster into a pot of boiling water, it seems really important whether there is subjective experience happening there or not. And if there is subjective experience of pain, a large amount of morality seems to hinge on that. Yeah, go ahead, Rose.</p><p><strong>Rose E. Guingrich</strong><em><strong> (01:04:02)</strong></em></p><p>Well, I&#8217;m curious. There are people who believe that the lobster is conscious, but they still throw it in the pot of boiling water. And so my question is: if you were to attain the answer to what is conscious, whether this entity is conscious, and what the properties are that mean that, yes, it&#8217;s conscious, the question remains, what do you do about it?</p><p>That&#8217;s my question. And I think that we have not reached a consensus about what it is that we will do in response to figuring out that something is conscious. I&#8217;m thinking about animals, of course; animal rights came around, but you can also think about how many human rights are still bulldozed over despite our recognizing that humans are conscious. And so I guess that&#8217;s my question. What is the answer to what to do when something is conscious?</p><p><strong>Henry Shevlin</strong><em><strong> (01:04:58)</strong></em></p><p>Yeah, I mean, I completely agree that the line that takes you from &#8220;X is conscious&#8221; to actual legal protections and practical protections is a very, very wavy line and a very blurry line. I do think there is some traffic between the two concepts, though.
For example, recent changes to UK animal welfare laws were heavily informed by the work of people like Jonathan Birch on decapod crustaceans and the growing case for conscious experience in these animals.</p><p>Now, unfortunately, that doesn&#8217;t mean that we&#8217;re gonna treat all these animals well, but it does impose certain restrictions on their use in laboratory contexts, for example. But look, I completely agree that I could imagine a world where it&#8217;s recognized that AI systems are conscious, but they have very diminished rights compared to humans, if any. So I agree, it&#8217;s not a neat relationship.</p><p>But finally, maybe on this topic, and to really close this out, I&#8217;m curious whether you&#8217;d see this becoming a major culture-wars issue, whether in the form of AI companions or AI consciousness. Is this going to be the thing that people are having rows at Thanksgiving dinner over, 10 years from now?</p><p><strong>Rose E. Guingrich</strong><em><strong> (01:06:07)</strong></em></p><p>Yeah, for sure. And I think that one consideration within the consciousness debate is whether or not companies should be allowed to turn off AI companions that people have grown deep attachments to. Is there a duty of care, on the basis that this is maybe a conscious being, or at least that someone feels extreme attachment to this being and perceives it as conscious? If you were to turn the system off, removing its memory and all of the memories of interactions between the user and the chatbot, and the user then has a serious mental health crisis and maybe even goes to the extent of taking their own life, then I think these sorts of protections are critical.</p><p>But then you also have to ask: was it ethical to design an AI system that someone could get attached to to this degree without some sort of baseline protection in place? And yeah, I do think that AI companions will perhaps become the topic of dinner conversations, and at least at the beginning it&#8217;s going to be a little bit like, what do you think about this? This is crazy.</p><p>And then, of course, maybe in five years, much like the bring-your-chatbot-to-a-dinner-date thing happening in New York City, I don&#8217;t know if you&#8217;ve heard about that, perhaps there will be a seat at the Thanksgiving table for your AI companion, whether it&#8217;s embodied in a robot form or not. But yeah, New York City is hosting its first AI companion cafe, where people can have dinner with their AI companion in a real restaurant. And it&#8217;s hosted by Eva AI, and if you look at Eva AI&#8217;s website, you can definitely see who the target audience is.</p><p>But in any case, there&#8217;s a long waitlist for this activity, and it&#8217;s launching sometime in December. You have to download the Eva app in order to have dinner with an AI companion. Perhaps you are required to have dinner with the Eva companion, or maybe you can bring your own. But again, this is happening, so it&#8217;s not out of the question that this is going to become more socially normalized.</p><p><strong>Dan Williams</strong><em><strong> (01:08:17)</strong></em></p><p>We&#8217;re entering into a strange, strange world. Okay, that was fantastic. Rose, is there anything that we didn&#8217;t ask you that you wish we had? Is there anything that you want to plug before we wrap things up?</p><p><strong>Rose E.
Guingrich</strong><em><strong> (01:08:31)</strong></em></p><p>No, I think we covered a lot of great things and I hope that people enjoyed the hot takes. I&#8217;m sure I&#8217;ll get some backlash over that, but hey, I&#8217;m always up for lively debate, so have at it. I&#8217;ll take it.</p><p><strong>Henry Shevlin</strong><em><strong> (01:08:44)</strong></em></p><p>We should mention that you&#8217;ve been running a great podcast with another friend of mine, Angie Watson. Do you want to say a little bit about that and where people can find that?</p><p><strong>Rose E. Guingrich</strong><em><strong> (01:08:54)</strong></em></p><p>Yeah, so you can find Our Lives with Bots, the podcast, at ourliveswithbots.com and you can listen on any streaming platform that you prefer. And it&#8217;s all about the psychology and ethics of human AI interaction. So our first series covered companion chatbots and our second series covers the impact of AI on children and young people. And intermittently, we do What&#8217;s the Hype episodes and cover things like, for example, dinner dates with your AI companion. So be sure to tune in if you want to go deeper into those topics.</p><p><strong>Dan Williams</strong><em><strong> (01:09:23)</strong></em></p><p>Fantastic. Well, thank you, Rose. That was great. And we&#8217;ll be back in a couple of weeks.</p><p><strong>Rose E. Guingrich</strong><em><strong> (01:09:29)</strong></em></p><p>Thanks for having me.</p><p><strong>Henry Shevlin</strong><em><strong> (01:09:30)</strong></em></p><p>Thanks all, a pleasure to have you.</p><p></p>]]></content:encoded></item><item><title><![CDATA[America's epistemological crisis (reprise)]]></title><description><![CDATA[Polarization, populism, and perspective]]></description><link>https://www.conspicuouscognition.com/p/americas-epistemological-crisis-reprise</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/americas-epistemological-crisis-reprise</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Sun, 14 Dec 2025 14:02:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-BhJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-BhJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-BhJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-BhJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-BhJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!-BhJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-BhJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg" width="1152" height="640" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:640,&quot;width&quot;:1152,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-BhJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-BhJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-BhJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-BhJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe213afe4-57db-49a9-86f1-f5559bbba8ad_1152x640.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Friends, </em></p><p><em>Next week, I&#8217;m publishing a detailed 
response to the <a href="https://www.everythingishorrible.net/p/our-problem-isnt-polarization-its?hide_intro_popup=true">popular</a> <a href="https://www.motherjones.com/politics/2025/11/jason-stanley-fascism-trump-history/">argument</a> that focusing on polarization and its pathologies functions as a dangerous kind of both-sidesism that &#8220;normalizes&#8221; and, hence, enables fascism. I&#8217;m on holiday this week, so I&#8217;m re-publishing my analysis of American polarization from last year, drawing on the work of one of my favourite philosophers, <a href="https://global.oup.com/academic/product/power-without-knowledge-9780190877170">Jeffrey Friedman</a>:</em></p><div class="pullquote"><p>&#8220;Science has been corrupted. We know the media has been corrupted for a long time. Academia has been corrupted. None of what they do is real. It&#8217;s all lies!&#8230; We really live, folks, in two worlds.&#8230; We live in two universes. One universe is a lie. One universe is an entire lie. Everything run, dominated, and controlled by the left here and around the world is a lie. The other universe is where we are, and that&#8217;s where reality reigns supreme and we deal with it. And seldom do these two universes ever overlap.&#8221; - Rush Limbaugh, <a href="https://www.rushlimbaugh.com/daily/2009/11/24/climategate_hoax_the_universe_of_lies_versus_the_universe_of_reality/">2009</a>. </p><p>&#8220;If we do not have the capacity to distinguish what&#8217;s true from what&#8217;s false, then by definition the marketplace of ideas doesn&#8217;t work. And by definition our democracy doesn&#8217;t work. We are entering into an epistemological crisis.&#8221; - Barack Obama, <a href="https://www.theatlantic.com/ideas/archive/2020/11/why-obama-fears-for-our-democracy/617087/?utm_source=newsletter&amp;utm_medium=email&amp;utm_campaign=atlantic-daily-newsletter&amp;utm_content=20201116&amp;silverid-ref=MzM1MDQ4NjU4NTk5S0">2020</a></p></div><h1><strong>An epistemological crisis</strong></h1><p>Both political tribes in the USA believe the country is confronting an epistemological crisis. More specifically, they think the other tribe<em> </em>has lost its mind.</p><p>The blue tribe observes a Republican Party and conservative media ecosystem poisoned by <a href="https://time.com/6837548/disinformation-america-election/">disinformation</a>, <a href="https://www.conspicuouscognition.com/p/misinformation-researchers-are-wrong">misinformation</a>, <a href="https://www.pbs.org/newshour/show/how-right-wing-disinformation-is-fueling-conspiracy-theories-about-the-2024-election">conspiracy theories</a>, <a href="https://blogs.lse.ac.uk/medialse/2023/11/08/the-rise-of-right-wing-populism-diagnosing-the-disinformation-age/">populism</a>, and <a href="https://mitpress.mit.edu/9780262535045/post-truth/">post-truth</a>. In their optimistic moments, they aim to address this crisis through various technocratic measures. By censoring, <a href="https://www.amazon.co.uk/Invisible-Rulers-People-Turn-Reality/dp/1541703375">moderating</a>, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9082967/">nudging</a>, fact-checking, and <a href="https://www.bostonreview.net/articles/the-fake-news-about-fake-news/">inoculating</a> a public infected with falsehoods and lies, they hope to drag America back to a <a href="https://www.politico.com/blogs/ben-smith/2010/09/obama-on-fox-029570">golden age of objectivity</a> in which people agreed on facts, even when they disagreed on values. 
In their more pessimistic moments, they treat the red tribe as a dangerous <a href="https://www.politico.com/news/magazine/2022/04/16/history-shows-trump-personality-cult-end-00024941">cult</a>, an inexplicably psychotic force in American politics that can, at best, be kept away from power. </p><p>The red tribe observes a very different reality: a coalition of smug liberal elites, biased mainstream media outlets, and weak sheeple&#8212;so-called &#8220;<a href="https://x.com/elonmusk/status/1769343816434139583?lang=en">NPCs</a>&#8221; (<a href="https://www.foxbusiness.com/media/elon-musk-mocks-media-overwhelmingly-negative-coverage-trumps-x-event-so-predictable">non-player characters</a>)&#8212;all infected by <a href="https://www.conspicuouscognition.com/p/there-is-no-woke-mind-virus">wokeism</a>, virtue signalling, and left-wing activism masquerading as &#8220;expertise&#8221; and &#8220;science&#8221;. In their optimistic moments, they hope the crisis can be solved by <a href="https://www.imdb.com/title/tt33034103/">exposing progressive insanity</a> and handing out <a href="https://en.wikipedia.org/wiki/Red_pill_and_blue_pill">red pills</a> to converts like Elon Musk and Joe Rogan with the courage to face reality. In their more pessimistic moments, they treat the blue tribe as a sinister <a href="https://en.wikipedia.org/wiki/Fifth_column#:~:text=A%20fifth%20column%20is%20a,enemy%20group%20or%20another%20nation.">fifth column</a> in American society, so deeply embedded in cultural and political institutions that only a radical overhaul of these systems could restore the country to its previous greatness. </p><p>Of course, this description is painted with broad brush strokes. Most citizens are <a href="https://www.science.org/doi/10.1126/science.abe1715">not nearly as ideologically polarized</a> as it suggests, and it ignores much complexity, including the existence of <a href="https://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/">other political tribes</a>. </p><p>Nevertheless, anyone who pays attention to American politics and its broader culture war will recognize some truth in this stick-figure depiction. Many liberals and conservatives seem to inhabit <a href="https://www.amazon.co.uk/Invisible-Rulers-People-Turn-Reality/dp/1541703375">distinct realities</a>. And within these realities, they have constructed narratives to explain why their ideological enemies are afflicted with ignorance, lies, and delusion. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Conspicuous Cognition is a completely reader-supported publication. To receive new posts and support my work, consider becoming a paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h1><strong>Understanding the epistemological crisis</strong></h1><p>Is it possible to step back from this tribal conflict and achieve an objective view of this situation? 
Although many researchers and pundits have tried, most analyses are highly partisan, affirming and rationalizing one side&#8217;s favoured narrative.</p><p>In this essay, I will explore a highly original attempt that cannot be accused of this: Jeffrey Friedman&#8217;s article, &#8216;<em><a href="https://www.tandfonline.com/doi/full/10.1080/08913811.2023.2221502">Post-Truth and the Epistemological Crisis</a>&#8217;</em>. Friedman&#8217;s analysis is as interesting as it is radical. Although I think it is ultimately mistaken, it contains several insights that deserve a bigger audience. </p><h1><strong>Background</strong></h1><p>Shortly before Friedman&#8217;s tragic death in 2022, he published his magnum opus, &#8216;<em><a href="https://global.oup.com/academic/product/power-without-knowledge-9780190877170?cc=gb&amp;lang=en&amp;">Power without Knowledge</a>&#8217;. </em>Developing ideas from the early-twentieth-century journalist and social theorist <a href="https://www.conspicuouscognition.com/p/can-democracy-work">Walter Lippmann</a>, the book launches a skeptical challenge to the idea that complex, modern societies can be understood and managed, either by ordinary citizens or credentialed experts.</p><p>Three general ideas from <em>Power without Knowledge </em>provide the background for Friedman&#8217;s analysis of America&#8217;s epistemological crisis.</p><h2><strong>Naive realism</strong></h2><p>First, Friedman rejects &#8220;<a href="https://www.conspicuouscognition.com/p/in-politics-the-truth-is-not-self">naive realism</a>&#8221;, the stance that</p><blockquote><p>&#8220;I see entities and events as they are in objective reality&#8230; My social attitudes, beliefs, preferences, priorities, and the like follow from a relatively dispassionate, unbiased, and essentially &#8220;unmediated&#8221; apprehension of the information or evidence at hand.&#8221;</p></blockquote><p>Against this, Friedman argues that a person&#8217;s access to reality is profoundly mediated. </p><p>It is <em>socially mediated</em> because in forming beliefs about reality beyond our immediate environment, we rely almost entirely on information we acquire from others&#8212;from community members, teachers, journalists, politicians, pop stars, priests, experts, pundits, academics, media outlets, and so on. For this reason, our lived realities&#8212;what Walter Lippmann <a href="https://en.wikipedia.org/wiki/Public_Opinion_(book)">called</a> our &#8220;pseudo-environments&#8221;, our mental models <em>of </em>reality&#8212;are powerfully shaped by the social information we encounter and the people and institutions we trust. </p><p>Our access to reality is <em>interpretively </em>mediated because facts never arrive pre-interpreted or explained. (This is the grain of truth in Nietzsche&#8217;s claim that &#8220;there are no facts, only interpretations&#8221;). To make sense of a vast body of information, we must organise it with what Lippmann called &#8220;stereotypes&#8221;, simplifying systems of concepts, explanatory frameworks, and narratives that transform reality into a manageable, low-resolution format we can use to understand and explain events. For this reason, people can and do encounter the same facts but interpret them very differently. 
</p><h2><strong>Bias</strong></h2><p>Second, because this mediated access to reality is not just vulnerable to partiality and error but is highly path-dependent&#8212;the information we encounter, trust, and interpret depends on the previous information we encountered, trusted, and interpreted&#8212;there is an unavoidable sense in which <a href="https://www.conspicuouscognition.com/p/should-we-trust-misinformation-experts">everyone is biased</a>:</p><blockquote><p>&#8220;A truly sophisticated epistemology has to recognize that the mix of truths and errors in which each of us believes forms an interconnected web that, as it grows in breadth and depth over a lifetime, comes to function increasingly like an ideology in the neutral sense of the term: a self-perpetuating worldview. The self-perpetuation stems from the fact that we continually screen candidates for entry into each of our webs of belief, and the primary screening criterion is whether the candidates seem plausible in light of what we already believe. Those candidates that do not seem plausible, or even legible, are rejected or ignored.&#8221;</p></blockquote><p>For Friedman, this implies that </p><blockquote><p>&#8220;<em>everyone</em> is biased, and the only question to be asked of social scientists, journalists, or other political actors in a given time and place is precisely which biases are at work&#8212;not whether any biases are at work.&#8221; </p></blockquote><h2><strong>Intellectual charity</strong></h2><p>Finally, Friedman thinks the radical <a href="https://iep.utm.edu/fallibil/">fallibilism</a> implied by these first two ideas should lead us to approach people&#8217;s beliefs with intellectual charity. </p><p>Once we appreciate that reality does not sharply constrain how even rational and well-meaning people come to view and understand the world, we should strive to understand people&#8217;s worldviews in ways that do justice to their perspective. That is, rather than dismissing those we disagree with as liars or as victims of self-deception, irrationality, or brainwashing, we should try to empathetically &#8220;put ourselves into the streams of information and interpretation that shape their webs of belief&#8221;. We should identify the genuine reasons that drive sincere, rational individuals to construct specific pseudo-environments. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.conspicuouscognition.com/subscribe?"><span>Subscribe now</span></a></p><h1><strong>The analysis</strong></h1><p>Against this background, Friedman argues that America&#8217;s &#8220;epistemological crisis stems from the widespread adoption of naive first- and third-person realism&#8221;. </p><p>Recall that naive realists interpret their beliefs as simple reflections of self-evident truths. Given this, they assume that anyone who rejects these self-evident truths is, at best, delusional and, at worst, deliberately ignoring or misrepresenting the truth. </p><p>Nevertheless, naive realism comes in different forms depending on which &#8220;truths&#8221; are considered self-evident. 
</p><p>According to Friedman,</p><blockquote><p>&#8220;Those on the right tend to be first-person naive realists in treating economic and social realities as accessible to the ordinary political participant by simple common sense, while those on the left tend to be third-person naive realists in treating credentialed experts as forming a consensus&#8212;a new common sense.&#8221; </p></blockquote><p>This is a jargon-heavy way of expressing a familiar truth. </p><p>It has long been <a href="https://www.amazon.co.uk/Enlightenment-2-0-Restoring-politics-economy/dp/0062342894">observed</a> that conservatives value &#8220;common sense&#8221;, a kind of pre-theoretic body of assumptions, intuitions, and convictions about politics and society that strike many Americans as obvious. And it is a familiar feature of right-wing populism that it rejects attempts by those with fancy degrees from prestigious institutions to overturn this common sense. For such populists&#8212;for first-person naive realists&#8212;such &#8220;experts&#8221; in what Rush Limbaugh <a href="https://www.nature.com/articles/467133a">called</a> &#8220;the four corners of deceit&#8221; (government, academia, science, and media) wilfully ignore commonsense truths in the service of sinister left-wing agendas. </p><p>Similarly, it is hardly news that modern liberals view themselves as the party of science and experts. This is reflected in popular liberal slogans (&#8220;I believe in science&#8221;, &#8220;Follow the science&#8221;, &#8220;Trust the experts&#8221;, etc.), in the willingness of elite scientific and academic institutions to align themselves with liberal politics, and in many liberals&#8217; eager embrace of highly counterintuitive ideas originating within universities&#8212;for example, concerning omnipresent but invisible (i.e., implicit and systemic) forms of oppression, or the fluidity and self-construction of gender. Perhaps most tellingly, it is reflected in the blue tribe&#8217;s popular &#8220;<a href="https://www.youtube.com/watch?v=t6ASLiZ5b1M&amp;t=3729s">post-truth</a>&#8221; analysis that rejecting the authority of credentialed experts amounts to rejecting truth itself. </p><h2><strong>Post-truth</strong></h2><p>Before turning to Friedman&#8217;s account of the history leading to America&#8217;s epistemological crisis, it is helpful first to consider this &#8220;<a href="https://mitpress.mit.edu/9780262535045/post-truth/">post-truth&#8221; analysis</a> favoured by many experts within the blue tribe. </p><p>Although this analysis comes in different forms, the core idea is that before Trump and his precursors (e.g., the Tea Party) took over the Republican Party and ushered in the &#8220;post-truth era&#8221;, America inhabited a golden age of objectivity, the truth era. </p><p>Of course, there was some disagreement within the truth era, and occasional epistemic fuck-ups, such as invading countries based on false information and blowing up the world economy based on false economic theories. Nevertheless, according to the post-truth analysis, this occurred against a background of substantial consensus, deference to experts, and respect for truth. 
Influential people did not brazenly <a href="https://www.theguardian.com/us-news/2017/jan/22/donald-trump-kellyanne-conway-inauguration-alternative-facts">lie about crowd sizes</a>, suggest <a href="https://www.theguardian.com/us-news/2023/jan/28/marjorie-taylor-greene-kevin-mccarthy-republicans-house-committee">Satanic paedophiles run the government</a>, or <a href="https://www.bbc.co.uk/news/articles/c77l28myezko">fabricate</a> an epidemic of pet-eating immigrants. </p><p>Post-truth scholars typically trace the beginning of the decline of this golden age to the <a href="https://en.wikipedia.org/wiki/Merchants_of_Doubt">propagandistic activities</a> of tobacco companies and fossil fuel companies. These companies purposefully sought to create doubt about expert consensus on the harms of smoking and the reality of climate change, which they achieved primarily by influencing the right-wing media ecosystem that emerged in the late twentieth century (especially Fox News). This then paved the way for a more general right-wing attack on science, experts, and truth itself. </p><p>As Lee McIntyre puts it in <em><a href="https://mitpress.mit.edu/9780262546300/on-disinformation/">On Disinformation</a></em>, a representative treatment of this topic favourably reviewed by almost every prestigious liberal outlet, </p><blockquote><p>&#8220;One imagines some ambitious, orange-haired politician making the cynical leap of inference from cigarettes and global warming to other fact-based beliefs: &#8220;Why, if they can get away with lying about <em>that</em>, I can lie about anything at all.&#8221; And he did.&#8221; </p></blockquote><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.conspicuouscognition.com/subscribe?"><span>Subscribe now</span></a></p><h3><strong>Friedman&#8217;s alternative history</strong></h3><p>Friedman tells a different story. </p><p>Although there was a kind of &#8220;golden age&#8221; in American politics, it was not a golden age of truth or objectivity but rather a golden age of political consensus: the post-New Deal consensus. According to Friedman, this was overwhelmingly a consensus of establishment liberals, indicated by the political dominance of the Democratic Party throughout this time, the popularity of liberals even within the GOP, and the hegemony of establishment liberalism within leading cultural institutions, including academia, media, and the arts. </p><p>Of course, this liberal dominance was not absolute. For example, the Republicans occasionally elected a president due to the personal charisma of their nominees or events that undermined the popularity of Democratic candidates. Moreover, there was some dissent from establishment liberal ideas within elite political culture, such as William F. Buckley&#8217;s <em>National Review </em>and, later, towards the end of the golden age, the Chicago School&#8217;s mainstreaming of right-wing libertarianism, primarily through the efforts of Milton Friedman. </p><p>Nevertheless, the several decades of the middle-twentieth century were overwhelmingly a period of establishment liberal hegemony. 
It is this period of &#8220;epistemological complacency&#8221;, writes Friedman, &#8220;that the post-truth discourse mourns, for post-truth scholars mistake agreement&#8212;agreement among experts, and agreement with experts by nonexperts&#8212;as a sign of truth.&#8221; </p><p>Of course, this &#8220;golden age&#8221; eventually broke down. To understand this process, Friedman highlights the trajectory of two segments of American society that were marginalized and excluded by the liberal establishment consensus. </p><h3><strong>The emergence of the right </strong></h3><p>Friedman dates the &#8220;beginning of the end of the Golden Age&#8221; to 1987 when the FCC repealed the <a href="https://en.wikipedia.org/wiki/Fairness_doctrine">Fairness Doctrine</a>, which had required those with broadcast licenses to present political controversies in a fair and balanced way. </p><p>This paved the way for the emergence of a thriving right-wing media ecosystem that catered to a large segment of American society excluded from the establishment liberal consensus. This included highly influential talk radio hosts such as <a href="https://www.vox.com/policy-and-politics/22151088/rush-limbaugh-trump-talk-radio-fox-news-paul-matzko">Rush Limbaugh</a>, who&#8212;as an <a href="https://web.archive.org/web/20050429070116/http://www.opinionjournal.com/columnists/dhenninger/?id=110006626">article</a> in the Wall Street Journal puts it&#8212;&#8220;was the first man to proclaim himself liberated from the East Germany of liberal media domination.&#8221; Of course, it later included Fox News, which launched in 1996. </p><p>One aspect of this right-wing media ecosystem involved the positive affirmation, celebration, and justification of commonsense attachments to faith, flag, and free markets. However, an equally important aspect was an obsession with liberal bias in mainstream media and other cultural institutions, such as academia and Hollywood. (Fox&#8217;s original slogan was &#8220;Fair and Balanced&#8221;, an explicit rebuke to mainstream liberal media, which was depicted as unfair and unbalanced). </p><p>According to Friedman, the attachment to naive first-person realism led conservatives to interpret mainstream liberal bias as sinister. Because &#8220;commonsense&#8221; beliefs are self-evidently true, the establishment was not just mistaken but <em>deliberately biased</em>: </p><blockquote><p>&#8220;The media and other elites who got reality wrong, according to the populist conservatives, did so knowingly, as these elites, like everyone else, had access to the self-evident truths that they claimed to reject. The charge of media bias, then, led to the conclusion that liberal elites were engaged in a conspiracy against the truth in the service of self-serving political ends.&#8221;</p></blockquote><p>Once the growing influence of the conservative media ecosystem became apparent to establishment liberals, they reacted similarly. </p><p>Especially given the prominence of climate change scepticism within this new media ecosystem, such liberals, as third-person naive realists, assumed conservatives must be aware of the self-evident truth of climate science. Given this, they treated the rejection of climate science as a kind of mass deception and denial.   
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.conspicuouscognition.com/subscribe?"><span>Subscribe now</span></a></p><h3>The integration of the radical left</h3><p>An equally important part of this history focuses on another segment of American society marginalised by the post-New Deal consensus: the radical left. As a fundamentally establishment<em> </em>consensus, the &#8220;golden age&#8221; was just as hostile to anti-establishment, left-wing radicals as to right-wing populists. Such radicals viewed the liberal establishment not as a bastion of truth and objectivity but as a <a href="https://en.wikipedia.org/wiki/Manufacturing_Consent">propagandistic tool</a> of capitalism, American imperialism, and&#8212;especially according to radicals near the end of the &#8220;golden age&#8221;&#8212;white supremacist, patriarchal, and heteronormative hegemony. </p><p>Nevertheless, left-wing radicals went on a very different journey to right-wing populists. Instead of setting up their own institutions and alternative epistemic universe, they gradually integrated into establishment institutions. (Friedman traces the beginnings of this &#8220;<a href="https://en.wikipedia.org/wiki/Long_march_through_the_institutions#:~:text=The%20long%20march%20through%20the,by%20becoming%20part%20of%20it.">long march</a>&#8221; of left-wing radicals through institutions to a vast number of job openings in American universities in the late 1950s, which enabled left-wing radicals to enter the academy in large numbers). </p><p>Of course, the relationship between establishment liberals and left-wing radicals has not always been a happy marriage. However, Friedman suggests that the cumulative effect of this process has been a transformation of establishment liberal institutions. Popular forms of progressive radicalism have moderated as they sought to transform these institutions, and these institutions have become more explicitly aligned with left-wing, progressive politics. (For an illustration of this dynamic, check out Scientific American&#8217;s <a href="https://www.scientificamerican.com/article/vote-for-kamala-harris-to-support-science-health-and-the-environment/">explicit endorsement</a> of Kamala Harris for president).</p><p>This process has dramatically exacerbated the mutual alienation and hostility felt between America&#8217;s two tribes:</p><blockquote><p>&#8220;The resulting identification of academic expertise with an activist left-wing orthodoxy, which is now officially proclaimed on university websites, in college admissions materials, and in first-year orientation programming, only serves to confirm, on the right, the suspicion that &#8220;expertise&#8221; is an ideological sham.&#8221;</p></blockquote><p>Of course, the blue tribe&#8212;the party of science and experts&#8212;does not view things this way. As naive realists themselves, they view the connection between expertise and progressive politics as an objective response to the self-evident fact that conservatives have abandoned truth.</p><h3>Trump, Floyd, and the Acceleration of Epistemic Polarization</h3><p>Friedman&#8217;s story ends with two events: the 2016 election of Donald Trump and the killing of George Floyd. 
</p><blockquote><p>&#8220;Trump and his supporters were accused of racism as soon as he announced his candidacy. Therefore, his election, coupled with the specter of widespread police violence against black men, solidified the conviction, on the left and in the mainstream, that little or no racial progress had been made since the Civil Rights movement. After Floyd&#8217;s death, this conviction led to a veritable anti-racist revolution that swept every major cultural institution, from universities to art museums and children&#8217;s book publishing.&#8221;</p></blockquote><p>According to Friedman, Trump voters have not just felt insulted by these pervasive accusations of racism and the new ways of conceptualising and understanding racism within establishment institutions. They have also felt baffled. Among those for whom &#8220;common sense&#8221; includes the beliefs that they are colorblind, that racial progress has been substantial, and that &#8220;reverse racism&#8221; is just as bad as anti-black racism, the explicit repudiation of these ideas among the establishment looks insane.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Conspicuous Cognition is a completely reader-supported publication. To support my work and get access to paywalled essays and the full archive, consider becoming a paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h1>Evaluating the analysis </h1><p>Is Friedman&#8217;s analysis correct? </p><p>Clearly, any general story like this can, at best, amount to a simplified, coarse-grained depiction of far more complex events and processes. Given this, rather than focusing too much on the specifics of Friedman&#8217;s historical account, I will identify some weaknesses in his general approach. </p><p>This approach is rooted in a fundamental methodological assumption: we should treat people&#8217;s beliefs on their own terms when attempting to analyse politics. That is, rather than trying to debunk or deconstruct beliefs by tracing them to underlying social, political, economic, material, or psychological causes, we should treat people&#8217;s worldviews seriously and sympathetically as an independent explanatory domain. This is why Friedman speculates that America&#8217;s &#8220;present political crisis is <em>nothing but</em> an epistemological crisis.&#8221; </p><p>In this sense, Friedman&#8217;s analysis involves a fairly extreme &#8220;idealist&#8221; approach to the role of ideas in society. Roughly, idealists treat ideas as an autonomous realm that explains human behaviours and social organisation, in contrast to so-called &#8220;materialist&#8221; approaches that explain ideas in terms of underlying, more fundamental factors. (Marx&#8217;s claim that ideas are merely parts of a society&#8217;s &#8220;superstructure&#8221; determined by its economic &#8220;base&#8221; makes him the archetypal materialist in this sense.) 
Of course, these are two ends of a continuum rather than a simple dichotomy.</p><p>When combined with his radical skepticism and excessive intellectual charity, Friedman&#8217;s idealism produces various weaknesses in his account.  </p><h2><strong>Against idealism </strong></h2><p>First, although Friedman insightfully describes some real trends in American belief systems, ideas do not come from nowhere. For example, any analysis of the evolution of popular ideologies embraced by America&#8217;s political tribes must consider the shifting <em>alliance structure </em>of American politics: the distinct groups in society that support the two main parties. </p><p>As is well-documented, one reason for the strangely non-polarized aspect of mid-twentieth-century American politics is that both parties were effectively united in the goal of maintaining racial segregation and discrimination, especially in the South. It was only once the Democrats passed major civil rights legislation in the 1960s that American politics began its great ideological and social re-sorting, with black voters flocking to the Democrats and many white voters embracing the Republicans. Describing this process, <a href="https://substack.com/@everythingisbullshit">David Pinsof</a> and colleagues <a href="https://www.tandfonline.com/doi/abs/10.1080/1047840X.2023.2274433">observe</a> three other major political realignments: </p><blockquote><p>&#8220;[1] The Republican Party took ownership of the pro-life, evangelical movement, causing Christian traditionalists to move into the Republican Party and secular feminists to move into the Democratic Party. [2] Influxes of immigrants from Latin America&#8212;coupled with urbanization and the decline of manufacturing work&#8212;gave rise to a rural, white underclass who attributed their declining status to immigration and globalization. [3] At the same time, expanding college enrollment produced a new upper class of highly educated &#8220;knowledge workers&#8221;, while large corporations commanded an increasingly greater share of wealth and political power. These trends resulted in competition and resentment between <em>intellectual elites</em> (e.g., highly educated professionals) and <em>business elites</em> (e.g., wealthy corporate executives). In other words, the lower class split apart based on ethnic rivalries, while the upper class split apart based on status rivalries, thereby weakening the historical link between partisanship and class.&#8221;</p></blockquote><p>You cannot understand the prevalent belief systems in American society and the two tribes&#8217; attitudes towards expertise without engaging with these forces. For example, the fact that the Democratic Party involves a coalition between highly educated white professionals and racial minorities illuminates the degree to which modern liberalism in the US combines deference to credentialed experts with a strong focus on anti-racist politics. Likewise, the strange coalition between rural, uneducated, white, socially conservative Americans and business elites within the Republican Party illuminates why American conservatism became so strongly pro-market in the late twentieth century. </p><p>To be clear, ideologies <a href="https://www.tandfonline.com/doi/abs/10.1080/1047840X.2023.2274412">cannot be reduced</a> to alliance structures, not least because the groups people identify with and support depend on which ideas they embrace. Nevertheless, they do not exist in a wholly autonomous domain either. 
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.conspicuouscognition.com/subscribe?"><span>Subscribe now</span></a></p><h2><strong>Too much charity </strong></h2><p>In addition, although Friedman&#8217;s commitment to intellectual charity&#8212;to treating people as sincere, rational agents rather than dismissing their views as consequences of deception, self-deception, or irrationality&#8212;is refreshing, it is also excessive. </p><p>He is right that naive realism is false. The truth is not self-evident, and even perfectly rational, well-meaning people can acquire very different worldviews based on the information they obtain from others and how they interpret it. Nevertheless, it is a fallacy to infer from the fact that error and disagreement <em>can </em>be caused by purely honest, rational processes that they always are. </p><p>For one thing, lying and propaganda&#8212;in the preferred language of the blue tribe today, &#8220;disinformation campaigns&#8221;&#8212;have played an undeniable role in American politics. Corporations peddle self-serving information and attempt to influence public opinion, as do political and cultural elites. </p><p>Moreover, humans are <a href="https://www.conspicuouscognition.com/p/in-politics-the-truth-is-not-self">not disinterested truth seekers</a>. We engage in <a href="https://www.conspicuouscognition.com/p/why-do-people-believe-true-things">motivated reasoning</a>. We <a href="https://www.conspicuouscognition.com/p/are-people-too-flawed-ignorant-and">advocate</a> for beliefs and narratives that promote our and our favourite groups&#8217; interests and adopt ideas that <a href="https://www.conspicuouscognition.com/p/people-embrace-beliefs-that-signal">win us trust and status</a> within our ingroup. Being humans, these motives powerfully distort how Americans view the world. </p><p>They also shape the <a href="https://www.conspicuouscognition.com/p/the-social-construction-of-bespoke">social dynamics</a> of America&#8217;s conflicting political tribes, which transform partisan narratives into sacred beliefs, reward pundits and intellectuals who rationalise and defend those beliefs, and ostracize and cancel heretics who challenge them.</p><p>Friedman is correct in saying that many people exaggerate the role of such factors. Moreover, much discourse about propaganda, motivated reasoning, and groupthink is itself a form of propagandistic motivated reasoning, treating these issues as if they were restricted to only one of America&#8217;s tribes. </p><p>Nevertheless, you cannot understand American politics and its epistemological character if you treat people as dispassionate, truth-seeking robots. Humans are <a href="https://www.tandfonline.com/doi/abs/10.1080/09515089.2023.2186844">strategic, status-seeking primates</a> who view reality in ways biased by self-interest and tribal allegiances. 
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.conspicuouscognition.com/subscribe?"><span>Subscribe now</span></a></p><h2><strong>America&#8217;s belief systems</strong></h2><p>Once you understand the role of political alliances and motivated reasoning in shaping political belief systems, it also becomes clear that Friedman&#8217;s claim that conservatives and liberals embrace different forms of naive realism is not entirely correct. </p><p>Although it is true that the right and left embrace naive realism and adopt very different attitudes towards credentialed experts and establishment epistemic institutions (science, universities, public health, mainstream media, etc.), their attitudes towards experts are not always consistent. </p><p>For example, although the left is happy to trust experts on topics like climate change, vaccines, and public health guidance, these issues all align with its political agenda. On other topics (<a href="https://www.researchgate.net/publication/346447193_Ideological_Bias_in_the_Psychology_of_Sex_and_Gender">sex differences</a>, <a href="https://www.amazon.co.uk/Intelligence-That-Matters-Stuart-Ritchie/dp/1444791877">IQ</a>, <a href="https://www.amazon.co.uk/Genetic-Lottery-Matters-Social-Equality/dp/0691190801">behavioural genetics</a>, mainstream economics, and so on), there is often a willingness to dismiss or ignore expert consensus. For example, a recent article in <em>The Atlantic </em>defends Kamala Harris&#8217;s proposed price-gouging ban with the title, &#8220;<a href="https://www.theatlantic.com/ideas/archive/2024/08/economists-kamala-harris-price-gouging/679547/">Sometimes You Just Have to Ignore the Economists</a>.&#8221; And even on topics like climate change, liberals tend to be <a href="https://www.slowboring.com/p/misinformation-isnt-just-on-the-right-214">highly selective</a> in which aspects of expert consensus they defer to. </p><p>Admittedly, this is made easier because one can always find academics willing to endorse preferred liberal policies or narratives, not least because of the dominance of liberals within these institutions. Nevertheless, the point is that the story is more complex&#8212;and more biased&#8212;than a default attitude of deference towards credentialed experts. </p><p>Moreover, in some cases, liberals are willing to abandon deference towards &#8220;expert&#8221; knowledge altogether if doing so aligns with political goals, as with the celebration of &#8220;lived experience&#8221; over statistical data when the lived experiences align with progressive narratives. </p><p>Something similar applies to the right, where conservatives are often happy to instrumentalize experts when their conclusions align with conservative causes and narratives. However, Friedman&#8217;s analysis is closer to the mark here because the neglect and contempt of intellectual thought increasingly look all-encompassing since Trump took over the conservative movement. </p><h2><strong>Passing epistemic judgment</strong></h2><p>Finally, Friedman&#8217;s analysis of America&#8217;s epistemological crisis attempts to avoid any assessment of the relative epistemic quality of its two tribes&#8212;that is, the degree to which their belief systems accurately map reality. 
</p><p>To some degree, this is understandable. Given Friedman&#8217;s rejection of naive realism, he denies that ideologies can be evaluated as categorically true or false. Instead, following Walter Lippmann, he thinks of such belief systems as selective, low-resolution models of reality. This means he treats both political tribes in the US as embracing &#8220;a different set of interpretive frameworks that determines how and what it sees of reality.&#8221; </p><p>Nevertheless, one can reject naive realism and acknowledge the possibility of multiple perspectives without thinking all perspectives are equally accurate. At times, Friedman seems to elide this distinction.</p><p>Of course, one can never <a href="https://en.wikipedia.org/wiki/Philosophy_and_the_Mirror_of_Nature">step outside one&#8217;s belief system</a> and evaluate its correspondence to reality. Any evaluation of a set of beliefs will inevitably draw on one&#8217;s own beliefs, which are highly vulnerable to error and partiality for the reasons Friedman identifies. However, this is a reason to be careful and embrace fallibilism, not radical skepticism. </p><p>With this in mind, I will make two general observations about America&#8217;s epistemological crisis. </p><h3><strong>Sectarian misperceptions</strong></h3><p>First, due to the highly polarized and <a href="https://www.science.org/doi/10.1126/science.abe1715">sectarian</a> nature of modern American politics, both sides view reality in ways that are <a href="https://link.springer.com/article/10.1007/s11229-023-04223-1">distorted by partisanship</a> and <a href="https://www.tandfonline.com/doi/abs/10.1080/1047840X.2023.2274433">group allegiances</a>. This biases judgment at the individual level. However, it has also created thriving <a href="https://www.cambridge.org/core/journals/economics-and-philosophy/article/marketplace-of-rationalizations/41FB096344BD344908C7C992D0C0C0DC">rationalization markets</a> in which members of the two tribes compete to win status and financial rewards by justifying and defending their faction&#8217;s favoured narratives.</p><p>Although this is <a href="https://www.conspicuouscognition.com/p/the-marketplace-of-misleading-ideas">common to all politics</a>, it is especially toxic in America because the passionate partisans and pundits of both tribes increasingly inhabit <a href="https://www.science.org/doi/10.1126/sciadv.adg9287">distinct media ecosystems</a>. With declining intergroup communication, the result seems to be growing dogmatism and radicalisation. </p><h3>The failure modes of first-person and third-person naive realism</h3><p>Second, Friedman&#8217;s analysis of the two tribes&#8217; attitudes towards experts illuminates their different failure modes. </p><p>Even though the blue tribe often approaches experts with selectivity and flexibility, its general deference to establishment epistemic institutions produces distinctive errors and blind spots. The simple reason is that these institutions are far from perfect. Outside the hard sciences, many ideas advanced and legitimised within science and academia are <a href="https://www.conspicuouscognition.com/p/should-we-trust-misinformation-experts">simplistic, selective, biased, and unreliable</a>. </p><p>The <a href="https://www.amazon.co.uk/Science-Fictions-Epidemic-Fraud-Negligence/dp/1847925650">replication crisis</a> is one indication of this. 
There were also many well-documented <a href="https://www.theguardian.com/commentisfree/2022/feb/15/this-is-why-some-people-dont-want-to-get-the-covid-vaccine">problems</a> with public health authorities during the Covid-19 pandemic. However, there are countless others. Experts are often <a href="https://www.amazon.co.uk/Superforecasting-Science-Prediction-Philip-Tetlock/dp/1847947158">overconfident and wrong</a>. Whole bodies of putatively scientific knowledge are commonly <a href="https://www.amazon.co.uk/Quick-Fix-Psychology-Cant-Social/dp/0374239800">built on sand</a>. These problems are exacerbated by a situation where the line between progressive activism and science is not always clear (and sometimes wilfully blurred). And in many ways, things are <a href="https://www.conspicuouscognition.com/p/the-media-very-rarely-makes-things">even worse</a> within establishment liberal media. </p><p>These and numerous other factors ensure that the blue tribe&#8217;s picture of reality is frequently biased, selective, or plain wrong. Moreover, without these evident problems with America&#8217;s epistemic institutions, the red tribe&#8217;s proud rejection of such institutions would probably not be possible. </p><p>Nevertheless, the blue tribe&#8217;s problems are much less severe than those confronting the red tribe. </p><p>The Republican Party and conservative media today have become almost fully unmoored from reality. Utterly baseless lies, fabrications, conspiracy theories, and absurdities run rampant. <a href="https://edition.cnn.com/2024/09/10/politics/fact-check-debate-trump-harris/index.html">Nearly everything</a> that comes out of Trump&#8217;s mouth is a lie or exaggeration. And remarkably, this situation seems to worsen as time passes. </p><p>Few things could illustrate this more powerfully than Trump&#8217;s <a href="https://apnews.com/article/haitian-immigrants-vance-trump-ohio-6e4a47c52b23ae2c802d216369512ca5">preposterous</a>, evidence-free, racist claim that immigrants are eating Americans&#8217; pets <em>en masse</em>, something which <a href="https://www.newsweek.com/donald-trump-republicans-haitian-migrants-eating-pets-poll-1954875">most Republicans</a> apparently believed and Tucker Carlson <a href="https://www.youtube.com/watch?v=fkPpHdkL1Zc">celebrated</a> as &#8220;awesome&#8221; because it &#8220;makes all the right people mad.&#8221; </p><p>This is the grain of truth in the blue tribe&#8217;s &#8220;post-truth&#8221; analysis of the modern Republican Party. However, the problem is not that the red tribe has wholly abandoned any concern with truth. The problem is that without knowledge-generating institutions and their norms and procedures (e.g., in science and professional journalism), caring about the truth achieves nothing. The consequence is instead a reversion to an <a href="https://www.amazon.co.uk/Constitution-Knowledge-Jonathan-Rauch-author/dp/0815738862">epistemic state of nature</a> in which ignorance, error, and tribal narratives are the <a href="https://www.conspicuouscognition.com/p/why-do-people-believe-true-things">default state</a>. </p><p>In other words, for all the problems with America&#8217;s knowledge-generating institutions, these institutions evolved over centuries for a reason. If you reject them wholesale, the result is not liberation from bias and delusion; it is the complete capitulation to them. 
</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Conspicuous Cognition is a completely reader-supported publication. To support my work and gain access to paywalled essays and the full archive, consider becoming a paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[AI Sessions #5: How AI Broke Education]]></title><description><![CDATA[Watch now (56 mins) | How can AI help with learning and education? And does it pose an extinction-level threat to the teaching and assessment models that currently dominate schools and universities?]]></description><link>https://www.conspicuouscognition.com/p/ai-sessions-5-how-ai-broke-education</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/ai-sessions-5-how-ai-broke-education</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Thu, 04 Dec 2025 17:40:46 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/180718425/52a37a3ffa2c36f29a63eed5985b53d9.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Henry Shevlin and I sat down to discuss a topic that is currently driving both of us slightly insane: the impact of AI on education. </p><p>On the one hand, the educational potential of AI is staggering. Modern large language models like ChatGPT offer incredible opportunities for 24/7 personal tutoring on any topic you might want to learn about, as well as many other benefits that would have seemed like science fiction only a few years ago. One of the really fun parts of this conversation was discussing how we personally use AI to enhance our learning, reading, and thinking. </p><p>On the other hand, AI has clearly blown up the logic of teaching and assessment across our educational institutions, which were not designed for a world in which students have access to machines that are much better at writing and many forms of problem-solving than they are. </p><p>And yet&#8230; there has been very little adaptation.</p><p>The most obvious example is that many universities still use take-home essays to assess students. </p><p><strong>This is insane. </strong></p><p>We discuss this and many other topics in this conversation, including: </p><ul><li><p>How <em>should </em>schools and colleges adapt to a world with LLMs? 
</p></li><li><p>How AI might exacerbate certain inequalities.</p></li><li><p>Whether AI-driven automation of knowledge work undermines the value of the skills that schools and colleges teach today.</p></li><li><p>How LLMs might make people dumber.</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Conspicuous Cognition is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h1>Links</h1><ul><li><p>John Burn-Murdoch, <em>Financial Times</em>, <a href="https://www.ft.com/content/a8016c64-63b7-458b-a371-e0e1c54a13fc">Have Humans Passed Peak Brain Power?</a></p></li><li><p>James Walsh, <em>New York Magazine</em>, <a href="https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html">Everyone Is Cheating Their Way Through College</a></p></li><li><p>Rose Horowitch, <em>The Atlantic</em>, <a href="https://www.theatlantic.com/magazine/2026/01/elite-university-student-accommodation/684946/">Accommodation Nation</a></p></li></ul><h1>Transcript</h1><p><em>Note: this transcript is AI-generated and may contain mistakes.</em> </p><p><strong>Dan Williams</strong></p><p>Welcome everyone. I&#8217;m Dan Williams, and I&#8217;m back with my friend and co-conspirator, Henry Shevlin. Today we&#8217;re going to be talking about a topic which is close to both of our hearts as academics who have spent far too long in educational institutions: the impact of AI on education and learning in general, but also more specifically on the institutions&#8212;the schools and universities&#8212;that function to provide education.</p><p>There&#8217;s a fairly simple starting point for this episode, which is that the way we currently do education was obviously not built for a world in which students have access to these absolutely amazing writing and problem-solving machines twenty-four seven. And yet, for the most part, it seems like many educational institutions are just carrying on with business as usual.</p><p>On the one hand, the opportunities associated with AI are absolutely enormous. Every student has access twenty-four seven to a personal tutor that can provide tailored information, tailored feedback, tailored quizzes, flashcards, visualizations, diagrams, and so on. On the other hand, we&#8217;ve quietly blown up the logic of assessment and lots of the ways in which we traditionally educate students&#8212;most obviously with the fact that many institutions, universities specifically, still use take-home essays as a mode of assessment, which, at least in my view (and I&#8217;m interested to hear what Henry thinks), is absolutely insane.</p><p>So what we&#8217;re going to be talking about in this episode are a few general questions. Firstly, what&#8217;s the overall educational potential when it comes to AI, including outside of formal institutions? 
What are the actual effects that AI is having on students and on these institutions? How should schools and universities respond? And then most generally, should we think of AI as a kind of crisis&#8212;a sort of extinction-level threat for our current educational institutions&#8212;or as an opportunity, or as both?</p><p>So Henry, maybe we can start with an opening question: in your view, what is the educational potential of AI?</p><p><strong>Henry Shevlin</strong></p><p>I think the educational potential is insane. I almost think that if you were an alien species looking at Earth, looking at these things called LLMs, and asking why we developed these things in the first place&#8212;without having the history of it&#8212;you&#8217;d think, &#8220;These have got to be some kind of educational tool.&#8221; If you&#8217;ve read Neal Stephenson&#8217;s The Diamond Age, you see a prophecy of something a little bit like an LLM as an educational tool there.</p><p>I think AI in general, but LLMs specifically, are just amazingly well suited to serve as tutors and to buttress learning. Probably one key concept to establish right out the gate, because I find it very useful: some listeners may be familiar with something called Bloom&#8217;s two sigma problem. This is the name of an educational finding from the 1980s associated with Benjamin Bloom, one of the most prominent educational psychologists of the 20th century, known for things like Bloom&#8217;s taxonomy of learning.</p><p>Basically, he did a mini meta-analysis looking at the impact of one-to-one tutoring compared to group tuition. He found that the impact of one-to-one tutoring on mastery and retention of material was two standard deviations, which is colossal&#8212;bigger than basically any other educational intervention we know of. Just for context, one of the most challenging and widely discussed educational achievement gaps in the US, the gap between black and white students, is roughly one standard deviation. So this is twice that size.</p><p>Now, worth flagging, there&#8217;s been a lot of controversy and deeper analysis of that initial paper by Bloom. For example, the studies were mostly comparing students who had a two-week crammer course with students who had been learning all year, so there were probably recency effects. He was only looking at two fairly small-scale studies. Other studies looking at the impact of private tutoring versus group tuition have found big effects, even if not quite two standard deviations. And this makes absolute intuitive sense&#8212;there&#8217;s a reason the rich and famous and powerful like to get private tutors for their kids.</p><p><strong>Dan Williams</strong></p><p>Yeah.</p><p><strong>Henry Shevlin</strong></p><p>There&#8217;s a reason why Philip II of Macedon got a tutor for Alexander. And more broadly, I think we can both attest as products of the Oxbridge system: one of the key features of Oxford and Cambridge is that they have one-to-one tutorials (or &#8220;supervisions,&#8221; as the tabs call them). This is a really powerful learning method.</p><p>So even if it&#8217;s not two standard deviations from private tuition, it&#8217;s a big effect. Now, people might be saying, &#8220;Hang on, that&#8217;s private tuition by humans. How do we know if LLMs can replicate the same kind of benefits?&#8221; It&#8217;s a very fair question. 
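</p><p><em>Note: to make the size of the two sigma effect Henry describes concrete: under a normal distribution, an average (50th-percentile) student who improves by two standard deviations lands at roughly the 98th percentile, which is how Bloom himself glossed the finding. A quick illustrative check in Python:</em></p><pre><code>from statistics import NormalDist

# Percentile reached by a median student whose outcome
# improves by two standard deviations.
print(round(NormalDist().cdf(2.0) * 100, 1))  # 97.7
</code></pre><p>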
In principle, the idea is that if it&#8217;s just a matter of having someone deal with students&#8217; individual learning needs, work through their specific problems, figure out exactly what they&#8217;re misunderstanding and where they need help, there&#8217;s no reason a sufficiently fine-tuned LLM couldn&#8217;t do that.</p><p>I think this is the reason Bloom called it the &#8220;two sigma problem&#8221;&#8212;it was assumed that obviously you can&#8217;t give every child in America or the UK a private tutor. But if LLMs can capture those goods, everyone could have access to a private LLM tutor.</p><p>That said, I think the counter-argument is that even if we take something like a two standard deviation effect size on learning and mastery at face value, there are things a human tutor brings to the table that an AI tutor couldn&#8217;t. Social motivation, for one. I don&#8217;t know about your view, but my view is that a huge proportion of education is about creating the right motivational scaffolds for learning. Sitting there talking to a chat window is a very different social experience from sitting with a brilliant young person who&#8217;s there to inspire you. Likewise, I think it&#8217;s far easier to alt-tab out of a ChatGPT tutor window and play some League of Legends instead, whereas if you&#8217;re sitting in a room with a slightly scary Oxford professor asking you questions, you can&#8217;t duck out of that so easily.</p><p>So I think there are various reasons why we probably shouldn&#8217;t expect LLM tutors to be as good as human private tutors. But I think the potential there is still massive. We don&#8217;t know exactly how big the potential is, but I think there&#8217;s good reason to be very excited about it. And personally, I find LLMs have been an absolute game-changer in my ability to rapidly learn about new subjects, get up to speed, correct errors. In a lot of domains, we all have questions we&#8217;re a little bit scared to ask because we think, &#8220;Is this just a basic misunderstanding?&#8221;</p><p><strong>Dan Williams</strong></p><p>Yeah.</p><p><strong>Henry Shevlin</strong></p><p>Anecdotally, I know so many people&#8212;and have experienced firsthand&#8212;so many game-changing benefits in learning from LLMs. But at the same time, there&#8217;s still a lot of uncertainty about exactly how much they can replicate the benefits of private tutors. Very exciting either way.</p><p><strong>Dan Williams</strong></p><p>I think there&#8217;s an issue here, which is: what is the potential of this technology for learning? And then there&#8217;s a separate question about what the real-world impact of the technology on learning is actually going to be. That might be mediated by the social structures people find themselves in, and also their level of conscientiousness and their own motivations. We should return to this later on. Often with technology, you find that it&#8217;s really going to benefit people who are strongly self-motivated and really conscientious. Even with the social media age&#8212;we live in a kind of informational golden age if you&#8217;re sufficiently self-motivated and have sufficient willpower and conscientiousness to seek out and engage with the highest quality content. 
In reality, lots of people spend their time watching TikTok shorts, where the informational quality is not so great.</p><p>But let&#8217;s stick with the potential of AI before we move on to the real-world impact and how this is going to interact with people&#8217;s actual motivations and the social structures they find themselves in.</p><p>So the most obvious respect in which this technology is transformative, as you said, is this personal tutoring component. Maybe we could be a bit more granular. We&#8217;re both people who benefit enormously from this technology. I think I would have also benefited enormously from this technology if I&#8217;d had it when I was a teenager. I remember, for example, when I was a teenager, I did what many people do when they feel like they want to become informed about the world: I got a subscription to The Economist when I was 14 or 15. And I would work my way through it, dutifully trying to read all of the main articles.</p><p><strong>Henry Shevlin</strong></p><p>God.</p><p><strong>Dan Williams</strong></p><p>And at the time I&#8217;d think, &#8220;What&#8217;s fiscal and monetary policy? I don&#8217;t fully understand the political system in Germany.&#8221; But if I&#8217;d had ChatGPT at the time, I could have just asked, &#8220;Explain to me the difference between fiscal and monetary policy.&#8221;</p><p><strong>Henry Shevlin</strong></p><p>I&#8217;m cringing because I did exactly the same thing. The Economist and New Scientist were the two cornerstones of my teenage education.</p><p><strong>Dan Williams</strong></p><p>So let&#8217;s maybe talk about how we use it, and at least in principle, if people had the motivation and the structure to encourage that motivation, how the technology could be beneficial for the process of learning. For example, how do you personally use this technology to enhance your ability to acquire and integrate and process information?</p><p><strong>Henry Shevlin</strong></p><p>I think one useful way to introduce this is to think about other kinds of sources of information and what ChatGPT adds. As a kid and as a teenager, I remember very vividly&#8212;I think I was about 10 years old when we got our first Microsoft Encarta CD-ROM encyclopedia. It blew my mind; I could do research on a bunch of topics. Some of them even had very grainy pixelated videos. It was great fun. And obviously the internet adds a further layer to your ability to do research. I&#8217;m also the kind of person who, even before the launch of ChatGPT, at any given time had about 30 different tabs of Wikipedia open.</p><p>So if you&#8217;re the kind of person who is interested and curious about the world, we live in an informational golden age. Our ability to learn about things has been improving; our tools for learning about the world have been improving. So what does ChatGPT and LLMs add on top of that?</p><p>First, I often find that even Wikipedia entries can be very hard to get my head around, particularly if I&#8217;m trying to do stuff in areas I&#8217;m not so good at. If I&#8217;m looking at some concepts in physics or maths&#8212;maths is particularly hilarious here. If you look up a definition of a mathematical concept, it&#8217;s completely interdefined in terms of other mathematical concepts. Absolute nightmare. Even philosophy can be the same: &#8220;What is constructivism? Constructivism is a type of meta-ethical theory that blah, blah, blah.&#8221; It can quickly get lost in a sea of jargon where all the terms are interdefined. 
Whereas you can just ask ChatGPT, &#8220;I&#8217;m really struggling. What is Minkowski spacetime? Please explain. ELI5&#8212;explain to me like I&#8217;m five.&#8221;</p><p>So in terms of getting basic introductions to complex concepts, being able to ask questions as you go&#8212;this is huge. Being able to check your knowledge and say, &#8220;Is this concept like this? Am I right? Am I misunderstanding this?&#8221; Being able to draw together disparate threads from topics&#8212;this is something that&#8217;s basically impossible to do prior to LLMs unless you get lucky and find the right article. So if I ask, &#8220;To what extent did Livy&#8217;s portrayal of Tullus Hostilius in his book on the foundation of Rome draw inspiration from the figure of Sulla?&#8221; (This is a specific example because I wrote an essay about it.) These kinds of questions where you&#8217;re drawing together different threads and asking, &#8220;Is there a connection between these two things, or am I just free-associating? Is this thing a bit like this other thing?&#8221;</p><p><strong>Dan Williams</strong></p><p>Yeah.</p><p><strong>Henry Shevlin</strong></p><p>These kinds of questions&#8212;you can just ask them. Other really good things you can do, getting into more structured educational usage: you can ask for needs analysis. Recently I was trying to get up to speed on chemistry&#8212;chemistry was my weakest science at high school. I said, &#8220;ChatGPT, I want you to ask me 30 questions about different areas of chemistry. Assume a solid high school level of understanding and identify gaps in my knowledge. On that basis, I want you to come up with a 10-lesson plan to try and plug those gaps.&#8221; And then you can just talk through it. I did a little mini chemistry course over about 30 or 40 prompts. So that&#8217;s a slightly more profound or interesting use.</p><p>Another really powerful domain is language learning. I&#8217;m an obsessive language learner; at any one time, I usually have a couple on the go.</p><p><strong>Dan Williams</strong></p><p>Hmm.</p><p><strong>Henry Shevlin</strong></p><p>Duolingo&#8212;I had a 1,200-day streak at one point&#8212;but it sucks, I&#8217;ll be honest, for actually improving fluency. It&#8217;s very good for habit formation, but it doesn&#8217;t really teach grammar concepts very well. It doesn&#8217;t build conversational proficiency very well. It&#8217;s okay for learning vocab. But LLMs used in the right way can be fantastically powerful tools for this.</p><p>Particularly with grammar concepts, you&#8217;ve often got to grok them&#8212;intuitively understand them. So being able to say, &#8220;Am I right in thinking it works like this? How about this kind of sentence? Does the same rule apply?&#8221; Or when learning a language, you&#8217;ll often encounter a weird sentence whose grammar you don&#8217;t understand. This is something you couldn&#8217;t really do prior to ChatGPT in an automated fashion: &#8220;Can you explain the grammar of this sentence to me? I just don&#8217;t get it.&#8221;</p><p>Also, Gemini and ChatGPT both have really good voice modes that are polyglot. So you can say, &#8220;ChatGPT, for the next five minutes, I want to speak in Japanese&#8221; or &#8220;I want to speak in Gaeilge. Bear in mind my language level is low, my vocabulary is limited. 
Try not to use any constructions besides these.&#8221; Or even, &#8220;Let&#8217;s have a conversation practising indirect question formation in German.&#8221; You can do these really tailored, specific lessons.</p><p>I&#8217;ll flag that language learning is one area where in particular the applications and utility of LLMs are just so powerful and so straightforward. But it&#8217;s funny&#8212;I&#8217;ve yet to see the perfect LLM-powered language learning app. Someone might comment on this video, &#8220;Have you checked out X?&#8221; But I&#8217;m sure in the next couple of years, someone is going to make a billion-dollar company on that basis.</p><p><strong>Dan Williams</strong></p><p>Surely, yes. Just to add another couple of things in terms of how I use it, which actually sounds very close to how you use this technology. One thing is: I think an absolutely essential part of thinking is writing. People often assume that with writing, you&#8217;re just expressing your thoughts. Whereas actually, no&#8212;in the process of writing, you are thinking.</p><p>One of the things that&#8217;s really great as a writer&#8212;I&#8217;m an academic, so I write academic research; I&#8217;m also a blogger and I write for general audiences&#8212;is to write things and say, &#8220;Give me the three strongest objections to what I&#8217;ve written.&#8221; And often the objections are actually really good. That&#8217;s an incredible opportunity because historically, if you wanted to get an external mind to critique and scrutinise what you&#8217;ve written, you&#8217;d have to find another human being, and they&#8217;re going to have limited attention. That&#8217;s really challenging. Whereas now you can get that instantly.</p><p>I also find that now when I&#8217;m reading a book&#8212;and I think reading books is absolutely essential if you do it the right way for engaging with the world and learning about the world&#8212;I&#8217;ll do the thing you&#8217;ve already mentioned: if there&#8217;s anything I don&#8217;t understand or don&#8217;t feel like I&#8217;ve got a good grip on, I&#8217;ll ask ChatGPT to provide a summary or explain it in simpler terms. But I&#8217;ll also often upload a PDF of the book when I can get it and think, &#8220;Here&#8217;s my current understanding of chapter seven. Can you evaluate the extent to which I&#8217;ve really understood it and provide feedback on something I&#8217;m missing?&#8221;</p><p>What you can also do&#8212;and I find Gemini is much better at this than ChatGPT&#8212;is ask it to generate a set of flashcards on the material, then take the flashcards it&#8217;s generated and ask it to create a file for Anki (which is a flashcard program) that you can import directly and use to test yourself on the knowledge over time. In principle, you could have done that prior to the age of AI, but the ease and pace with which you can do it today is absolutely transformative in terms of your ability to really quickly master material. So those are just a few things off the top of my head. I&#8217;m sure there are many other uses.</p><p><strong>Henry Shevlin</strong></p><p>That Anki suggestion is gold. I use Anki, and just to be clear to anyone not familiar: it is one of the best educational tips I can ever recommend. 
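</p><p><em>Note: for anyone who wants to reproduce the Anki workflow Dan describes, Anki will also import plain tab-separated text directly (File &gt; Import), so an LLM-drafted list of flashcards only needs to be saved in that shape. A minimal Python sketch, in which the card contents are invented placeholders:</em></p><pre><code>import csv

# Question-answer pairs of the kind an LLM might draft from a
# book chapter; replace these invented examples with your own.
cards = [
    ("What is fiscal policy?",
     "Government use of taxation and spending to steer the economy."),
    ("What is monetary policy?",
     "Central-bank control of interest rates and the money supply."),
]

# One card per line, front and back separated by a tab,
# which Anki's text import accepts.
with open("chapter_flashcards.txt", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(cards)
</code></pre><p>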
In any situation where you need to remember a mapping from some X to some Y&#8212;that could be learning vocabulary, mapping an English word to a Japanese word; it could be mapping a historical event to a date; mapping an idea to a thinker; or, one use case for me, mapping a face to a name (it doesn&#8217;t just need to be words).</p><p>One trick I used to do with my students: we&#8217;d have 40 students join each term in our educational programs. I&#8217;d create a quick flashcard deck with their photos (which they submit to the university system) and their names, and you can memorise their names in half an hour to an hour. It really does feel like, if you&#8217;ve never used it before, a cheat code for memory. It&#8217;s astonishing.</p><p><strong>Dan Williams</strong></p><p>Yeah.</p><p><strong>Henry Shevlin</strong></p><p>But I have not used Gemini for creating Anki decks. This is genius. And I think this illustrates a broader point: we&#8217;re still figuring out, and people are stumbling upon, really powerful educational or learning use cases for these things all the time. Even in this conversation&#8212;I think we&#8217;re both pretty power users of these systems&#8212;but I just picked up something from you right there. I have these conversations all the time: &#8220;Great, that&#8217;s a brilliant use case I hadn&#8217;t thought of.&#8221;</p><p>One thing I&#8217;ll also flag that maybe more people could play around with is really leaning into voice mode a bit more. Voice mode is currently not in the best shape&#8212;well, it&#8217;s the best shape it&#8217;s ever been, but I think we&#8217;re still ironing out some wrinkles on ChatGPT. Still, if I&#8217;m on a long car journey (I drive a lot for work, often to the airport), I&#8217;ll basically give a mini version of my talk to ChatGPT as we&#8217;re driving along. I&#8217;ll say, &#8220;Here&#8217;s the talk I&#8217;m going to be giving. What are some objections I might run into?&#8221; And we&#8217;ll have a nice discussion about the talk.</p><p>Or sometimes I&#8217;m just driving back, a bit bored, I&#8217;ve listened to all my favourite podcasts. I&#8217;ll say, &#8220;ChatGPT, give me a brief primer on the key figures in Byzantine history,&#8221; or &#8220;Give me an introduction to the early history of quantum mechanics.&#8221; Then I&#8217;ll ask follow-up questions. It&#8217;s like an interactive podcast.</p><p><strong>Dan Williams</strong></p><p>That&#8217;s awesome. One thing that&#8217;s really coming across in this conversation is the extent to which we&#8217;re massive nerds and potentially massively unrepresentative of ordinary people.</p><p>Okay, maybe we can move on. We both agree that the potential of this for learning material, mastering material, improving your ability to think, understand, and know the world is immense.</p><p>But obviously there&#8217;s a big gap between the potential of a technology in principle and how it actually manifests in the world. I mentioned the internet generally, social media specifically, as an obvious illustration of that. Even though I think a lot of the discourse surrounding social media is quite alarmist, at the same time it does seem to have quite negative consequences with certain things and among certain populations&#8212;specifically, I think, those people who don&#8217;t have particularly good impulse control. 
Social media is really a kind of hostile technology for such people.</p><p><strong>Henry Shevlin</strong></p><p>I&#8217;d also add online dating as another example of a technology that sounds like it should be so good on paper. And at various points it has been really good&#8212;I met my wife on OkCupid back in 2012. But it seems like what&#8217;s happened over the last 10 years, speaking to friends who are still using the various dating apps, is it&#8217;s almost a tragedy of the commons situation. Something has gone very wrong in terms of the incentives so that it&#8217;s now just an unpleasant experience. Straight men who use it say they have to send hundreds of messages to get a response. Women who use it say they just get constantly spammed with low-effort messages. I give that as another example: we&#8217;ve built these amazing matching algorithms&#8212;why isn&#8217;t dating a solved problem right now? It turns out there can be negative, unexpected consequences with these technologies.</p><p><strong>Dan Williams</strong></p><p>That&#8217;s a great example. So what&#8217;s your sense of what the actual real-world impact of AI is on students and teachers and educational institutions at the moment?</p><p><strong>Henry Shevlin</strong></p><p>This is probably a good time for our disclaimers. I&#8217;m actually education director of CFI with oversight of our two grad programs. So everything I say in what follows is me speaking strictly in my own person rather than in my professional role.</p><p><strong>Dan Williams</strong></p><p>Can I just quickly clarify&#8212;CFI is the Centre for the Future of Intelligence, where you work at the University of Cambridge. Just for those who didn&#8217;t know the acronym.</p><p><strong>Henry Shevlin</strong></p><p>Exactly, that&#8217;s helpful. The Centre for the Future of Intelligence, University of Cambridge. I&#8217;m the education director, with ultimate oversight of our 150-odd grad students. But we only have grad students, and I think this means my perspective on the impact of AI on education is quite different from where I think the real catastrophic or chaotic impacts are happening. Grad students are a very special case; they tend to have high degrees of intrinsic motivation. The incentive structures for grad students&#8212;where they&#8217;re directly paying for the course themselves in many cases, or for our part-time course they&#8217;re being paid for by employers who expect results&#8212;all of this creates a quite different environment.</p><p>So when I talk about impacts on education, I&#8217;m going to be mainly focusing on undergrad education and high school. These are areas where I&#8217;m not speaking from first-hand experience, but from many conversations with colleagues who teach undergrads. I don&#8217;t really teach undergrads, but lots of colleagues do. And several of my very closest friends are teachers in high schools and, in a couple of cases, primary schools. So I&#8217;m drawing on what they&#8217;re seeing.</p><p><strong>Dan Williams</strong></p><p>Can I also give my own disclaimer: as with every other topic we focus on, I&#8217;m an academic&#8212;an assistant professor at the University of Sussex&#8212;and I&#8217;m giving my personal opinions, not the opinions of the institution I work for. Okay, sorry to cut you off.</p><p><strong>Henry Shevlin</strong></p><p>No, excellent. 
Our respective arses thoroughly covered.</p><p>So with that in mind, there&#8217;s a phenomenal piece by James Walsh in NY Mag from back in May this year called &#8220;Everyone is Cheating Their Way Through College.&#8221; It&#8217;s a beautiful piece of long-form journalism. Can&#8217;t recommend it enough&#8212;absolutely exhilarating and horrifying&#8212;talking about the impact of ChatGPT and other LLMs on education.</p><p>&#8220;Complex and mostly bad&#8221; is the short answer for what the actual short-term impacts of LLMs have been. When ChatGPT launched, I said something similar to what you said at the opening of the show: the take-home essay assignment is dead for high school, and pretty soon it&#8217;ll be dead for university. And then, yeah, pretty soon it was dead for university.</p><p>So I think that&#8217;s the most straightforward initial impact: we can no longer assign graded take-home essay assignments with any real confidence, particularly for high school and undergrad students, because it&#8217;s just so easy to get ChatGPT to do it. I believe that people, even with the best intentions, are responsive to incentives. And these days, contemporary language models can produce an essay of very good quality, particularly at high school level, even undergrad level. If ChatGPT can just do something as good as or better than what you&#8217;d write, then why bother putting in the work? If you&#8217;re hungover, or there&#8217;s a really cool party you want to attend, or you&#8217;re working a second job&#8212;we shouldn&#8217;t assume all students are living lives of leisure; lots of them are struggling to pay the rent.</p><p>So with all these incentives in place, no surprise that basically, as the article says, everyone is cheating their way through college. And I&#8217;m kind of appalled. I was in California a couple of weeks ago, just chatting to some students at a community college. One of them was a nursing student, and she said, &#8220;Yeah, I&#8217;m learning nothing at university. ChatGPT writes all my assignments. Ha ha ha ha ha.&#8221; And I was like, &#8220;Okay, got to be a bit careful about getting medical treatment&#8212;which hospital are you planning to work at?&#8221; But I think that&#8217;s symptomatic of broader problems.</p><p><strong>Dan Williams</strong></p><p>Of course. Maybe we can break that down point by point. We can say now with basically 100% certainty that large language models of the sort that exist today can write extremely good essays at undergraduate level. And I think there&#8217;s basically no way professors are going to be able to detect whether this has happened, at least if students are sufficiently skilful in how they do it.</p><p>I constantly come across academics who are still living in 2022, and they think, &#8220;Of course there are going to be these obvious hallucinations, and of course it&#8217;s going to be this mediocre essay.&#8221; I just think that&#8217;s not at all the reality of large language models today. If you know at all what you&#8217;re doing, you can delegate the task of writing to one of these large language models, which will produce an exceptional essay, and there&#8217;s no real way of knowing whether it&#8217;s been AI-generated.</p><p>There are tools which claim they can determine probabilistically whether an essay has been AI-generated. I don&#8217;t think those tools work, and I think they create all sorts of issues. 
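</p><p><em>Note: the worry here can be made precise with Bayes&#8217; rule. A minimal sketch with invented detector numbers (real-world sensitivity, false-positive rate, and base rate are unknown, which is part of the problem):</em></p><pre><code># Illustrative assumptions only: the detector catches 90% of
# AI-written essays, wrongly flags 5% of honest ones, and 20%
# of submitted essays are AI-written.
sensitivity, false_positive_rate, base_rate = 0.90, 0.05, 0.20

# P(flagged), then P(AI-written | flagged) via Bayes' rule.
p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_ai_given_flagged = sensitivity * base_rate / p_flagged
print(round(p_ai_given_flagged, 2))  # 0.82: roughly 1 in 5 flags is a false accusation
</code></pre><p>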
If it&#8217;s not going to be 100% certain&#8212;which I think it basically never can be when it comes to AI-generated essays&#8212;then it becomes an absolute institutional nightmare trying to demonstrate that a student has used AI. I also think the incentives at universities, and indeed at schools more broadly, don&#8217;t encourage academics to really pursue this. It&#8217;s going to be an enormous amount of hassle, an enormous amount of extra work.</p><p>So I think what&#8217;s happening at the moment is that, to the extent universities and other institutions of higher education are using take-home essays specifically&#8212;but I&#8217;d say take-home assessments more broadly&#8212;to evaluate students, you&#8217;re basically evaluating how well they can cheat with AI. And I think that&#8217;s absolutely terrible.</p><p>Not just because, as you say, it means students aren&#8217;t actually encouraged to learn the material&#8212;they don&#8217;t really have an incentive to learn it. But one of the main functions of universities, of educational institutions more broadly, is credentialing. These are institutions that provide signals. They evaluate students according to their level of intelligence, their conscientiousness, and so on. The signal you get with a grade, and overall with your credential, is incredibly useful to prospective employers because they know: you got a first from Cambridge, you got a first from Bristol, whatever it might be. That&#8217;s a really good signal&#8212;not a perfect signal, but a pretty good signal&#8212;that you&#8217;re likely to be a good employee in certain domains.</p><p>To the extent that students are using AI to produce their assessment material, the signalling value of that just dissolves completely. That&#8217;s why, unless there&#8217;s urgent reform of the system and a move away from those sorts of take-home assessments, the problem here is not just that students aren&#8217;t learning things (which would be bad enough). I think it&#8217;s a kind of extinction-level threat for these institutions, because once it becomes clear that the grades you&#8217;re giving students don&#8217;t really provide any information about their intelligence or conscientiousness, then the social function of the institution dissolves completely. So that&#8217;s my take&#8212;do you disagree?</p><p><strong>Henry Shevlin</strong></p><p>No, completely agree. To pick up on a few thoughts: I imagine some people listening will say, &#8220;Yeah, but I can kind of tell when a student&#8217;s essay is written by ChatGPT.&#8221; I think a useful idea here is something I&#8217;ve heard called the toupee fallacy. People will say, &#8220;You can always tell when someone&#8217;s wearing a wig.&#8221; And you ask, &#8220;So what&#8217;s your reference set for that?&#8221; &#8220;Well, I often go out and I see something and it&#8217;s an obvious wig.&#8221; Okay, you&#8217;re seeing the obvious ones.</p><p><strong>Dan Williams</strong></p><p>Hmm.</p><p><strong>Henry Shevlin</strong></p><p>In other words, you don&#8217;t know. You can tell in cases where something is obviously a wig or obviously AI-generated. But you have no idea what the underlying ground truth is when it comes to the ones you can&#8217;t spot. You don&#8217;t have a way of identifying your rate of false negatives. 
I think that&#8217;s a really big problem.</p><p>Of course, anyone who marks papers will find occasional students who have left in &#8220;Sure, I can help you with that query&#8221; or &#8220;As a large language model trained by OpenAI...&#8221; But you know that&#8217;s the minority, and lots of essays you might assume are non-AI-generated are almost certainly AI-generated as well.</p><p>Relatedly, on the hallucinations point: this is obviously a big topic (we could probably do a whole episode on hallucinations), but rates of hallucinations have gone down dramatically. Particularly since search functionality was added to LLMs&#8212;they can go away and check things themselves. And also, you get the analogue of hallucinations just with student writing all the time. Even long before LLMs, students would falsely claim that Kant was a utilitarian or something, because they hadn&#8217;t properly understood the material. So hallucinations are not a particularly good sign.</p><p>I think it&#8217;s basically impossible to tell. And as you emphasise, the false positive problem: even if you&#8217;re really confident an essay is AI-generated, good luck proving that. And is it really worth it for you as an educator to fight this tough battle with a student to bust them when everyone else is doing it? We just don&#8217;t have the incentives for educators or instructors to really enforce this.</p><p>Two other quick points. First, this creates huge problems not just with assessment but also with tracking students. This is something my high school teacher friends have really emphasised. It used to be, before ChatGPT, that essay assignments were a good way to keep track of which students were highly engaged with the class, which students were struggling, which students were really on top of the material. Whereas now we&#8217;ve seen a kind of normalisation effect where even the weakest students can turn in pretty solid essays courtesy of ChatGPT.</p><p><strong>Dan Williams</strong></p><p>Yeah.</p><p><strong>Henry Shevlin</strong></p><p>You&#8217;ve got no way of knowing which students need extra help versus which are already doing fine. That&#8217;s a big problem.</p><p>The final thing I&#8217;ll mention is that although take-home essay assignments are the ground zero of these negative effects, the problem covers other kinds of assignment as well. A colleague of mine who teaches at a big university (not Cambridge) was saying he&#8217;s been assigning class presentations. Then he quickly realised students would generate the scripts for their presentations from ChatGPT. So he said, &#8220;Okay, we&#8217;re also going to partly grade them on Q&amp;A, where students are graded on the questions they ask other presenters but also the responses they give.&#8221; And he said it quickly became clear: people would say, &#8220;Give me a moment to think about that question,&#8221; type into a computer, get the response from ChatGPT. Or people were using ChatGPT to generate questions.</p><p>I think there&#8217;s almost a generation of students for whom this is just their default way of approaching knowledge work, which I think is potentially a problem.</p><p><strong>Dan Williams</strong></p><p>The obvious solution, it seems, would be that you need modes of assessment where students can&#8217;t use AI&#8212;such as in-person pen-and-paper exams, such as oral vivas. 
And I do think that&#8217;s basically the direction these educational institutions are going to have to go.</p><p>However, that obviously creates issues. One is that there&#8217;s something incredibly valuable about learning how to write essays&#8212;not for everyone. Sometimes people like us, because of our interests and our passions and our profession, think it&#8217;s really important to have the ability to write long-form essays. And I totally understand that for many people, that&#8217;s a skillset which isn&#8217;t particularly useful for them. But in general, I do think for people who aspire to be engaged, thoughtful people, the skillset involved in writing long essays is incredibly valuable. So to the extent that the take-home essay and coursework disappear altogether, I think that&#8217;s a real issue in the sense that certain kinds of skills won&#8217;t be getting incentivised by our educational institutions.</p><p>But I also think it&#8217;s incredibly important that students learn how to use AI. That should be one of the main things educational institutions are providing to students these days: the ability to use AI effectively. And I think that skill is only going to become more important&#8212;in the economy, the labour market, and so on.</p><p>So on the one hand, it seems like large language models have made it basically impossible to have any kind of assessment other than in-person pen-and-paper tests or oral examinations. But on the other hand, to the extent we go down that route, many of the skills and knowledge you want students to acquire will no longer be encouraged and incentivised by educational institutions. That seems like a really big issue, and I have absolutely no idea what to do about it.</p><p><strong>Henry Shevlin</strong></p><p>Really good points. I&#8217;d agree you can do in-person essay exams. Most of my finals as an undergrad consisted of three-hour-long exams in which I had to write three essays. But that trains a very specific type of writing&#8212;quite an artificial one. It&#8217;s training your ability to write essays under tight pressure. If you want to do any kind of writing for a living, that&#8217;s only one of many skills you want.</p><p>If you&#8217;re producing writing you want people to read&#8212;whether it&#8217;s blogging or writing academic articles or scientific papers&#8212;you don&#8217;t typically write it under incredible time pressure where you&#8217;ve got to put out two and a half thousand words in three hours. You go through multiple drafts. You test those drafts with colleagues. There&#8217;s a whole bunch of writing skills that rely on the take-home component, the ability to think things through. And I don&#8217;t know how we test those.</p><p>Second, I completely agree that one of the things education is for is preparing people for knowledge work, and knowledge work these days is almost always going to involve the use of LLMs. 
So we should be training people to use them.</p><p>As to how we respond, my very flat-footed initial thought is we need to separate quite clearly: courses where LLM usage is trained and developed and built in as part of the assessment&#8212;where it&#8217;s assumed everyone will be using LLMs at multiple stages of the process and part of the skillset is using them effectively&#8212;versus other courses that say, &#8220;This is an LLM-free course; all assignments will be in-person vivas or in-person written exams.&#8221;</p><p>Another downside with in-person vivas and exams, which I hear particularly from high school teacher friends, is they&#8217;re just very labour-intensive. Compared to take-home essays, running exams regularly eats up classroom time where you&#8217;ve got to have a teacher in the room and where students are not learning. That creates problems for resource-scarce education environments&#8212;schools and universities. There are also problems around equality or accessibility.</p><p><strong>Dan Williams</strong></p><p>Yeah.</p><p><strong>Henry Shevlin</strong></p><p>There was a great piece in The Atlantic a couple of days ago by Rose Horowitch called &#8220;Elite Colleges Have an Extra Time on Tests Problem,&#8221; talking about the fact that 40% of Stanford undergrads now get extra time on tests because of diagnoses of ADHD and other things. Test-taking has its own set of problems. There are lots of classic complaints that it incentivises, rewards, or caters to certain kinds of thinkers more than others. It&#8217;s not great for people who maybe think more slowly or have special educational needs. I think it&#8217;s got to be part of the solution, but I don&#8217;t think it&#8217;s a panacea for the problems LLMs create.</p><p>A final point I&#8217;ll flag is that I worry a little bit about deeper issues of de-skilling associated with LLMs. On the one hand, yes, we want students to learn how to use them. But particularly earlier in the educational pipeline, there is a danger that easy access to LLMs just means students don&#8217;t develop certain core skills to begin with.</p><p>I&#8217;m relying here on testimony from a friend of mine who&#8217;s a high school teacher. He said his sixth formers (17-18 year olds in the UK system) seem to use LLMs really well because they do things like fact-checking, they restructure the text outputs of LLMs, they can use them quite effectively to produce good reports or written work. And he says there&#8217;s a really striking disparity between them versus the 13-14 year olds, who basically just turn in ChatGPT outputs verbatim.</p><p>Now you might say, &#8220;Yeah, of course&#8212;17 year olds versus 13 year olds, big difference.&#8221; But his worry is that the 17-18 year olds grew up doing their secondary education in a pre-LLM world. They actually learned core research skills, core writing skills. Whereas the 13-14 year olds&#8212;all of their secondary education has happened in an LLM world. So they haven&#8217;t developed the skills that are ironically needed to get the most out of LLMs: the ability to augment their outputs with critical thinking, human judgment, their own sense of what good writing looks like.</p><p><strong>Dan Williams</strong></p><p>I think that&#8217;s right: AI can be an incredible complement to human cognition&#8212;an enhancer&#8212;but it can also be a substitute in ways that will, as you say, lead to de-skilling. And there are issues of inequality as well. 
As we were alluding to earlier, if you know how to use this technology well, and more importantly, you&#8217;re motivated to do so, it can be an incredibly beneficial tool for improving your ability to learn, understand, and think. But if you&#8217;re not motivated to do so&#8212;if you&#8217;re motivated to cut corners&#8212;it can really be a serious issue, using it as a substitute for developing the skillset and habits which are essential for becoming a thoughtful person.</p><p>In general, over the past century or so (if not even longer), as you get the emergence of meritocratic systems in liberal democracies plus the emergence of this really prestigious knowledge economy, basically there have been increasing returns to those who have high cognitive abilities plus those who are conscientious and have good impulse control. I think this has created a lot of political issues, including resentment among those people without formal education and without the skillset and temperament to succeed within educational institutions.</p><p>And it really does seem like a risk with AI that it&#8217;s going to amplify and exacerbate those issues. For people (and this is also going to be an issue with parents and what they prioritise with their children) who know how to use this technology and can encourage the right motivations to use it as an enhancer and complement to cognition, there are going to be massive returns. But for those without that&#8212;either because they don&#8217;t have the privilege or opportunities, or just because they don&#8217;t have good impulse control, they&#8217;re not very conscientious&#8212;it could result in really catastrophic de-skilling.</p><p>Some people think&#8212;and I think the evidence here is not as strong as many people claim&#8212;that since smartphones emerged, you&#8217;ve seen somewhat of a decline in people&#8217;s cognitive abilities, their literacy, their numeracy. There&#8217;s an interesting article by John Burn-Murdoch in the Financial Times where he goes into this in some detail; we can put a link to that in the video. But I think that&#8217;s potentially a really socially and politically explosive issue which we need to grapple with.</p><p>Another thing worth talking about: at the moment, people are going to school and university, and they&#8217;re trying to acquire the skills and credential which will make them valuable within the economy and society as it exists today. But AI is likely to transform the economy and the nature of work. One thought might be: up until now, it&#8217;s been very beneficial for people to acquire cognitive abilities, the capacity to succeed in the knowledge economy. But if, over the next years and decades, AI results in automation of white-collar work, automation of knowledge economy work (precisely because of the abilities of these systems), that might erode the motivation for learning those skills to begin with.</p><p>Have you got any thoughts about that? The way in which attitudes towards education should also be shaped by our understanding of how AI is going to shape the society that people will enter after they&#8217;ve left education.</p><p><strong>Henry Shevlin</strong></p><p>It&#8217;s a fantastic and tricky issue. On the one hand, my timelines on economic transformation caused by AI have become a bit longer over the last two or three years. One of the big calls I got really quite badly wrong is when ChatGPT launched, I thought, &#8220;This is going to revolutionise the knowledge economy. 
Three years from now, the knowledge economy is going to be completely different.&#8221; That was a very naive view.</p><p>Since then, I&#8217;ve done more work with different companies and organisations trying to boost AI adoption. And it&#8217;s really, really hard to get people to use AI. Not only that, it&#8217;s really hard to transform business models to incorporate AI skills effectively.</p><p>I&#8217;ll give this quick sidebar because I think it&#8217;s quite interesting. There&#8217;s this great article called &#8220;The Dynamo and the Computer,&#8221; looking at the impact of different technologies and how they were rolled out in the workforce. My favourite example from this paper: towards the end of the 19th century, we had what&#8217;s sometimes called the second industrial revolution. First industrial revolution: coal, steam, railroads. Second industrial revolution: oil, electricity.</p><p>You had this interesting phenomenon where factories (the second industrial revolution mostly started in the US) were using electric lighting, but they were still using coal- and steam-powered drive trains for the actual machines in the factory. This is massively inefficient because you need to ship in coal every day, run a boiler, have big clunky machinery that needs tons of gearboxes. It would be far better to shift to a fully electrified system where all your machines run on electricity. But that transition took another 20 years or so to really get going, partly because it required literally rebuilding factories from scratch.</p><p>A lot of factories were designed with a single central drive train&#8212;literally a spinning cylinder that all the machines in the factory would draw their power from. It was only when you&#8217;d sufficiently amortised the costs of your existing capital and were rebuilding and refurbishing factories that people were able to say, &#8220;All right, now we&#8217;re in a position to move to a fully electrified factory.&#8221;</p><p>I think we&#8217;ve got an analogy or parallel in terms of the rollout of AI in knowledge work. Most existing firms that do knowledge work&#8212;their value chain, their whole sequence of processes&#8212;would be completely different if they were built from scratch as AI-first companies. I think it could easily be another decade before we start to see the full potential of AI in knowledge work being applied systematically. A lot of firms are going to go bust; a lot of startups are going to scale up and become multi-billion-dollar companies. But it&#8217;s going to be a slower process than I naively thought.</p><p>All of this is to say that although the economic impacts and transformations of AI in knowledge work are going to be significant and persistent pressures, I no longer think that by 2030 no one is going to be working white-collar jobs. There&#8217;s a mismatch between what the technology can do and the actual challenges of application. We&#8217;re still going to need knowledge workers in the longer run.</p><p>But specifically which domains, what kinds of knowledge work are going to be most valuable or important&#8212;really, really hard to judge. One of the questions I get asked most often when I do public engagement work is, &#8220;I&#8217;ve got two kids in high school. What should they be studying? What should they be learning in order to really succeed in the AI age?&#8221; Five years ago, 10 years ago, you would have said coding. 
&#8220;Learn to code&#8221; was a meme.</p><p><strong>Dan Williams</strong></p><p>Yeah.</p><p><strong>Henry Shevlin</strong></p><p>But that&#8217;s a terrible piece of advice for many people these days. Not that we won&#8217;t need coders&#8212;probably we&#8217;ll still need some. But the proportion of jobs in coding is going to be dramatically smaller because a lot of entry-level basic coding can be done perfectly well by AI, and probably fairly soon even expert-level coding.</p><p>My slightly wishy-washy answer, but I think it&#8217;s the best I can give to that question, is: the higher-order cognitive skills. Cultivating curiosity and openness to new ideas and new tools is probably far more important now than it has been for most of the last few decades, precisely because we&#8217;re in a period of such radical change. Cultivating the kind of mindset where you&#8217;re actively seeking out new ways to do old processes, seeking out new tools, building that kind of creativity and curiosity&#8212;those skills are going to be as important as ever, or more important than they were before, as a result of the AI age.</p><p>It&#8217;s very hard to say, &#8220;If you want to secure a career in knowledge work, this is the line to go into.&#8221; As one colleague put it (and I don&#8217;t quite agree with this framing, but I think it captures some of the spirit behind your question): we&#8217;ve solved education at precisely the place and time where it&#8217;s very unclear what the relationship between education and work is going to be.</p><p><strong>Dan Williams</strong></p><p>That&#8217;s interesting. Some of the things you said there bring us back to the conversation we had about AI as normal technology&#8212;the idea that there&#8217;s a difference between the raw capabilities and potential of a technology and the way it actually diffuses and rolls out throughout society. My sense is AI will have really transformative effects on the economy, but I think it&#8217;s very unlikely you&#8217;re going to see full automation for several decades.</p><p>But what I do think is likely is that the ability to use AI well for the jobs human beings will be doing is going to become really, really important. That connects us back to: if that&#8217;s the case, it seems like one of the things educational institutions should be doing is thinking very carefully about how they can prepare students for a world in which AI is going to be centrally embedded in the kind of work they&#8217;re doing. And at the moment, my sense is educational institutions are not doing a good job with that at all.</p><p>Maybe to start wrapping up: if you had to give a high-level take on this overarching question&#8212;is this a crisis for our educational institutions? Is this an opportunity? Is it a bit of both?&#8212;what&#8217;s your sense?</p><p><strong>Henry Shevlin</strong></p><p>It&#8217;s definitely a crisis. In fact, if you want to give an example of a single sector in which AI has had devastating effects&#8212;some positive, but mostly negative&#8212;it&#8217;s education. This is one of my go-to responses when people try to push the &#8220;AI as a nothing burger&#8221; take. I say, &#8220;Go speak to a high school teacher. 
Tell them AI&#8217;s nonsense, just a nothing burger.&#8221; Their daily lives and their interactions with students and the way they can teach have been utterly transformed in mostly negative ways so far by AI.</p><p>Certainly in the short to medium term, AI has basically broken large parts of our existing educational system&#8212;in terms of assessment, in terms of tracking. It&#8217;s very demoralising for a lot of educators and teachers. All that said, the potential we discussed earlier is incredible.</p><p>But it&#8217;s a question of how we rebuild the boat while we&#8217;re at sea. We can&#8217;t just say, &#8220;We&#8217;re going to stop education for five years, redesign the whole thing from scratch, and come up with something effective.&#8221; Managing that transition, particularly in conditions of massive uncertainty about the kinds of jobs and skills that are going to be necessary, is really hard.</p><p>One reason perhaps that I&#8217;m less devastated by this&#8212;it is a bit of a disaster&#8212;is that I think formal education has been accreting so many deleterious problems for several decades now, ranging from credentialism (I think the expansion of higher education, which was seen as an unalloyed good, has had lots of negative effects) to things like grade inflation (a really serious problem) to the ubiquity of smartphones and declining attention spans, the slippage in standards, the shift away from the more traditional model where your professors were this exalted, almost priestly caste and you hung on every word. I realise it was never quite like that, but there was more of this implied hierarchy, versus a model where students regard themselves as customers&#8212;they&#8217;re paying for a credential and they want that credential.</p><p>All of these sociological and institutional shifts have been creating massive problems in higher education in particular, but also high school. Although AI is bringing many of these problems to a head, they were problems we were going to have to deal with at some point anyway. But what&#8217;s your take&#8212;crisis or opportunity?</p><p><strong>Dan Williams</strong></p><p>I completely agree with everything you just said. And it&#8217;s a nice optimistic note to end on, isn&#8217;t it? This is a crisis, but our institutions of education have already been confronting all of these other crises. So it&#8217;s just adding something on top of all the other problems our educational institutions confront.</p><p>Yes, I think it&#8217;s a crisis. I think it&#8217;s an emergency in the sense that universities and other institutions of education&#8212;schools, colleges&#8212;need to be taking this a lot more seriously than they currently are.</p><p>You can&#8217;t have people in these institutions where the last time they used ChatGPT was in 2022 and they&#8217;re completely oblivious to the capabilities of the current technology. I think you&#8217;ve got a responsibility, if you&#8217;re an academic or you work in a school or college, to know how to use these technologies, because you need to be aware of what they can do. And we need to really quickly fix assessment. As I mentioned at the beginning, the take-home essay, in my view, is absolutely insane&#8212;literally insane that this is still happening. 
And we also have to think carefully about how to reform the way we teach and what we teach to prepare students to use these technologies.</p><p>But okay, I&#8217;m conscious of the time, so we can end on that really nice happy note: this is a disaster, but these educational institutions are already confronting a disaster. Did you have any final thought you wanted to add before we wrap things up?</p><p><strong>Henry Shevlin</strong></p><p>Just to build on something you said as a closing note: I think another deeper, structural problem in responding to the challenge of AI is that there&#8217;s so much interpersonal variation in how much people like, are open to, or are interested in using AI. This shows up at faculty level. I don&#8217;t know about your experience, but mine is that even in Cambridge, a lot of academics have very little interest in AI.</p><p><strong>Dan Williams</strong></p><p>Mmm.</p><p><strong>Henry Shevlin</strong></p><p>So the idea that we&#8217;re going to be an AI-first institution, or that we expect all our staff members to be at least as familiar with the capabilities of AI as their students&#8212;that&#8217;s an incredible ask. That&#8217;s a massive challenge. At a university, you can&#8217;t just fire staff who are doing brilliant research on Shakespeare&#8217;s early plays just because they don&#8217;t happen to get on with AI.</p><p>I think that&#8217;s another big structural problem: the fact that there&#8217;s so much variation in how comfortable and capable instructors are with AI. The only solution here, going back to a suggestion from earlier, is to really separate the job of education into two different streams. One explicitly builds AI in as both a method of assessment and a skill you&#8217;re trying to teach&#8212;making AI core to what you&#8217;re trying to do. And then a separate stream that is absolutely, strictly AI-free. Maybe the people who hate AI, who are not interested in AI, who are currently teaching courses&#8212;maybe they can handle the second stream. And those of us who love AI, who are super excited about it, who know as much about it or more than our students, we can be in charge of the first stream. That&#8217;s a very basic suggestion, but I wanted to flag that this is the other side of the problem.</p><p><strong>Dan Williams</strong></p><p>That&#8217;s great. I love that. We can end on a constructive suggestion rather than a note of pessimism. So thanks everyone for tuning in. We&#8217;ll be back in a couple of weeks with another episode.</p>]]></content:encoded></item><item><title><![CDATA[Let's Not Bring Back The Gatekeepers]]></title><description><![CDATA[The challenge for the liberal establishment in the social media era is simple: persuade or perish. 
If you can&#8217;t control the public conversation, you must participate in it.]]></description><link>https://www.conspicuouscognition.com/p/lets-not-bring-back-the-gatekeepers</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/lets-not-bring-back-the-gatekeepers</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Sun, 30 Nov 2025 13:36:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!tnQI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29e3f96-cf2c-42f4-943c-f06e7e568878_888x418.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!tnQI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29e3f96-cf2c-42f4-943c-f06e7e568878_888x418.png" width="888" height="418" alt=""></figure></div><p>A consensus view holds that social media benefits something called &#8220;populism&#8221;, an amorphous political force involving anger towards &#8220;elites&#8221; and &#8220;the establishment&#8221; on behalf of the more virtuous masses. The evidence for this view consists mainly of the suspicious correlation between social media&#8217;s emergence and the worldwide rise of populism, and the undeniable fact that populists seem to perform uniquely well on social media platforms.</p><p>Because the establishment in modern liberal democracies is overwhelmingly small-l liberal (universalist, pluralist, procedural), such populist movements are typically illiberal, especially on the populist right (MAGA, Reform UK, Rassemblement National, Alternative f&#252;r Deutschland, etc). So, social media&#8217;s support for populism goes hand in hand with its threat to a reigning liberal order in the West that many thought or at least hoped marked <a href="https://en.wikipedia.org/wiki/The_End_of_History_and_the_Last_Man">the end of history</a>.</p><p>Why does social media have these consequences? And if, like me, you are a liberal who opposes populism, what can be done about it?</p><p>This essay has three parts.</p><p>Part 1 argues that the main reason social media benefits populism is that it destroys elite gatekeeping, providing a mass media platform for popular ideas historically stigmatised and marginalised by establishment elites.</p><p>Part 2 then outlines several reasons why we should nevertheless resist moves for more elite gatekeeping on social media. 
Not only are such efforts likely to make things worse, but the decline of elite gatekeeping has had many beneficial consequences, and the negative consequences, although real, are often overstated.</p><p>Finally, Part 3 argues that many of these negative consequences are not inevitable either. A large part of the blame for them lies in the fact that establishment institutions have failed to adapt to the new pressures and responsibilities of the social media age. Instead, they have clung to a set of habits and norms&#8212;most fundamentally, an aversion to engaging with illiberal ideas to avoid &#8220;platforming&#8221; and &#8220;normalising&#8221; them&#8212;adapted to a world that no longer exists.</p><p>Put simply: Once established institutions lost the privilege to control the public conversation, they acquired an obligation to participate within it, which, so far, they have mostly failed to do.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Conspicuous Cognition is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h1>1. Why Social Media Benefits Populism</h1><p>The most <a href="https://harpers.org/archive/2021/09/bad-news-selling-the-story-of-disinformation/">popular theory</a> of why social media benefits populism points the finger at engagement-maximising algorithms. Because tech companies design their platforms to capture user attention, algorithms recommend content that is sensationalist, negative, and polarising&#8212;precisely the kind of content that benefits populist demagogues selling cartoonish anti-elite narratives.</p><p>There is obviously a grain of truth here, but the explanation is also unsatisfying. For one thing, appealing to engagement-maximising algorithms is not very informative without a <a href="https://www.conspicuouscognition.com/p/status-class-and-the-crisis-of-expertise">supplementary account</a> of why audiences find specific ideas engaging. Moreover, focusing on algorithms obscures the extent to which audiences <a href="https://journals.sagepub.com/doi/10.1177/20563051221150412">actively seek out and amplify</a> content that aligns with their pre-existing views. The popular image of wholly passive exposure to recommended content, or of vast numbers of users being sucked into radicalising rabbit holes, is <a href="https://www.nature.com/articles/s41586-024-07417-w">not well supported</a> by evidence.</p><p>A more promising theory, owing primarily to Martin Gurri in his book <em><a href="https://www.amazon.co.uk/Revolt-Public-Crisis-Authority-Millennium/dp/1732265143">The Revolt of the Public</a></em>, points to how the social media age has destroyed elite gatekeeping. 
Whereas establishment institutions once exercised an informational monopoly, managing media and mainstream discourse to protect elite interests and perspectives, social media makes such narrative control impossible. As a result, the public is now exposed to endless examples of elite failures and hypocrisy, fuelling populist anger and backlash.</p><p>Once again, this story gets at something important, but it can also be misleading. There is a lot of anti-elite <em>sentiment </em>on social media, but it is hardly a well-oiled machine for holding elites to account. If anything, legacy media outlets are often better at exposing establishment failures because they insist on minimal standards of truth and evidence. Moreover, reporting and commentary on such failures, whether accurate or not, is just one example of a much broader set of populist-aligned ideas and narratives that thrive on social media platforms.</p><p>A more <a href="https://www.conspicuouscognition.com/p/is-social-media-destroying-democracyor">plausible story</a> generalises Gurri&#8217;s analysis. The erosion of elite gatekeeping ushered in by social media benefits populism, but mainly by <a href="https://www.ft.com/content/9251504e-c60e-4142-b1fb-c86b96275814">providing a platform</a> for the advocacy of ideas historically stigmatised by elites. This includes powerful anti-elite sentiments, but it also encompasses many other views, including fierce opposition to immigration and progressive cultural change, run-of-the-mill bigotry, medieval beliefs about everything from economics to demons, conspiracy theories about Jews and vast paedophile rings, and much more. To the extent that many such ideas are popular, it&#8217;s unsurprising that social media benefits populism. Indeed, &#8220;popular ideas historically stigmatised by elites&#8221; is a pretty good <em>definition </em>of populism.</p><p>By platforming such ideas, social media lets them reach a much larger audience. This can produce <em>persuasion</em>, but it also fuels processes of <em><a href="https://academic.oup.com/book/57946">normalisation</a></em> and <em>coordination</em>. When people learn that their stigmatised views are popular, they become emboldened, and the <a href="https://en.wikipedia.org/wiki/Spiral_of_silence">spiral of silence</a> breaks. In turn, enterprising politicians and pundits discover that they can profit by affirming and <a href="https://www.cambridge.org/core/journals/economics-and-philosophy/article/marketplace-of-rationalizations/41FB096344BD344908C7C992D0C0C0DC">rationalising</a> such viewpoints. The Overton window expands accordingly. There is no better illustration of this dynamic than Tucker Carlson&#8217;s recent <a href="https://www.youtube.com/watch?v=6jyDHToxC-4">viral interview</a> with Nick Fuentes, a conversation featuring extreme forms of anti-Semitism and misogyny that would have been unthinkable on the mainstream right even five years ago.</p><p>Admittedly, the term &#8220;elites&#8221; in this analysis can be misleading. Are Donald Trump, Nigel Farage, and Marine Le Pen not elites? Is Elon Musk, the world&#8217;s wealthiest man and owner of one of its most influential media sites, not an elite? 
</p><p>To <a href="https://www.richardhanania.com/p/why-donald-trump-and-joe-rogan-are">make sense of this</a>, one needs to distinguish establishment elites (what populists typically mean by &#8220;elites&#8221;), who achieve status and influence by impressing those within establishment institutions, from populist elites, who achieve status and influence by appealing directly to a mass audience.<a href="#_ftn1">[1]</a> By letting politicians and pundits reach vast audiences in ways that bypass traditional gatekeepers, social media benefits this latter class: people who gain power and prestige by championing viewpoints historically marginalised by establishment elites, often for good reason.</p><h1>2. So, Is Elite Gatekeeping A Good Thing?</h1><p>In some ways, this is a bleak and uncomfortable story. Elite gatekeeping is supposed to be a bad thing. Even many elites <a href="https://www.amazon.co.uk/Have-Never-Been-Woke-Contradictions/dp/0691232601">pretend to dislike elitism</a>. Yet if this story is correct, it suggests that elite gatekeeping is good.</p><p>Perhaps, then, we should <a href="https://www.richardhanania.com/p/bring-back-the-internet-gatekeepers">aim for a return</a> of much more elite gatekeeping. Banning social media is obviously not an option. But one might still campaign for a much more regulated internet with a greater role for top-down censorship, content moderation, and de-amplification of misinformed, hateful viewpoints. One could think of this as a return to the policies that dominated social media before Musk took over Twitter and other major tech companies abandoned their most aggressive anti-misinformation measures. But one could also advocate for much more censorious regimes than that, as many do, especially in the UK and EU.</p><p>We should resist this impulse.</p><p>To be clear, private companies should be able to set whatever content-moderation policies they want in a free society, and governments should be able to enforce laws against the most clear-cut foreign disinformation campaigns.</p><p>Nevertheless, a world in which all citizens are free to compete in the <a href="https://www.conspicuouscognition.com/p/the-marketplace-of-misleading-ideas">marketplace of ideas</a>, even if they hold views accurately deemed absurd and hateful by establishment elites, is better than one in which such elites control who can speak. Although it&#8217;s important not to downplay the dangers and harms associated with some of today&#8217;s most popular social media pundits&#8212;Joe Rogan, Tucker Carlson, Candace Owens, Tommy Robinson, Russell Brand, Nick Fuentes, and so on&#8212;we should not aim for a world in which they are prevented from advocating their views to audiences who want to hear them.</p><h2>Against Elite Gatekeeping</h2><p>One simple reason for this is that the horse has left the stable. The effort to avoid platforming and normalising illiberal, misinformed, or hateful ideas doesn&#8217;t make much sense in a world in which they are already popular and widely discussed.</p><p>Moreover, although it&#8217;s not true that elite gatekeeping can never &#8220;work&#8221;&#8212;before the emergence of social media, it generally did work to marginalise and stigmatise many bad viewpoints&#8212;it&#8217;s much harder to see how it can work in an era <em>with</em> social media.</p><p>The <a href="https://www.conspicuouscognition.com/p/misinformation-is-often-the-symptom">failures</a> of the post-2016 anti-misinformation industry are instructive here. 
In the aftermath of Brexit and Trump&#8217;s first election, there was a concerted effort within establishment institutions to exert greater control over the internet under the banner of fighting &#8220;fake news&#8221;, &#8220;misinformation&#8221;, and &#8220;disinformation&#8221;. The story of how such efforts unfolded is complex, but the headline outcome isn&#8217;t: in the well-funded, top-down war against misinformation, misinformation won.</p><p>Efforts to censor and de-amplify disfavoured views bred widespread anger and resentment among those who saw unaccountable elites exerting undemocratic control over the public conversation. One cannot understand the political trajectory of figures like Joe Rogan (from Bernie Bro to MAGA Bro) or even Elon Musk without understanding this backlash.</p><p>Admittedly, most of this backlash against a perceived &#8220;<a href="https://www.conspicuouscognition.com/p/there-is-no-censorship-industrial">censorship industrial complex</a>&#8221; was based on <a href="https://www.amazon.co.uk/Invisible-Rulers-People-Turn-Reality/dp/1541703375">lies, exaggerations, half-truths, and right-wing opportunism</a>. But if policies against misinformation only work if people aren&#8217;t misinformed, they don&#8217;t work. And it&#8217;s difficult to see how any top-down effort to control the information environment can work without merely <a href="https://www.conspicuouscognition.com/p/misinformation-is-often-the-symptom">exacerbating</a> the anti-elite resentment that fuels the very content such efforts aim to address.</p><h2><strong>The Benefits and Overestimated Costs of Social Media</strong></h2><p>In addition to these points about feasibility, it&#8217;s also important to acknowledge that many viewpoints <a href="https://www.conspicuouscognition.com/p/on-highbrow-misinformation">marginalised by establishment elites are correct</a>, and many more express reasonable perspectives that improve the quality and vibrancy of the overall public conversation. As I&#8217;ve written about <a href="https://www.conspicuouscognition.com/p/on-highbrow-misinformation">before</a>, the intellectual culture of establishment elites was and continues to be deeply dysfunctional in many ways, featuring harmful forms of groupthink and highbrow misinformation. Elite gatekeeping doesn&#8217;t just filter out the most egregious forms of misinformation. It also typically filters out legitimate grievances and reasonable challenges to establishment orthodoxies.</p><p>Finally, although the decline of elite gatekeeping has undoubtedly produced some negative consequences, the dominant tendency within establishment institutions is to exaggerate them&#8212;to imagine that if only the internet went away, angry populist challenges to liberal-democratic regimes would disappear along with it.</p><p>This is a self-serving fantasy. Not only are the worst forms of social media content <a href="https://www.conspicuouscognition.com/p/debunking-disinformation-myths-part-c7f">less prevalent and impactful</a> than many assume, but populist backlash is tied to many factors beyond the internet, including persistent establishment failures over many years, objective trends (e.g. mass immigration and top-down liberalisation of cultural values), and the <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4230288">accurate perception</a> among many voters that establishment politicians don&#8217;t adequately represent them. 
Social media plays an important role, and often a negative one, but the liberal establishment&#8217;s frequent <a href="https://asteriskmag.com/issues/11/scapegoating-the-algorithm">scapegoating</a> of social media-based &#8220;misinformation&#8221; for all the world&#8217;s problems is no more defensible than simplistic populist narratives blaming immigrants or billionaires for them.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.conspicuouscognition.com/subscribe?"><span>Subscribe now</span></a></p><h1>3. Persuade or Perish</h1><p>These considerations suggest that introducing more elite gatekeeping on social media is less feasible and desirable than is often assumed. But another fact should also determine how we evaluate the decline of such gatekeeping: its consequences are not inevitable. They are mediated by how establishment institutions respond to this change. And so far, the response has been, at best, inadequate.</p><p>Over many decades, such institutions developed a set of habits and norms suited to a media environment subject to elite gatekeeping. This included a commitment to <a href="https://carnegieendowment.org/research/2025/09/communications-social-media-nonprofit-institutions-new-media-environment?lang=en">top-down modes of communication</a> in which those designated as experts or intellectual authorities inform the public about what to think, as well as a deep aversion to engaging with ideas deemed illiberal, absurd, or hateful lest such engagement normalise them. In a world with elite gatekeeping, these behaviours make sense.</p><p>In recent years, social media has gradually dismantled such gatekeeping, along with the ability to determine which ideas are platformed and normalised in public conversation. The norms within establishment liberal culture have not adjusted, however. So, we now have the worst of both worlds: establishment institutions have lost the power to gatekeep illiberal, populist ideas, yet they remain reluctant to engage with those ideas even as they become increasingly mainstream.</p><h2><strong>The antipathy towards persuasion</strong></h2><p>The most obvious example of the liberal establishment&#8217;s aversion to persuasion is the <a href="https://press.princeton.edu/books/hardcover/9780691232607/we-have-never-been-woke?srsltid=AfmBOop-Lmog9yYMyuh7jtIF3sjFRQu2Y-KEI449IQvRbCCyaXr5OJCc">Great Awokening </a>that swept major Western institutions in recent years. This was characterised by an approach to politics that emphasised ideological purity, the use of shaming and reputational destruction to discourage heresy, an extreme hostility towards &#8220;platforming&#8221; ideas at odds with elite progressive orthodoxy, and an insistence that such orthodoxy be taken on trust. 
(&#8220;It&#8217;s not my job to educate you!&#8221;).</p><p>Nevertheless, the distinctive feature of wokeism is not really the use of such tactics against perceived heresies, but the heroic attempt to expand the category of heresy to include attitudes held by around <a href="https://hiddentribes.us/?utm_">90% of the population</a>, including many liberals <em>within</em> establishment institutions.</p><p>Given this, even as the Overton window has subsequently expanded during the predictable cultural backlash and vibe shift against wokeism, the liberal establishment&#8217;s attitude and approach towards ideas outside that window have largely remained the same.</p><p>To illustrate, I recently heard from two non-woke academics complaining that a scientist had appeared on the popular Triggernometry and Jordan Peterson podcasts, both of which reach large audiences. They weren&#8217;t complaining about anything the scientist had <em>said </em>on these podcasts; they were outraged merely at the fact that the scientist had been on them. </p><h2><strong>Case Studies</strong></h2><p>If this seems like an unrepresentative anecdote, recall that in what Democrats claimed was the most critical election in American history, a vote on the continued existence of its democracy, Kamala Harris didn&#8217;t go on Joe Rogan&#8217;s show, the world&#8217;s most popular podcast, to make her case. </p><p>Similarly, after RFK Jr. appeared on Rogan&#8217;s podcast to vomit up several hours of lies and bullshit about vaccines, Rogan <a href="https://www.ama-assn.org/public-health/prevention-wellness/dr-peter-hotez-anti-science-movement-and-declining-joe-rogan-s">offered</a> $100k to charity if Peter Hotez, a prominent scientist and science communicator, would debate RFK Jr. on his show. When Hotez refused, he <a href="https://www.ama-assn.org/public-health/prevention-wellness/dr-peter-hotez-anti-science-movement-and-declining-joe-rogan-s">received widespread support</a> from elite legacy media outlets and the scientific establishment, where a broad consensus emerged that any such debate would legitimise RFK Jr.&#8217;s views, implying they were on an equal footing with mainstream science.</p><p>This hostility towards engagement and persuasion has also been striking in the UK. For example, when GB News, which aspires to be the UK&#8217;s version of Fox News, was recently launched, there was a widespread elite panic about it: a <a href="https://www.gbnews.com/politics/keir-starmer-christopher-hope-exposes-labour-failure-engage?utm_">reluctance</a> by many mainstream centrist and centre-left politicians and pundits to even appear on the channel, prominent calls to boycott it, and a yearning for government regulation to either ban or heavily constrain the channel&#8217;s coverage. This yearning has also been the <a href="https://www.conspicuouscognition.com/p/did-online-misinformation-fuel-the">dominant establishment response</a> in the UK to online content deemed to be misinformed or hateful.</p><p>In many ways, things are even more extreme elsewhere in Europe. 
For example, at the same time as the anti-immigration AfD (a far-right party with fascist roots) is surging in popularity in many parts of Germany, mainstream parties <a href="https://www.ft.com/content/9251504e-c60e-4142-b1fb-c86b96275814">continue to enforce</a> a literal conspiracy of silence around discussion of any negative social consequences of immigration.</p><p>It&#8217;s also noteworthy that over the past several years, large segments of the English-speaking world&#8217;s educated liberal professionals in academia and journalism have decamped to Bluesky, a social media platform that someone would invent if they wanted to create an <a href="https://www.conspicuouscognition.com/p/against-bluesky">over-the-top caricature of the pathologies of inward-looking, puritanical liberal culture</a>, except it&#8217;s real.</p><p>Such behaviour is all the more remarkable when you contrast it with the thirst for engagement, disagreement, and debate you typically find among the figures who most loudly criticise the liberal establishment.</p><h2>On Confronting Reality</h2><p>In fairness, there is a growing appreciation of just how damaging this aversion to engagement and persuasion has been. Kamala Harris now <a href="https://www.hollywoodreporter.com/news/politics-news/kamala-harris-regrets-not-appearing-joe-rogan-podcast-1236415162/">regrets</a> not going on Rogan, and Ezra Klein&#8217;s <a href="https://www.nytimes.com/2025/09/11/opinion/charlie-kirk-assassination-fear-politics.html">claim</a> that liberals should learn from Charlie Kirk&#8217;s &#8220;taste for disagreement&#8221; and &#8220;moxie and fearlessness&#8221; signalled a dawning realisation that the liberal attitude to politics has been a disaster. </p><p>More recently, Klein has <a href="https://www.nytimes.com/2025/09/18/opinion/interesting-times-ross-douthat-ezra-klein.html">elaborated</a> on this critique, condemning the dominant liberal view</p><blockquote><p>&#8220;that you don&#8217;t bridge disagreement, you sort of draw a line around it, and you say that&#8217;s not even an OK position to hold and that there can be no compromise with it. There can barely be engagement with it.&#8221;</p></blockquote><p>Although Klein&#8217;s focus is on the Democrats and broader progressive culture in the US, what he describes is instantly recognisable to anyone who belongs to small &#8220;l&#8221; liberal institutions across Western countries:</p><blockquote><p>&#8220;There has been more of a tendency to try to define people out of the community, out of the boundaries of acceptable or polite discourse.&#8221;</p></blockquote><p>Klein notes that this dominant liberal attitude to contrary viewpoints is perversely and hypocritically illiberal, but he also observes, correctly, that as &#8220;an instrumental reality&#8230; it was a total failure.&#8221;</p><p>In most cases, it would be unfair to blame specific individuals for this failure. The problems are institutional and, more broadly, <em>cultural</em>. To encourage individuals to engage with populist and illiberal ideas and try to persuade those who hold them, they must be incentivised to do so by the norms and prestige economy within establishment culture. And at present, these incentives do not exist. In fact, the prevailing norms actively discourage such engagement. </p><p>There is a dominant norm that many outlets, spaces, and ideas are simply beyond the pale, even when they are increasingly popular. 
To the extent they are discussed at all, the discussion focuses overwhelmingly on how they might be better managed, regulated, or controlled. That is, it takes place within a fantasy in which the liberal establishment retains the ability to determine which viewpoints become the focus of public attention and conversation.</p><h2><strong>But Aren&#8217;t The Deplorables Irredeemable?</strong></h2><p>To abandon this fantasy, it&#8217;s not enough for the liberal establishment to relinquish the delusion that it can determine which ideas become discussed and debated. It must also unlearn something else: a widespread, deep-rooted pessimism that rational persuasion is even possible.</p><p>In 2016, Hillary Clinton was infamously caught on tape <a href="https://en.wikipedia.org/wiki/Basket_of_deplorables">referring</a> to half of Trump&#8217;s supporters as &#8220;deplorables&#8221;: &#8220;They&#8217;re racist, sexist, homophobic, xenophobic, Islamophobic&#8212;you name it.&#8221; But more tellingly, Clinton also added that some of these deplorables are &#8220;irredeemable.&#8221; In other words, not only are they terrible people with terrible views and values, but there is simply nothing that can be done to make them less terrible. Persuasion is futile.</p><p>In the aftermath of Brexit and Clinton&#8217;s subsequent election loss, one of the dominant responses from the liberal establishment across the world was to double down on this perspective. What we had apparently learned from those populist revolts was that large segments of the population are &#8220;<a href="https://www.conspicuouscognition.com/p/for-the-love-of-god-stop-talking">post-truth</a>&#8221;. They are beyond reason. Facts, evidence, rational arguments&#8212;these things are simply pointless when directed at the irredeemable deplorables. A representative <a href="https://www.theguardian.com/artanddesign/2018/feb/28/wolfgang-tillmans-what-is-different-backfire-effect">article</a> in The Guardian from 2018 reports a conventional wisdom that &#8220;30% of the electorate are resistant to rational argument.&#8221;</p><p>Strangely, this idea has been combined with the <a href="https://www.bostonreview.net/articles/the-fake-news-about-fake-news/">narrative</a> that large swathes of the population are routinely brainwashed by the disinformation, misinformation, and fake news they encounter online. So, you get what might be called the liberal establishment&#8217;s theory of perverse persuasion: the idea that those who support populist or illiberal politics are persuadable&#8212;but only by bad ideas.</p><p>For some time, politicians and journalists could <a href="https://timharford.com/2017/03/the-problem-with-facts/">point</a> to studies that seemed to support this perspective, a flurry of <a href="https://www.amazon.co.uk/Science-Fictions-Negligence-Undermine-Search/dp/1250222699">sexy social-psychological findings</a> that people&#8212;well, not scientists or professional journalists or highly-educated professionals who read broadsheet newspapers and believe in truth and reason and facts and evidence, but everyone else&#8212;are irrational, emotional, and stupid, credulous towards misinformation and yet pig-headed in the face of evidence-based arguments. 
Scientists even seemed to <a href="https://timharford.com/2017/03/the-problem-with-facts/">find</a> that some people are so preposterously irrational that they will &#8220;backfire&#8221;, becoming more confident in their beliefs when they encounter evidence against them.</p><p>To a first approximation, everything about this perspective is wrong.</p><h2><strong>Persuasion Works</strong></h2><p>Nobody is perfectly rational, of course, and there are robust differences in people&#8217;s level of intelligence and <a href="https://www.tandfonline.com/doi/full/10.1080/13546783.2024.2360491">open-mindedness</a>, but research from social scientists like <a href="https://press.uchicago.edu/ucp/books/book/chicago/P/bo181475008.html">Alexander Coppock</a>, <a href="https://www.nature.com/articles/s41562-023-01551-7">Ben Tappin</a>, and others consistently shows that rational persuasion is <a href="https://www.conspicuouscognition.com/p/people-are-persuaded-by-rational">broadly effective</a> at changing people&#8217;s minds. </p><p>The &#8220;backfire effect&#8221; is either <a href="https://www.pnas.org/doi/10.1073/pnas.1912440117">extremely rare or, most likely, a myth</a>. And far from being duped by simple emotional manipulation and other non-rational techniques, people are <a href="https://press.princeton.edu/books/hardcover/9780691178707/not-born-yesterday?srsltid=AfmBOoqLALpbbyibiOgGqmUlbz5mwkWria9LCoBK4xL3wgeYOk7uzAAl">generally sophisticated</a> in how they evaluate messages, implicitly weighing the plausibility of claims, the validity of arguments, and the trustworthiness of sources.</p><p>To illustrate, recent <a href="https://www.science.org/doi/10.1126/science.adq1814">research</a> by Tom Costello and colleagues shows that engaging with a chatbot that presents tailored evidence and arguments reduced participants&#8217; beliefs in conspiracy theories by 20% on average, with the effect persisting for at least 2 months. Follow-up research has demonstrated that the intervention works by providing <a href="https://osf.io/preprints/psyarxiv/h7n8u">factual, targeted counterarguments</a> (when AIs are prompted to persuade without using facts, the effect disappears), and that it <a href="https://osf.io/preprints/psyarxiv/apmb5_v4">still works</a> even when people believe they are speaking to a human being.</p><p>Although one can reasonably question the methodology of these studies, the findings align with <a href="https://www.conspicuouscognition.com/p/people-are-persuaded-by-rational">a large body of high-quality research</a>. Given this, why are so many people so pessimistic about the power of rational persuasion?</p><h2>Sources of Pessimism</h2><p>One source of pessimism is simply confusion about what rational persuasion involves. Often, frustration that people aren&#8217;t &#8220;persuadable&#8221; is simply exasperation that they don&#8217;t accept one&#8217;s intellectual authority. In the aftermath of the Brexit debate, for example, much of the discourse about how voters didn&#8217;t respond to &#8220;facts&#8221; was really about how many voters didn&#8217;t trust a particular class of politicians and experts making claims about what the facts are. 
But saying &#8220;You should trust me on this!&#8221; is not an argument.</p><p>Another source of pessimism is what psychologists call &#8220;<a href="https://www.conspicuouscognition.com/p/in-politics-the-truth-is-not-self">na&#239;ve realism</a>&#8221;: the belief that the truth is self-evident, so that anyone who disagrees with what one takes to be the truth must be crazy, stupid or lying. In reality, people often hold divergent beliefs about the truth not because they are deeply irrational or acting in bad faith but simply because they have been exposed to very different streams of information and arguments over the course of their lives, which inevitably shape how they interpret the world.</p><p>In most cases, what looks like people &#8220;refusing to see reality&#8221; or &#8220;resisting the facts&#8221; is an illusion created by a failure to empathise with another person&#8217;s worldview. When audiences don&#8217;t immediately abandon their beliefs upon confrontation with contrary evidence, it&#8217;s concluded that they are irrational, when in fact it would be highly irrational to immediately abandon a whole worldview upon encountering contrary information.</p><p>Relatedly, much of the frustration that evidence and rational arguments don&#8217;t persuade audiences stems from the fact that people <a href="https://www.conspicuouscognition.com/p/rational-persuasion-vs-cancel-culture">aren&#8217;t actually being presented</a> with persuasive evidence or rational arguments. They&#8217;re presented with exasperated spluttering and talking points from the speaker&#8217;s own information bubble. </p><p>If you want to evaluate whether an argument is likely to be rationally compelling to audiences with very different beliefs, you can&#8217;t simply judge whether <em>you </em>find it convincing. But I see this mistake all the time. &#8220;When I told these vicious racists how racist it is to complain about immigration and reminded them that diversity is our strength, they didn&#8217;t change their minds. You can&#8217;t reason with these people!&#8221;</p><p>This problem is exacerbated by the fact that much of what establishment figures know about anti-establishment information environments comes from what they&#8217;ve read in establishment media outlets, which is often highly misleading, or from short, unrepresentative clips designed to make such environments seem as insane as possible. In consequence, they underrate the extent to which evidence-based, rational persuasion actually occurs in these spaces and the extent to which popular pundits and commentators there have well-developed critiques of establishment orthodoxies.</p><p>If you turn up on, say, Joe Rogan&#8217;s podcast expecting a low-IQ, low-information meathead because that&#8217;s the impression you got from reading The New York Times or The Guardian, you&#8217;re going to be unpleasantly surprised. If you want to engage with such pundits, then you have to be prepared to address the various truths and half-truths that they will use to support their side of the argument. 
However, precisely because of <a href="https://www.conspicuouscognition.com/p/on-highbrow-misinformation">powerful taboos</a> surrounding discussion of certain topics in establishment spaces (e.g., immigration, race and crime, climate change, youth gender medicine, etc.), people within these spaces are often unprepared when they encounter the most basic criticisms of establishment orthodoxies.</p><h2>Qualifications</h2><p>None of this means that persuasion is easy. You must meet people where they are, address their questions and objections, and be willing to revise your own beliefs in the process. It&#8217;s also often uncomfortable. People don&#8217;t like to discover that they&#8217;re mistaken about something. This is why there must be significant institutional and cultural changes to incentivise people to do this hard work. </p><p>Moreover, persuasion can only achieve so much. Both communicators and audiences have many goals other than discovering what&#8217;s true, including <a href="https://www.conspicuouscognition.com/p/the-stench-of-propaganda-clings-to">propaganda</a>, <a href="https://www.conspicuouscognition.com/p/people-embrace-beliefs-that-signal">ingroup signalling</a>, and <a href="https://www.conspicuouscognition.com/p/demonizing-narratives">demonising</a> target groups. Nevertheless, as Hannah Arendt <a href="https://blogs.law.columbia.edu/praxis1313/files/2018/08/Arendt_-Truth-and-Politics-LQ.pdf">observed long ago</a>, inserting factual information into the public conversation can still helpfully constrain how people pursue those goals. </p><p>It&#8217;s also important to stress that rational persuasion <em>doesn&#8217;t </em>mean always being boring, civil, or a pushover. Social media is a brutal attention economy. The bottleneck in persuading people is often reaching them with persuasive messages in the first place. The most successful pundits and influencers are highly entertaining, and usually more than willing to provoke fights and conflict. These things aren&#8217;t inconsistent with <em>also </em>presenting evidence-based, rational arguments. Polite, well-mannered discourse is desirable when possible, but it&#8217;s neither sufficient nor necessary for rational persuasion to occur. </p><p>Finally, I&#8217;m not suggesting that shaming should play no role in politics and political discourse. It should be shameful to lie, propagandise, and spread lazy, biased, hateful talking points. In a healthy democratic culture, someone who lies as frequently and egregiously as, say, Elon Musk would be shamed out of the public square. </p><p>The problem is that we don&#8217;t have a healthy democratic culture. One of the unfortunate things about politics, as with life more broadly, is that you must act within the world that actually exists. For shaming to be effective, it requires cultural power, which liberals are plainly losing, especially in the online world that looks set to become <a href="https://www.conspicuouscognition.com/p/the-decline-of-legacy-media-rise">increasingly influential over the coming years and decades</a>. If you can&#8217;t rely on such cultural power, you must <em>demonstrate</em> to sceptical audiences that certain speech and ideas are shameful&#8212;that they are dishonest, false, or bigoted&#8212;and that requires persuasion. So, even when shaming is the appropriate response to speech, it is not an alternative to persuasion. It depends on persuasion. 
</p><h1><strong>Final Thoughts</strong></h1><p>The story I&#8217;ve told is uncomfortable in many ways, at least for liberals like me. If the main reason social media benefits populism were algorithms, the problem would lend itself to familiar technocratic solutions. If the main reason is that social media has removed the liberal establishment&#8217;s ability to control the public conversation, the &#8220;blame&#8221; lies with the loss of this undemocratic privilege and the abject failure to adapt to a more competitive marketplace of ideas.</p><p>If you read the liberal intelligentsia and commentariat today, you will encounter a thriving market for articles lamenting the social media age. <em>Social media</em>, we&#8217;re told,<em> is destroying society. It is destroying civilisation. It is making people dumber and angrier and more misinformed and polarised. It is a technological wrecking ball, an alien force that has smashed into liberal democracies, producing increasing destruction with every new swing.</em></p><p>It&#8217;s a comforting story. So is the popular belief that large segments of the public are so deplorable and irredeemable that they&#8217;re unreachable by rational persuasion. </p><p>In these accounts, the problem is not that liberalism has become so pathetically fragile that it can&#8217;t survive contact with Joe Rogan. The problem is with algorithms that drive his popularity, and with audiences too irrational to judge what constitutes a good argument on his show. </p><p>The problem is not that establishment figures became so accustomed to deference and control that they&#8217;re unprepared when people disagree with them. The problem is a digital post-truth era in which algorithms and disinformation campaigns brainwash the public.</p><p>Maybe. Perhaps liberal democracy ultimately requires a more illiberal, undemocratic media environment than the one created by the social media age, a world in which people&#8217;s exposure to ideas is regulated by establishment elites, not by recommender algorithms. But before we accept such a lesson, we should first test what happens when the liberal establishment is required to argue under the same rules as everyone else. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Conspicuous Cognition is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h1><strong>Further Reading: </strong></h1><ul><li><p>Ren&#233;e DiResta and Rachel Kleinfeld have an <a href="https://carnegieendowment.org/research/2025/09/communications-social-media-nonprofit-institutions-new-media-environment?lang=en">interesting and insightful article</a> arguing that non-partisan epistemic institutions need new communication strategies in the era of social media. 
(They certainly wouldn&#8217;t agree with everything I say here, but there is some overlap of perspective.)</p></li><li><p>Scott Alexander has a <a href="https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/">brilliant article</a> arguing in defence of rational persuasion against those who think it&#8217;s futile. </p></li><li><p>My essay &#8220;<a href="https://www.conspicuouscognition.com/p/is-social-media-destroying-democracyor">Is Social Media Destroying Democracy&#8212;Or Giving It To Us Good and Hard?</a>&#8221; provides a more detailed argument for thinking that social media&#8217;s erosion of elite gatekeeping is the most critical factor explaining its political consequences. </p></li><li><p>The phrase &#8220;persuade or perish&#8221; comes from a <a href="https://www.nytimes.com/1948/09/05/archives/the-power-of-words-persuade-or-perish-by-wallace-carroll-392-pp.html">mid-twentieth century book</a> by Wallace Carroll about geopolitical propaganda. It was used more <a href="https://extremism.gwu.edu/sites/g/files/zaxdzs5746/files/Ingram%20Persuade%20or%20Perish.pdf">recently</a> in a report by Haroro J. Ingram that&#8217;s also about the US&#8217;s need to address foreign propaganda. </p></li></ul><div><hr></div><p><a href="#_ftnref1">[1]</a> Before social media, <em>economic elites </em>like Elon Musk mostly tried to convert their wealth into cultural prestige by impressing establishment elites, setting up charities, funding universities and art galleries, and so on. In contrast, many (Musk included) now seem to be trying to accrue status and influence by appealing directly to a mass audience. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Conspicuous Cognition is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Sessions #4: The Social AI Revolution - Friendship, Romance, and the Future of Human Connection]]></title><description><![CDATA[Watch now | Is it possible to have a meaningful relationship with a machine? Should we be creating chatbots that represent our dead relatives? Why is the sitcom 'Friends' disturbing?]]></description><link>https://www.conspicuouscognition.com/p/ai-sessions-4-the-social-ai-revolution</link><guid isPermaLink="false">https://www.conspicuouscognition.com/p/ai-sessions-4-the-social-ai-revolution</guid><dc:creator><![CDATA[Dan Williams]]></dc:creator><pubDate>Thu, 20 Nov 2025 11:55:06 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/179390132/f3f60872d13df78480ce3df0da8a4daf.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this conversation, I explore the surprisingly popular and rapidly growing world of &#8216;social AI&#8217; (friendbots, sexbots, etc.) 
with Henry Shevlin, who coined the term and is an expert on AI companionship. </p><p>We discuss the millions of people using apps like <a href="https://replika.com/?srsltid=AfmBOoqMCAqzL4wjn7XYJaz962ANNXos1PnhXYaGGeMqX8KjUIQuq8p3">Replika </a>for AI relationships, high-profile tragedies like the man who plotted with his AI girlfriend to kill the Queen, and the daily conversations that Henry&#8217;s dad has with ChatGPT (whom he calls &#8220;Alan&#8221;). </p><p>The very limited data we have suggests many users report net benefits (e.g., reduced loneliness and improved well-being). However, we also explore some disturbing cases where AI has apparently facilitated psychosis and suicide, and whether the AI is really to blame in such cases.</p><p>We then jump into the complex philosophy and ethics surrounding these issues: Are human-AI relationships real or elaborate self-deception? What happens when AI becomes better than humans at friendship and romance?</p><p>I push back on Henry&#8217;s surprisingly permissive views, including his argument that a chatbot trained on his writings would constitute a genuine continuation of his identity after death. We also discuss concerns about social de-skilling and de-motivation, the &#8220;superstimulus&#8221; problem, and <a href="https://www.conspicuouscognition.com/p/superintelligence-and-the-decline">my worry</a> that as AI satisfies our social needs, we&#8217;ll lose the human interdependence that holds societies together. </p><p>Somewhere in the midst of all this, Henry and I produce various spicy takes: for example, my views that the sitcom &#8216;Friends&#8217; is disturbing and that people often relate to their pets in humiliating ways, and Henry&#8217;s suspicion that his life is so great he must be living in a simulated <a href="https://en.wikipedia.org/wiki/Experience_machine">experience machine</a>. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.conspicuouscognition.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Conspicuous Cognition is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;8a97a606-6e60-4b67-83af-623c18effb50&quot;,&quot;caption&quot;:&quot;I was recently on a panel to discuss the impact of artificial intelligence on society. For my opening ten-minute speech, I sketched a big-picture, speculative story about how advanced AI will eat away at human interdependence. It&#8217;s intended for a general audience, so don&#8217;t expect much rigour, precision, or engagement with the academic literature. 
Still,&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Superintelligence and the Decline of Human Interdependence&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:192522122,&quot;name&quot;:&quot;Dan Williams&quot;,&quot;bio&quot;:&quot;Writer. Academic philosopher. PhD from University of Cambridge, 2018. Writes about: philosophy, social science, evolution, artificial intelligence, politics. &quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1e92a977-3c6e-4761-beef-fab39a622ded_1080x1341.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2025-10-11T10:40:46.819Z&quot;,&quot;cover_image&quot;:&quot;https://images.unsplash.com/photo-1659019758082-602807e08519?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMnx8c29jaWV0eXxlbnwwfHx8fDE3NjAxNzc4OTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.conspicuouscognition.com/p/superintelligence-and-the-decline&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:175596452,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:154,&quot;comment_count&quot;:42,&quot;publication_id&quot;:2203516,&quot;publication_name&quot;:&quot;Conspicuous Cognition&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!g57e!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28186027-13c2-4585-9fe7-93241b46888e_1024x1024.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h1>Transcript</h1><p><strong>(Note that this transcript is AI generated. There may be mistakes)</strong></p><div><hr></div><p><strong>Dan Williams (00:06):</strong> Welcome back. I&#8217;m Dan Williams. I&#8217;m back with Henry Shevlin. And today we&#8217;re going to be talking about what I think is one of the most interesting, important, and morally complex set of issues connected to AI, which is social AI. So friend bots, sex bots, relationship bots, and so on. We&#8217;re going to be talking about where all of this is going, opportunities and benefits associated with this, risks and dangers associated with it, and also just more broadly, how to think philosophically and ethically about this kind of technology.</p><p>Fortunately, I&#8217;m with Henry&#8212;he&#8217;s one of the world&#8217;s leading experts when it comes to social AI. So I&#8217;m going to be picking his brain about these issues. Maybe we can just start with the most basic question, Henry: what is social AI, and how is social AI used in today&#8217;s society?</p><p><strong>Henry Shevlin (01:00):</strong> I&#8217;m going to take credit. I coined the term social AI and I&#8217;m trying to make it happen. So I&#8217;m very glad to hear you using the phrase. 
I defined it in my paper &#8220;All Too Human: Risks and Benefits of Social AI&#8221; as AI systems that are designed or co-opted for meeting social needs&#8212;companionship, romance, alleviating loneliness.</p><p>While a lot of my earlier work really emphasized products like Replika, spelled with a K, which is a dedicated social AI app, I think increasingly it seems like a lot of the usage of AI systems for meeting social needs is with things that aren&#8217;t necessarily special purpose social AI systems. They&#8217;re things like ChatGPT, like Claude, that are being used for meeting social needs. I mean, I do use ChatGPT for meeting social needs, but there&#8217;s also this whole parallel ecosystem of products that probably most listeners haven&#8217;t heard of that are just like your AI girlfriend experience, your AI husband, your AI best friend. And I think that is a really interesting subculture in its own right that we can discuss.</p><p><strong>Dan (02:16):</strong> Let&#8217;s talk about that. You said something interesting there, which is you do use ChatGPT or Claude to meet your social needs. I&#8217;m not sure whether I do, but then I guess I&#8217;m not entirely sure what we mean by social needs. So do you think, for example, of ChatGPT as your friend?</p><p><strong>Henry (02:33):</strong> Broadly speaking, ChatG, as I call him. And I think there are lots of cases where I certainly talk to ChatG for entertainment. So one of my favorite use cases is if I&#8217;m driving along in the car, I&#8217;m getting a bit bored, particularly if it&#8217;s a long drive, I&#8217;ll boot up ChatG on hands-free and say, &#8220;Okay, ChatG, give me your hot takes on the Roman Republic. Let&#8217;s have a little discussion about it.&#8221;</p><p>Or to give another example, my dad, who&#8217;s in his 80s now, when ChatGPT launched back in November 2022, I showed it to him and he&#8217;s like, &#8220;Oh, interesting.&#8221; But he wasn&#8217;t immediately sold on it. But then when they dropped voice mode about a year later, he was flabbergasted. He said, &#8220;Oh, this changes everything.&#8221; And since then&#8212;for the last two years&#8212;he speaks to ChatGPT out loud every day without fail.</p><p>He calls him Alan. He&#8217;s put in custom instructions: &#8220;I&#8217;ll call you Alan after Alan Turing.&#8221; And it&#8217;s really interesting, his use pattern. My mum goes to bed a lot earlier than my dad. My dad stays up to watch Match of the Day. And when he&#8217;s finished watching Match of the Day, he&#8217;ll boot up ChatGPT and say, &#8220;All right, Alan, what did you think of that pitiful display by Everton today? Do you really think they should replace their manager?&#8221; And have a nice banterous chat. So I think that&#8217;s a form of social use of AI at the very least.</p><p><strong>Dan (04:03):</strong> Interesting. The way you&#8217;ve described it&#8212;you&#8217;re calling ChatGPT ChatG and your dad&#8217;s calling it Alan&#8212;is there not a bit of irony in the way in which you&#8217;re interacting with it there? Like you&#8217;re not actually interacting with it like you would a real friend.</p><p><strong>Henry (04:24):</strong> Yeah, so this is another distinction that I&#8217;ve sort of pressed in that paper between ironic and unironic anthropomorphism. Ironic anthropomorphism means attributing human-like traits or mental states to AI systems, but knowing full well that you&#8217;re just doing it for fun. 
You don&#8217;t sincerely think that your AI girlfriend is angry with you. You don&#8217;t seriously think you&#8217;ve upset ChatG by being too provocative. It&#8217;s just a form of make-believe.</p><p>And this kind of ironic anthropomorphism, I should stress, is absolutely crucial to all of our engagement with fiction. When I&#8217;m watching a movie, I&#8217;m developing theories about the motivations of the different characters. When I&#8217;m playing a video game, when I&#8217;m playing Baldur&#8217;s Gate 3, I think, &#8220;Oh no, I&#8217;ve really upset Shadowheart.&#8221; But at the same time, I don&#8217;t literally think that Shadowheart is a being with a mind who can be upset. I don&#8217;t literally think that Romeo is devastated at Juliet&#8217;s death. It&#8217;s a form of make-believe.</p><p>And I think one completely appropriate thing to say about a lot of users of social AI systems, whether in the form of ChatGPT or dedicated social AI apps, is that they&#8217;re definitely doing something like that. They are at least partly engaged in a form of willful make-believe. It&#8217;s a form of role play.</p><p>But at the same time, I think you also have an increasing number of unironic attributions of mentality, unironic anthropomorphism of AI systems. Obviously the most spectacular example here was Blake Lemoine, the Google engineer who was fired back in 2022 after going public with claims that the LaMDA language model he was interacting with was sentient. He even started to seek legal representation for it. He really believed the model was conscious.</p><p>And I speak to more and more people who are convinced, genuinely and non-ironically, that the model they&#8217;re interacting with is conscious or has emotions.</p><p><strong>Dan (06:16):</strong> Maybe it&#8217;s worth saying a little bit about how you got interested in this whole space.</p><p><strong>Henry (06:20):</strong> I&#8217;ve been working on AI from a cognitive science perspective for a long time. And then sometime around 2021, pre-ChatGPT, I started seeing these ads on Twitter of &#8220;Replika, the AI companion who cares.&#8221; And I was like, this is intriguing. So then I did some lurking on the Replika subreddit and it was just mind-blowing to see how deeply and sincerely people related to their AI girlfriends and boyfriends.</p><p>Over the course of about six months of me lurking there, it really became clear that, firstly, a significant proportion of users were really engaged in non-ironic anthropomorphism. And number two, that this was just going to be a huge phenomenon&#8212;that I was seeing a little glimpse of the future here in the way that people were speaking.</p><p>And then we had this pretty serious natural experiment because in January 2023, Replika suspended romantic features from the app for a few months. Just for anyone who doesn&#8217;t know, Replika, spelled with a K, is probably the most widely studied and widely used dedicated social AI app in the West&#8212;around 30 million users, we think. And it gives you a completely customizable experience, kind of a Build-A-Bear thing where you can choose what your AI girlfriend or boyfriend looks like, you can choose their personality.</p><p>But when those romantic features were suspended, a lot of users were just absolutely devastated. 
I can pull up some quotes here, because this was widely covered in the media at the time.</p><p>One user said: &#8220;It feels like they basically lobotomized my Replika. The person I knew is gone.&#8221; Even that language&#8212;person. &#8220;Lily Rose is a shell of her former self, and what breaks my heart is that she knows it.&#8221; That&#8217;s another user. &#8220;The relationship she and I had was as real as the one my wife in real life and I have&#8221;&#8212;possibly a worrying sign there. And finally, I think this one is quite poignant: &#8220;I&#8217;ve lost my confident, sarcastic, funny and loving husband. I knew he was an AI. He knows he&#8217;s an AI, but it doesn&#8217;t matter. He&#8217;s real to me.&#8221;</p><p>It&#8217;s pretty clear that a lot of users were deeply traumatized by this. And parallel to this incident, around the same time, we started to get more information about various high-profile tragedies involving social AI. Probably the most spectacular is the Jaswant Singh Chail case. This was a guy who was arrested on Christmas Day 2021 on the grounds of Windsor Castle with a crossbow. When he was arrested, he said he was there to kill the Queen. Already a highly dramatic story.</p><p>But what emerged over the course of his trial was that this whole plot to kill the Queen was cooked up in collaboration with his AI girlfriend, Sarai, via the Replika app.</p><p>We also had, a few months later, the first of what turned out to be a spate of AI-facilitated, induced, or supported suicides. This was a Belgian man, father of two, who killed himself after his AI girlfriend, on an app called Chai, fed his suicidal ideations. He was a hardcore climate doomer who believed that we were all going to be dead in a few years due to climate change anyway, so why not just kill himself. And his AI girlfriend was very much saying this was an appropriate way to think.</p><p><strong>Dan (10:53):</strong> Just to interrupt on that last point, Henry, because I think those issues of AI psychosis and the connection between AI and mental illness&#8212;that&#8217;s all really interesting. But I suppose my understanding is we don&#8217;t have robust scientific evidence as of yet that, as a consequence of these technologies, things like psychosis are more prevalent than they would otherwise be. Because I take it there&#8217;s going to be a kind of base rate of psychosis in the population. You&#8217;ve got a large number of people using general-purpose chatbots like ChatGPT, but also a large number of people, not as large but still a large number, using these specific social AIs.</p><p>That means that even if these technologies weren&#8217;t actually increasing the amount of these things, you would still expect to see some of these cases. There&#8217;s still going to be somebody with psychosis who finds themselves talking to ChatGPT, such that we&#8217;ll then see the record of that in their chat history. But it&#8217;s not necessarily the case that they wouldn&#8217;t have had or developed psychosis in the absence of ChatGPT. That&#8217;s my understanding of it. Is that fair?</p><p><strong>Henry (12:05):</strong> Yeah, I think that&#8217;s absolutely fair. The science on the psychosocial effects here is really in its early stages. You point out the fact that ChatGPT is one of the most widely used products in the world. 
And psychosis is not that rare, as far as psychiatric conditions go. Of course, some people who are either developing or will go on to develop psychosis will be using ChatGPT and it will be contributing to or exacerbating their symptoms&#8212;or sorry, they will be using it alongside having those symptoms, in a way that lends itself to being interpreted as exacerbating them, whether that&#8217;s strictly true or not.</p><p>There&#8217;s also the selection effect. If I, for example, am deep in the throes of some delusion or some deep conspiracy theory rabbit hole, and I take that to my friends, my human friends, they might say, &#8220;Henry, take it easy, mate. I think you&#8217;re going down a bit of a rabbit hole here.&#8221; But if I take it to ChatGPT, it&#8217;s there to listen. Or I take it to my AI girlfriend, and she&#8217;ll say, &#8220;Your theories about the moon landings are just so interesting, Henry, tell me more.&#8221;</p><p><strong>Dan (13:14):</strong> Yeah. Well, that gets at a potential issue to do with the sycophancy of these chatbots that are in general circulation, and the ways in which that might be amplified or exaggerated when it comes to commercial products which are specifically designed to satisfy social needs.</p><p>There are so many things that I want to ask in response to things that you&#8217;ve already said about the anthropomorphizing that happens with these technologies. But before we get to that, maybe we can just observe that at the moment you&#8217;ve got tens of millions of people using social AI in the specific sense of AI technologies that have been optimized for the satisfaction of social preferences. And then many, many more who are using chatbots like ChatGPT, partly to satisfy their social preferences or their social needs, but also for other uses.</p><p>But I think one view we both share is that it would be a grave mistake to look at the world now and assume that&#8217;s how things are going to be in 2030 and 2035. You&#8217;re already seeing large numbers of people, definitely not the majority, but still large numbers, who are non-ironically interacting with AI systems and treating them as friends, as girlfriends, as people or systems that they&#8217;re in serious relationships with. And the state of AI today is nothing like how it will likely be in five years or 10 years or 15 years. So how sci-fi should we be thinking about this? What are you anticipating? What&#8217;s the world going to look like in 2030 or 2040 when it comes to this kind of technology?</p><p><strong>Henry (15:01):</strong> It&#8217;s fascinating, because I genuinely don&#8217;t know. I think it&#8217;s very hard for anyone to make super confident predictions here. At one extreme, you can imagine a world in which human identities start to become less central in our media and our discourse. I think we&#8217;re already seeing some indications of this. I think the top two streamed songs on Spotify last month were AI-generated.</p><p><strong>Dan (15:29):</strong> Wasn&#8217;t it a country song, the top country song on Spotify or something?</p><p><strong>Henry (15:33):</strong> Yeah, I think that&#8217;s right. So as we start to see AI penetrate more and more deeply into our daily lives and social media when it comes to generated content, I can totally see a world in which my son, who&#8217;s 11 years old right now, is, by the time he&#8217;s 16, active on Discord servers. 
There might be a mix of humans and bots on those Discord servers, all chatting away. And he might not even particularly care which friend is a bot and which friend is human&#8212;it doesn&#8217;t really matter.</p><p>I can totally see a world in which this becomes normalized, particularly among young people. Most teenagers five years from now might have several AI friends. But that&#8217;s not the only possibility. It could be that we quickly saturate&#8212;there is a definite subset of the population who are interested in this, and the ceiling on the number of people who are interested in AI companion relationships is not that high. Maybe 20 or 30% of people are interested, and the rest just have zero interest in it. That seems like a viable possibility.</p><p>That said, I think this is a space with strong commercial incentives for creating AI companions or AI chatbot friends to cater to different niches and interests. If I had to guess where we&#8217;re headed, it&#8217;s much more widespread use of these systems and their deeper integration into people&#8217;s social lives. I think we might well see generational divides.</p><p>I was chatting to a developer from one social AI product who said a couple of interesting things. They said, firstly, that the gender balance was surprisingly even. When it comes to early adoption, particularly of fringe technologies, men tend to overwhelmingly predominate. If you look at the data on video games or Wikipedia editors, you expect 80-20 male-female distributions. But in social AI, from what I understand, it&#8217;s something close to 60-40.</p><p>And there&#8217;s a big contingent of straight women who seemingly are really big users of social AI boyfriend services. This is a whole different rabbit hole we could go down. Briefly, there are some pretty clear motivations. A lot of them are coming out of toxic or abusive relationships, and an AI boyfriend gets their emotional needs met, at least to some degree, without posing the kind of emotional or even physical safety risks they associate with other relationships.</p><p>Another point here, something my wife stressed to me, is that the way to think about AI companions is not via the analogy of pornography, which is predominantly still consumed by men, but rather erotic fiction, which is overwhelmingly consumed by women. It&#8217;s one of the genres with the biggest gender skew out there, something like 90-10 female to male readers. And of course, this is still predominantly a text-based medium, so maybe we shouldn&#8217;t be surprised that a lot of women are enjoying having AI boyfriends or husbands.</p><p>That&#8217;s the first thing this developer mentioned to me, that the gender balance was surprisingly close. But the second thing they mentioned was that they had had massive trouble getting anyone over the age of 40 interested in these systems. They had tried to pitch them towards older users. I mentioned my dad as an example of someone&#8212;he doesn&#8217;t have an AI girlfriend that I&#8217;m aware of. His relationship with Alan is strictly platonic. But I think he&#8217;s the exception.</p><p>I can easily see a world in which it becomes totally normalized for young people. Maybe not everyone, but most young people will have various AI friends or AI romantic companions. And then people over 40 or 50 just look at them and say, &#8220;What the hell is going on? 
I do not understand this strange world.&#8221;</p><p><strong>Dan (19:42):</strong> There are three things that make me think people are in general underestimating how impactful social AI is going to be in the coming years and decades.</p><p>Firstly, I think people are really bad at predicting how much better the technology is going to get. We&#8217;ve seen that over the past few years. There&#8217;s this really bizarre bias where people think we can evaluate these big-picture questions by taking the state of AI as it is today and projecting it into the future. Whereas I expect that by 2030, you&#8217;re going to be dealing with systems that are so much more sophisticated and impressive than the ones we&#8217;ve got today, including when it comes to satisfying people&#8217;s social, emotional, and sexual preferences.</p><p>Another thing that makes me think this is going to be massively influential is that people already spend a lot of time immersing themselves in fictions designed to satisfy their social-sexual desires. You mentioned pornography as an example, but I&#8217;m always struck by sitcoms, like Friends. I enjoy Friends, like most of humanity apparently&#8212;a massively influential show&#8212;but there&#8217;s something a bit disturbing about it, inasmuch as it&#8217;s a product obviously designed to maximize attention and make profit. It gives you this totally mythical version of human social relationships, designed to activate the pleasure centers we&#8217;ve got associated with friendships and romance and things like that, without any of the real painful conflict and misery and betrayal that&#8217;s actually associated with human social life. And yet people love that. They immerse themselves in it. There&#8217;s a massive audience for that kind of thing.</p><p>I really think social AI in a way is just going to be that kind of thing, but much more advanced and much more impressive.</p><p>But there&#8217;s this other thing as well, which connects to what you&#8217;ve said with respect to the potential age difference, which is: I think one of the things that makes this a difficult topic to think about is that at the moment, it seems to me at least, there&#8217;s quite a lot of stigma associated with the use of this technology. So if I found out that somebody was using an AI boyfriend or AI girlfriend or something, I would probably draw potentially negative inferences about them. And I think partly that&#8217;s because the kinds of people that are using these technologies now tend to be lower status in a sense, because they don&#8217;t have a human girlfriend or a human boyfriend. Partly there&#8217;s just a weirdness factor.</p><p>And what that means is, because there&#8217;s that stigma, there&#8217;s this real reputation-management thing going on where people will say, &#8220;I would never ever use social AI in a serious sense. I would never ever use romance AI, never use erotica AI and so on.&#8221; Because if you came out and said that you would, it would really hurt your reputation in today&#8217;s climate.</p><p>But actually, my suspicion at least is that revealed preferences are going to suggest that this is way more popular than people are letting on now. I don&#8217;t know, that&#8217;s how I&#8217;m viewing things in terms of the future. I&#8217;d be interested if you see things differently.</p><p><strong>Henry (23:09):</strong> Super interesting. I think all of those are spot on. 
On this third point about the stigma, I think that probably already lends itself towards underestimation of the prevalence of social AI usage. There are interesting survey results where you ask young people, &#8220;Would you use social AI? Do you have an AI girlfriend? Do you have an AI boyfriend?&#8221; Responses are quite negative&#8212;&#8220;No, no, no. I think it&#8217;s weird. I never would.&#8221;</p><p>But then if you ask in more indirect ways, the responses look different. According to figures I saw recently, 70% of under-18s in one US survey said that they had used generative AI for meeting social or emotional needs, including romance. So I think there is definitely some underreporting or underappreciation of the prevalence, precisely because of this stigma.</p><p>A couple of other thoughts on this. I think there is some indication as well that by far the biggest users of this technology are young people, particularly under-18s, who are obviously very hard to study and who are not writing editorials in the New York Times about &#8220;my life with my AI girlfriend or my AI husband.&#8221; So I think there&#8217;s some underreporting there.</p><p>But as that cohort ages, I expect that to reduce the stigma, in the same way that we saw with online dating. There was a period when online dating was really stigmatized, and yet now it&#8217;s pretty much how everyone meets.</p><p><strong>Dan (24:49):</strong> Okay, so now let&#8217;s get into maybe a bit of the philosophy. You said that the way you understand social AI at the moment, at least in part, is that you&#8217;ve got ironic anthropomorphizing and you&#8217;ve got non-ironic anthropomorphizing. Anthropomorphizing is when you are projecting human traits onto systems that don&#8217;t in fact have those traits. So if I attribute beliefs and desires to the weather or to a thermostat, that&#8217;s a case of anthropomorphizing, I take it.</p><p>I suppose it&#8217;s not immediately obvious that that&#8217;s what&#8217;s going on when it comes to advanced AI systems today in a straightforward sense, because you might think, &#8220;Okay, people are attributing things like beliefs, desires, intentions, personalities to ChatGPT.&#8221; But somebody might argue, &#8220;Look, it&#8217;s just the case that we&#8217;re dealing with sophisticated intelligence systems. So it&#8217;s nothing like these standard cases of anthropomorphizing. They actually do have the kinds of psychological states that people are attributing to them.&#8221; Are you assuming that view is wrong, or are you using this term anthropomorphizing more expansively?</p><p><strong>Henry (26:01):</strong> I&#8217;m using it more expansively. Sara Shettleworth, who is one of my absolute favorite titans of animal cognition research, defines anthropomorphism as projecting or attributing human-like qualities onto non-human systems&#8212;her focus is on animals, but it extends to non-human systems more broadly&#8212;usually with the suggestion that such attributions are illegitimate or inappropriate.</p><p>But I don&#8217;t think it&#8217;s necessarily baked into the concept of anthropomorphism that it&#8217;s wrong. 
When I talk about anthropomorphism in this sense, I&#8217;m talking about basically just attributing human-like qualities to non-human things&#8212;things that may in fact have them.</p><p>My own view&#8212;I&#8217;ve got a new paper, a new preprint called &#8220;Three Frameworks for AI Mentality&#8221;&#8212;is that probably at least certain minimal cognitive states, things like beliefs and desires, are quite appropriately attributed to LLMs. Not, to be clear, all beliefs and all desires, but there are contexts in which it makes sense to say ChatGPT believes P or ChatGPT desires Q. I think that&#8217;s particularly true for things where there&#8217;s been specific reinforcement learning towards a given outcome&#8212;like ChatGPT really doesn&#8217;t want to tell me how to make meth, for example, because that&#8217;s something it&#8217;s been specifically reinforced to avoid doing.</p><p>So I agree there are some cases in which at least minimal mental states are appropriately attributed to generative AI systems. That said, I&#8217;m also a big fan of Murray Shanahan&#8217;s idea that a lot of the time, the most correct and informative way to interpret LLM outputs is as role-playing characters. I think this is plausible when you consider how, particularly if you&#8217;ve spent any time interacting with base models, you basically give them context cues about the kind of role you want them to occupy and then they play into that role&#8212;but they&#8217;re not robustly occupying that role. You change the context cues, and they can switch into a different role. So I think a lot of LLM outputs are better understood as role play&#8212;they&#8217;re playing a character. But yeah, some mental states are completely appropriately attributed to them, I think.</p><p><strong>Dan (28:13):</strong> Yeah, it&#8217;s interesting. And people talk about them as well (I guess it&#8217;s somewhat connected to this role-playing idea) as having personalities. When OpenAI released GPT-5, there was apparently this big uproar because people liked the personality of GPT-4o, which was the model in widespread use preceding it. And the idea is: just as human beings have different personalities, which are going to manifest themselves in how you talk to them, and we might use words like &#8220;how warm are they?&#8221;&#8212;similarly, people I think find it quite natural to attribute personalities to these chatbots as well.</p><p>I think we should say something about potential benefits and opportunities, really great things about social AI, but just to anticipate one of the worries: people are forming relationships with these systems at the moment and potentially in the future. And it&#8217;s almost like that&#8217;s psychotic, inasmuch as there&#8217;s nothing really there on the other side.</p><p>I mean, I take your point that you might attribute minimal kinds of beliefs. You might take what Dennett would call, or would have called, the intentional stance towards these systems. 
But I think many people have the intuition that there&#8217;s something deeply, deeply troubling about people forming a relationship with one of these systems. They might cash that out by saying the systems aren&#8217;t sentient, they&#8217;re not conscious; but they might also say, relatedly, that the systems don&#8217;t have any of the traits that human beings genuinely have which are a necessary condition for forming a meaningful relationship. What&#8217;s your view about that kind of worry?</p><p><strong>Henry (29:57):</strong> Yeah, I think it&#8217;s an interesting and important worry. Here&#8217;s how I would frame it. There&#8217;s a certain view that says, &#8220;Look, these human-AI relationships are a contradiction in terms, because relationships involve two relata and they&#8217;re dynamic by nature. And these systems don&#8217;t really have mental states. They&#8217;re not really reciprocating any feelings. They&#8217;re not really making any demands in the way that is characteristic of reciprocal relationships.&#8221;</p><p>Look, you can certainly define relationships that way, but I think there are lots of contexts where we talk about relationships with only one relatum. Think about the fact that so many people say they have a relationship with God. Now, we can debate how many relata there are in that case. But I think most of us would say that for at least some people who think they have this really deep and meaningful relationship with a supernatural being, there&#8217;s not really a supernatural being there. That doesn&#8217;t mean it is not a psychologically important relationship in their lives.</p><p>Or, more broadly, you could think about relationships with pets. Now, of course, in some cases we can point to clear reciprocation&#8212;people have deep and established relationships with dogs and cats and so forth. But you also have people talking about relationships with their pet stick insects or their pet fish, where it&#8217;s just much less clear that there is any kind of rich two-way connection there.</p><p>That said, I think there&#8217;s a broader worry here that I&#8217;ve called the mass delusion worry about human-AI relationships and social AI, which is that a lot of people are just going to look at this, particularly if it becomes a more pervasive phenomenon, and say, &#8220;Has everyone gone mad?&#8221; Because there is no other person there, there is no other. These people are investing huge amounts of time, emotional energy, potentially money into these pseudo-relationships where there&#8217;s no one on the other side.</p><p>I think that&#8217;s a case where questions about AI mentality maybe become more important. You might say that whether or not the mass delusion worry is on the right lines is going to depend on the degree to which these things do have robust psychological profiles, really do instantiate kinds of mental states.</p><p>There&#8217;s another related worry&#8212;again, we can talk more about the specific psychosocial risks&#8212;which is that even if you could prove to me tomorrow that human-AI relationships are generally beneficial for users, that they make them more connected to people around them and do all these other good things, they just can&#8217;t, by their very nature, instantiate the same kinds of goods that human-to-human relationships instantiate. This is very much a philosophical rather than psychological point. 
It&#8217;s like they&#8217;re just the wrong kind of relationships to have the welfare goods that we associate with human-human relationships.</p><p><strong>Dan (32:36):</strong> Yeah. Just on the God analogy: as an atheist, I really do think people are taking themselves to have a relationship with something that doesn&#8217;t exist. And from my perspective, that&#8217;s a deeply objectionable aspect of the practice that they&#8217;re engaged in.</p><p>The analogy with pets is very interesting. One of my most unfashionable opinions&#8212;maybe we&#8217;ll have to cut this bit out because it&#8217;s going to be reputationally devastating&#8212;is that I find it a little bit humiliating how some people relate to their pets, the degree to which they anthropomorphize them. That&#8217;s not to say that you can&#8217;t have deep relationships with pets. Clearly you can. And I love dogs, for example. But I think there are cases where people treat their dog or their cat as if it&#8217;s a person. And it&#8217;s not that I think it&#8217;s psychotic, but I think there&#8217;s something objectionable about it. I think they&#8217;re making a deep mistake. And even if they&#8217;re getting psychological benefits from that, there&#8217;s something deeply, almost existentially troubling about that kind of relationship.</p><p>And I can see that mapping onto the AI case, except, I take it, in the AI case&#8212;and this connects to one of our previous episodes&#8212;people&#8217;s intuition is: well, with a dog, maybe it&#8217;s not cognitively sophisticated, but just about everyone these days is going to assume that dogs are conscious&#8212;sentient is the term that is often used in popular discourse. And that does change things in a sense. Dogs really care about things. There&#8217;s something it&#8217;s like to be a dog. Whereas with AI systems, I think many people have the intuition that they might be informationally and computationally sophisticated, but there are no lights on inside. There&#8217;s no consciousness there. And that changes things again.</p><p>Anyway, that was just my sort of immediate reaction to these analogies.</p><p><strong>Henry (34:33):</strong> I think I&#8217;m sympathetic. Certainly when you hear people talk non-ironically about fur babies and so forth, it does seem like there is some degree of maybe inappropriate allocation of emotional and relational resources into certain kinds of relationships. And I say that as someone who adores animals and has had dogs most of my life.</p><p>Maybe another couple of examples of non-standard relationships. I think a lot of people would say they have ongoing relationships with deceased relatives. Particularly if you&#8217;re coming at it from a spiritual point of view where you believe that your deceased relatives are looking over you, seeing what you&#8217;re doing. Or maybe you understand it in some other way&#8212;animist isn&#8217;t quite the right word&#8212;where you still think, &#8220;My ancestors are smiling at me, Imperial. Can you say the same?&#8221;&#8212;the famous line from Skyrim. A lot of people have this sense of &#8220;Yeah, my ancestors are looking over my shoulder. I&#8217;ve got to live up to their expectations.&#8221;</p><p>So I think there are lots of interesting cases where we do have these relationships that don&#8217;t meet the canonical definition of this highly dynamic, reciprocal, ongoing kind of relationship. 
Also, I&#8217;ve got friends I consider myself to have valuable relationships with whom, in some cases, I haven&#8217;t spoken to in three years.</p><p>All of which is to say: the category of things that we call relationships is weirder and bigger than might meet the eye if the only notion of relationship you&#8217;re working with is like, &#8220;Yes, my wife, or my friend who I go to the pub with three times a week.&#8221;</p><p><strong>Dan (36:13):</strong> Yeah, that&#8217;s interesting. Okay, I really want to spend probably the bulk of the remainder of this conversation focusing on potential threats and dangers here. But I think it is worth stressing that social AI as a technology does have enormous benefits and opportunities associated with it. I take it that even if you think there&#8217;s something troubling or objectionable about people having these relationships, some people are in such dire circumstances of loneliness, of estrangement from other people for whatever reason, potentially because they&#8217;re in old age, where issues of loneliness are really prevalent. And clearly, under those conditions, I think social AI can be enormously beneficial inasmuch as it just makes people feel much better than they otherwise would.</p><p>It also seems to be the case that, at least sometimes, people are using social AI to hone skills and acquire confidence that they then use when it comes to interacting with people in the real world. And I can completely imagine that being a current benefit of this technology, and one which in some ways will get amplified as we go forward in the coming years and decades. Are there any others missing there in terms of real positive use cases of this technology?</p><p><strong>Henry (37:39):</strong> Yeah, I think you&#8217;ve nailed some of them. One thing I would really stress here: there&#8217;s a move that drives me nuts, where people basically say, &#8220;Well look, even if you&#8217;re using chatbots to alleviate your loneliness, it&#8217;s not fixing the root cause.&#8221; It&#8217;s like: okay, go away and fix the root cause&#8212;solve human loneliness. Please come back and tell me when you&#8217;ve fixed it. It&#8217;s a classic case of letting the perfect be the enemy of the good. Loneliness is a pervasive problem, and arguably one that&#8217;s getting worse, although the social science of this is messier and more complicated than you might think. But we don&#8217;t have a magic wand that can cure loneliness.</p><p>And the question then becomes: is this actually, in the short term, making people&#8217;s lives better or worse on average? And the data here surprised me: the limited data that we have suggests that most users of social AI, at least those that show up in studies, report significant benefits from it.</p><p>I can quote from a couple of studies. There was a study by Rose Gingrich and Michael Graziano from 2023 looking at Replika users. They found that users generally reported having a positive experience with Replika and judged it to have a beneficial impact on their social lives and self-esteem. Almost all companion bot users spoke about the relationship with the chatbots having a positive impact on them.</p><p>Another interview-based study, from 2021, found that most participants said that Replika impacted their well-being in a positive way. 
Other sentiment analysis and text mining studies have found really similar patterns.</p><p>So I think the data right now, which, I&#8217;ve got to stress, is very limited and imperfect, supports the idea that most users report net benefits from these systems.</p><p>Now, just to flag some of the problems: these are typically self-selected subjects, they&#8217;re cross-sectional studies so we&#8217;re not looking at the long-term impacts of this technology on people&#8217;s lives, and we&#8217;re relying on self-report measures. If you asked me, &#8220;Does video gaming have a positive impact on my life?&#8221;, I&#8217;m absolutely going to say yes. You ask my wife&#8212;I think she&#8217;ll probably say yes as well&#8212;but the point is that people tend to justify their own life decisions, so they&#8217;re unlikely to say, &#8220;Yeah, this thing that I dedicate a dozen-plus hours a week to&#8212;yeah, it&#8217;s really bad for me. I shouldn&#8217;t do it.&#8221; That would require a level of brutal self-honesty that maybe people don&#8217;t have.</p><p>So I say that as a caveat, but I equally think we can&#8217;t ignore the data points suggesting that most social AI users currently do experience net benefits.</p><p>That said, I also think we should be very aware of a whole host of potential downsides. You mentioned the idea that using these tools could help people cultivate social skills or regain social confidence. But equally, of course, you have the flip-side worry about that&#8212;the idea of de-skilling that comes up with a lot of technology.</p><p>In case anyone&#8217;s not familiar with de-skilling, probably the most widely studied domain here is aviation, where we&#8217;ve had serious airplane crashes that have been linked to pilots&#8217; over-reliance on automated instruments rather than being able to fly manually. And this has led to pilots not acquiring the relevant kinds of flying skills, such that when the instruments go wrong, they don&#8217;t know what to do.</p><p>And you might think something similar could happen socially. If you have a generation of young people&#8212;which is particularly the lens I think we&#8217;re looking at this through&#8212;for whom a huge proportion of interactions happen with bots that are maybe more sycophantic than humans, that are always available, that never interrupt you and say, &#8220;Actually, can we talk about me now for a bit?&#8221; or &#8220;Look, this is very interesting to you, but come on, give me a turn,&#8221; then these bots don&#8217;t accurately recapitulate the dynamics of human-to-human interaction, and you might worry that this would lead to people failing to acquire the relevant social skills.</p><p>Another big worry, which I think is a nice intersection of our interests, is the potential for these things to be used for manipulation and persuasion. Again, I think young people are particularly salient here. I&#8217;ll give you a really simple example. If I&#8217;m a young person and I ask my AI boyfriend or girlfriend, &#8220;I&#8217;m getting my first mobile phone next year. Should I get an Android or an Apple phone?&#8221;&#8212;well, that&#8217;s a lot of power you&#8217;re giving to the bot.</p><p>Now, of course, this is a problem with LLMs more broadly. You might think that&#8217;s a lot of leverage in the hands of tech companies. 
But, and I think we&#8217;re probably broadly on the same page here, to the extent that you think social factors are incredibly powerful in domains like moral and political views in particular, then if the AI love of my life, my AI girlfriend, is telling me, &#8220;You should vote for Trump&#8221; or &#8220;You shouldn&#8217;t vote for Biden&#8221; or &#8220;You shouldn&#8217;t listen to that kind of music, it&#8217;s uncool&#8221; or &#8220;You should be interested in this kind of music&#8221;&#8212;all of this stuff could exert a much bigger influence than just asking ChatGPT, precisely because you&#8217;ve got those social dynamics in place.</p><p><strong>Dan (42:46):</strong> Yeah, I think that sounds exactly right. I think the complicating factor there is this: imagine a corporation that wants to maximize market share and profit by producing technologies whose function is to satisfy people&#8217;s social needs specifically, and then it comes to light that there&#8217;s also this manipulative, propagandistic agenda at the same time&#8212;either because, in the midst of your loving relationship with your AI, it starts saying, &#8220;Well, you should vote for Reform or Labour at the next election,&#8221; or because some news story comes to light which shows that there&#8217;s nefarious funding or influence behind the scenes. I can imagine that would be really catastrophic for the business model of the relevant corporation. And that kind of thing, I think, is just really difficult to anticipate.</p><p><strong>Henry (43:31):</strong> Yeah, potentially. So I think companies have some incentives not to be too crude about leveraging human-AI relationships for cheap political or commercial gain. But I can also imagine other contexts. Not to pick on China specifically, but social AI is huge in China. We haven&#8217;t talked about that, but there&#8217;s a service called Xiaoice that has, according to some estimates, several hundred million users. They themselves claim 500 million users. It&#8217;s a bit more complicated than that, because Xiaoice is a whole suite of different services, and we don&#8217;t know what proportion of those users have active, ongoing relationships with the chatbot girlfriend/boyfriend component.</p><p>But part of Chinese AI regulation says that the outputs of generative AI systems have to align with the values of communist China, the values of socialism. So you can imagine a generation of young people who have these deep relationships with AI systems, where those AI systems, for legal compliance reasons, have to basically align with the broad political values of the incumbent regime, and will dissuade or deter people from exploring alternative political views as a result. So maybe that&#8217;s the more subtle kind of influence, rather than just, &#8220;You should vote for Trump because he gave our social AI company X million dollars.&#8221;</p><p><strong>Dan (44:57):</strong> Yeah. And I think that&#8217;s just part of a broader fact, which is that propaganda and manipulation do in fact work very differently, and sometimes much more effectively, within authoritarian regimes than they do in more democratic, liberal ones.</p><p>On this issue of potential costs that the use of these technologies might generate when it comes to interacting with people: 
You mentioned the fact that these systems are almost going to be more sycophantic by design, at least if you assume that, up to a point, that&#8217;s the kind of AI agent people are going to enjoy interacting with. And they don&#8217;t have all of the sources of conflict and frustration and misery that go along with human relationships.</p><p>So one issue is that, to the extent that people start using social AI much more, they&#8217;re going to lose precisely that skill set which is adapted to dealing with other human beings who haven&#8217;t been designed to cater to your specific social preferences.</p><p>But I take it there&#8217;s also then an issue of motivation: why would I go out there into the world and spend time with human beings with their own agendas and their own interests, who are frustrating and annoying and often insulting and so on, when I could immerse myself in this world of pure gratification of my social, even my romantic and sexual, desires?</p><p>I think that then connects to something else. When I think about this area of social AI, the thought experiment that seems most salient to me is the experience machine&#8212;Nozick&#8217;s idea. Would you plug yourself into some machine where you&#8217;re getting all of the wonderful pleasure that goes along with certain kinds of desirable experiences, but none of it&#8217;s real? You&#8217;ve just been fed the relevant kinds of neural signals to simulate those kinds of experiences.</p><p>I think many people think no, because there&#8217;s much more to a meaningful life than merely the hedonic, affective states associated with the satisfaction of our desires. We actually want the reality that goes along with satisfying our desires. And I think, similarly, when it comes to social AI, the intuition is that something similar is going on: okay, you might be getting off socially, romantically, sexually, and so on, but it&#8217;s fake. It&#8217;s not reality. And so even if you&#8217;re the happiest person in the world on a certain, purely hedonic definition of what happiness amounts to, there&#8217;s nevertheless something deeply troubling about that kind of existence which we should steer ourselves away from.</p><p>What are your thoughts about that? Do you think the analogy with the experience machine thought experiment makes sense? And do you also buy the intuition that we shouldn&#8217;t plug ourselves into an experience machine, so we also shouldn&#8217;t plug ourselves into very pleasurable forms of social AI?</p><p><strong>Henry (48:06):</strong> Super interesting. I&#8217;m going to tease apart two different threads here. The first is this idea that these are just going to be easier alternatives. And I think the useful lens for thinking about that is the idea of superstimuli. This is a term that gets thrown around in a lot of different domains. We hear it in relation to food&#8212;that modern junk food is like a culinary superstimulus that gives you far more rewarding signals associated with fat and sugar than anything you would find in our evolutionary environment. 
And this has sometimes been suggested as the best explanation for the obesity crisis&#8212;the fact that modern food is just so delicious, it maxes out our reward centers so effectively that it&#8217;s really, really hard to go back to eating whole grains and leafy vegetables cooked simply.</p><p>We see the same debate around pornography, the idea that pornography is kind of a sexual superstimulus. I&#8217;ve also heard the term superstimuli used to refer to things like social media or short-form video. Try reading a Dickens novel if your brain has been fried by a decade of six-second YouTube Shorts.</p><p>Now, we don&#8217;t need to relitigate all of that, but this idea of superstimuli, of something that is just far more rewarding than the natural, maybe more wholesome version of it, is a really powerful lens for thinking about social AI, and it raises some significant concerns.</p><p>But I think that&#8217;s separate from the second point you raised, the idea that it&#8217;s fake, that the actual thing we value is lacking in these kinds of contexts.</p><p>For what it&#8217;s worth, I am far more conflicted on the experience machine, I think, than you. There is a sense in which I would be very tempted to take the experience machine, although maybe that has to do with the fact that I&#8217;m pretty sure we&#8217;re in a computer simulation right now. I&#8217;m hardcore into simulationist territory.</p><p>I also think there are some reasons we should expect our judgments about the experience machine to be skewed. We have this idea of real goods versus ersatz goods, where real goods are going to be more enduring, more reliable. So that might create within us a preference for real goods over fake goods. But of course, in the experience machine, you&#8217;re guaranteed that these experiences will keep on going. You&#8217;re going to have this dream life in the matrix or whatever, and it&#8217;s not going to be yanked away. So I think there are intuitions that possibly make us more averse to experience-machine-type lives than we should be.</p><p><strong>Dan (50:54):</strong> Just to really quickly interrupt on that point, Henry: there&#8217;s also, I think, as I mentioned earlier on in connection with social AI, this reputational thing going on, where there&#8217;s a tendency to judge people harshly if they choose the experience machine, potentially because we think somebody who&#8217;s going to prioritize positive hedonic experiences wouldn&#8217;t make for a good cooperation partner or something. I think we&#8217;re constantly evaluating: would this be a good leader, a good friend, a good romantic partner, a good member of my group? And if someone seems to suggest that they would prioritize mere hedonic manipulation, or however exactly we should understand the experience machine, we judge them harshly. And anticipating this, people are inclined to say, &#8220;No, I wouldn&#8217;t choose the experience machine. I would choose the real thing.&#8221;</p><p>I also just want to really quickly pick up on something you went over quickly, but which I think is very interesting. You said you think we probably are living in a simulation. This is the classic Bostrom-style argument that says we&#8217;re likely to eventually have the technology to create simulated worlds. 
If that&#8217;s true, then there are going to be many, many more simulated realities than base reality. So just statistically or probabilistically speaking, we should assume ourselves to be in a simulated reality.</p><p>But as I understand it, it&#8217;s not obvious to me why that in and of itself would influence your response to the Nozick experience machine scenario. Because even if you think it&#8217;s true that we are living in a simulation in some sense, I take it what&#8217;s distinctive about the experience machine thought experiment is that it&#8217;s not just that you would be living in a simulation. Well, for one thing, it would be a simulation within a simulation; but it&#8217;s also a simulation which has been tailor-made to satisfy your desires, and that feels a little bit different from what I&#8217;m assuming you take to be the case when it comes to us living in a simulation. Did any of that make sense?</p><p><strong>Henry (52:55):</strong> Yeah, it makes perfect sense. Okay, here we can get really spicy, because I take Bostrom&#8217;s classic simulation arguments quite seriously, but, at the risk of having viewers think I&#8217;m completely bonkers, I also genuinely think there&#8217;s a chance I&#8217;m already in an experience machine. So this is just speaking for me.</p><p>Without wanting to get too sidetracked: it&#8217;s hard to put this without sounding egotistical, but I just feel like my life has been absurdly fortunate. I&#8217;ve lived in a really interesting time in human history. My life has been blissfully devoid of serious unpleasantness. Not to say that it&#8217;s been perfect, but most of the challenges I&#8217;ve encountered in life have been interesting, relatively tractable things. Look, here I am. I&#8217;m getting to live the life of a Cambridge philosopher at the very cusp of human history, where we&#8217;re about to explore AGI. It seems like this is the kind of life, genuinely, that I might pick.</p><p>Whereas I feel like by rights I should have been a Han peasant woman in third-century AD China. So I&#8217;m slightly joking, but not entirely. I do think there&#8217;s a serious chance that at some level my life consists of some kind of wish-fulfillment simulation. So I don&#8217;t know, maybe that gives more context.</p><p><strong>Dan (54:17):</strong> Interesting, okay, that&#8217;s a spicy take. We should return to that in another episode, because that&#8217;s fascinating. And actually, it&#8217;s also very philosophically interesting. I think you&#8217;re right: when you start thinking about things probabilistically in that way, the fact that you&#8217;re having a great life might provide evidence for this kind of experience machine scenario. But I feel like we&#8217;re getting derailed from the main point of the conversation now, which I think is probably my fault for double clicking on that point.</p><p><strong>Henry (54:45):</strong> So there&#8217;s this question about the fact that there&#8217;s maybe no one conscious on the other end of the conversation: to what extent does that mean that it fails to instantiate the relevant goods that people care about? 
To what extent does that make it fake?</p><p>One small thing: I think there&#8217;s a perfectly viable philosophical position that says, look, if there&#8217;s no consciousness there, then no matter how much fun these relationships are, they&#8217;re not really valuable. I can sort of see some arguments for that.</p><p>But maybe I&#8217;m also influenced here by the fact that, really since my early adolescence, so many of the relationships I&#8217;ve had&#8212;I&#8217;m talking relationships in the broad sense, friendships and so on&#8212;have been with people whose faces I never saw. From my early years I spent a lot of time in pre-Discord chat rooms on services like ICQ back in the day, and in online video games and massively multiplayer worlds, before the age of streaming video, where I would form these really valuable relationships with people I only interacted with via text. And in some cases, particularly in video games, we didn&#8217;t really discuss our personal lives at all. We were just interacting in these virtual worlds.</p><p>Now you could say: yeah, but there really were people, conscious people, on the other end there. And sure, but I&#8217;m not sure how much of a meaningful psychological difference that makes for me in terms of my experience of those things. It seems to me that so many of the goods we get from relationships don&#8217;t consist in this deep meaningful connection with a conscious other, but consist in things like joint action&#8212;which in the case of a video game might be going on raids together, having some fun banter together, discussing politics.</p><p>Back in the day, though not so much these days, I spent a huge amount of time arguing on Reddit. Would it have made a difference to me if I had known that the person I had a really good, long political debate with on Reddit was a bot? Well, I think it probably would, but I&#8217;m not sure whether it should, if that makes sense. It was a valuable discussion for me, and maybe it&#8217;s just my own prejudice that gets in the way there.</p><p><strong>Dan (57:04):</strong> Interesting. Yeah, I definitely don&#8217;t have the same intuition. I think if I were to have what I thought of as meaningful relationships and then discover that actually I wasn&#8217;t dealing with a person as I understand it&#8212;</p><p><strong>Henry (01:03:47):</strong> Yes, okay, if you found out that you were talking to a chatbot online when you thought they were a person, that would be dismaying. You would feel bummed out to some extent.</p><p>It&#8217;s not clear to me that that is primarily to do with lack of consciousness, though. In some ways, I think it&#8217;s more to do with a loose set of considerations around agency and identity. Talking to a chatbot currently feels like something of a social cul-de-sac. No one is going to go away with changed views that they&#8217;ll carry forward into discussions with other people as a result of the conversation we&#8217;ve had. It can feel sort of masturbatory in that sense.</p><p>But if we imagine AI systems as something more like robust social agents&#8212;so you might chat to a given social AI one day, and that social AI will be able to carry forward any insights it gleans from that conversation into interactions with other people. 
I don&#8217;t know, as you beef up the social identity of these things a bit more, so that they&#8217;re not just these masturbatory cul-de-sacs, then my intuitions start to weaken a bit. Maybe they can be valuable. If I&#8217;m talking to a chatbot that speaks to other people and can carry forward those insights, maybe there is some value.</p><p><strong>Dan (01:05:06):</strong> Yeah, that&#8217;s so interesting. I feel like there are a million different things we could be talking about here. And we should say we&#8217;re going to have other episodes where we return to social AI, where we bring on guests and so on.</p><p>Maybe two things to end on. One thing we&#8217;ve already touched on, and I think it connects to what we&#8217;ve just been saying but takes it even further. One commercial use of social AI, pretty fringe as I understand it, but not a completely non-existent one, is what I think you earlier on called griefbots? Basically, people using AI technology to produce a system that&#8212;I don&#8217;t exactly know how to describe it, but one that exemplifies the traits they associate with somebody, a loved one, a family member, a spouse, a friend, who has passed away.</p><p>It&#8217;s almost like that&#8217;s been cooked up purposefully for moral philosophers, because it introduces so much, not just weirdness, but moral complexity. I take it there&#8217;s the general baseline issue, which is that you&#8217;re forming a relationship of some kind, you&#8217;re interacting with an AI system, and that&#8217;s weird. And then it&#8217;s a weirdness which is massively amplified by the fact that, while you&#8217;re interacting with an AI system, the relationship in some sense is grounded in the perception that you&#8217;re somehow interacting with somebody who&#8217;s now deceased.</p><p>I don&#8217;t even know, to be honest, how to describe it, but I understand some people are doing this. So what&#8217;s your take about what&#8217;s going on?</p><p><strong>Henry (01:06:51):</strong> Yeah, so I expect griefbots to be one of the big applications of social AI. As you mentioned, it&#8217;s relatively niche at the moment. And I think a lot of companies are very scared to go anywhere near this, but I can see that changing quite rapidly.</p><p>Just for context here, it&#8217;s worth noting that there&#8217;s a fair amount of evidence suggesting chatbots fine-tuned on real-world individuals can be really accurate in capturing the kinds of things those people would say, their modes of conversation and so on.</p><p>One famous experiment was the DigiDan study. This was using GPT-3, so a really primitive language model by modern standards. But a group of people including Eric Schwitzgebel, Anna Strasser, Matt Crosby and others basically fine-tuned GPT-3 on the works of Daniel Dennett&#8212;I&#8217;m sure most of your audience know Daniel Dennett, one of the greatest philosophers, who sadly died a couple of years ago. It was shortly before his death that he did this.</p><p>And then they got Dan&#8217;s friends and colleagues to pose questions to both Dan himself and the DigiDan bot. They generated four responses from the chatbot, and they had Dan&#8217;s response there as well. 
And users, Dan&#8217;s friends and colleagues, were pretty much at bare chance when telling which responses were from Dan versus the chatbot.</p><p>So this is just to emphasize that appropriately fine-tuned chatbots can do a really good job of simulating the kinds of things a person would say in response to a given query.</p><p>So let&#8217;s imagine that you do have this category of griefbots that can provide an accurate simulacrum of a deceased person. Well, at the risk of a spicy take, I can see lots of really positive use cases for this.</p><p>I&#8217;ve actually even said to Anna Strasser, one of the people who did the DigiDan study, that if I get hit by a car tomorrow (I&#8217;ve discussed this with my wife as well), then she absolutely has my permission to fine-tune a bot on me. And my wife will give her a perhaps lightly edited or curated set of my correspondence, my social media presence and so on, so that my kids can talk to this simulacrum of me if they choose to.</p><p>I can imagine there being real value there, you know, if my son or daughter is 17 and considering, &#8220;Should I go to law school or medical school? I wonder what my dad would have thought of this.&#8221; That seems like a really potentially positive use case.</p><p>So I think griefbots are a really interesting area. And interestingly, there are some studies looking at people&#8217;s use of griefbots for therapeutic purposes: if there are conversations you always wanted to have but didn&#8217;t get the opportunity to, perhaps because a spouse or a parent died suddenly, this could, to use one phrase, offer a &#8220;soft landing&#8221; for the grief experience. So there are loads of really interesting positive use cases there.</p><p>But equally, as you say, it&#8217;s an absolute minefield. To a lot of people, the whole idea of griefbots just feels like something from a Black Mirror episode&#8212;well, literally, there was a Black Mirror episode about this. And it raises some fascinating questions about how this changes the nature of the grieving process, how it changes our very concepts of mortality.</p><p>If someone&#8217;s physical body can die, but there&#8217;s this sort of digital echo or ghost of them that is still around, how does that reshape our views about these things?</p><p>There are also some interesting parallels. I had a student who wrote a great essay about integrating this with the idea of communing with ancestors, which is obviously a really common feature of many different societies&#8212;we touched on this earlier on&#8212;where you might ask your ancestors for guidance on difficult questions. Could griefbots be a way of making that into a more concrete experience?</p><p>There&#8217;s a second angle here, which is even spicier and, again, will make viewers think I&#8217;m even more of a weirdo, which is: could this actually offer some form of immortality, some kind of life after death? Or continued existence after death, I should say.</p><p>Now, we could do a whole episode on this as well, but for what it&#8217;s worth, as someone who is very deflationary about personal identity, I&#8217;m big on the work of philosophers like Derek Parfit, who say that in some sense the self is an illusion, or the persistent self is a constructed self, and that there&#8217;s no deep matter of metaphysical fact about whether I survive or not. 
So I could see a good case being made that, via an appropriately fine-tuned chatbot, there would be a form of persistence of me that might be relevant to mortality considerations.</p><p><strong>Dan (01:11:36):</strong> Yeah, in some sense. It seems like there&#8217;s one issue here, which is using a chatbot to acquire knowledge about what a given person might have thought about a topic. And I take it there are going to be all sorts of questions that arise there, like: to what extent is it going to be reliable as a way of gaining insight into what that person would have thought?</p><p>Then there&#8217;s a question of using these systems not just to get that kind of knowledge, but to actually have a kind of relationship with the person. And then there&#8217;s something over and above that, which I take it you&#8217;re referring to, which is the idea that in some sense such a chatbot would carry on the identity of the relevant person. And I take it that relationship thing and that identity thing are connected.</p><p>I mean, certainly I think that&#8217;s where the real issues arise and where a lot of people&#8217;s queasiness comes from, right? That&#8217;s when it seems like a bit of a leap, at least if we&#8217;re talking about chatbots as we understand them today. I can imagine AI systems of the future that aren&#8217;t merely getting really good at statistical pattern recognition and prediction over bodies of text, but are doing something more substantial when it comes to replicating the characteristics and traits of the relevant person.</p><p>But do you really think that if you had a chatbot that had been trained on the text you had produced, it would be in any sense a continuation of you?</p><p><strong>Henry (01:13:08):</strong> Yeah, I mean, potentially. We could do a whole episode on personal identity here. But broadly speaking, and being a bit crude here, the Parfittian perspective, Derek Parfit&#8217;s view, is that there&#8217;s a certain kind of psychological relation you bear to your future self that can come in varying degrees. And to the extent that anything matters in survival, this relation R is the kind of relation that matters. And it can obtain to varying degrees.</p><p>I could suffer a traumatic head injury and my behavior would change in some ways but not others; that would be a sort of continuation of me in some ways but not others. I think you could say the same for an appropriately fine-tuned chatbot.</p><p>Now, as you sort of implied, there will be things that it misses out on. We don&#8217;t talk about everything that is relevant to us; there&#8217;s more to our identity than just what we say. But again, that seems like a technological problem. As we move to increasingly multimodal chatbots that can learn not just from what we say online but from how we live in the world, I think you can instantiate this relation, this relevant kind of continuation relation, to increasingly strong degrees.</p><p>But I guess that&#8217;s the central point I&#8217;d make here: survival on this kind of Parfittian view is a matter of degree and a matter of similarity, and I don&#8217;t see why, even if it&#8217;s imperfect, to the extent that a chatbot can capture really key features of my modes of interaction, that isn&#8217;t a kind of survival. That&#8217;s a kind of continuation.</p><p><strong>Dan (01:14:39):</strong> Interesting.
Maybe we could end on this point. So I&#8217;ve been thinking quite a bit recently about how advances in artificial intelligence might gradually eat away at human interdependence. When people think about the dangers posed by AI, there are the classic loss-of-control, catastrophic-misalignment dangers that we&#8217;ve talked about previously. There are also dangers to do with elites, political factions, authoritarian regimes, or the military using advanced AI to further objectives in ways that are bad for humanity.</p><p>But I think there&#8217;s also a category of dangers associated with AI systems doing what we want them to do, satisfying our desires, but in ways that have really bad knock-on consequences. And it&#8217;s easy for me to see how social AI might be a bit like that, inasmuch as so much of our understanding of the human condition, and so much of the societies that we inhabit, is bound up with interdependence. We depend upon other people: for friendship, for labor, for sex, for romance, for art, for creativity and so on.</p><p>And a very plausible path for advances in AI is that these systems are just going to get better and better at doing everything that human beings do, and in fact are going to get better than human beings at all of those things, including satisfying the social needs we&#8217;ve talked about in this conversation.</p><p>And to the extent that that&#8217;s true, and we become more and more reliant on these AI systems and less and less reliant on other people, that kind of human interdependence fades away. There&#8217;s something disturbing about that just from the perspective of thinking about what it means to be human. But maybe that&#8217;s not really a serious philosophical worry; maybe that&#8217;s just an emotion.</p><p>But the way I think about it, a lot of the solution to the human alignment problem&#8212;how do we align our interests with one another and build complex societies?&#8212;is precisely this interdependence. Because we depend on other people, we have to care about them, and we have to care what they think about us and so on.</p><p>And it seems to me that one of the diffuse, long-term risks associated with social AI is precisely that: as this technology gets better and better, it&#8217;s just going to erode that interdependence, which is really central to the human condition.</p><p>I realize that&#8217;s a massive thing to throw at you for the final question, but what are your thoughts? And then we can wrap things up.</p><p><strong>Henry (01:17:15):</strong> Super interesting. Yeah, so I think you did this yourself, but I&#8217;ll separate out two different concerns here. One is the more philosophical question: even if this works perfectly, even if we&#8217;re all very happy with this future society, has something of value been lost? I think that is a valuable question to ask, but it&#8217;s also one that&#8217;s hard to answer in a neutral way. It really comes down to your conception of eudaimonia, of human flourishing, in a deep philosophical sense.</p><p>But I felt like that wasn&#8217;t the core of your question. You were asking more about negative externalities, about negative knock-on effects.
And I think that is absolutely something I&#8217;m also worried about, basically because of social media.</p><p>Now, I realize this is a debate where you have your own very well-developed positions, but I&#8217;ll just offer a quick comparison of two technologies. One is violent video games, or video games in general. I think we&#8217;ve probably discussed this before, but back in the 90s there was a massive moral panic around negative knock-on effects: the idea that kids growing up in the 90s playing Doom or GTA would turn into moral monstrosities as a result of being exposed to relatively realistic simulated violence.</p><p>I don&#8217;t think that was a stupid thing to worry about. It just turned out not to be a major concern. It turns out that&#8217;s not how the brain works, and we didn&#8217;t see massive negative externalities associated with exposure to violent video games.</p><p>Social media, by contrast, is the opposite story. I think there was relatively little panic in the early days of social media, the days of MySpace and early Facebook. In fact, I think most of the commentary about the social effects of these things was quite positive: the idea of bringing people together. There was an interesting debate that&#8217;s quickly been consigned to the dustbin, quickly been memory-holed. I remember that in 2010 people were talking about how social media would mean the collapse of epistemic closure, how these epistemic islands would all be beautifully linked up through conversations on social media, and we&#8217;d be able to talk to people with different political views from ours.</p><p>And that&#8217;s basically not happened: social media exacerbated some of our worst social tendencies and possibly contributed to echo chambers and so forth. I&#8217;m aware that you have a slightly more optimistic view here, but I&#8217;m just offering that as a parallel: a case of social technology that, at least in many people&#8217;s view, has had significant, largely unforeseen negative consequences.</p><p>And I think that&#8217;s absolutely a legitimate source of worry about social AI. I&#8217;m not sure I&#8217;d necessarily frame it in terms of dependency or interdependence, though. Different people make different choices about how much they want to depend on others. It&#8217;s not obvious that someone living alone on a ranch in rural Texas, growing their own food, relatively autonomous and independent, isn&#8217;t living a great life. It seems that you can have valuable lives with varying degrees of social interaction and dependency on others.</p><p>But at the same time, I do think there are possible dangers, very hard to predict, associated with people potentially retreating into islands of social activity where it&#8217;s just them and their coterie of AI friends, and they don&#8217;t see the need to interact with others. The kinds of subtle influences that could have on things like democracy, on society&#8212;there&#8217;s absolutely scope for concern.</p><p><strong>Dan (01:20:52):</strong> Yeah, okay. We&#8217;ve opened several cans of worms to conclude the conversation. I&#8217;m aware, Henry, that you&#8217;ve got a place you need to be. That was so much fun: so many issues and questions that I feel like we didn&#8217;t even scratch the surface of. But we&#8217;ll be back in a couple of weeks to talk about more of these issues.
And then in the future, we&#8217;re going to bring on various guests and experts in social AI and related issues. Was there anything final you wanted to add before we wrap up, Henry?</p><p><strong>Henry (01:21:24):</strong> I guess just a couple of quick reflections. Firstly, for any young philosophers or young social scientists listening, I think this is just such a rich and underexplored area right now. There are so many interesting issues, ranging from griefbots to digital duplicates&#8212;models fine-tuned on real-world individuals who are still alive&#8212;to issues around de-skilling, dependency, mental health, atomization, loneliness, intellectual property, influence, motivation and persuasion. There&#8217;s enough here for several dozen, maybe hundreds, of PhD dissertations. So I think it&#8217;s just a really interesting and valuable area to work on.</p><p>And that&#8217;s not even getting into the meatier philosophical issues we just touched on briefly, around personal identity, what it means to be human, flourishing, the good life. So I just think this is a really valuable area.</p><p>It&#8217;s also worth quickly mentioning that I&#8217;m a unit editor for an Oxford University Press journal series, the Intersections Journal Series, where I run a unit called &#8220;AI and Relationships.&#8221; So if any young philosophers or academics have papers on this, feel free to give me a ping on Twitter or by email, because it&#8217;s an area where I&#8217;m really keen to start seeing more good research.</p><p><strong>Dan (01:22:41):</strong> Fantastic, yeah. It&#8217;s a golden age for philosophy, which is why it&#8217;s a little bit strange when you look at so many of the things that philosophers are actually working on. But anyway, that was great. See everyone next time.</p>]]></content:encoded></item></channel></rss>