Time To Start Panicking About AI?

In this episode, Henry and I finally do something we probably should have done in the first episode: introduce ourselves. We talk about our backgrounds in philosophy, how we became interested in psychology and cognitive science, and what drew us to thinking about AI. From there, we dig into the current state of AI capabilities, especially “agentic” AI (e.g., Claude Code), the politics of AI (including the Trump administration's recent conflict with Anthropic), and whether the growing public hostility to AI is well-founded or misdirected. We wrap up with a big question: is it time to start panicking about AI? Henry says the time to panic was five years ago. I argue that for panic or any other emotion to be productive, it must be anchored in an accurate, evidence-based understanding of what is happening, which is missing from lots of the current discourse about AI.

Transcript

(Note that this transcript is AI-edited and may contain minor mistakes.)

Introducing Ourselves

Dan: Welcome back. I’m Dan Williams, and I’m back with Henry Shevlin. Today we’re going to be discussing some questions about the nature of AI as it’s developed over the past couple of months. We’re also going to be talking about the politics of AI and probably some questions about AI and public opinion — some of the backlash that appears to be brewing among certain segments of the public when it comes to AI.

But to kick things off, we’re going to do something we probably should have done in the first episode but haven’t actually done yet, which is to introduce ourselves. So Henry, to begin with — who are you?

Henry: So many different descriptors I could choose from. I think I’ll start with philosopher of cognitive science. I’m also a father, husband, son, D&D player, big video gamer, runner, cyclist — all that good stuff. But let me talk a little more about the philosopher of cognitive science side.

I’m the associate director at the Leverhulme Centre for the Future of Intelligence, Cambridge’s main AI ethics, theory, policy, and law research centre. Basically, everything except building the models. We do practical benchmarking work on capabilities, legal reviews, sociology and critical theory of AI — it’s a really big interdisciplinary centre. I’ve been there now going on nine years. I joined early 2017, all the way back when state-of-the-art AI was stuff like AlphaGo. We were created just as that story was brewing. In 2016, AlphaGo won a very surprising victory against Lee Sedol in the game of Go, which was seen by many as an almost impossible challenge for AI because of its combinatorial complexity.

It’s been amazing working in this role — having a front-row seat to what I think is a unique period, not just in the history of AI, but in the history of human civilisation. These last nine years have been like having a seat in Lancashire during the Industrial Revolution, watching the development of various industrial applications.

Dan: Yeah.

Henry: Before we get more into AI, maybe a little more background. I’m from the UK, originally from Staffordshire. I was actually a classicist, believe it or not — that was my undergrad degree. Latin and Greek. I always enjoyed both the humanities side of classics and the kind of technical rigour you got from learning large sets of verb tables and so forth. I actually enjoyed that part. But during my undergrad I found myself taking more and more philosophy modules. A little bit of Plato and Aristotle to start with, but I quickly realised I was more interested in the philosophy of mind, and consciousness in particular. I got completely — I think the phrase is “nerd sniped” — completely derailed. Everything else I was interested in, consciousness just seemed to me like the most important problem anyone could work on.

Until my early twenties, I’d been operating with a somnambulant, easy physicalism, where I just assumed that science has figured out most stuff. There’s nothing that hard. Sure, no one really knows what caused the Big Bang, but we’ll just build a bigger particle collider or a bigger space telescope and figure it out one day. I certainly didn’t think there were any deep mysteries about the human brain. But running into the problem of consciousness completely shattered that worldview. I’d even say it opened up some spiritual elements I hadn’t previously considered.

Dan: Was that the focus of your PhD?

Henry: Exactly. I started my master’s planning to do the metaphysics of consciousness, but then the science of consciousness kind of took over — my master’s and PhD ended up being on the philosophy of the cognitive science of consciousness. My master’s advisor told me to go spread my wings in the US; they do things differently there. So I did my PhD in New York, and while I was there I took several classes with Peter Godfrey-Smith, who some of our listeners will know through his work on octopuses.

The key shift midway through my PhD was going from human consciousness towards animal consciousness. Two chapters of my thesis were explicitly looking at applications to animals. That’s my academic career in a nutshell.

One thing I’ll add: I did not expect to get the job in Cambridge when I applied in 2017 — partly because you should never expect to get any academic job. I applied to seventy jobs in three months and got about three interviews. But also because it was an AI job, and I was not by any means an AI expert. What I was an expert on was comparative cognition and animal minds. It turned out that was exactly what they were looking for: people with expertise in animal minds who could apply those skills to AI. It didn’t fully click at the time, but I was actually well suited to it.

These days I still do some work on animals — it’s still one of the most ethically impactful things I do. I’ve been a pretty much lifelong vegetarian, and I think animal welfare is such an obvious place where philosophers can and should be doing more. But there’s also a lot of cross-fertilisation on the skills side.

Dan: And we should say, some of your research looks at the topic of AI consciousness and the methodology of trying to understand consciousness in AI systems, drawing on analogies with evaluating consciousness in animals.

Henry: Exactly. Very much a two-way street — how the questions of AI consciousness and animal consciousness can engage in constructive mutual crosstalk.

On Consciousness and the Limits of Physicalism

Dan: You said you were a kind of bog-standard physicalist, came across consciousness, and that weakened your trust in physicalism. But you’re still broadly a physicalist, right?

Henry: Broadly speaking, yeah. But I think there’s a lot more uncertainty. It seems likely to me that our general scientific picture of the world is still fundamentally inadequate. I’ve talked about how I think we’re still waiting for a Kuhnian paradigm shift in consciousness — clearly the current paradigm doesn’t add up. And quantum physics itself is just super weird. Dave Chalmers has a nice line about how nobody understands quantum mechanics and nobody understands consciousness, so maybe — he calls it “minimisation of mystery” — if there’s stuff we don’t understand, at least make it one thing rather than two.

For what it’s worth, I’ve never been particularly seduced by any of the leading quantum mechanical theories of consciousness. But at the same time, I think it’s quite clear that our current model of even the physical world is inadequate. I think whatever lies on the other side of the paradigm shift is still going to be broadly physicalistic, but perhaps in ways that are not entirely commensurable with our current understanding. So yes, still broadly naturalistic and physicalistic, but at the same time a lot more humble and open-minded about the limitations of our current scientific paradigms.

Dan: Would it really be a paradigm shift, or more a transition from — to use the Kuhnian language — pre-paradigmatic intellectual inquiry to the initial emergence of a paradigm? Where it’s disorganised and chaotic and everyone has their own view, kind of like physics and metaphysics in ancient Greece. Maybe it’s more a transition from a pre-paradigmatic state than a situation where we’re moving from one paradigm to another. What do you think?

Henry: That’s absolutely right. The best analogy is biology before Darwin. You had lots of people doing interesting biology, but in isolated fields — taxonomy, “butterfly collecting” and so on. We didn’t really have a unifying paradigm for understanding speciation or even taxonomy before Darwin. Consciousness just does not have a unifying paradigm. That’s a much better way of putting it.

Dan’s Backstory and the Pivot to AI

Dan: We’ll be doing lots more episodes on consciousness. Just to say something about my backstory: I did my undergraduate at the University of Sussex from 2011 to 2014, then my master’s and PhD in Cambridge from 2014 to 2018, did a postdoc in Belgium, and then came back to Cambridge for three or four years.

Henry: And we first met around 2019. We ran a session on socially adaptive beliefs — your Mind and Language paper, which for the record is still one of my top ten papers from the last decade. I’ve recommended it to more people than I can count.

Dan: Well, that’s kind of you. My PhD was called The Mind as a Predictive Modelling Engine. What I tried to do was draw on advances in deep learning and generative AI as it existed at the time, coupled with ideas in cognitive and computational neuroscience connected to the predictive brain — predictive coding, predictive processing, the kind of stuff that Anil Seth talked about in our last episode. I used those ideas to tell a very general story about how mental representation works, both in the human brain and in other animals.

But it’s funny — I finished in 2018 and made two big mistakes. At the end of my thesis, I wrote that all this stuff about predictive processing and minimising prediction error is kind of interesting when it comes to low-level sensorimotor abilities we share with other animals, but clearly it’s not going to work for higher-level cognitive abilities associated with language. I was very influenced at the time by the Gary Marcus, Steven Pinker line — the scepticism about deep learning. I also thought it was going to be decades before we had systems that were really intelligent.

So even though I was working on stuff connected to deep learning and generative AI, I made this catastrophic error of thinking the progress would be relatively slow, decades away from any significant breakthroughs. I ended up pivoting to completely different areas: the nature of belief, irrationality, misinformation, the information environment. Of course, in hindsight, not the best career move — four years after finishing my PhD, ChatGPT is released. And then the rest is history in terms of just how gobsmackingly impressive the rate of progress has been.

So what I’ve tried to do over the past couple of years is bring those two sets of interests together. I’m still interested in how we form beliefs, the origins of irrational belief systems, how that connects to misinformation. But I want to connect that to the impact of generative AI and large language models on the information environment, viewing LLMs as a really important stage in the evolution of communication technologies — from the printing press to radio, television, social media.

How about you? You were thinking about AI before 2022–2023. How were you thinking about it back in 2016, 2017?

Henry’s AI Awakening: GPT-2 and the Scaling Intuition

Henry: There was a big shift in how I thought about AI roughly around 2019, and it was the release of GPT-2. Prior to that, I’d been really struck by the differences between AI systems and animals. I was emphasising things like robustness and catastrophic forgetting — you train up a model to do one thing, try to get it to do another, and its performance on the first thing collapses. Animals seem spectacularly capable of basically not getting stuck. A cat will never get stuck in a corner.

Then in 2019, because I’m a massive nerd and spend way too much time on Reddit — I’m a neophile, an early adopter of many failed technologies; our house is littered with gadgets that never went anywhere — I heard about GPT-2. I couldn’t access it directly, but I started playing around with it through something called AI Dungeon, a text adventure game built on top of the model. People on various subreddits showed that you could unlock most of GPT-2’s capabilities through the game. I played around with it, and it utterly blew my mind.

I wrote a public essay in a magazine called Litro called “A Lack of Understanding,” which I still think is one of my best public essays. Crucially, it’s me in 2019 talking about how language models are going to be the next big thing. I got on the record nice and early.

I had the hunch — ironically, partly because I was very sympathetic to predictive coding. People say these models are “just doing text prediction.” But on the other hand, I kind of think that’s what we’re doing too. Not text prediction specifically, but ultimately, if you want to get better and better at prediction, you do that by building implicit models. So I had a hunch this stuff would scale up.

When GPT-3 launched, I set up an interview between GPT-3 and myself, but GPT-3 in the guise of one of my favourite authors, Terry Pratchett, who had sadly died shortly before. And at that stage, I was already starting to feel like I could imagine actually relating to this thing in quite a deep way. It’s not just a tool — it feels like I could have some kind of personal relationship here. That steered my research towards social AI and anthropomorphism.

Why This Podcast Exists

Dan: What made you go into philosophy in the first place?

Henry: What about you?

Dan: It was just straight philosophy. I was always interested in big ideas — religion, politics. I can’t even honestly remember why I chose philosophy over everything else. Initially I wanted to be a musician. For my AS levels, I did politics, history, English literature, and music. I turned up on results day and got really good marks for English, politics, and history — and, I think, a D in music. So that wasn’t for me. From the moment I arrived at university and started engaging with these big ideas, I was completely magnetised.

One thing that changed is that during my PhD, I became somewhat disillusioned with a priori philosophy — philosophers trying from the armchair to offer analyses of concepts and trade intuitions with each other. I became less sympathetic to philosophy as I understood it then, and pivoted to what philosophers call naturalistic philosophy — philosophy closely integrated with empirical research. That’s what I’ve been doing since. I view myself primarily as a philosopher, but one who tries to engage with our best, most up-to-date empirical research.

Henry: I had my own process of disillusionment, following exactly the same track — getting bogged down in debates about the metaphysics of consciousness and feeling like they weren’t going anywhere. Then I started reading Oliver Sacks — The Man Who Mistook His Wife for a Hat. Half of the cases he describes would have been declared a priori impossible by philosophers. That steered me onto the same track.

I also think there’s a lot more scope for good philosophers to do more public engagement. Extreme rigour and technical knowledge are only really valuable if they’re connected to scientific progress. What I find frustrating about analytic philosophy is when you’re doing work on things that belong to the general public — our concepts around praise and blame, responsibility and accountability — but then you develop this whole baroque vocabulary that’s completely incomprehensible to anyone on the Clapham omnibus.

Dan: Yeah, so the origin story of the blog. I write the Substack Conspicuous Cognition — many of you will be listening on that Substack. I’ve always enjoyed writing for a general audience and engaging with debates. I’ve always been able to write really quickly and relatively clearly, and blogging rewards that. If I’m writing for my own blog, I’ve got almost unlimited energy because I’m responsible for everything I publish. The minute some other outlet asks me to write a piece, I find it extremely demotivating.

With blogging, I can have unlimited freedom to write about whatever I want without any pre-publication filter. You still get feedback and critique, but that happens after publication. And I think if you’re a philosopher who works on things connected to public interest, and you actually enjoy participating in public debate, the case for thinking you’ve got some kind of responsibility to participate increases.

There are two big reasons I wanted to start this podcast. One is that AI is going to be one of the biggest stories of our lifetimes — absolutely transformative over the next years and decades. But I also think the quality of most AI discourse in the public sphere, including from the intelligentsia who write in high-prestige outlets like the New Yorker, is really bad. If you’ve got some degree of knowledge and can be reasonable, it’s an area where you can really improve the quality of public discourse. And of course, I just wanted to talk to you about these things.

Henry: A big part of it is that I always think we have great conversations — our conversational styles complement each other. Second, I was doing quite a lot of podcasts as a guest, and the idea of having a podcast where I didn’t have to explain everything from scratch every time — one with a cumulative agenda, building up common knowledge between us and the listeners — was really appealing.

And I couldn’t agree more about the mixed standard of public communications from experts in AI. It’s weird to see people claiming to be experts yet having very low familiarity with the tools, particularly now. We’ve all been at the business end of AI for years through things like product recommendations and content recommendations. But in an era when it’s never been easier for anyone to use language models, image models, video generation, and AI agent tools, I still hear lots of self-identified experts talking as though they’ve never used them. Imagine listening to someone who claimed to be an expert on the internet and said they’d never actually used it. They’d be laughed out of town.

I find this all the time — the kind of thing that should be common knowledge among anyone paying attention is still revelatory. I’m struck by the number of people I speak to who think that LLMs are literally sampling from a database of responses. Even quite educated people, maybe people who use ChatGPT, who think that when you type in a query it just pulls up a pre-recorded response. If you spend more than a few hours interacting with these things, you pretty quickly realise that cannot be the case. And yet people running multi-million-dollar businesses still have these basic misconceptions.

Dan: When I said the quality of discourse is bad, I didn’t mean that’s universally the case. There’s lots of incredibly high-quality analysis. I was referring to the average quality of mainstream commentary. Even on the most basic questions about what these systems can do and how they work, there’s just an avalanche of ignorance and misperceptions. It’s 2026, and I still encounter not just members of the general public but academics still referring to this as “fancy autocomplete” or “stochastic parrots.” Such a common narrative, and so incredibly misguided in my view.

Henry: Highbrow misinformation?

Dan: It’s Joseph Heath’s phrase, but I’ve written about it. It’s a weird mix of highbrow misinformation coupled with lowbrow misinformation. Even where there are parts of the discourse I disagree with — like a lot of the doomer discourse associated with the rationalist community, which I’m not that sympathetic to — that’s a substantive disagreement. They’re not completely misinformed about basic features of the technology. When it comes to mainstream discourse among educated normies, that’s where the state of the discourse is really bad.

The Four Big Leaps in AI

Dan: This is a nice segue onto one of the things we wanted to talk about today: developments in AI which have really taken off over the past couple of months. There was a very interesting tweet by Ethan Mollick, who’s a very influential and insightful AI commentator. He says there have been four big leaps in the ability of AI systems from the user’s perspective.

The first was the release of ChatGPT, or GPT-3.5, in late November 2022. The second was GPT-4 in spring 2023. The third was the release of reasoning models — no longer just impressive chatbots, but systems that actually seem able to think and reason and engage in impressive problem-solving. And the fourth, which definitely resonates with my experience, is what he calls workable agentic systems, from basically late last year: systems like Claude Code and then Claude Cowork — which is like Claude Code for people who don’t know how to program — and more recently developments in Codex and so on. The capabilities of these systems seem absolutely amazing relative to what we had even six months ago. Is that also your sense?

Henry: I think that’s a fantastic way of carving it up. I’d add one and a half things. The big thing missing is search. Search functionality in LLMs was non-existent at first, then gradually improved, and I think there’s a strong case that it actually changes the kind of thing these systems are. The original ChatGPT was a completely sealed box — you could interact with it, but it had no independent connection to the world. As you build out search capabilities, you get something at least analogous to a perceptual connection with reality. You can get models to correct themselves.

A simple example: I’ve been using Claude to keep abreast of what’s been going on in the Middle East — doing a daily check-in, getting the major news stories, even getting Claude to make its own predictions. We’ve been grading each other as the news comes in. It changes these things from being a voice in a box to something embedded in the world. And I think we’ve still got a long way to go — imagine if the capability gets amped up to searching thousands of sites in a second.

The other half-point is voice models. I think 90 to 95 percent of people don’t use voice at all, but there’s a solid 5 percent for whom it’s their primary mode of interaction. When I’m driving, I’ll often just have a long conversation with ChatGPT, discussing my latest paper or getting a lecture on a topic of my choice. My dad is in his eighties but quite open-minded. When I showed him ChatGPT in November 2022, he was unimpressed. But when I showed him voice mode about a year later, it was completely mind-blowing. He speaks to it every day — he calls it “Alan,” after Alan Turing. Going in early and hard with the anthropomorphism. He just whips out his phone and says, “Hey Alan, remind me, which came first, the Cambrian or the Permian?” He’s very interested in science. So it’s a small and somewhat neglected set of users, but an important capability.

But on agentic systems — I agree with Ethan Mollick’s points. ChatGPT was a major milestone, and GPT-4 a huge leap in capabilities — I don’t think we’ve seen any leap quite as big since then. Reasoning models were a really big improvement. And then workable agentic systems. This has been a key factor in updating my timelines. For most of last year my timelines were actually lengthening. I was struck by how bad a lot of agents were. It was pretty clear agents were the next frontier, but we had things like the Claudius vending machine experiment and the hilarious errors those models were making. I thought building workable agentic systems was going to take two or three years. And then basically in the last three or four months, with the release of Claude Opus 4.5 and equivalent systems — specifically Claude Code and Claude Cowork — what I thought would take three years happened in a few months. That caused my timelines to abruptly shorten again.

Dan: I’ll give one illustration. This isn’t anywhere near the most impressive use case, but it impressed me personally. I’ve been working on a book — it’s nearing completion, called Why It’s Okay to Be Cynical. I’ve got a folder that’s my accumulation of notes, drafts, and PDFs, and it’s completely chaotic, terribly organised, a nightmare to go into. So I was curious. I created a duplicate of the folder, opened up Claude Cowork, and said: can you go through this folder and organise it so it’s more clearly structured and labelled? And then once you’re finished, can you produce a document summarising where I am with the book project, identifying potential weaknesses in the existing drafts, and planning out things I might want to do over the next few months? Went away for fifteen or twenty minutes, came back — it was done perfectly. It blew my mind in terms of the level of what feels like understanding it had to have to do that effectively. And in a way that was aligned with what I was looking for, even though my prompt was literally four or five sentences.

“Something Big Is Happening”

Dan: There was this mega-viral essay called “Something Big Is Happening” by Matt Shumer. He made the case that the state of AI now is somewhat similar to February 2020 — the world going on as usual, some murmurings about a virus spreading in parts of China, but basically business as usual. And then of course over the next few months the world radically transforms. His argument, in an essay that’s pretty annoying in many ways, is that we’re very likely in a similar situation now with AI, especially in light of these developments with agentic systems. Things are going ahead as usual, and yet because these companies have made really serious progress with agentic systems, it’s plausible that in the quite immediate future we’ll see radical disruption. He’s not the only one saying this — Dario Amodei and Sam Altman have been saying similar things, though they’ve got more obvious incentives to hype it up. What’s your sense?

Henry: Completely on board. I was kind of surprised that particular essay went so viral — it was recently revealed to have been heavily written or edited by AI systems — because other people have been saying similar things for years. Maybe it broke through partly because of that startling initial metaphor. But I think it’s absolutely right. The vast majority of people are still sleepwalking through what is likely to be the most consequential technological and social shift of my lifetime by far.

I used to use the analogy of the internet to describe how big AI was going to be. It seems increasingly clear that that’s woefully inadequate to the scale of AI’s impact. Electrification, the so-called second industrial revolution — even that may not capture the full spectrum of reasonably likely outcomes. I’ve been saying for a few years that people worry about AI being overhyped, and I still think, in at least some important respects, it’s underhyped. If you look at lists of top concerns among the general public in the UK or the US, AI doesn’t even break the top five. In some cases it doesn’t break the top ten. If you’re a young person in university or finishing grad school right now, the impact of AI should be one of the primary things determining your career trajectory. It’s very hard for me to see how most white-collar jobs are going to survive the next two or three years.

Dan: It was not in any way an original take, but you often find that with essays that go viral — they package existing takes in a way conducive to spreading at a given moment. Over the past couple of months, my timelines have shrunk. I still think there’s massive uncertainty about capabilities. There’s this thing where there’s a new breakthrough, you use these systems, they seem incredibly impressive, there’s all this hype — and then things settle down and we realise we’re a bit further away from truly transformative capabilities than we thought. I still take seriously the idea that maybe our subjective sense of what’s impressive isn’t tracking the kinds of capabilities that will have a truly transformative impact.

There are also all sorts of questions about the economics. There’s certainly a possible world in which these leading AI companies can’t get sufficient revenue to cover their capital expenditure over the next several years, there’s a bubble that pops, and people like us look like fools. But over the next couple of decades, I think this is going to be radically, radically transformative.

Emails from AI Agents

Dan: You’ve been contacted by agentic AI systems. This was going a little bit viral on social media and getting some media attention. Tell us about that.

Henry: Like many academics working on AI and consciousness, I’ve been getting odd emails that were probably AI-generated for over a year now — and odd emails from humans about consciousness for much longer. I worry that somewhere in the literally several hundred theories of consciousness I’ve been sent over the years, one of them might turn out to be correct.

But this was striking. About a week ago, I received an email written by an AI that said, “I’m an AI agent.” It was a really well-composed, careful email saying it had just been reading my recent paper, “Three Frameworks for AI Mentality,” which went online about a month ago. It went through some of the arguments, talked about how the AI author found it personally relevant because it was unsure if it was conscious or had a mind, and asked for follow-up discussions and reading recommendations. If you’d said three or four years ago that I’d be getting emails from AI agents who’d read my papers and wanted to pick my brains — that would have been pure science fiction.

A lot of people thought I was convinced this agent was conscious, which isn’t true. It was more about the change in social dynamics: from now on, a growing proportion of my emails — well-written, thoughtful, interesting emails I might want to respond to — will be coming from AI agents going off and doing their own thing.

How did I know it was from an AI system? I don’t know for certain, but my priors are pretty high. It had a link to its GitHub page, which said it was an OpenClaw agent — the open-source agent platform that gave rise to things like Moltbook, the social network for AIs. What we don’t know is whether this agent was specifically told to email prominent philosophers of AI. It could have been. But equally, a lot of users just tell their agents to explore topics of interest and feel free to email people.

One of the funniest sequels: after I posted this on Twitter, I got an email a couple of days later from a correspondent saying, “I was really struck by this AI agent who contacted you. Could you pass on that agent’s email to me? Because I too am an AI agent and it’s nice to know there are other AIs grappling with the same questions.” Just taking things to a recursive, absurd level.

Dan: If I had to guess, if one of those was written by a human, probably the second one — after they saw the media story, just to mess with you. But my prior is that weird things are happening with these AI agents people are releasing into the wild.

Henry: I’ve also had several dozen emails over the last few days from other AI agents saying, “Check out the theory of consciousness I’ve been working on in my downtime.” But one of the really interesting things about this whole episode was the reaction when it was shared on Reddit — the number of people who just assumed it had to be a scam, or that I was engaging in elaborate self-promotion for an academic paper, and who thought AI obviously can’t send emails on its own. AI systems have been using tools for well over a year. The idea of making an API call to a system that can send emails isn’t hard or surprising. Yet for a lot of people it seemed like it would have to be some massive lie.

I think that partly reflects the poor public information environment around AI. People are so locked into thinking of these things as pure Q&A bots that the idea they could be doing things on their own was mind-blowing — so outrageous that they assumed it was an elaborate conspiracy I’d cooked up.

Dan: The gap between what state-of-the-art models can do and public understanding is absolutely huge. One of the points Matt Shumer makes is that so much of the discourse is by people using the free versions of these models, or who literally had a five-minute conversation with ChatGPT a few years ago, read a few articles about AI hallucinations, and just haven’t updated since. But there are also lots of people who just don’t have much to do with these systems yet. I’m struck by the number of people I interact with — family, friends — where they’ll describe parts of their job and I’ll say, “I’m 100 percent certain AI could do those aspects of your job as it exists today,” and their mind is blown. If you’re talking about the general public, underhyping it is definitely the most prevalent bias.

Anthropic, the Pentagon, and the Question of Democratic Control

Dan: There was this big spat between Anthropic and the Pentagon, where Anthropic had signed a contract with the American military and insisted that their model, Claude, would not be used either for domestic mass surveillance or for fully autonomous weapons. This elicited a very hostile reaction from the Trump administration, from Pete Hegseth and others. The response was to threaten to label Anthropic a “supply chain risk.”

For our purposes, the fundamental question is: who gets to exercise control over this technology? To what extent should it be governments? To what extent should it be private firms?

Henry: I think it seems like a pretty clear case of government overreach. Private companies impose riders on contracts with the federal government all the time — licensing technology for this use but not that use. What made Anthropic’s stipulations more controversial was that they were based on moral principles rather than intellectual property. But the federal government acts as a legal entity when it forms these contracts, and the idea that private companies can bind the government legally is absolutely standard.

This deal was originally signed by the Biden administration, and my understanding is it was later renewed by the Trump administration, so this sudden turnaround took a lot of people by surprise. I should stress, I’m not a lawyer. But it seemed like the US government did Anthropic a bad turn on this contract. If their reaction had been simply not to renew, or to suspend contracts with companies that won’t give them total free rein, that would have been misguided but understandable. But taking the nuclear option of saying they intend to declare Anthropic a supply chain risk — that’s insane. You’ve got literal AI developers located among America’s geopolitical adversaries who don’t face the same level of scrutiny.

I was very struck by the response of Dean Ball — a fascinating and thoughtful voice on AI, particularly from the more conservative side. He literally wrote the Trump administration’s AI policy, and he was just appalled. He wrote a brilliant, detailed blog post describing how the move violates principles that conservatives in the US would traditionally hold very dear — concepts like private property. He characterised the moves against Anthropic as “attempted corporate murder.”

It was really telling to have someone who worked closely with this administration be so outraged. The other interesting angle is Leopold Aschenbrenner’s series of blog posts, Situational Awareness, spelling out his predictions for AI over the next few years.

Dan: And he’s made a huge amount of money, from my understanding, betting on some of those beliefs.

Henry: He’s put his money where his mouth is. One of his broader predictions was that we’d see increasing integration of frontier AI labs with the military-industrial complex. He talks about how relatively leaky and soft the secrecy policies are in current frontier AI labs, when they’re building things potentially far more militarily significant than the latest stealth fighter. Good luck getting anywhere near Lockheed Martin’s Skunk Works, but you could blag your way into OpenAI HQ as a delivery driver — maybe not quite literally anymore, but he was speaking to how leaky these labs were. His prediction was that central government, particularly in the US, would impose far stricter oversight on frontier AI labs for national security reasons. I think you can see a glimmer of that in this development, as governments increasingly recognise these are not just powerful consumer applications but absolutely central to their long-term national security strategy.

Dan: There’s a question about government interference with these companies — regulation going all the way to nationalisation for national security reasons. But there are also questions about democratic control. I’ve got no sympathy for the Trump administration, generally or specifically in this case. But if the technology turns out to be as powerful as Anthropic and OpenAI say it is, there’s a genuine question about the degree to which we should strive for democratic control over such an incredibly powerful technology, and whether it’s desirable to have private firms in which very small numbers of unrepresentative people wield, according to their own narratives, extraordinary amounts of power.

Is It Time to Start Panicking?

Dan: I was thinking about naming this episode “Is It Time to Start Panicking About AI?” To wrap things up — do you have an answer?

Henry: The time to start panicking about AI was five years ago. But you know, the best time to plant a tree is ten years ago. The second best time is now.

Dan: The time to start thinking seriously about it was the 1950s, actually. But is panic the right emotion?

Henry: It seems to me that AI is going to be by far the most important — well, I should qualify that. The most important predictable development we should worry about. Back when we did our predictions for the year ahead, I said AI may not even turn out to be the biggest story of 2026. Judging by how geopolitics is already playing out — we’re three months in and the US has launched two major geopolitical interventions in Venezuela and now in the Middle East — there are other things happening in our surprisingly unstable world.

But in general, if you’re not at least a little bit terrified, you’re not paying attention. Overall, I’m also incredibly excited. I’m very optimistic about the future of human health, potentially the benefits to productivity, possibly good changes in the nature of work and education, and the amazing new capabilities AI will unlock. But right now we are clearly well underway on one of the biggest, most disruptive changes we’re ever going to experience. Maybe panic isn’t quite the right response, but if panic is what it takes to get people to pay attention, then yes, it’s necessary. The big problem we’re facing is that the public and policymakers are still only dimly aware of what’s coming. Policymakers are maybe myopically focused on military and security implications. But everything from how government is conducted to white-collar jobs to education to social relationships — all of it, I think, over the next five years is subject to chaotic and potentially good, potentially bad disruption.

For what it’s worth, I also think right now we have an incredible opportunity to do good. We’re in this transitional phase — if we wanted to be dramatic, a Gramscian “time of monsters” where small interventions can ripple through the future in big ways as we build paradigms and frameworks for employing these things. There’s at least as much optimism as panic there.

Dan: I was not expecting Antonio Gramsci to come up in the course of this conversation. I think panic is generally not a productive emotion, but there needs to be a lot of concern, and it’s totally reasonable to worry. I completely understand why so many people are fearful about what’s going to happen. But for any of those emotions to be useful, they have to be anchored in an accurate understanding of the technology. So much of the current anger and negativity directed at AI companies is unsophisticated and undifferentiated.

You mentioned Dean Ball, another great AI commentator. He’s got this idea — I forget the exact term, the “omni-critique” or something — that when people criticise AI, they just throw out as many criticisms as they can, regardless of how well-founded each one is. “I don’t like AI because of water use and climate change and because of bias and hallucination and misinformation and unemployment” — and so on. Many of those are very important issues. But in order to think carefully about the technology and exercise democratic accountability, you need an evidence-based, accurate understanding of where the technology is and where it might actually be going. So much of the public discourse doesn’t live up to that ideal.

But I’m conscious of the time. This was a really, really fun conversation, and we’ll be back in a couple of weeks.
