We are joined by Anil Seth for a deep dive into the science, philosophy, and ethics surrounding the topic of AI and consciousness. Anil outlines and defends his view that the brain is not a computer, or at least not a digital computer, and explains why he is sceptical that merely making AI systems smarter or more capable will produce consciousness.
Anil Seth is a neuroscientist, author, and professor at the University of Sussex, where he directs the Centre for Consciousness Science. His research spans many topics, including the neuroscience and philosophy of consciousness, perception, and selfhood, with a focus on understanding how our brains construct our conscious experiences. His bestselling book Being You: A New Science of Consciousness was published in 2021. He is the English-language winner of the 2025 Berggruen Prize Essay Competition for his essay “The Mythology of Conscious AI”, which develops ideas in his recent article, “Conscious Artificial Intelligence and Biological Naturalism.”
Topics
What we mean by “consciousness” (subjective experience / “what it’s like”) vs intelligence
Whether general anaesthesia and dreamless sleep are true “no consciousness” baselines
Psychological biases pushing us to ascribe consciousness to AI
How impressive current AI/LLMs really are, and whether “stochastic parrots” is too dismissive
Whether LLMs “understand”, and the role of embodiment/grounding in genuine understanding
Computational functionalism: consciousness as computation + substrate-independence, and alternative functionalist flavours
Main objections to computational functionalism
Whether the brain is a computer
Simulation vs instantiation
Arguments for biological naturalism
Predictive processing and the free energy principle
What evidence could move the debate
The ethics surrounding AI consciousness and welfare
Transcript
(Please note that this transcript is AI-edited and may contain minor errors).
Dan Williams: Welcome back. I’m Dan Williams, back with Henry Shevlin. And today we are honoured to be joined by the great Anil Seth. Anil is one of our most influential and insightful neuroscientists and public intellectuals, working on a wide range of different topics, including the focus of today’s conversation, which is consciousness — and more specifically, the question of AI and consciousness.
Could AI systems, either as they exist today or as they might develop over the coming years and decades, be conscious? Could they have subjective experiences? In a series of publications that have been getting a lot of attention from scientists and philosophers, Anil has been defending a somewhat sceptical answer to that question, arguing that consciousness might be essentially entangled with life — with biological properties and processes of living organisms — which, if true, would suggest that no matter how intelligent AI systems become, they would nevertheless not become conscious. He’s also argued that the consequences of getting this question wrong in either direction — attributing consciousness where there is none, or failing to attribute consciousness when there is — are enormous: socially, politically, morally.
So in this conversation, we’re going to be asking Anil to elaborate on this perspective, see what the arguments are, and generally pick his brain about these topics. Anil, maybe we can start with the most basic preliminary question in this area: when we ask whether ChatGPT is conscious, or any other system is conscious, what are we asking? What’s meant by consciousness there?
Anil Seth: Well, thanks, Dan. Let me first say thank you for having me on — it’s a great pleasure to be chatting with you, my Sussex colleague Dan, and my longtime sparring partner about these issues, Henry. I’m very much looking forward to this conversation.
I think you set it up beautifully. It’s a deep intellectual question which involves both philosophy and science, and it’s a deeply important practical question, because the consequences of getting it wrong either way are very significant.
You’re also right that the first step is to be clear about what we’re talking about. For a while, there was this easy slippage where people would talk about AI and intelligence and artificial general intelligence — which is supposedly the intelligence of a typical human being — and then slide to sentience and consciousness. These terms get run together very easily, but I think they’re very different. That’s the first clarification.
Consciousness is notoriously resistant to definition, but colloquially it’s also extremely familiar and easy to get a handle on. As you said: any kind of subjective experience. Any kind of experience — we could be even briefer. Unpacking that just a little: it’s what we lose when we fall into a dreamless sleep, or more profoundly under general anaesthesia. It’s what returns when we wake up or start dreaming or come around. It’s the subjective, experiential aspect of our mental lives.
People talk about it by pointing at examples — it’s the redness of red, the taste of a cup of coffee, the blueness of a sky on a clear day. It’s any kind of experience whatsoever. Thomas Nagel put it a bit more formally fifty years ago now: for a conscious organism, there is something it is like to be that organism. It feels like something to be me, but it doesn’t feel like anything to be a table or a chair. And the question is: does it feel like anything to be a computer or an AI model or any of the other things we might wonder about? A fly, a brain organoid, a baby before birth. There are many cases where we can be uncertain about whether there is some kind of consciousness going on.
And that’s very different from intelligence. They go together in us — or at least we like to think we’re intelligent. But intelligence is fundamentally about performing some function. It’s about doing something. And consciousness is fundamentally about feeling or being.
Dan Williams: Just to ask one follow-up about that. This idea that intelligence is about doing and consciousness is about what it’s like to have an experience — someone might worry that if you frame things that way, you end up quite quickly committing to a kind of epiphenomenalism. Because if we’re not understanding consciousness in terms of what it enables systems to do, the sorts of functions they can perform, isn’t there a risk that right from the outset we’re going to be biased in the direction of treating consciousness not as something that evolved because it conferred certain fitness advantages on organisms, but as this sort of mysterious qualitative thing which is distinct from what organisms can do?
Anil Seth: I think it’s a good point to bring up, but I don’t think it’s too much of a worry. The point is not to say that consciousness cannot or does not have functional value for an organism. If we think of it as a property of biological systems — plausibly the product of evolution, or at least the shape and form of our conscious experiences are shaped by evolution — it’s always useful to take a functional view. Conscious experiences very much seem to have functional roles for us, and there’s a lot of active research about what we do in virtue of being conscious compared to unconscious perception.
So there’s no worry about sinking into epiphenomenalism. The point is more that intelligence and consciousness are not the same thing, but they can nonetheless be related. And it may be that they can be completely dissociated. It may be the case that we can develop systems that have the same kinds of functions that we have in virtue of being conscious, but that do not require consciousness — just as we can build planes that fly without having to flap their wings. The functions might be multiply realisable; they might be doable in different ways. They might not be, of course.
On the other hand, it might be possible to have systems that have experiences but aren’t actually doing anything useful. Here I’m worried less about AI and more about other emerging technologies, neurotechnology and synthetic biology, where people are building little mini-brains in labs constructed from biological neurons. They don’t really do anything very interesting, but because they’re made of the same stuff, I think it’s hard to rule out that they may have some kind of proto-consciousness going on, or at least plausibly be on a path to consciousness. So we can tease intelligence and consciousness apart, but it’s also important to realise how they are related in those cases where both are present.
Henry Shevlin: I’ll jump in with a minor pedantic point, but one that’s illustrative of some of the problems in debates around consciousness. You mentioned, Anil, as examples of losing consciousness, dreamless sleep and general anaesthetic. But both of those are contested. Your fellow biological naturalist Ned Block has raised serious doubts about whether general anaesthetic really eliminates all phenomenal consciousness. And there are those like Evan Thompson who have suggested that even in dreamless sleep there could be some residual pure consciousness, perhaps consciousness of time. I think this is a broader problem in the science of consciousness: we can’t even clearly agree on contrast cases. A lot of the blindsight cases that were supposed to be gold-standard cases of perception without consciousness are now contested, and it seems very hard to get an absolutely unequivocal case of something that’s not conscious in the human case.
Anil Seth: Well, I mean — death.
Henry Shevlin: I don’t know. You have some people who disagree, admittedly on more spiritual grounds.
Anil Seth: Yeah, but I want to push back a little. It is hard, but I don’t think it’s as hard as some people might suggest. Sleep is complicated, which is why I tend to also say anaesthesia. Sleep is very complex. In most stages of sleep, people are having some kind of mental content. We might typically think we only dream in rapid eye movement sleep, and the rest of the time it’s dreamless and basically like anaesthesia. This is not true. You can wake people up all through the night at different stages of sleep, and quite often they will report something was going on. So it’s hard to find stages of sleep that are truly absent of awareness in the way we find under general anaesthesia.
We notice this: when we go to sleep and wake up, we usually know roughly how much time has passed. We may get it wrong by an hour or two if we’re jet-lagged or sleep-deprived, but we roughly know. Under anaesthesia, it’s completely different. It is not the experience of absence — it’s the absence of experience. The ends of time seem to join up and you are basically turned into an object and then back again.
The residual uncertainty about general anaesthesia depends on the depth of anaesthesia. Some anaesthetic situations don’t take you all the way down, because in clinical practice you don’t want to unless you absolutely have to. But if you take people to a really deep level, you can basically flatline the brain. I think under those conditions, with the greatest respect to Ned Block — who is very much an inspiration for a lot of what I think and write about — that’s as close as we can get to a baseline of no consciousness in a system that is still alive.
Henry Shevlin: Although it is standard to administer amnestics as part of the general anaesthesia cocktail, which might make people suspicious. You’re told: we’re also going to give you drugs that prevent you forming memories. Why would you even need to do that if it was unequivocal that you were just completely unconscious in that period?
Anil Seth: Well, because it’s never been unequivocal to anaesthesiologists. There’s been this bizarre separation of medicine from neuroscience in this regard until relatively recently. From a medical perspective, there are cases where they don’t always administer a full dose — so it’s an insurance policy. There have been a number of purely scientific studies of general anaesthesia and conscious level, and in those studies, it’s a good question whether they also administer amnestics. I would imagine not, but I’m not sure.
Dan Williams: Okay, to avoid getting derailed by a conversation about general anaesthesia — when we ask whether a system is conscious, we’re asking: is there something it’s like to be that system? We’re not asking how smart it is, we’re asking about subjective experience. Before we jump into your arguments on the science and philosophy of this, Anil, you’ve also got interesting things to say about why human beings might be biased to attribute consciousness, especially when it comes to systems like ChatGPT, even if we set aside the question of whether it in fact is conscious.
Anil Seth: Yeah, I think this is the first thing to discuss. Whenever we make judgements about something where we don’t have an objective consciousness meter, there is some uncertainty. It’s going to be based on our best inferences. And so we need to understand not only the evidence but also our prior beliefs about what the evidence might mean. This brings in the various psychological biases we have.
The first one we already mentioned: it’s a species of anthropocentrism — the idea that we see the world from the perspective of being human. This is why intelligence and consciousness often get conflated. We like to think we’re intelligent and we know we’re conscious, so we tend to bundle these things together and assume they necessarily travel together, where it may be just a contingent fact about us as human beings.
The second bias is anthropomorphism — the counterpart where we project human-like qualities onto other things on the basis of only superficial similarities. We do this all the time. We project emotions into things that have facial expressions on them. And language is particularly effective at this. Language as a manifestation of intelligence is a very strong signal: when we see or hear or read language generated by a system that seems fluent and human-like, we project into that system the things that in us go along with language, which are intelligence and also consciousness.
The third thing is human exceptionalism. We think we’re special, and that desire to hold on to what’s special leads us to prioritise things like language as especially informative when it comes to intelligence and consciousness. In a sense, this is a legacy of Descartes and his prioritisation of rational thought as the essence of what a conscious mind is all about and what made us distinct from other animals. That’s echoed down the centuries despite repeated attempts to push it away.
There’s a good Bayesian reason for this too: in pretty much every other situation we’ve faced, if something speaks to us fluently, we can be pretty sure there’s a conscious mind behind it — whether it’s a human being recovering from brain injury or perhaps a non-human primate using language. These are strong signals. So this might be the first time in history where language is not a reliable signal, because we’re not dealing with something that has the shared evolutionary history, the shared substrate, the shared mechanisms. It’s a different kind of thing.
So that’s one set of biases. We can think of it as a kind of pareidolia. Our minds work by projecting, seeing patterns in things — whether it’s faces in clouds or minds in AI systems. These priors are generally useful, but they can mislead.
Henry Shevlin: It’s not just pareidolia though, is it? Setting aside consciousness for a second, in terms of what we might loosely think of as cognitive abilities — the whole range of benchmarks for reasoning, understanding, and so forth — the performance of these systems on a huge range of tasks has skyrocketed to the point where people talk about approaching coding supremacy, for example. AI can now produce pretty decent fiction. It can do a whole range of verbal reasoning tasks at human-level performance. So it’s not entirely pareidolia at the level of AI cognition. Or would you disagree?
Anil Seth: At the level of cognition, I kind of agree, but as always, Henry, I only partly agree. I think we can still overestimate. It’s useful here to separate what Daniel Dennett might have called the intentional stance — where it’s useful to interpret something’s behaviour as engaged in the kind of cognitive process we might be familiar with in ourselves, as thinking, understanding, reasoning. These systems are described this way too, as “chain of thought” models and so on. I still think we overestimate the similarity. Through the surface veneer of interacting through language or code, there’s a tendency to assume that because the outputs have the same form, the mechanisms underneath are more similar than they really are.
There’s another really foundational question here for language models in particular, which is whether they understand. One of the things I hadn’t really thought about before the last few years is that consciousness and understanding might also come apart. I’m used to distinguishing consciousness from intelligence, because there are clear examples where you can have one without the other. But I’d always implicitly assumed that understanding necessarily involves some kind of conscious apprehension of something being the case — grokking something. And now I’m not so sure. That might be another case of anthropocentrism.
I’d be fairly compelled by an argument that language models — especially if they are embodied in a world and perhaps trained while embodied, so that the symbol manipulation their algorithms engage in has some grounding — may be truly said to understand things, but still without any connotation of consciousness. So yes, I kind of agree, but even now I’d be resistant to say that language models truly understand. I think that’s still a form of our projecting. But the criteria for a language model to truly understand seem more achievable — I can see how it could be achieved under a relatively straightforward extrapolation of the way we’re going — compared to something like consciousness.
Dan Williams: Can I ask a question about that? These arguments we’re going to focus on are targeted at consciousness in AI systems. And as we said, you want to draw a distinction between intelligence and consciousness. But before we get into issues of consciousness, when we’re just focusing on the capabilities of these systems — what they can actually do — there are some people who are very dismissive, even setting aside consciousness. They’re just “stochastic parrots,” engaged in a kind of fancy auto-complete. What’s your view about those kinds of debates? Someone might agree with you that it’s a mistake to attribute human-like intelligence to these systems — they’re very alien in their underlying architecture — but they’re maybe even super-intelligent along certain dimensions, even more impressive than human beings. So where do you sit?
Anil Seth: Somewhere in the middle — it’s always a comfortable or uncomfortable place to be. But they are astonishing. Whenever this question comes up, I’m always reminded that I did my PhD in AI in the late 1990s, finishing in 2001. The situation was totally different then. We were still thinking about embodiment and embeddedness, especially here at Sussex, and some of the more in-principle limitations. But the practical capabilities of AI back then were just — there was nothing really to write home about. That’s changed so much. That’s why conversations like this now have real practical importance in the world.
AI is super impressive. I don’t see it as a single trajectory, though. I think there’s a meta-narrative we often fall into, which is that intelligence is along a single dimension — plants at the bottom, then insects, then other animals, then humans in a kind of scala naturae, the great chain of being — and then there’s angels and gods, and AI is travelling along this curve and at some point it’s going to reach human-level intelligence and then shoot past to artificial super-intelligence. I think this is a very constraining way to think of it.
It’s already the case, and has been for a long time, that AI has been better than humans at many things. But it’s always been very narrow. What we’ve seen through the foundation model revolution is the first kind of semi-general AIs — language models are good at many things, not good at everything, but good at many things rather than just one. But I still think they’re exploring a different region in the space of possible minds. They may soon be better than humans at many things, but they’ll still be different from us.
I think it’s important to recognise that, because we get into all kinds of trouble if — to use a beautiful metaphor from Shannon Vallor’s book about the AI mirror — we think of AI systems as just alternative instantiations of human minds that are either a little bit weaker or much stronger. Then we misunderstand both the systems and ourselves, and miss opportunities for how we can develop AI technologies so that they best complement our own cognitive capacities.
Dan Williams: Let’s go back to the consciousness issue. As you said, one reason you might think AI systems are or could be conscious is because of these cognitive biases. Another reason is you might hold a sophisticated philosophical view called computational functionalism. Can you say a little about how you understand computational functionalism and why it might commit you to the view that conscious AI is at least possible in principle?
Anil Seth: Yeah. So my understanding of computational functionalism is that it’s really an assumption you need in order to get the idea of conscious AI off the ground. It’s the idea that consciousness is fundamentally a matter of computation — and this computation is the kind that can be independent of the particular material implementing it. To put it another way: if you implement the right computations, you get consciousness. That’s sufficient.
That means if you can implement those computations in silicon, that’s enough. You could implement them in some other material — that would also be enough. It’s the computation that matters. The material underlying it is only important insofar as it’s able to implement those computations. And silicon is very good at implementing a certain class of computations — what we call Turing computations. So that makes it a good candidate for consciousness if computational functionalism is true. And that’s what I think is a big “if.” It seems a very natural assumption. But first let me ask you — does that resonate with your understanding of computational functionalism?
Henry Shevlin: I completely agree with that characterisation. Computational functionalism says mental states are individuated by their computational role. The only thing I’d push back on is that computational functionalism is one road to concluding that AI can be conscious, but there are other types of functionalism out there. My response to your BBS paper emphasises this.
Psychofunctionalism — apologies to listeners, the terminology does get messy — says we should individuate mental states not necessarily in terms of computational processes, but in terms of whatever functional roles those mental states play in our best scientific psychology. Ned Block is a big fan of this view. The view I’m partial to is analytic functionalism, which is the functionalist take on behaviourism: mental states should be individuated by everyday folk psychology. A belief is something we all sort of know what it is, because we can characterise people as having beliefs, forming them, losing them. Once you formalise this tacit knowledge, that gets you to a theory of what beliefs are.
Those views could overlap with computational functionalism, but it’s not necessary to endorse it to think AI is conscious. If you’re an analytic functionalist, you might think that if AI adheres sufficiently closely to the platitudes of everyday folk psychology — they believe like us, they form goals, they have hopes and aspirations — then of course they can be conscious, even if you think brains are not computers, even if what brains do is not a computational process and what AI systems do is. Because both fit the same functional-behavioural profile, they might both count as conscious.
Anil Seth: That’s quite a wrinkle — I’d say a massive fold. I completely agree that computational functionalism is a specific flavour of a broader set of functionalist views. Part of the problem has been that people assume all these views are equivalent, and they really aren’t.
Functionalism, as I understand the original version, just says that mental states are the way they are because of the functional organisation of the system. But that can include many things — the internal causal structure, many things not captured by an algorithm. An algorithm is in the end determined by the input-output mapping between a set of symbols. Functionalism in general can mean many other things. You could be a signed-up, subscription-paying functionalist and still disagree with computational functionalism, which is a much more specific claim about everything that matters about the brain being a matter of computation.
I’d also worry a bit about your view, Henry, which seems a little behaviourist. If you’re saying that behaving the same way and having the same kinds of beliefs are sufficient conditions — well, computational functionalism at least has the merit of specifically stating conditions for sufficiency. If you’re saying the same about folk-psychological criteria, I think you’re open to all the problems of the psychological biases we discussed. It’s a position that’s going to be much more open to false positives, because there are so many ways of things looking as if they have the kinds of beliefs and goals that go along with consciousness in us, but which need not go along with consciousness in general.
But back to the point: computational functionalism is this specific claim, grounded on the idea that the computation is what matters. And it’s also grounded on the idea that even in biological brains, it’s the computation that matters — and if you can abstract that computational description and implement it in something else, you get everything that goes along with the real biological brain.
Dan Williams: So roughly speaking, functionalism is the view that what matters for consciousness is not what a system is made of, but what it can do. And computational functionalism is the view that what matters in terms of what the system is doing is something like processing information.
Anil, your arguments have two aspects. Some are critical of computational functionalism — the negative part — and then you’ve got an alternative way of viewing consciousness and its connection to the brain. Let’s start with those criticisms. What do you think are the main weaknesses of computational functionalism?
Anil Seth: I think there are a number of weaknesses, all grounded on the intuition that we’ve taken what’s a useful metaphor for the brain — the brain is a kind of carbon-based computer — and we’ve reified it. We’ve taken a powerful metaphor and treated it literally.
The idea that the brain literally is a computer raises the question of what we mean by a computer, by computation. Let’s think of computation in the most standard way: as Turing defined it in the form of a universal Turing machine. In this definition, computation is a mapping between a set of symbols through a series of steps — that’s an algorithm. And this mapping involves a sharp separation between the algorithm and what implements it, between software and hardware. That sharp separation both influences how we build real computers — we can run the same software on different computers — and underwrites the assumption that computation is the thing that matters, because it allows you to strip out the computation cleanly from the implementation.
If you look at the brain, the metaphor has a superficial appeal: we think of the mind as software and the brain as hardware. But the closer you look, the more you realise you can’t find anything like this sharp separation — not of software and hardware, but of mindware and wetware. In a brain, you can’t separate what it is from what it does with the same sharpness that, by design, you can in a digital Turing computer.
But Turing computation remains appealing. Roll back almost ninety years to Turing, but also to McCulloch and Pitts: they showed that if you think of the brain as very simple abstract neurons connected to each other, each just summing up incoming activity and deciding whether to be active or not — very simple abstractions of the biological complexity of real neurons — you basically get everything Turing computation has to offer. You can build networks of these that are Turing-complete; they can implement any algorithm.
So you get this beautiful marriage of mathematical convenience. You can strip away everything about the brain apart from the fact that it consists of simple neuronal elements connected together, and yet you get everything Turing computation can give you. So maybe that’s the only thing that matters about brains. And of course, that abstraction is in practice very powerful — the neural networks trained for foundation models are direct descendants of these McCulloch-Pitts networks.
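(Editor’s note: for readers who want a concrete picture of the McCulloch-Pitts abstraction Anil describes, here is a minimal, illustrative Python sketch; it is not from the conversation, and the function names, weights, and thresholds are our own choices. Each unit simply sums weighted binary inputs and fires if the sum reaches a threshold, and composing such units yields arbitrary Boolean logic, which is the sense in which networks of them serve as universal computational building blocks.)

```python
# Minimal sketch of a McCulloch-Pitts unit: binary inputs, fixed weights,
# a threshold, and a binary output.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Basic logic gates as single units.
def AND(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    return mcp_neuron([a], weights=[-1], threshold=0)

# Composing units gives arbitrary Boolean functions, e.g. XOR.
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", XOR(a, b))
```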
But this marriage starts to get stressed, because Turing computation, while powerful, is not everything. Strictly speaking, anything that is continuous or stochastic is not within the realm of algorithms. Algorithms also don’t care about continuous time — there could be a microsecond or a million years between two steps; it’s the same computation. Real brains are not like that. We’re in time just as much as we’re embodied. You can’t escape real physical time and continue to be a functioning biological brain. The phenomenology of consciousness is also in time — time is plausibly an intrinsic and inescapable dimension of our phenomenology.
So there are things brains do which are not algorithmic and might plausibly matter for consciousness. And when you look at brains, you can’t separate what they are from what they do in any clean way. I think that really undermines the idea that the algorithmic level is the only level that matters.
To roll back to where we started: the idea that the brain literally is a computer is a metaphor. Like all metaphors, there’s a bit of truth to it. But not everything the brain does is necessarily algorithmic. And that opens the question: if we can’t assume everything the brain does is computational, that puts a lot of pressure on computational functionalism, which is based on the idea that consciousness is sufficiently describable by a computation.
Henry Shevlin: I agree with a lot of what you’ve said about the importance of fine details of realisation in brains. Peter Godfrey-Smith has also advanced this point, talking about the role of intracellular, intra-neuronal activity. Rosa Cao has had some great papers on this recently too.
But here’s a provocative analogy. Imagine we were trying to understand what art was, and all we had was paintings. We might say: clearly an essential part of being an artwork is pigment, because not only is pigment present in every example of art we’ve got, it’s essential to how it is artistic — pigment defines the formal properties of every piece of artwork we’ve ever seen. But of course, there are lots of types of art that don’t involve pigments.
In the same way, yes, all these fine details of wetware might be essential to the type of consciousness we see in humans and other animals, whilst not exhausting the space of possible conscious minds that might be very different from us.
Anil Seth: I think that’s fine. All I’ve said so far is that there’s the open question of whether things besides computation might matter, but then one has to give an account of what they are and why. If I wanted to make the case that some aspect of biology is absolutely necessary for consciousness, I have to do that separately.
These things are somewhat independent. Computational functionalism could be wrong, but biology could still be not necessary — there could be other ways of making art. If I’ve got a strong case that some aspect of biology is necessary for consciousness, then computational functionalism cannot be true. But the reverse is not the case.
Dan Williams: Maybe one question before we move on. I was a little confused reading your papers about which of the following two positions you’re defending. One position says: even if we could build computers that replicated all the functionality of a human being, it nevertheless wouldn’t be conscious. The other says: we just couldn’t build computers that replicate all of the functionality of a human being, because to do what human beings do, you need the kinds of materials and structures found within the brain. Those feel like two different positions. Someone could be a computational functionalist as a purely metaphysical doctrine, saying: if you could build a computer that does everything humans do, it would be conscious — it just so happens we can’t do that. Are you denying that metaphysical thesis, or making a different claim?
Anil Seth: There’s a lot in there. I am very suspicious of that metaphysical claim. Let me put it in a scenario that might help clarify.
Some people might say that if aspects of biology really matter, and we built a digital computer simulation including those details, would that be enough? We can do this ad infinitum — build a maximally detailed whole-brain emulation that digitally simulates all the mitochondria, even microtubules. Simulate everything. Would that be enough?
The metaphysical computational functionalist might say yes — somewhere in there, the right computations have to be happening. But I don’t think so, because it still relies on the claim that consciousness is constitutively computational. Making a simulation more detailed doesn’t make it any more real unless the phenomenon you’re simulating is a computation.
We make a simulation of a weather system; making it more detailed doesn’t make it any more likely to be wet or windy. Most things we simulate, we’re not confused about the fact that the simulation doesn’t instantiate the thing we’re simulating. If it is to move the needle on consciousness, that depends on the claim that consciousness is constitutively computational.
The irony is that if you think simulating the details is necessary — if you think you have to simulate the mitochondria — that actually makes it less likely that consciousness is constitutively computational. Because if consciousness is constitutively computational, those kinds of details should not matter.
A slight sidebar: I think this is ironically amusing because there are people investing their hopes, dreams, and venture capital into whole-brain emulation in order to upload their minds to the cloud and live forever. I think that’s very wrong-headed. If you think the details matter, then it’s unlikely consciousness is a priori a matter of computation alone.
So to your question: I’m very suspicious of that metaphysical claim. The burden of proof is on the computational functionalist to say why computation is going to be sufficient, given all the differences between computers and brains. I start from a physicalist perspective — consciousness is a property of this embodied, embedded, and timed bunch of stuff inside our heads. If you build something sufficiently similar, it will be conscious. The question is: how similar does it have to be? Does it have to be embodied? Made of neurons? Made of carbon? Alive? These are still open questions.
Henry Shevlin: Just to chime in — this point about simulated weather systems not getting anyone wet is obviously John Searle’s point originally. I think it’s better understood as a restatement of the disagreement rather than a dunk on functionalism. If consciousness is computational, then it is absolutely substrate-invariant. There are other things that are substrate-invariant: online poker is poker, online chess is chess, money is money whether it’s coins, banknotes, or on a balance sheet. So if consciousness is not computational, then a simulation won’t be conscious. But if it is computational, the simulation point has no bite.
Anil Seth: I don’t disagree. But the key point is: you can’t use the simulation argument to argue for the fact that consciousness is computational. If consciousness is computational, certain things follow about what happens in a simulation. But the fact you can simulate something doesn’t tell you anything about consciousness being computational.
I reread Nick Bostrom’s simulation argument paper while writing the BBS paper. He carefully interrogates his assumptions — that we don’t wipe ourselves out, that at least one person is interested in building ancestor simulations. But he also says: we have to assume consciousness is a matter of computation for this whole thing to get off the ground. And then he says, “Don’t worry, philosophers generally think that’s fine.”
Hold on a minute — that is the most contentious assumption by far of everything in the paper, and he gives it no critical examination. The fact that computational functionalism is at the very least contentious is, for me, very good evidence against the simulation hypothesis.
Dan Williams: I really want to get to your positive account, but one follow-up on your criticisms. One of your strongest arguments is that when you look at the brain, you don’t find anything like the hardware-software distinction central to digital computation as we understand it post-Turing. I think that’s true and important. But isn’t it possible that someone could say: that’s an interesting feature of how computation works in biological systems — people call it “mortal computation,” the term from Geoffrey Hinton — maybe having to do with energetic efficiency? But it doesn’t follow that you couldn’t replicate those computational abilities in digital computers. It could just be a contingent feature of our architecture.
Anil Seth: The first part is right, but the second part doesn’t follow. You can’t separate what brains are from what they do; there’s no sharp distinction between mindware and wetware. Rosa Cao has written about this, and there’s the notion of mortal computation from Hinton. Others have talked about biological computation, emphasising these features — you can call it generative entrenchment. I like the term “scale integration”: in biological systems, the microscales are deeply integrated into higher levels of description in a way that you can’t separate out. The macro and the micro are causally entangled with each other. This is very characteristic of evolved biological systems — there’s no design imperative from evolution to have a sharp separation of scales. And that has benefits: you get energy efficiency, and you may get explanatory bridges towards aspects of consciousness too, like its unity.
This is, for me, a very exciting avenue: if we stop thinking of the brain as just a network of McCulloch-Pitts neurons implementing some Turing algorithm, and start looking at what it actually is — what the functional dynamical properties of scale-integrated systems really are — I think we’ll learn a lot.
But the second part — that biological computation could be done in a digital computer — I don’t think follows, and this is why I resist calling these things varieties of “computation.” Whenever you use that word, it’s easy to slip into the idea that they’re portable between substrates. The biological computation my brain does in virtue of being scale-integrated could be simulated by a digital computer. But the simulation is not an instantiation unless what you’re simulating is constitutively that kind of computation. And biological scale-integrated computation is not digital Turing computation.
The more general point: the further you move away from a Turing definition of computation, the less substrate independence you have. Analog computers, for instance, implement features that are probably essential — like grounding in time with continuous dynamics — but they do not have the same substrate flexibility as digital computers. We love digital computers because they have that flexibility. But when it comes to understanding what brains do, whether in intelligence or consciousness, we can’t throw all these things away.
Henry Shevlin: A quick side note: the Open Claude instances, the more agentic Claude bots, have something called a “heartbeat” — a regular interval at which they can take actions. So we’re starting to see at least simulation of some temporal dynamics in large language models. Obviously radically different from the kind you’re concerned with, but interesting.
Anil Seth: I don’t buy that. That’s a simulated heartbeat. You could slow the clock rate down. You can give these things a sense of time, but it’s not physical time. Imagine you slow all the Anthropic servers way down — all the agents slow down, but the computation is still the same. We are embedded in physical time in a way that even agents with simulated heartbeats are not.
Dan Williams: I’ll set you up for developing your positive account with a question: well, isn’t computational functionalism the only game in town? Doesn’t it just win by default?
Anil Seth: No. That’s part of the issue — one of the responses is often, “What else could it be?” There’s a phrase, “information processing,” that I find increasingly revealing. It’s so common to describe the brain in terms of information processing that we don’t even realise we’re saying it, as if there’s no other game in town. What do we mean when we say a brain is processing information? It’s really not clear to me. The most rigorous formal definition is Shannon’s, which is purely descriptive — it doesn’t tell you whether a system is processing information.
But alternatives have been around for a long time. When I was doing my PhD at Sussex, there was the dynamical systems perspective, the whole enactive embodied approach to cognition — continuous dynamics, attractors, phase spaces. These describe complex systems doing things in ways which are not computational, not algorithmic. Brains oscillate — this is one of the most central phenomena of neurophysiology, as Earl Miller talks about a lot. And it would be crazy if evolution hadn’t taken advantage of this natural physical property. The right framework for understanding oscillatory systems is not an algorithm, because algorithms are abstracted out of time.
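(Editor’s note: a toy illustration of the kind of oscillatory, continuous-time dynamics Anil has in mind, using two coupled Kuramoto oscillators, a standard textbook model. This is not from the episode, and all parameter values are arbitrary. It also illustrates the wrinkle he keeps returning to: to run this on a digital computer we have to discretise time into steps of size dt, so the program simulates the continuous dynamics rather than instantiating them.)

```python
import math

def kuramoto_step(theta, omega, coupling, dt):
    """Advance the phases of coupled oscillators by one Euler step."""
    n = len(theta)
    new_theta = []
    for i in range(n):
        interaction = sum(math.sin(theta[j] - theta[i]) for j in range(n))
        new_theta.append(theta[i] + dt * (omega[i] + (coupling / n) * interaction))
    return new_theta

theta = [0.0, 2.0]   # initial phases (radians)
omega = [1.0, 1.3]   # natural frequencies
for _ in range(10000):
    theta = kuramoto_step(theta, omega, coupling=0.8, dt=0.01)

# With sufficient coupling, the phase difference settles to a near-constant value
# (the oscillators phase-lock).
print("phase difference:", (theta[0] - theta[1]) % (2 * math.pi))
```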
So there are many other games in town. A lot of these are perfectly compatible with functionalism, but now it’s a functionalism much more tied to the material basis — only some substrates can implement the right kinds of functions, and biological material may be necessary for the right kind of intrinsic dynamical potential.
I think biological naturalism is still basically a functionalist position. I’m wary of saying something considered vitalistic — there’s no magic, non-explicable, intrinsic quality about life associated with consciousness. Living systems can be distinguished from non-living systems in terms of functional description. Features like metabolism and autopoiesis are still amenable to functional descriptions, but now the functions are closely tied to particular kinds of materials, particular biochemistries. Metabolism is a function, but it’s a function inseparable from some material process. Maybe it doesn’t have to be carbon — maybe there are other ways of having metabolism. But you can always say that intrinsic properties at one level can be decomposed into functional relations at a lower level.
So I’m comfortable with functionalism broadly, but the question is: how far down do you have to go? And to Henry’s point: how do we make sure we’re not focusing on things that are contingently the case in biological consciousness only?
Many of the comments on my BBS paper said I haven’t made a rigorously defensible case for biological naturalism, and I totally concede that. I don’t think there is one yet.
Henry Shevlin: Can I give you an opportunity to say more about autopoiesis specifically? I’ve yet to hear a really convincing case for how it helps explain what consciousness is. Here’s a dark framing. The standard Maturana and Varela notion of autopoiesis is a system continually replacing, maintaining, and repairing its own components.
A few years ago, I read about a horrific case: Hisashi Ouchi, a Japanese nuclear researcher who received the largest dose of radiation ever recorded. Every chromosome in his body was destroyed, no new cell production, no RNA transcription — his body couldn’t produce new proteins. Every cell was effectively dead; autopoietic processes had basically stopped. He was kept alive through amazing medical interventions — you could call it allopoiesis — for eighty-three days. And he was conscious and in a lot of pain throughout.
So here’s a case of someone in whom autopoietic processes had basically stopped, and yet he was still consciously experiencing severe pain. I’d love to hear more about why you think autopoiesis is important for consciousness.
Anil Seth: That is darkly, weirdly fascinating. Setting aside the horror of it — it would be very interesting to consider: has autopoiesis really stopped entirely, or is it winding down? I can imagine all sorts of problems with that dose of radiation, but it’s also not true that every cellular process stopped at the moment he was still alive for eighty-three days. It might be a gradual winding down.
If there were a case where you could show that all autopoietic processes had definitively stopped and yet consciousness was continuing, that would put pressure on the claim that autopoiesis is necessary in the moment for consciousness. It might still be diachronically necessary — systems have to have gotten those processes rolling to begin with.
The reason I usually mention autopoiesis and metabolism as candidate features of life is partly because they maximise the difference between living systems and silicon-based computers. They’re obvious examples of things closely tied to life, things that silicon devices clearly cannot have. It’s partly to emphasise how different these things are and why it’s very reductive to think of us as meat-based Turing machines.
There’s another reason to think about autopoiesis, and it’s the connection between autopoiesis, the free energy principle, and predictive processing as a way of understanding the contents of consciousness. There’s a line that can be drawn between these poles — what Carl Friston and Andy Clark and Jacob Hohwy have called the high road and the low road, but they meet in the middle.
The basic idea: start with the brain engaged in approximate Bayesian inference about the causes of sensory signals — very much a Bayesian brain perspective, Helmholtz’s “perception is inference.” Of course, Bayesian inference can be implemented algorithmically, but that doesn’t mean that’s how the brain does it. The free energy principle shows a way of doing it which follows continuous gradients — not necessarily an algorithm.
So our perceptual experiences of the self and the world are brain-based best guesses about the causes of sensory inputs. This doesn’t explain why consciousness happens at all, but gives us a handle on why experiences are the way they are. This applies to the self too: our experiences of selfhood are underpinned by brain-based best guesses about the state of the body — especially the interior of the body, through what I’ve been calling interoceptive inference. These processes are more to do with control and regulation. The brain, when perceiving the interior of the body, doesn’t care where the heart is or what shape it is — it cares how it’s doing at the business of staying alive.
This explains why emotional experiences are characterised more by valence — things going well or badly — rather than shape and location and speed. And prediction allows control: once you have a generative model, you can have priors as set points and implement predictive regulation to keep physiological variables where they need to be.
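(Editor’s note: a minimal, hypothetical sketch of the “priors as set points” idea, not Anil’s model. A prior expectation over a physiological variable acts as a set point, and the system acts on the body to reduce the prediction error between that prior and noisy interoceptive observations, rather than updating the prior. Variable names and gain values are purely illustrative.)

```python
import random

prior = 37.0          # set point encoded as a strong prior expectation (e.g. core temperature)
state = 39.0          # actual (hidden) physiological state, starting too high
action_gain = 0.2     # how strongly action corrects prediction error

for t in range(40):
    observation = state + random.gauss(0, 0.05)   # noisy interoceptive signal
    prediction_error = observation - prior        # mismatch between expectation and observation
    # "Active inference" in miniature: act on the body so that future observations
    # better match the prior, rather than revising the prior itself.
    state -= action_gain * prediction_error
    if t % 10 == 0:
        print(f"t={t:2d}  state={state:.2f}  error={prediction_error:+.2f}")
```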
So far so good. We’ve gone from experiences of the world, to the self, to the interior of the body, from finding where things are to controlling things. And then comes the part that’s still difficult for me: that imperative for control goes all the way down. It doesn’t bottom out — it goes right down into individual cells maintaining their persistence and integrity over time. There’s no clear division where the stuff ceases to matter. And so you get right down to autopoiesis.
That’s where the free energy principle comes in. Living systems maintain themselves in non-equilibrium steady states — they maintain themselves out of equilibrium with their environment. To be in thermodynamic equilibrium with your environment is to be dead. By maintaining themselves in this statistically surprising state of being, they’re minimising thermodynamic free energy. And that becomes equivalent to prediction error in the predictive processing framework.
That’s the rough line. I’ll be very frank: there are bits along the way that can be picked at. One is the move from a thermodynamic interpretation of free energy to the variational, informational free energy interpreted as prediction error. There are results in physics linking thermodynamic and information theory, but do they do the job? Not so sure.
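(Editor’s note: for readers who want the standard formal statement behind this step, variational free energy in the predictive-processing literature is usually written as below; the Gaussian form at the end is the sense in which minimising free energy amounts to minimising precision-weighted prediction error. This is a textbook gloss added for reference, not a quotation from the episode, and the symbols are the conventional ones rather than anything used by the speakers.)

```latex
% Variational free energy for observations o, hidden states s,
% generative model p(o, s), and approximate posterior q(s):
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
  \;\geq\; -\ln p(o) \quad \text{(an upper bound on surprise)}.

% Under Gaussian (Laplace) assumptions this reduces, up to constants, to a
% sum of precision-weighted squared prediction errors:
F \approx \tfrac{1}{2} \sum_i \pi_i \, \varepsilon_i^2, \qquad
\varepsilon_i = o_i - g_i(\mu), \quad \pi_i = \text{precision (inverse variance)}.
```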
But it’s a reason to think about how you go from metabolism and autopoiesis all the way up to this broader frame for how brains work. There’s a phenomenological aspect too, which is speculative: if you try to think about what the minimal phenomenal experience might be, devoid of all distinguishable content — some meditators talk about pure awareness without anything going on at all — I’m a bit sceptical of that idea. I think it’s equally plausible that at the heart of every conscious experience is the fundamental experience of being alive. That is the aspect of consciousness that, for biological systems, is always there. Everything else is painted on top of that.
Peter Godfrey-Smith put it nicely in Metazoa: the more you think about what life is — these billions of biochemical reactions going on within every cell every second, electromagnetic fields giving integrated readouts — it’s much easier to think that that’s the kind of physical system which might entail a basic phenomenal state, compared to the abstractions of information processing. I think he’s on the right track.
The way to begin is to look at what are the functional and dynamical attributes of living systems at all scales and across scales, compared to other kinds of systems. Biochemistry is a big missing link — we tend to forget about it. Nick Lane at UCL is doing amazing work looking at mitochondria and anaesthetics and the deep biochemistry of what happens within cells — not only how anaesthetics work, but why the electric fields generated within mitochondria might join together to give a global integrative signal about the physiological state of an organism. Stories like this are where I see much more potential for building solid explanatory foundations for a biological basis of consciousness.
Henry Shevlin: A plus one for Nick Lane — huge fan. We should get him on the show.
Dan Williams: You’ve described a rich and fascinating alternative picture. One worry about the free energy principle approach, though: it seems too general. As people like Friston understand it, it applies at the very least to all living things, and maybe even more broadly. Most people want to say not all living things are conscious. And even in conscious organisms, many of these processes — ordinary facets of digestion, for instance — presumably don’t have anything to do with consciousness. These things are presumably still happening under general anaesthesia, and yet you don’t have consciousness. What we want from a theory of consciousness is some explanation of why some things are conscious and others aren’t, why certain states within conscious organisms are conscious and others aren’t. If you take this very broad framework, you’re not going to get that.
Anil Seth: You’re absolutely right. It’s why I resist saying the ideas I’m sketching constitute a theory of consciousness — they don’t, as they stand, do the job a good theory should do. A good theory should give an account of the necessary conditions, the sufficient conditions, and the distinction between conscious and unconscious states and creatures.
Biological naturalism, as I understand it — distinct from biopsychism — is a claim that properties of living systems are necessary but not necessarily sufficient for consciousness. Biopsychism is the claim that everything alive is conscious. I think that’s very strong; I wouldn’t want to defend it.
So what makes the difference? I think this takes us back to functions. We have to think about what the functions of consciousness are for us and for creatures where we can reasonably assume it’s there. That can move us from necessity towards sufficiency.
For me, every conscious experience in human beings seems to integrate a lot of sensory and perceptual information into a single, unified format centred on the body and our opportunities for action, strongly inflected by valence, with affordances relevant to our survival prospects, and with particular temporal properties. It may be that when those functional pressures exist, they’re enough to make otherwise unconscious processes of autopoiesis and metabolism become a conscious experience. I don’t know — it’s partly an empirical question. For those functions to entail a conscious experience, you may need the fire of life underneath it all. I think that’s the idea.
Henry Shevlin: The question of sufficient conditions for consciousness in non-human animals is obviously very big for the ethical side. Whereas for AI, the necessary conditions are more relevant — if we can rule out that any of these systems are conscious, that makes the ethical situation a lot clearer. Since animals obviously satisfy the necessary conditions you’ve sketched, the question becomes which of them qualify.
A quick thought and then a question. I’m not sure whether your view is scientifically falsifiable. As you know, I’m very much a sceptic about the prospects of consciousness science as a falsifiable research programme. But maybe even setting aside strict falsifiability — what kinds of evidence would you be looking for over the next ten years that might push you in one direction or another?
Anil Seth: You can’t falsify a metaphysical position. Is biological naturalism a metaphysical position? It depends how much you flesh it out. I tend to be more Lakatosian in my view — I want things to be productive, not degenerate. Does unfolding the biological naturalist position lead to more explanatory insight? Does it lead to testable predictions and falsifiable hypotheses over time? If it does, that adds credence to the position, but it doesn’t establish it.
The position itself is not falsifiable as things currently stand, because we don’t have an independent, objective way of saying whether something is conscious. We always build prior assumptions in. Tim Bayne, Liad Mudrik, and I and others wrote a “test for consciousness” paper thinking of consciousness as a natural kind, but we’re always generalising from where we know — humans — outwards, trying to walk the line between taking contingent facts about human consciousness as general and expanding too liberally.
Evidence that would move the needle for me: to what extent can we demonstrate that properties of biological brains are substrate-independent? That’s a feasible research programme. We know some things the brain does are substrate-independent — that’s the whole McCulloch-Pitts story. But what about other things? What depends on the materiality of the brain? And what might be the functional roles of those things for cognition, behaviour, and consciousness?
Henry Shevlin: On the AI side, are there any predictions you’d feel comfortable about, or any evidence that might make you say, “This is evidence against biological naturalism”?
Anil Seth: The kind of evidence that would not convince me is linguistic evidence of AI agents talking to each other about consciousness. I can’t help being moved by it at one level — they’re very hard to resist, even if you believe they’re not conscious. It’s unsettling to hear these things talk about their own potential consciousness. But that’s not the right kind of evidence.
The more you can show that things closely tied to consciousness in brains are happening in AI, the more it would move the needle. For example, in a very influential paper, Patrick Butlin and Robert Long and others looked for signatures of theories of consciousness in AI models — does this model have something like a global workspace, or higher-order representations? They explicitly assume computational functionalism, looking just for the computational level of equivalence.
I think this is useful, but I’d try to drop that assumption and ask: how is a global workspace instantiated in brains at something deeper than just the algorithmic level? Do we have something like that in AI? This brings up neuromorphic computing — is the AI neuromorphic in a way that’s actually implementing, not just modelling, the mechanisms specified by theories of consciousness?
An issue is that most theories of consciousness don’t specify sufficient conditions. Global workspace theory is silent on what counts as sufficient for a global workspace. Higher-order thought theory doesn’t really tell you either. Ironically, the only theory that does is the most controversial one: integrated information theory. It explicitly tells you sufficient conditions — credit where it’s due, it puts its cards on the table.
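(To make the underspecification point concrete: the toy loop below is a deliberately thin “global workspace” in which a few hypothetical modules compete for access and the winner’s content is broadcast to all of them. It is an editorial sketch only, not taken from any of the papers or theories mentioned; whether anything this minimal should count as a workspace is precisely the sufficiency question the theory leaves open.)

```python
# A deliberately thin "global workspace": hypothetical specialist modules compete
# for access, and the winning content is broadcast back to every module.
# Salience is random here, standing in for whatever would drive competition
# in a real system.

import random

class Module:
    def __init__(self, name):
        self.name = name
        self.last_broadcast = None   # what this module most recently received

    def propose(self):
        # Bid a (salience, content) pair for access to the workspace.
        return random.random(), f"{self.name}-content"

    def receive(self, content):
        self.last_broadcast = content   # the "global broadcast" step

def workspace_cycle(modules):
    _, winning_content = max(m.propose() for m in modules)   # competition
    for m in modules:
        m.receive(winning_content)                            # broadcast to all
    return winning_content

if __name__ == "__main__":
    modules = [Module(n) for n in ("vision", "audition", "memory")]
    for _ in range(3):
        print("broadcast:", workspace_cycle(modules))
```

The loop has the right causal shape at one level of description (competition, then broadcast), which is why, without further constraints on how a workspace must be instantiated, the theory alone struggles to say what would or would not suffice.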
Henry Shevlin: I’ve written a paper about exactly this — I call it the “specificity problem”: the difficulties of taking these theories off the shelf and applying them to non-human systems because they’re so underspecified. I actually call out IIT as one of the few non-offenders. But the downside is you end up with some very extreme predictions.
Anil Seth: Actually, Adam Barrett and I, with others, are writing a semi-critique of IIT. The expander-grid objection is not as devastating as it seems, because in an expander grid nothing is happening over time. You’d get something that is supposedly very conscious but conscious of nothing — which is not a rich conscious state. But yes, IIT is a non-offender on the specificity problem, as you nicely put it.
Henry Shevlin: So, to move on to the ethical side. Two big angles come up both in your paper and in the responses to it. One is the danger of anthropomorphism and anthropocentrism — that we’ll see these things as conscious or develop highly dependent relationships with them; we’ve seen this at scale with social AI, AI psychosis, and so forth. The second is the debate around artificial moral status — in your BBS paper, you talk about the danger of false positives and false negatives. Related to this is the call some people, like Thomas Metzinger, have raised for a moratorium on building conscious AI. A nice bouquet of issues for you to explore.
Anil Seth: I think there’s also a third element, which is how our perspectives on conscious AI make us think of ourselves — how it affects our picture of what a human being is. It’s more subtle but quite pernicious.
There’s an important distinction between ethical considerations that pertain to real artificial consciousness and those that pertain to illusions of conscious AI. Sometimes they overlap; sometimes they don’t.
If I’m wrong and LLMs are conscious, or if we build sufficiently neuromorphic AI that incorporates all the right features, then we will have built conscious AI, and I think that would be a bad idea. Building conscious AI would be a terrible thing. We would introduce into the world new forms of potential suffering that we might not even recognise. It’s not something to be done remotely lightly, and certainly not because it seems cool or because we can play God. Thomas Metzinger talks about these consequences a lot. That’s one bucket.
The other bucket is illusions of conscious AI. This is clearly happening already. So many people already think AI is conscious, and for these consequences none of the philosophical uncertainty matters — if people think it’s conscious, we get the consequences anyway. These include AI psychosis and psychological vulnerability: if a chatbot tells me to kill myself and I really feel it has empathy for me, I might be more likely to go ahead. That’s not great.
We also face a dilemma of brutalisation. Either we treat these systems as if they are conscious and expend our moral resources on things that don’t deserve it, or we treat them as if they’re not, even though they seem conscious. And in arguments going back to Kant, treating things that seem conscious as if they are not is brutalising for our minds. It’s psychologically bad for us. These illusions of conscious AI might be cognitively impenetrable. I think AI is not conscious, but even I sometimes feel that it is when I’m interacting with a language model — like certain visual illusions where, even when you know two lines are the same length, they still look different.
A good example where the ethical rubber hits the road is AI welfare. There are already calls for AI welfare, and firms like Anthropic are building constitutions for Claude and saying they take seriously the idea that their agents have their own interests in virtue of potentially being conscious. I think this is very dangerous. Calls for AI welfare give added momentum to illusions of conscious AI — people are more likely to interpret AI as conscious if big tech firms say they’re worried about the moral welfare of their language models.
And if we extend welfare rights to systems that in fact are not conscious, we’re really hampering our ability to regulate, control, and align them. The alignment problem is already almost impossibly hard. Why would we make it a million times worse by, for instance, legally restricting our ability to turn systems off if we need to?
And then there’s the image of ourselves. Shannon Vallor writes about this in The AI Mirror — and I think it’s really diminishing of the human condition. You mentioned the term “stochastic parrots.” It’s unfair on everything: unfair on AI, which is really impressive; unfair on parrots, who are fantastic; and unfair on us, because if we think a language model is a stochastic parrot and we also think that’s fundamentally what’s going on in us — that’s really reductive of what we are. That tendency to see our technologies in ourselves is a narrowing of the imagination of the human condition, and I worry about the consequences.
Henry Shevlin: I’ve got to flag one objection. You realise people make the same arguments about Darwinian evolution? That seeing us as just other animals is somehow diminishing to the human condition — that contextualising humans within the tree of life diminishes our dignity. I don’t agree with that argument, and I assume no one on this call does. But it strikes me as a worrying parallel to the kind of arguments you’re making.
I don’t think it diminishes human dignity to see us as continuous with the broader tree of life. And I don’t think it necessarily strips human dignity to see ourselves as part of a broader space of possible minds, some biological, some very weird. We can preserve human dignity whilst embracing a more expansive vision of what intelligence and mind can be.
Anil Seth: Maybe. It depends on your priors. I completely agree that seeing us as continuous with the rest of nature is actually very beautiful, empowering, enriching, and dignifying. And people often say: you’re very sceptical about AI consciousness, but people were just as sceptical about consciousness in animals — look at the historical tragedy still unfolding from those false negatives.
My response is: I don’t think the situation is the same. There are reasons why we’ve been more likely to make false negatives in the case of non-human animals, and those same reasons explain why we’re more likely to be making false positives in the case of AI. Both have serious consequences.
Human exceptionalism is at the heart of both. It prevented us from recognising consciousness where it exists in non-human animals, and it’s encouraging us to attribute consciousness where it probably isn’t in large language models.
Having said that, the way I’d find your case convincing is this: just as there’s a wonder in seeing ourselves as continuous with many forms of life — we’re a little twig on this beautiful tree of nature — we can appreciate the singularity of the human mind and the human condition when we understand more about how different things could be, how different kinds of minds could be, whether they are conscious or not.
Dan Williams: I think that’s a great note to end on. I’m conscious of your time, Anil — otherwise we would just keep talking for hours. I really do hope you’ll come back in the future and we can pick up on one of these many threads. Thank you so much for giving up your time to come and talk with us today.
Anil Seth: It’s been an absolute delight. Thank you both for your time and for the opportunity. I think we did get into the weeds a bit, but I enjoyed that very much.
Henry Shevlin: Anil, it’s been an absolute delight personally, and I think we’re very lucky to have you on the show. This has been a fantastic conversation.