Most conversations about artificial intelligence are focused on Earth: jobs, misinformation, education, politics, science, regulation, consciousness, safety, and the future of human society. But AI—and especially the possibility of reaching “AGI” (artificial general intelligence) and “superintelligence”—forces us to think on much larger scales. If advanced AI is possible, why hasn’t it already emerged elsewhere? If civilisations can build self-replicating probes, artificial scientists, or planet-scale computational systems, why does the universe still look so natural? And if intelligent life is common, where is everyone?
In this episode, Henry and I discuss these and many other questions with David Kipping, Associate Professor of Astronomy at Columbia University, where he leads the Cool Worlds Lab. David’s research spans exoplanets, exomoons, Bayesian inference, technosignatures, and the search for life and intelligence beyond Earth. He is also one of the best science communicators working today through the Cool Worlds YouTube channel and podcast.
Among other topics, we discussed:
David’s Red Sky Paradox: if most stars are red dwarfs, and red dwarfs live for vastly longer than stars like the Sun, why do we find ourselves orbiting a yellow star?
Whether anthropic reasoning — reasoning from the fact of our own existence — is a profound scientific tool, a philosophical minefield, or both.
The reference class problem: when we reason about “observers like us”, who or what exactly counts as being like us?
The Doomsday Argument, and why some apparently bizarre forms of probabilistic reasoning can nevertheless be powerful.
The Fermi Paradox: if the universe is so large, and if life or intelligence is not fantastically rare, why don’t we see clear evidence of extraterrestrial civilisations?
Whether advanced civilisations would spread through the galaxy using self-replicating probes — and why the absence of such probes might be one of the strongest constraints on extraterrestrial intelligence.
How recent developments in artificial intelligence affect the Fermi Paradox. If humanity is close to building systems that can massively accelerate science and engineering, shouldn’t someone else have got there first?
Whether artificial intelligence makes the simulation argument more plausible.
David’s experience using artificial intelligence in scientific research, and why a meeting at the Institute for Advanced Study changed how he thinks about the role of these tools in science.
Why David thinks artificial intelligence already has something close to “coding supremacy”, but is still far from being able to do science autonomously.
The risks of AI-generated scientific slop: papers, peer review, and training data polluted by low-quality machine outputs.
Whether artificial intelligence will make science more productive, or instead strip it of some of its deepest human value.
Why the future of science communication may depend on better collaboration between academic institutions and independent creators.
Links and further reading
Cool Worlds Lab — David’s research group at Columbia University, focused on extrasolar planetary systems, exomoons, habitability, technosignatures, and related questions.
Cool Worlds on YouTube — David’s excellent science communication channel, covering astronomy, exoplanets, alien life, the Fermi Paradox, cosmology, and much else.
Cool Worlds Podcast — David’s podcast, featuring conversations on astronomy, technology, science, engineering, and related topics.
Cool Worlds Podcast: “We Need To Talk About Artificial Intelligence” — the solo episode in which David reflects on artificial intelligence and science after a meeting at the Institute for Advanced Study.
David Kipping’s Columbia profile — short institutional profile with background on his research.
Transcript
Please note that this transcript has been lightly AI-edited and may contain minor mistakes.
Henry Shevlin: Welcome back. Our guest today is David Kipping, Associate Professor of Astronomy at Columbia University, where he leads the Cool Worlds Lab. His research spans exoplanets, exomoons, and the search for extraterrestrial life and intelligence, and he brings a Bayesian rigor to questions that could easily drift into speculation. He’s also one of the best science communicators working today with over a million subscribers on his Cool Worlds YouTube channel, where I should confess, I’ve spent an embarrassing number of hours watching when I probably should have been doing philosophy of AI.
David, like many of the best people, is a Cambridge alumnus, although unlike us, he actually studied something useful, namely natural sciences, before going on to do his PhD at UCL and postdoc at Harvard on the Sagan Fellowship. His work also has a really fantastic philosophical dimension, particularly around anthropic reasoning and observation selection effects, which makes him a perfect guest for two cognitive scientists who are finally getting to talk to an actual scientist. So David, welcome to Conspicuous Cognition.
David Kipping: Thank you for that very generous introduction.
Henry Shevlin: This is a bit of a fanboy moment for me, for real though. I really have spent like hundreds of hours at this point on Cool Worlds. But I’m going to get past it. I’m going to be a serious host.
David Kipping: It’s always weird when people say that to us, because I just imagine no one watches them. If it gets into my head that people are watching, I’ll get tense and anxious about what I’m saying. I just imagine I’m talking to a brick wall or something, and that’s much easier.
Henry Shevlin: Honestly, half the Warhammer figures in this room were painted while I was listening to Cool Worlds. I’ll leave it at that. Maybe a good place to start would be discussing anthropic reasoning, since that’s a real natural intersection at the boundary of astronomy and philosophy. Could you just give us a brief view of how you see anthropic reasoning, and maybe tell us a little bit about the Red Sky Paradox, which is one of your distinctive contributions to this area?
Anthropic Reasoning and the Red Sky Paradox
David Kipping: Yeah, I think one of the most interesting data points when it comes to asking questions about the search for life in the universe, and our own place in it, is our own existence — just the fact that we’re here. Anthropic reasoning was in many ways born out of cosmology, which has a rich history of using it. I think one of the first successful examples was by Steven Weinberg, who is really a giant in the field — he passed away a few years ago — and who showed that you could predict not only the existence of the cosmological constant, but also its value to within a factor of a few, just based off anthropic reasoning.
The argument was something like: the cosmological constant is what causes the accelerating expansion of the universe. And so if you make that number too large, then structure would not form in the universe. You couldn’t form galaxies because everything would just fly apart too quickly. And if you make that number too small, or even negative, then you’d cause everything to recollapse too quickly. So there has to be some Goldilocks value in order to explain our own existence. And so he predicted that.
At the time, the cosmological constant was even a somewhat controversial idea — whether it should exist at all, because with Einstein’s general relativity there’s that whole history of it being his greatest blunder, of whether that term should really be in there or not. People were kind of thinking the universe could be static, and yet Weinberg predicted it successfully. So that was a really powerful use of anthropic reasoning. And then Brandon Carter was the one who really championed it and used it in all sorts of contexts.
In recent years, I’ve been thinking about it in an astrobiological context — how can we use it to ask questions about life in the universe especially, and our place in it?
The Red Sky Paradox in particular is one interesting curiosity that seems to violate the norms of probability. The norms of probability would say that if there’s a Gaussian, a bell curve of possibilities, you should expect to be near the center of that bell curve. It would be kind of weird if you lived many, many sigmas — many standard deviations — off to either side, in the negative or positive direction. You’d expect to be somewhere in the middle. We sometimes call it the mediocrity principle, or something like this.
If you look at stars in the universe, most of them are red dwarfs. About 80 to 82% of stars are red dwarfs, which are stars less than half the mass of our own Sun. So they’re very, very numerous. They’re called red dwarfs, of course, because they’re so low mass — they don’t have the internal pressure, the gravity, to fuse hydrogen as rapidly as the Sun does. And thus they have less luminosity, their temperature is cooler, and that’s why they look red.
Not only do these stars have this 80%-plus frequency — Sun-like stars are something like 6%, I think, so an enormous ratio just straight off the bat, about 13 to one or so — but on top of that, they live really long. These stars potentially live for trillions of years, especially the lowest mass ones. And so if you flash forward into the future, tens of billions of years, hundreds of billions of years, there wouldn’t really be any Sun-like stars left. There’d be very, very few of them. And the only stars that would be glowing would be these red dwarfs.
So if you ask yourself — and this is sort of called the strong self-sampling assumption by Nick Bostrom, where you allow yourself to be born at a random moment in time — if you were born at a random moment in the history of the universe, then the advantage of the longevity of these red dwarfs really manifests. It ends up being more than a thousand to one odds that, if you’re a random soul, a random observer born around either a red dwarf or a yellow dwarf, you’re much more likely by over a thousand to one — I think 1600 to one — to be born around a red dwarf.
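A quick back-of-the-envelope version of that calculation, using round illustrative inputs rather than the detailed population model in David’s paper:

```python
# Rough Red Sky Paradox odds: weight each stellar type by its abundance
# and its habitable lifetime (illustrative round numbers, not the paper's model).

f_red, f_sun = 0.80, 0.06        # approximate number fractions of stars
t_red, t_sun = 1.0e12, 1.0e10    # rough lifetimes in years (trillions vs ~10 billion)

# If observers are equally likely per star per unit time, the observer
# "weight" of each population is abundance x lifetime.
odds = (f_red * t_red) / (f_sun * t_sun)
print(f"Random observer around a red dwarf: about {odds:,.0f} to 1")
# -> roughly 1,300 to 1 with these inputs; the more careful treatment
#    David cites gives about 1,600 to 1.
```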
So I call that the Red Sky Paradox because it’s just odd, all things being equal — and that’s the base assumption there, that red dwarfs are just as good for life as Sun-like stars. You might question that assumption. That’s always the point of a paradox: a paradox exposes a logical contradiction, so you can revisit the assumptions under which it was derived and say, one of those assumptions must be wrong.
So for the Fermi paradox, you might say, if life is everywhere, how can we not see anyone? Therefore, the assumption to revisit is that life is everywhere. And here, with the Red Sky Paradox, we might challenge the assumption that red dwarf stars are even capable of sustaining — and really specifically — complex life like us, observers. Maybe they have simple life, but something prohibits them from evolving all the way through to something that can do statistics, do astronomy, do geology — like learn about its planet and kind of essentially write the paper that I wrote about the Red Sky Paradox. That’s kind of the cogito ergo sum criterion I’m using as my conditional in this reasoning.
I have been making that suggestion to colleagues because the James Webb Space Telescope right now is heavily invested in red dwarfs. There’s a good reason for that. It’s kind of all it can do. Unfortunately, it just doesn’t have the capability, the technology, to really do anything with Sun-like stars. But red dwarfs, it’s game on. And I’m just saying, look, there might be reasons why it won’t turn up anything.
Henry Shevlin: And is the specific suggestion, I think I’ve heard, that basically in the early years of the early formation of red dwarf stars, they might be especially turbulent in a way that sort of scorches any planets in their vicinity and strips away their atmospheres? Is this one of the empirical predictions that we can make on the basis of the Red Sky Paradox?
David Kipping: I would say it’s more a consistency check than a prediction. I try to be very careful — I like to keep the reasoning as broad and agnostic as possible. In this case, with the Red Sky Paradox, I don’t have to invoke any specific mechanism. There probably is a mechanism — surely there is a mechanism — unless we really are a one in 1600 outlier. That’s possible as well, and I concede it: we may just be a very unusual example.
But if that’s not true — if we’re a typical example — then there is some mechanism which bars the evolution of observers like ourselves. And in the paper, I point out that there are numerous mechanisms people have suggested, including the fact that these stars have large coronal mass ejections coming off them, which can strip planets of their atmospheres. They also have a prolonged — what we’d call — adolescence as a star. Our Sun went from being born to being a main sequence star in the space of about a hundred million years, even less than that, tens of millions of years. Whereas red dwarfs sometimes take a billion years to settle down. And during that adolescent phase, they’re violent, and they can actually remove all the water from their neighboring planets.
We think that’s when water gets delivered. Our water was probably delivered by comets during the late heavy bombardment and the other bombardments that were occurring before that. And so if during that time you’re delivering water through comets, the comets get depleted, but the star is so active that it’s stripping all the water off the planets, then you’re kind of net zero — you don’t end up with any water at the end of the day. And then, when all is said and done, you’ve just got dry planets around a normal star, but it’s too late. There’s no more water left to deliver to the planet anymore. So that’s been suggested as well.
Then there are also questions about photosynthesis. Is photosynthesis possible if the star is much redder than our own? Because obviously plants on Earth use blue light as well as red light. If you take away all the blue light, how will they do? We don’t know. It’s kind of unclear. We don’t really have many examples of life on Earth which thrives under those conditions. And then there’s tidal locking — these planets probably have one side permanently facing the star.
So there are many sensible concerns. But what I’m trying to do is avoid saying it must be this one mechanism. Because that’s really for the astrophysicists studying the geology of those objects to figure out. I’m just saying there probably is something, and go after it.
Dan Williams: I’m not sure entirely how to frame this question, David, but someone might respond that there’s just something a little bit weird or surprising that you could draw seemingly substantive inferences from such a slim evidential basis. The starting observation here is just we exist where we do. And then there’s this interesting probabilistic reasoning. And then that’s leading you potentially to draw inferences about where life might potentially evolve in the universe. I suppose this is just an objection from the perspective of, isn’t there something a little bit weird about this entire style of reasoning?
David Kipping: It’s definitely weird. Yeah, it’s very weird. I always think Nick Bostrom is really like the father of all this kind of thinking in the modern era. And he often concedes that point, that it is very strange. We don’t really have a complete theory of anthropic reasoning. It’s sort of a work in progress, to some extent. In the same way, we don’t really understand how AI works. We don’t understand really the full nature of the universe. They are works in progress.
And yet, logically, you can pose these questions in ways that seem irrefutable, or at least compelling. Like the Weinberg example I mentioned: it is really hard to imagine how you could possibly have the cosmological constant be a thousand times what it is, because Weinberg’s right — you just wouldn’t have galaxies. So how could you possibly have us in that situation?
The fine-tuning argument for the multiverse is the other popular use of it in modern science, I would say. People often point out: why is it that the gravitational constant and the fine-structure constant and the speed of light are all just the way they are? There’s a simple anthropic reason for it. You don’t have to accept it, but you can certainly make the argument that if they were anything else, you wouldn’t be here to talk about it. So you can’t really change the mass of the electron by a factor of ten and get away with it. There are going to be repercussions for chemistry. If you made the speed of light ten times slower than it really is, then relativistic effects would happen in everyday cases, especially for chemistry — that impinges on the ability of electron shells to be stable. You start to ruin the CNO cycle inside stars, and a lot of interesting nuclear physics and chemistry. So I think that’s the most common case.
There’s also a fun case — I think this is true, but it’s a bit of an urban legend — from World War II, called the German tank problem, if you’ve heard of that. The Allies would apparently — maybe you know better than I do whether it’s true or not — see the numbers imprinted on German tanks. A tank would say one-five-five or something. So they would look at that number and say, okay, they must have of order 300 tanks. Because if that’s a typical number, they’ve probably not got a million tanks — otherwise it’d be very unusual that we had captured the 155th tank out of a million being produced. And they probably don’t have exactly 155 tanks, because then we’d just be very lucky to have caught the very last tank that was manufactured. It’s probably of order three to four hundred. And so they used that to set the manufacturing targets for the factories back in the UK — like, this is how many tanks you need to produce, because we think that’s how many the Germans have.
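The textbook version of that estimate is simple enough to sketch. If k serial numbers have been observed with maximum m, the standard minimum-variance unbiased estimator for the total is m(1 + 1/k) − 1. A rough sketch with made-up serial numbers (not the historical intelligence data):

```python
# German tank problem: estimate total production from observed serial numbers,
# using the standard minimum-variance unbiased estimator.

def estimate_total(serials):
    k = len(serials)          # number of tanks observed
    m = max(serials)          # largest serial number seen
    return m * (1 + 1 / k) - 1

# Hypothetical captures, including the "155" from David's telling:
print(estimate_total([155]))                  # one tank seen -> ~309, "of order 300"
print(estimate_total([38, 112, 155, 207]))    # more captures sharpen the estimate
```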
So yeah, there are examples of this reasoning being used quite a bit. I think the one that really troubles people is the doomsday argument. That’s the one where everyone feels, no, something doesn’t seem right when you apply the reasoning to that case.
Dan Williams: Could you walk through what the doomsday argument is, David?
The Doomsday Argument
David Kipping: Yeah, sure. So it’s been invented three or four times, I think, by different people at this point. It essentially says that we are a median example of ranked humans who will ever live — so you go from, and this is where I always think it gets a bit ill-defined, somehow a first human, who lived, I don’t know, a million years ago or something, all the way up to today, and maybe you count up that it’s on the order of 100 to 200 billion humans who’ve ever lived throughout human history.
So if you’re somewhere in the middle, then you’d expect there to be about another 200 billion humans to go before we call it a day. And of course, the birth rate is much higher now — more importantly, there are more people alive today than at any point in history, so the absolute number of people being born is much higher than it has ever been. And so that means there are probably only five or six generations left, or something, before you run out of those people. And that’s kind of disturbing, because it implies there are only a hundred or a few hundred years to go before doomsday happens.
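The arithmetic behind that is easy to reproduce with round, illustrative inputs (not figures from any particular paper); note how sensitive the horizon is to what you assume about future birth rates:

```python
# Toy doomsday arithmetic: if we are the median human by birth rank,
# roughly as many births lie ahead as behind.

born_so_far = 100e9        # rough estimate of humans ever born
births_per_year = 130e6    # roughly the current global birth rate

years_left = born_so_far / births_per_year
print(f"~{years_left:,.0f} years at today's birth rate")
# -> on the order of 770 years with these inputs; shorter if birth rates
#    keep rising, far longer if they fall. The argument gives an order
#    of magnitude, not a date.
```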
So a lot of people think that’s really weird. How could you possibly take your rank position and make inferences about the extinction of humanity? When it’s framed like that, I think it feels really flimsy. But on the other hand, if you frame it slightly differently — you look at like the Foundation series or Star Wars or something like that, where they have these galactic-spanning empires — and you think how many individuals must be living in those societies. They’re all humans, right? They’re humans just living all over these planets in the Foundation series. You’d have trillions and trillions and trillions of trillions of people. And the chance, if you were born as a random soul at a random time, that you would be on the progenitor planet, pre-empire phase, would be vanishingly small. So you might therefore make the argument that that doesn’t look like a likely future for us. It doesn’t seem likely that humanity will ever become a galactic or universal-spanning species, because how does that possibly make sense with us being so early in the story?
But there’s lots of ways to criticize it. One is that maybe humans change. Maybe the experience of a human in a thousand years from now is some kind of cyborg, or genetically modified version of us, or just natural evolution that — their experience is not the same as us. And so we can’t say that they’re a representative example. That’s kind of the key part of this assumption. You can draw a random member, but maybe the membership itself evolves in some subtle way.
And certainly that goes backwards in time too — when does Homo erectus suddenly become human, and when does it not? It feels very artificial to draw a line. Do you include all animals that have ever lived by that metric? How does this work? So I think that’s where, when you start ranking people, it gets really flimsy. But this is more a criticism of the ranking aspect of the argument than of the anthropic reasoning itself. It’s probably an ill-defined problem to try to rank and discretize people like this, because of the changes that happen to humans.
Henry Shevlin: So this is one thing that I get hopelessly confused about when I think about anthropic reasoning, which is the reference class problem. How do you decide how to specify your sample? Because in the case of the Red Sky Paradox, you might say, well, I step outside and I see a yellow star, right? So of course it’s impossible that I could ever have been born around a red star. So you could condition the reference class on the type of observers living around yellow stars. Why doesn’t that defuse the problem?
David Kipping: Well then you’re kind of like double conditioning. You’re almost like saying, what’s the probability of having water on your planet given that you have water on your planet? Well, it’s one. I mean, obviously it’s one, because it’s a double, it’s self-conditional, it’s a circular statement. Obviously you can certainly make such a statement, but it doesn’t teach you anything. So you can say, what’s the probability of having a yellow Sun given you have a yellow Sun? But it doesn’t move the needle in any way.
So you do have to make a stretch. And so that stretch here would be: what’s the probability of an observer seeing a yellow star under the assumption that observers are equally likely to be born around any type of star, or any main sequence star, to be a bit more specific? So that’s the tacit assumption. And it’s reasonable to question that assumption. That’s kind of what the Red Sky Paradox tries to do.
The reference class issue is a sticky one. And again, I think this leads to these questions of whether you use the self-sampling assumption or the self-indication assumption — SSA versus SIA. They can lead to different conclusions, especially for toy problems like the Sleeping Beauty problem. And those are just unresolved: you can take the Sleeping Beauty problem and get two different answers depending on how you do the anthropic reasoning. So I think these are totally sound critiques of the approach. But at the same time, we have to concede that it has had some interesting successes along the way in its journey so far. So I give it some credence, but I’m also cautious about using it.
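For readers who haven’t met it: in the Sleeping Beauty problem, a fair coin is flipped, and Beauty is woken once if it lands heads and twice (with her memory erased in between) if tails; on each waking she is asked the probability the coin landed heads. SSA-style “halfers” answer 1/2; SIA-style “thirders” answer 1/3, because tails produces twice as many awakenings. A minimal simulation of the thirder bookkeeping:

```python
# Sleeping Beauty: SIA-style reasoning weights hypotheses by the number of
# awakenings they produce, which drives P(heads | awake) toward 1/3.

import random

def thirder_estimate(trials=100_000):
    heads_wakes = tails_wakes = 0
    for _ in range(trials):
        if random.random() < 0.5:
            heads_wakes += 1      # heads -> one awakening
        else:
            tails_wakes += 2      # tails -> two awakenings
    return heads_wakes / (heads_wakes + tails_wakes)

print(thirder_estimate())  # -> ~0.33; SSA-style reasoning instead keeps 0.5
```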
Henry Shevlin: One thing that’s troubled me when thinking about Red Sky-style paradoxes is that it seems kind of implausible to me that we’d be sitting on a planet to begin with. Maybe I’ve just read too much Iain Banks, but it seems to me that the vast majority of habitable landscape across the future of the universe — at least for sapient beings, let’s say the kind of beings that can do statistics — is going to be on orbitals or constructed habitats. So why are we on a natural planet to begin with, when you’d think that any sufficiently advanced civilization would be building artificial habitats? Is that also a puzzle? Should it lead us to think that people aren’t going to build habitats at scale — that the majority of sapient life that will ever exist is going to be, for whatever reason, planet-bound rather than living on orbital habitats?
David Kipping: Yeah, I mean, you’re kind of adding in this extra ingredient of what happens to super-advanced civilizations. Most people, if this is true, would probably be born off-world. Let’s just call it that. Whether it’s orbitals, or just another planet, or a moon, or something, they’d be born off-world — which obviously isn’t true. You were not born off-world, I was not born off-world. We don’t know anyone who was born off-world. So therefore it’s already an interesting constraint to some degree, that hasn’t happened.
A simple resolution is to say that just doesn’t happen: species never get to the point where they do that. Or at least species that have — and this is where it gets very philosophical — a comparable sense of consciousness to us, whatever that means. Because perhaps there is AI doing this, but we can’t be born as AI. Perhaps there are fungi which do this — technological fungi; we can’t really imagine what they’d look like, but somehow they do it, and their experience of reality is so different to ours that we should not be surprised we were not born a fungus. It’s almost meaningless to even frame the question that way, because they’re colonies of single-celled organisms that just extend ad infinitum. So that’s where the reference class problem gets really sticky.
The one I’ve been thinking about most recently — and it’s a real classic — is what’s called Hart’s Fact A. It’s considered the strongest constraint by many in SETI, the search for extraterrestrial intelligence. It’s that, again, we exist. And if you imagine extrapolating human technology even a century, maybe even just a few decades into the future, we can imagine self-replicating — what we call von Neumann — probes. You could put an AI on a small chip and accelerate it — not to the speed of light, but even 1% of the speed of light would be more than enough to make this a real problem for astronomers. The Milky Way is about 100,000 light-years across. So at 1% the speed of light, in 10 million years you could colonize the entire Milky Way. The galaxy is 10 billion years old. So that could have happened a thousand times over by now. And yet it clearly hasn’t.
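The timescales are worth checking, since the whole argument rests on them (a two-line sketch using the round numbers David quotes):

```python
# Galaxy colonization timescale for probes at 1% of light speed.
diameter_ly = 100_000                 # Milky Way diameter in light-years
speed_c = 0.01                        # probe speed as a fraction of c

crossing_yr = diameter_ly / speed_c   # 10 million years to cross the galaxy
galaxy_age_yr = 10e9                  # the galaxy is roughly 10 billion years old
print(galaxy_age_yr / crossing_yr)    # -> 1000.0: room for a thousand waves
```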
So that’s startling, because there are a hundred billion stars — a hundred billion opportunities for someone, at some point. However unlikely it is — even if it’s only a one in a hundred billion event — it should have happened by now, and then we shouldn’t be here to even have this conversation. So that’s a really strong constraint, I think: civilizations just don’t get to that point, for whatever reason.
Maybe they choose not to do it, ethically — though it’s hard to believe in a universal ethics like that. And of course, these systems don’t have to stay as designed — they could just mutate. If you make a self-replicating probe, each generation will have errors, and those errors will cause the behavior of the probes to change. You could very easily have these runaway situations. In a way, it’s the most dangerous technology an alien could ever develop. And yet that seems not to have happened. And that’s really interesting from an anthropic perspective, because it does imply that we’re probably as advanced as it gets.
Science, Philosophy, and Falsifiability
Dan Williams: One of the things you said there, David, was: this is when things start to get really philosophical. I’d be interested to hear your thoughts about how you view that relationship between science as it’s sort of conventionally or traditionally understood, and philosophy, and how you position yourself in terms of the relationship between the two.
David Kipping: I have no formal philosophy training — that’s the first thing to say. I always like to be candid about what I don’t know. I don’t have a philosophy background. I remember when I was thinking about where to do my undergraduate degree, Oxford at the time had a physics and philosophy degree — I don’t know if they still do. It was a double major, and I was really attracted by that. But everyone told me that Cambridge had the stronger physics program. So I thought, okay, physics is really my passion, I’ll go for Cambridge.
I’ve always had an interest in philosophy, and I think science obviously has a natural connection to it. Sean Carroll often complains about this, especially in quantum physics — there’s this kind of “shut up and calculate” view that a lot of us have adopted, where we’re not encouraged to think about the implications of our work. But sometimes the implications can shake you to your bones when you really think about what they mean.
And that’s what gets me excited. As a kid, what I was always drawn to was just asking: what else is out there? Am I part of some bigger continuum? What is the nature of humanity, ultimately? Natural philosophy obviously tries to address those questions from a related but slightly orthogonal direction. So I’ve really enjoyed SETI meetings, where there’s often the opportunity to talk to philosophers directly. There are all sorts of backgrounds: anthropologists, social scientists, people working in media, obviously physicists and astronomers, even theologians. I think theology has lots of interesting connections to looking for aliens, because God and aliens actually have lots of similarities. Those are the only meetings I go to where you get that kind of broad interdisciplinary interaction. So that’s where I’m learning most of these things and having those great conversations.
Dan Williams: I once had dinner with Roger Penrose, and he said that the people he most enjoys talking to are philosophers of physics — actually, philosophers of physics at Oxford — rather than physicists, precisely because he thinks with many physicists there is this kind of “shut up and calculate” mentality. They’re not willing to engage with those really kind of big-picture, fundamental questions.
But I suppose another way of coming at the same question about the relationship between science and philosophy, and how you view that relationship, is: what’s the role of kind of ordinary empirical testing when it comes to addressing these really big-picture questions that you’re engaged in?
David Kipping: Maybe this isn’t directly answering your question, but one connection that comes to mind is Popperianism, and the definition of the empirical process of the scientific method. We have this guideline from Karl Popper: your theories have to be falsifiable, otherwise it’s not really science — you’re doing something else. And a lot of us adopted that a long time ago without really thinking about it too much; we were taught it at college and then just went off with it.
But a lot of science that’s happening right now suddenly challenges that Popperian view. I have colleagues like Geraint Lewis, a cosmologist who works on fine-tuning, for instance — and string theorists would often be in this boat as well — where what they’re working on doesn’t make any testable predictions. Certainly not in a practical way. Maybe in some extremely advanced civilization you could imagine building galaxy-spanning particle colliders to test some of these theories. But typically they’re asking questions that are unfalsifiable.
And even questions that I’m interested in — like, does Mars have life on it? — are, in some sense, actually unfalsifiable. I can’t ever prove that Mars is sterile, because there’s always another rock to look under. There’s always another core drilling site you could dig into to see if there’s something there. So you can’t ever disprove it. And I can’t disprove that UAPs are aliens. I can’t disprove that aliens are inside your body right now and you’re just wearing human skin. You can go down this slippery slope where nothing in science can ever be disproven.
But bringing it back to cosmology a little: they’ve been saying — at least Geraint has been telling me this, and I’ve been thinking about it a lot — that it doesn’t really matter whether a theory is falsifiable; what matters is whether it has use. Is it useful? That’s maybe a better way to think about these models. Certainly the multiverse, even though it’s not testable, has explanatory capability through that anthropic argument we talked about before. It can explain why the constants of the universe are the way they are. And if you don’t have that, you just have to accept them as brute fact, or hope for a miracle — which is to say that one day physicists will figure it out and there’ll be some reductionist explanation for where they come from. But it’s also possible that will never happen. I think it’s quite plausible that will never happen. And so then you’re just sat with brute fact versus something that at least has explanatory capability.
It doesn’t prove the theory is correct — I don’t think you can do that. But you can say that it’s useful. And when you frame it that way, I think a lot of us would say quantum theory isn’t really “true”; it’s just useful. We don’t really know to what degree the universe truly is quantum. There might be some deeper theory, as Einstein suspected, that explains all of these random probabilities, and we’ve just yet to uncover what that deeper theory is — some grand unified theory beneath it. So the quantum model of the universe is an extremely useful model for calculations, but we shouldn’t necessarily assume that it’s a totally accurate description of how the world really is. So perhaps falsification might be challenged in favor of: let’s just find things which actually explain stuff, and which we can use in our society to make progress.
AI in Science
Henry Shevlin: So I think these issues of philosophy and science and their relation will probably continue to percolate through the conversation. But I’d like to take us now to discussing AI a little bit, because there was an absolutely fantastic recent episode of the Cool Worlds podcast called “We Need To Talk About Artificial Intelligence,” which seems to suggest that, at least for you, this was a real wake-up call — I think it was one meeting at the Institute for Advanced Study in Princeton. Do you want to just give us a quick summary of what this meeting meant to you, and how it has maybe been shaping your views on what AI is doing to the sciences?
David Kipping: Yeah, so this was a meeting in, I think, January or February — a few months back now, near the start of the year. Like many scientists I know, I was using these AI tools. I wasn’t using Claude at the time, but I was using ChatGPT a little bit, and Copilot, and things like this. And I kind of assumed that the really smart people — because we all have a bit of imposter syndrome — don’t do that. The really good coders don’t need Copilot; they’ll just code it up properly. They’ll do their reasoning without any help. And I was using it as a crutch because I was inferior to these other great scientists, and it was just sort of helping me along in that way.
And then what was startling at this meeting was that these are people I have the highest respect for. Because the Institute for Advanced Study is like the pinnacle of where you can go intellectually — amongst many other great schools, it is one of those very, very top tier places. I remember I walked down the corridor and saw Ed Witten — people say he’s got the highest IQ on Earth; they say that about Ed Witten, right? And you’ve got people like that saying they’re all using AI tools, and not just for coding. And these people were hardcore coders. They were writing things like Enzo and Gadget — these astrophysical simulations of galaxies and hydrodynamical fluids and stars — really, really complicated codes. Legacy codes that have sometimes been handed down from advisor to student, over generations of people. And they were using it.
So there was a concession that it has coding supremacy. That language was used — that it already has coding supremacy, and we have to admit that and use it; it doesn’t make any sense to pretend otherwise. And second, that it possibly has mathematical supremacy. That was less certain, but there was a sense that it was already pretty close to being as good as what we can do mathematically — even, in some cases, superior. And that was really wild to hear. It made me think: I’m not being the idiot in the room by using this. Everyone’s using this at this point. And if anything, they’re trying to accelerate the adoption of these tools, not resist it. There was a there’s-no-way-back sort of view about it.
Henry Shevlin: And of course, David, you’ve been using AI in the broad sense for basically your entire career, I think. Have you seen significant evolution in these tools? Was there one moment — perhaps it was this meeting at the Institute for Advanced Study — where things suddenly kicked into a different gear? Or have the tools been steadily improving since you started in the field?
David Kipping: Yeah, certainly in my own career I was more on the development side of some of these tools for a while, though not at a serious level. We wrote a couple of papers where we developed our own deep neural networks — just simple feed-forward, backpropagation-trained models for bespoke problems in astrophysics. In particular, we were interested in whether, if you take a planetary system, you can predict whether it has additional planets in it — questions like that. And then, where would those planets live? So we could take this sample of all of these known planets and make successful predictions for the systems.
I’d written my own DNNs like that. I mostly did it, I think, because I was just interested in how they work — the best way to figure out how something works is to find a pet project and code it up. So I was more on that development side. That was around 2010, 2011. And then in the years that followed, I started to back off, because lots of astronomers were doing AI — and still are — but what I was seeing was that it wasn’t a hobby project anymore. You couldn’t dip into it, mess around and write an impactful paper, and then go away and do Bayesian statistics and all the other stuff. It was becoming a full-time job, because the literature was just exploding. To keep up with it, you would have to spend all your time reading the arXiv and playing around with various AI tools.
And I just consciously decided I didn’t want to do that, because AI is not my passion. Science is my passion. So I kind of left it to the wayside. I’ve said to several students recently over those years — they were like, “I saw you did these AI projects. Can I do one with you? I’m really interested in AI.” And I’m like, I’m not doing anything else with AI at this point. So I kind of went stagnant on it.
And then most recently, I’ve now become, I’d say, like a power user of it. I don’t have any false narrative in my mind that I’m going to develop the next LLM for exoplanets, or for anything. That’s not my interest. There’s no point. I can’t possibly write an LLM anywhere near as good as what OpenAI can do, or Anthropic can do. So I may as well just use the tools, and think about how to use them as effectively as possible in my field. I think that’s the transition that I’m seeing a lot of people moving to — that the billions and billions of dollars of investment these companies have make it just a complete waste of time for astronomers, especially, who aren’t even software engineers, to possibly try and compete with that. We may as well just try and use them in a way that advances our field.
Dan Williams: So in terms of the use of AI in science now, as you said, David, there are some people, including some of the smartest people on the planet, who are using AI aggressively. There are some people both inside academia and outside of it who are aggressively against the use of AI. How are you thinking about that in terms of — are you really excited about where this is going? Are you worried about it? Do you understand some of the worries people have about the use of AI in science?
David Kipping: Yeah, for sure. In some ways it has analogies to what’s happened before. One concern might be the ethical questions around how much power and how much water these data centers use, especially for climate change. Even building space data centers would potentially be a further form of contamination and pollution of our natural environment. So I think you could understand why someone might say, “I’m trying to be carbon neutral, so I just don’t want to use these things.” But that’s not a new debate, because astronomers have been using high-performance computers for generations already — since probably the ’40s or ’50s. As soon as computers were accessible to scientists, astronomers were using them to do big calculations.
I remember there was a really fun paper about 10 years ago that caused a lot of controversy. It was saying that all astronomers who code in Python are bad for the Earth, because Python is so computationally inefficient that you are basically emitting 10 times more CO2 than you need to, compared with just coding in C instead. It was really trying to shame astronomers who code in Python — and of course, basically all astronomers these days code in Python. So a lot of people really didn’t like that paper. But it was a fair point: if you really care about your carbon footprint, then these data centers and what they produce are a big factor.
So that’s not that new. Different people will just arrive at different comfort levels as to where they think these tools are applicable. There’s also this kind of oligarchic element to it as well, like these companies and the extreme wealth and the wealth inequality in our society, the future of work, the future of labor — all get tied up into that. So it intersects so many things.
I think it’s interesting that AI has become such a political topic. It didn’t used to be that way. It used to just be a tool, and you had an opinion about the tool, but now it’s very politicized. I’ve noticed that some students who identify as very liberal will not use AI tools, while students who are more right-leaning or centrist don’t really care as much — they’ll say, “well, whatever, it’s just the way of the world, let’s be pragmatic about it.” Even saying you’ve used AI can trigger a political reaction to your work. This is all kind of new — that element was very much on the margins in the earlier debates about data centers and high-performance computing, but now it’s much more present. So that’s interesting.
I’ve just been thinking personally — the question I’ve been asking myself, although I’m on sabbatical right now so I don’t have to deal with it yet, is: would I hire a student who refused to use AI? I talked about that, I think, in that podcast episode, and I’m still thinking about it. I probably wouldn’t, in the same way that I probably wouldn’t hire a student who refused to use the internet. It would be such a disadvantage to them. If they said, “I’m only going to use a typewriter, I’m not going to use a computer,” I’d say, okay, that’s fine, but you’re really tying both hands behind your back here. If you want to get a job, and you want your PhD to have an impact, and we want to get some work done together — you need to be using these tools. It’s weird not to use them. So that’s a difficult conversation to have with yourself and with the student, but it’s certainly something I’m thinking about.
Henry Shevlin: So there’s a related worry about the impact of AI on sciences that I think has come up a few times on the podcast, most recently with Chris Lintott — about whether AI might strip science of a lot of its human value. If we’re relying on AI systems to produce the next generation of theories that may be to some extent inscrutable to humans, that this will sort of destroy the most successful project in human history, namely humans doing science. And I guess the counter-argument to that is that the reason that we fund science at scale, the reason we build particle colliders and expensive space telescopes, is because we care about results. So fine if people want to be hobbyist scientists to experience the joy of science. But should the taxpayer be funding your own epistemic discovery and aesthetic enjoyment? Or should the taxpayer be concerned about results? So I’m curious where you land between those two positions.
David Kipping: Yeah, I think I was a lot more concerned about this a few years ago. And weirdly, I’ve actually gone the other way a little bit. A few months ago, I was right with you — I was really worried about it. What’s the point? I don’t want to live in a world of magic. The reason I became a scientist is because I want to understand how things really work. It’s understanding. And I don’t want a model to just spit out a result where I have no idea where it comes from or what it’s doing, and just trust it. That’s not comfortable to me.
But having used these models a lot over the last few months — A, you get a bit acclimatized to using them, and B, you start to understand the limitations, at least of the current versions. And it’s certainly not at the stage where it’s able to pump out a paper. It’s just not there at all, in my opinion.
There was a colleague of mine who spoke to me about this recently. She had a PhD student who wrote a really nice first draft of a paper, a really great astronomy paper. They submitted it for review, and they got the referee report back. And then the student came to her a few days later and said, “I’ve finished the second revision already.” That was quick — just two days. And she looked at it, and it was just complete nonsense. The paper was twice as long. All the figures were ruined. It was overly verbose. The messaging had completely been lost. She said to him, “Did you put this into ChatGPT?” And he was like, “no, no, no.” But then it turned out, of course, he had. Eventually he confessed that that’s what he had done. So they had to totally scrap that revision and go back and do it the old-fashioned way.
I think that’s just a good example — and it also touches on expertise. I don’t think a senior person at my level would do that, but I think students and interns could be tempted to: just copy and paste the whole damn project into ChatGPT and say, “do it.” That’s really dangerous, in my experience, and it’s not the correct way to use these tools. You need to figure out a plan in your head a little bit, or even interact with the model to develop a plan. But it has to be like a conversation. And then you need to go piecemeal — you take little bites of it. You ask it to pursue the next thing. You test it. You compare it to other codes you know that do the same thing.
In a way, that’s not that different from what scientists have always done. To go back to the example of large-scale simulations of the universe — if you’re a PhD student trying to simulate, I don’t know, supernova feedback around supermassive black holes, or the star formation regions around those areas, you might well be handed a giant piece of code: hundreds of thousands of lines developed by huge teams and handed down over 10 years. You would not be expected to understand every line of that code. You would be expected to use it, to understand broadly what it’s doing, and to ask skeptical questions. So if you got an answer that said there was negative star formation, you would look at that result and say, hmm, that doesn’t make sense — let me work through the problem and see where it’s going wrong.
It’s that kind of sanity check that physicists, especially, have always learned to do — those back-of-the-envelope calculations. Yes, you have some sophisticated computer code that spits out impressive answers as a black box, but the skill of being able to check things with your brain and ask those reasoning questions is absolutely vital. Almost every time I use these AI models to do something, they mess up the first time round, and I catch it, because I’ve done that back-of-the-envelope calculation. I’ve said, well, actually, let’s take the asymptotic limit in this regime, and you can see it fall over. And it’s like, “oh yeah, you’re right,” and then it will go back and fix it. But that’s the vital skill that I think we’ve always needed.
So I don’t know how things are going to improve. Maybe eventually it’ll be able to do all of that itself and just completely take over. But certainly, as impressive as Opus 4.7 and these models are, they’re nowhere near that level yet, in my opinion — of being able to run away and do science.
Dan Williams: So the obvious argument, you suggested, David, for scientists making as much use of AI as possible is that it’s just going to help them with the work of science and advancing the frontier of knowledge. That’s kind of the social responsibility of scientists. Can you foresee any ways in which actually, even though it might seem like it’s making us more productive, it might have some negative consequences for that core scientific project of creating and advancing knowledge?
David Kipping: Yeah, certainly there’s spamming, which can happen — and has been happening in some journals. I don’t think astronomy journals have suffered from this too much yet, but there are certainly examples of people doing what that student did, which is what you shouldn’t do: just prompt an entire research project, not really look at it too closely, and submit it to a journal. The journals themselves may start using AI to do the refereeing — in which case you could end up with an enormous amount of what would literally be AI slop in these journals.
What I worry about — and it’s true with image generation as well, and other things — is that that kind of recursive loop starts to close. You start to have scientific agents that are trained on junk. Because if we get to a point where there’s enough junk science out there, then what the model is learning is junk, and the true scientific innovations get lost in the noise. So that would be really worrying.
I do think that human referees are a vital part of making sure this doesn’t happen, which is an interesting problem because human referees are in very short supply. It’s very hard for editors to find human referees these days. But yeah, in the same way that that’s happening with music, and it’s happening with image generation, and it’s happening already with video — I think it is a worry that you start to train on fake data.
I know that — I was listening to the NVIDIA CEO, I forget his name, he was on Lex Fridman recently —
Henry Shevlin: Jensen Huang.
David Kipping: Yeah, sorry. He was talking about how they’re very comfortable using simulated data and augmented data. I don’t really know how that would translate to science. It would make me nervous to generate fake scientific papers and then train on them to create an AI researcher. I’d have to think about that and learn more about what they had in mind. I don’t think he was talking about research particularly in that case. But you’d have to solve that problem, because you probably wouldn’t have enough volume, in terms of research papers, to create credible agents — at least with the training approaches they’re currently using.
AGI Timelines and the Future of Science
Henry Shevlin: So you mentioned — and I completely relate — that current AI agents, although they’re very useful as tools, can’t take over large-scale project management single-handedly, particularly in the sciences, or in my own field. I find AI tools very useful when doing research for philosophy and cognitive science papers, for example, but I wouldn’t trust writing a paper to one of these things anytime soon. But at the same time, there are the timelines that serious researchers are talking about — five, 10 years away from AGI, from real transformative superintelligence. And I’m just curious whether you are skeptical of some of those timelines, or whether you see real transformative AI in our near future.
This actually really comes across sometimes in the show, when you’re talking in the podcast about various new telescopes that are scheduled to go up in the 2040s. And part of me just thinks: come on, by that point either all of the major predictions from leading labs about the destination of AI — AGI — will have been falsified, or these telescopes will be, maybe not redundant, but our sights will be set much higher. We’ll be building our first Dyson swarms by 2045. So I’m curious, are you a skeptic about some of these more ambitious predictions for AI in the next decade or two?
David Kipping: I’m certainly a skeptic of having Dyson swarms, I’d say, by 2045. That would surprise me a lot if that was true. Because I think there’s a big difference between software and hardware — actually to physically build stuff. Even what’s slowing down a lot of this development with AI is they can’t build data centers fast enough, nor the power to supply them fast enough. Energy is really becoming the bottleneck for them, not the software development.
I always try to be very agnostic about everything scientifically, especially about predictions of the future. And it’s totally plausible that there’s a ceiling — a ceiling to how good these models can get. Usually that’s true of most things: most things are S-curves. There’s hardly anything in nature that’s truly exponential — probably the only exception is the expansion of the universe. Everything else is an S-curve. So it would be weird if it didn’t saturate at some point. I’m not exactly sure what that bottleneck could be, but it could just be a fundamental limitation of large language models themselves.
The actual way we think — although language is an integral part of how we think, and obviously you guys know a lot more about this than I do as cognitive scientists — but it feels to me that there’s thoughts I can have that don’t involve language. I can imagine a ball rolling down a hill, or a spaceship taking off, and there’s no words in my head. It’s almost like a little physics simulation that’s playing in my brain. And I don’t know if the way these LLMs work will guarantee that it can do all the cognitive things I can do. I just don’t know. I’d be interested to hear what you think about that.
Henry Shevlin: Well, just to push back slightly — of course, LLMs are one of many different games in town at the moment. You’ve got things like AlphaFold and GNoME doing foundational work in protein structure and materials science. I would have shared some of those doubts a few years ago, but seeing, for example, the amazing work being done by frontier AI — even LLMs — in things like mathematics: we’ve now had multiple Erdős problems solved with AI playing an absolutely central, defining role. So I’ve been surprised at how well these models, which seemingly start out as mere linguistic predictors, can actually contribute to frontier mathematics — and, with non-LLM systems, to frontier materials science and biology. So although LLMs get all the headlines at the moment, I see the current wave of AI as investing in multiple different pipelines in parallel.
David Kipping: Hmm. Yeah, that's fair. I think the best case for agnosticism I can give you from my own work that bears on this is the simulation argument, actually, which loops back to that anthropic point. You've probably heard Musk and others say this; he's stated very confidently that the odds that we don't live in a simulation are like a billion to one. We almost certainly are simulated, by the reasoning that if a universe can make a simulated universe, and that one can make a simulated universe, and so on, then you end up with far more simulated universes than real ones.
But I pointed out in a paper a few years ago, with a very simple argument, that we don't know we'll ever have the ability to make simulations of that fidelity. Maybe there's some bottleneck to our own ability. What Musk was doing was taking Nick Bostrom's trilemma and just asserting that the last horn is true: that we would indeed go on to make these simulations. But there are the other two horns of the trilemma: A, that we never develop the capability, or B, that we never choose to do it. So if you have a softer, more agnostic prior, you'd say maybe there's a 50% chance, or something, that we develop that technology, and a real chance that we won't.
I just try to remain agnostic like that with AI, because if you extrapolate all technologies ad infinitum, then you would certainly conclude that we're simulated. And historically, that kind of extrapolation has been precarious. Percival Lowell looked at the canals being built across America and said, that's what advanced civilizations will do: they'll just be covered in canals. It seems silly to us now. Why would a civilization cover their planet in canals? But to him it made perfect sense as an extrapolation. Scientists today talk about tiling planets with solar panels, because that would be a natural extrapolation of renewable energy. And similarly, I wonder if in a few generations the idea of extrapolating the capability of AI without any bound will look foolhardy. So I just try to remain totally agnostic about it. It's possible, and I'm not saying it won't happen; I just don't know how far these things can go, and I don't think anyone really knows.
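To put rough numbers on that agnosticism, here is a toy version of the calculation (a simplification for illustration, not the model in David's paper; the quantities p and N below are assumptions). Let p be the prior probability that one of Bostrom's first two horns holds, and N the number of simulated observer-populations a capable, willing civilization would run.

```latex
% Toy simulation-argument arithmetic (illustrative; p and N are assumptions).
% By indifference over observers, the chance of being simulated is
P(\text{simulated}) \;=\; (1 - p)\,\frac{N}{N+1} \;\le\; 1 - p .
% Musk's "billion to one" implicitly takes p \approx 0 with N enormous.
% An agnostic prior of p = 0.5 caps P(simulated) at 50%, however large N is.
```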
Dan Williams: Yeah, I agree with that. I don't think anyone really knows, and I'm also extremely uncertain about the timelines here. Just to double-click on one thing: state-of-the-art LLMs these days aren't trained only on linguistic input; there are multimodal inputs as well. Although I share the skepticism about whether this particular kind of architecture will scale to AGI and superintelligence.
But David, suppose we fast-forward five or ten years and we do have AGI, in the sense of AIs that can fully substitute for the kinds of things we do, for all kinds of economically valuable, scientifically valuable human labor. How would that cause you to update your views on these other big-picture questions you've looked at? You mentioned the simulation argument, and earlier on we touched on the Fermi paradox. I totally take the point that there's huge uncertainty. But suppose it resolves by 2035 and we do have the real deal, superintelligent AI. How would that shift your beliefs about these other topics?
David Kipping: Yeah, it’d be a big shift, I think. It’d influence all sorts of aspects of this conversation. One thing we see already with these AI models is how energy hungry they are. And if you extrapolate that, then surely the only purpose of these computing data centers is to compute as much as possible, as fast as possible. And so that implies that you’re going to need vast amounts of energy.
One interesting consequence I've been thinking about just recently concerns these orbital data centers that billionaires are getting very excited about: they would produce quite a signature. We should probably be able to see that kind of thing in James Webb data. We could probably already put limits on the existence of essentially artificial rings of thermally hot satellites around other planets (they'd be emitting a lot of infrared because they're warm), most likely in geosynchronous orbits to capture as much solar energy as possible. That puts them orthogonal to the plane in which these planets transit, which maximizes their detectability. So I think we should be able to see that. It gives you lots of ideas about what might be possible when asking these questions about other life.
But if we make that breakthrough, I think the biggest point is that it seems to imply we are alone. Because if we can do it, surely someone else could have done it already. And it really does exacerbate that point we talked about earlier with Hart's Fact A: that we seem to live in a totally natural universe. Everything we see, stars, galaxies, clouds of plasma, is consistent with nature. There's no hint anywhere of anything artificial, no engineering, nothing in the whole universe as far as we can tell. That is weird.
If we can invent these machines which have this exponential capability to just basically almost do magic — just do whatever they want, Dyson spheres everywhere, colonize wherever they want, faster-than-light spaceships, whatever it is — it just massively exacerbates the Fermi paradox, to the point where you’d probably conclude this is it. That would be my natural reaction. It would make me even more pessimistic, I think, about the probabilities of civilized, intelligent life in the universe.
Henry Shevlin: I mean, there's a fun idea here: if we do develop AGI, then this should massively raise our prior on our being in a simulation. And the simulation theory is sometimes offered as an explanation of the Fermi paradox itself. The pop version of this is the "draw distance" argument from video games. If you're in a video game and you look at the mountains in the distance, they're not fully rendered; they're just a skybox, right? So in some sense you might say, well, the reason we haven't found a universe paved with technosignatures is precisely because we're in a simulation: if you're running an ancestor simulation of life on Earth, you only need a minimal amount of background detail in the rest of the galaxy.
David Kipping: Yeah, I agree. It comes back to this idea of: what is science? Because the simulation theory does have explanatory power like that. It naturally explains why there'd be no one else out there. And it also explains why we live when we live, right? We would be living in the most interesting time, which we do seem to: the most interesting time of this step-function transformation, where you might be interested in seeing how it plays out. What does that look like? Let's simulate it and see. So it has a lot of explanatory power.
But the simulation argument definitely fails the Popperian criterion of falsifiability in most versions. People talk about looking for glitches in the matrix, but for any error, you could always just rewind the simulation a little, fix the error, and restart from before it crept in. Like going back to the last save game before you jumped off the cliff, right? So in that sense I don't think it's testable. I don't really know what to do with it as a scientific idea, except as an interesting philosophical one. I think it will always be unprovable: always just something we suspect, and maybe a lot of us suspect it, but something we can never prove.
But the idea of an AGI that can do everything I can do, just to backtrack a little, would change everything, right? Because then what would I do with my time? I don't even know. How would I spend my days?
Henry Shevlin: Well, hopefully producing — continuing to produce the podcast for a start.
David Kipping: But you wouldn't need me to produce the podcast, right? It would do that as well. There'd be no function for me, because you could think you're watching me when it's just an emulation of me. You'd just say, "create fake Davids that make podcast episodes every two seconds."
Henry Shevlin: But you see, this is an interesting argument about employment in the post-AGI era: that relational goods, goods where the humanness is the point, will become the most valuable area of the economy. A simple example is the famous string quartet argument. I can play a beautiful recording of the greatest string quartet in the world, but people still hire humans to perform, because the humanness is the point. Entertainment, where there's a known person with their own brand and their own reputation, might be exactly the kind of area where humans will still be working, even if it's AIs behind the scenes doing a lot of the science and the economically valuable activity in industry.
David Kipping: Yeah, but I do think a podcast is a digital product. That’s the deliverable. I actually physically upload a file to YouTube, or to Podbean, or whatever. That’s the final deliverable. So if you could produce that convincingly with an AI model, it’d be far easier for me to do that than to actually sit down for two hours, and I’d probably enjoy it less. I’m sure I would. So maybe we’d all just revert to actually physically meeting again, and talking in public lectures and things like that. Maybe that would be all that would be left.
But even then, it's hard to imagine. If I try to imagine giving a public lecture in 20 years' time, after AGI, I'd have no idea what's going on at the frontier, because AGI would be so far ahead of me. All I could talk about would be classical learning. I wouldn't be able to tell you anything about how the latest discovery works, because it would probably be beyond my comprehension. And that's where I just lose excitement. I can't really imagine staying a scientist, because it would feel purposeless if I don't understand what's happening, if I'm not a participant. David Hogg wrote a wonderful piece about this; maybe you saw it on arXiv. Ultimately we do science because we want to participate in science, not because we just want answers delivered to us; the answers are a byproduct. We're curious creatures. It's a fundamental part of human nature to want to understand how things work, and if we lose that, we just become spectators. I think that's really tragic. So I fear that future. I would not really want to live in that world.
That's why, a few years ago (it hasn't happened so much recently), you had people like Max Tegmark calling for pauses on AI development and things like that. I'm sure in part that's fueled by asking these questions about who we are in that world.
Henry Shevlin: Have you seen the lovely Ted Chiang short story, flash fiction in Nature, called "Catching Crumbs from the Table," from about 20 years ago? He imagines an era of post-human science with a new industry of machine hermeneutics, where humans try to figure out, and explain in very dumbed-down terms, what it is the machines are coming up with. So that's one vision of what the next generation of science could be: us consulting the sacred texts, almost. They produce these amazing advances, and we try to winnow out the sense and the logic in them.
David Kipping: Yeah, but even that, you could imagine AI doing. I think that's the problem. The whole point of AGI is that it can do everything we can do, so there's nothing left for us to do. You can retreat and retreat, especially once robotics can do all the manual labor, and eventually even the emotional labor: therapy, talking to people. People already talk about AI girlfriends all the time, but God knows what happens once we have robotic girlfriends like that. It's just going to be the end of the birth rate. That explains the doomsday argument, I think, right there.
It's a terrifying future if all those predictions come true. But something doesn't feel right about it to me. There's just a spider sense, an intuition, that these models will never be able to replace everything we can do. I think our role will evolve: as scientists, as managers, in how we interact with each other, as communicators in the media space. I'm sure all of that will evolve, as it always has. But I'm skeptical it will be totally displaced, because I think a lot of people don't want that. Most people don't desire to have no function in this world; most of us desire a role. And if humans don't want it, I don't think it will happen.
Dan Williams: I think it's also important to distinguish the question of whether AI could replace human beings from the question of what human beings can do using AI, augmenting and extending our capabilities with it. As the philosopher Andy Clark puts it, we're natural-born cyborgs: we've always extended our capabilities with technology. So even once we've reached really advanced AI, I think the period that follows won't just be us becoming, sort of, 19th-century aristocrats playing frivolous status games. There'll be a long period where we're augmented by this technology rather than replaced by it.
The Fermi Paradox and Being Alone
Dan Williams: I have a question, just to go back to the Fermi paradox. Many people have the intuition that if we are in fact the only animals that create superintelligent AI, there's something surprising about that, given the scale of the universe. As an outsider to this whole literature, it strikes me that there's always something sort of teleological in the way that assumption gets set up, as if there's some tendency in the universe towards intelligence, and then technology and civilization. If we were the only animals in the universe that ever produced the music of the Beatles, I wouldn't find that a priori very surprising; it's purely contingent that that specific chain of events happened. Similarly with the fact that we've got the cognitive capabilities and the institutions that enable us to build things: I don't think there's any tendency in the universe that pushed things in that direction. It happened through lots of chance events, and through an evolutionary process that isn't in any way teleological. So what's supposed to be surprising? What gives the Fermi paradox its paradoxical character, according to many people?
David Kipping: Yeah, certainly when you look at human history, we were more or less biologically the same as we are today for the past 200,000 or 300,000 years. And yet we did not have agriculture; the Neolithic revolution didn't start until about 11,000 or 12,000 years ago. So we were quite happy for 200,000 years to be hunter-gatherers. We weren't compelled to develop cities and farms. I don't know what they thought, but apparently they were quite satisfied with that way of life.
So it's certainly not obvious that you could take even humans, put them on a different planet, rewind the clock, and get the same outcome again. Maybe this is a very unusual outcome even within the human experiment, let alone for other intelligent beings out there. And of course intelligence is diverse: there are lots of intelligent creatures on our own planet for whom it's hard to imagine developing a technological civilization. A dolphin, say, obviously doesn't have the fingers and thumbs to build anything like that, despite possibly having comparable intelligence. We're not really sure.
So there's certainly no guarantee. But I think the argument is like monkeys on typewriters: given enough rolls of the dice, at some point you probably will produce a roaming AI. And if we do it, that proves it can at least happen in some instances. The question then becomes: what are your priors? How often do you think that happens? Across a hundred billion stars, do you think that's a probable outcome or not?
I did a calculation last week; I might publish it in Galaxies. Again, to be honest, I actually used [AI] to help me with the math, to go through it. I just asked: imagine each galaxy has some chance of turning berserker, like getting infected. There's some spontaneous spawn rate at which a galaxy converts from essentially a vanilla galaxy into a berserker galaxy. Berserkers send out a signal at the speed of light, and every galaxy it comes into contact with gets infected. So it's almost like an infection problem. But on top of that you've got cosmological expansion; the universe is physically expanding underneath all this as well. So that was the calculation I did.
It turns out that in order to get 50% of the galaxies in the universe infected, the spawn rate is one in six billion galaxies. So if just one in six billion galaxies, over the entire history of the universe to date, ever spawns an AI, half of all galaxies would be gone by now. And it's even more extreme than that, because each of those galaxies contains some 10^11 stars, so per star you're talking about an event well below the one-in-a-trillion level. This is where the numbers get really big and you start to run into real problems. That's where it starts getting a bit uncomfortable. Maybe that's where you start to think simulation thoughts, because you think: how does this make sense? How can there be absolutely no one else out there? If we're only a few decades away from doing this ourselves, what gives?
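A heavily simplified, static-universe sketch of this kind of calculation is below. It ignores cosmological expansion, which David's version includes, and the galaxy count and radius are rough assumed values, so it is only meant to land in the right regime, not to reproduce his one-in-six-billion figure.

```python
import math

# Toy, static-universe version of the berserker "infection" estimate.
# Illustrative only: no cosmological expansion, and the galaxy count
# and radius below are rough assumptions.

C = 1.0              # speed of light, Gly per Gyr
T = 13.8             # age of the universe, Gyr
N_GALAXIES = 2e12    # assumed galaxy count in the observable universe
R_UNIVERSE = 46.0    # assumed comoving radius, Gly

volume = (4 / 3) * math.pi * R_UNIVERSE ** 3   # Gly^3
n = N_GALAXIES / volume                        # galaxies per Gly^3

def uninfected_fraction(rate: float) -> float:
    """P(a random galaxy has no berserker spawn in its past light cone).

    Spawns form a Poisson process in spacetime, so the expected number
    of spawn events in a past light cone of depth T is
        E = rate * n * integral_0^T (4*pi/3) * (C*s)**3 ds
          = rate * n * (pi/3) * C**3 * T**4,
    and the uninfected probability is exp(-E).
    """
    expected = rate * n * (math.pi / 3) * C ** 3 * T ** 4
    return math.exp(-expected)

# Spawn rate (per galaxy per Gyr) that leaves half of all galaxies
# infected, from setting E = ln 2:
rate_50 = 3 * math.log(2) / (math.pi * n * C ** 3 * T ** 4)

print(f"rate for 50% infection: {rate_50:.2e} per galaxy per Gyr")
print(f"about one spawning galaxy in {1 / (rate_50 * T):.1e} to date")
print(f"check: uninfected fraction = {uninfected_fraction(rate_50):.2f}")
```

In this static toy the answer comes out to roughly one spawning galaxy in a few tens of billions over cosmic history; adding expansion, as David does, makes spreading harder and so raises the required rate toward his one-in-six-billion figure.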
Henry Shevlin: So I think that’s a fascinating point. To pick up on something you said, Dan, and also something you said, David, about rewinding the clock. Stephen Jay Gould had this famous radical contingency thesis: if you rewound the clock of evolution on Earth, to what extent would we see the same kinds of animals and forms emerging? And as someone who dabbles in the philosophy of biology world, my sense is that there’s been a slight move towards thinking there’s perhaps less contingency than we thought. We see many instances of convergent evolution, convergent intelligence across, for example, eusocial insects and humans and cephalopods and cetaceans. And even at sort of earlier stages in development, primary endosymbiosis occurred at least twice, we think; multicellularity something like 20 or 30 times independently.
To relate this to the point you just made, David: when you think about the possible locations of a Great Filter, there aren't as many good candidates as there used to be, I think, apart from perhaps the origin of life itself, or maybe the emergence of something like eukaryotic life. But you really need some of those numbers in the Drake equation to get down to the trillion-to-one levels. So I'm curious, since I know you're writing a book about how we might be alone in the universe: where do you think the filter is, or what are the best candidates for it?
David Kipping: Yeah, for a long time I also thought the origin of life would be the obvious place to put it. Certainly if you ask: what is the chance of making a protein by random chance? Take some amino acids; there are 20 different types, and a typical protein is something like 80 to 100 amino acids long. So the number of combinations ends up being, I think it was 10^180, possible ways of arranging those amino acids, and only one of them is the protein. And we've never done it: no one, as far as I'm aware, has ever taken amino acids, shaken them up in a lab, and got a protein out. It's such an improbable arrangement to form even a single protein. So you can certainly make the argument [for that as a filter]. But the counterargument was always: maybe there's something we've yet to discover, some autocatalytic process making these that we haven't found yet.
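For a sense of scale, the combinatorics behind numbers like that work out as follows (our arithmetic, using the chain lengths quoted above):

```latex
% Distinct sequences of a chain of L amino acids drawn from 20 types:
20^{L} = 10^{L \log_{10} 20} \approx 10^{1.3\,L},
\qquad 20^{80} \approx 10^{104}, \qquad 20^{100} \approx 10^{130}.
% A figure near 10^{180} corresponds to a somewhat longer chain
% (L \approx 140); either way the sequence space is astronomically large.
```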
So the strongest piece of data we have, in my mind, bearing on life elsewhere is the occurrence rate of abiogenesis: how early life started on Earth. As with the evolutionary-convergence question, that has been revised significantly over the last 10 or 20 years. In fact, there was a paper in Nature a couple of years ago, maybe it was last year, by Moody and colleagues that looked at the genetics of LUCA, the last universal common ancestor, and estimated that it lived 4.2 billion years ago. Which is almost immediately: the Earth formed about 4.5 billion years ago and had oceans by about 4.4 billion years ago. So within 200 million years you don't just have one organism; to explain LUCA, you need a planet covered in life. It's a whole network, a whole biosphere, at that point already.
When it's that early (and I did the math, the Bayesian stats of it), you end up with strong evidence that it's a fast process. You really can't explain it without abiogenesis being somewhat of an inevitability of the chemistry that was available. So that removed one compelling Great Filter for me.
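The flavor of that Bayesian argument can be sketched with a toy model (a deliberate simplification for illustration, not the analysis in David's published work; the window lengths and rates below are assumptions). Treat abiogenesis as a Poisson process, condition on the anthropic fact that life did emerge at some point in the habitable window, and compare a fast-rate and a slow-rate hypothesis given that life showed up within roughly the first 200 million years.

```python
import math

# Toy Bayesian comparison of "fast" vs "slow" abiogenesis (an
# illustrative simplification, not the published analysis). Abiogenesis
# is modeled as a Poisson process with rate lam (events per Gyr), and we
# condition on life having emerged at some point in the window, since
# observers like us require it (the anthropic selection effect).

T_WINDOW = 4.4   # Gyr: assumed habitable window, first oceans to today
T_OBS = 0.2      # Gyr: life appears within ~200 Myr of the oceans forming

def likelihood(lam: float) -> float:
    """P(life by T_OBS | life emerged somewhere in the window)."""
    return (1 - math.exp(-lam * T_OBS)) / (1 - math.exp(-lam * T_WINDOW))

fast = likelihood(1 / 0.1)    # mean waiting time of 100 Myr
slow = likelihood(1 / 10.0)   # mean waiting time of 10 Gyr

print(f"P(early | fast) = {fast:.3f}")                    # about 0.86
print(f"P(early | slow) = {slow:.3f}")                    # about 0.06
print(f"Bayes factor, fast over slow: {fast / slow:.1f}")  # about 15
```

Even this crude version yields odds of order 15 to 1 favoring a fast intrinsic rate, which is the qualitative shape of the argument; a full analysis would handle the observation time and the anthropic conditioning far more carefully.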
And of course, if we discover life on Mars, or on an exoplanet, then I think it's totally gone; there's no plausible case left, because that would establish that life is everywhere. And so then you do get into these frightening scenarios of the filter being ahead of us. It could be some form of what we're doing right now with our technology, whether it's the AI or the weapons we're developing. It may be not the AI itself but the effects this rapid transformation has on our society: we just can't handle it, it's moving too fast, and it causes too much instability. Compound that with other geopolitical effects, and you could easily imagine it being a path to our demise.
So, and I hope it's not, obviously, but I can't imagine the [Great Filter] being anything but ahead of us. The one saving grace, I think, is that we're so widespread and there are so many of us at this point: almost 10 billion humans on this planet, from pole to pole, and probably soon in space as well. It'd be difficult to eradicate every single one of us; it would take a real work of art to kill every single human on this planet. So I think humans probably will persist. I can imagine a giant reset of some kind, a throwback-to-the-Stone-Age situation where we really revert to a Neolithic style of living or something, and then perhaps we fade out, or go away in some other manner.
But intelligence is in so many different branches of the tree of life now, as you mentioned. It seems to be a convergent trait to some degree, because it's not just us that has intelligence; even cephalopods, very different creatures from us, have it. So you can imagine intelligence persisting. The Earth probably has about 900 million years left before it becomes uninhabitable due to the evolution of the Sun, and all animals evolved in the last 600 million years. So we have one and a half times that span, the span from single-celled life to us, still to go, and evolution would be starting afresh from a very high vantage point compared to 600 million years ago. So I think it'd be a little surprising if a technological civilization didn't re-emerge on this planet. Honestly, that's almost our best bet for communicating with another civilization: leaving something behind for them. Maybe the Earth is a cradle of multiple instantiations of civilizations. We might not even be the first, though as far as we can tell, we are.
Henry Shevlin: So, just on the idea of a late filter. The thing I've never found super persuasive about the idea that there's some predictable trajectory by which all intelligent civilizations, across the galaxy and across the universe, wipe themselves out is that there seems to be a lot more path dependency in technology. You only need one civilization to, say, avoid nuclear war, or avoid building advanced superintelligence, and then go off and successfully spread across the cosmos, for that whole Great Filter to collapse and no longer explain the Fermi paradox.
David Kipping: But that’s true of every Great Filter.
Henry Shevlin: I guess the thought is that if there are hard, immutable rules of biology that make the initial formation of proteins incredibly difficult, that seems a lot sturdier as a filter than relying on social conditions reliably coalescing so that civilizations wipe themselves out through nuclear war, or something like that. The idea of an early filter seems more robust to me than a late one. But obviously that doesn't help much when the early-filter candidates are themselves being winnowed down.
David Kipping: Yeah, I agree it would be neater if that were true: neater if abiogenesis were incredibly difficult. In my opinion, that's untenable given how early life started on Earth, unless you start diving into a conspiratorial world where it was seeded here, or someone put it here. It's really difficult to reconcile how quickly it happened with its being an improbable outcome.
If the Great Filter is, I don't know, the evolution of eukaryotes, eukaryogenesis, then I think you can make the same argument as you could about technological devastation: yes, there are many paths, many different ways things could play out, but over trillions of examples you'd expect it to manifest eventually. But we don't know; there is no predictable, quantifiable theory of evolution like that, and no real theory of life. We can't predict abiogenesis. We're still trying to understand the odds of it happening. So there's a lot we don't know.
But for my money, yeah, abiogenesis seems to be easier than we thought it was even 10 years ago, based on this revised evidence. And I genuinely think we probably will find more. We already have hints of microbial life on Mars with these leopard spots that were found recently, which remain quite compelling, so it would not surprise me at all if we shore up that case. Of course, it has to be independent; it can't just be our cousins that hitched a ride. But if there is independent evidence of life in the solar system, and I think there's a good chance we could find something like that, then that theory is just gone; it can't survive anymore. Then you have to put the Great Filter at one of those evolutionary steps, or somewhere ahead of us. And unfortunately, the one ahead of us is the one we can imagine a lot more ways of happening.
Dan Williams: David, I’m conscious of your time. So my final question to you — Henry might have a different final question — is: you’ve thought about these topics in a rigorous way, probably more than anyone else on planet Earth. When it comes to this hypothesis that we are alone in the universe, what’s your current credence?
David Kipping: I think — define universe.
Henry Shevlin: Within our light cone?
David Kipping: Yeah, within the Hubble volume. I think that's important to note, because the universe is probably infinite, as far as we can tell, and if it is infinite, then the answer is 100% that there's someone else out there: there are literally infinite rolls of the dice. So that is an important demarcation. It's also one of my biggest reasons for thinking faster-than-light travel isn't out there. If the universe is infinite, which it seems to be, and you allow faster-than-light travel, all this stuff gets way, way harder, because now someone from outside our light cone could travel in and screw with us. The Fermi paradox gets infinitely worse if you allow for faster-than-light travel.
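The "infinite rolls of the dice" point is just the observation that any fixed nonzero chance, repeated without bound, succeeds almost surely:

```latex
% If each of n independent regions has probability p > 0 of producing life,
P(\text{at least one other}) \;=\; 1 - (1 - p)^{n} \;\longrightarrow\; 1
\quad \text{as } n \to \infty, \text{ for any fixed } p > 0.
```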
So, barring that, just in our Hubble volume: yeah, I certainly predict there are other creatures and organisms in our Hubble volume, most likely in our own galaxy. My best bet is that there are likely extinct civilizations in our galaxy as well, and probably relics and artifacts out there for us to find. I'm somewhat doubtful there'd be anyone contemporaneous with us, because our window is just so short. So I think our best bet of finding something is some artifact floating through space, or something we can remotely detect around a planet. And the fate of those civilizations, I suspect, is probably some Great Filter that lies ahead of us right now, and that we will face.
This is all speculation, but I think that set of possibilities forms a very self-consistent narrative to explain everything we know about the universe.
Science Communication
Henry Shevlin: Although I probably shouldn't say fantastic, because in some ways it's kind of a gloomy hypothesis, it's a really nicely argued one. My final question was just going to be a more general one, because one thing all of us share is that we are academics who try to communicate complex ideas to a general audience, something you've done spectacularly successfully through Cool Worlds. I'm just wondering what you've learned about this process of being an academic communicating complex ideas, and whether you think it's something academia rewards enough, or whether we could be doing more to incentivize it.
David Kipping: Yeah, when I started 10 years ago, it was unusual; academics didn't podcast, and they didn't do YouTube. But that has changed a lot. Now you have people like Andrew Huberman, giants in the podcast world who come from academia. So it has become a lot more typical.
But what I thought the beauty of YouTube could be, and this podcast, I think, is a great example of this, is the democratization of science communication, avoiding the landscape we used to have. Before the internet, you really just had one or two figures who dominated science communication. And that's somewhat unhealthy, because then you've got someone like Michio Kaku being asked about geology, and he doesn't know anything about geology. He'll do his best to answer the question, but he's probably going to mess up, because it's just not his background.
But now, if you want to know about geology, you can find an amazing YouTube channel about geology, or a podcast that will go really deep and teach you everything in a really rigorous way. So I think that’s kind of the beauty of the landscape we’re in.
In terms of how institutions handle it: I think they're still not really on top of it. I don't think they quite understand what it is, or how powerful it is. I don't think they understand that most people get their science from podcasts at this point. They don't read the newspaper anymore, and they're not reading press releases from your institution. They're listening to what Joe Rogan says about it. That, to be honest, is probably how most people are getting a lot of their science.
So I think it makes a lot more sense to engage with that. I could imagine a synergy where science communicators with large platforms, whether they come from academia or not, partner with these institutions more directly. Most of them, I think, want to do a good job with science communication; there are some bad actors, but most want to. So you could imagine outreach officers at these institutions working with them to develop the scripts, and even the production itself, to make it legitimate.
I think one of the biggest challenges of being a science communicator in the YouTube space is that the reactive news cycle is so fast that YouTube often rewards the people who simply report the story first. And because YouTubers don't typically have access to embargoed materials, they're producing videos in the space of a couple of hours, on a very complex topic they're not even trained in, without any help from the institution. And so you end up with really troublesome, problematic miscommunication.
It'd make more sense if these institutions reached out to the science communicators and said: "We've got this big story coming out next week. We'd love to do something with you and reach your big audience. You've got a great voice and a great style; we want to use that, but also to ground it. Here are all the facts, and we'll work with you to make it as factually accurate as possible." I can imagine some kind of partnership like that. Nothing like that really exists right now; it's mostly a separate world. And I think that's to the disadvantage of these institutions, which a lot of people see as becoming archaic, questioning their relevance. If they want to remain relevant, they have to be a bit smarter with their media portfolios.
Dan Williams: Fantastic. Well, thank you, David. We really appreciate you giving us the time. This has been one of my favorite conversations we’ve had on this podcast. So with that, thanks everyone for listening. See you next time.