In recent years, I’ve become increasingly convinced that advances in artificial intelligence (AI) are going to be the biggest story of our era. I think AI systems will transform not just the economy, but our societies, cultures, political institutions, ideologies, values, self-understandings, and much more. And I think this will happen over the next several decades, not centuries or millennia.
Given this, I want to spend a lot more time thinking about the social, political, and philosophical issues that AI raises. And that means I want to spend a lot more time talking to people with expertise and interesting opinions in this area. One such person is Dr Henry Shevlin, a friend and brilliant academic at the University of Cambridge.
In this conversation, which is hopefully the first of many, we talk about some really big-picture issues:
How we should understand state-of-the-art AI systems like GPT-5
To what extent such systems are similar to human minds
Our proximity to truly transformative AI (and what “transformative AI” even means)
How we should measure AI progress
The economic impacts of AI
Whether future AI systems are likely to kill us all
We have some interesting disagreements about some of these issues. For example, Henry’s timelines for when we will reach transformative AI are shorter than mine. He is also much more worried that super-intelligent AI systems will eliminate the human species than I am.
I found the conversation really fun and informative. My hope is that others with interests in this area might also get something out of it.
Here are links to some of the things mentioned in the conversation:
The Shoggoth meme to illustrate AI
Situational Awareness by Leopold Aschenbrenner
Video by Miles Brundage explaining AI trends on a white board