Is artificial intelligence (AI) a “normal technology” or a potentially “superintelligent” alien species? Is it true, as some influential people claim, that if anyone builds “superintelligent” AI systems, everyone dies? What even is “superintelligence”?
In this conversation, the first official episode of Conspicuous Cognition’s “AI Sessions”, Henry Shevlin and I discuss these and many more issues.
Specifically, we explore two highly influential perspectives on the future trajectory, impacts, and dangers of AI.
The first models AI as a “normal technology”, potentially transformative but still a tool, which will diffuse throughout society in ways similar to previous technologies like electricity or the internet. Through this lens, we examine how AI is likely to impact the world and discuss deep philosophical and scientific questions about the nature of intelligence and power.
The second perspective presents a very different possibility: that we may be on the path to creating superintelligent autonomous agents that threaten to wipe out the human species. We unpack what “superintelligence” means and explore not just whether future AI systems could cause human extinction but whether they would “want” to.
Here are the primary sources we cite in our conversation, which also doubles as a helpful introductory reading list covering some of the most significant current debates about artificial intelligence and the future.
Main Sources Cited:
Narayanan, Arvind and Sayash Kapoor (2025). "AI as Normal Technology."
Yudkowsky, Eliezer and Nate Soares (2025). If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.
Kokotajlo, Daniel, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean (2025). "AI 2027."
Alexander, Scott and the AI Futures Project (2025). "AI as Profoundly Abnormal Technology." AI Futures Project Blog.
Henrich, Joseph (2016). The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter.
Huemer, Michael. "I, For One, Welcome Our AI Overlords."
Pinker, Steven (2018). Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.
Further Reading:
Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies.
Pinsof, David (2025). "AI Doomerism is Bullshit."
Kulveit, Jan, Raymond Douglas, Nora Ammann, Deger Turan, David Krueger, and David Duvenaud (2025). "Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development." arXiv preprint.
For a more expansive reading list, see my syllabus here:
You can also see the first conversation that Henry and I had here, which was recorded live and where the sound and video quality were a bit worse:
Will AI Change Everything?
In recent years, I’ve become increasingly convinced that advances in artificial intelligence (AI) are going to be the biggest story of our era. I think AI systems will transform not just the economy, but our societies, cultures, political institutions, ideologies, values, self-understandings, and much more. And I think this will happen over the next sev…