Superintelligence and the Decline of Human Interdependence
What happens to humanity when we no longer need each other?
I was recently on a panel discussing the impact of artificial intelligence on society. For my opening ten-minute speech, I sketched a big-picture, speculative story about how advanced AI will eat away at human interdependence. It was written for a general audience, so don’t expect much rigour, precision, or engagement with the academic literature. Still, it’s a concise, accurate summary of what I genuinely think.
When thinking about how advances in artificial intelligence are likely to change society in the coming years, people tend to suffer from two failures of imagination. They either fail to grapple with how transformative this technology will be, or they misrepresent the nature of this transformation by anthropomorphising the technology, treating advanced AI as an alien species that will threaten human survival.
I will suggest that AI’s most significant impacts will come not from superintelligent agents that threaten human survival, but from superintelligent tools that serve human interests too well.
The First Failure: Underestimating Transformative AI
Much of the popular and academic discourse about AI and its dangers focuses on how it might exacerbate existing risks and concerns: misinformation, threats to privacy, or decision-making systems that perpetuate biases against marginalised groups. These are important conversations worthy of serious attention. But if they are the only conversations, they are too conservative. They treat AI as a technology that will be confined to existing social structures, maybe making current problems worse, but not fundamentally remaking reality.
But AI is not and has never been just another technology. From its beginnings as a serious science and engineering project in the mid-twentieth century, it has always had a revolutionary goal: to build machines that aren’t just as smart as the smartest human beings—that can match all of our most impressive intellectual and behavioural abilities—but that are far smarter. That are “ultraintelligent” or “superintelligent”.
However, even this framing undersells the revolutionary nature of this project, because it treats human minds as the benchmark. But humans are just one species of African ape. The space of possible intelligences is vast, and many of those intelligences would have capacities we can’t even imagine. We have barely begun to explore this space.
Some people doubt that superintelligent machines are possible. They think there is something special about human intelligence that couldn’t be replicated in machines, or that the engineering challenge is simply too hard. Others are open to the possibility in principle but think it lies so far ahead in our future, centuries or millennia away, that we don’t need to think or worry about it now.
I couldn’t disagree more. Once you accept that there is nothing magical or supernatural about the source of human intelligence, that our brains are just complex information-processing machines, the possibility of superintelligent AI follows very quickly. That’s why pioneers of AI like Alan Turing and John von Neumann saw this coming as soon as it became clear that we could build digital computers.
Back then, scepticism might have been reasonable. But in the space of about seventy-five years, we’ve gone from room-sized calculators to affordable pocket devices that can hold conversations, search the web, write complex code, produce detailed research reports on any topic, create hyper-realistic images, video, and audio, and outperform most humans on many cognitive tasks.
If you think that progress will stop here, you’re not paying attention. This year alone, it’s estimated that tech companies will spend about 400 billion dollars building AI infrastructure. To put that in perspective, that’s just a little short of Denmark’s entire annual GDP.
At the same time, many of the world’s most brilliant minds are currently earning astronomical salaries to make this happen. And the world’s most powerful militaries are investing heavily.
There’s a reasonable question about when we will reach superintelligent AI, and whether our leading paradigm of generative AI and deep learning could get us there. But if you think we will never reach it, or that it lies centuries away, the burden of proof now falls on you to justify that assumption.
The Second Failure: Anthropomorphising Transformative AI
Many people appreciate these lessons. They recognise that superintelligent, general-purpose machines are coming and that they will radically change the world. And they are terrified. In the words of Elon Musk, they think we’re summoning the demon.
The main worry here is that we are building a kind of superintelligent alien species that will be so difficult to control and align with our interests that it will threaten human survival, enslaving or eliminating us.
“Imagine we created a new species and that species was smarter than us in the same way that we’re smarter than mice or frogs or something,” says AI pioneer Yoshua Bengio. “Are we treating the frogs well?”
Some people dismiss these scenarios because they sound like “science fiction”. That objection is misguided. A world with superintelligent AI will be so strange that scenarios should sound like science fiction. It would be objectionable if they didn’t.
The real problem is that the scenarios aren’t strange enough.
Although they confront the possibility of superintelligence, they project assumptions about agency that make sense in the case of human beings and other animals onto machines. But the connection between intelligence and drives like power-seeking and self-preservation that you find in living organisms is purely contingent. It reflects our evolutionary origins under conditions of brutal resource scarcity and Darwinian competition. There’s no reason why superintelligent machines should have those drives. They will be nothing like a biological species.
This isn’t to deny that aligning AI is a genuine engineering challenge. I’m glad that there are intelligent, thoughtful people working on it. But the simple fact is that the AI systems we’ve rolled out so far have been extremely well-aligned with their creators’ interests. The obsessive media focus on rare exceptions misses the forest for the trees. And it doesn’t follow from the fact that it can be challenging to train AI systems to do precisely what we want that we will somehow end up training them to do the exact opposite of what we want.
The Real Risks
Stephen Hawking said that “the rise of powerful AI will be either the best or the worst thing ever to happen to humanity.”
There’s a popular temptation to think that if we avoid the worst-case scenario of catastrophically misaligned superintelligence, we will achieve utopia instead. But this is wrong. A world with well-aligned superintelligences would give rise to profound problems of its own.
The most obvious risk is superintelligent systems under the control of individuals or factions pursuing self-serving or nefarious goals. If access to the best systems is unevenly distributed, it would grant unprecedented powers to some individuals. The core risk here is not misaligned AI systems, but AI systems well-aligned with the goals of bad actors. If you think people wouldn’t exploit those opportunities, you know little about human nature or history.
However, I’ll end by focusing on a subtler risk: the impact of superintelligent AI on human interdependence.
From birth to death, humans are and always have been reliant on other humans to survive and thrive. We depend on others for resources, work, protection, knowledge, art, culture, sex, love, companionship, and more.
This interdependence is not incidental to the human condition. It’s partly constitutive of it. It’s one of the most important forces that shaped our species’ evolution. It’s also the glue that holds human solidarity and cooperation together. It’s in large part because we depend on others that we’re forced to care about them and to care what they think of us. Even the most sociopathic, extractive elites are constrained by their reliance on the cooperation of others.
In other words, interdependence solves the human alignment problem. It’s how we align human interests.
What happens to such interdependence in a world of superintelligent machines? When there are machine workers that are far more effective and efficient than people—that don’t call in sick, complain about the boss, or start a union? When machines can advance the frontier of human knowledge and innovation without human bias? When they can provide sex and companionship without any of the annoying complexities, conflict, and compromise that characterise human relationships?
This would be a world in which many people, whether wealthy capitalists or ordinary consumers, rely more and more on machines and less and less on other people. It would be a world where the glue holding human solidarity and cooperation together gradually dissolves.
That process would threaten humanity, not because we would have lost control of superintelligent machines, but because our very ability to control them would erode one of the things that make us human.
Further Reading
Two excellent essays (both infinitely more academic and rigorous than this one) that have influenced how I think about this topic are The Intelligence Curse and Gradual Disempowerment, both of which explore how the adoption of advanced AI could undermine human bargaining power in potentially catastrophic ways.
You can also check out my conversation about some of these issues with Henry Shevlin.
> "the connection between intelligence and drives like power-seeking and self-preservation that you find in living organisms is purely contingent"
What do you make of the "instrumental convergence" argument that power and self-preservation are all-purpose means to *whatever* other ends an agent might have (and that a superintelligent agent could not fail to notice this fact)?
Are you assuming that artificial superintelligence will be *so* alien that it cannot properly be modelled as an "agent" at all? I worry that that's an awfully big assumption! (While current LLMs aren't especially agentic, newer-gen ones seem significantly more so than earlier ones, so the trajectory seems concerning...)
I worked in law enforcement for a long time and left, in part, because I could see there was no comprehension of what AI was about to do to social and legal order. Not looking good!
Great article.