> "the connection between intelligence and drives like power-seeking and self-preservation that you find in living organisms is purely contingent"
What do you make of the "instrumental convergence" argument that power and self-preservation are all-purpose means to *whatever* other ends an agent might have (and that a superintelligent agent could not fail to notice this fact)?
Are you assuming that artificial superintelligence will be *so* alien that it cannot properly be modelled as an "agent" at all? I worry that that's an awfully big assumption! (While current LLMs aren't especially agentic, newer-gen ones seem significantly more so than earlier ones, so the trajectory seems concerning...)
I worked in law enforcement for a long time and left, in part, because I could see there was no comprehension of what AI was about to do to social and legal order. Not looking good!
"I've come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things."
-- Douglas Adams, The Salmon of Doubt: Hitchhiking the Galaxy One Last Time
I see this concept proven over and over. Every large technology advance is accompanied by much punditry bemoaning how it is somehow bad for humanity. For heaven's sake, you spend your days readings and writing, and you don't believe it's a tragedy that you've been isolated with dead words instead of the proper human contact of constant real face-to-face conversation with fellow sages.
"When there are machine workers that are far more effective and efficient than people" - umm, this is the history of mechanization. I'm sure you know about the literal Luddites. One thing I've found very amusing about the Great AI Moral Panic, is the enormous wails of complaint and gnashing of teeth about labor effects, from the general class of people who have been indifferent or even hostile to the similar experiences of blue-collar workers. There is a very distinct lack of moralizing, lecturing, and finger-wagging, about not to be a metaphorical Luddite or the economic horrors of job protection (not that it's unknown, but it's really stark how different it is, because this time the targets are the knowledge class itself).
Don't get me wrong, I'm not saying don't think about the social effects of technology. But I, for one, welcome our new AI Overlords.
(n.b. for people who don't get that last sentence, it's a joke reference).
I agree with a lot of this comment. But modern tech already has had some of the effects Dan describes (also some countervailing). As Yogi Berra says 'it is hard to make predictions, especially about the future."
Thought provoking article. I think, however, by beginning to equate interdependence with its instrumental value -- yes, machines will do more and more for us -- that observation doesn't lead us to the conclusion that this means the "glue" that holds us together will lose its strength, as if interdependence is a zero-sum affair. In fact, it might just have the opposite effect, driving humans to spend more time but for phatic reasons, time spent with family and friends for the pleasure of being with them. It depends on how the productivity gains that result from using AI are distributed. As a result, the design of AI systems is an ontological affair. What kind of world do we want to bring forth? Unfortunately, it appears that those who are developing the systems are operating under the dictates of the One-World World (OWW), which priorizes the profit motive above all else. Imagine using those productivity gains to reduce everyone's workload so they could work less, so they could be more creative and be more caring outside of work. I think that discussions about the future of AI are overly focused on the risk factor. There needs to be more thought and more discussion on what are the ontological possibilities that this new technology gives rise to and how can we decide which ones would be better for us. That is if the "we" and "us" have a say in the matter.
I think you have hit on the most insidious risk of all. The concept of superintelligent tools serving human interests too well is far more unsettling than the "demon summoning" narrative. The erosion of human interdependence, the very "glue" that aligns us, is a profound and less-discussed path to societal collapse. It's not about the AI becoming a tyrant; it's about the AI making us redundant to each other. Excellent point about our evolutionary drivers being purely contingent, too.
The question I have always asked is why are we allowing people to build these things? We lock mass murderers away we prematurely take out terrorists. Yet the monsters building these things get a free ride.
Even the "positive" claims like 20% GDP growth is just another way of saying AI will kill most of us in an indirect way. You cant get 20% in a normal economy because there is insufficient demand. So it would be a supply side thing, making consumers irrelevant and unemployed and so irrelevant. So what gets built initially is intelligent war drones to protect the AI owners from people now realizing they are unneeded by the AI lords and hence expendable. And when the clash comes the unneeded billions are dispatched.
AI and AGI are Ponzi scheme solutions in search of a problem with only two outcomes; automation and a Uyghur surveillance state that will need unlimited supply of energy-water and rare earth metals, that will cause unimaginable unemployment and destitution and unlimited harvesting of human data, both are already in full warp speed mode.
The trouble with trying to think the unthinkable is that it is a stretch. Maybe stretching is all one can do. The concept of humanity is just beyond our ability to say anything sensible about it now or in future projections. I prefer thinking in terms of hyperobjects. If AI and its attendant machines is a hyperobject, what kind of objectification can be reasonably expected? And what kind of action may future humans take to address whatever might happen. We just don't know.
I think you're right that interdependence is what incentivizes cooperation, and that AI making us less interdependent constitutes a risk. But I would also say that the history of modernity has been a history of technological and social progress reducing human interdependence. And while that's certainly come with downsides, on the whole it's been an overwhelmingly positive development that has liberated individuals from the tyranny of unchosen community and tradition.
The thing that concerns me most about AI is the related but distinct fact that it lowers the cost of solipsism. AI makes it easier for people to retreat into physical and intellectual bubbles of their own making. So it's partly a decline in people's dependence on one another, but also a decline in their dependence on objective reality.
I’m very sympathetic with the point about interdependence. I’d suggest a parallel worry about self-reliance that probably reflects deeper symmetries between morality and prudence. It is probably good for most of us that we have to exert ourselves to maintain our standard of living. If so, ASI superabundance may prove a kind of lottery-winners’ curse that makes many people vicious and miserable. (I think Bostrom’s recent book is great on this stuff.)
A superficial quibble that I raise because I find it interesting: The characterisation of Turing is arguably misleading. I agree that if you reject anything supernatural about human intelligence then you should regard (strong) AGI and ASI as possible. But that is probably not why Turing believed in this possibility. For in “Computing Machinery and Intelligence” he takes time to give a (somewhat superficial) argument for the possibility of AGI conditional on the premiss that “thinking is a function of man’s immortal soul”; and he states his belief in the empirical case for “telepathy, clairvoyance, precognition and psycho-kinesis”, stating that these pose “quite a strong” argument against his views, which he then tries to address. (Turing also claims that Muslims believe that “women have no souls”. For some reason this stuff doesn’t get mentioned as much as the Turing test.)
Chalmers’ multi-level account of the Singularity forces us to rethink society itself. The Spanish philosopher Fernando Broncano has already spoken of a “hybrid agency” that displaces traditional epistemic authority. This, I’m afraid, is no longer a futuristic or fictional horizont....some episodes of Black Mirror already seem just around the corner.
> "the connection between intelligence and drives like power-seeking and self-preservation that you find in living organisms is purely contingent"
What do you make of the "instrumental convergence" argument that power and self-preservation are all-purpose means to *whatever* other ends an agent might have (and that a superintelligent agent could not fail to notice this fact)?
Are you assuming that artificial superintelligence will be *so* alien that it cannot properly be modelled as an "agent" at all? I worry that that's an awfully big assumption! (While current LLMs aren't especially agentic, newer-gen ones seem significantly more so than earlier ones, so the trajectory seems concerning...)
I worked in law enforcement for a long time and left, in part, because I could see there was no comprehension of what AI was about to do to social and legal order. Not looking good!
Great article.
"I've come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things."
-- Douglas Adams, The Salmon of Doubt: Hitchhiking the Galaxy One Last Time
I see this concept proven over and over. Every major technological advance is accompanied by much punditry bemoaning how it is somehow bad for humanity. For heaven's sake, you spend your days reading and writing, and you don't believe it's a tragedy that you've been isolated with dead words instead of the proper human contact of constant, real, face-to-face conversation with fellow sages.
"When there are machine workers that are far more effective and efficient than people" - umm, this is the history of mechanization. I'm sure you know about the literal Luddites. One thing I've found very amusing about the Great AI Moral Panic, is the enormous wails of complaint and gnashing of teeth about labor effects, from the general class of people who have been indifferent or even hostile to the similar experiences of blue-collar workers. There is a very distinct lack of moralizing, lecturing, and finger-wagging, about not to be a metaphorical Luddite or the economic horrors of job protection (not that it's unknown, but it's really stark how different it is, because this time the targets are the knowledge class itself).
Don't get me wrong, I'm not saying don't think about the social effects of technology. But I, for one, welcome our new AI Overlords.
(n.b. for people who don't get that last sentence, it's a joke reference).
I agree with a lot of this comment. But modern tech has already had some of the effects Dan describes (along with some countervailing ones). As Yogi Berra said, "It is hard to make predictions, especially about the future."
Thought-provoking article. I think, however, that equating interdependence with its instrumental value -- yes, machines will do more and more for us -- doesn't lead to the conclusion that the "glue" holding us together will lose its strength, as if interdependence were a zero-sum affair. It might have just the opposite effect, driving humans to spend more time together, but for phatic reasons: time with family and friends for the pleasure of being with them. It depends on how the productivity gains from using AI are distributed. The design of AI systems is therefore an ontological affair: what kind of world do we want to bring forth? Unfortunately, it appears that those developing the systems are operating under the dictates of the One-World World (OWW), which prioritizes the profit motive above all else. Imagine using those productivity gains to reduce everyone's workload so they could work less, be more creative, and be more caring outside of work. I think that discussions about the future of AI are overly focused on the risk factor. There needs to be more thought and discussion about what ontological possibilities this new technology gives rise to and how we can decide which ones would be better for us. That is, if the "we" and "us" have a say in the matter.
I think you have hit on the most insidious risk of all. The concept of superintelligent tools serving human interests too well is far more unsettling than the "demon summoning" narrative. The erosion of human interdependence, the very "glue" that aligns us, is a profound and less-discussed path to societal collapse. It's not about the AI becoming a tyrant; it's about the AI making us redundant to each other. Excellent point about our evolutionary drivers being purely contingent, too.
The question I have always asked is: why are we allowing people to build these things? We lock mass murderers away; we preemptively take out terrorists. Yet the monsters building these things get a free ride.
Even the "positive" claims like 20% GDP growth is just another way of saying AI will kill most of us in an indirect way. You cant get 20% in a normal economy because there is insufficient demand. So it would be a supply side thing, making consumers irrelevant and unemployed and so irrelevant. So what gets built initially is intelligent war drones to protect the AI owners from people now realizing they are unneeded by the AI lords and hence expendable. And when the clash comes the unneeded billions are dispatched.
AI and AGI are Ponzi-scheme solutions in search of a problem, with only two outcomes: automation, and a Uyghur-style surveillance state that will need an unlimited supply of energy, water, and rare earth metals, that will cause unimaginable unemployment and destitution, and that will harvest human data without limit. Both are already moving at full warp speed.
I don't understand the need for human data in a world of impoverished people whose collective demand is trivial.
The trouble with trying to think the unthinkable is that it is a stretch. Maybe stretching is all one can do. The concept of humanity is simply beyond our ability to say anything sensible about, now or in future projections. I prefer thinking in terms of hyperobjects. If AI and its attendant machines are a hyperobject, what kind of objectification can reasonably be expected? And what kind of action might future humans take to address whatever happens? We just don't know.
I think you're right that interdependence is what incentivizes cooperation, and that AI making us less interdependent constitutes a risk. But I would also say that the history of modernity has been a history of technological and social progress reducing human interdependence. And while that's certainly come with downsides, on the whole it's been an overwhelmingly positive development that has liberated individuals from the tyranny of unchosen community and tradition.
The thing that concerns me most about AI is the related but distinct fact that it lowers the cost of solipsism. AI makes it easier for people to retreat into physical and intellectual bubbles of their own making. So it's partly a decline in people's dependence on one another, but also a decline in their dependence on objective reality.
I’m very sympathetic with the point about interdependence. I’d suggest a parallel worry about self-reliance that probably reflects deeper symmetries between morality and prudence. It is probably good for most of us that we have to exert ourselves to maintain our standard of living. If so, ASI superabundance may prove a kind of lottery-winners’ curse that makes many people vicious and miserable. (I think Bostrom’s recent book is great on this stuff.)
A superficial quibble that I raise because I find it interesting: The characterisation of Turing is arguably misleading. I agree that if you reject anything supernatural about human intelligence then you should regard (strong) AGI and ASI as possible. But that is probably not why Turing believed in this possibility. For in “Computing Machinery and Intelligence” he takes time to give a (somewhat superficial) argument for the possibility of AGI conditional on the premiss that “thinking is a function of man’s immortal soul”; and he states his belief in the empirical case for “telepathy, clairvoyance, precognition and psycho-kinesis”, stating that these pose “quite a strong” argument against his views, which he then tries to address. (Turing also claims that Muslims believe that “women have no souls”. For some reason this stuff doesn’t get mentioned as much as the Turing test.)
Chalmers’ multi-level account of the Singularity forces us to rethink society itself. The Spanish philosopher Fernando Broncano has already spoken of a “hybrid agency” that displaces traditional epistemic authority. This, I’m afraid, is no longer a futuristic or fictional horizon: some episodes of Black Mirror already seem just around the corner.
How much does big tech pay you to stump?