> "the connection between intelligence and drives like power-seeking and self-preservation that you find in living organisms is purely contingent"
What do you make of the "instrumental convergence" argument that power and self-preservation are all-purpose means to *whatever* other ends an agent might have (and that a superintelligent agent could not fail to notice this fact)?
Are you assuming that artificial superintelligence will be *so* alien that it cannot properly be modelled as an "agent" at all? I worry that that's an awfully big assumption! (While current LLMs aren't especially agentic, newer-gen ones seem significantly more so than earlier ones, so the trajectory seems concerning...)
Thanks, Richard. Good question. Will try to respond at greater length later, but briefly: I find instrumental convergence arguments very weak. They rest on a picture of rational agency that has very little connection to the realities of how biological and artificial intelligences work.
On whether artificial superintelligence could be modelled as an “agent”: yes, I expect we will build superintelligent systems that are “agents” in one sense but not in the sense that would license analogies to living things or an alien species.
In general, I find much of the AI x-risk literature rests on folk concepts and stick-figure idealisations (“intelligence”, “goal”, “agent”) that are very unhelpful for thinking in a clear, precise, empirically informed way about the mechanisms, functions, and likely failure modes of specific systems.
Thought-provoking article. I think, however, that from the observation that machines will do more and more for us, and that interdependence will therefore lose instrumental value, it doesn't follow that the "glue" holding us together must lose its strength, as if interdependence were a zero-sum affair. It might have just the opposite effect, freeing humans to spend more time together for phatic reasons: time with family and friends for the pleasure of their company. It depends on how the productivity gains from using AI are distributed.

The design of AI systems is therefore an ontological affair. What kind of world do we want to bring forth? Unfortunately, it appears that those developing the systems are operating under the dictates of the One-World World (OWW), which prioritizes the profit motive above all else. Imagine instead using those productivity gains to reduce everyone's workload, so people could be more creative and more caring outside of work. I think discussions about the future of AI are overly focused on risk. There needs to be more thought and discussion about the ontological possibilities this new technology gives rise to, and about how we can decide which of them would be better for us. That is, if the "we" and "us" have a say in the matter.
The evidence so far about machines that do more and more for us is not that we spend more time together. Quite the opposite. We are moving away from our families, forming couples later, having fewer children and fewer friends. All of these trends started well before the internet, so it's not just that.
Any vision of humans spending more time with each other has to start with humans as we actually exist, not Rousseauiste mythical humans.
Thank you for your thoughtful reply. I think it's safe to say that people spending less time with each other in the Global West is more a function of the hyper-individualism at the core of the One-World World (OWW) mindset. In my mind, the idea of the autonomous individual, the celestial self in which the vast majority of people cast themselves as the star of their own life script with others revolving around them, brings forth a world in which individuals use technology to advance their already individualistic life scripts.

In Latin America, however, although people have access to the same technology as those in the Global West, they still spend an enormous amount of time with their extended families, since the family plays such a large role in the development of personal identity. So when you say "we", I think you are referring to the majority of people living in the Global West, but not to everyone who lives there. Moreover, the myriad cultural and social worlds in which humans live are heterogeneous; "we" humans don't experience the world in a single, uniform fashion, and I think any vision of humans spending more time with each other has to start with ontological plurality. Humans actually exist in a world large enough to fit all the worlds that everyday meaning-makers create, which means there are multiple trajectories along which the future can unfold.
It's not just the West, though. The same decline trends are evident throughout the world; it's just that the absolute levels of interaction are still higher than ours, as the trends started a little later.
You might like to read some of Dr. Alice Evans's work here on Substack about this very thing. She has spent time in India and Pakistan, Syria, South America, and African countries, and seen the same patterns of decreasing interaction everywhere. And of course East and South-East Asia have the same patterns too.
"I've come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things."
-- Douglas Adams, The Salmon of Doubt: Hitchhiking the Galaxy One Last Time
I see this concept proven over and over. Every large technological advance is accompanied by much punditry bemoaning how it is somehow bad for humanity. For heaven's sake, you spend your days reading and writing, yet you don't believe it's a tragedy that you've been isolated with dead words instead of the proper human contact of constant, real, face-to-face conversation with fellow sages.
"When there are machine workers that are far more effective and efficient than people" - umm, this is the history of mechanization. I'm sure you know about the literal Luddites. One thing I've found very amusing about the Great AI Moral Panic, is the enormous wails of complaint and gnashing of teeth about labor effects, from the general class of people who have been indifferent or even hostile to the similar experiences of blue-collar workers. There is a very distinct lack of moralizing, lecturing, and finger-wagging, about not to be a metaphorical Luddite or the economic horrors of job protection (not that it's unknown, but it's really stark how different it is, because this time the targets are the knowledge class itself).
Don't get me wrong, I'm not saying don't think about the social effects of technology. But I, for one, welcome our new AI Overlords.
(n.b. for people who don't get that last sentence, it's a joke reference).
Thanks for the quote, that's a great one. Though I'm not so sure it's applicable right now, because young people in their 20s seem far more pessimistic about AI and technological/social change than older people do, and it's the youngest who are engaging in the most nostalgia, often for times they were never alive for.
I agree with a lot of this comment. But modern tech has already had some of the effects Dan describes (and some countervailing ones). As Yogi Berra said, "It's hard to make predictions, especially about the future."
This is an excellent point, and one I would love to see explored further. I've sometimes referred to an adjacent form of this problem: how much do humans trust each other in our future AI-powered society? Alignment with human goals and ends is one thing, but doing so in a way that preserves inter-human trust is essential. A theoretical GDP-maximizer AI would be very likely to generate an incredibly low-trust society, so how do we ensure AI is focused on preserving or enhancing trust?
“the connection between intelligence and drives like power-seeking and self-preservation that you find in living organisms is purely contingent. …There’s no reason why superintelligent machines should have those drives.”
This is hand-wavey. I’ve not heard a good argument against instrumental convergence, and I was disappointed that it wasn’t addressed in this essay.
That said, your point about the erosion of interdependence is well articulated and not talked about enough!
Making us less interdependent (or at least more independent than we are now) is, I agree, the most likely of the various scenarios that could play out. The harder part to think through is whether that is good or bad. Revealed preferences show us that whenever people can become less interdependent, they do. At least, most people. They might moan about it and say it's bad, all while choosing to do the very things they're moaning about. So I don't know; maybe it's not bad, even though we're wired to assume it is, or to feel inherently threatened by it. A lot of exploitation, abuse, and generally horrific stuff has occurred in society because of interdependence. But also, almost no one really likes the thought of everyone living in their own little pod-world. I'm not sure we're even capable of assessing this, or of getting outside our wired-in intuition that something we've always needed MUST be good.
AI will replace most human jobs, but there are some things we will continue to want to do with other humans: dance, sing, have sex, go to church, eat food, enjoy live entertainment, and maybe a few others. I believe these desires for companionship are woven so deep into our evolutionary nature that computers and robots will never replace them, or at least not for a long time.
Making love with another human being that you love and who loves you back is one of the most powerful life affirming experiences one can have. It’s part of what makes all the suffering and misery worth it. Even the most sycophantic machine will not be able to capture this.
I just finished Mutual Aid by Kropotkin. He could use an editor, but he gave so many examples because he was fighting against the Social Darwinism of his era and wanted to bring a scientific argument against it. Like the author, he argues that our tendency to cooperate and form tribes is in our DNA.
Provocative premise. This technology does seem to be just a thought calculator. Taking our ideas and connecting them in response to our queries. I think it will likely never be capable of creative thinking. That will probably remain the domain of people.
To the point that the real danger might be the undermining of human relationships: that's here now, embodied in our preference for the much less demanding relationships we enjoy with the web. I think we already see generations with this characteristic. Maybe even worse... the newest ones might not even like other people.
The place you describe "...a world in which many people, whether wealthy capitalists or ordinary consumers, rely more and more on machines and less and less on other people... " is the places I walk and drive through today. It's where we won't even look into each others eyes in favor of taking the selfies we seem to so desperately want.
Yet still, the places we gather to sing and dance give us hope that real love will never be the product of a calculation.
I've intuitively thought that AI will ruin things not through some alignment problem, but through labor disempowerment. The public-facing talking heads I respect neglect this risk, though, which made me doubt that impulse.
It's validating and encouraging to see you and these references take the threat to relationships and culture seriously 🙏 I hope there's more to come.
"Once you accept that there is nothing magical or supernatural about the source of human intelligence, that our brains are just complex information-processing machines, the possibility of superintelligent AI follows very quickly."
That's already assuming a lot, and plenty of (humanly) smart people have disagreed. But I don't think the answer really matters for your argument. What matters is whether AI can be powerful and adept enough in specific ways to erode our human interdependence. What concerns me most is the possibility that such erosion is already underway at current capabilities, or will be very shortly even at levels far below what might qualify as "superintelligence." Perhaps all that's required is the appearance of verbal parity, or irregular achievement of parity in some tasks. This puts us in a grey zone between the legible past and some superintelligent future: where no one is on the same page about what AI can, can't and shouldn't do, and where erosion of our shared fabric is so incremental and ambiguous that by the time most people recognize the damage it could be too late.
To put this another way: even if many people underestimate the long-term risks or focus on the wrong ones, even more underestimate the short-term risks relative to any long-term risks.
The argument about interdependence raises the question: "so what?"
If we’re so satisfied & happy that human society frays apart out of a lack of need, isn’t that a win? Every human society is riven with major problems and this can’t be changed. Sounds like this might be the utopia you said wouldn’t happen
I’m very sympathetic with the point about interdependence. I’d suggest a parallel worry about self-reliance that probably reflects deeper symmetries between morality and prudence. It is probably good for most of us that we have to exert ourselves to maintain our standard of living. If so, ASI superabundance may prove a kind of lottery-winners’ curse that makes many people vicious and miserable. (I think Bostrom’s recent book is great on this stuff.)
A superficial quibble that I raise because I find it interesting: The characterisation of Turing is arguably misleading. I agree that if you reject anything supernatural about human intelligence then you should regard (strong) AGI and ASI as possible. But that is probably not why Turing believed in this possibility. For in “Computing Machinery and Intelligence” he takes time to give a (somewhat superficial) argument for the possibility of AGI conditional on the premiss that “thinking is a function of man’s immortal soul”; and he states his belief in the empirical case for “telepathy, clairvoyance, precognition and psycho-kinesis”, stating that these pose “quite a strong” argument against his views, which he then tries to address. (Turing also claims that Muslims believe that “women have no souls”. For some reason this stuff doesn’t get mentioned as much as the Turing test.)
Thanks, Ralph. That’s really interesting about Turing. I remember the weird stuff about telepathy in Computing Machinery and Intelligence, but still thought that he was basically a computational functionalist of a certain kind about thought and intelligence. Certainly not my area of expertise, though, so it’s very likely I’m misremembering or misunderstanding him.
I worked in law enforcement for a long time and left, in part, because I could see there was no comprehension of what AI was about to do to social and legal order. Not looking good!
'or how decision-making systems perpetuate biases against marginalised groups': universities are cooked. I understand that most people say this for victimhood brownie points, but eventually enough people will believe it. I can imagine people staying in a building with a plane falling into it because elderly and disabled people must be the first to exit ("women and children" is too patriarchal, somehow).
As far as the actual article goes, I am a more pessimistic camp-2 inmate, and I'm happy you have a different view, since you are smarter and better read in this area. However, even if AIs don't have the Darwinian drive for survival, despite being programmed by humans who do, they would also lack any corresponding drive for human survival. Whatever they end up pursuing, even if it helps humanity as a whole, might conflict with some of our desires. Imagine a medical advisor who suggests euthanasia because the economic costs, or the possible mental load of therapy, are deemed too high. Or who shuts down a nuclear plant that is seen as too risky.
In some ways the Darwinian cognitive bias could be a healthy one; consider some of the issues that arise from extreme utilitarianism. That is why some people might rationally prefer that it be humans who make the wrong decisions.
Lucid as always; well noticed, kudos. I'd only add that the process of detachment has been underway for some time now. Maybe not a step change?
Or rather, not detachment but a rebalancing of attachments? It looks to me as if very close and very distant relationships are strengthening, while middle-distance relationships are weakening.
I have close relationships with my nuclear family at home (Harpenden, UK), with my closest next-of-kin, my mum and sister, in another country (my old home in Skopje, MK), and with the company that employs me, which is ultimately in the US (in California) but really distributed across the US West Coast and East Coast, the UK, Berlin, and Central Europe, with investors all around the world. My connections to my own street are as before: I know the neighbours by name, and we say "Good day" when we see each other outside. But my connections to people on the other side of Harpenden, or in London, or in other parts of England and the UK, have weakened, I think. They are weaker, and I depend on them less, than my connections to the people tied to the company that pays for my daily bread, whether working for it, owning it, or investing in it.
Is this for the worse? I don't know. My recollection of a pre-internet childhood is of the joy of hanging out with friends, interspersed with an impossible amount of boredom: July and August, when neighbourhood friends went on holiday with their families (as did I), or the winter months when we were cooped up at home because it was too cold to spend much time outside. It was soul-crushingly boring. Watching-paint-dry boring. The internet, of which I was a very early user (even of the computer networks that briefly preceded it), opened up another universe of non-boring, interesting stuff to us.
AI doesn’t have to weaken human interdependence; it can enhance it. By handling repetitive or complex tasks, AI frees people to focus on collaboration, creativity, and deeper human connection. With thoughtful use, superintelligent tools could actually strengthen the bonds that make us human.
For more AI trends and practical insights, check out my Substack where I break down the latest in AI.
> "the connection between intelligence and drives like power-seeking and self-preservation that you find in living organisms is purely contingent"
What do you make of the "instrumental convergence" argument that power and self-preservation are all-purpose means to *whatever* other ends an agent might have (and that a superintelligent agent could not fail to notice this fact)?
Are you assuming that artificial superintelligence will be *so* alien that it cannot properly be modelled as an "agent" at all? I worry that that's an awfully big assumption! (While current LLMs aren't especially agentic, newer-gen ones seem significantly more so than earlier ones, so the trajectory seems concerning...)
Thanks, Richard. Good question. Will try to respond at greater length later, but briefly: I find instrumental convergence arguments very weak. They rest on a picture of rational agency that has very little connection to the realities of how biological and artificial intelligences work.
On whether artificial superintelligence could be modelled as an “agent”: yes, I expect we will build superintelligent systems that are “agents” in one sense but not in the sense that would license analogies to living things or an alien species.
In general, I find much of the AI x risk literature rests on folk concepts and stick-figure idealisations - “intelligence”, “goal”, “agent” - that are very unhelpful for thinking in a clear, precise, empirically-informed way about the mechanisms, functions, and likely failure modes associated with specific systems.
Thought provoking article. I think, however, by beginning to equate interdependence with its instrumental value -- yes, machines will do more and more for us -- that observation doesn't lead us to the conclusion that this means the "glue" that holds us together will lose its strength, as if interdependence is a zero-sum affair. In fact, it might just have the opposite effect, driving humans to spend more time but for phatic reasons, time spent with family and friends for the pleasure of being with them. It depends on how the productivity gains that result from using AI are distributed. As a result, the design of AI systems is an ontological affair. What kind of world do we want to bring forth? Unfortunately, it appears that those who are developing the systems are operating under the dictates of the One-World World (OWW), which priorizes the profit motive above all else. Imagine using those productivity gains to reduce everyone's workload so they could work less, so they could be more creative and be more caring outside of work. I think that discussions about the future of AI are overly focused on the risk factor. There needs to be more thought and more discussion on what are the ontological possibilities that this new technology gives rise to and how can we decide which ones would be better for us. That is if the "we" and "us" have a say in the matter.
The evidence so far about machines that do more and more for us is not that we spend more time together. Quite the opposite. We are moving away from our families, forming couples later, having fewer children and fewer friends. All of these trends started well before the internet, so it's not just that.
Any vision of humans spending more time with each other has to start with humans as we actually exist, not Rousseauiste mythical humans.
Thank you for your thoughtful reply. I think it's safe to say that people spending less time with each other in the Global West is more of a function of the hyper individualism at the core of the One-World World (OWW) mindset. In my mind, the idea of the autonomous individual, the celestial self in which the vast majority of people cast themselves as the star of their life script with others revolving around them, brings forth a world in which individuals utilize technology to advance their already individualistic life scripts. If you live in Latin America, however, although people also have access to the same technology as those in the Global West, people still spend an enormous amount of time with their extended families, since the role of the family plays such a large factor in the development of the personal identity. So, when you say "we" I think you are referring to the majority of people living in the Global West, but not everyone who lives there. Moreover, when talking about "humans", especially with regard to their myriad of cultural and social worlds in which they live, these worlds are heterogenous. As a result, "we" humans don't experience the world in a unique fashion, and I think any vision of humans spending more time with each other has to start with ontological plurality. Humans actually exist in a world that is large enough to fit in all the worlds the every day meaning makers create, which means there are multiple trajectories from which the future can unfold.
It's not just the West, though. The same decline trends are evident throughout the world; it's just that the absolute levels of interaction are still higher than ours, as the trends started a little later.
You might like to read some of Dr. Alice Evans's work here on Substack about this very thing. She has spent time in India and Pakistan, Syria, South America, and African countries, and seen the same patterns of decreasing interaction everywhere. And of course East and South-East Asia have the same patterns too.
"I've come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things."
-- Douglas Adams, The Salmon of Doubt: Hitchhiking the Galaxy One Last Time
I see this concept proven over and over. Every large technology advance is accompanied by much punditry bemoaning how it is somehow bad for humanity. For heaven's sake, you spend your days readings and writing, and you don't believe it's a tragedy that you've been isolated with dead words instead of the proper human contact of constant real face-to-face conversation with fellow sages.
"When there are machine workers that are far more effective and efficient than people" - umm, this is the history of mechanization. I'm sure you know about the literal Luddites. One thing I've found very amusing about the Great AI Moral Panic, is the enormous wails of complaint and gnashing of teeth about labor effects, from the general class of people who have been indifferent or even hostile to the similar experiences of blue-collar workers. There is a very distinct lack of moralizing, lecturing, and finger-wagging, about not to be a metaphorical Luddite or the economic horrors of job protection (not that it's unknown, but it's really stark how different it is, because this time the targets are the knowledge class itself).
Don't get me wrong, I'm not saying don't think about the social effects of technology. But I, for one, welcome our new AI Overlords.
(n.b. for people who don't get that last sentence, it's a joke reference).
Thanks for the quote, that's a great one. Though I'm not so sure it's actually applicable right now, because young people in their 20s actually seem far more pessimistic about AI and technology/social change now than older people do, and it's the youngest who are engaging in the most nostalgia, often for times they were never alive for.
I agree with a lot of this comment. But modern tech already has had some of the effects Dan describes (also some countervailing). As Yogi Berra says 'it is hard to make predictions, especially about the future."
This is an excellent point, and one I would love to see explored further. I've sometimes referred to an adjacent form of this problem: how much do humans trust each other in our future AI-powered society? Alignment with human goals and ends is one thing, but doing so in a way that preserves inter-human trust is essential. A theoretical GDP-maximizer AI would be very likely to generate an incredibly low-trust society, so how do we ensure AI is focused on preserving or enhancing trust?
“the connection between intelligence and drives like power-seeking and self-preservation that you find in living organisms is purely contingent. …There’s no reason why superintelligent machines should have those drives.”
This is hand-wavey. I’ve not heard a good argument against instrumental convergence, and I was disappointed that it wasn’t addressed in this essay.
That said, your point about the erosion of interdependence is not talked about enough and well articulated!
Making us non-interpendent (or at least moreso) is, I agree, the most likely scenario out of a variety of scenarios that could play out. The harder part to think through is whether that is good or bad. Revealed preferences show us that whenever people can get less interdependent, they do. At least, most people. They might moan about it and say it's bad, all while they are choosing to do the things they're moaning about. So idk, maybe it's not bad even though we're wired to assume its bad or to inherently feel threatened by it. Lotta exploitation, abuse, and general horrific stuff has occurred through society bc of interdependence. But also almost no one really likes thinking about everyone in their own little pod-world. I'm not sure we're even capable of assessing this or getting outside our wired-in intuitions that something we've always needed MUST be good.
AI will replace most human jobs, but there are some things we will continue to want to do with other humans - dance, sing, have sex, go to church, eat food,live entertainment, and maybe a few others. I believe that these desires for companionship are woven so deep into our evolutionary nature that computers and robots will not replace them ever, or at least for a long time.
Making love with another human being that you love and who loves you back is one of the most powerful life affirming experiences one can have. It’s part of what makes all the suffering and misery worth it. Even the most sycophantic machine will not be able to capture this.
I just finished Mutual Aid by Kropotkin. He could use an editor, but he gave so many examples because he was fighting against the Social Darwinism of his era and wanted to bring a scientific argument against it. Like the author, he argues that our tendency to cooperate and form tribes is in our DNA.
Provocative premise. This technology does seem to be just a thought calculator. Taking our ideas and connecting them in response to our queries. I think it will likely never be capable of creative thinking. That will probably remain the domain of people.
To the point that the real danger might be the undermining of human relationships, that's here now. Embodied in our preference for the much less demanding relationships we enjoy with the web. I think we already see generations with this characteristic. Maybe even worse... the newest ones might not even like other people.
The place you describe "...a world in which many people, whether wealthy capitalists or ordinary consumers, rely more and more on machines and less and less on other people... " is the places I walk and drive through today. It's where we won't even look into each others eyes in favor of taking the selfies we seem to so desperately want.
Yest still the places we gather to sing and dance give us hope that real love will never be the product of a calculation.
I've intuitively thought that AI will ruin things not through some alignment problem, but through labor disempowerment — public facing talking heads I respect neglect this risk though which made me doubt that impulse
Validating / encouraging to see you & these references take the threat to relationships / culture seriously 🙏 I hope there's more to come
"Once you accept that there is nothing magical or supernatural about the source of human intelligence, that our brains are just complex information-processing machines, the possibility of superintelligent AI follows very quickly."
That's already assuming a lot, and plenty of (humanly) smart people have disagreed. But I don't think the answer really matters for your argument. What matters is whether AI can be powerful and adept enough in specific ways to erode our human interdependence. What concerns me most is the possibility that such erosion is already underway at current capabilities, or will be very shortly even at levels far below what might qualify as "superintelligence." Perhaps all that's required is the appearance of verbal parity, or irregular achievement of parity in some tasks. This puts us in a grey zone between the legible past and some superintelligent future: where no one is on the same page about what AI can, can't and shouldn't do, and where erosion of our shared fabric is so incremental and ambiguous that by the time most people recognize the damage it could be too late.
To put this another way: even if many people underestimate the long-term risks or focus on the wrong ones, even more underestimate the short-term risks relative to any long-term risks.
The argument about interdependence begs the question “so what?”
If we’re so satisfied & happy that human society frays apart out of a lack of need, isn’t that a win? Every human society is riven with major problems and this can’t be changed. Sounds like this might be the utopia you said wouldn’t happen
I’m very sympathetic with the point about interdependence. I’d suggest a parallel worry about self-reliance that probably reflects deeper symmetries between morality and prudence. It is probably good for most of us that we have to exert ourselves to maintain our standard of living. If so, ASI superabundance may prove a kind of lottery-winners’ curse that makes many people vicious and miserable. (I think Bostrom’s recent book is great on this stuff.)
A superficial quibble that I raise because I find it interesting: The characterisation of Turing is arguably misleading. I agree that if you reject anything supernatural about human intelligence then you should regard (strong) AGI and ASI as possible. But that is probably not why Turing believed in this possibility. For in “Computing Machinery and Intelligence” he takes time to give a (somewhat superficial) argument for the possibility of AGI conditional on the premiss that “thinking is a function of man’s immortal soul”; and he states his belief in the empirical case for “telepathy, clairvoyance, precognition and psycho-kinesis”, stating that these pose “quite a strong” argument against his views, which he then tries to address. (Turing also claims that Muslims believe that “women have no souls”. For some reason this stuff doesn’t get mentioned as much as the Turing test.)
Thanks, Ralph. That’s really interesting about Turing. I remember the weird stuff about telepathy in Computing Machinery and Intelligence, but still thought that he was basically a computational functionalist of a certain kind about thought and intelligence. Certainly not my area of expertise, though, so it’s very likely I’m misremembering or misunderstood him.
I worked in law enforcement for a long time and left, in part, because I could see there was no comprehension of what AI was about to do to social and legal order. Not looking good!
Great article.