This is my personal experience with AI. Whilst shopping at a grocery chain I had used for many years, I was stopped on my way out: alarms went off, and security guards approached and asked for my receipt, which I gave them. I asked why I was stopped, and they told me that their new AI security system had flagged the toilet rolls in my trolley. I told them that many other (white) customers ahead of me also had outsized items in their trolleys, so why pick me? I asked whether their algorithms selected for non-white demographics. They refused to answer, and I left and have never gone back.
It's the easiest thing in the world to load AI with demographic data for risk assessments, for example the rates at which Native and Black people are incarcerated compared to white people. But where is the fact that I was a paying customer of many years at that store? Why didn't the AI select for that?
The danger is not AI outthinking us; it is AI judging one of us as less human than the next, because that is easier to do. Programming in human rights and dignity is harder and less profitable.
Sorry to hear that!
Anecdotes like Geoffrey's are so important to keep in mind when having discussions like these. With respect to the issue he raised, there are really two problems. One, it is difficult if not impossible to anticipate and program in all the relevant variables that would enable AI not to arbitrarily privilege some humans over others. Even if they updated the AI to give extra leeway to customers who had shopped there >X times, this would be another hopelessly rigid metric. Two, even if we knew how, in practice AIs will inevitably not be reliably programmed with human rights and dignity by those in charge of training them and setting the algorithms, given the incentives, agendas and biases among those with de facto authority.
Very well written article, although I think you significantly downplay the role AI algorithms play in shaping public opinion, especially on platforms like X: there is evidence that Musk told Twitter employees to boost his tweets into the feeds of over 200M users, and that is just one example. Keep it up!
That's a legitimate concern, but you don't need AI to create those kinds of algorithms. In fact, it's possible that AI used responsibly could mitigate some of the harmful effects of social media. Of course that won't happen with X, which is a complete trainwreck, but it might have rescued Twitter before Musk bought it.
I'll be afraid when AI can replicate itself, starts independently competing with humans for resources because it has somehow acquired a lifespan, and learns how to extract the resources needed to power itself.
So because it is not *currently* wreaking havoc (except it is; just ask anyone in the academic world), we can dismiss the potential for danger in the future? I don't get the reasoning here. Certainly we can't know with 100% certainty how it may play out, and no doubt many dire predictions won't unfold (although it's basically inevitable that other, unpredicted dangers will). But the fact that the risks can be seen so clearly in what has already happened in such a short time makes it incredibly naive to say, this early on, "very little to see here." It's like judging the dangers of asbestos based on what was known in 1910, except even more short-sighted.
Thanks. I'm very much "in the academic world". It would be helpful to engage with the reasoning before saying you don't get it, or to provide arguments for your viewpoint rather than assuming its truth.
I've been in the academic world (and adjacent) for many years. I know the frustration of seeing AI-"assisted" (or AI-written) papers, scholarship applications, and academic conference proposals. But my main argument here is that it is far too early to dismiss concerns about AI wreaking havoc in the future. Clearly it has the potential to be used to spread misinformation; I've linked relevant articles below. AI is in its infancy and it's already pointing to very obvious risks. We're in no position to dismiss those risks, and we have every responsibility to mitigate them. I'd say the first and most important step is to stop letting AI replace our own reasoning, comprehension, expression, and artistic creativity. If we do let it replace those, we become extremely mentally lazy and even more vulnerable and suggestible.
https://www.nbcnews.com/tech/internet/republican-bot-campaign-trump-x-twitter-elon-musk-fake-accounts-rcna173692
(https://economictimes.indiatimes.com/magazines/panache/are-you-arguing-with-an-ai-russian-bot-posing-as-a-trump-supporter-wild-theory-sparks-online-frenzy/articleshow/119544757.cms?from=mdr)
These are valid concerns, but not the ones the article was addressing, I think.
If you want to argue that AI alarmism is not warranted, it's probably worth addressing the arguments of the AI alarmists on the object-level. The fact of the matter is that AI could in principle be thousands of times smarter than us because it's not constrained by the limits of biology. Today, we're trying very hard to bring it to that level, and we could succeed very soon. What will society look like after we get there, when beings thousands of times smarter than us exist? It's naive to think that little of significance will change.
Thanks, but it would have been helpful to read the article before commenting, such as the part where I explicitly say I'm not considering super-intelligent AI.
Well-argued piece, thanks.
I wonder something that seems so naive that it risks coming across as flippant snark, which I don't intend. What strikes me as simplistic, among AI existential-risk doomers, is how 'AI' is positioned as one monolithic thing rather than many things. If one did go rogue and attempt to annihilate Homo sapiens sapiens, wouldn't we have AI on hand figuring out how to stop it? The argument that it would just be cleverer than all the others seems like an appeal to an imaginary AI god. A related thought is that, like everything, the benefit/risk is just a normal trade-off. Anecdotally, I have lost work thanks to AI, but I also use it to work faster and get paid quicker.
I second what Dan is saying about overblowing the dangers of AI. I wrote about this on my Substack: https://crjustice5.substack.com/p/ai-whats-autonomy-got-to-do-with The thing is, AI never has to survive on its own. It is designed and maintained by humans. It has no history of a struggle for survival, nor does it know how to reproduce little versions of itself that grow up to be big versions. If one of them messes up, we can unplug the servers and be done with it. Every AI system out there is totally dependent on humans for its existence.
When an AI is trained, it develops a set of guiding behaviors that help it accomplish the training objective. For example, when an LLM is trained to be nice using reinforcement learning, it develops behaviors that steer it away from outputting tokens that end up saying bad things. This is true of agentic AIs as well: they develop specific patterns of behavior to help them accomplish the task they're given. Some patterns of behavior seem to be useful for all tasks, such as dividing a task into subgoals and pursuing those. Furthermore, some subgoals are useful for all tasks, such as self-preservation and resource acquisition. If a superintelligent AI were not perfectly aligned with us and it developed these subgoals, whose will do you think is going to win out in the end? And as we train AIs for tasks with longer and longer time horizons, how do we prevent these subgoals from emerging in a way that's bad for us? There's no good answer to that right now.
This rests on positing a system both vastly more intelligent than us and yet also so unintelligent that it can't be directed to help us achieve our goals. I know that the AI x-risk literature is filled with these sorts of arguments - I lecture on them at university - but they're not very plausible IMO.
The reason a superintelligence might not do what we want is not that it's not smart enough to understand what we want, but that it doesn't care.
During training, the AI learns a set of proxy values to approximate the thing we actually want the AI to do. But as long as those proxies are just proxies, there's always going to be some Frankenstein way to satisfy them that's very different from what we had in mind. This is especially true if the AI has superintelligent capabilities, which allow any set of values to be taken to their extreme, potentially causing a very large divergence between the consequences of the AI's learned proxies and the values we envisioned.
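To make that proxy point concrete, here's a toy sketch in Python (purely my own illustration, with made-up objectives; nothing resembling a real training pipeline): an "intended" objective that wants behaviour near x = 1, a proxy that matches it everywhere except for one narrow exploitable flaw, and a search that optimizes the proxy with increasing intensity.

```python
import numpy as np

def intended_objective(x):
    # What we actually want: behaviour close to x = 1.
    return -(x - 1.0) ** 2

def proxy_objective(x):
    # A learned stand-in: agrees with the intended objective almost
    # everywhere, except for a narrow spike of spurious reward near x = 17.
    return -(x - 1.0) ** 2 + 600.0 * np.exp(-((x - 17.0) ** 2) / 0.01)

# "Optimization pressure" = how finely we search before committing to
# whatever behaviour the proxy scores highest.
for n in (50, 5_000, 500_000):
    candidates = np.linspace(-20.0, 20.0, n)
    chosen = candidates[np.argmax(proxy_objective(candidates))]
    print(f"search size {n:>7}: chosen x = {chosen:6.2f}, "
          f"proxy = {proxy_objective(chosen):7.1f}, "
          f"intended = {intended_objective(chosen):7.1f}")
```

With the coarse search, the chosen behaviour sits near x = 1, as intended; with the finer searches, the proxy score jumps while the intended score collapses. That, in one dimension, is the worry about pointing very strong optimization at an imperfect proxy.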
A lot depends on whether the AI first gets to the level where its value structure mirrors our vision closely enough, or the level where it's intelligent enough to reason about which instrumental actions best satisfy its current value structure in the long term. If the latter comes first, it seems trivial that the end result will be an unaligned AI. And to me it seems that, in order for the AI to get to a place where the former condition is true, it takes a level of intelligent reasoning ability that already lends itself to the latter condition.
I'm not one of those people who claim that we're definitely screwed by AI. There are too many unknowns and unknown unknowns to say that for certain. But based on the mental model of AI that I've developed, things don't seem good. The risk isn't definite but it's large enough that it should be taken seriously as a policy concern, which we're not doing right now.
I've spent my career (20 years) in machine learning, and am now transitioning into technical AI safety. This article feels like an addendum to your series about the left's over-hyping of disinformation, with AI talk sprinkled on top. There's definitely some alarmism out there. And there are lots of informed people who, I think, are likely technically wrong in believing that we don't need another breakthrough or three to get to full, even super, intelligence; that's a whole other world. And so their confidence interval for when that happens is, IMO, pushed way too short. Though scarily soon is definitely still possible.
But you've ignored the two real concerns in favor of a bunch of fake ones. First, you wave away superintelligence by saying we don't know what it will mean. I fully agree with that. But I'd say it's still worth it being a topic of societal discussion and awareness; if nothing else, it makes people slightly more prepared to reckon with it.
More importantly, even the current models, once the technical infrastructure, economic structures, processes, and patterns of thought have had time to shift, are capable enough to reduce most jobs in the white-collar workforce by some crazy amount. Already startups aren't hiring junior engineers: the founder and the first couple of senior engineering employees can have Claude do the work of the six junior engineers they would've hired in their second round of hiring. Even with frozen AI capabilities, that portends what, 85% of non-physical labor becoming unemployed over 15 years? That's like compressing the shift away from agricultural employment from a couple of hundred years down to less than a generation. Deepfakes about Trump 2.0 aren't the threat to democracy, indeed; it's the insane social chaos and uprisings that are!
I say that like I'm certain. I agree that we really don't know. But it's deeply misguided to pretend that what I describe above isn't really possible. And not possible in an "it's possible to get struck by lightning" way; possible in a "maybe the GOP will win another presidential election after Trump, even without democratic suppression" sort of way.
My experience in technology and change in large organisations makes me very sceptical about claims of high percentage redundancies in short time scales.
Maybe that will be true of some specific sectors (not coincidentally, the very kinds of startups where these views are common) but incorporating AI as-is into large organisations will take more than 15 years.
Many non-physical jobs are effectively protected from AI because they involve deciding and implementing strategy and regulatory functions. Many of the world's biggest institutions still run on processes and technology built in the 1970s. Even if AI development stops tomorrow, it's easily a half-century programme of work and cultural change to fully embed the technology widely.
Thanks so much for writing this post. I loved your point about how those who predicted large impacts on elections don't face much downside risk from being wrong. This, coupled with media and audience incentives for pessimism, is huge. Do you have any ideas about how to counteract the bias for pessimism?
I wonder whether the growth of solo voices (like yours) on Substack might help, if only a bit, since the incentive is to build trust with quality content, rather than aiming for many views. Substack was founded primarily to offer an alternative to views-based ad revenue and seems like it’s growing in market share.
I'm a bit skeptical about AI destroying democracy. People seem to be more persuaded by their own social circle than by stuff they might encounter online. I've also had enough conversations about controversial topics where someone makes a false claim, I point out that it's false, and their response is that sure, it's false, but it might as well be true because those people really are that awful. Someone who isn't biased toward believing it is probably going to do some digging to figure out if it's true. It has some similarities with low-quality (or even junk) science: most people who don't already believe whatever the low-quality science is claiming to prove are going to either ignore it or try to verify it. I would not be surprised if a significant portion of the people sharing deepfakes know they're probably deepfakes but think they're morally or emotionally true, so they might as well share them.
It seems like negativity bias might actually be one of the biggest political problems in the world. Many very useful ideas and technologies that seem highly likely to help lots of people (donating to effective charities, gene editing to improve health and life outcomes, AI, etc.) end up in public conversation cycles largely dominated by marginal or spurious critiques. I wonder how the Apollo program would have fared if it had been subject to the level of relentless pessimism and negativity so many worthwhile projects face today.
To me, the threat to democracy is not the threat of stupefying the demos, or riling them up, or implanting incorrect ideas, etc. It's the threat of algorithmic governance: unresponsive, unaccountable, unintelligible.
I think I'm with Justin Smith-Ruiu on this point, as he suggests in "A World-Historical Upgrade" over at The Hinternet. I'd take an inefficient, human bureaucracy over a sleek, fully automated, anti-human one any day.
>"Imagine we created a new species and that species was smarter than us in the same way that we're smarter than mice or frogs or something. Are we treating the frogs well?” -Bengio
>“If the AI of today is like an amoeba, just imagine what T-rex will look like, and it won’t take billions of years to get there. We can get there in a few years.” -Harari
I was a little confused by these quotes you selected, given your post's very specific argument that current AI probably doesn't pose a threat to democracy. I agree with you on that narrow subject, but both these quotes seem contextually meant to address whether far more intelligent AIs than we currently possess could pose existential risks. I think neither Bengio nor Harari believes current AI is smarter than us in any significant way. These particular quotes suggest that a superintelligence potentially poses a grave risk to any non-superintelligence. I'd be inclined to agree with that.
I also find myself disagreeing with your point that raising alarms on societal threats is far more popular and rewarding than saying everything is fine and pushing for more unfettered technological progress. From Leo Szilard and nuclear weapons to Rachel Carson and DDT, those warning of technological risks have historically faced steep reputational costs and institutional pushback, even when later vindicated. I wrote about this and took the opposite stance to yours in a past post: “Doomers Can’t Win, Boomers Can’t Lose”
To the argument you present in "Doomers Can't Win, Boomers Can't Lose", I would add the problem that a *truly* intelligent machine is just an *absurd* idea in the eyes of many. That means it's all too easy to make fun of "doomers", and I can't see how giving a very high p(doom) can be a good move in terms of reputation for someone who has a good reputation to lose in cultural-elite circles. So when someone in good standing gives their p(doom) as, say, 10 percent, I tend to suspect their real p(doom) could be even higher.
This point does not apply to alarmism about democracy, which indeed seems like a good move in terms of reputation or prestige. The alarmist can say "machines are just stochastic parrots, I'm aware of that and not one of those who entertain crazy science-fiction scenarios --- but it's enough to endanger democracy". In general, I think existential threat from AI and threat to democracy from AI should be kept separate --- whereas this essay does things like selecting the Bengio and Harari quotes you mention, or speaking of an "existential threat to democracy" in a subheading (what does the word "existential" add here?).
Dan, sorry to be critical, but I think the biggest problem with the debate over AI safety is all the people who don't know much about AI writing posts about whether it's dangerous. When you see most of the experts in AI lining up on the side of "it's dangerous", and most of the people who know nothing about it on the side of "it isn't", you should hesitate to dive in.
History suggests that experts (who do not speak with a single voice) are correct about some of their predictions, wrong about others, and as likely as not missing the most significant consequences.
Where they're on firm ground, if they aren't able or patient enough to persuade a smart, attentive, good-faith critic, that is a problem, but that problem is on them, not the critics.
I trust superforecasters over subject matter experts when it comes to making predictions about the future. In domains like geopolitics, that attitude has a good track record.
Superforecasters are a lot less pessimistic than AI experts about AI doom: https://www.freethink.com/robots-ai/ai-predictions-superforecasters
Eh, I went through all this with the development of the Internet itself. It produced neither utopia nor dystopia, but there were seismic changes to various institutions. I remember going around literally talking myself hoarse at an early conference trying to get people to think about some of the censorship implications (it wasn't worth it: the net result, years later, was that some of my harshest detractors basically said, yeah, you were right, but that was little comfort for the cost).
By the way, this is knocking down a weakman again: "A third important factor is what psychologists and media researchers call the "third-person effect", the belief that people - other people, not oneself - are gullible and easily influenced by media."
The strong version is: "People who have both the time and education to analyze an issue in detail are much less likely (much less likely, not 100%, not impossible; that would be the strawman) to be conned by liars on that issue." This is self-consistent, and it avoids the implicit sneer of "How very elitist of you to think some people can be more informed than others!"
"people are generally vigilant and sophisticated ..."
Two words: Donald Trump.
Confession time: I've often been guilty myself of the unfalsifiable "not yet" defence.
Nonetheless, I would argue that when we are talking about potential existential risks, it is better to err on the side of caution.