What if the truth about AI is more boring than we'd like?
On the excessively negative, alarmist, and catastrophising discourse surrounding AI and democracy.
Recently, I stumbled across a conversation between the computer scientist Yoshua Bengio and the public intellectual Yuval Noah Harari on “Artificial Intelligence, Democracy, and the Future of Civilization.” Early in the conversation, Bengio is asked whether AI systems that become “smarter than us” pose an existential threat. Smiling, he replies, “Yes”:
"Imagine we created a new species and that species was smarter than us in the same way that we're smarter than mice or frogs or something. Are we treating the frogs well?”
When asked whether he agrees, Harari says, “Absolutely”:
“There are short-term threats, which are also very significant, like the collapse of democracy…. [and] I completely agree with the frog analogy. It doesn’t bode well for us.”
He then offers “another way to think about it”:
“If the AI of today is like an amoeba, just imagine what T-rex will look like, and it won’t take billions of years to get there. We can get there in a few years.”
The rest of the conversation proceeds along similar lines. It illustrates two features of much popular discourse surrounding AI.
Two prominent features of AI discourse
The first is how low-quality it is. Likening the human-AI relationship to the human-frog relationship rests on so much confusion, on so many levels, that it’s difficult even to know where to begin. To make only one obvious point, animals like Homo sapiens and frogs are agents that evolved to maximise fitness through an undirected process of Darwinian evolution. In contrast, AI systems are tools we consciously design to help us promote our interests and achieve our goals.
The second and related feature is how alarmist and catastrophising much of the discourse is. As the conversation between Bengio and Harari illustrates, popular commentary on these topics is relentlessly negative. And people—pundits, experts, and audiences alike—clearly find this gloomy and sensationalist discourse captivating and delicious.
After Bengio explains why he thinks AI is an existential risk, he laughs. So does the host. It’s clear that they’re enjoying the shock factor. It’s also obvious from Harari’s contributions to the conversation and elsewhere that he can’t wait to tell us all the ways that AI is a profound and unprecedented threat, danger, catastrophe, menace, hazard, disaster, and so on. And judging by his book sales and audience size, many people can’t wait to hear it.
The most obvious example of this discourse is the sheer amount of attention devoted to discussing the existential threat AI allegedly poses both to humanity and democracy. I’ll have more to say about the former topic in future articles.1 In this article, I will focus on the second threat, which I’ve written about before and which helps illustrate more general flaws in popular AI discourse.
So, should we worry, as Harari repeatedly suggests in his conversation with Bengio, that AI might usher in the “collapse” or “end” of democracy?
I’m sceptical.
First, however, two disclaimers:
My focus here is on popular discourse surrounding AI. I’m aware there’s also lots of careful and rigorous analysis of the topic, including on podcasts that are popular among relatively small groups of experts and intelligent people, such as the Dwarkesh Podcast and the 80,000 Hours Podcast.
My focus is on existing AI technologies and plausible near-term developments of those technologies, not an imagined future where we have “super-intelligent” AI that exceeds human capabilities along all dimensions. I don’t dismiss that future as “science fiction”. Nevertheless, it raises a very different set of issues for democracy, and I’m generally sceptical of our ability to predict what such a world will look like.
Does AI pose an existential threat to democracy?
The idea that AI poses an existential threat to democracy is typically linked to concerns that bad actors will use AI to create, spread, and target disinformation campaigns designed to manipulate voters into shifting their attitudes and votes.
Although people raise many worries in this context, the following uses of AI tend to get the most attention:
Creating deepfakes (hyper-realistic fake images, video, and audio) that trick people into believing falsehoods or cause them to distrust all recordings because they might be deepfakes
Using large language models to create highly persuasive arguments for false conclusions
Using armies of AI bots to automate disinformation campaigns
Creating and distributing highly personalised (“micro-targeted”) propaganda
The World Economic Forum has cited these dangers as a primary justification for listing “misinformation and disinformation” as the most pressing near-term global risks. Over the past few years, such dangers have also received extensive coverage in mainstream media and by various pundits and public intellectuals.
In one of the first articles on this blog, I pointed out that these concerns are greatly overblown. Drawing on arguments from people like Felix Simon, Sacha Altay, Hugo Mercier, and Scott Alexander, as well as fairly consensus findings from psychology and political science about how people process information, how media and propaganda work, and what determines voter behaviour, I argued that “the alarmism surrounding this topic generally rests on popular but mistaken beliefs about human psychology, democracy, and disinformation.”
That was published in January 2024 before nearly half the world’s population went to vote in places as diverse as India, the United States, Indonesia, and much of Europe. Now that those elections have passed, we can ask whether the fears surrounding the dangers of AI-based disinformation were warranted.
The answer seems to be “no”.
For example, in May 2024, a report by the Alan Turing Institute found evidence of attempted AI-based interference in just 19 out of the 112 national elections they studied and discovered “no clear signs of significant changes in election results compared to the expected performance of political candidates from polling data.”
As Felix Simon, Keegan McBride, and Sacha Altay put it, it’s not very surprising that the alarmist speculations about AI-based disinformation were “so off”:
“They ignored decades of research on the limited influence of mass persuasion campaigns, the complex determinants of voting behaviors, and the indirect and human-mediated causal role of technology.”
Moreover, despite all the focus on AI and social media, it’s noteworthy that the most profound threat to democracy today is pretty much the same one Plato warned of in The Republic over two thousand years ago: bullshitting demagogues appealing to the uninformed and base instincts of the populace in ways that ultimately lead to authoritarianism.
Why so alarmist?
So, what’s going on here? Why is so much of the popular discourse surrounding AI so alarmist and catastrophising? This was the topic I addressed in my opening remarks on a panel at the British Library recently on ‘Democracy and AI: Who Decides?’
Here’s a slightly expanded version of what I said:
Negativity bias
The first factor distorting popular conversations about AI is negativity bias. Human attention and thought are disproportionately drawn to negative information, especially threats and dangers. So, when it comes to AI, we’re highly interested in and captivated by stories about the risks it poses and its potential downsides, and we frequently ruminate on worst-case scenarios.
One unfortunate byproduct of this tendency is that much of the public conversation surrounding AI ignores that many (I’d bet most) applications of these technologies are positive, at least in Western democracies.
In my own case, I can’t think of a single example where AI has disadvantaged me. However, I can think of several ways I’ve benefited, including when it comes to my ability to learn, organise, and process information relevant to democratic participation. This includes everything from text-to-speech technology, which has significantly improved in recent years, to using large language models to quickly and helpfully summarise and synthesise large bodies of information.
It could be that I’m very unrepresentative of most people in this respect, but I doubt it. I suspect the boring truth about AI in modern liberal democracies is that it provides tools that mostly help people achieve their goals in positive ways.
Does AI favour bad information over good?
Relatedly, when people think about AI's impact on democracy, they frequently assume that the power of these technologies will asymmetrically benefit bad (e.g., propagandistic, false, and misleading) information over good information. Sometimes, they don’t even make this assumption explicit. They simply don’t consider the positive use cases at all.
However, this assumption strikes me as highly dubious. Even setting aside interesting proposals for how AI could enhance democratic deliberation rolled out in countries like Taiwan, I suspect systems like ChatGPT and Claude already have a net positive impact on the information environment in Western democracies.
One reason is simply that they help writers fact-check and evaluate their contributions to public debate, and most writers aren’t launching disinformation campaigns. Another more mundane reason is that these systems are simply pretty reliable sources of information on most issues.
During the British Library event, one of the panellists argued that a big problem with large language models is their “bias”. But whatever one thinks about subtle biases in the output of systems like ChatGPT, they’re clearly much less biased than most highly influential political elites.
For example, almost everything Elon Musk posts on X is a lie, a falsehood, a half-truth, or propaganda. In contrast, if you ask Musk’s own large language model, Grok, “Who is the most prolific spreader of misinformation on X?”, it will happily—and very plausibly—inform you that it’s Musk.
More generally, I think it’s also obvious that mainstream large language models are much less biased than the average voter. Although Jason Brennan’s influential claim that most real-world voters are either “hobbits” (political know-nothings wholly uninterested in political affairs) or “hooligans” (tribal and dogmatic partisans) is too pessimistic, it’s much closer to the truth than an idealistic model of voters as highly informed rational do-gooders deliberating about the common good.
Ask ChatGPT a heated political question, and then ask a random friend the same question when you’re next at the pub. I’d bet ChatGPT’s response is better informed, more thoughtful, and more nuanced. Insofar as people are turning to large language models for information about the world, that would therefore seem to be a good thing.
Nevertheless, due to negativity bias, these fairly obvious points rarely even get considered in mainstream discussions about AI's potential impact on democratic processes.
The incentives of media, pundits, and public intellectuals
Negativity bias is a psychological tendency, but at the collective level it also shapes the incentives of pundits, mainstream media, public intellectuals, and various other commentators.
We hear a lot these days about how social media algorithms are designed to maximise user engagement, which tends to reward attention-seeking, sensationalist clickbait over thoughtful contributions to public discourse. There’s a lot of truth to this analysis. However, it too often omits the obvious point that mainstream media, pundits, and public intellectuals also confront a brutal attention economy in which many incentives drag them away from rational, evidence-based contributions.
In the context of AI and democracy, this means that coverage and commentary tend to be systematically skewed towards plausible-sounding narratives about threats and dangers. Audiences are magnetised by this kind of content.
Moreover, this tendency is exacerbated by the unfalsifiability of most alarmism and threat-based narratives. Now that the recent election cycle has passed and AI doesn’t seem to have impacted elections in ways many feared, has anyone who raised those fears admitted they were wrong? Or even noticed? Of course not. From their perspective, they can always say, “Just wait!”, or “The only reason nothing bad happened is because people took the risk I helped to warn about seriously!”.
In contrast, imagine if, say, AI-based disinformation had clearly swung an election. Then, anybody who had claimed that fears about AI-based disinformation are overblown would look really bad.
In other words, it’s not just that questioning alarmism is boring in the literal sense that people aren’t likely to pay attention to it. It’s also reputationally risky in ways that alarmism isn’t.
The result is that the incentives of modern punditry and media coverage reward an anti-Popperian culture: bold and risky predictions are discouraged; nebulous and unfalsifiable catastrophising is incentivised and amplified.
The third-person effect
A third important factor is what psychologists and media researchers call the “third-person effect”, the belief that people—other people, not oneself—are gullible and easily influenced by media. As I have argued repeatedly on this blog, drawing on the work of Dan Sperber, Hugo Mercier, Sacha Altay, and many others, this perception is mistaken: people are generally vigilant and sophisticated in evaluating communicated information, and manipulation of opinion is exceptionally difficult.
Nevertheless, most people disagree with this. Although they think it is unlikely they will be—or have been—manipulated by AI-based disinformation, they assume that others are far more credulous and influenceable.
This powerful bias distorts our ability to think carefully about media, politics, public opinion, and many other topics. As Sacha Altay and Alberto Acerbi document in an interesting study, the “strongest, and most reliable, predictor of the perceived danger of misinformation is the third-person effect.”
Consider deepfakes, for example. As people have gradually abandoned the initially prominent idea that deepfakes would cause mass deception, the new conventional wisdom is that they will lead to a pervasive distrust of all recordings. If deepfakes exist, the thought goes, you can no longer trust your eyes or ears because any recording you see might be a deepfake. Worse, once deepfakes exist, politicians and other influential people can simply dismiss any inconvenient recordings as deepfakes. (This is sometimes called the “liar’s dividend”).
It could be that the presence of deepfakes will slightly degrade the overall quality of the information environment for these sorts of reasons. Nevertheless, such worries also seem greatly overblown. I know deepfakes exist, yet if I see a recording released by the BBC or the New York Times, I will trust that it is real. In contrast, if I see a video posted by Elon Musk or Catturd, I won’t.
To me, at least, this doesn’t exactly seem like a crisis. The primary mode of human communication is language, which has always allowed for “deepfakes”: anyone can say anything they want at any time, true or false. Does that mean we no longer trust what anyone says, or that language-based testimony is never useful for holding anyone to account? Of course not. Trustworthy communication is rooted in sophisticated psychological processes of “epistemic vigilance” and complex social and institutional scaffolding aimed at detecting and punishing dishonesty.
This is all fairly obvious when we reflect on our own case. However, when speculating about AI's impacts on society, we tend to imagine a public much less sophisticated than ourselves.
Final thoughts
These are certainly not the only factors distorting public conversations about AI. For example, there is also the strange fact that leaders of AI companies often seem eager to amplify alarmism about AI, perhaps because it speaks to the power of their product in ways that attract more investors and users.
Moreover, there’s also the fact—and here I’m speaking from experience—that any criticism of alarmism inevitably invites the following response: “So, you’re saying there’s literally nothing to worry about?”
Let me be clear: I’m not saying that. There are things to worry about. There are also things to be excited about. The point is instead that we should attend to such issues with more care and less catastrophising than one finds in much of the current discourse surrounding this topic.
In an ideal world, human psychology and public debate would orient us to the truth. In the real world, reality often diverges from the fictional worlds we find most engaging to think and talk about. The impact of AI on democracy provides a vivid illustration of this.
Sometimes, the reality is simply more boring than the thrilling doomsday scenarios we find so captivating.
1. I’m broadly sympathetic to the views of existential risk sceptics like Steven Pinker, Robin Hanson, and David Pinsof. If there is a legitimate concern here, I think it comes from how advanced AI might transform military conflict in dangerous ways, not from super-intelligent AI agents whose interests are misaligned with ours.