
AI Sessions #8: Misinformation, Social Media, and Deepfakes (with Sacha Altay)

A deep dive into controversies surrounding fake news, "misinformation", propaganda, advertising, polarisation, social media, and generative AI.

Henry and I chat with Dr Sacha Altay about:

  • How prevalent is misinformation?

  • What even is “misinformation”?

  • Is there a difference between politics and science?

  • How impactful are propaganda, influence campaigns, and advertising?

  • What impact has social media had on modern democracies?

  • How worried should we be about the impact of generative AI, including deepfakes, on the information environment?

  • The “liar’s dividend”

  • Whether ChatGPT is more accurate and less biased than the average politician, pundit, and voter.

Chapters

  • 00:00 Understanding Misinformation: Definitions and Prevalence

  • 04:22 The Complexity of Media Bias and Misinformation

  • 14:40 Human Gullibility: Misconceptions and Realities

  • 27:28 Selective Exposure and Demand for Misinformation

  • 29:49 Political Advertising: Efficacy and Misconceptions

  • 35:13 Social Media’s Role in Political Discourse

  • 40:50 Evaluating the Impact of Social Media on Society

  • 42:44 The Impact of Political Content on Social Media

  • 46:57 The Changing Landscape of Political Voices

  • 51:41 Generative AI and Its Implications for Misinformation

  • 01:03:46 The Liar’s Dividend and Trust in Media

  • 01:14:11 Personalization and the Role of Generative AI

Transcript

  • Please note that this transcript was edited by AI and may contain mistakes.

Dan Williams: Okay, welcome back. I’m Dan Williams. I’m back with Henry Shevlin. And today we’re going to be talking about one of the most controversial, consequential topics in popular discourse, in academic research, and in politics, which is misinformation. So we’re going to be talking about how widespread is misinformation? Are we living through, as some people claim, a misinformation age, a post-truth era, an epistemic crisis?

How impactful is misinformation and more broadly domestic and foreign influence campaigns? What’s the role of social media platforms like TikTok, YouTube, like Facebook, like X when it comes to the information environment? Is social media a kind of technological wrecking ball which has smashed into democratic societies and created all sorts of havoc? And also what’s the impact of generative AI when it comes to the information environment?

Both when it comes to systems like ChatGPT, but also when it comes to deepfakes, the use of generative AI to create hyper-realistic audio, video, and images. Fortunately, we’re joined by Sacha Altay, a brilliant heterodox researcher in the misinformation space, who pushes back against what he perceives to be simplistic and alarmist takes concerning misinformation.

So we’re going to be picking Sacha’s brain and just more generally having a chat about misinformation, social media, and the information environment. So Sacha, maybe just to kick things off, in your estimation, if we’re keeping our focus on Western democracies, how prevalent is misinformation?

Sacha Altay: Hi guys, my pleasure to be here. So it’s a very difficult question because we need to define what misinformation is. So we’ll first stick to the empirical literature on misinformation and look at the scientific estimates. For that, there are basically two or three ways to define misinformation. One of them is to look at fact-checked false news.

So false news that has been fact-checked by fact-checkers as being false or misleading. And by this account, misinformation is quite small on social media like Facebook or Twitter. It’s between 1 and 5% of all the content or all the news that people come across. So according to this definition, it’s quite small. There is some variability across countries. For instance, it seems to be higher in countries like, I don’t know, the US or France than in the UK or Germany.

There is another definition which is a bit more expansive, because the problem with fact-checked false news is that you rely entirely on the work of fact-checkers, and of course fact-checkers cannot fact-check everything, and not all misinformation is news. So you see the problems. So another way is to just look at the sources of information, and you classify them based on how good they are: basically how much they share reliable information, how much they have good journalistic practices, et cetera. The advantage of this technique is that you can have a much broader range, because you can have, I don’t know, 3,000 sources of information, and that broadly covers most of the information that people see. Here the definition is just misleading information that comes from sources judged as unreliable. And by this definition, misinformation is also quite small. Again, it’s about 1 to 5% of all the news that people encounter.

But then of course, the problem is that not all the information people encounter comes in this form. Some of it can come in the form of images or all sorts of other things. And so this broadens the definition of misinformation. Some people think that when you broaden the definition, you have much more misinformation. My reading is that when you broaden the definition, you actually include so much more information that you increase the denominator. So of course there’s going to be more misinformation, but because the denominator is larger, the proportion is going to be pretty much the same. But that’s an empirical question. So, to sum up, let’s say it’s smaller than people think, according to the scientific estimates.
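To make Sacha’s denominator point concrete, here is a quick sketch with entirely made-up numbers (none of these figures come from the studies he cites): broadening the definition adds flagged items, but it adds far more ordinary content, so the proportion can stay roughly flat.

```python
# Hypothetical illustration of the "larger denominator" argument.
# All numbers are invented for the sketch, not taken from any study.

# Narrow definition: only fact-checked false news, out of all news items seen.
narrow_misinfo = 30
narrow_total = 1_000
print(f"Narrow definition: {narrow_misinfo / narrow_total:.1%}")  # 3.0%

# Broad definition: also count misleading images, memes, posts, etc.
# More items get flagged, but the pool of content counted grows even more.
broad_misinfo = narrow_misinfo + 270   # ten times as many flagged items
broad_total = narrow_total + 9_000     # ten times as much content overall
print(f"Broad definition:  {broad_misinfo / broad_total:.1%}")    # 3.0%
```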

Henry Shevlin: If I can just come in here, a point that Dan you’ve emphasized in our conversations to me, and I think Scott Alexander has also emphasized in a great blog post called The Media Very Rarely Lies, is that a lot of what people think of as misinformation is just true information selectively expressed or couched in a way that naturally leads people to maybe form false beliefs but doesn’t involve presentation of falsehoods. Does that sort of feature in any of these sort of more expansive definitions of misinformation? Is it possible to create definitions that can capture this kind of deceptive, intentionally deceptive but not strictly false content?

Sacha Altay: I’d say that when you look at the definitions based on sources, if a source is systematically biased and systematically misrepresents evidence and so on, it is going to be classified as misinformation. I think the problem, and the more subtle point, is that these sources are not very important because people don’t trust them very much. The bigger problem is when much more trusted sources with a much larger reach, like, I don’t know, the BBC or the New York Times, which are accurate most of the time, are wrong on certain systematic issues. And that’s the bigger issue: they are right most of the time, they have a big reach, they have big trust, but they are wrong sometimes. And that’s the problem.

Dan Williams: But just to focus on that observation of Henry’s, you might say, well, they’re accurate most of the time, but nevertheless you can have a media outlet which is, strictly speaking, accurate with every single news story it reports on, yet because of the ways in which it selects, omits, frames, packages, and contextualizes information, it can end up misinforming audiences, even if every single story it reports is, on its merits, factual and evidence-based.

I mean, I think the way that I understand what’s happening in this broader debate about the prevalence of misinformation is round about 2016 when we had Brexit in the United Kingdom and then the first election of Donald Trump, there was this massive panic about misinformation because many people thought maybe that’s what’s driving a lot of this support for what gets called like right-wing authoritarian populist politics. And around that time when people were thinking of the term misinformation, they were kind of thinking of fake news in the sort of literal sense of that term. So false outright fabricated information presented in the format of news. And as you pointed out, when researchers then looked at the prevalence of that kind of content, which you don’t really find when it comes to establishment news media for the most part, like there are always gonna be exceptions, that stuff is pretty rare.

And then one of the responses to that is to say, okay, if you’re only looking at like outright fake news, then you’re missing all of these other ways in which communication can be misleading by being selective, by omitting relevant context through framing, through kind of subtle ideological biases.

And then my view on that is, well, once you’ve expanded the term to that extent, and you’ve got this really kind of elastic, amorphous definition, it becomes really kind of analytically useless. Like you’re just bundling together so many different things. And that kind of content is also really pervasive in my view, within many of our establishment institutions, including within the social sciences. But Sacha, it sounds like you don’t necessarily want to endorse that last point. You seem to be thinking, even if you do have this kind of very broad definition of misinformation, we can still say that it’s a pretty fringe or pretty rare feature of the information environment. Is that fair? Am I understanding you right? Or is there something different going on?

Sacha Altay: I think I would agree with you that if the simple fact of framing information or having an opinion counts as misinformation, that goes too far. Any scientist, even in the hard sciences, has some theories that they prefer; they are more familiar with certain frameworks, and so they are going to be biased anyway. Scientists are humans, they are biased, but calling physics or the theory of relativity or whatever misinformation because it omits certain facts that it cannot accommodate, I think that’s far-fetched. It goes too far. So yeah, I would agree that if you use this broad definition of misinformation, then it’s very widespread. But then, yeah, even theories in physics would be misinformation because they cannot be completely objective.

I think science works not because individual scientists are perfect, or even because one theory is perfect, but because, as a whole and as an exercise in arguing, we get better and a little bit closer to the truth. But still, we are not getting at the truth, and we cannot avoid the mistakes that you’re pointing to.

Henry Shevlin: If I can just push back a tiny bit: obviously there’s this point that, you know, all theory is value laden, the kind of physics point, which I think is maybe true but not very interesting. But I think there is maybe something in the middle here that is what I worry about, which is cases where there might be quite deliberate pushing of an agenda, a realization by a media provider that they are generating inaccurate views, but doing so just through reporting factual things.

So one example, Dan, that you’ve given before is that a lot of what we think of as misleading anti-vax discussion just reports on true, factually accurate but rare vaccine deaths, and simply reports on them very regularly. In the same way, you might think that selective reporting of certain kinds of violent incidents, whether it’s terrorism or police shootings, leads systematically to the public overestimating the incidence of these kinds of phenomena, or to increased worries about their prevalence, in a way that I think is perhaps worrying and politically objectionable, right? I think we might say, hang on, it is bad that we give so much press coverage to event type X rather than event type Y, and we know that this leads the public to overestimate the prevalence of event type X compared to event type Y. So I think there’s something in between the “well, even physics is biased” view and the view of misinformation as, strictly speaking, lies. This kind of third category. I defer to you both as misinformation experts, but it seems that that is a worrying category.

Sacha Altay: I think you’re totally correct. And that’s what the field of misinformation has been proposing, for instance classifying headlines based not on whether they are true or false, but on whether they will create misperceptions after you have read them. So researchers are saying, for instance, that we should classify as misinformation headlines such as, “a doctor died a week after getting vaccinated and we are investigating the cause.” And I disagree with this; I disagree with the idea that we should classify that as misinformation.

What you were suggesting, Henry, was a bit different: it needs to be systematic. If you systematically misrepresent vaccine side effects, then it becomes problematic. But reporting on vaccine side effects and their possible negative consequences is normal. And I think it’s healthy that news outlets are able to talk about and cover negative effects of vaccines, even if, after reading the headlines, you have more negative opinions about vaccines that are not supported by science, et cetera. They should be able to do that, and they should do that. But if it’s systematic, as you say, I think it becomes more problematic. And I do think that when the bias is very strong, under some of the source-based definitions of misinformation, outlets like Breitbart would be classified as misinformation sources: they are systematically, extremely biased towards, I don’t know, these kinds of things. And so they would be classified as misinformation.

Dan Williams: I think one of the worries that I have, though, is who decides what constitutes systematic bias, and bias about what? I think there’s a real kind of epistemological naivety that I often encounter with misinformation researchers, where it’s like: you’re reporting accurate but unrepresentative events when it comes to vaccines, so we can call that misinformation. And then it’s like, well, as Henry mentioned, what about police killings of unarmed black citizens in the US? There’s a vast amount of media coverage of those sorts of events. Someone might argue that they are, statistically speaking, rare and unrepresentative, and that large segments of the public dramatically overestimate how pervasive those sorts of occurrences are.

And I think you go through many, many examples like that. And for me, the lesson to draw from that is not that, therefore, there are no differences in quality when it comes to the different media outlets in the information environment, like of course there are, but I also think like there’s such a thing as politics and there’s such a thing as science, where you’ve got scientists who attempt to acquire a kind of objective intellectual authority on certain things, and we should be very careful not to kind of blur the distinction between those two things.

I think when we’re talking about media bias in this really expansive way, where we’re not saying, okay, you’re just making shit up, but rather you’re being selective in terms of which aspects of reality you’re choosing to highlight. For me, that’s a really important debate, but it’s a debate that happens within the context of politics and democratic debate, deliberation, and argument. And sometimes I encounter misinformation researchers who treat it as if it’s a simple technocratic, scientific question: like we can quantify the degree to which the New York Times is biased, or objectively evaluate the degree to which different kinds of outlets approximate the objective truth in their systematic coverage. And I get a little bit squirmy when we get to that point, because I think that’s just collapsing the distinction between politics, with all of its messiness and complexity, and science, which I think should aspire to a kind of objectivity that gets lost when we start making these really expansive judgments.

I think we’ll probably circle back on this a few times as we go through this debate. But Sacha, you’re also somebody with very interesting views about not just this question of the kind of prevalence of misinformation, but also about human belief formation and the extent to which, in your view, lots of people, both in popular discourse, but also in academia, kind of overestimate the gullibility of human beings when it comes to exposure to false or misleading content. So do you want to say a little bit about your view concerning human gullibility?

Sacha Altay: Yeah, I just wanted to finish the last point: we are criticizing definitions of misinformation, but in media and communication studies they have been studying media bias, framing, and agenda setting for a long time. There are very old theories of media and of how it can misinform the public in subtle and indirect ways. And all of that has kind of been ignored by misinformation research. But now I feel like misinformation research is catching up and saying, actually, we should go back to these theories. And I think that’s good. I just wanted to point that out.

And regarding gullibility, yes, I think the idea is quite popular that people are gullible, and that large, complex events like Brexit or Donald Trump are caused by people being irrational, or gullible in particular. By gullible, I think what people often mean is that they are too quick to accept communicated information: social information that they see out there in the world, in the news, communicated by others. And I think that the scientific literature shows something very different.

For instance, there is a whole literature on social learning: how people learn from their own experiences, their own beliefs, and what they see, compared to communicated information, social information, advice. And the consensus in this literature is that people underuse social information. They do not overuse it, they underuse it. They would be better off at many kinds of tasks if they listened to and weighed other people’s opinions and beliefs more than their own. So, I mean, it makes sense. Basically, we trust ourselves, our intuitions, our experiences much more than those of others.

And so that’s kind of a consensus. There are many kinds of tasks where, for instance, you ask people, what’s the distance between Paris and London? One participant says 300 kilometers, another says 400. And you’re not going to take other people’s advice into account as much as your own intuition, even though you have no reason to be an expert on these kinds of geographical distances. But you still trust yourself more.

And there are also many like theories and mechanisms that have been shown in political communication and media studies that I think suggest that people put a lot of weight on their own priors and their own attitudes when they evaluate and choose what to consume, which greatly reduces any kind of media effects or any kind of outside information. Like people are not randomly exposed to Fox News. They turn on the TV and they select Fox News. And then people selectively accept or reject the information they like the most. And so I think when you take all that into account, like selective exposure, selective acceptance, and egocentric discounting, it complicates a little bit the claim that humans are gullible.

Dan Williams: Yeah, so there’s this sort of popular picture of human beings as credulously accepting whatever content they stumble across on their TikTok feed. Although when I say human beings, it’s always other human beings, right? This is another point that you make about the third-person effect. Nobody really thinks of themselves as being gullible and easily influenced by false and misleading communication. But when it comes to other people, there’s this intuition that, yeah, people are just being brainwashed en masse by the lies and falsehoods and absurdities uttered by politicians and encountered in their media environment.

And your point is, no, actually, if you look at the empirical research, it doesn’t really support that at all. If anything, people put too much weight on their own kind of intuitions, their own priors, their own experientially grounded beliefs relative to the information that they’re getting from other people. So rather than thinking of many of our sort of epistemic problems as being downstream of gullibility, we should think of in some ways there being the opposite problem of people being too mistrustful, too kind of skeptical of the content that they’re coming across. Is that a fair summary of your perspective?

Sacha Altay: Couldn’t have said it better.

Henry Shevlin: If I can just raise one question here. Reading your brilliant paper (this is a paper with the Knight Columbia School), you go through all these different misconceptions about how easily influenced people are by different sources, by different peers, by the media, by the news. But this does prompt the question: where do people’s beliefs actually come from?

And you mentioned people’s priors, people’s intuitions, but presumably people aren’t born with these intuitions, they are formed from somewhere through certain kinds of processes. So I’m just curious if you have any sort of thoughts on where do people’s views come from? Because obviously that would suggest, well, that’s the place you go then if you want to influence people, you intervene on whatever is causing this fixation.

Sacha Altay: I mean, my views on beliefs mostly come from Dan Sperber and Hugo Mercier, who have these theories on reasoning and the role of beliefs. And so basically, to answer your question, I think a lot of people’s beliefs are downstream of their incentives and the intuitions they have about the world. Take vaccines. Vaccines are profoundly counterintuitive. It’s very difficult, intuitively, to like vaccines: there’s a needle that goes into your arm, there’s a little bit of blood, you think there are some kinds of pathogens inside the vaccine. It’s not something that’s very intuitive. So first I would say that most of the attitudes, not necessarily the beliefs, that people have about vaccines largely come from these very general intuitions they have about contagion, about infections, and about all these things.

And then the beliefs: well, people need beliefs to justify their attitudes. So if your doctor asks, do you want to get vaccinated, and you don’t really want to get vaccinated, you can say you’re scared of needles. But if there are also some widely available cultural justifications, like “vaccines cause autism,” maybe you’re going to jump on that. Maybe you’re not going to jump on it, because maybe you’re smart and you know it’s false, et cetera. But you need justifications. And so I think a lot of people’s beliefs come from this need to find justifications and rationalize the attitudes they have. And I think that’s also why, on many topics, people don’t have that many beliefs, because often people don’t really need to justify many of their attitudes. There’s a lot of work, for instance, in political science on how surveys kind of create beliefs in people, because people have intuitions and vague opinions about all sorts of stuff, but when you ask them, they have to fix on an answer, and in some sense, that creates the beliefs.

So yeah, I would say beliefs mostly come from prior attitudes that people have and incentives that they have to act in the world.

Henry Shevlin: Okay, but just to push a little bit harder there: with the prior beliefs, I think we’re still kicking the can down the road a little bit. Incentives I get. Incentives seem genuine and explanatory here, but presumably it’s not the case that you can predict people’s vaccine attitudes from the degree of phobia they have towards needles, right? Or at least, even if that is predictive, and I don’t know if it is, it seems like there’s more going on there. I think that’s the danger of giving people too much credit: saying, oh, people’s beliefs perfectly track their own incentives. I can totally agree that incentives play a role, but just when we think about our own peer groups, right, I disagree with the political views of a lot of my peers, despite us being in the same socioeconomic class, despite us working in the same industry, despite us having broadly similar interests, I would have thought. So I can see incentives carry us some of the way, but they don’t completely close the mystery here.

Sacha Altay: No, of course, of course. Take the example of vaccines. I think most people who get vaccinated just get vaccinated because they trust institutions, they trust their doctors. Maybe they have seen their doctor for 20 years, their doctor tells them to get vaccinated, and they do it. So the main explanatory factor here is just that they trust some institutions, some experts, who tell them to do something, and they do it.

You wanna jump in, Dan?

Dan Williams: Yeah, I was just going to say, I think it seems like it’s possible to think, and as I understand Sacha, your view, this is your view. It’s possible to think that we overestimate the degree to which people are kind of influenced by whatever content they happen to stumble across in their media environment or the viewpoints that they happen to encounter in their social network—that we tend to think people are too gullible when it comes to those things.

It’s possible to think that, but also to accept that, of course, we are going to be influenced in complex ways by the information we get from people that we trust, from sources that we trust, from our upbringing, from our social reference networks and so on. So the idea that we’re not gullible and not credulous shouldn’t be sort of conflated with the idea that we somehow are born with our entire worldview from the start in ways that aren’t influenced by the media environment and by the testimony that we encounter. Like clearly we’re massively influenced by what we hear from other people, but sort of my understanding of the perspective that you’re outlining Sacha is that process whereby we build up beliefs about the world—firstly, there are some things that just everyone kind of finds natural, like maybe like there’s something weird about vaccines when you hear about the concept, most people just have a kind of instinctive aversion to it, but also things like, you know, my group is good, the other group is bad, or like certain kinds of maybe xenophobic tendencies that come naturally to people and so on. So there are certain ways of viewing the world and certain things which are intuitive, maybe as a consequence of our evolutionary history, and that interacts then in very kind of complex ways with our experiences, with our social identities, with our personality, with the people that we trust, the institutions that we trust, those we mistrust, and so on and so forth. So you can accept all of that and the role of social learning within that whilst also thinking people tend to exaggerate how gullible, how credulous people are when it comes to sort of incidental exposure to communication. Is that your view, Sacha? Is that a kind of accurate representation of it?

Sacha Altay: Yes, yes, yes it is. When we change our minds drastically, it’s usually because we have a lot of reasons to trust the source. If the BBC says that the Queen died, and the BBC says it and the Guardian says it, we’re going to update our beliefs immediately. And most people, even the people who distrust the BBC, are going to update their beliefs right away.

And it’s the same if, I don’t know, my wife tells me that there is no more milk in the fridge and I have to buy some. I’m going to update my beliefs about the milk in the fridge and buy some. So of course we update our beliefs based on the information that’s provided to us. It’s just that we do so in ways that are broadly rational, not in the sense that it’s perfect, but in the sense that it serves our everyday actions and our incentives, what we want to do in the world, very well. So that’s also what I mean: when we do update, we do it quite well, not to discover the truth, but at least to get along in the world.

Dan Williams: And could you maybe say a little bit more about this point concerning selective exposure? So the fact that when people are engaging with media, with the viewpoints of pundits and politicians and so on, a lot of that is, quote unquote, demand driven in the sense that people have strong attitudes, they’ve got strong political, cultural allegiances, they identify with a particular in-group, they want to demonize like those people over there or that kind of institution, et cetera. And it’s these sort of pre-existing attitudes, interests, allegiances, which often build up in complex ways over a long period of time, which then causes people to kind of seek out information and often misinformation, which is consistent with their attitudes and their interests, rather than the picture I think sometimes people have, which is—I think the way Joe Uscinski puts it is, you know, they’re walking along and they slip on a banana peel, you know, they encounter some conspiratorial content on social media and now they believe in QAnon or like Holocaust denial. That’s just not the way that it works. Could you say a little bit more about that concerning like selective exposure and the demand side of misinformation?

Sacha Altay: Yeah, we know, for instance, that for misinformation on social media like Facebook or Twitter, and Twitter in particular has been the most studied, a very small percentage of individuals, 1% or less, account for most of the misinformation that is consumed and shared on these platforms.

And these people are misinformation sharers and consumers not because they have special access to misinformation, because they have a lot of money or whatever, but simply because they have some traits that make them more likely to seek out such content, such as having low trust in institutions and being politically polarized. Because they don’t trust institutions, they are looking for counter-narratives to the mainstream narratives they find on mainstream media. Because the thing is, these people who consume and share most of the misinformation on social media, and who give us the impression that there is a lot of it and that many people believe it, are also exposed to mainstream narratives. It’s just that they decide to reject the mainstream narratives, and instead of trusting what the TV tells them, they go on some Telegram channels, they go on some weird websites, to learn about the world and do their own research.

And this is, I think, some of the strongest evidence, at least in the case of misinformation, that the problem is not on the supply side, because misinformation is actually quite easy to find, quite free, quite accessible. It’s super easy to find misinformation online, but most people consume very little of it. Instead you have a small group of people, very active and very vocal, who consume most of it, and they have low trust in institutions and are highly polarized. I think that matters a lot for how we want to tackle the problem of misinformation. The problem is not that you have a majority of the population that’s kind of gullible, so we should stop them being exposed to misinformation; rather, you have some people who have very strong motivations to do some specific stuff, and I think we should address those motivations, because addressing the supply is impossible. And I’m not against content moderation and such things. I think we should try to be in an information environment where the quality of the information is the highest possible, et cetera. But if people have motivations to look for, pay for, or consume some content, then the supply will be met: people will create such content.
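A minimal simulation of the skew Sacha describes, where a small, highly motivated minority accounts for most misinformation exposure. The distribution and its parameters are invented purely for illustration; the platform studies he refers to measure this directly rather than simulating it.

```python
# Toy sketch of heavy-tailed misinformation exposure across users.
# The lognormal parameters are arbitrary; real studies use measured platform data.
import numpy as np

rng = np.random.default_rng(0)
n_users = 100_000
# Hypothetical "misinformation items seen" per user: most see almost none,
# a handful see a great deal.
exposure = rng.lognormal(mean=0.0, sigma=2.5, size=n_users)

top_users = np.sort(exposure)[::-1][: n_users // 100]  # top 1% of users
share = top_users.sum() / exposure.sum()
print(f"Share of all exposure accounted for by the top 1% of users: {share:.0%}")
```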

Dan Williams: Could we maybe just before we move on to these issues about kind of social media and AI, because I really want to get to those, there’s another point connected to this issue about gullibility where I think there’s this massive kind of gap between common sense, conventional wisdom, and what the empirical research shows, which you’ve written a lot about, which is like the impact of things like political influence campaigns and commercial advertising and so on. So you go into that in your paper on generative AI and why you think there’s been a lot of unfounded alarmism about that, which we’re going to get to shortly. But even separate from the issue concerning AI, could you say something about what the evidence that we have actually shows when it comes to the impact of political and sort of economic advertising campaigns?

Sacha Altay: So political scientists have been studying that for a while because in the US there is so much money that is being spent on political advertising, especially in presidential elections. And so the best studies, they come from political science. And to give you an example, some of them have up to 2 million participants that are being exposed to hundreds or thousands of ads for long periods of time, like months.

And so these are the kinds of studies that are being done in this field: very large samples, long periods of time, et cetera. And the consensus is that political advertising in presidential elections in the US has very, very small effects. The effects are not zero, because of course, with such big sample sizes, long periods of time, et cetera, you do find significant effects, but the effects are very, very small, like fractions of a percentage point. And so that’s the consensus in political science in the US.

It’s a bit specific, because in the US you have Democrats and Republicans, people are socialized into these identities, and these identities are very hard to change. If you’re a Democrat, it’s very hard for you to change and vote Republican. And of course, in the US you often have only two candidates who are very prominent, and people hear about them all the time. So it’s difficult to move the needle. In other elections, in other countries, with multiple parties, you have more room for political advertising to have an effect. But even in these cases, even when it’s lower-stakes campaigns with less well-known candidates, the effects are still quite small. I don’t know why we have this idea that advertising works very well and influences people, but at least when it comes to political voting, it’s just very hard to influence people’s votes. And it’s the same for marketing: online ads, like on social media, are very ineffective; the thing is that they are very cheap as well. So I don’t want to say that they are useless, because they’re actually extremely cheap, which is why these companies do them a lot, but they’re also extremely ineffective. And so that’s the consensus in political science.

Henry Shevlin: So I had a question about this in relation to your paper again. It really paints quite a dismal view of the power of advertising in general. And yet this is like a vast global industry. Is it all just founded on sand? Is it all just smoke and mirrors? Are people basically wasting hundreds of billions of dollars a year on advertising that doesn’t, largely doesn’t work?

Sacha Altay: That’s the opinion of many people, yeah. Many people think that, at the very least, it’s overblown. I don’t want to say that it’s completely useless. Of course, if you want to buy, say, a washing machine, they all look the same, and if they are all about the same price and you have more information about one, and the information is good and the reviews are good, et cetera, you’re probably going to buy that one. But you already wanted to buy the washing machine, and you have a price range, and so on. So at the margin, advertising can work and has an effect. It’s just that the effect is small. They basically calculate the elasticity: when you spend more on advertising, how much more do you sell? And the elasticity is super small. I forget the exact figure, but it’s very small.

But yeah, some people have written books about how the whole internet, and, you know, products on the internet like social media, are free because we are the product and they sell us advertising. And some people think all of that is a bubble, that it’s completely a bubble. I don’t think it’s completely a bubble, but clearly I think it’s overvalued. I think ads are a little bit overvalued. And I don’t think AI is going to change that much.
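To put “the elasticity is super small” in concrete terms, here is a back-of-the-envelope calculation. Sacha says he doesn’t recall the exact estimate, so the elasticity value and the baseline below are hypothetical placeholders, not figures from the literature.

```python
# Back-of-the-envelope: what a small advertising elasticity implies.
# Both numbers are assumptions for illustration only.
elasticity = 0.03            # assumed: +1% ad spend -> +0.03% sales
baseline_units_sold = 1_000_000

ad_spend_increase_pct = 100  # doubling the ad budget
sales_increase_pct = elasticity * ad_spend_increase_pct
extra_units = baseline_units_sold * sales_increase_pct / 100

print(f"Doubling ad spend -> sales up ~{sales_increase_pct:.1f}% "
      f"(~{extra_units:,.0f} extra units on {baseline_units_sold:,} sold)")
```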

Dan Williams: Okay, so just to sort of summarize what we’ve got to so far. So on this question of how prevalent misinformation is, if you’re focusing on fake news, it doesn’t seem to be anywhere near as widespread as many people think it is. Once you start stretching and expanding that definition to encompass more and more things, yes, misinformation so defined is much more widespread and plausibly is much more impactful, but it becomes so kind of amorphous that it’s difficult to apply scientifically.

Then the second thing we talked about was this issue concerning gullibility, where in your view, Sacha, and I agree with you, even though obviously people are influenced by social learning and there is evidence that, you know, persuasion can work, it can influence what people believe, people also tend to dramatically overestimate how gullible people are.

Let’s now turn to technology and where AI is relevant. And let’s start with social media, kind of very broadly construed. Henry, actually, why don’t I bring you in here? Because I think in a few of our previous conversations, you said something like the following, and you can tell me whether I’m remembering correctly. You said, we can contrast two kinds of cases, like video games and social media. In both cases, there was this big societal panic. Video games are going to make people really violent. They’re going to play Call of Duty, and then they’re going to go out and start shooting people in their community.

And your view is, the evidence there is actually incredibly weak and that there’s very little to support that kind of panic. Whereas when it comes to social media, there was a lot of panic, maybe not initially, actually, I think there was a lot of optimism about social media initially. But these days, there’s a lot of kind of concern about social media and how it’s, you know, destroyed democracy and human civilization itself. It’s this awful thing, having all of this sort of awful set of political consequences. And am I right, Henry, in thinking you’re actually quite sympathetic to that view about social media, even though you’re not sympathetic to the violent video games story.

Henry Shevlin: Yeah, yeah, I’m glad you bring up this example. Two things. One is that my main point with that example is about the time course of these worries: with violent video games, we had this massive initial panic that died down as the evidence basically didn’t arrive, as we saw that there wasn’t as much cause for concern as we initially thought. Whereas in the case of social media, there really wasn’t that much concern at first. It was seen as, if anything, a positive technology, and concern has just grown over time. And that point about the time course of the moral panic is separate from the degree to which the worries are robust.

That said, I do, I am more sympathetic to the idea that social media presents an array of worries. So I’m probably more sympathetic than both of you to sort of Jonathan Haidt’s worries about the impact of social media and mobile phones on teenage mental health, which is a separate point from misinformation. I also worry about the role of social media and things like political polarization. Again, at least a little bit distinguishable from misinformation. But yeah, I guess I’m a little, at least a little bit worried about the role of social media and misinformation as well.

Dan Williams: Okay, I’ve got views about this that are difficult to summarize. Let’s stay away from teen mental health, because I think that opens up a whole can of worms, et cetera. Let’s focus on the political impacts of social media, broadly construed. Sacha, my understanding of your view is that you basically think the panic over social media and its political impacts is unfounded and not well supported by evidence. Is that fair? Care to elaborate?

Sacha Altay: Yeah. So I’m just going to start by mentioning the scientific literature and what I think is the best evidence that social media has weaker effects than people think. There have been many Facebook deactivation studies. Basically, you pay some participants to stop using Facebook for a few weeks, and in the control group the participants are comparable, but they are paid either to deactivate for only one day or to do something else.

And in general, what these studies find is that when you stop using Facebook for a few weeks, you become slightly less informed about the news and current events, suggesting that using Facebook regularly helps you slightly know about the world and what’s going on in the news. But it also makes you slightly more sad. So you’re slightly less happy when you use social media. So participants who deactivate social media, especially Facebook for a few weeks, are slightly happier. It’s not exactly clear why. It could also be because they are less exposed to news and news is sad and makes people less happy, etc. So it could be that. And there are also many other studies on Instagram.

And basically what all these studies suggest is that the effect of social media on things like affective polarization, political attitudes, and voting behavior is either extremely small or null. So the effects are very small. But now that I’ve mentioned this literature, I want to mention that there are many critics of it and of these experimental designs. For instance, even the longest RCTs are about two months, and of course two months is very short at the scale of social media, which has been around for years. You could imagine that it takes a few years for the effects of social media to kick in.

You can also imagine that, of course, participants stop using social media for a few months, but the world continues using social media. People around them continue using social media. So you kind of have these network effects that are possible. And of course, the effects of social media are not individual, they are collective. And so these RCTs are kind of missing the point. They cannot capture the collective and more systemic effects that social media could have. So that’s another critique. And there are many other critiques.

But I still think that what these RCTs show is that social media probably has effects. And there are studies, like those in collaboration with Meta, showing that if you switch Facebook or Instagram to a chronological feed, that is, instead of showing users the most engaging content you show them the most recent content, they spend much less time on the platform. The time they spend on the platform drops by about a third.

And it has a lot of effects on in-platform behaviors, but very few effects on off-platform behaviors, on attitudes, et cetera. So we should take these studies with a grain of salt, but I still think they show us that the effects are probably not as big as at least the most alarmist texts suggest.
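For readers unfamiliar with what the Meta collaboration studies actually changed, the difference between the two feeds comes down to the sort key. The sketch below is a deliberately simplified illustration of that design choice; the post data and the engagement score are invented, not Meta’s actual ranking system.

```python
# Simplified contrast between an engagement-ranked feed and a chronological feed.
# Data and scoring are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: float              # when the post was created
    predicted_engagement: float   # platform's estimate of likely engagement

def rank_by_engagement(posts):
    """Default-style feed: most engaging content first."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def rank_chronologically(posts):
    """Experimental feed: most recent content first, engagement ignored."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

posts = [
    Post("Outrage-bait headline", timestamp=100, predicted_engagement=0.9),
    Post("Friend's holiday photos", timestamp=300, predicted_engagement=0.2),
    Post("Local news update", timestamp=200, predicted_engagement=0.4),
]

print([p.text for p in rank_by_engagement(posts)])     # engagement order
print([p.text for p in rank_chronologically(posts)])   # recency order
```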

Dan Williams: Hmm. I think maybe another critique that some people have raised is that these studies, especially that set of Facebook, Instagram studies that you mentioned, were conducted after there had been a lot of adjustments to the platforms and the algorithms in light of concern about things like misinformation and their effect on polarization and so on.

So that just goes to say, as you say, many people have raised lots of different criticisms of what we can really infer from these studies. My own view is that they tell us something, which is that the most simplistic, alarmist stories about social media don’t seem to be supported by the current state of really high-quality empirical research. But I don’t think they provide very strong evidence that should cause someone who goes into this with a really strong prior that social media is having all of these catastrophic consequences to update that much. And that then suggests that how you view this topic is going to be shaped by a lot more than just the empirical research itself. So in your case, I assume that you’ve got these general priors about how media doesn’t have huge effects on people’s attitudes and behaviors, and how these things are shaped by all sorts of complex factors other than media. Am I right in thinking that’s doing a lot of the work in your skeptical assessment, over and above these studies themselves?

Sacha Altay: Yes, but I would say the strongest argument in favor of my position is descriptive data on what people do on social media and how often they encounter political content. Because to be politically polarized, you need to be exposed to political content. And there are more and more descriptive studies, some of them on the whole US population, showing that it’s less than 3% of all the things that people see on social media.

So less than 3% of everything people see on Facebook is political or civic content. And there are also very nice recent studies using a novel methodology, which is basically recording what people see on their phones. A lot of participants download an app, and the app records what is on their screens every two seconds or so. And these studies have shown that in the last US presidential election, for instance, people were exposed to content about Donald Trump for less than three seconds per day. So during the US presidential election, people saw so little political content on their smartphones that it’s ridiculous, and it’s so small that in my opinion it can only have small effects.

Then again, a counterargument could be that this is the average, and they do find that there is a small minority who are exposed to a lot of political information. But then again, who are these people? Again, I think they have attitudes, they have priors, and they have motivations; they are partisans. And yes, misinformation or content on social media can reinforce, exacerbate, or radicalize them a little bit. But for the mass general public, who are generally not that interested in politics, I don’t think it can have very strong effects.
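To put “less than three seconds per day” in perspective, a quick calculation. The daily screen-time figure is an assumed round number for illustration, not something reported in the study.

```python
# Rough scale of ~3 seconds/day of candidate content relative to total phone use.
# The daily screen-time figure is an assumption, not from the study.
candidate_content_seconds_per_day = 3
assumed_phone_use_hours_per_day = 3
phone_use_seconds_per_day = assumed_phone_use_hours_per_day * 3600

share = candidate_content_seconds_per_day / phone_use_seconds_per_day
print(f"~{share:.3%} of assumed daily screen time")  # roughly 0.03%
```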

Dan Williams: Yeah, I just want to double click on that and then I’ll bring Henry in. One other kind of stylized fact, which we should flag, which I think is surprising to some people, is if you’re the kind of person who cares about politics and follows the news carefully, and you read political commentary and so on, you are extremely unrepresentative of the average person. Most people don’t follow politics. They don’t follow current affairs closely at all.

And if you ask people very, very basic questions about politics, they are shockingly uninformed about things, shocking at least relative to the perspective of someone like us who follows politics very, very closely. And that’s another thing which I think people who are highly politically engaged often get wrong when they’re thinking about this topic. If the picture in your head when you’re thinking about social media and politics is that the person who’s constantly posting on X about politics is representative of ordinary people, you’ve got an incredibly skewed, misleading picture.

Okay, there’s tons, I think, more to say here. Henry, did you want to come in with any kind of pushback or any more articulation of your perspective?

Henry Shevlin: Yeah, this is all really interesting and helpful. I guess the only thing I’d say is that it seems to me social media has also just changed the kinds of voices that get platformed in the first place, in a way that’s both positive and negative. Think about things like the rise of Tumblr and its contribution to a lot of so-called woke discourse, particularly in the late 2010s. And we could equally say the same thing about, for example, reactionary or neo-reactionary bloggers like Curtis Yarvin and so forth. I think these are the kinds of voices that probably just wouldn’t have found an outlet in prior media ecosystems. Maybe that doesn’t matter, right, if none of this stuff actually impacts people’s views that much. But it does seem like an interesting shift in our broader political media landscape that social media has changed not just how much time people spend interacting with content or the way in which they do so, but also the kind of content that gets out there in the first place. Does that figure at all in the impact of these things?

Dan Williams: Sacha, before I bring you in, I just want to say one really quick thing about that, which is that the reference to Curtis Yarvin there made me laugh, because I think he’s an example where the overwhelming majority of people won’t be aware of him, but he probably is influential within the kind of ideas and intelligentsia space of the political right. But this idea that social media, and the affordances and incentives of social media, changes which voices become influential and prestigious, I think that’s such an interesting and important point. Henry, I thought you were going in the direction of: someone like Donald Trump can absolutely murder on social media because he’s so good at tapping into the attention economy dynamics there, in a way that he’d be much less successful at if we lived in a Walter Cronkite kind of media environment.

But there’s this other aspect, which is like the decline of elite gatekeeping, which is characteristic of social media and it’s via that route, I think, where people like Curtis Yarvin can enter the conversation in a way in which they probably wouldn’t have been able to if you go back to like the 90s, 2000s. Sorry, I just had to double click and say that. Sacha, did you want to respond to Henry’s point?

Sacha Altay: No, yeah, I agree. I just also want to say we often mention Trump as like the example of like someone we don’t like who benefits from social media, but there are also people who we like who benefit from social media like Barack Obama. Like he used Facebook a lot during his campaign. He’s super charismatic. And if he was president today or if he was running today, he would do great on TikTok. He still does great on TikTok. Like he’s so charismatic, so good. So it doesn’t always benefit the worst actors.

And I want to say, it’s a very important point about how social media may also shape how politicians communicate. There are some studies, for instance, in France on how short-format video platforms like TikTok are changing how members of parliament speak in parliament. There are studies showing that, especially at the extremes, and especially on the extreme right, they are giving more and more speeches with more emotion and more, I don’t know, buzzwords. And the idea is that they then post these on social media, and the more buzzwords, the more emotion, and so on, the more it’s going to go viral. So their goal is not to convince other members of parliament, but instead just to create a buzz on social media and reach some parts of the population.

Then it’s a normative question whether that’s good or bad. Probably using emotion and so on is bad. But you could also imagine that if they were speaking to the general public in a more authentic way and trying to reach them, because a lot of people are not interested in politics, that could also be good. But of course, because it’s the extreme right and so on, we don’t like them, and I think we have good reasons not to like them. But I think we should be careful, and we should also think of ways in which this could be used to do good things. I agree that in general it probably hasn’t done much good, and it’s very hard to quantify.

Dan Williams: Just before we move on to the topic of generative AI, my view is there’s so much uncertainty in this domain when we’re asking these really broad questions like what’s the impact of social media on politics that we can’t really be very confident about any view that we might have. But it does seem like, at least in my view, a lot of the popular discourse and academic research has focused on things like recommendation algorithms and filter bubbles and so on, where I think I’m very close to your view, Sacha, in thinking that there’s just a lot of kind of unfounded alarmism. But there’s this other aspect of social media, which I think probably has been very consequential, which is just its democratizing consequence. The fact that like prior to the emergence of social media, it was a much more elitist media environment. Whereas now, anyone with a phone, a laptop, whatever, can open up a TikTok account, get on X and start posting about their views.

And I don’t think you need to view that through the lens of, well, that means they’re going to start articulating their views and then persuading large numbers of people. But what I think it does is certain views, which were kind of systematically excluded from the media environment before the emergence of social media, can now become much more normalized. And also people can achieve kind of common knowledge that other people share views that used to be much more marginalized and stigmatized. So those sorts of views can end up being more consequential in politics, even though the views themselves aren’t necessarily more widespread.

And I think you find that with things like conspiracy theories. My understanding of the empirical research, again, people like Joe Uscinski, is that the actual number of people who endorse conspiracy theories hasn’t really increased, but they do seem to play a more consequential role within politics because people with really weird conspiratorial views used to be kind of marginalized in media. Whereas now it’s very easy for them just to start expressing those views online, finding people who share similar kinds of views, coordinating with them. And so it can play a bigger role in politics, even though it’s nothing to do with, you know, mass algorithmically mediated brainwashing or anything like that.

Okay, I’m conscious of time and I really want to focus on generative AI. So there was this big panic about how, once we’ve got deepfakes and other features of generative AI, this was going to have really disastrous consequences for elections; it’s going to shift people’s voting intentions in all sorts of dangerous ways. Sacha, you’ve written a paper with Felix Simon, which we’ve already referred to, looking into the evidence on this and presenting a kind of framework for thinking about it. What’s your take?

Sacha Altay: I will start by saying there are three main arguments why people are worried about the effect of generative AI on the information environment. The first one is that generative AI will make it easier to create misinformation and basically to kind of flood the zone with misinformation. The second one is that it will increase the quality of misinformation, better, faster misinformation. And the last one is personalization.

Generative AI will facilitate the personalization of misinformation. I think these are the three main ones, and I can go quickly over them and argue why I don’t think they are a big deal. So, about quantity: I think that quantity does not really matter. There’s already so much information online, and we are exposed to a very tiny fraction of it. So adding more content does not necessarily mean that people will be more exposed to it. And I think it’s particularly true in the case of misinformation, where demand plays a very important role. It’s not because there is more misinformation that people will necessarily consume more misinformation, just as it’s not because you have more umbrellas in your store that people will buy more umbrellas. There need to be other factors, like, I don’t know, rain. If it rains more, you will sell more umbrellas. So there need to be incentives for people to demand more misinformation, to consume more of it. And that’s why I don’t think the quantity argument is very strong.

I also think the costs of producing misinformation are already extremely low. We see it with Donald Trump or whoever: they just say something that is false, they say it with confidence, and that’s it. The costs are very low. Also, we are a very imaginative species; humans have come up with incredible, fascinating, engaging stories. Of course, AI can enhance those abilities, but we are already very good at making up stories that make us look good, that make our group look good. So I don’t think generative AI is going to help that much in creating more misinformation. Regarding quality, yeah...

Dan Williams: Just to interrupt you so that we can take these step by step. So the first worry concerns generative AI both in the form of large language models producing text, but also deepfakes; I take it you’re including both of those categories. The worry is, well, this is going to dramatically reduce the costs of producing misinformation. Therefore you’ll get an explosion in the quantity of misinformation, and that’s going to produce all sorts of negative consequences. And your view is, well, the bottleneck that matters isn’t really quantity anyway; it’s what people are paying attention to. So you can increase the amount of misinformation as much as you want, and in and of itself, that’s unlikely to have a big impact on people’s attitudes and behaviors. Do you have any thoughts about that, Henry, before we move on?

Henry Shevlin: I guess one concern would be that even though media environments are flooded with content already, and I completely agree attention is the scarce commodity, maybe you could think of generative media as allowing very niche areas to get flooded with content in a way that wouldn’t have been easy before. Here’s a silly little example, maybe an interesting example from recent media. Some of you may have seen the anti-radicalization game that was launched in the UK about two weeks ago, which featured a character called Amelia, a purple-haired anti-immigration activist in this fictional game, who was quickly seized upon by a lot of the anti-immigration right in the UK. And now there’s a flood of AI-generated content all about Amelia, mostly making her look really cool, some of it kind of playful, some of it kind of silly. But the point is this was just a niche news story that I think people found amusing, and I think it would have died a lot quicker had it not been for the ability of people to seize upon it and generate huge swathes of content about Amelia in a very short time. So maybe there was just pre-existing demand there, but it would have been demand that would have been hard to meet without generative AI tools to create the content, which maybe is a difference.

Sacha Altay: Yeah, no, I mean, that’s possible. But when you look at the memes on the internet, most of them are very cheap. It’s just an image with some text, and you just change the text a little bit. We’re probably going to get into that, but it’s the same with deepfakes: cheapfakes are much more popular than deepfakes because they are super easy to do. You just change the date or the location of something and boom, you have your cheapfake. And that’s why they are super popular. Yeah, I don’t know, anyway.

Dan Williams: What’s the definition of a cheapfake, Sacha?

Sacha Altay: A cheapfake is just a low-tech manipulation of information. You have an image and you change its date, or you change its stated location. That’s in opposition to deepfakes, which are high-tech, for instance a completely generated image, and usually more sophisticated. Cheapfakes, by contrast, are very cheap: something most people can do on their computer without requiring any tech skills, basically.

Dan Williams: Sorry, I think I cut you off. I just wanted to give some clarity to people who weren’t familiar with that. Okay, so that’s quantity. And the next thing that you mentioned as a worry that many people have is quality. That is, generative AI won’t just enable us to increase the amount of misinformation but also increase its quality. And initially at least you’re understanding quality as being different from personalization; you’re treating that separately, is that right? Okay, so the concern here is surely just that, okay, quantity in and of itself isn’t gonna make a difference. But once we’ve got the capacity to generate incredibly persuasive text-based arguments and deepfakes, even granting that cheapfakes exist and can be influential in different contexts, surely the quality of the misinformation must make a big difference to how many people get persuaded by it.

Sacha Altay: Yeah, I think quality is perhaps the most intuitive argument, because it’s the idea that you’re going to be able to create images or videos that are indistinguishable from real ones. And so of course people wonder, how am I going to trust images or videos anymore if they are indistinguishable from real ones? I think that’s a very fundamental fear that people have, and it makes a lot of sense. It’s very intuitive. But I don’t find it very convincing.

I think it raises a lot of challenges, but I don’t think it raises enough challenges to be alarming. For instance, we have had this challenge before with photography. Since the beginning of photography, we have been able to manipulate photographs in ways that cannot be distinguished from real ones. And how did we solve this problem? Not with technical tools, but with social norms about not using images to mislead others.

And we have been able to create fake texts, or say false stuff, forever, and we haven’t solved that problem with some fancy tech innovation but simply by having rules, reputation, and social norms, and by trusting people more or less based on what they have said before, on what our friends think of them, and on their past accuracy. I think we will still be able to use all of this to help us navigate an environment in which videos could be AI-generated or could be real.

And something I’ve mentioned before, but which is quite fundamental, is that, for instance, we trust the BBC or the New York Times to be broadly accurate most of the time. We also trust them not to use AI in misleading ways and not to share deepfake footage of presidential candidates that misleads us. And I think this trust, and the institutions that exist, are sufficient to prevent most of the harm from this.

I think this will have effects. For instance, maybe we will be less able to trust people and sources that we don’t know, because if we don’t have their track records, how can we trust that the information they are sharing is true or false, AI-generated or not? But I think that’s a very old problem and we will manage. It will make things more complex, but I think we’ll manage. Yeah, Henry?

Henry Shevlin: I was going to say though, isn’t there a worry that new technology creates normative gaps that allow for a kind of recalibration of norms? I’m thinking about something like file sharing, for example. I’m of the Napster generation, the generation where it suddenly became possible to download music for free. And this created a whole shift in norms where, for my generation at least, this form of theft was basically just completely normalized. Hence we had advertising campaigns along the lines of “you wouldn’t steal a car, so why would you download a song or a movie?” And basically pirating went from something that was niche and maybe frowned upon to something that was just completely normalized.

In the same way, I think you might worry that the ease and ubiquity of generative AI is gonna shift our norms around creating fake content. And arguably we’re already seeing this. Just very recently we had the White House itself retweeting pictures, I think of a protester at an anti-ICE rally, where they had manipulated the image, right? And I think if called out on that, they’d probably say, yeah, sure, of course we play around with images, that’s what generative AI can do, that’s just the way things work these days. Which does seem like a normative shift perhaps, one partly occasioned by technology.

Sacha Altay: My intuition is quite the opposite: if anything, these new challenges from AI will instead strengthen the epistemic norms that we have. Because we want to know the truth; we don’t want to be biased, we don’t want to be misled, we don’t want to be misinformed. And so the fact that the challenge is becoming harder, that it’s going to become harder to know whether a video is authentic or not, is going to make us harder and harsher on people who do as the White House did, where, whether or not it was them who manipulated it, they shared manipulated images that do not portray her accurately. I think people are going to be angry at that, and I think it’s just going to raise the level of people’s expectations. They’re going to expect news outlets and people to be better. I mean, it’s just a prediction. I hope I’m right. I’m an optimist, but...

Dan Williams: So can we connect that to the worry many people have about the liar’s dividend? The idea that, once we’ve got deepfakes, I mean, we’ve currently got technology to create hyper-realistic audio and video recordings, which are basically indistinguishable from reality. There’s the kind of initial worry many people have, which is, my God, people are going to become persuaded en masse that this stuff is true. And I think that’s very unsophisticated as a worry.

But then there’s another story people have which is, okay, maybe it won’t persuade people, but now that you’ve got the capacity to create these deepfakes, politicians, elites, other people who do shady things, they can use the possibility of something being a deepfake to just dismiss any kind of recording which is raised against them as evidence of them doing something shady.

And I guess connected to that as well, there’s the worry people have, which is that just as consumers of content, now if we encounter any kind of audio or video which goes against what we want to believe, we can just say, well, it’s a deepfake. I don’t have to believe it. So we’re just gonna end up becoming like more and more cocooned within our own belief system, not having this access to learn about the world via recordings. So it’s a kind of liar’s dividend worry and this general worry that this is just going to just obliterate the kind of epistemic value, the informational value of recordings. What’s your thought about those kinds of worries?

Sacha Altay: First, I think the liar’s dividend does not hinge on AI itself, but rather on the willingness of politicians and some elites in particular to lie and to evade accountability and responsibility. AI will certainly be a new weapon in the arsenal; we saw it in the 2024 elections, and many politicians and elites have used AI to their benefit and continue to do so. So for sure, it’s something we should worry about and should regulate, et cetera. But will it be a particularly good weapon in the arsenal? Will it be a game changer? I’m not sure. Time will tell. So far, I don’t think it has been used in particularly effective ways, and I don’t think people particularly buy it. When people share something and then say, no, it was just AI, trying to use AI as the excuse, I don’t think it works very well. And I think there are going to be reputational costs for people who try to do that; we are going to remember that they tried. So I don’t know. Again, time will tell. It’s an empirical question. I may be wrong. Yeah, Henry.

Henry Shevlin: I was just going to chime in. I’m sure I’m not alone in having seen, on Facebook in particular, lots of cases of AI-generated media being mistaken for the real thing. I don’t want to pick on boomers too much, but it is often boomers who completely seem to buy it. You might have seen these examples of people breaking glass bridges, these videos that went viral, and lots of people, particularly, I’d say, older users, completely seem to believe it’s a real video they’re seeing.

But I guess there are two responses you might push back with, Sacha. One would be, well, we’re just in a transitional period, right? This is so new to a lot of people, seeing this kind of content for the first time, that they just aren’t aware yet that it’s possible, and they’ll adjust over time. Another would be to say, look, maybe if I’m producing a cute image of, I don’t know, a rabbit, or an image of someone breaking a bridge, or something non-political, it’s easier to convince people that it’s real than it would be to, for example, change their political views. So, I mean, are either or both of those responses things you’d want to go with?

Sacha Altay: Yeah, my impression is that if a rando shares a video of Macron doing something crazy, people are not going to believe it. They are going to wait for France Info and the real media to cover it. Because if, I don’t know, Macron is saying we are starting a new war with some country, people are not going to believe it, even if it’s very high quality, because they know that if it really happened, all the media would cover it. So I think in these high-stakes cases of a politician saying something absolutely crazy, people are going to be vigilant and are going to wait for the mainstream media to cover it.

I think much of the AI slop that we see a lot on Facebook, but also on TikTok, is humorous. There is a portion of boomers, but not just boomers, who want to be entertained, and for entertainment they don’t really care whether it’s true or not, whether it’s authentic or not. You can create extremely cute images of little animals doing cute stuff, and you get what you wanted: this super stimulus, super cute, super entertaining, super engaging. Personally, I do care whether it’s authentic or not, and I don’t understand how people don’t, but at the same time it’s brain candy that people get, and I don’t see why it’s wrong.

And I just want to point out that we, as elites, have always looked down on the content that the mass population consumes. Now we look down on short video formats on TikTok, but we have always looked down on their entertainment practices, saying that it makes them stupid, et cetera. So I think we should be careful about that, careful about saying that kids are stupid because they are on TikTok watching short-format video or whatever. And I think we are falling a bit into that with the AI slop. But the TikTok AI slop is very different from the Facebook AI slop. The TikTok AI slop is very weird and absurd, and I think it works because it is extremely weird and absurd. There is something weird about it, and people are playing with it; they are playing with the fact that it’s AI and that you can do extremely weird stuff. That’s very different from the AI slop on Facebook that works, I think, among older populations.

Henry Shevlin: Since we’re discussing TikTok, just a quick point, a worry that’s been lurking in the back of my mind: it seems to me most of the research focuses on adults, and yet a lot of the worries about both social media misinformation and generative AI misinformation concern teenagers and young people. And I’m curious, A, how much specifically targeted research there is looking at that group. And B, I think there probably are some good prior reasons for worrying about that group more than others, firstly because in the teenage years our political beliefs are less likely to have stabilised, and secondly because it is obviously an important window for the formation of political identities in the first place. So even if the worries about social media and generative AI misinformation are overblown for adults, could there be more to worry about in the case of teenagers?

Sacha Altay: No, that’s very possible. That’s a point that has at least been made for social media and mental health: very few studies have looked at adolescents or young adolescents, and that’s probably the group that could be the most susceptible to these effects. So that’s a totally fair point. Regarding generative AI, I also think we should acknowledge that they are probably much better at using the technology and recognizing it, whether it is ChatGPT, DALL-E, or all the other AI technology. I think they are much better.

And that’s why the AI slop I see on TikTok is very meta, second degree, third degree. Whereas on Facebook, it’s just first degree: look, I did this amazing thing, oh look, this cute baby. So I think it’s very different. To be honest, I’m not so worried about teens and generative AI on TikTok. Regarding mental health, I don’t know, and we need more data, but it’s a very fair point.

Dan Williams: Just on this point about quality: we’ve been talking about deepfakes, but there’s this other aspect of generative AI, which is producing tailored text-based content. And there has been a flurry of empirical research, so I’m thinking of Tom Costello’s work on chatbots and conspiracy theories and so on, and work by people like Ben Tappin, showing that LLMs can be pretty persuasive with the content they produce, partly because they’re just very good at recruiting evidence and persuasive rational arguments tailored to people’s specific pre-existing beliefs and informational situation. What’s your feeling about the impact of generative AI there? Because presumably it’s a very different conversation from the one about deepfakes. And it does seem to me, at least, that you might argue generative AI is going to disproportionately benefit people with bad, misinformed views, because that’s often where you’re lacking human capital, right? You don’t have on-tap access to the sophisticated intellectual skills of the intelligentsia when it comes to a lot of this kind of lowbrow misinformation. So if they can now access generative AI, at least if it’s not subject to various sorts of safety and ethical requirements, which might happen down the line, isn’t there a real risk that that’s going to asymmetrically benefit people pushing out misinformed, conspiratorial narratives?

Sacha Altay: It’s good you mentioned these studies, because they find very large effect sizes on important topics like politics. But all the authors acknowledge that these effects are estimated in experimental settings, and it’s unclear how they would translate outside of experimental settings, where LLMs are not going to be prompted to convince participants or users to believe something.

So first, LLMs are not going to be prompted to do that. Second, people are not going to be paid to pay attention and use the LLM in that way. That’s also why Ben Tappin has this piece arguing that for mass persuasion, attention matters more: are people actually going to do that, are people actually going to be exposed to it, rather than how persuasive it is? And that’s why I’m not so worried.

And it’s important that you mentioned the symmetry or asymmetry, because I don’t see any good reason why bad actors would be more successful at using generative AI to mislead than good actors would be at using generative AI to inform and make society better or citizens more informed, et cetera. In general, good actors have more money and more trust. In France, if the French government releases an AI or whatever to inform people, it’s going to be more successful than if the Russian government does. So in many ways, I think good actors have the advantage, but they need to take it seriously. They need to act, and they need to proactively use these tools for democracy and for the better. They should not wait for the bad actors to attack and then defend; they should already be using these tools in the best possible ways to improve society.

Dan Williams: Yeah, my thought concerning asymmetry was just take something like Holocaust denial, right? I think to a first approximation, everyone who believes in Holocaust denial is like stupid for the most part. And if you give them access to highly intelligent generative AI tools, well, they’re gonna be able to use the kind of on-tap intelligence to rationalize that false perspective. Whereas when it comes to the truth, namely that the Holocaust actually happened, we can use generative AI maybe to improve the persuasiveness of the arguments that we’re going to generate, but we’ve already got extremely persuasive evidence and arguments, right? Because that’s where all of the intellectual research and so on exists.

In any case, again, I’m conscious of time. Could we end with this point about personalization? I still meet people who think that Brexit was due to Cambridge Analytica and micro-targeting and things like this. It’s a very common belief people have: that once you start targeting and personalizing messages, you can have a really huge impact on what people believe. And one of the consequences of AI, very broadly construed, is that it’s gonna greatly enhance the personalization of persuasive messages. So what’s your take on that?

Sacha Altay: Maybe the best evidence is actually the papers by Ben Tappin, Tom Costello and others, who have actually measured what matters more: whether the arguments generated by the LLMs are targeted to users based on their political identity, et cetera, or whether they present more facts and higher-quality facts. And in general, what they find is that what matters is facts. The more you provide people with facts and good arguments, the more they change their minds, and personalization matters very little.

And in political science, there’s a whole literature showing mostly the same thing: of course you need some targeting, for instance based on language, so some basic level of targeting is needed, but micro-targeting based on political preferences, values, et cetera, is broadly ineffective, especially compared to just making the most convincing arguments you can.

I think there is also a whole literature in communication showing that people highly dislike targeted messages when they are very targeted: when people feel a message is very targeted at them, they recognize it and they dislike it. Yeah, the Cambridge Analytica thing is basically just a scam. I still don’t know why people believe it that much. It’s just a company selling influence. They said they influenced major elections, and all of a sudden people believed it. People have priors about other people being gullible and being swayed by social media, so when a company says it sways people on social media, people are receptive to that. They’re not being gullible; it just fits their priors. But no, there is very little evidence that Cambridge Analytica affected Brexit or the 2016 US presidential election. And it’s better to present people with good arguments and facts than to micro-target them.

Henry Shevlin: If I can squeeze just another angle into the personalization discussion, something you talk about in the paper is relational factors, which is sort of related to personalization, but a bit distinct. And I’m curious about whether you think AI could play a role there. We’ve talked on the show previously about social AI and the idea that young people in particular might be forming deeper and more profound relationships with AI systems or AI friends, companions, lovers, which then potentially could be leveraged for changing their views.

And it seems to me just intuitively that these kind of relations, whether they’re sort of direct relations or more like parasocial relations, can be really influential if we think about, for example, something like Logan Paul’s Prime Energy Drinks. You know, this was an influencer who promoted his own brand of energy drinks that then became a massive sensation, hundreds of millions of dollars, if not billions of dollars in sales over a very short period of time. So it seems like these relationships can be powerful. Is that not a worry that AI could leverage them?

Sacha Altay: To be honest, it’s a very hard question, and I’m asked it all the time. I think the best counter-argument I have at the moment is just that there is very little evidence that people change their minds because of their life partner, the person they trust the most, sleep with, et cetera. There is very little change of mind. And when there is, it’s hard to know whether it’s because the incentives are getting more aligned: they get married, so they are sharing their money, they are buying a house together, they live in the same place, et cetera. Of course, when the incentives are getting closer, you could imagine their beliefs are getting closer too. But basically, attitude change is very small even with your life partner.

And I imagine that if my wife, whom I trust a lot and love, tells me GMOs are bad or nuclear energy is bad, why would she convince me? I trust her a lot on many things, but I’m not completely blind to her. So how would ChatGPT beat my wife at this? I don’t see it. But to be honest, it’s just my opinion; let’s see how it goes, but I don’t find it very convincing.

Dan Williams: I can confirm that my girlfriend would very much like to influence my political attitudes, but is not having much success as of yet. Okay, one thing we didn’t do actually is you’ve given us your kind of analysis and your belief, Sacha, about the impact or lack of impact of generative AI. But we should mention there were all of these sort of alarmist forecasts about the impact of generative AI and deepfakes on the kind of 2024 election cycle.

And one of the things that you do in your paper is you don’t just go through each individual worry, but you actually kind of survey what the empirical research that we have says. So briefly, what does the research that we have actually say about the impact of generative AI on that election cycle?

Sacha Altay: To be honest, it’s not a systematic review, so it’s not super reliable; I just went over what happened in these elections. Basically, in most countries, the consensus is that there have been some problems with elections, but that they are old problems, such as politicians lying, basically politicians doing bad stuff. And generative AI has been used a lot to illustrate what politicians want to say. Often they want to say that they are strong and that their opponent is weak or stupid, so they have been using generative AI to do that in the US, in Argentina, in many countries. They have used generative AI a lot to do a kind of soft propaganda, portraying themselves and their group as good and the others as bad.

In some countries, apparently, generative AI has been used to do some good stuff, like in India, where there are many languages and where translation is often a problem and takes time. Apparently, generative AI has been used a lot to translate political campaigns into all the languages and dialects that exist in India. So I think it’s very varied and not as catastrophic, let’s say, as the alarmist takes suggest. But it’s just suggestive evidence, and of course it’s just the beginning of generative AI, so we should see how it will be used in future elections. But we should not forget that it can be used to do good stuff; it’s not necessarily being used to do bad stuff. You can use it to translate, and even when illustrating, you can use it for faithful illustration. You don’t need to portray yourself as super strong and the opponent as bad; you can do, I don’t know, some good or artistic stuff.

Dan Williams: Yeah, we didn’t really talk about the positive side of generative AI very much in this conversation. But my view is, at the moment at least, the boring truth about large language models is that they’re basically just improving people’s access to evidence-based, factual information. And I think if you compare the one-shot answer you get from ChatGPT or Claude or Gemini on any political issue to what you get from the average voter or pundit or politician, it’s just of much higher quality. But I think that truth doesn’t really get the attention it deserves, because it’s sort of boring for the most part and it doesn’t fit into these threat narratives. And it’s kind of counterintuitive, because why would it be that these profit-seeking companies that everyone despises have had a really beneficial consequence on the information environment? But that is in fact what I think the case is.

Sacha Altay: You’re totally right, because another concern I haven’t mentioned is hallucinations: individual users using LLMs on their own and being misled, because LLMs confidently say stuff that is false. But as you say, I think it depends: compared to what? How often do they hallucinate, and how correct are they compared to alternative sources of information like other human beings, social media, or TV?

And I think they would actually do pretty well compared to most of these other sources, and that’s why I’m not so worried. The confidence thing is a bit annoying, but I think most people who use AI regularly know that sometimes it completely hallucinates and goes completely awry. Most people who use it often know that, and that’s why I’m not so worried. Again, it would be better if LLMs did not hallucinate and were perfect, but that’s setting the bar a bit high.

Dan Williams: Okay. Okay, fantastic. Well, thank you, Sacha. We’re going to have to bring you back on at some point because I feel like we’ve just barely scratched the surface with many of these issues. Was there anything that we didn’t ask you that you wished we had asked you?

Sacha Altay: No. I mean, as you said, many things to talk about.

Dan Williams: Okay, fantastic. Well, thanks, Sacha, and we’ll see everyone next time.
