17 Comments

I don’t understand how to reconcile what you’re saying with the fact that maybe a third of the American electorate believes that the 2020 election was stolen. This was (and is) massively consequential. It clearly wasn’t a preexisting belief. To get people to believe it, it was necessary and sufficient for the Trump information ecosystem to tell them to believe it. Is the claim that even without social media, traditional media would’ve disseminated the message equally effectively? How do we know such a counterfactual?

Jan 25 · Liked by Dan Williams

I agree that all four points ("facts") are valid but must disagree with the conclusions you draw from them.

1. "Online disinformation does not lie at the root of modern political problems"

This is a strawman. The assumption that AI only matters in online discourse is too strong a caveat, and what counts as "online" is expanding anyway as streaming grows as a distribution medium. The real question is: does the introduction of AI into the spheres of political discourse, online and not, pose new and dangerous hazards for collective epistemic health? The roots of political problems may indeed lie in conflicting interests, cognitive foibles, bad-faith actors, etc. But that doesn't mean AI won't be an especially destabilizing factor and an amplifier of incoherence.

2. "Political persuasion is extremely difficult."

The danger is not that some candidate's AI-generated ad will out-persuade some other candidate's ad. The danger is that foundational belief systems are susceptible to corrosion and distortion as the overall information landscape becomes polluted with manipulative content at scale. An example is the normalization of manifestly outrageous ideas through repeated exposure and perceived acceptance by peers and thought leaders. It's not that such ideas should be excluded from discourse, but that the epistemic immune system that would counter them becomes disabled. Once registered, a volley of mental impressions is irreversible, even if, in a different mental frame, some may later be acknowledged as questionable.

3. "The Media Environment is highly demand driven"

This is an important crux. What matters most is that it's a system: demand is shaped by supply. If AI amplifies bad actors' ability to flood the zone with crap, then the content of that crap forms the backdrop for what people will be talking about at the diner. Which in turn shapes which cable channels, radio talkers, marketing messages, podcasts, and streaming videos they tune in to.

4. "The establishment will have access to more powerful forms of AI than counter-establishment sources."

The establishment plays by different rules. As you point out, the establishment has an interest in credibility and veracity. Other actors don't. Simple confusion and disillusionment are sufficient outcomes to achieve nefarious ends. A corrupt lie travels around the world while the truth is still tying its shoelaces. The empirical fact is, con men often win. "Moral panic" is prudent in the face of any new devices they may come to master before the establishment gets its act together and figures out new defenses.

Jan 25 · edited Jan 25 · Liked by Dan Williams

One metaphor I find useful is to think of misinformation (or misleading information) as the currency in which bad beliefs are traded back and forth, or the package it comes in, but not primarily the *source or reason* for those beliefs. So it's kind of like Aristotle's material cause (or maybe even formal cause?), as opposed to the efficient cause that would count as the "real problem" to target. Meanwhile, the final cause goes in the opposite direction (misinfo functions in the service of identity, values, signaling, motivated reasoning). AI can make the currency more plentiful, like when the Fed pumps tons of fake money into the economy. But there is only so much economic activity and so much demand, after a point. I think AI could do some epistemic harm via information in the short term when this is still novel and unpredictable, but over time as we adjust expectations and get more experienced, it will even out.

I guess what concerns me is not that AI would trick or sway lots more people, but rather that a flood of AI-generated BS combined with panic over AI will gradually lead us to cynically distrust everything and/or grow less responsive to genuine info even when it's important, either as overcompensation or out of sheer exhaustion. So, the bar gets lowered and we start tuning out (or retreat to the private sphere).


I think people tend to associate evil or unethical things with effectiveness, so they might be imagining AI-generated misinformation as more effective than AI-generated political content in general, but it's not at all obvious why that would be the case (it could also just be a "feminist bank teller"-style conjunction fallacy). Similarly, I think with microtargeting people are reacting to what they imagine might be possible rather than to what kinds of effects can actually be achieved in the world.

Jan 24 · edited Jan 24 · Liked by Dan Williams

While I appreciate the instinct that institutions will be able to outmatch renegades, establishment narratives aren't always the most saleable, and people sometimes judge whether a news outlet is trustworthy by how well it conforms to their worldview (especially true in the US recently). And isn't it imaginable that we encounter an asymmetry where the content-generating tech vastly outclasses the content-policing tech?

Jan 24 · Liked by Dan Williams

It's really easy to use AI for what is already really effective: spreading division. Russia is expert at this; it started researching the subject before the Internet even existed.

We already have populist leaders coming to power everywhere, the types who gain power when people are divided and angry.

Hedge funds, on the other hand, only need to spread disinformation about one company, or possibly even one company's leader, to make millions.

Everything in this leads toward a more centrally led, fascist-like government apparatus. So I kind of agree on one thing: the government will have much power. The downside is that only bad governments will use that power really effectively.

Our only hope is to spread awareness, and this article is not helping, sorry.

Jan 24 · Liked by Dan Williams

While I appreciate the time and thought put into this essay, it doesn't hold up when novel situations, or simply variations, arise. For example: when a once-in-a-lifetime (or merely the first) pandemic kicks off, every person on the planet is at the same starting line. Who knew, back then, that billions of people were susceptible to believing complete nonsense, and were also indifferent to their own health and the health and well-being of their families and communities? Back in the days when we were all tethered to landlines, who knew that there were a whole lot of people out there so afraid of new technology that they probably already had their tinfoil hats in preparation?

Changing minds is hard, but not impossible. Making minds, on the other hand, is up for grabs. Sometimes the best liars win.

I don't think AI will be better at lies, and the world is already full of people who are very skilled at dreaming up and distributing them. One isn't worse or better than the other. People are gullible and apparently easily manipulated. AI merely offers some people a more efficient tool for increasing the velocity of lies.
