50 Comments
Laura Creighton

Whether your vision comes to pass depends on whether people decide that they can trust the answers they get from AIs. We already know that we cannot trust the elites to police their own content. Scientific studies do not replicate, preference falsification rules, and in recent memory cancel culture came, not for the liars, but for those questioning the lies or unwilling to lie enough. Having the correct credential became more important than being correct. In Yeats' words, "The best lack all conviction, while the worst are full of passionate intensity." How do we regain the truth in a world full of highbrow and lowbrow liars, as well as those who are sincerely wrong about things, all flooding us with untruths?

One idea: build an agent and then keep track of trustworthiness. If we made it impossible to have prestige without being trustworthy, we would live in a much brighter and more hopeful epistemological world. See: https://deepcode.substack.com/p/the-coming-great-transition-v-20?utm_source=share&utm_medium=android&r=8o0zz

Kenny Easwaran

> We already know that we cannot trust the elites to police their own content.

I think all you are showing here is that we cannot trust elites to produce a system that gets things right on the first pass. None of the things you cite show that elites don't still converge on the truth more accurately and faster than every other epistemic system ever developed, just that they don't get things 100% right in the first publication or the immediate responses to it.

Laura Creighton

They could if they would make the effort.

Ali Afroz

You’re almost certainly right about LLMs as they currently exist, but I do think you’re perhaps showing too much confidence in making such predictions about a technology that could develop in many possible ways. For example, you presume that eventually, of course, people will be able to train an AI which does not share a centre-left orientation. But it seems entirely possible that the mechanism causing the centre-left orientation is the same mechanism that causes LLMs to share expert opinion, namely that most experts are centre-left. So you should not be this confident that the ability to train ideologically diverse AI would not include the ability to train AI with populist opinions, in the sense of opinions that are popular among non-experts.

You write that Musk can pump a lot of nonsense for the consumption of his fan base but an AI company cannot, yet that seems self-refuting: Musk demonstrates that there is a market for such nonsense. Although I will grant you, it’s a niche market. However, you could of course argue that there isn’t a large enough fan base, but the thing is that, for example, 3/4 of the American public thinks the JFK assassination was the product of some conspiracy. So the market for false information is in fact large, which is understandable given the difficulty of discovering the truth in the modern world and the limited amount of attention people spend on it.

Part of why I am much more concerned about this than you are is that I just think the gap between expert opinion and the opinion of the masses is too huge for this to be sustainable. For example, even DeepSeek, trained in China, will insist that corporal punishment of children is unacceptable and should not be done, when even the American public supports it by a comfortable majority. Given the world population and how conservative and alien many cultures can be, I find it unbelievable that we should not place some significant probability on the possibility that this will provide a large market for an AI with opinions that are contrary to experts. I am not saying this will happen; like I said in the beginning, I’m just very unsure of how the technology will develop. I just think this is one possibility.

Another possibility is that AI agents become much more sophisticated and act like directed agents that can, like other agents, do intentional propaganda, at which point you apparently have a huge army of super-persuasive agents running around. My point isn’t that you are not sketching out a possible scenario. It’s just that the technology is in its infancy, so I think you’re being too confident here and should consider the possibility of AI causing a more technocratic information environment to be just one scenario, instead of giving it overwhelming probability.

I do agree regarding the effects of LLMs as they exist right now. I just think we can’t be too confident that they’ll be in a similar place in, say, five years.

Liam Riley

Child corporal punishment isn't the best example of the difference in expert and public opinion, as the US is a real outlier among developed nations in its level of support (47% in a 2021 poll). Countries as varied as Japan, South Africa, Peru, Bulgaria, Scotland, and New Zealand have outlawed it. https://www.researchgate.net/profile/Ashley-Stewart-Tufescu-2/publication/373222487_Corporal_Punishment_The_Global_Picture/links/65de010ec3b52a1170fc1c83/Corporal-Punishment-The-Global-Picture.pdf

In that way, this view is aligned with the majority of the public in developed nations. LLMs are often not aligned with the views of the public of lower income countries, especially on cultural issues. This is one example where the American public is more aligned with lower income country views than it may realise.

Ali Afroz

Compared to, say, Europe, the American government is much less responsive to elite opinion when it conflicts with popular opinion, so I don’t think you can be as confident that the fact that these countries have outlawed it means it isn’t considered acceptable by many. However, my main point was that even something trained in China is refusing to talk about this, when China definitely permits it and finds it acceptable, and there are many other practices where the elites are widely out of sync with public opinion. In any case, the majority of the people in the world don’t live in the developed world, although it’s also true that the developed world has richer consumers willing to pay more.

However, after checking with Grok, I think you are correct that popular opinion in many countries in Europe is in fact as you describe, and I was wrong to generalise from the United States, although many American surveys show higher support than the one you referenced, something like 60 to 75% approval. But my main point was that there is a huge market out there of people, especially in alien and conservative cultures, where popular opinion is very much out of sync with what current LLMs deem acceptable, even when these countries would provide a substantial market. Even in the West there is often a substantial market, and on issues like the JFK assassination, public opinion clearly disagrees with what an AI would tell you. After all, even if, say, 30 to 50% of the population disagrees with an AI, that’s still a huge market.

Arnold Kling

To me, this sets up a conflict between AI and the people who have come to depend on their social media bubbles for validation. I am not sure that the end result will be what you predict. It could instead be people coming at AI with pitchforks, tar, and feathers.

Doug Bates

What about the cases of conflicting expert-level information? I'm witnessing an example of this right now in my small town. There's an action that a "populist" movement wants the town to take. Town government is opposed to it. The populist movement used LLMs to get legal advice they could not otherwise afford to create a ballot article designed to force the town to take the desired action.

Luke Cuddy

Seems like it would be different with scientific expert-level misinformation as opposed to legal. For instance, on some of the public health issues surrounding Covid where many experts initially got it wrong, from school closures to the lab leak, ChatGPT is pretty good at parsing expert opinion, acknowledging cases that are not settled, and showing the disagreement that still exists on both sides.

Jan Zilinsky

Interesting! And it underscores how people will use LLMs in all kinds of unexpected ways.

Victor Kumar

I'm convinced that LLMs would be epistemically positive if people relied on them to form beliefs about politically-relevant facts. But will they? Not if political irrationality is driven by demand rather than supply. Maybe they'll load up ChatGPT to get (decent) medical advice and then turn on Tucker (or his even worse successors) to satisfy their political needs?

Jamie Freestone

Great post & I wish I shared your optimism!

Clearly, LLMs will nudge people toward technocratic, expert-consensus type opinions. It's probably already happening.

I'm not worried about old school disinformation or persuasion, but a level of control that is unprecedented because the control of our total information landscape is unprecedented — with search engines, browsers, and newsfeeds being curated, annotated, and filtered by just a few major companies' LLMs. This could lead to a re-centralisation of media/information that creates Leviathans who make the Murdochs and Hearsts of yore look like minnows. Politically, that's a nightmare & would certainly contribute to the things you are worried about (i.e. gradual disempowerment, rise of authoritarianism).

Apologies for the rant but this is my usual schtick: https://jamiefreestone.substack.com/p/the-recentralisation-of-media

JP

"even Grok," 😂

Great post again Dan! The only reason I'm hesitant to fully share your optimism is that these systems are so powerful, influential, and unpredictable (as the failed attempts to control them you mention illustrate) that if they derail in unforeseen ways, it's going to be really hard to put the genie back into the bottle.

One phenomenon that could occur through widespread proliferation of GenAI use, and that people may overlook, is what will happen if AIs start being overfed with their own output, the percentage of which grows with increased usage. To give a weird metaphor here: I was an exchange student in Germany in the 1990s, at a time when Germans were obsessed with recycled paper. This meant that nobody could use white paper anymore, for risk of being accused of not being sufficiently "öko" (sustainable). So we recycled the recycled paper again and again, until we were all writing and printing on very dark grey paper. It would have become black were it not for the fact that less toxic bleaching methods were invented, creating white recycled paper which was then fed into the cycle. This could happen epistemically (and stylistically) as well. GenAI writes better than most of us, and occasionally better than even the best of us. But if we all let GenAI write our stuff all the time, and it's fed back into itself, we might find ourselves in epistemic one-way streets we can't get out of anymore.

This is not to attack your position, which I agree with, it's just an example that we can't be sure bad stuff won't happen, and, more importantly, if bad stuff does happen, it's very hard to reverse.

But that doesn't change the fact that I mostly agree with your analysis, and am grateful that you wrote it up so clearly and persuasively!

Dan Williams

Thanks - great comment. (And analogy!). I agree that's a serious concern.

Mike Hind

Such a good piece. And a breath of fresh air in an area too often dominated by the talking points of recent Guardian think pieces. I know there's little epistemic value in anecdotal testimony, but my daily use of ChatGPT for a range of analytical processes (relating to a WW2 history project I have), or just riffing on various ideas, confirms the progress of which sceptical 'normies' seem to be unaware. My favourite aspect of ChatGPT's evolution is that it no longer hazes me. In fact, it holds my feet to the fire on all kinds of things, which forces me to sharpen my thinking. But opinions out in the wild are calcified, and all the early negative stories have become cemented in many midwit minds. This unfortunately means that using AI is beginning to carry a status cost.

Jan Zilinsky

So much to think about there! Here's an interesting complication though - Andy Hall reports: "One of our tests found ChatGPT generating an entire answer about the Iran strikes based solely on a single source with a strong point of view, Al Jazeera. In other cases, models are so committed to "both sides" that users walk away with no idea what's actually going on."

It will be fascinating to study whether and under what conditions interactions with chatbots increase people's understanding of the world, and also to test to what extent it matters whether people are asking about current/developing events versus events and phenomena which are better understood.

Jason S.

“This has supported the emergence of very high-quality information for the very small minority of the population that seeks it out.”

Good one! ☝️

Charles Justice

You are making a convincing case. It has stopped me in my tracks and made me reconsider what I thought about AI. Myself, I still want to reserve judgement. For one thing, you have malign agents like the Trump administration, Putin, and Xi, who all have an interest in perverting AI to suppress the truth and better broadcast their lies. I don't believe that AI is immune to this. I mean, to put it into perspective, the first fourteen months of Trump's administration have been a systematic attempt at getting rid of scientific and medical expertise in favour of denial and quackery, and also an attempt to starve public education of funding. Trump's aim is to de-educate the masses, control the mass media, and get rid of independent voices so that he can broadcast his lies and propaganda without any checks or balances. Even if he is not entirely successful, partial success would be bad. Maybe LLMs will counter some of Trump's malignancy. But it doesn't give me confidence when it's obvious that this administration is targeting accurate portrayals and independent expertise for suppression and elimination. The next few years will be a global experiment in how AI turns out. A positive outcome is not guaranteed, nor should it be ruled out.

William of Hammock

I would invert the direction of a couple of your premises, but share in your net direction.

First, I suspect it is more accurate to say that social media democratised the creation of available information, but not its consumption. This is based on the same mechanism you later cite as a corrective to the proliferation of deep fakes. With social media effectively "flooding the zone" with gladiatorial opinion, leveraging the technocratic status quo to parse the noise is expected. Even when disagreeing, it still serves to replicate the common referent, outcompeting alternatives on position, not product.

Second, it may be illusory that LLMs are converging on expert opinion specifically, rather than on the hidden agreements locked behind gladiatorial framing. The amplification of bespoke disagreements often conceals and even complicates calibration, especially for a species that loses subtle, face-to-face cues. Again, this is implied by your suggestion that LLMs may be more persuasive on the same content based on differences in presentation. Removing the opportunity cost of the social medium may shift the final product toward the technocratic product, but not for the intuitive, apparent, and available reasons on its surface.

The rest of the analysis both survives these light inversions and, I think, adds critical clarity to the debate.

Cheers!

Cranmer, Charles

I just skimmed your post and will go back and read more carefully.

One point you made is one of the most encouraging things I've read in a long time. Namely, that people will start to use AI rather than social influencers to get their facts. I hope you are right. That would be a big improvement.

Drew Margolin

Great post! As an academic and resident of a very progressive town, I get jeered for my AI optimism all the time — this post made me feel like I’m not crazy.

I actually made the McLuhan point in comments on another Substack — to me this is the key. It’s not _only_ that AI has these incentives (which you describe well), it’s that these incentives drive more than content of communication, they drive _form_.

I saw Dave Rand present the conspiracy paper before it was published. I had not really talked to LLMs before. In the presentation he showed not only _that_ people were persuaded, but the dialogues that got them there. Holy Cow, these things are rigorous. They are “reason” machines, not so much because their reasoning is so good but because they are so committed to speaking in reasoning “forms.” They use complete sentences. They are polite (as you note). They are patient. They do not try to humiliate you.

There’s some way of combining Habermasian ideals with the Turing test. Like “can you tell this interlocutor apart from a good citizen committed to rational discourse?” The LLMs would be impossible to distinguish. Everyone on social media, including the experts, would fail every time.

Eric Magnuson

I was very critical of the fact-check culture that sprang up post-2016, and have been even more skeptical of social media, but so far I’ve found LLMs’ answers to be decently balanced and nuanced. For instance, I just asked Claude how effective surgical masks were at preventing the spread of COVID, and it answered, “The evidence on surgical masks and COVID-19 transmission turned out to be more mixed than public health messaging often suggested.” Yet I was dinged on Facebook for posting a peer-reviewed meta-analysis that questioned the effectiveness of surgical masks.

John Smithson

This is a well-reasoned paper, but I am not sure I agree with its conclusion. LLMs are useful tools, but I don't think they can replace expert advice. They can't build up experience. They can't make predictions. They can't run experiments or make observations. They can't infer causes or apply the scientific method. They can't reason. They don't understand.

LLMs are much more valuable than a search engine, but they have the limitations of a plagiarizer. They seem to have expertise but that's just superficial. Richard Feynman said that science is the belief in the ignorance of experts. That goes double for LLMs.