11 Comments
Ali Afroz

You're almost certainly right about LLMs as they currently exist, but I do think you're showing too much confidence in making such predictions about a technology that could develop in many possible ways. For example, you presume that eventually people will be able to train an AI which does not share a centre-left orientation, but it seems entirely possible that the mechanism causing the centre-left orientation is the same mechanism that causes them to share expert opinion, namely that most experts are centre-left. So you should not be so confident that the ability to train ideologically diverse AIs would not also include the ability to train AIs with populist opinions, in the sense of opinions that are popular among non-experts.

You write that Musk can pump out a lot of nonsense for the consumption of his fan base but an AI company cannot, yet that seems self-refuting: Musk demonstrates that there is a market for such nonsense, although I will grant you it's a niche market. You could of course argue that there isn't a large enough fan base, but three quarters of the American public, for example, thinks the JFK assassination was the product of some conspiracy. So the market for false information is in fact large, which is understandable given the difficulty of discovering the truth in the modern world and the limited amount of attention people spend on it.

Part of why I am much more concerned about this than you are is that I think the gap between expert opinion and the opinion of the masses is just too huge for this to be sustainable. For example, even DeepSeek, trained in China, will insist that corporal punishment of children is unacceptable and should not be done, when even the American public supports it by a comfortable majority. Given the world population and how conservative and alien many cultures can be, I find it unbelievable that we should not place some significant probability on the possibility that this will provide a large market for an AI with opinions that are contrary to experts. I am not saying this will happen; like I said at the beginning, I'm just very unsure of how the technology will develop. I just think this is one possibility.

Another possibility is that AI agents become much more sophisticated and act like directed agents that can, like human agents, engage in intentional propaganda, at which point you have a huge army of super-persuasive agents running around. My point isn't that you are not sketching out a possible scenario. It's just that the technology is in its infancy, so I think you're being too confident here: you should consider the possibility of AI causing a more technocratic information environment to be just one scenario, instead of giving it overwhelming probability.

I do agree regarding the effects of LLMs as they exist right now. I just don't think we can be confident they'll be in a similar place in, say, five years.

Cranmer, Charles

I just skimmed your post and will go back and read more carefully.

One point you made is one of the most encouraging things I've read in a long time. Namely, that people will start to use AI rather than social influencers to get their facts. I hope you are right. That would be a big improvement.

Alexis Ludwig

Wonderfully counterintuitive and frankly refreshing argument, particularly for a cynic like me. I will do a few tests of my own before jumping on board, but am grateful you resisted the negativity bias and included several counterarguments (and tentative rebuttals) in your post. I look forward to the comments it spawns among the techno-cognoscenti, since I am not one. Well done.

JP

"even Grok," 😂

Great post again, Dan! The only reason I'm hesitant to fully share your optimism is that these systems are so powerful, influential, and unpredictable (as the failed attempts to control them you mention illustrate) that if they derail in unforeseen ways, it's going to be really hard to put the genie back in the bottle.

One phenomenon that could occur through widespread proliferation of GenAI use, and that people may overlook, is what will happen if AIs start being overfed with their own output, the percentage of which grows with increased usage. To give a weird metaphor here: I was an exchange student in Germany in the 1990s, at a time when Germans were obsessed with recycled paper. This meant that nobody could use white paper anymore, for risk of being accused of not being sufficiently "öko" (sustainable). So we recycled the recycled paper again and again, until we were all writing and printing on very dark grey paper. It would have become black if it weren't for the fact that less toxic bleaching methods were invented, creating white recycled paper which was then fed into the cycle. This could happen epistemically (and stylistically) as well. GenAI writes better than most of us, and occasionally better than even the best of us. But if we all let GenAI write our stuff all the time, and it's fed back into itself, we might find ourselves in epistemic one-way streets we can't get out of anymore.
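To make the feedback loop concrete, here is a toy sketch in Python (purely illustrative; the distribution and all numbers are invented). Each generation refits a distribution over ten "ideas" using only samples of the previous generation's output; once a rare idea draws zero samples, it vanishes from every later generation, which is the statistical version of the paper getting darker.

import numpy as np

rng = np.random.default_rng(0)

# Generation 0: a "human" distribution over ten ideas, with a long tail of rare ones.
probs = np.array([0.30, 0.20, 0.15, 0.10, 0.08, 0.06, 0.05, 0.03, 0.02, 0.01])

n_samples = 200  # each generation learns from a modest sample of the previous one
for gen in range(1, 11):
    sample = rng.choice(len(probs), size=n_samples, p=probs)
    counts = np.bincount(sample, minlength=len(probs))
    probs = counts / counts.sum()  # refit purely on the model's own output
    print(f"gen {gen}: {(probs > 0).sum()}/10 ideas still represented")

With a modest sample size, the rare ideas in the tail tend to disappear within a few generations, and since a zero-probability idea can never be sampled again, they never come back.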

This is not to attack your position, which I agree with; it's just an example that we can't be sure bad stuff won't happen and, more importantly, that if bad stuff does happen, it's very hard to reverse.

But that doesn't change the fact that I mostly agree with your analysis, and am grateful that you wrote it up so clearly and persuasively!

Dan Williams

Thanks - great comment. (And analogy!). I agree that's a serious concern.

PEG

The key conflation here is between ‘expert consensus’ and ‘median educated Western internet opinion.’ LLMs are trained on Wikipedia, Reddit, and news articles—not journals and conference proceedings. These overlap but aren’t the same thing, and the gap matters enormously once you scale globally. The corporal punishment example from Afroz illustrates this well: a global majority view gets coded as fringe by systems trained overwhelmingly on WEIRD text.

The ‘expert consensus = truth’ assumption is also doing too much work. History is littered with embarrassments—dietary fat, the ulcer-stress hypothesis, the replication crisis, the unanimous expert view that Trump couldn’t win. And the lag problem runs both ways: consensus is slow to correct errors and slow to absorb heterodox views that later prove right. A model trained on accumulated internet text inherits both pathologies simultaneously.

A more accurate description: LLMs are a smoothed reconstruction of what the literate Western internet thinks, with guardrails. Not a truth engine, not a propaganda machine—just confidently mid. Whether that’s better than the alternatives probably varies more by domain than we assume.

Seattle Ecomodernist Society

Good point. Will it become possible for a strategy to overcome or reduce this bias? Will the development of LLMs in different countries and languages help? When you put the same question to multiple LLMs you usually get different answers; is there an opportunity for competition and development here?

Charles Justice

You are making a convincing case. It has stopped me in my tracks and made me reconsider what I thought about AI. Myself, I still want to reserve judgement. For one thing, you have malign actors like the Trump administration, Putin, and Xi who all have an interest in perverting AI to suppress the truth and better broadcast their lies. I don't believe that AI is immune to this. I mean, to put it into perspective, the first fourteen months of Trump's administration have been a systematic attempt at getting rid of scientific and medical expertise in favour of denial and quackery, and also an attempt to starve public education of funding. Trump's aim is to de-educate the masses, control the mass media, and get rid of independent voices so that he can broadcast his lies and propaganda without any checks or balances. Even if he is not entirely successful, partial success would be bad. Maybe LLMs will counter some of Trump's malignancy. But it doesn't give me confidence when it's obvious that this administration is targeting accurate portrayals and independent expertise for suppression and elimination. The next few years will be a global experiment in how AI turns out. A positive outcome is not guaranteed, nor should it be ruled out.

Jamie Freestone

Great post & I wish I shared your optimism!

Clearly, LLMs will nudge people toward technocratic, expert-consensus type opinions. It's probably already happening.

I'm not worried about old school disinformation or persuasion, but a level of control that is unprecedented because the control of our total information landscape is unprecedented — with search engines, browsers, and newsfeeds being curated, annotated, and filtered by just a few major companies' LLMs. This could lead to a re-centralisation of media/information that creates Leviathans who make the Murdochs and Hearsts of yore look like minnows. Politically, that's a nightmare & would certainly contribute to the things you are worried about (i.e. gradual disempowerment, rise of authoritarianism).

Apologies for the rant but this is my usual schtick: https://jamiefreestone.substack.com/p/the-recentralisation-of-media

William of Hammock

I would invert the direction of a couple of your premises, but share in your net direction.

First, I suspect it is more accurate to say that social media democratised the creation of available information, but not its consumption. This is based on the same mechanism you later cite as a corrective to the proliferation of deepfakes. With social media effectively "flooding the zone" with gladiatorial opinion, leveraging the technocratic status quo to parse the noise is expected. Even when disagreeing, it still serves to replicate the common referent, outcompeting alternatives on position, not product.

Second, it may be illusory that LLMs are converging on expert opinion specifically, rather than on the hidden agreements locked behind gladiatorial framing. The amplification of bespoke disagreements often conceals and even complicates calibration, especially for a species that loses subtle, face-to-face cues. Again, this is implied by your suggestion that LLMs may be more persuasive on the same content based on differences in presentation. Removing the opportunity cost of the social medium may shift the final product toward the technocratic product, but not for the intuitive, apparent, and available reasons on its surface.

The rest of the analysis survives these light inversions and, I think, adds critical clarity to the debate.

Cheers!

Seattle Ecomodernist Society

It does seem that some aspects of current LLMs push in the opposite direction from social media: where social media widened the inputs to dialogue, LLMs narrow the inputs toward experts, technocratizing them. There is a great productivity advantage in expanded access to average expert opinion. But this change is to the inputs, not the overall effect on dialogue. It seems like a complement to the widening caused by social media; together they will 'dramatically expand the range of voices and viewpoints that can be expressed and made the media environment much more competitive', because the LLM is also broadly accessible and the broad masses will use it to enhance their expression and debate.

Is this some corrective to something bad in mass dialogue, like conspiracy theories? Perhaps, but somehow it seems doubtful that conspiracy theories will decline or be restrained. We've just seen the Epstein pedo panic set a global record for QAnonsense. And just how has expert opinion performed? It would be hard to argue that the expert-opinion stratum has not become central to peddling the 'disinformation': judges with non-required proceedings bribing 'victims' all the way to the bank, journalists and politicians milking the victimology term 'trafficking' for adults paid for consensual travel and sex. Liberals who criticized Pizzagate somehow became true believers of the exact same storyline when it looked like they could pick up some voters. Would an LLM see through this fad and criticize it? Perhaps one could be developed, and hopefully that is part of the competition between LLM developers. Mostly, however, this indicates the limitation of non-human knowledge: intelligent software will automate dexterous and thinking tasks and reveal the residual that is too complex, subtle, or critical and requires humans. It takes a breathing, thinking Michael Tracey, with some mix of motives, to see through the fad.

Would an LLM not be influenced by the Epstein panic or the next mass scapegoating fad? If the fad influences the '37-year-old Reddit readers' who supply the big data for LLM learning, then it will be. The more subtle an idea, the more difficult it is for an LLM to sort out. LLMs will get specialized and far smarter than today, but they will still be tools of humans that perform better or worse depending on how they are used and directed. Accepting the observation that social media and LLMs push in different directions, the need is to critique the foibles of each, separately and together, to detect risks and limitations. Believing an LLM is fully accurate, whether because it reflects expert opinion or for any other reason, will be the mark of the gullible and low-productivity human.