Discussion about this post

Ali Afroz

You’re almost certainly right about LLMs as they currently exist, but I do think you’re perhaps showing too much confidence in making such predictions about a technology that could develop in many possible ways. For example, you presume that eventually people will of course be able to train an AI that does not share a centre-left orientation, but it seems entirely possible that the mechanism causing the centre-left orientation is the same mechanism that causes them to share expert opinion, namely that most experts are centre-left. So you should not be this confident that the ability to train ideologically diverse AI would not include the ability to train AI with populist opinions, in the sense of opinions that are popular among non-experts.

You write that Musk can pump a lot of nonsense for the consumption of his fan base but an AI company cannot, yet that seems self-refuting: Musk demonstrates that there is a market for such nonsense. Although I will grant you it’s a niche market. You could of course argue that there isn’t a large enough fan base, but the thing is that, for example, three quarters of the American public thinks the JFK assassination was the product of some conspiracy. So the market for false information is in fact large, which is understandable given the difficulty of discovering the truth in the modern world and the limited amount of attention people spend on it.

Part of why I am much more concerned about this than you are is that I think the gap between expert opinion and the opinion of the masses is just too huge for this to be sustainable. For example, even DeepSeek, trained in China, will insist that corporal punishment of children is unacceptable and should not be done, even though the American public supports it by a comfortable majority. Given the world population, and how conservative and alien many cultures can be, I find it unbelievable that we should not place some significant probability on the possibility that this will provide a large market for an AI with opinions contrary to those of experts. I am not saying this will happen; like I said in the beginning, I’m just very unsure of how the technology will develop. I just think this is one possibility.

Another possibility is that AI agents become much more sophisticated and act like directed agents that can, like other agents, do intentional propaganda, at which point you apparently have a huge army of super-persuasive agents running around. My point isn’t that you are not sketching out a possible scenario. It’s just that the technology is in its infancy, so I think you’re being too confident here and should consider the possibility of AI causing a more technocratic information environment to be just one scenario, instead of giving it overwhelming probability.

I do agree regarding the effects of LLMs as they exist right now. I just don’t think we can be too confident about whether they’ll be in a similar place in, say, five years.

Cranmer, Charles

I just skimmed your post and will go back and read more carefully.

One point you made is one of the most encouraging things I've read in a long time. Namely, that people will start to use AI rather than social influencers to get their facts. I hope you are right. That would be a big improvement.

