Discussion about this post

Laura Creighton

Whether your vision comes to pass depends on whether people decide that they can trust the answers they get from AIs. We already know that we cannot trust the elites to police their own content. Scientific studies do not replicate, preference falsification rules, and in recent memory cancel culture came, not for the liars, but for those questioning the lies or unwilling to lie enough. Having the correct credential became more important than being correct. In Yeats's words, "The best lack all conviction, while the worst are full of passionate intensity." How do we regain the truth in a world full of highbrow and lowbrow liars, as well as those who are sincerely wrong about things, all flooding us with untruths?

One idea: build an agent and then keep track of trustworthiness. If we made it impossible to have prestige without being trustworthy, we would live in an epistemologically brighter and more hopeful world. See: https://deepcode.substack.com/p/the-coming-great-transition-v-20?utm_source=share&utm_medium=android&r=8o0zz

Ali Afroz

You’re almost certainly right about LLMs as they currently exist, but I do think you’re perhaps showing too much confidence in making such predictions about a technology that could develop in many possible ways. For example, you presume that eventually people will be able to train an AI that does not share a centre-left orientation, but it seems entirely possible that the mechanism causing the centre-left orientation is the same mechanism that causes them to share expert opinion, namely that most experts are centre-left. So you should not be so confident that the ability to train ideologically diverse AI would not also include the ability to train AI with populist opinions, in the sense of opinions that are popular among non-experts.

You write that Musk can pump out a lot of nonsense for the consumption of his fan base but an AI company cannot, yet that seems self-refuting: Musk demonstrates that there is a market for such nonsense, although I will grant it’s a niche market. You could of course argue that there isn’t a large enough fan base, but, for example, three quarters of the American public thinks the JFK assassination was the product of some conspiracy. So the market for false information is in fact large, which is understandable given the difficulty of discovering the truth in the modern world and the limited amount of attention people spend on it.

Part of why I am much more concerned about this than you are is that I think the gap between expert opinion and the opinion of the masses is just too huge for this to be sustainable. For example, even DeepSeek, trained in China, will insist that corporal punishment of children is unacceptable and should not be done, even though the American public supports it by a comfortable majority. Given the world population and how conservative and alien many cultures can be, I find it unbelievable that we should not place some significant probability on the possibility that this will provide a large market for an AI with opinions that are contrary to the experts. I am not saying this will happen; like I said at the beginning, I’m just very unsure of how the technology will develop, and I think this is one possibility.

Another possibility is that AI agents become much more sophisticated and act like directed agents that, like other agents, can do intentional propaganda, at which point you apparently have a huge army of super-persuasive agents running around. My point isn’t that you are not sketching out a possible scenario. It’s just that the technology is in its infancy, so I think you’re being too confident here and should consider the possibility of AI causing a more technocratic information environment to be just one scenario, instead of giving it overwhelming probability.

I do agree regarding the effects of LLMs as they exist right now. I just think we can’t be too confident that they’ll be in a similar place in, say, five years.

41 more comments...
