Discussion about this post

Laura Creighton

Whether your vision comes to pass depends on whether people decide that they can trust the answers they get from AIs. We already know that we cannot trust the elites to police their own content. Scientific studies do not replicate, preference falsification rules, and in recent memory cancel culture came not for the liars but for those questioning the lies or unwilling to lie enough. Having the correct credential became more important than being correct. In Yeats's words, "The best lack all conviction, while the worst are full of passionate intensity." How do we regain the truth in a world full of highbrow and lowbrow liars, as well as people who are sincerely wrong about things, all flooding us with untruths?

One idea: build an agent and then keep track of its trustworthiness. If we made it impossible to have prestige without being trustworthy, we would live in an epistemologically brighter and more hopeful world. See: https://deepcode.substack.com/p/the-coming-great-transition-v-20?utm_source=share&utm_medium=android&r=8o0zz

Arnold Kling

To me, this sets up a conflict between AI and the people who have come to depend on their social media bubbles for validation. I am not sure the end result will be what you predict. The opposite could happen: people coming at AI with pitchforks, tar, and feathers.

32 more comments...
