I don’t understand how to reconcile what you’re saying with the fact that maybe a third of the American electorate believes that the 2020 election was stolen. This was (and is) massively consequential. It clearly wasn’t a preexisting belief. To get people to believe it, it was necessary and sufficient for the Trump information ecosystem to tell them to believe it. Is the claim that even without social media, traditional media would’ve disseminated the message equally effectively? How do we know such a counterfactual?
I think the more relevant counterfactual question is not whether traditional media would have led to a different outcome vs social media, but whether social media without so much AI (e.g. circa 2014) would have led to an appreciably different outcome than today's social media with more AI or tomorrow's social media with tons of AI. There is a whole other issue of how social media platforms shape and amplify misinfo, but as I read it this wasn't really the focus of the latest post; he was talking more about stuff like deepfakes, bots, algorithmic targeting, etc. (and so much misinfo is already on social media by default that for practical purposes digital platforms are the baseline we are comparing anything to in the first place).
But maybe part of what you're challenging isn't just the AI question, but Dan's earlier arguments about misinfo in general? Because this raises another interesting question of how much the specific details of the information environment matter, if the case was already made for misinfo itself not mattering that much. Is AI just another subset that could be folded into those skeptical takes, or is it such a dramatic break that it requires an entirely new analysis?
Personally, I suspect AI does bring something qualitatively different (especially with deepfakes, which undermine our direct representation of reality itself, not simply our interpretation of verbal material) - and further magnifies uncertainty about the future - but that this difference is probably not significant enough to transcend the more general principles discussed in previous posts.
> It clearly wasn’t a preexisting belief. To get people to believe it, it was necessary and sufficient for the Trump information ecosystem to tell them to believe it.
See also: "a third of the American electorate believes that the 2020 election was stolen".
What humans say in response to survey questions is in no way guaranteed(!) to be a highly accurate (high-dimensional) representation of what they actually believe, particularly considering (among many other things) how easy it is for someone experienced with neurotypical humans to ask most any human a series of questions that reveal blatant contradictions in their beliefs, and the many reasons for this shortcoming (i.e., a lack of depth in the academic skills necessary to consider such things competently).
Consciousness only seems simple because that's what consciousness tells us, and a big part of that is because that is what our consciousnesses have been trained on by the culture we've been raised in.
I agree that all four points ("facts") are valid but must disagree with the conclusions you draw from them.
1. "Online disinformation does not lie at the root of modern political problems"
This is a strawman. The assumption that AI only matters in online discourse is too strong a caveat, and what counts as online is growing anyway as streaming grows as a distribution medium. The real question is: does the introduction of AI into the spheres of political discourse, online and not, pose new and dangerous hazards for collective epistemic health? The roots of political problems may indeed lie in conflicting interests, cognitive foibles, bad-faith actors, etc. But that doesn't mean that AI won't be an especially destabilizing factor and amplifier of incoherence.
2. "Political persuasion is extremely difficult."
The danger is not that some candidate's AI-generated ad will out-persuade some other candidate's ad. The danger is that foundational belief systems are susceptible to corrosion and distortion as the overall information landscape becomes polluted with manipulative content at scale. An example is the normalization of manifestly outrageous ideas due to repeated exposure and perceived acceptance by peers and thought leaders. Not that the ideas should be excluded from discourse, but that the immune system that would counter them becomes disabled. Once registered, a volley of mental impressions is irreversible even if, in a different mental frame, some may later be acknowledged as questionable.
3. "The Media Environment is highly demand driven"
This is an important crux. What's most important is that it's a system. Demand is shaped by supply. If AI amplifies bad actors' ability to flood the zone with crap, then the content of this crap forms the backdrop for what people will be talking about at the diner. Which in turn shapes what cable channels, radio talkers, marketing messages, podcasts, and streaming videos they attune to.
4. "The establishment will have access to more powerful forms of AI than counter-establishment sources."
The establishment plays by different rules. As you point out, the establishment has an interest in credibility and veracity. Other actors don't. Simple confusion and disillusionment is a sufficient outcome to achieve nefarious ends. A corrupt lie travels around the world while the truth is still tying its shoelaces. The empirical fact is, con men often win. "Moral panic" is prudent in the face of any new devices they may come to master before the establishment gets its act together to figure out new defenses.
> 1. "Online disinformation does not lie at the root of modern political problems"
> This is a strawman.
Exactly, and a planted one at that (we are trained on it from day 1, not unlike how we train LLMs); it exploits the mind's tendency to equate normalcy with ~correct.
See (just for starters):
https://en.wikipedia.org/wiki/Culture
https://plato.stanford.edu/entries/psychology-normative-cognition/
https://en.wikipedia.org/wiki/Consciousness
https://en.wikipedia.org/wiki/Propaganda_techniques
[The objections to this line of reasoning, which are highly deterministic and can be accurately predicted by an adequately complex model of humans which would contain a normal distribution of conceptual responses (due to training).]
> The danger is that foundational belief systems are susceptible to corrosion and distortion
Exactly....if someone were to start attacking *the abstract foundations of* our "reality" (rather than object level issues *within* "reality"), things could get messy real fast. See: science displacing religion, especially taking into consideration that science may not be the peak of what humanity is capable of. It is powerful indeed, and worshipped...but is it *the most powerful thing possible*?
> An example is the normalization of manifestly outrageous ideas due to repeated exposure and perceived acceptance by peers and thought leaders.
Don't forget that the reason they "are" (perceived *as a fact* to "be") "outrageous" is largely due to training. Have you ever tried to get a human to substantiate their claims of fact as being actually factual? For the optimum experience, choose maximally educated/~intelligent ones to run experiments on. For certain ideas, pursuing actual truth is considered literally outrageous.
> Not that the ideas should be excluded from discourse, but that the immune system that would counter them becomes disabled.
Or, it was never equipped in the first place.
> Which in turn shapes what cable channels, radio talkers, marketing messages, podcasts, and streaming videos they attune to.
I would be careful forming strong beliefs about why the media projects what it does.
See: https://en.wikipedia.org/wiki/Operation_Mockingbird
> As you point out, the establishment has an interest in credibility and veracity.
Veracity: devotion to the truth; truthfulness. Also: the power of conveying or perceiving truth.
Absolutely agree on credibility, but am highly skeptical on veracity (though *perception of* veracity is important, and "mission accomplished"....at least formerly). If this were really true, how do you explain the particulars of school curricula, the design of our information distribution systems, the language we use, etc.? Hanlon's Razor (never attribute to malice that which is adequately explained by stupidity) is a great meme...but how true is it?
> Other actors don't.
No offense, but this is a fine example of the thinking our school curriculum produces: perceptions of the ability for omniscient knowledge - once you are able to spot this, you will see it *everywhere* in online conversations, in journalism, etc. Essentially, it is an implicit assertion that a *perception of* absence of evidence is proof of absence. (And yes, I do "know what you mean", I'm just pointing out how bizarre our conventions of "reality" are, here in 2024, "The Age of Science").
> Simple confusion and disillusionment is a sufficient outcome to achieve nefarious ends.
Confidence and certainty on top of delusion are even better though....and what do we see all around us, *from ~everyone, on ~all sides*?
> The establishment plays by different rules.
There are certain things that the establishment can't do though, and we the people can (or *could*, with some education in fundamentals). As I see it, this is one of the biggest weaknesses in the system, and should be one of the primary attack vectors in a revolution.
One metaphor I find useful is to think of misinformation (or misleading information) as the currency in which bad beliefs are traded back and forth, or the package they come in, but not primarily the *source or reason* for those beliefs. So it's kind of like Aristotle's material cause (or maybe even formal cause?), as opposed to the efficient cause that would count as the "real problem" to target. Meanwhile, the final cause goes in the opposite direction (misinfo functions in the service of identity, values, signaling, motivated reasoning). AI can make the currency more plentiful, like when the Fed pumps tons of fake money into the economy. But there is only so much economic activity and so much demand, after a point. I think AI could do some epistemic harm via information in the short term while this is still novel and unpredictable, but over time, as we adjust expectations and get more experienced, it will even out.
I guess what concerns me is not that AI would trick or sway lots more people, but rather that a flood of AI-generated BS combined with panic over AI will gradually lead us to cynically distrust everything and/or grow less responsive to genuine info even when it's important, either as overcompensation or out of sheer exhaustion. So, the bar gets lowered and we start tuning out (or retreat to the private sphere).
I think people tend to associate evil or unethical things with effectiveness, so they might be imagining AI-generated misinformation as more effective than AI-generated political content in general, but it's not at all obvious why that would be the case (it could also just be a "feminist bank teller" style conjunction fallacy issue). Similarly, I think with microtargeting people are reacting to what they imagine might be possible rather than seeing what kind of effects can actually be achieved in the world.
While I appreciate the instinct that institutions will be able to outmatch renegades, establishment narratives aren't always the most saleable, and people sometimes judge whether a news outlet is trustworthy by how well it conforms to their worldview (especially true in the US recently). And isn't it imaginable that we encounter an asymmetry where the content-generating tech vastly outclasses the content-policing tech?
> people sometimes judge whether a news outlet is trustworthy by how well it conforms to their worldview
Indeed....even worse: are there any specific exceptions to this rule that you can point to?
It's really easy to use AI for what is already really effective: spreading division. Russia is expert on this; it started researching the subject before the Internet existed.
We already have populist leaders coming to power everywhere - the types who get power when the people are divided and angry.
Hedge funds, on the other hand, only need to spread disinformation about one company, or possibly even one company leader, to make millions.
Everything in this leads towards a more centrally led, fascist-like government apparatus. So I kind of agree on one thing: the government will have much power. The downside is that only bad governments will use that power really effectively.
Our only hope is to spread awareness, and this article is not helping, sorry.
> Russia is expert on this
How did you come to "know" this? From what source did you acquire it? What does "is expert on this" mean *at the observable, deterministic object level*?
> We already have populist leaders coming to power everywhere - the types who get power when the people are divided and angry.
Are division and anger the only reasons that populism is sometimes popular? Is the story of the righteousness of "democracy" not also dependent on the same general idea, the "will of the people" (despite us having no way to measure such things in a highly accurate way, *perhaps because we haven't tried to develop any*)?
> Our only hope is to spread awareness, and this article is not helping, sorry.
I kind of appreciate the sentiment, but be careful forming strong beliefs about how causality works....discussion is how we improve upon understanding.
While I appreciate the time and thought put into this essay, it doesn't hold up when new situations, or simply variations, arise. For example: when a once-in-a-lifetime (or merely the first) pandemic kicks off, every person on the planet is at the same starting line. Who knew, back then, that billions of people were susceptible to believing complete nonsense, and were also indifferent to their own health and the health and well-being of their families and communities? Back in the days when we were all tethered to landlines, who knew that there were a whole lot of people out there so afraid of new technology that they probably already had their tinfoil hats in preparation?
Changing minds is hard, but not impossible. Making minds, on the other hand, is up for grabs. Sometimes the best liars win.
I don't think AI will be better at lies, and the world is already full of people who are very skilled at dreaming up and distributing lies. One isn't worse or better than the other. People are gullible, and apparently easily manipulated. AI merely offers some people a more efficient tool for increasing the velocity of lies.
If the things you say here were not actually true, would you necessarily be able to realize it?
For example:
> For example: when a once-in-a-lifetime (or merely the first) pandemic kicks off, every person on the planet is at the same starting line.
Do you not think people with substantial depth in medicine and/or science have an advantage?
What about someone with substantial depth in philosophy, psychology, neuroscience, etc?
> Who knew, back then, that billions of people were susceptible to believing complete nonsense
This has been well known for centuries.
> and were also indifferent to their own health and the health and well-being of their families and communities?
You believe yourself to be an exception to this, do you? :)
> Back in the days when we were all tethered to landlines, who knew that there were a whole lot of people out there so afraid of new technology that they probably already had their tinfoil hats in preparation?
See also: people who believe they can read minds.
> Sometimes the best liars win.
And *usually*, speakers of unintentional untruth win....like in almost every single conversation outside of the hard sciences.
> the world is already full of people who are very skilled at dreaming up and distributing lies
Let's hope speakers of untruth aren't even more dangerous!
> People are gullible, and apparently easily manipulated.
Indeed.
I don't know what point you're trying to make.
Even scientists were at the starting line for the pandemic. Admittedly, some aspects allowed them to quickly build on two decades of prior efforts; on the other hand, it took them seemingly forever to figure out the simple matter that Covid was airborne.
Yes, I absolutely was concerned with the health and well-being of my family and my broader community. Why wouldn't I have been? We had the longest lockdowns in the Western world, I had no complaints, thousands of lives were saved.
I'm a scientist.
> I don't know what point you're trying to make.
One point is that in our culture, representing one's opinions as facts is not just normal, but often enforced.....and doing otherwise is highly frowned upon, depending on the topic.
> Even scientists were at the starting line for the pandemic. Admittedly, some aspects allowed them to quickly build on two decades of prior efforts; on the other hand, it took them seemingly forever to figure out the simple matter that Covid was airborne.
Is this to say that they had advantages over laymen (as opposed to "every(!) person on the planet is at the same(!) starting line")?
> Yes, I absolutely was concerned with the health and well-being of my family and my broader community.
What data type would you use to represent "was concerned with"? A true/false boolean perhaps (which is our cultural convention)? Or, this might be better asked as: "*Now that I mention it*, what data type....".
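To make the question concrete, here is a minimal illustrative sketch (in Python, with hypothetical field names of my own choosing) of the gap between the boolean convention and even a modestly higher-dimensional representation of "was concerned":

```python
from dataclasses import dataclass

# The cultural convention: concern collapsed into a single yes/no flag.
was_concerned: bool = True

# A still-crude but higher-dimensional alternative: concern varies by target,
# in intensity, in stability over time, and in whether it changed behavior.
@dataclass
class Concern:
    target: str        # e.g. "family", "broader community"
    intensity: float   # 0.0 to 1.0, self-reported and unreliable
    stability: float   # how much it fluctuated from week to week
    acted_on: bool     # whether it translated into any concrete action

reported_concerns = [
    Concern(target="family", intensity=0.9, stability=0.8, acted_on=True),
    Concern(target="broader community", intensity=0.4, stability=0.3, acted_on=False),
]
```

The point isn't that this particular structure is right, only that the boolean erases every distinction it contains.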
> Why wouldn't I have been?
I can think of many reasons: cultural conditioning, and the nature of consciousness.
If you are unable to think of any reasons, might you form any conclusions as a consequence?
> We had the longest lockdowns in the Western world, I had no complaints, thousands of lives were saved.
If I were to inject this into the conceptual context, does it change your thinking at all?
https://plato.stanford.edu/entries/causation-counterfactual/
https://en.wikipedia.org/wiki/Counterfactual_thinking
> I'm a scientist.
Do you believe that studying science has any effect on one's cognition?
Does science rest upon (take for granted as true) any methodologies, premises, and axioms? Are there any substantial flaws or shortcomings (known, or otherwise) in any of this?
Do you (does your mind, both conscious and subconscious) implement these abstract methodologies at the object level with perfection? How about other scientists (literally all of them)?
edit:
I feel I should add: while this may seem ~unfair, could the same not be said of scientists using their greater knowledge to disagree with non-scientist commoners?
Yawn 🥱🥱🥱🥱🥱
https://www.pokerology.com/lessons/the-art-of-representing/
I'm sympathetic to all of this. Here's a complementary point. At least for now, what AI lets you do is produce the sort of content that humans can write, but at scale: it now costs far fewer dollars per word to generate passable copy.
But it's very hard to come up with a plausible model of our information ecosystem where the cost of producing content is a crucial bottleneck preventing disinformation campaigns from being more successful.
It's now very easy to generate thousands of variants of Nigerian prince email scams, whereas before you needed to have somebody spend some time writing up a plausible-sounding message. Do you expect this means far more people will be scammed out of their savings? I don't; in my mental model, the cost of producing emails was just not a major factor limiting the success of phishing, so when that cost drops close to zero, the overall rate of phishing doesn't change much.
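As a toy sketch of that mental model (all numbers hypothetical), note that the expected number of victims depends only on reach and response rates; the cost of writing the copy never enters the calculation:

```python
def expected_victims(emails_sent: int, p_past_filters: float, p_falls_for_it: float) -> float:
    # Copy-writing cost appears nowhere in this expression.
    return emails_sent * p_past_filters * p_falls_for_it

# Hypothetical numbers, before and after cheap AI-generated variants.
before_ai = expected_victims(10_000_000, 0.01, 0.0001)  # one human-written template
after_ai  = expected_victims(10_000_000, 0.01, 0.0001)  # thousands of LLM variants
print(before_ai, after_ai)  # both ~10 -- identical unless AI moves the probabilities themselves
```

Cheaper copy only matters if it actually raises one of those probabilities, which is a separate empirical question.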
And fake news strikes me as a pretty similar dynamic.