Misinformation research aims to transcend politics in favour of objective scientific analysis. The broader its definition of the term “misinformation”, the less realistic that ambition is.
There is an interesting pattern in misinformation research:
1) Define something as “misinformation” and get media coverage and calls for censorship.
2) At a later date, other researchers show that it is either nothing new, not harmful, not widespread, or practiced by both sides.
3) Misinformation researchers change the definition of “misinformation”, claiming the critics are missing the far bigger problem.
4) Repeat steps 1-3 endlessly…
At some point, we have to admit that this is a solution (censorship) looking for a problem to solve. The real problem is that ideologues have a worldview that conflicts with material reality, and they refuse to admit it because that worldview is key to their moral identity.
https://frompovertytoprogress.substack.com/p/why-ideologies-fail
I agree with this in part, although in fairness many misinformation researchers don't advocate censorship.
I am glad to hear that. Based on those that get media coverage, it does not seem that way, though.
Perhaps if the anti-censorship misinformation researchers spoke out louder, we could have a more productive debate on the topic.
Michael, I'll agree with you... with the caveat that we are ALL ideologues.
In a broad sense, yes, but I think some people make political ideologies central to their personal worldview, while most people do not.
For the vast majority of human history, religion has played that role instead. And it still does today. And religion tends to touch on far fewer issues than ideologies, and it often leaves government policy to a separate sphere.
https://frompovertytoprogress.substack.com/p/where-does-ideology-come-from
Interesting take on how information, when framed and packaged with bias, becomes misinformation.
Instead of getting stuck on fake news, misinformation, and disinformation, maybe it's more helpful to think about the bigger picture of 'motivated communications.' It's about recognizing something obvious, but easy to miss: 'Information' only really exists when it's being communicated to someone.
When we interact with each other sometimes we're just sharing facts ('Hey, it's raining!'). But most of the time, we're trying to make an impression, get someone on our side, or inspire action. This kind of 'motivated communication' isn't always about the truth—it’s more about getting a response. Just like in motivated reasoning, the accuracy of the message might not be the main point.
To gain more insights, misinformation research should dive into the mechanics of motivated communications, rather than debating what’s true or not.
Thank you, Dan, for a refreshingly clear piece on misinformation theory. You might find Jack Bratich's concept of 'moral panics' useful in developing your insights further, along with the media regime's disciplining function of labelling things 'conspiracy theories' (or, in the 2020s, 'misinformation' and 'disinformation') as part of forming social consensus (Gramscian hegemony).
Thanks! Yes I've written about misinformation alarmism as a moral panic before. (Haven't connected to the concept of Gramscian hegemony yet though...).
Curious what you think of this new study linking moral outrage with spreading misinfo online (it's getting a lot of buzz):
https://www.science.org/doi/abs/10.1126/science.adl2829
More of the same misguided approach, or better? Their findings seem consistent with your views on motivated reasoning and imply that targeting accuracy may not be useful. But it still relies on fake news as the proxy definition and presupposes what qualifies as untrustworthy. It also relies very heavily on Facebook, which has declined in popularity and probably is not nearly as influential on public opinion and downstream actions as it used to be. But maybe descriptive studies of affective dynamics around sharing of information are still valuable? I don't know.
"Whereas the definitional challenge is about accurately capturing what misinformation is, the objectivity challenge is about whether researchers can identify instances of that phenomenon scientifically."
These speak to the twin questions of "what" and "whether/how to." But the "who" is still conspicuously missing - and I think that's the crux of the inescapable dilemma, and thus the debate between those arguing that all such research is biased and political vs. those who maintain it's still scientific. If you think you can successfully leave "who" out of the equation, then maybe it's scientific as long as you check all the right boxes. If you can't leave "who" out of the equation, then it can never be as scientific as it purports to be.
Thanks Chris. I haven't read the McLoughlin et al. study in any depth yet. My first reaction is that it does rest on a problematic operationalisation of "misinformation" (e.g., treating fake news from disreputable sites as if that's representative of misleading content generally, and treating true news from ostensibly trustworthy sites as if that's representative of reliable information generally). However, I'll need to dig into it more before having a take.
I'm not sure I follow the "who" comment. In a sense, I think the objectivity constraint for expansive definitions of misinfo would be difficult to satisfy no matter who tries to satisfy it (unless they're God). However, I also think there are specific biases and forms of partiality that arise given who misinformation researchers in fact are, as I've written about before.
Oh, sorry if I wasn't clear. By "who," I meant that apart from any questions about misinformation itself (appropriate definitions and how to identify cases scientifically), there is a third, implicit question of *who* is "responsible" for that scientifically defined and researched misinformation. Whether framed morally or not, it is impossible to make claims about the "what" and "when/where" without also making claims about who is generating and driving it - and this is where coalitions, agendas, taking sides, politics come into play. Certainly the researcher's own positioning speaks to the "who" question, but I was more focused on assumptions about the people who (either with good intentions or in bad faith) are behind its persistence.
One last comment on the McLoughlin et al. study: several of the authors have attested to the sheer amount of blood, sweat and tears (and enormous delays) that went into this study, which is rightly lauded as an epic achievement. Nevertheless, I have to ask whether these superhuman efforts support the value of such research or in fact point to the opposite: that maybe this kind of research is simply too hard to do well and not worth the enormous labor and cost. Perhaps that's even due to some of the conceptual problems you've been discussing here. Not to mention, this data is now five years or more out of date! The researchers deserve a medal and a lifetime's worth of funding, but... still.
I wonder if the lowest-hanging fruit might be to label as misinformation any memes that can be traced back to known deliberate misinformation factories in Russia and China.
My first introduction to misinformation came through the public reporting of two incidents which I was already acquainted with before they made the news. One was Gamergate. Gamergate, as I knew it, was a scandal inside the game industry in which publishers who took out expensive ads in gaming magazines got better reviews than publishers who did not. I never even heard the allegations of misogyny and personal threats until I heard them from mainstream news media. When I did a search to see what news outlets were saying about it, every single "reliable source" had a wildly distorted view which said nothing about the actual issues in Gamergate, just a story about misogynist gamers. The only source that was remotely correct was Breitbart.
Later, a game developer's conference that I was interested in was cancelled. This spread across the news sites rapidly. The story they were telling was that it was cancelled at the last minute because those sexist gamers hadn't invited any female speakers, and the college it was to be held at required all events to have a female speaker.
This sounded strange, particularly as the conference had a female speaker scheduled. So again I searched and searched, but none of the reputable sites had the true story, which was that the female speaker had cancelled on the morning of the conference, after everyone had already flown there and checked into their hotel rooms; and the college cancelled it literally as it began. The only news source that had the true story was... Breitbart.
So I'm very skeptical of any classification of news outlets into "disreputable" and "trustworthy". There are no reputable sources; only sources that tell one side of the story, and sources that tell the other side.
I am a big fan of Starbird's work actually -- because it's an analysis of *information campaigns* and one of her central points is that in principle, there is very little difference between an "authentic" social media partisan poster versus an inauthentic one (say, one managed by the IRA in Russia). Their posting activities are very similar; the inauthentic ones are simply mimicking what the authentic ones do. That's I think what she's getting at in the quote you used.
The thing is: this sort of thing happens on the left as well. Just as it's difficult to see any kind of top-down leadership in social media information campaigns on the right, it's the same with left-wing information campaigns. The problem is that most researchers in academia are on the left so they are much more likely to study right-wing information campaigns (which then get labeled as "misinformation"/misleading narratives); when they study left-wing ones, the style of analysis is more celebratory (e.g. see the book #HashtagActivism: Networks of Race and Gender Justice).
What, I think, is needed here is people who are moderate/centrist (even right-wing) doing a skeptical and rigorous analysis of left-wing information campaigns without the rose-tinted glasses. I would be up for doing it although I don't have the computational skills that are required to scrape large amounts of social media postings.
This comment was really inspired by your point that "Determining whether communication or broader narratives are misleading is incredibly complex. In my view, much popular research and commentary concerning misinformation is, ironically, guilty of amplifying true scraps of information to support the misleading narrative that misinformation is widespread and highly dangerous."
What you're saying is that the left-wing narrative that "misinformation is widespread" on social media is itself an information narrative; and, as Starbird says, it is not driven by top-down leadership but by some kind of interaction between elites and regular posters. It would be interesting to see just how and why it gained the cachet it did on the left. I am trying to do it by looking at how the media covered Cambridge Analytica--a simple scandal about Facebook's lax enforcement of its data-sharing rules that then became about anything and everything: from Russian interference to mind control and misinformation. But as with all projects, it's proceeding slowly. I would love to collaborate with others who are trying to pursue such projects!
So, would you say that Plato was the first misinformation researcher? He thought philosophers should be in charge because they have escaped the cave and have "special access to the fabric of reality." Maybe the misinfo warriors should just relax and let the philosophers take over.
Ha. It is interesting how the same ideas repeat throughout history (and how many go back to Plato).
Labelling something as misinformation is essentially just a pejorative way of dismissing an opinion with which one disagrees. The problem with the whole concept of misinformation is its unarticulated assumption that the debate is about information when it is really about opinions.
The people who believe there is no objective reality, that we form opinions based on lived experience, and who insist indigenous wisdom is as important as the scientific method nonetheless claim there are trained experts who have the ability to judge whether specific information is true or false, and that those experts can somehow always agree on important issues, despite each one having a different background and personality. The current framework around misinformation is a scapegoat for removing things people with authority dislike. Unless we use a universal principle, such as mathematics or a divine book, we cannot realistically pinpoint misinformation; we can only use some statistic (like prediction markets) to assess its probability.
In fairness, there's probably not much overlap between relativists and misinformation researchers, although I have seen it in some cases and found it very strange.
Got this one down to 1337 words. Making progress!
Ha - thanks!
I understand your crusade in this from one angle. There's the first-Trump-era rise of cancel culture and the like, of which a sort of memeified misinformation thread was a common piece. On the other hand, I also feel like you're bogging yourself down in semantics. As in, it's simply true that illegal immigration across the Mexico border was WAY down from its 90s peak (and still is, except for the post-COVID surge in 2021), and that it wasn't a top concern for right-leaning voters until Trump hit upon it as a good emotional anchor for various dissatisfactions (some real, some legitimately politically debatable) among many Americans.
I agree with you that among scholars it should just be called something like propaganda research, since producing propaganda is something all political movements, parties, and candidates do. But a legit and definitely statistically studiable question to research is how Trump gained such momentum with an emotional anchor backed by fabrications and cherry-picked moral-panic-type stories around immigration. The specifics of what's surely real, politically debatable, cherry-picked, or surely fabricated seem kinda beside the point.
Well, beside the point of that project, maybe, but that's different from the project I'm criticising - thanks in any case.
How so (a different project)? Honest question. I don't dispute that there's a lot of sciency (as in truthy) advocacy in the academy. But isn't the question being asked by mis/disinformation researchers usually more or less the one I proposed to call propaganda research with the illustrative example of how Trump has been so successful anchoring his appeal on objectively false fears of a massive surge in illegal immigration?
No, not the research I'm familiar with. Focusing specifically on one propaganda campaign and studying its impact is very different from establishing broad generalisations about misinformation in general (e.g., about its fingerprints, how fast it spreads, people's differential susceptibility to it, interventions that reduce this susceptibility, etc.) - all of that requires not just the ability to identify specific false claims but to reliably and impartially distinguish misinformation from non-misinformation across diverse contexts.
Fair enough. It is a problem that's easy to formulate poorly. I'm not keeping up on the space professionally or anything. I feel like I've seen things, for example, trying to look at election denial, where it kinda doesn't matter what the adjudication of a specific piece of media is: if you can classify it as even just "asking questions," you know it's lie-based propaganda that you can study as disinformation/propaganda. I agree it's not useful to more or less call mostly anything right-wing misinformation and study that as misinformation.
On LinkedIn I follow a few 'leading' mis- and disinfo scientists, or, perhaps better, political activists. Their research typically amounts to kicking in an open door; it's a remarkably petty field as far as I can see.
A recent LinkedIn example:
'Important new paper in Science Magazine finds across 8 studies & multiple platforms, time periods, & definitions of misinformation that (a) misinformation evokes more outrage than trustworthy news & (b) people are more willing to share outrage-evoking misinformation without clicking on it'
https://www.linkedin.com/feed/update/urn:li:activity:7268002314131488769/?commentUrn=urn%3Ali%3Acomment%3A(activity%3A7268002314131488769%2C7268539464862384128)&dashCommentUrn=urn%3Ali%3Afsd_comment%3A(7268539464862384128%2Curn%3Ali%3Aactivity%3A7268002314131488769)
They have merely 'discovered' that emotion is very effective at engaging the reader. The media have been doing that for ages.
I like Justin E. Lane's comments on that LinkedIn post. Note especially how the post's author and disinfo 'science' hotshot Sander van der Linden (BBC etc.) doesn't seem to understand the matter when he replies to Lane, as Lane's second comment makes clear.
Thanks for sharing this. I need to read the paper in depth but I share your assessment based on my current understanding of it.
You're welcome. Lee Jussim just released a, let's call it, contributing essay on his Substack:
"In this essay, I explain why, if the only thing you know is that something is published in a psychology peer reviewed journal, or book or book chapter, or presented at a conference, you should simply disbelieve it, pending confirmation by multiple independent researchers in the future."
https://unsafescience.substack.com/p/75-of-psychology-claims-are-false
Dan, are you willing to take a shot at defining ‘truth’, subject to the following criteria?
https://substack.com/@michaelkowalik/note/c-79411321
Above my pay grade!
It is above anyone’s pay grade, and yet someone must attempt it, or else it is unclear what we are talking about, or whether the central principle of society is consistently or even meaningfully applied.
Another definitional problem with "misinformation" is the unarticulated assumption that it is about information. Most of the time it is simply an opinion, which may have little or no significant informational content. In the typical case, therefore, the allegation of misinformation comes down to little more than "this is an opinion with which I disagree."
"For example, some researchers suggest “misinformation” should refer to “any information that is false” ":
I would say that the assertion that any information is True or False is the mother of all misinformation. "Information" is mathematically defined by information theory as a measure of how much, numerically, a message adjusts our probability estimate for the truth or falsity of a proposition. True is mathematically defined as the state of a proposition having supporting evidence which proves that its probability of drawing the correct conclusion in a randomly-chosen situation is 1, and False is defined as having evidence proving that probability is zero. It's easy to prove, using Bayes' Theorem, that it would take an infinite amount of information to prove that any proposition is true or false.
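To make the Bayes' Theorem point concrete, here is a minimal sketch in Python, using the log-odds form of Bayes' rule (the function names and example numbers are my own): each observation adds only a finite increment to the log-odds, so no finite amount of evidence pushes a probability to exactly 1 or 0.

```python
import math

def update_log_odds(log_odds: float, likelihood_ratio: float) -> float:
    # Bayes' rule in log-odds form: each observation adds a finite
    # increment, log(likelihood ratio), to the current log-odds.
    return log_odds + math.log(likelihood_ratio)

def probability(log_odds: float) -> float:
    # Convert log-odds back to a probability, always strictly in (0, 1).
    return 1.0 / (1.0 + math.exp(-log_odds))

# Start at 50/50, then observe 10 pieces of evidence, each 10x more
# likely if the proposition is true than if it is false.
log_odds = 0.0
for _ in range(10):
    log_odds = update_log_odds(log_odds, 10.0)

# Extremely close to 1, but no finite number of updates reaches exactly 1.
print(probability(log_odds))  # 0.9999999999
```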
Note how defining "information" runs into a similar definitional versus objectivity problem.
If there seems to be no tension here, perhaps because "the maths works" (internal consistency), then the problem is simply shifted to the conditions under which credences are representative of what they are presumed to represent. A conceptual, parsimonious sample space is highly unlikely to fit well with the complexity of reality, especially while reasoning under deep uncertainty.
Shannon information is selective of a particular frame of reference, and those things held invariant in such frames are not invariant in all frames.
I'm unclear what Shannon's definition of information has to do with this topic.
To my knowledge, it's the most commonly adopted definition of information in information theory.
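For concreteness, here is a minimal sketch of that standard measure (the example probabilities are invented): Shannon information quantifies surprise relative to a probability model, and is silent on whether a proposition is true or false.

```python
import math

def self_information(p: float) -> float:
    # Shannon's self-information of an event with probability p, in bits.
    # Rarer events are more informative; a certain event carries none.
    return -math.log2(p)

def entropy(dist: list[float]) -> float:
    # Expected information per symbol of a discrete distribution.
    return sum(p * self_information(p) for p in dist if p > 0)

# A fair coin flip carries 1 bit; a heavily biased coin carries much
# less, because its outcome is mostly predictable in advance.
print(self_information(0.5))   # 1.0
print(entropy([0.5, 0.5]))     # 1.0
print(entropy([0.99, 0.01]))   # ~0.08
```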
Phil's point about whether information could be true or false is an insightful one, and getting specific about the definition should then be helpful in seeing whether or not the challenges mentioned in the article are dispelled at this level of analysis.
I disagree. We've built many machines which work in the real world using information theory, and have never run into problems caused by subjectivity. Shannon information is defined within a framework of how you discretize your sensory input, but there has never been a case where this caused an insoluble problem. The worst that's ever happened is that people needed higher-resolution discretization, more computational power, or (in rare cases) a bit of randomness, prime numbers, or different pixel configurations, to avoid illusions caused by interference between repeated patterns in the sensory grid and repeated patterns of a different length in the real world.
There is no reason to expect all information to reduce to Shannon information. That doesn't mean we shouldn't use the tool, given its utility.
As far as I know, there is no formal means to identify non-Shannon information. There's no reason to expect completeness, only self-consistency.
Re. "one of a handful of genuine discoveries in philosophy is that concepts don’t need to be definable to be meaningful or useful":
No; that particular discovery was that concepts don't need to be definable by a list of necessary and sufficient conditions, expressed in words, which can unambiguously declare that an instance Is or Is Not an instance of that concept.
Concepts still need to be definable to be useful; they just don't need to be unambiguously definable, don't need to give a 100%-correct "true" or "false" output to the question "Is X an instance of concept Y?", and don't need to be defined in *words*.
A frog has a concept of "a fly" which doesn't care about wings or color, but only about motion and size. An outfielder has a concept of "a fly ball that I can catch" which is similarly defined. Neither could express this concept in words. Both will be wrong sometimes. A self-driving car constructs concepts like "sense patterns which are usually followed by the right wheels being off the road unless certain actions are taken". These sense patterns aren't built out of linguistic propositions, but out of numeric weights between many nodes which are linked on one end to sensors, and at the other end to effectors.
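To make that concrete, here is a toy sketch (the features, weights, and threshold are all invented for illustration): the "concept" is a numeric decision rule over sensory features rather than a verbal definition, and, like the frog's, it will sometimes be wrong.

```python
def fly_concept(size: float, speed: float) -> bool:
    # A toy 'concept' in the frog's sense: a weighted numeric rule over
    # sensory features, not a list of necessary and sufficient conditions.
    # The weights and threshold are invented; real ones would be learned.
    score = 2.0 * speed - 1.5 * size   # small, fast things score high
    return score > 1.0

print(fly_concept(size=0.3, speed=1.2))  # True: small and fast -> "fly"
print(fly_concept(size=2.0, speed=1.2))  # False: too big
print(fly_concept(size=0.3, speed=0.4))  # False: too slow (misses some real flies)
```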
The more-important philosophical claim here, I think, is that of the logical positivists, who said that a statement is meaningless unless it's either a tautology, or empirically verifiable.
Thanks. Re. the first point, I think that's just semantics - on most interpretations of the term "definition", the point stands. Re. the second point: the logical positivist view is mistaken, as is well established in philosophy, so I'm not sure I follow there.