Effective misinformation will adopt whatever tone/communication standard that its targeted audience is accustomed to associating with valid information. Your example of “enhanced interrogation” reflects this. In this sense the proposed inoculation technique wouldn’t be effective against purposeful, well-crafted disinformation, as you mention.
I think it’s also worth pointing out that since van der Linden’s book, LLMs have made it much, much easier to disguise any of the telltale “fingerprints” of misinformation that are often just symptoms of poor communication technique combined with outsized enthusiasm on an issue.
All that said, a headline that reads “Wicked, corrupt fascist leader conspires to eat beloved pets of true patriots” should throw up some misinformation red flags, and though language this obviously crafted as propaganda can make unstable individuals act in dangerous ways, I don’t think it’s the primary problem.
Three excellent points:
"Effective misinformation will adopt whatever tone/communication standard that its targeted audience is accustomed to associating with valid information", and
"..., LLMs have made it much, much easier to disguise any of the telltale “fingerprints” of misinformation that are often just symptoms of poor communication technique combined with outsized enthusiasm on an issue." [I see this as similar to the problem of looking for trends predicting the stock market - the market adjusts to any known trend. Also similar to the ongoing price of freedom: "eternal vigilance, AND dynamically adaptive response."]
"...though language this obviously crafted as propaganda can make unstable individuals act in dangerous ways, I don’t think it’s the primary problem."
Agree! We rightly fear most the determinant lie we see too late - or never see at all.
Dan, at the risk of joining the "wrong side" of this debate, I tend to agree with the vast majority of your post here (I, too, am suspicious of the inoculation claims, so I was obviously biased in that direction to begin with). Part of the problem I have is that a lot of the inoculation research is attempting to help people develop heuristic rules to spot (and subsequently reject) misinformation without actually establishing that these rules are helpful for differentiating anything that isn't at one end of the extreme or the other. In other words, show me evidence that these heuristic rules are helpful for differentiating true information from misinformation as we approach the gray-area middle of the spectrum (though spectrum is not really all that accurate).
But another one of my big problems is that "misinformation" has become this umbrella category that has lost any sense of meaning (if it ever had any to begin with). The categorization of content into misinformation vs. true information is overly simplistic. What degree of inaccuracy is required for something to be categorized as misinformation (how many unsupported premises/claims, how much faulty logic or cherry-picked evidence is required)? Is one bad part of an argument sufficient? Does it have to come from sites like InfoWars (where the misinformation may be blatant and recurring)?
Anyway, I won't write an essay-length response here, but from a strictly decision-making perspective, I would tend to agree that a lot of your arguments line up well.
>But another one of my big problems is that "misinformation" has become this umbrella category that has lost any sense of meaning (if it ever had any to begin with).
I agree. I wrote a little about this myself:
https://woolery.substack.com/p/misinformation-is-the-wrong-word?r=ba1ue
I join Matt, and would add that it would be useful to get follow-up studies on the proportion of participants who keep actively using these newly adopted heuristic rules when facing new information. The cost of learning such rules and the (expected) low rate of application would make those literacy programs of little use in the long run.
"[The conspiracy heuristic] is only ever used when it’s superfluous. When better reasons exist, it adds nothing at best and may even make the negative case seem flimsy. When no better reasons exist, it does active harm."
https://thethirdedge.substack.com/p/the-conspiracy-heuristic-is-a-bug
So let me get this straight: they claim that "Scholars and responsible fact-checkers tend to employ careful analysis, judicious reasoning, and neutral language to call out mistakes, as these are signals of objectivity," while simultaneously accusing you of being in denial and burying your head in the sand? That’s quite the contradiction—if not an ironic departure from their own stated principles.
Indeed. I couldn't help noticing that the respondents were leaving many of the same fingerprints the original article describes. It was starting to feel like their study of what people do when they are misinforming others was autoethnographic :D
On the issue of misinformation, I agree with your stance.
I would just add that we are, after all, undergoing a major transformation—one in which many of our long-held convictions, truth-claims, and narratives are being redefined, while worldviews and perceptions are challenged.
In the midst of such a shift, drawing a clear, straightforward distinction between misinformation and truth is incredibly difficult—unless, of course, one approaches it with religious certainty.
Of course, in some cases, the line is clearer—such as when organisations like the UN, UNICEF, Greenpeace, the Red Cross, and major news outlets like the BBC and The Economist spread misinformation about climate change, distorting perceptions of empirical reality (as I have written about here: https://open.substack.com/pub/wehavebeenfooled/p/if-misinformation-is-the-enemy-why?r=3gca8w&utm_campaign=post&utm_medium=web).
But I doubt "vaccines" against this kind of misinformation were ever considered in their proposed solutions for detecting falsehoods.
I notice that you’re using the rhetoric of misinformation to try to discredit those organizations with which you simply disagree. This is *exactly* the phenomenon that I think Dan is interrogating here.
What an incredibly absurd argument from someone who hasn’t even made the effort to read the essay. It is backed by substantial footnotes, including references from Swedish Radio’s investigative program Kaliber, which has done an outstanding job exposing the falsehoods.
Frankly, your message doesn’t merit a response, given its disingenuous attempt at a serious conversation. If that’s “exactly” what Dan is trying to interrogate, he certainly wouldn’t have liked my response.
Ironically, your response perfectly illustrates the rhetorical tactics you claim to criticise.
Random sarcastic responses:
1) All around bruised egos in this exchange. All the kids were trying to do was sell a book reinforcing the beliefs of college educated urbanites to supplement their academic income, and Dan had to disrespect their very shiny credentials with truth seeking.
2) What we need is empiricism! Studies were conducted! P-values were calculated! How dare Dan question how design may impact outcomes. When has a (social) scientific study ever given misleading results?
3) The ones who benefit most from calling all conspiracies misinformation are the conspirators.
I appreciate a good metaphor, and I think using "virus" and "inoculation" to describe misinformation and how to avoid it is a pretty good metaphor. But that doesn't mean that in the real world the metaphor acts in any way like the thing it's describing. There is no way to inoculate a person against misinformation except to provide her with the critical skills to examine an argument and its premises, and then consider the empirical evidence that might support the claim or, as Karl Popper advised, falsify it. It is probably the case that false or unfalsifiable claims are typically presented with more emotion than true ones (for instance, religious claims), but it seems to me that the manner in which an argument is presented has nothing to do with its truthfulness.
Toward the end of your essay, you present what is the real problem which had been lurking in the background all along: confirmation bias. People are built to seek out the information that confirms their biases (and those of their tribe) and ignore information that doesn't. This is humanity's default position and it is this tendency that rational people must steadfastly fight against. The Enlightenment and science gave us the tools to rise above our limbic urges, but it is a constant battle, and one we now seem to be losing.
I wish you luck here. People overwhelmingly like simple explanations of complex problems. (I have an ‘everyone is a reductionist’ bias). You’re tackling the most complex thing out there - why people believe what they believe. ‘Fake news’ is a wonderfully simple (and thus treatable) explanation for wrong beliefs. ‘Rebuilding trustworthy institutions and rebuilding trust in them’ may be what is actually required, but it’s based on a very complex explanation so will be a very tough sell.
Wolfgang Munchau wrote an interesting piece on UnHerd last month in which he basically argued that the EU establishment (and presumably other autocracies and parties that control their position/narrative through control of the legacy media) is so afraid of the rightwing/populist surge of the last decade that it desperately seeks to control rightwing/populist access to alternative media (which is where the opposition needs to go to get their message across), and that the way to do this is via a misinfo/disinfo/hate speech/fact-checker legal regime. Vance correctly called the EU oligarchs out during the Munich Conference by telling them that if their biggest fear is their own population and its views, then they are anti-democratic and the US would seriously consider pulling out of any alliance with them that was set up to protect democracy. Next to all the conceptual/philosophical problems concerning misinfo, I thought this bit of political analysis was useful in order to identify the forces driving the policies and laws coming out of oligarchical regimes (and the previous US administration).
I think your point about the risk of false positives actually needs to be stated more strongly, with more underlining of why that is bad. People don’t need to just avoid misinformation - they need trusted sources, they need accurate information they can rely on. When people are convinced a source is corrupt or can’t be trusted, it’s difficult to earn that trust back, and part of how people get radicalized is when their distrust in media sources becomes SO widespread that they lose exposure to a lot of what’s actually happening. Cherry-picking itself is dangerous not because people learn about the studies supporting the author’s viewpoint, but because they DON’T get exposed to anything else.
And I think the view that you can detect emotional fingerprints of misinformation is so fucking dangerous because it just encourages people to NOT CHECK THE FUCKING SOURCES.
Which combined is kind of the opposite of what people should be doing. Expose yourself to a lot of information, and evaluate the sources in detail.
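To make the false-positive worry concrete, here is a rough base-rate sketch. The numbers are purely hypothetical (not drawn from the inoculation studies or from Dan's essay); they only illustrate that when most of what people see is accurate, even a fairly sensitive "fingerprint" heuristic ends up flagging mostly accurate content.

```python
# Purely illustrative, hypothetical rates -- not figures from any inoculation study.
misinfo_share = 0.05         # assume 5% of items in a reader's feed are misinformation
sensitivity = 0.80           # the heuristic catches 80% of actual misinformation
false_positive_rate = 0.20   # ...but also flags 20% of accurate items as suspicious

flagged_misinfo = misinfo_share * sensitivity                  # 0.04
flagged_accurate = (1 - misinfo_share) * false_positive_rate   # 0.19
precision = flagged_misinfo / (flagged_misinfo + flagged_accurate)

print(f"Fraction of flagged items that are really misinformation: {precision:.0%}")
# -> about 17%: under these assumptions, most flagged items are accurate content,
#    which is exactly the trust-eroding false-positive problem described above.
```

Change the assumed rates and the exact figure moves, but the qualitative point stands: at low base rates of misinformation, false positives dominate, and each one chips away at trust in sources that were actually reliable.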
[The blindness about recognizing bias around “scientific sounding, calm, logical” sources is also interesting to me. It does seem like individual people might be susceptible to one kind of misinformation over another - overt rage bait a la Fox News designed to make you feel angry and persecuted vs. practiced decorum a la NYT etc. written to make you feel like a Very Informed Scholar. I wonder how much it’s impacted by class/education/party affiliation.]
Misinformation is "mistake" rebranded. Disinformation is "lie" rebranded. Why does this matter? Because it shifts the burden of proof. When you say something is wrong or a lie, people expect you to show your receipts. But when you use the newfangled replacement terms, somehow the claim is guilty until proven innocent.
The problem is that, in the Internet age, claims are talked "about" more than claimants are engaged "with," and so deploying a burden-switching label functions as an epistemic hit and run. By the time the claimant hears the accusation, the accuser is long gone. And probably no one who heard the accusation asked for the proof; they just took it on board.
https://thethirdedge.substack.com/p/misinformation-mistake-rebranded
I think your criticism of "emotional language" as an indicator of misinformation makes a lot of sense. Most of the misinformation propagated by so-called think tanks (on topics such as climate change) was successful precisely because they used neutral language, cherry-picked evidence, and sounded "sciency". When people "intend" to manipulate and convince, they probably refrain from using emotional language. I am also not convinced "inoculation games" will work for the same reasons a lot of "brain-training" games don't work in improving cognitive skills. People become really good at the game or similar tasks (near transfer), but it doesn't lead to improvement in the general skill it aims to transfer (far transfer: in this case, it would be "general skill to detect misinformation"). Not sure if the inoculation theory addresses the transfer problem.
The whole approach is advocating for truthiness in the most superficial manner.
Reading this - and no I haven’t read the background work - it seems like van der Linden discovered the ancient concept of “rhetoric” and relabeled it “misinformation fingerprints”.
While we’re there: Mark Antony’s speech at Caesar’s funeral. Misinformation or no?
I am late cuz it took me some days to get familiar with the work of your critics.
Basically, they cite studies and research to report the (shocking, first time in history) revelation that professional journalistic institutions usually provide more trustworthy information than social media influencers, and that illiterate, poor, and lonely people with mental health issues are more likely to believe fake/misleading information online. So they write books that not a single one of those people will read. And to deal with this new problem, we need to abandon free speech and democracy so the educated technocrats can choose what information to flag as true or false before the plebeians consume it, so they never vote against elite technocrats' interests.
The obvious problem with that system is 'who checks the fact-checkers' (or prebunkers, in Sander's case), because in most Russian media Ukraine being ruled by Nazi white supremacists is a fact, and anything refuting it is either censored (fact-checked*) or cured* with the Navalny treatment vaccine.
If institutions (and in most cases these people mean left-leaning institutions when they say 'institutions', much like 'real' Americans as used by the far-right crowd can mean white Americans) want to regain the trust of the masses, they have to use the time-tested remedy of more and better speech, not censorship. And if you want it to reach the populace, you need to make it marketable/entertaining and meet those people where they are, not write a book with zero effort/money spent on advertising.
Exactly this. We used to have words for this: the words "lie" and "mistake." These words, though, kept the burden of proof with the accuser. The newfangled terms place it on the accused.
https://thethirdedge.substack.com/p/misinformation-mistake-rebranded
This article repeatedly made me laugh, which is suspiciously evocative - do I detect misinformation?!?!
Such a clear and systematic discussion. Great stuff.
Dan, I am with you. All metaphor is simplistic and misleading.
A metaphor is not a physical model that can be defended by analysis or hypothesis testing. A metaphor ("misinformation is like a virus") is a figurative literary device that substitutes a complex subject (misinformation) with a simpler one (virus). It reveals similarities between the two domains, no doubt, but it obscures what are usually even more numerous and significant differences. It is a conceptual bait-and-switch growing in popularity among writers of popular science.
I write about the metaphors used to describe how the brain works in "Metaphors We Think By" (https://tomrearick.substack.com/p/metaphors-we-think-by) and how they have failed us. Don't get me wrong: I love metaphor and believe that analogy is a fundamental element of human cognition (and so does Douglas Hofstadter). But metaphor is not a physical model and should never be used as if it were.
The biggest flaw of the "misinformation=virus" argument is that vaccines train the immune system to look for evidence of the virus, usually characteristic pieces of the virus. Unfortunately, as the author states, these "characteristic pieces" can be part of true information as well. A self-destructive auto-immune response, anyone? Where people either become complete cynics and believe nothing, or say to heck with it and believe whatever they feel like?
Of course, the best way to defend against misinformation is hygiene, not inoculations. Stay away from untrustworthy sources of information. True-up information from new sources against known sources. Be ready to dig in to new information with a critical but curious eye. Bone up on rhetorical fallacies and learn to recognize them. Have a diverse information diet. Unfortunately, there are no shortcuts here--you have to do the work.