Did online misinformation fuel the UK riots?
On the importance of thinking carefully about complex problems
The UK has been experiencing its worst riots in over a decade.
On July 29th, three young girls were murdered in Southport, a town in the north of England. The following evening, violent rioters in the town attacked a mosque and local police. The riots then spread to other parts of England and into Northern Ireland, where rioters continued to attack mosques and the police, as well as hotels housing asylum seekers.
Since the riots began, many commentators have coalesced around two ideas. First, online misinformation has played a major role. Second, the government should therefore dramatically increase censorship and regulation of social media platforms, perhaps even banning them if they fail to clamp down on misinformation.
The first idea—that online misinformation has fuelled the riots—seems undeniable.
The initial riots in Southport were triggered by fake news spread on social media that the man who murdered the three girls was a Muslim asylum seeker named “Ali al-Shakati”, known to the security services as a possible terrorist threat.
In fact, the police later revealed that the suspect was named “Axel Rudakubana”. Born in Wales to parents from Rwanda, an overwhelmingly Christian country, he is not an asylum seeker and is likely not a Muslim either.
It is difficult to think of a clearer example of the dangers of online misinformation. Nevertheless, much of the commentary surrounding its role in the riots has been simplistic, uninformed, or misguided. It exaggerates and misrepresents misinformation’s impact and encourages naive policy solutions that might backfire.
“Legitimate grievances”
Some commentators dismiss the role of online misinformation in the riots because they think the riots reflect legitimate grievances about mass immigration, multiculturalism, and liberal elites.
That is not my view. There are undoubtedly many problems in the UK and many legitimate critiques one could raise about immigration policies, government failures, policing, economic stagnation, and so on. Nevertheless, these legitimate grievances are not driving the rioters, a loose collection of hateful, far-right fanatics and opportunistic, drugged-up thugs attracted to hooliganism and a good fight.
Sometimes, riots are symptomatic of deep, complex societal conditions. Sometimes, they erupt spontaneously from prosaic factors like strong emotions, stupidity, antisocial personalities, drugs and alcohol, sunny weather, and run-of-the-mill racism and hatred among people who lack impulse control. This is the latter situation.
Given this, my scepticism about the impact of online misinformation in these riots is not driven by a desire to legitimise them, paint them in a more positive light, or—as much of the pundit class has done—connect them to “root causes” that coincidentally align with my personal dislikes.
I have the opposite motivation: I think more responsibility and blame should be placed on the shoulders of far-right activists, thugs, and hooligans, not less.
1. What role did fake news really play?
The initial riots were triggered by online fake news about the identity of the Southport murderer. Although it is uncertain how many of the initial rioters engaged with and accepted this fake news, some clearly had. Nevertheless, those who assign significant weight to the misinformation in explaining the riots are not just describing how events unfolded. They assume that the riots would not have occurred without online fake news.
Because history only unfolds once, we will never know whether this counterfactual assumption is true. However, there are some reasons for scepticism.
Did the real identity of the suspect matter?
The initial fake news alleged that the Southport murderer was a Muslim asylum seeker. In reality, the suspect is a black British teenager born to Rwandan immigrants. How plausible is it that the violent rioters—primarily a group of far-right, racist, low-information thugs and hooligans—would have responded very differently to the real news?
Consider a Substack post by Matt Goodwin, who has repeatedly argued that the riots reflect legitimate grievances:
“These poor children… ended up being murdered. And who murdered them? The son of immigrants from Rwanda… [S]omething has gone terribly wrong in this country… The now inescapable conclusion that we’ve simply let too many people into our country who hate who we are.”
For Goodwin, the fact that the suspected killer is the “son of immigrants from Rwanda” supports the “inescapable conclusion that we’ve simply let too many people into our country who hate who we are.” Later in the post, he rants about illegal immigration, Muslim grooming gangs, and two-tier policing.
Does Goodwin have any evidence that the suspect’s parents “hate who we are”? No. That the suspect, who is not an immigrant, “hates who we are”? No. That the suspect or the suspect’s parents are connected in any way to illegal immigration, small boats, or grooming gangs? Also no.
If such simple factual details do not matter to Goodwin, an upper-middle-class writer with a PhD, I am sceptical they matter much to the low-information, racist, drunken thugs and hooligans perpetrating the riots.
In support of this scepticism, it is noteworthy that the riots continued and even spread after the initial fake news was debunked.
Can misinformation be blamed?
When misinformation is blamed for people’s actions, it is typically because the actions would have been justified if the information had been true. For example, if the Democrats had stolen the 2020 US presidential election, the behaviour of January 6 rioters would have been reasonable, even heroic. Similarly, if vaccines did cause autism, it would be sensible to avoid vaccination.
The situation is completely different in these riots.
Even if the suspected Southport murderer had been a Muslim refugee, the riots would have been equally appalling, hateful, and unjustified.
Of course, factors can explain people’s behaviour without justifying it. But the distinction in this case is important. Whereas Democrats in the US do not rig elections and vaccines do not cause autism, it is perfectly conceivable that a Muslim refugee could commit awful murders. In fact, in 2020, a Muslim refugee from Libya did.
If hateful, violent, far-right riots can erupt in response to correct information, this suggests the problem is hateful, violent, far-right thugs rather than online misinformation. It also casts significant doubt on the idea that such riots would not emerge without online misinformation.
2. What role did social media play?
Ethnic riots predate social media
In 1903, the Jewish population in Kishinev (now the capital of Moldova) was falsely accused of murdering children to use their blood in a religious ritual. In response to the rumours, inhabitants of the city murdered 49 Jews, raped dozens of Jewish women, and destroyed hundreds of houses and stores owned by Jewish people.
The pogrom exemplified a depressingly common pattern. In The Deadly Ethnic Riot, Donald Horowitz documents the frequent role of false and unsupported rumours in roughly 150 ethnic riots across 50 countries and multiple continents.
The fact that this pattern is cross-cultural and long predates social media casts doubt on explanations of the UK riots that assign a lot of weight to social media. False, toxic rumours used to justify hatred are as old as humanity. They range from strategic witchcraft accusations in hunter-gatherer communities to baseless conspiracy theories used to justify numerous genocides throughout the twentieth century.
Of course, this does not mean online fake news was irrelevant during the UK riots. Because so much communication today occurs on social media, it is unsurprising that information—including misinformation—connected to the riots was shared on it, too. (Much communication also occurred via local word of mouth and private messaging channels on Telegram and WhatsApp.) Nevertheless, given that hateful, riot-justifying false rumours are as old as humanity, it seems unlikely that social media was the main problem in these riots.
Historical ignorance
More generally, it is noteworthy how much commentary about the role of social media in these riots lacks historical context.
A headline in The Guardian informs readers that “the far right has moved online, where its voice is more dangerous than ever.” Was it not more dangerous when it ruled over half of Europe, started a world war, and systematically exterminated millions of people?
Even focusing just on the UK, far-right groups and far-right attitudes have an extremely long history. Enoch Powell delivered his bigoted “Rivers of Blood” speech in 1968, which contained unsubstantiated, hateful rumours intended to demonise immigrants, including an anecdote describing immigrants posting excrement through a widow’s letterbox.
Most of the public (≈75%), which was roughly 98% white at the time, agreed with the speech.
More generally, throughout much of the late twentieth century, many British Asians confronted frequent racist abuse and threats of violence.
Over the past several decades, coinciding with the emergence of social media, British attitudes have become less racist and more sympathetic to immigrants and immigration, which is a general pattern across Western societies. In the recent UK election, a majority of white people voted for explicitly anti-racist, pro-immigration, progressive parties, which was an increase from 2019. Only 16% voted for Reform (an anti-immigration party), and only 49% of Reform voters think the rioters have “legitimate concerns”.
I am not suggesting social media is responsible for these broadly positive events or trends. Nevertheless, it is noteworthy that when pundits and commentators blame social media for things, they rarely even identify correlations, let alone provide evidence of causation. In fact, they rarely bother to establish that the negative trends they seek to explain exist.
3. Unsubstantiated claims about unsubstantiated claims
People generally spread fake news online because they do not bother to carefully evaluate whether the claims are true or supported by evidence. Given this, you would think that those who complain about online fake news would be extremely careful to ensure their claims are supported by evidence.
Enter Carole Cadwalladr. Like so many other influential pundits and commentators, she has repeatedly advanced unsupported claims about the impact of Russian influence, Cambridge Analytica, and social media on British and American politics in recent years. On August 3rd, she turned her focus to the riots.
In “our new age of algorithmic outrage”, Cadwalladr writes, the riots were “depressingly inevitable”.
The central thesis of Cadwalladr’s article is that social media is a “polarisation engine”. This is a surprising claim. There is very little evidence for it, and there appears to be no clear relationship between polarisation and social media use across countries. For this reason, a recent review article in Nature criticises the “misperception” that “social media is a primary cause of broader social problems such as polarization.”
Cadwalladr does not address such objections or even consider them. She also offers no evidence to support her main thesis. Instead, she quotes alleged experts to justify her analysis, including Maria Ressa, a Filipino journalist and “trenchant tech critic”.
Ressa acknowledges that “there’s always been violence” but argues that “[w]hat’s brought violence mainstream is social media.” To defend this claim, she points to the January 6 riots and argues that “people wouldn’t have been able to find each other if social media didn’t cluster them together and isolate them to incite them further.”
Has violence become more “mainstream” since social media arrived? Were extremists, fanatics, insurrectionists, or conspiracy theorists incapable of finding each other and launching riots before social media emerged?
The answer to both questions is obviously “no”. Violent crime in the UK has fallen dramatically in recent decades, as in other Western countries. And wars, coups, insurrections, pogroms, genocides, extremism, and so on were common—in fact, more common—in human history before the emergence of social media.
In another popular article in Time, expert Jacob Davey is quoted as saying that social media is essential for extremist groups to galvanize a “spark to flash”: “We wouldn’t see the types of activity we saw over the weekend [i.e., the riots] without it.”
If so, how do we explain the fact that we literally did see these “types of activity”—extremist groups, race riots, rapidly spreading false rumours, and so on—very frequently before the emergence of social media?
4. Misinformation is not a virus
In attempting to explain the role of online misinformation in the UK riots, experts and journalists have turned to their favourite metaphor: the “misinformation virus”.
An article in The Independent tells readers that
“Twitter – now called X – is where a foul virus spread in the wake of the horrendous stabbings of numerous children in Southport on Monday. That virus led to the rioting the very next day – and since.”
The influential psychologists Sander van der Linden and Richard Bentall agree.
As I have argued before, this is an extremely misleading metaphor for understanding human communication and the influence of misinformation.
Motivated misinformation
As is usually the case with ethnic or race riots, the UK rioters and their supporters actively sought out information that would rationalise their racist beliefs and actions. The simple reason a small minority of far-right activists, propagandists, and opportunistic pundits were so eager to spread and accept unsupported rumours that demonised Muslims and refugees was that they were strongly motivated to do so.
Such rumours were not a “mind virus” infecting hapless victims. Even if the initial fake news duped some people, it was primarily a pretext, an excuse, which formed part of a more general demonising narrative synthesised and spread among strategic agents motivated to rationalise their worldview and actions.
This rationalising function is evident in their eager acceptance of the initial fake news. It also explains why the riots and their apologists continued long after the original fake news had been debunked.
This process is nothing like the contagious spread of a viral disease.
Exposure ≠ acceptance
In addition, misinformation does not spread via mere exposure or “contact”.
People's beliefs are shaped not directly by the information they encounter but by complex interpretations of the information informed by preexisting views, experiences, worldviews, and interests. Given this, different people interpret the same information in different—sometimes radically different—ways.
Moreover, whether people accept information at all depends on whether they trust the source. Those writing about the “misinformation virus” in the UK riots were not themselves “infected” by it. This is because they do not base their beliefs on the attention-seeking lies of people like Andrew Tate or fringe online news websites. They await confirmation from sources they trust, like the BBC.
If so, this suggests that many problems of “misinformation” are not symptomatic of rampaging mind viruses; they reflect deep-rooted institutional distrust among disaffected segments of society.
Of course, if this were merely about a metaphor’s explanatory value, it would be of purely academic interest. However, the metaphor's core assumptions distort how many policymakers think about this issue.
5. The false allure of censorship
In response to the role of online misinformation in the riots, many politicians and pundits have proposed solutions involving greater censorship, greater punishment of those who spread misinformation online, and greater regulation of social media platforms.
These solutions all make perfect sense if you consider misinformation a contagious virus.
If misinformation infects people’s minds and causes them to behave in harmful ways, then reducing the amount of misinformation will reduce the number of people who are infected and, therefore, reduce harmful behaviours.
This simple model of misinformation is appealing, popular, and wrong.
Misinformation and misperceptions
Many policymakers assume that allowing misinformation online will necessarily increase the number of people who believe it. Of course, they do not think it will make them believe it themselves. The danger lies with other people, the gullible mass public, who lack the ability to identify misinformation. For this reason, such policymakers also assume that reducing the amount of online misinformation will automatically reduce the number of people who believe false things.
This picture of the relationship between misinformation and what people believe is mistaken.
Because belief formation is highly complex and mediated by interpretation and judgements about the trustworthiness of sources, there is no simple linear relationship between misinformation exposure and misperceptions.
Some people see Elon Musk’s tweets and agree with him. Others are horrified and reduce their trust in Musk as a source of information. Given the overwhelming opposition among the British public to the recent riots and the far-right, I suspect more people fall into the latter category than the former.
Because discovering that someone spreads misinformation often causes people to distrust them, exposure to misinformation can actually improve the accuracy of the audience’s beliefs. For example, by assigning much less credibility to Elon Musk since the riots began, people will be less likely to accept his misleading claims in the future. Once you distrust a source, no amount of misinformation they throw at you will make you believe it.
At the same time, when people are strongly disposed or motivated to believe something, they require very little exposure to misinformation before adopting inaccurate beliefs. In some cases, misinformation is not necessary at all. Popular misperceptions often emerge from simple ignorance, unreliable intuitions, over-generalising or misinterpreting experiences, or encountering a steady stream of accurate but unrepresentative information, which is the norm in mainstream media.
For these and many more reasons, it is a simple fallacy to assume that reducing the amount of online misinformation will automatically improve the accuracy of the public’s beliefs.
Censorship has costs
Moreover, censorship and social media regulation also have various costs and risks.
First, knowing what people in society actually think can be helpful. It helps leaders and citizens understand public opinion and highlights areas where it is important to correct misinformation and address misperceptions.
Censorship causes self-censorship, inevitably hiding what people think and feel about various issues. This creates volatile, unpredictable societies. If you want to know how people will behave and vote, you must know what they really believe.
This might not matter if censorship automatically reduced popular misperceptions and conspiracy theories. But for the reasons already documented, it does not. Even worse, it might exacerbate them.
Online engagement with low-quality fake news is concentrated among a relatively small minority of the population, including in the UK riots. This group has many distinctive characteristics, including strong institutional distrust and anti-establishment attitudes. It is implausible that aggressive online censorship and speech regulation would reduce such feelings; it seems more likely to aggravate them.
These are risks even if censors are infallible, which they never can be. Censors inevitably make mistakes, whether by accidentally censoring legitimate content or by censoring fake news in a one-sided, biased way. When that happens, the consequences can be explosive.
It is tempting to assume that 99% reliable censorship is 99% as good as 100% reliable censorship. But this is another popular fallacy. Sometimes, getting 99% of the way to a destination leaves you worse off than if you had stayed where you were. (If your flight to Sicily drops you into the sea after 99% of the journey, you will drown.)
Many people in the UK are already angry about biases in applying speech laws. There is a risk that increasing censorship in response to the riots will inflame this perception, exacerbating feelings of anger and institutional distrust that drive some people further towards extremist and anti-establishment politics.
These reflections do not settle the issue of how the government and social media companies should address online misinformation. Even if many pundits and politicians think about online misinformation in simplistic and misguided ways, false and misleading online content clearly does sometimes have negative social and political consequences.
Nevertheless, policymakers should acknowledge the complexity of these issues. When faced with such complexity, it is a good idea to pause, reflect, hear from experts with diverse views, and avoid making hasty decisions based on simplistic and fallacious assumptions. In the rush to respond to the riots, I worry that the UK government is doing the opposite.