2025: Review and Recommendations
My top ten essays, how I use AI to read, and my favourite books, articles, and more.
I started this blog on January 1st, 2024, so I’ve now been publishing weekly essays here for over two years. It was one of the best decisions I’ve ever made. I’m grateful to everyone who reads and engages. Even the haters and losers (of which, happily, there aren’t many) often provide interesting and informative critiques.
I’m especially thankful to those who have paid subscriptions. I’m aware that many of you subscribe not simply to access paywalled articles but to support my writing. I’m truly moved by this. It’s also a helpful corrective to my broadly cynical views about human nature.
As of January 5th, 2026, the blog has roughly 19,800 subscribers. It averages approximately 120,000 views per month, though with substantial variance.
In this post, I will review the blog’s output from 2025, recommend the best things I read last year (as well as other favourites), and then briefly outline how I will approach this newsletter in 2026.
Year in Review
Based on the number of readers, here were my top ten essays in 2025:
Status, Class, and The Crisis of Expertise — This argues that one underappreciated factor driving the “crisis of expertise”, and hostility towards knowledge-producing institutions more broadly, is the humiliation and resentment felt by conservative voters with low levels of education, who view experts as a condescending and hostile social class. Among many others, it draws on the work of Thorstein Veblen (whose concept of conspicuous consumption inspires this blog’s title), Marcel Mauss, Will Storr, Musa al-Gharbi, David Hopkins, and Matt Grossmann.
Let’s Not Bring Back The Gatekeepers — This argues that the media transformations of the digital age have created new pressures and responsibilities for small-l liberals like me. Put simply, if you can no longer control the public conversation, you must participate in it, which, especially in recent years, too many liberals have been unwilling to do.
Is Social Media Destroying Democracy—Or Giving It To Us Good And Hard? — Much of the discourse about how social media is terrible blames engagement-maximising algorithms. Because companies profit by keeping people engaged and glued to their screens, algorithms feed people the epistemic equivalent of junk food: content that generates outrage and resentment, inflames our tribal instincts, and taps into negativity bias. Although important, I argue that a bigger factor is simply that social media has radically democratised media. Many people have ugly, illiberal, misinformed, and generally bad views and values, and social media gives them a platform and much greater consumer power. Admittedly, this view is not very politically correct to acknowledge, but it’s accurate.
On Highbrow Misinformation — There’s a tendency to think that “misinformation” is entirely something that right-wing elites, sinister corporations, and uneducated hoi polloi engage in. But in fact, there is a considerable amount of left-coded “highbrow misinformation” that circulates within the prestigious knowledge-producing institutions that bang on about the dangers of misinformation. I give many examples in this essay and also explain why and how such misleading content emerges and proliferates, often as a consequence of the politicisation and progressive groupthink that has captured many institutions.
The Case Against Social Media is Weaker Than You Think — This essay summarises and develops ideas from an article I wrote for Asterisk magazine (“Scapegoating the Algorithm”). The main point I make is that although social media platforms obviously aren’t harmless (see Essays 3 and 4), most of the discourse surrounding their dangers is driven more by vibes, anecdotes, and moral panic than rigorous argument or social science.
The “Everyone is Biased” Bias — This essay makes the simple point that although everyone is biased in ways that are important and under-appreciated, it’s not the case that everyone is equally biased. There are significant differences between individuals, norm-governed communities, and institutions in how they handle and process information. So, a recognition of the universality of bias must co-exist with avoidance of the “everyone is biased” bias, which flattens such important differences.
The World Outside and The Pictures in Our Heads — This provides an opinionated summary of the Lippmann–Dewey debate over democracy, public opinion, and the role of experts in complex, modern societies. I am a huge Walter Lippmann fan. I think he’s the most insightful political epistemologist of all time. This essay sets out his views on the essentially insurmountable challenges of acquiring adequate political knowledge and understanding in the modern world.
On Conspiracy Theories of Ignorance — This essay explores Karl Popper’s critique of the “conspiracy theory of ignorance,” which assumes that the truth is so self-evident that popular false beliefs must result from some deliberate conspiracy. Although Popper was mostly concerned with how Marxists and other leftist intellectuals think about “ideology”, the critique is equally pressing for much establishment hysteria about “disinformation” and “merchants of doubt” as the source of all popular misperceptions. I try to explain why Popper’s critique is valuable even though the world does in fact contain highly consequential conspiracy theories of ignorance.
On Becoming Less Left-Wing (Part 2) — This is the second in my series of essays detailing how I have become less left-wing in recent years. I explain in greater depth than I have elsewhere why political knowledge is, in general, extremely hard to attain, how tribal allegiances and other interests inevitably distort our beliefs, and why political ideologies are both inevitable and inevitably simplistic, selective, and vulnerable to distinctive failure modes. Think of it as “postmodernism but good”.
Domination and Reputation Management — A popular theory of “system-justifying ideologies”—for example, the belief in the divine right of kings, or that group-based domination is legitimate because subordinate groups are intellectually and morally deficient—is that they function to persuade the oppressed to acquiesce in their oppression. I argue that the real function of such ideologies lies in reputation management among oppressors. This leads me to a broader account of how reputation management doesn’t just produce apologetics for power; it also distorts the belief systems of those who think they’re “unmasking” power, including many “radical” left-wing intellectuals whose critiques of “ideology” were easily co-opted by history’s most despotic, exploitative regimes.
There are several unifying ideas across these essays:
The truth is not self-evident, even though we are often disposed to think that it is. Reality is vast and complex, much more complex than we can even imagine, and we access it not directly but through messy, often-opaque chains of testimony, trust, categorisation, and interpretation. Even the parts of reality that we are in “direct” contact with—the bits we can actually perceive—are typically understood through socially-learned conceptual schemes and belief systems. As Walter Lippmann put it, modern politics deals with “indirect, unseen, and puzzling facts, and there is nothing obvious about them.”
Experts are necessary but human. Although journalists, pundits, intellectuals, scientists, and other “epistemic elites” have critical advantages in confronting and uncovering such facts, they are also vulnerable to the same biases as everyone else. Moreover, their advantages are often used to indulge such biases rather than correct them. The critical theorist who “unmasks” ideology doesn’t escape ideology. “Misinformation experts” aren’t strangers to misinformation. And so on.
The epistemic is not merely epistemic. The beliefs, narratives, ideologies, and social norms that regulate our minds and behaviour are distorted by propaganda, grubby motives (e.g., self-interest, reputation management, and status competition), and tribal allegiances. Such distortions are obvious in our rivals and enemies but not in our friends or ourselves. The failure to correct for this bias produces lots of bad social theory and politics.
Humans are kinda sorta rational. The popular image of human beings as credulous fools riddled with cognitive biases is mistaken. We are far from perfectly rational, of course, but people—yes, even the people you dislike—are typically far more sophisticated, critical, and intelligent than they seem. The contrary impression arises from a combination of the “third-person effect”, misunderstanding people’s real goals (e.g., assuming their primary motivation is always to figure out the truth), and underestimating the challenges of acquiring knowledge in complex, modern societies (see above).
Avoid technopanics. Technology is highly consequential, but most popular (and much scholarly) discourse about technology involves simplistic moral panics that obscure the complex, sophisticated ways people use such technologies, and their interaction with pre-existing features of societies. People aren’t passive, credulous victims of algorithms. And the effects of social media platforms are often mediated by long-standing pathologies of democracy, public opinion, polarisation, the growing diploma divide, and the politicisation of institutions, many of which are far more complex and uncomfortable to discuss than algorithms and Russian bots.
Podcasting
I appeared on several podcasts this year, including Evolutionary Psychology and The Good Fight with Yascha Mounk. Both provided really valuable outlets for discussing my views about human nature, belief, and self-deception (in the former) and misinformation, institutions, social epistemology, and politics (in the latter).
In the last few months of the year, I also started an AI podcast with my friend Henry Shevlin, where we discuss the big-picture philosophical, scientific, and political questions thrown up by rapid developments in artificial intelligence.
I am convinced that AI will be utterly transformative in the coming years and decades. Although I did my PhD (between 2015 and 2018) on various philosophical questions surrounding generative AI, I immediately pivoted to the area of “political epistemology” in the years that followed, albeit still with a strong focus on psychology and cognitive science in a way that distinguishes me from most scholars in this area.
Now, I am back to thinking about AI a lot, focusing less on the technology itself than on its social and political significance (including its interaction with questions concerning misinformation, institutional trust, expertise, and public opinion).
My podcast with Henry is a way to keep up to date with this area in ways that other people will hopefully find beneficial. In the first six episodes, we covered big-picture debates about AI and existential risk, consciousness, education, LLMs’ environmental impact, and relationships:
AI Sessions #1: AI – A Normal Technology or a Superintelligent Alien Species?
AI Sessions #2: Artificial Intelligence and Consciousness – A Deep Dive
AI Sessions #4: The Social AI Revolution – Friendship, Romance, and the Future of Human Connection
I will always be a writer first and foremost—that’s where my strengths lie—but I’ve found these conversations to be really enjoyable and stimulating. This year, we will be speaking to many interesting guests.
Recommendations
2025 was an excellent year for Substack. I spend more time reading articles on this platform than on any other. For those (like me) interested in science, philosophy, and intellectually serious, evidence-based contributions to politics and current affairs, there is nowhere better.
I am reluctant to name specific Substackers I enjoy because I know I’ll accidentally leave out many brilliant ones. But if you want suggestions on whom to read, you can check my Recommendations, and also follow me on Notes in the Substack app, where I do my best every day to share the excellent articles I come across.
I read fewer new books than usual this year. The main reason is that I’ve been re-reading extensively with LLMs such as ChatGPT, Claude, and Gemini.
Book quality is extremely heavy-tailed. Most books are bad. A tiny number are exceptional. So, the overall value you get from reading is heavily influenced by decisions about what to read, and you are often much better off trying to master and internalise the ideas of exceptional books than reading new ones.
LLMs make this a lot easier. You can upload a PDF of the book and have a quasi-conversation with it, testing your understanding, receiving tailored explanations and tutoring, creating flashcards to import into programs like Anki (for spaced-repetition-based learning), and more. If you haven’t played around with NotebookLM yet, you’re making a huge mistake. So, this year, I spent much of the time I would have ordinarily spent reading new books on implementing this process for the best books I’ve already read.
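As a minimal illustration of the flashcard step described above: Anki can import plain tab-separated text files, so LLM-generated question–answer pairs can be written out in a format it accepts directly. The card contents and filename here are hypothetical; this is just a sketch of the export step, not a full workflow.

```python
import csv
from pathlib import Path

# Hypothetical flashcards, e.g. pasted from an LLM conversation in which
# you asked it to quiz you on a book's key ideas. Each pair is (front, back).
cards = [
    ("Who coined the term 'conspicuous consumption'?", "Thorstein Veblen"),
    ("What is the 'third-person effect'?",
     "The tendency to assume media messages influence others more than oneself."),
]

# Anki imports tab-separated text: one card per line, fields separated by tabs.
out = Path("cards.txt")
with out.open("w", newline="", encoding="utf-8") as f:
    csv.writer(f, delimiter="\t").writerows(cards)

print(f"Wrote {len(cards)} cards to {out}")
```

The resulting `cards.txt` can then be imported via Anki's File → Import dialog, with the tab chosen as the field separator.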
Nevertheless, I did read some new books. More precisely, I read several new and several old books for the first time. In no particular order, here were the best ones: