Philosophy of Artificial Intelligence: 10-Week Syllabus & Readings
Can computers think and feel? Will "super-intelligent" machines cause human extinction? How will advances in AI transform democracy, society, the information environment, and human relationships?
Over the next several decades, advances in artificial intelligence are likely to transform our societies, economies, cultures, and understanding of what it means to be human. Not enough people have begun to grapple with this fact, and with the wide range of philosophical, ethical, and political issues it raises. If you’re interested in learning (or learning more) about this topic, I hope that this introductory syllabus is helpful.
It’s the syllabus for my undergraduate course, “The Philosophy of Artificial Intelligence”, at the University of Sussex. I’ve previously co-taught the course with Robyn Waller, but this year I’m teaching it solo and have redesigned the entire syllabus around my own interests and the topics I believe are most important to cover.
The course moves from foundational questions about AI's nature to practical and political concerns about its impact on our collective future.
It begins by examining what thought, intelligence, and consciousness mean in the context of machines (Weeks 1-3).
It then confronts questions about whether AI poses an unprecedented existential risk or is simply another technology in human history (Weeks 4-5).
The middle section examines the impact of AI on our society and institutions, including democracy (Week 6), the information environment (Week 7), and the economy (Week 8).
It concludes by exploring human-human and human-AI relationships in a world transformed by AI (Week 9), before reflecting on our AI future and the role of human agency within it (Week 10).
For a ten-week course aimed at students with no technical background, I had to be highly selective in choosing the topics, readings, and other materials. If you think I’ve missed anything important, please let me know in the comments.
AI Philosophy Course Reading List
Week 1 — AI: History, Core Ideas, and the Turing Test
Essential
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Boden, M. A. (2016). Chapter 1: What is Artificial Intelligence? In AI: Its nature and future. Oxford University Press.
Recommended
3Blue1Brown. (2017). But what is a neural network? [Video]. YouTube.
3Blue1Brown. (2017). Gradient descent, how neural networks learn [Video]. YouTube.
3Blue1Brown. Large Language Models explained briefly [Video]. YouTube.
Russell, S. J., & Norvig, P. (2021). Introduction (Ch. 1). In Artificial intelligence: A modern approach (4th ed.). Pearson.
Masley, A. (2025). All the ways I want the AI debate to be better. Substack.
Week 2 — The Meaning and Possibility of Artificial Intelligence
Essential
Borg, E. (2024). LLMs, Turing tests and Chinese rooms: The prospects for meaning in large language models. Inquiry.
Boden, M. A. (2016). Chapter 2: General intelligence as the "holy grail". In AI: Its nature and future. Oxford University Press.
Boden, M. A. (2016). Chapter 6: But is it intelligence, really? In AI: Its nature and future. Oxford University Press.
Harper, T. A. (2025, June). Artificial intelligence illiteracy. The Atlantic.
Recommended
Crane, T. (2015). Can a computer think? (Ch. 7). In The mechanical mind (3rd ed.). Routledge.
Russell, S. J., & Norvig, P. (2021). Philosophical foundations (Ch. 26). In Artificial intelligence: A modern approach (4th ed.). Pearson.
Shanahan, M. (2024). Talking about large language models. Communications of the ACM, 67(8), 64–73.
Millière, R., & Buckner, C. (2024). A philosophical introduction to language models — Part I. arXiv.
Millière, R., & Buckner, C. (2024). A philosophical introduction to language models — Part II. arXiv.
Todd, B. (2025). The case for AGI by 2030. 80,000 Hours.
Aschenbrenner, L. (2024). Situational Awareness.
Narayanan, A., & Kapoor, S. (2025). AGI is not a milestone. AI Snake Oil / Normal Technology.
Karpathy, A. (2023). Intro to Large Language Models [Video]. YouTube.
Week 3 — The Meaning and Possibility of Artificial Consciousness
Essential
Chalmers, D. J. (2023). Could a large language model be conscious? arXiv preprint.
Schneider, S. (2019). The problem of AI consciousness (Ch. 2). In Artificial you: AI and the future of your mind. Princeton University Press.
Recommended
Birch, J. (2025). AI consciousness: A centrist manifesto. PhilArchive.
Seth, A. K. (2025). Conscious artificial intelligence and biological naturalism. Behavioral and Brain Sciences.
80,000 Hours. (2024). Beyond human minds: The bewildering frontier of consciousness in insects, AI, and more [Podcast compilation].
Week 4 — Will Artificial Intelligence Kill Us All?
Essential
Vold, K., & Harris, T. (2021). How does artificial intelligence pose an existential risk? In J. Cowls & L. Floridi (Eds.), The Oxford Handbook of Digital Ethics. Oxford University Press.
Intelligence Squared. (2018). Stuart Russell vs. Steven Pinker on AI and existential risk [Video]. YouTube.
Recommended
Carlsmith, J. (2022). Is power-seeking AI an existential risk? arXiv.
Pinsof, D. (2025). AI doomerism is bullshit. Everything Is Bullshit (Substack).
Kulveit, J., Douglas, R., Ammann, N., Turan, D., Krueger, D., & Duvenaud, D. (2025). Gradual disempowerment: Systemic existential risks from incremental AI development. arXiv.
Week 5 — A Normal(ish) Technology or a Scary Alien Species?
Essential
Narayanan, A., & Kapoor, S. (2025, April 15). AI as normal technology. Knight First Amendment Institute.
AI Debate: Runaway Superintelligence or Just a Normal Tool? [Video]. YouTube.
AI in Context. AI 2027 Overview [Video]. YouTube.
Recommended
Toner, H. (2025). Unresolved debates about the future of AI. Substack.
Alexander, S. (2025). AI as profoundly abnormal technology. AI Futures.
The Gradient. (2023). Why transformative artificial intelligence is really, really hard to achieve.
Rothman, J. (2025). Two paths for A.I. The New Yorker (Open Questions).
Lazar, S. (2024). Anticipatory AI ethics. Knight First Amendment Institute.
Week 6 — AI, Society, Democracy
Essential
Berliner, D. (2021). What AI can't do for democracy. Boston Review.
Farrell, H., & Han, H. (2025). AI and democratic publics. Knight First Amendment Institute.
Tasioulas, J. (2024). The classical key to the AI revolution. Engelsberg Ideas.
Tang, A. (2024). Democracy in the age of AI. RSA Journal.
Recommended
Summerfield, C., et al. (2024). How will advanced AI systems impact democracy? arXiv.
Jungherr, A. (2023). Artificial intelligence and democracy: A conceptual framework. Social Media + Society.
Lazar, S., & Cuéllar, M.-F. (2024). AI agents and democratic resilience. Knight First Amendment Institute.
Harvard Business School. (2020). Taiwan's digital revolution — Audrey Tang.
Goldberg et al. (2024). AI and the future of digital public squares. arXiv.
Landemore, H. (2023). Can AI bring deliberation to the masses? Academia.
Garfinkel, B. (2021). Is democracy a fad?
Week 7 — AI, Misinformation, and the Epistemic Commons
Essential
Rini, R. (2020). Deepfakes and the epistemic backstop. Philosophers' Imprint, 20(24), 1–16.
80,000 Hours. (2024). Hugo Mercier on misinformation & mass persuasion [Podcast].
Recommended
Simon, F. M., & Altay, S. (2024). Don't panic (yet): Assessing the evidence and discourse around generative AI and elections. Knight First Amendment Institute.
Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385(6714), eadq1814.
Hackenburg et al. (2025). The levers of political persuasion with conversational AI. arXiv:2507.13919.
Habgood-Coote, J. (2023). Deepfakes and the epistemic apocalypse. Synthese.
Week 8 — AI and Work
Essential
Waelen, R. A. The desirability of automizing labor. Philosophy Compass.
Drago, L., & Laine, R. The Intelligence Curse.
Recommended
Susskind, D. (2024). What will remain for people to do? Knight First Amendment Institute.
Parmer, W. J. (2023). Meaningful work and achievement in increasingly automated workplaces. The Journal of Ethics.
Santoni de Sio, F. (2024). Artificial intelligence and the future of work: Mapping the ethical issues. The Journal of Ethics.
BBC Future. (2024). AI art: The end of creativity or a new movement?
Huang, S., & Manning, S. (2025). Here's how to share AI's future wealth. Noema Magazine.
Acemoglu, D., & Johnson, S. (2024). Learning from Ricardo and Thompson: Machinery and labor in the early Industrial Revolution and in the age of AI. Annual Review of Economics, 16, 597–621.
Week 9 — Social AI (Chatbots, Friendbots, and Sexbots)
Essential
Devlin, K. (2024, October). Relating with social robots: Issues of sex, love, intimacy, emotion, attachment, and companionship. King's College London (KCL Pure record).
Shevlin, H. (2024). All too human? Identifying and mitigating ethical risks of social AI. Law, Ethics and Technology.
Chen, A. (2025, March 26). The rise of chatbot 'friends'. Vox.
Recommended
Smith, M. G., Bradbury, T. N., & Karney, B. R. (2025). Can generative AI chatbots emulate human connection? Perspectives on Psychological Science.
Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5–24.
Winthrop, R., & Hau, I. (2025, July 2). What happens when AI chatbots replace real human connection? Brookings.
Mahari, R., & Pataranutaporn, P. (2024). We need to prepare for "addictive intelligence". MIT Technology Review.
Caddy, B. (2025). People are falling in love with ChatGPT, and that's a major problem. TechRadar.
Week 10 — Technological Determinism and Our AI Future
Essential
MacAskill, W., & Moorhouse, F. (2025). Preparing for the intelligence explosion. Forethought.
Acemoglu, D. (2021). AI's future doesn't have to be dystopian. Boston Review.
Recommended
Lazar, S. (2024). Can philosophy help us get a grip on the consequences of AI? Aeon.
Dafoe, A. (2015). On Technological Determinism: A Typology, Scope Conditions, and a Mechanism. Science, Technology, and Human Values.
Orwell, G. (1945). You and the atom bomb. Tribune.
Heath, J. (2023). Why the culture wins: An appreciation of Iain Banks. Substack.
von Neumann, J. (1955). Can we survive technology? Fortune.
Finally, for a recent conversation I had with Henry Shevlin about many of these issues, see here:
Will AI Change Everything?