14 Comments
Niespika

A whole week on AI doomerism, and not a single line by Yudkowsky?! That is very brave of you. :p Especially considering his book on precisely that topic is coming out next week.

Seth Finkelstein

Seconded. This is relatively short: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

Note, I think he's completely wrong, but I can't deny he has a clear point of view.

"... be willing to destroy a rogue datacenter by airstrike."

Russell Hogg

Is the Chinese Room worth putting on the reading list?

Daniel Greco

I think that's a hard question. I do tend to assign it, but that's because I think the idea is influential enough that I want students to have encountered it. But I also think it's confused and misleading enough that I would prefer to live in a world where it wasn't so influential, and I didn't feel like I was depriving my students by not exposing them to it. So even though I tend to assign it for now, I'm happy when I see that others don't.

Harvey Lederman

Looks fantastic!

Hollis Robbins (@Anecdotal)

Except for missing Robbins this is a good list! https://hollisrobbinsanecdotal.substack.com/p/how-to-tell-if-something-is-ai-written

Nathan Woodard

Are you planning a deep dive into the early machine vision experiments (Minsky and colleagues at MIT, for example) and the way their disappointments reshaped the field—specifically, by highlighting both the incompleteness of visual data and the Bayesian character of how brains process it? To me, that episode marks a turning point, revealing some of the most profound connections between computation and cognition.

Arnold Kling

Personally, I do not equate Large Language Models with AI. I think of them as a more natural computer interface. The next step beyond GUI, or beyond WWW. But they struggle badly in three areas: interaction with the physical world; handling complex tasks; remembering past interactions. Combine these shortcomings, and we are not yet dealing with AI. I make these points at greater length in an essay that will appear in a couple of weeks.

Nathan Woodard

Interesting and important take. I tend to think we won’t see human-like AI until we have brain-like machines. In principle, infinite computation could result in a virtual machine that is brain-like—but this is one of those cases where the practical constraints dominate. In spite of inordinate philosophical weight accorded to virtualization, it appears unlikely that a brain-like machine will ever be simulated at scale on a sequential (i.e., von Neumann) core. On the other hand, with GPUs and other parallel architectures, perhaps the “virtualization” path could eventually sort itself out.

My suggestion would be a lecture that surveys our best current understanding of what it would actually take—how much silicon, how much power—to build and operate a brain-like machine, virtual or otherwise. The answers are starting to emerge, but they remain scattered across the literature. Pulling them together into one coherent survey would be priceless.

There are also some very deep philosophical implications to such a discussion--since software "machines" are not necessarily philosophically distinct from physical or biological machines. The foregoing statement alone is, of course, deeply controversial. :) :) :)

Paul S

This looks ace! I anticipate you will get very healthy uptake. And they say philosophy has no relevance to the real world!

Paul

While you sort of get at this in social AI, if you have a heavily communal notion of humanity, the social replacement is much more concerning than it would be for individualists.

SJ Levy

An excellent course list, to which I highly recommend you add 'Rebooting AI' by Gary Marcus and Ernest Davis. They have been at this cause for many decades.

https://www.penguinrandomhouse.com/books/603982/rebooting-ai-by-gary-marcus-and-ernest-davis/

Gary is my go-to for the unhyped view of LLMs and their inherent flaws along with what might be a safer (and informationally more accurate) way forward.

His substack: https://garymarcus.substack.com/

Christos Raxiotis

Again, here to kindly ask, if possible, that you share with readers how students respond and perform on such topics.

Steven Pinker recently said on Bill Maher that his multiple-choice tests show a small 10% decline in recent years.

And we hear a lot about lower attention spans, moral panics, phones, politicisation, abandonment of free speech, and other issues, and you personally know how hard it is for normal people to know what is true and what isn't in this information space.

Many of us without current lived experience in higher education are interested in what the problems are and how the news/media affect the new generation in the classroom.

Especially from a source we can trust on this topic.
