Can computers think and feel? Will "super-intelligent" machines cause human extinction? How will advances in AI transform democracy, society, the information environment, and human relationships?
Is the Chinese Room worth putting on the reading list?
I think that's a hard question. I do tend to assign it, but that's because I think the idea is influential enough that I want students to have encountered it. But I also think it's confused and misleading enough that I would prefer to live in a world where it wasn't so influential, and I didn't feel like I was depriving my students by not exposing them to it. So even though I tend to assign it for now, I'm happy when I see that others don't.
A whole week on AI doomerism, and not a single line by Yudkowsky?! That is very brave of you. :p Especially considering his book on precisely that topic is coming out next week.
Seconded. This is relatively short: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Note, I think he's completely wrong, but I can't deny he has a clear point of view.
"... be willing to destroy a rogue datacenter by airstrike."
While you sort of get at this in social AI, if you have a heavily communal notion of humanity, the social replacement is much more concerning than it would be for individualists.
looks fantastic!
Except for missing Robbins, this is a good list! https://hollisrobbinsanecdotal.substack.com/p/how-to-tell-if-something-is-ai-written
Are you planning a deep dive into the early machine vision experiments (Minsky and colleagues at MIT, for example) and the way their disappointments reshaped the field—specifically, by highlighting both the incompleteness of visual data and the Bayesian character of how brains process it? To me, that episode marks a turning point, revealing some of the most profound connections between computation and cognition.
Personally, I do not equate Large Language Models with AI. I think of them as a more natural computer interface. The next step beyond GUI, or beyond WWW. But they struggle badly in three areas: interaction with the physical world; handling complex tasks; remembering past interactions. Combine these shortcomings, and we are not yet dealing with AI. I make these points at greater length in an essay that will appear in a couple of weeks.
Interesting and important take. I tend to think we won’t see human-like AI until we have brain-like machines. In principle, infinite computation could result in a virtual machine that is brain-like—but this is one of those cases where the practical constraints dominate. In spite of inordinate philosophical weight accorded to virtualization, it appears unlikely that a brain-like machine will ever be simulated at scale on a sequential (i.e., von Neumann) core. On the other hand, with GPUs and other parallel architectures, perhaps the “virtualization” path could eventually sort itself out.
My suggestion would be a lecture that surveys our best current understanding of what it would actually take—how much silicon, how much power—to build and operate a brain-like machine, virtual or otherwise. The answers are starting to emerge, but they remain scattered across the literature. Pulling them together into one coherent survey would be priceless.
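For a sense of scale, here is a back-of-envelope sketch. The neuron and synapse counts are commonly cited approximations, and the energy per simulated synaptic event is a deliberately crude placeholder (real figures vary by orders of magnitude), so treat the output as an illustration of the kind of estimate such a survey would pin down, not as a result.

```python
# Back-of-envelope only: rough scale of simulating a brain-like machine.
# All figures are approximations or outright placeholders, not measurements.

NEURONS = 8.6e10            # ~86 billion neurons, a commonly cited estimate
SYNAPSES_PER_NEURON = 1e4   # order-of-magnitude estimate
MEAN_RATE_HZ = 1.0          # assumed average event rate per synapse

synapses = NEURONS * SYNAPSES_PER_NEURON        # ~1e15 synapses
events_per_second = synapses * MEAN_RATE_HZ

# Assume ~1 nJ of digital compute per synaptic event (crude placeholder;
# neuromorphic hardware and supercomputer simulations differ enormously).
JOULES_PER_EVENT = 1e-9
power_watts = events_per_second * JOULES_PER_EVENT

print(f"synapses:          {synapses:.1e}")
print(f"synaptic events/s: {events_per_second:.1e}")
print(f"estimated power:   {power_watts / 1e6:.2f} MW (the brain runs on ~20 W)")
```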
There are also some very deep philosophical implications to such a discussion--since software "machines" are not necessarily philosophically distinct from physical or biological machines. The foregoing statement alone is, of course, deeply controversial. :) :) :)
This looks ace! I anticipate you will get very healthy uptake. And they say philosophy has no relevance to the real world!
Wish I could take the course; surprised there's nothing on education and AI. You'll need to address whether your students can/should use AI in their work for the course.
An excellent course list, to which I highly recommend you add 'Rebooting AI' by Gary Marcus and Ernest Davis. They have been at this cause for many decades.
https://www.penguinrandomhouse.com/books/603982/rebooting-ai-by-gary-marcus-and-ernest-davis/
Gary is my go-to for the unhyped view of LLMs and their inherent flaws along with what might be a safer (and informationally more accurate) way forward.
His substack: https://garymarcus.substack.com/
I wouldn't say Gary Marcus is un-hyped - he's just got a very different sort of hype that he's pushing than the one that Sam Altman is pushing. Sometimes having two opposing hypes is helpful, but I tend to find that it's better to present the more measured views directly.
I would say that you and I have very different definitions of what ‘hype’ is. I would define it as advertised promises for a product where the seller is trying to spin it as the greatest thing since sliced bread, even though he knows it is grossly deficient.
Sam Altman is selling his product as “nearly” AGI when it is not even close (as Marcus has shown again and again). And the investor bubble is kept inflated by the hype that it could be AGI or that it will be AGI by the next iteration (coming very soon, "get yours today" = hype).
Marcus is merely offering his expertise and experience gathered over many years. Sure, he might like to sell you his book or a speaking appearance, but he will deliver what he promises. So no overselling, and therefore no hype. His Substack may sometimes look like counter-hype because of the waves of bullshit (hype) that he is trying to undo.
Likewise, there are plenty of smaller AI companies using focused, human-curated training data in narrow domains and making real gains. Again, no hype, which is why you rarely hear about them unless you go looking. See for example Nebuli: https://nebuli.com/work/use-case-smart-knowledge-data-lake-from-multiple-complex-datasets/
He’s not “hype” in the sense of having a product he wants to sell. But he’s “hype” in the sense of having decided years ago what his response is to any AI thing anyone proposes, and then squeezing everything into that same shape, no matter how different it is.
He decided long ago that “neurosymbolic” is the answer, and that what current AI companies are doing is not neurosymbolic. So he never engages with the question of whether taking a neural model based on text prediction, and using reinforcement learning to train it to use the tokens it picked up from that process as symbols in a sequential inference process, might actually be a step towards the neurosymbolic ideas he was pushing for years.
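Very roughly, the open question is whether something like the following toy loop, in which a statistical proposer's outputs are treated as symbols and checked by a verifier that feeds a reward back to the proposer, already counts as a step in that neurosymbolic direction. This is purely an illustrative sketch: the proposer, the checker, and every number in it are made up.

```python
# Toy sketch (illustrative only, not Marcus's proposal or any lab's system):
# a "neural" proposer emits candidate steps, a symbolic checker verifies them
# against a tiny knowledge base, and a reward nudges the proposer's weights.

import random

FACTS = {("socrates", "is_mortal")}                # symbolic knowledge base
CANDIDATE_STEPS = [("socrates", "is_mortal"),
                   ("socrates", "is_immortal")]    # steps the proposer can emit

weights = {step: 1.0 for step in CANDIDATE_STEPS}  # crude stand-in for a neural policy

def propose():
    """Sample a step in proportion to learned weights (stands in for an LLM)."""
    return random.choices(CANDIDATE_STEPS,
                          [weights[s] for s in CANDIDATE_STEPS])[0]

def symbolic_check(step):
    """Stand-in for a symbolic verifier: is the step entailed by FACTS?"""
    return step in FACTS

# Reinforcement-style loop: reward verified steps, penalise unverified ones.
for _ in range(200):
    step = propose()
    reward = 1.0 if symbolic_check(step) else -0.5
    weights[step] = max(0.01, weights[step] + 0.1 * reward)

print(weights)  # the verified step ends up dominating the proposer's policy
```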
I think that you are totally incorrect in your assessment of Marcus's motives, which are not simple contrarianism. His objection is that LLMs are not working, i.e. they confabulate far too much incorrect information and fail to fix it when called on it (they merely confabulate more).
As Marcus points out, confabulation is inherent in their design. LLMs fail to be accurate because there is no connection between the words and the real world (or rather, between things and the linguistic tokens LLMs break words into). What is worse, LLMs make these assertions "confidently" and are thus potentially (perhaps dangerously) misleading to anyone who does not already know the facts of the matter at hand.
LLMs are a successful hack of language grammar, but they lack the semantic sense that comes with knowledge of the outside world. Symbolic logic (if properly programmed) starts with the few things we are confident are true and builds upon them. But each instance of reinforcement learning can only target a single falsehood out of a vast sea of them, and does nothing to point an LLM towards any sort of global truth.
LLMs are not like a child, who can learn from a sparse number of examples because a child can grasp what is true or false about the world (what the words mean). For LLMs (as currently designed) there is no way to grasp that there are a limited number of assertions that are true and an "infinite" number that are (to some degree) false. LLMs can only "understand" the relations between token types (i.e. grammar). Pointing these facts out is not hype.
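To make that concrete with a deliberately trivial toy (a made-up two-sentence corpus and a bigram model, purely for illustration, not anyone's actual system): a model that learns only which token follows which produces a false continuation just as fluently as a true one, because truth never enters the training signal.

```python
# Toy bigram "language model": the whole model is relations between tokens.
# Corpus and behaviour are contrived purely to illustrate the point above.

from collections import Counter, defaultdict

corpus = "the sun orbits planet earth . our moon orbits planet earth .".split()

# Count token-to-token transitions.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def continue_from(token, steps=5):
    """Greedily extend a sentence using only next-token statistics."""
    out = [token]
    for _ in range(steps):
        if token not in counts:
            break
        token = counts[token].most_common(1)[0][0]  # most likely next token
        out.append(token)
    return " ".join(out)

print(continue_from("the"))  # "the sun orbits planet earth ." -- fluent, false
print(continue_from("our"))  # "our moon orbits planet earth ." -- fluent, true
```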
Interesting to see what gets included and excluded in AI philosophy curricula. The focus on existential risk and democracy overlooks the more fundamental ontological questions about what LLMs actually are—cognitive prosthetics for navigating linguistic meaning-space rather than mysterious black boxes that might be intelligent.
The theoretical frameworks that could illuminate this transformation (extended mind, distributed cognition, linguistic analysis) are entirely absent. I'm also constantly amazed, in lists like this, by the complete absence of Margaret Boden's work on computational creativity. Her work directly illuminates what LLMs do, but it is completely ignored.
This is why we're trapped in sterile debates about intelligence tests and risk scenarios, which the people at the coal face completely ignore while they get on with building the future.
The first three weeks of the syllabus
You're right re: Boden. Apologies, replying on a tram and looking for something like The Creative Mind: Myths and Mechanisms.
Still not seeing Clark & Chalmers, Hutchins, etc. Pasquinelli (The Eye of the Master)?
Clark is a colleague, the external examiner of my PhD, his and other ideas are summarised in existing readings, and I cover such issues in lectures. Can't include everyone on an introductory reading list!
No extended mind? Is intelligence a verb or noun?
This looks really good and helpful! I've been doing a much lower-level AI literacy class several times over the past year, so I haven't been having students engage as much with some of the recent higher-level debates, but this will be helpful for me!
Again, here to kindly ask whether it would be possible to share with readers how students respond to and perform on such topics.
Steven Pinker recently said on Bill Maher that his multiple-choice tests show a small 10% decline in recent years.
And we hear a lot about lower attention spans, moral panics, phones, politicisation, abandonment of free speech and other stuff, and you personally know how hard it is for normal people to know what is true and what isn't in this information space.
Many of us without current lived experience in higher education are interested in what the problems are and how the news/media affect the new generation in the class.
Especially from a source we can trust on this topic.
https://open.substack.com/pub/therewrittenpath/p/the-room-that-talks-back?r=61kohn&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true