11 Comments
Stefania Moore

This was a fantastic conversation, but I wanted to point out (I don't know if Henry already did, because I just paused the video) that AI systems have actually been shown to outperform humans in creative tasks, and there is now a wealth of empirical data showing that they do in-context learning. LLMs that maintain cross-session memory can retain knowledge learned in different sessions.

Dan Williams

True, but outperforming humans on creative tasks in specific benchmarks doesn't mean they have the capacity for creativity, and in-context learning isn't the same as continual learning, as leaders of major AI labs acknowledge.

Stefania Moore

I think you make fair points, but here is what I think is worth considering:

1. A study back in 2024 found that humans (~1,600 participants) preferred AI-generated poetry and other writing over human writing (although, fun fact, when participants were told which poems were written by AI, they said they liked the human poems better). There have also been many studies showing AI's ability to solve research problems or design experiments, which is, of course, a form of creativity.

2. You're right about the difference between continual learning and in-context learning, but I think it's important to remember that not every human has continual learning either. There are rare cases of humans with severe anterograde amnesia, which prevents continual learning in many respects. But no one would argue that this inability means the individual in question is not conscious.

Ellen Burns, PhD

This is such a great conversation! Dan, you do a brilliant job of pushing the critiques around AGI. I totally agree that Moravec's paradox has not been challenged very seriously. The examples discussed here, e.g. image recognition and implicature, can be done by AIs, but not at all in the same ways that humans do them. The power of the paradox is that it underscores how little we actually understand the human brain. Language, memory, and vision in the human brain operate in very intricate ways we have not been able to replicate in machines. Gopnik is also completely on point, imo, that there is no such thing as 'general intelligence': the human brain has different modules / computations that do different things, and Gopnik does excellent work on this stuff. Also completely true that we have all these insane capacities we have no way of explaining (creativity, judgement, etc.). These properties are a species property, and AIs don't have them.

Dan Williams

Thanks Ellen! - yes, very much agree

Ken Kahn

Regarding massive modularity, I wonder if frontier LLMs already have it. Each attention head specializes. Maybe more relevant is that they are Mixtures of Experts, where each expert is like a module.
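The "experts as modules" analogy can be sketched in a few lines. This is a toy illustration only, not how any actual frontier model is implemented: real MoE layers sit inside transformer blocks with learned gating networks, while the matrices here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model = 4, 8

# One weight matrix per "module" (expert), plus a router that scores experts.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
gate = rng.normal(size=(d_model, n_experts))

def moe_forward(x, top_k=2):
    """Route a token vector to its top-k experts and mix their outputs."""
    scores = x @ gate
    top = np.argsort(scores)[-top_k:]          # indices of the k best-scoring experts
    w = np.exp(scores[top])
    w = w / w.sum()                            # softmax over the selected experts
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

token = rng.normal(size=d_model)
out = moe_forward(token)
```

The point of the analogy: only a subset of experts fires per token, so different "modules" end up handling different kinds of input, loosely like the specialized brain systems discussed above.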

Woolery

Looking forward to that future talk on the singularity. Great conversational chemistry.

Dan Williams

Thanks. Yeah, it's a worthwhile critique - it certainly puts pressure on naive readings of the graph.

Kevin McLeod

Agency is a branding-tool word from psychology, which falsely reduces the experience to the subjective. Nothing is truly subjective here. Experience is never solely individual or isolated. Words provide a pretend isolation, trapped in time, that seems timeless; think beyond words to see the illusion of agency.

Ilene Skeen

Brilliant! I had to stop at 48:05 because it's time for dinner. AIs don't worry about that, but recently ChatGPT has gotten overwhelmed. I love the title "Conspicuous Cognition," which is what you see when you watch a two-year-old learning to talk, or a nine-month-old deciding whether to walk across the floor to Daddy or to crawl. If Daddy stands up, the child recognizes a signal that Daddy will help (or not). If Daddy doesn't get up, the child, on the cusp of being self-mobile, may choose to crawl (or walk). You are definitely on the right track here. I'm a retired Business Systems Integrator (a big job before computers learned to talk to each other).

There is no reason to assume that the systems we build will look anything like human systems, but they will have to have the potential we are born with and develop. Case in point: the locomotive had to be nicknamed the Iron Horse so that people could get used to a fantastic, HUGE amount of steel barreling down the track in a hurry to get from NY to San Francisco before Friday. I can't wait to hear the rest of it! There are three basic states: what is, what isn't, and the potential for the one to achieve the other through engineering, design, and, most importantly, the art of seeing the simple in the complex. That's the computer revolution (any revolution) in a nutshell.