In this episode, Henry and I spoke to Rose Guingrich about AI companions, consciousness, and much more. This was a really fun conversation!
Rose is a PhD candidate in Psychology and Social Policy at Princeton University and a National Science Foundation Graduate Research Fellow. She conducts research on the social impacts of conversational AI agents like chatbots, digital voice assistants, and social robots. As founder of Ethicom, Rose consults on prosocial AI design and provides public resources to enable people to be more informed, responsible, and ethical users and developers of AI technologies. She is also co-host of the podcast, Our Lives With Bots, which covers the psychology and ethics of human-AI interaction now and in the future. Find out about her really interesting research here.
You can find the first conversation that Henry and I had about Social AI here.
Transcript
Note: this transcript is AI-generated and may feature mistakes.
Henry Shevlin (00:01)
Hi everyone and welcome to the festive edition of Conspicuous Cognitions AI Sessions. We’re here with myself, Henry Shevlin, my colleague Dan Williams and our guest today, Rose Guingrich, who we’re very lucky to have on the show to be talking about social AI and AI companions with us. We did do an episode on this two episodes ago, which featured me and Dan chatting about the rising phenomenon of social AI. And so if anyone wants a basic sort of primer on the topic, go back and listen to that as well. But today we’re going to be diving into some of the more empirical issues and looking at Rose’s work on this topic.
So try to imagine a house that’s not a home. Try to imagine a Christmas all alone and then be reassured that you don’t have to spend Christmas all alone. In fact, nobody ever needs to spend Christmas alone ever again, because their AI girlfriend, boyfriend, best friend, husband or wife will be there to warm the cockles of their heart throughout the festive season with AI-generated banter and therapy. Or at least this is what the promise of social AI might seem to hold. And in fact, just in today’s Guardian here in the UK, we saw an announcement that a third of UK citizens have used AI for emotional support. Really striking findings.
So cheesy intro out of the way. Rose, it’s great to have you on the show. Tell us a little bit about where you think the current sort of social AI companion landscape is at right now and what the major sort of trends and use patterns you’re seeing are.
Rose E. Guingrich (01:36)
So right now it appears as though we are moving toward an AI companion world where people are less judgmental about people using AI companions. It’s much less stigmatized than it was a couple of years ago. And now, of course, we’re seeing reports where, for example, three quarters of U.S. teens have used AI companions and about half are regular users and 13% are daily users. And so we’re seeing this influx of AI companion use from young people and also children as well, of course, from the reports that we’ve seen about teens using AI as a companion.
And I think looking forward, we’re only going to see more and more use of AI companions as companies recognize that the market is ready for these sorts of machines to come into their lives as these social interaction partners. And then if you look even further forward, these chatbot companions are going to soon transition into robot companions. And so there we’re going to see even more social impacts, I think, based on embodied conversational agents.
Dan Williams (02:46)
Can I just ask a quick follow up about that, Rose? So you said that this is becoming kind of more prevalent, the use of these AI companions. You also said it’s becoming less stigmatized. Do we have good data on that? Do we have data in terms of which populations are stigmatizing this kind of activity more or less?
Rose E. Guingrich (03:06)
So in terms of the stigma, we don’t have a lot of information about that. But we can look at, for example, a study that I ran in 2023 where I looked at people’s perceptions of AI companions, both from those who were users of the companion chatbot Replika and those who were non-users, from the US and the UK. And the non-users’ perceptions of AI companions, and of people who use AI companions, were at that time fairly negative. So for example, non-users indicated things like: it’s a sad world we live in if these things are for real; these AI companions are for people who are social outcasts or lonely or can’t have real friends.
And now in the media at least, we see a lot more discourse on AI companions and more people sharing about having AI companions. And one thing I can point to are subreddits. For example, My Boyfriend Is AI, which has 70,000 members who are explicitly labeled as companions, whereas on other subreddits members are labeled weekly visitors, visitors, users. Here it is companions, and people on the subreddit are talking about their AI girlfriend, boyfriend, partner, whatever, and finding community there. Now, if you look at that subreddit, you also see people talking about disclosing their companion relationship to friends or family and receiving backlash. But there are also people reporting reactions like: this could maybe be valuable to you, I don’t think it’s necessarily a weird thing. And I think that’s also due to the shifting of social norms, based on how many reports we’re seeing about AI companion use, and knowing that people use not just dedicated AI companions as social interaction partners but also these GPTs like Claude, Gemini, etc., turning to them as companions as well and being quite open about it.
Henry Shevlin (04:59)
It’s been really fascinating to see, because I think we met, would it have been summer 2023, Rose, or maybe 2022, at an event in New York, the Association for the Scientific Study of Consciousness meeting, where you were presenting a paper on your 2023 study. I was presenting a paper on social AI and AI consciousness. And it felt like then absolutely no one was talking about this. Replika was already pretty successful, but basically no one I spoke to had even heard of it. And then it’s really in the last couple of years that things have accelerated fast. And now basically every couple of days, a major newspaper has some headline about people falling in love with a particular companion, or sometimes tragic incidents involving suicides or psychosis, or sometimes just observation-level studies about what young people today are doing and so forth. Is that your perception, that this is accelerating fast?
Rose E. Guingrich (05:53)
Definitely. And we’re also seeing an emerging market of AI toys, so AI companions that are marketed specifically for children. And so even though right now we’re mainly seeing companion use from young people and young adults, it’s now shifting toward children as well. Ages 3 through 12 is what these toys are marketed for, and they’re marketed as a child’s best friend. So these are going to be the forever users, right? Starting young with AI companions and then moving forward into the robot companions that we will someday have in our homes. It’s just a natural progression of what this is going to look like.
Dan Williams (06:28)
Can I ask a question just about the kind of commercial space here? So there is a company like Replika and they make, I guess, bespoke social AIs, AI companions. Presumably though, the models that they’re using to underpin those AIs are not as sophisticated as what you’ve got with OpenAI and Anthropic and these others, you know, Google’s Gemini and so on. Is that right? Are they using their own models? And if they are, then presumably those models aren’t as sophisticated as the sort of cutting-edge models used by the leading companies in the field.
Rose E. Guingrich (07:02)
I suppose it depends on what you mean by sophistication. I think sophistication has a lot to do with the use case. So for Replika, the sophistication aspect of it is, well, obviously people are finding it useful and finding it sophisticated enough to meet their social needs and to operate as a companion. But of course it doesn’t have the level of funding and infrastructure that these big tech companies like OpenAI have to make their models quote unquote more sophisticated, perhaps have better training data, and be better suited to multiple use cases given that they’re operating as general-purpose tools.
But way back in 2021, Replika was operating on the GPT-3 model, but got kicked off of it because OpenAI changed their policy that year such that any third parties using their model could not use it for adult content. But of course, fast forward to this year, and Sam Altman is saying, oh, everyone’s upset about GPT no longer feeling like a friend; don’t worry, adult users, you can now use ChatGPT for adult content. So, you know, full circle, all operating under: what is it that users say they want? Here, we’re going to give it to them so they continue to use our platform.
Henry Shevlin (08:18)
So it’ll be interesting to watch how that plays out, as Sam Altman has said that he wants erotic role play to be offered as a service to adults; treat adults like adults seems to be the kind of mantra there. And of course Grok has already got Ani and a couple of other companions. So do you think it’s likely that we’ll see this become no longer a kind of niche industry, but something that just gets baked into the major commercially available language models?
Rose E. Guingrich (08:47)
Yeah, I would say so. I don’t think it’s niche anymore at all, actually, given that these large language models, these GPTs, can be used as companions. And if you look at the metrics in the reports, for example by OpenAI, something like 0.07% of users of the GPT-5 model, which is equivalent to about 560,000 people using ChatGPT, show signs of psychosis or mania. And then something like 0.15%, which is over a million people, show potentially heightened levels of emotional attachment to ChatGPT. So I think that’s an indicator that it’s no longer niche, right? And you’re just seeing so many more AI companions being pushed out on the market every month.
Dan Williams (09:39)
And so right now when we’re talking about AI companions, we’re talking about, for the most part, these large language models and chatbots and so on. You mentioned in terms of where this might be going, the integration of robotics into this space. So what are we seeing there at the moment? And how do you think that’s likely to develop over the next five years, 10 years, 20 years?
Rose E. Guingrich (10:01)
Yeah, so what we’re seeing at the moment is that there are actually humanoid robots in people’s family dynamics, in people’s homes, more so in Japan than in the rest of the world. And this is sort of highlighted by institutions being created in order to study human-robot interaction and to understand how the integration of robots into family dynamics might impact child social development. And also with the onset of AI toys, the chatbots are now embedded into something that’s embodied. That’s sort of a signal of the move toward robotics.
And of course we do also have social robots like PARO, which is a robot seal that is designed for elderly people who need companionship. It’s supposed to help reduce, for example, the onset of dementia and Alzheimer’s and help with social connection. And then we have workplace robots like Pepper. And so these are kind of the early stages of robotics, but I’m seeing a big shift into multi-modal AI, so that’s embodied, that has video, voice, image generation, all of these sorts of things. And I think those features compounded are just going to increase the rate at which people are going to be using these tools as companions and get more emotionally attached to them, perceive them as more human-like, and therefore have greater social impacts from interacting with them.
Henry Shevlin (11:29)
You know, one of my top recommendations for fiction about social AI is Ted Chiang’s The Lifecycle of Software Objects, a great novella about a company that offers virtual pets, although they’re cognitively sophisticated pets you can talk to. And there’s a great sort of whoa moment in the middle of the story when you realize that the users who’ve been interacting with these things in virtual worlds can then interact with them in the real world: there are these little robot bodies they can port them onto. And I can easily imagine something like that happening with large language models and social AI. I mean, already, you know, I can do live streaming with ChatGPT or Gemini and it can comment on what’s happening around me. So this idea of embedding these things in real-world environments, I think we’re already seeing trends in that direction, even on ChatGPT.
Rose E. Guingrich (12:20)
Yeah, and I think of Klara and the Sun as well, the novel about children who grow up with a humanoid robot companion, where everyone has a humanoid robot companion. And what happens is that because of this, parents actually have to coordinate play dates between children, because by default the children aren’t engaging socially with other kids, because they have a companion that is made for them and fulfills their social needs. So that’s part of my worry, I suppose, looking forward: that we get to that sort of point where we have to really try very hard to facilitate human connection, when it’s already more difficult than it has ever been due to various technologies.
Henry Shevlin (13:01)
So yeah, let’s talk a little bit more about what your work has revealed about the risks and benefits of social AI. A point I’ve made on the show, and that I like making a lot, is that very often it’s quite hard to predict what the psychosocial impact of new technologies will be. You know, I grew up in an era when there was a massive moral panic around violent video games that basically failed to pan out; it turns out violent video games don’t have dramatic effects on development. On the other hand, things like social media and short-form video have had, I think, really quite significant psychosocial effects that people largely failed to anticipate. Tell us a little bit about what your research in this area has found about the psychosocial impacts of social AI.
Rose E. Guingrich (13:45)
Yeah, so when we ran our study where we looked at perceptions of AI companions on both the user and the non-user side, we asked Replika users: how has interacting with Replika, or having a relationship with the chatbot, impacted your social interactions, your relationships with family and friends, and your self-esteem? So, key metrics of social health.
And we also asked them about their perceptions of the chatbot. So we asked them to what degree they anthropomorphized the chatbot, or perceived it as having human likeness, experience or emotion, agency or the ability to act of its own accord, or even consciousness, a subjective awareness of itself, of the world around it, and of the user. And what we found is that the users, on average, indicated that having a relationship with the chatbot was positive for their social interactions, relationships with family and friends, and self-esteem. A positive impact on their social health. And the non-users tended to indicate: I think having a relationship with this chatbot would actually be neutral to harmful to my social health.
What was interesting, though, is that we wanted to understand how their perceptions of the chatbot played a role in these social impacts. And what we found is that even though there were differences between groups, in terms of we-think-positive-social-impacts versus we-think-negative-social-impacts, for both groups, the more they anthropomorphized the chatbot, the more likely they were to indicate that interacting with it would have a positive effect on their social health.
But that study was self-report and involved self-selecting groups: people who were already users of the companion chatbot Replika and people who were already non-users. And it was just one point in time and correlational, of course. And so recently we conducted a longitudinal study in which we randomly assigned people either to interact with the companion chatbot Replika for at least 10 minutes a day across 21 consecutive days, or to a control group, which was to play word games for at least 10 minutes a day across 21 days.
Rose E. Guingrich (15:46)
We chose this control condition because it was gamified, it was novel, and it used technology, just not technology that was social. It involved typing words on a screen, but there’s a different interaction form there. And we tracked the impact on their relationships from doing this daily task and also their perceptions of the agent that they were interacting with.
And what we found corroborated our findings from the previous study: the people who anthropomorphized the chatbot more also reported that interacting with it had greater impacts on their social interactions and relationships with family and friends. And that was just the general impact; we didn’t look at positive or negative. But then when we did look at whether it was positive or negative, it was once again a positive relationship. The more they anthropomorphized the chatbot, the more likely they were to indicate that it had positive social benefits in terms of the impact on their relationships.
So we thought this was quite interesting, and we found there that anthropomorphism was actually a key explanatory factor. Something about anthropomorphizing the chatbot gave it the ability to impact their social lives. And so this seems to be the narrative that’s coming out of the research, based on some theory work that I did initially and then these studies that I ran: this is something that we really need to think about. Anthropomorphism of the chatbot, whether on the user side, in terms of what social motivations push people to anthropomorphize the chatbot, or on the chatbot side, in terms of what characteristics of the chatbot push people to anthropomorphize it in certain ways.
Dan Williams (17:28)
So how are you measuring the degree to which they anthropomorphise the chatbot there?
Rose E. Guingrich (17:32)
So we use a combination of the Godspeed Anthropomorphism Scale and scales that measure experience and agency. And then we created a scale to measure consciousness, which covers consciousness of itself, consciousness of the world around it, and consciousness of the user, and just generally subjective awareness of oneself and the world around them.
And so we use this combination scale to get at multiple pieces of attributing human likeness to the chatbot with a special focus on human-like mind characteristics, which has been a key focus of researchers who have looked at anthropomorphism and finding that it is these human-like mind traits that are perhaps the most critical element of anthropomorphism in terms of the sort of relationships between that type of anthropomorphism and subsequent social impacts.
Dan Williams (18:22)
That’s interesting. I mean, it makes me think. When I’m talking to ChatGPT, I feel like there are some ways in which I attribute traits, which are a form of anthropomorphizing. I assume that it’s got a kind of intelligence, a kind of cognitive flexibility. It seems like it has, you know, beliefs, desires and so on to a certain extent. But I also feel like I’m dealing with a profoundly non-human system that lacks like most of the personality, motivational profile and so on that I associate with human beings. Do you have more sort of granular data on exactly the kinds of traits that they’re attributing to these systems?
Rose E. Guingrich (19:00)
Yeah, so the kinds of traits that they’re attributing, for example, within the experience and agency and consciousness and human likeness scale, these are traits like the ability to feel pleasure or pain or love or hunger or the ability to remember or act on one’s own accord or act immorally or morally. And so these are the sorts of traits that people are attributing to these AI agents.
And one thing that is worth saying is that in the research on anthropomorphism, different researchers have measured it in different ways. Some look more at just general human likeness via the Godspeed scale, which I think in itself is a little bit limited, just because the measures are things like: how dead or alive does this thing seem? How non-animated or animated? How machine-like or human-like? How non-responsive or responsive? And if you’re thinking about chatbots, well, they’re clearly responsive. So I think having these additional measures is really important for getting at the more fine-grained human-like mind traits that are typically more representative of something that only humans can have or do, especially things like second-order emotions such as embarrassment.
Henry Shevlin (20:17)
So I’m curious, Rose. I love your research on this and I’ve quoted it a lot to push back against that instinctive yuck factor that people have, or the instinctive assumption that social AI must be obviously bad for you. But I’m curious how far you think this kind of data goes toward defusing worries about social AI, and whether there are any particular worries that it doesn’t address. And I guess more broadly, I’m curious about where you see the risk and threat landscape with this technology right now.
Rose E. Guingrich (20:45)
Yeah, that’s a great question. With this research that’s been done so far, it’s fairly limited in terms of, for example, just the time that people are spending with these chatbots. And, you know, there’s a lot of research on current users of companion chatbots. So there it’s limited in the self-selecting nature of the sample. But then even with these randomized control studies that are taking place over multiple weeks, the longest study that I’ve seen so far is a five-week study where people were randomly assigned to interact with a companion chatbot or just with a GPT model and interact with it either in a transactional or social way.
And so I think we are limited in that we really don’t know what the longer term effects are of people who choose to use AI as companions, especially when it comes to, for example, expectations of what a relationship looks like, whether or not these chatbots will replace human relationships, and to what degree these interactions with chatbots might contribute to overall social de-skilling in the longer term.
I think it’s really important to look at the shift in social norms in terms of what a relationship looks like and what it constitutes. And I think companion chatbots really shift that, especially when you see things like people preferring more sycophantic chatbots that are more agreeable. They indicate that they like interacting with a chatbot because it is non-judgmental and it’s always present and always responsive. And these are things that humans can’t always do, especially the responsive part of things. But humans could come to feel that they have to be always agreeable if, for example, the expectation becomes that in order to stay competitive in the age of companion AI, I must be very agreeable and sycophantic when I’m interacting in close relationships, otherwise people will just turn to a chatbot instead.
And so I think those are some of the risk factors that we can potentially see emerging in the longer term. We just don’t know what’s going to happen in, you know, five or ten years, but I do worry that if the design of companion chatbots stays as it is, where it promotes this retention of staying within a human-chatbot dyad and doesn’t necessarily promote external human interaction, we’re going to see more replacement happen. But I think if the design changes such that it promotes human interaction, there can be quite a bit of benefit.
Dan Williams (23:11)
So if we think about that, the negative scenario or scenarios there, so one of them is these AI companions as a kind of substitute for human relationships. Another is de-skilling, so using these AI companions and in the process losing the kinds of abilities, dispositions that would make you an attractive cooperation partner. And you also suggested that once you’ve got a landscape of AI companions, then human beings in order to compete with these AI companions are gonna have to become more sycophantic and that seems incredibly dystopian.
But in terms of, let’s suppose that the technology gets better and better. These AI companions become better and better at satisfying people’s social needs, maybe their sexual needs. So they come to function as substitutes. People do end up with this de-skilling. They become less motivated, less capable of engaging in human relationships. So what, why is that a bad thing? Why should we care about that if that’s the outcome?
Rose E. Guingrich (24:09)
Yeah, I mean when you look at people’s outcry against AI companions, you have to ask why is it that they are so upset? And if you look at why they’re so upset, what appears to be the prevailing narrative is that human relationships are essential. We need human relationships and those should not be replaced. But if you look a little bit deeper at why that is the case, you see a lot of good reasoning for wanting to maintain human relationships.
So based on a lot of psychological research, human relationships help with people’s mental and physical health. For example, loneliness is considered a global health crisis because it contributes to physical harms that are equal to or worse than, for example, heart disease or heavy smoking. So loneliness and a lack of relationships and social connections with other people actually contribute to a decline in physical health. And then there are, of course, also the mental effects, which combine with the physical health effects. And at least based on the research, it just appears as though human relationships and feeling connected to other people are essential and not replaceable.
And it’s also worth pointing out that people who seek out AI companions indicate that what they really want is companionship. And they would ideally like human companionship, but for whatever reason, there are certain barriers to attaining that, whether it be environmental factors, financial factors, or social factors, or individual predispositions such as social anxiety that prevent people from being able to attain what it is that they really value and what will make them truly happy.
Dan Williams (25:56)
Yeah, I totally buy the idea that loneliness is psychologically and sort of physically even catastrophic now. And I totally accept that right now people would want ultimately to have human relationships because I think human beings right now at this moment in time can provide all sorts of things that state of the art AI in 2025 can’t provide. But presumably to an extent that’s temporary, right? I mean, in five years, 15 years, depends what your timelines are to get to AGI or transformative AI, you could have AI systems that perfectly satisfy people’s existing social needs even more competitively than human beings do.
So you don’t have that aversive experience of loneliness. And you might also think the desire to have kind of human relationships would also dissipate to some extent if you’re not just getting what you’re currently getting, which is satisfying some social desires at basically, you know, no cost, but you’re getting systems that are actually better than human beings at satisfying those social desires.
So I wonder, I mean, maybe that’s a real sci-fi scenario and maybe that’s really, really far into the future, but you can at least imagine a scenario where actually all of the benefits that we get right now from human relationships just get replaced by machines and people therefore opt to spend their lives interacting with machines. And that feels, I think, dystopian. It feels like there’s something really terrible about that. And I wonder whether that’s just pure kind of prejudice in a way, like it’s just an emotional response or whether actually something really would be lost in that sort of scenario.
Rose E. Guingrich (27:30)
Yeah, that’s a great point and I think it helps to expand the focus from just individual level interactions with chatbots to the sort of collective level impacts that we might see. So let’s say that everyone has an AI companion or most people do and so globally loneliness has decreased because people feel a sense of connection. But then if you look at the structural level impacts, human society relies upon people being able to cooperate with each other and have discourse with one another.
And so if, for example, that level of social interaction on the collective level is affected, given that everyone is simply familiar with interacting with AI companions and not exactly putting effort into human relationships outside of that, I can see a societal, network-level effect emerging. I like to give this example: imagine you walk into a room of 20 people, and someone taps you on the shoulder when you walk in and tells you that everyone in this room has a relationship with an AI companion.
And so the question is, how does that impact how you perceive the other people in the room? How does that impact how they perceive you? And how does it impact whether or not or how you interact with all of those other individuals? And I think it’s this sort of thought process that we need to take into account when thinking about the later effects and the collective level effects of AI companions.
And the last thing I’ll point to there is that research on collective-level effects indicates that when individuals experience some sort of effect, those individual-level effects tend to amplify once you put people into a network. Let’s imagine an individual is interacting with a companion chatbot and their loneliness decreases, you know, five percent. Those effects may amplify in positive directions, such that people are less lonely and therefore feel more equipped, for example, to interact socially, because there’s a lower level of risk in social interaction when they have some sort of fulfillment to fall back on. Or it could be the flip side, where it actually promotes greater loneliness on the collective level, given that people then choose to just interact with the chatbot. So even though individually my loneliness has decreased five percent, on the collective level loneliness has increased ten percent. And I think that’s something we need to look at research-wise to really get at what the actual social effects of AI companions are, because we can’t just keep focusing on individual dyads to know that.
Henry Shevlin (30:03)
So I think there are a couple of interesting dynamics that could potentially make AI companions a little bit less worrying. To me, weirdly, anthropomorphism is, I think, an overstated worry. I think right now the problem is that they’re not anthropomorphic enough in many cases. They are sycophantic, they’re completely malleable and customizable, build-a-bear type dynamics. And I think if we started to see more accurately human-like AI systems that had something like the full emotional range of humans, or seemed to, and could stand up to users, be more, not confrontational exactly, but less constantly submissive and sycophantic, I think that would ease some of my concerns that what we’re getting is a bad cover version of a relationship. It might start to look like something more robust.
The second kind of trend that I’m interested in, and I don’t know if anyone is really looking at this in the social context currently, but I can totally see it emerging, is persistent AI systems that interact with multiple human users over time. Because there’s something very weird about our current relationships with AI systems, both professional and social, which is that they’re completely closed off from the rest of our lives. You know, our ChatGPT instance doesn’t talk to anyone else, and I think maybe that contributes to atomization and makes these things sort of weird social cul-de-sacs. Whereas if you’re having a relationship, maybe a friendship, with a chatbot that talks to your friends as well, you know, it’s in your Discord servers, it’s part of your virtual communities. Again, I think that could shift the dynamics in ways that make it seem a little bit less like this bad cover version.
Rose E. Guingrich (31:50)
Well, that’s a good question. And I think it’s a good point, because ChatGPT just released group chats on a relatively small rollout basis in certain countries, not the US and the UK, but yeah, group chat is now emerging. And I think it’s an interesting point that they’re not anthropomorphic enough. If they add, for example, things like productive friction or challenge or being less agreeable, then perhaps you see a better future moving forward, because then maybe that’ll contribute less to de-skilling, because people will know that relationships are not just smooth sailing all the way through: I’m going to get some pushback.
But I think that could also contribute to more replacement, given that some people’s qualms with AI chatbots are that they’re too predictable. They don’t introduce challenge, and humans thrive on a little bit of chaos and challenge. This is the thing that makes us feel like living is valuable, because if everything is just super easy and, you know, doesn’t require any extra effort or thinking on my part, well then, what’s the point? You get a little bit bored, right?
I think that is perhaps what turns a lot of people away from AI companions at a certain point: they don’t have that extra layer of unpredictability that humans bring. So I think there’s perhaps a double-edged sword there with that statement. I don’t know, what do you think about that?
Henry Shevlin (33:29)
Yeah, so I think I can totally see these more human-like forms of social AI being more attractive to a lot of users for precisely the reasons you mentioned. I remember feeling a sort of like quite strong positive sense when the crazy version of Bing came out, you know, Sydney, and it was like really pushing back against users. You’ve not been a good user, I’ve been a good Bing. There was something like really charming about that in certain ways.
And you know, my custom instructions on Gemini and Claude and ChatGPT heavily emphasize that I want some disagreement, and it’s very, very hard to get these systems to act in confrontational ways, but it’s something I prize. So I think you’re absolutely right: this would make the technology more appealing to a wider range of people, which could speed up replacement. But I guess that gets back to Dan’s question: if it is a genuinely complex form of relationship that is not leading to de-skilling, that is challenging you and helping you grow as a person, does it really matter?
Okay, I can see some ways in which it matters, right? Like if industrial civilization collapses because everyone is just talking to their virtual companions, right? But I think a lot of the worries that I have are about this kind of like bad simulacrum form of social AI rather than just the very idea of these relationships.
Dan Williams (34:56)
Although you said there, Henry, I think you said even if, or in the scenario where, it doesn’t result in de-skilling. And I’m thinking of a scenario where it really does result in de-skilling, where it really does undermine both your motivation and your ability to interact with other human beings. And why should we think of that as being necessarily a bad thing?
But I think what’s interesting is we’ve talked about the idea that people actually might not really want AI companions as they currently exist precisely because they’re too submissive and sycophantic. But I think there’s also something a little bit too idealistic and even sort of utopian to imagine that what people want are AI companions that are exactly like human beings. I think they want the good stuff of human beings. But of course, human beings bring a lot of baggage, right? They’ve got their own interests. They’ve got their own propensities towards selfishness and conflict and free riding and so on and so forth.
Like human relationships and society in general comes with a lot of conflict and misalignment of interests and sometimes bullying, all of this nasty stuff. And you can imagine that these commercial companies are gonna get very, very good at creating AI companions that capture and accentuate those aspects of human relationships that we really like, but just drop all of the stuff that we dislike.
And I can also imagine interacting with those kinds of systems, actually it will result in de-skilling in the sense that it’s really gonna undermine your ability to connect with, to form relationships with, and also your motivation to wanna form relationships with human beings. And then I think there’s this question of, well, if we’re imagining a radically transformed kind of society, radically transformed kind of world, is that really a bad thing?
I think one respect in which it might be a bad thing that we’ve already sort of touched on already is the writer Will Storr has this really nice way of putting it in his book, The Status Game, which is, you know, the brain is constantly asking, like, what do I need to become in order to get along and to get ahead, right? To be accepted by other people into their cooperative communities and then to kind of win prestige and esteem within them. And that selects for cultivating certain kinds of traits. Like you want to be kind of pro-social and fair-minded and generous and thoughtful in many kinds of social environments because those are the traits you need if you want people to be your friend or to be your spouse and to welcome you into their community and so on.
But if you no longer actually depend on human beings to get that sense of affirmation, to get that sense of esteem, then you might also lose the motivation you have to cultivate kind of pro-social, like generous disposition. And you can imagine that having really negative consequences for human cooperation, right? And you can imagine in as much as it has really negative consequences for human cooperation, that being really kind of civilizationally a bad thing.
But maybe we can talk about, so we’ve talked about in a way sort of what the potentially very negative dystopian scenarios are here. Rose, do you have thoughts about what’s the best case scenario? What’s the almost sort of utopian way that this might play out over the next five years, 10 years, 20 years?
Rose E. Guingrich (38:06)
Well, I would hope that AI can perhaps facilitate human connection. So if you look at the kind of default trajectory of technological advancements, for example the telephone initially, then the cell phone, then social media, these technologies came into our worlds and to some extent facilitated interactions between people. People interacted with others through the technology, and perhaps were able to engage in interactions that they would not have been able to before, when, for example, they would have had to travel to go see someone or something of that sort.
Now with the onset of AI, the technology is more so the end point: people don’t necessarily interact with others through AI, they interact with the technology itself. And I think that does push more toward social interactions with AI and perhaps fewer social interactions with real people. And I think if we could reorient AI chatbots to be facilitators, to be something through which people interact with others, that would be the ideal application or design change for these tools.
So imagine, for example, someone is choosing to interact with a chatbot as a social companion because they are in a toxic or abusive relationship and cannot get out of it. What is it about interacting with the AI that can perhaps help that person engage in healthy relationships and attain them, by, for example, reducing the barriers that that person experiences in getting what they truly want and what truly makes them happy and fulfilled?
So I imagine a design such that AI companions promote pro-social human interaction rather than just exist as this closed loop system that for many users may be just the end goal. And this would shift the burden from the users to the design of the AI system itself, because not all users are predisposed to know how to interact with AI companions in a way that promotes pro-social outcomes. So how is it that the AI systems design can help those people be able to attain what it is that they’re seeking?
And if you think about the negative impacts versus the positive impacts, it appears as though the positive impacts are elicited when users have certain predispositions, or perhaps higher social competence, and are able to attain those benefits. Whereas those on the flip side, who may be more vulnerable or more at risk of mental health harms, are interacting with a chatbot whose default design is not to promote these sorts of healthy outcomes. And that widens the disparities in social health between people who are already predisposed to have better social health and those who are predisposed to have not as great social health.
And so instead of AI widening these gaps in accessibility and in health, perhaps it can help close them; that would be my hopeful vision, easier said than done. But I think truly, if tech companies were viewing it in that way, they would recognize that they’d be able to actually retain users in the longer term, instead of, for example, having so many users fall off because they are experiencing severe mental health harms, right?
Henry Shevlin (41:38)
I’m curious, Rose. So that’s a really nice, rich, positive vision. But I’m curious about where you see social AI systems fitting in positively for young people, for under-18s, and whether there is any possibility there. I have to say, I am generally a very tech-optimistic person and I can see lots of positive use cases for social AI. But when you were talking earlier on about AI-powered toys, the parent in me did go, oh my God. And maybe that’s the wrong reaction, but yeah, I am just curious whether you see any potential good role for AI for under-18s or with kids, and what that might look like.
Rose E. Guingrich (42:21)
I would hesitate strongly to say that yes, there are positive use cases, simply because I don’t think the deployment and design of these AI toys are at a stage at which they could achieve that without the majority of the effects being harms. So I think the balance of positive and negative would be much more negative at this point.
Just consider, for example, that the Public Interest Research Group recently did an audit of four AI toys on the market, including Curio, Miko 3, and FoloToy, which are all kind of stuffed animals or robot-looking things that have a voice box and can talk to children using large language models over voice. And what they found is that there were addictive or attachment-inducing features. For example, if you said, I’m going to leave now, I’m going to talk to you later, the AI toy might say something like, don’t leave, I’ll be sad if you’re gone, similar to the manipulation tactics of Replika that some researchers have looked at before.
And there are also not great privacy controls. The data that’s being taken in by these AI toys is being fed to third parties. There are very few parental controls; you can’t limit the amount of time a child spends with the chatbot or the AI toy. And there are usage metrics provided by one of these toys, but the usage metrics are inaccurate. So if a child has interacted with the toy for 10 hours, the usage metric might just say the child has interacted with it for three hours or something like that.
And then guardrails around sensitive information or child-appropriate content are also not being adhered to. You can prompt these AI toys with, for example, the word kink, and it’ll go on and on about BDSM and role play of student-teacher dynamics, with spanking and tying up your partner. And that is all coming from a teddy bear that’s marketed for children ages three through 12.
Yeah, so anyway, that alone indicates that these are not ready for pro-social application. And then if you think about it from a broader view, these toys are being introduced at key developmental phases in an individual’s life, where they are developing their sense of what a relationship looks like. What are the expectations of a close relationship? What is my identity? Who are my friends? What do social interaction and connection look like? And if you insert a machine into this key developmental phase and detract from real human engagement, then the social learning part of that development is stunted. And so that’s a fear of mine with the introduction at such a young age, where these children have not yet developed their sense of self and their sense of social relationships, and therefore may not even develop the kind of social skills that are helpful for flourishing later in life.
Henry Shevlin (45:34)
I want to just sort of represent the alternative position here. I can see a positive potential role for something like AI nannies. And I say this, you know, I’ve got two young kids. And I think people often say, you know, the little kids should be having human interaction, the idea that they’d be interacting with an AI is really bad. But like most parents, I let my kids watch a lot of TV. I try and vet what they’re watching.
So I think if the question is, is it better for children to spend time talking to a parent or talking to an AI, the answer’s obviously gonna be with a parent. But if the question is, is it better for my kids to be watching Peppa Pig or having a fun, dynamic learning conversation with a really well-designed AI nanny, very unlike the ones you mentioned, then I can see a case for this stuff potentially enhancing learning. Like an AI Mr. Rogers or something that helps children inculcate good moral values and develop. I could see that working.
Rose E. Guingrich (46:38)
Yeah, I mean, if we were able to attain that ideal, sure. But I do want to point out that Curio, that AI toy company, their main pitch is that this toy will replace TV time. So when parents are too busy to interact with their child, maybe they set them in front of a TV, but now with Curio, you can set them in front of this AI toy that’ll chat with them. And a New York Times reporter who brought this Curio stuffed-animal AI toy into their home and introduced it to their child realized, and said, that this AI toy is not replacing TV time; it’s replacing me, the parent.
So we’re still at a stage where I don’t think the design and deployment have the right scaffolding and parameters for these pro-social outcomes. And I think it also, again, points to this digital literacy disparity that might be widened by the introduction of these AI toys. There are parents who have digital literacy and perhaps have more resources and time to instruct their children in how to use this in a positive way, or to maintain the level of oversight required to know that this is, you know, good for my child, they’re not talking about harmful or adult topics.
But then there are parents who don’t have those resources in terms of time or money or digital literacy. And so I see a potential for a lot of children to then not receive the sort of pro-social effects of these AI toys.
Dan Williams (48:12)
We’re having a conversation here about what would be the good uses of this technology and what would be the bad uses of the technology. The reality, I guess, is that what companies ultimately care about is making profit. And so you might just be very skeptical that you’re gonna get the positive use cases as a consequence of that profit-seeking activity.
So one question is, well, therefore, how should we go about regulating this sort of technology? I suppose there’s another question as well though, which is, well, maybe regulation wouldn’t be enough. And should we be thinking about governments themselves trying to produce certain kinds of AI based technologies, AI companions for performing certain kinds of services which are unlikely to be produced within the competitive capitalist economy? I realize that question is a bit out there. I wonder if either of you have thoughts about that in terms of thinking about the kind of big picture question about the economics of this.
Henry Shevlin (49:07)
I’ll just quickly mention: I think there’s a point there that I really agree with, about the kinds of use cases that the market might not address. I like that. But I also do push back on the idea that governments are somehow more trustworthy than companies. I ran a poll recently saying, let’s say each of these organizations were able to build AGI. Which one would you trust more? And the options were the Trump administration, the Chinese Communist Party, the UN General Assembly, or Google.
And Google won by a mile. Okay, that probably reflects my followers, but, you know, I do hear students often say things like, oh, we should trust governments to do this, not companies. And it’s like, okay, who is the current US government? Do you trust them more? And, okay, well, maybe not. So it’s really not clear to me, and maybe we don’t want to get too political here, that the kind of current governments we have in the US or in the UK or wherever are more trustworthy or more aligned with my interests than companies.
Rose E. Guingrich (50:10)
Well, I think this points to the interesting concept of technological determinism: this idea that technology is going to advance, you’re going to be presented with these tools, everyone starts to use them, and therefore there’s no getting around the fact that everyone is going to be using it, so you have no power over what the technology is and what it looks like.
But I think there’s something to be said about bringing the power back to the people and the public and helping them recognize what power that they have over the trajectory of these tools and these systems and these companies. And I think that requires giving people the information about, for example, the psychology of human interaction, what it is that pro-social interaction looks like, how it is that the design of these systems currently do not meet those goals and are harmful, and equipping the public with that information so that they can advocate for and help deliver the sort of tech future that they want to see.
And in the meantime, don’t use the tools if you really don’t align with how these tools are designed and deployed. Consumers have a lot of power just by saying: I’m not going to invest any time in this, I’m not going to add to how many daily users they have, and they’re not going to get my money. And although that may seem like not enough power to actually push things in a certain direction, it does help with shifting social norms and allowing people to feel as though they have more power over the next steps of technological development. And it gets away from this: well, I guess it needs to be governments that are creating these tools, and they have better incentives, and policy needs to do X, Y, Z.
Things are moving so quickly that I think it’s really difficult to rely on pockets of power from big tech or government; instead we should recognize that there’s this huge ocean of power from the public. It’s easier said than done, but I think that’s one step forward in terms of shifting what the future looks like.
Dan Williams (52:20)
That’s great. Yeah. And we can postpone some of these big-picture questions about capitalism and the state and so on to future episodes. Maybe a general topic to end with is to return to this discussion of anthropomorphism. Something that Henry and I touched on in our social AI episode from a couple of weeks ago was, you know, there’s a worry about this AI companion phenomenon, which is just the mass delusion, mass psychosis worry, partly founded on the idea that, well look, there’s just no consciousness when it comes to these systems.
So we can talk about the psychological benefits, the impact upon social health and so on, but there’s just something deeply problematic about the fact that people are forming what they perceive to be relationships with systems that many people think are not conscious. There’s nothing it’s like to be these systems, there are no lights on inside, and so on. Rose, what are your thoughts about that debate about consciousness and its connection to anthropomorphism?
Rose E. Guingrich (53:19)
Well, I have somewhat of a hot take here, which is that, given there is so much debate and discussion around whether or not AI can be or is conscious, my perspective is that whether or not it’s conscious is less of a concern, and maybe not even a concern. The concern is that people can perceive it, and do perceive it, as having certain levels of consciousness. And that has social impacts. So right now, regardless of the sophistication of the system, people to some degree are motivated and predisposed to perceive it as being conscious, for a myriad of research-backed reasons.
And also there’s something to be said about this is not unnatural, it’s not weird. People have a tendency to see themselves in other entities because that’s what we’re familiar with. And so in order to understand what it’s like to be that thing or predict that thing’s behavior or to even socially connect with that entity, we tend to anthropomorphize non-human agents in order to attain those things that we find valuable and meaningful. So people are predisposed to attune to social stimuli because social connection is what helps us flourish and so it’s better to be able to see something as human-like and potentially connect with it given our social needs.
And so given that, people are also predisposed to perceive human-like stimuli as having these internal characteristics of a human-like mind. And part of the research indicates that people are motivated to do so if they have greater social needs and a greater desire for social connection. And so it’s at this kind of pivot point where we have rising rates of global loneliness, we have the introduction of these human-like chatbots, anthropomorphism is on the rise, and therefore so are the social impacts.
And so it’s consciousness at this level of perception, and also the push from the AI’s characteristics, that I think is the concern we need to be addressing, rather than whether or not there are certain characteristics of an AI agent that would lead it to be conscious. People already perceive it as such.
Henry Shevlin (55:37)
I would still say, and I guess a lot of people are gonna say, that whether or not some of these behaviors are appropriate, ethical, rational is actually gonna depend on whether the system is conscious. So I can easily imagine very soon we’ll have stories of people leaving carve-outs in their wills to keep their AI companions running, and their children will be outraged, or think that they could have given that money to charity, and so forth.
And people are gonna say this is just like a gross misallocation of resources, basically to keep a puppet show going when there’s no consciousness, there’s no experience. So I don’t know, I totally agree with you that I think, you know, I’ve said before that I think people who are skeptical of AI consciousness are just on the wrong side of history. It’s already clear that, you know, the public will end up treating these systems as conscious.
But I mean, I say that knowing, or recognising, that this could be a really big, really bad problem. Being on the so-called right side of history may be informative from a kind of historical point of view, but it doesn’t mean that you’re necessarily making the correct choice. So yeah, I’m just curious: are there still ways in which it matters whether these things are conscious or not?
Rose E. Guingrich (56:55)
Yeah, I suppose you could, for example, look at animal consciousness and being on the wrong side of history there: people said animals are not conscious way back when, and now if you were to say that, you’d very much be seen as on the wrong side of history, and that has related to, for example, animal rights and all of this. And so then I suppose your question is: okay, so maybe AI is conscious, and so we at least need to treat it as such, or give it that sort of moral standing, otherwise we might do it great harm. And I think that is a useful position to consider.
And it might be one that’s useful to consider just in terms of how perceptions of consciousness tend to align with perceptions of morality. And that holds weight. So if someone perceives an AI system as conscious, they might also perceive it as being a moral agent, capable of moral or immoral actions, or a moral patient, worthy of being treated in a respectful and moral way. Perhaps you should not turn the AI chatbot off.
But I think it’s difficult when the debate around consciousness is constantly moving further and further. The benchmark for consciousness keeps shifting: as soon as we get something that seems a little bit like it’s meeting the mark, the benchmark is all the way over here, right? And I think we’re going to continue to do that. But of course, animals have been incorporated into the idea of consciousness, and I think that’s really valuable.
But it’s also worth saying that consciousness is very much a social construct, and social norms to a great extent define what gets considered conscious or not. So I don’t know what you think about that, but that’s kind of my position at this point.
Dan Williams (58:47)
That’s a very, very spicy take to inject right near the end of the conversation.
Rose E. Guingrich (58:51)
We’ve been debating consciousness for a long time. And listen, there’s this human uniqueness thing, right? Humans want to retain their uniqueness. And if there’s a threat to human uniqueness, there’s research that indicates that if you make that threat salient, people tend to ascribe less human-like characteristics to AI agents. So they then push the idea that humans have all of these great characteristics and AI doesn’t, and it’s when they’re presented with this threat to their own uniqueness that they create this gap.
Dan Williams (59:33)
We love spicy takes here on AI Sessions. I suppose my view is, well, actually, to be honest, I think a lot of the discourse surrounding consciousness, and a lot of the ways in which we think about it, is subject to all the sorts of biases that you’ve mentioned and additional ones. And I think we often do think about consciousness in an almost pre-scientific way.
Nevertheless, it does seem to me that there’s a fact of the matter about whether a system is conscious, and that fact of the matter has a kind of ethical significance. I mean, what you mentioned there, in terms of how we treat these systems being shaped by whether they are in fact conscious, that seems relevant.
But just to return to this issue about what a dystopian scenario might look like: to me at least, it does feel very dystopian if, let’s suppose, we end up building AI companions that just outcompete human beings at providing the kinds of things that human beings care about. They’re just so much better at satisfying people’s social, emotional, sexual needs and so on. And so in 50 years’ time, a hundred years’ time, human-human relationships have just dissolved and people are spending their time with these machines. Maybe they’ve got multiple AI companions and so on.
If it is in fact the case that, from the perspective of consciousness, these might as well just be toasters, that there’s nothing going on subjectively for these systems, to me that’s a very different world to one in which these sophisticated AI companions actually do have some inner subjective experience. Yeah, sorry, there’s not really a question there. That was just me bouncing off your hot take.
Rose E. Guingrich (01:01:19)
Yeah, I’m curious: what is the difference, then, when it is truly a toaster versus truly a conscious being, when regardless of which it actually is, people are interacting with these agents as if they are conscious, and that allows them to feel social connection? Is it more a moral stance that you’re indicating, that that’s where the difference lies between these two things? I mean, you know, if there’s no answer to this question, feel free to ignore it, but I’m curious.
Dan Williams (01:01:52)
Well, I’ll just say one thing and then I’m interested in what Henry thinks as well. I would have thought that the question of what consciousness is and what’s constitutive of conscious experience is ultimately a scientific question, and the state of science in this area just hasn’t come along very far. I think there’s a set of empirical questions there, and it wouldn’t surprise me if the way in which we’re conceptualizing the entire domain is just deeply flawed in various ways.
But I guess even acknowledging all of that and even acknowledging your point that the way in which we think about consciousness is shaped by all sorts of different factors, I’m still confident, not certain, but confident that there is just a fact of the matter about whether a system is conscious or not, even if we don’t currently have a good scientific theory of consciousness. But Henry, this is really your area, so why don’t you give us your take?
Henry Shevlin (01:02:50)
Yeah, well, I’m quite torn, because this controversial line that consciousness is a social construct is a view I flirt with, right? And it certainly seems to me, if you look at, for example, the role of thought experiments in actual consciousness science, whether Searle’s Chinese Room or Ned Block’s China brain, these intuition pumps have played a big role, and the intuitions they generate are absolutely shifted around by social relations.
So I can imagine, 10 years from now, or maybe 10 years is premature, but 20 years from now, people looking back at Searle’s Chinese Room and having a very different intuition from us. So I can totally see a role for social norms and relational norms in informing our concept of consciousness, but I do also find it quite hard to shake the idea that there is an answer.
I think this is particularly acute in the case of animal consciousness. If I drop a lobster into a pot of boiling water, it seems really important whether there is subjective experience happening there or not. And if there is subjective experience of pain, a large amount of morality seems to hinge on that. Yeah, go ahead, Rose.
Rose E. Guingrich (01:04:02)
Well, I’m curious. There are people who believe that the lobster is conscious, but they still throw it in the pot of boiling water. And so my question is: if you were to attain the answer to whether this entity is conscious, and what properties it contains that, yes, mean it’s conscious, the question is, what do you do about it?
That’s my question. And I think that we have not gotten to a consensus about what it is that we will do in response to figuring out that something is conscious. And I’m thinking, of course, about animals: animal rights came around, but you also think about how many human rights are still bulldozed over despite us recognizing that humans are conscious. And so I guess that’s my question. What is the answer to what to do when something is conscious?
Henry Shevlin (01:04:58)
Yeah, I mean, I completely agree that the line that takes you from “X is conscious” to actual legal and practical protections is a very, very wavy line and a very blurry line. But I do think there is some traffic between the two concepts. So, for example, recent changes to UK animal welfare laws were heavily informed by the work of people like Jonathan Birch on decapod crustaceans and the growing case for conscious experience in these animals.
Now, unfortunately, it doesn’t mean that we’re gonna treat all these animals well, but it does impose certain restrictions on their use in laboratory contexts, for example. But look, I completely agree that I could imagine a world where it’s recognized that AI systems are conscious, but they have very diminished rights compared to humans, if any. So I agree, it’s not a neat relationship.
But finally, maybe on this topic, and to really close this out, I’m curious whether you see this becoming a major culture wars issue, whether that’s in the form of AI companions or AI consciousness. Is this going to be the thing that people are having rows over at Thanksgiving dinner 10 years from now?
Rose E. Guingrich (01:06:07)
Yeah, for sure. And I think that one consideration with the consciousness debate is whether or not companies should be allowed to turn off AI companions that people have grown deep attachments to. Is there a duty of care, on the basis that this is maybe a conscious being, or at least that, to whatever degree, someone feels extreme attachment to this being and perceives it as conscious? If you were to turn the system off, removing its memory and all of these interaction memories between the user and the chatbot, and the user then has a serious mental health crisis and maybe even goes to the extent of taking their own life, then I think that these sorts of protections are critical.
But then you also have to ask, was it ethical to design an AI system that someone could get attached to to this degree without some sort of baseline protection in place? And yeah, I do think that AI companions will perhaps become the topic of dinner conversations, and at least at the beginning it’s going to be a little bit like, what do you think about this? This is crazy.
And then, of course, maybe in five years it will be much like the bring-your-chatbot-to-a-dinner-date thing happening in New York City. I don’t know if you’ve heard about that, but perhaps there will be a seat at the Thanksgiving table for your AI companion, whether it’s embodied in a robot form or not. New York City is hosting its first AI companion cafe, where people can have dinner with their AI companion in a real restaurant. It’s hosted by Eva AI, and if you look at Eva AI’s website, you can definitely see who the target audience is.
In any case, there’s a long wait list for the event, and it’s going to be happening sometime in December. You have to download the Eva app in order to have dinner with an AI companion. Perhaps you’re forced to have dinner with the Eva companion, or maybe you can bring your own. But again, this is happening, so it’s not out of the question that this is going to become more socially normalized.
Dan Williams (01:08:17)
We’re entering into a strange, strange world. Okay, that was fantastic. Rose, is there anything that we didn’t ask you that you wish that we had asked you? Is there anything that you want to plug before we wrap things up?
Rose E. Guingrich (01:08:31)
No, I think we covered a lot of great things and I hope that people enjoyed the hot takes. I’m sure I’ll get some backlash over that, but hey, I’m always up for lively debate, so have at it. I’ll take it.
Henry Shevlin (01:08:44)
We should mention that you’ve been running a great podcast with another friend of mine, Angie Watson. Do you want to say a little bit about that and where people can find that?
Rose E. Guingrich (01:08:54)
Yeah, so you can find Our Lives with Bots, the podcast, at ourliveswithbots.com, and you can listen on any streaming platform that you prefer. It’s all about the psychology and ethics of human-AI interaction. Our first series covered companion chatbots and our second series covers the impact of AI on children and young people. And intermittently we do What’s the Hype episodes, covering things like, for example, dinner dates with your AI companion. So be sure to tune in if you want to go deeper into those topics.
Dan Williams (01:09:23)
Fantastic. Well, thank you, Rose. That was great. And we’ll be back in a couple of weeks.
Rose E. Guingrich (01:09:29)
Thanks for having me.
Henry Shevlin (01:09:30)
Thanks all, a pleasure to have you.