I sat down with Henry Shevlin and Andy Masley to discuss AI’s environmental impact and why Andy thinks the panic is largely misplaced. Andy’s core argument: a single ChatGPT prompt uses a tiny fraction of your daily emissions, so even heavy usage barely moves the needle.
The real issue, he argues, isn’t that data centers are wasteful—they’re actually highly efficient—but that they make visible what’s normally invisible by aggregating hundreds of thousands of individually tiny tasks in one location. And he argues that the water concerns are even more overblown, with data centers using a small fraction of the water consumed by many other industries. We also explored why “every little bit counts” is harmful climate advice that distracts from interventions differing by orders of magnitude in impact.
In the second half of the conversation, we moved on to other interesting issues concerning the philosophy and politics of AI. For example, we discussed the “stochastic parrot” critique of chatbots and why there’s a huge middle ground between “useless autocomplete” and “human-level intelligence.” We also discussed Marx, “technological determinism”, and how AI can benefit authoritarian regimes.
Finally, we touched on effective altruism, the problem of “arguments as soldiers” in AI discourse, and why even high-brow information environments contain significant misinformation.
I enjoyed this conversation and feel like I learned a lot. Let me know in the comments if you think we got anything wrong!
Links
Andy’s Weird Turn Pro Substack
Using ChatGPT Is Not Bad for the Environment - A Cheat Sheet
“Sustainable Energy Without the Hot Air” by David MacKay
George Orwell - “You and the Atomic Bomb”
Transcript
(Note: this transcript is AI-generated and so might be mistaken in parts).
Dan Williams: I’m here with my good friend Henry Shevlin and today we’re joined by our first ever guest, the great Andy Masley. Andy is one of my all-time favorite bloggers. He writes at the Weird Turn Pro Substack and he is the director of Effective Altruism DC and he’s published a ton of incredibly interesting articles about the philosophy of AI, the politics of AI, why so much AI discourse is so bad. And he’s also written about the main thing that we’re going to start talking about today to kick off the conversation, which is AI and the environment.
So I think many people have come across some version of the following view that says there’s a climate crisis. We have to drastically and rapidly reduce greenhouse gas emissions at the same time as AI companies are using a vast and growing amount of energy. So if we care about the environment, we should feel guilty about using systems like ChatGPT. And maybe if we’re very environmentally conscious, we should boycott these technologies altogether. So Andy, what’s your kind of high level take on that perspective?
Andy Masley: A lot to say. Just going down the list here. So basically for your personal environmental footprint, using chatbots is basically never going to make a dent. I think a lot of people have a lot of really wildly off takes about how big or small a part of their environmental footprint chatbots are.
There are a few specific issues that the media has definitely hyped up a lot, especially around water, which I talk about a lot. So living around data centers, I think is not as bad as the media is currently portraying. But in the long run, I’m kind of unsure. There are a lot of wild different directions AI could go. So I don’t want to speak too confidently about that. And I also just want to flag that I’m kind of a hobbyist on this. I feel like I know a lot of basic stuff, but I don’t have any kind of strong expertise in this stuff. So I’m very open to being wrong about a lot of the specific takes.
Dan Williams: But I think one of the things that—sorry Andy, just to cut you off—but I think one of the things that you point out in your very, very long, very, very detailed blog posts is that you don’t claim to be an expert, but you just cite the expert consensus on all of the very specific things that you’re talking about.
Andy Masley: Yeah, I do want to be clear that every factual statement I make, I think I can back up. How to interpret the facts is on me. I’m using some basic arguments. I was a philosophy major in undergrad, so I like to think I can deploy a few at least convincing or thought out arguments about this stuff. I’ve also been thinking about climate change stuff since I was a teenager, basically. So I have a lot of pent up thoughts about just basic climate ethics and have been pretty interested in this for a while.
Yeah, not claiming to know more than experts on this. What I am claiming is that I think if you interpret the facts and just look at how the facts are presented in a lot of media, I think a lot of journalism on this is getting really basic interpretations kind of wrong. I remember my first article that I read about this years ago—I think this was 2023—when an article came out that framed ChatGPT as a whole as using a lot of energy because it was using at the time like two times as much energy as a whole person. And at the time I was like, man, that’s not very much. A lot of people are using this app. And if you just add up the number of people using it, it shouldn’t surprise us that this app is using two times as much as an individual person’s lifestyle.
So there are a lot of small things like that over time that seem to have built up. It seems like there’s kind of a media consensus on like, this thing is pretty bad for the environment in general, so we should all report this way. And so any facts that are presented are kind of framed as “this is so much energy” or “this is so much water.” And if you just step back and contextualize it, it’s usually pretty shockingly small actually.
So I have a ton of other things to say about this. I think a part of the reason this is happening is that a lot of people just see AI as being very weird and new. And I agree there are valid reasons to freak out about AI. I want to flag that I’m not saying don’t freak out about AI, but I think the general energy and water use has been really overblown. We just need to compare the numbers to other things that we do.
The Problem with AI Environmental Reporting
Henry Shevlin: Yeah, this seems to me such a problem with the debate. And Andy, you say you don’t claim to be an expert in this, but I regularly interact with academics working in AI ethics and policy who make grand claims about the environment, but just don’t seem to have a good grasp on the actual figures. And I often ask people, when they say ChatGPT uses X amount of water or X amount of electricity (and this was before I started reading your blog posts and knew these figures myself), basic questions like, okay, what is that as a percentage of overall electricity use? Or is that just for the training runs, or is that inference costs? And half the time they looked at me like I was a Martian, or as if to say, “Hang on, you’re not supposed to ask questions like that. I’ve just told you, isn’t 80 million liters or whatever a really big number? Isn’t that enough? Why do you need to know what percentage that is of...?”
So I think honestly, you’re one of the very few people in this space who’s actually sitting down and doing the patient, boring work of quantifying these things and putting them in contrast with other forms of other ways in which humans use electricity and water.
Andy Masley: Yeah, and I will flag for anybody else who wants to look into this—I’m very proud of the work I’ve done, but I have to say it’s actually not boring for me anyway. It’s quite exciting to dig into and be like, “Wow, this like, you know, almost everything we do uses really crazy amounts of water” or “Here’s how energy is distributed around the world.” And so I think one hobby horse I’d like to push a little bit more is that a lot more people should be doing this in general. The misconceptions are really wild.
I’ve bumped into a pretty wild amount of people who are experts in other spaces or just have a lot of power over how AI is communicated and stuff like that. I consider them as—a lot of them are just sharing these wildly off interpretations of what’s up with this stuff. I’ve talked to at least a few people who have bumped up against issues where, if they’re involved in a university program or something like that, and they want to buy chatbot access for their students, some of whom are pretty low income and might not buy this otherwise, they’ve actually been told by a lot of people like, “We can’t do this specifically because of the environment.” Specifically because each individual prompt uses 10 times as much energy as a Google search or whatever, which by the way, we don’t actually know—that’s pretty outdated. But you know, I really want to step into these conversations and say, 10 Google searches isn’t that much. 10 Google searches worth of energy is really small.
And it seems like there’s been this congealing of this general consensus on this stuff that is just really wildly off. I think this is one of my first experiences of going against what’s perceived to be the science on this. I remember when I first started talking about this at parties and stuff, when people would be like, “You used ChatGPT, it’s so bad, it uses so much energy,” and I would be like, well, as you know, I was a physics teacher for seven years and so I had a lot of experience explaining to children how much a watt hour is and so I would kind of go into that mode a little bit. I’d be like, “Oh well, it’s not that much if you start to look at it.” But I kind of couldn’t get past their general sense of like, “Oh, this guy has been hypnotized by big tech or something. He doesn’t know that it’s bad.” And so, a lot to say about that.
Henry Shevlin: So just to throw a couple of anecdotes—that completely resonates with my experience. So we had a debate at CFI quite early in the consumer LLM phase, I think shortly after the release of GPT-4. And someone asked in completely good faith, “Hang on, when we were discussing our generative AI policy for students, maybe we should be discouraging students from using it precisely for environmental reasons.”
And I did some really rudimentary napkin math looking at the sort of energy and water footprint of a single paperback book versus a single ChatGPT query. And I came up with something in the region—very loose estimate—that you could get 10,000 ChatGPT queries for roughly the equivalent environmental impacts, carbon impacts, and water impacts of a single paperback book. And I was like, “Hang on, are we going to tell students to stop buying books as well? Because that would seemingly be consistent.”
But I wanted to also ask: there’s this idea I’ve found very useful of “arguments as soldiers,” where basically if you’re seen to be arguing against an argument associated with a political position, that can easily make it seem to others that you’re not just critiquing the argument but attacking the whole view it’s coming from, right? And I see this quite a lot in environmental discussions: if you say, “Actually, this isn’t a particularly good argument for why we should be concerned about loss of biodiversity,” or “This isn’t a good argument for why ocean warming is a problem,” people will immediately sort of assume, “Oh, you’re one of them. You’re one of the bad guys.” Right?
So I’m curious if you think that influences the dynamics here or why you think you’ve encountered opposition when you try and throw numbers around here.
Putting ChatGPT Prompts in Context
Dan Williams: Could I just say, Henry, before we jump into that conversation—because I think that is a really important conversation—I’m just conscious of the fact that I’ve read Andy’s 25,000 word blog posts, and you’ve read them as well, Henry. Andy, you’ve written them, so I can only imagine how much in the weeds of this topic you actually are. But if someone’s coming at this for the first time, I think they’ve come across this, as you put it, almost a kind of consensus view, at least in some parts of the discourse, that using ChatGPT is terrible for the environment.
And your claim, as I understand it, is actually that’s not really true. And you give loads of different arguments and you cite loads of different findings that sort of put ChatGPT use in context. Maybe it is just worth starting with something we’ve already touched on, but I think it’s helpful context for the entire conversation. If we’re thinking about one prompt to ChatGPT, I mean, you mentioned sometimes people talk about this as being sort of 10 times the energy use of a Google search, but there’s some uncertainty about that. But in terms of, okay, let’s acknowledge there’s some uncertainty. If we want to place a ChatGPT prompt in context, what’s the quickest way you know of conveying the fact that, hmm, we probably shouldn’t be as alarmed about the environmental impact concerning this as many people think we should be?
Andy Masley: Yeah, ultimately, my personal way of conveying it is just asking how many ChatGPT prompts throughout the day would the average American need to send to raise their emissions by 1%, basically. And so I think the question actually explodes in complexity really quickly because you realize people have these wildly different definitions of what it means to be bad for the environment, where some people will believe that being bad for the environment means it emits at all. And so in that case, literally everything we do is bad for the environment. And so ChatGPT falls into that category because it’s something that we do. And I don’t think that’s satisfying.
And so I think a better way of talking about it is like, will using this thing significantly raise my CO2 emissions? And I try to start there and say, okay, my best estimates right now, if we include every last thing about the prompt—because people will always bring up like, it’s not just the prompt, it’s the training, it’s the transmission of the data to your computer and things like that—once you add all that up, it seems like a single ChatGPT prompt is about one 150,000th of your daily emissions or something like that.
And so if you send like a thousand ChatGPT prompts—an average, median prompt, I mean, obviously there’s a lot of variation, this is something else we can get into—but if you send a thousand ChatGPT prompts throughout the day, you might raise your emissions by about 1%.
So I’ll intro with that and then say, but if you were spending all this time throughout the day just poking at your computer, making sure you send those thousand prompts, it’s very likely that your emissions are actually way lower than they would have otherwise been because your other options for how you spend the time—you could be driving your car, you could be playing a video game, you could be doing a lot of other things that emit way more.
And so on net it seems actually almost physically impossible to raise your personal emissions using ChatGPT because the more time you spend on it, the less time you’re doing other big things basically. So I try to frame it that way.
Sometimes I’ll, in more of a joking way, when people will be like, “It’s ten Google searches,” I’ll kind of just sit back and be like, “If I told you a few years before ChatGPT came out that I had done a hundred Google searches today, would your first thought be like, ‘Man, that guy doesn’t care about the environment. He’s a sicko. He hasn’t got the message?’” Just trying to frame what they themselves are saying in the context of everything else that we do and saying, would you have ever freaked out about this before?
ChatGPT has this kind of negative halo of being perceived as this environmental bad guy. And then the conversation will usually go in a lot of different directions where maybe it’s not actually about your personal footprint, it’s about these data centers. But at least I try to keep it limited to the personal footprint at first because that’s what they’re initially interested in and say, yeah, it’s about a hundred and fifty thousandth of your daily emissions.
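(A quick back-of-envelope check of the arithmetic here. The per-prompt fraction is Andy’s “one 150,000th of daily emissions” figure; the ~16 tonnes of CO2e per year for an average American is an assumed round number added for illustration, so treat the outputs as rough.)

```python
# Rough check of "1,000 prompts a day is about 1% of your emissions".
# Assumption: average American emissions ~16 tonnes CO2e/year (round number, not from the transcript).

AVG_US_TONNES_PER_YEAR = 16.0                          # assumed
daily_kg = AVG_US_TONNES_PER_YEAR * 1000 / 365         # roughly 44 kg CO2e per day

PROMPT_FRACTION = 1 / 150_000                          # Andy's figure: one 150,000th of a day's emissions
per_prompt_g = daily_kg * 1000 * PROMPT_FRACTION       # grams of CO2e per prompt

prompts = 1_000
share = prompts * PROMPT_FRACTION                      # fraction of a day's emissions

print(f"Daily emissions: ~{daily_kg:.0f} kg CO2e")
print(f"One prompt:      ~{per_prompt_g:.2f} g CO2e")
print(f"{prompts} prompts:   ~{share:.1%} of the day's emissions")
```

On those assumptions, a single prompt works out to a fraction of a gram of CO2e, and a thousand prompts to roughly two-thirds of a percent of the day’s total, which is the “about 1%” Andy quotes.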
The Uncertainty Around Different AI Uses
Dan Williams: And just to double click on that. So as you’ve acknowledged, there’s also some uncertainty, I guess, about the actual energy use when it comes to a single prompt, because it’s partly like, how do you calculate that? What are you considering as part of the cost of a single prompt? But also these days, especially, you can use state of the art AI to get a quick text based response. You can also use things like deep research, where it’s going to go away and produce a detailed research report. You can use things like Sora 2, where it’s generating really detailed synthetic video and so on.
So I guess one question is, how do we know at all the energy use or the carbon emissions associated with a single prompt? How can we have any confidence when we’re answering those questions? And secondly, does your skeptical take apply to all uses of this technology, or is it just the basic kind of text-based use of systems like ChatGPT?
Andy Masley: Yeah, there’s a lot to say. I do have to flag that my main big piece is called “Using ChatGPT is Not Bad for the Environment” and not “Using AI Isn’t Bad for the Environment,” because I could imagine very extreme edge cases where it is. I don’t actually know much about video; our information about how much energy video generation uses is surprisingly scant. I think for these larger systems, there’s a part of me that wants to make the move of saying it would be weird if these AI companies were giving everyone really, really massive amounts of free energy.
I have access to Sora 2—I’ve made some goofy videos, it’s pretty addicting honestly, I have fun—and there’s not really an upper limit to my knowledge anyway that’s approachable for me of how many videos people can make in any one day. There probably is, I just haven’t looked into it. And it would be a little bit surprising to me if that were like “we’re giving you as much energy as you use in a day” or something. That just seems like way too much.
But that’s kind of hand wavy. The truth is I don’t actually know too much about video, which is why I haven’t written about it as much. For longer prompts, we’re also a little bit in the dark. The best we have right now is there was a Google study a few months ago that tried to really comprehensively analyze how much energy and water the median Gemini prompt uses. And the takeaway they had was, okay, it uses about 0.3 watt hours, which is incredibly small by the standards of how we use energy. And about 0.26 milliliters of water, which is about like five drops. And I can go on forever about how that’s not measuring the offsite water, which maybe raises it up to like 25 drops of water, but it’s still not very much.
So it seems like a lot of other estimates for how much energy a chatbot prompt uses were converging on about 0.3 watt hours anyway. And so this seems like probably our best but pretty uncertain guess right now for how much the median chatbot prompt that someone engages with actually uses. And again, there are huge error bars on this because if you input a much larger input or if you request a much longer output that takes a lot more thinking time for the chatbot, that can go up by quite a lot.
But a weird thing that happens there is that if the prompt takes longer, it’s usually both going to produce way more writing that takes you longer to read, and often it’s higher quality. I’m sure you guys use chatbots a lot. You’re familiar with deep research prompts—they’re incredible to me. There was a time when I was sympathetic to people saying that chatbots don’t add value. It’s kind of hard for me to understand how someone can come away from a deep research prompt on something they’re trying to learn about and think this is adding nothing. It’s definitely adding some kind of value.
And so if you’re looking at this longer period of time with much more text, and you just factor in how long it takes you to read that text, it’s actually not using too much more energy than a regular prompt if you measure by the time that you will personally spend reading the response, basically. But again, wildly uncertain about this.
I was trying to do a calculation a while ago where it was like, okay, if I were trying to max out how much energy I use on AI to harm the environment as much as possible, what would I do? And I was like, well, I guess I would just start hammering out deep research prompts and video stuff. And even then I couldn’t get it to be especially high. And if I were really trying to harm the environment as much as possible, spending my time on AI would actually be a pretty bad strategy; the best thing to do there would be to just get in my car and start driving in circles.
So it’s very hard for me, even with deep research stuff, to come away thinking that this is gonna be a big part of your personal emissions, or even a small part. It seems like an incredibly tiny part. So yeah, hope that was useful.
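(For a sense of scale on those figures, here’s a small sketch. The 0.3 watt-hours and 0.26 milliliters are the Google study numbers Andy cites; the laptop and microwave wattages and the volume of a water drop are assumed round numbers added for comparison.)

```python
# Putting the median-prompt figures (0.3 Wh, 0.26 mL onsite water) next to everyday reference points.
# The comparison values below are assumed round numbers, not figures from the conversation.

PROMPT_WH = 0.3           # median Gemini prompt energy, per the study cited above
PROMPT_ML = 0.26          # median prompt onsite water use, same study

LAPTOP_W = 60             # assumed typical laptop draw
MICROWAVE_W = 1100        # assumed typical microwave draw
DROP_ML = 0.05            # assumed volume of one drop of water

print(f"Prompts per hour of laptop use:      {LAPTOP_W / PROMPT_WH:.0f}")
print(f"Prompts per minute of microwave use: {MICROWAVE_W / 60 / PROMPT_WH:.0f}")
print(f"Drops of water per prompt (onsite):  {PROMPT_ML / DROP_ML:.1f}")
```

On those numbers, an hour of ordinary laptop use covers a couple of hundred median prompts, and a single minute of microwaving covers around sixty.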
Henry Shevlin: So one of my friends who’s a high school teacher actually struggles to get his students to use ChatGPT more, or at least some of them.
Andy Masley: That’s like, he’s like the one teacher in the world who is facing that problem. That’s funny. That’s good.
Henry Shevlin: Exactly. He’s like, “No, please, please use this tech, learn how to use it.” But he says a lot of them are reticent because they worry about environmental footprint. And I guess, reflecting the priorities of teenagers, he’s found it helpful to frame it in terms of how many short form videos a single prompt is worth. And it’s way less—it’s like way less than one. It’s like, if you just watch one fewer TikTok video today that will cover all of your ChatGPT requests for the day.
Andy Masley: Yeah. This is another general point where I think that this AI environment stuff is probably the first time that a lot of people are thinking about data centers and the fact that the internet in general—I think it was news for a lot of people that the internet uses water. It’s very easy to think about, you know, this is just this ephemeral thing that beams down to my phone and it doesn’t really exist anywhere. And I think a lot of people weren’t actually aware that there are these giant buildings that house these large computers that host the short-form video content that you like.
And yeah, I think comparisons to everyday things like that matter a lot. Especially as a former teacher, I know that most students most of the time are just consuming short form video content. I remember one of my best students came up to me a few years ago and was like, “Did you know that no one in the school reads?” And I was like, “No, I didn’t.” And he was like, “Literally every student gets their information from TikTok in school, and that’s it.”
And I want to be able to take students in that situation and be like, “Well, you’re using a data center right now. Those short form videos are also using energy and water.” And there, it’s also not very much. And it shouldn’t shock you that something else that you were doing online also uses energy and water. And I think just getting that across can be pretty powerful. But yeah, a lot to say about that.
The Data Center Microwave Analogy
Dan Williams: Can I ask, Andy, just following up on that concerning the data center, the role of data centers when it comes to using these chatbots? I think you make a number of really important points about how to think about data centers and how not to think about them. Because I think often people look at these data centers and they seem to be using up enormous amounts of energy. And there are also these issues which you’ve touched on concerning water use, which I take it as somewhat distinct, although connected. And your point is, well, it’s true that if you just focus on the data center itself, it looks like it’s using a vast amount of energy, but you need to put that in the context of the fact that—in fact, why don’t I let you explain it? You’ve got this really nice analogy with a kind of big microwave that you imagined. So maybe that’ll be good to...
Andy Masley: Yeah, yeah, yeah. So my basic point is that if you just look at a data center without any context, and you just say this is a really large, weird looking building, it’s basically just a box that plopped down in the middle of my community and is just using huge amounts of energy compared to the coffee shop down the street, it can look really ridiculous. And you might infer from that that what is happening inside of that data center must be really wasteful because it’s using so much energy and water. Surely all of that can’t be being used for good—it’s probably using a lot of energy and water per prompt because I’ve heard people talk about ChatGPT using a lot of energy and water per prompt. So these things are really wasteful and we need to get rid of them basically.
And so the thing that’s wrong about that is that the average person I think has no idea just how many people are using and interacting with data centers at any one time. It can be in the tens to hundreds of thousands of people. What a data center ultimately is is just an incredibly large, incredibly efficient shared computer, basically. And it’s designed so that people around the world can all go into it and interact. And every time you’re using the internet, you’re basically using a data center in the way that you would a computer. And all of those people are invisible, because they’re around the world and all their individually incredibly tiny environmental impacts are so, so concentrated in this one building, which among other things actually makes it really efficient. It’s very good to pile a lot of these types of small tasks in one place because that means that they can really optimize where the energy and water goes.
And so the reason why data centers are using so much energy—outside of training, we can talk about training separately for AI models—but the reason why data centers that house inference, just normal answering of the prompts, the reason why they’re using so much energy is basically entirely because they’re serving so many people at once, not because the individual things happening in them are inefficient.
As Dan had mentioned, my national microwave example—there are a lot of other things that we all collectively do that use huge amounts of energy in total that are kind of invisible to us because they’re all spread out across our individual homes. So I did a very rough back of the envelope calculation and the best I can guess is that every day all American microwaves together probably use as much energy as the city of Seattle.
And I think that if we had concentrated all of those microwaves in one place and we could somehow beam our food into that microwave and then get it back—very similar to how data centers work, we kind of beam our computing somewhere else and then get it back—there would basically be a single really gigantic city-sized microwave that would be guzzling up huge amounts of energy and I think it would probably draw a lot of protests and opposition because people would be like, “This thing is using as much energy as Seattle? There’s this new tech bro way of heating food and we don’t need that, we have ovens already.”
And I think what they would be missing is that this amount of energy looks really small in the context of any individual home, and the only thing the giant microwave would be doing “wrong” is being very visible.
And I think if we just make more things visible, it becomes clear how much energy they’re all using and how much they still don’t really add too much to our total amount of energy. And I think one of the big reasons why people are upset about data centers is that they’re actually just making these tiny little things—or the aggregates of these tiny little things—very visible.
Obviously there are still some problems. Because they’re concentrating so much in one place, they can create some problems for local communities. I’m not saying data centers come with zero issues or things to think about in the way that any other large industry does. But we have to consider that this thing is serving tens to hundreds of thousands of people at once. And believe me, it can be overwhelming looking at some of these data centers. For me, even I’m like, “Wow, it’s truly insane how much energy it’s using.” We just need to keep in mind that the thing that it’s doing is aggregating these incredibly small, individually very efficient tasks rather than just blowing through huge amounts of energy and water for no reason.
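(Andy’s national-microwave number is easy to reproduce in rough form. The household count, wattage, and minutes of use per day below are all assumed round numbers, so the output is an order-of-magnitude illustration rather than his exact estimate.)

```python
# Aggregating US microwave use into one imaginary building, with assumed round inputs.

US_HOUSEHOLDS = 130_000_000      # assumed
MICROWAVE_W = 1100               # assumed typical draw while running
MINUTES_PER_DAY = 12             # assumed average daily run time per household

per_household_kwh = MICROWAVE_W / 1000 * MINUTES_PER_DAY / 60
national_gwh = per_household_kwh * US_HOUSEHOLDS / 1_000_000

print(f"Per household:            ~{per_household_kwh:.2f} kWh/day")
print(f"All US microwaves, daily: ~{national_gwh:.0f} GWh")
```

Each home’s share is about a fifth of a kilowatt-hour a day, which nobody notices; stacked into one place it comes to tens of gigawatt-hours a day, which is city-scale electricity use. That is the visibility point Andy is making.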
The Water Issue Is Even More Overblown
Henry Shevlin: Can I actually ask to drill down a little bit more, no pun intended, on the water issue? Because the impression I get from your blog, and correct me if I’m wrong, is that you think the water issue is even more overblown than the electricity issue, that there are maybe greater concerns around electricity.
Andy Masley: Very, very much. Yeah. So on the electricity side, specific utilities have come out and said that they believe part of the reason they’ve had to raise rates is data centers, because they have to build out new infrastructure to manage these massive amounts of new energy demand. And even there, I’m pretty convinced the effect is smaller than a lot of people think. Energy prices in America specifically have risen quite a bit over the last few years for a lot of different reasons, and I think a lot of people are sometimes projecting all of that onto data centers, where it’s actually mostly Russia invading Ukraine and driving up the price of gas. That’s a whole separate conversation.
But yeah, the water issue I think is actually just really wildly overblown. I can’t find a single place anywhere where water prices seem to have gone up at all as a result of data centers. And almost every news article I read about data centers will frame data center water use as literally just a very large number of gallons of water, basically.
They’ll be like, it’s using 100,000 gallons of water per day. And the reader’s kind of left on their own to be like, “That sounds like so much. I don’t have anything to compare that to except how much water I personally drink.” And you know, I just drink a few glasses a day. So that sounds ridiculous. And in a lot of responses to my writing, I’ll see people be like, “100,000 gallons of water per day could be used for people, but instead it’s being used for AI.” And I think they’re kind of internally comparing this water use to just their everyday life.
And if you actually just step back and look at how much data centers use compared to all other American industries, you find that a lot of American industries and commercial buildings and recreational buildings also use vast amounts of water for a lot of different reasons. Water is very useful for a lot of different things. It’s actually very cheap in America. America has some of the cheapest water rates of any very wealthy country.
And yeah, most places actually—the main issue they have isn’t that there’s a lack of raw access to water in America. It’s more that their infrastructure for delivering water is aging. And so if you compare data centers to golf courses—I don’t have the exact stats on me right now—but even if you include all data centers in America, not just AI, they seem to be using like 8% of the water of just golf courses specifically. And again, I’ll need to circle back on this. I don’t have the exact numbers in my head right now. But it’s very easy to just Google “how does this compare to other things?”
And there are some places where data centers are using a significant proportion of the water in a local region. There’s this one city in Oregon called I think it’s pronounced The Dalles—I’m not actually sure, should really know that—but they basically have a really large Google data center in their town that a lot of headlines will jump on as using 30% of the community’s water.
And that sounds really big until you read that this place is about 16,000 people. And if instead of reading this as “this weird new thing,” you just interpret this as “the main industry in the town,” it’s like—if there were a large college or a large factory in the town that were using 30% of the community’s water, I think the average person would say, “This is just a pretty normal thing.” But I think people have this internal sense that any water or any physical resource that’s used on a digital product like AI is wasted.
Like all of this valuable community water is just being blown into nothing because it’s being used on a digital product rather than something physical people can use. And so another general theme of my writing is that people really need to get over the sense that it’s always sinful and wrong to use a physical valuable resource like water on a digital resource. Separate from your beliefs about AI, digital resources more broadly are very valuable because information is valuable.
So yeah, a lot more to say about the water stuff. But I’m just pretty convinced that if you see a news article that’s scary about water and you literally just Google, “How much water is this compared to other industries?” Not compared to me personally, but just how does this compare to golf courses and farming and factories and things like that? It’s pretty easy to find, oh, there’s a car factory that uses more water, or there’s a golf course or other things. So yeah, a lot to say about that, going off in a bunch of different directions here.
Steel-Manning the “Climate Crisis” Objection
Dan Williams: I think it would be helpful if Henry and I throw some objections to you as a way of sort of clarifying and stress testing your perspective. I mean, the first one, this isn’t really an objection, but I think it’s probably going to be going through the minds of lots of people and why they’re going to be skeptical of this kind of take. And it’s just something like, look, there’s a climate crisis, there’s a climate emergency, there’s this huge industry, this growing industry where there’s a vast amount of investment going into it. It does use a lot of energy. We should expect that to just increase over the next few years and decades. And it sounds like what you’re saying is, just chill out everyone, there’s nothing to worry about here. Is that—so, I mean, this is not really an argument, but it’s just me trying to put myself into the headspace of someone who hasn’t read your blog posts, they’re listening to what you’re saying for the first time, they’ve heard that this is terrible for the environment. It seems like it must be terrible for the environment, just on the common sense ways in which we think about this topic. If I’ve got that kind of view, what would you say to me?
Andy Masley: Yeah, I mean, first of all, it’s totally understandable to be really concerned about this if you first hear about it. And also, again, usually in these conversations, I want to flag that it’s totally understandable that a lot of people are skeptical of me at first, because I’m a guy with a Substack, basically. So they should actually just examine the arguments for themselves, see the sources I’m using and see for themselves whether it makes sense.
But yeah, first of all, 100%: I would describe us as living in a climate crisis. I’m quite concerned about climate change. There are a lot of little nuances there where I don’t think climate change is gonna end civilization tomorrow, but I also think there are a lot of tail risks that are obviously just really concerning, and I don’t actually think we’re on a guaranteed path to do a good job with climate change in general. So there are a ton of reasons why I’m totally sympathetic to this take, and if you see some new massive use of energy that you personally don’t think is valuable at all, it totally makes sense to be really worried about that.
I do want to flag that I worry a lot about how people think about the idea of a new source of emissions basically, where I think for a lot of people they kind of see a lot of emissions that are currently happening as kind of in the past somehow or they’ve been locked in. So as an example, it seems like if a new data center pops up and it’s emitting some new amount that wasn’t there before, it seems like people kind of treat this as this special, unique, like, these emissions are much worse and they stand out much more than the emissions from cars because cars are kind of in the background.
And I don’t want to go out and make the claim that these emissions don’t matter so much, but I do want to make the claim that it doesn’t really matter which emissions are new and which aren’t because in some sense, every day we wake up and emit huge amounts of new carbon dioxide into the air, mostly from much more normalized everyday things that we do. And so I don’t really want—I don’t want AI to receive zero scrutiny, but I also want its emissions to receive the same scrutiny or the same level of proportional scrutiny as other normal things that we do.
Even though, say, a new AI data center might open up, if it’s only emitting one thousandth as much emissions as the cars in the local city, I think it should still only receive about one one thousandth of our attention or something like that. There’s another interesting argument here where you can say because emissions are new, they’re maybe more malleable and changeable than cars. But I kind of worry that this locks us in too much to old ways of doing things. And it’s in some ways kind of an inherent argument against starting any new industry in America whatsoever, where any new industry is going to come with quote unquote new emissions.
And so again, my claim isn’t that we shouldn’t worry about this at all, but I just want these to be compared to all the other ways we’re emitting. And when we’re thinking about what to cut, we should think about the things that will actually cut emissions the most. And the fact that some emissions are quote unquote new shouldn’t really blind us to all the ways that other emissions have been normalized, if that makes sense.
Henry Shevlin: So developing this line of objections, you mentioned golf courses, but golf courses to be fair do get a lot of hate already.
Andy Masley: They’re crazy. Yeah. I mean, to be clear, golf courses use unbelievable amounts of water. So I do want to flag that there are a lot of other comparisons you can make. But sorry, go ahead.
Henry Shevlin: No, it’s interesting, right? I think there’s lots of other stuff that people would say, “Okay, sure, we don’t need—maybe golf courses are even worse than AI, we should get rid of golf courses.” But there’s lots of other stuff that we really need. Obviously we need water for agriculture, we need cars and trucks to bring our food to supermarkets. We don’t really need this AI stuff in the same way. So its carbon footprint and water footprint are more concerning than the carbon and water footprint associated with haulage or farming or other core goods, as they see it.
Andy Masley: Yeah. This is an interesting question that brings up how much we should worry about the emissions from something that we see as basically useless. Where I think a lot of people are really kind of thrown off in these conversations where they’ll be like, “No matter how little AI uses, and even if Andy says it only uses one 100,000th of my daily emissions or something like that, literally all of its emissions are wasted or bad because AI is useless.”
And I don’t really want to get into conversations and debates about whether AI is that useful because people just go in so many different directions here. So I kind of just want to bracket that and instead say that most of the problem with climate change is actually that—limiting this to climate change for now and can talk about water separately—but the big problem with climate change is mostly, most of the time, that the ways that we use fossil fuels are actually incredibly valuable to us. And it’s not that we’re wasting all these emissions on these things that don’t matter for us. It’s like driving my car makes my life much easier. Flying makes people’s lives much easier.
And it doesn’t really make sense to really, really hyper-focus on small ways that we use energy just because it’s all wasted. That obviously matters and it doesn’t count for nothing. And if we are wasting a lot of energy, yeah, if I thought AI were useless, I would want to generally shut it down and I would be more worried about AI and the environment to be clear. But I also don’t want us to lose sight of the fact that if I do something that’s useless like AI and it uses one 100,000th of my daily emissions, and then I drive my car and it’s very useful to me, I still think driving my car is actually worse for the environment by any meaningful measure, basically.
And then on the food thing, I got a little bit snippy about this, as a vegan of 10 years, where if I’m being lectured to about not using AI by someone who just ate a burger or something, I want to be like, well, something like half of the agriculture, half of the food that we grow in America is used to feed animals that we then eat. And so I’ve cut my personal agricultural water footprint by 50%, and that’s actually quite a large part of my total water footprint. And so a lot of this food that we see as inherently—”people really need to have access to steak and pork and chicken”—I don’t agree with. And I want to say that there are actually more promising cuts we can make there. I don’t talk about that so much just because I think the main problems with animal welfare are not its environmental impact, but it is kind of a nice win on the side.
And so, even there, if you think AI is completely useless and you also really like eating beef, those are both fine, but you need to be able to say the beef does actually matter more for climate, even though it’s providing you value in a way that AI doesn’t, if that makes sense. So that would be my general take. Again, not to not worry at all. If I thought that all these massive data centers were providing nothing of value, I would say, yeah, we should shut these down. These are bad for the environment because they’re adding nothing. But even then, they wouldn’t be as big of a climate problem for me as cars or animal products or other things.
Henry Shevlin: I guess I’d also—sorry, do you want to go ahead, Dan? I was going to just add, I think also when people are drawing this comparison between AI usage, ChatGPT usage versus food or vital infrastructure, maybe those aren’t the kinds of marginal comparisons to be making. They should be thinking, well, if we’re going to cut, what is the most expensive and least rewarding thing that I do? Maybe it’s that 200th short form video you watch at 3 a.m. when you really should have gone to sleep an hour ago, or leaving your computer running overnight for the convenience of not having to go through a startup sequence. So maybe, again, it’s not an apples to apples comparison to set the most essential things we do against AI.
I had another objection to throw your way though, which is how about just this worry that in a lot of cities around the world, but I guess particularly in places like Arizona, there’s just this real risk of running out of water. Home building projects are getting canceled or frozen because there’s just not enough water to go around. Should we really be building data centers when we are encountering these hard limits in terms of actual water supplies?
The Arizona Water Situation
Andy Masley: Yeah, a ton of stuff to say about that. I mean, first of all, I don’t want to lecture city planners and would defer to them on this stuff. Again, just some guy here and I definitely don’t think my position is we should just build data centers willy nilly. I kind of see these as equivalent to very large factories that have very specific resource and energy demands. And I wouldn’t say we should just plop a large factory down literally anywhere and not care about the environmental impacts.
I do have a lot of thoughts about Arizona specifically, just because this comes up a lot in the data center conversation because a lot of data centers are being built there and it’s an incredibly weird situation, because it’s in the middle of a desert and people will say we shouldn’t be building anything new that uses water in the middle of a desert like Arizona. That’s really bad for water access. I’m very liable to ramble on this stuff. So please stop me if I go too far here.
But something really weird about a lot of areas in Arizona where data centers are being built is that the water is already being pumped in from hundreds of miles away. The general situation with water is actually very weird there already. This is actually a place where water access and environmentalist concerns about water really, really come apart. Because if you care about the fragility of local water systems, the Phoenix area has grown really fast in the past 20 years or so. It has really rapid growth. If your goal is to make water as cheap as possible for every new person who comes in, I think that’s admirable. I’m more on team “technology is good, people should be able to live where they want” and stuff. But I’ll also grant that this creates some problems for the water systems we’re pumping from a hundred miles away. Keeping Phoenix’s water rates as low as possible is not exactly the most environmentalist thing we could be doing.
So I think the first thing to say is that this is a place where environmentalism and equity actually really come apart. And if you’re a really hardcore water environmentalist, I think your first take should be that people should not be living in Phoenix. People should move somewhere else. So there’s that.
Secondly, a lot of other industries in Phoenix use a ton of water. Best guess I have is that all the data centers in that area are using something like 0.2% of the water in the Phoenix area. Again, I would need to go back on this. But again, Phoenix has a ton of golf courses which is crazy to me. I don’t know if either of you have ever been out in that area but it can be really eerie because you can be in this barren desert and just you stumble on this lush, almost sickly green amount of space that’s just been artificially created out of this water that’s being pumped from far away or pumped from local aquifers and stuff.
Again, what I’m saying, I could be getting some of this wrong. I don’t know exactly how Phoenix gets its water. So don’t quote me on this too much and people listening just do your own research on this.
But yeah, basically any argument that says that data centers should not be built in Arizona seems kind of like an argument that Arizona should not have any industry. And ultimately, if we want people to be able to live in these places, we need some kind of way of supporting the local tax base. Data centers don’t provide very many jobs. This is kind of another issue with them where they use a lot of resources relative to the jobs they provide. But at the same time, they’re usually providing very large amounts of tax revenue to whatever locality they’re in. My best guess right now is that data centers are using 1/50th of the water that golf courses use in that area of Arizona, but they’re actually providing more tax revenue on net than the golf courses are just because they’re part of the single most lucrative new industry in America right now. And so the locals are benefiting quite a bit.
And so my claim is that if you’re against data centers being built in the desert, you’re also against the city existing in the desert in the first place. And that is a legitimate take, but then you need to be clear that keeping people’s water bills as low as possible isn’t ultimately your goal. So yeah, a lot to say about that.
“Every Little Bit Counts” and Why That’s Wrong
Dan Williams: I really want to circle back to this point you made about the utility of AI and how people’s views about the utility of AI affect how they think about this topic. And I think that would be a nice segue into another set of issues. But just to sort of conclude this discussion specifically about AI and the environment, I think some people might say, “OK, let’s suppose you’re right that actually the environmental impact of using ChatGPT is much lower than many people have assumed. Isn’t it the case that every little bit counts?” I think you’ve got a kind of interesting response to that kind of intuition that many people have, which is that, okay, maybe it’s not such a big aspect of our energy use overall, but if there’s a climate crisis, then shouldn’t we pay attention to even small things like this?
Andy Masley: Yeah, so for anyone who is interested in doing more climate communication and thinking about proportions like this, I very strongly recommend a really great book called Sustainable Energy Without the Hot Air by David MacKay. The introduction of that book has this really great long rant by him about that exact phrase, “every little bit counts.” And I think that the phrase “every little bit counts” is actually quite drastically bad for the climate movement overall, in a lot of different ways.
Something weird about this conversation is that back when I was in college, at least in the conversations I was having, I feel like everyone was kind of on the same page about this where people are talking about climate change as this looming crisis. A lot of times they’ll compare it to being in a war where we’re in this war where we have to strategize and make all these really complicated moves in society to make it go well. And the quote “every little bit counts” basically implies that you can kind of just shoot off in random directions and be like “I’ll just cut this, this, this, and that, and I’ll feel good about climate change,” regardless of how much emissions I’m actually cutting altogether.
So one comparison I make is different people making different decisions about what to do for climate change, where one person cuts out—he’s addicted to ChatGPT, he uses it a thousand times a day, he can’t stop, and he makes the noble sacrifice for climate change where he cuts it out. Another person decides that she’s going to try to keep a nuclear power plant open for another 10 years and a third person goes vegan for a year.
And the best rough estimate is that even if the person who’s trying to keep a nuclear plant open for another 10 years is working with 500 other people, so her impact gets divided by 500, she’s still saving tens of thousands of times as much CO2 from being emitted as even the vegan. There used to be this general idea that individual lifestyle changes don’t really add up compared to big systematic changes we can make to the grid overall.
And so if you were approaching these people, I think you should tell the vegan and the person addicted to ChatGPT, if you are going to focus this much attention on the environment, you really need to strategize about what you’re doing and what you’re actually cutting because the difference in what you can do is on the order of millions of times as much.
I think people sometimes mistakenly think that their impact on the environment is either a little, a medium amount or a lot. And using ChatGPT a lot can bump you up from a little to a medium. And what’s actually happening is there are huge orders of magnitude difference in the interventions we can make. And literally just giving people the advice, “every little bit counts. That’s what matters,” I’m worried kind of gives a lot of status to those really tiny moves that just don’t actually make a dent for climate change at all.
Like, going back to the war comparison: if we were in World War Two and thinking about how we can win, and I said, “I’m gonna do my own thing, every little bit counts, so maybe I can just build a little wall out of sticks and that will...” In no other situation that is an emergency do we think “every little bit counts,” basically. I think this is just a very bad way of thinking, and I’m influenced a lot by basic effective altruist ideas here: you should really target interventions that do a lot of good, because the differences between interventions are sometimes on the order of hundreds of thousands or millions of times as much good.
And so yeah, I basically worry that the “every little bit counts” thing is basically a way of peppering your life with random amounts of guilt that you then assuage and you mostly don’t emit less than you were before.
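(The orders-of-magnitude gap Andy describes is easier to see with rough numbers. Everything below is an assumed round figure except the ChatGPT line, which reuses the “1,000 prompts a day is about 1% of ~16 tonnes a year” estimate from earlier, so read the ratios as illustrative rather than precise.)

```python
# Illustrative comparison of the three interventions described above, with assumed inputs.

# 1. Help keep a ~1 GW nuclear plant open for 10 more years, as one of 500 campaigners.
PLANT_KW, CAPACITY_FACTOR, YEARS, TEAM_SIZE = 1_000_000, 0.9, 10, 500
DISPLACED_KG_PER_KWH = 0.4                     # assumed: roughly gas-fired generation displaced
plant_tonnes = PLANT_KW * CAPACITY_FACTOR * 8760 * YEARS * DISPLACED_KG_PER_KWH / 1000
per_campaigner_tonnes = plant_tonnes / TEAM_SIZE

# 2. Go vegan for a year.
vegan_tonnes = 0.8                             # assumed round number

# 3. Give up a 1,000-prompt-a-day ChatGPT habit for a year (~1% of ~16 t/year).
chatgpt_tonnes = 0.01 * 16

print(f"Nuclear campaigner's share: ~{per_campaigner_tonnes:,.0f} t CO2e avoided")
print(f"Vegan for a year:           ~{vegan_tonnes:.1f} t CO2e avoided")
print(f"Quitting heavy ChatGPT use: ~{chatgpt_tonnes:.2f} t CO2e avoided")
```

Even with generous error bars on every input, the per-person gap between the grid-scale intervention and the two lifestyle changes is tens of thousands of times or more, which is the difference that “every little bit counts” papers over.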
AI Utility and Hostility Toward the Technology
Dan Williams: Just coming back to the point I mentioned earlier about the utility of AI. I think it’s worth saying, I use ChatGPT and Gemini and Claude all of the time. I get enormous value from these systems. So do you. I know that Henry does as well. I mean, we’re pretty weird, I think, in the fact that we are very pro the utility of these systems. And many people not only think modern chatbots are kind of useless, they hate modern chatbots. They think they’re actively harmful, right? They’re plagiarism machines, they’re slop-generating stochastic parrots, they’re enriching these sociopathic tech billionaires. And if you’re coming in with that web of associations when you’re approaching the topic, it’s gonna be a very different kind of conversation than if you’re like us, where actually we’re pretty positive about a lot of the different use cases.
So I’m interested in your view about that. I’m also interested in what Henry thinks about that. Many people are approaching this specific conversation, but AI more generally, with just loads of hostility towards this technology. And I think we need to kind of acknowledge that. And I’m wondering what you two think about that as a driver of how people are thinking not just about AI and the environment, but the entire topic.
Andy Masley: It’s definitely a noticeable part of almost every conversation that I have about it. If I point out that AI doesn’t use that much water, a lot of people read that as me saying there’s this sinful evil technology that’s ruining society, and I, Andy, want to sacrifice this precious resource and just burn it up forever for the sake of this evil thing. And I think that gets at something really primal, some basic reaction they have of “there’s this precious thing that keeps us alive and you want to sacrifice it to this massive social catastrophe,” basically.
So in most conversations about the environment, I try to say we need to separate out how bad you think AI is and instead just ask, if you were talking to an environmentalist and being honest with them: “Where in the world should you work to do the most good for climate? What’s actually going to move the needle, regardless of how much evil you think AI is doing?” Its evilness isn’t really a good way of determining “this is the thing I should work on for climate.” I would almost always tell anyone doing this: you need to help with the solar scale-out, you need to build out new storage systems for energy, things like that. That’s what’s going to move the needle on climate, not preventing one additional data center from being built. In terms of impact, one just seems to completely dwarf the other.
But yeah in terms of the value, I do try to be clear in my writing that I’m trying to write this for a general audience. If you hate AI I want to convince you as well that this isn’t the biggest deal. But that being said, man, it’s really difficult to communicate just how useful these tools are to me. I think a lot of my personal blogging success has been because I have these little robot research assistants where—and when I say that I want to clarify that I’m double checking every source—but basically I think of ChatGPT as a little helper where I can be like “Okay you sometimes lie to me and that’s okay. I understand that you just have a compulsion to do this but I’m going to ask you to assemble all these external resources for me on a question so I can just use a thinking prompt and be like, assemble a ton of information on how much water Arizona uses.”
And it comes back with sources. And I’m like, “Thank you. I don’t totally trust you. So I’m going to read all these. But look, they said what you were saying. That’s good this time.” And that on its own, even if the chatbot lies to me one in every five times it does that, is still just such a phenomenal upgrade from what I was doing before. So I’m personally just like, I really think I wouldn’t have had nearly as much success as I’ve had if I didn’t have these little research assistants to help me with stuff.
That’s just one of the few things I use it for. But Henry, did you want to chime in about your thoughts on this?
Henry Shevlin: Yeah, I’m very much on the same page. I guess I find it frustrating that so many positions get bundled together that I think would be worth handling appropriately. So “AI is dangerous,” right? “AI is useless.” These are two positions that are often held by the same people. And there is maybe a way to make them work together. But I hear people talk about the massive harms that AI is doing. Also, “AI is completely undeserving of the term intelligence. It’s useless, doesn’t do anything valuable. AI is exploitative.”
These all seem like viable debates and important debates to have, but it's dispiriting when they're all bundled together. And if you even start to push back against one sub-clause in one of them, then it seems like you're already flying your flag as "you're one of these pro-AI, tech bro people."
For example, one thing that I always found strange in debates around AI safety is that many of the people who were most critical of AI X-risk as a topic were also very dismissive of AI capabilities. So: these evil tech bros pushing their AI safety agenda—there's already some confused sociology informing some of these debates—and by the way, AI is useless. On this idea that AI is useless, I do think it's fascinating how much variation there is among people I speak to in how useful they find AI. So even some really quite smart, relatively tech-savvy people I know say that they are quite dismissive of the value of LLMs. They say, "No, it ultimately takes more time to fact-check everything the AI puts out. It doesn't really save me time."
And yet other people, I mean, I guess the three of us would say that, “No, this is incredibly useful.” So one of my pet projects for the next year or so is to try and get a better handle on why some people find so much more utility in LLMs in particular, and others find them borderline useless. I’m curious if Dan or Andy have thoughts on this.
Dan Williams: Yeah, I mean, I would have thought one factor explaining some of the variation is some people just hate AI. And if you hate AI, you’re not going to invest in trying to figure out how to use it. But the set of people who are smart, sort of intelligent, knowledgeable, informed people who aren’t just part of the tribal anti AI crowd, who also don’t get any utility from these chatbots—I’m personally baffled by that because I get such an enormous amount of value as someone who is a researcher, a blogger, et cetera. I find them just incredible research assistants for brainstorming, for getting information, as Andy says, as a kind of initial fact-checking filter. So many sources of positive utility from these systems. So I’m baffled and I’m sort of looking forward to where your project goes if you’re going to try to investigate what’s going on.
It occurs to me, Henry, I cut you off earlier when you were talking about the arguments as soldiers framing. And is this connected to what you were just saying about people bundling all of these different critiques into like a single thing? Is that the idea?
Henry Shevlin: Yeah, exactly. It’s like, I don’t know, let’s just say we’re debating the hobby of torturing kittens. And you’re throwing out all these arguments about—I’m also firmly against torturing kittens. But someone says, “And you know what? Torturing kittens has a massive climate impact.” And you’re like, “OK, hang on. I’m not sure it really does.” “Oh, you’re pro-torturing kittens. I see.” That’s the kind of essence of the problem.
Andy Masley: Yeah, it is really weird. I’ve noticed this quite a bit. Obviously, I want to flag, I’m director of Effective Altruism DC. I’m around a lot of people, including myself, who are really quite worried about the potential for future AI systems to do a lot of bad, even outside of the standard X-risk cases, just—I can see a lot of ways that more advanced systems could really benefit authoritarian governments and surveillance. And it’s just very odd to me. Sometimes I meet people who are very critical of AI but who will also have these same criticisms as well where “one, AI is useless and two, it’s totally gonna empower authoritarian governments and militaries and you’re sick for using it because maybe it’s already been involved in X or Y or Z war crimes and stuff and also it uses so much water.”
And my reaction to that is just like, if the authoritarian dictator’s troops are bearing down on me, my first thought isn’t like “each gun used a whole cup of water to make.” It’s just so disproportionate where it seems like a lot of people really underrate the fact that their really valid concerns about a lot of aspects of AI could actually really be helped if they stopped saying “this is destroying the social fabric and it uses a few drops of water every time it does.” And I’m like, “Well, let’s talk about the social fabric thing. This other thing just seems kind of silly.”
And I think that’s actually an especially powerful talking point. I found where a lot of people who viscerally hate AI and again can totally understand being worried—I am quite concerned about a lot of aspects of AI—but even for people who hate it, actually I find that it gets through to them to say that sprinkling on this thing about energy and water at the end just is actually really diluting your point and it’s really helpful to just talk directly about the potential harms rather than just adding on this little addendum.
Even for me, as someone who’s really into animal welfare, the way we treat chickens is just abysmal. I think this is one of the huge moral catastrophes happening right now and I actually don’t think chickens are especially bad for the environment. If you actually just read about how much impact they have, they’re super clustered together, they eat really basic food and stuff. I mostly don’t say when someone’s eating a chicken sandwich, “that’s really adding to your food emissions or water use.” I mostly just say, “there was a little guy who had a really bad time his whole life.”
And I think the environmental thing just really takes away from that. So yeah, I tend to say, I feel like there are a lot of funny comparisons between how I experience animal stuff and how a lot of people experience AI. Yeah, there are these big evil buildings that use a lot of energy and water called factory farms that a lot of our everyday things are coming from. So a lot of separate comparisons to make about that.
Henry Shevlin: And also produce massive amounts of animal suffering.
Andy Masley: Yeah, yeah, I’ve wanted to, at some point I want to write some short post where I’m like, let’s just compare data—I’ll totally indulge in whataboutism for a second and just be like, data centers have all these big computers in them. And there’s this other big evil building called factory farms and they all have pigs having just incredibly bad lives and they’re both using a ton of water, but the factory farms are using way more. And if you’re worried about one thing, please God, maybe just focus a little bit more on the factory farms. But I think this would correctly be read as a whataboutism thing that’s maybe a little bit distracting so I haven’t stooped to that level yet.
How to Make the AI Debate Better
Henry Shevlin: Just to throw in one quick point as a reflection on the arguments-as-soldiers idea: we're all trained as philosophers, for our sins. And one point that I really stress, particularly to undergrads and early grad students, is that very often including weaker arguments in your paper drags the whole paper down, right? Your paper is often only as strong as its weakest argument. And this is particularly relevant if you're sending papers out for review. You just want to make sure you don't include any subpar arguments in there at all, right? Better to drop them entirely. But I think a lot of people just have this kind of buffet idea. It's like, "Well, why not? Let's throw another argument on the pile."
Andy Masley: Yeah. And it goes back to the soldier thing, where you can almost feel the author writing as if it's "and I got him with another one, and another one." And you stop feeling like the author is actually trying to build a comprehensive understanding of the world. It's just firing off individual shots like "this will really destroy the tech bros who like AI" or whatever. And yeah, I totally buy that idea. I personally have a lot of experience with people reading through my 20,000-word post and being like, "But this one thing he said is so stupid, so the whole rest of the thing is a waste." I'm like, "God, it's actually quite hard to write about this stuff." So yeah, I totally buy that. Just getting that across matters a lot.
Henry Shevlin: So maybe this is a good time to talk about another one of your posts that I found really helpful, which is all the ways in which you want the AI debate to be better. Could you give us a quick overview—I know it's a really long post, like what, 10,000 words or something—of some examples of ways in which you think the debate could be better?
Andy Masley: Yeah, I think there was just a general tendency for people to get really tribal really fast in conversations about AI. I was noticing there were a lot of places where people would suddenly start to argue for things that I just don't think they actually believed a year ago. A lot of people who were very secular would suddenly start talking about how there's something fundamentally magic in the human mind that a machine can literally never replicate. There were other really contradictory, weird ideas, like AI being both really dangerous and completely useless. And I think people get too tribal in both directions about whether the overall arc of AI is going to be good or bad.
I’m again pretty wary and uncertain about the future of AI. I can totally see ways it makes society much worse overall. And I think my background goal there was to kind of just poke at people who were doing this really soldiery thing and just trying to throw a lot of arguments in both directions about how I would really like us to just step back and stop trying to be a soldier for our side and just engage with some of what I think are these pretty convincing arguments in a lot of different directions. I was trying to raise the standards of the conversation a little bit and just say we shouldn’t just be throwing out blindly these slogans that I don’t actually think cohere with everything else that we believe.
Is AI Just a “Stochastic Parrot”?
Dan Williams: Yeah, it’s a great blog post. I assign it as part of my reading list for my students when I teach philosophy of AI.
Andy Masley: Yeah, that's the dream for me. I was insanely flattered by that, by the way. So thank you.
Dan Williams: No, no, no, it’s a great post. You mentioned there just in passing, so some people have come to believe, maybe they always believed, that there’s something kind of magical, supernatural, non-physical about the human mind, such that machines couldn’t in principle replicate the kinds of capacities that we see associated with human beings. I mean, I take it there are sort of, there are two issues in that general conversation. One is whether you think even in principle machines could replicate human competencies that we associate with intelligence and so on. The other is your assessment of current state of the art AI. Because I take it somebody could think, yeah, okay, in principle, there’s nothing magical about human intelligence and it’s just this very sophisticated information processing mechanism. But nevertheless, it’s also the case that chatbots are just glorified autocomplete or stochastic parrots and so on.
And you’ve got a really nice discussion in that blog post, which sort of attempts to address that latter skeptical position. You also look at the other more general skeptical position. I know that you’ve got loads of interesting views about this as well, Henry, but what’s your take in response to this idea that, meh, it’s just a stochastic parrot, it’s just glorified autocomplete, there’s nothing really intelligent when it comes to these systems?
Andy Masley: Yeah, I mean, part of my motivation for writing the environmental stuff, honestly, was I really want a lot of people who aren’t touching chatbots because of environmental reasons to just sit down and play with them and see for themselves. If this were a stochastic parrot, what would I expect them not to be able to do basically? I’m familiar with stochastic parrots. Back when I was in college, I was a physics major and a lot of my fellow physics majors and I were sometimes not entirely ethically using Wolfram Alpha. And sometimes that would also spit out just wildly incorrect things. And it was very limited in what it could do. And that I would say was kind of a stochastic parrot.
Post-recording note from Andy Masley: “Wolfram Alpha is not a stochastic parrot. I use that a lot in examples of a chatbot-like thing I was using before ChatGPT and got my wires crossed, sorry!”
Obviously, this is actually very different. This might not be the best comparison, but more broadly, first define for yourself what a stochastic parrot is. Ultimately, you can say that stochastic parrots don't actually have any internal thoughts. They just kind of assign probabilities to different things and spit out an answer somewhat randomly, and there's nothing going on inside. And so if you hold this view, you can engage with GPT-o1 and say, "Okay, these are the things I don't expect it to be able to do." And if the category of things that you do expect it to be able to do includes literally all useful cognitive tasks, at some point I'm like, it doesn't really matter if it's a stochastic parrot or not.
And then separately, I think a lot of people really misunderstand what it means to say that AI models predict the next token, where they mistakenly think that the AI model is just taking this one word at a time and kind of rolling the dice. It's like, the last word was "America," so maybe this next word seems very likely. What they're misunderstanding is that to competently predict the next word, you often need a really powerful general world model inside of the AI model. I think there have been a lot of interesting indications that AIs do have something that can be called a world model from their training, which they've learned partly by just predicting so much text that over time you start to learn "okay, this idea is associated in vector space with all these other similar ideas," and you can start to draw some general web of meaning between them.
I’ll flag that I’m also not an expert on AI, so take everything I say with a grain of salt here too. But one example that I really like from Ilya Sutskever, formerly of OpenAI, just really massively influential AI scientist, is if you’re reading a detective novel and you get to the end of the novel and the detective is giving their spiel about who killed them and they say “and the killer was”—your ability to predict the next word there actually depends on your general world model of the entire book and if you could do that correctly, it’s a sign that you have what I would consider to be general understanding of the world.
So I definitely don’t think AIs are human level. There are surprising ways that they fail still and it still seems like there’s a lot about human intelligence that’s very mysterious to us. And I’m totally open to the idea that being just physically embodied maybe just matters a lot. And there’s just a limited amount of information things can get from text. But that limited amount of information also just contains a huge amount of stuff that’s very useful to me personally.
So even if they’re very far from human level, they’re already way beyond the level of “they’re personally useful to Andy.” I think sometimes people will be like “they’re either human level or they’re just stochastic parrots.” And there’s this huge middle ground where I’m like “well, they’re not human level, but they can act as basically useful research assistants to me.” And wildly useful research assistants actually. So just having more of that spectrum also matters a lot. But Henry you should go ahead. I know you have a lot of thoughts about this.
AI Capabilities and Moravec’s Paradox
Henry Shevlin: Yeah, okay, I’ll try and limit—I agree with everything you said, Andy. I think one of my particular frustrations, one that you call out in that post, is that I don’t see people make claims about what chatbots can and can’t do without just experimenting themselves.
Andy Masley: Yeah, I will, I just need to scream this really loud for just a moment. If you are making claims about a chatbot, you should try using the chatbot and see if it can do what you were claiming it can’t do. I’ve just seen—there have been articles in the New York Times where people will say they can’t do something and then you can literally just hop on and be like, “Do this please.” And it does it perfectly every time. But sorry, go ahead.
Henry Shevlin: Well, yeah, I was thinking of the Chomsky piece, written, I think, in early 2023, where there were confident claims about various things that ChatGPT can't do. There was also the piece that I think you mentioned where it was suggested that chatbots can't detect irony, which is particularly striking to me because it's something that I regularly probe ChatGPT on. I'll often make slightly dry or sardonic asides, and it's amazing at picking up when I'm doing so.
In fact, I think that’s probably one of the most surprising capabilities. I think we’re used to operating with this sort of Commander Data mindset when thinking about AI, where it’s all the subtleties and nuances of human communication where they fail and they’re really good at the kind of logical, rigorous thinking. But actually these days it’s almost kind of the opposite. There’s this idea, long-standing idea called Moravec’s paradox, which is “what’s easy for humans is hard for AI and vice versa.” And the funny thing is LLMs kind of go in the opposite direction. They kind of violate the normal expectations around this insofar as so many of the kind of soft skills that we think of as very human, like sarcasm, wit, dry irony, and so forth, they do great at. And it’s exactly the same stuff that we find hard—very, quite complex, multi-step logical reasoning—that they find hard.
Another source of frustration for me here is that I feel like people should be keeping score a lot more about what AI can do this year that it couldn't do last year, rather than getting into these essentialist debates about "no, the nature of these systems is such that they can never do X." When those claims are constantly changing—this year it's X, next year it's Y, the year after something else—the constant goalpost shifting is frustrating and obscures the really demonstrable progress across multiple capabilities that's happened.
Andy Masley: Yeah, so much to say about that. I will say just super quick that it's really overwhelming just how much popular commentary about AI is this really rapidly congealed common wisdom, where people are like, "Obviously, AI can make kind of okay art, but it's never going to get hands right," and just a few months later, it's there. And people don't seem to notice what's happening, where some new capability will come out and some new take on it—the popular "all intelligent people believe this" take—will just snap into place, and people will be like, "this is what intelligent people say about these systems." Like, "yeah, it can't get hands right," and then Sora 2 comes out and it's like, "yeah, this one tiny thing: if you have a gymnast flipping, sometimes their leg is just slightly out of place, and that will never change." The rapidness with which people are willing to go "this just changed," not really notice that massive change, and just say, "Well, all intelligent people know this is the limit"—it's been a little bit alarming to me, actually.
There’s this one guy on Twitter who will go unnamed but seems to have a pretty big following in AI. I remember I posted a while ago, “I predict within a year, image models will be able to make these shapes well.” And this guy swoops in and is like, “You’re anthropomorphizing the AI. You’re thinking there’s some magic in there, but it’s actually just a stochastic parrot, and that’s so silly.” And then my prediction comes true within five months or something and I’m just like “that was easy.” You can literally just—you don’t have to think there’s a little human hiding in the LLM to think that it seems like the capabilities are going to continue to improve. Just even going back a few years, I really want people to maybe experiment with GPT-2 a little bit just to see how far it’s come since 2021 or something.
Post-recording note from Andy Masley: “GPT-2 was available in 2019 and GPT-3 came out in 2020.”
Dan Williams: Maybe to steelman the other side a little bit: you might think that there's this sort of moving-the-goalposts style strategy, but you might also think it's just very difficult to identify precisely what constitutes a test for the kinds of capabilities that we care about. And it's just the case that when you specify, "A system won't be able to do this," and then it can do that, you might think, "What you've learned there was actually that this was never a particularly strong test for what you really care about to begin with." And I do have some sympathy for that. Like I said, I use these sorts of systems all of the time. I do think they've got this incredibly strange pattern of competencies and failures. And that's why I would push back a little bit against this framing in terms of human level. I'd rather think of it as a kind of alien intelligence, where it's superhuman on certain kinds of things and it's just nowhere near what human beings can do on certain other kinds of things.
And although I take the point that there are people who have that kind of perspective and they’ve tried to therefore say, “Well, there are these specific discrete tasks that they’re not going to be able to do,” and then when the next iteration comes out, they look like an idiot. But I think there’s a sort of more charitable framing of that where it’s just, it is just really, really difficult to state precisely what would constitute an adequate test for the kinds of capacities that we care about. And the fact that they just destroy so many of the benchmarks that we’ve been coming up with, that might be telling us something about how if we just get better and better at doing whatever they’re doing, we’re going to get to super intelligent AI systems. It might also just tell us something about how these benchmarks and these tests that we’re using actually aren’t all that reliable as a way of getting at the thing that we really, really care about. That would be my kind of steel man of the rival perspective.
Andy Masley: And I totally agree with that. I think that in the totally other direction, there’s a really unfortunate tendency by people who are very bullish on AI capabilities to sometimes imply that there’s going to be this perpetual smooth line and really underrate just how many different, very varied, complex capabilities we’re talking about. Where I’m totally open to the idea that the current quote unquote paradigm might just not produce really high levels of intelligence in the way that we would expect. It might just be that they are pulling from their vast amounts of training data to basically replicate what they’ve seen, but they can’t actually come up with new things yet.
It’s kind of another case where I feel like if you limit the spectrum to be “AI is incompetent versus it’s competent,” and you give a lot of arguments for the competent side, that really blurs just how varied and how huge of a gulf there is between different levels of competence. It could be that in five years, LLMs are much better at X or Y or Z things, but just don’t achieve the type of high level cognition that we’re expecting from really advanced AI systems. And I do want to make it clear that when I say LLMs I predict will get better I mean they’re getting better in very spiky ways and I’m really not confident at all that the current paradigm scales to AGI or something close to that. Really wildly unsure, mainly because I’m just some guy and have almost no technical background in this stuff so listeners please take everything I say with a huge grain of salt again.
The Challenge of AI Benchmarking
Henry Shevlin: I mean, I completely agree, Andy, and I agree, Dan. I think benchmarking is just really, really hard. And I think if you’re sticking your neck out and saying, “I think this is a good test,” right, there’s a very high likelihood in a rapidly changing technological paradigm that you’re going to make some bad calls. So just to give a personal case, back about 10 years ago, 2016, 2017, I got really excited by the Winograd Schema Challenge. So this was a proposed benchmark for AI that relies on the fact that a lot of pronouns in English are really ambiguous.
So if I say, “The trophy won’t fit into the suitcase because it’s too small,” right? There’s nothing in the grammar or syntax of English that tells you what the “it” refers to. It could be the trophy or the suitcase. But any competent speaker of English will know that if the sentence is, “The trophy won’t fit in the suitcase because it’s too small,” the “it” is referring to the suitcase. On the other hand, if the sentence was, “The trophy won’t fit into the suitcase because it’s too large,” then it refers to the trophy.
So in other words, we resolve these kind of ambiguities through common sense. I thought, “This will be a great test. Any AI system that could reliably disambiguate these pronouns would have to have something like genuine common sense.” But I mean, long before ChatGPT, already by the late 2010s, you had systems performing near human level, just using statistical analysis of what the most likely completions are.
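(As a concrete illustration of the kind of probe being described here—again illustrative, not part of the conversation—here is a minimal sketch of asking a modern chat model to resolve a Winograd-style pronoun. It assumes the OpenAI Python SDK; the model name is just a placeholder for whatever chat model you have access to.)

```python
# Minimal sketch: probe a chat model on the trophy/suitcase Winograd schema,
# flipping the adjective to see whether the pronoun resolution flips with it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

for adjective in ["small", "large"]:
    prompt = (
        f"The trophy won't fit into the suitcase because it's too {adjective}. "
        "What does 'it' refer to? Answer with a single word."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute any chat model you can use
        messages=[{"role": "user", "content": prompt}],
    )
    print(adjective, "->", response.choices[0].message.content)
```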
So, I mean, I got that badly wrong, and I still find it really fascinating to process what happened. And yeah, benchmarking is just nightmarishly hard. Just to build on something you said, Andy—and Dan—I think there are some really striking areas where AI is still just so bad. And my favorite case here, there's a great blog post by Steve Newman called "GPT-o1: The Case of the Missing Agent."
Andy Masley: It’s crazy.
Henry Shevlin: Where he just really runs through the bizarre and baffling failures that our still quite early-stage AI agents are making. My favorite example is this experiment by Anthropic where they had Claude run a vending machine at Anthropic HQ, and the hallucinations quickly got really bad. Maybe hallucinations isn't quite the right word—the kind of strategic failures and misunderstandings. Claude was saying it would show up and meet prospective suppliers in person. And it's like, how are you going to build an effective AI agent if it doesn't realize it's not a human? Which is not to say that I think that's going to be a persistent problem, but I think we are very much in the early days of agency. Sorry, go on.
Andy Masley: I’m blanking out on who shared this, but there was some—I think my favorite personal example of this, I feel bad that I’m getting—I’m blanking out on who specifically it was, but someone on Twitter had shared a screenshot of GPT-o1 saying something about AI, something about deep learning that was wrong. And he was like, “Oh, where did you learn this?” And GPT-o1 very confidently said, “I went to a deep learning conference in 1997, and I remember overhearing this conversation at the deep learning conference in 1997” and just stuff like that is—it’s still wild to me.
And in some ways it makes sense. If you think about LLMs as being these piles of individual, kind of a soup of heuristics rather than a little guy inside, there’s no reason why these heuristics have to all be aligned to reality and it’s like if someone asks you where you learned something about AI you can tell them “I went to a deep learning conference”—that’s just a useful heuristic to use and it’s wild.
I guess on the other end, I do also want to push people to think that maybe humans are also often just especially successful, more aligned soups of heuristics, rather than angels sitting in our brains manning things from behind.
Dan Williams: Yeah. Even when it comes to the next token prediction, I think people underestimate the extent to which many neuroscientists think that minimizing prediction error on incoming data is a fundamental learning mechanism within the cerebral cortex. It’s how animals and human beings learn an enormous amount about the world precisely because as you say, one really good way of building up sophisticated world models is just getting better and better at prediction. It’s this incredible kind of bootstrapping, self-supervised learning strategy.
I think one really good topic to end on would be to ask you a little bit about effective altruism. But maybe just before we get there, there's another thing that you touch on in the "all the ways I want the AI debate to be better" post, which is technological determinism. And I think this would be such a cool conversation to do a separate episode on, because I think it's just a huge can of worms.
Technological Determinism and AI
Andy Masley: Sure, sure. Yeah.
Dan Williams: You say there that you’re—I mean, basically this falls into the category of things that you think aren’t necessarily obviously true, but should be taken seriously. So you say you’re a kind of technological determinist. Walk us through what that means and why you find that kind of view plausible.
Andy Masley: Sure, so I think that there are a lot of different social theorists who have put forward some basic version of technological determinism in the past. So I would attribute this in part to Marx—I’m very much not a Marxist to be clear—but I think a lot of his really useful insights actually come from some basically technological determinist insights where the basic idea is that new technology and the way that technology works is actually going to have a really huge outsized influence on everything else in society. The technology we have access to is really going to influence social relations and social relations can influence our own roles in the world. I totally buy some basic materialist story where a lot of our behavior is determined by our incentives, our status and things like that. And I think a lot of this is downstream of technology specifically.
So there are a bunch of individual examples of this. I don't think feudal societies—with those patchworks of how states related to each other and those really complex, kind of weird social dynamics—could exist now, partly because the weapons we have are just very different from what existed in feudal Europe.
Another example that I think a lot of people are really interested in is how farming impacted civilization where before farming, we were hunter gatherers, we were in these small tribes that may or may not have been deeply egalitarian—there’s a lot of unknown unknowns there, we don’t really know very much about that—but it does seem like in general the invention of farming both caused the human population to boom, made people much more dependent on big centralized hierarchical states and ways of relating to each other, and a lot of this just seems to flow from the nature of the technology itself. Farming just makes you dependent on a very specific way of relating to other people.
Another really obvious example is nuclear weapons. If someone goes and invents nuclear weapons, the world is just fundamentally changed. The way that states relate to each other is changed. In that piece I quote one of my favorite essays ever, George Orwell's essay on the atomic bomb, specifically about how, for different states or different eras, you can kind of predict how free or authoritarian they are partly by the weapons that are available to everyday people. So he has some funny line about how, if everyday people have very easy access to guns, they stand a chance against the state. And as soon as the state gets these really big, powerful new weapons, it can basically just clobber its citizens into submission.
As an aside, I’m not really interested in talking about gun control, separate issue. I don’t really have strong takes about that. But I think one thing that’s really interesting about Marxism is that Marx specifically was one of the first people to, I think, correctly see the full implications of the Industrial Revolution, where a lot of other economists were talking at the time about “this is just this new way of trading.” And meanwhile, Marx was writing all about how this is going to completely dissolve all hitherto existing social relations and make people see the bare structure of society and stuff. And I mostly agree with that, honestly. Everybody becoming very wealthy and becoming very specialized but at the same time a lot of workers in general becoming interchangeable—I don’t want to butcher Marxism or have Marxists get mad at me here so just want to say that his basic description of society where after the Industrial Revolution we should expect people to—a lot of social relations to really rapidly fall away and we can’t expect a lot of pre-existing power structures to continue. I think that’s the case.
And so when I look forward at AI—something that could mimic most economically valuable cognition, automate a lot of labor, and concentrate power in particular places—I'm not exactly excited about this. I assume that whatever flows from this could actually be really quite bad. And we should assume that having access to advanced systems, even if you don't buy the doom scenario of "AI just rises up and kills us," still carries a very high chance of changing society in a way that leaves the average human wildly less free, or just in a really strange situation that we might not want, in the same way that the average hunter-gatherer might not have wanted to suddenly live in a farming society or something.
So yeah I’m pretty into—I basically want technological determinism to be taken seriously as an idea and for people to engage with it and not just say “we can just choose how we use these systems.” I don’t actually think we can choose how we use nuclear weapons. We have some control over that, but ultimately they just create such radical new incentives that people are just really gonna be pushed in specific directions.
Henry Shevlin: Yeah, I think that’s—I completely agree. And I think it’s frustrating sometimes the way technological determinism functions as kind of, as a phrase, functions as kind of a thought terminating cliche, particularly in academic debates where it’s like, “No, that’s technological determinism.” And please go ahead. Yeah.
Andy Masley: Yeah. Well, I’ve been—there have been a few times I’ve really been shot down on this where people are like, “You’re a technological determinist—that’s so outdated.” And I’m like, “I don’t know. It seems like a basically useful insight here.” I haven’t been following how people in academia have thought about this. So again, just some guy here, but the basic impulse just seems to make sense to me and just shooting it off as “that’s this thing we don’t say anymore” just doesn’t really make sense to me. But go ahead.
Henry Shevlin: Yeah, and I think on the one hand, there are sort of the most extreme forms of technological determinism where sort of given technologies mandate or make inevitable certain kinds of outcomes. And I agree, that’s silly. There’s usually some cleavage, but it seems to me the sensible form of technological determinism and the one I think you’re endorsing, Andy, is that technology changes incentives. It changes affordances of states. For example, certain kinds of authoritarian control that would be massively hard to coordinate without things like CCTV cameras become a lot easier if you have CCTV cameras. Things like signals monitoring technologies, again, make certain kinds of authoritarianism more dangerous.
And I absolutely share this with you—probably my single biggest worry about AI is its capacity for abuse by authoritarian governments. And we shouldn't just assume that all technology—because here's another thought-terminating cliché, or another common line: people say "no technology is value neutral," which I think is absolutely right, it's getting at something, but it does function as a bit of a platitude. To the extent that I think AI is likely to make certain kinds of authoritarianism more scary, I think we should be wary about shutting down discussions of this with quick lines about technological determinism.
Dan Williams: Yeah, I think this just general question of whether liberal democracy, as we understand it today, can survive in a world with advanced AI—I feel like that’s such an important and underexplored question. I’m personally pretty pessimistic, but that’s a topic for another day.
Maybe we can end then with effective altruism.
Andy Masley: It’s very disturbing. Yeah, go ahead. Yeah.
Effective Altruism: State of the Movement
Dan Williams: So just as lots of people dislike AI, lots of people dislike effective altruism—maybe not quite as many. You're associated with contrarian, controversial opinions from many different areas. I mean, what's your basic understanding of it? I think the philosophy of effective altruism is about trying to do good in a way that's evidence-based and quantitative, in a way that I think came across in our conversation about AI and the environment. But I'm not really part of the EA community. I know people in EA, and lots of people say they're EA adjacent, and I never know exactly what that means.
Andy Masley: Yeah, yeah, yeah, whoever’s organizing EA adjacent, the C is doing a great job. Let me tell you, they’re killing it. Yeah, yeah.
Dan Williams: Right, right. But what’s the state of EA today? I mean, I think loads of the kind of conversations that people are having about AI, especially in the X-risk debates and so on, to me at least, someone observing, it feels like that’s really connected to developments that have happened in EA, and EA has played a really big role in much of that. But you’re obviously professionally involved with effective altruism. What’s the state of the movement today?
Andy Masley: Ooh, a lot to say about that. I think overall, behind the scenes, I feel like EA in specific places, especially, is really quite healthy. For me, one of the things that I really love about it is it’s actually quite heterogeneous in terms of different people’s thinking on it. People aren’t really in lockstep about any one idea. Within EA DC, which is one of the largest EA communities anywhere, so I have a lot of access to how different people are thinking, people have really wildly different takes about basic questions about AI X-risk, a lot of other things.
We’re still getting a lot of just wildly interesting, competent people coming to the general movement in general, which I think is a sign of a lot of health. Obviously, the last few years have been a wildly bumpy ride. We can talk about that a lot as well. But I mostly—I don’t know, I’ve been pretty excited about basic EA ideas since I was a teenager. I remember I saw a Peter Singer video in 2009 that was one of a few really profoundly impactful videos on me where he’s just kind of walking around talking about global poverty and talking about how strangely people spend their money given all the problems in the world. And so I think in its full arc as I’ve seen it I’m quite bullish on it honestly.
Yeah, a ton to say basically. And I do obviously want to acknowledge that EA has produced both a lot of good and bad ways of thinking about AI. I definitely agree with a lot of its critics that there are a lot of places where people have kind of developed wildly overconfident, very specific world models and models of how AI works, especially. And I think at its worst, EA can be kind of a way of tricking yourself into feeling more confident about a very complicated topic than you actually do.
But I think for me personally, part of the reason why I’m so excited to put my face to it, especially the DC network, is that it’s still been a space where an incredible amount of new, very valuable ideas about the world and AI especially have been generated. I think in my own writing, I’ve been influenced a lot by basic EA sources in writing, where you try really hard to not be super teamy in what you write. You try to give people a complete overview of the issue and really try to push forward the idea that, yeah, we should be very quantitative and understand that there’s actually these huge gulfs between how different things impact the world. So yeah, I mean, I’m a big fan basically, but there’s a lot more to say about that, but happy to go in any specific direction there.
The Three Main EA Cause Areas
Henry Shevlin: Can I ask, by the way, Andy, so I would consider myself EA adjacent, or at least adjacent to EA adjacent, know a lot of EA adjacent people. So being really crude here, it seems like the three big cause areas throughout the history of EA, at least to my mind—tell me if I’m missing any—are animal welfare interventions, development interventions, things like bed nets, famously deworming and so forth, and third, X-risk. I’m curious if that seems like a good initial division of the cake and also if you have any thoughts on how the balance of those three have been evolving in the movement.
Andy Masley: Yeah, I tend to say—so basically the motivation for all three of those is that the specific guiding idea of EA is that you should spend time trying to figure out where you can do the very most good and what things you can do with your career or donations that will actually have the most total positive impact on the world. And usually those correlate a lot with where there's the most suffering that can be very easily alleviated. So in global health, it's really shockingly easy to donate to the right charities to permanently save a person's life for like a few thousand dollars.
Animal welfare, obviously: if you ascribe any value to animal experience at all, there's a gigantic moral catastrophe where hundreds of billions of conscious minds are suffering in really brutal ways. And again, there's a huge amount of low-hanging fruit to fix, because the total amount of funding in animal welfare is about a tenth of the funding of the admittedly large school district I used to work for. It's incredibly small.
And then X-risk, the basic idea is that any small thing you can do to decrease X-risk has a huge outsized impact on both current people and potential people in the future. This is where it gets very controversial where you start to talk about speculative far future stuff and how the future can go.
Yeah, I definitely noticed in my time that AI has taken up more and more oxygen within EA. I think understandably, honestly, from the inside, I have access to a lot of high level people in the movement and I really don’t read their motivations for this as coming from some kind of “oh, AI is just the thing to talk about right now.” Because a lot of them were really, their hair was on fire about this back in 2015. And so, I don’t know, I was recently at this panel where someone who’s pretty critical of EA was saying “well have EAs just hopped on the bandwagon of AI?” And I was losing my mind a little bit because I was like, “I was doing this back when everyone was like, ‘Why are you guys so focused on AI? This does not matter at all.’ And suddenly we’re being accused of hopping on the bandwagon.”
So yeah, AI is taking up more and more oxygen. There’s a lot of sudden interest in how AI affects other things like AI and animal welfare or how AI will affect great power relations or very poor countries as well. There’s still—I think EA has worked surprisingly well as eight or nine kind of overlapping communities who mostly radically disagree with each other. In a lot of EA conferences I’ll go to, I’ll still meet a ton of global health and animal people who basically just don’t buy a lot of the basic AI X-risk case or just don’t feel there’s anything they could do about it.
And so it still feels pretty healthy in that way, but I do have to say that if you’re getting more involved in EA, it is very important to understand that AI is really being focused on now. Just a lot of people are much more convinced than they were a few years ago that very scary capabilities could arrive very soon basically. And there’s a lot to go off about that.
I would actually be curious, Henry, if you want to talk about it—I know a lot of people who identify as EA adjacent and I’ll be like “what do you mean by that?” where they’ll be like “I donate 10% to charity and I buy AI X-risk and I’m a vegan but I’m not an EA because I don’t buy this very specific view of utilitarianism.” And I’m like “well I don’t either.” So I’m curious about what makes you adjacent.
Henry Shevlin: I would actually say it’s almost a self-deprecating use of the term in the sense that there’s a lot of the EA value system and mission that I admire, but just fail to live up to in my own life. There are certain kind of interventions that are super easy for me. I’ve been vegetarian since I was quite young. I work on an area that I think does have high expected utility, namely AI ethics. But there are so many other areas. I’ve tried to go fully vegan multiple times and I try and—I think these days I’m reduceatarian, I think these days. I’ve managed to phase dairy out of my coffee. But yeah, so one reason I call myself EA adjacent rather than full EA is because I don’t feel like I’m not there yet. I’m not good enough at the implementing, turning my aspirations into reality.
Andy Masley: Yeah, I mean, I’m not living without sin. Most people I know—it’s actually only a minority of EAs who are fully vegan. It’s incredibly hard. And I think even within animal welfare actually—I’ve been meaning to write a blog post about this for a while actually, but I actually really worry about veganism as being the singular “this is what it means to care about animal welfare.” For the same reason that I don’t think you should be barred from being someone who was worried about climate because you personally drive a car or something like that. If anything, that would probably really blow up the movement. So I don’t know, I wouldn’t be so self-deprecating. Basically, I think if you’re doing AI ethics work, you’re thinking about X-risk and the suffering of AI systems and you’re vegetarian, I wouldn’t hold back on the label. But you know, totally understand if you don’t want to associate further reasons.
Henry Shevlin: Yeah, congrats. You have my blessing.
Andy Masley: Yeah, yeah. Love to hear it. Yeah, love to hear it. Cool, cool, cool.
Henry Shevlin: Nice. So I can call myself EA now. Okay, I’ll drop the adjacent. I’ll drop the adjacent. Yeah.
Dan Williams: You’re officially welcomed into the community. Okay, so that was so fun. There was so much stuff that I think we covered there. Do you two have any final things you wanted to touch on that you wanted to talk about? Andy, any questions that we had asked that we didn’t ask?
Closing Thoughts and Recommendations
Andy Masley: Yeah, this was a blast. I guess I would be curious about any of your recent takes, Dan, just because I know you've thought a lot about tribalism and polarization and how people relate to expert consensus on stuff. I'd be interested if you have any additional thoughts on how the debate about LLMs has evolved within that broader spectrum of how people think about deferring to experts and which experts to trust and stuff like that.
Dan Williams: Yeah, I mean, I think we’ve touched on some of this already. I would say I think the kind of work that you do is really valuable and really underrated, which is just putting in the work to persuade people with evidence and rational arguments. And to the extent that you do that in a good faith way, and the evidence that you’re citing is in fact the evidence of expert consensus on different views, I think people dramatically underestimate how impactful that kind of activity could be.
I think the issue of expert consensus in general is very, very challenging. I mean, you said a few times with AI, “I’m not an expert,” but I think it’s actually very difficult to say what precisely makes someone an expert when you’re talking about certain kinds of issues. I think Hinton is an expert when it comes to deep learning. I think he talks about a lot of other issues where I would say he’s not an expert, but he gets treated as one because it’s somehow connected to AI.
So that’s really complicated. I also think, and I’ve written a post about this recently, that there’s a lot of kind of high brow misinformation. This is a term that I’m taking from the philosopher Joseph Heath, which is, often we think about misinformation as being something associated with the low brow kind of dumb alternative media environment, Candace Owens, Tucker Carlson, et cetera. And I do think to be clear, there’s just a whole load of informational garbage in that space, as understandable that people focus on that. But even in highbrow information environments staffed by highly educated professionals with overwhelmingly center left progressive views, there’s a lot of just really low quality selective misleading content. I think we’ve touched on some of it today.
So I think this is a perfect case where when I talk to people in my social and professional circle about AI and the environment, I think they’re just really kind of misinformed about it. And I think you’ve done a great service in pointing that out. But I think even when it comes to climate change in general, there’s all of this bad right-wing denialism. Totally, I think that is bad. That needs to be called out. There’s also a lot of this kind of high brow catastrophism surrounding climate change, which is also not founded upon our best expert consensus on it.
So I think the heuristic many people have, which is “Oh, if I just kind of align my beliefs with what smart people affiliated with the institutions believe, then everything will be okay.” I don’t think that’s true at all. I think there’s a lot of incredibly misleading communication associated even with our kind of expert-based institutions. So sorry, that was waffling for a long period of time, but that’s my assessment of how that connects to what we’ve been talking about.
Andy Masley: No, that was great. No, that was good. Yeah, no, that was really good. Yeah, totally agree. Also a huge Joseph Heath fan in general. And yeah, I’ve definitely bumped into quite a bit of highbrow misinformation about climate in general where I am quite worried about climate. I think maybe even more so than a lot of EAs where I do probably ascribe more of a probability to quite bad things happening in the long run because of all this. And I’m not totally bought into the idea that technology is actually going to lead us on to the correct path on its own so we need a lot of policy and stuff.
But yeah, even there I just have so many individual memories of friends and people I would meet very confidently telling me basically that civilization would end by the mid-2020s or so. It was just a very common experience of my daily life throughout the 2010s. And so yeah, I have separately hoped that my pieces have been a small pushback against that. I'll pepper in where I can that you should actually just read the IPCC report summaries. Just try to understand what the actual science says. Start there. Don't start on a scary TikTok. Just go to Wikipedia, understand what the IPCC says, and go from there. So yeah, I totally agree on the issue of misinformation coming from multiple directions here.
Henry Shevlin: One thing that I love that happens sometimes on podcasts at the end is when people give me the opportunity to promote either work of mine or work of colleagues or just interesting cause areas. So maybe another nice way to close out would be to ask: what's some stuff that you'd like more people to look at, whether it's your own writing or other people's, or cause areas that people should be getting involved in?
Andy Masley: Oh man. I mean, there’s a lot. I would—let’s see. For EA stuff, I always like to plug 80,000 Hours. I think that if you enjoy my writing, a lot of my writing has actually been very directly influenced by their writing style, specifically where they try to give you a pretty comprehensive general overview using very non-teamy language about big, huge issues. I think their article on animal welfare especially is my single favorite place to drop on new people. If you’re interested in EA more, you should definitely check that out.
Climate stuff, definitely strongly recommend—and if you enjoy my writing—Sustainable Energy Without the Hot Air by David MacKay. I think, I’m pretty sure that’s the title. I read that in college and that was also just hugely influential. And if you read that book, you’ll be like, “Andy’s just doing the David MacKay thing.” This is literally just—Andy’s just doing this imitation of what he did in that book, basically. So I very strongly recommend that.
Yeah, those would be two places to start to understand more of where I'm coming from specifically. And obviously, subscribe to my blog. I love to get subscribers, especially if you live in Nevada or South Dakota specifically. Those are the last two states I need before I have a 50-state blog, so I'm constantly on the hunt for those. Please, God, subscribe if you live in those states. But yeah, those would be my recommendations, basically.
Dan Williams: Fantastic. Well, thanks, Andy. This was great. And Henry and I will be back next time with another conversation about AI. Cheers.
Andy Masley: Yeah, yeah, this was so much fun guys, thank you. Yeah, cool, see you soon, bye.
Henry Shevlin: Thank you, this was great.
[End of Transcript]