Abe Rutchick (rutchick.com, twitter: @aberutchick) talks about his psychology research that showed that killing is easier at a distance, how the experiment was designed, and about antisocial behavior in general being more likely when at a distance. A transcript is below. Other topics discussed: how his killing-at-a-distance research relates to our behaviors online; research he did about how people attribute moral responsibility for harm inflicted by autonomous self-driving vehicles; some studies he worked on that involved poker and poker tells; some research of his related to how differences in election maps could affect perceptions of American polarization.
Links to this episode:
Studies and work discussed in this episode:
Welcome to the People Who Read People podcast, with me Zach Elwood. This is a podcast about better understanding others and better understanding ourselves. You can learn more about it at behavior-podcast.com. Please, if you like it, share it with your friends and leave me a review on iTunes or another platform; I’d greatly appreciate it.
In this episode, I talk with applied social psychologist Abe Rutchick. We talk about a study he did that showed that people were more willing to kill ladybugs when they were distant from that happening. This is an interesting and topical study in how it relates to all sorts of things we humans do at a distance, from the military using drones to attack people, to us being more likely to treat each other badly when talking to each other online, to being more cold and removed when considering distant and abstract ethical problems, and to the food we eat and the products we buy and how we’re less likely to consider the animal cruelty or human cruelty or other harms involved when it’s so far removed from us. We also talk about some research Abe did regarding autonomous vehicles and how people reach moral judgements about who’s at fault for what those vehicles do. And we talk about some studies involving poker that Abe has been involved with. I’ll have links to all the studies discussed in the page for this episode at behavior-podcast.com if you want to check those out.
A little more about Abe:
Abe Rutchick is a Professor of Psychology at California State University, Northridge. He is, broadly, an applied social psychologist. His earlier work was on social perception, with a focus on the way people perceive political groups. He also conducted research on the nonconscious influence of everyday objects, including formal clothing, red pens, churches used as polling places, light bulbs, and ibuprofen.
This work has been featured in many media outlets, including the Wall Street Journal, the New York Times, The Atlantic, Time, and The Huffington Post. Strangely, this has also led to him providing “expert commentary” on other subjects about which he knows little, such as the effect of prison uniforms on recidivism and the effect of workplace fashion on employees’ confidence and work ethic. The highlight of his media career was probably being made fun of in a story by National Public Radio’s Yuki Noguchi for dressing “like a slob”.
More recently, he and his lab have begun a program of research at the intersection of social cognition and emerging technology. This work addresses how the capability of new technology to create both remoteness and intimacy influences the way we think and act.
You can learn more about him at his site https://rutchick.com. You can follow him on twitter at @aberutchick. Okay, here’s the interview.
Hi, Abe, thanks for coming on.
Abe Rutchick: Happy to be here, Zach. Thanks for having me.
Zach Elwood: So maybe we could start with the research you did that involved the ladybug killing? Where did the idea for that originate and why did you all pick that idea?
Abe Rutchick: That idea started a long time ago, well before the actual paper was published. We started it in, I want to say, 2009, a year after I’d gotten to my academic job at CSUN, Cal State University, Northridge, where I teach. My colleague Rob Yeomans, who now works for YouTube, was down the hall at the time, and we chatted about ideas all the time. At some point, I don’t even remember which of us had the initial idea, but one of us had read an article about drone strikes and we thought about a way to maybe study… We immediately had this thought that killing remotely might be a different psychological process than killing close up. It just struck us both. We started chatting about it and wondered whether we could find some way to look at an analog of that in an experimental lab context.
That’s really to start with the ultimate implication as opposed to that being a down the road downstream consequence. It really was inspired by it initially. And then even though that was the inspiration, we weren’t really trying to replicate that experience. Obviously, warfare and actual killing of people in that context is not something you’re going to be able to capture in a psych lab, at least not in this country in this time, maybe back in the ’50s when they could do anything. So we started just battling ideas around.
Rob left CSUN fairly soon after that, went to another academic job and then on to industry, but we stayed in touch on the idea, and over the years we built out a method for doing it. It took us, no exaggeration, I think six years to build a protocol that worked. In terms of constructing a specific apparatus to do the experiment, we had to work out a system that was believable but also ethical. Getting approval from the human subjects committee was nontrivial, as you can imagine. So it took quite a bit of time just to set up, and I’m very fortunate to be at an institution where there’s not as much publication pressure, so you can take these windmill-tilting approaches to research.
Zach Elwood: One thing I often wonder about studies that ask subjects to engage in some bad or suspect behavior is: wouldn’t some people in the study realize it was likely a psychology study and maybe a setup? Is that at all a factor? Maybe I’m wrong on that, and maybe the fact that you still get different results shows these things are not so much of a factor.
Abe Rutchick: No, I think it’s a really important point. It’s definitely a factor, definitely a thing you have to consider, and a lot of the art and craft of doing this kind of work is in creating a setting that is psychologically real. Clearly, when you think about generalizability from an experiment to a real-life situation, you ask: are these people representative of the population I care about? Probably not, they’re students, but that’s one concern. Another is: to what extent can we go from this situation to a real situation? That notion of ecological validity, of it being a psychologically real experience, is crucial. Certainly, them not deducing what we’re studying is really important. Participants guessing the hypotheses, or figuring out whether the killing is real or not, matters enormously for the experimental design.
And I don’t actually agree that if they knew what we were studying, of course they knew it as an experiment, but if they knew what we were studying and didn’t think it was real or something like that, I don’t think that would be that. I think we’d be studying a different process or following a lot of metacognitive stuff. That is not what we’re really interested in.
In this case, yeah, you need a cover story that works, and that’s one reason why it took so long. Our cover story was that we were doing a human factors study, like a user experience study. Our psych department historically had a big wing focused on that, so it’s a plausible cover story, and we said, ”Look, there are lots of reasons why you might want to kill insects at scale. You can extract dye from them if they’re colorful, or you can use them for DNA sampling. Doing that one at a time with a mortar and pestle is too cumbersome for an industrial context, so we’re looking to do it quickly, and so we have this setup with the conveyor belt and the grinder, and we’re looking to test the usability of this thing.” And we had them answer questions about usability and all that to make the ruse seem real.
We did have… I’m trying to think. So we’ve done actually three studies. One is published and the other two are not. So across all these studies, about 1,000 people have run through this protocol. There are people who don’t believe, it’s about a 3% rate of disbelief. And we do a careful debriefing afterwards where we say, ”Any comments on the study? Okay, cool. Anything weird about it? Okay, cool. Anything suspicious about it?” It’s called a funnel debrief where you gradually get closer and closer to like, ”Did you think it was real?” And they’re like, ”Now that you say that,” [inaudible 00:08:20]. Yeah, exactly.
So you go all the way down the funnel, you have a coding scheme for it, and you evaluate at what point they stopped believing: ”All right, this guy didn’t believe it. This guy didn’t.” And you toss those out. We completely excluded from analysis anyone who, based on our reading of those responses, didn’t really believe it was happening. Then the design itself is super realistic. The machine is this big black box, like a metal toolbox, with a conveyor belt on top of it. The ladybugs, which are real live ladybugs, are sitting in little capsules: two plastic hemispheres with paper taped on the bottom. You can see them, they’re moving, they’re real. The experimenter demonstrates and says, ”Okay, I’m going to show you how it works. I’m going to kill one ladybug.” They advance the conveyor belt using this controller, drop the capsule into the killing machine, and run the grinder, which makes a loud grinding noise, basically a computer fan hitting a nail, so it sounds super realistic.
And then they reach into the back of it, pull out a little output tray, and show them a crushed-up ladybug. Actually, the machine doesn’t really crush anything; it’s not real. We had a previously crushed ladybug that we were able to use, and it’s convincing. They look at it, and then we say, ”Okay, now, so you can show you know how to use it, could you just crush this Rice Krispie?” We give them a Rice Krispie in the same kind of capsule, they crush it, and then we take out the output tray again. It’s all sleight of hand: two separate pre-prepared output trays, one with the crushed Krispie and its plastic, one with the bug in it. So, okay, they believe it. It’s very convincing.
Zach Elwood: Yeah. One of my questions was how do you set that up? It sounds like a lot of work goes into making that believable. Yeah, interesting.
Abe Rutchick: It took a lot of work. I called a dear friend of mine who’s a set designer for theater and said, ”Hey, can you help me with this?” And he was like, ”I’m not sure I want to get involved in this. This is scary.” I was asking a lot of folks for advice on how to make it realistic, how to make it work. It was not an easy process. It was long and painful. It’s the kind of work that doesn’t really get done anymore, just because science wants results sooner. You want to get an answer. We have a master’s program, not a Ph.D. program, so a graduate student is with us for two years; that design phase alone was three careers’ worth of graduate students. The payoffs aren’t there for most folks on that timescale. So it’s the kind of work that doesn’t get done. It’s the kind of stuff you used to see a lot in the ’50s and ’60s, some of the really classic studies, and certainly that’s an inspiration for the work. Again, like I said before, we’re lucky to be at a place where they tolerate these absurdities.
Zach Elwood: There’s one thing I was wondering about that study was if you had done a follow-up study judging people’s degrees of guilts from the two groups, I wonder if you would have seen interesting things about like how being more likely to kill things remotely and how that played into feeling bad about it. Maybe did you have something in your study that included that?
Abe Rutchick: We did. It’s not in the published paper so you wouldn’t have seen it. You’ve predicted it, but we did it. In this paper, we did a bunch. We did this paper which is the one study, we replicated this, which unpublished manuscript that we’ve been trying to get published and sitting on the back burner as we keep trying. The journals tend to say, ”Well, we’ve done this already. What’s new here?” Like, ”Well, we’re replicating it, that’s important truly.” We have some new questions showing why we think it’s happening and it’s not quite convincing enough, this novel, which is a side issue in itself. Novelty is an interesting criterion because we want to believe that the stuff keeps working.
But anyway, in that one, we also followed up with some of those folks a month later and asked how guilty they still felt. We looked at the trajectory and whether it differed depending on whether they did it in person or, in those days, via Skype; that is, remotely versus in person. And we didn’t really get a huge difference there. It’s pretty hard to pin down; nothing statistically significant. My guess is that it’s because they have different levels of feeling bad at the time. People in person kill fewer and feel worse about it; people who are remote kill more and feel less bad about it. So when you try to compare those trajectories, they’re not starting at the same spot, and it’s not clear they should have the same slope even if the underlying process is the same. It got a little complex.
We also gave them moral foundations measures before and after they did it. So the question is: does participating in this act change you in some way? Does it change your morality? Do you feel like it’s less bad to kill animals after you’ve killed some yourself? For that one, we had to delay our debriefing a little, so they still believed they’d done it. The way we set it up, they had to kill at least two, and there were 10 on the conveyor belt. Now, they didn’t have to do it; they could of course stop at any time. But we said, ”To give it a good test, you should kill these two.” And so our dependent variable here is: did they kill just the two? Did they kill more? Or did they kill all 10?
By the way, interestingly, most people, two thirds of them, kill either two or 10. Once you get going, you’re going to keep going. You get a few who stop partway, but very few stop at three; that doesn’t happen. Some will stop at six or seven, like, ”Yeah, I’m done with this,” maybe just due to boredom. Our third study we actually did all in person, so not remote, not looking at the distance question, but we looked at a bunch of personality variables to see what might predict the killing. We also recorded their faces while they were doing it, and we can feed that into emotion-coding software. Our first pass at that wasn’t fruitful, but at some point we’ll have some grad student who really wants to dig into that data and figure it out. So yeah, we’ve tried to extract as much from these really hard-to-do studies as we can.
Zach Elwood: As you say, it seems super complex, because there’s even this factor of: once you do something, you justify your behavior and make peace with it in various ways, and it’s hard to know how that dynamic plays out. There are probably all sorts of factors even within that.
Abe Rutchick: Yeah, it is. We also described the scenario to people, showed them a video of it, and asked, ”How many do you think you would kill in this situation?” Not surprisingly, they said they would kill very few. They actually said they would kill fewer in person than remotely, just from having the situation described to them. But there’s a giant gulf between what people actually do and what they say they would do, which is not surprising, I suppose, but still interesting.
Zach Elwood: So when it came to getting those results out there and the mainstream press covering it a bit: it seems like something that’s interesting to the mainstream even without regard to topical issues like drone killings and such. Did you see much interest in it? If so, was it more or less than you expected?
Abe Rutchick: Well, a little bit, but certainly less than I expected. It’s clearly my favorite piece I’ve ever done; I’m immensely proud of it, to be honest. I think it’s really neat. But I’ve done other things that have been cited more; this is barely cited. It is in a textbook, however, which is something I’m super proud of. It’s just not been cited much by other papers, and for academics that’s our key metric: is it getting cited, and who’s citing it? Very little, here. I’ve got some stuff that’s been cited hundreds of times, and that’s fine, interesting. I don’t do any work that I don’t like; if I don’t really believe in it, I’m not going to publish it. But this is way down the list in how much it’s cited.
In terms of media and conversation around it, the thing that gets covered most of my work is some work on wearing formal clothing. That’s the one that gets the most attention; I take a call every week or two or three about it. I’ve done countless interviews about people wearing suits. Well, we all wear clothes, so I get it; it’s not niche work. Again, I’m super excited to chat with anyone about this stuff. That one was on NPR; I’ve been on there three times for some of these. It’s crazy. But it’s always these things that are fun and frivolous, like the clothing one. Or red pens making you grade harsher; that was my first NPR hit back in 2010. Not that that’s not interesting work, and it does matter for your everyday life, but it seems less important than killing things. So I’m a little frustrated that this hasn’t gotten a bit more attention.
Zach Elwood: Too much of a downer.
Abe Rutchick: Yeah, maybe that’s it. Maybe it’s a little scary, and in some people’s heads, a little controversial. I remember we applied for an internal grant to do some of the work. Now I’m definitely talking out of school, so to speak; how can I say this fairly? We didn’t get the grant, and I got some insider info from someone I won’t name that someone on the internal committee was themselves a veteran and was enraged by the idea that our work in the lab could possibly capture what it’s like to be in combat. I totally take that critique. I don’t think it means we shouldn’t do the work, and I’m not claiming the lab captures combat, but it’s a legitimate concern. I don’t know what it’s like to face someone who wants to kill me, and to kill them. That’s wild. I’m not suggesting the study captures that. But it’s really hard to study this process, and I would counter with, ”Well, what else do you want me to do?” We should certainly interview people and see what that experience is like qualitatively. We should look at all sorts of ways to study this super important, fundamentally human process. But given the tools of my discipline, experimental social psychology, this is pretty much as far as I’m willing to go; maybe I could upgrade to goldfish or something. I think you can learn something useful from it: people thought they were killing something. And it’s not a mosquito, it’s a ladybug. People like ladybugs.
Zach Elwood: I can’t see anything objectionable about your study because it’s hard to argue at an intuitive level. Sitting in a room in Virginia and pushing a button on a drone to kill someone would clearly seem to be easier than being actually at that place and doing it yourself. It’s hard to imagine that being objectionable or controversial to-
Abe Rutchick: Well, the controversy, I guess, would be around me saying that this reasonably replicates what that is like: knowing you’re ending someone’s life, a human’s life. And certainly, I could have written the paper in a way that was unfair and presumed something that isn’t right. Look, if I’d had a different pre-science career, if I’d been a Marine, I think that person’s objection would be easy enough to dismiss; I’d say, ”No, I know.” But it’s a reasonable critique. It’s a reasonable critique. I’m not going to blame someone for thinking that I don’t know what I’m talking about, in the sense that I don’t know what that’s like. I’ve been punched in the face, and I’ve punched people in the face who wanted to punch me in the face, and so on. That’s not the same either; it’s not the same as killing. So I take that critique.
Zach Elwood: I’d written a piece about social media and social media effects and about how there are inherent aspects of internet communication that lend themselves to amplifying our divides and amplifying some bad us versus them thinking. And I referenced your study in there as a way to make the point that if it’s easier to kill from a distance, then it’s understandably easier to do many bad behaviors, antisocial behaviors like threatening people, insulting people online, generally treating people badly online. In short, everything bad and anti-social presumably would be easier to do at a distance. I’m curious if you’d agree with that interpretation. Is it something that you’ve extrapolated from your study to other things in life?
Abe Rutchick: Well, first, thanks for citing it. I hope you cited it twice. No, I’m just kidding. I definitely agree with that thought, and I’ve thought about it some. I should point out that there’s work that looks at this quite directly, doing actual textual analysis on more and less anonymous channels and looking at the effect of anonymity in a much more direct way, so that speaks to it even more. But mine does, I think, add another brick to that wall, particularly when we’re talking about more deeply problematic behaviors like cyberbullying and suicide baiting and things along those lines, and it gets pretty grim. There have been a few people prosecuted; swatting, for sure, has had real consequences. I feel like people don’t realize it’s serious until it is. There’s a disconnect in their understanding: it’s a funny prank, it’s like delivering 10 pizzas to someone’s house, and then someone’s dead.
Zach Elwood: It’s like when things are distant, they’re less real. It’s like, well, what could happen? I don’t really know. It’s abstract. It’s far away.
Abe Rutchick: That underscores the point, yeah, for sure. There’s a woman in Boston, if I’m not mistaken, who was convicted of abetting the suicide of her boyfriend, a manslaughter conviction of some kind, entirely through online interactions. So broadly, I think, absolutely. It’s funny, we’re definitely not the only folks to think of this. Louis C.K. has a bit about it, where he talks about road rage: with a pane of glass between two people, you’ll say anything horrible about somebody, but you’d never do that at the same distance without that pane of glass. Imagine turning to someone in an elevator and screaming these horrible things. It’s definitely true.
Every possible thing we introduce that decreases intimacy and increases our sense of remoteness and psychological distance is, I think, going to lead to problems by and large. It makes me think of lots of stuff. There are good psychological reasons for this that people have looked at a bunch; it’s reasonable that we want to get credit for our good deeds and avoid blame for our bad ones. I do an exercise in my class every year where I say, ”Look, if you were invisible for a day, for 24 hours, what would you do?” I have them write it down anonymously and then I read the answers, and it’s: rob a bank, rob a bank, rob three banks, go to Disneyland for free, kick somebody in the pants, go to Area 51. It’s all pranks, spying, theft. And then you get a few people who don’t seem to realize they could do most of these things while visible, who are like, ”I’d go to France.”
Zach Elwood: Well, that’s just what they say. Imagine what they’re not saying too.
Abe Rutchick: That too. And then you get one out of 100 sweet souls who are just like, ”I’d leave cupcakes on my friend’s door.” And somebody like, ”I’d cure cancer.” I’m like, ”I’m not sure your visibility is the obstacle here.” No, but it’s great, and I’ve done it for my classes and for corporate audiences when I speak in those settings, and it’s real there too. It’s striking. I don’t believe the lesson here is that human nature is bad when left unsupervised; I don’t think that’s it. I think it’s this very reasonable thing: here’s a unique opportunity to not experience culpability for your actions, so let’s go. I set it up that way. If you could be invisible whenever you wanted, you’d probably act differently. It’s a special chance to be naughty.
Online, though, I can be pretty invisible and say mean stuff to people, and it’s titillating. In real life, if I started screaming at somebody, there might be consequences, reputational or otherwise. Online is a chance to do it without those. And so you do get that; pretty much every behavior is going to get worse when you crank up that remoteness and take away accountability. And it’s not just worrying about consequences. There’s psychological theory suggesting that self-awareness has an impact internally: if I’m conscious of who I am in the moment, if I’m very self-aware, that’s going to make me behave better.
There’s a famous study which it’s almost the apocryphal territory because I don’t think it’s been replicated, but kids stealing Halloween candy. If there’s a mirror behind the bowl, they’re less likely to steal the candy because they’re so like, ”I shouldn’t do this. My moral compass is restored by seeing my own face.” Whereas if you’re masked anonymously in a big group, you’re more likely to misbehave. So yeah, I broadly agree. I think it has implications for, I don’t want to be too grandiose about my particular study, but this and related work has implications for how we function in a workplace, particularly in the COVID era. I wonder whether that’s something we could observe more broadly. I mean, there’s so many things have changed. It’s hard to pinpoint a cause and effect here, but we’re doing everything through email and through Zoom and so fewer things face to face with just that do. Yeah, I think it’s good implications for all that work. The broad point has been belaboured and made though like anonymous is bad, remote is bad. It tends to make us worse.
Zach Elwood: A small edit here: I took out some talking that Abe and I did on the subject of fake and anonymous social media accounts. We talked about how social media companies don’t have an incentive to reduce anonymity. They don’t want to put up more hoops for users to jump through, and they don’t want to hurt their market share by making it harder for potential customers to join. Facebook, for example, deletes about one to two billion fake accounts per quarter, which gives a sense of the problem. I talked about how I blamed Facebook for not doing more to cut down on fake accounts, because even though they claim that creating a fake, deceptive account is against their terms and conditions, they have very few obstacles at sign-up that would prevent people from creating these accounts. While I’m on the subject, I wanted to mention that I’ve done some independent research into fake Facebook accounts. That research was featured in 2017 in a New York Times article titled, Facebook Says It’s Policing Fake Accounts, But They’re Still Easy to Spot.
Later, some other research I did was featured in 2018 in The Washington Post in an article titled, When a Stranger Takes Your Face: Facebook’s Failed Crackdown on Fake Accounts. I was even invited on the Chris Hayes show to talk about that work, but I missed an email they sent and I missed that opportunity. If you want to know more about this work, I have some articles about that fake account research on my blog at Medium which you can find by searching online for Zach Elwood research Medium. I’ll jump to where Abe starts to talk about a study he did involving autonomous vehicles.
Abe Rutchick: I did a series of studies a few years back with a wonderful master’s student, Ryan McManus, who is now a Ph.D. student at Boston College. He led these studies; I was there as an advisor and did my share of the work, but he was the boss of this. He’s a moral psychologist; he studies how people make morally relevant decisions. I had a burgeoning interest in technology, and so the intersection of those is: how do people make moral decisions in a technologically advanced context?
We’re talking here about the question of self-driving cars of AI control vehicles EVs and basically how you find judgments of guilt if there’s an accident in different scenarios. One of the scenarios we were looking at was this this scenario where sometimes these cars will have to make decisions about what they do. These are somewhat contrived arguably, but sometimes we’ll be in a situation where you’re going on the road quickly and someone runs across the road and do you hit that person or do you swerve out of the way having some risk to you?
I’ve actually been in this… It’s artificial, yes, but I’ve been in this situation actually in my car before. I was driving once down the road and it is this misty morning and it’s about 7:30 in the morning on a weekend so it’s quiet, heading off to get some breakfast and a truck was coming towards me with a ladder dangling horizontally, unbeknownst to this fellow obviously, off the back of his truck like off the tailgate blocking both lanes. And I’m driving towards this guy and I’m like, ”Is that really happening? That’s really happening. There’s a ladder across the road.” And this street happened to be, you couldn’t write this in an experimental stimulus and have it be believable, but this truly was no sidewalk. The thing on the right was like lawns of people’s houses. Do I get hit by a ladder at 40 miles an hour for each of us or I drive blind onto someone’s lawn where there could be a child. I did the worst thing which is like the halfway in between where I basically kept going straight slowed down and shirked my shoulder, duck down a little but also move to the right slightly. It probably was the worst because it killed both sets of people.
Zach Elwood: Did both die?
Abe Rutchick: Yeah, exactly. The ladder shattered my side mirror, and thankfully only that, because it had swung up in the wind. Anyway, these things do happen; these seemingly artificial dilemmas are real. You wouldn’t think so, but it happened to me. So what do you do? Do you preserve the most life, the classic utilitarian thing, swerving to avoid two people and potentially putting the one person in the car at risk? Or do you preserve the driver at all costs? There’s some work on this that came along around the same time as ours, where someone basically showed that people think others ought to make the utilitarian choice, that what we ought to do is have cars that save the most lives possible, but ”I would prefer to ride in one that saves me.” That’s a very famous paper; it’s been cited a lot of times, and I’m just like, ”Wow, yeah, that is true, and I don’t know if we needed the study to know that.”
Zach Elwood: Right. It ties into the ladybug killing, because it’s like: I don’t mind death when it’s a distant, remote thing, but if it involves me, I’m very particular.
Abe Rutchick: That’s exactly the impression I had back then. But I don’t want to mock those guys. They’ve done some really cool other work extending that, looking at the demographics of who’s crossing the street. Is it three old ladies and a cat versus two young men? Who would you sacrifice to save whom? That’s really cool work.
That was the idea, the classic trolley dilemma. For listeners who aren’t familiar with it, there’s this trolley problem or trolley dilemma that goes back to a philosopher named Philippa Foot in the ’60s. The idea is there’s a train or trolley going down the tracks and there are five people on the track. Do we pull a lever to divert the trolley so it doesn’t kill those five people, but instead kills one person on another track? People generally make the utilitarian choice and do pull the lever. The reason it’s tricky is that we’re now taking an action to kill someone who was not imperiled before; if we just stay still, all five of those people die. But we make the utilitarian choice more often than not. There are some fun variations, which actually tie back to the idea of intimacy and remoteness we were talking about before, like the footbridge version of this, where instead of pulling a lever to divert the trolley, you push a guy off a footbridge onto the tracks and the train hits that guy instead.
People don’t tend to do that. They tend not to make the utilitarian choice. You can’t just push an innocent guy who was nowhere near the tracks onto the tracks, which is interesting: it’s the same number of lives lost, but I guess we’re blaming the other fellow for being near a train track. It’s really interesting. And then when you take it another step further and say, ”Okay, what if I push this guy onto the tracks using a long stick?”, people are more likely to do that than to do it by hand. That’s exactly it, right? The distance does seem to matter; the intimacy of putting your hands on a person and pushing them to their death really seems to be a driving factor here. The long poking device, on the other hand, adds just enough remoteness. I think that’s fascinating work.
Anyway, that was the framework from which we approached this. We said, ”Okay, this is the scenario. Either the person is driving and makes the selfish choice or the selfless choice. Or we have an AI-controlled system that was programmed by the manufacturer of the car, and it makes the selfish or the selfless choice. Or we have an AI system that was programmed by the driver when they bought the car, with that same set of outcomes. And then we had some override conditions, so the person had preprogrammed the car to be selfish, but then in the moment of truth, they hit the override switch and behaved selflessly. Or the reverse: they decided they were going to be selfless, but when it came down to it, they were actually selfish and hit the override switch.”
So all those conditions were in place, with the different outcomes. And you find what you expect, which is that in the manual condition, of course, the driver gets a lot of blame or praise for their actions; that is diminished when it’s an AI programmed by them, and still further diminished when it’s an AI programmed not by them but by the manufacturer. We saw some interesting stuff around the override switch. If you go back on your previous selfless intentions, so I decided to behave selflessly but in the moment I just couldn’t do it and decided to be selfish, that’s actually even worse than being selfish the whole time, which is cool, this last-second switch condition. And the reverse too: being selfish and then deciding, you know what, I can’t do it, I can’t kill this person in front of me, and acting selflessly, that’s actually better than being selfless the entire time. So it’s neat. I think there’s something in the idea that what you do at the moment of truth is more diagnostic of what you’re really like, your true character. I think that’s probably behind it.
Anyway, we did that and we’ve done some interesting follow-ups on assigning blame to different agents: do you blame the manufacturer or the driver, and how much do you blame each of these entities?
Zach Elwood: I think that area is so much more complicated than a lot of people realize. About 10 years ago, I made a $500 bet with a friend that we would not see fully autonomous vehicles, with no driver at all, on public roads for more than 15 years. The reason I thought that was that the technology would probably be good enough to do it, but I was skeptical of the more complicated human factors. For one thing, I thought people underestimated how freaky and creepy fully autonomous vehicles are perceived to be; for many people, especially older people, that was one factor. And the other reason for me was that some of these things you’re talking about, the moral and legal complications of how these things would actually work, are so much more complicated than they seem on the surface. Who makes those decisions? Is it legislated? If it’s a black-box AI system that you don’t really know the workings of, is that ever really workable if you can’t say exactly what it’s doing? And a lot of these AI systems are black-box situations where it’s machine learning. All these factors make it a lot more ethically and legally complicated than is perceived. I haven’t really been following the actual state of things recently, so I wonder how much of that has held up, the human-factors stuff versus the technology itself. I don’t really know. But it’s super complicated.
Abe Rutchick: Well, at this point, I would have made that same bet with you, and I think you’re going to win your bet. My views have changed as well. A lot of driverless miles have been driven, even on public roads, but there’s a guy in the car with the theoretical ability to take control of the thing. So they’re not a means of transport that anyone can just have access to. The full self-driving features on Teslas are always being disabled and reenabled and things along those lines. We don’t do well… I’m sure Ryan McMahon should be the one talking about moral judgment, not me. But we don’t do well when there are multiple targets. We like simplicity. Think of an insurance situation: they really like to know who’s at fault in an accident.
Some states do have shared fault. I got into an accident in New York one time and they’re like, ”Well, it’s 20% your fault.” I’m like, ”Was it? I don’t think so.” But I guess it was raining and I could have been going slower. Generally, though, we don’t do that very well. We assign fault to someone. Think about when you’re wronged interpersonally or something bad happens to you: you look around to see whose fault it was. We want someone to blame, and usually it’s one entity. It’s not like we say, ”Darn this complex system of interacting variables, which in a nuanced way made my day worse.” That’s just not how it works. And so when we have what is clearly a complicated system of numerous interacting variables, it’s not quite clear how people are going to get their heads around that legislatively, morally in the court of public opinion, and financially when it comes down to an insurance claim. So yeah, I do think those social obstacles are very real, independent of the technological challenges.
With that said, the worst of these systems is better than the best human driver now. I think one reason there’s a mistake here is that change is hard. We tend to get blamed for action more than inaction. If I just hold on to my money and inflation happens and I gradually lose some, or I hold on to my money, don’t invest in crypto, and it goes up a lot, I’m not going to be like, ”I’m so stupid. I wish I’d done this thing.” I’ll be like, ”Okay, I missed an opportunity. That’s not great.”
On the other hand, if I put money in and it goes down, now I feel terrible and everyone looks at me like a dummy. We tend to get blamed more for actions and praised more for actions, without considering what the inaction is or what the baseline is. We are in a situation now where a tremendous number of people die in their cars. It’s a lot. It’s really bad. It’s, I believe, the sixth leading cause of death; maybe that’s within a certain age bracket or something along those lines, I looked at it several years ago. But it’s a problem.
Zach Elwood: Yeah, 35 to 40,000 people a year in the US-
Abe Rutchick: That’s a fair amount. As we’ve seen, just because a lot of people are dying from something doesn’t necessarily mean we’ll take the appropriate actions to stop it, but it’s clearly an issue. There are also tremendous costs, infrastructural, environmental, and so on. Even a very bad AI driver is going to reduce those deaths, and I think we’re quite far past ”very bad.” But the first time the AI does something stupid that a human wouldn’t have done, we’re going to be like, ”There we go, this is not good,” never mind the fact that it wouldn’t have done the 20 stupid things that humans do. So it’s a tricky thing. And a lot of it has to do with our embracing of the status quo and our bias towards it.
Zach Elwood: Yeah, it gets into existentialist thought, too, where we’re always making decisions; even when we think we’re not making decisions, we are making decisions. So, you and I initially started talking to each other because of a poker-related paper, a pretty well-known study by Michael Slepian. Is that how you pronounce his last name? Slepian, okay. Speaking of things getting a lot of mainstream press, that study got a lot of it. Maybe you could talk a little bit about that one? One thing I was interested in, as we were talking about with your study getting press: it struck me that it got so much attention because of the sexiness of poker tells, the mystique and excitement poker tells hold in the public’s eye, maybe especially in America with the prevalence of poker in the culture. Do you think I’m right about why it got so much attention?
Abe Rutchick: Yeah, absolutely. It’s funny, it did get some attention, and it’s been cited a decent amount. It didn’t get nearly as much attention as some, but it got a fair bit.
Zach Elwood: I guess I don’t mean citations so much as mainstream attention, because it was on NPR and several other things.
Abe Rutchick: It was. It did get a good amount. You’re right. Not as much as the formal clothing study, but it did get some.
Zach Elwood: It was more than the ladybugs though.
Abe Rutchick: Yes, exactly. It was more than ladybugs.
Zach Elwood: Less than the formal clothing, but more than-
Abe Rutchick: Way less than that. [crosstalk 00:40:27] Yeah, exactly. That study continues to resonate in popular culture, actually. I don’t know if anyone’s read Maria Konnikova’s wonderful recent book on her poker journey, but she has this scene in the book where she goes up to Erik Seidel. Again, I’m not sure how poker savvy our audience is, but it was Erik Seidel, and she wants to recruit him as her coach, and as her trump card she pulls out of her bag this paper that apparently no one has read. I’m fairly certain, I haven’t had a chance to ask her yet, but I am more than 50% confident, that was that piece, the Slepian piece.
Zach Elwood: I think she mentions that in her book because she talks about-
Abe Rutchick: She mentions that that was the one she pulled out-
Zach Elwood: Well, she talks about Slepian’s piece later in the book, I think, so presumably it’s the same one.
Abe Rutchick: Yeah, that’s my hunch from my detective work. Anyway, Michael was my undergraduate student back when I was teaching at Syracuse as a visiting professor early in my career. He was a film major and took my social psych class and decided to shift gears and do social psych, and now he’s done 10 times as much as I have. He’s a superstar. Actually, a plug: he’s got a book coming out called The Secret Life of Secrets. He’s a secrecy researcher. As of yesterday it’s available for preorder, and I’m delighted to get that preorder in soon.
Anyway, I’ll describe that study and a little of its origins. And yes, I agree with your broad sentiment that the mystique and sexiness of poker is a big factor here. He had done the work with another student and his advisor and brought me aboard largely as a poker consultant, to make sure there weren’t any horrible gaffes in his description of the rules of hold’em. I did have some writing edits and so on too, but I didn’t have much to do with the actual running of the studies. The work is really neat though, and it’s in a very prominent journal. That’s the other thing I think helped a lot: it’s in a journal called Psychological Science, which is arguably the best journal social psychologists publish in outside of Science itself, a great journal, very high profile. The study basically had lay people watch videos cut from World Series of Poker footage and evaluate, there are a few quirks and stuff, but basically they’re evaluating hand quality on a Likert-type scale [crosstalk 00:42:55] hand strength.
And again, it’s done in a context-free way. There’s a lot of… We can talk about the flaws of the study if you’d like, there are plenty. We’re not looking at the whole… Anyone who studies poker knows that the strength of a hand in the abstract is not a meaningful thing; we look at hand strength by looking at the percent chance to win from that point on. But of course, you don’t know your percent chance to win because you don’t know the other person’s cards. Maybe you do if you have the nuts, but often that’s unknown to you.
So we looked at how people rated how strong the hands were as they watched a clip of a person placing a bet, moving the chips in, and we altered the videos in one of three ways. In the control condition, we didn’t alter them at all. In another condition, we showed just the face, the shoulders up. And in the third, we showed just the arms, shoulders down. What we find is that people are basically at chance if they look at the whole body; they can’t tell what’s strong and what’s weak from watching. But they are statistically worse than chance if they look at just the face, and they are better than chance if they look at everything but the face, that is, if they look at the hands and arms moving.
The idea being that when we’re making a bet in poker, or doing anything in poker, we’re concealing, and maybe even actively deceiving about, how strong our hand is. Everyone talks about a poker face; not too many people talk about poker shoulders. We try to stay stoic, or even misleading, and the evidence suggests our faces do mislead, but we exert less conscious control, whether we don’t try or we’re unable, over our hands and our motions. People try, of course, but we’re apparently worse at that. The smoothness with which we put our chips in seems to predict strength, people are probably not aware of whether they’re doing this well or poorly, and lay people can pick up on it and get useful information.
Zach Elwood: A small note here regarding Michael Slepian’s poker study: soon after it came out, I wrote a critique of it. You can find it on my readingpokertells.com site or by searching for ”Zach Elwood criticism Slepian poker study.” If you’re interested in poker or in poker-related scientific studies, you might enjoy it. Back to the interview. You did another poker-related study too, right?
Abe Rutchick: Yeah, with my colleague Dustin Calvillo, we looked at hindsight bias in poker. Hindsight bias is this phenomenon, this ”I knew it all along” effect, where you recall your previous predictions as being closer to what actually happened than they really were. It’s a well-known, very easy to demonstrate bias in the cognitive psych literature, and what we found is that expertise attenuates some aspects of that bias and not others. The role of expertise in biases is really interesting: sometimes expertise can solve certain biases, sometimes it doesn’t, and knowing which is really useful. Again, poker is a somewhat narrow domain of endeavor, but at least in that domain we have an interesting effect where expertise sometimes helps and sometimes doesn’t, which is neat.
We have another study, actually, that’s unpublished. I was looking at it before we chatted because I wanted to remind myself of it. Another student, Johnny Cassie, did his thesis on this work with me. We haven’t published it; Johnny moved on to industry and is working in market research, so the incentive to get the work into a publishable manuscript isn’t what it would have been. But we looked there at embodiment in poker, so another round of the study I did with Michael Slepian, looking at what people’s hands and arms are doing as they act and how that relates to hand strength. And we found that how far people moved their hands, the literal distance they moved their hands, predicted the strength of their cards. And basically nothing else in this particular study did. So there’s clearly something-
Zach Elwood: Was it farther… Were you saying it was how far they put the bet into the pot?
Abe Rutchick: The physical distance they move the chips, yeah.
Zach Elwood: And farther was related to more relaxation, more strength?
Abe Rutchick: Yeah, that’s right. Sorry, I cut out for a second as you said that. Yeah, that’s right. The farther they put it, the stronger the hand. That’s what we found there.
Zach Elwood: Now that’s really interesting. I want to talk to you more about that later. Let’s see. Is there anything else you want to talk about that we haven’t mentioned?
Abe Rutchick: I guess the only other study we haven’t really discussed, this is going to be a little out of order I suppose, but we talked a lot about social media and polarization in the context of my anonymity work. I have done some work directly examining polarization that might be germane. The one that became prominent actually had this really interesting trajectory: I did it back in 2009 and it lay dormant, a few people cited it, I don’t think I did any media on it. And then this past election cycle, 2020, Adam Grant tweeted it. He’s a well-known industrial-organizational psychologist, a famous fellow in our field, and it immediately went completely bananas; hundreds of thousands of people were talking about it. It was one of the most discussed social psychology papers that year on Twitter, I think it’s called Altmetric that tracks those things, and I started seeing this thing take off. People were reaching out to me, ”Hey, your study is going bananas.” And I started looking, and it was really interesting to see.
So that one has gotten some prominence. That’s the one where we look at how different colorings of election maps affect people’s perceptions of polarization. If you look at a classic red and blue election map, the states the Democrats win are colored blue and the states the Republicans win are colored red. That’s one depiction you could use. But you could also go with a red-blue blend and shade each state purple in proportion to its share of votes. You could do that at a county level too and show the rich spectrum of differences within states, or something along those lines. There are a million different ways to represent this, but we choose one, and the most prominent one we tend to see is this solid red-blue coloring. Yeah, exactly.
Well, what the map does is tell you who won the election, which is what we want to know. I had people look at one map or the other and then make judgments about polarization, about stereotyping, the political views of people in different states, the ability of people to affect outcomes, and the efficacy of voting. I had a version where you get what you’d expect. But then I had a version where I printed the vote share right on the map. The numbers are there: these guys went 54 to 39, with some votes going to neither candidate, right on the map. And it made no difference whatsoever. The colors dominate the vote numbers. We’re literally asking the question, if you randomly chose a person from Texas, how conservative are they? The numbers are right in front of them and it doesn’t matter; the color rules. So that was an interesting study, and it’s gotten some play. I think we have a lot to say about polarization as a field, and it’s something we’re not going to stop looking at any time soon. It’s a tough nut to crack.
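[A note for readers: the purple-blend mapping Abe describes, shading a region between pure red and pure blue in proportion to its two-party vote share, can be sketched in a few lines. This is just an illustrative sketch, not code from the study; the function name and hex-color output format are my own choices.]

```python
def blend_color(dem_share: float) -> str:
    """Return a hex color between pure red (share 0.0) and pure blue (share 1.0)."""
    dem_share = max(0.0, min(1.0, dem_share))  # clamp share to [0, 1]
    red = round(255 * (1 - dem_share))         # red channel fades as Dem share rises
    blue = round(255 * dem_share)              # blue channel grows with Dem share
    return f"#{red:02x}00{blue:02x}"           # green channel fixed at zero

# A 50/50 region renders as "#800080" (purple); a 60/40 Republican
# region renders as "#990066", a reddish purple rather than solid red.
print(blend_color(0.5), blend_color(0.4))
```

[A county-level map would just apply this per county instead of per state, which is what produces the "rich spectrum of differences" Abe mentions.]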
Zach Elwood: Yeah, you have quite a few polarization studies. I’ll put some links to those in the page for this episode, too. That’s really interesting, because so much of the polarization stuff is about these simplistic perceptions. I was reading Peter Coleman’s book about polarization recently, I think it was called The Way Out. He says one of the main strategies for mediation and conflict resolution between two conflicting groups is to highlight the complexity of the situation, because it destroys these simplistic narratives we have. So much of the polarization stuff is ”all of this group or all of this place is like this,” like liberals saying bad stuff about Georgia or Texas or whatever, and I’m like, ”Close to half of those people are liberals.” It’s just so simplistic. So I think that map thing is super interesting. Showing the complexity, going into granularity on the maps, seeing how everything is so much more nuanced and complex even just on a map, can give you that sense that things are not simple, people are not simple, everything comes in shades.
Abe Rutchick: And the challenge, of course, if that’s the way out, if the way out is to understand and emphasize complexity, is that people hate doing that. That’s the problem. We have a very strong drive for understanding and clarity and closure and that sort of thing, so I think that’s a tough way out. But it’s better than the crude ”We’re all Americans, damn it, let’s unite” approach. That doesn’t work.
Zach Elwood: That’s simplistic too, and it doesn’t work for the same reasons. Nobody wants to hear that, and they view it as simplistic too, so highlighting the nuance and the complexities is one route.
Abe Rutchick: The speaker also matters. Whether it’s someone in your group or someone else doing the talking is a big deal. But even still, it’s pretty easy for these things to not merely fail but actually backfire if done incorrectly by the wrong person; that’s the challenge. It is grim and it’s not going away anytime soon. I wonder what the paths are. People are testing interesting interventions where they get people together to agree on something, something small that everyone would agree on, even mundane things.
Zach Elwood: Bread tastes good.
Abe Rutchick: Yeah, exactly. Well, ideally something that’s an apolitical issue of some substance. It could be something somewhat substantive, like: Americans should be able to file income taxes for free, with accurate information provided by the government. Most people actually agree with that; there’s very few… It’s funny, actually. In the old days, meaning 10 years ago, there were all sorts of issues that were considered apolitical and you could use them as stimuli. Trust in science was one, voter participation was another. Now I can’t think of two things that are more polarizing. It’s really striking.
Zach Elwood: As things ramp up, everything starts getting sucked into the vortex of animosity and polarization.
Abe Rutchick: Yeah, it does seem to be the case. So pretty soon the TurboTax thing will get sucked in too, I’m sure.
Zach Elwood: Yeah, I really don’t think there’s anything you couldn’t theoretically suck into that us-versus-them vortex. You can imagine seatbelts. Or speed limits was one I was playing around with, imagining how speed limits could become polarized, and you can imagine it becoming polarized in two different ways depending on how things went. None of these things are off limits, in theory.
Abe Rutchick: Well, that’s the other thing, the hypothetical world, the counterfactual world: what if it had been reversed, what if the right had been the force driving for more vaccinations? Very easy to imagine. I do think that’s an absolutely viable counterfactual. In fact, I’m a little surprised that’s not how it went, given that Trump was in power when this stuff was going on. People did try to construct those narratives, but in this case I think it’s an entirely plausible one. And yeah, you can suck it all in. We’re built for that kind of conflict, as you pointed out yourself. The us-versus-them lens is something we are designed, evolutionarily, to see things through.
Zach Elwood: Well, thanks, Abe. This has been really interesting and I think you do some really interesting work in your lab so thanks for coming on and talking about it.
Abe Rutchick: Thanks so much for having me, Zach. It’s been a pleasure.
Zach Elwood: That was an interview with psychology researcher Abe Rutchick. His site is at rutchick.com and his Twitter handle is aberutchick. Again, if you want to see some of the studies discussed in this talk, go to behavior-podcast.com and look for the page for this episode. If you’re interested in learning more about my poker tells work, that’s at www.readingpokertells.com.
This has been the People Who Read People podcast, with me, Zach Elwood. If you enjoyed it, please leave me a review on iTunes or another platform, and please share it with your friends. I may not be able to work on the podcast that often in future, just due to me not making any money on it and it taking a bit of time, so any encouragement you can give me is greatly appreciated.
Thanks for listening.