A talk with communication researcher Tim Levine about nonverbal behavior and deception detection. Tim Levine is the author of Duped: Truth-Default Theory and the Social Science of Lying and Deception. His work was featured in Malcolm Gladwell’s book Talking to Strangers. Transcript is at bottom of this post.
Topics discussed include: what the research tells us about the usefulness of nonverbal behavior for detecting deception; why it’s so hard to find indicators of deception; common myths about nonverbal behavior; why we expect others to tell us the truth and why we tend to tell the truth; Paul Ekman’s work, including micro-expressions and “truth wizards”; the differences between analyzing verbal content and nonverbal behavior; the TV show Lie to Me; poker tells; and more.
Resources discussed in this episode or related to the topic:
- Tim Levine’s website
- Tim Levine’s Wikipedia
- Levine’s study on TV show Lie To Me
- Levine’s book Duped
- Piece about the misconception that most communication is nonverbal
- Levine’s paper on Truth Default Theory
- Zach’s piece about behavior bullshitter Jack Brown
- Related podcast episodes:
- A YouTube channel with police interrogation analysis that I’ve enjoyed
Zach Elwood: Welcome to the People Who Read People podcast, with me, Zach Elwood. This is a podcast aimed at better understanding other people, and better understanding ourselves. You can learn more about it at behavior-podcast.com. If you’re interested in deception detection, I have several related episodes; for example, I have an episode where I talk to David Zulawski about interrogation techniques, and one where I talk to Mark McClish about analyzing statements for hidden meaning. And quite a few others that are related.
As humans, we tend to think that we’re pretty good at spotting when people are lying. But research shows that almost all of us are quite bad at telling when people are lying. The existing research shows that, as a group, we’re slightly better than chance at detecting deception.
We also tend to think that there are certain behaviors that are associated with lying; for example, not making eye contact and having shifty eyes, or being physically fidgety or stumbling over words. But research shows that there’s almost no reliable information in such behavioral cues; there’s a lot of variation.
Tim Levine is a communication researcher who has studied deception detection for more than 30 years. He has a book called Duped: Truth-Default Theory and the Social Science of Lying and Deception. In that book, he criticizes some of the more popular theories of deception detection – for example, some of Paul Ekman’s well known ideas – and he presents a new theory called Truth-Default Theory, which he says explains a lot of the findings in this area that other theories can’t explain.
To quote from his book Duped:
My objectives here are ambitious and radical. I want to start a revolution. I seek to overthrow existing deception theory and provide a new, coherent, and data-consistent approach to understanding deception and deception detection. For more than twenty-five years, I have seen a need for a new theory of deception and deception detection. Ekman’s idea of leakage was hugely influential, but the deficiencies were apparent almost immediately. His focus shifted over time from the leakage hierarchy to a focus on the face and micro-expressions. But my read of the ensuing literature reveals more excuses for why the data do not seem to support his theory than solid, replicated, affirmative scientific support. Interpersonal deception theory is even less viable. It is logically incoherent, and I knew it to be empirically false four years before it was eventually published. The new cognitive load approach in criminal and legal psychology does not seem to be the path forward either, for the theoretical reasons identified by Steve McCornack, as well as weak, inconsistent, and just plain odd empirical findings. The need is clear. Existing theory does not cut it. A new perspective is needed. [end quote]
If you’re someone interested in understanding behavior and detecting deception, I think Tim’s book is a must-read. If you happen to have read Malcolm Gladwell’s 2019 book Talking to Strangers, you might recall that Gladwell talks about Levine’s theories in that book.
A little more about Tim: he’s a Distinguished Professor at the University of Alabama at Birmingham, and the Chair of Communication Studies. If you’d like to learn more about him, just search online for ‘tim levine psychology’ and you’ll find his website and his Wikipedia page.
If you didn’t already know, my own main claim to fame is my work on poker tells. I’ve written three books on poker tells, and I have a video series. I’ve also worked at analyzing tells for several high-stakes poker players; two of them were World Series of Poker Main Event final table players who were playing for millions of dollars and wanted to look for behavioral patterns in their opponents or in themselves. And my work has been called the best work in this area by many poker players, and that includes some professional high-stakes poker players.
And some people might assume that, because I’ve worked on poker tells, I’d disagree with Levine’s work, or find it disappointing. But I don’t: I’ve always been skeptical about the idea that there’s much value in studying behavior in real-world situations like interviews, speeches, and interrogations. When people have asked for my takes on such things, I tell them I think it’s mostly a waste of time to concentrate on them, and that I have very few opinions about them, because there’s simply so much variance. There are many reasons why, for example, someone who’s innocent might be or seem anxious. I do think there are a lot of interesting patterns when it comes to verbal behavior, the actual content of what someone says, but I’m pretty skeptical about getting a lot of value from nonverbal behaviors, although I think there’s a lot more use for such things in games and sports.
And I also think that poker, and most competitive games, are completely unlike the scenarios studied in most deception detection setups; and also completely unlike interrogations and interviews. Many of the reliable tells in poker are not even related to deception detection, but more just related to the tendency people have to leak their level of relaxation when they’ve got a strong hand, which isn’t related to deception but more just about people sometimes feeling good and having fun, and not being as fully stoic and unreadable as they could be. To take another example: some tells in poker are related to being mentally focused or unfocused, and those kinds of tells are also not related to deception detection. And for another example: some tells in poker are about someone not wanting to draw attention to a strong hand, in a similar way that people in competitive situations don’t like to draw attention in general to their “treasure”, so to speak, and that can manifest as, for example, a player being less likely to stare at strong cards and more willing to look away from strong cards, things like this. There’s just a whole lot of differences I could name. And all that said, I always try to make it clear that tells are a small part of poker; I think they can add at most something like 15% to a poker player’s win rate, but for most people it’ll be significantly less.
In this talk, Tim and I do talk a little bit about poker tells, but if you’d like to hear more about that, I’ll add some more thoughts at the end.
Another reason I find Tim Levine’s work so interesting is that we are surrounded by a lot of bullshit when it comes to reading behavior. I’ll give a specific example, as I think it’s just such an egregious one: there’s a so-called behavior expert named Jack Brown, whose main credential seems to be having a lot of Twitter followers. As I’m writing this, he has 167,000 Twitter followers. You can find him often making extremely confident claims on Twitter about people’s behaviors that are just so off-base from what real research and even common sense would tell us. And people eat this stuff up. He is regarded by many on Twitter as an actual expert in behavior, despite being so clearly wrong and irresponsible in so many ways.
To take one example: Jack Brown promotes the thoroughly debunked idea that you can tell if someone’s being deceptive or not based on the direction of their gaze. So that’s a pretty big giveaway right there of the quality of his analysis. He also makes very confident pronouncements about what people’s behaviors mean, based on very ambiguous and high-variance behaviors that simply don’t contain any interesting or meaningful information. To give one example: he once confidently proclaimed that Trump is quote “a severe, long-term drug abuser” end quote, and that he believed that Trump had a hole in his hard palate from cocaine abuse. He often confidently states that public figures are exhibiting signs of deception and shame and guilt in interviews, based on them exhibiting very common and very ambiguous behaviors. And the long story short of why so many of the behaviors he draws attention to aren’t reliable or interesting is that there are many reasons people can be or seem anxious that have nothing to do with guilt or deception.
So-called behavior experts like Jack Brown are basically trying to squeeze blood from a stone. They want you to think they have this amazing secret knowledge that gives them amazing insight into people’s motivations and what they’re hiding. If you’d like to read a piece I wrote about this guy and see some examples of what I’m talking about, just search online for ‘jack brown behavior’ and the piece I wrote should come up pretty prominently; you can also find it on my readingpokertells.com site, on my blog there.
And so Tim Levine’s work is important for making us more skeptical of such things, and drawing more attention to how little we’re able to read people. People interested in reading behavior should recognize the uncertainty present in these areas; they should avoid trusting the Jack Browns of the world. We should be skeptical of people who make confident pronouncements that, for example, public figures are lying or hiding something based on reading their nonverbal behavior. Because often those ideas, if we absorb them, will just be reinforcing our biases about people and actually make us worse at navigating the world. For example, when people listen to Jack Brown and think that they can now read these common and ambiguous behaviors and tell that someone is lying, people will use that to filter the world through their existing biases, while feeling that they’re doing something sophisticated and smart. It lends a veneer of respectability to our biases. And this stuff lends itself to, for example, police interrogators or job interviewers being highly confident about someone’s guilt or abilities when they really shouldn’t be; these things have real-world negative effects on people’s lives. And such things even add to our us-versus-them polarization, in terms of someone being more likely to see a political leader speak and think something like ‘oh, see, Hillary Clinton lowered her gaze at that question; I saw Jack Brown talk about that; I know she’s lying.’ These bullshit ideas lend themselves to what I see as one of our biggest problems: being too certain about others and too certain about the world. I think uncertainty and humility are needed more than ever.
I think combating bad and simplistic ideas about behavior is important. I think that drawing attention to nuance is important. And so I think Tim Levine’s work is important.
Okay, here’s the interview with Tim Levine.
Zach: Hi Tim, thanks for coming on.
Tim Levine: Oh, happy to be here.
Zach: So maybe we could start with how I first learned about your work which was a study you did about the show Lie to Me. Could you talk a little bit about what that study found?
Tim Levine: Sure, that’s a fun study. First, to lay out the general experiment: research participants come in and do a standard lie detection task where they have to watch several interviews, in some of which the people are lying and in some of which the people are telling the truth. Those interviews are scored to see how well participants do, scored just like a true/false test. In the experimental part of it, one third of the people, based on random assignment, watched the TV show Lie to Me, which is about a psychologist who can detect lies based on nonverbal communication; it’s based on the work of Paul Ekman. Another control group watched a different crime show called Numb3rs, in which a math professor solves crimes through math. And the third control group just didn’t watch any show at all. What the findings were is there wasn’t really much difference between the three groups. If anything, the people who watched Lie to Me were a little worse at detecting deception, and the show tended to make them more cynical, but it didn’t make them any better at lie detection. And the reason is because nonverbal things just really aren’t very useful in lie detection.
Zach: One of the things you talk about in your paper was the show makes a claim, I’m not sure if it made it once or if it keeps repeating in the show, I’ve only seen one episode of the show, but the show repeats the claim that people lie really often, I think it says three times in 10 minutes. And can you talk a little bit about that and what they got so wrong about that idea?
Tim Levine: Yeah, they actually used that in their promotional materials and it was on their website. And unlike some claims about how often people lie, with the implication that people lie all the time, this particular claim actually has a basis in research, but totally taken out of context. So in the experiment in question, people had to come in and they were told to make a good impression on somebody else. People presumably took that instruction as: make an unrealistically good impression on other people. So if you come into a lab setting and you’re told, as you understand it, to make an overly good impression, then people follow instructions and do that and as a consequence say up to three false things in 10 minutes. On the other hand, if you’re just normal… So in the first 10 minutes of this podcast, chances are there won’t be any lies, probably during the whole thing.
Zach: Yeah. If you were to ask me how many lies I’ve told recently, I mean, I would be hard pressed to think of a situation where I lied recently. So yeah, I think it’s a very pervasive misunderstanding. It kind of reminds me of the common myth that’s so often repeated that nonverbal communication makes up most communication. For example, I was just Googling now and saw one of the top things was most experts agree that 70 to 93% of all communication is nonverbal.
Tim Levine: Oh, false. Oh, that is so wrong. I mean, it says that in books, it says that in textbooks.
Zach: Exactly, yeah. It’s wild. It’s just wild how pervasive these myths are. Do you see these kinds of things as related and are there other things in this area that you often see repeated even though there’s no good reason for them?
Tim Levine: Oh yeah. But before we get there, let me give your listeners a little background on where that most communication is nonverbal finding comes from.
Zach: Yeah, that’d be great.
Tim Levine: So the actual finding was when what we’re doing nonverbally contradicts what we’re saying verbally, then people will often believe what is done nonverbally over what people do verbally. But that most communication is nonverbal is just ludicrous because how could we possibly do this podcast nonverbally? I’m making all these great expressions, communicating very expressively and using all this body language, and you can’t see it. Now you can get the tone in my voice, but if we stripped out the content of the words and you’re just hearing the tone in my voice and you’re hearing me get a little bit excited about this topic, you could take that away, but that would be just a tiny, tiny, tiny little bit of the message. Most communication is conveyed through the words.
Zach: Yeah, and that totally relates. And I almost didn’t realize how much it relates to your truth default theory until talking about it now, and we’ll talk more about that later. But getting back to one very important point you make in your work: lies are rare, and it really matters that we understand that. So when you’re trying to determine if someone is good or not at detecting deception, it matters a whole lot how many lies are in the mix. And I think you relate this to something you call the veracity effect, and maybe you can talk a little bit about that angle.
Tim Levine: Sure. So one of the oldest findings in lie detection research is something called truth bias. My good friend, Steve McCornack, coined the term in his undergraduate research. He now works with me at my university. The idea is that if you see a bunch of communication and you’re asked to guess, “True or false? Do you think they’re lying or telling the truth?” people guess true more often than lie, completely independently of whether they’re seeing a truth or a lie. And so this is called truth bias: people guess true more often. The veracity effect is an idea from a professor named Hee Sun Park, who saw something that seems rather obvious in hindsight, but that people hadn’t really tuned into before she pointed it out: if you think most things are true, then you’re going to be right when they are true, but you’re going to be wrong when they’re lies. So for example, the average across hundreds of studies of lie detection is that people are just 54% accurate, a little bit better than 50/50. But if you break it out by truths and lies, people are better on truths. The more truth bias, the better they are on truths and the worse they are on lies, so accuracy is below 50% for lies. And the more truth bias, the worse they are at detecting lies per se. The veracity effect is simply the difference between your accuracy for truths and your accuracy for lies. The consequence of this is that the best predictor of whether you’re going to be right in deception detection is the honesty of the person you’re talking to. So if you’re talking to somebody who’s honest and you believe them, you’re going to be right. Not because you’re good at this, but just by chance. But if they’re lying, you’re going to be wrong. Well, now, if most communication is honest most of the time, then people are right most of the time. And lie detection experiments create a very unrealistic portrayal because lies are much more prevalent in deception studies than they are in the actual world.
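An aside on the arithmetic here: the veracity effect Tim describes can be sketched with a few made-up numbers. These are purely illustrative, not from any particular study; the point is just that a truth-biased judge with zero real skill is right or wrong mostly depending on how honest the environment is.

```python
# Illustrative numbers only (not from any particular study): a judge with
# truth bias but zero real skill, whose guesses are independent of whether
# the speaker is actually honest.
truth_guess_rate = 0.60  # truth bias: P(judge guesses "truth")

truth_accuracy = truth_guess_rate      # right whenever the speaker is honest
lie_accuracy = 1 - truth_guess_rate    # right whenever the speaker lies

def overall_accuracy(truth_base_rate):
    """Overall accuracy as a function of how much honesty is in the environment."""
    return truth_base_rate * truth_accuracy + (1 - truth_base_rate) * lie_accuracy

print(round(overall_accuracy(0.50), 2))  # lab-style 50/50 mix  -> 0.5
print(round(overall_accuracy(0.95), 2))  # mostly honest world  -> 0.59
```

Same judge, same non-skill, but in a mostly honest world they look noticeably better than chance, and in a lab-style 50/50 mix they don’t. Their accuracy on lies alone is 40% in both cases.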
Zach: Yeah. And to tie this back to your Lie to Me study, one of the points you make in your book Duped and elsewhere is that, because we are so prone to believe people, giving anyone any sort of deception education, no matter how bad the information is, will make them detect lies more often simply by making them more skeptical. So for example, if you watch Lie to Me, even if the information is bad, you’re going to increase your ability to detect lies a bit just from being more skeptical. And maybe you could talk a little bit about that and how it ties into the perception that doing any sort of deception detection training or education can make it seem like you’re actually becoming better at detecting deception.
Tim Levine: Yeah, but it almost always comes at the cost of getting more errors about–
Zach: Exactly, you’re not getting better, it just seems that way if you’re in an environment where you’re being made to find lies like in a study environment where they’re giving you more lies than you find in your everyday life. So it seems like you’re getting better at it, but you’re actually getting better at the cost of detecting accurately when people are telling the truth.
Tim Levine: Yeah. So cynicism only works well in an environment where there’s a big risk of being deceived about something important.
Zach: Right, which isn’t the case for our day to day lives, which is the basis of the truth default theory: there are reasons why we have a bias toward believing things are true. Maybe you could talk a little bit about what sets the truth default theory apart. That was one thing that was a little bit hard for me to understand: how this was such a revolutionary idea differing from the previous ideas.
Tim Levine: So we already talked a little bit about truth bias and the veracity effect. So let me now talk about how defaulting to the truth is a little different. In the standard deception detection experiment that’s done in the social sciences, people see some collection of truths and lies and then they’re asked, “Do you think this is a lie or do you think it’s the truth?” Now, the second I ask you to judge or to make that assessment, you know this is a lie detection task. But in everyday communication situations, like you’re just sitting around listening to a podcast, if the podcast isn’t about deception, and maybe even if it is, the question “is that true?” isn’t necessarily coming to mind unless prompted. So the idea of the truth default is that unless there’s something to get you thinking about it, the ideas of truth, falsity, honesty, deception just don’t even come to mind. So now if I do the study a different way, and I show you an interaction between two people, and I just ask you, open-ended, “What are you thinking about?”, the idea that one of them might be lying to the other just doesn’t come to mind. People are thinking about what they’re wearing, they’re thinking about their mannerisms, their idiosyncrasies, they’re thinking about the content of what’s said, and they just kind of accept it at face value. It’s remarkably difficult in a lot of circumstances to knock people out of their passive belief and get them to be skeptical. Now, there are times when we can be skeptical: we know somebody’s trying to sell us something, we’re hearing people we disagree with or unpopular ideas, then suspicion can be triggered. But in much of our daily life that just doesn’t happen. We’re on this communication autopilot where what we say is honest unless we have a reason not to be honest, and we believe people unless we have a good reason, a strong reason, not to believe.
Zach: So if I’m understanding it correctly, Hee Sun Park’s big contribution, big awareness, the revolutionary thing was that she realized that all of these studies that were being done were basically biasing the experiment by getting people skeptical from the beginning by the questions. So basically it was throwing off all the results. Is that accurate?
Tim Levine: I think the statement’s accurate, I think that’s more kind of a later implication of her idea. I think she had two really big ideas. First was the idea that accuracy’s different for truths than for lies, which is the veracity effect. Related to that, what matters is the ratio of truths and lies in the environment, that’s one of her really important things. And she had another really important thing which we haven’t talked about yet, which is that most lies are detected after the fact. So most of the times we do actually detect lies in real life, we’re not detecting them in real time based on how people are coming off, but the truth tends to come to light at some later point in time.
Zach: Yeah, it’s kind of you might have a suspicion once you get into the skeptical realm of thinking someone might be lying, but you’re not going to really know it’s a lie until you actually confirm it with real evidence or something.
Tim Levine: Yeah, exactly.
Zach: So let’s talk about the nonverbal behaviors, and you obviously take a very skeptical stance on the idea that there’s much relevant or reliable information to study when it comes to nonverbal behavior in the realm of detecting lies, detecting deception. Can you talk a little bit about the main reasons for why you believe that, for example, based on the meta-analysis studies and other things?
Tim Levine: Sure. Well, first off, my position is that nonverbal things are incredibly important in how people are perceived. What I doubt is the diagnostic value of nonverbal things, that is that they have a set fixed meaning, especially when it comes to truths and lies. So almost everybody everywhere believes that you can tell when somebody’s lying because of some set of nonverbal things. The most common belief, folk belief, is probably that liars won’t look you in the eye. And that’s been found pan-culturally.
Zach: That people believe that.
Tim Levine: Yeah, people everywhere believe that.
Zach: The shifty eyes thing.
Tim Levine: Yeah. It just has no validity at all. Last I saw, there were almost 50 studies of this, and the average difference in eye gaze between liars and truth-tellers is zero. So there have been decades and decades and decades of research trying to find kind of the magic tell for deception, in either linguistic behavior or, more commonly, nonverbal things. There are all these studies that look at what liars are doing and what honest people are doing and look for differences between them. And a lot of studies find that this difference or that difference happens. The trouble is the next study finds the exact opposite thing, or nothing at all. So when you plot out the findings of all these studies over time, they just don’t hold up. And the more they’re studied, the smaller the average difference between truths and lies. You referenced meta-analysis; for the listeners who don’t know, a meta-analysis is simply a study of studies, so we’re looking at trends across a whole bunch of different studies. And what I noticed when I was looking at meta-analyses of nonverbal cues and deception detection is that the more a given nonverbal behavior was studied, the less difference it made in research. Which suggested to me that the findings that were there were probably smoke and mirrors.
Zach: Right, it was reverting to the mean kind of idea.
Tim Levine: Yeah, where the mean was zero.
Zach: Another common conception, or maybe this one actually has some truth to it, is the voice pitch thing. It seems like it might be very slightly reliable, or do you think that’s not reliable either?
Tim Levine: It depends on reliable in what sense. So if we analyzed a couple hundred people who are telling the truth and a couple hundred liars, on average liars have a slightly higher vocal pitch, about two-tenths of a standard deviation higher. But to use it as a lie detection tool on any one person is just completely useless.
Zach: If it’s there, it’s just so small.
Tim Levine: Yeah. So maybe a baseball analogy, somebody who has a 0.3 batting average is more likely to get a hit than somebody who has a 0.2 batting average. But that doesn’t mean that the person with 0.3 average is going to get a hit and the person with 0.2 isn’t if that makes sense.
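To put a rough number on how little a 0.2 standard deviation difference buys you, here’s a small sketch. It assumes, purely for illustration, that pitch is normally distributed in both groups with equal spread; under that assumption, even the best possible single-cue rule barely beats a coin flip.

```python
from statistics import NormalDist

# Assumed, simplified model: vocal pitch is normally distributed in both
# groups with equal spread, and liars' mean sits 0.2 standard deviations
# above truth-tellers' (the meta-analytic ballpark mentioned above).
d = 0.2  # standardized mean difference (Cohen's d)

# The best single-cue rule classifies each person by which group mean their
# pitch is closer to; its expected accuracy works out to Phi(d / 2).
accuracy = NormalDist().cdf(d / 2)
print(round(accuracy, 3))  # -> 0.54, barely better than a coin flip
```

So a group-level difference that’s real and replicable can still be nearly worthless as a lie detector for any one person, which is exactly the batting-average point.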
Zach: Yeah. And if I was understanding this correctly in your book, I think you were making a point about the difference between… We talk about something being statistically significant, and sometimes people will interpret that as being actually significant. Was I understanding correctly that there’s some language confusion there, that people talk about things that are statistically significant as if they’re very meaningful?
Tim Levine: Yeah, that’s an unfortunate term. Statistically, what it means is that the finding would be sufficiently improbable, if there were really no difference at all across a large number of people, to presume that there’s something there. So it’s a statement of probability. But it’s even worse than that, because the math behind it presumes that you’re only testing one hypothesis. And the trouble with modern research is people are using a probability statement meant for testing one hypothesis when they’re actually testing a whole bunch of things statistically. So that probability doesn’t have that meaning anymore. But that’s way too statistically nerdy probably.
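A back-of-the-envelope sketch of the multiple-testing problem Tim is describing (assuming independent tests, which real cue studies only approximate):

```python
# The p < .05 logic caps the false-positive rate at 5% for ONE test. Run
# many tests on pure noise and the chance of at least one "significant"
# result balloons.
alpha = 0.05  # nominal false-positive rate for a single hypothesis test
for k in (1, 10, 20):
    p_any = 1 - (1 - alpha) ** k  # P(>=1 false positive in k independent tests)
    print(f"{k:>2} tests -> {p_any:.2f} chance of a spurious 'finding'")
# 1 test: 0.05; 10 tests: 0.40; 20 tests: 0.64
```

A study that checks twenty candidate deception cues on noise alone has better-than-even odds of turning up at least one “significant” cue.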
Zach: Is it accurate to say that some people, say lay people, will see something about significance and think, “Oh, it’s significant,” which might explain how some of these misperceptions about nonverbal behavior get started in the common audience. Do you think that’s–
Tim Levine: Yeah, that’s accurate, I think. But it’s also accurate that 90 or 95% of professional researchers also think that. So it’s not just lay people and it’s not just the media; these kinds of misunderstandings are more widespread than that.
Zach: Does that get into the replication errors area of people interpreting the results of things too confidently or mistakenly?
Tim Levine: That’s my read on it. So the social sciences are undergoing a huge replication crisis, where findings in the best peer-reviewed journals are failing to hold up at a really disturbing rate, and the findings that do hold up are almost always small. It’s not just deception cues; findings are generally smaller when they’re studied again. My read on why that’s the case is this opportunistic use of statistics. People are using this statistical idea of significance in a way that really is not justified probabilistically.
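Tim’s point about opportunistic statistics can be illustrated with a minimal simulation. Everything here is made up for illustration: one random “outcome” and fifty random candidate “cues”, so every true correlation is zero; hunting for the best-looking cue after the fact still turns up something that looks like a finding.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

n = 30  # observations per variable; small samples make noise look dramatic

def corr(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One random "outcome" and 50 random candidate "cues": every variable is
# pure noise, so the true correlation is zero in every case.
outcome = [random.gauss(0, 1) for _ in range(n)]
cues = [[random.gauss(0, 1) for _ in range(n)] for _ in range(50)]

# Picking the best-looking cue after the fact typically yields a "sizable"
# correlation anyway -- often around 0.4 or more on pure noise.
best = max(abs(corr(cue, outcome)) for cue in cues)
print(round(best, 2))
```

Pre-registration, which Zach mentions in the note below, guards against exactly this: you commit to which cue you’re testing before looking, instead of picking the winner from the noise afterward.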
Zach: A small note here, if you’d like to learn more about what Tim was talking about, you can Google the research replication problem. Long story short though, what Tim was referring to was the fact that if you collect a whole bunch of data, you’ll end up finding some correlations in the data that may seem interesting, but may just be due to randomness and the fact that you’ve gathered so much data that some random correlations are likely to be present. And that aspect can help explain why some findings are hard to replicate later. I actually talked to a previous guest about this if you’re interested. I talked to Brandon Shiels about his poker tells research, and we spent some time talking about the problem of finding illusory correlations in data and how one way to combat that is with pre-registering your research, which requires you to write down your predictions beforehand so that any correlations found are things that were theorized about and less likely to be a random illusory thing. Okay, back to the interview.
Zach: So getting back to why it is so hard to find reliable nonverbal behaviors tied to deception: I mean, basically it’s not surprising to me, because humans are just good at deceiving. It’s not surprising that most of us have pretty good control over our behaviors. So I think that helps explain it. You sometimes see the question, well, why is it so hard to detect deception? And it’s almost like, well, why would it be easy to detect deception? I’m curious if you have any thoughts on that.
Tim Levine: Yeah, I think that’s half the answer. So I think for most of us, but not all of us, by the time we get through high school we’re pretty good at telling a lie if we need to. I think there are probably a few people out there who can’t lie well. I know just anecdotally, if you ask people, some people say, “Nah, I can’t do this.” And I suspect they can’t, and they don’t lie very much because they know they can’t. But I think there’s another reason too, and this really gets at the heart of the idea of the truth default: there’s probably no single thing more important to humans than our ability to communicate. Humans are able to share information and pass down knowledge, which makes all our technological and scientific advances possible. We are able to cooperate and work together, which enables all kinds of modern production. And it enables us to make friends and form good professional, social, personal relationships, which is incredibly important to our wellbeing and physical health. Communication only works if you can trust what’s communicated. If you have to second-guess everything, you can’t really learn anything, because everything’s uncertain. You can’t work together, because you don’t know that you can trust the other person you’re supposed to work with. You can’t form relationships, because you can’t trust this person. So if we couldn’t trust other people and what they say, communication would lose its functionality, and it’s just way, way, way too important to us; we have to believe other people. If you did the kind of thought experiment of what it would be like if we didn’t believe anything that was communicated, we would absolutely be lost. If you can’t trust, you can’t get on a plane, you can’t get in your car. You can’t drive through a green light if you don’t believe the people at the red light are going to stop. Functioning requires this.
So I think it’s not only that people can tell good lies, it’s that we have to believe them as business as usual. It’s not that we can’t be suspicious; suspicion can be triggered. But as our default mode of operating, we have to take things at face value, because otherwise we just immediately get bogged down. It has to be this way.
Zach: To get off topic a little bit getting into the fake news and misinformation area, so many people focus on the idea that like, “Oh, we need to get people to believe the right things, the things that we believe.” And I think that’s actually a mistaken goal. I mean, for one thing, it’s never going to happen. But the second reason is I think we actually just need more people to be as equally skeptical of everything as they are of the things that they perceive as biased. For example, for people who doubt the mainstream media and think it’s mistaken and biased and corrupt or whatever, we need those people to not trust random theories they see on Facebook or whatever. We just need more skepticism and less truth default for things across the board. I’m curious if you have any thoughts on that.
Tim Levine: No, I could not agree more.
Zach: So to get back to the people who aren’t good at lying, which is a very important point in your work too. When it comes to explaining the slight ability found across meta-analyses, the roughly 54% average accuracy at detecting deception in these studies, slightly better than chance, you point out that some of that is just due to some percentage of the population being pretty bad at lying, at deceiving. Would you say that’s basically because they’re displaying the stereotypical behaviors we associate with lying, like not making good eye contact or stumbling over their words, those kinds of things?
Tim Levine: Yeah, I think that’s exactly what’s going on. Those are what I would call the transparent liars. They’re transparent: when they’re telling the truth, you know they’re telling the truth, but they just can’t lie. They’re kind of the opposite of poker-faced people; you know exactly what’s in their hand. And we tend to get those people right. But there’s another group of people, which is probably larger, which I call the mismatched folks, and they come off differently than they are. Think about people who are perfectly honest but who have social anxiety, or maybe they’re a little bit on the autism spectrum. They’re doing these things that people associate with deception, but they’re honest. People tend to systematically get those people wrong, and that’s part of what pushes accuracy down toward chance. So the transparent liars make accuracy better than 50/50, but the mismatched people keep us from being very good at it.
Zach: Yeah, and the interesting thing too is that for the people who are bad at lying, who show the stereotypical behaviors and are more easily caught in these kinds of studies, it’s actually almost meaningless to judge them on a case-by-case basis. In a practical sense, the only way you could catch that person lying in a meaningful, reliable way is if you studied how they behave when they’re telling the truth versus when they’re telling a lie. In other words, in a study environment you might correctly guess that someone’s lying because they’re seemingly bad at lying, but that could just as easily have been a person telling the truth. So it’s almost meaningless in a practical sense.
Tim Levine: Yeah, and it’s even more complicated than that because then you have to have a lot of other people watching them lying and telling the truth over multiple instances to see that there’s regularity in how other people are seeing them.
Zach: Right, you really need a statistical sample size to know that like, “Oh, this person’s actually bad at lying, and I’m actually finding something,” versus like, “Oh, they’re just one of the mismatched people or just they have random variations that make some people think they’re lying when they’re not.” It’s so much more complex and requires more study than it seems on the surface. And we have these simplistic ideas of how this stuff works in the popular culture and in our minds about this stuff that’s spread through media and such. So one thing I wanted to ask you about was Ekman’s truth wizards thing, which seems to be another popular idea that’s in Lie to Me and other places that there are people amongst us who are exceptionally good at detecting deception. Do you have any thoughts on that?
Tim Levine: Yeah. So generally, unless you work with Paul Ekman, who’s maybe the biggest name, the most famous researcher in the topic area, most modern academic deception theorists and researchers are deeply skeptical of the idea of the wizards. That said, I’m not a hundred percent sure what to think about them. If the claim is that there’s maybe one in a thousand people who can do this, modern social science isn’t very good at dealing with the super rare disease or the super fluky sort of person. It’s very hard to study very rare events or very rare people, because how do you go about finding them? How do you know it’s not just fluky? I will say I had one of Ekman’s wizards contact me one time, and I did test them on some of my deception detection materials, and they did amazingly well. But I don’t want to say, because of this one person and this one instance, “Oh, now they exist.” That wouldn’t be very good science of me. But at the same time, I’m reluctant to be as critical of it as some people are, just because it’s easier to test ideas when you can readily find examples of them, if that makes sense.
Zach: So one thing in that area, it seems like, correct me if I’m wrong, but someone can be… We’ve been talking so far about nonverbal behavior, and that’s a lot different from reading logical inconsistencies or what people call statement analysis, which is just examining language for evidence. And I’m wondering, could that have played a role, for example, in the test you did or was that only nonverbal?
Tim Levine: In the test I did, if you know what to look for, you can do better than 54% if you’re really familiar with the context. The content can help you, but it probably couldn’t help you enough to make this person as good as they were. On the other hand, somebody wins the lottery, so chance fluky things happen. I don’t think people appreciate how lumpy randomness can be.
Zach: Right. And then we form perceptions based on those outliers.
Tim Levine: Yeah. If we flip enough coins that really truly are fair, there’s going to be some point where a long streak of heads comes up in a row. And it’s just hard to sort that out.
Zach: I’ve read that there hasn’t been much evidence for people being consistently truth wizardy over time. Am I wrong on that? And why haven’t people studied that more, that a person is consistently good?
Tim Levine: Well, it’s hard to do that kind of study over time. And you’re right, that is the evidence. My best thinking is there might be people who are good, but it’s because they know a whole lot about the particular circumstances. So my guess is that a really experienced financial forensic accountant is going to be much better at spotting lies about financial issues than you or I. A particular type of criminal investigator might know a whole lot about a particular genre of crime in a particular area, and that knowledge really helps them use what is said in a useful way. Similarly, people who have really good critical thinking skills are going to be better at spotting logical inconsistencies than people who are less critical thinkers. But if I’m right about that, what it means is the financial forensic accountant isn’t necessarily going to be good at detecting the honesty of their spouse about non-financial things.
Zach: So getting back to that idea of the nonverbal versus the verbal, and statement analysis, actually analyzing statements for logical inconsistencies and the psychological aspects of people’s language, do you have thoughts on that? Because personally, for example, I’ve read Mark McClish’s book I Know You Are Lying, which is about statement analysis, and I’ve written a book about verbal poker tells called Verbal Poker Tells, and that stuff to me is so much more reliable because it’s about how people communicate. There can be so much hidden information in how people communicate and what they avoid talking about, for example. So it’s not nearly as ambiguous as nonverbal behavior. That’s not to say it’s very reliable either, but to me there’s just so much more meaning, so much more there, than in the nonverbal. I’m curious if you’d agree with that.
Tim Levine: I’m not sure if I do or don’t. So one of Ekman’s ideas that I really like is the idea of the hot spot, which is something that doesn’t seem right. And hot spots could be nonverbal. So somebody might be reacting in a particular nonverbal way, or let’s say at the poker table, they might be doing something nonverbal that strikes you as off or might mean something, or it might be verbal. So if we view these not as, “Oh, they’re lying,” or, “Oh, they’re bluffing,” but instead as, “There’s something I need to dig deeper on or explain or pay attention to,” then I think these things have real utility. So with statement analysis, if it is being used to go into an interview and ask deeper questions about these areas, then I think that’s a fabulous idea. If you were saying, “Oh, they seem to be dodging around this issue, that means they did it,” then I think that’s tenuous, because it could mean a lot of different things.
Zach: Right. And to be clear, even if something seems very obvious in the verbal realm, it’s not like you could ever say, “Oh, I’m very certain about this.” You might feel certain, but you’ll still need some evidence. Which gets into how almost unimportant some of these things are when it comes to interrogations. For example, if you’re bringing someone in for interrogation, you probably have a reason to interrogate them, and your approach probably won’t be that much different: you’re just going to keep plugging away at them using the traditional interrogation techniques and do your thing. You spotting some nonverbal or verbal thing that makes you think they’re guilty probably doesn’t make too much of a difference, because you probably already have good reason to think they’re guilty anyway. So I think that gets at the low practical value of these things in interrogation and interview situations. Would you agree with that?
Tim Levine: Let me phrase it a little differently. There are actually two things I want to jump off on. First, I think the best practice in the interrogation room, if you don’t have evidence already, is to ask questions where you can kind of nail them down in ways that let you go do more investigation and check, if that makes sense. So if I’m questioning somebody, I’m trying to get information out of them that I can then use later to investigate and check. Because if I already have evidence, then I don’t really need to be talking to them; I’m talking to them because I don’t have enough evidence right now. So I’m trying to figure out what I need to go investigate and what I can check. But about the earlier point: as a deception researcher, I notice, perhaps to a fault, when people are leaving things out or changing the topic on me. And I have this ongoing debate with another deception researcher who studies political deception. So he’s thinking, you’ve got a reporter who’s talking to a politician, the reporter asks a question, and the politician goes off topic and talks about what they want to talk about. The question is: that politician is definitely being evasive, but are they being deceptive? This other researcher thinks, “Yes, evasion is deception. They’re being deceptive.” And I want to say, “Well, wait a minute, who gets to set the topic of what we’re going to talk about? Why is it that the reporter gets to say, ‘Here’s our agenda,’ and the politician has to stick to the reporter’s agenda?” So yes, you need to pay attention when things are being left out or topics are being shifted or people are being ambiguous, but you also want to really contextualize that.
Zach: Yeah, to be specific about interrogations, or even poker, I think one of the most meaningful tells in both interrogation and poker is conciliatory behavior from people who are guilty or bluffing. For example, one of the most telling things in interrogations is when the interrogator makes an accusation, directly or indirectly, and the person being interrogated basically just acts neutral and conciliatory and is not defensive. An innocent person would understand immediately that they’re being accused and would be defensive. But you see this kind of subdued, conciliatory behavior from someone who’s guilty, just because their instinct is to be subdued and not arouse anger, or more anger, from the interrogator. And similarly in poker, when someone’s bluffing, they’re less likely to act in an irritating or aggressive manner, either verbally or nonverbally, toward their opponent. This is interesting because it’s kind of a mix of both verbal and nonverbal; it’s almost a demeanor, a collection of things. So I wanted to throw that in there to say it’s not as if we can’t get information from these things. But I guess the real question is the practical value: in an interrogation spot, for example, that can be valuable for the investigator in feeling that they’re questioning the right person, but obviously that’s not evidence. I just wanted to say there can be meaningful things, I think, in these areas.
Tim Levine: Absolutely. But at least from the interrogation point of view, I really urge caution about jumping to conclusions based on that. At least in my own deception tapes I’ve created, which I think mimic interrogation situations pretty well, honest people respond in all different kinds of ways, and so do deceptive people. Some deceptive people definitely figure the best defense is a good offense. Not everybody responds the same. There might be these patterns over large numbers of people, and if you’re playing the odds you’re more often right than wrong, let’s say in poker, but you’re going to get some wrong, because not every person responds the same.
Zach: Right, for sure. And I guess that gets into the impractical aspects of it, because if the only thing you have is a feeling, based on this person’s conciliatory behavior, that they’re guilty, that’s not really a reason to pursue someone as a suspect for very long if you don’t have much else going for you. So I think that gets into the impractical aspects: how meaningful is it, really, when you get down to it?
Tim Levine: Yeah, there’s this huge, huge, huge variability in how humans respond in given situations.
Zach: We’re a very high-variance lot as humans, yeah.
Tim Levine: Yeah.
Zach: A small note here: one thing that stands out to me as pretty consistently meaningful behavior in interrogation situations is the tendency of guilty people to answer pretty straightforward questions with long, meandering stories, with way too much detail and too many divergences, while innocent people tend to answer straightforwardly. And this can be seen as related to conciliatory behavior, because guilty people can have a motivation to attempt to seem likable and cooperative, whereas innocent people just don’t have that desire; they just want to answer the questions. I wanted to elaborate on that a little more to emphasize the point that what people say and how they say it can be interesting to study and pay attention to, even if we can debate how meaningful or actionable specific situations really are. Okay, back to the interview.
So I’m pretty skeptical about microexpressions, and I’m sure you probably are too. People often bring that up; people ask me about microexpressions and poker and such, and I’ve basically never based a decision on a microexpression, and I don’t generally find them in poker. So I’ve always been skeptical of them as genuine leaks. There are situations in poker where people do weak-means-strong and strong-means-weak things, where they’re basically conveying the opposite of what they feel, sort of a duping aspect. But that’s different from the idea of microexpressions as a leak of genuine emotion or feeling. I assume you’d be very skeptical about that too, but I wanted to ask.
Tim Levine: Yeah. So the research community is very skeptical of microexpressions; there isn’t strong evidence. I would guess that microexpressions, if they even exist and if they are useful, might be more useful in poker, particularly among novice players, than in lie detection. The reason is that the link between the emotions you’re expressing and truth telling or lying is pretty tenuous. But I could imagine it in poker. Do you ever see somebody who’s got a really good hand who just lets this little smirk out when they first look at their cards? I’m sure professionals have got this under control, but–
Zach: Well, I think there is something to that for the very beginner level people. And I think, interestingly, we could talk about that for a while, but the more experienced they are, the more the opposite things leak out where they’re slightly trying to convey the opposite of what they have. But I think you’re right, at the very beginner level stages, there are those kinds of genuine leaks.
A note here: when I was talking here, I was focused on microexpressions. There are larger macroexpressions of genuine emotion that occur pretty regularly from players of all skill levels. For example, it’s pretty common that a player who makes a big bet with a strong hand will show genuine smiles and things like that. I’ll talk a little more about that at the end. I just wanted to emphasize that I was attempting to talk about just microexpressions here. Okay, back to the talk.
Tim Levine: Yeah. So there might be a kernel of truth to the microexpression thing, but I don’t think they’re going to be useful at all in lie detection.
Zach: It’s so different, it’s just such a different environment.
Tim Levine: Yeah. And so poker, can people fake microexpressions?
Zach: Well, that’s a really interesting question, because when I’ve thought about this in the past, and I should probably write something up about it, the thing I have seen is that there are these small expressions, what people might consider microexpressions, but they’re the opposite. So for example, someone who’s betting a strong hand might have a very quick expression just briefly pass across their face, like an irritated look, or furrowed brows, almost a confusion or irritation microexpression. But it’s the opposite, because they’re strong. And it’s almost like they’re not even trying to do that purposely or consciously, which is the interesting thing, because I don’t think the people who do these things are always planning to fool their opponent. It’s almost like, because poker is such a deceptive realm, and most games are, you’re automatically, almost subconsciously, trying to convey the opposite of what you have. So it’s this instinctual doing the opposite: weak means strong, strong means weak. Which is interesting, because I think a lot of people would think, “Oh, they’re trying to fool me.” But a lot of these things are microexpressions; they just pass briefly. In my video series on poker, I have a lot of examples of this. And you just don’t find that from bluffers, because bluffers are very much aware of what they’re portraying, so they’re going to have a much more neutral, stoic presence. So you’re pretty unlikely to detect meaningful things from a bluffer, because they’re trying to be so stoic and so neutral, and that’s how most people behave. But some people with strong hands will leak out these small, opposite-emotion things that really give them away.
They’re highly reliable, because a bluffer is not likely to leak out these small expressions of uncertainty or irritation. So yeah, it’s a very interesting area. I should write something up about it more officially.
Tim Levine: I’m not an experienced poker player, but so one strategy is to just be poker-faced or stoic and be unreadable, what I would call zero transparency; there’s just no signal there. The other strategy would be to try to be very unreliable and throw other people off their games. So you mix in some real things and some false things and some stoic, and just convince everybody else at the table that what they think they’re seeing could mean any number of different things.
Zach: Yeah, and the interesting thing about that is that it would actually be good, but in practice most people are afraid of looking stupid. And this actually plays a big role in poker; we could go on for a while about how poker and other games are so different from interrogations and interviews. You might think that’s a good strategy, but in practice you’d be like, “Well, what if I do something and that person reads it as a weak hand and calls me, and then I’d feel stupid for trying all this?” So in practice, that explains why people just try to be stoic: the alternative is more effort, more conscious mental load, and you’d have to think about, “Am I being balanced in all these spots if I’m trying to be high variance and throwing out this noise?” That helps explain why the best approach is to just be as stoic as you can. We got a little off topic there; to get back to your work, one thing I heard you say in a talk, I think it was a podcast, was that the nuance you’re bringing to this discussion isn’t the most exciting thing, because people love the sexiness, the excitement, of tells in general and the idea that we can read people. And I think the thing you said was you’re not likely to be invited to do a TED Talk anytime soon. I’m curious if you can talk a little about the public’s perception: we have this kind of love affair with behavioral cues. People love shows like Lie to Me, or even poker tells. There’s this perception that poker tells are really important and play a big role in poker, when I emphasize in my work that they’re a very small part of poker. They come up occasionally; you might only use them once or twice a session in a way that actually changes a decision. So it’s a pretty uncommon thing. But in the public eye, we have this love affair with behavior and reading people.
Do you have thoughts about what attracts us so much to those ideas that we can read people well?
Tim Levine: In part, people always like the little secret, get rich quick ideas. And to some extent maybe the idea of reading nonverbal communication is a lot like a little mini get rich easy solution. It has appeal. Again, getting into poker, I’m sure there’s all these little, “Here’s the secret to being a great poker player, and you’re going to learn it in 10 minutes.”
Zach: There’s a lot of bullshit, yeah.
Tim Levine: Yeah, but there’s a market for it. So I think there’s probably some of that.
Zach: Yeah, you’re right. It’s like if people feel like they have some secret knowledge that’s going to make them better at their jobs, make them better in their intimate relationships or whatever it may be, they feel like they’re getting an advantage on society. I think you’re right, there is some aspect of that.
Tim Levine: Oh, I just went through a job training thing where the consultants come in and they’re going to teach us how to do difficult communications, and they’ve got their little consultant soundbites. I don’t know how much money they soaked out of my university to do this, but it was just all junk. They would never be let into the classroom to teach real communication skills to real tuition-paying young adults. But there’s a market for this and they’re selling it. People want the easy path to something that takes a lot of skill and learning and practice.
Zach: Yeah, there is just so much junk out there. To name an example, I was watching some podcast where they had an FBI behavior expert weighing in on the behaviors in interrogations. I just thought most of the things he was saying were so not meaningful, things that could easily have been found in an innocent person. Compared to the things the person being interrogated was actually saying, the nonverbal stuff was just so uninteresting and unreliable; I’m like, “Why even focus on that?” Watching interrogations in general, all the things that stand out as interesting to me are based on what the person is actually saying, not the nonverbal stuff. But let me change direction. One really interesting thing to me, one surprising thing, is just how much people dislike lying. We have a real aversion to directly lying to people. And this helps explain some of the verbal indicators in interrogation situations and in games like poker. For example, even someone who’s murdered someone often doesn’t seem to want to come right out and say, “I didn’t kill that person,” or directly lie; they instead use hedging language or avoid making a direct statement. And you can see some of that in poker too: people don’t like to directly lie about their hand strength when they know it might be exposed later. For example, someone who actually has a pocket pair of eights is unlikely to say, “I don’t have pocket eights.” People are unlikely to make these direct statements; it’s just very rare. So it’s been kind of wild to me that in areas where you’d think lying would be completely understandable given the situation, whether it’s poker, where you’re allowed to lie, or when someone’s committed a serious crime, people still don’t seem to like to lie. And I’m curious, do you see that?
If you think that is there, that tendency to avoid lying, is that related to the truth default idea and is it possible that the reason that we so instinctively trust others is that there is some serious deep down aversion for us as social creatures to lie? Is there something to that?
Tim Levine: Yes and yes. So part of the truth default is that we are honest. Most of us are honest unless we have reason not to be. And because most people are honest, believing other people is very functional and adaptive. But the thing to remember is that lying behavior is not normally distributed across the population. There are people out there who lie a great deal and seem to have no problem with it at all. I’m currently working on an essay on something I call bold and shameless lying. Bold lying is when I lie even though I know the truth is easy to check. And shamelessness is when you call me out on it and I double down and just keep asserting the falsehood. Maybe we can think of people in public life who do this, but they are out there. So I think your observation is true for the vast majority of people, but there are a few people out there who just are not tied to the truth at all and seem to have absolutely no problem saying complete, obvious falsehoods, and are completely without shame when people try to call them out.
Zach: And presumably those would be people with the more narcissistic or psychopathic traits, is that fair to say?
Tim Levine: Yeah. I think both of those could account for that, maybe some Machiavellian traits too could produce something like that.
Zach: And probably the context and the motivation for lying would… Well, I guess that wouldn’t explain why they’re lying frequently. Yeah, nevermind.
Tim Levine: So when I teach deception classes, people keep a deception diary, and I pay attention to my own too. But what I’ve discovered in these diaries is some people who lie a lot do it in a particular situation. So they have a particular job that requires them to tell a particular lie in a particular circumstance. And they do it a lot, but this is the only time they lie. They don’t lie to anybody else in their life, it’s just this kind of one place where the truth doesn’t work. Then there’s this other group of people who just lie a lot. In the extreme case, we’ve got the pathological liars who lie when the truth would work better for them. And there’s not many of those people out there, but boy, if you meet one, once you figure out what’s going on and that there’s just no pattern to their honesty or deception, it’s really unsettling.
Zach: Yeah, it is. I think it’s so unsettling for the fact that we do have such a tendency. The truth default, it’s like if that’s our logical default stance to the world and then we stumble across people that just have no problem lying, that is disturbing at some existential level, I feel.
Tim Levine: Yeah. And I think this is why bold and shameless lying actually works because most people think nobody would do that.
Zach: Yeah. They’re like, “It can’t be happening. No, it can’t.”
Tim Levine: Right, it doesn’t make sense. It doesn’t make sense.
Zach: Yeah, exactly. No, I feel like that explains a lot about people’s trusting nature. So one thing I had a question about, and I haven’t delved into the research enough to know this: is it common to set up a study where someone rates not just whether they think someone is lying or telling the truth, but also rates their confidence in whether they’re correct?
Tim Levine: Yes. I wouldn’t say it’s super common, but it happens enough that there’s a good amount of research doing that.
Zach: Okay. I might ask you afterwards if you have examples of that, because the thing that strikes me is, say you forced me to guess a bunch of poker spots. If you put a bunch of different poker behaviors in front of me and said, “Guess all these,” I think I would have a very low ability to tell bluffs from value hands. And that fits with how I say the times you’ll actually spot something meaningful and reliable are pretty rare. In other words, if you put all these spots in front of me, I would have low confidence for most of them, but occasionally I would have very high confidence. And if you then judged me only on the ones I was highly confident on, I think you’d see a significant difference. It seems like a rather obvious way to try to detect the people who are good at detecting deception in a given situation. I’m curious if you think that’s a good idea, and maybe people should do more of that in these kinds of tests?
Tim Levine: I think it is a good idea when people have some degree of expertise in the context, and when there might actually be real tells, real signal, there in some proportion. So when there’s signal variability and there’s expertise, then that can help. But in the literature as a whole, there’s really no correlation between how right people are and how confident they are. Those findings generally come from your standard deception detection experiment, where there’s no real signal there.
Zach: Yeah, there's no signal if they're just saying, "Yes, I did this," or "No, I didn't." There's not much signal in those very simplistic ones. The more context there is, the more verbal stuff there is, the more signal there is, the more likely you are to get something.
Tim Levine: Yeah, so when there's a variable signal and you have enough expertise to kind of understand that, then I think confidence becomes very important. So my colleague, Pete Blair, and I designed this lie detection task and had it run, and we didn't know who was lying and who was telling the truth. But we built it so we thought there would be a signal there. So we were both trying to do lie detection with this new set of materials; it was a few years back. What we found is we both got 86% on them. The ones we missed were different, but we were sure about the vast majority of them, and there were four particular interviews that we were uncertain about. We went different ways on the ones we were uncertain about, but we agreed a hundred percent on which four we were uncertain about, if that makes sense. And it was absolutely what you were saying. We knew the ones we might be missing, and we knew the ones we were probably right about. We were absolutely at chance on the ones where we just didn't see a signal or saw mixed signals. But where the signal we were looking for was there, we kind of knew it, and we got those right.
Zach: So is there anything you’d like to add here that we haven’t touched on that you think would be interesting to throw in?
Tim Levine: I think we’ve covered a lot of ground.
Zach: Yeah, this has been great. Thanks a lot, Tim. And thanks a lot for your work; very interesting. Your book Duped was great, and you were mentioned in Malcolm Gladwell's book Talking to Strangers, which must have been exciting and must have gotten you some extra attention.
Tim Levine: Yeah, Malcolm Gladwell’s been very kind in dropping my name around.
Zach: Okay. Thanks a lot for coming on, Tim.
Tim Levine: My pleasure, I really enjoyed it. Thanks for having me.
Zach Elwood: That was deception detection researcher Tim Levine. He’s the author of Duped: Truth-Default Theory and the Social Science of Lying and Deception. I highly recommend that book if you are interested in behavior and deception detection.
To come back to the discussion of how poker tells differ from general deception detection scenarios: one anecdote of mine can help us see how different these areas are. In 2013, I was watching the final table of that year’s World Series of Poker Main Event as it was being broadcast. I was live-tweeting it. These were players playing for millions of dollars; they’d outlasted thousands of other players. First place was $8 million. At one point, a player made a big bet and another player was thinking for a long time. Based on the bettor’s demeanor, specifically their genuine-seeming smiling and laughter, I was very confident they had a strong hand; bluffers can smile but it’s rare for them to have more exuberant and genuine-seeming smiles; these are smiles that affect their eyes and that are more dynamic with more movement and looseness. I was so sure about this that I tweeted “If Jay is bluffing here, I’ll eat my hat. No way.” His opponent ended up calling. He was wrong and I was right; the bettor did have a strong hand.
Now clearly, with my poker tells books and work, I have a lot at stake in making a public guess like that. And it's seldom that I would make such a pronouncement. As I emphasize in my poker tells work, it's seldom that you can be very confident in a tell. But sometimes I will see spots where I'm highly confident, almost certain, that someone is strong or weak. Some of these can be cold reads; some behaviors are very unlikely with certain hand strengths, even if you know nothing about a player. In other cases, the confidence might come from seeing how someone behaves over several hands, from having more player-specific knowledge.
And so for this example of me correctly and confidently reading that player in the World Series of Poker, we can see that it doesn’t have much to do with deception detection. A lot of tells from players making big bets have to do with them leaking information about how relaxed they are, and some of that has to do with the fact that players who have a strong hand can just be feeling really good about things; they could be savoring the moment; they could even have some tendency to goad their opponent a bit, which can manifest verbally or even with just more direct eye contact, or with more irritated or belligerent-seeming facial expressions. But these behavioral patterns are not about deception. And there’s no equivalent to this in an interrogation or interview scenario; most people being interrogated don’t suddenly feel great about the situation and happy to be there, whether they’re innocent or guilty.
To take another example: another class of tells in poker is related to a player's level of focus or lack of focus. For example, early in a hand, a player who gets a strong hand, let's say pocket Aces, will have a tendency to be more mentally focused, because they seldom get a strong hand and because they don't want to waste it; they want to play it as well as they can, and they know they'll be in the hand for a while. But a player with a weak hand who makes a bet or raise early in a hand is often less mentally focused. They know they have the option to fold if someone raises them; they know they can always check and fold; basically, they haven't invested much in the hand yet. And these dynamics mean that looser and more ostentatious behavior, whether verbal or nonverbal, early in a hand when the pot is small, will be more linked to weak and medium-strength hands and not to strong hands.
And those are tells that also are not really related to deception; they’re just tells of focus versus lack of focus.
And another different thing about poker is that players are constantly going into and out of these highly emotionally polarized but also short-lasting situations, and that means there’s a chance to look for imbalances over time. And a lot of people just aren’t that good at being balanced and aren’t even trying that hard, especially when it comes to doing that over many situations over many hours, or even days or weeks or months when you play with someone regularly for a long time.
And finally, in poker, behavioral information can be valuable even when it's only slightly reliable. In poker, you're often put in spots that could go either way from a fundamental strategy perspective. In other words, leaving aside any behavioral stuff, it's often a toss-up whether to call a bet or fold to it. So if you see a behavior you think is slightly more likely to mean one thing than another, that can be valuable in the long term, because you're making so many small decisions in poker. Small edges can be valuable. And there's just no equivalent in interrogation; interrogators aren't going to change big decisions based on one small behavior they spot. This aspect of poker doesn't even map over to most other games or sports, because poker involves so many decisions made with low information. In chess, for example, there's no equivalent to this, because all information is on the table and is known, whereas in hidden-information games, especially versus skilled players, you'll often be put in spots where your decision could go one of two or even three ways. And that's one big reason skilled poker players find tells valuable: the cumulative effect of small edges over time.
I could talk about this for a while, but I just wanted to help make the case that reading poker tells is quite different from deception detection in real-world situations like interviews. And part of the reason I wanted to do that is to encourage any behavior and psychology researchers listening to do more studying of poker tells, and to show that there is still much in poker that hasn't yet been studied.
If you find this stuff interesting, check out my poker tells site, readingpokertells.com. I also have videos on my Reading Poker Tells YouTube channel. You can sign up for a free email series on verbal poker tells at readingpokertells.com.
I wanted to give a shout-out and thank you to Alan Crawley, who goes by the online handle SinVerba, which is Spanish for nonverbal. Alan does YouTube videos and classes on behavior. I was recently talking to him, and he got me thinking again about comparing interrogations and poker; that's what led me to find Tim Levine's work and to do this podcast. So thanks for that, Alan.
This has been the People Who Read People podcast, with me, Zachary Elwood. If you like this podcast, please leave it a rating on Apple Podcasts. That’s a great way to show your appreciation. And of course please share it with your friends if you’ve liked it; that’s also hugely appreciated.
Okay thanks for listening.
Music by Small Skies.