Many people think there are telltale signs of lying — shifty eyes, nervous fidgeting, maybe a quick smile — that can give someone away to trained observers. But according to decades of research, that’s a myth. Still, some scientists push back on that consensus. A recent paper by well-known researcher David Matsumoto (of the company Humintell) argues that combinations of nonverbal cues might actually reveal deception. In this episode, I talk with deception researcher Tim Levine, author of Duped and creator of truth-default theory, about whether that claim holds up — and what the science says about our ability to read lies using behavior.
Below is a transcript and related resources.
Episode links:
- YouTube (includes video)
- Apple Podcasts
- Spotify
Resources related to this talk:
- Previous talks with Tim:
- First talk: Questioning if body language is useful for detecting deception
- Second talk: On eye-direction myths
- Matsumoto’s paper defending nonverbal behavior as useful for deception detection
- 2014 Hartwig and Bond paper discussing using multiple behaviors to detect deception
- Tim Levine’s 2018 paper reconciling individual-study findings on behavioral cues with what meta-analyses say
- Tim Levine’s recent work on using AI to detect deception in statements
- If you’re curious how I can work on poker tells but be skeptical about the use of nonverbal behavior for deception detection, that is discussed in my first talk with Tim and in this episode, too
TRANSCRIPT
(transcripts contain errors; this one was automatically generated)
Zach Elwood: [00:00:00] Many people think that there exist reliable nonverbal behavioral cues that can help detect deception and tell liars from truth tellers. But as I’ve covered on this podcast several times in the past, there’s no evidence for that. Not when we’re talking about practically useful reads of deception or truth-telling in a general population, and when we’re leaving aside person-specific reads.
I was scrolling through LinkedIn recently, and I saw a post by David Matsumoto, who’s a well-known behavior researcher and the head of Humintell, a company that says that they can help you, quote, master the skills to read behavior, decode motivation, and lead high-stakes conversations, whether you’re hiring, interviewing, negotiating, or managing teams.
End quote. In this LinkedIn post of his, he shared a paper that he’d co-written [00:01:00] titled Behavioral Indicators of Deception and Associated Mental States: Scientific Myths and Realities. In that paper, they pushed back on the consensus view that there are no nonverbal behavioral cues useful for detecting deception.
I’ll read from the abstract of that paper: “We suggest a reconsideration of broad and sweeping claims that research has demonstrated that nonverbal behaviors are not indicators of deception. We reexamine several methodological characteristics of a seminal meta-analysis that is often cited as evidence and caution the field from drawing overgeneralized conclusions about the role of nonverbal behaviors [00:02:00] as indicators of deception based on that reexamination.”
The gist of the paper was that while single nonverbal behaviors haven’t been shown to be useful, there’s evidence that shows that combinations of multiple nonverbal behaviors may be highly reliable. At the end of the paper, they mentioned their conflict of interest, saying the authors are employees of Humintell, a company that engages in research and training related to behavioral indicators of mental states and deception.
This got me interested in digging into this topic more. Is there actually evidence that combinations of nonverbal behavior are useful for detecting deception? I had not heard that. If so, what were these combinations?
What’s the scientific evidence? I’ve talked to Tim Levine a couple previous times for this podcast. Tim is a highly respected researcher on deception detection. I’ll read a little from his website: his expertise involves the topics of lying and deception, [00:03:00] truth-default theory, interpersonal communication skills, credibility assessment and enhancement, interrogation, persuasion/influence, and social scientific research methods.
He’s the author of the book Duped: Truth-Default Theory and the Social Science of Lying and Deception, and my first talk with Tim was about the ideas in that book, focusing on his truth-default theory. Topics Tim and I discuss in this talk include: is it true that combinations of nonverbal behavioral cues can help us detect deception?
The fact that so many papers finding certain behaviors correlated with deception or truth-telling have failed to replicate. Are microexpressions a thing? Are they actually useful? If you’re interested in serious, researched views on behavior, and not the bullshit takes on behavior that are so popular these days on various YouTube channels, I think you’ll enjoy [00:04:00] this talk.
If you like this talk, I think you’d like the other couple talks I had with Tim about behavior and deception detection. Also, just wanna say sorry about my noisy audio. I recently moved to New York City and don’t have a great audio setup, so that’s definitely made my audio much worse than it used to be.
Okay, here’s the talk with Tim Levine. Hi Tim. Thanks for joining me again.
Tim Levine: Happy to be here. Nice to see you.
Zach Elwood: Nice to see you again. Uh, so yeah, the reason I had reached out to you was I happened to see this study by David Matsumoto basically pushing back on the idea that nonverbal behavior, uh, is not a useful tool for detecting deception. And the gist of it seemed to be that he was saying some studies seemed to show that, uh, maybe using multiple nonverbal behaviors could be more useful than [00:05:00] using, you know, a single nonverbal behavior. Uh, but I’m curious overall, what were your thoughts on that paper and the overall ideas in it?
Tim Levine: Uh, so first, uh, let’s not call it a study. Let’s call it a paper, or an essay, or a commentary, or, uh, you know, an argument. Um, so there’s no new data. Uh, but I think you, uh, framed the claim pretty well. Um, maybe we could give a more generous conclusion to them: that, um, maybe the verdict’s not in yet.
Um, so maybe there’s, you know, a lot of findings that, uh, seem to suggest that nonverbal behaviors in particular aren’t very [00:06:00] useful in deception detection. Uh, but it might be that if studies were done differently, uh, then more supportive findings, uh, might emerge. And, uh, while I think that’s counterfactual at this current point in time, uh, it is true: you never know what the next study’s gonna find.
Zach Elwood: Right. It was basically just saying it’s possible that if you link together multiple nonverbal behaviors, which, you know, makes sense in theory: more information, more data about someone could theoretically lead you to better conclusions.
Right. But I’m curious, what are your thoughts on that, with your knowledge of the field? Because they mentioned some previous studies and meta-analyses that [00:07:00] they said seemed to show that. There was one that they mentioned, what was it? The, um, Hartwig and Bond 2014, I believe. Yeah, Hartwig and Bond. What are your thoughts on that, and the idea that, I guess the quote was something like, that lies can be detected with 70% accuracy? I had a hard time parsing what they meant by that ’cause it seemed kind of theoretical to me.
Tim Levine: Uh, yeah, that is a true finding, and it might actually be a little higher, 72%.
Um, but let me, let me say, this is gonna take a lot of unpacking.
Zach Elwood: Yeah, I think there’s a lot of unpacking, which is, I found it hard to understand exactly what they were saying, with my, you know, not-great parsing of academic papers and such.
Tim Levine: Yeah, so the, um, Hartwig and Bond study was a meta-analysis. So a meta-analysis is a, uh, study of studies, [00:08:00] and they were, um, looking at how diagnostic cues were. So there weren’t any humans in the equation, right? It was: if you do statistical modeling based on observed behaviors, how good could your algorithm be? Right? So imagine, uh, we’re on camera right now, so with modern technology we could, uh, have cameras capturing all our facial movements and mapping those dynamically over time, right? And we could use machine learning, um, to map what we’re saying onto our facial expressions, theoretically, right? And then the algorithm could test if your blinking rates are faster when you’re listening than when you’re talking, for example.[00:09:00]
And it might be that in any given segment of communication, these things would appear diagnostic of listening versus talking.
Zach Elwood: I guess I’m confused. How could they put a number on it, exactly? That approximately 70% number that they chose.
Tim Levine: Uh, there is a, uh, statistic called, um, multiple discriminant analysis. And if you’ve ever heard of regression, it’s kind of like regression, but it’s predicting a dichotomous outcome. So what you’re doing is you’re putting in a bunch of predictors, and then you’re weighting them to maximize predictability, and then what you can do is do a classification based on that. It was invented, uh, my understanding is, by physical anthropologists who were trying to [00:10:00] predict what kind of animal a discovered bone came from.
So if you know, like, this characteristic of the bone and this characteristic of the bone and this characteristic of the bone, what probability is it that it’s this dinosaur versus this dinosaur? Um, but, but the plot thickens.
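The technique Levine describes, weighting several predictors to classify a dichotomous outcome, can be sketched as a simple two-class linear discriminant. This is a hypothetical illustration with made-up “bone measurement” numbers, not data from any study discussed here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "bone measurements" for two species, 3 predictors each.
# The two species differ modestly on every measurement.
a = rng.normal(loc=[10.0, 4.0, 7.0], scale=1.0, size=(50, 3))
b = rng.normal(loc=[11.0, 5.0, 6.0], scale=1.0, size=(50, 3))

X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)  # dichotomous outcome: species A vs. B

# Fisher's linear discriminant: weight the predictors so the two class
# means are maximally separated relative to the pooled covariance.
mean_a, mean_b = a.mean(axis=0), b.mean(axis=0)
pooled_cov = (np.cov(a, rowvar=False) + np.cov(b, rowvar=False)) / 2
w = np.linalg.solve(pooled_cov, mean_b - mean_a)  # the weight vector

# Classify by projecting each case onto w and splitting at the midpoint.
threshold = w @ (mean_a + mean_b) / 2
pred = (X @ w > threshold).astype(int)

accuracy = (pred == y).mean()
print(f"in-sample classification accuracy: {accuracy:.0%}")
```

Note that this accuracy is computed on the same sample the weights were fit to, which is exactly the kind of in-sample optimism the conversation turns to later.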
Okay. Uh, and I, I actually, um, I should be able to pull the year off. I actually wrote a paper, uh, based on this, ’cause there’s these two apparently really super inconsistent findings. There’s the famous DePaulo et al. 2003 meta-analysis of cues, which analyzed cues individually. And that meta-analysis found that the vast majority of cues don’t have any diagnostic value.
Uh, the ones that do, their diagnostic value is statistically significant, but practically, um, useless.
Zach Elwood: Right. [00:11:00] Um,
Tim Levine: very low.
Zach Elwood: Like, meaning that they’re statistically significant, but even if that’s true, the usefulness is extremely low
Tim Levine: In any given communication, yeah. They would be useful in classifying large numbers of people
Zach Elwood: Mm-hmm.
Tim Levine: At rates better than chance.
Zach Elwood: And also, it might be worth mentioning, that 2003 meta-analysis you mentioned was a big part of what the paper we started out talking about, the Matsumoto one, was pushing back on, because the 2003, uh, paper is largely what people point to when they say nonverbal behaviors aren’t well correlated with deception.
Tim Levine: Yeah. And they are right that you can’t, you shouldn’t be looking at nonverbal behaviors individually. Uh, and, you know, my whole work on demeanor points to this: that it’s global impressions that influence judgments, and not specific behaviors. So there’s really strong [00:12:00] evidence for problems with looking at cues individually.
So, so in the DePaulo study, they looked at individual cues and the effects over studies, right? So say you’re testing, I don’t know, um, how many details there are in a statement. Uh, the finding is that, on average, honest statements tend to have a higher number of details than deceptive ones, at least in the type of scenarios that have been tested, right?
So you test that difference in details, or in blinks, or in eye gaze, study over study. And, um, what the DePaulo analysis shows is that some studies find one thing and some find another; the findings are incredibly mixed. And when you average them out, the more times a given cue has been [00:13:00] studied, uh, the more it tends to have averaged to zero. No diagnosticity.
Zach Elwood: Mm-hmm. Right. So even if it starts out, in a previous study, showing something useful about it, it tends to revert down to...
Tim Levine: Yeah, the mean. And it doesn’t just get smaller. In order to revert to zero, it has to flip signs, right? So it has to be diagnostic and then anti-diagnostic. And when you average those, it comes to zero, right? So a nonverbal behavior might mean one thing in a given instance of communication and the exact opposite thing in the next.
Zach Elwood: And when you say it means something, are you saying it was theoretically actually a good predictor within that situation, but not later?
Tim Levine: Yes.
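The sign-flipping-then-averaging pattern Levine describes is easy to simulate. The numbers below are made up purely for illustration: each hypothetical study shows a sizable effect for the cue, but the direction flips unpredictably between studies, so the meta-analytic average lands near zero:

```python
import random

random.seed(1)

# Hypothetical: 40 studies of the same cue. In each study the cue is
# genuinely diagnostic (effect size around 0.4), but whether it points
# toward deception or toward honesty flips from study to study.
effects = [random.choice([-1, 1]) * random.gauss(0.4, 0.05)
           for _ in range(40)]

# Every individual study shows a sizable (absolute) effect...
big_in_each_study = all(abs(e) > 0.1 for e in effects)

# ...yet averaging across studies cancels the flipped signs out.
average_effect = sum(effects) / len(effects)

print(f"every study shows a sizable effect: {big_in_each_study}")
print(f"meta-analytic average effect:       {average_effect:+.3f}")
```

The per-study effect size and the flip probability here are arbitrary choices; the point is only that averaging signed effects can erase cues that were real within individual samples.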
Tim Levine: But the unit of analysis, so to speak, in the DePaulo [00:14:00] study was the individual cue that was studied over time.
So in the Hartwig and Bond, the unit of analysis was the individual study. So they looked at studies that studied some number of cues, right? And they found that virtually all cue studies find support for some cue. It doesn’t mean it’s the same cue, but they find effects for some cue, because a lot of these are studying like 10 or 20 different things. And in almost every study, one pops, or two pop, or three pop. So what they find is there’s really, really big cue effects at the level of the individual study. If you study those same cues over time, you find those effects go away, but you only see that when you [00:15:00] study the same cue. The Hartwig and Bond study isn’t cue-specific.
Yeah. Right. So, so this creates a paradox: how is it that individual studies are always finding effects, but those effects never replicate when you follow up on them? Right. So in the, uh, Hartwig and Bond (I might have mistakenly said Bond and DePaulo; that’s a different one), um, the Hartwig and Bond multiple-cue study, the one we’re talking about: um, they’re just tracking the biggest effects in individual studies and then averaging those effects, but they’re not tracking which cue was being diagnostic there, right?
And so when you look at the average diagnosticity in studies that study multiple cues, it’s better than 70%, because pretty much all cue studies find support, [00:16:00] right? And there’s publication bias in the literature, a bias towards publishing studies that find support, right? Presumably they’re testing all these different variables, and they’re finding one that pops. And in the Hartwig and Bond, multiple cues weren’t that much better than single cues.
Zach Elwood: Mm, mm-hmm. Right.
Tim Levine: Okay. So the multiple-cue effect was a .5. The single-cue effect was a .4.
Zach Elwood: Mm-hmm.
Tim Levine: This is in a correlation metric. So 80% of the effect is driven by one cue. But we know from the DePaulo data that that one cue isn’t reliable across studies.
Zach Elwood: Right.
Tim Levine: Right. So what this means is, this lets people cherry-pick studies, ’cause you can find support for anything, right? And this is why it’s so important to replicate [00:17:00] research and look across studies and look at the pattern,
Zach Elwood: right?
So this, um, this study, the one we’re talking about, the, um, Hartwig and Bond one, that, um, Matsumoto references for using more than one nonverbal behavior: you’re saying they’re basically just using the most rosy, optimistic picture and not factoring in the fact that those results, you know, when you actually do more work on each of those things that pop, tend to revert to meaningless or near meaningless. So it’s a distorted view; they’re taking a very rosy picture of what you can do with that data, and it’s not reflecting the reality of how weak those things actually are. Yeah.
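The “something always pops” pattern being discussed doesn’t even require real effects. A hypothetical simulation, in which every cue is pure noise by construction, still produces an impressive-looking “best cue” in each study once each study gets to report its biggest effect:

```python
import random
import statistics

random.seed(2)

N_STUDIES, CUES_PER_STUDY, N_SUBJECTS = 200, 20, 40

best_effects = []
for _ in range(N_STUDIES):
    # Each hypothetical study measures 20 cues on 40 subjects. Every cue
    # is pure noise: its true correlation with deception is exactly zero.
    noise_sd = 1 / (N_SUBJECTS ** 0.5)  # rough sampling error of a correlation
    observed = [random.gauss(0.0, noise_sd) for _ in range(CUES_PER_STUDY)]
    # The study highlights its most impressive cue (cherry-picking).
    best_effects.append(max(observed, key=abs))

avg_best = statistics.mean(abs(e) for e in best_effects)
print(f"average 'winning' effect size across studies: {avg_best:.2f}")
```

Even though every true effect is zero, the average “winning” effect is substantial, which is one way to read the paradox Levine describes: individual studies reliably find something, while follow-ups on the same cue reliably don’t.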
Tim Levine: Yeah. But I think we should be more generous because it’s, it’s easy to see a finding, right.
And go, oh, you know, and especially if that finding [00:18:00] fits, um, what you wish were true. Right. And it is legit. They’re not quoting the study wrong. What they are doing though is they’re leaving context out,
Zach Elwood: right?
Tim Levine: And there’s an even bigger context they’re leaving out here, which is, if you dig into the findings deeper. So, um, Matsumoto and Wilson start out their argument, um, by using a form of argument I would call “blame the methods.” Right? And the argument goes: if only the studies were done differently and had these different methodological features, then surely the data would support it, right? So in the Hartwig and Bond study, they tested what are called moderators, these various methodological culprits [00:19:00] that are proposed in the Matsumoto and Wilson paper. And what they find is there’s always large cue effects, whether lies are high stakes or low stakes, regardless of all of these things. So it doesn’t matter how many cues you’re looking at, it doesn’t matter whether it’s high stakes or low stakes: individual cue studies find big effects, right?
So if you dig into the details of their analysis, right, there’s actually findings in that paper that undercut, uh, their argument.
Zach Elwood: Hmm. Can you summarize that in, like, layman’s terms? Because I think it might be hard to understand all that you said. Maybe you could summarize in a couple sentences how it undercuts it.
Tim Levine: Yeah. So, hypothetical example: let’s say we thought that, um, eye blinks were [00:20:00] only diagnostic in employment interviews and not interpersonal lies.
Okay. And, um, the argument is: all the studies were done on interpersonal lies. So if only you had done them on employment lies, you would’ve seen the effect. But the studies included in, uh, Hartwig and Bond’s meta-analysis include both types, right?
And the findings are the same either way.
Zach Elwood: Yeah. So they’re trying to criticize the methods, but regardless of the methods, there are spikes.
Tim Levine: The studies that they’re later gonna cite in support of their claim actually tested that, and found it didn’t matter.
Zach Elwood: Right. Right.
Tim Levine: And that undercuts the argument of the paper.
Zach Elwood: Which is an interesting thing about the stakes thing, because some people will say, if only the stakes were higher and more like real-life situations, you’d be catching more [00:21:00] correlations or imbalances and such.
Tim Levine: Yeah. And this is an incredibly plausible argument. It goes back to the original Ekman stuff. And, uh, people believe this, people buy this. Um, you know, it makes great intuitive sense.
Zach Elwood: Right. It’s also interesting, though, that you can think of another logical thing where, theoretically, the lower-stakes situations would be more likely to show imbalances, because the liars in the high-stakes situations have more incentive to act like the non-liars, right? So you can kind of reason it both ways, you know.
Tim Levine: Yeah. Or maybe there’s even more sophisticated ones: the type of people who put themselves in the high-stakes situations, right, are the people who are the better bluffers. Right? Because if I can’t bluff, I don’t play poker.
Zach Elwood: Right. Yeah. And then there’s also things like, you know, people have [00:22:00] talked about, are college students good examples of general-population people? Are they, you know, the fitting people to study? So yeah, there is all this discussion.
Tim Levine: There’s a million methodologies.
Tim Levine: Right, right. And it always runs into this, as I describe it in my book, Duped: the circular argument, where “you didn’t find what I think you should have; therefore, you didn’t do your study right. I know you didn’t do your study right, because you didn’t find what I thought you were gonna find.”
Zach Elwood: Right.
Zach Elwood: Which is a problem with so many theories, and with firm believers of ideas.
Tim Levine: Yeah. And what this means, for your listeners, is that the conclusions from the research might completely turn around in the next 10 years.
Zach Elwood: Right.
Tim Levine: We never know what the next study’s gonna find until we do it. [00:23:00] Right. And there might be something that’s been overlooked. You know, I’ve made my whole career on finding things other people have overlooked.
Right. And turning over those stones and going, look what we found. Um, so, you know, I don’t know that the next time I turn over a stone anything’s gonna be there. Right. And I certainly don’t have any kind of superpower or lock on being the one who, you know, uh, can find stuff.
Zach Elwood: So you’re just looking at what comes up.
Tim Levine: Yeah, yeah. You know, nature is what nature is, right? How we look at things definitely shapes how we understand them, right? And until we do something, we don’t know.
Zach Elwood: Yeah. And, I mean, with all this AI, machine learning stuff, there are theoretically things that might be found. I mean, I interviewed someone, I don’t [00:24:00] know if you saw this, about this, uh, study by Dino Levy and his team that used some machine learning stuff to monitor facial muscles and claimed to have a 73% deception detection rate.
Tim Levine: Oh, right. And I’d note that this is exactly what Hartwig and Bond would say will happen.
Zach Elwood: Mm-hmm.
Tim Levine: And it doesn’t matter what you’re looking at, right? Use the machine learning stuff to look at any package of variables; on average, you’re gonna get 73% accuracy. It doesn’t matter what the content is, it doesn’t matter what the variables are. Right? It seems like you always get that.
Zach Elwood: Mm-hmm. Yeah. And when I interviewed him and looked at that study, I’ll admit I didn’t find it very convincing that it was gonna be replicable or anything, you know. And especially, it seemed kind of iffy what exactly the machine learning was doing, because some of these things are kind of [00:25:00] little black boxes. Like, it wasn’t clear to me what the algorithm was even detecting. And it could theoretically have been me not understanding it, but I had a hard time even understanding what he said the algorithm had detected. So, uh, just to say, I’m probably like you: I’m skeptical whether a lot of these things would replicate.
Tim Levine: Yeah. And there’s a word for that, and it’s called cross-validation. So let’s take it in a completely different context. Imagine Amazon was trying to model our purchase behavior.
Um, and there’s all kinds of things that are going on on the page, right, when we look at something, and they know whether we click “buy” or not. Right? So they could have their AI or their machine learning start plotting [00:26:00] out what features of the page get us to click and what don’t.
Okay, so they get that algorithm. Now, let’s say we took that algorithm and applied it to new customers with new products. How well does it do in predicting? That’s cross-validation.
Zach Elwood: Right, right. To ensure you’re not just getting noise and random spikes.
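The cross-validation idea Levine describes can be sketched as follows. The data here are made up, and every “cue” is pure noise by construction: a model fit to one sample can still look far better than chance in-sample, and scoring it on fresh people exposes that:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_and_score(X, y, w=None):
    """Fit least-squares weights on (X, y) if w is None; return (w, accuracy)."""
    if w is None:
        # Regress the centered 0/1 labels on the cues.
        w, *_ = np.linalg.lstsq(X, y - 0.5, rcond=None)
    pred = (X @ w > 0).astype(int)
    return w, (pred == y).mean()

# 40 people, 20 behavioral "cues" that are pure noise: none of them
# actually relates to the random 0/1 honesty label.
X_train = rng.normal(size=(40, 20))
y_train = rng.integers(0, 2, size=40)

# Fit on one sample...
w, train_acc = fit_and_score(X_train, y_train)

# ...then cross-validate: apply the same weights to a fresh sample.
X_new = rng.normal(size=(40, 20))
y_new = rng.integers(0, 2, size=40)
_, test_acc = fit_and_score(X_new, y_new, w=w)

print(f"in-sample accuracy:      {train_acc:.0%}")  # typically well above chance
print(f"cross-validated accuracy: {test_acc:.0%}")  # typically near 50%
```

With 20 free weights and only 40 people, the model can memorize noise, so the in-sample number is flattering; the held-out number is the one that tells you whether anything real was learned, which is the same logic behind Levine's demeanor replication he describes next.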
Tim Levine: Yeah. So here’s another example. When I did my demeanor work (and for the listeners who don’t know, this is on how cues aren’t in isolation: people present behaviors in a package, and these packages are all intercorrelated), uh, so I came up with a set of 11 behaviors, um, and impressions, that seem to predict really well, uh, whether somebody’s gonna be believed or disbelieved. [00:27:00] Those are completely independent of whether they’re lying or not, right? But we know who gets believed and who doesn’t, and it’s basically being friendly, confident, uh, and outgoing, whereas people who are anxious or awkward, um, tend not to be believed.
Zach Elwood: Related to your truth-default theory.
Tim Levine: Yeah, related to truth-default. And, you know, so I documented that these 11 particular behaviors and impressions, um, seem to be the believability quotient. So what I did is I collected a whole different sample of truth tellers and liars, coded those for these behaviors, had a whole different sample of participants rate them for honesty, had a whole different sample of people judge them for these behaviors, and then showed that the judgments of the behaviors [00:28:00] predicted the judgments of honesty, with these separate groups of people, on a whole new set of communicators. Right? Cross-validation. And when I did that, then I went, oh, I think I’m onto something. But until I did that, we would never know if the findings were idiosyncratic to those particular, you know, samples or coders or methods.
Zach Elwood: Right,
Tim Levine: right.
Zach Elwood: Yeah. So is it your view, am I understanding it right, that when they get these spikes, these findings or correlations that don’t replicate, do you think in some of these situations the things they found actually were good predictors for that specific situation and set of factors, of whatever sort? Like, if they had run that same situation multiple times, [00:29:00] some of the findings might be related to that specific situation and the types of people in it, or things like this? Or do you think most of it is entirely just kind of random spikes? If that makes sense, if that question makes sense.
Tim Levine: I think all of the above.
Zach Elwood: Yeah. It’s a, it’s a mix. Yeah.
Tim Levine: Um, so I did a study, uh, way back in, uh, 2005, where I was trying, um, uh, to train people, um, to read nonverbal behaviors and then see if this would make them more accurate. And I, of course, predicted that it wouldn’t. Um, but the gimmick of the study, uh, was including a, uh, placebo control. So one group of people was assigned to be trained on the behaviors that were the most diagnostic from the best meta-analysis of the time, which was Zuckerman et al. in 1981.[00:30:00]
Um, and that kind of got overturned by DePaulo in 2003, but I trained them on that. And then another group got trained on five behaviors that should have no validity, from that same meta-analysis. And then the third, the control group, didn’t get any training at all. And in the first study, what we found is the people who got trained on the nothing cues did the best, and the people who got trained on the valid cues did the worst, with the control group in the middle. Which is absolutely befuddling.
So then what we did is we went to the particular truth tellers and liars and coded the nonverbal cues we were training on. And we found that, for those particular samples, the things that weren’t diagnostic in the meta-analysis actually were diagnostic, and the things that were diagnostic in the research weren’t.[00:31:00]
So then we went back and trained people on things that were person-, message-, and situation-specific. And we found that when we trained people to do that, it made them 2% better. It improved them from 56% to 58%. Um, but in that coding, what I learned is that within a situation, which was constant, there were big differences between people.
And there are also, within people, differences between utterance and utterance. Right? So what might be diagnostic in one snippet might not be in the next. And this is a real complicated combination of person, situation, and variability, not only across people, but within people.
Zach Elwood: Mm-hmm.
Tim Levine: Because [00:32:00] people just aren’t that constant, you know?
Zach Elwood: Right.
Tim Levine: It, you know, if we were coding your number of blinks during this interview, you’re not blinking at a set rate.
Zach Elwood: Right.
Tim Levine: Right. Depending on where we snip the tape, [00:32:00] we’re gonna find you going one way, right, and me going like this.
Zach Elwood: Mm-hmm. We’re very complex. Yeah.
Tim Levine: Yeah. So the way I think about it: I tend to think cue findings are real, in the sense that they are real in the data that showed them, right?
They are not at all robust. That is, they don’t extend very well: within person, to different situations, um, across people, even from moment to moment. Um, so cues are, as I think of them, ephemeral. And this is why it’s easy, if I’m selling you on a lie-detecting [00:33:00] method, to point to cue examples where they work, because you can see cues in everyday communication.
Right. And if you pick situations in which your preferred cue actually works, right, then you can show great examples on video. But the trouble is, those things tend not to extend. They might flip and do the exact opposite thing in the next instance.
Zach Elwood: Mm-hmm. Um, so if I had to summarize your view of Matsumoto’s argument, the paper we started out talking about at the beginning: uh, I’d imagine your view is basically, like, yeah, theoretically there could be some findings in the future that are replicable and show that combinations of multiple nonverbal cues might [00:34:00] be highly correlated with deception. But we just haven’t seen anything like that. There’s no specific evidence for anything like that.
Tim Levine: A little bit more subtle: there’s lots of evidence for that. There’s just not a lot of evidence that holds up across studies, right, evidence that replicates in very kind of predictable, reliable ways.
Zach Elwood: Gotcha. Okay.
Tim Levine: Um, so at the level of the individual study Yes, absolutely. At the across studies in ways that, um. I am really comfortable relying on, not yet, but I agree with them that the verdicts sh is not, and it shouldn’t be, um, entirely in yet because, you know, if he just, oh, this is a dead end and nobody researches it anymore, we’ll never know if it really should have been a dead end.
Right. So, [00:35:00] I, I, you know, I, I try not to, um. Uh, I think people can test whatever hypotheses they want, and I think it’s good that there’s difference of views and people are pursuing different things. And I think in the long run, this is gonna put us in a lot better scientific position than we would be if there was just one orthodoxy, uh, and everybody had to follow it.
Zach Elwood: Um, yeah.
Tim Levine: I know that’s, that’s not hugely satisfying.
Zach Elwood: Well, just from a logical perspective... humans can control their behavior a lot. So if it came out that there was some major combination of behaviors that was known to be decently tied to deception, [00:36:00] the word would get around and people would try not to do those things. Sort of like we know that liars don't wanna do various things that they think are tied to deception, right? Just to say: humans are very complex. If there's something we can do to adjust our behavior to help ourselves, we will.
So it makes you think that if there are gonna be reliable signs of deception, they'd have to be things you couldn't control, you know? But even in that realm, like heartbeat and galvanic skin response and stuff, even that we know isn't reliable, because you can get excited for various reasons and get nervous for various reasons.
So just to say, there are various reasons I'm skeptical, though like you, I'm open-minded that they could find some combination that works for general populations.
Tim Levine: Yeah. But I'm clearly in camp skeptical.
Zach Elwood: Yeah.
Tim Levine: Clearly, about the cue thing. And when I'm deciding what I'm gonna put my time and effort into in my lab, [00:37:00] I'm not trying to save cues. I'm investing my time and energy in different directions. So, you know, I think people can invest in whatever they want, but I think there's enough of a data story out there to suggest that different paths are gonna be more fruitful.
Zach Elwood: Um, we don’t have to talk about this if you don’t want, but.
Do you, do you have any thoughts on, you know, I mean, Matama is, you know, clearly tied to this company that he is the head of Human Tell, which sells courses on, you know, getting people mm-hmm. Uh, better at reading people and corporate or personal, you know, situations basically. So, you know, and as he says in his study, you know, his, his papers that he puts out.
Tim Levine: Yeah.
Zach Elwood: [00:38:00] Uh, he, you know, that’s a, obviously a conflict of interest, but I’m curious, do you have anything to say about that and understand that’s not, you know, something we need to get into?
Tim Levine: My interactions with David have been, like, super positive. He's done some really cool studies, like with blind athletes and stuff. I think he's done some really good science. Conflicts of interest are always a concern, but he seems to be very open about disclosing those.
Zach Elwood: I noticed on his site, the Humintell site... I was just looking at the Humintell site.
Tim Levine: Oh, the other thing is, you know, he's an Ekman protégé, right?
Zach Elwood: I know. [00:39:00]
Zach Elwood: We've talked about Ekman before on a previous episode.
Tim Levine: Yeah. And in academics, a good rule of thumb is: don't speak badly about other people's advisors or mentors. You know, just like you wouldn't wanna say bad things about people's parents, or their favorite sports stars... It's okay to have a viewpoint, but yeah.
Tim Levine: And it is good that those things are disclosed.
Zach Elwood: Yeah. And I'll probably put in a note about the previous episode for people that are curious about our previous discussions on this. But I'm curious, while I have you here: I'd been thinking about the microexpressions thing recently, and I'm a big skeptic about microexpressions and their usefulness.
I assume you [00:40:00] would be a pretty big skeptic too, but is there a study you'd point to, a favorite study or two, that shows why skepticism is warranted about microexpressions?
Tim Levine: Not ones that I could pull the citations to off the top of my head.
Zach Elwood: Then maybe just share your thoughts on the overall idea of their practical use.
Tim Levine: My understanding is there's some debate on whether microexpressions are a thing or not, but at least some people, some of the time, seem to produce microexpressions. Saying that they mean this or they mean that, though, outside of maybe revealing a particular emotion, [00:41:00] I think is probably pretty tricky. I think most of the researchers right now are pretty skeptical about microexpressions. And now that machine learning's good enough to do facial recognition and track microexpressions, I think we're gonna see a whole bunch of studies applying that methodology and finding just what Hartwig and Bond found about every cue: they're diagnostic of something. Whether or not that something holds up over time is a different story.
Zach Elwood: Yeah, I've thought about them a good amount since I learned about them, and I've looked for them in poker. I never found any real use for them. If anything, I find that [00:42:00] the little expressions mean the opposite, because in a competitive situation there's actually an instinct for somebody to act the reverse of what they are, right? So if you see tiny signs of somebody looking uncertain or worried, somebody who's made a big bet, that's actually highly correlated with them being relaxed and strong.
And I think that's interesting, because it kind of maps over to some writing I've seen on microexpressions and deception detection in interrogations and such, where I think there's some study that found that truth tellers are actually more likely to show signs of contempt and these kinds of things that most people would associate with liars. But there are different ways to look at it, because you can make up logical reasons why those things would be present for liars or for truth tellers: truth tellers are more relaxed, so they might be more willing to let their contempt and other negative emotions [00:43:00] show, or you might reason it the other way about liars. Just to say, there can be many ways to try to explain findings of whatever sort, right?
But yeah, I've long been skeptical about microexpressions, because if they were something, I would've expected to see more of them in poker, basically. I just haven't made use of them. So thanks for that.
Tim Levine: You know, a good place to test them might be in amateur poker. Really novice poker players. That might be a fun one.
Zach Elwood: Hmm. Yeah, although we could talk about this for a while, 'cause you and I have talked about...
Tim Levine: I'm not saying that they would be diagnostic, but you might see a lot more variability. [00:44:00]
Zach Elwood: Yeah. The interesting thing about poker and other formal competitive game situations is that there's this assumed competitive environment. So people are more likely to try to put on... they're not even necessarily trying to deceive; it's like an instinctual thing to put on the opposite of what they are in a game environment. Which to me has no correlation to interrogations or real-world interviews, 'cause those aren't directly competitive situations where you're trying to get somebody to do a specific thing.
So I think it's interesting how different those areas are: a fully competitive spot versus a non-competitive real-world spot. But yeah. I'll let you go shortly, but I was curious: do you wanna share any other interesting things you're working on these days, any projects? [00:45:00]
Tim Levine: No, I think there's probably a bunch in the works, but right now they're sufficiently underdeveloped not to be ready for public broadcast. Although I do have a recent paper with Dave Markowitz out of Michigan State on asking AI to try to detect deception.
Zach Elwood: Oh, interesting. Okay. What's the name of that?
Tim Levine: It's published in the Journal of Communication, with David Markowitz. You're asking a dyslexic how to spell...
Zach Elwood: Well, can people find the paper online, or can I link to it?
Tim Levine: You probably can, or I can send it to you.
Zach Elwood: Okay, I'll put it in the show notes for this episode. Yeah.
Tim Levine: But this was a video platform that could listen to audio and watch video.
Zach Elwood: Okay, interesting.
Tim Levine: And the finding was it was more biased [00:46:00] than humans.
Zach Elwood: Hmm, okay. I wanna read this. Wow, that sounds interesting.
Tim Levine: It was more context dependent than humans were, but in very stereotypically biased ways.
Zach Elwood: Mm-hmm. Cool, I wanna read that.
Tim Levine: Yeah, I'll have to remember to send it to you. So that's kind of my big most recently published work, the one that I think will get some traction.
Zach Elwood: Yeah. And I have a feeling there are gonna be a lot of interesting AI-related studies and papers.
Tim Levine: Yeah. But we wanna be super careful with the results of that paper, because AI technology is changing daily, right? So the findings are very much tied to one particular platform at one particular point in time.
Zach Elwood: That is true. Yeah, things are changing so rapidly.
So, for people that are interested in your [00:47:00] work and liked this talk, what's the best way in? Is there a book of yours you'd recommend they get started with?
Tim Levine: Yeah, I have one book on the topic. It's called Duped: Truth-Default Theory and the Social Science of Lying and Deception. It's an academic press book, so it's a little nerdy, but it's nothing that people can't work through.
Zach Elwood: Yeah, I found it...
Tim Levine: I would tell people to read the reviews on Amazon.com. They're super informative. All the academics say, "This is so easy to read," right? And some of the non-academics are like, "Numbers!"
Zach Elwood: For what it's worth, I found it quite readable. And I think for those kinds of books, you can always skim over the really heavy stuff and get to the more [00:48:00] explanatory things if you want to.
Tim Levine: And even when there are numbers, there's text in there that tells you what all the numbers mean.
Zach Elwood: Mm-hmm. Well, thanks Tim, this has been great. Thanks again.
Tim Levine: Oh, it's always a pleasure, Zach. Thanks for reaching out, and always nice to talk to you. And I hadn't seen the Matsumoto and Wilson article until you pointed it out to me, so it's always good to keep up on the literature.
Zach Elwood: Nice, glad I could help a little bit. Okay, thanks a lot. That was a talk with Tim Levine. I'm Zach Elwood, and this has been the People Who Read People podcast. You can learn more about [email protected]. Thanks for listening. Music by Small Skies.