
How do doctors read “drug-seeking” behaviors?, with Dr. Casey Grover

A talk with addiction specialist Dr. Casey Grover about behavioral indicators of so-called “drug-seeking behavior,” which is when people try to deceptively convince doctors to prescribe them drugs. Grover hosts the podcast Addiction in Emergency Medicine and Acute Care. We talk about: why Casey thinks “drug-seeking” is a bad, unhelpful term; what behavioral clues doctors use to determine if someone might be “drug-seeking”; why most behaviors aren’t that reliable; America’s drug problems (opioids, fentanyl, methamphetamines).

A transcript is below. 

Episode links:

Resources discussed in this episode or related:

TRANSCRIPT

[Note: transcripts will contain errors.]

Zach: Welcome to the People Who Read People podcast, with me, Zach Elwood. This is a podcast about understanding other people and understanding ourselves. You can learn more about it at behavior-podcast.com.

If you’ve enjoyed this podcast, would you be willing to give it a review? If so, please leave a review on Apple Podcasts; if you don’t know how to find that, there’s a link to Apple Podcasts on my site at behavior-podcast.com.

In this episode, I talk to Dr. Casey Grover about so-called drug-seeking behavior, which is the term used for when people who have drug addictions, for example, to opioids, attempt to deceive doctors in order to get a prescription.

I learned about Dr. Grover when I read a research paper from 2012 that he was a part of; the paper was called How Frequently are “Classic” Drug-Seeking Behaviors Used by Drug-Seeking Patients in the Emergency Department?. That study looked at a population of patients who were flagged as likely having so-called drug-seeking behaviors, and found that the behaviors most often associated with drug-seeking were pretty uncommon in those people’s emergency department visits. To quote from that study: “The most prevalent classic drug-seeking behavior was complaint of 10/10 pain, followed by complaint of headache, and then complaint of back pain. The least prevalent behavior was complaint of lost medication.” end quote. That paper also pointed out that such behaviors were pretty common genuine complaints of people in emergency departments, which pointed to the general unreliability of using such behaviors as the basis for making decisions.

Dr. Grover is also the host of a podcast about addiction, which is called Addiction in Emergency Medicine and Acute Care. To quote from the podcast description: “A practical and evidence-based podcast on how to think about, diagnose, and treat substance use disorders in the Emergency Department and Acute Care.” end quote

In this episode, we talk about the reasons why most of the well-known drug-seeking behaviors aren’t strong evidence of addiction; we talk about some behaviors that can be meaningful; we talk about the pressures doctors face to give patients the care they need while at the same time trying to avoid giving drugs to people with drug abuse problems; we talk about America’s drug problems, including our problems with opioids, meth, and fentanyl. One of Casey’s recent podcast episodes included his thoughts on why the phrase ‘drug-seeking behavior’ is not a helpful one and should be retired, and we talk about that, too.

A little more about Dr. Grover: he’s the Chair of the Division of Emergency Medicine at Community Hospital of the Monterey Peninsula. He graduated from medical school at University of California, Los Angeles as one of the top three students in his class. He completed his residency at Stanford in Emergency Medicine, and was chief resident. He’s also currently in the process of becoming board certified in addiction medicine in addition to being board certified in emergency medicine.

Okay, here’s the talk with Dr. Casey Grover.

Zach: Hey Casey, thanks for coming on.

Casey: Thank you so much for having me. I’m very pleased to be here.

Zach: So maybe we can first talk about, maybe you can explain a little bit about what the term drug seeking behavior means, and maybe also why you are not a fan of that term.

Casey: That is a fantastic question to get us started. And I think the definition and understanding of what the term means has changed over time, kind of in my understanding and also in the understanding of the general medical community. Um, so I graduated medical school in 2010, and at that time, drug seeking really, we associated with people trying to seek prescription opioids.

And at the [00:04:00] time, really kind of what we all did, when we identified this behavior, is we as doctors said: you, patient, have a problem. I’m not gonna prescribe you anything; you need to be discharged. And that really was in part, um, what drove the movement from the first part of the opiate epidemic, of prescription opioids, to the illicit opioid market, which was the second wave or the second part.

So I personally, by telling someone who was asking for a refill of their, you know, oxycodone, “you’re drug seeking, you need to be discharged,” pushed people from prescription opioids to heroin. And at the time, it was kind of just what we thought made sense.

This was a new phenomenon of overprescribing opioids. This was a new phenomenon of seeing young people coming to the emergency department or their primary care setting, asking for opioids. And over time I now realize that we really missed the mark. [00:05:00] Drug seeking is a symptom. Just like nausea, it’s not a diagnosis and that’s really where the problem is and why I initially researched drug seeking behavior ’cause I wanted to learn more about it.

And over the last 10 to 12 years, I’ve really begun to understand it better, and the nuances: that drug-seeking behavior is a symptom that must be further investigated by us as doctors and healthcare professionals.

Zach: So if you think someone is addicted and you don’t wanna push them away, obviously for the reasons you mentioned, what is the, the proper path?

Is it to ask more questions and, and maybe try to get them into a program, something like that?

Casey: Absolutely. So, you know, kind of if you break it down, drug-seeking behavior may be one of several things. Number one, it is what is called a use disorder. I think people are used to the term addiction, but as a doctor I now think of it in terms of opioid use disorder.

They are functionally addicted to [00:06:00] opioids. For those people, I’ll often actually offer them medications to help treat their opioid addiction. People may recognize either Suboxone or methadone. Um. But it may be more complicated. They could be in withdrawal, meaning they’ve taken opioids regularly and when they stop, they get withdrawal symptoms.

They might be afraid of withdrawal. They’re on their last dose of oxycodone, they’re about to run out and go into withdrawal. Or this person might have addiction and real pain, or they might just be having pain. It becomes so nuanced. But to come back to your question: specifically for so-called opioid addiction or opioid use disorder, I now offer my patients in the emergency department medications and counseling, and I often recommend that they follow up with a drug treatment program, whether that’s residential or sometimes just following up with a mutual support group like Narcotics Anonymous. Um, but my goal is treatment when I [00:07:00] identify this behavior in somebody who has opiate addiction.

Zach: I imagine that must be a, a kind of touchy, awkward subject, especially if they’ve been, you know, trying to be deceptive about, uh, why they want the, the drugs.

Is, is that a difficult conversation to, you know, to segue into?

Casey: Yeah, I mean, unfortunately, um. You know, we’ve seen such devastating effects of the opiate epidemic that things have really changed. So in 2010, I often felt like I had to play detective at work. Um, you know, somebody would come in with back pain.

And I was trying to guess, is this real? Is it not? Now, um, generally the medical community has really tightened down on the number of opioids that we prescribe. Mm-hmm. And with the arrival of ultra cheap fentanyl. If patients have an addiction, they’re usually buying it on the street. So for me, I tend to not have to have those difficult conversations as much as I did say 10 years ago.

Which is tragic, because you and I both know how much fentanyl is killing America and the rest of the [00:08:00] world. And you know, a couple of times we’ve kind of joked, in kind of a dark-humor way: gosh, I miss when people were coming to the ER for pills. ’Cause at least they were safe. Mm. You know, if it was five milligrams of oxycodone that they got, it was actually five milligrams of oxycodone.

Now if you buy five milligrams of oxycodone on the street, it’s probably fentanyl, or who knows what else. So, you know, I think the most difficult conversation I have is when someone doesn’t recognize that they’ve got a problem, that they’re developing addiction or developing an opioid use disorder, and I have to sit down and be non-judgmental and really engage them to say: there’s more here than I think you realize, and I’m worried about you.

Mm-hmm. Um, but yes, over the years I’ve had plenty of very difficult, confrontational conversations when patients want an opioid and I don’t feel comfortable as the treating doctor.

Zach: This topic usually seems to be around opioids and painkillers, but is there some percentage of [00:09:00] so-called drug-seeking behavior that is about uppers, like Adderall or Ritalin, or other classes of drugs?

Casey: Well, I, I’m gonna try to make a joke here, and I wrote about this. I have asthma. When I ask my doctor for an albuterol refill, I have albuterol seeking behavior.

Zach: Hmm.

Casey: So in some ways, drug-seeking behavior is what it is: a person’s trying to obtain a medication or therapeutic drug.

Zach: A bad term, to your point.

Casey: I think the kind of connotation of drug seeking is usually of addiction, or the so-called use disorder. And for that, people can be addicted to many different substances. Opioids; sedatives like benzodiazepines. People really like Xanax. Unfortunately, that’s a very highly addictive drug. Sometimes it’s muscle relaxants, sometimes it’s stimulants.

We even sometimes see people, uh, who have addictions to other medications that are not as common, such as, um, [00:10:00] gabapentin, which is a nerve pain agent, or even sometimes medications for severe mental health conditions. So, to answer your question, people can so-called drug-seek for many different therapeutic classes. Um, I think we are most aware of it simply because of what we’ve all seen with opioids in America.

Um, but it, it’s many different drug classes.

Zach: So I was reading some in preparation for this. I was reading some papers and articles and listening to your podcast too, and looking at the research paper you did on these topics. And the thing that really stood out is just how difficult it is to determine if someone is seeking drugs for addiction related reasons versus other reasons.

And it totally makes sense, because detecting deception is just so hard in general. And then if we’re talking about opioids, you know, pain is so subjective. And so it makes sense that it would be pretty easy for people who have a use disorder to emulate that behavior and for no one to know.

So all that’s just to say it [00:11:00] makes sense that it would be a pretty hard task to, uh, differentiate someone who has a use disorder from someone who doesn’t. Am I getting all that correct, that summary?

Casey: You said it beautifully. It is exceptionally challenging and when I did research in the early 2010s on this topic, that was what I was trying to answer.

But again, at that time it was predominantly patients getting prescription opioids and often addicted to those opioids. Or fearful of withdrawal or in withdrawal. And I, I didn’t really understand the topic well enough, nor did really anyone at the time, um, kind of like with COVID, how we had to kinda learn on the fly with America’s opiate epidemic.

It was very similar. No one had ever really seen kind of what would happen if you distributed a potentially addictive medication widespread across the country. So in my preliminary research, I wanted to try to figure out: are there certain things that we can pick up on as doctors [00:12:00] that suggest this person might be having a problem with their opioids?

And I don’t know how much time, um, you spent reading the papers, but my statistics really weren’t that strong. But it’s really some of the only research that’s ever been done on the topic to try to quantify it. And to your point, it’s just so hard to confirm that a person has an opioid use disorder unless they admit it.

And if they’re trying to be deceptive because of all the shame and stigma that comes with addiction, you know, you might not get that answer. Mm-hmm. It’s really hard. Um, and for me as a doctor now, and I’m soon to be board certified in addiction medicine, um, it’s really time with the patient: being non-judgmental, really being willing to listen and ask some difficult non-judgmental questions, and making a therapeutic alliance to say, you know, sir, even if you have a problem, I’m still gonna take care of you.

But yes, you said it beautifully that it’s a very nuanced and challenging diagnosis to make, if you will. And I just wanna [00:13:00] clarify again: drug seeking isn’t a diagnosis, more of a symptom. But to really make that final diagnosis of the person who came in saying they have back pain, only to realize that they have an opioid use disorder and they’re trying to obtain prescription opioids.

Zach: And I’d imagine it can be a blurred line too, right? Because some people, I would presume, wouldn’t technically be aware that they were addicted, and they may actually view it in terms of having bad pain. Am I correct in that?

Casey: Absolutely. It is exceptionally nuanced. Um, and sometimes you can have kind of all four of the behaviors.

You can have drug dependence, drug withdrawal, addiction, and pain. And what’s hard is, and you sent me an example of somebody who was very frustrated about how patients with real pain have often been turned away when they need, uh, legitimate pain relief, it’s been just really hard to parcel out: is this pain? Is it uncontrolled pain? Are we developing an addiction?

There are some [00:14:00] obvious red flags. You know, if somebody’s melting their pills and injecting them, or smoking their pills or snorting their pills, those are obviously extremely major red flags that somebody is developing an opioid addiction, again also known as an opioid use disorder.

But if somebody just comes in and they’re like, doc, my pain is worse. You gotta go up on the dose. That can be really challenging. Mm-hmm. Um, I tend to look for a few kind of, uh, risk factors for addiction. So has this person been addicted to something else? Um, now with a lot, a lot of electronic medical records, when I open the person’s chart, I see their history automatically.

And if somebody has had, for example, an addiction to alcohol: alcohol and opioids can often be similar, and the predisposition for addiction to opioids is much higher because of the previous addiction to alcohol. So I usually ask my patients, if I’m worried, you know: is there a family history of addiction?

Do you currently have a different addiction? Have you had a previous addiction in the past? And those are all things that I might think of [00:15:00] that are gonna increase my concern that what might seem just like pain could have a more complicated facet of addiction going on as well.

Zach: That’s really what struck me about this.

You know, I’ve heard, and I’ve even thought this myself, about how much doctors have theoretically contributed to this. You sometimes see doctors as a whole get grief or criticism about the opioid epidemic. But you know, like you’re saying, it’s a really tough spot to be in as a doctor, because the last thing you want to do is deny someone who is suffering some help.

And if it’s basically hard to determine whether someone is being deceptive about their pain, it’s understandable that most doctors would err on the side of providing the help. You can see it as similar to the legal system, you know, where you’d rather see a hundred guilty men go free than punish an innocent person. It seems like there can be a similar dynamic at work that makes us err on the side of [00:16:00] providing those drugs. Does that all sound right? Am I getting some of that right?

Casey: Yeah. There’s only one other kind of point that I would like to add, which is that there’s so much more to pain management than just opioids.

Mm. And I’ve been the chair of my hospital’s pain management committee for about six years now, and myself and one of my colleagues where I work. We were kind of the canaries in the coal mine in our community as early as like 2013 to say something’s not right with opioids. And we would tell our colleagues, please don’t prescribe opioids.

And they said, well, okay, but then what can I use? Hmm. And we have since, about the mid-2010s, as physicians, really focused on what are called alternatives to opioids, or ALTO. As an example, when somebody had a pinched nerve in their back when I was in residency, I was really taught to give them opioids and that’s it.

Hmm. Now I use what’s called a multimodal approach. I will use an [00:17:00] anti-inflammatory. I will use a steroid. I will use acetaminophen. I will use a nerve pain agent. Sometimes I’ll even add in an IV lidocaine drip or even an antidepressant. And those six drugs combine in an additive fashion that can often be much stronger than a single dose of an opioid.

So I think it’s even gotten better from a pain standpoint, in that we have such better approaches to pain management than we did when this started. As an example, I just gave a lecture on managing kidney stones without opioids, and most people are aware that kidney stones are one of the most painful conditions that we treat.

And so it’s really been great as a physician that as we as doctors have cut down on our opioids, we’ve opened the door to so many other great non-opioid options. And I’m very grateful that, you know, my hospital has been really tip of the spear on writing new protocols and using drugs in new ways that are not addictive and are very effective for pain management.

So it’s a little bit of a different [00:18:00] scenario, you know, in 2022 as compared to 2010. We just have more options for pain relief, in addition.

Zach: It’s probably an impossible question to answer, because it’s so broad and probably varies across the country. But do you have an opinion on whether the problem these days with opioids is that people are prescribing them too much or too little? Has it swung the other way?

Casey: Yeah, that is a fantastic question. You are a hundred percent correct. It has definite regional variations. Um, I think that the pendulum in 2010 was to over-prescribing and the pendulum in 2022 may be towards the side of under prescribing opioids.

And you know, I have a number of, uh, patients that come through the emergency department that, you know, say something to the effect of: I was told the only place I could get an opioid was the ER, ’cause it was too dangerous. And there are plenty of patients who can be managed on long-term opioids [00:19:00] by their primary care physician, and they do really well.

Not every person who takes chronic opioids gets addicted. Now I have to say, with a caveat, I prefer to avoid a new start of an opioid in my practice. So I’ll give you an example. If you come in with a fracture, I’m gonna try to do everything I can to keep you comfortable and treat your pain while avoiding starting an opioid.

But if they’re needed, I use them in my practice regularly. And again, you sent me, um, I believe it’s a Twitter post, from somebody who talked about patients being denied opioids. That absolutely does happen. And you know, the circumstance that I find frustrating is you have somebody who’s, you know, 85 years old, they’ve never had an addiction, they’re on a blood thinner and they have a lot of medications, so they can’t take kind of that multimodal approach I talked about.

They take, you know, two opioids a day so they can walk their dog. They’re not crushing their pills, they’re not snorting their pills. They’re doing fine, and their doctor [00:20:00] says, we need to taper you right now. I think this sounds so silly to say, but really doctors should just use their best clinical judgment: if opioids are helping someone who doesn’t have another option,

Then it’s totally appropriate. Obviously that patient would need to be monitored to make sure that they don’t develop any signs of an opioid use disorder. If it’s a 17-year-old that can be managed without opioids, then do that. Um, but I think overall, unfortunately, the pendulum has swung away from opioids.

Whether that’s the right thing from a population standpoint is an interesting question, because when they look in the studies, when people get started on an opioid, there’s a certain percentage that end up on them long term. A new start of an opioid is not a benign or kind of innocuous event.

So I think if you look across America, if we had been more judicious with opioids 20 years ago, we’d see a very different landscape. But on the individual patient level, you know, [00:21:00] again, a person with legitimate pain who’s never misused their medications may be suffering because their doctor’s not willing to prescribe opiates for them.

Zach: So let’s talk about, uh, some behavioral indicators of people who may be seeking drugs for use-disorder-related reasons. With the caveat, of course, as we’ve said, that many of these behaviors, or probably all of them, aren’t that reliable, and are very, you know, very subjective, and can actually be done by people who legitimately have pain and such. But maybe you could talk a little bit about the various behavioral indicators that people tend to point to. Are there some that stand out for you as being the most reliable?

Casey: Yeah, absolutely. Um, so for the research I did, uh, back in the early 2010s, I just kind of asked around among my colleagues: what tips you off that a person is trying to obtain opioids? And we all kind of had a list in our mind, and it [00:22:00] was non-specific complaints like back pain, dental pain, or headache, that are pretty common and usually don’t involve a lot of testing.

Things like asking for a medication by name, asking for an IV dose rather than an oral dose, saying that your pain is more than a 10 out of 10. Um, asking for a refill. There were a lot of things we looked at and it’s a little hard to understand kind of if that study is still relevant today because again, the landscape has changed so much.

Interestingly, back pain in my career has really changed. When I was in residency in the early 2010s, most patients with back pain were on chronic opioid therapy, and were either out and needed a refill, or were on that dangerous line of: is this addiction, or is this pain? My back pain patients now are pretty legitimate.

You know, many of them have a pinched nerve. Some of them need an epidural injection, so I think that kind of non-specific pain complaint no longer applies. And people when they come in for these complaints, tend to be very [00:23:00] open to whatever I want to prescribe. And oftentimes patients will actually tell me, doc, I heard about those opioids.

I don’t want those. The one that I do think still carries some weight is asking for a medication by name, particularly when it’s a medication that’s known to be. Somewhat euphoric. And if people will ask me, doc, what are you gonna give me? And I’ll say, you know, I’m gonna give you a dose of an IV anti-inflammatory.

We’re gonna give you a little bit of an opioid, some acetaminophen and a nerve pain agent. And they say, okay, I don’t have any red flags. But if they say, I have to have this medication at this dose. What I now call that is they’re opiate sophisticated. Mm-hmm. They may be a chronic pain patient who is really knowledgeable about how their body responds to opioids.

They may also have opioid addiction, and that’s just a flag to me that I need to dig into their chart more. Mm-hmm. And spend more time talking with them, you know? Do I see in the chart that they had an opioid overdose in the past? Do I see that they were on methadone in the past? It’s just, it’s more work on my [00:24:00] part, which is appropriate for me to do.

Mm-hmm. Um, the other one that’s still interesting is when people say that their pain is greater than 10 out of 10. That to me means the patient wants to get my attention. And that doesn’t necessarily mean it’s addiction, but they want to make a statement to me: doctor, this is serious. Sometimes it is because they are addicted and they really wanna push me to give them a shot of morphine or something.

Sometimes it’s, they’re just miserable and they want me to do something. And you know, I think other than that, most often now, when people are requesting refills, it’s pretty legitimate. And since 2010, we now have what are called prescription drug monitoring programs, where you as a doctor can log in, um, and look to see what medications someone’s been on.

And I’ll give you an example. I had a patient show up from out of town, which made me a little bit nervous. And he said he was on fentanyl patches, which are really, really [00:25:00] potent. And I was nervous. I was like, is this guy trying to obtain opioids because of an addiction? And I logged in, and I was able to see that he’d been getting his fentanyl patches every month, regularly, from the same doctor.

It exactly matched with his story. And I said, sir, I am so happy to help you. And I think, again, kind of like with pain going from just opioids to this multimodal approach, there’s more tools available to us as doctors to be able to dig in a little deeper into the history. So I think the one that really kind of makes me the most nervous as a doctor is if somebody has a specific request about the name of a medication, and then also the route and the dose. And again, that just means they’re sophisticated;

I do have to do more work to really dig in.

Zach: Yeah, that kinda reminds me of police interrogation. There’s nothing that specifically will say, this person, you know, is X. It’s more like an indicator that you should ask more questions or look into [00:26:00] it, you know? I think that’s what some people get confused about with police interrogation too, where it’s like, something may be a little suspicious, and it doesn’t mean that the cops should think that they’re guilty.

It just means, oh, maybe I’ll follow up on that question a little bit more, that kind of thing. And I think that some people can have a sense or belief that, you know, people are just being like, oh, this means that. Where, as you’ve said, it’s a lot more subtle than that, and there’s nothing that will a hundred percent mean anything.

Casey: Yeah. And I may be a little bit of an outlier on this. You know, I wrote a piece on why I believe that the term drug seeking needs to be retired, ’cause I still do occasionally see physicians who identify drug-seeking behavior and tell the patient: that’s it, we’re done, you’re discharged. Mm. I’m not gonna continue this for you.

It still happens. Um, I believe that I’m doing those patients harm, because, you know, if they have prescription opioids and I cut them off, [00:27:00] they may go to the street market, and I don’t want them on fentanyl. If they’re addicted, they’re at risk of overdose; I need to treat them. And if they’re just in really bad pain, well, I can help with that too.

So I think, you know, I would like to believe that physicians are doing a better job of identifying: hmm, something’s making me nervous here, I’m gonna dig deeper. But I think there’s still, unfortunately, maybe some reflex to say: something makes me nervous, I’m gonna cut this person off, and I’m gonna tell them that I can’t do anything else for them.

Which to me is why I wrote the article. It’s a tragedy, to your point. You know, patients need the appropriate treatment, particularly for pain, and I wanna do my best to advocate for my patients.

Zach: Yeah, to make an analogy, it’s probably like any profession. The analogy in the police world would be a detective who’s interrogating somebody who’s acting strangely, and the detective just immediately, a hundred percent, thinks they’re guilty without good evidence.

You know, in any [00:28:00] profession, there’s probably people overreacting, or taking a little bit of evidence as definitive evidence or something.

Casey: Absolutely.

Zach: Would you say it’s also the case, when it comes to practice in the field, that because these things are so subjective and unreliable, a lot of it just comes down to doctors getting a general feeling of how suspicious or trustworthy someone is?

And that might depend on, you know, very subtle things, things we haven’t mentioned. Very subtle things like a story not coming together very logically, or someone seeming a little strange, uh, their eyes being a little shifty or something, or someone rambling too much about their reasons.

Basically the same kind of subjective things that can make police interrogators suspicious of someone they’re interrogating. And this isn’t to defend those, uh, more subjective things. It’s just to say that, because we’re dealing with such a subjective area, such a vague and gray area, it probably makes [00:29:00] sense that doctors do have to rely on these more subjective reads and their feelings and such. I’m wondering if you have anything to say about that.

Casey: Yeah, I mean, I think there’s a couple of answers here. So the first is, there has been some research on this, and I just pulled this up on my phone while we’re talking. There’s actually a scoring tool called the Opioid Risk Tool for narcotic abuse, and I don’t really like that title. ’Cause number one, “narcotic” would suggest that it’s illegal, and these could be prescription opioids; and then “abuse” is a stigmatizing term.

Um, it probably would be best, you know, for it to be called the opioid risk tool to assess for opioid use disorder. But I’ll get off my soapbox there about stigmatizing language. It’s essentially a tool where you can plug in the patient’s variables, and it’ll give you some predictions on whether or not the patient’s at high risk for [00:30:00] misusing their opioids.

And it’s important to understand exactly when that tool can be applied. And that was really, as I understand it, for the chronic pain clinic, where somebody’s on opioids long term and the doctor wants to see: are they going down that slippery slope from just pain to pain and an opioid use disorder? Um, so there are a couple of different, uh, scores.

If I remember correctly, there’s probably about five or six. None of them really caught on in the emergency department, just because it was so hard to study, and it was more geared, again, towards the chronic pain clinic, chronic opioid management world. Um, to your point, sometimes it really is subjective.

Uh, I was thinking about a case where somebody came in and I just got a really weird vibe. Um, it was a young female patient, we’d never seen her at my hospital before, and she kept repeating the same phrase over and over again, trying to describe her pain. [00:31:00] And it almost seemed rehearsed. And I asked her point blank, have you ever been on an opioid before?

She was very compliant. She let me give her whatever meds I wanted, you know, alternatives to opioids, to try to treat her pain. And she said, doctor, I’m doing better, but I think I’m gonna need something stronger. And I just got this weird little kind of vibe of, gosh, something seems weird here.

What I did in her case is I logged into California’s prescription drug monitoring program, and I found that she’d had something like 30 or 40 opioid prescriptions, almost all from different doctors, to suggest that she was basically going clinic to clinic, emergency department to emergency department, trying to obtain opioids.

And that gave me that confirmation. And if I hadn’t found that, I may have just given her the benefit of the doubt and said, this is a legitimate pain patient. And in her case, what was really challenging, and this was probably about five or six years ago, is I tried to offer her treatment for opioid use disorder, and she said, doctor, I don’t have a [00:32:00] problem.

And then asked to be discharged. So, um, it’s almost like parenting. You know, you kind of know when your kid’s up to something. Medicine is about pattern recognition, just as much as knowing your child: that’s the look when my child’s hungry, that’s the look when my child’s tired.

It’s almost just looking for: this patient is behaving differently than the majority of my patients, something doesn’t add up. And you’re right, it’s subjective, and I think that’s again why it’s incumbent upon us as doctors to take more time with the patient, to really try to parse out what’s driving this behavior.

And again, if it’s prescription opioids, you gotta check that prescription drug monitoring program to really get that objective data. ’cause that, for me, sealed it that this is somebody who’s not using opioids appropriately. So, yes, very subjective. And, um, I think some people feel like they have good radar, you know, and I, I don’t know if they ever get accurate feedback if they cut someone off who was a legitimate pain [00:33:00] patient because it’s, again, so subjective.

You know, as an example, if I miss a case of appendicitis, usually my colleague will follow up with me, or in my, uh, department we have like a monthly educational lecture where we review cases so we can learn from them. There really isn’t that in this space, so if somebody kind of self-proclaims or self-identifies as somebody who’s really good at picking up drug seekers, they could be really wrong, and they don’t get really good feedback to learn from it, just because oftentimes those patients just won’t come back to our department.

Zach: Do you have any other, uh, interesting anecdotes that come to mind for, uh, these kinds of things?

Casey: Well, I mean, you know, sometimes, you know, it can really spiral when people who have an opioid use disorder don’t get their meds. Um, I can think of a case from when I was a medical student, about a patient who lied about having cancer to get really high dose prescription opioids.

And I was able to, as a medical student, review his [00:34:00] medical record. Um, and without getting too into the weeds, I was able to basically prove that he was lying about his cancer and that he had never had it. And this was all to obtain opioids. And, uh, when we confronted him with, you know, sir, you’ve not been honest with us, he just erupted.

He yelled at staff, he threatened to commit suicide. He ended up going on an involuntary hold and being admitted to psychiatry. And I don’t know if he was able to, you know, even see an addiction medicine specialist when he was admitted. Um, you know, not as much anymore, but patients would sometimes be verbally abusive with doctors and nurses when they wouldn’t get what they want.

Um, I actually had a patient, as recently as about a year ago, who just absolutely screamed at my staff that he wasn’t getting opioids. And you know, healthcare is really hard right now. I mean, morale in healthcare is low, and I really try to defend my staff. Um, so you know, addiction is just so disruptive to the [00:35:00] brain, as far as the ability to weigh out what is a good decision and a bad decision, and it can really cause people to escalate when they don’t get what they want.

Zach: Do any anecdotes, uh, spring to mind of the opposite situation, where you were pretty sure someone had a use disorder, but they ended up not having one? Anything like that?

Casey: Absolutely. I mean, that goes back to my point about the prescription drug monitoring program. Uh, as I mentioned earlier, you know, I had a gentleman who showed up to my emergency department, never been to my hospital before, from out of town, requesting really high dose opioids.

And I was just already kind of skeptical, and I did my homework. I actually called his old doctor. Um, he was from out of state. His doctor confirmed, no, this is legitimate, I manage him. She actually faxed me records to confirm, and I was actually grateful that I was his doctor that day, because I, in the work that I do, wanted to do what’s right. I wanted to take him seriously.

I wanted to make sure that if [00:36:00] he needed these meds, I could get them for him. Or if it was a use disorder, I wanted to be able to offer him treatment for addiction. So, in that case, I mean, you know, I hope this is not true, but I worry that a couple of my colleagues, you know, in my region of California, might have looked at him and said, you’re looking for that, and you’re from out of state.

You’ve never been here before. Yep, I’m not buying it. Mm-hmm. Um, but you know, you kind of dig deeper. You do your homework as a doctor, and you can prove that people are legitimate patients, and then they’re so grateful that you took them seriously. It ended up being a really great interaction between me and him.

He was, um, you know, very grateful. You know, he was out visiting my part of California for personal reasons, for family, and he didn’t have to fly back home, uh, and disrupt his family obligations to be able to go get his meds that he had accidentally forgotten. So, absolutely, I can think of many more similar cases.

Zach: So I guess, uh, I’m gathering that you would probably agree that training for doctors in this area, uh, should [00:37:00] be better. It’s not as good as it could be.

Casey: I got one lecture, uh, on addiction in medical school, and ironically it was only on gambling addiction. Um, and my training in medical school, from 2006 to 2010, was that pain is always to be treated with opioids once acetaminophen and ibuprofen don’t work, and you always escalate the doses of opioids, um, as the patient needs.

Um, I believe now there is a lot more work going into education for medical students. I’ve actually personally been asked to speak to medical students, um, about the work that my colleagues and I do, you know, at my hospital in this sphere, to be able to treat patients appropriately and really kind of dig into what’s really driving the behaviors. Um, and definitely across the nation, we are seeing that residency trainings are starting to incorporate addiction. We do have one residency in my county in California, and they do a wonderful job of [00:38:00] giving their residents exposure to addiction.

Many of them come work personally with me and my colleagues, kind of in my, uh, sphere of addiction, pain, and emergency medicine. Um, and then really being so aggressive about alternatives to opioids for severe pain. Um, it’s really incredible how much pain relief you can give people without opioids.

So truly, we are seeing more and more over time, as America is getting deeper and deeper into what is now the fentanyl crisis, and we just see how bad things are. Uh, I’m actually pretty hopeful. Uh, we just hired three new physicians at my hospital, and they are all, uh, very savvy with kind of this sort of conversation that you and I are having, to really kind of dig in and understand those nuances.

Zach: So we’ve talked, we focused on, uh, opioids and painkillers. But is there anything that comes to mind in terms of differences in behavior for people that might have a use disorder with, uh, amphetamines or other uppers? [00:39:00]

Casey: Yeah. It’s unfortunately the same story as with prescription opioids turning into street fentanyl.

Methamphetamine is, uh, it is so cheap. It is so prevalent. In about kind of the mid-2010s, the formulation of how methamphetamine was made changed from a plant-based process with ephedra, um, to a lab-based process. And it just allowed the production of methamphetamine to skyrocket. And in California, the price of methamphetamine dropped by 90% over about the last seven years.

So again, it’s the same answer: if people really want potent drugs, it’s so much easier to get them on the street than to go to a doctor and get them. That being said, 10 years ago, um, you know, absolutely, patients were coming to the emergency department requesting stimulants like Adderall or Ritalin or, um, you know, those ADHD stimulant meds.

Um, and it’s just, again, as the landscape has changed, [00:40:00] we don’t see that very much anymore. Um, now people do occasionally come in saying, I’m out of my ADHD meds, can I have a refill? And if it’s a Friday, I’ll give them, you know, three doses until Monday, if I can confirm that it’s legitimate.

Um, sometimes, if I can’t confirm it, I’ll say, you know, you’re not really gonna have withdrawal, it’s the weekend, let’s get you followed up with your doctor on Monday. Um, but yeah, it’s much less common than it was before. And I think just in general, across all the different classes of drugs, the street market has just gone crazy with how available and cheap these drugs are.

And if people really are looking for something, it’s just so much easier, unfortunately for them to get it off the street.

Zach: And I’ve heard those stories, you know, from people personally, about saying they have ADHD and, you know, need help studying for a test or whatever, they can’t focus, and, uh, using that deceptively to get, um, Adderall, Ritalin and such.

Do you think there will ever be like a pushback against that in the same way? I guess it could [00:41:00] be viewed as, those are theoretically paths to, you know, stronger, um, amphetamine use and such. But I haven’t really read anything about that. I’m curious if that’s a concern of anyone’s.

Casey: Yeah, I mean, I think anecdotally, um, I am seeing much less primary care management of ADHD with stimulants, and people in the medical community are really saying, if you need a stimulant, you should actually see a psychiatrist. Hmm.

Um, and I think that’s, you know, what’s also happened with opioids: if I, as your primary care doctor, can’t manage your pain and you need to be on long-term opioids, I need you to see a specialist. And I think that’s kind of what has happened a lot, in a good and bad way. Sometimes primary care physicians that could manage opioids just don’t wanna get involved at all. Sometimes it’s absolutely the right move to send someone to a pain specialist because there are more options, uh, than just opioids. But really in my community, and I think kind of in my region, the general consensus is, if you have [00:42:00] ADHD.

You need to see a psychiatrist. If they determine that you need stimulants, they will manage it, and we will defer that to them, because we wanna make sure it’s really the right thing for you. And, you know, absolutely, ADHD can be very debilitating. Um, I’m very lucky where I work, we have a lot of great psychiatrists, and, you know, I trust them; if they’ve got someone on stimulants, I know that they’ve really done their homework in trying to make sure that the diagnosis is right and that they’re choosing the right therapeutics for the patient.

Zach: Mm-hmm. Does seem like people are taking it much more seriously due to the, yeah, recent, uh, last few years of opioid crisis. Do you have anything else that we haven’t talked about that you’d like to mention?

Casey: Yeah, I mean, I think the only thing that has, you know, really had me losing sleep at night, you know, in this topic that we haven’t yet covered, is just the street market and our youth. And what’s so interesting is, when I was in high school, really the only drug out there was alcohol. [00:43:00] And alcohol is very well labeled. It’s regulated, it’s sold in a store, you know exactly how much you’re getting, and if you drink too much, it usually is, you know, bad decisions, slurred speech, vomiting. Alcohol’s actually relatively hard to overdose on.

In my college years, in the early two thousands, a couple of my friends were starting to dabble in prescription pills, and at the time it was almost all from physicians. And so if you bought five milligrams of oxycodone, it was actually five milligrams of oxycodone.

And so, as you know, kind of the youth have this inherent desire to experiment, and it really wasn’t that dangerous for experimenters. Most of our opioid overdoses at that time were people who were chronically on opioids or misusing their opioids or even using illicit opioids. But now what we’re seeing is high schoolers going and buying pills on the street.

And they’re having fatal or near fatal overdoses the first [00:44:00] time they try, because of what’s in the street pills, which is fentanyl and these ultra potent fentanyl analogs. And what’s so hard is these kids are being sold Ritalin, they’re being sold Adderall, they’re being sold Xanax, they’re being sold, you know, Percocet, and it doesn’t contain any of that.

It’s almost all fentanyl and these fentanyl derivatives, and the stakes are just so much higher for our youth right now. And that’s just so scary. You know, if somebody, you know, just wants to have fun on a Saturday night, 10 years ago when they bought a pill on the street, it was no big deal.

Those stakes are so high now, and so we’ve seen an increasing number of overdoses, including fatal overdoses, in our youth, and that’s just so devastating to a community, a school, a family, a friend group. Um, and then of course the tragedy of that young loss of life. So I think that’s one thing that, you know, I didn’t necessarily think of as we were just starting to see fentanyl [00:45:00] arrive in my community and the illicit market: just at what great risk it put our youth, because of what they were used to in the past, with the wide availability of legitimate prescription drugs on the street.

Zach: It seems a lot more dangerous out there for sure, with some of the news stories I’ve seen with fentanyl being in a wide range of drugs and deceptively given to people. Absolutely. Uh, I was gonna mention this in the intro, but might as well mention it here too.

There’s a great book, The Least of Us. I actually haven’t read all of it, but I’ve read part of it. Oh yeah, I just finished it. Yeah. And it’s by the author of Dreamland, I think, which was also about opiates. He’s wonderful. Yeah. Um, and yeah, I read an excerpt from it, but he talked about some of the things you talked about where, you know, for example, the meth problem is related to the opioid problem too, because some of the people that were addicted to opioids, uh, transitioned to meth when opioids weren’t available, and the new meth, uh, formulation that you mentioned is so much more mentally destabilizing.

Absolutely. And ends up with people, uh, you [00:46:00] know, taking up mental health, uh, facilities because of the meth, in pretty quick order. It’s much more aggressive than the old plant-based, uh, ephedra variety. But yeah, so all these things are kind of related, and it’s pretty scary stuff these days with the drugs. And also, you know, seeing that related to, uh, the homeless crisis we’re facing.

You know, uh, that, that’s part of that too.

Casey: Totally agree. And if anyone hasn’t read them, Dreamland was Sam Quinones’ first book, um, about waves one and two of the opiate epidemic, and then The Least of Us was his follow-up, looking at waves three and four. And just to clarify those waves: wave one was doctors over-prescribing opioids.

Wave two was people transitioning to illicit opioids, usually heroin, and the arrival of increasing amounts of heroin into the US. Wave three is fentanyl. And then, as you stated, wave four is meth. And it’s so interesting that just cheap and easy-to-get meth has [00:47:00] taken people that traditionally use opioids and don’t like stimulants, and we’re seeing them use methamphetamine because it’s so cheap and easy to get.

Um, it’s just, it’s so sad. You know, you drive up and down California’s highways and you see tents, uh, and many of those people unfortunately have methamphetamine use disorder. Um, and you’re right, the newer formulation of methamphetamine causes a lot more psychotic symptoms.

Um, my most recent episode of my podcast was on methamphetamine psychosis, and oh my gosh, that is just so debilitating.

Zach: Hmm. And your podcast is called,

Casey: Uh, mine is called Addiction in Emergency Medicine and Acute Care. Um, I put it together about 18 months ago. Um, it’s a podcast written for a medical audience, but I try to keep it pretty simple.

Um, I do have some non-medical people that listen to it. Kind of the way I think of it is, when I go to work, and I’m gonna work tonight in the emergency department, oftentimes I’m kind of confronted by a clinical question like, is drug A or drug B better? [00:48:00] Or what’s the best way to diagnose this condition?

And I usually kind of dive into the medical literature, um, to answer the question, and then in my own mind try to come up with what I think is the best practice or the best approach or the best diagnostic algorithm. Just because, unfortunately, a lot of this stuff, outside of having formal addiction medicine training, is kind of hard to get.

Um, so I’m also sitting for my addiction medicine boards, and this was a way for me to learn. And yeah, shameless plug: Addiction in Emergency Medicine and Acute Care. I’ve had a lot of fun making it.

Zach: Alright, thanks Casey. This has been great. Anything else you wanna mention before we, uh, we end it?

Casey: Just wanna say thank you for having me and thanks for talking about this very important topic and, uh, thanks for, for putting this out on the air.

Zach: Thanks for your work.

That was Dr. Casey Grover, addiction specialist and host of the podcast Addiction in Emergency Medicine and Acute Care.

This has been the People Who Read People podcast with me, Zach Elwood. You can learn more about it at behavior-podcast.com. If you’ve enjoyed it, please consider leaving me a rating on Apple Podcasts; you can find a link for that on my site at behavior-podcast.com.

Thanks for listening. Music by Small Skies.


Predicting schizophrenia with language, with Neguine Rezaii

This is a reshare of a 2020 talk I did with psychology researcher Neguine Rezaii. We talk about her research using machine learning to find patterns in the language of teenagers at risk of schizophrenia, patterns that were correlated with a later schizophrenia diagnosis. The two language patterns found in the subjects’ speech were 1) low semantic density (i.e., a low degree of meaning), and 2) speech related to sound or voices.

Episode links:


Reading and predicting jury behavior, with Christina Marinakis

This is a reshare of a 2018 talk with Christina Marinakis about reading and understanding jury behavior. Marinakis works for the firm Litigation Insights; you can see her bio here. There’s a transcript of the talk below.

Episode links:

TRANSCRIPT

Zach Elwood: Hello, and welcome to the People Who Read People podcast with me, Zach Elwood. This is a podcast about understanding other people and understanding ourselves. You can learn more about this podcast and sign up for updates at behavior-podcast.com. If you like the podcast, I ask that you leave me a review on iTunes, that’s the best way you can show your appreciation and encourage me to do more. I’ve been pretty busy working on my book aimed at reducing American anger and political polarization. So I’ll continue re-sharing some of my early interviews. This one will be a talk from 2018 with Christina Marinakis, a specialist in jury selection for the organization Litigation Insights. In this talk, I ask Christina about some of the more psychology and behavior-related aspects of jury selection.

When it comes to how people in serious high pressure jobs make use of psychology and behavior, I think it’s one of the more interesting talks I’ve done. It was my original goal with this podcast to talk to people from a wide variety of fields about how they read and make use of people’s behavior. Because I think there’s all sorts of interesting domain-specific knowledge out there that we just don’t hear much about unless we’re in those niche areas. And I think some of that knowledge can be valuable to people who work in other fields or even just in our personal lives by increasing our empathy and understanding of other people. I’ve been a bit distracted from that original goal due to my interest in political polarization, hopefully I’ll get back to that original focus as I have a long backlog of ideas for guests from various fields and pastimes that I’d love to interview. And if you ever have ideas of interesting people to interview or subjects to tackle, feel free to send me your thoughts via the website which is behavior-podcast.com.

One interesting recent thing about Christina Marinakis, she was a consultant for the prosecution in the case against Derek Chauvin in Minnesota. If you search for her name and Derek Chauvin, you can find some pieces about the jury consultancy work she did for that very high profile case. Okay, here’s the talk with Christina Marinakis. 

Today is September 24th, 2018, and today we have Dr. Christina Marinakis joining us. Dr. Marinakis’s education includes an undergraduate degree in bioscience psychology, a master’s in clinical psychology, a doctorate in psychology, and a law degree. She’s currently the director of jury research at Litigation Insights, a national trial consulting firm, and she has 17 years of jury research study and applied practice in law and psychology. Her case experience includes but is not limited to product liability, antitrust litigation, class action, legal and medical malpractice, contract disputes, patents, securities, fraud, and criminal work. And she does this work for both prosecutors and defendants. Dr. Marinakis contributed to a new edition of the book Pattern Voir Dire Questions, a compilation of tips for voir dire strategy. And that book includes over 2000 questions for investigating and eliciting bias from potential jurors. Besides jury selection work, she also is hired for witness preparation and communication training, and that involves giving feedback to witnesses who are preparing to testify to make sure they’re perceived well by the jury. So today Dr. Marinakis and I will mainly be discussing jury selection, the basics of how the process works, how strategy and game theory can play a role in the process, and how an understanding of psychology and behavior can impact jury selection. So without further ado, welcome to the podcast Dr. Marinakis, thanks for coming on.

Dr. Marinakis: Thanks so much for having me.

Zach Elwood: So we’ve got a lot of interesting things to talk about today, and a lot of questions people will find interesting I think. So let’s jump right into those questions. Could you give a simple explanation of how the voir dire process works for people who don’t know much about that process?

Dr. Marinakis: Sure. So a lot of people refer to what we do as jury selection, but the more accurate term would be jury de-selection. We’re not really picking who we want on our jury, it’s more of an elimination process of picking who we don’t want on the jury. So there’s essentially three ways that you can get a juror off the panel. And the first way is through hardship. And so if a juror says that they have an extreme financial hardship or a personal hardship such as they are caring for a young child at home or caring for an infirm adult, the judge decides whether those people meet the statute for whether they should be excused for hardship. And the attorneys can often comment on that and can make arguments whether a juror meets that statutory hardship language, but that’s really a decision that ultimately rests with the judge. The second way that people can be removed from the jury panel is through what we call peremptory strikes or peremptory challenges. And in every case, both sides are permitted a certain number of what we call strikes, meaning that you can remove people from the panel for no reason at all, any reason, and you don’t even have to tell the other side or the judge what the reason is. Now there is an exception, and you can’t remove someone based on race, gender, or in some states, sexual orientation. That is against the law. But other than that, you can remove that person from the panel and you don’t have to give a reason why. There’s a balanced number of strikes per side, and that varies by jurisdiction. Most of the time in state cases and civil cases, it’s anywhere from three strikes per side to about six strikes per side. In some cases, if you have more than one defendant who has adverse interests, the judge might decide to allow you to have eight strikes per side if that’s what you want. But it’s always balanced. In criminal cases, it tends to be more, you might have up to 20 strikes per side, but that’s what we call peremptory challenges.
And they usually alternate. So once you have a panel of jurors, usually the prosecution or the plaintiff will strike first and they’ll say, “We’d like to thank and excuse juror number four.” And then the defense goes and they say, “We’d like to thank and excuse juror number 12.” And it goes back and forth until both sides pass. So you can pass and try to save up your strikes. And so you might say, “We pass, we accept this panel,” the other side then makes a strike. Now you get to go back and make another strike. Now once both sides pass and they accept the panel, that’s your jury. So that’s the second way. And then the third way, which is really where a lot of the psychology comes in, is what we call cause challenges. And there’s an unlimited number of cause challenges. And what that involves is each side is trying to get the jurors that they don’t want on the panel to admit that they can’t be fair. There’s statutory language that differs by state in terms of what you need to get the jurors to say. For example, in California, there’s a number of ways you can get a juror, what we call, kicked off for cause. If they evidence enmity against or a bias in favor of one party or the other, that’s enough reason to get them off the jury panel. In most states, it’s whether they can be fair and impartial, but there’s certainly some differences. Again, in New York, they have to give an unequivocal assurance that they can be fair. If they can’t do that, they get kicked off for cause. So each side gets to question the jurors, and that’s what we call the voir dire process or if you’re in the staff they call it voir dire. And it’s a process where each side gets to ask jurors questions and ask follow up questions. 
And the ultimate goal is to identify the people that you don’t want on your panel without exposing the people that you do want, because if you expose those good jurors, now the other side is just going to be able to identify them and get them kicked off for cause or they might use one of their peremptory challenges if they can’t get the juror to say they can’t be fair. And so since there’s an unlimited number of those cause challenges, that’s really the end game, is the side that gets the better jury is really the side that is able to get as many of their bad jurors off for cause which gives you a leg up on the other side.

Zach Elwood: So how many people are typically starting out in a jury pool, jury selection pool, before the process starts?

Dr. Marinakis: It varies a lot by jurisdiction, but in general, I’d say you’d need anywhere from fifty to a hundred jurors. And sometimes it just depends on how many jurors sit on the final panel. So although many states have juries of 12, there are certain states like Maryland and Florida where you’re only sitting juries of six. So obviously you don’t need as many jurors. So the way they decide how many jurors we need is you take the number that are finally seated, whether that’s six or 12, and then you add up the number of strikes that each side has. So again, that could be anywhere from three to six. So just for example, if you have a jury of 12 and then each side has six strikes, that means you’re going to need at least 24 jurors, 12 for the box plus the 12 that are stricken. And then you want to have a couple extra jurors because you anticipate that some of those jurors are going to be gone for cause. Now the longer the trial is, the more jurors you need. Many of my clients have trials that run 5 to 12 weeks long, there’s going to be a lot more jurors who will have financial hardships. And so if you know your trial is going to be a longer trial, you might need to start with 200 jurors to get enough jurors for the final panel. If it’s only a three-day trial, you might be able to start out with 40 jurors and be just fine. Now, same thing goes with whether it’s a high profile case or involves some really sensitive issues. Clearly if you’re trying a case for Bill Cosby, there’s going to be a lot more jurors in the audience who have already formed an opinion about his guilt or innocence, and so you’re going to lose more jurors for cause.

Zach Elwood: Right. So when you ask the questions of the potential jurors, can you ask anyone questions, or do you pick one person at a time, or do you ask it to the group? How do you decide those kinds of questions?

Dr. Marinakis: Again, it varies by jurisdiction. Each state has different rules on how they conduct voir dire. The states that are in the northeast like New Jersey, Maryland, Pennsylvania, Massachusetts, New York, they question the jurors individually. So each juror comes back into the room, into the chambers, sometimes the judge is present, sometimes the judge is not present, and the parties ask the questions individually of each juror. Because of that, the jury selection process in those states can take several days up to several weeks in certain trials. Other states like Texas do a panel. So each person in the venire, the people that are sitting in the benches, will have a paddle, almost like at an auction, that has their juror number. And then the attorneys have to ask the question of the entire group. “How many people feel like corporations put profits over safety?” Then people who think, “Yes,” raise their paddle, and you jot down their numbers and then you have to follow up with them. Most of the time the follow up is done in open court. There are some jurisdictions where you ask the questions of the entire group, but then any juror who raises their hand or raises their paddle then comes up to be questioned individually. So it just really depends on the rules and the court system. But usually the jurors are in a certain order. In the field, we call it a random list. Now the jurors may not realize what order they are in. Sometimes they’re seated in order in the courtroom, and sometimes they’re not. But the attorneys always have a list of who’s first and who’s coming up because the jurors are seated in an order or they’re in an order on a list. So if we have a list of 50 jurors and I know that we only need to get 24 to sit the jury, I’m only going to focus on those first 30 people on the list. There’s no point in me asking questions to the juror who’s seated in seat 60 because the chances that we’re going to get to that juror are very unlikely.
Now once we start losing jurors for cause and losing jurors for hardship, we can calculate how deep into the panel we will get and know who we need to ask questions of.

Zach Elwood: But you know the order, so theoretically there’s some reading you could do, based on how a person acts or looks, to know something about what some of their stances might be.

Dr. Marinakis: Certainly. I’d say we know the order at least 90% of the time. And so we’re looking at who are those people in the first group of 30. And many times we get a little bit of information about those people. It might just be a card that has their occupation, their marital status, maybe the ZIP code where they live, their age, or sometimes we get a huge questionnaire where they filled out several pages of questions. Now, the other thing we do, and I anticipate we’ll get more into this, is we’ll look up these jurors: we get the list of names and immediately start looking up folks’ LinkedIn profiles, their Facebook, their blogs, their public records. So we have an idea of who is on our panel. And then there is a little bit of stereotyping. So if I’m representing a corporate defendant, most likely people that are wearing business suits are going to be good for my side. I’m not going to start off asking those folks questions. I’d probably start off asking questions of people who might look to be more blue collar or maybe aren’t dressed as sharply, maybe look like they’re of a lower economic status, who are more likely to identify with a plaintiff who’s suing that large corporation. I’d target my questions to those people first. Now the other thing we do is we’d ask one of those general questions again, “How many people think corporations put profits over safety?” If 10 people raise their hand to that question, I’m going to go to those 10 people first to do the follow-ups.

Zach Elwood: Got you. So the legal process often seems like a game, with its team-versus-team nature and its sometimes obscure rules that can lead to complex strategies. And this seems especially the case for the voir dire process. Is there a lot of strategy and game theory involved? I guess you’ve already answered a little bit of this, but…

Dr. Marinakis: Absolutely. The best jury consultants and attorneys who participate in voir dire are able to anticipate the other side’s move and what the consequences of that move will be. So when I’m trying to decide who we want on the panel, the only way we can do that is through the striking process. I have to think about, if I strike this juror, who’s going to take their place? So if there are 12 jurors on the panel and I strike juror number four, now juror number 13 is going to move into that seat. Well, now the panel composition has changed, and I have to think about who the other side is going to strike. If the other side strikes juror number nine, now juror number 14 is going to move into that seat. And you have to be able to anticipate who the other side is going to strike, who is going to move into those seats, and how many strikes you have left. If you use your strikes on a juror you might not want but who isn’t the worst juror, well, if someone worse takes their seat and you run out of strikes, now you end up with an undesirable jury. The other thing that I mentioned was the passing system. So I might strike a juror, and the other side passes. They could pass because they think that there’s someone on the panel that I must strike, a juror that I cannot have on there. So they would pass in order to start saving up their strikes, because ultimately that gives you an advantage. If you’ve got four strikes left and the other side only has two strikes, now you’re able to control the panel more easily. However, you can call the other side’s bluff. If the other side passes and you pass, now you’re stuck with that panel. So there could be someone on the panel that they don’t want, and they’re passing because they think that you need to strike somebody, and then you pass, and now you’re stuck with the panel. So it absolutely is a game of chess. And because it moves so quickly, it’s really a game of speed chess.

Zach Elwood: Right. You said for a lot of them they can be only 30 minutes long.

Dr. Marinakis: Right. And really, that’s the process for asking jurors questions. When it comes to doing your strikes, it’s right there in court. The judge usually won’t even give you time to confer with your co-counsel. They’ll just say, “Okay, plaintiff, what do you want to do?” And then you make your strike immediately.

Zach Elwood: So it goes very quick?

Dr. Marinakis: Yes. Immediately the defense says, “Okay, plaintiff, who do you want to strike?” And the actual striking process can occur within a minute.

Zach Elwood: And are the potential jurors in the room at that point too?

Dr. Marinakis: Depends on the state. In California, you say, “We’d like to thank and excuse juror number four.” And juror number four gets up and leaves the courtroom. Then the judge will say, “Okay, juror number 14, now you take their place. Now the other side, you strike,” and it works like that. In other jurisdictions, they say, “Okay, attorneys, you’ve got one minute, write down the six people you want to strike.” The other side does the same. And then you submit the list, the judge cuts those people, and you’re done. And you don’t get to see the other side’s list; it’s not a back-and-forth process. The funny thing is, sometimes when you do that, both sides end up striking the same person, which is interesting. Either they’re concerned that they don’t know that person well enough and they’re afraid to leave them on the panel, or sometimes one side or the other just gets the juror wrong.

Zach Elwood: Oh, that’s interesting. That sounds like a very stressful process, having to be done so quickly. I mean, it sounds like that could easily lead to some frayed nerves.

Dr. Marinakis: Right. The jury selection process isn’t for everyone. There are a lot of different consultants who work with attorneys, and some of them just do the witness work that you mentioned earlier, where you’re working directly with witnesses on their communication strategy. And some consultants just do the jury selection piece, because they really require two different skill sets. And it’s not for everybody: you really have to be able to stay calm under pressure, think quickly, and anticipate the other side’s moves, and it requires an excellent memory, being able to remember exactly what each juror said, and great organizational skills, keeping track of who’s on the list, who’s coming up next, and what they said.

Zach Elwood: Right, that’s a lot of factors, yeah. So considering all that work and complexity, how much influence do you think voir dire strategy has on a case, in your opinion?

Dr. Marinakis: A lot. It’s almost sad to say, but I think the composition of the jury sometimes has a bigger influence on the outcome of the verdict than the facts of the case. The other piece of my work is performing mock trials. So before a case goes to trial, we will present the case to people in the community, many people, sometimes up to 60, and test the case with them to see what the likely outcome is and what the strengths and weaknesses of the case are. I can tell you, in the many, many years I’ve been doing this, I have never had a case where all the people agree on the verdict, never. Yet they’re hearing the same exact evidence, hearing the same exact arguments, and yet they view the evidence differently. And that’s because each of us has our own experiences and our attitudes and our history that creates a lens. And we view the facts of the case through that lens. And because of our backgrounds, we either accept and remember the things that are consistent with our preexisting beliefs, or we reject, we forget, we misinterpret things that don’t correspond with our preexisting beliefs. And so the same piece of evidence is going to be viewed differently depending on your outlook. And so you can’t necessarily change the facts of the case, but you can change the lens that it’s going to be viewed through. And so ultimately the jury selection piece and deciding who’s on the jury will decide how the facts, the evidence, and the arguments get interpreted.

Zach Elwood: Yeah. That can give you a sort of pessimistic view of how likely it is a defendant will get a fair trial; it just makes me think of that. And so I’m wondering, how much do you see jury selection as working on behalf of your client, and how much of the process is a collaborative attempt from both sides to make a jury as fair as possible? Or I guess one could lead to the other?

Dr. Marinakis: Well, really our system in the United States is based on an adversarial system. There’s other countries out there where they have a single judge or a panel of people who are supposed to be neutral and who decide the case and decide the legal issues. And I think the great thing about our system is it is adversarial, but I think that leads to better, more accurate results. If you have one person like a judge or a supposedly neutral panel deciding the case, who’s going to challenge that panel when they make mistakes? Who’s going to challenge that panel’s bias? Because people are still people. And so someone may be a neutral moderator or a neutral panel of observers, but even those people are going to have their own biases. And if there’s not an adversary or someone on the other side pointing out those mistakes or those flaws, that’s going to lead to a flawed system. Now because our system is adversarial, we are pointing out the mistakes in the other side’s case, the holes in the other side’s case, the injustices in the other side’s case. And ultimately that leads to a better truth. If you’ve got two people arguing and really fighting for their position, that helps weed out the truth for a neutral fact finder. And the same thing is true of jury selection. So while I’m doing that for my client and trying to get off the jurors from the panel that are the worst for my case, the other side’s doing the exact same thing and they’re trying to get off their worst jurors. The end result is really to get a fair and impartial jury, but honestly, that’s not my goal, my goal is to get the best jury for my client, the other side’s jury consultant, that’s their goal to get the best jury. And maybe the person who’s more skilled will get the better jury in the end, but most of the time you end up with a fair panel.

Zach Elwood: Got you. Let’s move on to the behavioral psychology part of the interview. And I’ll ask you, what role does physical behavior play in a typical jury selection process?

Dr. Marinakis: Sure. There are really two things that we’re looking for when we’re observing people’s behavior. And one of them is to identify how they’re answering the questions. Because whether a juror is a good juror or a bad juror, or even just the difference between a bad juror and a very bad juror, sometimes depends not on what they say, but on how they say it. So for example, there may be many people in the audience, or in what we call the venire, who have had maybe a negative experience with something. Let’s pretend it’s an employment case: I’m representing a company that’s being sued for wrongful termination because they discharged someone. So there may be multiple people there who have been fired from a job, but how they respond to that situation will determine who I get rid of on the panel. I might say, “How was that experience when you lost your job?” If one person says, “That was a tough experience,” and another person says, “I was devastated,” there’s a difference. And if I only have one strike and I need to exercise it to choose between those two individuals, I’m going to strike the person who says they were devastated and says it with a sigh, where you can see the pain in their face, as opposed to someone who says, “Yeah, it was tough.” To me, the person who says, “Yeah, it was tough,” says it quickly, doesn’t seem upset, and was able to move on, versus someone who might still be clinging on to the pain of that experience. So I’m looking at their facial expressions. Do they look pained? Do they have a furrowed brow? Are they hesitant? Is there a quiver in their voice? Their body language, do they look sullen and sulky? Or are they confident and able to move past it? The same thing goes in cases where maybe we’re dealing with a cancer case and the plaintiffs are alleging that my client corporation’s product caused their cancer.
A lot of people have had losses due to cancer in their life, but it’s how they dealt with those losses and how it still affects them today that determines whether they’d be a good juror or not. So again, I’ll ask them, “Tell me about that experience.” And if they look like they’re on the verge of tears and they’re having a hard time talking about it and then they say, “But yeah, I can still be fair to your client,” I’m going to have a hard time believing that they can really be fair to my client versus someone who says, “Yeah, it was really tough when we lost our mother, but we enjoyed our time that we had with her.” How that person dealt with that situation will determine how they view the evidence and that filter and that lens that they see the evidence in your case. Go ahead.

Zach Elwood: Yeah. I was going to say, one of the things that I was remembering from the voir dire book is looking for reactions, when someone’s being questioned, someone else might have a reaction like shaking their head slightly. You had one example of someone shaking their head in what they thought was probably a very subtle, minor reaction to a question someone else was asked, but that enabled you to say, “Oh, this guy probably has some anger and some bias on this issue.” So looking for reactions like that.

Dr. Marinakis: Absolutely. It’s interesting because we ask these questions, how many people feel this way? And there’s always people who don’t raise their hand. Usually they just don’t want to speak in front of a hundred strangers and talk about their biases in front of a bunch of people or they’re shy or they just don’t like public speaking, which is most people. So if I’m talking to someone who did raise their hand and I see someone else who’s making faces, who’s nodding along or maybe disagreeing, maybe I’ll follow up on them and I’ll say, “Mr. Smith, I know you didn’t raise your hand to that question, but I saw you nodding along, do you feel the same way?” And then that juror might now open up that, yeah, they probably should have raised their hand. And so each person shows their emotions differently. There are some people who wear their emotions on their sleeve, and they’re nodding along and they’re making facial expressions and they’re wincing or they’re furrowing their brow or they’re scoffing or laughing, and then other people are very stoic. So certainly some people are more difficult to read than others, but those are all cues that I’m watching for when both my client is asking the questions and when the opposing counsel is asking their questions. If they’re asking questions and I see folks in the audience who are either agreeing or disagreeing with them, that gives me some insight into whether that juror would be good or bad for my client.

Zach Elwood: How often would you, or lawyers in general, make decisions or ask follow-up questions based on the physical behavior, or behavior in general, of potential jurors? I was just wondering how often it plays a role: many times, or seldom?

Dr. Marinakis: For me, it plays a role every time. Most of the time my clients are focusing on the conversation, and they need to do that. They need to be tuned in to what people are saying. They can’t both watch the audience and question jurors at the same time. That’s why it’s important to have a consultant or someone else there who can do the watching for you. So they might not even realize the different body language and reactions that people are having or they just don’t have the experience to identify what that means. And it’s very easy to misinterpret body language if you haven’t seen it over and over and over again. But for every person, I’m looking at them, seeing how they respond to questions. And then I didn’t get to the second thing that I’m looking for, which I think is even more important, is signs of group dynamics. And ultimately a jury decision is a group decision, whether it needs to be unanimous or whether it’s 9 out of 12 or something similar, it all depends on who you have on the jury and what are their personalities. So I’m not just thinking about who’s going to be a good or bad juror for my case, but who’s going to be a leader in the deliberation room, who’s going to be a follower, who’s going to be what we call a consensus builder, someone who’s going to try to get everybody to agree. Oftentimes you think teachers, they tend to be consensus builders. They try to get people to negotiate. You’re also looking for people who are what we call contrarians. A contrarian is someone who will always challenge the status quo. They like to play devil’s advocate. And then you’re also looking for people who might alienate others. Someone might be a great juror, but if they’re kind of a unique individual or maybe a little weird or maybe they just smell bad, are they going to alienate the rest of the jurors and people aren’t going to want to agree with that person? 
He might be a great juror, but I’m not going to want him on my jury if I feel like he has a possibility of alienating others. So I’m looking at how jurors interact with one another: who’s having lunch with whom, who’s talking with whom in the hallways, who’s opening the door for everybody or passing out pens (that person’s probably going to be a consensus builder), who’s making jokes that other people laugh at (that person has a possibility of being a leader), who respects whom. So you’re really looking at the jurors and how they interact to determine how they’re likely to interact once they get in the deliberation room. And that plays a huge role in determining how I’m going to exercise my strikes.

Zach Elwood: And there’s different applications for recognizing there’s different types of people. For example, we might talk about this more later, but one example you gave was when there’s a contrarian in the group, you might want them on the panel if you think that they might lead to a hung jury in your favor, right? You might want that kind of person in there.

Dr. Marinakis: Right. So it depends on the facts of your case, your client, your attorneys, what kind of group dynamics you want. And it also depends on the jurisdiction. There are some jurisdictions and some cases that require a unanimous verdict, and other cases where you only need 9 out of 12. So I think you’re referring to one criminal case I had, and I don’t do that many criminal cases, but we do a few a year. And in this criminal case, the evidence was really stacked against my client for the most part. It was a murder case that involved a strangulation, and my client’s DNA was found on the murder victim’s neck and cell phone and then also on the knob of a stove, and the gas on the stove had been turned on all the way up, and a candle had been placed next to it, presumably so that they could blow up the crime scene. So we thought this would be a very challenging case given the popularity of DNA evidence on TV; at the time, CSI and Criminal Minds were really big. And so we had some serious concerns that we would lose the case for our client, who we believed was innocent. And so we thought that probably the best we could get was a hung jury. And so we were looking for a contrarian who would be able to challenge whatever the group thought, would always play devil’s advocate, would stand his ground and be a strong voice and ultimately hang the jury. So we looked for someone who, during the jury selection process, was challenging everything. The judge says, “Sit in this order.” “Well, why do I need to sit in this order?” “Here’s a piece of paper, call this number.” They’re just always challenging the bailiff, the other jurors, even the judge, and really expressing unique views. So any time an attorney would ask a question, they might say, “Well, yeah, that’s true most of the time, but for other people, at other times, this happens.” And so immediately we were able to identify this juror as a contrarian, and I don’t think the other side really did.
This contrarian was dressed well, he was a successful banker, and so I think the prosecution thought he would be a good juror for their case. Usually people that are higher SES or Republican tend to be more likely to decide for the prosecution in criminal cases. So they left him on the jury. We left him on the jury because he was a contrarian, and ultimately he was the one that fought on behalf of our defense. Just briefly, our defense was DNA transfer: that our client had used a towel in the victim’s apartment, and that the murderer, the true killer, used that towel to clean the crime scene, to wipe the victim’s neck, to wipe the knob, and he transferred the DNA from the towel to the crime scene. It’s a very unconventional defense; there is a scientific basis to it, but it’s not well known. And so this juror, who was the contrarian, was able to argue that, and ultimately we ended up not with a hung jury, but with a full acquittal for our client.

Zach Elwood: Oh, wow. Okay. Let’s talk a little bit about some specific behaviors from the potential jurors. Do eye contact tells come into play at all? How they look at you, and maybe you can read some anger or frustration from the questions you’re asking. Does that ever play a role?

Dr. Marinakis: It does play a role, but I really caution against trying to, what we call, read tea leaves. Because oftentimes the signs of nervousness are the same as the signs of someone who might be lying. And this is something we work on a lot with our witnesses in terms of building their credibility. So someone who’s not making eye contact, it could be that they’re just nervous, especially jurors. I mean, being asked questions in front of a group of people by lawyers and judges is very unnatural for them. And so most of the time there are a lot of jurors who are nervous to do so. And they might not be making eye contact because of that, not because they’re not telling the truth. So you really have to be cautious. Same thing with people whose arms are crossed. There are sometimes lawyers or clients or jurors who feel like if someone’s arms are crossed, they’re being standoffish, they don’t like your position. Well, maybe that person is just cold. Or sometimes if someone has a big belly, it’s comfortable to put your arms on top of your belly.

Zach Elwood: Yeah, exactly. You always hear that stereotype about crossed arms being standoffish. Just because of that, even though I know it’s often untrue, I find myself uncrossing my arms in groups just because I don’t want people to think I’m standoffish, even though I’m not. So yeah, it’s like everything: it’s often ambiguous and doesn’t give you as much information as some people think.

Dr. Marinakis: Exactly. And so when we work with our witnesses, we work with them on things like uncrossing their arms and making good eye contact, because we don’t want their nervousness or personal tics or habits to be misconstrued as untruthfulness. What we do look for, though, is inconsistencies in how someone is reacting depending on who’s speaking. So for example, if someone has their arms crossed both when the plaintiff is asking the questions and when the defense attorney is asking the questions, it probably doesn’t mean anything. But if their arms are always crossed only when the defense lawyer is speaking, and yet they’re sitting forward and they look more attentive and they’re leaning in when the plaintiff attorney is speaking, I might take notice of that and then try to make an interpretation from the differences in their behavior. So it’s not the behaviors themselves, but how they differ depending on who’s speaking and what evidence is being shown.

Zach Elwood: Looking for those imbalances in behavior, as we talk about a lot in poker.

Dr. Marinakis: Exactly.

Zach Elwood: Yeah. So let’s see, what else do I have on this list? Are there certain things that prospective jurors often lie about such as knowing how to read or using drugs in the past, things like that?

Dr. Marinakis: I think more often than not people are honest. I mean, most jurors are told they have to swear to tell the truth, and I think most people do take that very seriously. Certainly you hear stories about people trying to get off of jury duty maybe pretending that they can’t speak English or that they can’t read or write or the big thing is pretending that they’re racist, even though it’s often silly, because race rarely plays a role in these cases. But more often than not, I think people are trying to be honest. The bigger issue is that most people are unaware of their own biases. People want to think of themselves as good people, fair people. And so regardless of their backgrounds, most people will say, “Yes, I can still set that experience aside and be fair and impartial.” But usually they do have biases, in the industry we call them implicit biases, that people are unaware of, but that will influence how they view the evidence in the case. And so my role is not to necessarily detect lying, but to detect these implicit biases and get the juror to ultimately realize that they can’t be fair. And we have a number of techniques that we use to do that to try to get a juror to realize that maybe this isn’t the case for them, and they actually, despite their best efforts, can’t be fair to my client.

Zach Elwood: That was an interesting thing in the book with the suggested voir dire questions. The book was aimed at strategies for getting potential jurors to verbally admit their bias and walking them through it: once they started to show bias, getting them to admit in a clear way, “Yes, I’m biased. I can’t be unbiased on this.” So that was interesting, seeing those strategies in that book.

Dr. Marinakis: Yeah. It’s a difficult thing to do, again, because most people feel like they can be fair. So you really have to get the juror to feel the bias. So here’s just an example, instead of just coming right out of the gate and saying, “Who here is going to have a hard time setting aside sympathy for this person with cancer?” You can’t ask that question right away because not that many people are going to raise their hand. But if you preface it with, “Mrs. Smith has been through dozens of surgeries. She’s spent months in the hospital. She can’t breathe because of this illness, it’s like suffocating.” And you start to describe it that way. “Her family has had to quit their jobs. They’ve had to put their house on the market to pay these medical bills.” Now all of a sudden you start to conjure up these images and these emotions, and now the juror can really start to feel in the gut of their stomach that sympathy and emotion. So I’m going to build that first, and then I’ll say, “Okay, given all that, who’s going to have a hard time at the end of this case looking at Mrs. Smith and her husband and her children in the eye and telling them, ‘You know what, we can’t give you any money because you didn’t prove your case.’ How many of you think you’re going to have a hard time doing that?” So now they felt that emotion, and you’re going to get more people that raise their hand to that question than you would have if I came right out of the gate and asked it.

Zach Elwood: And you would be asking that from the other side, you wouldn’t be asking that from the plaintiff’s side?

Dr. Marinakis: Exactly. And I should have brought this up in the beginning, my firm, we primarily represent defendants in civil cases. So we might work for plaintiffs every now and then, but probably more than 90% of the time we’re representing the company, the corporation or the manufacturer, the employer, we’re usually on the side of trying to identify people who are going to have a hard time setting aside their sympathies.

Zach Elwood: Right. We’ll talk more about that later, about some specific strategies. So my next question is, how many of the decisions you make are based on quick-read kinds of stereotypes? For example, this person’s an older blue collar woman, so she might have certain stances; or this person’s piercings and tattoos would make them more likely to side with the underdog, the plaintiff. How much do those kinds of stereotypes play a role in general, would you say?

Dr. Marinakis: It depends on the jurisdiction and how much you can question the jurors. What we like to say is that a juror’s attitudes are the most predictive of how they’re going to view the evidence, but there are some states and some judges that won’t let you ask about the jurors’ attitudes; you can only ask about their experiences. Now, sometimes experiences correlate with attitudes. So the fact that someone has had a relative with cancer might mean that they’re more empathetic. It might not, but it could. There are some jurisdictions where you don’t even get to ask about that. You might only get to see their demographics. And if a juror doesn’t raise their hand, there’s no opportunity to ask follow-up questions, and yet you need to make a decision whether to keep or strike that juror based on someone you’ve never even spoken to. And unfortunately, you have to rely on stereotypes, because the truth is stereotypes are a statistical advantage. Let me give an example. I’m a white woman, and say you went to Starbucks and didn’t have time to call me to ask what I wanted, and you ordered me a pumpkin spice latte. More often than not, you would be right that a white woman would enjoy a pumpkin spice latte. Now, me personally, I hate them, so you would’ve been wrong. But even if it’s just 6 out of 10 white women who like pumpkin spice lattes, you’ve now increased your odds of getting the right answer. And in my field, it’s all about increasing odds. You will never be able to 100% predict anything, but if you can increase your odds of getting the right person, that’s the end game. And unfortunately, it’s awful that sometimes you have to use a stereotype, because it’s not going to apply to everybody, but it is a statistical advantage.

Zach Elwood: You’re kind of forced into it. I mean, you have a very limited amount of time to make decisions on very limited information. So you’re just trying to pull information from wherever you can, even if it’s not the greatest information.

Dr. Marinakis: Exactly. So if 6 out of 10 times a blue collar worker is going to side with the plaintiff, and all I know about this person is that they’re a blue collar worker, and I have to decide between them and a white collar worker, you’re right, I would probably strike the blue collar worker if that’s all I had to go on, because those are the best chances that I have. So that’s why we really advocate to judges, “Please let us talk to individuals, let us get to know them.” Otherwise we’re left making decisions that are unfair and, quite frankly, unconstitutional if we’re basing them on race, gender, sexual orientation, or any other what we call cognizable group. But if a judge doesn’t give us an opportunity to ask questions, then that’s all we have to go on.

Zach Elwood: Right. It does seem strange, considering what you’ve said. It seems logical that the voir dire jury selection process is very important, so I’m surprised that the time limits given are so short.

Dr. Marinakis: Yeah. It just really depends on the judge. And some judges don’t see the value, and they feel like, “Well, people can be fair and impartial, the case should rest on the evidence.” But I don’t think those judges have sat in on all the mock trials that we have to see how much the juror’s background really influences the verdict.

Zach Elwood: Yeah. It’s a very optimistic view of the average jury I feel like. That stance that, “Oh, it’ll all be the same probably.” So are there laws pertaining to researching jurors like looking at their social media accounts? You had mentioned that, and I was just wondering if that was always allowed or not sometimes.

Dr. Marinakis: Presently there are no laws that prohibit researching jurors. There are, however, ethical rules for attorneys and for people who work for attorneys about contacting jurors. What constitutes contact can often vary, and there's an opinion out there that basically says that even if you don't initiate the contact but you cause a contact, that could be an ethical violation. So here's an example. If you look at someone's LinkedIn page and you're logged in without adjusting the privacy settings, that person will get a notification that says, "Christina Marinakis viewed your page." Under certain court rules, that's a violation, because that is a direct contact with the juror. Even though I never sent the juror a message and never tried to request to connect with them, they now know that I looked at their page, and that's a violation. So really, if you're doing research on jurors, you need to understand the applications and the platforms that you're using, and the settings, to ensure that there's no unauthorized contact.

Zach Elwood: So you got to be very sneaky.

Dr. Marinakis: And just ethical. You can’t go around the rules and say, “Okay, well, I can’t friend request you, but I’m going to have my secretary friend request you so I can see your private page,” that is against the ethics rules. And I say ethics, but ethics rules are also actual rules. If you violate those, you could lose your license and you could lose the case. So those are the rules and laws that pertain, but there’s really no limit unless a judge has particularly said, “In this case you cannot search the jurors.” So it’s really more judge-based, but in my career I’ve only had one judge who ever did that. And that’s because the jury consultant for the other side had her laptop up and was looking at jurors pages, and one of the jurors saw it and reported to the judge that it made them uncomfortable when they saw their Facebook page on the…

Zach Elwood: That would not make you feel very safe.

Dr. Marinakis: Right. But everything that we do search, and this is unlike probably what you've seen in TV or movies, everything is public records. We do not search anything that is private. So, yes, we might look at property deeds or vehicle registrations, history of bankruptcies, liens, criminal records. These are all public documents that anybody could find if they had enough time.

Zach Elwood: You’re not hiring a private investigator, it’s open source. Got you. How often is your read of a juror accurate?

Dr. Marinakis: It's hard to calculate, but it's something that I do keep track of. People ask me, "How many cases have you won?" And I don't feel like that's a good indication of whether you're a good consultant. Sometimes the facts of the case are bad, and you can only control who's on the jury panel, and you only get so many people to strike. But what I feel is an indication of whether you're a good jury consultant is what you said: how often do you get a person right? And so I've kept track of it, and I feel like overall it's about 10 out of 12 that I'm able to identify whether they'd be a plaintiff juror or a defense juror. And we often try to predict who's going to be a leader versus a follower, and I think about 10 out of 12 times we're right. Sometimes it's 12 out of 12. I think that's pretty good. I don't know what other people's stats are, I've never compared with anyone else, but me personally, I have kept track of that.

Zach Elwood: That’s interesting.

Dr. Marinakis: And some people are more difficult to read than others certainly, so it does vary from case to case. Sometimes I’ll get all 12, sometimes it might only be 9 out of 12, but I usually say there’s always one or two that surprise you.

Zach Elwood: Are you able to say or see after the trial is over what every juror voted or how it worked, what the breakdown was, if that makes sense?

Dr. Marinakis: Yes. So in every case, the jurors have to sign the verdict form. Say there's 12 jurors, and we'll say, "Okay, everyone who agrees with this verdict must sign the form." So you'll get to see the names of the people who signed it versus the people who didn't. There's also something called polling the jury. The jury foreperson might say, "Okay, we the jury find the defendant liable" or "not liable." And then counsel can request to poll the jury. And they'll say, "Juror number one, is this your verdict? Yes or no? Juror number two, is this your verdict? Yes or no?" I think you might have seen that if you watched the OJ Simpson documentary or one of those documentaries, that they poll the jury. And then oftentimes we interview the jurors afterwards. We talk to them individually, we do interviews, we take them to lunch to really find out what they thought of the case, what the strengths and weaknesses were, and how we can improve for future cases.

Zach Elwood: Yeah. I would think that would be very interesting just to break down how these people you chose at the beginning of the process went through the whole process and what their thought processes were along the way. It seems like that would be very interesting.

Dr. Marinakis: Oh, absolutely. One of my favorite things to do, if I've been involved in the jury selection, is to interview folks afterwards. And it's funny, because almost always I finish the interview and I say, "What questions do you have for me?" And inevitably they say, "Why did you pick me?" I don't go into everything we've talked about in this podcast, but we talk about how people's backgrounds can influence how they view the evidence, and really, it's not that we picked them, it's just that we didn't pick to get rid of them.

Zach Elwood: Yeah, you didn't not pick them. You didn't strike them. Let's see what else we have here. How often do potential jurors act angry or aggressive, or act out, in order to give the impression that they really don't want to be there? And does acting that way make them more likely to be rejected?

Dr. Marinakis: I don't think people are acting when they do that, I think they're legitimately distressed, especially since a lot of the cases that I do are multi-week trials, and it is very nerve-wracking for most people to even think about having to miss six weeks of work, or having to miss a vacation, or not being able to pick up their child from school every day for the next six weeks. That is very anxiety-provoking. Some people handle it better than others, but I have definitely seen people break down, cry, throw a temper tantrum, and I don't think they're acting, I think they're really in distress when that occurs. And ultimately this goes back to the hardship issue, and so it's the judge's decision whether to let that person be excused or not. But there are certainly times where someone is so distressed, and maybe they don't meet the statutory requirement to be excused, and the judge will kind of look at the attorneys and say, "Well, what do you guys think? Do you want to agree to let this person go or not?" And sometimes we'll look at the other side and say, "Do we really want this kind of bad karma? Is this good for either of us?" Probably not, because that juror might take it out on one side or the other. They could take it out on the plaintiff for filing a frivolous lawsuit, or they could take it out on the defendant for refusing to settle what they see as a legitimate lawsuit, or it might not bother them at all once they get seated.

Zach Elwood: Wild card.

Dr. Marinakis: Exactly. And usually neither side is willing to take that chance and will agree to excuse the person. But again, it ultimately rests with the judge. And if the judge says, “Look, they don’t meet the statute.” Say for example, they say they have an extreme financial hardship, but the truth of the matter is they actually get paid for a lot of the days of jury service or they’ve got a savings account, and it’s not as extreme as someone who doesn’t get paid at all. That judge might refuse to let them go. Personally, I wouldn’t waste one of my precious strikes on someone like that.

Zach Elwood: That's what I was going to ask: if you have someone who both sides want to get rid of, somebody has to strike that person, and you don't want to waste strikes. It seems like there's not a good way to collaboratively strike a person, so it's kind of wasting a strike if you do it.

Dr. Marinakis: Exactly. And neither side will be willing to do that, but usually the judge will. If someone is truly, truly that distressed, most of the time most judges will let the person be excused. Or the other thing is if the juror gives a hint of a cause challenge. Maybe it's a cancer case and their mother just died of cancer, and we could say, "Okay, plaintiffs, you agree that this juror probably couldn't be fair, right?" [wink, wink] And we agree to excuse the juror on a cause basis, but it's really truly because we just think the juror's going to be distressed.

Zach Elwood: Not using the strikes, right, yeah. So you can still find a cause so that you don't have to use strikes. I was wondering about those initial questionnaires; I don't think we've talked much about them. When do you use written questionnaires versus doing the questioning in person?

Dr. Marinakis: We almost always suggest to our clients to submit a questionnaire. And that's just because people tend to be more candid when they're writing something down versus in open court in front of a bunch of strangers, especially in cases that involve sensitive issues. So I'm involved in a rape case that's coming up, and this judge never uses questionnaires, but we feel strongly that it is to the disadvantage of everyone in the courtroom to try to ask questions about people's abuse history in open court. That puts everyone in a bad position: the judge, the lawyers, the juror. But we need to ask those questions because it's important to know their background and history. So we're going to advocate strongly to this judge: "Look, this case is very unique. We don't want to embarrass jurors, but we need to get these questions answered, so please allow us to use this questionnaire." And we present the questionnaire to the judge in advance, and hopefully the judge will agree to that.

Zach Elwood: You wrote a piece on the TV show Bull, which I've never seen, but that show is based loosely on Dr. Phil McGraw's jury consultancy business. It seems quite exaggerated from what you wrote of it, which is not surprising considering it's a TV show. One of the things you wrote was that in the pilot episode, Dr. Bull shows his client an ultra high-tech jury monitoring system complete with over a dozen flat screens and devices that monitor mock jurors' physiological reactions through palm reading devices. He claims to have a system exclusively used by Homeland Security to collect a wealth of information about jurors and their family members that cannot be obtained elsewhere. So can you talk a little bit about how unrealistic and exaggerated that is?

Dr. Marinakis: Yeah. And I think I already touched on that. He talks about using Homeland Security to get private information. That’s not something that we could do. Even if we had the technological capability to do it, ethically, legally, that’s not something that we would do. And the biggest thing I’ve noticed about the show, and I’ve only seen a couple episodes, is they talk about it in terms of a 100% guarantee. “We can 100% predict whether someone will be a plaintiff juror or defense juror based on their physiological responses or their responses to questions,” and it’s never 100%. My whole occupation is based on increasing the odds, increasing the odds that this juror will be favorable. And in the show they use what they call mirror jurors, in our industry we actually call them shadow jurors, where we try to find people who are similar to people who are on the actual jury. And they sit in the audience during the trial and watch, and we talk to them at the end of the day. The value in that is not being able to predict exactly what the jury’s going to do, the value is in learning what are the strengths and weaknesses of our case. What is confusing? What do we need to clear up on? What are some things that might be bothersome? It’s more of that qualitative feedback than a quantitative statistical prediction of what the actual jury’s going to do. And that’s because no two people are alike. You can find a mirror juror or a shadow juror who’s very similar to someone, but surely they haven’t had the exact same life experiences. You never know how someone’s experiences are going to influence how they view the evidence.

Zach Elwood: Right, you're just trying to get another set of hopefully somewhat similar eyes to give you different points of view and feedback. So in the voir dire questionnaire book that you helped write, you had some strategies for eliciting bias from potential jurors. And I really liked this strategy you talked about in there of downplaying the strengths of your case during voir dire, in essence drawing jurors out to reveal the strength of their prejudice. In the book, there's an example where, by giving a very simple synopsis of their side's case, one side made the jury pool more willing to let its biases be known by seeing the weaknesses in that case, and the most prejudiced, most biased people were more easily exposed. And by doing that, the other side was not able to know who to strike, because most of the potential jurors were focused on the weaknesses of one side of the case. So that strategy made a lot of sense to me, and I wonder if you'd talk about it a little bit more. Is it a pretty well-known and standard strategy?

Dr. Marinakis: So this is what I like to call throwing your mini opening. I should start off by saying that in most jurisdictions the lawyers are allowed to give a little synopsis of the case before they start questioning jurors, to help orient the jurors to what the case is about and what each side's main arguments are. And this is a very counterintuitive approach, and I've actually never seen it done before. I don't want to say I invented it, because I don't know what other jury consultants are doing. But I had noticed that when my clients were giving very strong mini openings, coming right out of the box and saying, "You know what, our product was approved by the FDA, the plaintiff who is alleging it caused her cancer has a family history of cancer, and we firmly believe that our client did not cause her cancer." When they open up with that type of what we call mini opening, all of a sudden you start getting jurors raising their hands and saying, "Well, wait a minute. If your product is approved by the FDA, then I'm already on your side. If she's got a family history of cancer, then no way your product caused her cancer. I can't be fair." And now we've just lost our best jurors in the case; they're gone for cause. So I noticed that was happening, and I thought there's got to be a better way. I then recommended it to a client who trusted me. I've worked with him a lot, we've never had a bad verdict ever. And I said, "You know what, I think you need to throw your mini opening. Don't get up there and tell them this stuff. Give the bad parts of your case. Let them know that there's 50 people out there who used your product and all 50 of them got cancer. Put those types of facts out there. Talk to them about how your CEO doctored a piece of evidence. Put the really bad stuff out there." And he thought, "Oh, no, I can't do that. We'll lose the case. 
My client will kill me." But we did it, and the other side came out really strong and put on all the strong evidence for their case. And what happened was the jurors started saying, "Well, obviously I'm going to side with the plaintiffs. Your CEO already admitted wrongdoing, and clearly a lot of people have died from your product or gotten cancer. I can't be fair to the defense." We got rid of 27 jurors in that case for cause, which is really unheard of, to get rid of that many people who said they couldn't be fair to the plaintiff or couldn't be fair to the defendant, my client. And now my client's sitting in the courtroom sweating bullets, thinking, "Wow, these people really hate us." Well, you know what, all those people who really hate us are off of the panel now.

Zach Elwood: You’re really drawing people out. It’s like putting a trap in the ground and people are just falling into it exposing their biases.

Dr. Marinakis: Right, and so we got rid of all those people. Now, who are the people that are left? The people that are left on the panel are the people who heard all of those terrible things about my client and about the company, and who nevertheless still kept an open mind and were still able to be fair. Those are the jurors that are truly going to be fair and impartial. And the other side didn't identify any people who might be for the defense, who might say, "Well, I think corporations are good. Corporations employ people. Plaintiff lawyers are always chasing ambulances." Nobody said that, because they were so focused on the bad conduct.

Zach Elwood: Yeah. And I'm sure the other side, if this isn't a very common strategy, was thinking, "Oh, this case is going to be so easy. Everybody hates this company." And then you've weeded out the worst potential jurors and are left with a more analytical group of people.

Dr. Marinakis: Yeah, they never saw it coming. I'll tell you, in that case, we took a lunch break, and they were high-fiving each other. They thought, "Wow, all these jurors hate these people. We're going to win the case." And then as the judge excused them, "You're excused, you're excused, you're excused," you could see the smiles on their faces all of a sudden turn to severe panic. They got no cause challenges, we had 27, and they had no idea who to use their strikes on. We ended up with an amazing jury, and they just settled the case at that point because they knew that there was no chance of winning. So it really is counterintuitive. But I tell my clients, "Look, voir dire is the time to identify those people. Do you want those people to say those horrible things about you now, in voir dire, or would you rather have them say that in the deliberation room when they're trying to come back with a verdict?" Get rid of them now. And then, now that you've got your jury seated, come out with a really strong opening statement. Now that you've got your 12 fair people, you say, "We're approved by the FDA, and this woman had a history of cancer, and all those other 49 women, they too had a history of cancer in their families, and they used these other products," and so on. Convince the jury of your case during openings, not during voir dire.

Zach Elwood: Yeah. It’s also interesting too because that process of getting them all talking and on the same side in the very beginning draws people out too, because other people are talking about it. If the group was talkative like that, it seems like it would lead to more volunteering of bias basically.

Dr. Marinakis: You're absolutely right about that. Once one or two people start opening up, other people feel more comfortable opening up. And here's a technique we'll use too: say Mr. Jones just voiced that he hates corporations. I might say, "Okay, Mr. Jones said that, how many people feel like Mr. Jones?" And then people start raising hands. "Okay, Mr. Jones said he couldn't be fair, do you kind of feel like that too?" They say yes. Okay, now I just got two jurors off for cause very quickly.

Zach Elwood: Right. And I also liked something else you talked about in that book: using your own body language to encourage people. Like with that question you just mentioned, "How many people…", you'd be raising your hand too, to show that it's socially acceptable or to encourage them to express their bias.

Dr. Marinakis: Exactly. That's all part of getting people comfortable opening up, and almost subconsciously, if we see someone doing something, we want to emulate it. You might say monkey see, monkey do, and I don't mean to imply the jurors are monkeys, but in terms of personality and behavior, if I raise my hand when I'm asking the question, "How many people feel this way?" that almost subliminally sends the message to the jury: "It's okay, raise your hand." And it also goes to the way that I ask the question. If I say, "Does anyone feel that way?" it almost implies that this is an unpopular or unacceptable belief, versus when I say "how many of you." "How many of you" implies that this is a common belief and that there are certainly going to be people in the audience who feel this way. So: "How many of you feel this way?" Using that body language and the wording of the question together makes people more likely to raise their hands to those types of questions. Another example is just nodding my head slightly. Someone is telling me about their experience with cancer, and I'm nodding along very, very slightly, or I have my client do this. You can barely notice they're nodding along, just very slowly nodding: "Yes, I'm following you, I'm feeling you." Match the juror's facial expressions. If the juror's wincing, the attorney should be wincing. If the juror is smiling, the attorney should be smiling. These are all techniques that I learned in my experience as a clinical psychologist doing therapy. It's about matching a person's emotions, getting them to tell me more, and reflecting back. A juror says, "Yeah, it was a tough experience." "Wow, that sounds like that was a really tough experience for you. Tell me more about that," reflecting back to the juror what they said.

Zach Elwood: That reminds me of a popular interviewing technique where you ask someone a question, they answer it, and then you give a little pause. And the person being interviewed will sometimes fill in that slightly awkward silence; they'll volunteer something even more meaningful at the end. Does that ever come into play, giving the little silence?

Dr. Marinakis: Yes, absolutely. People are uncomfortable with silence, and so I recommend attorneys do that during voir dire to draw out more information. And it's funny, because we give our witnesses the opposite advice: "Don't fall into that trap." Something we teach them is that these are the tricks opposing counsel will use during cross examination to get you to volunteer more. Be comfortable with silence.

Zach Elwood: So when you've answered the question, you can stop talking.

Dr. Marinakis: Yeah. The other thing people will do, another kind of trick, is to ask the same question but in a different manner. And people will think, “Well, if you’re asking the question again, you must be looking for something different,” and they’ll give a different response or give more information. So we tell our witnesses, “Look, no matter how the question is asked, even if it’s asked in five different ways, your response is always the same.” I answered that question, this is my answer. Don’t volunteer more.

Zach Elwood: Getting to your witness preparation and communication training. A couple questions about that, are there any rules around how you’re allowed to advise a witness on how they should speak or act when they testify?

Dr. Marinakis: Well, first I should say that when we're meeting with our witnesses, that is protected by attorney work product and attorney-client privilege. So anything a witness says to the attorney on the case is confidential. When we conduct these sessions, we always have an attorney in the room to ensure that our session is covered by that attorney-client confidentiality. Now, I'm a lawyer myself, so I don't have to worry about that as much. But if there's a jury consultant who does not have a law degree and is not an active member of the bar, you must have an attorney there to keep that conversation privileged. Now, that said, there are still some rules, and these go back to those ethics rules for attorneys, which are actually laws. You cannot tell a witness to lie. And in fact, you can't even ask a question on direct examination if you know that witness will lie; that is against the ethics rules. But we never do that anyway. We don't want witnesses to lie, mostly because they have poker tells, and jurors will call them out on it. So we're not telling our witnesses what to say, but how to say it. How do you word something, in both verbal and non-verbal behavior, to give what you say more credibility so that the jurors believe your version, your truth? How do you effectively communicate that truth so the jurors believe you and don't misinterpret signs of nervousness or personal tics as signs of dishonesty?

Zach Elwood: Right. That brings an interesting point because the fact that you have to do this is mostly due to the fact that everybody thinks they can read people well, even though they can’t. So you’ll have a lot of people in the general population who are like, “Oh, she looked down when she said this, she’s lying. Or she was blinking a lot, she’s lying.” It’s just like in poker where usually those things are so ambiguous you would have to have such a big data set to even reach a conclusion like that. So you’re basically trying to make your witnesses unreadable basically, because people are going to draw all sorts of weird conclusions from their behavior.

Dr. Marinakis: Exactly. And people watch these TV shows, Lie to Me, Bull, those types of things, and they think they know the signs of untruthfulness when, you're right, more often than not those are signs of being nervous. Even just having your hand over your mouth is a huge thing. When I talk to jurors, they'll say, "Oh, I didn't trust that witness because he had his hand over his mouth. He was afraid that the truth was going to come out because his hand was over his mouth." And usually the person is just nervous and it's a nervous tic. So I have to work with witnesses to get them to, you're right, be unreadable and to be confident. Even if you're not confident, even if you're nervous: speak confidently, keep your hands down, make eye contact. That's going to make you more credible to the jury.

Zach Elwood: Right, yeah. You just want them to get across the content of their testimony and leave out all the extraneous behavioral stuff.

Dr. Marinakis: Right. And that way, jurors will remember it. So we talk about themes and having thematic content. Most jurors have very limited attention spans, especially in today's age of 140-character news stories. A jury's not going to listen to a two-minute diatribe about something, but they will listen to a couple of seconds. So we work with the witnesses on their non-verbal skills and also their verbal skills: being short, direct, and to the point. Otherwise, they're going to lose the jury and the jury will tune out.

Zach Elwood: Right, makes sense. Any other interesting examples of reading people from your work come to mind? Any great reads you’re proud of or that you’ve witnessed other people in the industry make?

Dr. Marinakis: I think I probably have more stories about bad reads, where I have lawyers who, yes, have a lot of experience, but so many times they'll say, "Well, I just don't like juror number seven. There's something about her. I just don't want her on my jury. She gives me the heebie-jeebies. She's giving me a bad look." And I have to say, "That juror just has resting bitch face. That's just how they are. On paper, they look like a great juror." So many times I have clients say, "We've got to strike her, I've got a bad feeling," and I really have to talk them off the ledge and explain, "Look at her face. She's making the same face when the other side is talking too."

Zach Elwood: Right. So just her baseline and they’re overreacting to small data points.

Dr. Marinakis: Right. Or they'll say, "Juror number seven is totally on our side. She's nodding, she's taking a lot of notes." Then all of a sudden that juror comes back with the complete opposite verdict, and the attorneys are just shocked. And I talk to the juror, or I'm observing them, and I realize that they're nodding along not because they agree, but because they're following. People do that: "I'm following what you're saying," nodding along. Or I'll talk to the juror, and they say, "Yeah, I was doodling. I was drawing," or, "He was talking and I was writing down that's BS, I don't agree with that." So just because someone's taking a lot of notes doesn't mean they're writing down what you're saying; they could be writing down that they hate what you're saying.

Zach Elwood: This guy’s an idiot, yeah. The nodding is interesting because I do that a lot when I talk to people. Nodding a lot, small nods, just as an encouraging way to set people at ease. And I think it does lead people to like tell me things they otherwise wouldn’t because they think I’m on their side. So I get random people confessing weird things to me sometimes, and I think it’s just because I nod and look like I’m interested and sympathetic.

Dr. Marinakis: Exactly, and that’s what I was talking about earlier when you’re talking to the jurors, doing that very subtle head nod gets them to open up even more to you.

Zach Elwood: Yeah, I think that’s pretty powerful. We’re near to wrapping up here, I won’t keep you too much longer. Do you think recent popular documentaries that show the inner workings and frequent mistakes of the legal system have lowered people’s trust in how fair jury trials are? Do you think that impacts your work?

Dr. Marinakis: I do worry about this a lot. Before, when I told people what I do, they had never heard of it. But now, with the documentaries and with the show Bull, people have a bad impression of what we do. They think we do things unethically, because in the show they're always doing things that are unethical: talking to the judge, manipulating the jury, talking to jurors. And that's not really the reality of what we're doing, so it gives our profession a bad name. The other thing I see is that there are more and more stories about misconduct, whether it's corporate misconduct or government misconduct, and people being bought off. That's what all these TV shows and documentaries about an unfair justice system are about. And the truth of the matter is that yes, that happens, unfortunately it does, but that's not the norm. But because people watch these documentaries and these shows, they come in with these expectations; those are almost their biases about what they think is the truth. And usually, being on the defense side, that works against my clients' favor, where people think, "Oh, okay, you're approved by the FDA, but I'm sure you guys probably bought off the FDA and you manipulated the scientific studies." It's like, come on, I know that's happened…

Zach Elwood: Everything’s a conspiracy.

Dr. Marinakis: Yeah, and so we're finding we're having more and more difficulty getting fair jurors for our cases because so many people have been tainted by these… And there are bad companies out there. There are the Enrons, there are certain companies that have done bad things, and it might only be one or two individuals within that organization that were corrupt, but people feel like now everybody's corrupt, all corporations are corrupt, and it really works against… The other thing I see a lot is people feel like, "Well, the corporations must have more resources, so it's not fair. Corporations can hire people like jury consultants to do that." And the truth of the matter is it's actually more balanced than you might think. Corporations are usually insured, and the insurance carrier will limit how much can be spent on trial. Whereas with plaintiff lawyers, you see a family versus a corporation, but what you don't see is that the corporation is really defended by the insurance company with a limited budget, and the plaintiff lawyer may be coming off of five other trials where they just got multi-million dollar verdicts. So you say, "Oh, the family doesn't have resources," but plaintiff lawyers represent people on a contingency basis. So if the attorney doesn't win the case, the plaintiffs pay nothing. That family loses the case, they pay nothing, and the law firm is putting up all the costs ahead of time. Now that law firm is going to take the 100 million dollars they just got on a previous case and use those resources to hire their own jury consultant and do the mock trials, and they've actually got the money to do that stuff that maybe even the big corporation doesn't. Seems hard to believe, but that's actually more often the case than not.

Zach Elwood: Interesting. That’s an interesting thing because I would’ve been in that group that thought it was quite unbalanced usually.

Dr. Marinakis: No, I can tell you, in terms of clients, the plaintiff lawyers are the ones with the private jets and the multiple yachts, because they’ve got these 100 million dollar verdicts in the past. And then my clients, I’m not saying they’re not well to do, they’re big corporations, but they’re nowhere near the type of stupid money that some of these plaintiff lawyers have.

Zach Elwood: And they’re also limited by how much they can spend on that too.

Dr. Marinakis: Right, and now in the criminal realm, it might be a little bit different. You certainly have criminal defendants who can't afford a jury consultant or the best lawyer, and there is definitely more imbalance there, but neither can the state. The state is not going in there and spending a lot of money trying to argue these cases or to hire people, so it's almost balanced there too. And in fact, if someone is on trial for capital murder and they have a public defender, they are awarded funds for a jury consultant. I've done many cases where we've worked for criminal defendants who are indigent, or we just offer our time pro bono, for free, representing criminal defendants to give ourselves more experience and to give back to the community. And so you find more evenness and parity there than you might otherwise think.

Zach Elwood: Nice. So my final question would be, as you’ve worked in the profession so long, do you have any opinions on things you would change in the jury trial legal system that would make cases more fair in general? Anything that you would change?

Dr. Marinakis: I touched on this earlier about relying on stereotypes, and I think we really need to advocate somehow for jurisdictions to allow a better opportunity for the jurors to be questioned. There are some states, again, like in the northeast, where the judge is the person who asks the questions, and the attorneys never even get to talk to the jurors. And so you might know nothing about them; all you can see is their race and their gender, how they dress, maybe their education. And as I mentioned before, that really forces us to base decisions on stereotypes, and that's just really unfortunate. So certain laws, and the judges who don't allow sufficient questioning, are really doing society a disservice.

Zach Elwood: Okay. That's about it. And we will end with some places you can go to learn more about Dr. Marinakis's work. There's litigationinsights.com, the company she works for. And there's a blog series on there that people might find interesting, with client questions and answers from Litigation Insights.

Dr. Marinakis: Oh, absolutely. We post two blogs a month, and these are all based on questions that our clients have asked us, and they range anywhere from what is the statistical social science research behind something to should I shave my beard for trial and what should I wear? So there’s a variety of different questions that have been asked of us, and we answer them for people, and it’s all available on the website under our blogs.

Zach Elwood: There's also the voir dire book. To find that, if anyone's interested, go to jamespublishing.com and search for voir dire questions on that site. The book is called Pattern Voir Dire Questions, and it's the second edition that Dr. Marinakis contributed to. That was a talk with jury selection specialist Christina Marinakis. This has been the People Who Read People podcast with me, Zach Elwood. You can learn more about it at behavior-podcast.com. You can follow me on Twitter @apokerplayer. If you enjoyed this podcast, please leave me a review on iTunes or another podcast platform. Music by Small Skies.

Categories
podcast popular

How to spot fake online reviews, with Olu Popoola

This is a rebroadcast of a 2019 episode where I interviewed Olu Popoola about indicators of fake online reviews. Popoola is a forensic linguistic researcher who specializes in finding indicators of deception, or other hidden clues about traits of the writer. His website is at www.outliar.blog.

Episode links:

Categories
podcast popular

Persuasion in polarized environments, with Matthew Hornsey

A talk with psychology researcher Matthew Hornsey about group psychology, polarization, and persuasion. Hornsey has been a researcher on over 170 papers, with many of those related to group psychology topics.

Want a transcript of this talk? See the transcript.

Topics discussed in our talk include: why people can believe such different (and sometimes such unreasonable) ideas; persuasive tactics for changing minds (including in polarized dynamics); tactics for reducing us-vs-them animosity; why groups mainly listen to in-group members and will ignore the same ideas from out-group members; the effects of the modern world on political polarization; social media effects, and more.

Episode links:

Here are some resources mentioned in our talk or related to our talk:

TRANSCRIPT

(transcript will contain some mistakes)

Zach Elwood: The idea that groups don’t respond well to criticism from outsiders is a theme Matthew Hornsey has explored in his research. His research has delved into the psychological dynamics between groups, and how messages can be persuasive or not depending on whether they come from an in-group member or an out-group member, and what other factors make such messages likely to be persuasive versus ignored or disrespected. So his work is very relevant to anyone interested in reducing us-vs-them polarization, and I think reducing polarization is hugely important not just to the United States, but to the entire world. Because studies have shown that most countries in the world have become more politically polarized since 2005.

A little more about Matthew: he's published over 170 papers, and in 2018 he was elected a Fellow of the Academy of the Social Sciences in Australia. If you like this talk and are interested in group psychology and being more persuasive with your communications, I recommend checking out his papers, which you can find at Google Scholar. I'll include some links to his work on the entry for this episode at my site behavior-podcast.com.

Okay, here’s the talk with Matthew Hornsey:

Hi Matthew, thanks for coming on the show.

Matthew: Thanks for inviting me.

Zach: So it seems like a major theme of your research is examining why people can believe such different things. Is that an accurate way to put the theme of a lot of your research? And if so, maybe you could talk a bit about why that theme of research interests you.

Matthew: Yeah, I think that's a pretty close description of the various things I've done. If I try and throw a blanket over all my research projects, I sometimes think, "Well, I'm really interested in why people resist apparently reasonable messages." And I don't know if you've heard the phrase that researchers quite often gravitate towards things that they're terrible at. And, you know, historically I think I was pretty terrible at persuasion. I was no good at influencing people. And I sort of lowkey blame my dad for this; when I was a kid my dad used to tell me, "Matt, you really don't have anything to fear about speaking your mind, even if what you have to say is confronting to people. They might get defensive in the short term, but if you have right on your side, if you have the facts on your side, then your argument's going to win out in the end."

That sounded like a noble and appropriate way to live your life. And so I went into my adulthood and I guess I became quite mouthy and assertive at speaking out because in my mind ‘good arguments win out in the end,’ right? I had nothing to fear. But over time, it became pretty clear that this wasn’t really working out for me. Yes, people were getting defensive. But no, this defensiveness wasn’t going away like my dad had predicted. And also, I wasn’t really changing people’s minds. If anything, other people seemed to be able to change people’s minds better than I could. And so at some point I had to stop and say, “Dad, I love you, but your advice was terrible.” And I had to go back to the drawing board and ask myself that question, “Why is being right not enough?”

And so that started me on this 20-year journey examining the science and the art of persuasion and influence. And I've carried that through. I started off looking at why people resist apparently reasonable criticisms of their group's culture, and then I was looking at why people resist reconciliation efforts from outsiders. I also do a lot of work on why people reject consensus views on science around vaccination, around climate change, etc.

Zach: When it comes to the divergent narratives that we can have about the world and about reality, is that divergence of narratives something that concerns you? Do you see it as one of the existential threats the human race faces; our tendency to get in these highly conflictive divergence of narratives?

Matthew: Well, look. Yes and no. I mean, many of these divergent narratives don't really harm anyone. People can fight as much as they want about the origin of our species and about evolution versus creationism, but I struggle to see the victim sometimes, other than my internalized sense of scientific honor. And, you know, you'd have to ask, "Look, would you wish it away if you had a magic wand and could create a world where there were no diverging narratives and everyone thought the same thing and there was no conflict around ideas… Would you want that kind of world?" Because that could get a bit cult-like and creepy.

But then one of the reasons I've focused on climate change and vaccinations, for example, is that these are core existential threats. We need to know how to respond to a pandemic. And we need to know how to respond to climate change. And scientists are trying to help us there. That's where I start to get concerned. And you see these schisms in society and culture wars developing over high-stakes situations that we actually need to be agreeing on.

Zach: Yeah, it seems like there’s different areas in there because there can be differences in opinions or differences of perceptions of issues and various topics, but then you’ve got the highly polarized kind of Us versus Them stances, which are often so emotionally driven. And I guess that was the thing I was more thinking about of these narratives of perceiving the world in an Us versus Them, Good versus Evil way, which then kind of informs various other narratives and topics. That seems to be the real destructive form of divergent narratives. At least that’s what I was thinking about.

Matthew: That's right. I mean, if I had to create a world, I'd create a world that allowed people to disagree and to have conflict. But ultimately, I'd like to think that it was with a view to creating consensus. Like, the fighting and the differing narratives and the conflict are just a painful way of getting to the truth. That's my preferred mental model of how humanity should work.

Keep reading: For the rest of the transcript, see this post.

Categories
podcast

Analyzing speech for hidden meanings, with Mark McClish

This is a reshare of a 2018 episode where I interviewed Mark McClish about statement analysis: analyzing written and spoken speech for hidden meaning. McClish is the author of the books “I Know You Are Lying” and “Don’t Be Deceived.” He’s a law enforcement trainer and a former US Marshal.

For a transcript, see the original episode.

Episode links:

Categories
podcast popular

Relationship “tells”, with Brandi Fink

A talk with relationship researcher Dr. Brandi Fink, about behavioral indicators (aka “tells”) of healthy and unhealthy relationships. We talk about her work, the work of scientifically analyzing behavior in general, behaviors that are unhelpful to relationships, and more. Brandi has done a lot of work analyzing the behavior of couples and families experiencing problems, including issues of physical abuse, emotional distress, and drug/alcohol abuse. She also once worked with the well known relationship researcher and therapist John Gottman.

This is a reshare of a 2019 episode. For more details about this episode, see the original post.

Episode links:

Categories
podcast

Pros and cons of different social media content moderation strategies, with Bill Ottman

A talk with Bill Ottman, co-founder and CEO of the social media platform Minds (minds.com), which is known for its minimal content moderation, “free speech” approach. Ottman and other Minds contributors (including Daryl Davis, a black man known for deradicalizing white supremacists via conversations) recently wrote a paper titled The Censorship Effect, which examined how strict censorship/banning policies may actually increase antisocial, radicalized views and that perhaps more lax moderation was the better solution. Ottman and I talk about the psychology that would explain how heavy censorship policies would increase grievances and anger, about the complexity of social media content moderation strategies, about strategies they’ve used at Minds, about why people think open-source approaches are optimal, and about Elon Musk buying Twitter and what it might mean.

Episode links:

Other resources related to or mentioned in our talk:

Categories
podcast

Are a majority of Americans actually prejudiced against black people?, with Leonie Huddy

A talk with political scientist Leonie Huddy about research into American racism and prejudice. I wanted to talk with Huddy about headlines like this 2012 one from USA Today: “U.S. majority have prejudice against blacks.” I wanted to ask her if such framings were justified based on the research, or if they were, as it seemed to me from looking at the research, over-stated and irresponsible.

A transcript of this talk is below.

Other topics discussed include:

  • An overview of studies of racism/prejudice, with a focus on America.
  • The ambiguity that can be present when attempting to study prejudice, especially for research that seeks to measure it in less direct and explicit ways.
  • How worst-case and pessimistic framings and interpretations of studies can contribute to us-versus-them political animosity and polarization

Episode links:

Other resources related to or mentioned in our talk:

TRANSCRIPT 

Welcome to the People Who Read People podcast, with me, Zach Elwood. This is a podcast about better understanding other people and better understanding ourselves. You can learn more about it at behavior-podcast.com. 

In this episode, I interview political scientist Leonie Huddy on the topic of studying racism, and especially about studying racism in America. 

The reason I was interested in talking about this topic is that it’s obviously a big factor in our polarization problems in America. There are many people on the left who believe and promote an extremely pessimistic view of race and racism in America. I was thinking about this recently when I was reading Ezra Klein’s book Why We’re Polarized, and the narrative he was promoting was largely the often-heard one that Trump support is largely about race; that many white conservatives are either racist or else resentful about America’s growing diversity and the idea that white people, as a group, are losing power. As someone who’s spent a good deal of time researching our divides, these narratives strike me as simplistic and as taking the worst-possible interpretation of various things that could have multiple interpretations. 

For one thing: clearly there are a significant number of Trump supporters who are in racial minority groups. 12% or so of black voters voted for Trump in 2020, as did roughly 40% of Hispanic voters, as did roughly 30% of Muslim American voters, and 30% of Asian-American voters. To give a few example figures. And those numbers increased substantially from 2016. If you can wrap your mind around how it’s possible to be in a racial minority and not find Trump or the GOP bigoted or racist, you can also see how it can be possible to be white and support Trump for reasons not related to bigotry. 

One of the studies referenced in Ezra Klein’s book to support the ‘Trump support is largely about bigotry’ narrative was an Associated Press study from 2012. To give you a sense of how this study was largely interpreted in the mainstream, a USA Today headline about it was titled “U.S. majority have prejudice against blacks,” and that was roughly how Ezra Klein interpreted that study. And many other news sources and pundits have taken that study and other similar studies and made similar interpretations with them, to make the case that a very large swath of Americans are prejudiced. 

But when you actually take some time to delve into this area, you’ll find that there are plenty of reasons to be skeptical about such interpretations. There is plenty of respected work showing why much of this data is quite complex and ambiguous, and showing why academics and journalists should be cautious and careful when talking about these topics. And this would seem to be especially the case considering how divisive we know these topics are. 

One of the people who’s researched and written about the complexity and ambiguity in this area is Leonie Huddy. A 2009 paper Leonie wrote with Stanley Feldman was titled On Assessing the Political Effects of Racial Prejudice. Part of that paper delved into the difficulty of reaching firm conclusions from the data gleaned from so-called “racial resentment” research. 

A little bit about Leonie Huddy from her professor page on Stony Brook University’s site: “She’s a Distinguished Professor of Political Science at the State University of New York at Stony Brook. She studies political behavior in the United States and elsewhere through the lens of intergroup relations, with a special focus on gender, race, and ethnic relations. Her recent work extends that focus to the study of partisan identities in the United States and Western Europe.”

The following is from her Wikipedia page: Huddy has been involved in the leadership of several major organizations and journals in political psychology and public opinion. From 2005 until 2010, she was the co-editor of the journal Political Psychology, and she has also served on the editorial boards of other major journals like the American Political Science Review and the American Journal of Political Science.

Before starting the interview, I also want to make clear: questioning some of the more pessimistic narratives about racism in America doesn’t mean that I or Leonie are saying that racism doesn’t exist or that it’s not a problem. But it’s just asking the question: how much of a problem is it? What does the research actually tell us? Because clearly there will always be a spectrum of people’s perceptions about race and racism, or about any topic, and some people will have inaccurate perceptions at various places along that spectrum, and the truth of the matter will lie somewhere on that spectrum, probably somewhere between the more extreme perceptions. And I’d say that the more polarized a society becomes, the more people will hold inaccurate and distorted perceptions of what the truth is about many hot-button topics.  

And I think these conversations are very important. Because if our goal is reducing our visceral us-versus-them animosity, which is the root cause of our polarization and our dysfunction, then we must be willing to dispassionately examine the narratives that cause us to hate each other and be disgusted with each other. We must be willing to question the narratives that emotionally appeal to us, the tempting narratives that whisper in our ear “the other side are all bad and gross people.” We must be willing to examine nuance and complexity, and try to avoid simplistic “the other group is all the same” types of narratives. 

Okay, here’s the interview with Leonie Huddy. Hi, Leonie. Thanks for coming on.

Leonie: Great to be here, Zach.

Zach: So maybe a good place to start is what led me to being interested in talking with you. I was reading Ezra Klein’s book, Why We’re Polarized which is about polarization, and specifically American polarization. And he quoted some studies and interpretations of studies that expressed a pretty confident view that a large percentage of Americans are racist. And to give a sense of this kind of take, there’s a headline from USA Today in 2012 that read, “US majority have prejudice against Blacks.” And then to quote from the first paragraph in that article, “Racial attitudes have not improved in the four years since the United States elected its first black president, an Associated Press poll finds, as a slight majority of Americans now express prejudice toward blacks whether they recognize those feelings or not.” End quote. And you can find similar views based on assorted studies that purport to find either explicit racism, the more obvious direct forms of racism, or more subtle and hidden forms of racism. So maybe we can start with the question; what are your thoughts when you see a news headline that says something like more than half of Americans are racist?

Leonie: Well, you know, I'm a social scientist and we try to stay away from these labels. I mean, we do a lot of work trying to pick up negative attitudes. And it's a scale. I think we'd both agree that some people have what we would both consider to be pretty strong prejudicial attitudes. But our job in social science is to try and gauge these continuums. And one thing I'll say is that I'm a social psychologist and a political scientist; I look at both of these things. And it's very human for us to like our own groups a little bit better than others. It's pervasive, it's almost universal. So if I asked you how much you like whatever group it is, your religious group, your racial-ethnic group, you'll almost always say you like it a little bit more than outside groups. So the question is, really, when does this spill over into a problem? When do we think that these negative attitudes turn into something that's problematic or divisive? So I don't think using labels is particularly helpful, but in our research we'll try to grade people, to place them on a continuum that runs from those who really are very even-handed in the way they rate these groups to others who are further out on the negativity scale. And then we try to understand: what are the consequences of holding those attitudes? I don't think you'll find many people in social science who'll say, "This person is a racist," but we can scale people on some sort of continuum that ranges from more to less racial negativity. Now, I don't know if that's a great answer to your question, but I think we'd avoid the labels and try to gauge this continuum. Again, people vary. And this is very human, you know; it's what we call the ingroup bias phenomenon. It's very, very pervasive.

Zach: And then there's the question of how much of what gets judged or interpreted as racist is actually due to just political sentiment. That was the subject of your paper with Stanley Feldman that got me interested in talking to you, because in it you discuss the difficulty of distinguishing whether answers to surveys about racial resentment, for example, are due to just political sentiment versus racism. I'm wondering if you could give an overview of how you view that separation and that ambiguity.

Leonie: I think the onus, again, is on us as social scientists to do good research. We should probe our measures, probe the questions we ask people, and make sure that we're getting at what we say we're getting at. We should be held to a high level of scrutiny about this. You mentioned this concept of racial resentment, which is basically holding some negative attitudes along with some level of resentment that perhaps another group is getting special treatment in American society. We see a lot of these grudges on all sides, right? Lots of people have grudges against other groups. So that's one issue. And another is that some of the questions in that particular scale touch on views that, let's say, a conservative, or someone who's very supportive of an individualistic view of how humans should behave, would be more likely to endorse. So in my view, we have to work a bit harder at this. Yes, we can take statements that we might see in the press or that people make, and I think that's how that scale got developed, just picking up language that people were using. But there's a higher bar to say that this in fact is prejudicial or discriminatory, you know, that it's a view that would lead to some of these discriminatory consequences. So I think we have to think a little bit about the consequences of holding the attitude. Maybe we'll get into the content of that particular scale, but I will say that in the history of measuring these concepts, in the beginning people would be asked about outright bias, the view that another group was inherently inferior. Those were the kinds of attitudes being measured at, let's say, the beginning of the 20th century. And people would acknowledge that they harbored them. They thought, for example, that Black Americans were less intelligent than Whites. And I think we'd all agree that that's a strong prejudicial view. But we've moved away from that; that is sort of the history of these concepts.
So it became less likely that people would endorse those views, especially in the wake of the civil rights movement. And so these new measures were developed to try and pick up what people thought was a sort of discriminatory standpoint. It's complicated, but some of it was related to what researchers saw as resistance to policies that would try and improve the position of Black Americans in everyday life. People seemed, in principle, to support equality and to believe in the value of racial equality, but they were opposed to these particular remedies. And so they developed this racial resentment scale thinking, in their view, "Oh, maybe this is the way we now detect racial bias, to help us explain why people are opposed to programs like busing or affirmative action, which we all may agree may have other problems associated with them." So there is a long history to this, where we've moved away from purely discriminatory statements that people would make to more subtle sorts of statements. And I think that's where we can bring in questions about: is this really racial discrimination?

Zach: Yeah. It seems like there are a few problems in that area, which you and others have talked about in various papers. For one, it's hard to separate some conservative views from views that some would categorize as racist or as expressing racial resentment. For example, if you're a conservative who believes in small government and personal responsibility, that's going to overlap with the kinds of questions that are asked to determine racial resentment. The other, related problem is that the more indirect an approach you take to measuring racism, or anything else, the more open to interpretation, ambiguous, and noisy the findings can be. Would you agree with both of those?

Leonie: No, I think that's correct. I think that's absolutely correct. Again, as social scientists we have to work hard at this. If it is a difficult concept to measure, we've got to work harder at it and make sure that we are not using labels that are incorrect for a response that people give to a particular question. So if we're talking about this racial resentment scale, one of the questions is about trying harder: if Blacks would try harder, they could be just as well off as Whites. There is some research where you substitute other groups for Blacks and people will simply agree: "Yes, if you work harder you can get ahead!" That's part of the problem. It may not be a racial view; it may simply be the view that if people work hard they can in fact be just as well off as anyone else in the society. And so I think it's our job to try and make sure that we're not measuring support for that sort of hard-work principle as opposed to a more prejudicial attitude towards a particular group of people.

Zach: Yeah. Speaking of other studies, you mentioned that study, which was by Riley Carney and Ryan Enos. It was about substituting Lithuanians for Blacks in these same surveys, and it found the same patterns, which suggests that these studies were largely measuring views about the role of government and how much any individual group should be helped. Then there was another study, by Cindy D. Kam, titled Racial Resentment and Public Opinion across the Racial Divide. In that one, they studied the responses of Black Americans to these kinds of questions and found it was largely about politics: Black Republicans would answer in ways similar to White Republicans, in the same ways that have been interpreted as representing racial resentment. Are there many studies that criticize some of the harder, more certain interpretations of these things?

Leonie: I would say among researchers it is an ongoing debate. One of the things that we've tried to do in our research is to find evidence of actual discrimination. What I mean by that is, let's say we're conducting a survey. We might describe a person… one of my current studies is about immigration, so it's whether or not someone would be prejudiced towards somebody who is dark-skinned versus someone who is white-skinned, with the same qualities, who wishes to come to the United States. And so if we find that for a person who's described identically, with the same qualifications, same background, same capability of assimilating into American life, there is a penalty for skin color, then that's fairly clear-cut, I guess. And we can take some of our questions, such as this racial resentment scale, which I don't really use, or other questions that are more blatant, and ask: is the person who holds the more blatant prejudicial attitude less likely to support a person who seems qualified to come to the country, just because of their skin color? And we do find evidence of that. In our recent research, yes, there's a penalty if you harbor the most extreme of these negative attitudes. You will be more likely to reject someone who is, let's say, a Nigerian than a person in our last study who was from Russia, described exactly the same way. [unintelligible 00:16:25] evidence of prejudice or discrimination in action. And I think in some ways that's more clear-cut. And we've tried that kind of thing with the racial resentment scale. This was asking about a program that would take top high school scorers and allow them free entry into a state college, and the program was described as benefiting either Black or White students. And there was discrimination: people were more likely to support the program when it seemed to benefit White rather than Black students, even though it was described the same way.
But we also looked to see, across this range of racial resentment, is it helping us to understand who is less likely to support a program for Black teenagers? And it didn’t work very well for conservatives, who were perhaps not very enthusiastic about the program in general; it didn’t really matter if it was described as being for Whites or Blacks. Those who scored highly were just like, “No, we don’t really like this program.” That tells me that it’s not discrimination on the basis of race; it’s just telling me something about the reaction to the program, if that makes sense. It’s trying to see discrimination in action, in combination with the scale, as a kind of test of the scale. That’s what I mean by saying we have to work a little bit harder to show that the scale is picking up something that we think is a problem: pure discrimination against someone or a policy, purely because it’s directed at one group versus another.

Zach: Right. And you write in your papers and work, and others have too, that academics may be too quick to dismiss the fact that there is actual explicit racism that we can measure; there seems to be a reaching for these more indirect or ambiguous findings, when there seems to be so much you could do even with just very explicit, direct forms of racism. For example, Seth Stephens-Davidowitz, who wrote the book Everybody Lies, which was about examining Google search data for various findings, found correlations between Google searches for the N-word and related negative racial searches and the political activity of specific regions. In other words, there seems to be a lot of interesting research one can do even just on explicit racism without getting into the more ambiguous areas. Which I think is the point you’re making, and I agree: we seem to be so often focused on these ambiguous findings, or on trying to read people’s minds, when it seems much more worthwhile to focus on the actual implications and consequences when it comes to specific policies and things like this.

Leonie: Yeah. Because I think we just get into trouble with people taking issue with our claims. And so I think it benefits the enterprise if we’re able to show clearly some of these issues that people won’t dispute. It’s a lot older now, but in the 2000s we conducted a large national survey. The questions were about why there are economic differences, let’s say, between Blacks and Whites, or why test scores are different between Black and White kids in schools. And we gave people different kinds of reasons, and one was that, basically, the other group is genetically inferior. Now, this would seem to be a version of fairly blatant prejudice against a group, right? I think we’d agree that in this day and age not many of us believe that. And people were allowed to say, “Whoa, no. That’s absolutely not a reason,” or “It might be a bit of one,” or so on. About 25% of people in this national survey said, “Yes, this greatly explains it or explains it somewhat.” So it’s not so difficult to ask the questions. We think, “Oh, no, you couldn’t possibly raise those issues.” But there are people out there who really, you know, perhaps live in a context where this is the way they talk about the other group. And we shouldn’t be afraid to find that out. I mean, it’s possible to ask these questions. What I will say is, at that time I was running a survey research centre, and the interviewers didn’t want to ask the question. And I kept saying, “It’s okay, there are people out there who don’t mind telling you this is what they think. We’re just listening. We’re just trying to understand what’s going on.” So I think our own concerns, our own views, colour our perception of what things are like out there in the world.

Zach: To get back to that, the reason I wanted to talk to you was these headlines and framings: for example, the USA Today headline that said US Majority Have Prejudice Against Blacks, which was based on a specific survey that asked the typical racial resentment questions. The thing that strikes me there is: would you agree that that kind of confident framing seems irresponsible, considering the ambiguity we’ve talked about and considering how divisive these topics are?

Leonie: Again, I don’t think we social scientists would ever say X percentage is racist. We’re just not in that kind of business. And the problem with this is that it hardens perceptions on either side. It is never a good idea, when you have some divisions, to throw a firebomb at the other side. It doesn’t help our relationships. So I think it would be much more satisfying if that language were more guarded and more qualified. It’s a complaint that we often have as researchers or social scientists: some of our research is heavily simplified for headline purposes, right? So if you are interested in social science, I know some of it is complex, but it’s really good to try and dig into the complexity of these studies yourself to understand what’s going on, if you can. It’s good to have a long-form format, such as ours now on a podcast, to talk about these issues, because I think social scientists generally think with greater nuance about this. Maybe not everyone, but I think we are beholden to the concept that we’ve got to be clear and straight ahead in what we’re doing if we want other people to believe us.

Zach: I’m someone who’s interested in political polarisation dynamics and the psychology behind that, and I talk about that on this podcast a good amount. One thing that strikes me about America’s race-related divides, and our divides in general, is that there can be these various feedback mechanisms that amplify the conflicts. For example, the more that liberals promote worst-case interpretations both about race relations in America and about conservatives’ views being indicators of hidden racism, the more anger that generates in conservatives, and the more that anger on the conservative side will manifest in ways that will then be interpreted as even more so-called racial resentment or racism. In other words, there can be this view amongst conservatives, but not only conservatives, that many liberals are being unreasonable and divisive on matters of race, and that liberals focus too much on race and racism to the exclusion of more important things like helping struggling people in general. And the more that perception grows, the more people will be likely to vent their frustration about these things in surveys related to race and racism. There just seem to be these various feedback mechanisms at work. And I see this not just in racial matters, but in pretty much any contentious topic we could pick. I’m wondering if you’ve thought about these kinds of feedback mechanisms.

Leonie: Well, I think about group conflicts very generally. One of the things that can happen is that if I think the other side hates me, it will not improve matters. And so some of these accusations, the hurling of insults, will never improve the situation; basically, people will just stop listening to each other. And I think we’re sort of in that situation, for some people at least, with partisan polarisation. So it is important to listen to and understand each other, though unfortunately nothing forces greater listening. When it comes to racial matters, we, or social scientists generally, would say there is racial inequity in this country. I think it’s clear cut. For example, on Long Island where I live, we have a lot of school districts, 126 of them, and we have some minority districts that are the poorest performing: they’re the smallest, they have the weakest tax base, and we have a large differential in spending per child across these different school districts. And most people are unaware of that. I’ve done polling on this, so I know they don’t know. So it would be helpful if we could educate each other about where the sources of problems are in our society without getting hot under the collar about insults, because it doesn’t help matters. I personally think politics is not a religion; it is a practical exercise. [laughs] And we often lose sight of that. We have to compromise. Politics is inherently about compromise. We have to listen, and without that we’re really not going to solve the problems. We’re not going to see the problems. That, I think, is bothersome. It is worrying to me because if we want to be clear-sighted, for me as an educator, one of the places we get started is with education. And equal educational access would seem pretty important. That’s at least a place where we can start to perhaps have that conversation.
I’ll just say that one of the difficulties we have is understanding the difference between individual merit, the idea that I should get rewarded for the things that I do, and how we reconcile that with the fact that we have group inequities in our society. I think we have to acknowledge that we do. If we look across different racial and ethnic groups, there are different outcomes on average. Some of that may not have anything to do with people’s attitudes; it may be baked into other aspects of our institutions that need some examination. But that’s a more complex way of thinking about things. We have to understand a place like Long Island: well, why are those school districts like that? Part of it, for us, is a history of residential segregation. We are one of the most segregated suburban places in the country, and the tax base is very different across the school districts, and so you can play it out from there. There’s a long story in there. And it isn’t just about people’s attitudes towards each other, although that doesn’t help those attitudes. That’s a long way of getting back to saying it would be really good to have a rational assessment of what these problems are without yelling and screaming at each other.

Zach: Right. That’s what strikes me about polarisation. Extreme polarisation is just so bad because it kind of prevents solving problems, you know? It prevents us from having nuanced conversations. It leads so many people to take simplistic views of things or just views that are pushing against the extremity perceived on the other side. It all sets up to just prevent anyone from solving an actual problem.

Leonie: Yeah, it definitely bothers me. [laughs] I get very exercised about that because I think people just don’t understand politics. Politics is always about compromise; we’re never going to get exactly what we want. We live in a diverse society, and so it’s really about listening and understanding. And I do think that this name-calling, hurling things at each other across these divides, is completely counterproductive. It would be good if we could take the temperature down and listen to each other more. One thing that I’ll say about that is that the younger generation of, let’s say, younger White Americans have grown up in a more diverse society. And when I look at their attitudes, again not calling anyone racist, but looking at the scales and so on, they tend to show more tolerant attitudes than older generations. In that sense, there may be greater capacity to listen to what’s going on on both sides of these debates, and maybe more open-mindedness. I think of that generation, let’s say the under-30s, as having grown up in a more diverse society in the US, with more diverse students in schools and so on. And so they’ve had more contact and harbor less of these prejudices towards people of other groups. It sometimes helps to know people from another background; that seems to be one of the things that helps tamp down this name-calling and heated opposition.

Zach: I was going to go back down to a granular level. You had briefly touched on the specific questions on some of these surveys we’re talking about, but the thing that strikes me in that area is that even at this very specific question level, there’s just so much room for ambiguity: different interpretations of the questions, and different interpretations of the answers. I’ll take one example. There’s often a question about agreeing or disagreeing with a statement like, “Racial problems in the US are rare, isolated occasions.” Another one is, “Government officials pay less attention to a complaint from a Black person than from a White person.” The assumption often is that people’s inability to recognize that racism is a problem or a big problem, or their unwillingness to say it’s a big problem, is itself a sign of racism. But that strikes me as just such a big assumption, because it’s possible to imagine people living in areas or environments where they simply don’t perceive that racism is a problem, or who watch news that doesn’t present racism as a problem, or who even have different definitions of what “rare” or “isolated occasions” mean in a country of 300 million people. That’s just one example, but it strikes me that with all these questions there’s so much room for interpretation. And what strikes me about some of the interpretations, like the USA Today headline I mentioned, is that there’s often this filtering of all these things to the worst-case interpretation. I’m wondering if you see some of that ambiguity in the questions themselves?

Leonie: Let me draw a distinction that I think is an important one in some of our work when we’ve asked these questions. We’ve posed, you know, what’s the explanation for, let’s say, these differences in economic outcomes? We divide those explanations up into what we call internal attributions, in other words, blaming people themselves for their failures and saying that there is a weakness of character or so on, leading towards a more prejudicial judgment about a group of people, and we distinguish that from societal explanations: in other words, that there’s been a history of discrimination in our country, or that discrimination exists. Those are two very different things. And it’s hard to say that this perception of the current existence of discrimination has anything to do with other aspects of a prejudicial judgment about the group. I think discrimination is really hard for people to see. I mean, if you live in a certain area, maybe you’ve never seen it, you don’t know it, you haven’t experienced it, you’re unaware of it. I think it’s very difficult to call that racial prejudice. These judgments about whether discrimination exists are hard to make. Even a person who experiences it isn’t sure if they were disadvantaged because they were a woman or somebody from a particular group. We could reflexively say it’s that, but it turns out that when we look at people’s attitudes, those judgments about whether society discriminates are very different from saying that there is a deficient character to a group of people or that they are inferior in some way.

That tells me that perceiving discrimination is something else. It’s got to do with perhaps where we live, what we experience, how we understand the world. And that, again, is different from judging a group of people negatively or saying that they’re all terrible people. I would prefer to say prejudice against a group of people is the latter: making very broad, negative generalizations about them as people. “I don’t like them, I think they’re inferior perhaps, I think they’ve got really negative attributes, and I’ve labeled them all the same.” That’s closer to our concept, I think, of group prejudice. But acknowledging or knowing or even being aware of discrimination is much more complex. It isn’t the same thing, and I think that’s where we’ve gotten tangled up to some degree. And I think that’s where you were pointing a little bit.

Zach: I want to ask you too about something I’ve talked about in recent episodes: examining survey results and interpretations of survey results. One factor that seems relatively unexamined to me is that in very polarised societies there can be a venting factor on these surveys, where people are just using the surveys to vent frustration at the other side. I wonder if you’ve seen any examination of that, or think that can be a factor in making people more likely to answer survey questions in a way that’s just venting: “I want to make a point against the other side by answering this in a way that may not even reflect the way I really feel.”

Leonie: It’s hard to say; that’s really difficult to get at. One of the things that we can do is look at how an answer goes with other attitudes that a person has in a survey. When people answer these questions, we’re taking them at face value to some extent. We can’t hook them up to something and say, “Lie detector test. Are they lying? Is this real?” We can’t really do that. It’s very, very difficult. But what we can do is see how an answer goes along with their other positions. Is there a consistency? Does this seem out of line with the other things they were saying? One thing that we tend to forget is that there are gradations in all things. We talked about how positively or negatively someone feels about another group, but it’s also true for how they feel about the political parties. So while we have a small group on both sides that are very intense and hold very negative attitudes towards each other, there’s a whole bunch of people in the middle who don’t. We tend to lose sight of them because they’re quiet. So what I would say is, when I ask people how strongly they identify with a political party, how much it means to them and so on, the strong identifiers will be the ones that express the most negativity towards the other side. And if we look at their behavior, their answers might be consistent with their behavior as well. Again, it’s a small group of people, but they may actually feel this kind of negativity towards the other side. It may be moderated when they actually meet someone; they might have to tone it down, so that’s another matter. We all know that our behavior has to be conditioned to some extent on circumstances and context. We can’t always just express our attitudes if it results in someone punching you, you know? There are constraints on our behavior.
But I don’t see any reason, in this case, to doubt that when people say those things they mean them. We’ve done some experimentation by, let’s say, gauging how strongly someone identifies with their political party and then making them read something that is threatening towards the party. Typically, the strongest identifiers will be the most angry about these things. That seems to be consistent with their behavior: they’re more likely to do things; anger seems to be motivating them to take action. So I don’t really have any reason to think that it’s fake. I actually think some people feel pretty strongly about this. But again, it’s a minority. The strongest are a small group on both political sides, and then we have gradations and people in the middle.

Zach: Yeah. And I know people who say some pretty extreme things, like that all cops are Nazis. They might vent these kinds of things on social media, but then when you actually talk to them, of course they don’t actually believe that. That’s the kind of thing I was thinking of. Not to say that there aren’t, you know, clearly things to study there. I guess that’s the kind of dynamic I was thinking of in these areas: people just being very angry, and also the fact that a lot of these surveys tend to happen online these days as opposed to in person, which I think somebody has studied: people can have different responses online than they would in person, for social reasons and things like that.

Leonie: Yeah. We have to remember that there’s a range: even though I might hold a particular attitude, I can say different things in different contexts. And so it’s possible that being online makes me more inflamed in general, because that’s where I’d spout off on social media. What we would think is that typically the social desirability pressures are less intense online. Now, that’s based on research concerning sensitive topics like sexual behavior, things that people don’t want to admit to; they’re more likely to be honest when they’re asked without a person present. So I think it depends on who you think you’re talking to. If there’s an interviewer asking the question, the person’s thinking, “Who am I talking to?” And if they think they’re talking to someone who agrees with them, maybe they express stronger positions. The general notion is that being online should get rid of some of that. I guess we need to do more research to figure that out. It’s an interesting proposition.

Zach: Do you want to mention anything else that you wanted to say that we didn’t get around to?

Leonie: I’ll say one thing about polarisation, because this is some of the work I’ve been doing more recently, looking at these partisan identities. I’m interested in what I’d call intergroup relations, meaning that there are some common concepts, explanations, and processes that cut across all of these different group relations. And so recently we were doing some work looking at how we can decrease negative feelings about the other political party. Basically, this is back to the idea that if we don’t think the other side hates us, we can calm things down a bit. In that project, across several studies, people read about Chuck Schumer and Mitch McConnell in a restaurant, where they were either nice to each other or insulting each other, and then, independently, either agreeing or disagreeing on immigration matters. And we found that their agreement or disagreement didn’t really matter that much; what helped make people more positive towards the other side was the fact that the leaders were nice to each other. So showing that our leaders can actually be warm and have a pleasant, congenial relationship helps to decrease this idea that the other side hates us. The takeaway is that if we could see some better behavior from our leaders or people on either side of these political divides, it would help take the temperature down a little bit and make it easier to say, “Listen, we can disagree; we have to disagree. We will always disagree. This is the nature of politics. We will never all agree about things.” But the question is, can we do that in a way that results in us listening to each other, making concessions, and compromising, or not? And I think we’ve reached a bad place in American politics where that is increasingly unlikely.

Zach: Thank you, Leonie. This has been great. Thanks for coming on and talking about this.

Leonie: My pleasure, Zach. Pleasure to be here.

Zach: That was a talk with political scientist Leonie Huddy. You can learn more about her work by searching for her name and finding her Google Scholar page or her Stony Brook University professor page.

If you’d like to read the paper that initially interested me in interviewing Leonie, that paper is titled “On Assessing the Political Effects of Racial Prejudice”. 

One thing that we didn’t get to discuss, but which was discussed in that paper, was the ambiguity that’s also present for some of the ‘unconscious racism’ or ‘unconscious bias’ types of tests. This is another area where there has been a mainstream interest in these tests, and an understanding that such tests reveal prejudice and racism in people that people aren’t aware of. But the reality is that these kinds of tests are much less revealing and much less accurate than is widely perceived in the mainstream. 

If you’re interested in learning more about this, I’d recommend as a starting point a Vox article by German Lopez titled “For years, this popular test measured anyone’s racial bias. But it might not work after all.” The synopsis for that piece reads: “People took the implicit association test to gauge their subconscious racism. Now the researchers behind the test admit it can’t always do that.”

To quote from one paragraph in that: “The research so far comes down somewhere in the middle of the debate. It seems like the IAT predicts some variance in discriminatory behaviors, but its predictive power to this end seems to be quite small: Depending on the study, the estimate ranges from less than 1 percent to 5.5 percent. With percentages so small, it’s questionable just how useful the IAT really is for predicting biased behavior — even in the aggregate.” end quote

If you’d like to see some of these resources, I’ll have some of the ones discussed at the entry for this episode at my behavior-podcast.com site. 

If I had one point I hope you take with you from this episode, it’s that we should be more skeptical of people and media that use the kinds of research discussed here to support their claims that a large swath of America is racist. 

I think we should all try to aim for nuance on this topic, and on all topics that feed into our us-versus-them divides. We should attempt to question and push back when people and media make over-confident assertions that we see as relying on weak or ambiguous data. I think the more we do that, the more we’ll combat false and exaggerated us-versus-them narratives and the more we’ll reduce animosity. 

I’m currently working on a book aimed at healing American divides and reducing polarization. If you’d like to read some of the kinds of ideas I’ll be talking about in that book, you can check out a piece I just wrote: it’s on my Medium blog and it’s called “The importance of criticizing your own political side in reducing political polarization.” To find it, you can search for “medium zach elwood political polarization” and you’ll probably find it. That piece discusses an idea that I believe is one of our major paths out of polarization: convincing more people to criticize bad and polarized thinking they see in their own political group. So if you care about American stability and reducing dysfunction, I hope you check it out. 

This has been the People Who Read People podcast, with me, Zach Elwood. You can learn more about it at behavior-podcast.com. If you appreciate my work, please leave me a review on iTunes; it’s the most popular podcast platform so it’s definitely the place where a review is most appreciated. I make no money on this podcast and spend a good deal of time on it. So if you think I’m doing good things and want to send me some financial support to encourage me to do this more, you can send money to my Patreon, at patreon.com/zachelwood, that’s zach elwood. 


Cryptocurrency, problem gambling, and addiction, with Paul Delfabbro

A talk with psych researcher Paul Delfabbro about cryptocurrency, problem gambling, and addiction. Delfabbro has done a lot of research on problem gambling and on addiction. He’s worked on several papers related to cryptocurrency, including “The psychology of cryptocurrency trading: Risk and protective factors” and “Cryptocurrency trading, gambling and problem gambling.”

Topics discussed include:

  • How big a problem is problem gambling amongst cryptocurrency traders?
  • What are some of the psych factors that can be present for the more addicted and cult-like crypto behaviors?
  • Might covid have played a role in cryptocurrency price fluctuations?
  • The role of the internet in amplifying temptations and addictions.
  • The role of social media in getting people excited about cryptocurrency.
  • Video game addiction.
  • Can making a large bet/investment in something affect one’s beliefs (for example, a liberal makes a large bet on Trump to win for purely financial reasons but finds themselves rooting for Trump and therefore seeing the world differently)?
  • Day trading and problem gambling.

A transcript is below.

Episode links:

Other resources related to or mentioned in our talk:

TRANSCRIPT

[Note: transcripts will contain errors.]

Zach: Welcome to the People Who Read People podcast, with me, Zach Elwood. This is a podcast about better understanding other people, and better understanding ourselves. You can learn more about it at www.behavior-podcast.com.

On today’s episode I’ll be talking to psychology researcher Paul Delfabbro about cryptocurrency, and how some people’s crypto trading can be a form of problem gambling. We talk about addiction in general, about how the internet can contribute to online addictions, about day trading, about video game addiction, and more.

A little bit about Paul Delfabbro from his professor page at the University of Adelaide in Australia:

Paul has worked at the University of Adelaide since 2001 and he lectures in the areas of learning theory as well as methodology and statistics. His principal research interests are in the area of behavioural addictions (gambling and technology) as well as child protection and out-of-home care. Most of his research work involves statistical analysis of cross-sectional and longitudinal surveys and experimental studies.

I found Paul’s research because I’ve been interested in doing an episode about cryptocurrency and cryptocurrency-related psychology. A paper from 2021 by Paul, Daniel King, and Jennifer Williams was titled “The psychology of cryptocurrency trading: Risk and protective factors.” To quote from that paper: “We review the specific psychological mechanisms that we propose to be particular risk factors for excessive crypto trading, including: over-estimations of the role of knowledge or skill, the fear of missing out (aka FOMO), preoccupation, and anticipated regret.” end quote.
Another paper Paul worked on was titled “Cryptocurrency trading, gambling and problem gambling.”

One interesting thing about Paul’s work is that, in order to study cryptocurrency from a psychology and gambling-behavior point of view, he’s had to learn a lot about cryptocurrency itself. For example, that first study I mentioned had an in-depth analysis of the various risks involved with cryptocurrency, and where those risks came from and why they existed. I mention this because Paul isn’t just knowledgeable about the psychology aspects of this; he also seems quite knowledgeable about cryptocurrency in general.

Apart from cryptocurrency-related work, Paul has also worked on research regarding gambling addiction in general, including video game addiction, and has worked on conspiracy theory psychology.

You can learn more about Paul Delfabbro by searching for his name and finding his University of Adelaide page and his Google Scholar page.

A note about this talk: I am pretty ignorant about cryptocurrency, so if I get any phrasing about it wrong, that’s why. And hopefully this is clear, but just in case: my choice of focusing on the more gambling-related and addiction-related aspects of cryptocurrency shouldn’t be interpreted as me having a negative view of cryptocurrency. It’s just that this is a psychology podcast, so I wanted to choose something psychology-related to focus on, and that seemed to be one of the main things to focus on.

As this episode relates to gambling, I wanted to briefly mention my own gambling-related work: if you didn’t already know, I’m the author of some well known books on poker behavior, also known as poker tells. My books have been called the best work on the subject by many poker players, both amateurs and professional players. My first book has been translated into eight languages. If you play poker, you might like checking out my site readingpokertells.com and reading the reviews. Okay, sorry for the shameless self-promotion.

Here’s the talk with Paul Delfabbro:

Zach: Hi Paul. Thanks for coming on.

Paul: Well, thank you for having me on.

Zach: Yeah. So you’ve done a wide range of psychology research, including a lot of things that are interesting to me personally, such as problem gambling, addiction to technology, and conspiracy theory beliefs.

So I’m curious what drives your research interests, and whether there are certain themes that you’re drawn to. What are those?

Paul: Yeah, my interest in gambling probably has a number of different influences. I think my principal area of research or teaching interest has always been in learning and [00:04:00] behavior.

So I’ve always been interested in how very simple habitual behaviors are maintained by schedules of reinforcement, simple stimuli, and those sorts of things. Slot machine gambling was something that always interested me, because it seemed to be a natural human extension of some of the simple behaviors we see in animals.

So I’ve always had an interest, from my undergraduate days, in that type of simple behavior. And I guess from a personal point of view, I had an uncle who owned a slot machine in a boys’ room many years ago, and we used to play it when I was a kid. I used to see these machines in various locations and be curious about them.

And I think that sparked my interest from an early age. And then when Australia started to introduce these machines, particularly when our state introduced them in the mid-1990s, around the time I started to do research, all those earlier interests and the potential for this to be an interesting [00:05:00] topic for research, and I guess for regulation and general public interest, all came together for me to start doing that research in the 1990s. And I think what I really like about it is that it provides an opportunity to apply some of the more abstract principles we learn in psychology to real-world behavior.

Mm-hmm. So I think all those things coming together created that interest in gambling. Conspiracy theories, I think, are a topic I’ve come to a bit later. I think it’s one of those topics which many people have views about. As general citizens, I think we should always be thinking about what government is doing.

And we certainly know that government doesn’t always tell us the truth about things, and that’s often for quite legitimate reasons. We obviously don’t release all the current Cabinet papers in Australia when things are happening, but we find out, you know, 20 years later what decisions the government was making.

I think also it’s an interest that came about through the simple fact that I was a person who was born in the late sixties, who grew up in the [00:06:00] eighties and nineties, when Oliver Stone was making conspiracy movies. Mm-hmm. And there’s always been a general interest, I guess, in this era about the JFK assassination and other similar stories.

And I think with gambling, you’re looking a lot at irrational beliefs as well, about the nature of outcomes. And given I also had a background in finance and economics, I think all these things sort of came together for me to have a curiosity about why people hold certain extravagant beliefs, particularly when there’s not a lot of evidence to back them up.

Zach: Yeah, that’s interesting. Actually, as you were talking about that, I just made the connection: there are a good amount of similarities you could find between gambling addiction and other irrational behaviors, and conspiracy theories. Like there could be some similar kinds of addictive dynamics there that affect people’s behavior.

Would you say that’s true?

Paul: Oh, for sure. I think, even though gambling is very driven by simple conditioning and very behaviorally based mechanisms, there [00:07:00] certainly is an element of irrational beliefs. There have been studies where you get people to verbalize their thoughts about gambling outcomes.

So particularly when they play slot machines, you find that quite a lot of the things they say indicate the presence of quite common cognitive biases. So you certainly do see beliefs such as that the machines are pre-programmed to do certain things, or people will try to develop personal relationships with machines.

Mm-hmm. Believing they can beat the odds. Now, like with all these sorts of things, there is some small element of truth there. We know, for example, there’s talk of concepts such as Easter eggs, hidden things that the programmer has put in there that might enable you to win.

But in general, the average person’s not gonna know anything like that. So to believe that you’ve got any control over these chance-based devices is, of course, entirely irrational.

Zach: Mm-hmm. It seems like there could be some connections the other way too, where there can be elements of people getting maybe some ego boost or other kinds of [00:08:00] pleasures from beliefs in conspiracy theories.

You know, even though some extreme beliefs in conspiracy theories can be quite self-destructive, in a similar way to addictive behaviors, there might be different kinds of rewards for those kinds of beliefs. Do you see some of that too?

Paul: Yeah. In my lectures, I talk a lot about the illusion of control and some of these common cognitive beliefs, and what the psychological mechanisms are that maintain them.

So part of it is motivational. People like to believe they’re in control of their lives. People often don’t like uncertainty. People also have brains which are hardwired to find connections between things. So we have this natural tendency to want to see control. Mm-hmm. And we often tend to do that most when we’re facing uncertainty or emotional turmoil.

So if we think back to some of the major crises of the world, you think of 9/11 and others, and the remarkable number of erroneous beliefs that emerged during those periods, simply because people were looking for ways to [00:09:00] explain the uncertainty and deal with their anxiety. And we know that many gamblers are quite anxious people; gambling has sometimes been seen as a form of coping, an anxiety-based behavior, which makes people more amenable to having these sorts of beliefs.

Zach: Maybe you could talk a bit about how some people’s cryptocurrency trading has similarities or an overlap with problem gambling. You talked in a couple of your papers about how crypto, for example, has a lot in common with day trading, and how some day traders trade in ways that can be seen as problem gambling.

So maybe you could talk a little bit about that.

Paul: I guess the preface to this argument is that over the years we’ve been experiencing what’s called technological convergence, or digital convergence, which essentially means that in the past we were able to compartmentalize different activities.

We would go to the drive-in or cinema to see movies, we would play games somewhere else, we’d gamble [00:10:00] somewhere else. Of course, what’s happening now is that there’s an increasing blurring of lines between different activities, because you can essentially use the same device, the same technology, to gain access to all of them.

So over the years there has certainly been an increasing convergence of gaming and gambling. We’re seeing an increasing number of gambling-like elements emerging in games; people often talk about loot boxes as one example of that. And increasingly, you’re starting to see some elements of speculative trading starting to overlap a little bit with gambling.

Now, cryptocurrency obviously is something that’s been around for about a decade, since Bitcoin came into being in 2009. But certainly in the last five years, particularly 2017 and 2018, when we had that bull run in the markets, it has become a much more well-known activity.

It’s still a very small proportion of the population actually engaged in this in any sort of regular form. I think some of the surveys probably overestimate how many people are really doing [00:11:00] it. But research that has been done suggests that those people who do engage in more speculative trading, whether it be day trading or crypto, do share some characteristics with gambling.

In other words, if you do a survey administering measures of gambling, trading, and crypto, you’ll find that statistically, people who have an interest in gambling, and even those who might have some problems with gambling, tend to be attracted to the sorts of activities which share the characteristics of gambling.

And so problem gamblers who play games will tend to be more likely to spend money on games and buy loot boxes in games. They similarly will also be more likely to engage in speculative trading with crypto, which doesn’t have to be entirely that speculative if people take a longer-term view, however.

Mm-hmm. Statistically, you would say that someone who’s already quite a regular gambler is probably more vulnerable to the more speculative side of crypto trading. [00:12:00]

Zach: Yeah. It seems like the more frequent the trades or the exchanges are, the more likely it becomes that someone might become addicted to that rush.

Would you say that’s true?

Paul: Yeah. The term rush is often used in more traditional addiction models, where you’re talking about drugs and those sorts of things. And I would say that, yeah, some rush and adrenaline would play a role in trading. I think people would certainly get quite an arousal response when they see a particular coin taking off very quickly.

And it certainly will be the case with trading as well. One of the things which has become a very important part of addiction research, at least behavioral addiction research, which started with gambling and has progressed to gaming and other activities, is that we realize technology plays a very important role in the uptake of these activities.

So we know that the level of involvement, the extent to which you engage in impulsive behavior, is very much influenced by the [00:13:00] accessibility of the behavior or the activity. So if you have a 24-hour market which is available on a very convenient app, which you can take everywhere, that does increase the opportunities for being preoccupied with the activity and monitoring the prices, maybe making impulse decisions about what you’re gonna buy and sell. So it’s certainly the case that the internet, and I guess the ability to carry it around in your pocket, has made these sorts of activities potentially more common and potentially more addictive.

Zach: Yeah. The thing that really strikes me about this is that the internet and internet-based technologies have just given us so much power at our fingertips. So much control: whether it’s the ability to gamble at any time, the ability to trade stocks or crypto at any time, the power to run up a bunch of debt on credit cards, the ability to imitate other people and use fake names, the ability to watch porn at a moment’s notice, the ability [00:14:00] to meet up with people easily. These are all really powerful abilities that the internet gives us, and with that power comes a lot of temptation to engage in bad and destructive aspects of ourselves.

And in a way, I think that just simply did not exist until pretty recently, in the internet age. I’m curious if you agree with all that and see the internet as generally amplifying addictive behaviors. Maybe you answered that a little bit, but maybe you can go into a little more detail.

Paul: Yeah. You could certainly argue that what the internet does is amplify; it’s a bit like alcohol on mood. It amplifies things which previously existed. And one of the interesting things about the internet, which I talk about in some of my technology and psychology courses, is that with some behaviors, you could argue they’ve been around forever, or for a very long time.

So pornography, gambling, various behaviors have always been there. And to the extent that the internet [00:15:00] is used for those behaviors, you could argue it’s more of a vehicle to make the activity easier to access, rather than necessarily creating the activity. Mm-hmm. Whereas some forms of behavior, such as compulsively checking social media, spending all your time on Facebook, and tweeting all the time, have really only come about as a result of this technology. It’s hard to see them as having a similar historical antecedent.

And so you could almost argue that addiction to social media, while it has had some rough parallels in the past, really is a phenomenon of the last decade. Mm-hmm. Particularly to the extent that social media was used to fuel these other behaviors, that’s really a phenomenon that’s occurred in the last few years, and you could argue it was probably the two thousands when that really started to take off.

And with the advent of mobile devices, where now we have the internet on our phones while walking around, that of course takes it to another [00:16:00] level, in that you’re able to do it at any time, any place.

Zach: Mm-hmm. And you’ve studied addiction to video games; maybe you could talk a little bit about that.

What are some of the interesting things you found in that area?

Paul: Yeah, it’s a topic on which my colleague Daniel King has probably done most of the work. I tend to do most of the gambling work, but mm-hmm, gaming has certainly been something of interest to me.

I’ve played video games like many people, right back to the eighties. My teenage years were very much like the show Stranger Things. I remember the days of the noisy arcades, those simple handheld games, and how the consoles gradually developed. There’s been a lot of discussion internationally about gaming and whether it can be a form of addiction, and the World Health Organization has had quite a lot of discussion about this topic, looking at whether or not internet gaming disorder should be a valid addiction.

And of course, it has been recognized in some of the [00:17:00] measurement consensus as a valid form of addiction, and they tended to map it to what we know about gambling. So people who might have an addiction to gaming spend too much time doing it, they’re preoccupied, they tend to spend an inordinate amount of time doing it.

When it comes to the harms associated with it, it’s not quite the same as gambling, in that people don’t spend quite so much money; they’re not usually at financial risk from it. But what they tend to do is just run their health down. They spend a lot of time eating bad food, not sleeping, not studying or working.

So gaming tends to sort of eat away at people’s other activities and their health. I think that seems to be the principal consequence. We have encountered some clinical cases where people have simply spent weeks in their room, coming out very infrequently, even to go to the bathroom, in order to continue to game.

And we know that’s a phenomenon that’s perhaps been documented to a greater degree in some of the major Asian countries, such as South Korea [00:18:00] and Japan in particular, where young people just disappear, and you don’t even know they’re alive, apart from the fact that the tray comes out with no food on it.

So gaming is certainly a topic which has been of increasing interest to researchers. And we find these days, when parents ring us up about young people, it’s usually about gaming and not about gambling anymore.

Zach: Hmm. So, getting to cryptocurrency: I don’t have much of an opinion about cryptocurrency in general, and I see the positive aspects of it that many people talk about, like its decentralized nature. But I also just don’t have much of an opinion because it seems very unclear to me what will happen with it. There are just so many factors, it seems to me, for the whole industry, let alone specific coins. But one thing I notice with some people is a high amount of certainty, almost a faith-like certainty, [00:19:00] acting as if it’s a certainty that cryptocurrency will be the future, or that a specific coin will be the future.

Clearly not everyone who says those kinds of things actually has a faith-like belief, I think. But it seems like a lot of people really do, and I’m curious if you see that kind of, what I view as, unreasonable amount of certainty in such things. Do you see that kind of faith-like certainty as being related to addictive behaviors?

Paul: Yeah. Not really. I guess what I’ve observed from looking at the crypto market is that it has changed dramatically over the last four years. I think there are some certainties to do with this technology, as there were with the internet, and many of the debates which were raised about the internet back in the late 1990s and even early two thousands.

There were some people still talking about the internet as not really being something that was going to be viable for many purposes which we now, of course, commonly use it for every day. I mean, going back even [00:20:00] further, people were saying that airplanes weren’t going to be very useful in warfare, back in the early 1920s.

I think some certainties are that blockchain’s definitely here to stay. I think Bitcoin, as something with an established protocol, limited supply, and fairly widespread ownership, is probably here to stay. And Ethereum is probably another major one which I think is here to stay.

I think we could certainly say that blockchain and cryptocurrency are going to be part of the future. And what we’re seeing in the last four years is a major shift from much of the crypto market being all about speculative retail investors, which we saw particularly in 2017 and 2018, to massive institutional involvement in the last two years.

So at the moment we’ve seen a market whereby, well, I think there’s a Bitcoin conference going on in Miami at the moment, and apparently the whole first day is all just big institutions. And if you look at Google Trends, hardly any retail investors are really searching the words Bitcoin and crypto; it’s all institutional investors.

So we’ve [00:21:00] sort of gone from it being a very speculative fringe activity to one which I think is now being picked up by the smart money. But the issue still is that it’s a very new and, in many cases, unregulated market, whereby there are some things which are now a bit more certain.

But there is still a considerable number of scams and uncertainty, and certainly not a lot of consumer information out there, which therefore means that retail investors, when they do come back into the market, are going to be vulnerable to the same sorts of speculation and problems that we’ve seen in previous years.

Mm-hmm.

Zach: Yeah, and I guess what I’m talking about with the extreme certainty is when people say things like, Bitcoin is definitely going to a hundred thousand dollars per Bitcoin in the next whatever length of time, or, this will be the coin of the future. Almost like a very extreme belief in things that I think are [00:22:00] just unknown.

You know, I have no doubt that cryptocurrency, or blockchain kind of technology, will be around for a while. I’m talking about the very confident beliefs, which seem to me either to be almost cult-like, or just, as people do, expressing confidence in order to convince others of their beliefs.

Paul: Yeah, I think that’s a very valid observation of what certainly happened last year. One of the problems with this market is that there are not a lot of data points on which to draw comparisons. The Bitcoin markets tend to move in cycles every four years, based on the halving.

And what you notice with the social media influencers is that they often map what’s going to happen based upon what’s happened in the past. So they fall victim to classic inductive logic, whereby they say, well, this is what’s happened in the past, and they get these fancy charts out and map it, and they have these models.

Zach: Right, it’s just such a short timeframe. Yeah. [00:23:00]

Paul: Yes, yes. Of course, anyone who’s listening who knows about the crypto market will know that there was this guy called Plan B, who had this stock-to-flow model and various other models that were convincing everyone. He got it right for several months, and then Bitcoin was going to go up to a hundred thousand or more by the end of the year.

We were going to see this big bull run similar to 2017. Of course, it didn’t happen. Even some of the top guys were caught out by it not happening, because so much of what happens in the market is dictated by larger macro factors rather than what happens on these charts.

So it is certainly the case that you have influencers who make a lot of money from running their YouTube channels, and they want to keep their audience interested; they want to keep them motivated. That audience is quite often retail investors, although I would say at the moment, people who are listening to the more serious ones tend to be more serious long-term investors.

What you tend to find is that when the market goes down and it’s not doing so well, which is what we’re currently seeing at the moment, [00:24:00] you’ll see a massive drop in the number of people watching these social influencers. And the more fringe and more extreme ones, who have less experience in investing in general, tend to disappear from YouTube.

And then when the market picks up again, suddenly the meme coins are all popping, and you suddenly see all these people coming back out of the woodwork, promoting pretty limited speculative knowledge to inexperienced retail investors. But certainly I agree with the cult-like nature of some of the beliefs.

I think this is where it does overlap a little bit with conspiracy beliefs. Just occasionally you see even some of the more experienced, very competent influencers, if you’ve watched some of them, drop a few words here and there which indicate they might be sympathetic to some of the more extreme beliefs.

So, for example, sometimes I see the name Rothschild dropped, or they’ll talk about bail-ins, the idea that you can lose your deposits if banks [00:25:00] run into trouble. They push the inflation narrative very hard: that Bitcoin is a store of value against inflation.

Well, over a decade, you’d probably say it has done much better than gold. However, in the short term, with Bitcoin going up and down, I’m not sure how strong that narrative is. Over a longer period, certainly, like any other hard asset, you’d probably say you want to have money in Bitcoin over cash for the next five or ten years.

However, in the short term, when there’s inflation, Bitcoin could go up and down, so you’re not quite sure whether it’s going to hold its value across 12 months. And this is, of course, the issue with using Bitcoin as a form of payment. If it goes up and down in value, and I think this is what’s been found in El Salvador, people might find it a useful asset, but then there are problems if it goes down in value when you want to go and spend it. Now, this happens to Australians as well. Our currency has dropped very low in [00:26:00] previous periods. I think Americans have probably never had the experience of having your currency suddenly worth about 40% less than what it was worth a few months earlier.

Back in, I think, 2001, the Australian dollar was 47 US cents. Mm. And of course now it’s in the mid-seventies. So we have had significant drops in our currency. And of course there are other people in the world for whom the Bitcoin narrative might be stronger, like in Argentina and Turkey, where you do have very significant depreciation in your currency.

But certainly, there are challenges with using cryptocurrency as a store of value like fiat currency, because, one, you’ve got the issue of it potentially going down. The second issue is that, at the moment, it’s treated as an asset, so when you spend it and dispose of it, there are potential capital gains implications.

Zach: Mm-hmm, right. Have you seen any data on how big a problem problem gambling is in cryptocurrency trading? [00:27:00]

Paul: Yeah. I wouldn’t think that cryptocurrency will be a major cause of problem gambling. If you look at Australia, for example, which has one of the highest rates of gambling expenditure, probably the highest per capita expenditure on gambling in the world, something like 70 to 80% of all problem gambling comes from slot machines.

And that’s because they’re very attractive to both men and women of different ages, whereas cryptocurrency tends to attract a very narrow band of the population. When you look at those who do it quite regularly, who put reasonable money into it, it tends to be younger males, who I’d say probably characterize 85% of the investors. What you find is that those sorts of people tend to already gamble on sports, they gamble on casino games, they have a wider range of activity preferences.

And so, as with sports betting, even though that’s probably the form of gambling which is growing the most, it’s still a very small [00:28:00] activity relative to the slot machines.

Mm-hmm.

Zach: And so crypto made some of its biggest gains post-COVID. COVID hit in early 2020, and then in October of 2020, Bitcoin really took off in a large way. I’m curious if you have any opinions on whether you saw any COVID-related psychological effects going on, in terms of maybe COVID lockdowns and financial and existential stressors making people more likely to speculate, and more likely to basically gamble, things like that.

Paul: Yeah, I think there’s certainly an argument that people were thinking about changing their lives during the COVID period. I think it made people reevaluate their lives. I think many people have also reevaluated their views of governments quite a bit during that period too.

I think governments did some good things during the course of the pandemic, and also, I think, some very bad things. We’ve had some particularly extreme government behavior in Australia, particularly in Victoria. We’ve seen some pretty extreme behavior in Canada recently; [00:29:00] I think China is probably an example now. Whereas we’ve seen other countries take a more reasonable view. So we’ve seen, in some ways, the best and worst of what government can offer in the last two years, and I think people have started to rethink the nature of the world and how they manage their finances.

We know that people saved a lot of money during the COVID period. They were also given quite a lot of stimulus money, and there was a lot of quantitative easing occurring during that period. So it will be the case that all the conditions were ripe for an asset bubble, where people would put money into renovating their house, buying shares, buying crypto.

And it would certainly be the case that young people would be attracted to that. I think the growth in the Bitcoin price last year obviously had a lot of different causes. I mean, the halving of Bitcoin occurred in 2020, I think, or 2019, so it was sort of naturally going up anyway during that period.

We also had all the stimulus money coming [00:30:00] into the market, which would’ve pushed up the speculative trading. So I think there were a number of things which came together during that period to push up the price. At the moment, I think we’re in a state of great uncertainty in the world about many aspects of finance, so it’ll be interesting to see what plays out in the next couple of years.

Zach: I was curious: is there an increase when times are tough in general? Aren’t there some studies that show there’s an increase in things like gambling and alcohol use? Or am I getting that wrong?

Paul: Yeah, I, I think that there’s some evidence for that. I mean, during the COVID period, uh, people didn’t have much else to do. And in fact, um, yeah, alcohol consumption gaining certainly increased dramatically during that period. It’s, mm, it’s difficult to, to know whether it’s caused by, um, the fact that people were bored, didn’t have much else to do, then stay at home and, and do it.

So if you, if you have a, if you just imagine holding all things constant that people drink a certain amount per week and all, if all. Pubs and clubs are closed during that period. [00:31:00] Where else would you drink that? But then, but at home. And so you, you, you obviously go to the bottle shop and buy drinks and take them home.

And so it might be that if you’re seeing greater alcohol purchases in people’s budgets, that would just be a reflection of them diverting their going-out drinking money back home. And I think with gambling it’s been the same. We’ve seen spikes in gambling activity in some of the states of Australia in recent months.

But that could be because people had money saved away that they otherwise would’ve spent over a more gradual period. It could be that they’re satisfying their need for going out in a more concentrated period.

Zach: Right. It’s complex, like most things. It’s hard to boil it down to a single thing.

So when you were talking about the Rothschild conspiracy-theory kind of beliefs: one thing I’ve thought about cryptocurrency in that area is that the sheer fact of investing money in something can change someone’s thoughts [00:32:00] and beliefs. For example, for someone who’s invested a good amount of money into cryptocurrency, there can be a psychological pressure to really start to believe in cryptocurrency, even though at first the motivation was just to see it go up. By hoping it goes up, you might start really believing in the mission, and your beliefs might change.

And to make an analogy, it might be like a politically liberal person who places a big bet on Donald Trump winning the election for purely financial reasons, and then actually starts to somewhat hope for Trump’s win. That feeling of hope might change their beliefs in some way, make them look at things from a different angle, maybe somewhat against their initial wishes. And as someone who’s gambled on a few political events myself, I’ve felt a little bit of this: finding myself looking at things from new perspectives, or feeling emotionally pulled in kind of weird ways, solely due to the [00:33:00] money I’d bet on something.

And I’m curious if you see some of that in the cryptocurrency area. Because people hope for cryptocurrency to succeed, they might start resenting a bit the government forces or the social forces that they perceive as holding cryptocurrency back, or people’s idiocy for not getting it, things like this. And that can foster a perspective that’s not antisocial exactly, but maybe antisocial in some senses. I’m curious if you’ve thought about that angle of how placing bets can change people’s ideas.

Paul: I think that’s very true. What you do see, once people have a financial investment or stake in a particular cryptocurrency, is that they become very defensive about it. They’ll engage in quite a bit of confirmation bias: they’ll be looking around to read material, or look at [00:34:00] influencers who are backing up their views about that particular coin.

They wanna read stuff to validate the decision they’ve already made. They talk about their ownership of certain coins as almost like a form of club. So, you know, people who invest in Chainlink talk about themselves as being part of the Link Army. And I’m not sure, but there may be “Lunatics” for Terra Luna.

People always feel like they belong to a club when they own a particular coin, and they get quite resentful when people criticize that coin, knowing that if it’s done by a leading influencer, that can potentially drop or suppress the price. Now, I think it’s quite important to have objective analysis of even some of the good projects, which may well still be good projects longer term, but I think it’s certainly irrational when people start to attack those who are providing fairly legitimate, objective appraisals of the technology.

I think the crypto industry does have some of this. Having been an academic writing [00:35:00] about it: while you do see this almost cult-like faith in the technology and its future, you also sometimes see the frustration of some of the very well-informed tech guys when they’re faced with quite obvious ignorance from politicians who are making very important decisions about the industry.

There’s always this classic thing where they talk about the FUD associated with crypto, and one piece of it of course is that it facilitates crime, when in fact we know that it’s very easy to track transactions on a blockchain, much more easily than with fiat currency, which is of course the vehicle of choice for most organized crime.

And I guess the environmental FUD gets dug out. We know that Bitcoin uses a lot of power, but we also know an increasing amount of renewables is being used, and that gold mining and banking use a lot of power too. And, you know, China banning Bitcoin: I think South Park’s even done a skit on that one.

It’s sort of [00:36:00] thrown out whenever they wanna drop the price; it’s one of those things. So you do see some reason why some of these influencers almost feel like there’s a bit of a conspiracy against the technology. And we do see that with some of the senior politicians. Some seem very articulate and knowledgeable about the technology.

I think Cynthia Lummis, who you probably know, is one; even Ted Cruz, I think, has said some sensible things about it. But you get other people who, I sort of feel, ask questions in some of those Senate inquiries which indicate they don’t really understand the technology.

So I think the bottom line is that it behooves those who make important decisions about this new technology, as was the case back in the early internet days, to make sure they’re fully apprised of the pros and cons, both sides of the arguments, and not to be driven by ideological views about these things.

Because the internet could easily have been banned. Certain rules were [00:37:00] passed back in those days which made the internet possible, and if the legislation had been too harsh, it might not have evolved into what it is today.

Zach: Mm-hmm. Yeah. The thing that strikes me there is that there are plenty of understandable reasons why society might not adopt this quickly, just like it doesn’t adopt anything very quickly. So there can be legitimate, quite understandable reasons on both sides to think more about it, and then there are people who just really don’t get it.

But the thing that strikes me is, I think there’s going to be an element of: because I’ve invested a lot of money in this thing, I’m very frustrated with it not taking off in the way that’s clearly obvious to me. And that leads to... I kind of joked about this online: I had actually bought a good amount of cryptocurrency and then sold a good amount of that off.

And I [00:38:00] joked that because I’d done that, I was happy with it going up or down. Because if it went up, I was like, oh well, at least I have some. But if it went down, I was like, well, okay, I was smart for selling some off. Right? So it was kind of this idea that the trades we make, the gambles we make, can influence our feelings, and then our feelings can influence our beliefs.

And that was kind of the idea I was getting at.

Paul: Yeah, the general view I have of cryptocurrency is that it needs sensible regulation. I think at the moment there does need to be more regulatory oversight. And one of the problems of regulation is that it tends to be chasing technicalities, like what’s a security and what’s not.

That’s as opposed to just fundamentals to do with consumer protection, which may be related to that as well. You know, you see quite a few scams out there: people putting up projects with nameless teams, where there’s no real accountability, no clarity about who’s doing it, and no [00:39:00] clarity about what information’s being provided to consumers about the tokenomics and fundamentals of some of these projects.

I think that’s where people are being scammed, even with what might be considered legitimate projects. And for just general advice: there’s no easy money in the world. Crypto can be gambling, but it can also be investment. And I think what everything converges on is that it has to be taken with a long-term perspective.

Like with the technology shares of the late 1990s: they went up and down, crashed, went up again. People have to look at fundamentals, research things properly, and take a longer-term perspective like an investor. And if they are going to do any sort of speculation or short-term trading, it should be small value, what people can afford to lose, knowing that it is like gambling and that you would expect to lose your money.

And any short-term gains really should be put into the safer, longer-term plays. I think Bitcoin, probably Ethereum, [00:40:00] and some of these top protocols probably will be around in five years. But, as people often say, when you look back at 2018, many of those coins have gone, and that could happen with some of these very promising protocols people have great faith in now.

Zach: Paul and I got a late start for our talk, due to some tech problems, and I didn’t get a chance to ask him all the questions I was curious about. But I emailed him some of those questions and he sent me some responses, so I’ll just read a few of his thoughts.

One question I sent him was: “Regarding day trading: I think it’s underappreciated how much day trading is tied to problem gambling. I personally know someone whose father was a day trader who lost all their family’s money and ran up debts and ruined his children’s credit scores by taking out debts in their name. How big a problem is problem gambling when it comes to day trading?”

Paul wrote back: “The two behaviours are related. Those who engage in gambling and/or day trading are statistically more likely to engage in the other activity. This is in part due to third variables (gender, higher socioeconomic status, often higher education, plus impulsivity). There will be many day traders or TA guys who hate gambling, but being a gambler and a day trader would pose some risk because the trading is likely to be more like gambling.” end quote

Another question I sent him was: With the advance of the internet and trading algorithms and machine learning and such, it would seem to me almost impossible for day traders to have an edge, unless they were very skilled. Am I wrong on that? If I’m right, does that mean that almost everyone who engages in day trading these days, all the amateurs who do that and who aren’t professionals, that almost all of them may have a problem?

Paul responded: “Some can do it, but it is rare. Kahneman talks about this in Thinking, Fast and Slow. The best performers are often not the same from one year to the next because trading performance is not consistent. Amateurs are unlikely to do well from this. The top guys who do OK tend to profit from having inexperienced people who provide the liquidity: who buy high and sell low. So, a lot of it is simply being better than others vs. actually really being all that good in technical terms. The extreme volatility and insider knowledge of what is likely to ‘pop’ is what gives the more experienced people the edge. They also have research teams who are doing the background work, e.g., following Discords, etc., to see what is likely to be appealing to retail investors.” end quote

A note here that what Paul says about experienced operators profiting from inexperienced operators, who provide the liquidity, is much like the poker world. The main reason it’s possible to be a professional poker player is that so many people over-rate their skill at poker and just don’t see all the angles and complexity involved, and that’s what allows the more experienced players to reap their profits.

Another question I sent Paul was: “How much of a role do ego and Dunning-Kruger-type effects play in problem gambling? One thing that strikes me about problem gambling in poker and day trading and such is that, because they are such complex endeavors with so many factors present, and such swings even when you’re very skilled, it’s easier to convince yourself that you’re just getting unlucky: that you are skilled but just having bad luck. The complexity of the endeavor makes it possible for some very smart people to have a problem and not realize it, or be more easily able to avoid seeing it, anyway, in a way that wouldn’t be possible for simpler endeavors like slots or blackjack. Do you agree with that, and can the complexity of the game be a big factor in problem gambling?”

Paul wrote the following: “Yes, that can play a role. One reason I follow some of the influencers is that they are very bright and rational and experienced, e.g., Ben Cowen; James from Invest Answers; Rob from Digital Asset News. They play it quite safe: very long term; Ben calls TA ‘dubious speculation’. They stick with the big projects only and only put small %s into more speculative stuff; take profits, buy low. Many of the top guys were caught out in 2017 and ’18 and admit openly to their mistakes and have learned from them. I don’t see a lot of the Dunning-Kruger effect in crypto. Anyone who is very confident and sure is probably going to get wrecked.” end quote

Another question I sent was: An interesting aspect with crypto is how much social media may be playing a role. Social media, as we’ve seen with things like the Arab Spring and the George Floyd-related protests, is a powerful tool for focusing a lot of people’s attention on something. It allows for a sustained focus on something across a large population for a long period of time in a way that just wasn’t possible pre-internet. And the internet is kind of a weird place because it is a distorted view of things, in that a relatively small number of people can give a perception that something huge is happening. And that sustained emotional focus can have other effects, like leading bystanders to have a fear of missing out, and leading people to think that because many people are passionate and confident about something, that it must be something real and exciting, things like this. So I’m curious how you see social media as affecting people’s cryptocurrency behaviors and maybe digital addictions in general.

Paul wrote the following: “Yes, definitely. Social media is not playing much of a role at the moment because a lot of the retail interest is gone from the market. However, when the market goes up, then retail comes back in and social media explodes. There will be thousands of videos with people promoting the most speculative of coins. The dodgy ones are those who promote tokens which they picked up at low prices in IDOs and which retail is now buying at a much higher price. It’s a complete conflict of interest.
I see some influencers promoting projects that they provided venture capital for.
The most risky videos are those telling you to buy coins. If the coin is known then it probably has already gone up and therefore is riskier.” end quote

Okay, this has been an episode featuring psychology researcher Paul Delfabbro.

This has been the People Who Read People podcast with me, Zach Elwood. To learn more about this podcast, go to behavior-podcast.com. If you’ve enjoyed this podcast, I’d hugely appreciate a rating or review on iTunes or another podcast platform. I don’t make any money on this podcast and I spend a good deal of time and effort on it, so any way you can help me is greatly appreciated. Sharing episodes with your friends and family is also hugely appreciated. If you want to show some financial support, I have a Patreon at patreon.com/zachelwood, that’s ZACH ELWOOD. If you donate to my Patreon I’ll send you occasional updates on projects I’m working on and ask your opinion, things like that.

And if you’re a poker player, just a reminder that I’ve done some poker tells work and you can read about that at my site readingpokertells.com.

You can follow me on Twitter at @apokerplayer.

Thanks for listening.


Lie detection using facial muscle monitoring and machine learning, with Dino Levy

A talk with Dino Levy about his team’s lie detection research, which used monitoring of facial muscles and machine learning to detect lies at an impressively high 73% success rate. Their paper was titled “Lie to my face: An electromyography approach to the study of deceptive behavior.” Topics discussed include:

  • The setup of the study, and the theoretical causes of the findings
  • How this method compares to polygraph technology (lie detector machines)
  • Applications of the technology
  • Thoughts on ideas of the universal nature of emotion-related behavior
  • Speculations on using these findings to analyze poker players

Episode links:

Other resources related to or mentioned in our talk:


The scientific study of poker tells, with Brandon Sheils

Brandon Sheils (Twitter: @brandonsheils) is a professional poker player and poker coach who recently did a scientific study of poker behavior (aka “poker tells”) as part of seeking a Master’s degree in Psychology at the University of Nottingham. Brandon also has a poker-focused YouTube channel.

Topics discussed in our talk include: the challenges of studying poker tells; how he set up his study and the reasons behind the structure; what the results were; the meaning of something being “not statistically significant”; speculations on what AI and machine learning might hold for the analysis of poker tells; some times Brandon has used opponent behavior in poker hands.

Links to this episode:

Things discussed in this episode:

TRANSCRIPT

Zach: Welcome to the People Who Read People podcast hosted by me, Zachary Elwood. This is a podcast about better understanding why people do what they do. You can learn more about it at www.behavior-podcast.com

On today’s episode, I talk to Brandon Sheils, that’s SHEILS, a professional poker player who recently did a scientific study of poker tells. We talk about the challenges with studying poker tells, the structure of Brandon’s study and why he set it up that way, what the study found, some talk about general scientific concepts, some speculating on what AI and machine learning approaches might hold for analyzing poker behavior, and then at the end we talk about some poker hands where Brandon used behavior in his decision process. 

If you didn’t already know, my own main claim to fame is that I’m the author of some respected books on poker tells; my first book, Reading Poker Tells, has been translated into 8 languages. I’m most proud of my second book, Verbal Poker Tells, which attempts to find the hidden meanings in the things poker players often say during a hand of poker. If you’d like to learn more about that work, you can go to www.readingpokertells.com. If you’re interested in this subject, you might also enjoy a previous episode of this podcast where I talk to Dara O’Kearney about poker tells. 

My work on poker tells is what led Brandon Sheils to reach out to me when he was starting work on his study. I helped him a bit in brainstorming the setup of the study, the criteria of what poker hands would be included, and the behaviors he’d examine. And I helped him a bit in going through footage and finding poker hands that met the criteria that he’d later log. 

A little more about Brandon and his work: He’s a professional poker player and coach who plays both online and live. He has a poker-focused YouTube channel at Brandon Sheils, again that’s SHEILS. If you’re curious about his poker tournament scores, you can check out his profile on HendonMob.com, which is a site that tracks tournament results. His Twitter handle is @brandonsheils.

Brandon did his poker tells research as part of pursuing a Master’s degree in Psychology at the University of Nottingham. That study is not yet published.

One thing that might be important to emphasize before the interview is that good poker players generally only infrequently base decisions on poker tells. I want to emphasize this because I think the importance of poker tells is quite exaggerated in the public’s understanding, based on depictions of poker in movies like Rounders or James Bond movies and such. The ability to read poker tells well has been called the “icing on the cake” by some, in terms of it being much less important than having a strong strategy. Poker is a tremendously complicated game; it is a much tougher game to solve computationally than chess, and strategy is much more important than tells. In my poker tells books, I’ve given the estimate that being strong at reading poker tells might add anywhere from 1% to 15% to a live poker player’s win rate. Put another way: you can be a hugely successful professional poker player without ever thinking about poker tells. As someone who considers himself quite good at reading tells, if I were playing a full day of poker against somewhat decent players at decent stakes, I might base a decision on a tell only a few times during that session, with some of those spots being pretty small decision points early in a hand, like whether to raise or fold pre-flop. I like to emphasize all this because I think a lay audience can have exaggerated ideas about poker tells and how often good players use them, and all this is especially relevant for my talk with Brandon about his study.

Ok, here’s the talk with Brandon Sheils.

Zach: Hey Brandon. Thanks for coming on.

Brandon: Hi Zach. How are you?

Zach: Good. Thanks for joining me. I guess we’ll start with maybe you can go into a little bit about how you got interested in poker and maybe go into how you’ve gotten to playing for a living and what led to your interest in doing this study. I know that’s a lot of questions I just threw out there, but maybe a brief summary of that stuff.

Brandon: Yeah, I’ll try and condense my life of poker into as short a paragraph as I can, I guess. I’ve always been interested in strategy games, so growing up I played different card games and different games that were more about trying to out-strategize your opponent. And my parents played poker for a living when I was very young. My first memory, this is when I was seven or eight, is them playing home games sometimes, or I’d see them playing on TV. So I knew poker as a game we would play as a family at home sometimes. And I wasn’t super into it, but I enjoyed playing as a family. And as for the strategy elements, obviously I was clearly very bad at it compared to two professional players and my brother, who’s four years older than me. I think that kind of started my interest. I used to watch the World Series on TV, and I liked the idea that I could watch these people playing for a lot of money, it seemed like infinite money at the time, and I could spot mistakes people were making on TV when I was seven or eight years old. Then I started to play home games and pub poker games, and I would win against my dad’s friends even; maybe they were actually drunk people at the pub or whatever. I didn’t understand variance at the time either, so it’s possible I was just super lucky, but it felt like I was making good decisions and they were making very bad decisions already at the start of my playing poker. And I was getting money for this at an age where even 20 pounds is infinite money, or $25 is going to be infinite money at that age. I didn’t play too much as I was growing up because there was no opportunity, obviously, but if there was a home game or a pub poker game and I could play, I’d gravitate towards that. And then my brother became a professional player when he was 18 or 19, as he was finishing uni, when I was still underage because he’s three or four years older.
But that again continued my interest; my family have all made a living at this at different points. And then I went to the casino for my 18th birthday. Pretty much during that period of time, I was doing my A Levels, which are the exams that get you into uni in the UK, so it’s the most important study phase, effectively. And I would bring my A Level revision to the casino and do it between hands, because I was that interested in just playing as much poker as I possibly could. And it went pretty well, and I’ve kind of had a fun relationship with it, alongside having a normal career. And I enjoyed the fact that when I eventually started doing a psychology master’s, for probably lots of reasons, I could combine my love for poker with creating a study of my own. I have a lot of passion for psychology, I have a lot of passion for poker, and it seemed like the natural culmination that I’d end up doing this study.

Zach: So maybe before we get too much into the details of the study, can you talk a little bit about the difficulty of setting the study up and the difficulties you ran into in trying to study poker tells in a scientific manner? What stands out as the major obstacles you encountered?

Brandon: Yeah, so the foremost problem is creating uniformity across what we’re measuring. If you try and measure turn decisions, or any time someone bets, or times where someone has 10 big blinds or 20 big blinds, there are so many different factors in poker where the decision-making process is completely different that it wouldn’t be right to compare them. So first was picking one specific area of poker where there’d be a lot to learn, but also a lot of data available. The first hurdle I met: I realized straight away that when someone bets the river, this is the point when they’re waiting for their opponent to make their decision. The voice in their head is either saying, “Please call,” or, “Please fold.” I thought that was going to be the perfect point, but it just wasn’t possible to get data on that area, because with the majority of the streamed poker footage, they’ll keep the camera on whoever’s turn it is, and as soon as it swaps turns, the majority of the time it will be on that player. And as soon as the camera swaps once or twice, you’ve lost some data; it’s just too hard to have a uniform sample based on that. Sometimes it would go back and forth a little bit, but because it would swap at different rates, and sometimes it wouldn’t swap at all, it just wasn’t going to be possible to get data for post-bet river analysis, and I think that would’ve been the most ideal. So that was the first big obstacle, and that’s why I ended up doing pre-bet, if that’s the right word, on the river. And I tried to determine, being a poker player myself, what the exact situations are where there’s the most pressure, and therefore where the most would be given away, based on what I’ve read about human psychology.
If someone is betting one big blind on the flop, I think you wrote about this in your book as well, it’s just going to be a completely different subjective experience for them, because of the risk-reward: they don’t necessarily care that much if it doesn’t work yet, because they can bluff later, or they can give up and it’s a small pot. So I decided to choose pots that were at least 10 big blinds, and I looked just at tournaments, because that’s the uniformity I went for there as well. I could have just looked at cash games, and I think that would be interesting too. And I thought once the pot is at least that big and it’s a river decision, that’s a good starting point to say they’re going to care about the result of this bet, and therefore they’re going to have to try and balance their emotions a lot more, and people are going to take longer to make their decisions. It’s just a more important decision for both players. So I think that was nice to hone in on.

Zach: Yeah. The footage issue is a big problem, and that’s something I’ve dealt with a lot because I’ve created my poker tells video series. Ideally you would have those post-bet spots, those spots after someone’s made a significant bet. So I think it was a great decision you made to focus on the period slightly before the bet and then during the bet, as they’re placing it, because those are spots the camera usually stays on the player for: when their turn starts the camera’s on them, and it typically stays on them up until they place the bet. So that all made a lot of sense.

Zach: A small edit here: Brandon took a while to explain all the various elements he had logged for the experiment; there were 22 factors in all. But to speed this up a bit, I’ll just name a few of the specific verbal and nonverbal behaviors that he logged. One was the amount of time a player thought before betting. Another was the amount of time a player spent placing a bet once they’d either declared the bet or started putting together the bet. Another was whether the bet was verbalized or not. Another was whether the player looked back at their hole cards before placing their bet. Another was whether the player was playing with their chips or not. Again, that was just a few of the aspects that Brandon logged.

Another aspect of Brandon’s study that made a lot of sense was how he approached ranking whether a bettor’s hand was a bluff or a value bet. A value bet means a bet made for value, with a hand that will usually be the best hand; in other words, a value bet is not a bluff. Brandon sent each hand in the study to several skilled poker players, who then ranked the hand as either a value bet or a bluff. This was an improvement on a method of categorizing hand strength that Michael Slepian had used in his poker behavior study. In that study, they’d apparently, from what I could tell, relied on the onscreen percentage graphics which are displayed beside a player’s hand graphics. Those graphics show the likelihood of a player’s hand winning, calculated by comparing it to the opponent’s known hand. This makes it a pretty bad way to categorize whether a player believes they’re betting a weak hand or a strong hand. In other words, a player could have a hand they believe will tend to be a winner, but in that specific hand their opponent happens to have an even stronger hand. In that instance, the first player’s strong hand would presumably be classified as a bluff. I confess I’m not sure if there was some way Slepian adjusted things to account for that; my understanding was based on reading their paper, so it’s possible there was more to it. But in any case, Brandon’s decision to get skilled players to rank hands as either bluffs or value bets makes a lot of sense, and may, I think, be the best way to easily make that categorization.

Okay, back to the interview where Brandon talks about this a little bit more.

Brandon: Yeah, I completely agree. I think it was your critique of that past study that helped me get onto that. So thank you as well.

Zach: Oh, that's awesome. So let's see. After naming all of those factors that you looked at, it might be anticlimactic, but I have to ask: what were your findings?

Brandon: Essentially, that none of these factors alone were statistically significant. And that does not mean that they aren’t potentially significant with a bigger sample, but with the sample I had, some of these factors didn’t actually occur that often, even though obviously I tried to measure all of them. But it’s not actually that often that someone double checks their cards. If I find the exact number here, we had in the 400 and–

Zach: 24.

Brandon: Yeah, only 24 times did someone double-check their cards, and four times it couldn't be determined based on where the camera was or something else. So 24 out of 416 is a really small amount. If you looked at the ratio there, it might look statistically significant to the human eye, but based on the maths behind figuring out statistical significance, it has to meet a threshold. The confidence level is 95%: we have to be able to say with 95% certainty that this makes this more likely, and we just didn't have the sample. I think with a bigger sample, I'm quite sure that would've been significant.

Zach: That gets into a general scientific question I have, which is: when I see a study that says, "This wasn't statistically significant," is my takeaway supposed to be just that the study cannot tell? Because sometimes I feel like it's framed as if there's no correlation, but maybe that's just a misreading on my part. Should the takeaway for me be that the study just can't determine that?

Brandon: Generally, yes. It's like saying we failed to prove it; it doesn't mean that it's not true. But it depends on the actual science that went into it. So if I had a sample here of a hundred thousand, it'd be pretty hard to argue with, assuming that my practice and how I recorded data was fine, which is kind of a whole other ballgame. There's something called statistical power, which generally you want to get above a rate of like 0.8, and it's determined basically by the sample size. Some studies are going to have really good statistical power, and they'll talk about that. If they've got good power from a good sample, and good science behind what they did (for example, not using the equity approach in our case, because that's kind of a flaw in the science), then it's pretty hard to argue with. But you still can't say for sure that it's false; you can only say that, with all of this, we didn't find it to be true. Though with enough data, that's sometimes almost as good.
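[Note: the 95%-confidence idea discussed here can be made concrete with a small significance test. The sketch below runs a one-sided Fisher exact test on the double-check counts from Brandon's study; the 10-bluffs-vs-14-value split among the 24 double-check hands is invented purely for illustration, not taken from his data.]

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test on a 2x2 table:

                    double-checked   didn't
        bluff             a            b
        value bet         c            d

    Returns P(bluffs among double-checks >= a) under the null
    hypothesis that double-checking is independent of bluffing.
    """
    n = a + b + c + d          # total hands
    bluffs = a + b             # row total
    checks = a + c             # column total
    denom = comb(n, checks)
    # Sum the hypergeometric tail from the observed count upward.
    return sum(comb(bluffs, k) * comb(n - bluffs, checks - k)
               for k in range(a, min(bluffs, checks) + 1)) / denom

# Hypothetical split for illustration only: suppose 10 of the 24
# double-check hands were bluffs, out of 120 bluffs in 416 hands.
p = fisher_one_sided(10, 110, 14, 282)
print(round(p, 3))  # comfortably above 0.05: can't reject independence
```

Even though 10 bluffs out of 24 is well above the roughly 29% base rate assumed here, the tail probability stays above 0.05, which is the "looks significant to the eye but isn't" situation Brandon describes.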

Zach: This might be getting too far into the weeds, and if it is, feel free to say so. But say we were looking at that double-checking-hole-cards-before-a-bet behavior: how much more data do you think you'd need to be able to confidently say, "There's no correlation there," if that makes sense?

Brandon: Well, I ran the statistical power test on the total sample I would need, but I didn't run it on individual factors. Based on the timeframe, I only had like a month or two of collecting data, so it would've been impossible for me to get true statistical power on all of these individual factors, just because they're so infrequent. I knew that some of them would hit the benchmark and some wouldn't, and it's just kind of a starting point. It's better that I recorded them than not, but they didn't become the main focus of the study, as good as it would be to have more data on them. So to answer your question about the amount: bear in mind we had 24 where it was true against 392 where it wasn't. I'd have to plug it into a statistical computer to run the exact equations, and to be honest, I don't want to make any false claims by coming up with an exact number, but 24 out of that is very small. So I would imagine, based on the ratio we've got, I'd need at least 4,000 data points, maybe more.
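[Note: Brandon's back-of-the-envelope "4,000 data points" can be sanity-checked with a standard two-proportion power calculation. The sketch below is a rough illustration only; the 42%-vs-28% bluff rates are hypothetical effect sizes, not measured values from the study.]

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Hands needed in each group (behavior present vs absent) to
    detect a difference in bluff rates p1 vs p2 with a two-sided
    two-proportion z-test, using the normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
    z_b = NormalDist().inv_cdf(power)           # ~0.84
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Hypothetical effect: double-checkers bluff 42% vs a 28% baseline.
per_group = n_per_group(0.42, 0.28)      # roughly 180 behavior hands
behavior_rate = 24 / 416                 # behavior shows up in ~6% of hands
total_hands = per_group / behavior_rate  # hands to log to see enough of it
print(round(per_group), round(total_hands))
```

Because the behavior itself only appears in about 6% of hands, needing a couple hundred behavior-positive hands translates into thousands of logged hands in total, which is in the same ballpark as Brandon's guess.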

Zach: Yeah, I think it really shows the difficulty in this. I was going to say, too, I think some people would expect me to be disappointed or surprised by not finding anything. But these things are so hard to study, because, as you say, you collected 400 hands and each behavior you're studying only shows up in a few of those hands, which means you've got to get a lot more hands to capture those behaviors. And then there's even more complexity, because the situational context is important. You named a few of the factors involved, but there's also the fact that skilled players can behave quite differently from recreational players, and most tells are found in more recreational players. So if you were able to zero in on the recreational players and chop out the pro players, that would also be an interesting way to analyze it. This is all just to say that these things are massively complex to study. I'd actually considered setting up a local game to try to study this myself, because there are poker rooms here, and I thought about putting some effort into it. But I started thinking about the same things you're describing, where I was like, "This would almost be a life's work, and I would have to invest a lot of time to do it right." And I'd still be left running into all these situational factors to take into account. It's probably what you ran into yourself actually doing it: it's daunting to set it up and to try to narrow things down to a specific, consistent situation you're comparing.

Brandon: Yeah, I completely agree with what you said about recreational players. And I think it's a factor of self-awareness. Even though being professional obviously kind of comes with that, there could still be a recreational player who has read all about the leading poker tells and spoken to pros about tells they've seen in them, and then they can reverse-tell all the time and be complete outliers. But when it comes to people just playing and not thinking too in-depth, then the less experienced you are, or the less this is your profession, the more you're going to just naturally give stuff away, or not realize that saying certain things is indicative of strength or weakness, because you haven't got the sample size or you just don't care enough about that. You're just there to have fun, and you do what you're feeling at the time, as opposed to thinking, "I need to recreate this situation all the time and not be exploitable."

Zach: Yeah. The other one I was expecting a little something from was the double-checking of hole cards, and the other was the length of time thinking before a bet. I kind of thought we'd see a little something there, but then I was thinking about it after you said you didn't find anything, and I started thinking, well, maybe it'd be hard to find anyway. A lot of good players like to tank a long time with both their good hands and their bluffs, and good players tend to take a long time in general, and I wondered if that would throw off the timing averages. Again, if we were just studying recreational players and you had a similar sample size of just recreational players, I kind of feel like there'd be a little something there. But anyway, that was all just stuff I was thinking of when you told me the results.

Brandon: I did notice in the streamed games in… I forget the city now. Is it Chicago? The Windy City, was it?

Zach: Oh yeah, Chicago, Windy City.

Brandon: Windy City, yeah. In those games, people acted so quickly, and it did make it really hard to gather data, because I wanted to use more of a breadth: not just these big tournaments, but tournaments in different environments. I thought it'd be good to have a wider range of players, and it was so much more common there for people to act in even less than two seconds. And because my whole analysis is on how long they take to make a bet, it almost became… you can't get most of the factors when someone takes less than, say, ten seconds. I think I actually started excluding anything less than five seconds, because you just couldn't really determine anything. And that didn't feel so great to do either, because a lot of people snap-bet with bluffs or with value based on what they're thinking, too, which you don't want to exclude from the data. So it can change a lot based on the environment.

Zach: That gets into another thing here, which is that using poker tells means applying a player-specific filter. So, for example, if you're playing with a few people who you've noticed are pretty quirky, always betting quickly or just betting weirdly or doing other weird things, and you can't find anything noticeable on them, you're not going to apply to those quirky players the common general tells you might apply to some other recreational player. That kind of played into it too. And when you said that about the Windy City games, which I used in my Poker Tells series, you're right. In some of those games, people satellite in from lower-level tournaments, and there can be almost a home-game feel to them. When you said that, I was like, "Yeah, now that you mention it, those games are really quirky, and people do all sorts of weird things and act really quickly in spots you wouldn't typically see that." It's a lot of the same player pool, and I think that lends it that home-game feel. So anyway, that was just to say, yeah, it's tough to study these things, basically.

Brandon: Yeah. I had two lines of thought from what you just said. The first was that the other thing with the Windy City games was there were a lot more times where there wasn't a unanimous opinion on whether a bet was a bluff or value, because there were a lot more times where they would bet and I would say they didn't know why they were betting. Or even sometimes the way they turned their hand over would be like, "I don't know if I win. You've called, here's my hand, I don't know. Maybe I win." And if I know they don't know, then it's harder to get into their psychology, because they might be almost freerolling it psychologically: what they do is what they do. If they're not thinking in-depth about the strategy, you can't really get the same data from them, because they don't know what they're thinking. The other thing I was going to say is that because my study was mainly focused on universal tells, I wanted a wide array of players, so I didn't get more than 10 samples on one player. Across the whole 420 data points, the most I got on one player was 10, because I didn't want too much of one player. But as you've written about and touched on already, I think if you just looked at one player across many situations, that would be the best way to actually determine what stuff they do, what their tendencies are when they're bluffing or value betting. You'd very clearly see that the pros are much more balanced, and a recreational player, I imagine, is not. So having, like, a hundred-hand sample on one player in just these spots would've been very interesting. Not as useful for universal tells, but just to see that the stuff is clearly different.

Zach: Yeah, it's interesting, because when I think about applying poker tells, so much of it is knowing which tells are likely to be true for someone you've classified as a certain type of player. There are those kinds of general tells for different player segments, and then there's also noticing the player-specific tells, things you wouldn't apply general tells for. I mean, there are definitely some tells I would use cold, just because they're so common, assuming I peg someone as fairly recreational. And then there are other tells I would never use cold, where I'm like, "I need to know more about this person." So it's this intersection of the universal, which I think is interesting, and the player-specific, which is almost like, "Let me study someone for a while and build up a little bit of information."

Brandon: That's pretty much what I do at the table as well. The more information they give away in the hand, whether they're talking or doing certain things with their body language, if their hand gets to showdown, I'm like, "Oh, that's a data point in my memory about this player." I can't use it yet, but if they're doing something completely different in another hand, and it gets to showdown and the hand is the opposite, I'm like, "This is already quite a lot of data." They've done two opposite things with two opposite ends of the spectrum of hands. Some people are really smart and can kind of duck and dive around that, but a lot of people don't realize how much they give away in the moment.

Zach: And the other interesting thing is that sometimes people say, "Well, you didn't get to see their hand, they didn't show it down." But in practice, so many players are only making big bets as value bets. So even without showdowns, if they've made only a handful of big bets in a few hours' time or something, you can safely assume those were value bets, if they don't seem like a bluffy kind of player or whatever. So there's even that kind of correlation you can draw, which is obviously not certain, but it's in the realm of probably true, which can be helpful in some spots too.

Brandon: I was just going to add, another thing people forget with river bets, just based on how the pot odds and the maths work, is that, for example, if someone bets the size of the pot, the price you're laying your opponent is two to one. They're calling the original size of the pot to win three times the size of the pot, so they need to be right one in three times. So the person who bets is supposed to have it most of the time, even with a perfect strategy, which obviously no one has, and people are normally quite bad at bluffing, or they do it too much, depending on the player. But they're supposed to be making you indifferent, which means if they bet the size of the pot, they'll lay you two to one. In theory, depending obviously on the different ranges and perceptions, they're going to have value something like 67% of the time, so they're only going to be bluffing–
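[Note: the river math Brandon walks through here is standard game theory and easy to verify. A minimal sketch of the indifference calculation:]

```python
def required_bluff_frequency(pot, bet):
    """Bluffing frequency that makes a bluff-catcher indifferent to
    calling a river bet: it equals the caller's required equity,
    bet / (pot + 2 * bet)."""
    return bet / (pot + 2 * bet)

# A pot-sized bet lays the caller 2-to-1: they must win 1 time in 3,
# so the bettor should hold value roughly 2 times in 3 (~67%).
f = required_bluff_frequency(pot=1.0, bet=1.0)
print(f"bluff {f:.0%}, value {1 - f:.0%}")
```

Smaller bets lay a better price and so support fewer bluffs: a half-pot bet, for instance, gives a balanced bluffing frequency of 25%.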

Zach: Yeah, the game theory fundamentals, yeah. And that's what you found in your study too; it was like 70% value bets, right? Something like that, yeah.

Brandon: I have the exact number here. Yeah, 71.2% value bets across 420 data points. I don't have the average bet size here, which would actually be really useful as well, to see the difference from what it should be. But clearly people usually have it; that's been true across pretty much every poker environment.

Zach: I was going to ask too, and I wasn't exactly sure how to interpret it, but in your paper you had written that, I think, you did find something; it was like, depending on controlling for a few variables, there were a few things that were interesting. Or was I misreading that?

Brandon: Yeah, so this is exploratory analysis, which, again, I'm no expert on the nuances of; this is using regression. My understanding is, to take one of the statistically significant results we got: player verbalized the bet, when controlling for total turn time, bet size percentage, whether the player raised, whether they went all in, and whether they were protecting their cards. Because I have 21 or 22 factors, 21 independent variables, when you use the regression to see if anything's significant, it uses all of them in a way that lets it isolate certain variables, whereas if you ran the test with just one variable, it would be different. So it's almost accounting for the other variables. It's hard to give a direct example, but the fact that someone raised is kind of its own area of data. If you looked at just the hands where they raise, or just the hands where they don't, that's almost like controlling for whether they raised. So if you play around with the data and control for specific things, you can get statistical significance. It's not as useful, because in theory you could cherry-pick and work around with it, and there's almost always going to be significance you can find if you go into every possible permutation. And this is one of the problems with papers sometimes as well: if you don't pre-register your hypotheses and what you're actually looking for, in theory, afterwards, you could claim the result you got was the one you'd hypothesized, and then that's now a big headline people are going to share.
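[Note: "controlling for" a variable in a regression can be loosely pictured as comparing within subgroups. The toy sketch below uses entirely made-up hands to show how a behavior's association with bluffing can live entirely inside one stratum of a control variable; Brandon's actual analysis used regression, not this simple splitting.]

```python
from collections import defaultdict

def bluff_rates_split(hands, behavior_key, control_key):
    """Bluff rate with vs without a behavior, computed separately
    inside each level of a control variable: a crude picture of
    what 'controlling for' that variable means."""
    counts = defaultdict(lambda: [0, 0, 0, 0])  # bluffs/hands, with & without
    for h in hands:
        c = counts[h[control_key]]
        if h[behavior_key]:
            c[0] += h["bluff"]
            c[1] += 1
        else:
            c[2] += h["bluff"]
            c[3] += 1
    return {level: (b1 / n1 if n1 else None, b2 / n2 if n2 else None)
            for level, (b1, n1, b2, n2) in counts.items()}

# Entirely made-up hands: the verbalized-bet signal appears only in
# the raised pots; the unraised pots show no difference at all.
hands = [
    {"raised": True,  "verbalized": True,  "bluff": 1},
    {"raised": True,  "verbalized": True,  "bluff": 1},
    {"raised": True,  "verbalized": False, "bluff": 0},
    {"raised": False, "verbalized": True,  "bluff": 0},
    {"raised": False, "verbalized": False, "bluff": 0},
]
print(bluff_rates_split(hands, "verbalized", "raised"))
```

In this toy data, pooling everything together would show verbalized bets bluffing more overall, but splitting on the raise reveals the effect is confined to raised pots, which is the kind of conditional finding Brandon describes.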

Zach: Yeah, I was reading something about that recently, where they made a point by finding some completely ridiculous correlation. I can't remember what it was, something about DNA and something completely unrelated, but it was to make the point you're making: you can theoretically find significance if you look across so many different permutations and combinations; you'll be able to find some correlation in something.

Brandon: Yeah. The more times you look, you're supposed to adjust for the fact that you've looked more times, because it's more likely you're going to find something. So that actually affects the significance rate as well; there's more maths you're supposed to apply to it. But if someone doesn't pre-register their hypotheses and doesn't use sound science, then… Say my story going into this paper is that I'm convinced that, to take one of the random factors, when people play with their chips, they're always bluffing. I really go into the paper thinking that, and I write my whole paper such that I'm looking to prove it's true. Then, if a test doesn't come out that way and I want to move around the different data points and say, "Oh, what if this data was never recorded? What if this data was controlled for in this way?" until I find something significant, and make out that it was the first test I did, in theory I can then publish a paper and say, "Yeah, this was statistically significant because of this." That's why with most papers today (I don't know if all journals do this now, but I think most reputable ones) you have to pre-submit the paper, basically to make sure that can't happen, because there have been too many cases in the past of people doing this. And that's what we did with this paper as well, for what it's worth.
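[Note: the adjustment Brandon alludes to ("more maths you're supposed to apply") is a multiple-comparisons correction. A minimal sketch of one standard method, Holm-Bonferroni, assuming 21 tested factors as in Brandon's study:]

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni correction: which of several tested hypotheses
    still count as significant once you adjust for how many tests
    were run."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    significant = [False] * m
    for rank, i in enumerate(order):
        # The smallest p must clear alpha/m, the next alpha/(m-1), ...
        if p_values[i] <= alpha / (m - rank):
            significant[i] = True
        else:
            break  # once one fails, all larger p-values fail too
    return significant

# With 21 factors tested, a lone raw p of 0.03 that looks significant
# on its own does not clear the corrected bar of 0.05 / 21.
print(holm_bonferroni([0.03] + [0.5] * 20))  # nothing survives
```

This is exactly why a single "significant" factor fished out of 21 candidates is weak evidence unless the hypothesis was registered in advance.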

Zach: Nice. So in your case, with the one you mentioned, verbalizing the bet, that would mean that, depending on those other factors you named, there was theoretically something there with verbalizing the bet, and maybe that points to further study, basically. Is that what that would tell you?

Brandon: Essentially, yeah. Another one here is thinking time percentage, when controlling for whether the player raised and whether the player went all in. So already that's a really specific area, where they raised and went all in, and I think people generally take longer there as well, because it's a second decision and a bigger decision. So it's almost going too far off the path sometimes when you look into these specific areas, because I'm now honing in on something which is kind of a small sample, and the data might be significant just for this. To give an extreme example, if I controlled for whether the player had the nuts, obviously it's going to be really significant that they're never bluffing, because I'm only looking at times when they've got the nuts. So you do have to be careful with it. But it does show that there is stuff going on in some permutations of the tree.

Zach: So are you interested in doing more in that space? Are you theoretically interested in adding more to your sample size, that set of hands that you have or any plans like that?

Brandon: I'd say that I am and I'm not at the same time. I enjoyed the process a lot, and I really enjoy poker psychology, but I had deadlines and a timeframe and I was working on my own, so it was a very different type of study from a full-fledged one with a lot of people on it, a more sophisticated team, and more people to collect data. I would enjoy being a part of that, I'm sure, especially as, I don't want to say a poker expert, but I guess in that context that's what I would be, because I've played professionally for so long. I'd be happy to be involved in and help out with these studies as they go forward, or potentially have more of a role, depending on the opportunity. And if new technology comes out where there are new ways to analyze the data, or it becomes easier, or there are new ways to think about it where there's a lot more we can learn, that could reignite my interest as well. Or, as you spoke about, I really like the idea of someone creating a game which is the psychology poker game. Oh, sorry, just to go back to when we were speaking about hurdles: the problem initially with this is that if you bring people into a lab to play poker, the data's almost worthless, because they don't have any risk. I've played play money with people before, and it's not poker unless you have a league or something that means something. People just don't care; they've got to have their own risk.
I really like the idea of, if hypothetically I had infinite money to make this study, putting on a big tournament, whether it's a league or whatever: have some pros, maybe some athletes, some famous people from different areas, or purely recreational players, and just analyze as much data as possible, but give away actual prizes as well, like prize money that means something to people, or a title or a trophy. It becomes kind of prestigious to be able to win this game, where maybe it's one table of six-max every week, and we record heart rate, we record breathing rate, we record the eye shiftiness directly–

Zach: Skin conductance.

Brandon: Yeah, yeah, as many things as you can possibly record without being too intrusive, such that people can still relax enough to play the game, and gather all the data possible. And then, I can imagine, if we recorded this and people watched it, you could have experts saying, "This is leading to this. We can say that this is more likely because of this, or here's the science behind this." I think I would find something like that really fascinating to watch as a poker player, so I imagine other people would find it interesting.

Zach: No, I think it's a great idea, and it's like using the entertainment factor as almost an excuse to do the science, because you're creating that real environment. That's what I was struggling with too. I actually spent a good few weeks brainstorming this a while back here in Portland, because of that same challenge: it needs to be real, obviously, but then what am I doing to induce people to be willing to do this with a bunch of cameras and detectors and stuff? I would have to pay them a good amount, so for many reasons it had a lot of obstacles. But I think your idea's great, because it would use the entertainment and the money that would come with it to do some cool science. And actually, I don't know if you ever saw that show; I can't remember what it was called, it was very short-lived.

Zach: A small edit here, I talked a little bit about a poker TV show here that I couldn’t remember the name of. In the show, they had recorded the players’ heart rates. The show I was thinking of was from 2006, it was called Poker Dome Challenge. And it was only on air, I think a few weeks. Back to the interview.

All I remember was that it was only a few episodes, I think, and they recorded, I think it was heart rate, but maybe it was something else. Does that ring a bell at all, with the heart rate?

Brandon: I mean, I’ve seen streamers do it now, but they have the heart rate on the screen while they’re playing it.

Zach: I haven’t seen that.

Brandon: There's a guy called BBZ; I've seen on his streams where his heart rate gets to like 140, 150 when he's doing a huge bluff in like a 10K tournament. You can see it going up.

Zach: I haven't seen that. Okay, I've got to check that out, because I always thought it would be cool to wear those monitors when you go to play, or when you're playing at home or whatever. I think that stuff is really interesting. And you can also buy the EKG and skin conductance things too, if you really wanted to get into that.

Brandon: Well, maybe as a starting point, hypothetically, if there's a game that already runs, imagine if you could say to those people before they play, "We're doing this study, do you want to have your [data] measured?" I don't know what incentive you could necessarily give people. But if we could start to get data from games that already exist before creating a full-fledged game, that seems like a good stepping stone. I would be happy to do that. I'd be interested in how my own physiology reacts when I'm playing, if the hole cards and everything are already streamed.

Zach: Totally, yeah. And if there's anybody listening who's into that idea, contact me and/or Brandon, and we'll look into it. It's interesting, too, thinking about how AI, machine learning, and video detection can play into this. Heart rate can usually be kind of hard to see, but you can imagine hooking up something that records a specific person's heart rate, or even indicators like flushing, at a very minute, detailed level, and then correlating that in some way and noticing things that wouldn't be obvious to people. For example, there was a recent Israeli study that found a pretty high ability to detect deception from minute facial movements, detecting from those movements when people were lying. These are the kinds of things that aren't obvious to human eyes but that a video recognition AI could pick up, and it gets into a kind of scary area, where you can imagine somebody making some really awesome advancement and using it to take advantage at the poker tables without anybody knowing. If I were trying to make the most money and willing to cheat, that's the way I would be using advanced technology like that, basically. So it's something to think about.

Brandon: Some super glasses. I was going to say, I know you're talking about AI detecting all these extra features of someone's face, the stuff we don't see, but I actually think there's a lot of stuff we pick up subconsciously. When you're playing, if you look at someone, it's almost like you can't pinpoint why, but you can just sense discomfort sometimes, or sense comfort. You won't be able to put into words that it's because of X, Y, and Z. This is kind of an ongoing debate in the psychology world: we have an area of our brain that is either really, really good at just detecting objects, or really, really good at detecting faces. We don't know if it's that we see so many faces that that's why we're so good at recognizing them, or that we have a specific area for faces; it's still up for contention. But either way, we are much better at reading faces than we realize. We pick up so many cues as humans, even if we can't document them. It would be really cool to feed that into a big AI super learning machine where you just tell it when someone's bluffing. You just say, "Watch this guy's face for this period of time," and you can plug in the next day and it's like, "Yep, they're bluffing. Yep, they're not bluffing…"

Zach: No, totally. And I think that's not far away. There are apps for analyzing video for various things, and you could plug that into some machine learning stuff and study a bunch of footage. They have these black-box machine learning systems that can just spit out correlations without you really knowing how they work. That's stuff you could theoretically do now if you were so inclined. And like you were saying, the things we don't notice consciously, or don't notice at all, are these little tiny micro-movements someone might have when they're relaxed, which aren't really that obvious to us, but which might stand out as the things we pick up as a feeling or a vibe, things the machine could get down to in really fine-grained, exact detail.

Brandon: The only issue is going to be, similar to my study, sample size. I know how poker solvers work; they play against themselves millions of times. I don't know how many times you'd have to watch someone doing this river action or whatever before it could be statistically significantly correct that much percentage of the time. It might need hundreds of thousands or millions of bits of data. If we did hypothetically create, in this parallel world where we had infinite money, this game, then you'd have the camera exactly on everyone's face, such that they can't move too far and you can always see their entire face, and you'd get a pretty big sample pretty quickly just doing that. Every time, you've got everyone's face in every game, and if they play every day for six hours, you start building a sample pretty quickly. Obviously, compared to the numbers you might need, it would still take a very long time, but if there were more games and more people doing that, it'd be a really good starting point.

Zach: So it sounds like we need a number of these games set up around the world, going 24 hours a day. So, yeah, we'll get started on that. I wanted to ask you too: are there any tells that stand out to you that you use live? Maybe a recent hand you played where a tell made a decision for you? Anything stand out in that regard?

Brandon: Definitely, yes. I try not to base… I won't say never, but it's very rare that I'll make a super, super exploitative play based purely on a tell. I'd have to be really confident, which is a very, very rare situation. It's a dangerous place to be if you're that confident in something like that, but it does happen. I play a lot of hands, and it's very, very rare. But obviously I won't go directly into "this means this," because then I'm going to get leveled very easily the next time I see that.

Zach: Yeah, I hear you, I hear you. I get you.

Brandon: I mean, there also isn't a direct thing. Actually, I'll give you two examples that come to mind from playing in the last 12 months. There's one where I had a friend who's a very good online player, and I knew he was new to the live poker game, but he's a very good theoretical player. There's something I call card apex, which I can't remember if you also wrote about, but I've seen it in a few places: when you look at your hand for the first time, if you see that it's like Aces or Kings or a really good hand, you naturally put it down quickly, because your body's like, "Oh, shit, good hand." People just put it down quicker. Whereas if you see more of a marginal decision that you need to think about, people look at it for longer. So if you see Jack-Ten suited, Jack-Nine suited, Ace-Five suited, something you want to play but that might depend on the action, people tend to look at it a little bit longer, compared to a hand they're always playing no matter what. I played a hand where I'd raised, first to act, and this guy was on the button. He looked at his hand, and he was still looking at it after a few seconds, and I was watching him. Then he put it down, and then he re-raised, and it came back to me, and I had like the worst hand I could possibly have. So I was like, I really want to just go all in here, but if I'm wrong, I'm a complete idiot. And I'm really sure he's bluffing based on this one tell, but I know it would still be too out of line for me to go all in with this hand. If I had a hand I'm supposed to bluff with here sometimes, I'd just do it every time; that's how I'd calibrate. I wouldn't go all in with a hand I should never go in with just to control my frequencies. So I just said to him, "You're bluffing, aren't you? I'm so sure you're bluffing here. Please show me and I'll fold." And he showed me he was bluffing.
So that's just one nice one, which can be quite reliable for people.

Zach: No, I like that one. I like that one a lot. I write about that a good amount, and I talk about the psychological reasons behind it; I think I wrote a good amount about that in Exploiting Poker Tells, my last book. And I will use that one a good amount to decide whether to three-bet somebody preflop: if they raise and they stare at their cards a little bit longer than normal, longer than average, I’ll use that as a decision point to make a looser three-bet.

Brandon: Yeah, I think it can be a really nice one, but the more important part of my decision process there is that I don’t know he’s bluffing; it’s a strong indicator. So I can use that to make smaller exploits. Let’s say I would bet all in as a bluff there with, I don’t know, 5 percent of hands (maybe that’s not the right number), but instead I go to 5.5, and all the hands that I’m supposed to mix, I just always use. And maybe there’s one hand which I don’t normally use that I then use. But as soon as I just go out on a limb with everything, it feels like it’s too far away from a strategy, so to speak. The example you just gave I think is a great indicator, but you don’t just re-raise the seven-two offsuit because it’s–

Zach: No, exactly. Because they’re still going to call you some percentage of the time or whatever. And like you said, it’s far from certain anyway; it’s just making it slightly, or even significantly, more likely. But yeah, you’re right, you have to keep in mind the factors of what’s good play too.

Brandon: The way I talk about it is you’ve got to give weightings to your assumptions. So my assumptions in some spots are not worth much because I don’t know much about the player, I don’t have much info, but in other spots they’re worth a bit more. And this is an example where, based on my history of playing with people and the psychology I’ve read behind it, my assumption that that meant he was bluffing is worth something. It’s not worth everything, but it just allows me to expand my range a little bit. And the other example that came to my head was a hand I played in Vegas against someone who was a recreational player. To some extent I think he probably had a job but was just out for the World Series, and he plays poker for fun; he’s not necessarily terrible, but he’s not a professional. So here’s the hand: I raised with Ace-King and he called in the big blind. I’ve gone too poker-technical, I guess. I bet on a 7-7-6 flop with a flush draw out there, and I just have Ace-King, and he raised. So this is a spot where I know I always continue with Ace-King against someone that raises correctly, but my assumption tells me he’s not raising correctly because he’s not a professional. He’s not going to know which bluffs to use, and it’s quite counterintuitive to find some of the bluffs. But obviously some people easily overdo it too, though I’d not seen him do anything crazy. So my head’s playing back all these different features, like, “Do I defend my Ace-King versus the raise? Maybe he’s always got two pair or a set and I’m just losing loads of money, or maybe I’ll just keep him honest for one street and then over-fold the turn.” I just started staring at him, and it was just clear that he was uncomfortable. And I can’t necessarily explain why, but something about his eyes and the way he had his movement, everything. Because I’d seen him play a few hands where he had good hands, and his body language was just completely different.
And it was almost enough for me to say, “I’m not going to fold this hand at any point unless his body language changes. And if I’m wrong, I’m wrong, I’ll die by the sword at this point. I’m so confident in the fact this guy doesn’t look comfortable.” So I called, and the turn was nicely a two, so nothing changed. And then he bet again, and it was the same story. And I went as far as to... I don’t know if this actually made a difference, but I tried to make myself look as weak as possible when I called the turn, because I really wanted him to bluff the river. So I really made it look like a begrudging call, like, “Ah, this is a close spot for me.” And then I got one of the best rivers in the deck, another two, so every single bluff became the same hand by the river. And he went all in, and I called him; he had a really strong bluff, an open-ended straight flush draw. But it would’ve been really easy for me to just over-fold that flop against other players, or over-fold the turn, without that extra “I think he’s uncomfortable, so I’m going to go closer to theory here.”

Zach: That’s a real interesting one.

Brandon: But I couldn’t pinpoint that his hand was in this place or that he had this bluff. Whatever it was, it was a combination of lots of things.

Zach: Yeah, I was going to say, it gets into that, like you were saying: sometimes there may be things that we feel that are based on, for example, subconsciously noticing something about how he was acting in previous hands, like his eye contact being completely different, that didn’t really consciously register to you. I think eye contact’s really big and an underrated behavior. But some of these things can be things that we’re only slightly aware of, which gets into that realm of... I actually had a really good interview with Brian Rast about this kind of stuff, about poker tells, and he was talking about–

Brandon: I like Brian Rast.

Zach: Yeah, he’s great. I respect him a lot, poker-wise. And yeah, he was talking about playing draw games and the fact that there’s so little information in draw games. So a lot of it comes down to, “Well, do I feel one way or the other about this?” There are a lot of these borderline spots you’re put in where you’re like, “Well, this could go either way,” more than in other games, because you have less information. And he was saying he really does trust those feelings sometimes, and that he thinks that’s a source of a big edge, where you’re just like, “Even if I can’t put my finger on it, I feel like this guy’s bluffing, or this guy’s got it this time.”

Brandon: Yeah. I think especially in the single draw games, where it’s decision, draw, decision, hand’s over, you get so much less information about how to range your opponents, and that becomes a much bigger component of the strategy used.

Zach: Well, this has been great. Anything else you wanted to throw in here before we sign off?

Brandon: I guess the only other thing we didn’t touch on that I had one note on was determining player skill, a way to do that. I was just going to mention that I was going to use Hendon Mob as a reference point: to say, for example, if someone has 10 million in cashes and they’re playing a £5,000 buy-in, it’d be good to use that as a metric to say that obviously they’ve played a lot of big tournaments, and maybe the amount of total cashes they’ve got could go into that, and we could have a formula and a rating. So you could have a degree of “live professional poker player” based on that. And there are a lot of problems with it, because if someone is a businessman with millions of pounds who plays high rollers and then wants to play a small tournament, it might bias the data, but I’m sure there’s a way to do it to make it correlate to skill level. So I think that’d be really good to incorporate into future studies if we could create it: some sort of system of recreational to pro, maybe a scale of one to ten. And then we could use that to determine how useful some of the data is, or to see if there are a lot more indicators when it comes to more recreational players, which I think we both agree makes intuitive sense.

Zach: Yeah, that sounds great because even if it wasn’t perfect, it would still be something that you could filter through.

Brandon: Yeah. I guess other than that, I just wanted to say thanks. You helped me determine the hypothesis of this study, you helped me plan it out in a really nice way and incorporate much better science in some ways, and learn from past mistakes of other studies. I didn’t know so much about the other poker study that happened in the past, the Slepian one, but as we touched on today, there were some issues with it, and I think my study became the next step from there in some ways. We improved on a lot of the problems of that, and it’s going to make for better science in the future for the next study in this space. Reading your books and speaking to you helped me learn a lot about the space and make a lot of good decisions when it came to studying it and recording the data. So thanks.

Zach: Yeah. Thanks Brandon. I appreciate you saying that and thanks for talking to me and look forward to seeing what else you do. Yeah, thanks for coming out.

Brandon: No problem.

Zach: That was a talk with Brandon Sheils. You can find him on his YouTube channel, which is titled Brandon Sheils, or on Twitter at @brandonsheils.

If you’re interested in poker, you might like to check out my poker tells work, which you can learn about at www.readingpokertells.com.

If you like this podcast, I’d very much appreciate you sharing it on social media and giving it a rating on iTunes or another platform. You can learn more about this podcast at behavior-podcast.com. You can follow me on Twitter at @apokerplayer.

Thanks for listening.