Talk Psych Blog
Showing articles with label Research Methods and Statistics.
david_myers · Author · 04-05-2024, 08:00 AM
Basketball players, coaches, and fans agree: Players are more likely to make a shot after they’ve successfully completed one or multiple consecutive shots than after they’ve had a miss. Players therefore know to feed the teammate who’s “hot.” Coaches know to bench the one who’s not. This understanding is dittoed for the batter who’s on a hitting streak, the poker player who’s drawing strong hands and the stock picker who has a run of soaring successes. In life, as in sports, it pays to go with the hot hand.

But as psychologists Thomas Gilovich, Robert Vallone and Amos Tversky revealed in a seminal 1985 report, the basketball hot hand is one of those universally shared beliefs that, alas, just isn’t so. When they studied detailed individual shooting records from the National Basketball Association (NBA) and university teams, the hot hand was nowhere to be found. Players were as likely to score after a miss as after a make.

When told Gilovich’s team’s cold facts about the hot hand in a July 27 interview, Stephen “Steph” Curry, an all-time NBA three-point shooter, looked incredulous. “They don’t know what they’re talking about at all,” he replied. “It’s literally a tangible, physical sensation of ‘all I need to do is get this ball off my fingertips, and it’s gonna go in.’ ... There are times you catch the ball, and you’ve maybe made one or two in a row—and ... the rim feels like the ocean. And it’s one of the most rewarding feelings.”

Sports fans concur with Curry. In an article published on the same day, sports writer Jack Winter counseled, “Don’t be fooled by numbers-driven naysayers. The next time you’re feeling it at your local pickup game, don’t hesitate to indulge the temptation for even the most brazen of heat checks. Why? Stephen Curry, the truest expert on the matter, knows the hot hand is real.”

The scientific story did not, however, end in 1985 with Gilovich and his colleagues. Their analyses stimulated a host of follow-up studies of streaks in free-throw shooting, as well as in baseball, golf and tennis. Occasional examples of a slight hot hand have appeared, as in NBA three-point shooting contests—but nothing like the 25 percent increase in shots made following a make that was estimated by Philadelphia 76er players surveyed in Gilovich’s team’s study.

In a January 2022 study, operations researcher Wayne Winston of Indiana University Bloomington and computer scientist Konstantinos Pelechrinis of the University of Pittsburgh analyzed some 400,000 shot sequences across all NBA players over the 2013–2014 and 2014–2015 seasons. Their results showed the slight opposite of a hot hand: after making one or two field goals, the average player became slightly less likely to make the next shot. (This replicated an earlier study that analyzed 12 NBA seasons between 2004 and 2016: 45 percent of field goal attempts were successful after a make, and 46 percent were successful after a miss.)

Nevertheless, some players analyzed in Winston and Pelechrinis’s January 2022 study were, to a varying extent, more likely to make a shot after making one or more. So I wondered, “Was Curry among them?” In their data, Curry “did not exhibit the hot hand phenomenon,” Pelechrinis wrote in an e-mail to me.
The computer scientist elaborated further: “After a single make his FG% [field goal percentage] was almost identical to the one expected based on the shot quality.” “After two consecutive makes his FG% was slightly below expected (2.5 percentage units).” “After three consecutive makes his FG% was 7.5 percentage units below expectation.”

I can hear you protesting, “Are Gilovich and the stats geeks denying the reality of amazing hot and cold streaks in sports and in other life realms?” Actually, they are saying quite the opposite: Streaks do occur. Indeed, random data are streakier than folks suppose. And when streaks happen, our pattern-seeking mind finds and seeks to explain them. Given enough data—from sports statistics, stock market fluctuations or death rates—some really weird clusters are sure to appear. Buried in the essentially random digits of pi, you can find your eight-digit birthdate. (Is that a wink from God or just a lot of digits?)

To demonstrate the streaks in random data, I flipped a coin 51 times, with these results (“H” and “T” represent heads and tails):

HTTTHHHTTTTHHTTHTTHHTTHTTTHTHTTTTTTHTTHTHHHHTHHTTTT

Looking over the sequence, patterns jump out. For example, on the 30th to 38th tosses, I had a “cold hand,” with only one head in nine tosses. But then my fortunes reversed with a “hot hand”: six heads out of seven tosses. Did I mentally snap out of my tails funk and get in a heads groove? No, these are the sorts of streaks found in any random sequence. When I compared each toss outcome with the next, 24 of the 50 comparisons yielded a changed result—just the sort of nearly 50 percent alternation we would expect from coin tossing.

Can you see a similar hot hand in one of the basketball shot sequences shown below? Both show a player making 11 successful shots out of 21 attempts. Which one has outcomes that approximate a random sequence? Player B’s outcomes look more random to most people. (Do they look that way to you, too?) But Player B has fewer streaks than expected. For a 50 percent shooter, chance shooting, like chance coin tossing, should produce a changed outcome about half the time. But Player B’s outcome changes in successive shots 70 percent of the time (that is, in 14 out of 20 shots). Player A, despite a six-of-seven hot streak followed by a one-of-six cold streak, scores in a pattern that is more like what we would expect from a 50 percent shooter: Player A’s next outcome changes 10 times out of 20 shots.

So, like his fans, coaches and commentators, Curry is right to perceive hot and cold streaks. Basketball shooting, like so much of life, is streaky. We just misinterpret the inevitable streaks. After the fact, we describe the “hot” player as “in a zone.”

The phenomenon is ubiquitous. Maternity ward staff notice streaks of births of boys or girls—such as when 12 consecutive female babies were born in one New York State hospital in 1997—and sometimes these events are attributed to the phases of the moon during conception or to other mysterious forces. Cancer or leukemia cases may cluster in neighborhoods, sometimes provoking a fruitless search for a toxin. My then 93-year-old father once called me from his Seattle retirement home, where about 25 people died each year. He wondered about a curious phenomenon. “The deaths seem to come in bunches,” he said. “Why is that? A contagion?” How odd that folks should pass en masse! The streaks are real; the invented explanations are not.
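The alternation-rate arithmetic above is easy to check for yourself. Here is a minimal Python sketch (my illustration, not part of the original essay) that flips a fair coin 51 times, reports how often successive outcomes change (close to half the time), and finds the longest streak, which is usually longer than intuition expects.

```python
# A small simulation of the point above: random sequences alternate about half
# the time, yet still contain streaks that look "hot" or "cold."
import random

def alternation_rate(seq):
    """Fraction of successive outcomes that differ from the previous one."""
    changes = sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    return changes / (len(seq) - 1)

def longest_run(seq):
    """Length of the longest streak of identical outcomes."""
    best = current = 1
    for a, b in zip(seq, seq[1:]):
        current = current + 1 if a == b else 1
        best = max(best, current)
    return best

random.seed(0)  # any seed will do; this is purely illustrative
flips = [random.choice("HT") for _ in range(51)]
print("".join(flips))
print(f"alternation rate: {alternation_rate(flips):.2f}")  # typically near 0.50
print(f"longest streak:   {longest_run(flips)}")           # streaks of 5+ are common
```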
Nevertheless, forced to choose between data science and personal observation, between the statistics and their lying eyes, players and fans prefer the latter, so the hot hand hype lives on. After hearing the late CBS basketball commentator Billy Packer admonish college coaches to recognize the hot hand phenomenon, a friend of mine sent him my textbook summary of Gilovich’s team’s facts of life. Packer replied: “There is and should be a pattern of who shoots, when he shoots, and how often he shoots, and that can and should vary by game-to-game situations. Please tell the stat man to get a life.”

I smiled. So did my colleague Thomas Gilovich when I shared Steph Curry’s response to his work: “Steph is one of my favorite players (how unusual is that!),” Gilovich wrote, “so to hear him say that we don’t know what we’re talking about is precious.”

Moreover, we can understand the science of serendipitous streaks and still marvel at the fact that Curry made 105 consecutive three-point practice shots. We can realize the realities of randomness and yet find pleasure in life’s weird streaks and coincidences. As countless things happen, we can savor the happenstances—such as three of the first five U.S. presidents dying on July 4, or someone winning the lottery twice, or discovering a mutual friend on meeting a stranger overseas. In 2007 the late psychologist Albert Bandura recalled a book editor who came to Bandura’s lecture on the “Psychology of Chance Encounters and Life Paths” and ended up marrying the woman he happened to sit next to. As statisticians Persi Diaconis and Frederick Mosteller observed in a 1989 paper, “With a large enough sample, any outrageous thing is likely to happen.” And what fun when it does!

This essay appeared earlier at ScientificAmerican.com as “Your Brain Looks for ‘Winning Streaks’ Everywhere—Here’s Why.”

David Myers, a Hope College social psychologist, authors psychology textbooks and trade books, including his recent essay collection, How Do We Know Ourselves?: Curiosities and Marvels of the Human Mind.
Labels: Research Methods and Statistics
david_myers · Author · 03-24-2022, 01:01 PM
“There are trivial truths and great truths,” the physicist Niels Bohr reportedly said.[1] “The opposite of a trivial truth is plainly false. The opposite of a great truth is also true.” Light is a particle. And light is a wave. Psychology, too, embraces paradoxical truths. Some are more complementary than contradictory: Attitudes influence behavior, and attitudes follow behavior. Self-esteem pays dividends, and self-serving bias is perilous. Memories and attitudes are explicit, and they are implicit. We are the creatures of our social worlds, and we are the creators of our social worlds. Brain makes mind, and mind controls brain. To strengthen attitudes, use persuasion, and to strengthen attitudes, challenge them. Religion “makes prejudice, and it unmakes prejudice” (Gordon Allport). “When I accept myself just as I am, then I can change” (Carl Rogers). Psychology also offers puzzling paradoxical concepts. Paradoxical sleep (aka REM sleep) is so-called because the muscles become near-paralyzed while the body is internally aroused. The immigrant paradox refers to immigrant U.S. children exhibiting better mental health than native-born children. And the paradox of choice describes how the modern world’s excessive options produce diminished satisfaction. Even more puzzling are seemingly contradictory findings from different levels of analysis. First, consider: Who in the U.S. is more likely to vote Republican—those with lots of money or those with little? Who is happier—liberals or conservatives? Who does more Google searches for “sex”—religious or nonreligious people? Who scores highest on having meaning in life—those who have wealth or those who don’t? Who is happiest and healthiest—actively religious or nonreligious people? As I have documented, in each case the answer depends on whether we compare places or individuals: Politics. Low-income states and high-income individuals have voted Republican in recent U.S. presidential elections. Happy welfare states and unhappy liberals. Liberal countries and conservative individuals manifest greater well-being. Google “sex” searches. Highly religious states, and less religious individuals, do more Google “sex” searching. Meaning in life. Self-reported meaning in life is greatest in poor countries, and among rich individuals. Religious engagement correlates negatively with well-being across aggregate levels (when comparing more vs. less religious countries or American states), yet positively across individuals. Said simply, actively religious individuals and nonreligious places are generally flourishing. As sociologist W. S. Robinson long ago appreciated, “An ecological [aggregate-level] correlation is almost certainly not equal to its individual correlation.” Thus, for example, if you want to make religion look good, cite individual data. If you want to make it look bad, cite aggregate data. In response to this paradoxical finding, Nobel laureate economist Angus Deaton and psychologist Arthur Stone wondered: “Why might there be this sharp contradiction between religious people being happy and healthy, and religious places being anything but?”[2] To this list of psychological science paradoxes, we can add one more: the gender-equality paradox—the curious finding of greater gender differences in more gender-equal societies. You read that right. 
Several research teams have reported that across several phenomena, including the proportion of women pursuing degrees in STEM (science, technology, engineering, and math) fields, gender differences are greater in societies with more political and economic gender equality. In the February 2022 issue of Psychological Science, University of Michigan researcher Allon Vishkin describes “the myriad findings” that societies with lower male-superior ideology and educational policy “display larger gender differences.” This appears, he reports, not only in STEM fields of study, but also in values and preferences, personality traits, depression rates, and moral judgments. Moreover, his analysis of 803,485 chess players in 160 countries reveals that 90 percent of chess players are men; yet “women participate more often in countries with less gender equality.” Go figure. Vishkin reckons that underlying the paradox is another curious phenomenon: Gender-unequal societies have more younger players, and there’s greater gender equality in chess among younger people.

Paradoxical findings energize psychological scientists, as we sleuth their explanation. They also remind us of Bohr’s lesson. Sometimes the seeming opposite of a truth is another truth. Reality is often best described by complementary principles: mind emerges from brain, and mind controls brain. Both are true, yet either, by itself, is a half-truth.

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com. Follow him on Twitter: @davidgmyers.)

[1] I first read this unsourced quote in a 1973 article by social psychologist William McGuire. Niels Bohr’s son, Hans Bohr, in his biography of his father, reports that Niels Bohr discerned “two sorts of truths, profound truths recognized by the fact that the opposite is also a profound truth, in contrast to trivialities where opposites are obviously absurd.”

[2] For fellow researchers: The paradox is partially resolved by removing income as a confounding factor. Less religious places also tend to be affluent places (think Denmark and Oregon). More religious places tend to be poorer places (think Pakistan and Alabama). Thus, when we compare less versus more religious places, we also are comparing richer versus poorer places. And as Ed Diener, Louis Tay, and I observed from Gallup World Poll data, controlling for objective life circumstances, such as income, eliminates or even slightly reverses the negative religiosity/well-being correlation across countries.
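To make the aggregate-versus-individual distinction concrete, here is a toy Python sketch with invented numbers (mine, not Gallup’s or Robinson’s data). Within every hypothetical place, more religious individuals report higher well-being, yet the correlation computed across place averages runs the other way, because the less religious places are also the better-off places.

```python
# Toy illustration of an ecological (aggregate-level) correlation reversing the
# individual-level pattern. All numbers are invented for demonstration.
from statistics import mean

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# (religiosity, well-being) pairs for individuals, grouped by hypothetical place
places = {
    "Secularia": [(1, 7.5), (2, 8.0), (3, 8.5)],
    "Middleton": [(4, 5.5), (5, 6.0), (6, 6.5)],
    "Devoutia":  [(7, 3.5), (8, 4.0), (9, 4.5)],
}

for name, people in places.items():
    r = pearson_r([p[0] for p in people], [p[1] for p in people])
    print(f"{name}: individual-level r = {r:+.2f}")  # positive within every place

avg_religiosity = [mean(p[0] for p in people) for people in places.values()]
avg_wellbeing   = [mean(p[1] for p in people) for people in places.values()]
print(f"across place averages: r = {pearson_r(avg_religiosity, avg_wellbeing):+.2f}")  # negative
```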
Labels: Research Methods and Statistics
david_myers · Author · 07-09-2021, 08:32 AM
Psychological science has taken some body blows of late, with famous findings challenged by seeming failures to replicate. The problem isn’t just that prolific researchers Brian Wansink and Derek Stapel faked data, or that David Rosenhan (of “On Being Sane in Insane Places” fame) and personality researcher Hans Eysenck have been accused of doing likewise. Every discipline has a few self-promoting deceivers, and more who bend the truth to their side. And it’s not just critics arguing (here and here) that a few celebrated findings, such as the tribalism of the Stanford Prison and Robbers Cave experiments, were one-off, stage-managed happenings. Or that some findings of enormous popular interest—brain training for older folks, implicit bias training programs, or teaching to learning styles—all produce little enduring benefit. The problem is that other findings have also not been consistently reproducible. The effects of teachers’ expectations, power posing, willpower depletion, facial feedback, and wintertime depression (seasonal affective disorder) have often failed to replicate or now seem more modest than widely claimed. Moreover, the magnitude and reliability of stereotype threat, growth mindset benefits, and the marshmallow test (showing the life success of 4-year-olds who can delay gratification) are, say skeptics, more mixed and variable than often presumed. Hoo boy. What’s left? Does psychology’s knowledge storehouse have empty shelves? Are students and the public justifiably dismayed? As one former psychology student tweeted: “I took a [high school] psychology class whose entire content was all of these famous experiments that have turned out to be total horse**bleep**. I studied this! They made me take an exam! For what?” To which others responded: “I'm putting all my chips on neuroscience, I refuse to listen to psychologists ever again, they had their chance.” “Imagine if you'd spent 10 years getting a PhD in this stuff, going into $200k in debt.” “You can learn more from life never mind a psychology lesson just take a look around fella.” “I have a whole damn degree full of this @#$%.” But consider: How science works. Yes, some widely publicized studies haven’t replicated well. In response to this, we textbook authors adjust our reporting. In contrast to simple common sense and to conspiracy theories, science is a self-checking, self-correcting process that gradually weeds out oversimplifications and falsehoods. As with mountain climbing, the upward march of science comes with occasional down slopes. Some phenomena are genuine, but situation specific. Some of the disputed phenomena actually have been replicated, under known conditions. One of my contested favorite experiments—the happy pen-in-the-teeth vs. pouting pen-in-the-lips facial feedback effect—turns out to replicate best when people are not distracted by being videotaped (as happened in the failure-to-replicate experiments). And stepping back to look at the bigger picture, the Center for Open Science reports that its forthcoming analysis of 307 psychological science replications found that 64 percent obtained statistically significant results in the same direction as original studies, with effect sizes averaging 68 percent as large. The bottom line: Many phenomena do replicate. What endures and is left to teach is . . . everything else. Memories really are malleable. Expectations really do influence our perceptions. Information really does occur on two tracks—explicit and implicit (and implicit bias is real). 
Partial reinforcement really does increase resistance to extinction. Human traits really are influenced by many genes having small effects. Group polarization really does amplify our group differences. Ingroup bias really is powerful and perilous. An ability to delay gratification really does increase future life success. We really do often fear the wrong things. Sexual orientation really is a natural disposition that’s neither willfully chosen nor willfully changed. Split-brain experiments really have revealed complementary functions of our two brain hemispheres. Electroconvulsive therapy really is a shockingly effective treatment for intractable depression. Sleep experiments really have taught us much about our sleeping and dreaming. Blindsight really does indicate our capacity for visual processing without awareness. Frequent quizzing and self-testing really does boost students’ retention. But enough. The list of repeatedly confirmed, humanly significant phenomena could go on for pages. So, yes: Let’s teach the importance of replication for winnowing truth. Let’s separate the wheat from the chaff. Let’s encourage critical thinking that’s seasoned with healthy skepticism but not science-scorning cynicism. And let us also be reassured that our evidence-derived principles of human behavior are overwhelmingly worth teaching as we help our students appreciate their wonder-full world. (For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com; follow him on Twitter: @DavidGMyers.)
Labels: Research Methods and Statistics, Teaching and Learning Best Practices
david_myers · Author · 03-09-2021, 11:52 AM
The history of Americans’ health is, first, a good news story. Thanks to antibiotics, vaccines, better nutrition, and diminished infant mortality, life expectancy at birth has doubled since 1880—from 38 to 79 years. For such a thin slice of human history, that is a stunning achievement. Even amid a pandemic, be glad you are alive today.

The recent not-good news is that, despite doubled per-person health-care spending compared to other rich countries, U.S. life expectancy is lower—and has been declining since 2014. Can you guess why Americans, despite spending more on their health, are dying sooner? Our World in Data founder, Max Roser, sees multiple explanations:

- Smoking. While the COVID-19 pandemic has claimed 2.4 million lives in the past year, cigarette smoking kills 8.1 million people annually—and it kills more in the U.S. than in other wealthy countries. Of those people who are still smokers when they die, two-thirds die because of their smoking. Still, there’s some good news: The plunging smoking rate is reducing both smoking-caused deaths and the smoking fatality gap between the U.S. and other rich countries.
- Obesity. While smoking has declined, obesity—a risk factor for heart disease, stroke, diabetes, and some cancers—has increased: 36 percent of Americans are now obese. Compared to the other rich (but less obese) nations, the U.S. has a much higher rate of premature deaths attributed to obesity (five times that of Japan, more than double that of France, and nearly 60 percent greater than Canada).
- Homicides. As a gun-owning culture, Americans much more often kill each other—four times more often than the next most murderous rich nation (peaceful Canada).
- Opioid overdoses. Like homicides, America’s much greater opioid overdose death rate, though causing less than 2 percent of all deaths, affects life expectancy because so many victims are young.
- Road accidents. A surprise to me—and you?—is the U.S. having, compared to other rich nations, a roughly doubled rate of vehicle-related accidents (again, often involving the young, and thus contributing to the life expectancy gap).
- Inequality and poverty. Although the U.S. enjoys higher average income than most other rich countries, its lower-income citizens are poorer. This greater inequality and poverty predicts less access to health care and also greater infant mortality, which, among the rich nations, is highest in the U.S.

And how might we expect the pandemic to affect the U.S. life expectancy standing? The half-million U.S. COVID-19 deaths in the pandemic’s first year have, per capita, been matched by the UK and Italy, but are roughly double those of other European nations, and many times more than collectivist East Asian countries. For its often mask-defying rugged individualism, the U.S. has paid a heavy price.

Moreover, as Nathan DeWall and I report in our forthcoming Exploring Psychology, 12th Edition (with data from Carnegie Mellon University), there has been a striking -.85 correlation across states between mask wearing and COVID-19 symptoms. Less mask wearing—as in the mask-resisting Dakotas, Wyoming, Idaho, and the Southern states—predicts more COVID-19. The Centers for Disease Control observed the same pattern across U.S. counties during 2020: COVID-19 cases and deaths increased in U.S. counties that reinstated in-person dining or did not require masks.
As Max Roser notes, the factors that predict Americans’ dying sooner—smoking, obesity, violence, opioids, poverty, and likely COVID-19—are less about better healthcare for the sick than about averting health problems in the first place. As Benjamin Franklin anticipated, “An ounce of prevention is worth a pound of cure.” (For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com; follow him on Twitter: @DavidGMyers.)
Labels: Current Events, History and System of Psychology, Research Methods and Statistics
david_myers · Author · 01-28-2021, 01:53 PM
German medical statistics expert (and psychologist) Gerd Gigerenzer can regale you with stories of ill-fated health-risk communications. He tells, for example, of the 1990s British press report that women taking a particular contraceptive pill had a 100 percent increased chance of developing potentially fatal blood clots. That massive-sounding risk caused thousands of horrified women to stop taking the pill—leading to a wave of unwanted pregnancies and some 13,000 additional abortions (which, ironically, entail a blood clot risk of their own). So, was the press report wrong? Well, yes and no. The risk did double. But it remained infinitesimal: increasing from 1 in 7000 to 2 in 7000. Fast forward to today’s dilemma for those contemplating getting a COVID-19 vaccine: Are the vaccines both safe enough and effective enough to warrant becoming vaccinated? How protective are they? Consider two possibly misleading reports: NPR invites us to imagine a 50 percent effective vaccine: “If you vaccinate 100 people, 50 people will not get disease.” The famed Cleveland Clinic reports that a 95 percent effective vaccine gives you a “95% level of protection…[meaning that] about 95% of the population would develop immunity in a fashion that would protect them from getting sick if exposed to the virus.” So, with a 50 percent effective vaccine, we have a 50 percent chance of contracting COVID-19, and with a 95 percent effective vaccine, we have a 5 percent chance…right? Actually, the news is much better. Consider what that “95 percent effective” statistic actually means. As the New York Times' Katie Thomas explained, the Pfizer/BioNTech clinical trial engaged nearly 44,000 people, half of whom received its vaccine, and half a placebo. The results? “Out of 170 cases of COVID-19, 162 were in the placebo group, and eight were in the vaccine group.” So, there was a 162 to 8 (95 percent to 5 percent) ratio by which those contracting the virus were unvaccinated (albeit with the infected numbers surely rising in the post-study months). Therein lies the “95 percent effective” news we’ve all read about. So, if you receive the Pfizer or equally effective Moderna vaccine, do you have a 5 percent chance of catching the virus? No. That chance is far, far smaller: Of those vaccinated in the Pfizer trial, only 8 of nearly 22,000 people—less than 1/10th of one percent (not 5 percent)—were found to have contracted the virus during the study period. And of the 32,000 people who received either the Moderna or Pfizer vaccine, how many experienced severe symptoms? The grand total, noted David Leonhardt in a follow-up New York Times report: One. Gigerenzer tells me that his nation suffers from the same under-appreciation of vaccine efficacy. “I have pointed this misinterpretation out in the German media,” he notes, “and gotten quite a few letters from directors of clinics who did not even seem to understand what’s wrong.” “Be assured that YOU ARE SAFE after vaccine from what matters—disease and spreading,” tweeted Dr. Monica Gandhi of the University of California, San Francisco. Although we await confirmation that the vaccines do, as expected, reduce transmission, she adds that “Two vaccinated people can be as close as 2 spoons in [a] drawer!” The near 100 percent efficacy is “ridiculously encouraging,” adds Paul Offit, the Vaccine Education Center director at Children’s Hospital of Philadelphia. 
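The arithmetic behind “95 percent effective” is worth spelling out. A minimal sketch using the approximate trial numbers quoted above (about 22,000 people per arm, 8 cases among the vaccinated versus 162 on placebo) shows that the 95 percent figure is a relative risk reduction, while the absolute infection risk among vaccinated participants during the study period was well under a tenth of one percent.

```python
# Back-of-the-envelope check of the trial numbers quoted above (approximate arm
# sizes). "95 percent effective" is a relative risk reduction, not a 5 percent
# chance of getting sick.
per_arm = 22_000        # roughly half of the ~44,000 participants
cases_vaccine = 8
cases_placebo = 162

risk_vaccine = cases_vaccine / per_arm
risk_placebo = cases_placebo / per_arm
efficacy = 1 - risk_vaccine / risk_placebo

print(f"infection risk, vaccinated (trial period): {risk_vaccine:.4%}")  # about 0.04%
print(f"infection risk, placebo (trial period):    {risk_placebo:.4%}")  # about 0.74%
print(f"vaccine efficacy: {efficacy:.0%}")                               # about 95%
```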
Moreover, notes stats geek Nate Silver, “Telling people they can't change their behavior even once they get a vaccine is a disincentive for them to get vaccinated.”

With other vaccines and virus variants the numbers will, of course, vary. And as one of the newly vaccinated, I will continue—for as long as the pandemic persists—to honor and support the needed norms, by masking and distancing. I will strive to model protective hygiene. But my personal fear of COVID-19 will be no greater than the mild fear and resulting caution that accompanies my biking and driving.

Moreover, for all of us, vaccine statistical literacy—and the need to accurately convey health risks and benefits—matters. In this case, it matters a lot.

[2/20/2021 P.S. In a follow-up Twitter thread, David Leonhardt offered a table showing that of 74,000+ participants in one of the five vaccine trials, the number of vaccinated people who then died of COVID was zero. The number hospitalized with COVID was also zero.]

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com; follow him on Twitter: @DavidGMyers.)
Labels: Current Events, Research Methods and Statistics
david_myers · Author · 12-16-2020, 11:20 AM
The conservative New York Times columnist Ross Douthat spoke for many in being astounded by “the sheer scale of the belief among conservatives that the [2020 presidential] election was really stolen,” which he attributed partly to “A strong belief [spurring] people to go out in search of evidence” for what they suppose. Douthat alluded to confirmation bias—our well-established tendency, when assessing our beliefs, to seek information that supports rather than challenges them. What’s the basis for this big idea, which has become one of social psychology’s gifts to public awareness? And should appreciating its power to sustain false beliefs cause us to doubt our own core beliefs? In a pioneering study that explored our greater eagerness to seek evidence for rather than against our ideas, psychologist Peter Wason gave British university students a set of three numbers (2-4-6) and told them that the series illustrated a rule. Their task was to discover the rule by generating their own three-number sequences, which Wason would confirm either did or didn’t conform to the rule. After the students tested enough to feel certain they had the rule, they were to announce it. Imagine being one of Wason’s study participants. What might you suppose the rule to be, and what number strings might you offer to test it? The outcome? Most participants, though seldom right, were never in doubt. Typically, they would form a wrong idea (such as “counting by twos?”) and then test it by searching for confirming evidence: “4-6-8?” “Yes, that conforms.” “20-22-24?” “Yes.” “200-202-204?” “Yes again.” “Got it. It’s counting by twos.” To discover Wason’s actual rule (any three ascending numbers), the participants should also have attempted to disconfirm their hunch by imagining and testing alternative ideas. Confirmation bias also affects our social beliefs. In several experiments, researchers Mark Snyder and William Swann tasked participants with posing questions to someone that would reveal whether that person was extraverted. The participants’ typical strategy was to seek information that would confirm extraversion. They would more likely ask “What would you do if you wanted to liven things up at a party?” than “What factors make it hard for you to really open up to people?” Vice versa for those assessing introversion. Thus, participants typically would detect in a person whatever trait they were assessing. Seek and ye shall find. In everyday life, too, once having formed a belief—that vaccines cause autism, that people can choose or change their sexual orientation, that the election was rigged—we prefer and seek information that verifies our belief. The phenomenon is politically bipartisan. Across various issues, both conservatives and liberals avoid learning the other side’s arguments about topics such as climate change, guns, and same-sex marriage. If we believe that systemic racism is (or is not) rampant, we will gravitate toward news sources, Facebook friends, and evidence that confirms our view, and away from sources that do not. Robert Browning understood: “As is your sort of mind, / So is your sort of search: you’ll find / What you desire.” Confirmation bias supplements another idea from social psychology—belief perseverance, a sister sort of motivated reasoning. In one provocative experiment, a Stanford research team led by Craig Anderson invited students to consider whether risk-takers make good or bad firefighters. 
Half viewed cases of a venturesome person succeeding as a firefighter, and a cautious person not succeeding; the other half viewed the reverse. After the students formed their conclusion, the researchers asked them to explain it. “Of course,” one group reflected, “risk-takers are braver.” To the other group, the opposite explanation seemed equally obvious: “Cautious people have fewer accidents.” When informed that the cases they’d viewed were fake news made up for the experiment, did the students now return to their pre-experiment neutrality? No—because after the fake information was discredited, the students were left with their self-generated explanations of why their initial conclusion might be true. Their new beliefs, having grown supporting legs, thus survived the discrediting. As the researchers concluded, “People often cling to their beliefs to a considerably greater extent than is logically or normatively warranted.” So, does confirmation bias + belief perseverance preclude teaching an old dogma new tricks? Does pondering our beliefs, and considering why they might be true, close us to dissonant truths? Mindful of the self-confirming persistence of our beliefs (whether true or false), should we therefore doubt everything? Once formed, it does take more compelling persuasion to change a belief (“election fraud was rampant”) than it did to create it. But there are at least two reasons we need not succumb to a nihilistic belief in nothing. First, evidence-based critical thinking works. Some evidence will change our thinking. If I believe that Reno is east of Los Angeles, that Atlanta is east of Detroit, and that Rome is south of New York, a look at a globe will persuade me that I am wrong, wrong, and wrong. I may once have supposed that child-rearing techniques shape children’s personalities, that the crime rate has been rising for years, or that traumatic experiences get repressed, but evidence has shown me otherwise. Recognizing that none of us are infallible little gods, we all, thankfully, have at least some amount of intellectual humility. Moreover, seeking evidence that might disconfirm our convictions sometimes strengthens them. I once believed that close, supportive relationships predict happiness, that aerobic exercise boosts mental health, and that wisdom and emotional stability grow with age—and the evidence now enables me to believe these things with even greater confidence. Curiosity is not the enemy of conviction. Second, explaining a belief does not explain it away. Knowing why you believe something needn’t tell us anything about your belief’s truth or falsity. Consider: If the psychology of belief causes us to question our own beliefs, it can also cause others to question their opposing beliefs, which are themselves prone to confirmation bias and belief perseverance. Psychological science, for example, offers both a psychology of religion and a “psychology of unbelief” (an actual book title). If both fully complete their work—by successfully explaining both religion and irreligion—that leaves open the question of whether theism or atheism is true. Archbishop William Temple recognized the distinction between explaining a belief and explaining it away when he was challenged after an Oxford address: “Well, of course, Archbishop, the point is that you believe what you believe because of the way you were brought up.” To which the archbishop replied, “That is as it may be. 
But the fact remains that you believe that I believe what I believe because of the way I was brought up, because of the way you were brought up.” Finally, let’s remember: If we are left with uncertainty after welcoming both confirming and disconfirming evidence, we can still venture a commitment. As French author Albert Camus reportedly said, sometimes life beckons us to make a 100 percent commitment to something about which we are 51 percent sure—to a cause worth embracing, or even to a belief system that helps make sense of the universe, gives meaning to life, connects us in supportive communities, provides a mandate for morality and selflessness, and offers hope in the face of adversity and death. So yes, belief perseverance solidifies newly formed ideas as invented rationales outlast the evidence that inspired them. And confirmation bias then sustains our beliefs as we seek belief-confirming evidence. Nevertheless, evidence-based thinking can strengthen true beliefs, or at least give us courage, amid lingering doubt, to make a reasoned leap of faith. As St. Paul advised, “Test everything; hold fast to what is good.” (For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com; follow him on Twitter: @DavidGMyers.)
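Returning to Wason’s 2-4-6 task described earlier in this post: a tiny Python sketch (my own illustration, assuming the “any three ascending numbers” rule the post describes) shows why a confirm-only testing strategy never exposes a too-narrow hypothesis, while a single disconfirming probe does.

```python
# Why confirming tests fail on the 2-4-6 task: every probe chosen to fit the
# narrow hunch also fits the broader hidden rule, so nothing is learned until a
# probe the hunch predicts should fail is tried.
def hidden_rule(triple):   # the experimenter's actual rule: any ascending triple
    a, b, c = triple
    return a < b < c

def hunch(triple):         # the participant's narrower hypothesis: counting by twos
    a, b, c = triple
    return b - a == 2 and c - b == 2

confirming_probes = [(4, 6, 8), (20, 22, 24), (200, 202, 204)]
disconfirming_probe = (1, 2, 3)   # fits the hidden rule but not the hunch

for t in confirming_probes:
    print(t, "-> rule:", hidden_rule(t), "| hunch predicts:", hunch(t))

t = disconfirming_probe
print(t, "-> rule:", hidden_rule(t), "| hunch predicts:", hunch(t))
# Only this last probe reveals that the hunch is too narrow.
```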
Labels: Current Events, Research Methods and Statistics, Social Psychology
david_myers · Author · 11-13-2020, 08:40 AM
In a pre-election essay, I contrasted the prognostications of poll-influenced prediction models with the wisdom of the betting crowd. I suspected that bettors—many of whom believed their candidate would win—were overly influenced by the false consensus effect (the presumption that most others see the world as we do).

But score this round for the wisdom of the betting crowd, which better anticipated the election closeness, including Trump’s Florida victory. That’s a contrast with Britain’s Brexit vote, where polls indicated a toss-up (a half percent edge for “Remain”) while the betting markets wrongly estimated a 90+ percent Remain chance.

The U.S. polls had a rougher-than-usual year, with the final Biden margin likely near 4.5 percent rather than the predicted 8 percent. The pollsters also struggled in key states. On election eve, the average poll gave Trump a 0.8 percent edge in Ohio; he won by 8 percent. In Florida, the polls gave Biden a 2.5 percent edge; he lost by about 3 percent. In Wisconsin, the polls favored Biden by 8.4 percent; he won by less than a percent. With so few people now responding to pollster calls and texts, precision is increasingly a challenge (even with pollsters adjusting results to match the voting demographics).

But lest we dismiss the polls and sophisticated forecasters, let’s grant them three points. First, they got many of the specifics right. FiveThirtyEight, for example, correctly anticipated 48 of the 50 presumed state outcomes. Some of its correct predictions even surprised its creator. Second, polls, as Silver has said, could be worse and still greatly exceed conventional seat-of-the-pants wisdom. In a University of Michigan national survey in September, 4 in 5 Republicans incorrectly anticipated a Trump victory. Third, although the models missed on some details, credit them with the big picture. “Biden’s Favored in Our Final Presidential Forecast, but It’s a Fine Line Between a Landslide and a Nail-Biter,” headlined FiveThirtyEight in its final election forecast. And Biden did win.

As I write, though, the betting markets—mindful of fraud allegations and legal challenges—still give Donald Trump a 12 percent chance of victory. But this, says Silver, “is basically a market saying there's a 12% chance that the sky isn't blue.” Consider, say the media analysts who have called the election: How would widespread voter fraud account for Donald Trump’s doing much better than predicted by the historically reliable polls (and for down-ballot Republicans doing better yet)? And how could that be so across America in countless local municipalities, including those with Republican elected officials?

Take my community—Holland, Michigan, a historically Republican town where Betsy DeVos grew up and has a home just a bike ride from my own. With a changing demography that now closely mirrors the nation, our last three presidential elections have been razor close, with Donald Trump narrowly edging Hillary Clinton in 2016 . . . but with Biden defeating Trump by 11 percent. Likewise, our surrounding county—one of the nation’s most reliably Republican counties—voted Trump by 30.2 percent in 2016 but only 21.5 percent in 2020. Neighboring Kent County, which also leans Republican and is home to Republican-turned-Libertarian Congressman Justin Amash, gave Biden 50,000 more votes than Clinton received four years ago. Is it conceivable that the Republican-friendly voting officials in countless such places across the U.S. consistently committed Biden-supporting fraud?
All in all, it was not the best of years for pollsters and modelers, though it was a worse year for John and Joan Q. Public’s expectations of their candidate’s triumphant success. Winston Churchill once called democracy “the worst form of Government except for all those other forms.” To borrow his sentiment, polls and models are the worst forms of prediction, except for all the other forms. (For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)
Labels: Current Events, Research Methods and Statistics
david_myers · Author · 06-02-2020, 08:37 AM
A classic moral psychology dilemma invites us to contemplate a runaway trolley headed for five people who are tied to the tracks and destined for death—unless you pull a lever that diverts the trolley to a side track where it would kill one person. So, do you: a) do nothing and allow five people to die, or b) take an action that causes one person’s death? Utilitarian ethics would admonish you to pull the lever and save lives. But doing so, says an alternative “deontological” perspective, would involve you in a moral wrong and make you actively responsible for someone’s death.

The trolley problem is now playing out on the world stage in the medical ethics surrounding COVID-19 vaccine development. Developing a safe and effective coronavirus vaccine will reportedly take many months, as researchers vaccinate thousands of people with a trial vaccine or placebo, then allow time for the natural course of events to expose some to the virus. Meanwhile, hundreds of thousands of the world’s people may die, and rates of poverty and its associated ills will soar.

Some ethicists, and 35 U.S. Congress members, have therefore proposed speeding up vaccine development with “human challenge” experiments—double-blind clinical trials that expose minimally-at-risk young adult volunteers to the virus, with all volunteers then being followed for a medically supervised quarantine period. This is not a mere hypothetical idea: Thousands of people have already volunteered to participate.

So, should we proactively expose a relative few to infection in hopes of sparing the lives and livelihoods of so many more? This real-life trolley problem offers a provocative discussion topic for your class or dinner table. Here are arguments I’ve heard on each side of this issue.

We should not solicit volunteers for experiments that infect people, even young adults:

- Exploitation. Young people have a natural tendency to believe themselves invincible, and we would be exploiting their natural “unrealistic optimism” in asking for volunteers. With the offer of pay, poor people might be especially vulnerable to exploitation.
- History. The horrific history of unethical medical experimentation provides a cautionary tale. Remember the revolting medical experiments done by the Nazis on those unwilling and the Tuskegee syphilis experiment on those unwitting.
- Unintended consequences. As a recent Science article explained, we don't yet know anything about potential long-term health consequences from having been afflicted with COVID. There have been questions about whether people who get very sick, for example, will ever recover full lung function, and many of those placed on ventilators suffer lingering neurological deficits. There have also been reports of young COVID sufferers experiencing kidney damage, blood clots, circulatory problems (“COVID toe”), and a post-infection inflammatory response.
- Unethical. “I cannot imagine that this would be ethical,” said one vaccine researcher. We should not induce humans to serve as guinea pigs in an experiment with unknown consequences. Would you want one of your own children to volunteer for a human challenge experiment? What about the Hippocratic oath: “First do no harm”?

We should conduct human challenge experiments:

- Little risk. The risk to younger adults would be minimal. Among COVID deaths in the U.S., very few—about 0.1 percent—have been to people ages 15 to 24.
- Humanitarian purpose. If, by taking a risk less lethal than the one they take each year by driving a car, young adult volunteers could save countless thousands of lives, is that not a net good? Don’t we owe it to our at-risk elders?
- Mere acceleration of exposure. One could conduct the experiment in a city—or a country such as Sweden—where volunteers might simply be accelerating the timeline for their likely exposure and subsequent likely immunity.
- The moral logic. Where is the moral logic in sending young adults into combat zones, where the risks are vastly greater and the moral outcome often more ambiguous, while denying young volunteers their wish to serve humanity? “If healthy volunteers, fully informed about the risks, are willing to help fight the pandemic by aiding promising research,” argue ethicists Peter Singer and Richard Yetter Chappell, “there are strong moral reasons to gratefully accept their help. To refuse it would implicitly subject others to still graver risks.”

What do you (or your students or companions) think? And what risk/benefit ratio might change your answer? If you oppose a human challenge experiment, is there some minimal level of risk and some magnitude of benefit that would lead you to support it? If you support a human challenge experiment, what level of risk or what constraint on the benefit might lead you to oppose it?

[2/16/2021 P.S.: A UK human challenge investigation of COVID vaccine efficacy is now seeking volunteers: https://ukcovidchallenge.com/]
Labels: Research Methods and Statistics
david_myers · Author · 05-04-2020, 09:04 AM
“If public health officials recommended that everyone stay at home for a month because of a serious outbreak of coronavirus in your community, how likely are you to stay home for a month?” When Gallup recently put this question to Americans, 76 percent of Democrats and 47 percent of Republicans answered “Very likely.”

This partisan gap coheres with an earlier YouGov survey finding (replicated by NPR/Marist): By a 2 to 1 margin (58 percent versus 29 percent), Republicans more than Democrats believed “the threat of coronavirus has been exaggerated.” The gap extends to mask wearing, with mostly maskless shut-down protestors storming my state’s capitol. Politico headlined that “wearing a mask is for smug liberals,” adding, “For progressives, masks have become a sign that you take the pandemic seriously.”

Given how right George W. Bush was to remind us “how small our differences are in the face of this shared threat,” my social scientist curiosity is tickled: Why the gap? What is it about being a Democrat that makes one more accepting of disruptive sheltering-in-place and mask-wearing?

It’s not because kindred-spirited Democrats control the White House bully pulpit and the federal agencies that recommended sheltering-in-place. It’s likely not because Democrats are more submissively docile and obedient to authority. It’s not because Democrats are more fearful of threatening diseases, or have had more COVID-19-related experiences. And no, it’s seemingly not because Democrats are more knowledgeable about basic science. When Pew in 2019 gave Americans a science knowledge test, Republicans and Democrats were about equally likely to know, for example, that the tilt of the Earth’s axis determines the seasons, that antibiotic overuse produces antibiotic resistance, and that a control group helps determine the effectiveness of a new drug.

So what gives? And why, in another Gallup survey, is there an even greater political gap in concerns about climate change—with 77 percent of Democrats and only 16 percent of Republicans being “concerned believers”: people who believe that climate warming will pose a serious threat, that it’s human-caused, and that news reports about it are accurate or underestimate the problem?

As one who grew up Republican—my beloved business-owning father was Washington State treasurer of Nixon for President—I scratch my head. Why has the conservative party I associate with family values, low taxes, and business-supportive policies become so unsupportive of people’s right to life under a pandemic and of our conserving the environment for our grandchildren?

One answer, reports University of Montana psychologist Luke Conway, is that conservatives are small-government folks. They resist government intervention in their lives. A second answer comes from another Pew survey. Although telling me your political affiliation won’t clue me to your basic science knowledge, it will clue me to your science attitudes. Should scientists take “an active role in public policy debates”? Yes, say 73 percent of Democrats and 43 percent of Republicans.
Even among those with high science knowledge, Republicans (64 percent) are much more likely than Democrats (39 percent) to “say scientists are just as likely to be biased as other people.”

Speaking to protestors here in Michigan’s state capitol, David Clarke, a former Milwaukee County, Wisconsin, sheriff, reportedly mocked “bending the curve,” scorned “so-called experts” who created the six-foot social distancing recommendation “out of their rear ends,” and declared the coronavirus death count phony. (Actually, excess mortality data indicate the death count underestimates the toll.)

Conservative commentator Yuval Levin, a former science adviser to President George W. Bush, also notes how social media and the Internet diminish respect for scientific expertise: “People tend to think that the expert is just a person. And so now information is available anywhere. And so anyone can be an expert.” If Levin is right, this is the Dunning-Kruger effect writ large (the least competent people most overestimating their knowledge).

Another source of science skepticism may be the reversal of Republicans being the party of college graduates (as were 54 percent in 1994 versus only 39 percent of Democrats). By 2017 those numbers had exactly flipped. More education used to predict Republican voting. Now it predicts Democratic voting. Highly educated scientists, for example, now identify as Democratic rather than Republican by a 10 to 1 margin (55 to 6 percent).

Does a Democratic-leaning academia—with 6 in 10 college professors identifying as “liberal”—explain why only one-third of Republicans (but two-thirds of Democrats) now perceive colleges and universities as having a positive effect “on the way things are going in the country”? And, in addition to valid concerns for jobs and the economy, does Republicans’ suspicion of higher education and the role of scientists in public policy feed their push to reopen the country?

Despite scientists’ progressive leanings, we can credit them with listening to their data and then letting the chips fall. Yes, I know, science is not an utterly neutral, value-free enterprise. But credit science with the pursuit of truth—with giving us research findings that sometimes affirm progressive views (about climate risk, sexual orientation, and socially toxic inequality), but also sometimes affirm conservative views (about the contribution of marriage to human flourishing, the association of religiosity with health and well-being, and growth mindsets that power individual initiative).

And take note of the rising voices within academia who, in the words of the Heterodox Academy movement, believe that “diverse viewpoints & open inquiry are critical to research and learning.” As psychologist Scott Lilienfeld declares in a forthcoming special issue of Archives of Scientific Psychology, the welcome mat is now out even for “unpopular ideas.”

If we educators can help people appreciate both the nonpartisan nature of scientific findings and academia’s openness to a free marketplace of ideas, then might we enable tomorrow’s citizens—whether Democrat or Republican—to welcome the wisdom of science?

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)
Labels: Research Methods and Statistics, Social Psychology
david_myers · Author · 02-26-2020, 11:37 AM
Paul Krugman’s Arguing with Zombies (2020) identifies “zombie ideas”—repeatedly refuted ideas that refuse to die. He offers economic zombie ideas that survive to continue eating people’s brains: “Tax cuts pay for themselves.” “The budget deficit is our biggest economic problem.” “Social Security is going broke.” “Climate change is nonexistent or trivial.”

That triggered my musing: Does everyday psychology have a similar army of mind-eating, refuse-to-die ideas? Here are some candidates, and the research-based findings that tell a different story:

- People often repress painful experiences, which years later may reappear as recovered memories or disguised emotions. (In reality, we remember traumas all too well, often as unwanted flashbacks.)
- In realms from sports to stock picking, it pays to go with the person who’s had the hot hand. (Actually, the combination of our pattern-seeking mind and the unexpected streakiness of random data guarantees that we will perceive hot hands even amid random outcomes; a quick simulation following this post illustrates the point.)
- Parental nurture shapes our abilities, personality, and sexual orientation. (The greatest and most oft-replicated surprise of psychological science is the minimal contribution of siblings’ “shared environment.”)
- Immigrants are crime-prone. (Contrary to what President Donald Trump has alleged, and contrary to people’s greater fear of immigrants in regions where few immigrants live, immigrants do not have greater-than-average arrest and incarceration rates.)
- Big round numbers: The brain has 100 billion neurons. 10 percent of people are gay. We use only 10 percent of our brain. 10,000 daily steps make for health. 10,000 practice hours make an expert. (Psychological science tells us to distrust such big round numbers.)
- Psychology’s three most misunderstood concepts are that “negative reinforcement” refers to punishment, that “heritability” means how much of a person’s traits are attributable to genes, and that “short-term memory” refers to your inability to remember what you experienced yesterday or last week, as opposed to long ago. (These zombie ideas are all false, as I explain here.)
- Seasonal affective disorder causes more people to get depressed in winter, especially in cloudy places and in northern latitudes. (This is still an open debate, but massive new data suggest to me that it just isn’t so.)
- To raise healthy children, protect them from stress and other risks. (Actually, children are antifragile. Much as their immune systems develop protective antibodies from being challenged, children’s emotional resilience builds from experiencing normal stresses.)
- Teaching should align with individual students’ “learning styles.” (Do students learn best when teaching builds on their responding to, say, auditory versus visual input? Nice-sounding idea, but researchers—here and here—continue to find little support for it.)
- Well-intentioned therapies change lives. (Often yes, but sometimes no—as illustrated by the repeated failures of some therapy zombies: Critical Incident Stress Debriefing, D.A.R.E. Drug Abuse Prevention, Scared Straight crime prevention, Conversion Therapy for sexual reorientation, and permanent weight-loss training programs.)

Association for Psychological Science President Lisa Feldman Barrett, with support from colleagues, has offered additional psychology-relevant zombie ideas:

- Vaccines cause autism. (This is a zombie idea responsible for the spread of preventable disease.)
- A woman’s waist-to-hip ratio predicts her reproductive success. (For people who advocate this idea about women, says Barrett, “There should be a special place in hell, filled with mirrors.”)
- A sharp distinction between nature and nurture. (Instead, biology and experience intertwine: “Nature requires nurture, and nurture has its impact via nature.”)
- “Male” and “female” are genetically fixed, non-overlapping categories. (Neuroscience shows our human gender reality to be more complicated.)
- People worldwide similarly read emotion in faces. (People do smile when happy, frown when sad, and scowl when angry—but there is variation across contexts and cultures. Moreover, a wide-eyed gasping face can convey more than one emotion, depending on the context.)

Ergo, when approached by a possible zombie idea, don’t just invite it to become part of your mental family. Apply psychological science by welcoming plausible-sounding ideas, even hunches, and then putting each to the test: Ask whether the idea works. Do the data support its predictions?

When subjected to skeptical scrutiny, crazy-sounding ideas do sometimes find support. During the 1700s, scientists ridiculed the notion that meteorites had extraterrestrial origins. Thomas Jefferson reportedly scoffed at the idea that “stones fell from heaven.” But more often, as I suggest in Psychology, 13th Edition (with Nathan DeWall), “science becomes society’s garbage collector, sending crazy-sounding ideas to the waste heap atop previous claims of perpetual motion machines, miracle cancer cures, and out-of-body travels. To sift reality from fantasy and fact from fiction therefore requires a scientific attitude: being skeptical but not cynical, open-minded but not gullible.”

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)
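To make the hot-hand point above concrete, here is a minimal Python sketch (my own illustration, not from the post; the shooting percentage and shot counts are arbitrary assumptions). It simulates a shooter with no hot hand at all (every shot an independent 50/50 event) and reports the longest streak of makes that chance alone produces.

```python
import random

random.seed(1)  # reproducible illustration

def longest_streak(p_make=0.5, n_shots=500):
    """Longest run of consecutive makes for a shooter with no hot hand."""
    longest = current = 0
    for _ in range(n_shots):
        if random.random() < p_make:  # each shot is independent: no memory of prior shots
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

# Across many simulated "seasons," streaks of seven or more makes in a row
# routinely appear by chance alone -- the kind of run fans read as a hot hand.
seasons = [longest_streak() for _ in range(10_000)]
print("Typical (median) longest streak:", sorted(seasons)[len(seasons) // 2])
```

The point of the sketch is simply that purely random sequences contain long runs; our pattern-seeking minds then supply the “hot hand” story.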
Labels: Research Methods and Statistics
david_myers
Author
11-21-2019
07:54 AM
Bill Gates wants people he hires to read two of his favorite books: The Better Angels of Our Nature, by psychologist Steven Pinker, and Factfulness, by the late Hans Rosling. I, too, have loved these books, which form a complementary pair. Pinker argues—our current malaise notwithstanding—that the world is getting better. World hunger is abating, and child labor is disappearing. Murder and wars are less common. Literacy is increasing. Given a choice between living a half-century or century ago or today, any sane person would choose today. Rosling mined world data to document these trends and many more.

And now the Rosling family’s Swedish foundation is offering stunning dynamic graphic displays of world data. For example, see here and click on the animation for a jaw-dropping depiction of the life-expectancy increase (in but an eye-blink of our total human history). Today’s average human lives much longer, thanks partly to the dramatic decline in child mortality from a time when nearly half of children died by age 5 (and when there was biological wisdom to having more than two children). Other show-the-class goodies include:

- Increasing gender equality—indexed by the changing girl/boy ratio in schools.
- Rising body mass index for men, and for women . . . with richer countries averaging higher BMI.
- Declining murder rates across countries.
- Increasing age at first marriage (for women).

These facts should whet your informational appetite. For more, explore www.gapminder.com/data. “Gapminder makes global data easy to use and understand.” And then explore www.OurWorldInData.org, founded by Max Roser. This is an Oxford-based source of world data on all sorts of topics. “Our World in Data is about research and data to make progress against the world’s largest problems.” One example presents World Bank/United Nations data on the “missing women” phenomenon in certain countries since the advent of prenatal sex determination. On the commercial side, www.statista.com has a wealth of information—such as, from my recent searching, data on anti-Semitic crime trends, social media use, and dating app usage. For us data geeks, so many numbers, so little time.

Not everything is “better angels” rosy. In addition to sex-selective abortions, we are menaced by climate change, nationalism, hate speech, and rampant misinformation. Even so, the Pinker/Rosling message—that in many important ways life is getting better—is further typified by these very websites, which provide easy access to incredible amounts of information that our ancestors could never know.

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)
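For instructors who want to play with such datasets directly, here is a minimal, hypothetical Python sketch. The file name and column names are placeholder assumptions; check the headers of whatever CSV you download from Gapminder or Our World in Data before running it.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Minimal sketch: chart life expectancy over time from a CSV downloaded from
# Gapminder or Our World in Data. The file name and column names below are
# hypothetical placeholders for whatever file you actually download.
df = pd.read_csv("life_expectancy.csv")  # assumed columns: country, year, life_expectancy

subset = df[df["country"].isin(["Sweden", "India", "Nigeria"])]

for country, group in subset.groupby("country"):
    plt.plot(group["year"], group["life_expectancy"], label=country)

plt.xlabel("Year")
plt.ylabel("Life expectancy (years)")
plt.title("Life expectancy over time (data: Gapminder / Our World in Data)")
plt.legend()
plt.show()
```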
Labels: Research Methods and Statistics
david_myers
Author
10-30-2019
09:32 AM
“Death is reversible.” So began NYU medical center’s director of Critical Care and Resuscitation Research Science, Sam Parnia, at a recent research consultation on people’s death experiences during and after cardiac resuscitation.

Biologically speaking, he explained, death and cardiac arrest are synonymous. When the heart stops, a person will stop breathing and, within 2 to 20 seconds, the brain will stop functioning. These are the criteria for declaring someone dead. When there’s no heartbeat, no breathing, and no discernible brain activity, the attending physician records the time of death. Yet recent advances in science reveal that it may take many hours for individual brain cells to die. In a 2019 Nature report, slaughtered pigs’ brains, given a substitute blood infusion 4 hours after death, had brain function gradually restored over a 6- to 10-hour period. For many years now, brain cells from human cadaver biopsies have been used to grow brain cells up to 20 hours after death, explained Parnia. His underappreciated conclusion: “Brain cells die very, very slowly,” especially for those whose brains have been chilled, either medically or by drowning in cold water.

But what is death? A Newsweek cover showing a resuscitated heart attack victim proclaimed, “This man was dead. He isn’t any more.” Parnia thinks Newsweek got it right. The man didn’t have a “near death experience” (NDE). He had a death experience (DE). Ah, but Merriam-Webster defines death as “a permanent cessation of all vital functions.” So, I asked Parnia, has a resuscitated person actually died? Yes, replied Parnia. Imagine two sisters simultaneously undergoing cardiac arrest, one while hiking in the Sahara Desert, the other in a hospital ER, where she was resuscitated. Because the second could be resuscitated, would we assume that the first, in the same minutes following the cessation of heart and brain function, was not dead?

Of 2.8 million CDC-reported deaths in the United States annually, Parnia cites estimates of possibly 1.1 million attempted U.S. cardiac resuscitations a year. How many benefit from such attempts? And of those who survive, how many have some memory of their death experiences (cognitive activity during cardiac arrest)? For answers, Parnia offers his multi-site study of 2060 people who suffered cardiac arrests. In that group, 1730 (84 percent) died and 330 survived. Among the survivors, 60 percent later reported no recall of their death experience. The remaining 40 percent had some recollection, including 10 percent who had a meaningful “transformative” recall. If these estimates are roughly accurate, then some 18,000 Americans a year recall a death experience.

NDEs (or DEs) are reportedly recalled as a peaceful and pleasant sense of being pulled toward a light, often accompanied by an out-of-body experience with a time-compressed life review. After returning to life, patients report a diminished fear of death, a kinder spirit, and more benevolent values—a “transformational” experience that Parnia is planning to study with the support of 17 major university hospitals. In this study, cardiac-arrest survivors who do and don’t recall cognitive experiences will complete positive psychology measures of human flourishing.

One wonders (and Parnia does, too): When did the recalled death experiences occur? During the cardiac-arrest period of brain inactivity? During the moments before and at cardiac arrest? When the resuscitated patient was gradually re-emerging from a coma? Or even as a later constructed false memory?

Answers may come from a future Parnia study, focusing on aortic repair patients, some of whom experience a controlled condition that biologically approximates death, with no heartbeat and flat-lined brain activity. This version of aortic repair surgery puts a person under anesthesia, cools the body to 70 degrees, stops the heart, and drains the blood, creating a death-like state, during which the cardiac surgeon has 40 minutes to repair the aorta before warming the body and restarting the heart. Functionally, for those 40 or so minutes, the patient is dead . . . but then lives again. So, will some of these people whose brains have stopped functioning experience DEs? One study suggests that at least a few aortic repair patients, despite also being under anesthesia, do report a cognitive experience during their cardiac arrest.

Parnia hopes to take this research a step further, by exposing these “deep hypothermia” patients to stimuli during their clinical death. Afterwards he will ascertain whether any of them can report accurately on events occurring while they lacked a functioning brain. (Such has been claimed by people having transformative DEs.) Given that a positive result would be truly mind blowing—it would challenge our understanding of the embodied person and the mind-brain connection—my colleagues and I encouraged Parnia to

- preregister his hypotheses and methods with the Open Science Framework,
- conduct the experiment as an “adversarial collaboration” with a neuroscientist who would expect a null result, and
- have credible, independent researchers gather the data, as happens with clinical safety trials.

If this experiment happens, what do you predict: Will there be someone (anyone) who will accurately report on events occurring while their brain is dormant? Sam Parnia thinks yes. I think not.

Parnia is persuaded by his accumulation of credible-seeming accounts of resuscitated patients recalling actual happenings during their brain-inactive time. He cites the case of one young Britisher who, after all efforts to restart his heart had failed and his body turned blue, was declared dead. When the attending physician later returned to the room, he noticed that the patient’s normal color was returning and discovered that his heart had somehow restarted. The next week, reported Parnia, the patient astoundingly recounted events from his death period. As Agatha Christie’s Miss Marple reflected, “It wasn’t what I expected. But facts are facts, and if one is proved to be wrong, one must just be humble about it and start again.”

My skepticism arises from three lines of research: the failure of parapsychology experiments to confirm out-of-body travel with remote viewing, the mountain of cognitive neuroscience evidence linking brain and mind, and scientific observations showing that brain oxygen deprivation and hallucinogenic drugs can cause similar mystical experiences (complete with the tunnel, beam of light, and life review).

Nevertheless, Parnia and I agree with Miss Marple: Sometimes reality surprises us (as mind-boggling DE reports have surprised him). So stay tuned. When the data speak, we will both listen.

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

P.S. For those wanting more information: Parnia and other death researchers will present at a November 18th New York Academy of Sciences symposium on “What Happens When We Die?” (see here and here), with a live stream link to come.
For those with religious interests: My colleagues, British cognitive neuroscientist Malcolm Jeeves and American developmental psychologist Thomas Ludwig, reflect on the brain-mind relationship in their recent book, Psychological Science and Christian Faith. If you think that biblical religion assumes a death-denying dualism (thanks to Plato’s immortal soul), prepare to be surprised.
Labels: Consciousness, Research Methods and Statistics
david_myers
Author
10-03-2019
11:10 AM
At a recent Teaching of Psychology in Secondary Schools workshop hosted by Oregon State University, I celebrated and illustrated three sets of big ideas from psychological science. Without further explanation, here is a quick synopsis. Questions: Which of these would not be on your corresponding lists? And which would you add?

Twelve unsurprising but important findings (significant facts of life for our students to understand):

- There is continuity to our traits, temperament, and intelligence. With age, emotional stability and conscientiousness increase. Yet individual differences (extraversion and IQ) persist.
- Specific cognitive abilities are distinct yet correlated (g, general intelligence).
- Human traits (intelligence, personality, sexual orientation, psychiatric disorders, autism spectrum) are influenced by “many genes having small effects.”
- A pessimistic explanatory style increases depression risk.
- To a striking extent, perceptual set guides what we see.
- Rewards shape behavior.
- We prioritize basic needs.
- Cultures differ in how we dress, eat, and speak, and in our values.
- Conformity and social contagion influence our behavior.
- Group polarization amplifies our differences.
- Ingroup bias (us > them) is powerful and perilous.
- Nevertheless, worldwide, we are all kin beneath the skin (we share a human nature).

Eleven surprising findings that may challenge our beliefs and assumptions:

- Behavior genetics studies with twins and adoptees reveal a stunning fact: Within the normal range of environments, the “shared environment” effect on personality and intelligence (including parental nurture shared by siblings) is ~nil. As Robert Plomin says (2019), “We would essentially be the same person if we had been adopted at birth and raised in a different family.” Caveats: Parental extremes (neglect/abuse) matter. Parents influence values/beliefs (politics, religion, etc.). Parents help provide peer context (neighborhood, schools).
- Stable co-parenting correlates with children’s flourishing. Marriage (enduring partnership) matters . . . more than high school seniors assume . . . and predicts greater health, longevity, happiness, income, parental stability, and children’s flourishing. Yet most single adults and their children flourish.
- Sexual orientation is a natural disposition (parental influence appears nil), not a moral choice.
- Many gay men’s and women’s traits appear intermediate to those of straight women and men (for example, spatial ability).
- Seasonal affective disorder (SAD) may not exist (judging from new CDC data and people’s Google searches for help, by month).
- Learning styles—assuming that teaching should align with students’ varying ways of thinking and learning—have been discounted.
- We too often fear the wrong things (air crashes, terrorism, immigrants, school shootings).
- Brief “wise interventions” with at-risk youth sometimes succeed where big interventions have failed.
- Random data (as in coin tosses and sports) are streakier than expected.
- Reality is often not as we perceive it.
- Repression rarely occurs.

Some surprising findings reveal things unimagined:

- Astonishing insights—great lessons of psychological science—that are now accepted wisdom include split-brain experiments (the differing functions of our two hemispheres), sleep experiments (sleep stages and REM-related dreaming), and misinformation effect experiments (the malleability of memory).
- We’ve been surprised to learn what works as therapy (ECT, light therapy) and what doesn’t (Critical Incident Debriefing for trauma victims, D.A.R.E. drug abuse prevention, sexual reorientation therapies, permanent weight-loss programs).
- We’ve been astounded at our dual-processing powers—our two-track (controlled vs. automatic) mind, as evident in phenomena such as blindsight, implicit memory, implicit bias, and thinking without thinking (not-thinking => creativity).
- We’ve been amazed at the robustness of the testing effect (we retain information better after self-testing/rehearsing it) and the Dunning-Kruger effect (ignorance of one’s own incompetence).

The bottom line: Psychological science works! It affirms important, if unsurprising, truths. And it sometimes surprises us with findings that challenge our assumptions, and with discoveries that astonish us.

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)
Labels: Research Methods and Statistics
david_myers
Author
11-21-2018
07:13 AM
After elections, people often note unexpected outcomes and then complain that “the polls got it wrong.” After Donald Trump’s stunning 2016 presidential victory, the press gave us articles on “Why the Polls were such a Disaster,” on “4 Possible Reasons the Polls Got It So Wrong,” and on “Why the Polls Missed Their Mark.” Stupid pollsters. “Even a big poll only surveys 1500 people or so out of almost 130 million voters,” we may think, “so no wonder they can’t get it right.” Moreover, consider the many pundits who, believing the polls, confidently predicted a Clinton victory. They were utterly wrong, leaving many folks shocked on election night (some elated, others depressed, with later flashbulb memories of when they realized Trump was winning). So how could the polls, the pundits, and the prediction models have all been so wrong? Or were they?

First, we know that in a closely contested race, a representative sample of a mere 1500 people from a 130 million population will—surprisingly to many people—allow us to estimate the population preference within ~3 percent. Sounds easy. But there’s a challenge: Most randomly contacted voters don’t respond when called. The New York Times “Upshot” recently let us view its polling in real time. This enabled us to see, for example, that it took 14,636 calls to Iowa’s fourth congressional district to produce 423 responses, among which Steve King led J. D. Scholten by 5 percent—slightly more than the 3.4 percent by which King won.

Pollsters know the likely demographic make-up of the electorate, and so can weight results from respondents of differing age, race, and gender to approximate the population. And that, despite the low response rate, allows them to do remarkably well—especially when we bear in mind that their final polls are taken ahead of the election (and cannot account for last-minute events, which may sway undecided voters). In 2016, the final polling average favored Hillary Clinton by 3.9 percent, with a 3 percent margin of error. On Election Day, she won the popular vote by 2.1 percent (and 2.9 million votes)—well within that margin of error.

To forecast a race, fivethirtyeight.com’s prediction model does more. It “takes lots of polls, performs various types of adjustments to them [based on sample size, recency, and pollster credibility], and then blends them with other kinds of empirically useful indicators” such as past results, expert assessments, and fundraising. Their 2016 final estimation gave Clinton roughly a 7 in 10 chance of winning.

Ha! This prediction, like other 2016 prediction models, failed. Or did it? Consider a parallel. Imagine that as a basketball free-throw shooter steps to the line, I tell you that the shooter has a 71 percent free-throw average. If the shooter misses, would you disbelieve the projection? No, because, if what I’ve told you is an accurate projection, you should expect to see a miss 29 percent of the time. If the player virtually never missed, then you’d rightly doubt my data. Likewise, if a candidate to whom Nate Silver’s fivethirtyeight.com gives a 7 in 10 chance of winning always won, then the model would, indeed, be badly flawed. Yes?

In the 2018 U.S. Congressional races, fivethirtyeight.com correctly predicted 96 percent of the outcomes. On the surface, that may look like a better result, but it’s mainly because most races were in solid Blue or Red districts and not seriously contested.

Ergo, don’t be too quick to demean the quality polls and the prediction models they inform. Survey science still works.
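The ~3 percent figure above follows from basic sampling statistics. Here is a minimal Python sketch (my own illustration, not from the post) that computes the conventional 95 percent margin of error for a proportion estimated from a simple random sample of 1,500 voters.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a sample of 1,500 respondents:
moe = margin_of_error(1500)
print(f"Margin of error: +/-{moe:.1%}")  # roughly +/-2.5%, i.e., about 3 percent

# Note: this assumes a simple random sample with full response. Real polls,
# with low response rates and demographic weighting, have somewhat larger
# effective margins of error.
```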
(For David Myers’ other weekly essays on psychological science and everyday life, visit TalkPsych.com.)
Labels: Research Methods and Statistics
david_myers
Author
09-06-2018
07:58 AM
Some fun emails stimulated by last week’s essay on loss aversion in sports and everyday life pointed me to statistician David Spiegelhalter’s Cambridge Coincidence Collection, which contains people’s 4500+ reports of weird coincidences. That took my mind back to some personally experienced coincidences . . . like the time my daughter, Laura Myers, bought two pairs of shoes. Back home, we were astounded to discover that the two brand names on the boxes were “Laura” and “Myers.” Or the time I confused our college library desk clerk when checking out after using a photocopy machine. My six-digit charge number was identical to the one-in-a-million six-digit number of copies on which the last user had stopped. Or the day my wife, Carol, called seeking my help in sourcing Mark Twain’s quote, “The man who does not read good books has no advantage over the man who cannot read them.” After this first-ever encounter with that quote, my second encounter was 90 minutes later, in a Washington Post article.

In Intuition: Its Powers and Perils, I report more amusing coincidences. Among my favorites: Twins Levinia and Lorraine Christmas, driving to deliver Christmas presents to each other near Flitcham, England, collided. Three of the first five U.S. Presidents—Adams, Jefferson, and Monroe—died on the same date, which was none other than the 4th of July. And my favorite . . . in Psalm 46 of the King James Bible, published in the year that Shakespeare turned 46, the 46th word is “shake” and the 46th word from the end is “spear.” (An even greater marvel: How did anyone notice this?)

What should we make of weird coincidences? Were they, as James Redfield suggested in The Celestine Prophecy, seemingly “meant to happen . . . synchronistic events, and [that] following them will start you on your path to spiritual truth”? Is it a wink from God that your birthdate is buried among the random digits of pi? Beginning 50,841,600 places after the decimal, my 9/20/1942 birthdate appears . . . and you can likewise find yours here.

Without wanting to drain our delight in these serendipities, statisticians have a simpler explanation. Given the countless billions of daily events, some weird juxtapositions are inevitable—and then likely to get noticed and remembered (while all the premonitions not followed by an envisioned phone call or accident are unnoticed and fall into oblivion). “With a large enough sample, any outrageous thing is likely to happen,” observed statisticians Persi Diaconis and Frederick Mosteller. Indeed, added mathematician John Allen Paulos, “the most astonishingly incredible coincidence imaginable would be the complete absence of all coincidences.”

Finally, consider: That any specified coincidence will occur is very unlikely. That some astonishing unspecified event will occur is certain. That is why remarkable coincidences are noted in hindsight, not predicted with foresight. And that is also why we don’t need paranormal explanations to expect improbable happenings, even while delighting in them.
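Diaconis and Mosteller’s point about large samples is easy to make concrete. Here is a minimal Python sketch (my own illustration; the one-in-a-million odds and the population figure are assumptions chosen for the example) showing how an event that is vanishingly unlikely for any one person on any one day becomes near-certain across millions of people.

```python
# Illustrative sketch of the "law of truly large numbers": an event with
# assumed one-in-a-million odds per person per day, across a large population.
p_event = 1e-6            # assumed chance of a "weird coincidence" per person per day
population = 300_000_000  # assumed round number, roughly the U.S. population

# Chance that a particular person experiences it on a particular day:
print(f"For one person, one day: {p_event:.6%}")

# Expected number of people experiencing it somewhere in the country today:
expected = p_event * population
print(f"Expected cases nationwide today: {expected:,.0f}")

# Chance that at least one person, somewhere, experiences it today:
p_at_least_one = 1 - (1 - p_event) ** population
print(f"Probability it happens to someone today: {p_at_least_one:.10f}")
```

Run it and the one-in-a-million event is expected to happen to hundreds of people today, with the probability that it happens to someone effectively equal to 1.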
Labels: Research Methods and Statistics