Psychology Blog - Page 7
sue_frantz
Expert
11-23-2022
09:29 AM
I read with increasing horror a New York Times article describing how college and university athletic departments have partnered with sportsbooks to encourage betting among their students. (The legal betting age in the U.S. varies by state. In ten states the minimum age is 18, in Alabama it is 19, and in all of the rest—including D.C.—it is 21. See the state list. In Canada, the age is 18 or 19 depending on the province. See the province list.) “Major universities, with their tens of thousands of alumni and a captive audience of easy-to-reach students, have emerged as an especially enticing target” for gambling companies (Betts et al., 2022). What I’d like to write is an opinion piece about the financial state of colleges and universities (how is it that public funding has evaporated?), about how athletic departments have come to operate outside of the college and university hierarchy (why does my $450 airfare to a professional conference have to be signed off on by a raft of people, while an athletic department can sign a $1.6 million deal without the university’s Board of Regents knowing anything about it?), and about the ethically suspect behavior of a college or university using its students’ contact information—such as the email addresses the institution provides and requires students to use for official communication—to encourage those students to bet on sports. But I’m not going to write that opinion piece. At least not in this forum. Instead, I am going to write about what I know best: teaching Intro Psych. If our colleges and universities are going to encourage our students to gamble on sports, psychology professors need to be more explicit in discussing gambling. Within casinos, slot machines are the biggest gaming moneymaker (see this UNLV Center for Gaming Research infographic for an example).
For everything you could possibly want to know about slot machines, I highly recommend Addiction by Design by Natasha Dow Schüll, a cultural anthropologist at New York University. Slot machines and sports betting are similar in that they both pay out on a variable ratio schedule. People play slot machines to escape; the machines are powered by negative reinforcement, not positive. Each win provides the ability to play longer, and thus to spend even more time not thinking about problems at school, at work, at home, or in the world. The goal of the slot machine manufacturer and casino is to get you to stay at the machine longer. Having recently visited a casino, I was impressed by some of the newer innovations designed to do just that, such as comfy seats and phone charging pads built into the slot machine itself. Sports betting, in contrast, may—initially at least—be driven by positive reinforcement: each win feels good and apparently outweighs the punishment of a loss. Like slot machines, however, sports betting can become an escape. The time spent planning bets, placing bets, monitoring the games and matches one has put money on, and then trying to find ways to fund the next round of bets is time not spent thinking about problems at school, at work, at home, or in the world. Since we’re talking about decision making, cognitive biases are also at play. For example, the availability heuristic may have us give undue attention to the big betting wins our friends brag about. Are our friends telling us about their big losses, too? If not, we may feel like winning is more common than losing. We know, however, that winning is not more common. Every time someone downloads the University of Colorado Boulder’s partner sportsbook app using the university’s promo code and then places a bet, the university banks $30. If the sportsbook is giving away $30 every time, how much money in losing bets per person, on average, must the sportsbook be collecting?
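The variable-ratio payout schedule described above can be simulated in a few lines. This is a minimal sketch: the 1-in-10 payout odds, the seed, and the function name are illustrative assumptions, not figures from any real slot machine or sportsbook.

```python
import random

def variable_ratio_payouts(n_plays, mean_ratio=10, seed=42):
    """Simulate a variable-ratio schedule: each play pays out with
    probability 1/mean_ratio, so wins arrive after an unpredictable
    number of plays that averages out to mean_ratio."""
    rng = random.Random(seed)
    gaps = []            # plays between consecutive wins
    since_last_win = 0
    for _ in range(n_plays):
        since_last_win += 1
        if rng.random() < 1 / mean_ratio:   # an unpredictable win
            gaps.append(since_last_win)
            since_last_win = 0
    return gaps

gaps = variable_ratio_payouts(100_000)
avg_gap = sum(gaps) / len(gaps)
# The average gap lands near the scheduled ratio, but individual gaps
# vary wildly -- that unpredictability is what sustains the behavior.
print(f"wins: {len(gaps)}, average plays per win: {avg_gap:.1f}")
```

Running this shows why the next win always feels just around the corner: the player cannot tell a long dry spell from a machine that will pay on the very next play.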
While there are many topics in the Intro Psych course where sports betting can be discussed, I’ll suggest using it as an opener for the discussion of psychological disorders. To be considered a psychological disorder, a behavior needs to be unusual, distressing, and dysfunctional (American Psychiatric Association, 2013). Ask students to envision a friend who lies about how much they are gambling, who has wanted to quit or greatly reduce how much they are betting but can’t seem to be able to, and who is using student loans to fund their betting. Do your students think their friend meets the criteria for a psychological disorder? Why or why not? If you’d like, have students discuss in small groups, and then invite groups to share their conclusions. Gambling disorder is a DSM-5 diagnosis categorized under “Substance-Related and Addictive Disorders.” In previous editions of the DSM, it was called “pathological gambling” and was categorized as an impulse control disorder. Also in previous DSMs, illegal activity was a criterion for diagnosis; that has been removed in DSM-5. To be diagnosed with gambling disorder, a person must—in addition to impairment and/or distress—meet at least four of the following criteria:
- Requires higher and higher bets to get the same rush
- Becomes irritable during attempts to cut back on gambling
- Has been repeatedly unsuccessful when trying to cut back or stop gambling
- Spends a lot of time thinking about gambling
- When stressed, turns to gambling as an escape
- Chases losses (for example, after losing a $20 bet, places an even higher bet to try to get the $20 back)
- Lies about how much they are gambling
- Gambling interferes with their performance in school or at a job, or has negatively affected interpersonal relationships
- Gets money from others to support their gambling
Poll your students—even by a show of hands—to find out if they know someone, including themselves, who meets at least four of these criteria.
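The counting rule above (at least four of nine criteria, plus impairment or distress) can be sketched as a toy tally. The criteria strings are paraphrased from the list above, and `meets_threshold` is a hypothetical helper for illustration only—in no way a diagnostic instrument.

```python
# Illustrative only: a toy version of the DSM-5 threshold rule described
# above (at least four of nine criteria, plus impairment or distress).
GAMBLING_CRITERIA = [
    "needs higher bets for the same rush",
    "irritable when cutting back",
    "repeated unsuccessful attempts to stop",
    "preoccupied with gambling",
    "gambles to escape stress",
    "chases losses",
    "lies about gambling",
    "gambling harms school, work, or relationships",
    "relies on others for gambling money",
]

def meets_threshold(criteria_met, impairment_or_distress):
    """Return True when the DSM-5 count rule is satisfied."""
    return impairment_or_distress and len(criteria_met) >= 4

# The hypothetical friend from the scenario above:
example = {"lies about gambling", "repeated unsuccessful attempts to stop",
           "relies on others for gambling money", "chases losses"}
print(meets_threshold(example, impairment_or_distress=True))  # True
```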
For help with a gambling problem, residents of the U.S., Canada, and the U.S. Virgin Islands can contact the National Problem Gambling Helpline by calling or texting 1-800-522-4700 any day at any time. For those who prefer chat, visit this webpage. For additional peer support, recommend gamtalk.org. References American Psychiatric Association (Ed.). (2013). Diagnostic and statistical manual of mental disorders: DSM-5 (5th ed.). American Psychiatric Association. Betts, A., Little, A., Sander, E., Tremayne-Pengelly, A., & Bogdanich, W. (2022, November 20). How colleges and sports-betting companies ‘Caesarized’ campus life. The New York Times. https://www.nytimes.com/2022/11/20/business/caesars-sports-betting-universities-colleges.html
sue_frantz
Expert
11-14-2022
05:00 AM
In the days before learning management systems, my students would take exams in class and submit hard copies of their assignments. I would write carefully crafted comments on these documents before returning them to the students in class. Some students would read my comments immediately. Some students would tuck their papers into their book or notebook, and I would fool myself into thinking that each of these students would give my comments careful consideration when they were in a quiet place and could give my comments the attention they deserved. And some students would toss the papers into the trashcan on their way out the door at the end of class—my carefully crafted comments never so much as glanced at. Now, in the age of learning management systems, my carefully crafted comments are digital. I cannot see whether my students are reading my comments, but I am confident that the percentages are not all that different from the days of paper. I can see why students would read their professors’ comments, because I was that type of student. I did well in school, so if I missed a question or didn’t earn a perfect score on a paper, I wanted to know why. What to make of those students who don’t read their professors’ comments, who toss their papers in the trash? I made a they-don’t-care-about-school attribution and gave it no more thought. And then some years ago Roddy Roediger pointed out that students who found taking the test or writing the paper aversive were disinclined to revisit the experience. In other words, if they hated doing the test or paper in the first place, why would they want to spend even more time thinking about it? That was a true “doh!” moment for me. If I really wanted students to learn from their mistakes, I was going to have to provide an incentive for revisiting these aversive events. To that end, I began using an assignment wrapper (this earlier blog post describes what I do).
This is not the only time I’ve thought about failure—or, more generally, about being wrong. Author Adam Grant told a story about giving a talk and having Nobel Prize winner Daniel Kahneman, who was in the audience, come up to him afterward and say, “That was wonderful. I was wrong.” (See a longer description of Grant and Kahneman’s interaction and my thoughts on it in this blog post.) With all of that floating around in my head, I read Lauren Eskreis-Winkler and Ayelet Fishbach’s (2022) Perspectives on Psychological Science article on learning from failure. They argue that there are two big reasons we tend not to learn from failure: one emotional, the other cognitive. The emotional reason is that we want to feel good about ourselves. As a general rule, reflecting on where we have gone wrong does not produce happy feelings about ourselves; therefore, we prefer not to engage in such reflection. There appear to be two cognitive reasons why it is hard for us to learn from failure. First, confirmation bias causes us to look for information that aligns with our view of ourselves as a person who is correct. We focus on all of the times when we have been correct and dismiss the times when we have been incorrect. Second, it is cognitively easier to learn from our successes than from our failures. When we succeed, we can simply say, “Let’s do that again.” When we fail, we have to figure out why we failed and then develop a different course of action. That takes much more effort. Based on their summary of why learning from failure is hard, Eskreis-Winkler and Fishbach offer some suggestions on how to encourage others—in this case, our students—to learn from failure. Of course, I don’t mean failure defined as scoring below standard on an assessment. I mean failure in a more general sense, such as missing items on an exam or losing points on an assignment. First, let’s look at their suggested interventions designed to counter the emotional barriers to learning from failure.
Rather than having to address our own failures, we can observe and learn from the failures of others. Instructors who go over the most commonly missed exam questions in class, for example, are taking this approach. When giving instructions for an assignment, some instructors will create an example with many common errors and then ask students to work in small groups to identify the errors. Creating some emotional distance between ourselves and our failures can help us look at our failures more objectively. One strategy would be to ask myself “Why did Sue fail?” rather than “Why did I fail?” While I can see why that would work in theory, I’m having a hard time picturing how to explain it to students in a way that would minimize eyerolling. Asking students to give advice to other students can help students learn from their failure while at the same time turning the failure into a source of strength. For example, immediately after returning exam scores, ask students to take a minute to reflect on what they did in studying for the test that worked well and what they would do differently next time. Ask them to write their advice—just a couple of sentences—in whatever format is easiest for you to collect. For example, you could distribute blank index cards for students to write on, collect the cards, shuffle them, and then redistribute them. If you’d like to screen them first, collect the cards, read them, and then redistribute them the next class session. Or you can make this an online class discussion where the initial post is the student’s advice. Remind students that they have abilities and skills, that their education is important to them (commitment), and that they have expertise. Eskreis-Winkler and Fishbach tell us that experts have an easier time learning from failure than do novices. Experts are committed to being experts in their field.
To be an expert, they know that they have abilities and skills, but to get even better, they have to be able to learn from failure. Perhaps this is one reason it is easier for Daniel Kahneman to accept being wrong—every time he is, he learns something new and is now even more of an expert than he was before. While our students may not (yet) be Nobel Prize winners, they do have reading, study, and social skills that they can build on. Remind students that they were not born knowing psychology, chemistry, math, history, or whatever, nor were they born knowing how to write or how to study. Knowledge and skills are learned. You probably recognized this as fostering a growth mindset. Eskreis-Winkler and Fishbach also have five suggested interventions that address cognitive barriers to learning from failure. Being explicit about how failure can help us learn can reduce the cognitive effort needed to learn from failure. For example, if your course includes a comprehensive final, point out to students that if they take a look at the questions they missed on this exam, they can learn the correct information now, and that will reduce how much time they need to study for the final. While this may seem obvious to instructors, students who are succumbing to confirmation bias and cognitive miserliness may not realize that reviewing missed questions will save them time in the long run. We seem to have an easier time learning from failure when our failure involves the social domain. “An adult who loses track of time and misses a meeting with friends may tune in and learn more from this failure than an adult who loses track of time and misses a train” (Eskreis-Winkler & Fishbach, 2022, p. 1517). I wonder if a jigsaw classroom, small group discussions, or study groups would address this. If a student is accountable to others, are they more likely to learn from their errors? It’s an interesting question.
If having enough cognitive bandwidth is a barrier to learning from failure, then providing time in class for students to learn from failure may be time well spent. When I gave in-class multiple choice exams, students would take the test themselves first. After they submitted their completed bubble sheets, they got a new bubble sheet, and students would answer the same questions again, but this time it was open note, open book, and an open free-for-all discussion. The individual test was worth 50 points, and the wide-open test was worth 10 points. So much learning happened in that wide-open test that if I were to go back to multiple-choice tests, I’d make the wide-open test worth 25 points. Most students discussed and debated the answers to the questions. Even the students who were not active participants were active listeners. While the students hadn’t received their exam scores back yet, I’d hear students say, “AH! I missed that one!” They were learning from their failures—and in a socially supportive atmosphere. The more practice we have at a skill, the fewer cognitive resources we need to devote to it, and so the easier it is to learn from our failures. One approach would be to encourage students to add tools to their study skills toolbox. The LearningScientists.org study posters are a great place for students to start. The more tools they have, the easier it will be for them to choose the best one for what they are learning. By analogy, if all they have in their toolbox is a hammer, that hammer will work great when a hammer is called for. But if they have a situation that calls for a screwdriver or pliers, they might be able to make the hammer work, but it will take much more effort and the outcomes will not be that great. Picture hammering in a screw. Once students are well-practiced at using a number of different study skills, it will be easier for them to see where a particular study skill did not serve them well for a particular kind of test. 
What they learn from their failure, perhaps, is to implement a different study skill. We can work to create a culture that accepts failure as a way to learn. This can be a challenge with students who have been indoctrinated to see failure as a reflection on who they are as human beings. Standards-based grading, mastery-based grading, and ungrading are all strategies for embracing failure as an opportunity to learn. In each case, students do the work and then continue to revise until a defined bar has been reached. In these approaches, failure is not a final thing; it is merely information one learns from. For the instructors I’ve known who have tried one of these techniques, the biggest challenge seems to come from students who have a hard time grasping a grading system that is not point based. Being able to learn from failure is a lifelong skill that will serve our students well. If you try any of these strategies, be explicit about why. And then tell students that in their next job interview, when they are asked about their greatest strength, their answer may very well be learning from failure. It’s the rare employer who would not love hearing that. Reference Eskreis-Winkler, L., & Fishbach, A. (2022). You think failure is hard? So is learning from it. Perspectives on Psychological Science, 17(6), 1511–1524. https://doi.org/10.1177/17456916211059817
sue_frantz
Expert
11-09-2022
02:03 PM
In my online class weekly discussions, I ask students to share good news—no matter how small—from the previous week. In the last couple of years, I have had students report being excited because their credit scores increased. At least some people—number unknown—have gamified their credit. They are watching in almost real time what happens when they pay down or pay off their credit cards. It’s a great example of operant conditioning: engaging in this behavior increases my credit score; therefore, I’m going to engage in this behavior more often. It was with that in mind that I read this Lifehacker article: “Here’s How to Stop Succumbing to Financial Peer Pressure” (Dietz, 2022). Have my students who are trying to reduce their spending because of its impact on their credit scores become consciously aware of the social pressure to spend money that they don’t necessarily have? I don’t have the answer to that question, but given that it’s a topic explored by Lifehacker, it must be in the consciousness of some. In any case, I propose that we bring this topic to the forefront of the minds of all of our students in our coverage of conformity. After covering conformity, share this scenario with your students. At college, Logan has developed some new friendships. Logan enjoys the company of their new friends, but they have noticed that their new friends prefer to eat at expensive restaurants, to sit in the expensive seats at concerts, plays, and sporting events, and to shop for clothes in pricey stores. Logan doesn’t know if these friends have a lot of money or if they are poor at managing the money they have. In any case, Logan doesn’t have that kind of money and doesn’t want to damage their credit score by going deeply into debt before they’re even done with their first college term.
After reviewing the factors that tend to increase conformity, ask your students to envision how each of those factors could be present in Logan’s relationship with their money-spending friends. For example, conformity tends to increase when the group is unanimous about a decision. Logan would be more likely to go along with the group if one person suggests having dinner at the most expensive restaurant in town and everyone else in the group agrees. After students have had a couple of minutes to think about these on their own, ask them to share their examples in small groups. After discussion has died down, invite groups to share their thoughts with the class. Now let’s take it one step further. For each factor, ask students to consider what Logan can do to counter it. For example, if Logan knows that there will be a discussion about where to have dinner, Logan can approach one person from the group in advance, explain their financial situation, and ask if this friend would be supportive when Logan suggests a less expensive dinner option. Knowing that an ally will also dissent, rendering the group no longer unanimous, may help Logan suggest a different restaurant—or, better yet, a potluck. Again, give students an opportunity to share their ideas in small groups, and then invite groups to share their best ideas with the class. This activity not only gives students practice applying the factors that influence conformity, it also gives them a chance to see how they may be inadvertently pressuring their friends and how they themselves may be being pressured—and it gives them some strategies for countering the pressure. Given that money is the context for this activity, we may even help our students raise their credit scores. Reference Dietz, M. (2022, October 25). Here’s how to stop succumbing to financial peer pressure. Lifehacker. https://lifehacker.com/here-s-how-to-stop-succumbing-to-financial-peer-pressur-1849694842
sue_frantz
Expert
10-27-2022
07:37 AM
The Intro Psych therapy chapter is a good place to reinforce what students have learned about research methods throughout the course. In this freely available 2022 study, researchers wondered about the effectiveness of a particular medication (naltrexone combined with bupropion) and a particular type of psychotherapy (behavioral weight loss therapy) as a treatment for binge-eating disorder (Grilo et al., 2022). First, give students a bit of background about binge-eating disorder. If you don’t have the DSM-5 (with or without the TR) handy, this Mayo Clinic webpage may give you what you need (Mayo Clinic Staff, 2018). Next, let students know that naltrexone and bupropion work together on the hypothalamus to both reduce how much we eat and increase the amount of energy we expend (Sherman et al., 2016). It’s a drug combination approved by the FDA for weight loss. Lastly, behavioral weight loss therapy is all about gradual changes to lifestyle: gradual decreases in daily calories consumed, gradual increases in nutritional quality, and gradual increases in exercise. Invite students to consider how they would design an experiment to find out which treatment is most effective for binge-eating disorder: naltrexone-bupropion, behavioral weight loss (BWL) therapy, or both. In this particular study (“a randomized double-blind placebo-controlled trial”), researchers used a 2 (drug vs. placebo) x 2 (BWL vs. no therapy) between-participants design. In their discussion, they note that, in retrospect, a BWL-therapy-alone group would have been a good thing to have. The study was carried out over a 16-week period. Participants were randomly assigned to condition. Researchers conducting the assessments were blind to conditions. Next, ask students what their dependent variables would be. The researchers had two primary dependent variables. They measured binge-eating remission rates, with remission defined as no self-reported instances of binge eating in the last 28 days.
They also recorded the percentage of participants who lost 5% or more of their body weight. Ready for the results?

Percentage of participants who had no binge-eating instances in the last 28 days:

                Placebo    Naltrexone-Bupropion
No therapy      17.7%      31.3%
BWL therapy     37.%       57.1%

Percentage of participants who lost 5% or more of their body weight:

                Placebo    Naltrexone-Bupropion
No therapy      11.8%      18.8%
BWL therapy     31.4%      37.1%

As studies evaluating treatments for other psychological disorders have found, medication and psychotherapy combined are more effective than either alone. If time allows, you can help students gain a greater appreciation for how difficult getting participants for this kind of research can be. Through advertising, the researchers heard from 3,620 people who were interested. Of those, 972 never responded after the initial contact. That left 2,648 to be screened for whether they would be appropriate for the study. Following the screening, only 289 potential participants were left. Ask students why they think so few participants remained. Here are the top reasons: participants did not meet the criteria for binge-eating disorder (715), participants decided they were not interested after all (463), and participants were taking a medication that could not be mixed with naltrexone-bupropion (437). Other reasons included, but were not limited to, having a medical condition that could affect the study’s results, already being in a treatment program for weight loss or binge-eating disorder (the study would no longer be a sole test of these treatments), and being pregnant or breast-feeding (they could not take the drugs). After signing the consent form and doing the initial assessment, another 153 were found not to have met the inclusion criteria. That left 136 to be randomly assigned to conditions. Over the 16 weeks of the study, 20 participants dropped out on their own, and four were removed for medical reasons.
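The recruitment arithmetic reported in the study can be checked with a quick tally. All figures are the ones given above; the variable names are mine.

```python
# Participant flow for Grilo et al. (2022), as summarized above.
expressed_interest = 3620
no_followup        = 972    # never responded after the initial contact
screened = expressed_interest - no_followup              # 2,648 screened

passed_screening      = 289
excluded_at_screening = screened - passed_screening      # incl. 715 + 463 + 437

failed_inclusion = 153      # excluded after consent and initial assessment
randomized = passed_screening - failed_inclusion         # 136 randomized

dropped_out     = 20
removed_medical = 4
completed = randomized - dropped_out - removed_medical   # 112 completers

print(f"{completed} completers from {expressed_interest} initially "
      f"interested ({completed / expressed_interest:.1%})")
```

Seeing that only about 3% of the people who expressed interest ended up contributing data makes the recruitment challenge concrete for students.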
It took 3,620 people who expressed interest to end up with data from 112 participants. There is no information in the article about whether participants who were not in the drug/psychotherapy group were offered—after the study was over—the opportunity to experience the combined treatment that was so effective. Ethically, it would have been the right thing to do. References Grilo, C. M., Lydecker, J. A., Fineberg, S. K., Moreno, J. O., Ivezaj, V., & Gueorguieva, R. (2022). Naltrexone-bupropion and behavior therapy, alone and combined, for binge-eating disorder: Randomized double-blind placebo-controlled trial. American Journal of Psychiatry, 1–10. https://doi.org/10.1176/appi.ajp.20220267 Mayo Clinic Staff. (2018, May 5). Binge-eating disorder. Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/binge-eating-disorder/symptoms-causes/syc-20353627 Sherman, M. M., Ungureanu, S., & Rey, J. A. (2016). Naltrexone/bupropion ER (Contrave): Newly approved treatment option for chronic weight management in obese adults. Pharmacy & Therapeutics, 41(3), 164–172.
sue_frantz
Expert
10-17-2022
05:00 AM
I have had occasion to send out emails with some sort of inquiry. When I don’t get any response, it ticks me off. I don’t do well with being ignored. I’ve learned that about myself. Even a short “I’m not the person to help you. Good luck!” would be welcome. I mention that to acknowledge that I brought that particular baggage with me when I read an article in the Journal of Counseling Psychology about counselors ignoring email messages from people seeking a counselor (Hwang & Fujimoto, 2022). As if the bias this study revealed were not anger-inducing enough on its own. The results are not particularly surprising, but that does not make me less angry. I’ve noticed that as I’m writing this, I’m pounding on my keyboard. Wei-Chin Hwang and Ken A. Fujimoto were interested in finding out how counselors would respond to email inquiries from potential clients who varied on probable race, probable gender, psychological disorder, and whether they inquired about a sliding fee scale. The researchers used an unidentified “popular online directory to identify therapists who were providing psychotherapeutic services in Chicago, Illinois” (Hwang & Fujimoto, 2022, p. 693). From the full list of 2,323 providers, they identified 720 to contact. In the first two paragraphs of their methods section, Hwang and Fujimoto explain their selection criteria. The criterion that eliminated the most therapists was that the therapist needed to have an advertised email address. Many of the therapists listed only permitted contact through the database. Because the researchers did not want to violate the database’s terms of service, they opted not to contact therapists this way. They also excluded everyone who said that they only accepted clients within a specialty area, such as sports psychology. And they had to find a solution for group practices where two or more therapists from the same practice were in the database.
Hwang and Fujimoto did not want to risk therapists in the same group practice comparing email requests with each other, so they randomly chose one therapist in each group practice to receive their email. This experiment was a 3x3x2x2 (whew!).

- Inquirer’s race: White, African American, Latinx American (the three most common racial groups in Chicago, where the study was conducted). The researchers used U.S. Census Bureau data to identify last names that were most common for each racial group: Olson (White), Washington (African American), and Rodriguez (Latinx).
- Inquirer’s diagnosis: depression, schizophrenia, borderline personality disorder (previous research has shown that providers find people with schizophrenia or borderline personality disorder less desirable to work with than, say, people with depression).
- Inquirer’s gender: male, female (male first names: Richard, Deshawn, and José; female first names: Molly, Precious, and Juana).
- Inquirer’s ability to pay the full fee: yes, no.

In their methods section, Hwang and Fujimoto include the scripts they used. Each script includes this question: “Can you email me back so that I can make an appointment?” The dependent variable was responsiveness. Did the provider email the potential client back within two weeks? If not, that was coded as “no responsiveness.” (In the article’s Table 1, the “no responsiveness” column is labeled “low responsiveness,” but the text makes it clear that this column is “no responsiveness.”) If the provider replied but stated they could not treat the inquirer, that was coded as “some responsiveness.” If the provider replied with the offer of an appointment or an invitation to discuss further, that was coded as “high responsiveness.” There were main effects for inquirer race, diagnosis, and ability to pay the full fee. The cells refer to the percentage of providers’ email messages in each category.

Table 1. Responsiveness by Race of Inquirer

Name                             No responsiveness   Some responsiveness   High responsiveness
Molly or Richard Olson           15.4%               33.2%                 51.5%
Precious or Deshawn Washington   27.4%               30.3%                 42.3%
Juana or José Rodriguez          22.3%               34.0%                 43.7%

There was one statistically significant interaction: male providers were much more likely to respond to Olson than they were to Washington or Rodriguez. Female providers showed no bias in responding by race. If a therapist does not want to work with a client based on their race, then it is probably best for the client if they don’t. But at least have the decency to reply to the email with some lie about how you’re not taking on more clients, and then refer them to a therapist who can help.

Table 2. Responsiveness by Diagnosis

Diagnosis                         No responsiveness   Some responsiveness   High responsiveness
Depression                        17.9%               20.0%                 62.1%
Schizophrenia                     25.8%               43.8%                 30.4%
Borderline personality disorder   21.3%               33.8%                 45.0%

Similar thoughts here. I get that working with a client diagnosed with schizophrenia or borderline personality disorder takes a very specific set of skills that not all therapists have, but, again, at least have the decency to reply to the email saying that you don’t have the skills, and then refer them to a therapist who does.

Table 3. Responsiveness by Inquirer’s Ability to Pay Full Fee

Ability to pay full fee   No responsiveness   Some responsiveness   High responsiveness
No                        22.4%               39.7%                 38.0%
Yes                       21.0%               25.4%                 53.6%

While Hwang and Fujimoto interpret these results to mean a bias against members of the working class, I have a different interpretation. The no-responsiveness rate was about the same in both conditions, with roughly 20% of providers not replying at all. If there were an anti-working-class bias, I would expect the no-responsiveness percentage for those asking about a sliding fee scale to be much greater. In both levels of this independent variable, about 80% gave some reply.
It could be that the greater percentage of "some responsiveness" in reply to those who inquired about a sliding fee scale was due to the providers being maxed out on the number of clients who were paying reduced fees. One place to discuss this study and its findings with Intro Psych students is in the therapy chapter. It would work well as part of your coverage of therapy ethics codes. Within the ethics code for the American Counseling Association, Section C on professional responsibility is especially relevant. It reads in part: Counselors facilitate access to counseling services, and they practice in a nondiscriminatory manner…Counselors are encouraged to contribute to society by devoting a portion of their professional activity to services for which there is little or no financial return (pro bono publico) (American Counseling Association, 2014, p. 8). Within the ethics code of the American Psychological Association, Principle D: Justice is particularly relevant: Psychologists recognize that fairness and justice entitle all persons to access to and benefit from the contributions of psychology and to equal quality in the processes, procedures, and services being conducted by psychologists. Psychologists exercise reasonable judgment and take precautions to ensure that their potential biases, the boundaries of their competence, and the limitations of their expertise do not lead to or condone unjust practices (American Psychological Association, 2017). Principle E: Respect for People's Rights and Dignity is also relevant. It reads in part: Psychologists are aware of and respect cultural, individual, and role differences, including those based on age, gender, gender identity, race, ethnicity, culture, national origin, religion, sexual orientation, disability, language, and socioeconomic status, and consider these factors when working with members of such groups.
Psychologists try to eliminate the effect on their work of biases based on those factors, and they do not knowingly participate in or condone activities of others based upon such prejudices (American Psychological Association, 2017). This study was conducted in February 2018—before the pandemic. Public mental health has not gotten better since. Asking for help is not easy. When people muster the courage to ask for help, the absolute least we can do is reply. Even if we are not the best person to provide that help, we can direct them to additional resources, such as one of these crisis help lines. For a trained and licensed therapist who is bound by their profession's code of ethics to simply not reply at all to a request for help, I just don't have the words. Again, I should acknowledge that I have my own baggage about having my email messages ignored. If you want to blame your lack of response on the volume of email you have to sort through (I won't ask whether you are selectively not responding based on perceived inquirer characteristics), I have an hour-long workshop that will help you get your email under control and keep it that way.

References

American Counseling Association. (2014). ACA 2014 code of ethics. https://www.counseling.org/docs/default-source/default-document-library/2014-code-of-ethics-finaladdress.pdf?sfvrsn=96b532c_8

American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. https://www.apa.org/ethics/code

Hwang, W.-C., & Fujimoto, K. A. (2022). Email me back: Examining provider biases through email return and responsiveness. Journal of Counseling Psychology, 69(5), 691–700. https://doi.org/10.1037/cou0000624
sue_frantz
Expert
10-10-2022
05:30 AM
I have been doing a bit of digging in the research databases, and I came across a Journal of Eating Disorders article with a 112-word "Plain English summary" (Alberga et al., 2018). I love this so much I can hardly stand it. Steven Pinker (2014) wrote an article for The Chronicle of Higher Education titled "Why academics' writing stinks." Pinker does not pull any punches in his assessment. Let's face it. Some academic writing is virtually unreadable. Other academic writing is actually unreadable. Part of the problem is one of audience. If researchers are writing for other researchers in their very specific corner of the research world, of course they are going to use jargon and make assumptions about what their readers know. That, though, is a problem for the rest of us. I have spent my career translating psychological science as an instructor and, more recently, as an author. This is what teaching is all about: translation. If we are teaching in our particular subdiscipline, translation is usually not difficult. If we are teaching Intro Psych, though, we have to translate research writing that is miles away from our subdiscipline. This is what makes Intro Psych the most difficult course in the psychology curriculum to teach. I know instructors who do not cover, for example, biopsychology or sensation and perception in their Intro Psych courses because they do not understand the topics themselves. Additionally, some of our students have learned through reading academic writing to write in a similarly incomprehensible style. Sometimes I feel like students initially wrote their papers in plain English and then threw a thesaurus at them to make their writing sound more academic. We have certainly gone wrong somewhere if 'academic' has come to mean 'incomprehensible.' I appreciate the steps some journals have taken to encourage or require article authors to tell readers why their research is important.
In the Society for the Teaching of Psychology’s journal Teaching of Psychology, for example, the abstract ends with a “Teaching Implications” section. Many other journals now require a “Public Significance Statement” or a “Translational Abstract” (what the Journal of Eating Disorders calls a “plain English summary”). I have read my share of public significance statements. I confess that sometimes it is difficult—impossible even—to see the significance of the research to the general public in the statements. I suspect it is because the authors themselves do not see any public significance. That is probably truer for (some areas of) basic research than it is for any area of applied research. Translational abstracts, in contrast, are traditional abstracts rewritten for a lay audience. APA’s page on “Guidance for translational abstracts and public significance statements” (APA, 2018) is worth a read. An assignment where students write both translational abstracts and public significance statements for existing journal articles gives students some excellent writing practice. In both cases, students have to understand the study they are writing about, translate it for a general audience, and explain why the study matters. And maybe—just maybe—as this generation of college students become researchers and then journal editors, in a couple generations plain English academic writing will be the norm. This is just one of several windmills I am tilting at these days. The following is a possible writing assignment. While it can be assigned after covering research methods, it may work better later in the course. For example, after covering development, provide students with a list of articles related to development that they can choose from. 
While curating a list of articles means more work for you up front, students will struggle less to find article abstracts that they can understand, and your scoring of their assignments will be easier since you will have a working knowledge of all of the articles students could choose from.

- Read the American Psychological Association's (APA's) "Guidance for translational abstracts and public significance statements."
- Choose a journal article from this list of Beth Morling's student-friendly psychology research articles (or give students a list of articles).
- In your paper:
  - Copy/paste the article's citation.
  - Copy/paste the article's abstract.
  - Write your own translational abstract for the article. (The scoring rubric for this section will be based on APA's "Guidance for translational abstracts and public significance statements.")
  - Write your own public significance statement. (The scoring rubric for this section will also be based on APA's guidance.)

References

Alberga, A. S., Withnell, S. J., & von Ranson, K. M. (2018). Fitspiration and thinspiration: A comparison across three social networking sites. Journal of Eating Disorders, 6(1), 39. https://doi.org/10.1186/s40337-018-0227-x

APA. (2018, June). Guidance for translational abstracts and public significance statements. https://www.apa.org/pubs/journals/resources/translational-messages

Pinker, S. (2014). Why academics' writing stinks. The Chronicle of Higher Education, 61(5).
sue_frantz
10-03-2022
05:30 AM
Here is another opportunity to give students practice in experimental design. While this discussion (synchronous or asynchronous) or assignment would work well after covering research methods, using it in the Intro Psych social psych chapter as a research methods refresher would work, too. By way of introduction, explain to your students how double-blind peer review works and why it is our preferred approach in psychology. Next, ask students to read the freely available, less-than-one-page article in the September 16, 2022 issue of Science titled "Reviewers Award Higher Marks When a Paper's Author Is Famous." While I have a lot of love for research designs that involve an entire research team brainstorming for months, I have a special place in my heart for research designs that must have occurred to someone in the middle of the night. This study has to be the latter. If you know it is not, please do not burst my bubble. A Nobel Prize winner (Vernon Smith) and his much lesser-known former student (Sabiou Inoua) wrote a paper and submitted it for publication in a finance journal. The journal editor and colleagues thought, "Hey, you know what would be interesting? Let's find out if this paper would fly if the Nobel Prize winner's name wasn't on it. Doesn't that sound like fun?" The study comprises two experiments (available in pre-print). In the first experiment, the experimenters contacted 3,300 researchers, asking whether they would be willing to review an economics paper based on a short description of the paper. Those contacted were randomly assigned to one of three conditions. Of the one-third who were told that the author was Smith, the Nobel laureate, 38.5% agreed to review the paper. Of the one-third who were told that the author was Inoua, the former student, 28.5% agreed to review. Lastly, of the one-third who were given no author names, 30.7% agreed to review.
Even though there was no statistical difference between these latter two conditions as reported in the pre-print, the emotional difference must be there. If I'm Inoua, I can accept that many more people are interested in reviewing a Nobel laureate's paper than are interested in reviewing mine. What's harder to accept is that my paper appears even less interesting when my name is on it than when my name is not on it. I know. I know. Statistically, there is no difference. But, dang, if I were in Inoua's shoes, it would be hard to get my heart to accept that.

Questions for students:
- In the first experiment, what was the independent variable?
- Identify the independent variable's experimental conditions and control condition.
- What was the dependent variable?

Now let's take a look at the second experiment. For their participants, the researchers limited themselves to those who had volunteered to review the paper when they had not been given either author's name. They randomly divided this group of reviewers into the same conditions as in the first experiment: author identified as Vernon Smith, author identified as Sabiou Inoua, and no author name given. How many reviewers recommended that the editor reject the paper? With Smith's name on the paper, 22.6% said reject. With Inoua's name on the paper, 65.4% said reject. With no name on the paper, 48.2% said reject. All differences are statistically significant. Now, standing in Inoua's shoes, the statistical difference matches my emotional reaction. Thin comfort.

Questions for students:
- In the second experiment, what was the independent variable?
- Identify the independent variable's experimental conditions and control condition.
- What was the dependent variable?

The researchers argue that their data reveal a status bias. If you put a high-status name on a paper, more colleagues will be interested in reviewing it, and of those who do review it, more will advocate for its publication.
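If you want students to see where a claim like "no statistical difference" comes from, the first experiment's agree-to-review rates can be checked with a quick two-proportion z-test. This is my own sketch, not the researchers' analysis; it assumes the 3,300 invitations were split evenly into groups of 1,100, with counts approximated from the reported percentages:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test (two-sided p-value via the normal tail)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# 28.5% agreed when the paper was attributed to Inoua; 30.7% with no author name
z, p = two_prop_z(round(0.285 * 1100), 1100, round(0.307 * 1100), 1100)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is well above .05: no significant difference
```

Students can rerun the same function on the Smith-versus-Inoua rejection rates from the second experiment and watch the difference become significant.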
Double-blind reviews really are fairer, although the researchers note that in some cases—especially with pre-prints or conference presentations—reviewers may know whose paper it is even if the editor conceals that information. With this pair of experiments, the paper sent out for review was not available as a pre-print, nor had it been presented at a conference. The stickier question is how to interpret the high reject recommendations for Inoua as compared to when no author was given. While the experimenters intended to contrast Smith's status with Inoua's, there is a confounding variable. Reviewers who are not familiar with Sabiou Inoua will not know Inoua's race or ethnicity for certain, but they might guess that Inoua is a person of color. A quick Google search finds Inoua's photo and CV on Chapman University's Economic Science Institute faculty and staff page. He is indeed a person of color from Niger. Is the higher rejection rate when Inoua's name was on the paper due to his lower status? Or due to his race or ethnicity? Or was it due to an interaction between the two?

Questions for students:
- Design an experiment that would test whether Inoua's higher rejection rate was more likely due to status or more likely due to race/ethnicity.
- Identify your independent variable(s), including the experimental and control conditions.
- Identify your dependent variable.

If time allows, give students an opportunity to share other contexts where double-blind reviews are or would be better.
sue_frantz
09-26-2022
10:25 AM
Early in my Intro Psych teaching career, I didn't cover sleep or stress, probably because I thought that these more applied topics were less important than the core theories. When I finally noticed how sleep-deprived and stressed my students were, I had a DOH! moment. That was the beginning of what became a years-long shift in how I thought about Intro Psych. These days, I choose the content of my Intro Psych course based on what I think my neighbors need to know about psychology (more on that thinking here). Emotion regulation is a topic that my neighbors need to know about, so I'm adding it to my Intro Psych course. Examples abound—in the news, on Reddit, on Failblog—of people acting on emotion, seemingly without having made any attempt to moderate their emotions. They lash out at whoever happens to be in their line of fire. In some cases, the fire is literal. By naming the emotion regulation strategies and giving students some practice at thinking through how the strategies can be employed in different situations, students may be better able to moderate their emotions when needed. (There's an empirical question for anyone interested in studying the long-term effects of taking Intro Psych.) While it makes sense to cover emotion regulation in the emotion chapter, it would fit fine in the stress and coping chapter. For an excellent overview of emotion regulation, take a look at McRae and Gross's (2020) open access article "Emotion regulation." For your students, start by describing and giving examples of the five emotion regulation strategies. Note that the strategies are sequential: which strategy is employed depends on how deep into the emotional event we are. With situation selection, we choose our situations to elicit or not elicit certain emotions. For example, if we find particular family members aggravating, we may choose not to be around them, thereby decreasing the likelihood of us feeling aggravated.
Or if we have a friend whose company tends to generate positive emotions, we may ask them to meet us for coffee thereby increasing the likelihood of us feeling happy. When we cannot avoid a particular situation, we may be able to alter it. In situation modification, we attempt to change the situation. For example, if we are stuck sharing a holiday dinner with family members who we find aggravating, we can ask other family members to run interference so that our time interacting with the aggravators is minimized. At the holiday dinner, despite our best efforts, we find ourselves seated next to one of our aggravating family members. Using attentional deployment, we shift our attention to other things. Rather than listen to the ranting of this family member, we stop paying attention to what they are saying. Instead, we focus on the words spoken by the family member on the other side of us, we silently sing to ourselves, we mentally review all of the concepts we learned in our Intro Psych course, or we count backward from 10,000 by threes. It's now a month after the holiday dinner, and memories of those aggravating comments keep popping up. It’s now time to try cognitive change. Is it possible to think of the comments and the people who made them in a different way? Television producer Norman Lear—who turned 100 in July 2022—titled his memoir Even This I Get to Experience. It is an apt title, because it really does seem to be how he approaches life. He views negative events not so much as negative, but as opportunities to experience something new. That dinner with aggravating relatives? Even that we got to experience. And we got some good stories out of it! The last emotion regulation strategy is response modulation. 
When all of the other strategies fail us, and we experience the emotion in all of its unmitigated glory, we can reduce the strength of the emotion by doing something else, such as lifting weights, playing pickleball, or eating an entire batch of chocolate chip cookies. Now is a good time to note that some response modulation strategies are better for us than others. Now it is your students' turn. Give students a minute to think about an event that could generate strong negative emotions. It could be an event that has occurred or an event that is anticipated. It could be an event from their own lives or from the lives of family or friends. In a face-to-face or virtual class, ask students to gather in groups of three or four. In an online course, a discussion board works fine. Ask students to share their events with each other. For each event, ask students to consider how each emotion regulation strategy could be or could have been used. Invite groups to share from their discussion their favorite event and emotion regulation strategy.

Reference

McRae, K., & Gross, J. J. (2020). Emotion regulation. Emotion, 20(1), 1–9. https://doi.org/10.1037/emo0000703
sue_frantz
09-19-2022
05:00 AM
Drew Gilpin Faust (president of Harvard from 2007 to 2018) is back in the classroom teaching undergraduate history. In The Atlantic, she wrote about the experience of discovering that most of her students could not read cursive (Faust, 2022). Some of you may remember the 2010 battle over whether cursive handwriting should be in the standards for the K-12 Common Core. The arguments over the dinner table tore families apart. Okay, maybe not. Much more divisive political views would do that in their own time, but people certainly had opinions about whether children needed to learn cursive. One concern was that people who did not learn cursive would not be able to read historical documents that were written in cursive, such as the U.S. Constitution. I admit that was not a particularly high concern of mine, as many people have 'translated' the cursive into print. Faust, however, discovered that when she showed her students photographs of Civil War-era documents, most of her students could not read them. To them, it was like looking at hieroglyphics. One student said that she decided against doing a research paper on Virginia Woolf because she was unable to read the cursive handwriting in Woolf's letters. Students who are interested in earlier time periods (where 'earlier' is defined as before, say, 2015) will need to learn how to read cursive if they want to read original documents. How long will it be until we see the first Cursive Handwriting course taught in a history department? Or is it already being offered? (I would totally teach that course!) Forget about identifying all of the squares that contain traffic lights, crosswalks, and chimneys. Just give me some cursive text. The youngsters will have to ask their grandparents to read it to them. The opportunity is ripe for a tech company that can create a tool to convert cursive handwriting to text. As for our own teaching, this shift away from cursive means that we need to make some changes.
If you do any handwriting—on student assignments or on the board—be sure to print. You can write cursive if you want, but some of your younger students won't be able to read it. As Faust writes, "Didn't professors make handwritten comments on their papers and exams? Many of the students found these illegible. Sometimes they would ask a teacher to decipher the comments; more often they just ignored them" (Faust, 2022). As for me, my handwriting was never that great. Through school, my cursive devolved into an idiosyncratic set of scribbles, a jumble of cursive and print. It only got worse when I became a professor. When I was still handwriting comments for students, some would ask me to decipher them. I am certain most of my students just ignored them. Typing is my preferred mode of written communication. I can type faster than I can write. Besides, I'm much more confident that you—and my students—can read my typing. Most of my students are probably still ignoring my comments, but at least I know they can read them if they so choose.

Reference

Faust, D. G. (2022, September 16). Gen Z never learned to read cursive. The Atlantic. https://www.theatlantic.com/magazine/archive/2022/10/gen-z-handwriting-teaching-cursive-history/671246/
sue_frantz
09-12-2022
05:00 AM
In 2013, Thibault Le Texier, a French academic, accidentally tripped over Philip Zimbardo's 2008 "The Psychology of Evil" TED Talk. Le Texier became so fascinated by Zimbardo's prison study that he devoured everything he could find about it. Like others, he thought it would make an excellent documentary. A couple of film producers received a grant to send Le Texier to Stanford to take a deep dive into the prison study's archives (Le Texier, 2018). About what he learned, Le Texier wrote: C'est là, en juillet 2014, que mon enthousiasme a fait place au scepticisme, puis mon scepticisme à l'indignation, à mesure que je découvrais les dessous de l'expérience et l'évidence de sa manipulation. It was there, in July 2014, that my enthusiasm gave way to skepticism, then my skepticism to indignation, as I discovered the underside of the experiment and the evidence of its manipulation. [Google translation, with the translation stamp of approval from this blog post author based on her limited French] Le Texier published what he learned in his well-researched 2018 book, Histoire d'un mensonge: Enquête sur l'expérience de Stanford (available from Amazon). While the book has not yet been published in English, a not-too-bad Google translation is freely available. In 2019, Le Texier provided us with a summary of his findings in an American Psychologist article (Le Texier, 2019). If your library does not carry this journal, you can download a copy of the article from Le Texier's website. I remember when the 2019 American Psychologist article came out. It was the July/August edition. I made a mental note to read the article but never made the time to actually read it. I'm currently working on a writing project that gave me the impetus to finally read it. If you cover the prison study in any of your courses, the American Psychologist article is a must-read. The biggest surprise to me was the amount of guidance and direction the guards were given.
The popular narrative is that the power of the prison situation created guards who enthusiastically took on the role of what they believed a guard to be. Instead, the guards were instructed to engage in particular behaviors, such as the middle-of-the-night counts. In fact, the guards thought of themselves as fellow experimenters who were tasked with creating a stressful psychological environment for the prisoners. During the guards' orientation, Zimbardo told them that he had a grant to study the conditions which lead to mob behavior, violence, loss of identity, feelings of anonymity. [. . .] [E]ssentially we're setting up a physical prison here to study what that does and those are some of the variables that we've discovered are current in prisons, those are some of the psychological barriers. And we want to recreate in our prison that psychological environment (Le Texier, 2019, p. 827). This was indeed the original purpose of the study—to see how a stressful prison-like situation could impact mock prisoners. Zimbardo wrote in The Lucifer Effect, I should mention again that my initial interest was more in the prisoners and their adjustment to this prisonlike situation than it was in the guards. The guards were merely ensemble players who would help create a mind-set in the prisoners of the feeling of being imprisoned. […] Over time, it became evident to us that the behavior of the guards was as interesting as, or sometimes even more interesting than, that of the prisoners (Zimbardo, 2007, p. 55). What has been lost to time, however, is that the guards did not decide for themselves how to behave. Another big factor that affected what happened during the study was how much the guards and prisoners were paid: $15 per day. In 2022 dollars, that is about $110 per day (Webster, 2022). Both guards and prisoners said that it was in their best interest to act as Zimbardo expected in order to stretch the experience out as long as they could.
The longer they stayed, the more money they would all make. As for my own writing project, I knew I could not delete the prison study wholesale. Too many people know something about it, and it is well past time for us to discuss what the historical record tells us. In the end, I framed my coverage of the study in the context of the study's demand characteristics. Perhaps in a strange twist of fate, Zimbardo's point about the prison study holds, but not in the way he describes it. The power of the situation can, indeed, greatly affect our behavior. The power of the situation in the prison study, however, does not come from taking on the role of guard or prisoner in a prison situation, but from taking on the role of experimenter (or, at least, experimenter assistant, as the guards believed themselves to be) and the role of research participant (as the prisoners knew themselves to be) in a research situation. In the end, the prison study appears to be an excellent object lesson in the power of demand characteristics in a psychological research situation.

References

Le Texier, T. (2018). Histoire d'un mensonge: Enquête sur l'expérience de Stanford. Éditions La Découverte.

Le Texier, T. (2019). Debunking the Stanford Prison Experiment. American Psychologist, 74(7), 823–839. https://doi.org/10.1037/amp0000401

Webster, I. (2022, September 4). $15 in 1971 is worth $109.73 today. CPI Inflation Calculator. https://www.in2013dollars.com/us/inflation/1971?amount=15
sue_frantz
09-05-2022
05:00 AM
In an example of archival research, researchers analyzed data from the U.S. National Health and Nutrition Examination Survey for the years 2007 to 2012 (Hecht et al., 2022). They found that after controlling for “age, gender, race/ethnicity, BMI, poverty level, smoking status and physical activity,” (p. 3) survey participants “with higher intakes of UPF [ultra-processed foods] report significantly more mild depression, as well as more mentally unhealthy and anxious days per month, and less zero mentally unhealthy or anxious days per month” (p. 7). So far, so good. The researchers go on to say, “it can be hypothesised that a diet high in UPF provides an unfavourable combination of biologically active food additives with low essential nutrient content which together have an adverse effect on mental health symptoms” (p. 7). I don’t disagree with that. It is one hypothesis. By controlling for their identified covariates, they address some possible third variables, such as poverty. However, at no place in their article do they acknowledge that the direction can be reversed. For example, it can also be hypothesized that people who are experiencing the adverse effects of mental health symptoms have a more difficult time consuming foods high in nutritional quality. Anyone who battles the symptoms of mental illness or who is close to someone who does knows that sometimes the best you can do for dinner is a hotdog or a frozen pizza—or if you can bring yourself to pick up your phone—pizza delivery. 
They do, however, include reference to an experiment: "[I]n one randomized trial, which provides the most reliable evidence for small to moderate effects, those assigned to a 3-month healthy dietary intervention reported significant decreases in moderate-to-severe depression." The evidence from that experiment looks pretty good (Jacka et al., 2017), although the groups were not equivalent on diet at baseline: the group that got the dietary counseling scored much lower on the dietary measure than did the group that got social support. Also, those who received social support during the study did, in the end, have better mental health scores and better diet scores than they did at baseline, although all we have are the means; I don't know whether the differences are statistically significant. All of that is to say that the possibility remains that reducing the symptoms of mental illness may also increase nutritional quality. Both the Jacka et al. experiment and the Hecht et al. correlational study are freely available. You may also want to read the Science Daily summary of the Hecht et al. study, where the author (or editor?) writes, "Do you love those sugary-sweet beverages, reconstituted meat products and packaged snacks? You may want to reconsider based on a new study that explored whether individuals who consume higher amounts of ultra-processed food have more adverse mental health symptoms." If you'd like to use this in your Intro Psych class, after covering correlations and experiments, ask your students to read the Science Daily summary. Then ask your students two questions. 1) Is this a correlational study or an experiment? 2) From this study, can we conclude that ultra-processed foods negatively affect mental health? These questions lend themselves well to in-class student response systems (e.g., Clickers, Plickers). Lastly, you may want to share with your students more information about both the Hecht et al. study and the Jacka et al. experiment.
If time allows, give your students an opportunity to design an experiment that would test this hypothesis: improvement in mental health symptoms causes better nutritional consumption.

References

Hecht, E. M., Rabil, A., Martinez Steele, E., Abrams, G. A., Ware, D., Landy, D. C., & Hennekens, C. H. (2022). Cross-sectional examination of ultra-processed food consumption and adverse mental health symptoms. Public Health Nutrition, 1–10. https://doi.org/10.1017/S1368980022001586

Jacka, F. N., O'Neil, A., Opie, R., Itsiopoulos, C., Cotton, S., Mohebbi, M., Castle, D., Dash, S., Mihalopoulos, C., Chatterton, M. L., Brazionis, L., Dean, O. M., Hodge, A. M., & Berk, M. (2017). A randomised controlled trial of dietary improvement for adults with major depression (the 'SMILES' trial). BMC Medicine, 15(1), 23. https://doi.org/10.1186/s12916-017-0791-y
sue_frantz
Expert
08-24-2022
10:57 AM
While I structure my course—and provide direct instruction—on time management, I generally do not address procrastination head-on. When I taught face-to-face, though, I’d wear this t-shirt to class: “Procrastinate today! Future you won’t mind the extra work.” As far as interventions go, it was low cost: $19.99 plus shipping, and it was one day I didn’t have to weigh my different clothing options. Did it help students reduce their procrastination? I don’t know. I never measured procrastination in classes that saw the shirt and those that didn’t. It wasn’t because of procrastination, though! It just never occurred to me to do it. An article in the August 2022 issue of Current Directions in Psychological Science has me thinking about procrastination again. Akira Miyake and Michael J. Kane suggest several small-teaching interventions that can help students develop some anti-procrastination strategies. Their suggested interventions are based on a self-control model of procrastination (Miyake & Kane, 2022). One reason we procrastinate is that doing the task is aversive, and so we regulate our emotion by doing something less aversive instead. James Gross has done the most thinking about and the most research on emotion regulation. A freely available article he wrote with Kateri McRae for the journal Emotion provides a nice overview of the topic (McRae & Gross, 2020). Doing something less aversive than the thing we should be doing is not always a bad thing. I’m a fan of productive procrastination. For example, yesterday morning I was going to write this blog post. While I don’t usually find writing aversive (although I did as a college student—big time), if I have done several days of writing, sitting down in front of my computer monitor can feel like an insurmountable lift. That was yesterday. Instead, I did a whole list of household chores, including shoveling gravel—admittedly, not a typical household chore. 
Now, the shoveling of gravel was something I had been procrastinating on. With the heat we’ve had and, well, it’s shoveling gravel, the task was pretty aversive. Or at least it was until something else became more aversive. To help with task aversion, Miyake and Kane suggest instructors teach students about the pomodoro technique: set a timer for 25 minutes, work for those 25 minutes, then take a 5-minute break; repeat. They also suggest teaching students the scientifically unvalidated 5-second rule: when the inclination to work on the task hits, you have five seconds to act before the feeling passes. I would add to this my strategy of getting out everything I will need and setting it up so that when that inclination hits, I am ready to go. To further reduce task aversion, Miyake and Kane recommend that instructors do more on our end. When students see value in their assignments, the assignments are less aversive. For example, we can ask students to write a few sentences on how an assignment can be personally meaningful to them. We can also break large assignments into smaller ones. While it would be great if all students could already do this on their own, many don’t. When we break larger assignments into smaller ones, we are modeling the practice. It would probably also help if we were explicit about why we are doing that. While we’re at it, it probably wouldn’t hurt to describe big projects that we’re working on now and how we’ve broken those projects into smaller, more manageable pieces. Doing this can also help students stop thinking about the end outcome and focus on the process involved in getting there. I’ve had plenty of students who were so focused on what their end grade in the course was going to be that they forgot the purpose was to learn. I remind them that if they focus on learning, the grades will follow. That reminder doesn’t help everyone, but it seems to resonate with some. 
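For instructors (or students) who like to see the pomodoro cycle laid out concretely, the 25-minutes-on/5-minutes-off arithmetic is easy to sketch. This is purely illustrative: the function name `pomodoro_schedule` and its structure are my own, not something from Miyake and Kane.

```python
# Illustrative sketch of the pomodoro cycle: 25 minutes of work,
# then a 5-minute break, repeated until the study session is used up.
# The function name and structure are hypothetical, not from any cited source.

def pomodoro_schedule(total_minutes, work=25, rest=5):
    """Return a list of ("work"/"break", minutes) blocks covering total_minutes."""
    blocks = []
    remaining = total_minutes
    while remaining > 0:
        w = min(work, remaining)       # a work block, possibly truncated at the end
        blocks.append(("work", w))
        remaining -= w
        if remaining > 0:              # only break if there is session time left
            b = min(rest, remaining)
            blocks.append(("break", b))
            remaining -= b
    return blocks

# A two-hour study session:
for kind, minutes in pomodoro_schedule(120):
    print(f"{kind}: {minutes} min")
```

A two-hour session comes out to four work blocks and four short breaks, which is a concrete way to show students that the technique costs only a small fraction of their study time.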
In addition to task aversion, we may also procrastinate because we lose sight of our goals—or don’t have goals at all. As a student (high school, college, and grad school), I was firmly in the latter category. I had no goals beyond making it through each class I took with an A or a B. Those were good enough goals for me, as I’ve done well enough in my career. At no point, though, did I have a long-term goal to become a college professor. I just kind of fell into it. Once I got into this career, though, I did develop some career goals, and I’ve checked a bunch of those boxes. Miyake and Kane suggest helping students create goals and then teaching students how to use planning tools such as a calendar, a to-do list (e.g., Trello), and reminders (e.g., nudgemail.com) to help them reach those goals. They also suggest instructors use their learning management system (LMS) to send reminders to students. Again, it would be great if all of our students had the skills to create reminders for themselves, but they don’t. Now I wonder if it would be effective to remind students to set up reminders—meta-reminders. There’s an empirical question. Miyake and Kane’s last set of suggestions for helping students work toward their goals is to teach students to use when/then statements to propel them forward. For example, “When I leave class, then I am going to go to the student union, order coffee and a scone, and start reading the next chapter.” They also recommend encouraging students to remove distractions. For most of my students, it’s their phones. For others, it’s their family or others they live with. They’ve found going to the library or a coffee shop helps reduce distractions. My favorite was my student who would go to the food court at IKEA: not many people on a weekday, free wifi, cheap snacks, AC, and a great place to take a walk during a break. 
While managing negative mood states and attending to goals are important, Miyake and Kane also recommend reflection and community building to help students adopt some of the strategies discussed above. For reflection, instructors can ask students to periodically reflect on their study habits (e.g., what’s working and what’s not). Creating a supportive class environment where students can support each other in their anti-procrastination efforts provides a space where students can share their strategies and celebrate their wins. Lastly, Miyake and Kane recommend that we evaluate the effectiveness of our interventions, preferably with objective measures rather than self-report. For example, are students submitting their work earlier than they did in previous quarters? If you’re game for adopting some of the strategies suggested by Miyake and Kane for your Intro Psych course and are interested in working with other Intro Psych instructors to gather effectiveness data, visit the collaboration page at Regan A. R. Gurung’s Hub for Introductory Psychology and Pedagogical Research (HIPPR) website. If you’re the first one there, fill out the HIPPR collaboration form. Do you use any of these or similar strategies to help students develop anti-procrastination skills? Or do you know of any peer-reviewed articles that have evaluated anti-procrastination strategies in a classroom or work environment? I invite you to use the comment box below. References McRae, K., & Gross, J. J. (2020). Emotion regulation. Emotion, 20(1), 1–9. https://doi.org/10.1037/emo0000703 Miyake, A., & Kane, M. J. (2022). Toward a holistic approach to reducing academic procrastination with classroom interventions. Current Directions in Psychological Science, 31(4), 291–304. https://doi.org/10.1177/09637214211070814
sue_frantz
Expert
08-16-2022
01:09 PM
If you are looking for a new study to freshen up your coverage of experimental design in your Intro Psych course, consider this activity. After discussing experiments and their component parts, give students this hypothesis: Referring to “schizophrenics” as compared to “people with schizophrenia” will cause people to have less empathy for those who have a diagnosis of schizophrenia. In other words, does the language we use matter? Assure students that they will not actually be conducting this experiment. Instead, you are asking them to go through the design process that all researchers go through. Ask students to consider these questions, first individually to give students an opportunity to gather their thoughts and then in a small group discussion: What population are you interested in studying and why? Are you most interested in knowing what impact this choice of terminology has on the general population? High school students? Police officers? Healthcare providers? Next, where might you find 100 or so volunteers from your chosen population to participate? Design the experiment. What will be the independent variable? What will the participants in each level of the independent variable be asked to do? What will be the dependent variable? Be sure to provide an operational definition of the dependent variable. Invite groups to share their populations of interest with a brief explanation of why they chose that population and where they might find volunteers. Write the populations where students can see the list. Point out that doing this research with any and all of these populations would have value. The independent variable and dependent variable should be the same for all groups since they are stated in the hypothesis. Operational definitions of the dependent variable may vary, however. Give groups an opportunity to share their overall experimental design. 
Again, point out that if researchers find support for the hypothesis regardless of the specifics of how the experiment is conducted and regardless of the dependent variable’s operational definition, that is all the more support for the robustness of the findings. Even if some research designs or operational definitions or particular populations do not support the hypothesis, that is also very valuable information. Researchers then get to ask why these experiments found different results. For example, if research with police officers returns different findings than research with healthcare workers, psychological scientists get to explore why. For example, is there a difference in their training that might affect the results? Lastly, share with students how Darcy Haag Granello and Sean R. Gorby researched this hypothesis (Granello & Gorby, 2021). They were particularly interested in how the terms “schizophrenic” and “person with schizophrenia” would affect feelings of empathy (among other dependent variables) for both practicing mental health counselors and graduate students who were training to be mental health counselors. For the practitioners, they found volunteers by approaching attendees at a state counseling conference (n=82) and at an international counseling conference (n=79). In both cases, they limited their requests to a conference area designated for networking and conversing. For the graduate students, faculty at three different large universities asked their students to participate (n=109). Since they were particularly interested in mental health counseling, anyone who said that they were in school counseling or who did not answer the question about counseling specialization had their data removed from the analysis (n=19). In the end, they had a total of 251 participants. Granello and Gorby gave the participants the Community Attitudes Toward the Mentally Ill scale. 
This measure has four subscales: authoritarianism, benevolence, social restrictiveness, and community mental health ideology. While the original version of the scale asked about mental illness more generally, the researchers amended it so that “mental illness” was replaced with “schizophrenics” or “people with schizophrenia.” The researchers stacked the questionnaires so that the terminology used alternated. For example, if the first person they approached received the questionnaire asking about “schizophrenics,” the next person would have received the questionnaire asking about “people with schizophrenia.” Here are sample items for the “schizophrenics” condition, one from each subscale:
- Schizophrenics need the same kind of control and discipline as a young child (authoritarian subscale)
- Schizophrenics have for too long been the subject of ridicule (benevolence subscale)
- Schizophrenics should be isolated from the rest of the community (social restrictiveness subscale)
- Having schizophrenics living within residential neighborhoods might be good therapy, but the risks to residents are too great (community mental health ideology subscale)
Here are those same sample items for the “people with schizophrenia” condition:
- People with schizophrenia need the same kind of control and discipline as a young child (authoritarian subscale)
- People with schizophrenia have for too long been the subject of ridicule (benevolence subscale)
- People with schizophrenia should be isolated from the rest of the community (social restrictiveness subscale)
- Having people with schizophrenia living within residential neighborhoods might be good therapy, but the risks to residents are too great (community mental health ideology subscale)
What did the researchers find? When the word “schizophrenics” was used:
- both practitioners and students scored higher on the authoritarian subscale.
- the practitioners (but not the students) scored lower on the benevolence subscale.
- all participants scored higher on the social restrictiveness subscale.
- there were no differences on the community mental health ideology subscale for either practitioners or students.
Give students an opportunity to reflect on the implications of these results. Invite students to share their reactions to the experiment in small groups. Allow groups who would like to share some of their reactions with the class an opportunity to do so. Lastly, as time allows, you may want to share the two limitations to the experiment identified by the researchers. First, the practitioners who volunteered were predominantly white (74.1% identified as such) and had the financial means to attend a state or international conference. Would practitioners of a different demographic show similar results? The graduate students also had the financial means to attend a large in-person university. Graduate students enrolled in online counseling programs, for example, may show different results. A second limitation the researchers identified is that when they divided their volunteers into practitioners and students, the number of participants in each group fell below the recommended number needed for the statistical power to detect real differences. With more participants, they may have found even more statistically significant differences. Even with these limitations, however, the point holds: the language we use affects the perceptions we have. Reference Granello, D. H., & Gorby, S. R. (2021). It’s time for counselors to modify our language: It matters when we call our clients schizophrenics versus people with schizophrenia. Journal of Counseling & Development, 99(4), 452–461. https://doi.org/10.1002/jcad.12397
sue_frantz
Expert
08-08-2022
01:31 PM
I was pleased to see in the April/May 2022 issue of the APA’s Monitor on Psychology an interview with Dr. Laura Helmuth, editor in chief of Scientific American (Santoro, 2022). (Read it here.) According to her LinkedIn page, Dr. Helmuth has a BS in biology and psychology from Eckerd College. If you have ever taken the airport shuttle from the Tampa airport to NITOP, you may have stopped at Eckerd to drop off students returning to college after winter break. NITOP has been at the Tradewinds in St. Pete Beach since 1988 (Bernstein, 2019). Helmuth was there from 1987 to 1991. Long-time NITOP attendees may have shared an airport shuttle with her. From Eckerd, Helmuth went to UC Berkeley for her Ph.D. in cognitive neuroscience. It didn’t take her long to find her calling in science journalism. Here are some career highlights: she’s worked as a reporter and editor for the AAAS flagship publication Science, as science editor for Smithsonian Magazine, as science and health editor for Slate, as health and science editor for The Washington Post, and since 2020 as editor in chief of Scientific American. The short interview with Dr. Helmuth in the Monitor would be good for students to read for a number of reasons. For all college students, it is an excellent lesson in the importance of solid writing skills. If you can write well, you can take your career in any number of directions. For psychology majors, the interview points out the need for people who understand the inner workings of science to translate it for the general public. Students should see science journalism as a legitimate career path. For Intro Psych students, the interview drives home the point that, yes, psychology is indeed a science. If you would like to build a class discussion (face-to-face or online) around this article, here are two suggested discussion questions. After reading this interview, what do you think is the most important thing Dr. Helmuth wants you to know? 
If you could ask Dr. Helmuth a question, what question would you ask? Why? If you are feeling especially adventurous, take the questions students would ask Dr. Helmuth and combine those that are most similar. Share the final list with students, and then invite students to vote for, say, their top three favorite questions. Send the top question or two to Dr. Helmuth via Twitter (@LauraHelmuth). If she responds, be sure to share her response with your students. References Bernstein, D. A. (2019). A brief history of NITOP. National Institute on the Teaching of Psychology. https://nitop.org/History Santoro, H. (2022, April). Psychology coverage is vital for scientists: 6 questions for Laura Helmuth. Monitor on Psychology, 53(2). https://www.apa.org/monitor/2022/04/conversation-helmuth-psychologists-scientists
sue_frantz
Expert
08-01-2022
05:00 AM
I just finished reading—and rereading—Freedman and Fraser’s famous 1966 Journal of Personality and Social Psychology paper, “Compliance without pressure: The foot-in-the-door technique” (Freedman & Fraser, 1966). This is the safe-driving sign study. The paper doesn’t say what I expected it to say. First, a heads up. Because this paper was published in 1966, there is some disorienting time travel involved. Freedman and Fraser’s first experiment involved making requests of “156 Palo Alto, California housewives” (p. 197) who were randomly chosen from the telephone directory. For those who aren’t familiar with the terms housewives and telephone directory, Google can help with that. The second experiment—and the more famous of the two—included in their final analysis the results from 105 women (no word on whether they were housewives) and 7 men (househusbands, presumably). In the paper’s discussion, Freedman and Fraser used the universal masculine. I did the math; 97.4% of the two experiments’ participants were women. Their writing about “his” reasoning in the discussion threw me a bit. None of that is a knock against Freedman or Fraser—nor was it unexpected. It’s just a little commentary on the year 1966. Now let’s get to the unexpected part. The second experiment they report in their paper is the safe-driving sign study. Every third or fourth home along blocks in different neighborhoods in Palo Alto, California was selected to participate. Each block was randomly assigned to one of the five conditions of the independent variable. Residents were asked to either (1) put a 3-inch square safe-driving sign in a home or car window, (2) put a 3-inch square “Keep California Beautiful” sign in a home or car window, (3) sign a petition to encourage California’s U.S. 
Senators to pass legislation that would promote safe driving, (4) sign a petition to the same Senators to pass legislation that would “keep California beautiful,” and (5) a control group who did not receive any of these initial requests. Two weeks later, a different set of researchers who were blind to conditions asked the residents if they would agree to have a large (massive, really), poorly-lettered, safe-driving sign placed in their front yard that would apparently obscure most of the front of their house for seven to ten days. So far so good. Now let’s turn to the results. They write, First, it should be noted that there were no large differences among the experimental conditions in the percentages of subjects agreeing to the first request. Although somewhat more subjects agreed to post the “Keep California Beautiful” sign and somewhat fewer to sign the beauty petition, none of these differences approach significance. The important figures are the number of subjects in each group who agreed to the large request… The figures for the four experimental groups include all subjects who were approached the first time, regardless of whether or not they agreed to the small request (p. 200). Wait. “[R]egardless of whether or not they agreed to the small request.” This is the experiment that is held up as the classic example of foot-in-the-door. In foot-in-the-door, it’s agreement to that initial request that is so important. Some secondary sources report that nearly everyone in the study agreed to the small request. I don’t see that reported in this paper. All I see is that agreement across the groups was pretty much the same. Same large? Same small? Same in between? They don’t tell us. If anyone knows where the secondary sources got that there was large agreement to the initial requests, please let me know. In their discussion, Freedman and Fraser write at length about how receiving a small request can lead to later agreement to a larger request. 
For example, “It is immediately apparent that the first request tended to increase the degree of compliance with the second request” (p. 200), and “regardless of whether or not the two requests are similar in either issue or task, simply having the first request tends to increase the likelihood that the subject will comply with a subsequent, larger request” (p. 201). Nowhere do they say that agreement to that first request is important. Now the results. Of those who were asked to display the small safe-driving sign (regardless of how many actually agreed), 76% said yes to displaying the large safe-driving sign. For the other three experimental conditions (display small “Keep California Beautiful” sign or signing either of the petitions), 47% said yes to the large safe-driving sign. How many in the control group said yes to the big sign? “[F]ewer than 20%” (p. 200). While the safe-driving sign is the more famous experiment, the first experiment Freedman and Fraser report in their 1966 paper does indeed illustrate foot-in-the-door, although they don’t quite report the data that I want. In this experiment, researchers called 156 randomly-selected phone numbers from the “telephone directory.” The “housewives” who answered the phone were told that the person calling was from the “California Consumers’ Group.” The women were randomly assigned to one of three conditions: (1) the caller asked if they would answer some survey questions, then if they agreed proceeded to ask the questions (performance condition), (2) the caller asked if they would answer some survey questions, then if they agreed said that he “was just lining up respondents for the survey and would contact her if needed” (p. 197) (agree-only condition), and (3) the caller described his organization and described the survey, but did not make any request (familiarization condition). (I imagine that the women in this last group must have been more confused than anything.) 
Three days later, all of the participants were called again and a control group was called for the first time. The request was to allow “five or six men from our staff” into their homes to spend two hours to count and classify their household products. In the results section, Freedman and Fraser write, Apparently even the small request was not considered trivial by some of the subjects. Only about two thirds of the subjects in the Performance and Agree-Only conditions agreed to answer the questions about household soaps. It might be noted that none of those who refused the first request later agreed to the large request, although…all subjects who were contacted for the small request are included in the data for those groups. In the performance condition (answered the survey questions), 52.8% agreed to allow a group of strange men into their homes for two hours to dig through their household products. In the agree-only condition (agreed to answer questions but weren’t actually asked any), 33.3% agreed to the large request. In the familiarization condition (heard about the organization—for no apparent reason), 27.8% agreed to the large request. In the control condition, 22% agreed to the large request. Since Freedman and Fraser included everyone in their reported data, we don’t know what percentage of those who agreed to the small request also agreed to the large request. However, since they tell us that the participants who said no to the initial request also said no to the larger request, we know that those who said yes to the initial request were the only ones who said yes to the larger request. (Whew!) In other words, the actual percentages Freedman and Fraser give us underreport the degree of compliance to a large request after agreeing to a smaller request, but the relative differences in percentages are meaningful. I encourage you to read Freedman and Fraser’s 1966 paper. 
For those interested in replication, take a look at an attempt to replicate the household products study in Poland and Ukraine (Gamian-Wilk & Dolinski, 2020). This could be a fun class discussion on how 1960s California differs from 2003 Poland and 2013 Ukraine—some of which the authors of this paper discuss. For that matter, discussing how 1960s California differs from 2022 anywhere would be worthwhile. This can help students better understand why some studies may not replicate when done exactly the same way. What woman in the world today would agree to allow five or six strange men into her home for two hours to count household products?! Let alone based solely on a phone call from a stranger?!?! Conceptual replications, in this case, may be more illuminating. References Freedman, J. L., & Fraser, S. C. (1966). Compliance without pressure: The foot-in-the-door technique. Journal of Personality and Social Psychology, 4(2), 195–202. https://doi.org/10.1037/h0023552 Gamian-Wilk, M., & Dolinski, D. (2020). The foot-in-the-door phenomenon 40 and 50 years later: A direct replication of the original Freedman and Fraser study in Poland and in Ukraine. Psychological Reports, 123(6), 2582–2596. https://doi.org/10.1177/0033294119872208