Psychology Blog - Page 3
Showing articles with label Research Methods and Statistics.
sue_frantz | Expert | 01-17-2023 10:13 AM
When I discuss experimental design in Intro Psych, I usually focus on the independent variable and dependent variable and how they are operationally defined, as well as design considerations such as the experimenters and participants being blind to conditions. (I’ve read some articles where the authors say that the experimenters and participants were blinded. It brings me up short every time.) It wouldn’t hurt me to spend a little time talking about external validity, especially how we may sacrifice external validity in a lab study as a sort of proof of concept, and then follow up with a study that has more external validity. A recent JAMA article provides a nice illustration of how this can work—and gives students some experimental design practice.

After covering experimental design, describe this freely available study on fast food menu choices (Wolfson et al., 2022). The researchers hypothesized that the study “participants would be more likely to select sustainable options when viewing menus with positive or negative framing compared with control labels.” In an online questionnaire, 5,049 “[p]articipants were shown a fast food menu and prompted to select 1 item they would like to order for dinner.”

The independent variable was menu labeling. “Participants were randomized to view menus with 1 of 3 label conditions: a quick response code label on all items (control group); green low–climate impact label on chicken, fish, or vegetarian items (positive framing); or red high–climate impact label on red meat items (negative framing).” The primary dependent variable was the number of participants who selected a menu item that was not red meat.

In the control condition, 49.5% of participants selected something other than red meat. In the positive framing condition (green labels on non-red meat items), 54.4% selected something other than red meat. In the negative framing condition (red labels on red meat items), 61.1% selected something other than red meat.
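If you want students to see where a "statistically significant" verdict comes from, one comparison can be checked with a pooled two-proportion z-test in a few lines of Python. This is a classroom sketch, not the authors' analysis, and the per-condition sample size of 1,683 is an assumption (5,049 participants divided evenly across the three conditions), not a figure reported above.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    x1, x2 = p1 * n1, p2 * n2                       # successes in each group
    p_pool = (x1 + x2) / (n1 + n2)                  # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Negative framing (61.1% chose non-red meat) vs. control (49.5%),
# assuming ~1,683 participants per condition (5,049 / 3)
z = two_proportion_z(0.611, 1683, 0.495, 1683)
print(round(z, 2))  # well above the 1.96 cutoff for p < .05
```

Students can swap in the positive-framing proportion (0.544) to see that the smaller difference also clears the cutoff with samples this large.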
All differences were statistically significant. In the limitations section of the article, the researchers acknowledge that this study assessed hypothetical food purchases rather than actual food purchases. As such, the study lacks external validity. They also acknowledge that social desirability may also have influenced the results, but they think that the anonymity of the online study may have mitigated the effects. I’m less convinced. Participants may have been more likely to select non-red meat options partly to look like better people to themselves and partly because they guessed the hypothesis and wanted to help out the researchers. In any case, this study found positive results that may be worth investigating further.

The challenge for your students is to design a study that has greater external validity. How could the same research hypothesis be tested in real-world conditions? Give students a couple minutes to think about this on their own and then ask them to discuss in small groups. What problems can students envision in conducting such a study? For example, would a local fast food restaurant be okay with putting red or green labels on their menu boards?

One last comment about social desirability. In a real fast food restaurant, if someone chose to order a green-labeled item, for the purpose of the hypothesis, does it matter if they ordered it because they wanted to have a positive impact (or less negative impact) on the planet, because they wanted to think of themselves as a good person, or because they wanted to look good to others?

References

Wolfson, J. A., Musicus, A. A., Leung, C. W., Gearhardt, A. N., & Falbe, J. (2022). Effect of climate change impact menu labels on fast food ordering choices among US adults: A randomized clinical trial. JAMA Network Open, 5(12), e2248320. https://doi.org/10.1001/jamanetworkopen.2022.48320
Labels: Research Methods and Statistics
sue_frantz | Expert | 12-31-2022 08:16 AM
“Older adults report surprisingly positive affective experience. The idea that older adults are better at emotion regulation has emerged as an intuitively appealing explanation for why they report such high levels of affective well-being despite other age-related declines” (Isaacowitz, 2022).

Our schemas and the assumptions that come with them influence how we see the world and, in turn, influence how we talk about the world. As instructors and researchers, we need to consider how our assumptions can weasel their way into what we say and what we write.

I’ve been thinking a lot lately about ageism. Certainly, how we think about aging varies by culture. In some cultures, for example, elders are revered for their knowledge and wisdom. In others, aging is viewed as a gradual decline into an inevitable physical and cognitive wasteland. Unfortunately for me, my dominant culture is the latter. This schema that has been drilled into my head, however, has amassed so many exceptions that I’m not sure I still have the schema. I have many friends who are in their 70s and 80s. They are all physically active and intellectual powerhouses. Every one of them.

When I read the opening two sentences of Isaacowitz’s aging and emotion regulation article in Perspectives on Psychological Science, quoted at the beginning of this blog post, I was at first puzzled. “Older adults report surprisingly positive affective experience.” Surprisingly? You’re surprised that older adults are happy? Why is this surprising? “The idea that older adults are better at emotion regulation has emerged as an intuitively appealing explanation for why they report such high levels of affective well-being despite other age-related declines.” Oh. You’re surprised because you believe that older adults live in a physical and cognitive wasteland, so how could they possibly be happy? This needs an explanation! It even has a name: the paradox of aging.
“There is an extensive, robust literature suggesting that older adults self-report quite positive emotional lives; sometimes they even report being more emotionally positive than their younger counterparts” (Isaacowitz, 2022). [Gasp!]

The question is not why younger people aren’t happier. The question is why older adults are. Some researchers think that older adults are happier because they are better at regulating emotions. Isaacowitz’s article provides a nice summary of the research into this explanation and concludes that the evidence is inconclusive. The article ends with this: “the robust finding of older adults’ positive affective experience remains to be well-explained. This is a mystery for future researchers still to unravel” (Isaacowitz, 2022).

This article was a nice reminder for me to consider my own schemas and assumptions when I talk with my students about any psychological topic. For example, I knew an instructor who would talk about people who were diagnosed with a psychological disorder as suffering from the disorder. I know people with a variety of diagnoses who manage, live with, and experience psychological disorders. The word suffer brings with it a set of assumptions that I don’t share. I admit that I have my own baggage here. I have a chronic physical condition that, if not well-managed, could kill me. I manage it, live with it, and experience it. I most certainly do not suffer from it. Maybe we should also be asking how I—a person with such a condition—could possibly be so gosh darn happy.

If in the Intro Psych research methods chapter you discuss how a researcher’s values affect the topics they choose to research, discussion of this article may be a good example. It’s a nice illustration of why a diversity of researcher backgrounds is so important to science. Would, for example, a researcher from a culture that reveres older adults wonder why older adults are happy?
If you’d like to explore more about cultural ageism and its impact, I highly recommend Becca Levy’s 2022 book, Breaking the Age Code. It will change how you think about—and talk about—aging.

Reference

Isaacowitz, D. M. (2022). What do we know about aging and emotion regulation? Perspectives on Psychological Science, 17(6), 1541–1555. https://doi.org/10.1177/17456916211059819
Labels: Developmental Psychology, Research Methods and Statistics
sue_frantz | Expert | 12-28-2022 10:05 AM
I think of the Intro Psych course as an owner’s manual for being human. Throughout the course, we explore the multitude of ways we are influenced to think, feel, or behave a certain way without our conscious awareness. Here’s one such example we can use to give our students some experimental design practice. It’s suitable for the methods chapter or, if you cover drugs, in that chapter after discussing caffeine.

Caffeine, as a stimulant, increases arousal. It’s plausible that consumers who are physiologically aroused engage in more impulsive shopping and, thus, spend more money than their uncaffeinated counterparts. Give students this hypothesis: If shoppers consume caffeine immediately before shopping, then they will spend more money.

Ask students to take a couple minutes thinking about how they would design this study, and then invite students to share their ideas in pairs or small groups. Ask the groups to identify their independent variable (including experimental and control conditions) and their dependent variable. If you cover operational definitions, ask for those, too. Invite groups to share their designs with the class. Emphasize that there is no one right way to conduct a study. Each design will have its flaws, so using different designs to test the same hypothesis will give us greater confidence in the hypothesis.

Share with students the first two of five experiments reported in the Journal of Marketing (Biswas et al., 2022). In study 1, researchers set up a free espresso station just inside the front door of a store. As shoppers entered, they were offered a cup of espresso. The experiment was conducted at different times of day over several days. At certain times, shoppers were offered a caffeinated espresso. At other times, they were offered a decaffeinated espresso. As the espresso drinkers left the store after having completed their shopping, researchers asked if they could see their receipts. Everyone said yes.
Researchers recorded the number of items purchased and the total purchase amount. (Ask students to identify the independent and dependent variables.) As hypothesized, the caffeinated shoppers purchased more items (2.16 vs. 1.45) and spent more money (€27.48 vs. €14.82) than the decaffeinated shoppers. Note that participants knew whether they were consuming a caffeinated or decaffeinated beverage, but did not know when they accepted it that they were participating in a study.

There are a few ethical questions about study 1 worth exploring with your students. First, this study lacked informed consent. Participants were not aware that they were participating in a study when they accepted the free espresso. As participants were leaving, it became clear to them that they were participating in a study. Given the norm of reciprocity, did participants see not handing over their receipts as a viable option? Lastly, the researchers expected that caffeine would increase consumer spending. In fact, it nearly doubled it. Was it ethical for the researchers to put unwitting shoppers in a position to spend more money than they had intended?

In study 2, students from a marketing research class “in exchange for course credit” were asked to recruit family or friends to participate. The volunteers, who were told that this was a study about their shopping experience, were randomly assigned to an espresso or water condition; the beverages were consumed in a cafeteria next to a department store. After consuming their beverages, the volunteers were escorted to the department store and asked to spend two hours in the store “shopping or looking around.” As in study 1, caffeinated shoppers spent nearly twice as much money (€69.91 vs. €39.63). Again, we have the ethical question of putting unwitting shoppers in the position to spend more money than they would have. We also have the ethical question of students recruiting friends and family to participate as a course requirement.
And then, from a design perspective, how certain can we be that the students didn’t share the hypothesis with their family and friends? Is it possible that some of the students thought that if the study’s results didn’t support the hypothesis, their grade would be affected?

As a final ethics question, what should we do with the knowledge that we are likely to spend (much) more money when shopping while caffeinated? As a shopper, it’s easy: I’m not going to stop at the coffee shop on my way to the store. For a store manager whose job it is to maximize sales, it’s also easy: give away cups of coffee as shoppers enter the store. The amount of money it costs to staff a station and serve coffee will more than pay for itself in shopper spending.

Here’s the bigger problem. Is it okay to manipulate shoppers in this way for financial gain? Advertising and other persuasive strategies do this all the time. Is free caffeine any different? Or should coffee cups carry warning labels?

To close this discussion, ask students in what other places or situations caffeine-fueled impulsive behavior could be problematic. Casinos come readily to my mind. Are caffeinated people likely to bet more? Would that study be ethical to conduct?

Reference

Biswas, D., Hartmann, P., Eisend, M., Szocs, C., Jochims, B., Apaolaza, V., Hermann, E., López, C. M., & Borges, A. (2022). Caffeine’s effects on consumer spending. Journal of Marketing, 002224292211092. https://doi.org/10.1177/00222429221109247
Labels: Drugs, Research Methods and Statistics
sue_frantz | Expert | 12-14-2022 09:19 AM
Men in the United States are four times more likely to die by suicide than women are (Curtin et al., 2022), and men are almost half as likely to receive mental health treatment as women are (Terlizzi & Norris, 2021). This is seriously problematic, as pointed out by a December 2022 New York Times article (Smith, 2022).

In the Intro Psych therapy chapter, share the above statistics with students. Ask your students to discuss in small groups why they think men are less likely to receive mental health treatment. (While what is described here is for a face-to-face class, the discussion can be adapted for asynchronous discussions.) To take away some of what could be very personal, ask students to consider why their male friends or male relatives might not be inclined to seek mental health treatment. If your male students choose to share their own thoughts, that’s fine; just don’t pressure them to do so. Invite the groups to share the reasons they generated with the class. Record the reasons in a way that students can view them.

Next, invite your students to visit the Man Therapy website (mantherapy.org). What are their favorite article titles? I’m partial to “Sometimes a man needs a pork shoulder to cry on” and “Anxiety: When worry grabs you by the [nether parts],” with an honorable mention for “Sleep: When catching z’s is harder than catching a 20lb trout.” Do your students think that the messaging about mental health on this website would resonate with the men in their lives? Why or why not? Do your students think different messaging would work better for different cultural or ethnic groups? If so, what might that look like?

If you’d like to extend this discussion, ask students if they are interested in sharing the mantherapy.org link with their male friends and relatives. For your students who are game, ask them to send out texts right now while in class. If texts come back while you are still in class, invite students to share them.
Check back in with students during the next class for reactions that students received after class.

If time allows and you are so inclined, ask students to work in small groups to design an experiment that would evaluate the effectiveness of a website such as mantherapy.org. What would their hypothesis be? What would be their measure of effectiveness? What would be their control condition? How would they identify and recruit participants?

If your class, department, psych club, or psych honor society thinks that mantherapy.org could be effective at increasing men’s access to mental healthcare, you can “become a champion” by visiting this page and completing the form at the bottom. You will receive a “shipment of printed collateral including posters, wallet cards, and stickers to help get the word out and drive traffic to the site.” There is no mention of a cost for these materials.

References

Curtin, S. C., Garnett, M. F., & Ahmad, F. B. (2022). Provisional numbers and rates of suicide by month and demographic characteristics: United States, 2021 (No. 24). National Center for Health Statistics. https://www.cdc.gov/nchs/data/vsrr/vsrr024.pdf

Smith, D. G. (2022, December 9). How to get more men to try therapy. The New York Times. https://www.nytimes.com/2022/12/09/well/mind/men-mental-health-therapy.html

Terlizzi, E. P., & Norris, T. (2021). Mental health treatment among adults: United States, 2020 (NCHS Data Brief No. 419). National Center for Health Statistics. https://www.cdc.gov/nchs/data/databriefs/db419.pdf
Labels: Psychological Disorders and Their Treatment, Research Methods and Statistics
sue_frantz | Expert | 10-27-2022 07:37 AM
The Intro Psych therapy chapter is a good place to reinforce what students have learned about research methods throughout the course. In this freely available 2022 study, researchers wondered about the effectiveness of a particular medication (naltrexone combined with bupropion) and a particular type of psychotherapy (behavioral weight loss therapy) as a treatment for binge-eating disorder (Grilo et al., 2022).

First, give students a bit of background about binge-eating disorder. If you don’t have the DSM-5 (with or without the TR) handy, this Mayo Clinic webpage may give you what you need (Mayo Clinic Staff, 2018). Next, let students know that naltrexone and bupropion work together on the hypothalamus to both reduce how much we eat and increase the amount of energy we expend (Sherman et al., 2016). It’s a drug combination approved by the FDA for weight loss. Lastly, behavioral weight loss therapy is all about gradual changes to lifestyle: gradual decreases in daily calories consumed, gradual increases in nutritional quality, and gradual increases in exercise.

Invite students to consider how they would design an experiment to find out which treatment is most effective for binge-eating disorder: naltrexone-bupropion, behavioral weight loss (BWL) therapy, or both. In this particular study (“a randomized double-blind placebo-controlled trial”), researchers used a 2 (drug vs. placebo) x 2 (BWL vs. no therapy) between-participants design. In their discussion, they note that, in retrospect, a BWL therapy group alone would have been a good thing to have. The study was carried out over a 16-week period. Participants were randomly assigned to condition. Researchers conducting the assessments were blind to conditions.

Next, ask students what their dependent variables would be. The researchers had two primary dependent variables. They measured binge-eating remission rates, with remission defined as no self-reported instances of binge eating in the last 28 days.
They also recorded the percentage of participants who lost 5% or more of their body weight. Ready for the results?

Percentage of participants who had no binge-eating instances in the last 28 days:

                 Placebo   Naltrexone-Bupropion
  No therapy      17.7%          31.3%
  BWL therapy     37.1%          57.1%

Percentage of participants who lost 5% or more of their body weight:

                 Placebo   Naltrexone-Bupropion
  No therapy      11.8%          18.8%
  BWL therapy     31.4%          37.1%

As studies evaluating treatments for other psychological disorders have found, medication and psychotherapy combined are more effective than either alone.

If time allows, you can help students gain a greater appreciation for how difficult getting participants for this kind of research can be. Through advertising, the researchers heard from 3,620 people who were interested. Of those, 972 never responded after the initial contact. That left 2,648 to be screened for whether they would be appropriate for the study. Following the screening, only 289 potential participants were left. Ask students why they think so few participants remained. Here are the top reasons: participants did not meet the criteria for binge-eating disorder (715), participants decided they were not interested after all (463), and participants were taking a medication that could not be mixed with naltrexone-bupropion (437). Other reasons included having a medical condition (could impact the study’s results), already being in a treatment program for weight loss or binge-eating disorder (would not be a sole test of these treatments), or being pregnant or breast-feeding (couldn’t take the drugs).

After signing the consent form and doing the initial assessment, another 153 were found not to have met the inclusion criteria. That left 136 to be randomly assigned to conditions. Over the 16 weeks of the study, 20 participants dropped out on their own, and four were removed for medical reasons.
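For students who want to see the recruitment funnel at a glance, the attrition counts described above can be tallied in a few lines of Python. This is a classroom sketch using numbers copied from the study description, not the authors' code.

```python
# Recruitment funnel from Grilo et al. (2022), counts as described above
interested = 3620                 # people who responded to advertising
screened = interested - 972       # 972 never responded after initial contact
eligible = 289                    # remaining after screening for appropriateness
randomized = eligible - 153       # 153 failed inclusion criteria after consent
completed = randomized - 20 - 4   # 20 dropped out; 4 removed for medical reasons

print(screened, randomized, completed)  # → 2648 136 112
```

Asking students to walk through each subtraction makes the punchline concrete: thousands of interested people yield only about a hundred usable participants.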
It took 3,620 people who expressed interest to end up with data from 112 participants.

There is no information in the article about whether participants who were not in the drug/psychotherapy group were offered—after the study was over—the opportunity to experience the combined treatment that was so effective. Ethically, it would have been the right thing to do.

References

Grilo, C. M., Lydecker, J. A., Fineberg, S. K., Moreno, J. O., Ivezaj, V., & Gueorguieva, R. (2022). Naltrexone-bupropion and behavior therapy, alone and combined, for binge-eating disorder: Randomized double-blind placebo-controlled trial. American Journal of Psychiatry, 1–10. https://doi.org/10.1176/appi.ajp.20220267

Mayo Clinic Staff. (2018, May 5). Binge-eating disorder. Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/binge-eating-disorder/symptoms-causes/syc-20353627

Sherman, M. M., Ungureanu, S., & Rey, J. A. (2016). Naltrexone/bupropion ER (Contrave): Newly approved treatment option for chronic weight management in obese adults. Pharmacy & Therapeutics, 41(3), 164–172.
Labels: Psychological Disorders and Their Treatment, Research Methods and Statistics
sue_frantz | Expert | 10-17-2022 05:00 AM
I have had occasion to send out emails with some sort of inquiry. When I don’t get any response, it ticks me off. I don’t do well with being ignored. I’ve learned that about me. Even a short “I’m not the person to help you. Good luck!” would be welcome. I mention that to acknowledge that I brought that particular baggage with me when I read an article in the Journal of Counseling Psychology about counselors ignoring email messages from people seeking a counselor (Hwang & Fujimoto, 2022). As if the bias this study revealed was not anger-inducing enough. The results are not particularly surprising, but that does not make me less angry. I’ve noticed that as I’m writing this, I’m pounding on my keyboard.

Wei-Chin Hwang and Ken A. Fujimoto were interested in finding out how counselors would respond to email inquiries from potential clients who varied on probable race, probable gender, psychological disorder, and inquiry about a sliding fee scale. The researchers used an unidentified “popular online directory to identify therapists who were providing psychotherapeutic services in Chicago, Illinois” (Hwang & Fujimoto, 2022, p. 693). From the full list of 2,323 providers, they identified 720 to contact.

In the first two paragraphs of their methods section, Hwang and Fujimoto explain their selection criteria. The criterion that eliminated the most therapists was that the therapist needed to have an advertised email address; many of the therapists listed only permitted contact through the database. Because the researchers did not want to violate the database’s terms of service, they opted not to contact therapists this way. They also excluded everyone who said that they only accepted clients within a specialty area, such as sports psychology. Finally, they had to find a solution for group practices where two or more therapists from the same practice were in the database.
Hwang and Fujimoto did not want to risk therapists in the same group practice comparing email requests with each other, so they randomly chose one therapist in a group practice to receive their email.

This experiment was a 3x3x2x2 (whew!):

- Inquirer’s race: White, African American, Latinx American (the three most common racial groups in Chicago, where the study was conducted). Researchers used U.S. Census Bureau data to identify last names that were most common for each racial group: Olson (White), Washington (African American), and Rodriguez (Latinx).
- Inquirer’s diagnosis: depression, schizophrenia, borderline personality disorder. (Previous research has shown that providers find people with schizophrenia or borderline personality disorder less desirable to work with than, say, people with depression.)
- Inquirer’s gender: male, female. (Male first names: Richard, Deshawn, José; female first names: Molly, Precious, and Juana.)
- Inquirer’s ability to pay the full fee: yes, no.

In their methods section, Hwang and Fujimoto include the scripts they used. Each script includes this question: “Can you email me back so that I can make an appointment?”

The dependent variable was responsiveness. Did the provider email the potential client back within two weeks? If not, that was coded as “no responsiveness.” (In the article’s Table 1, the “no responsiveness” column is labeled as “low responsiveness,” but the text makes it clear that this column is “no responsiveness.”) If the provider replied but stated they could not treat the inquirer, that was coded as “some responsiveness.” If the provider replied with the offer of an appointment or an invitation to discuss further, that was coded as “high responsiveness.”

There were main effects for inquirer race, diagnosis, and ability to pay the full fee. The cells refer to the percentage of providers’ email messages in each category.

Table 1.
Responsiveness by Race of Inquirer

  Name                              No responsiveness   Some responsiveness   High responsiveness
  Molly or Richard Olson                  15.4%                33.2%                 51.5%
  Precious or Deshawn Washington          27.4%                30.3%                 42.3%
  Juana or José Rodriguez                 22.3%                34.0%                 43.7%

There was one statistically significant interaction. Male providers were much more likely to respond to Olson than they were to Washington or Rodriguez. Female providers showed no bias in responding by race.

If a therapist does not want to work with a client based on their race, then it is probably best for the client if they don’t. But at least have the decency to reply to their email with some lie about how you’re not taking on more clients, and then refer them to a therapist who can help.

Table 2. Responsiveness by Diagnosis

  Diagnosis                          No responsiveness   Some responsiveness   High responsiveness
  Depression                               17.9%                20.0%                 62.1%
  Schizophrenia                            25.8%                43.8%                 30.4%
  Borderline personality disorder          21.3%                33.8%                 45.0%

Similar thoughts here. I get that working with a client diagnosed with schizophrenia or borderline personality disorder takes a very specific set of skills that not all therapists have, but, again, at least have the decency to reply to the email saying that you don’t have the skills, and then refer them to a therapist who does.

Table 3. Responsiveness by Inquirer’s Ability to Pay the Full Fee

  Ability to pay full fee   No responsiveness   Some responsiveness   High responsiveness
  No                              22.4%                39.7%                 38.0%
  Yes                             21.0%                25.4%                 53.6%

While Hwang and Fujimoto interpret these results to mean a bias against members of the working class, I have a different interpretation. The no-response rate was about the same in both conditions, with roughly 20% of providers not replying at all. If there were an anti-working-class bias, I would expect the no-responsiveness percentage for those asking about a sliding fee scale to be much greater. In both levels of this independent variable, about 80% gave some reply.
It could be that the greater percentage of “some responsiveness” in reply to those who inquired about a sliding fee scale was due to the providers being maxed out on the number of clients they had who were paying reduced fees.

One place to discuss this study and its findings with Intro Psych students is in the therapy chapter. It would work well as part of your coverage of therapy ethics codes. Within the ethics code for the American Counseling Association, Section C on professional responsibility is especially relevant. It reads in part:

Counselors facilitate access to counseling services, and they practice in a nondiscriminatory manner…Counselors are encouraged to contribute to society by devoting a portion of their professional activity to services for which there is little or no financial return (pro bono publico) (American Counseling Association, 2014, p. 8).

Within the ethics code of the American Psychological Association, Principle D: Justice is particularly relevant:

Psychologists recognize that fairness and justice entitle all persons to access to and benefit from the contributions of psychology and to equal quality in the processes, procedures, and services being conducted by psychologists. Psychologists exercise reasonable judgment and take precautions to ensure that their potential biases, the boundaries of their competence, and the limitations of their expertise do not lead to or condone unjust practices (American Psychological Association, 2017).

Principle E: Respect for People’s Rights and Dignity is also relevant. It reads in part:

Psychologists are aware of and respect cultural, individual, and role differences, including those based on age, gender, gender identity, race, ethnicity, culture, national origin, religion, sexual orientation, disability, language, and socioeconomic status, and consider these factors when working with members of such groups.
Psychologists try to eliminate the effect on their work of biases based on those factors, and they do not knowingly participate in or condone activities of others based upon such prejudices (American Psychological Association, 2017).

This study was conducted in February 2018—before the pandemic. Public mental health has not gotten better. Asking for help is not easy. When people muster the courage to ask for help, the absolute least we can do is reply. Even if we are not the best person to provide that help, we can direct them to additional resources, such as one of these crisis help lines. For a trained and licensed therapist who is bound by their profession’s code of ethics to just not reply at all to a request for help, I just don’t have the words.

Again, I should acknowledge that I have my own baggage about having my email messages ignored. For anyone who wants to blame their lack of responding on the volume of email you have to sort through (I won’t ask if you are selectively not responding based on perceived inquirer personal characteristics), I have an hour-long workshop that will help you get your email under control and keep it that way.

References

American Counseling Association. (2014). ACA 2014 code of ethics. https://www.counseling.org/docs/default-source/default-document-library/2014-code-of-ethics-finaladdress.pdf?sfvrsn=96b532c_8

American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. https://www.apa.org/ethics/code

Hwang, W.-C., & Fujimoto, K. A. (2022). Email me back: Examining provider biases through email return and responsiveness. Journal of Counseling Psychology, 69(5), 691–700. https://doi.org/10.1037/cou0000624
Labels
- Psychological Disorders and Their Treatment
- Research Methods and Statistics
- Social Psychology
sue_frantz
Expert
10-10-2022
05:30 AM
I have been doing a bit of digging into the research databases, and I came across a Journal of Eating Disorders article with a 112-word “Plain English summary” (Alberga et al., 2018). I love this so much I can hardly stand it. Steven Pinker (2014) wrote an article for The Chronicle of Higher Education titled “Why academics’ writing stinks.” Pinker does not pull any punches in his assessment. Let’s face it. Some academic writing is virtually unreadable. Other academic writing is actually unreadable. Part of the problem is one of audience. If a researcher is writing for other researchers in their very specific corner of the research world, of course they are going to use jargon and make assumptions about what their readers know. That, though, is problematic for the rest of us. I have spent my career translating psychological science as an instructor and, more recently, as an author. This is what teaching is all about: translation. If we are teaching in our particular subdiscipline, translation is usually not difficult. If we are teaching Intro Psych, though, we have to translate research writing that is miles away from our subdiscipline. This is what makes Intro Psych the most difficult course in the psychology curriculum to teach. I know instructors who do not cover, for example, biopsychology or sensation and perception in their Intro Psych courses because they do not understand the topics themselves. Additionally, some of our students have learned through reading academic writing to write in a similarly incomprehensible style. Sometimes I feel like students initially wrote their papers in plain English, and then they threw a thesaurus at it to make their writing sound more academic. We have certainly gone wrong somewhere if ‘academic’ has come to mean ‘incomprehensible.’ I appreciate the steps some journals have taken to encourage or require article authors to tell readers why their research is important. 
In the Society for the Teaching of Psychology’s journal Teaching of Psychology, for example, the abstract ends with a “Teaching Implications” section. Many other journals now require a “Public Significance Statement” or a “Translational Abstract” (what the Journal of Eating Disorders calls a “plain English summary”). I have read my share of public significance statements. I confess that sometimes it is difficult—impossible even—to see the significance of the research to the general public in the statements. I suspect it is because the authors themselves do not see any public significance. That is probably truer for (some areas of) basic research than it is for any area of applied research. Translational abstracts, in contrast, are traditional abstracts rewritten for a lay audience. APA’s page on “Guidance for translational abstracts and public significance statements” (APA, 2018) is worth a read. An assignment where students write both translational abstracts and public significance statements for existing journal articles gives students some excellent writing practice. In both cases, students have to understand the study they are writing about, translate it for a general audience, and explain why the study matters. And maybe—just maybe—as this generation of college students become researchers and then journal editors, in a couple generations plain English academic writing will be the norm. This is just one of several windmills I am tilting at these days. The following is a possible writing assignment. While it can be assigned after covering research methods, it may work better later in the course. For example, after covering development, provide students with a list of articles related to development that they can choose from. 
While curating a list of articles means more work for you up front, students will struggle less to find article abstracts that they can understand, and your scoring of their assignments will be easier since you will have a working knowledge of all of the articles students could choose from. Read the American Psychological Association’s (APA’s) “Guidance for translational abstracts and public significance statements.” Choose a journal article from this list of Beth Morling’s student-friendly psychology research articles (or give students a list of articles). In your paper:
- Copy/paste the article’s citation.
- Copy/paste the article’s abstract.
- Write your own translational abstract for the article. (The scoring rubric for this section will be based on APA’s “Guidance for translational abstracts and public significance statements.”)
- Write your own public significance statement. (The scoring rubric for this section will be based on APA’s “Guidance for translational abstracts and public significance statements.”)
References Alberga, A. S., Withnell, S. J., & von Ranson, K. M. (2018). Fitspiration and thinspiration: A comparison across three social networking sites. Journal of Eating Disorders, 6(1), 39. https://doi.org/10.1186/s40337-018-0227-x APA. (2018, June). Guidance for translational abstracts and public significance statements. https://www.apa.org/pubs/journals/resources/translational-messages Pinker, S. (2014). Why academics’ writing stinks. The Chronicle of Higher Education, 61(5).
Labels
- Research Methods and Statistics
- Teaching and Learning Best Practices
sue_frantz
Expert
10-03-2022
05:30 AM
Here is another opportunity to give students practice in experimental design. While this discussion (synchronous or asynchronous) or assignment would work well after covering research methods, using it in the Intro Psych social psych chapter as a research methods refresher would work, too. By way of introduction, explain to your students how double-blind peer review works and why that’s our preferred approach in psychology. Next, ask students to read the freely-available, less-than-one-page article in the September 16, 2022 issue of Science titled “Reviewers Award Higher Marks When a Paper’s Author Is Famous.” While I have a lot of love for research designs that involve an entire research team brainstorming for months, I have a special place in my heart for research designs that must have occurred to someone in the middle of the night. This study has to be the latter. If you know it is not, please do not burst my bubble. A Nobel Prize winner (Vernon Smith) and his much lesser-known former student (Sabiou Inoua) wrote a paper and submitted it for publication in a finance journal. The journal editor and colleagues thought, “Hey, you know what would be interesting? Let’s find out if this paper would fly if the Nobel Prize winner’s name wasn’t on it. Doesn’t that sound like fun?” The study comprises two experiments (available in pre-print). In the first experiment, the experimenters contacted 3,300 researchers asking if they would be willing to review an economics paper based on a short description of the paper. Those contacted were randomly assigned to one of three conditions. Of the one-third who were told that the author was Smith, the Nobel laureate, 38.5% agreed to review the paper. Of the one-third who were told that the author was Inoua, the former student, 28.5% agreed to review. Lastly, of the one-third who were given no author names, 30.7% agreed to review.
Even though there was no statistical difference between these latter two conditions as reported in the pre-print, the emotional difference must be there. If I’m Inoua, I can accept that many more people are interested in reviewing a Nobel laureate’s paper than are interested in reviewing mine. What’s harder to accept is that my paper appears even less interesting when my name is on it than when my name is not on it. I know. I know. Statistically, there is no difference. But, dang, if I were in Inoua’s shoes, it would be hard to get my heart to accept that. Questions for students: In the first experiment, what was the independent variable? Identify the independent variable’s experimental conditions and control condition. What was the dependent variable? Now let’s take a look at the second experiment. For their participants, the researchers limited themselves to those who had volunteered to review the paper when they had not been given the names of either of the authors. They randomly divided this group of reviewers into the same conditions as in the first experiment: author identified as Vernon Smith, author identified as Sabiou Inoua, and no author name given. How many reviewers recommended that the editor reject the paper? With Smith’s name on the paper, 22.6% said reject it. With Inoua’s name on the paper, 65.4% said reject. With no name on the paper, 48.2% said reject. All differences are statistically significant. Now standing in Inoua’s shoes, the statistical difference matches my emotional reaction. Thin comfort. Questions for students: In the second experiment, what was the independent variable? Identify the independent variable’s experimental conditions and control condition. What was the dependent variable? The researchers argue that their data reveal a status bias. If you put a high status name on a paper, more colleagues will be interested in reviewing it, and of those who do review it, more will advocate for its publication. 
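For instructors who want to show students what “all differences are statistically significant” looks like in practice, the rejection-rate gaps can be sanity-checked with a Pearson chi-square sketch. The group sizes below (100 reviewers per condition) are my assumption for illustration, not the pre-print’s actual n’s, and this hand-rolled statistic is not the analysis the authors ran.

```python
def chi2_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

# reject / not-reject counts under the three author-label conditions,
# built from the reported percentages and a hypothetical n = 100 each
table = [[23, 77],   # "Vernon Smith" on the paper (22.6% reject)
         [65, 35],   # "Sabiou Inoua" on the paper (65.4% reject)
         [48, 52]]   # no author name given (48.2% reject)

stat = chi2_stat(table)
# df = (3 - 1) * (2 - 1) = 2; the .05 critical value is about 5.99
print(round(stat, 1))  # ≈ 36, far beyond the 5.99 critical value
```

Even with these made-up group sizes, the statistic lands an order of magnitude past the critical value, which matches the pre-print’s report that all pairwise differences in the second experiment were significant.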
Double-blind reviews really are fairer, although the researchers note that in some cases—especially with pre-prints or conference presentations—reviewers may know who the paper belongs to even if the editor conceals that information. With this pair of experiments, the paper sent out for review was not available as a pre-print nor had it been presented at a conference. The stickier question is how to interpret the high reject recommendations for Inoua as compared to when no author was given. While the experimenters intended to contrast Smith’s status with Inoua’s, there is a confounding variable. Reviewers who are not familiar with Sabiou Inoua will not know Inoua’s race or ethnicity for certain, but they might guess that Inoua is a person of color. A quick Google search finds Inoua’s photo and CV on Chapman University’s Economic Science Institute faculty and staff page. He is indeed a person of color from Niger. Is the higher rejection rate for when Inoua’s name was on the paper due to his lower status? Or due to his race or ethnicity? Or was it due to an interaction between the two? Questions for students: Design an experiment that would test whether Inoua’s higher rejection rate was more likely due to status or more likely due to race/ethnicity. Identify your independent variable(s), including the experimental and control conditions. Identify your dependent variable. If time allows, give students an opportunity to share other contexts where double-blind reviews are or would be better.
Labels
- Research Methods and Statistics
- Social Psychology
sue_frantz
Expert
09-05-2022
05:00 AM
In an example of archival research, researchers analyzed data from the U.S. National Health and Nutrition Examination Survey for the years 2007 to 2012 (Hecht et al., 2022). They found that after controlling for “age, gender, race/ethnicity, BMI, poverty level, smoking status and physical activity,” (p. 3) survey participants “with higher intakes of UPF [ultra-processed foods] report significantly more mild depression, as well as more mentally unhealthy and anxious days per month, and less zero mentally unhealthy or anxious days per month” (p. 7). So far, so good. The researchers go on to say, “it can be hypothesised that a diet high in UPF provides an unfavourable combination of biologically active food additives with low essential nutrient content which together have an adverse effect on mental health symptoms” (p. 7). I don’t disagree with that. It is one hypothesis. By controlling for their identified covariates, they address some possible third variables, such as poverty. However, at no place in their article do they acknowledge that the direction can be reversed. For example, it can also be hypothesized that people who are experiencing the adverse effects of mental health symptoms have a more difficult time consuming foods high in nutritional quality. Anyone who battles the symptoms of mental illness or who is close to someone who does knows that sometimes the best you can do for dinner is a hotdog or a frozen pizza—or if you can bring yourself to pick up your phone—pizza delivery. 
They do, however, include reference to an experiment: “[I]n one randomized trial, which provides the most reliable evidence for small to moderate effects, those assigned to a 3-month healthy dietary intervention reported significant decreases in moderate-to-severe depression.” The evidence from that experiment looks pretty good (Jacka et al., 2017), although their groups were not equivalent on diet at baseline: the group that got the dietary counseling scored much lower on their dietary measure than did the group that got social support. Also, those who received social support during the study did, in the end, have better mental health scores and better diet scores than they did at baseline, although all we have are the means. I don’t know if the differences are statistically significant. All of that is to say that the possibility remains that reducing the symptoms of mental illness may also increase nutritional quality. Both the Jacka et al. experiment and the Hecht et al. correlational study are freely available. You may also want to read the Science Daily summary of the Hecht et al. study where the author (or editor?) writes, “Do you love those sugary-sweet beverages, reconstituted meat products and packaged snacks? You may want to reconsider based on a new study that explored whether individuals who consume higher amounts of ultra-processed food have more adverse mental health symptoms.” If you’d like to use this in your Intro Psych class, after covering correlations and experiments, ask your students to read the Science Daily summary. Ask your students two questions. 1) Is this a correlational study or an experiment? 2) From this study, can we conclude that ultra-processed foods negatively affect mental health? These questions lend themselves well to use with in-class student response systems (e.g., Clickers, Plickers). Lastly, you may want to share with your students more information about both the Hecht et al. study and the Jacka et al. experiment.
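The reverse-causation and third-variable possibilities discussed above can be made concrete for students with a toy simulation. This uses entirely made-up numbers, not the NHANES data: a single latent variable (say, underlying distress) drives both ultra-processed food intake and unhealthy days, and a sizable correlation appears even though neither variable causes the other.

```python
import random

random.seed(1)

# Toy third-variable demo (illustrative only, not the published data):
# one latent factor pushes BOTH simulated diet and simulated symptoms.
n = 5000
upf, unhealthy = [], []
for _ in range(n):
    latent = random.gauss(0, 1)                    # e.g., underlying distress
    upf.append(latent + random.gauss(0, 1))        # diet score, no causal link...
    unhealthy.append(latent + random.gauss(0, 1))  # ...to the symptom score

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = corr(upf, unhealthy)
print(round(r, 2))  # a solidly positive correlation, despite zero direct effect
```

Students can then be asked: if this simulated correlation needs no causal arrow between diet and mood, what would the Hecht et al. correlation need before we could draw one?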
If time allows, give your students an opportunity to design an experiment that would test this hypothesis: Improved mental health symptoms causes better nutritional consumption. References Hecht, E. M., Rabil, A., Martinez Steele, E., Abrams, G. A., Ware, D., Landy, D. C., & Hennekens, C. H. (2022). Cross-sectional examination of ultra-processed food consumption and adverse mental health symptoms. Public Health Nutrition, 1–10. https://doi.org/10.1017/S1368980022001586 Jacka, F. N., O’Neil, A., Opie, R., Itsiopoulos, C., Cotton, S., Mohebbi, M., Castle, D., Dash, S., Mihalopoulos, C., Chatterton, M. L., Brazionis, L., Dean, O. M., Hodge, A. M., & Berk, M. (2017). A randomised controlled trial of dietary improvement for adults with major depression (the ‘SMILES’ trial). BMC Medicine, 15(1), 23. https://doi.org/10.1186/s12916-017-0791-y
Labels
- Psychological Disorders and Their Treatment
- Research Methods and Statistics
sue_frantz
Expert
08-16-2022
01:09 PM
If you are looking for a new study to freshen up your coverage of experimental design in your Intro Psych course, consider this activity. After discussing experiments and their component parts, give students this hypothesis: Referring to “schizophrenics” as compared to “people with schizophrenia” will cause people to have less empathy for those who have a diagnosis of schizophrenia. In other words, does the language we use matter? Assure students that they will not actually be conducting this experiment. Instead, you are asking them to go through the design process that all researchers go through. Ask students to consider these questions, first individually to give students an opportunity to gather their thoughts and then in a small group discussion: What population are you interested in studying and why? Are you most interested in knowing what impact this choice of terminology has on the general population? High school students? Police officers? Healthcare providers? Next, where might you find 100 or so volunteers from your chosen population to participate? Design the experiment. What will be the independent variable? What will the participants in each level of the independent variable be asked to do? What will be the dependent variable? Be sure to provide an operational definition of the dependent variable. Invite groups to share their populations of interest with a brief explanation of why they chose that population and where they might find volunteers. Write the populations where students can see the list. Point out that doing this research with any and all of these populations would have value. The independent variable and dependent variable should be the same for all groups since they are stated in the hypothesis. Operational definitions of the dependent variable may vary, however. Give groups an opportunity to share their overall experimental design. 
Again, point out that if researchers find support for the hypothesis regardless of the specifics of how the experiment is conducted and regardless of the dependent variable’s operational definition, that is all the more support for the robustness of the findings. Even if some research designs or operational definitions or particular populations do not support the hypothesis, that is also very valuable information. Researchers then get to ask why these experiments found different results. For example, if research with police officers returns different findings than research with healthcare workers, psychological scientists get to explore why. Is there a difference in their training that might affect the results? Lastly, share with students how Darcy Haag Granello and Sean R. Gorby researched this hypothesis (Granello & Gorby, 2021). They were particularly interested in how the terms “schizophrenic” and “person with schizophrenia” would affect feelings of empathy (among other dependent variables) for both practicing mental health counselors and graduate students who were training to be mental health counselors. For the practitioners, they found volunteers by approaching attendees at a state counseling conference (n=82) and at an international counseling conference (n=79). In both cases, they limited their requests to a conference area designated for networking and conversing. For the graduate students, faculty at three different large universities asked their students to participate (n=109). Since they were particularly interested in mental health counseling, anyone who said that they were in school counseling or who did not answer the question about counseling specialization had their data removed from the analysis (n=19). In the end, they had a total of 251 participants. Granello and Gorby gave the volunteer participants the Community Attitudes Toward the Mentally Ill scale.
This measure has four subscales: authoritarianism, benevolence, social restrictiveness, and community mental health ideology. While the original version of the scale asked about mental illness more generally, the researchers amended it so that “mental illness” was replaced with “schizophrenics” or “people with schizophrenia.” The researchers stacked the questionnaires so that the terminology used alternated. For example, if the first person they approached received the questionnaire asking about “schizophrenics,” the next person would have received the questionnaire asking about “people with schizophrenia.” Here are sample items for the “schizophrenics” condition, one from each subscale:
- Schizophrenics need the same kind of control and discipline as a young child (authoritarian subscale)
- Schizophrenics have for too long been the subject of ridicule (benevolence subscale)
- Schizophrenics should be isolated from the rest of the community (social restrictiveness subscale)
- Having schizophrenics living within residential neighborhoods might be good therapy, but the risks to residents are too great (community mental health ideology)
Here are those same sample items for the “people with schizophrenia” condition:
- People with schizophrenia need the same kind of control and discipline as a young child (authoritarian subscale)
- People with schizophrenia have for too long been the subject of ridicule (benevolence subscale)
- People with schizophrenia should be isolated from the rest of the community (social restrictiveness subscale)
- Having people with schizophrenia living within residential neighborhoods might be good therapy, but the risks to residents are too great (community mental health ideology)
What did the researchers find? When the word “schizophrenics” was used:
- Both practitioners and students scored higher on the authoritarian subscale.
- The practitioners (but not the students) scored lower on the benevolence subscale.
- All participants scored higher on the social restrictiveness subscale.
- There were no differences on the community mental health ideology subscale for either practitioners or students.
Give students an opportunity to reflect on the implications of these results. Invite students to share their reactions to the experiment in small groups. Allow groups who would like to share some of their reactions with the class an opportunity to do so. Lastly, as time allows, you may want to share the two limitations to their experiment identified by the researchers. First, the practitioners who volunteered were predominantly white (74.1% identified as such) and had the financial means to attend a state or international conference. Would practitioners of a different demographic show similar results? The graduate students also had the financial means to attend a large in-person university. Graduate students enrolled in online counseling programs, for example, may have different results. A second limitation the researchers identified is that when they divided their volunteers into practitioners and students, the number of participants they had was below the recommended number to give them the statistical power to detect real differences. With more participants, they may have found even more statistical differences. Even with these limitations, however, the point holds. The language we use affects the perceptions we have. Reference Granello, D. H., & Gorby, S. R. (2021). It’s time for counselors to modify our language: It matters when we call our clients schizophrenics versus people with schizophrenia. Journal of Counseling & Development, 99(4), 452–461. https://doi.org/10.1002/jcad.12397
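The “stacked” questionnaire order Granello and Gorby used is easy to sketch in code, and it is worth pointing out to students that this is systematic alternation rather than true random assignment. The volunteer IDs below are hypothetical, purely for illustration.

```python
from itertools import cycle

# Alternating ("stacked") condition assignment: each successive person
# approached receives the other version of the questionnaire.
conditions = cycle(["schizophrenics", "people with schizophrenia"])
participants = [f"volunteer_{i}" for i in range(1, 7)]  # hypothetical IDs

assignment = {p: next(conditions) for p in participants}
print(assignment["volunteer_1"])  # schizophrenics
print(assignment["volunteer_2"])  # people with schizophrenia
```

A nice discussion prompt: when would alternation be good enough, and when could it introduce bias (for example, if people arrive at a conference table in nonrandom pairs)?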
Labels
- Research Methods and Statistics
- Thinking and Language
sue_frantz
Expert
06-27-2022
05:00 AM
Thurgood Marshall in his argument before the U.S. Supreme Court in Brown v. Board of Education cited the research of Drs. Mamie and Kenneth Clark. Those were the now-famous doll studies demonstrating that segregation affects how Black children feel about themselves. That 1954 ruling started a cascade of changes. While racism is still prevalent almost 70 years later, some of the state-sponsored systemic barriers have come down. Some of them. Step into the shoes of a Black man charged with a crime. Your case goes to a jury trial. The jury is composed of all white people. And the jury room, maintained by a chapter of the United Daughters of the Confederacy, prominently features a Confederate flag. Would you feel that your jury was impartial? Tim Gilbert and his attorneys did not. For a summary of this case, read the freely available APA Div 9: Society for the Psychological Study of Social Issues column in the June 2022 issue of the Monitor on Psychology, “Legacies of racism in our halls of justice” (Anderson & Najdowski, 2022). Gilbert’s trial was held in 2020 at the Giles County courthouse in Pulaski, TN. “[T]he jury retired to the jury room during every recess, for every meal, and for its deliberations” (p. 29 of appeals court ruling.) While there was other Confederacy memorabilia in the jury room, including a portrait of Confederate president Jefferson Davis, the defense team took primary issue with the Confederate flag. (See a photo.) “In its amicus brief, the Tennessee Association of Criminal Defense Lawyers (‘TACDL’), noting that ‘[m]ultiple courts have recognized the racially hostile and disruptive nature of the Confederate flag,’ argues that ‘a jury’s exposure to Confederate Icons denies the defendant a fair trial free of extraneous prejudicial information and improper outside influence’” (p. 19 of appeals court ruling). In the TACDL amicus brief, they cited a 2011 Political Psychology article (Ehrlinger et al., 2011).
The article features two experiments conducted in 2008. In the first, volunteers who were subliminally shown images of a Confederate flag were less likely to express interest in voting for Obama. In the second experiment—the one that I found more compelling—volunteers who were exposed to a folder with a Confederate flag sticker ostensibly left by someone else who had been in the room evaluated a description of a Black man more negatively. (Read this section of the amicus brief.) Quoted in the amicus brief was the researchers’ conclusion: “Our studies show that, whether or not the Confederate flag includes other nonracist meanings, exposure to this flag evokes responses that are prejudicial. Thus, displays of the Confederate flag may do more than inspire heated debate, they may actually provoke discrimination.” Excluded from that quote was the end of the researchers’ sentence: “even among those who are low in prejudice.” In August 2021, the appeals court ruled that Gilbert was deserving of a new trial. In Intro Psych, we can discuss this case in the first few days of class, when we discuss the importance of psychological research. It would also work to discuss the Ehrlinger et al. second study as an example of experimental design—and then add how that experiment was used to support a new trial for Gilbert. References Anderson, M., & Najdowski, C. J. (2022, June). Legacies of racism in our halls of justice. Monitor on Psychology, 53(4), 39. Ehrlinger, J., Plant, E. A., Eibach, R. P., Columb, C. J., Goplen, J. L., Kunstman, J. W., & Butz, D. A. (2011). How exposure to the Confederate flag affects willingness to vote for Barack Obama. Political Psychology, 32(1), 131–146. https://doi.org/10.1111/j.1467-9221.2010.00797.x
Labels
- Research Methods and Statistics
- Social Psychology
sue_frantz
Expert
05-16-2022
10:20 AM
We know that when students have a growth mindset they tend to perform better in school (Yeager & Dweck, 2020). Does what instructors communicate about mindset matter? Here’s an activity that will give students some practice in experimental design while also introducing students to the concepts of fixed and growth mindset and perhaps even inoculating them against instructors who convey a fixed mindset. For background for yourself, read Katherine Muenks and colleagues’ Journal of Experimental Psychology: General article (Muenks et al., 2020). The activities below will replicate their study designs. After explaining to students the difference between a growth and fixed mindset, ask students if they have ever had an instructor who said something in class or wrote something in the syllabus that conveyed which mindset the instructor held. For example, an instructor with a growth mindset might say, “This course is designed to help you improve your writing skills.” An instructor with a fixed mindset might say, “Either you have the skills to succeed in this course or you don’t.” As students share examples that they have heard, write them down where students can see them. Ask students if they think that these instructor statements could affect students. If so, how? Perhaps these statements could affect how much they feel like they belong in the course, or how interested students are in the course, or even how well students do in the course. Write down what students generate. Point out to students that they just generated two hypotheses. 1. If students hear an instructor with a growth mindset, then they are more likely to feel like they belong (and/or whatever other dependent variables students suggested). 2. If students hear an instructor with a fixed mindset, then they are less likely to feel like they belong (and/or whatever other dependent variables students suggested). Point out to students that the “if” part of the hypotheses gives us the independent variable (instructor mindset).
Suggest that the experiment they will design has three levels to the independent variable: growth mindset, fixed mindset, and a control condition of no mindset. The “then” part of the hypotheses gives us the dependent variables, such as feelings of belonging and whatever other variables students think could be affected. Ask students to spend a couple minutes thinking about how they could design an experiment that would test both of these hypotheses. Then invite students to group up with a couple students near them to discuss. Lastly, give students an opportunity to share their designs. Remind students that conducting experiments is a creative endeavor and that there is no one right way to test hypotheses. In fact, the more ways researchers test hypotheses, the more confidence we have in the findings. Share with students how Muenks and her colleagues did the first of their studies. They created three videos of what was ostensibly a calculus professor talking about their syllabus on the first day of class. The same actor delivered the same information; it was all scripted. The only difference was that for the growth mindset condition, the script included growth mindset comments sprinkled throughout, such as “These assignments are designed to help you improve your skills throughout the semester.” For the fixed mindset condition, comments included things like, “In this course, you either know the concepts and have the skills, or you don’t.” The control condition excluded mindset comments. Volunteers were randomly assigned to watch one of the three videos. 
Muenks and colleagues assessed four dependent variables: vulnerability, which was a combined measure of belongingness (five questions, including “How much would you feel that you ‘fit in’ during this class?”) and evaluative concerns (five questions, including “How much would you worry that you might say the wrong thing in class?”); engagement (three items, including “I think I would be willing to put in extra effort if the professor asked me to”); interest in the course; and anticipated course performance. (See the second study they reported in their article for additional dependent variables, including feelings of being an imposter and intentions of dropping the course.) Volunteers reported that they would feel the most vulnerable with the fixed mindset instructor, less vulnerable with the control instructor, and the least vulnerable with the growth mindset instructor. Volunteers reported that they would feel the least engaged with either the fixed mindset or control instructor and the most engaged with the growth mindset instructor. Volunteers reported that they would be least interested in a course taught by the fixed mindset instructor, more interested in a course taught by the control instructor, and the most interested in a course taught by the growth mindset instructor. Lastly, volunteers expected that they would perform the worst in a course taught by the fixed mindset instructor and best in a course taught by the growth mindset or control instructor. After sharing these results, explain that volunteers in this study reported what they think they would feel or do. For ethical reasons, we cannot randomly assign students to take actual courses taught by instructors who express these different views. However, if students were taking courses, researchers could do correlational research on student experiences.
In studies three and four, Muenks and colleagues conducted correlational studies in which students were asked, immediately after attending class, for their impressions of their instructor’s mindset along with a number of other measures, including feelings of belonging, evaluative concerns, imposter feelings, and affect. After the course was over, students reported how often they attended class, how often they thought about dropping the course, and how interested they were in the course discipline. Student grades in the course were gathered from university records. While there is a lot in the results to unpack, in sum, instructor mindset had an impact. For example, student grades were worst when students perceived their instructor as having a fixed mindset, but this result seems to have been driven by student feelings of being an imposter. End this activity with this question: Is it possible that being consciously aware of an instructor’s mindset changes its effects on students? (Leave it as a rhetorical question or challenge students to design the study as a take-home assignment.) References Muenks, K., Canning, E. A., LaCosse, J., Green, D. J., Zirkel, S., Garcia, J. A., & Murphy, M. C. (2020). Does my professor think my ability can change? Students’ perceptions of their STEM professors’ mindset beliefs predict their psychological vulnerability, engagement, and performance in class. Journal of Experimental Psychology: General, 149(11), 2119–2144. https://doi.org/10.1037/xge0000763 Yeager, D. S., & Dweck, C. S. (2020). What can be learned from growth mindset controversies? American Psychologist, 75(9), 1269–1284. https://doi.org/10.1037/amp0000794
Labels
-
Research Methods and Statistics
-
Teaching and Learning Best Practices
sue_frantz
Expert
04-11-2022
06:00 AM
Heads-up displays (HUDs) have been common in airplanes for years. (See examples here, or do a Google image search for airplane HUD.) With a HUD, information of use to pilots is projected onto the window, so the pilot can see it without glancing down at dashboard gauges and taking their eyes off the view out the window. Automobile manufacturers are now bringing this technology to cars. With a car HUD, the driver will be able to see information projected on the windshield, such as speed, speed limit, distance to the car ahead, and highlighted pedestrians. As I read about this technology, I can’t help but wonder if the attentional demands outstrip the value, making driving with a HUD more dangerous. After all, pilots are highly trained. In one article about automotive HUDs, I was horrified to read, “And, of course, you should be able to display information from your phone onto the windshield” (Wallaker, 2022). We know that talking on a phone (hands-free or not) takes attention away from driving. A driver who is reading text messages or making a different music selection on their windshield would be seconds away from a crash. On the other hand, if the HUD marks the car in front as green, then we know we are following at a safe distance. If the car is red, we need to back off until it goes green. That’s real-time, useful information that is directly related to safe driving. We know from behavior-change research that immediate feedback is more useful than delayed feedback, or, given the lack of such technology in the cars most of us currently drive, no feedback at all. After covering attention, this may be a good opportunity to give your students a little practice designing experiments. Describe automotive HUD technology, including some of the information that HUDs can display.
Ask students to design an experiment that would test these hypotheses: Hypothesis 1: If drivers are given driving-relevant information, such as speed and distance to the vehicle in front, via a heads-up display (HUD), then they will have better driving performance. Hypothesis 2: If drivers are given driving-irrelevant information, such as the ability to read text messages or change music selections, via a heads-up display (HUD), then they will have impaired driving performance. “In your design, identify each level of the independent variable, and identify the dependent variable. You may have more than one dependent variable. Include operational definitions of each.” To help students get started, explain that researchers use driving simulators for research such as this, as it would be (very!) unethical to put research volunteers behind the wheel of a real car on a real road where they could kill real people, including themselves. An additional advantage of driving simulators is that researchers have complete control over the simulated environment. They can decide what information to display, when a text message appears, and when a virtual child runs into the street. After students have had a few minutes to consider their own experimental designs, invite them to work in groups of three or four to discuss their designs with the goal of creating one design for the group. After groups appear to have settled on a design, invite one group to share their independent variable and its levels. Ask if other groups have different independent variables or different levels. As groups share, identify the pros and cons of each independent variable and level. Take the best options offered. Next, ask a group to share their dependent variable(s). Invite other groups to share theirs. Again, identify the pros and cons of each, then take the best options offered. If you’d like to expand this into an assignment, ask students to dive into your library’s databases.
Have any research teams done experiments like the one the class just created? If so, what did they find? Reference Wallaker, M. (2022, February 6). How does a car HUD work? MUO. https://www.makeuseof.com/how-does-car-hud-work/
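If you want to show students what simulator data might look like once collected, a toy analysis can help. Everything here is invented for illustration: the condition names, the brake reaction times, and the helper function are all hypothetical, and a real study would need many more trials plus an inferential test, not just condition means.

```python
from statistics import mean

# Hypothetical simulator records: (condition, brake reaction time in seconds)
trials = [
    ("no_hud", 1.42), ("no_hud", 1.35), ("no_hud", 1.50),
    ("relevant_hud", 1.18), ("relevant_hud", 1.25), ("relevant_hud", 1.12),
    ("irrelevant_hud", 1.61), ("irrelevant_hud", 1.72), ("irrelevant_hud", 1.55),
]

def mean_by_condition(records):
    """Group reaction times by condition and return each condition's mean,
    rounded to three decimal places."""
    by_condition = {}
    for condition, rt in records:
        by_condition.setdefault(condition, []).append(rt)
    return {condition: round(mean(rts), 3) for condition, rts in by_condition.items()}

print(mean_by_condition(trials))
# → {'no_hud': 1.423, 'relevant_hud': 1.183, 'irrelevant_hud': 1.627}
```

In this made-up data, the pattern matches the two hypotheses: driving-relevant HUD information speeds braking relative to no HUD, and driving-irrelevant information slows it.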
Labels
-
Consciousness
-
Research Methods and Statistics
jenel_cavazos
Expert
01-27-2022
08:19 AM
How can psychologists stay safe standing up for the truth in a climate where misinformation is so common? The anatomy of a misinformation attack: How a respected psychologist ended up getting attacked online for sharing the facts https://www.apa.org/news/apa/2022/news-anatomy-misinformation
Labels
-
Current Events
-
Research Methods and Statistics
jenel_cavazos
Expert
10-15-2021
10:09 AM
Short and sweet guide written in easy-to-understand language! How To Tell Science From Pseudoscience: https://www.popsci.com/diy/spot-fake-science/?taid=6169b3ba0fbc4500016aa03e&utm_campaign=trueanthem_trending-content&utm_medium=social&utm_source=twitter
Labels
-
Research Methods and Statistics