Psychology Blog - Page 2
Showing articles with label Research Methods and Statistics.

12-17-2023
05:00 AM
The New York Times has noted new data showing a rise in pedestrian deaths (Leonhardt, 2023). The article offers several possibilities for this increase. One reason may be that drivers are paying more attention to their phones than to the road and what’s going on around them. I’ll add built-in car displays to that category. With a physical knob or dial, adjusting music/audiobook volume or in-cabin temperature could easily be done by touch. With screens, drivers have to look away from the road to make these adjustments. The article also suggests that the greater availability of marijuana and opioids has more people driving under the influence of something. Additionally, more people are living in areas where sidewalks and crosswalks are less common. When people walk on the road, it stands to reason that their chances of being hit by a driver increase. Lastly, the article notes that with more people living on the streets, there are more opportunities for people and cars to collide.

I’d add one more possibility. It seems like cars are quieter than they used to be—electric vehicles certainly are. If pedestrians rely on sight and sound to help with vehicle awareness, quiet cars reduce those sensory modalities by half.

The New York Times article makes excellent points. What is missing from this discussion, however, is pedestrian behavior. In my informal observations of pedestrians—both as a driver and as a fellow pedestrian—some pedestrians seem pretty cavalier about occupying the same space as cars. Here are a few examples I’ve experienced in the last two weeks.

There is a fairly busy rural road near my home that has a few rolling hills. There is no sidewalk. It’s possible to walk on the side of the road, but with the rocks, it looks like it would be tough trekking. I’ve seen one person on two different occasions walking on the road, walking with traffic, and wearing over-the-ear headphones.
It’s not difficult for me to imagine a car cresting one of those hills and not seeing this person in time to avoid them—especially if there is oncoming traffic. The person would have no chance, since they can neither see nor hear oncoming traffic.

Just yesterday I was leaving our local post office when a person crossed the street in front of me. They did not look in either direction before crossing. They were wearing a big hood that functioned just like blinders. If I had been any closer, they would have walked into the side of my car. And a couple of weeks ago, I was a passenger in a car when a person who had not looked for oncoming cars stepped off the curb and came very close to walking into the side of our car. The car was a red Camaro. It was not easy to miss.

Not paying attention to surroundings is as much of a problem for pedestrians as it is for drivers. While a pedestrian who steps into a crosswalk when the lighted guy turns green is in the right and the inattentive driver who hits them is in the wrong, being right does not make the pedestrian any less dead.

Have pedestrians become less attentive? I don’t know. If we have, I can imagine several reasons why. Just as with drivers, phones have pedestrians’ attention. I also wonder if today’s pedestrians have less experience being pedestrians than pedestrians of the past. For example, stranger danger pushed kids indoors, giving them less experience on streets. Furthermore, more of my students today do not know how to drive as compared to my students in the past. Does less experience behind the wheel make it harder for pedestrians to see the world through a driver’s eyes?

This could be the basis of an interesting observational study for your students. Can your students devise measurements that would quantify pedestrian or driver attentiveness? For example, does a randomly selected pedestrian look both ways before stepping into the street?
Or does a randomly selected driver stopped at an intersection look in both directions before proceeding into the crosswalk? How would your students select the intersections where they conduct their observations? Does your city have data on the busiest intersections? Does your local police department have data on where car/pedestrian crashes occur? What days or times of day would your students choose?

As a way to expand student engagement with psychology, or as an alternative activity, consider asking students to use the persuasive strategies they learned about in their Intro Psych social chapter to design a public ad campaign. While the primary goal of the observational study activity is to give students practice designing and conducting an observational study, and the primary goal of the public ad campaign is to give students practice putting their knowledge of social psychology to work, the secondary goal for both activities is to increase student traffic safety awareness—both as drivers and as pedestrians.

Reference

Leonhardt, D. (2023, December 11). The rise in U.S. traffic deaths. The New York Times. https://www.nytimes.com/2023/12/11/briefing/us-traffic-deaths.html
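If your students want to turn their observation tallies into numbers, a sketch like this may help. The function name and all observation values here are my invention for illustration:

```python
import math

def attentiveness_summary(observations):
    """Proportion of attentive pedestrians (1 = looked both ways,
    0 = did not) with a rough 95% confidence interval using the
    normal approximation."""
    n = len(observations)
    p = sum(observations) / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - margin), min(1.0, p + margin))

# One invented observation session at a single intersection
session = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
p, (low, high) = attentiveness_summary(session)
print(f"{p:.0%} looked both ways (95% CI {low:.0%} to {high:.0%})")
```

Note how wide the interval is with only ten observations; that alone can spark a useful discussion about how many pedestrians students would need to observe.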
Labels: Research Methods and Statistics, Social Psychology

11-23-2023
08:14 AM
The New York Times published a freely available 5-minute opinion piece on credit cards that offer rewards. They argue that the money rewards cards give back has to come from somewhere, and that somewhere is the transaction fees the credit card companies—most notably Visa and Mastercard—charge business owners. They believe that using rewards cards ultimately hurts those in the lower economic strata, who are less likely to use rewards cards and so don’t reap their benefits while still paying the higher prices businesses have to charge to cover the transaction fees. The economists and public policy experts at the International Center for Law and Economics have a different opinion. I am not an economist, but I understand enough to know that this is all more complicated than it may first appear. I am also not so naïve as to believe that if we all stopped using cash-back credit cards, Visa and Mastercard would reduce these transaction fees.

As a consumer, however, I have been encountering the reinforcement and punishment of credit card use. I have a credit card that gives me 5% back on gas purchases. Our local gas stations will give me 10 cents off per gallon if I pay with cash. With gas at $3.00 a gallon, when I use this credit card, I get 15 cents off per gallon. I am reinforced with money for using the credit card. The price of gas would need to drop to $2.00 per gallon before the 10-cent cash discount equaled the 5% back on my credit card.

Now let’s imagine a fantasy world where gas is less than $2.00 per gallon. Even if it would cost me less to pay cash, the hassle of walking into the gas station, standing in line, and waiting for the cashier to make change would make using the credit card at the pump or the app on my phone a more desirable option. And that’s not even counting the cost of the snacks I’m more likely to purchase if I walk in. In short, the use of this particular credit card at gas stations is reinforced.
I also have a credit card that gives me 3% back on restaurant purchases. Because of the credit card transaction fees, our favorite local restaurant started adding a 3.5% credit card use fee to all credit card transactions. On a $30 bill (including tip), that’s $1.05, but we would only get 90 cents back from our credit card. Fifteen cents isn’t much, but it doesn’t require any extra effort on our part to pay cash—it takes just as long to wait for the server to return with change as it does to wait for them to return with a credit card receipt to sign—so we pay cash and save the 15 cents, thank you very much. In other words, at this restaurant, our credit card use is punished with an extra cost, so we don’t use it.

I wonder, though, if customers at this restaurant would be even more likely to pay cash if what we were charged initially included the 3.5% credit card use fee and it was framed as a 3.5% discount for using cash. It’s an empirical question! If you’d like to give your students some experimental design practice, ask them how they would go about testing that hypothesis.

Last example. We have a rewards card that gives us 1.5% back on all purchases. We had to have the thermostat replaced on our car. Our mechanic recently started passing the 3.5% credit card transaction fee on to their clients. On a bill of a few hundred dollars, I would be punished in the form of having to pay many dollars for using my credit card. This was an easy decision. I kept the credit card in my wallet and paid by check. And, no, I don’t remember the last time I wrote a check. As more businesses adopt this strategy, I look forward to the development of a handheld printer that will print checks. It’ll connect via Bluetooth to an app on my phone. I enter the business name and amount, and it prints out the check for me to sign.
Ask students if they have a cash-back rewards card and if they decide when to use it based on whether businesses explicitly pass the transaction fees on to credit card users. My examples may be specific to me—what I find reinforcing and punishing. I can see where some people choose to use the credit card regardless—either because they’ve decided that the convenience of using a card (or app) always outweighs the dollar cost or because they don’t have the cash available and so need to put the purchase on credit.
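The arithmetic in these examples is simple enough to hand to students as a check on the reinforcement contingencies. A small sketch (the function names are mine; the figures come from the examples above):

```python
def gas_breakeven(cash_discount_per_gallon, cashback_rate):
    """Gas price at which a flat per-gallon cash discount exactly
    equals the percentage cash back from a rewards card."""
    return cash_discount_per_gallon / cashback_rate

def card_net_cost(bill, surcharge_rate, cashback_rate):
    """Extra dollars paid (positive) or saved (negative) by using a
    card when a business adds a surcharge and the card gives a
    percentage of the bill back."""
    return round(bill * (surcharge_rate - cashback_rate), 2)

print(gas_breakeven(0.10, 0.05))       # break-even gas price: 2.0
print(card_net_cost(30, 0.035, 0.03))  # restaurant example: 0.15
```

At $3.00 per gallon, the 5% card beats the 10-cent cash discount; at the restaurant, a 3.5% surcharge ($1.05 on a $30 bill) outweighs the 3% cash back ($0.90) by 15 cents.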
Labels: Learning, Research Methods and Statistics

10-09-2023
05:00 AM
In a May 2023 Scientific American article, I was introduced to the concept of recreational fear (Martinez-Conde & Macknik, 2023). Of course, we’re all familiar with it; it was the term that was news to me. People who are into recreational fear do things that are scary—for fun: roller coasters, bungee jumping, haunted houses, horror movies. You get the idea. I’ve been on a bobsled (twice) and have been zip lining over some pretty impressive gorges (twice), but horror movies and haunted houses are not my bag.

Researchers wondered if being with friends would lessen the intensity of the fear in these recreational settings (Tashjian et al., 2022). Sometimes when we are with others in fear-inducing situations, social buffering occurs: the presence of others reduces our fear. But sometimes we experience social contagion: the presence of others increases our fear. In instances of recreational fear, which is it?

Here’s a little experimental design practice for the social psych chapter in Intro Psych. Ask students to work in small groups to design an exploratory study. Since we don’t know (or at least your students don’t know yet) whether the presence of others increases or decreases fear—and we can make a good case for either one—we won’t have a hypothesis. The question is, “Does the presence of others in a recreational fear situation increase or decrease fear?”

Your students will have a few problems to solve in designing this study. First, the independent variable. Will they focus on the effect of the presence of friends, strangers, or both? Will they investigate the impact of group size? Does the presence of five others have more of an impact than, say, one other person? There is also the challenge of the recreational fear situation itself. Even though your students are not actually going to conduct this study, potential IRB ethical concerns should be considered. I doubt that your IRB would approve of you scaring the bejesus out of your participants.
Is there someplace in your community or nearby environs where people pay to be scared? Ask your students to design a study where they would solicit volunteers from those paying customers. And now the dependent variable: how would your students operationally define fear? Invite groups to share their designs with the class.

To close this activity, tell students about the Tashjian et al. (2022) study. The researchers enlisted the help of the good folks at The 17th Door, a haunted house experience now located in Buena Vista, CA. The research article includes a summary of what happens in each of the 17 scenes. I read through them. Here is the researchers’ concise summary: “Each of the 17 contiguous rooms involved distinct threats, including the inability to escape an oncoming car, mimicked suffocation, actual electric shocks, and being shot with pellets by a firing squad while blindfolded” (Tashjian et al., 2022, p. 238). In an understatement for the win, they write, “[T]his type of immersive threat manipulation is not replicable in the lab” (Tashjian et al., 2022, p. 238). The “immersive threat manipulation” lasted 30 minutes. I’ve been on a bobsled and been ziplining over deep gorges. As far as recreational fear goes, I’m pretty sure The 17th Door is not for me.

The researchers recruited participants after they paid the admission fee and signed the waiver required by The 17th Door. Participants went through in groups of eight to ten. The researchers asked the volunteers how many friends were in their group. Everyone went through with at least one friend. Some groups were comprised entirely of friends. As a measure of fear intensity, each volunteer wore a wrist sensor that measured skin conductance. The groups of participants were led through the experience by an employee of The 17th Door on a precisely timed schedule. That allowed the recorded sensor activity to be aligned precisely with the events. Now for the results.
Social contagion won out over social buffering. The more friends people had with them, the greater the fear they experienced as measured by skin conductance. The authors acknowledge that because changes in skin conductance are due to sympathetic nervous system arousal, the increase in skin conductance could be caused by factors other than fear, such as excitement or nervousness.

To close out this activity, tell students that there is a Recreational Fear Lab at Aarhus University in Denmark run by Mathias Clasen and Marc Malmdorf Andersen. If the photo on their “people” page is accurate, their research assistants get lab coats that read “Horror Research Team” on the back. I’m a little jealous. These are the Recreational Fear Lab’s research questions for 2020–2023: What is recreational fear, and what can it be used for? What characterizes engagement with recreational fear across the lifespan? What psychological and physiological characteristics are associated with recreational fear? When does recreational fear turn into real fear?

I’m particularly intrigued by the last question. There is a boundary, but how do we identify it—both as researchers and as the terrified person? In The 17th Door, participants can yell “Mercy” to signal that they want to opt out of a scene or opt out of the entire event. What factors contribute to a person making that decision? Is that decision caused by crossing the line between recreational fear and real fear? Which research question do your students find the most interesting, and why?

References

Martinez-Conde, S., & Macknik, S. (2023, May). Friends can make things very scary. Scientific American, 328(5), 80.

Tashjian, S. M., Fedrigo, V., Molapour, T., Mobbs, D., & Camerer, C. F. (2022). Physiological responses to a haunted-house threat experience: Distinct tonic and phasic effects. Psychological Science, 33(2), 236–248. https://doi.org/10.1177/09567976211032231
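The headline result is a simple positive association: more friends, higher skin conductance. Purely as a classroom illustration (the numbers below are invented, not the study’s data), students could compute a Pearson correlation from a small tally:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Invented data: number of friends in the group vs. mean skin
# conductance in microsiemens
friends = [1, 2, 3, 4, 5, 6]
conductance = [2.1, 2.4, 2.3, 2.9, 3.0, 3.4]
print(round(pearson_r(friends, conductance), 2))
```

The actual analysis in Tashjian et al. (2022) is far more involved (separating tonic from phasic conductance), but a toy correlation makes the direction of the social contagion finding concrete.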
Labels: Research Methods and Statistics, Social Psychology

10-02-2023
05:00 AM
In the (freely available) editorial that opened the June 2, 2023 edition of Science, H. Holden Thorp, Editor-in-Chief of Science, reminds us that “it matters who does science” (Thorp, 2023, p. 873). His point is that scientists are human, humans make mistakes, therefore scientists make mistakes. And we should just own that. Science is riddled with mistakes. Thorp urges us to use the phrase “trust the scientific process,” because it suggests that “science is what we know now, the product of the work of many people over time, and principles that have reached consensus in the scientific community through established processes of peer review and transparent disclosure” (Thorp, 2023, p. 873). Science is the process, not just a collection of known facts—or a collection of theories that tie the known facts together into a (semi-)coherent whole.

Thorp also notes that when a working group of scientists all have the same preconceived notions, their biases may affect the research questions they ask, how they try to answer those research questions, and how they interpret the results. However, when people with different lived experiences and cultural backgrounds are part of the research process, “scientific consensus can be reached faster and with greater reliability” (Thorp, 2023, p. 873). Yes, science is riddled with mistakes, but the greater the diversity of experiences we bring to science, the faster we can rid ourselves of these mistakes and reduce the number of mistakes we make going forward.

I’m reminded of some of my favorite psychologists whose lived experiences led them to ask the research questions they are now famous for. Mamie and Kenneth Clark asked young Black children to choose the doll they would like to play with: a Black doll or a white doll. The children chose the white doll. That research, which was presented to the U.S. Supreme Court by Thurgood Marshall, influenced the outcome of what we now know as Brown v. Board of Education.
Anyone could have done that research, but only Black psychologists thought to ask the question. Lillian Gilbreth, the mother of industrial/organizational (I/O) psychology, became interested in efficient kitchens after 1920s sexism cost her business contracts following her husband’s death. Again, anyone could have done research into how to create an efficient kitchen, but only a female psychologist thought to ask the question.

In a more recent example, researchers have been uncovering the factors that contribute to racial disparities in sleep quality, such as racial disparities in shift work, exposure to light and air pollution, and acculturation stress. Sure, we can tell people that to get better sleep they need to sleep in a quiet, dark, cool room, but what if they live in an urban environment with plenty of middle-of-the-night sirens, ambient street lighting, and no air conditioning? And what if they work the night shift? What if what’s keeping them awake is worrying about whether their boss’s racism is keeping them from getting a raise or a promotion? Researchers who are asking these questions include Girardin Jean-Louis, Dayna Johnson, Carmela Alcántara, and Alberto Ramos (Pérez Ortega, 2021).

After covering research methods in Intro Psych, ask your students to read Thorp’s editorial. Next, invite your students to consider their own lived experiences. What research questions would they ask?

References

Pérez Ortega, R. (2021). Divided we sleep. Science, 374(6567), 552–555. https://doi.org/10.1126/science.acx9445

Thorp, H. H. (2023). It matters who does science. Science, 380(6648), 873. https://doi.org/10.1126/science.adi9021
Labels: History and System of Psychology, Research Methods and Statistics

09-18-2023
05:30 AM
As of June 2023, recreational cannabis use is legal in Canada (Department of Justice, Canada, 2021) and in 23 U.S. states, the District of Columbia, Guam, and the Northern Mariana Islands (Reuters, 2023). Not that it has to be legal for people to use it. In a 2022 national survey, researchers asked people about their marijuana use. Of full-time college students between the ages of 19 and 22, 22.1% reported that they used marijuana at least once in the last 30 days, whereas only 4.7% reported that they used it daily. Both numbers were lower than for age-matched non-college students (28.2% monthly and 14.5% daily). That 30-day percentage of 22.1% for college students is about where the numbers have been since 2013. To see numbers like these—or higher—we have to go back to the early 1980s. In 1980, a whopping one-third (34.8%) of college students reported using marijuana in the previous 30 days (Patrick et al., 2023).

Why do college students use marijuana? In one qualitative study, one reason participants gave was that they used it for a boost in creativity (Kilwein et al., 2022). But does marijuana actually make users more creative? Or do they just think they are more creative?

After covering experimental design, give your students this hypothesis: Cannabis use increases creativity. Ask students for the independent variable (including an experimental group and a control group) and the dependent variable(s). For all variables, ask for operational definitions. After students have had a couple of minutes to consider this on their own, ask them to work in small groups to create their experimental design. If time allows, ask students how or where they would find volunteers for their study. What are the ethical concerns that they need to take into consideration? After group discussion dies down, ask a volunteer from each group to share their design. Now share with students how researchers investigated this same question (Heng et al., 2023).
To recruit participants, researchers posted flyers in recreational cannabis dispensaries in Washington (a state where such use is legal) and on Craigslist. Users who smoked one joint no more than a few times a week were selected to participate. Anyone who reported being pregnant was excluded. Participants were mailed cannabis test kits and emailed the study information. Participants who successfully completed the study received a $25 gift card.

Participants were randomly assigned to one of two conditions: high during the creativity test or not high during the creativity test. “High” was operationally defined as having used marijuana in the last 15 minutes. The researchers note that the participants had to supply their own cannabis. “Instead of stipulating a specific time to complete the study, participants were asked to begin the study within 15-min of their volitional cannabis use. This addressed the IRB restriction of not instructing cannabis use” (Heng et al., 2023, p. 637).

Now we need an operational definition for creativity. “Participants were asked to generate as many creative uses as they could for a brick in 4 min” (Heng et al., 2023, p. 637). They also rated their brick ideas on a 5-point scale based on how creative, original, and novel they thought they were. Then they used the saliva test kit and mailed it back to the researchers.

What did the researchers find? Participants who used cannabis before doing the creativity task thought they were more creative than did those in the control group. But were they really more creative? The researchers asked a couple of research assistants who were blind to conditions to evaluate the creativity of the answers, and they also asked participants on Prolific to do the same. Neither the research assistants nor the Prolific participants saw any difference in creativity between the groups. There was a bit more to the research design if you’d like to share this with your students as a way to conclude this activity.
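The pattern of results (self-ratings differ between conditions while blind ratings do not) can be made concrete with a toy means comparison. The ratings below are invented for illustration; they only mimic the direction of the published finding:

```python
from statistics import mean

# Invented 5-point creativity ratings (illustrative only)
self_high     = [4, 5, 4, 4, 5]  # self-ratings, cannabis condition
self_control  = [3, 3, 4, 3, 3]  # self-ratings, control condition
blind_high    = [3, 3, 4, 3, 3]  # blind raters, cannabis condition
blind_control = [3, 4, 3, 3, 3]  # blind raters, control condition

def mean_difference(a, b):
    """Difference in mean rating between two groups."""
    return round(mean(a) - mean(b), 2)

print(mean_difference(self_high, self_control))    # sizable gap
print(mean_difference(blind_high, blind_control))  # essentially none
```

With real data, students would of course follow this with an inferential test rather than eyeballing the difference in means.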
The researchers also asked the participants how happy and joyful they were. They found that it was this mood state that mediated creativity evaluations: cannabis use was more likely to result in higher creativity ratings if the person was happy while high.

References

Department of Justice, Canada. (2021, July 7). Cannabis legalization and regulation. https://www.justice.gc.ca/eng/cj-jp/cannabis/

Heng, Y. T., Barnes, C. M., & Yam, K. C. (2023). Cannabis use does not increase actual creativity but biases evaluations of creativity. Journal of Applied Psychology, 108(4), 635–646. https://doi.org/10.1037/apl0000599

Kilwein, T. M., Wedell, E., Herchenroeder, L., Bravo, A. J., & Looby, A. (2022). A qualitative examination of college students’ perceptions of cannabis: Insights into the normalization of cannabis use on a college campus. Journal of American College Health, 70(3), 733–741. https://doi.org/10.1080/07448481.2020.1762612

Patrick, M. E., Miech, R. A., Johnston, L. D., & O’Malley, P. M. (2023). Monitoring the Future Panel Study annual report: National data on substance use among adults ages 19 to 60, 1976–2022 (Monitoring the Future Monograph Series). Institute for Social Research, University of Michigan. https://monitoringthefuture.org/wp-content/uploads/2023/07/mtfpanel2023.pdf

Reuters. (2023, June 1). U.S. states where recreational marijuana is legal. Reuters. https://www.reuters.com/world/us/us-states-where-recreational-marijuana-is-legal-2023-05-31/
Labels: Drugs, Research Methods and Statistics

04-10-2023
05:00 AM
On the first day of class, I give my students a few get-to-know-each-other questions to discuss in small groups. While they discuss, I visit the groups and invite them to ask any questions they may have about me. This semester during one such small group visit, a student asked me for my zodiac sign. I said, “Scorpio. But you know that doesn’t mean anything, right?” She looked at me as if I were the naïve one. In retrospect, I could have handled that better, perhaps with “Why do you ask?” or “Do you believe that the time of year we’re born is the sole determinant of personality? Our genes and experiences don’t matter at all?” In any case, I didn’t think any more about it.

And then two days ago (April 4, 2023), Google announced that their Waze app is adding a zodiac mode (Waze, 2023):

“And this month, Waze is tapping into the all-knowing cosmos to find out if you navigate like a Sagittarius or a Scorpio, thanks to the latest driving experience: Zodiac. Drive with a vehicle and Mood outfitted for your sign and embody your true colors on the road. Our navigation guide is well-versed in astrology and knows how to get all types of personalities to their final destination — whether you’re a fiery Aries, a balanced Libra, an independent Aquarius, an ambitious Taurus, a spontaneous Gemini, an intuitive Cancer, a detail-oriented Virgo, an intense Capricorn, a whimsical Pisces, a dramatic Leo, a free-spirited Sagittarius or a loyal Scorpio. She does it with love, life advice and a little teasing.”

The first thing I did was roll my eyes. The second thing I did was uninstall Waze. You would think that as a Scorpio I’d be more loyal than that. When we lived in the Seattle area, Waze was my go-to navigation app. Now that we live where there is much less traffic, I don’t need help getting around traffic jams, so I haven’t used Waze in two years. I admit that I haven’t kept up with Waze’s fun features. I just reinstalled Waze to see how zodiac mode works.
Unfortunately—and to my great disappointment—zodiac mode has not rolled out to my phone yet. There are, however, several other ways for me to “customize my drive.” If I select zombie mode, the driving directions are delivered in a zombie voice (or rather, what someone imagines a zombie voice would sound like), the car icon I see is decaying green, and the icon that appears to other drivers is a stitched-up gray blob. That helps me envision a bit what zodiac mode might look like. Just like the 70s/80s/90s mode or the cat/dog mode, I suppose zodiac mode is meant to be a new, fun, quirky way to get to and from wherever you need to be.

While there probably aren’t many people who believe in zombies, a Pew Research Center survey found that 29% of U.S. adults believe in astrology (Gecewicz, 2018). You can assume that about a third of your students hold such a belief. Among college graduates, however, the survey found that the share who believe in astrology drops to 22% (Gecewicz, 2018). I credit the personality chapter in the Intro Psych course for that decrease. Okay, I don’t know that. It’s an empirical question, though, for someone looking for a research project.

If you’d like to give your students some research practice in the personality chapter, point out that about a third of people in the U.S. believe that zodiac signs affect personality. Zodiac signs, however, were not included in our textbook’s personality chapter as a contributing factor. How could we find out if one’s zodiac sign affects personality? Give students a couple of minutes to think about this question on their own, and then ask them to discuss it in small groups. The research designs will likely include some measuring of personality traits. The biggest challenge here may be finding two astrological experts who agree on the characteristics each sign is supposed to have. As another variable, students may suggest asking study participants for their sign.
It’s possible that asking outright for a zodiac sign may prime the potentially one-third of participants who believe in the zodiac to skew their personality answers. There are at least two ways around this: ask for the participant’s birth date and determine the zodiac sign yourself, or ask for the zodiac sign at the very end, after all of the personality questions have been answered. Asking for the birth date is probably safer, since some volunteers may not know their zodiac sign. Also point out that birthdays don’t have meaning in some cultures, so members of those cultural groups may not know the date of their birth. When a birth date is required, they may use January 1. A good question for students to consider is how they could ask whether a participant knows their birth date.

If time allows, consider asking this question about ethics that I’ve been thinking about a lot lately. Do we each have a responsibility to share, and only share, factually correct information? If we know what we’re sharing is false, or suspect that it might be, do we have a responsibility to say so? As a professor of psychology, I certainly have an ethical responsibility to share evidence-based information about psychology. If the evidence is lacking, then I need to make it clear that the evidence is lacking. Does a producer or film company have a responsibility to depict accurately how drugs work, how memory works, how psychological disorders work—especially given how many people learn about these topics through media? When a tech company uses the zodiac to make commuting more fun, are they promoting—whether intentionally or unintentionally—belief in the stars having an impact on personality?

References

Gecewicz, C. (2018, October 1). ‘New Age’ beliefs common among both religious and nonreligious Americans. Pew Research Center. https://www.pewresearch.org/fact-tank/2018/10/01/new-age-beliefs-common-among-both-religious-and-nonreligious-americans/

Waze. (2023, April 4). Customize your next drive and tap into the zodiac with Waze.
Google. https://blog.google/waze/customize-your-next-drive-and-tap-into-the-zodiac-with-waze/
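The “ask for the birth date and determine the zodiac sign yourself” option is easy to implement. Here is a sketch; note that the cutoff dates vary by a day or so between astrology sources, so treat them as approximate:

```python
from datetime import date

# (month, day, sign): each sign runs through its listed cutoff date
CUTOFFS = [
    (1, 19, "Capricorn"), (2, 18, "Aquarius"), (3, 20, "Pisces"),
    (4, 19, "Aries"), (5, 20, "Taurus"), (6, 20, "Gemini"),
    (7, 22, "Cancer"), (8, 22, "Leo"), (9, 22, "Virgo"),
    (10, 22, "Libra"), (11, 21, "Scorpio"), (12, 21, "Sagittarius"),
    (12, 31, "Capricorn"),  # late-December birthdays wrap to Capricorn
]

def zodiac_sign(birthday: date) -> str:
    """Map a birth date to its approximate tropical zodiac sign."""
    for month, day, sign in CUTOFFS:
        if (birthday.month, birthday.day) <= (month, day):
            return sign
    return "Capricorn"  # unreachable; the (12, 31) entry catches everything

print(zodiac_sign(date(1970, 11, 5)))  # Scorpio
```

Deriving the sign at analysis time, rather than asking for it on the survey, sidesteps the priming concern raised above and works even for volunteers who don’t know their sign.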
Labels: Personality, Research Methods and Statistics

04-03-2023
05:00 AM
It is not often that the New York Times publishes an article on operational definitions. Okay, they don’t call them operational definitions, but that’s what they are.

Introduce this assignment by describing how much of our health is influenced by our behaviors. If behavior is involved, psychology is there. How do researchers—or we ourselves, for that matter—know if changes in our behavior (e.g., exercise, more nutritious eating) are positively affecting our level of fitness? The easy answer is that we randomly assign volunteers to, say, an exercise program of some sort or to a control group—maybe even a waitlist control group—and then, after a predetermined amount of time, we measure their fitness. Great! Now, how do we measure fitness?

Probably the most common way the average person on the street measures their fitness is by hopping on the scale. The more fat we carry, the greater the potential impact on our health. Since both fat and muscle have weight, though, the average scale does not differentiate. It is possible that the more we exercise, the more fat we lose and the more muscle mass we gain. Even though our fitness is increasing, our scales may tell us that we weigh the same or are actually gaining weight.

Then there’s the body mass index (BMI). This is another measurement that does not differentiate between fat and muscle. The BMI is not lacking for critics. As one observer pointed out, the current BMI categories are not useful. Several longitudinal studies, they report, have found that being BMI overweight (BMI 25–29.9) or in the first level of BMI obese (30–34.9) had little or no impact on mortality rates (Nuttall, 2015).

What if we could just measure the amount of fat that we carry? There are scales that purport to do that. Such scales send an electrical current through your body. Water is an excellent conductor of electricity. Muscle contains more water than fat. The less resistance the electrical current encounters, the more muscle mass the scale concludes we have.
The scales that have only two points of measurement—two feet—are less accurate than scales that have four points of measurement—two feet and two hands—but the two-point scales are considerably less expensive. The two-point scales tend to be reliable but not accurate, underestimating or overestimating fat content significantly. However, if you’re using such a scale to track changes, it does fine. One more important point about these scales: if you’re dehydrated, the scale’s electrical current will meet more resistance, and the scale will say that you have more body fat than you actually have (McCallum, 2022). None of these measurements—overall weight, BMI, or fat composition—identifies where our fat is concentrated. Abdominal fat is associated with poorer health outcomes than, say, fat stored in the lower body. The latter may actually have protective effects (de Lemos, 2020). If these measurements are not the best way to operationally define fitness, what are some alternatives? If you’d like to make this an out-of-class assignment, ask your students to read this New York Times article (Smith, 2023). The article identifies three different approaches to measuring fitness: heart metrics, physical performance metrics, and daily living metrics. Ask students to identify at least three operational definitions of fitness provided in the article for each approach.

References

de Lemos, J. (2020, December 16). Why belly fat is dangerous and how to control it. University of Texas Southwestern Medical Center. http://utswmed.org/medblog/belly-fat/

McCallum, K. (2022, April 26). How accurate are scales that measure body fat? Houston Methodist: On Health. https://www.houstonmethodist.org/blog/articles/2022/apr/are-body-composition-scales-accurate/

Nuttall, F. Q. (2015). Body mass index: Obesity, BMI, and health: A critical review. Nutrition Today, 50(3), 117–128. https://doi.org/10.1097/NT.0000000000000092

Smith, D. G. (2023, March 27). 3 ways to measure how fit you are, without focusing on weight. The New York Times. https://www.nytimes.com/2023/03/27/well/move/fitness-test.html
Labels: Research Methods and Statistics

Expert
01-30-2023
04:55 PM
The following would fit well with a discussion of research methods, but would also work as a research methods booster in the social or emotion chapters. In a series of studies conducted under different field and lab conditions, researchers gave participants opportunities to engage in random acts of kindness to evaluate the impact that kindness had on both the giver and the recipient (Kumar & Epley, 2022; freely available). For the purpose of this blog post, I want to focus on study 2a: hot chocolate at the skating rink. Reading several of Kumar and Epley’s studies in this article makes me want to do random acts of kindness research. I want to spend a chunk of my day brainstorming random acts of kindness that I could encourage participants to do. I’m picturing Amit Kumar and Nicholas Epley sitting around on a cold day, and one of them saying, “You know what makes me happy? A hot beverage on a cold day.” And the other saying, “Especially if I’m really cold and the hot beverage is extra tasty.” It’s a short leap from there to an outdoor skating rink and hot chocolate. With the permission of the skating rink operators, researchers approached people, told them that they were conducting a study, and gave them a choice: here’s a cup of hot chocolate; you can keep it for yourself, or you can point out anyone here, and we’ll deliver it to that person. The researchers made deliberate use of demand characteristics to encourage giving away the hot chocolate. I’m picturing something like this spiel: “The entire reason we’re out here, bub, is to investigate the effects of random acts of kindness, so we’d really love it if you’d give this hot chocolate away. But, hey, if you want to keep it, you selfish lout, there’s nothing we can do about it.” Okay, they probably didn’t call them selfish louts, although that would have upped the demand characteristics ante.
While 75 people agreed to give the hot chocolate away, nine (very cold people with low blood sugar, perhaps) opted to keep it. The givers each identified one person at the outdoor skating rink to receive a hot chocolate delivery. For the dependent variables, each hot chocolate donor was asked three questions: how big they thought this act of kindness was (scale of 0 to 10), what their mood was now, having made the decision to give away the hot chocolate, compared to normal (-5 to +5, where 0 is normal), and what they thought the mood of the recipient would be upon receiving the hot chocolate (same scale, -5 to +5, where 0 is normal). Next, the researchers approached the identified recipients, explained that they were conducting a study and that they gave people the choice to keep or give away a cup of hot chocolate. They further explained that a person chose to give away their cup of hot chocolate to them. At this point, I’m a little sorry that this was not a study of facial expressions. I would imagine that looks of confusion would dominate, at least at first. Imagine standing at an outdoor ice skating rink when a complete stranger comes up to you, says they’re conducting a study, and, here, have a cup of hot chocolate. After confusion, perhaps surprise or joy. Or perhaps skepticism. The researchers did not report how many hot chocolate recipients actually drank their beverage. Also no word on how happy the researchers were, since they were the ones who were actually giving away hot chocolate. After being handed the cup of hot chocolate, each recipient was asked to rate how big this act of kindness was (0 to 10 scale) and to report their mood (scale of -5 to +5, where 0 is normal). The design of this study makes the data analysis interesting. The mood of the givers and the mood of the recipients were each treated as a within-participants comparison. The reported mood (-5 to +5) was compared against 0 (normal mood).
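That comparison against 0 is, in effect, a one-sample t-test. Here is a minimal sketch of the logic with made-up mood ratings; the numbers below are illustrative, not Kumar and Epley’s data.

```python
import math
import statistics

def one_sample_t(ratings, mu=0.0):
    """t statistic for a sample mean tested against a hypothesized mean mu."""
    n = len(ratings)
    mean = statistics.mean(ratings)
    se = statistics.stdev(ratings) / math.sqrt(n)  # sample SD / sqrt(n)
    return (mean - mu) / se

# Hypothetical giver mood ratings on the -5 to +5 scale
giver_moods = [2, 3, 1, 4, 2, 3, 2, 3]
t = one_sample_t(giver_moods)
print(round(statistics.mean(giver_moods), 2), round(t, 2))
```

A t statistic this far from 0 would lead us to conclude that the (hypothetical) givers’ average mood was reliably above normal, which is the shape of the comparison the actual study reports.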
The givers, on average, reported a net positive mood of +2.4 (with +5 being the maximum). The recipients, on average, reported a net positive mood boost to +3.52. In a between-participants comparison, givers and recipients were compared on the mood of the recipients. When the givers were asked what the mood of the recipients would be, they underestimated. They guessed an average of +2.73, compared to the recipients’ actual average rating of their own mood, +3.52. As another between-participants comparison, the ratings of how big the givers thought their act of kindness was (3.76 on an 11-point scale) were compared to how big the recipients thought the act of kindness was (7.0 on an 11-point scale). Studies reported later in this article provide evidence suggesting that the difference in perspective between the givers of a random act of kindness and their recipients is that the givers attend to the act itself—such as the value of the hot chocolate—and not to the additional value of being singled out for kindness, no matter what that kindness is. To give students some practice at generating operational definitions, point out that Kumar and Epley operationally defined a random act of kindness as giving away hot chocolate. Ask students to consider some other operational definitions—some other ways Kumar and Epley could have created a random act of kindness situation while using the same basic study design. Point out that researchers could use these other operational definitions to do a conceptual replication of this study—same concepts, but different definitions. Maybe some of your students will even choose to engage in some of those random acts of kindness.

Reference

Kumar, A., & Epley, N. (2022). A little good goes an unexpectedly long way: Underestimating the positive impact of kindness on recipients. Journal of Experimental Psychology: General. Advance online publication. https://doi.org/10.1037/xge0001271
Labels: Emotion, Research Methods and Statistics, Social Psychology

Expert
12-28-2022
10:05 AM
I think of the Intro Psych course as an owner’s manual for being human. Throughout the course, we explore the multitude of ways we are influenced, without our conscious awareness, to think, feel, or behave a certain way. Here’s one such example we can use to give our students some experimental design practice. It’s suitable for the methods chapter or, if you cover drugs, for that chapter after discussing caffeine. Caffeine, as a stimulant, increases arousal. It’s plausible that consumers who are physiologically aroused engage in more impulsive shopping and, thus, spend more money than their uncaffeinated counterparts. Give students this hypothesis: If shoppers consume caffeine immediately before shopping, then they will spend more money. Ask students to take a couple of minutes to think about how they would design this study, and then invite students to share their ideas in pairs or small groups. Ask the groups to identify their independent variable (including experimental and control conditions) and their dependent variable. If you cover operational definitions, ask for those, too. Invite groups to share their designs with the class. Emphasize that there is no one right way to conduct a study. Each design will have its flaws, so using different designs to test the same hypothesis will give us greater confidence in the hypothesis. Share with students the first two of five experiments reported in the Journal of Marketing (Biswas et al., 2022). In study 1, researchers set up a free espresso station just inside the front door of a store. As shoppers entered, they were offered a cup of espresso. The experiment was conducted at different times of day over several days. At certain times, shoppers were offered a caffeinated espresso. At other times, they were offered a decaffeinated espresso. As the espresso drinkers left the store after having completed their shopping, researchers asked if they could see their receipts. Everyone said yes.
Researchers recorded the number of items purchased and the total purchase amount. (Ask students to identify the independent and dependent variables.) As hypothesized, the caffeinated shoppers purchased more items (2.16 vs. 1.45) and spent more money (€27.48 vs. €14.82) than the decaffeinated shoppers. Note that participants knew whether they were consuming a caffeinated or decaffeinated beverage, but did not know, when they accepted it, that they were participating in a study. There are a few ethical questions about study 1 worth exploring with your students. First, this study lacked informed consent. Participants were not aware that they were participating in a study when they accepted the free espresso. Only as they were leaving did it become clear to them that they were participating in a study. Given the norm of reciprocity, did participants see not handing over their receipts as a viable option? Lastly, the researchers expected that caffeine would increase consumer spending. In fact, it nearly doubled it. Was it ethical for the researchers to put unwitting shoppers in a position to spend more money than they had intended? In study 2, students from a marketing research class, “in exchange for course credit,” were asked to recruit family or friends to participate. The volunteers, who were told that this was a study about their shopping experience, were randomly assigned to an espresso or water condition; the beverages were consumed in a cafeteria next to a department store. After consuming their beverages, the volunteers were escorted to the department store and were asked to spend two hours in the store “shopping or looking around.” As in study 1, caffeinated shoppers spent nearly twice as much money (€69.91 vs. €39.63). Again, we have the ethical question of putting unwitting shoppers in the position to spend more money than they would have. We also have the ethical question of students recruiting friends and family to participate as a course requirement.
And then from a design perspective, how certain can we be that the students didn’t share the hypothesis with their family and friends? Is it possible that some of the students thought that if the study’s results didn’t support the hypothesis, their grade would be affected? As a final ethics question, what should we do with the knowledge that we are likely to spend (much) more money when shopping while caffeinated? As a shopper, it’s easy: I’m not going to stop at the coffee shop on my way to the store. For a store manager whose job it is to maximize sales, it’s also easy: give away cups of coffee as shoppers enter the store. The amount of money it costs to staff a station and serve coffee will more than pay for itself in shopper spending. Here’s the bigger problem. Is it okay to manipulate shoppers in this way for financial gain? Advertising and other persuasive strategies do this all the time. Is free caffeine any different? Or should coffee cups carry warning labels? To close this discussion, ask students in what other places or situations caffeine-encouraged impulsive behavior could be problematic. Casinos come readily to my mind. Are caffeinated people likely to bet more? Would that study be ethical to conduct?

Reference

Biswas, D., Hartmann, P., Eisend, M., Szocs, C., Jochims, B., Apaolaza, V., Hermann, E., López, C. M., & Borges, A. (2022). Caffeine’s effects on consumer spending. Journal of Marketing. Advance online publication. https://doi.org/10.1177/00222429221109247
Labels: Drugs, Research Methods and Statistics

Expert
10-17-2022
05:00 AM
I have had occasion to send out emails with some sort of inquiry. When I don’t get any response, it ticks me off. I don’t do well with being ignored. I’ve learned that about me. Even a short “I’m not the person to help you. Good luck!” would be welcome. I mention that to acknowledge that I brought that particular baggage with me when I read an article in the Journal of Counseling Psychology about counselors ignoring email messages from people seeking a counselor (Hwang & Fujimoto, 2022). As if the bias this study revealed was not anger-inducing enough. The results are not particularly surprising, but that does not make me less angry. I’ve noticed that as I’m writing this, I’m pounding on my keyboard. Wei-Chin Hwang and Ken A. Fujimoto were interested in finding out how counselors would respond to email inquiries from potential clients who varied on probable race, probable gender, psychological disorder, and inquiry about a sliding fee scale. The researchers used an unidentified “popular online directory to identify therapists who were providing psychotherapeutic services in Chicago, Illinois” (Hwang & Fujimoto, 2022, p. 693). From the full list of 2,323 providers, they identified 720 to contact. In the first two paragraphs of their methods section, Hwang and Fujimoto explain their selection criteria. The criterion that eliminated the most therapists was that the therapist needed to have an advertised email address. Many of the therapists listed only permitted contact through the database. Because the researchers did not want to violate the database’s terms of service, they opted not to contact therapists this way. They also excluded everyone who said that they only accepted clients within a specialty area, such as sports psychology. They also had to find a solution for group practices where two or more therapists from the same practice were in the database.
Hwang and Fujimoto did not want to risk therapists in the same group practice comparing email requests with each other, so they randomly chose one therapist in a group practice to receive their email. This experiment used a 3x3x2x2 factorial design (whew!):
- Inquirer’s race: White, African American, Latinx American (the three most common racial groups in Chicago, where the study was conducted). The researchers used U.S. Census Bureau data to identify last names that were most common for each racial group: Olson (White), Washington (African American), and Rodriguez (Latinx).
- Inquirer’s diagnosis: depression, schizophrenia, borderline personality disorder. (Previous research has shown that providers find people with schizophrenia or borderline personality disorder less desirable to work with than people with, say, depression.)
- Inquirer’s gender: male, female. (Male first names: Richard, Deshawn, José; female first names: Molly, Precious, and Juana.)
- Inquirer’s ability to pay the full fee: yes, no.
In their methods section, Hwang and Fujimoto include the scripts they used. Each script includes this question: “Can you email me back so that I can make an appointment?” The dependent variable was responsiveness. Did the provider email the potential client back within two weeks? If not, that was coded as “no responsiveness.” (In the article’s Table 1, the “no responsiveness” column is labeled as “low responsiveness,” but the text makes it clear that this column is “no responsiveness.”) If the provider replied but stated they could not treat the inquirer, that was coded as “some responsiveness.” If the provider replied with the offer of an appointment or an invitation to discuss further, that was coded as “high responsiveness.” There were main effects for inquirer race, diagnosis, and ability to pay the full fee. The cells refer to the percentage of providers’ email messages in each category. Table 1.
Responsiveness by Race of Inquirer

Name | No responsiveness | Some responsiveness | High responsiveness
Molly or Richard Olson | 15.4% | 33.2% | 51.5%
Precious or Deshawn Washington | 27.4% | 30.3% | 42.3%
Juana or José Rodriguez | 22.3% | 34.0% | 43.7%

There was one statistically significant interaction. Male providers were much more likely to respond to Olson than they were to Washington or Rodriguez. Female providers showed no bias in responding by race. If a therapist does not want to work with a client based on their race, then it is probably best for the client if they don’t. But at least have the decency to reply to their email with some lie about how you’re not taking on more clients, and then refer them to a therapist who can help.

Table 2. Responsiveness by Diagnosis

Diagnosis | No responsiveness | Some responsiveness | High responsiveness
Depression | 17.9% | 20.0% | 62.1%
Schizophrenia | 25.8% | 43.8% | 30.4%
Borderline personality disorder | 21.3% | 33.8% | 45.0%

Similar thoughts here. I get that working with a client diagnosed with schizophrenia or borderline personality disorder takes a very specific set of skills that not all therapists have, but, again, at least have the decency to reply to the email saying that you don’t have the skills, and then refer them to a therapist who does.

Table 3. Responsiveness by Inquirer’s Ability to Pay Full Fee

Ability to pay full fee | No responsiveness | Some responsiveness | High responsiveness
No | 22.4% | 39.7% | 38.0%
Yes | 21.0% | 25.4% | 53.6%

While Hwang and Fujimoto interpret these results to mean a bias against members of the working class, I have a different interpretation. The no-response rate was about the same in both conditions, with roughly 20% of providers not replying at all. If there were an anti-working-class bias, I would expect the no-responsiveness percentage for those asking about a sliding fee scale to be much greater. In both levels of this independent variable, about 80% gave some reply.
It could be that the greater percentage of “some responsiveness” in reply to those who inquired about a sliding fee scale was due to the providers being maxed out on the number of clients they had who were paying reduced fees. One place to discuss this study and its findings with Intro Psych students is in the therapy chapter. It would work well as part of your coverage of therapy ethics codes. Within the ethics code for the American Counseling Association, Section C on professional responsibility is especially relevant. It reads in part: Counselors facilitate access to counseling services, and they practice in a nondiscriminatory manner…Counselors are encouraged to contribute to society by devoting a portion of their professional activity to services for which there is little or no financial return (pro bono publico) (American Counseling Association, 2014, p. 8). Within the ethics code of the American Psychological Association, Principle D: Justice is particularly relevant. Psychologists recognize that fairness and justice entitle all persons to access to and benefit from the contributions of psychology and to equal quality in the processes, procedures, and services being conducted by psychologists. Psychologists exercise reasonable judgment and take precautions to ensure that their potential biases, the boundaries of their competence, and the limitations of their expertise do not lead to or condone unjust practices (American Psychological Association, 2017). Principle E: Respect for People's Rights and Dignity is also relevant. It reads in part: Psychologists are aware of and respect cultural, individual, and role differences, including those based on age, gender, gender identity, race, ethnicity, culture, national origin, religion, sexual orientation, disability, language, and socioeconomic status, and consider these factors when working with members of such groups.
Psychologists try to eliminate the effect on their work of biases based on those factors, and they do not knowingly participate in or condone activities of others based upon such prejudices (American Psychological Association, 2017). This study was conducted in February 2018—before the pandemic. Public mental health has not gotten better since. Asking for help is not easy. When people muster the courage to ask for help, the absolute least we can do is reply. Even if we are not the best person to provide that help, we can direct them to additional resources, such as one of these crisis help lines. When a trained and licensed therapist who is bound by their profession’s code of ethics does not reply at all to a request for help, I just don’t have the words. Again, I should acknowledge that I have my own baggage about having my email messages ignored. For anyone who wants to blame their lack of responding on the volume of email they have to sort through (I won’t ask if you are selectively not responding based on perceived inquirer personal characteristics), I have an hour-long workshop that will help you get your email under control and keep it that way.

References

American Counseling Association. (2014). ACA 2014 code of ethics. https://www.counseling.org/docs/default-source/default-document-library/2014-code-of-ethics-finaladdress.pdf?sfvrsn=96b532c_8

American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. https://www.apa.org/ethics/code

Hwang, W.-C., & Fujimoto, K. A. (2022). Email me back: Examining provider biases through email return and responsiveness. Journal of Counseling Psychology, 69(5), 691–700. https://doi.org/10.1037/cou0000624
Labels: Psychological Disorders and Their Treatment, Research Methods and Statistics, Social Psychology

Expert
10-10-2022
05:30 AM
I have been doing a bit of digging into the research databases, and I came across a Journal of Eating Disorders article with a 112-word “Plain English summary” (Alberga et al., 2018). I love this so much I can hardly stand it. Steven Pinker (2014) wrote an article for The Chronicle of Higher Education titled “Why academics stink at writing.” Pinker does not pull any punches in his assessment. Let’s face it. Some academic writing is virtually unreadable. Other academic writing is actually unreadable. Part of the problem is one of audience. If a researcher is writing for other researchers in their very specific corner of the research world, of course they are going to use jargon and make assumptions about what their readers know. That, though, is problematic for the rest of us. I have spent my career translating psychological science as an instructor and, more recently, as an author. This is what teaching is all about: translation. If we are teaching in our particular subdiscipline, translation is usually not difficult. If we are teaching Intro Psych, though, we have to translate research writing that is miles away from our subdiscipline. This is what makes Intro Psych the most difficult course in the psychology curriculum to teach. I know instructors who do not cover, for example, biopsychology or sensation and perception in their Intro Psych courses because they do not understand the topics themselves. Additionally, some of our students have learned through reading academic writing to write in a similarly incomprehensible style. Sometimes I feel like students initially wrote their papers in plain English and then threw a thesaurus at them to make their writing sound more academic. We have certainly gone wrong somewhere if ‘academic’ has come to mean ‘incomprehensible.’ I appreciate the steps some journals have taken to encourage or require article authors to tell readers why their research is important.
In the Society for the Teaching of Psychology’s journal Teaching of Psychology, for example, the abstract ends with a “Teaching Implications” section. Many other journals now require a “Public Significance Statement” or a “Translational Abstract” (what the Journal of Eating Disorders calls a “plain English summary”). I have read my share of public significance statements. I confess that sometimes it is difficult—impossible even—to see the significance of the research to the general public in the statements. I suspect it is because the authors themselves do not see any public significance. That is probably truer for (some areas of) basic research than it is for any area of applied research. Translational abstracts, in contrast, are traditional abstracts rewritten for a lay audience. APA’s page on “Guidance for translational abstracts and public significance statements” (APA, 2018) is worth a read. An assignment where students write both translational abstracts and public significance statements for existing journal articles gives students some excellent writing practice. In both cases, students have to understand the study they are writing about, translate it for a general audience, and explain why the study matters. And maybe—just maybe—as this generation of college students become researchers and then journal editors, in a couple generations plain English academic writing will be the norm. This is just one of several windmills I am tilting at these days. The following is a possible writing assignment. While it can be assigned after covering research methods, it may work better later in the course. For example, after covering development, provide students with a list of articles related to development that they can choose from. 
While curating a list of articles means more work for you up front, students will struggle less to find article abstracts that they can understand, and your scoring of their assignments will be easier since you will have a working knowledge of all of the articles students could choose from.
- Read the American Psychological Association’s (APA’s) “Guidance for translational abstracts and public significance statements.”
- Choose a journal article from this list of Beth Morling’s student-friendly psychology research articles (or give students a list of articles).
- In your paper: Copy/paste the article’s citation. Copy/paste the article’s abstract. Write your own translational abstract for the article. Write your own public significance statement. (The scoring rubrics for the last two sections will be based on APA’s “Guidance for translational abstracts and public significance statements.”)

References

Alberga, A. S., Withnell, S. J., & von Ranson, K. M. (2018). Fitspiration and thinspiration: A comparison across three social networking sites. Journal of Eating Disorders, 6(1), 39. https://doi.org/10.1186/s40337-018-0227-x

APA. (2018, June). Guidance for translational abstracts and public significance statements. https://www.apa.org/pubs/journals/resources/translational-messages

Pinker, S. (2014). Why academics stink at writing. The Chronicle of Higher Education, 61(5).
Labels: Research Methods and Statistics, Teaching and Learning Best Practices

Expert
09-05-2022
05:00 AM
In an example of archival research, researchers analyzed data from the U.S. National Health and Nutrition Examination Survey for the years 2007 to 2012 (Hecht et al., 2022). They found that after controlling for “age, gender, race/ethnicity, BMI, poverty level, smoking status and physical activity,” (p. 3) survey participants “with higher intakes of UPF [ultra-processed foods] report significantly more mild depression, as well as more mentally unhealthy and anxious days per month, and less zero mentally unhealthy or anxious days per month” (p. 7). So far, so good. The researchers go on to say, “it can be hypothesised that a diet high in UPF provides an unfavourable combination of biologically active food additives with low essential nutrient content which together have an adverse effect on mental health symptoms” (p. 7). I don’t disagree with that. It is one hypothesis. By controlling for their identified covariates, they address some possible third variables, such as poverty. However, at no place in their article do they acknowledge that the direction can be reversed. For example, it can also be hypothesized that people who are experiencing the adverse effects of mental health symptoms have a more difficult time consuming foods high in nutritional quality. Anyone who battles the symptoms of mental illness or who is close to someone who does knows that sometimes the best you can do for dinner is a hotdog or a frozen pizza—or if you can bring yourself to pick up your phone—pizza delivery. 
They do, however, include reference to an experiment: “[I]n one randomized trial, which provides the most reliable evidence for small to moderate effects, those assigned to a 3-month healthy dietary intervention reported significant decreases in moderate-to-severe depression.” The evidence from that experiment looks pretty good (Jacka et al., 2017), although the groups were not equivalent on diet at baseline: the group that got the dietary counseling scored much lower on the dietary measure than did the group that got social support. Also, those who received social support during the study did, in the end, have better mental health scores and better diet scores than they did at baseline, although all we have are the means; I don’t know if the differences are statistically significant. All of that is to say that the possibility remains that reducing the symptoms of mental illness may also increase nutritional quality. Both the Jacka et al. experiment and the Hecht et al. correlational study are freely available. You may also want to read the Science Daily summary of the Hecht et al. study, where the author (or editor?) writes, “Do you love those sugary-sweet beverages, reconstituted meat products and packaged snacks? You may want to reconsider based on a new study that explored whether individuals who consume higher amounts of ultra-processed food have more adverse mental health symptoms.” If you’d like to use this in your Intro Psych class, after covering correlations and experiments, ask your students to read the Science Daily summary. Then ask your students two questions. 1) Is this a correlational study or an experiment? 2) From this study, can we conclude that ultra-processed foods negatively affect mental health? These questions lend themselves well to use with in-class student response systems (e.g., Clickers, Plickers). Lastly, you may want to share with your students more information about both the Hecht et al. study and the Jacka et al. experiment.
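To show students why the answer to the second question is no, a toy simulation can help: the same positive correlation appears whether diet drives mood or mood drives diet. This sketch is purely illustrative; the simulated numbers have nothing to do with the actual studies.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(42)
n = 1000

# World A: diet quality causally improves mood.
diet_a = [rng.gauss(0, 1) for _ in range(n)]
mood_a = [d + rng.gauss(0, 1) for d in diet_a]

# World B: mood causally improves diet quality.
mood_b = [rng.gauss(0, 1) for _ in range(n)]
diet_b = [m + rng.gauss(0, 1) for m in mood_b]

# Both worlds produce a similar positive diet-mood correlation,
# so the correlation alone cannot tell us the causal direction.
print(round(pearson_r(diet_a, mood_a), 2), round(pearson_r(diet_b, mood_b), 2))
```

Both correlations come out strongly positive and nearly identical, which is exactly the point: a cross-sectional correlation is consistent with either causal story.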
If time allows, give your students an opportunity to design an experiment that would test this hypothesis: Improving mental health symptoms causes better nutritional consumption.

References

Hecht, E. M., Rabil, A., Martinez Steele, E., Abrams, G. A., Ware, D., Landy, D. C., & Hennekens, C. H. (2022). Cross-sectional examination of ultra-processed food consumption and adverse mental health symptoms. Public Health Nutrition, 1–10. https://doi.org/10.1017/S1368980022001586

Jacka, F. N., O’Neil, A., Opie, R., Itsiopoulos, C., Cotton, S., Mohebbi, M., Castle, D., Dash, S., Mihalopoulos, C., Chatterton, M. L., Brazionis, L., Dean, O. M., Hodge, A. M., & Berk, M. (2017). A randomised controlled trial of dietary improvement for adults with major depression (the ‘SMILES’ trial). BMC Medicine, 15(1), 23. https://doi.org/10.1186/s12916-017-0791-y
Labels: Psychological Disorders and Their Treatment, Research Methods and Statistics

Expert
08-16-2022
01:09 PM
If you are looking for a new study to freshen up your coverage of experimental design in your Intro Psych course, consider this activity. After discussing experiments and their component parts, give students this hypothesis: Referring to “schizophrenics” as compared to “people with schizophrenia” will cause people to have less empathy for those who have a diagnosis of schizophrenia. In other words, does the language we use matter? Assure students that they will not actually be conducting this experiment. Instead, you are asking them to go through the design process that all researchers go through. Ask students to consider these questions, first individually, to give students an opportunity to gather their thoughts, and then in a small group discussion:

- What population are you interested in studying, and why? Are you most interested in knowing what impact this choice of terminology has on the general population? High school students? Police officers? Healthcare providers?
- Where might you find 100 or so volunteers from your chosen population to participate?
- Design the experiment. What will be the independent variable? What will the participants in each level of the independent variable be asked to do? What will be the dependent variable? Be sure to provide an operational definition of the dependent variable.

Invite groups to share their populations of interest with a brief explanation of why they chose that population and where they might find volunteers. Write the populations somewhere students can see the list. Point out that doing this research with any and all of these populations would have value. The independent variable and dependent variable should be the same for all groups since they are stated in the hypothesis. Operational definitions of the dependent variable may vary, however. Give groups an opportunity to share their overall experimental design.
Again, point out that if researchers find support for the hypothesis regardless of the specifics of how the experiment is conducted and regardless of the dependent variable’s operational definition, that is all the more support for the robustness of the findings. Even if some research designs or operational definitions or particular populations do not support the hypothesis, that is also very valuable information. Researchers then get to ask why these experiments found different results. For example, if research with police officers returns different findings than research with healthcare workers, psychological scientists get to explore why. Is there, say, a difference in their training that might affect the results?

Lastly, share with students how Darcy Haag Granello and Sean R. Gorby researched this hypothesis (Granello & Gorby, 2021). They were particularly interested in how the terms “schizophrenic” and “person with schizophrenia” would affect feelings of empathy (among other dependent variables) for both practicing mental health counselors and graduate students who were training to be mental health counselors. For the practitioners, they found volunteers by approaching attendees at a state counseling conference (n=82) and at an international counseling conference (n=79). In both cases, they limited their requests to a conference area designated for networking and conversing. For the graduate students, faculty at three different large universities asked their students to participate (n=109). Since the researchers were particularly interested in mental health counseling, anyone who said that they were in school counseling or who did not answer the question about counseling specialization had their data removed from the analysis (n=19). In the end, they had a total of 251 participants. Granello and Gorby gave participants the Community Attitudes Toward the Mentally Ill scale.
This measure has four subscales: authoritarianism, benevolence, social restrictiveness, and community mental health ideology. While the original version of the scale asked about mental illness more generally, the researchers amended it so that “mental illness” was replaced with “schizophrenics” or “people with schizophrenia.” The researchers stacked the questionnaires so that the terminology used alternated. For example, if the first person they approached received the questionnaire asking about “schizophrenics,” the next person would have received the questionnaire asking about “people with schizophrenia.”

Here are sample items for the “schizophrenics” condition, one from each subscale:

- Schizophrenics need the same kind of control and discipline as a young child (authoritarianism subscale)
- Schizophrenics have for too long been the subject of ridicule (benevolence subscale)
- Schizophrenics should be isolated from the rest of the community (social restrictiveness subscale)
- Having schizophrenics living within residential neighborhoods might be good therapy, but the risks to residents are too great (community mental health ideology subscale)

Here are those same sample items for the “people with schizophrenia” condition:

- People with schizophrenia need the same kind of control and discipline as a young child (authoritarianism subscale)
- People with schizophrenia have for too long been the subject of ridicule (benevolence subscale)
- People with schizophrenia should be isolated from the rest of the community (social restrictiveness subscale)
- Having people with schizophrenia living within residential neighborhoods might be good therapy, but the risks to residents are too great (community mental health ideology subscale)

What did the researchers find? When the word “schizophrenics” was used:

- Both practitioners and students scored higher on the authoritarianism subscale.
- The practitioners (but not the students) scored lower on the benevolence subscale.
- All participants scored higher on the social restrictiveness subscale.
- There were no differences on the community mental health ideology subscale for either practitioners or students.

Give students an opportunity to reflect on the implications of these results. Invite students to share their reactions to the experiment in small groups. Allow groups who would like to share some of their reactions with the class an opportunity to do so.

Lastly, as time allows, you may want to share the two limitations of the experiment identified by the researchers. First, the practitioners who volunteered were predominantly white (74.1% identified as such) and had the financial means to attend a state or international conference. Would practitioners of a different demographic show similar results? The graduate students likewise had the means to attend a large university in person; graduate students enrolled in online counseling programs, for example, may produce different results. Second, when the researchers divided their volunteers into practitioners and students, the number of participants in each group fell below the number recommended to give them the statistical power to detect real differences. With more participants, they may have found even more statistically significant differences. Even with these limitations, however, the point holds: the language we use affects the perceptions we have.

Reference

Granello, D. H., & Gorby, S. R. (2021). It’s time for counselors to modify our language: It matters when we call our clients schizophrenics versus people with schizophrenia. Journal of Counseling & Development, 99(4), 452–461. https://doi.org/10.1002/jcad.12397
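The power limitation the researchers mention can be made concrete with a quick back-of-the-envelope calculation. This is a minimal sketch using the standard normal approximation for a two-group comparison; the conventional two-tailed alpha of .05, power of .80, and the effect sizes shown are illustrative assumptions, not values taken from the study.

```python
import math

def n_per_group(d):
    """Approximate per-group sample size needed to detect effect size d
    in a two-group comparison (two-tailed alpha = .05, power = .80,
    normal approximation)."""
    z_alpha = 1.95996  # z for two-tailed alpha = .05
    z_beta = 0.84162   # z for power = .80
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5))  # medium effect: 63 per group
print(n_per_group(0.2))  # small effect: 393 per group
```

Even a medium-sized difference takes over sixty participants per group to detect reliably, which is why splitting a sample into smaller subgroups (practitioners vs. students) can leave real differences undetected.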
Labels: Research Methods and Statistics, Thinking and Language

Expert
03-08-2021
09:50 AM
After covering correlations and experiments, share the February 17, 2021 edition of the PC and Pixel comic strip with your students. In the first panel, one of the characters reads a research finding: “It’s reported here that unhappy people watch more TV than happy folks.” Ask your students if they think this is correlational research or experimental research, and ask them to explain why. In the next two panels of the comic strip, the characters wonder whether unhappiness leads to more TV watching or more TV watching leads to unhappiness. Point out that since this is correlational research, we don’t know which is true. Either or both could be true. We just don’t know. Ask your students to generate some possible third variables that could influence both happiness and TV watching separately. For example, feelings of loneliness could lead to both feelings of unhappiness and greater TV watching (as a source of company, say).

Explain that researchers may take correlational research and use it to generate hypotheses that could be tested by conducting an experiment:

- If people are made to feel unhappier, they will watch more TV.
- If people are made to watch more TV, they will be unhappier.
- If people are made to feel lonely, they will both be unhappier and watch more TV.

Working in small groups, ask your students to design experiments that would test each of these hypotheses. Ask them to identify the independent variable and its levels and the dependent variable, and to describe how they would operationalize the variables. Bring the class back together, and ask one group to share their design for testing the first hypothesis. Invite other groups to share how they operationalized the independent variable and dependent variable. Take a minute to walk students through what the different results from each test of the hypothesis would tell us. Point out that there is no right or wrong way to operationalize a variable.
In fact, if the hypothesis is supported across experiments that operationalize the variables differently, we can be all the more confident in the findings. Next, ask your students if they have any concerns about intentionally trying to make people feel unhappier, whether as the independent variable or as the dependent variable. Invite students to share their concerns. If you haven’t already, introduce students to the APA’s Ethical Principles of Psychologists and Code of Conduct. The first of the five general principles is beneficence and nonmaleficence. This principle reads, in part:

Psychologists strive to benefit those with whom they work and take care to do no harm. In their professional actions, psychologists seek to safeguard the welfare and rights of those with whom they interact professionally and other affected persons, and the welfare of animal subjects of research.

The third of the principles—integrity—is also relevant here. This principle reads in its entirety:

Psychologists seek to promote accuracy, honesty, and truthfulness in the science, teaching, and practice of psychology. In these activities psychologists do not steal, cheat or engage in fraud, subterfuge, or intentional misrepresentation of fact. Psychologists strive to keep their promises and to avoid unwise or unclear commitments. In situations in which deception may be ethically justifiable to maximize benefits and minimize harm, psychologists have a serious obligation to consider the need for, the possible consequences of, and their responsibility to correct any resulting mistrust or other harmful effects that arise from the use of such techniques.

Given these ethical principles, are students more comfortable with some of the experimental designs they created than with others? For example, are experiments that bring about temporary and mild unhappiness better than designs that are, say, more intense?
Does the knowledge these experiments would bring—and the good it would mean for humanity—outweigh the harm they may cause in the short term? Be sure to describe the purpose of a debriefing. Conclude this discussion by emphasizing that these are the ethics questions every researcher and every member of an Institutional Review Board struggles with. No one takes these questions lightly.

If you’d like to give your students some library database practice, ask your students to find three to five peer-reviewed research articles on the connection between happiness and TV watching. For each article, students should identify whether the research reported was correlational or experimental (and how they know) and provide a paragraph summarizing the results.
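The third-variable point above can also be demonstrated with a quick in-class simulation. This is a minimal sketch using only Python’s standard library; the variable names and numbers are illustrative assumptions, not data from any study. Loneliness drives both unhappiness and TV watching, neither of which is computed from the other, yet the two end up correlated.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(42)  # reproducible classroom demo

# The third variable: loneliness (arbitrary standardized units)
loneliness = [random.gauss(0, 1) for _ in range(500)]

# Unhappiness and TV hours each depend on loneliness plus independent noise;
# neither variable causes the other.
unhappiness = [l + random.gauss(0, 1) for l in loneliness]
tv_hours = [l + random.gauss(0, 1) for l in loneliness]

print(round(pearson(unhappiness, tv_hours), 2))
```

Under these assumptions the theoretical correlation between unhappiness and TV hours is .50, so any given run lands near that value, even though the only causal arrows in the simulation point from loneliness outward.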
Labels: Research Methods and Statistics

Expert
02-16-2021
10:32 AM
At some point in college or grad school, I was given a short article that explained the different sections of a typical psychology journal article. I have a vague memory of being told to always read the abstract first, but beyond that, I don’t remember being given any guidance on how to actually read the article. Eventually I figured out that journal articles sharing new research are not meant to be read from beginning to end. True confession: I started reading them out of order pretty early in my journal-article-reading career, but I felt guilty about it. I had no reason to feel guilty. I wish I had known that then.

The Learning Scientists blog has a nice collection of articles on how to read a research journal article. Take a look at that list to see if there is anything there you want to share with your students that particularly meets your goals. For example, the library at Teesside University has brief descriptions of each article section. If you’d like your students to hear from academics themselves on how they approach research journal articles, the Science article is a good choice.

Alternatively, you may choose to give your students a few easy-to-read articles and ask them to sort out the different elements of a research article. Ask your students to look at (not necessarily “read”), say, three of the following articles, all of which have 12 or fewer pages of text. Work with your librarians to get permalinks to these articles from your library’s databases.

Barry, C. T., McDougall, K. H., Anderson, A. C., Perkins, M. D., Lee-Rowland, L. M., Bender, I., & Charles, N. E. (2019). ‘Check your selfie before you wreck your selfie’: Personality ratings of Instagram users as a function of self-image posts. Journal of Research in Personality, 82, 1–11. https://doi.org/10.1016/j.jrp.2019.07.001

Gosnell, C. L. (2019). Receiving quality positive event support from peers may enhance student connection and the learning environment. Scholarship of Teaching and Learning in Psychology. Advance online publication. https://doi.org/10.1037/stl0000178

Howe, L. C., Goyer, J. P., & Crum, A. J. (2017). Harnessing the placebo effect: Exploring the influence of physician characteristics on placebo response. Health Psychology, 36(11), 1074–1082. https://doi.org/10.1037/hea0000499.supp

Hyman, I. E., Boss, S. M., Wise, B. M., McKenzie, K. E., & Caggiano, J. M. (2010). Did you see the unicycling clown? Inattentional blindness while walking and talking on a cell phone. Applied Cognitive Psychology, 24, 597–607. https://doi.org/10.1002/acp.1638

Reed, J., Hirsh-Pasek, K., & Golinkoff, R. M. (2017). Learning on hold: Cell phones sidetrack parent-child interactions. Developmental Psychology, 53(8), 1428–1436. https://doi.org/10.1037/dev0000292

Rhodes, M., Leslie, S. J., Yee, K. M., & Saunders, K. (2019). Subtle linguistic cues increase girls’ engagement in science. Psychological Science, 30(3), 455–466. https://doi.org/10.1177/0956797618823670

Soicher, R. N., & Gurung, R. A. R. (2017). Do exam wrappers increase metacognition and performance? A single course intervention. Psychology Learning and Teaching, 16(1), 64–73. https://doi.org/10.1177/1475725716661872

Wirth, J. H., & Bodenhausen, G. V. (2009). The role of gender in mental-illness stigma: A national experiment. Psychological Science, 20(2), 169–173. https://doi.org/10.1111/j.1467-9280.2009.02282.x

Give your students the following instructions and questions. Amend them to your liking.

Skim each of these three articles:

Barry, C. T., McDougall, K. H., Anderson, A. C., Perkins, M. D., Lee-Rowland, L. M., Bender, I., & Charles, N. E. (2019). ‘Check your selfie before you wreck your selfie’: Personality ratings of Instagram users as a function of self-image posts. Journal of Research in Personality, 82, 1–11. https://doi.org/10.1016/j.jrp.2019.07.001

Gosnell, C. L. (2019). Receiving quality positive event support from peers may enhance student connection and the learning environment. Scholarship of Teaching and Learning in Psychology. Advance online publication. https://doi.org/10.1037/stl0000178

Howe, L. C., Goyer, J. P., & Crum, A. J. (2017). Harnessing the placebo effect: Exploring the influence of physician characteristics on placebo response. Health Psychology, 36(11), 1074–1082. https://doi.org/10.1037/hea0000499.supp

Research articles published in journals follow some basic conventions that are designed to make them easy for researchers and students to read. Almost all research articles have these six main components, always in this order. Using the three articles you skimmed, your goal is to identify the basic structure of research articles. For each component, answer the questions given.

Abstract
- In less than 50 words, describe the purpose of the abstract.

Introduction (usually not labeled, but it always comes after the abstract)
- In less than 50 words, describe the purpose of the introduction.
- The research hypotheses can almost always be found near the end of the introduction. Identify at least one hypothesis from each article.

Method
- In less than 50 words, describe the purpose of the method section.
- In the method section, you will see that all of the articles contain similar information. Identify three different types of information that are common across all three articles.

Results
- In less than 50 words, describe the purpose of the results section.
- If you don’t understand much of what is written in this section, that’s okay. This section is written for fellow researchers, not Intro Psych students. Copy/paste (use quotation marks!) one sentence from the results section of each article that made little or no sense to you.

Discussion
- In less than 50 words, describe the purpose of the discussion section.

References
- In less than 50 words, describe the purpose of the references section.
- Choose one reference from each article that, based on the title alone, you might be interested in reading. How would you go about getting that article?

Researchers almost always read the abstract first. After that, what they read next depends on why they are looking at the article at all. For each of the following scenarios, match the researcher with the section of the article they are likely to read first after the abstract: introduction, method, results, discussion, references.

A. Dr. Akiya Yagi wanted to read more about the conclusions the researchers drew from having done this study.
B. Dr. Selva Hernandez-Lopez is doing research on these same psychological concepts, and she’s looking for useful research articles that she may have missed.
C. Dr. DeAndre Thomas is looking for different ways to measure a particular psychological concept.
D. Dr. Kaitlyn Kronvalds read some information in the abstract that made her wonder about the statistics that were used to analyze the data.
E. Dr. Bahiya Cham is about to start doing research on a different set of psychological concepts and wants to learn more about the different theories behind those concepts and how those theories are being used to generate hypotheses.
Labels: Research Methods and Statistics