Psychology Blog - Page 3
Showing articles with label Research Methods and Statistics.
sue_frantz | Expert | 08-16-2022 01:09 PM
If you are looking for a new study to freshen up your coverage of experimental design in your Intro Psych course, consider this activity. After discussing experiments and their component parts, give students this hypothesis: Referring to “schizophrenics” as compared to “people with schizophrenia” will cause people to have less empathy for those who have a diagnosis of schizophrenia. In other words, does the language we use matter? Assure students that they will not actually be conducting this experiment. Instead, you are asking them to go through the design process that all researchers go through.

Ask students to consider these questions, first individually to give students an opportunity to gather their thoughts and then in a small group discussion:

- What population are you interested in studying, and why? Are you most interested in knowing what impact this choice of terminology has on the general population? High school students? Police officers? Healthcare providers?
- Where might you find 100 or so volunteers from your chosen population to participate?
- Design the experiment. What will be the independent variable? What will the participants in each level of the independent variable be asked to do? What will be the dependent variable? Be sure to provide an operational definition of the dependent variable.

Invite groups to share their populations of interest with a brief explanation of why they chose that population and where they might find volunteers. Write the populations where students can see the list. Point out that doing this research with any and all of these populations would have value. The independent variable and dependent variable should be the same for all groups since they are stated in the hypothesis. Operational definitions of the dependent variable may vary, however. Give groups an opportunity to share their overall experimental design.
Again, point out that if researchers find support for the hypothesis regardless of the specifics of how the experiment is conducted and regardless of the dependent variable’s operational definition, that is all the more support for the robustness of the findings. Even if some research designs, operational definitions, or particular populations do not support the hypothesis, that is also very valuable information. Researchers then get to ask why these experiments found different results. For example, if research with police officers returns different findings than research with healthcare workers, psychological scientists get to explore why. Is there, say, a difference in their training that might affect the results?

Lastly, share with students how Darcy Haag Granello and Sean R. Gorby researched this hypothesis (Granello & Gorby, 2021). They were particularly interested in how the terms “schizophrenic” and “person with schizophrenia” would affect feelings of empathy (among other dependent variables) for both practicing mental health counselors and graduate students who were training to be mental health counselors. For the practitioners, they found volunteers by approaching attendees at a state counseling conference (n = 82) and at an international counseling conference (n = 79). In both cases, they limited their requests to a conference area designated for networking and conversing. For the graduate students, faculty at three different large universities asked their students to participate (n = 109). Since the researchers were particularly interested in mental health counseling, anyone who said that they were in school counseling or who did not answer the question about counseling specialization had their data removed from the analysis (n = 19). In the end, they had a total of 251 participants. Granello and Gorby gave participants the Community Attitudes Toward the Mentally Ill scale.
This measure has four subscales: authoritarianism, benevolence, social restrictiveness, and community mental health ideology. While the original version of the scale asked about mental illness more generally, the researchers amended it so that “mental illness” was replaced with “schizophrenics” or “people with schizophrenia.” The researchers stacked the questionnaires so that the terminology used alternated. For example, if the first person they approached received the questionnaire asking about “schizophrenics,” the next person would have received the questionnaire asking about “people with schizophrenia.”

Here are sample items for the “schizophrenics” condition, one from each subscale:

- Schizophrenics need the same kind of control and discipline as a young child (authoritarian subscale)
- Schizophrenics have for too long been the subject of ridicule (benevolence subscale)
- Schizophrenics should be isolated from the rest of the community (social restrictiveness subscale)
- Having schizophrenics living within residential neighborhoods might be good therapy, but the risks to residents are too great (community mental health ideology subscale)

Here are those same sample items for the “people with schizophrenia” condition:

- People with schizophrenia need the same kind of control and discipline as a young child (authoritarian subscale)
- People with schizophrenia have for too long been the subject of ridicule (benevolence subscale)
- People with schizophrenia should be isolated from the rest of the community (social restrictiveness subscale)
- Having people with schizophrenia living within residential neighborhoods might be good therapy, but the risks to residents are too great (community mental health ideology subscale)

What did the researchers find? When the word “schizophrenics” was used:

- Both practitioners and students scored higher on the authoritarian subscale.
- The practitioners (but not the students) scored lower on the benevolence subscale.
- All participants scored higher on the social restrictiveness subscale.
- There were no differences on the community mental health ideology subscale for either practitioners or students.

Give students an opportunity to reflect on the implications of these results. Invite students to share their reactions to the experiment in small groups. Allow groups who would like to share some of their reactions with the class an opportunity to do so.

Lastly, as time allows, you may want to share the two limitations of the experiment identified by the researchers. First, the practitioners who volunteered were predominantly white (74.1% identified as such) and had the financial means to attend a state or international conference. Would practitioners of a different demographic show similar results? The graduate students also had the financial means to attend a large in-person university. Graduate students enrolled in online counseling programs, for example, may have different results. A second limitation the researchers identified is that when they divided their volunteers into practitioners and students, the number of participants in each group fell below the number recommended to give them the statistical power to detect real differences. With more participants, they may have found even more statistically significant differences.

Even with these limitations, however, the point holds: The language we use affects the perceptions we have.

Reference

Granello, D. H., & Gorby, S. R. (2021). It’s time for counselors to modify our language: It matters when we call our clients schizophrenics versus people with schizophrenia. Journal of Counseling & Development, 99(4), 452–461. https://doi.org/10.1002/jcad.12397
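If you also want to show students what the statistical comparison behind a two-condition experiment like this looks like, here is a minimal sketch in Python. The data are simulated: the group means, standard deviation, and sample sizes below are arbitrary stand-ins for illustration, not Granello and Gorby’s actual numbers.

```python
import random
import statistics

random.seed(1)  # reproducible simulated data; not the study's real data

# Hypothetical empathy scores (higher = more empathy) for two groups of 100.
# The means (3.2 vs. 3.6) and SD (1.0) are assumptions chosen for the demo.
schizophrenics_group = [random.gauss(3.2, 1.0) for _ in range(100)]
person_first_group = [random.gauss(3.6, 1.0) for _ in range(100)]

def independent_t(a, b):
    """Independent-samples t statistic using a pooled variance estimate."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = (pooled * (1 / na + 1 / nb)) ** 0.5
    return (statistics.mean(b) - statistics.mean(a)) / se

t = independent_t(schizophrenics_group, person_first_group)
print(f"t({len(schizophrenics_group) + len(person_first_group) - 2}) = {t:.2f}")
```

Students can rerun the sketch with the two group means set equal to see how the t statistic shrinks when the terminology makes no difference.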
Labels: Research Methods and Statistics, Thinking and Language
sue_frantz | Expert | 06-27-2022 05:00 AM
Thurgood Marshall, in his argument before the U.S. Supreme Court in Brown v. Board of Education, cited the research of Drs. Mamie and Kenneth Clark: the now-famous doll studies demonstrating that segregation affects how Black children feel about themselves. That 1954 ruling started a cascade of changes. While racism is still prevalent almost 70 years later, some of the state-sponsored systemic barriers have come down. Some of them.

Step into the shoes of a Black man charged with a crime. Your case goes to a jury trial. The jury is composed entirely of white people. And the jury room, maintained by a chapter of the United Daughters of the Confederacy, prominently features a Confederate flag. Would you feel that your jury was impartial? Tim Gilbert and his attorneys did not. For a summary of this case, read the freely available APA Div 9: Society for the Psychological Study of Social Issues column in the June 2022 issue of the Monitor on Psychology, “Legacies of racism in our halls of justice” (Anderson & Najdowski, 2022).

Gilbert’s trial was held in 2020 at the Giles County courthouse in Pulaski, TN. “[T]he jury retired to the jury room during every recess, for every meal, and for its deliberations” (p. 29 of the appeals court ruling). While there was other Confederacy memorabilia in the jury room, including a portrait of Confederate president Jefferson Davis, the defense team took primary issue with the Confederate flag. (See a photo.) “In its amicus brief, the Tennessee Association of Criminal Defense Lawyers (‘TACDL’), noting that ‘[m]ultiple courts have recognized the racially hostile and disruptive nature of the Confederate flag,’ argues that ‘a jury’s exposure to Confederate Icons denies the defendant a fair trial free of extraneous prejudicial information and improper outside influence’” (p. 19 of the appeals court ruling). In the TACDL amicus brief, they cited a 2011 Political Psychology article (Ehrlinger et al., 2011).
The article features two experiments conducted in 2008. In the first, volunteers who were subliminally shown images of a Confederate flag were less likely to express interest in voting for Obama. In the second experiment—the one that I found more compelling—volunteers who were exposed to a folder with a Confederate flag sticker, ostensibly left by someone else who had been in the room, evaluated a description of a Black man more negatively. (Read this section of the amicus brief.)

Quoted in the amicus brief was the researchers’ conclusion: “Our studies show that, whether or not the Confederate flag includes other nonracist meanings, exposure to this flag evokes responses that are prejudicial. Thus, displays of the Confederate flag may do more than inspire heated debate, they may actually provoke discrimination.” Excluded from that quote was the end of the researchers’ sentence: “even among those who are low in prejudice.”

In August 2021, the appeals court ruled that Gilbert deserved a new trial.

In Intro Psych, we can discuss this case in the first few days of class, when we discuss the importance of psychological research. It would also work to discuss the Ehrlinger et al. second study as an example of experimental design, and then add how that experiment was used to support a new trial for Gilbert.

References

Anderson, M., & Najdowski, C. J. (2022, June). Legacies of racism in our halls of justice. Monitor on Psychology, 53(4), 39.

Ehrlinger, J., Plant, E. A., Eibach, R. P., Columb, C. J., Goplen, J. L., Kunstman, J. W., & Butz, D. A. (2011). How exposure to the Confederate flag affects willingness to vote for Barack Obama. Political Psychology, 32(1), 131–146. https://doi.org/10.1111/j.1467-9221.2010.00797.x
Labels: Research Methods and Statistics, Social Psychology
sue_frantz | Expert | 05-16-2022 10:20 AM
We know that when students have a growth mindset they tend to perform better in school (Yeager & Dweck, 2020). Does what instructors communicate about mindset matter? Here’s an activity that will give students some practice in experimental design while also introducing them to the concepts of fixed and growth mindset, and perhaps even inoculating them against instructors who convey a fixed mindset. For background, read Katherine Muenks and colleagues’ Journal of Experimental Psychology article (2020). The activities below replicate their study designs.

After explaining to students the difference between a growth and a fixed mindset, ask students if they have ever had an instructor who said something in class or wrote something in the syllabus that conveyed which mindset the instructor held. For example, an instructor with a growth mindset might say, “This course is designed to help you improve your writing skills.” An instructor with a fixed mindset might say, “Either you have the skills to succeed in this course or you don’t.” As students share examples that they have heard, write them down where students can see them.

Ask students if they think that these instructor statements could affect students. If so, how? Perhaps these statements could affect how much students feel like they belong in the course, how interested students are in the course, or even how well students do in the course. Write down what students generate.

Point out to students that they just generated two hypotheses:

1. If students hear an instructor with a growth mindset, then they are more likely to feel like they belong (and/or whatever other dependent variables students suggested).
2. If students hear an instructor with a fixed mindset, then they are less likely to feel like they belong (and/or whatever other dependent variables students suggested).

Point out to students that the “if” part of the hypotheses gives us the independent variable (instructor mindset).
Suggest that the experiment they will design has three levels of the independent variable: growth mindset, fixed mindset, and a control condition with no mindset comments. The “then” part of the hypotheses gives us the dependent variables, such as feelings of belonging and whatever other variables students think could be affected.

Ask students to spend a couple of minutes thinking about how they could design an experiment that would test both of these hypotheses. Then invite students to group up with a couple of students near them to discuss. Lastly, give students an opportunity to share their designs. Remind students that conducting experiments is a creative endeavor and that there is no one right way to test hypotheses. In fact, the more ways researchers test hypotheses, the more confidence we have in the findings.

Share with students how Muenks and her colleagues conducted the first of their studies. They created three videos of what was ostensibly a calculus professor talking about their syllabus on the first day of class. The same actor delivered the same information; it was all scripted. The only difference was that for the growth mindset condition, the script included growth mindset comments sprinkled throughout, such as “These assignments are designed to help you improve your skills throughout the semester.” For the fixed mindset condition, comments included things like, “In this course, you either know the concepts and have the skills, or you don’t.” The control condition excluded mindset comments. Volunteers were randomly assigned to watch one of the three videos.
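Random assignment of the kind described above is easy to demonstrate in a few lines of Python. The three condition names match the study; the 90 volunteer IDs and the equal group sizes are invented for the demo.

```python
import random

random.seed(7)  # fixed seed so a class demo is reproducible

# 90 hypothetical participants (IDs are made up for illustration)
volunteers = [f"volunteer_{i:03d}" for i in range(1, 91)]
conditions = ["growth mindset", "fixed mindset", "control"]

# Shuffle, then deal volunteers into the three conditions round-robin,
# so each condition gets an equal number of randomly chosen people.
random.shuffle(volunteers)
assignment = {c: volunteers[i::3] for i, c in enumerate(conditions)}

for condition, group in assignment.items():
    print(condition, len(group))
```

The key point for students: which video any one volunteer sees is determined by chance alone, so pre-existing differences between people spread roughly evenly across the three conditions.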
Muenks and colleagues assessed four dependent variables: vulnerability, which was a combined measure of belongingness (five questions, including “How much would you feel that you ‘fit in’ during this class?”) and evaluative concerns (five questions, including “How much would you worry that you might say the wrong thing in class?”); engagement (three items, including “I think I would be willing to put in extra effort if the professor asked me to”); interest in the course; and anticipated course performance. (See the second study reported in their article for additional dependent variables, including feelings of being an imposter and intentions of dropping the course.)

Volunteers reported that they would feel the most vulnerable with the fixed mindset instructor, less vulnerable with the control instructor, and the least vulnerable with the growth mindset instructor. Volunteers reported that they would feel the least engaged with either the fixed mindset or control instructor and the most engaged with the growth mindset instructor. Volunteers reported that they would be least interested in a course taught by the fixed mindset instructor, more interested in a course taught by the control instructor, and most interested in a course taught by the growth mindset instructor. Lastly, volunteers expected that they would perform the worst in a course taught by the fixed mindset instructor and best in a course taught by the growth mindset or control instructor.

After sharing these results, explain that volunteers in this study reported what they think they would feel or do. For ethical reasons, we cannot randomly assign students to take actual courses taught by instructors who express these different views. However, with students who are already taking courses, researchers can do correlational research on their experiences.
In studies three and four, Muenks and colleagues conducted correlational studies in which students were asked, immediately after attending class, for their impressions of their instructor’s mindset along with a number of other measures, including feelings of belonging, evaluative concerns, imposter feelings, and affect. After the course was over, students reported how often they attended class, how often they thought about dropping the course, and how interested they were in the course discipline. Student grades in the course were gathered from university records. While there is a lot in the results to unpack, in sum, instructor mindset had an impact. For example, student grades were worst when students perceived their instructor as having a fixed mindset, but this result seems to have been driven by student feelings of being an imposter.

End this activity with this question: Is it possible that being consciously aware of an instructor’s mindset could change its effects? (Leave it as a rhetorical question or challenge students to design the study as a take-home assignment.)

References

Muenks, K., Canning, E. A., LaCosse, J., Green, D. J., Zirkel, S., Garcia, J. A., & Murphy, M. C. (2020). Does my professor think my ability can change? Students’ perceptions of their STEM professors’ mindset beliefs predict their psychological vulnerability, engagement, and performance in class. Journal of Experimental Psychology: General, 149(11), 2119–2144. https://doi.org/10.1037/xge0000763

Yeager, D. S., & Dweck, C. S. (2020). What can be learned from growth mindset controversies? American Psychologist, 75(9), 1269–1284. https://doi.org/10.1037/amp0000794
Labels: Research Methods and Statistics, Teaching and Learning Best Practices
sue_frantz | Expert | 04-11-2022 06:00 AM
Heads-up displays (HUDs) have been common in airplanes for years. (See examples here, or do a Google image search for airplane HUD.) With a HUD, information of use to pilots is projected onto the window so the pilot can see the information without having to glance down at dashboard gauges, taking their eyes off their view out the window. Automobile manufacturers are now bringing this technology to cars. With a car HUD, the driver will be able to see information such as speed, speed limit, distance to the car ahead, and highlighted pedestrians projected on their windshield.

As I read about this technology, I can’t help but wonder if the attentional demands outstrip the value, making driving more dangerous with a HUD. After all, pilots are highly trained. In one article about automotive HUDs, I was horrified to read, “And, of course, you should be able to display information from your phone onto the windshield” (Wallaker, 2022). We know that talking on a phone (hands-free or not) takes attention away from driving. A driver who is reading text messages or making a different music selection on their windshield would be seconds away from a crash.

On the other hand, if the HUD marks the car in front as green, then we know that we are following at a safe distance. If the car is red, we need to back off until it goes green. That’s real-time, useful information that is directly related to safe driving. We know from behavioral change research that immediate feedback is more useful than delayed feedback, or, in the case of the low-tech cars most of us currently drive, no feedback at all.

After covering attention, this may be a good opportunity to give your students a little practice designing experiments. Describe automotive HUD technology, including some of the information that HUDs can display.
Ask students to design an experiment that would test these hypotheses:

- Hypothesis 1: If drivers are given driving-relevant information, such as speed and distance to the vehicle in front, via a heads-up display (HUD), then they will have better driving performance.
- Hypothesis 2: If drivers are given driving-irrelevant information, such as the ability to read text messages or change music selections, via a heads-up display (HUD), then they will have impaired driving performance.

“In your design, identify each level of the independent variable, and identify the dependent variable. You may have more than one dependent variable. Include operational definitions of each.”

To help students get started, explain that researchers use driving simulators for research such as this, as it would be (very!) unethical to put research volunteers behind the wheel of a real car on a real road where they could kill real people, including themselves. An additional advantage of driving simulators is that researchers have complete control over the simulated environment. They can decide what information to display, when a text message appears, and when a virtual child runs into the street.

After students have had a few minutes to consider their own experimental designs, invite students to work in groups of three or four to discuss their designs with the goal of creating one design for the group. After groups appear to have settled on a design, invite one group to share their independent variable and its levels. Ask if other groups have different independent variables or different levels. As groups share, identify pros and cons of each independent variable and level. Take the best options offered. Next, ask a group to share their dependent variable(s). Invite other groups to share theirs. Again, identify pros and cons of each, then take the best options offered.

If you’d like to expand this into an assignment, ask students to dive into your library’s databases.
Have any research teams done experiments like the one the class just created? If so, what did they find? Reference Wallaker, M. (2022, February 6). How does a car HUD work? MUO. https://www.makeuseof.com/how-does-car-hud-work/
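If students struggle to see what “identify each level of the independent variable and operationally define each dependent variable” should produce, it can help to show one filled-in design template. The sketch below is one hypothetical group’s answers, not the one correct design; every level and operational definition in it is an assumption for illustration.

```python
# A hypothetical design template for the HUD experiment. Every entry
# below is one example answer, not the definitive design.
design = {
    "independent_variable": "HUD content",
    "levels": [
        "no HUD (control)",
        "driving-relevant HUD (speed, following distance)",
        "driving-irrelevant HUD (text messages, music controls)",
    ],
    "dependent_variables": {
        # operational definitions, measured in a driving simulator
        "braking reaction time": "seconds from hazard onset to brake press",
        "lane keeping": "mean absolute deviation from lane center, in cm",
        "crashes": "count of collisions across a 20-minute simulated drive",
    },
}

for dv, definition in design["dependent_variables"].items():
    print(f"{dv}: {definition}")
```

Filling in a template like this makes the three-level independent variable and the operational definitions explicit, and groups can compare templates line by line.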
Labels: Consciousness, Research Methods and Statistics
jenel_cavazos | Expert | 01-27-2022 08:19 AM
How can psychologists stay safe while standing up for the truth in a climate where misinformation is so common? The anatomy of a misinformation attack: How a respected psychologist ended up getting attacked online for sharing the facts. https://www.apa.org/news/apa/2022/news-anatomy-misinformation
Labels: Current Events, Research Methods and Statistics
jenel_cavazos | Expert | 10-15-2021 10:09 AM
Short and sweet guide written in easy-to-understand language! How To Tell Science From Pseudoscience: https://www.popsci.com/diy/spot-fake-science/?taid=6169b3ba0fbc4500016aa03e&utm_campaign=trueanthem_trending-content&utm_medium=social&utm_source=twitter
Labels: Research Methods and Statistics
jenel_cavazos | Expert | 07-22-2021 12:04 PM
Excellent resource for students! Gain insight into the peer review process by examining published peer reviews: https://plos.org/published-peer-review-examples/
Labels: Research Methods and Statistics
jenel_cavazos | Expert | 07-05-2021 02:55 PM
Ever found yourself stuck while writing demographic questions? If so, see this guide: Bad Gender Measures and How To Avoid Them https://devonprice.medium.com/bad-gender-measures-how-to-avoid-them-23b8f3a503a6
Labels: Research Methods and Statistics
sue_frantz | Expert | 05-28-2021 11:38 AM
If you are looking to freshen up your examples of experiments, here’s a pretty readable one from an open access journal.

Pendry, P., Carr, A. M., Vandagriff, J. L., & Gee, N. R. (2021). Incorporating human–animal interaction into academic stress management programs: Effects on typical and at-risk college students’ executive function. AERA Open. https://doi.org/10.1177/23328584211011612

Here is a summary.

Petting therapy dogs can help college students cope with stress. (2021). The Optimist Daily. Retrieved May 28, 2021, from https://www.optimistdaily.com/2021/05/petting-therapy-dogs-can-help-college-students-cope-with-stress

While you can use this as your own lecture example, if you want to make this an assignment or a discussion, here are some questions. Use whatever best matches your coverage of experiments. Ask students to read both the summary and the original research article, then answer these questions.

- What is the independent variable? What are the levels of the independent variable?
- What is the dependent variable? How many different times was the dependent variable measured?
- Were participants randomly assigned to conditions? Why is random assignment important?
- In the research article, the authors identify three limitations to the study. What are they? Which of the three do you believe is the biggest limitation? Why?
- Identify at least two more pieces of evidence you would like to have before recommending that your college or university spend money on therapy dogs.
Labels: Research Methods and Statistics
sue_frantz | Expert | 03-08-2021 09:50 AM
After covering correlations and experiments, share the February 17, 2021 edition of the PC and Pixel comic strip with your students. In the first panel, one of the characters reads a research finding: “It’s reported here that unhappy people watch more TV than happy folks.” Ask your students if they think this is correlational research or experimental research, and ask them to explain why.

In the next two panels of the comic strip, the characters wonder if it’s that unhappiness leads to more TV watching or if more TV watching leads to unhappiness. Point out that since this is correlational research, we don’t know which is true. Either or both could be true. We just don’t know. Ask your students to generate some possible third variables that could influence both happiness and TV watching separately. For example, feelings of loneliness could lead both to feelings of unhappiness and to greater TV watching (as a source of company, say).

Explain that researchers may take correlational research and use it to generate hypotheses that could be tested by conducting an experiment:

1. If people are made to feel unhappier, they will watch more TV.
2. If people are made to watch more TV, they will be unhappier.
3. If people are made to feel lonely, they will be both unhappier and watch more TV.

Working in small groups, ask your students to design experiments that would test each of these hypotheses. “Be sure to identify the independent variable and its levels and the dependent variable. Be sure to describe how you would operationalize the variables.”

Bring the class back together, and ask one group to share their design for testing the first hypothesis. Invite other groups to share how they operationalized the independent variable and dependent variable. Take a minute to walk students through what the different results from each test of the hypothesis would tell us. Point out that there is no right or wrong way to operationalize a variable.
In fact, if the hypothesis is supported across experiments that operationalize the variables differently, we can be all the more confident in the findings.

Next, ask your students if they have any concerns about intentionally trying to make people feel unhappier, whether as the independent variable or as the dependent variable. Invite students to share their concerns. If you haven’t already, introduce students to the APA’s Ethical Principles of Psychologists and Code of Conduct. The first of the five general principles is beneficence and nonmaleficence. This principle reads, in part:

Psychologists strive to benefit those with whom they work and take care to do no harm. In their professional actions, psychologists seek to safeguard the welfare and rights of those with whom they interact professionally and other affected persons, and the welfare of animal subjects of research.

The third of the principles—integrity—is also relevant here. This principle reads in its entirety:

Psychologists seek to promote accuracy, honesty, and truthfulness in the science, teaching, and practice of psychology. In these activities psychologists do not steal, cheat or engage in fraud, subterfuge, or intentional misrepresentation of fact. Psychologists strive to keep their promises and to avoid unwise or unclear commitments. In situations in which deception may be ethically justifiable to maximize benefits and minimize harm, psychologists have a serious obligation to consider the need for, the possible consequences of, and their responsibility to correct any resulting mistrust or other harmful effects that arise from the use of such techniques.

Given these ethical principles, are students more comfortable with some of the experimental designs they created than others? For example, are experiments that bring about temporary and mild unhappiness better than designs that are, say, more intense?
Does the knowledge that these experiments would bring—and the good it would mean for humanity—outweigh the harm they may cause in the short term? Be sure to describe the purpose of a debriefing. Conclude this discussion by emphasizing that these are the ethics questions every researcher and every member of an Institutional Review Board struggles with. No one takes these questions lightly.

If you’d like to give your students some library database practice, ask your students to find three to five peer-reviewed research articles on the connection between happiness and TV watching. For each article, students should identify whether the research reported was correlational or experimental (and how they know) and provide a paragraph summarizing the results.
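The third-variable idea above can also be demonstrated with a short simulation: when loneliness feeds both unhappiness and TV time, the two outcomes end up correlated even though neither causes the other. All of the numbers below are arbitrary, chosen only so the pattern is visible.

```python
import random

random.seed(3)  # reproducible simulated data

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Loneliness is the third variable; it feeds both outcomes, which never
# influence each other directly in this simulation.
loneliness = [random.gauss(0, 1) for _ in range(500)]
unhappiness = [lone + random.gauss(0, 1) for lone in loneliness]
tv_hours = [2 + lone + random.gauss(0, 1) for lone in loneliness]

r = pearson_r(unhappiness, tv_hours)
print(f"r between unhappiness and TV hours: {r:.2f}")
```

Students can see for themselves that a sizable positive correlation appears even though the code contains no causal arrow between unhappiness and TV watching.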
Labels: Research Methods and Statistics
sue_frantz | Expert | 02-16-2021 10:32 AM
At some point in college or grad school, I was given a short article that explained the different sections of a typical psychology journal article. I have a vague memory of being told to always read the abstract first, but beyond that, I don’t remember being given any guidance on how to actually read the article. Eventually I figured out that journal articles that are sharing new research are not meant to be read from beginning to end. True confession: I started doing this pretty early in my journal-article-reading career, but I felt guilty about it. I had no reason to feel guilty. I wish I had known that then.

The Learning Scientists blog has a nice collection of articles on how to read a research journal article. Take a look at that list to see if there is anything there you want to share with your students that particularly meets your goals. For example, the library at Teesside University has brief descriptions of each article section. If you’d like your students to hear from academics themselves on how they approach research journal articles, the Science article is a good choice.

Alternatively, you may choose to give your students a few easy-to-read articles and ask them to sort out the different elements of a research article. Ask your students to look at (not necessarily “read”), say, three of the following articles, all of which have 12 or fewer pages of text. Work with your librarians to get permalinks to these articles from your library’s databases.

Barry, C. T., McDougall, K. H., Anderson, A. C., Perkins, M. D., Lee-Rowland, L. M., Bender, I., & Charles, N. E. (2019). ‘Check your selfie before you wreck your selfie’: Personality ratings of Instagram users as a function of self-image posts. Journal of Research in Personality, 82, 1–11. https://doi.org/10.1016/j.jrp.2019.07.001

Gosnell, C. L. (2019). Receiving quality positive event support from peers may enhance student connection and the learning environment.
Scholarship of Teaching and Learning in Psychology. Advance online publication. https://doi.org/https://doi.org/10.1037/stl0000178 Howe, L. C., Goyer, J. P., & Crum, A. J. (2017). Harnessing the placebo effect: Exploring the influence of physician characteristics on placebo response. Health Psychology, 36(11), 1074–1082. https://doi.org/10.1037/hea0000499.supp Hyman, I. E., Boss, S. M., Wise, B. M., McKenzie, K. E., & Caggiano, J. M. (2010). Did you see the unicycling clown? Inattentional blindness while walking and talking on a cell phone. Applied Cognitive Psychology, 24, 597–607. https://doi.org/10.1002/acp.1638 Reed, J., Hirsh-Pasek, K., & Golinkoff, R. M. (2017). Learning on hold: Cell phones sidetrack parent-child interactions. Developmental Psychology, 53(8), 1428–1436. https://doi.org/10.1037/dev0000292 Rhodes, M., Leslie, S. J., Yee, K. M., & Saunders, K. (2019). Subtle linguistic cues increase girls’ engagement in science. Psychological Science, 30(3), 455–466. https://doi.org/10.1177/0956797618823670 Soicher, R. N., & Gurung, R. A. R. (2017). Do exam wrappers increase metacognition and performance? A single course intervention. Psychology Learning and Teaching, 16(1), 64–73. https://doi.org/10.1177/1475725716661872 Wirth, J. H., & Bodenhausen, G. V. (2009). The role of gender in mental-illness stigma: A national experiment. Psychological Science, 20(2), 169–173. https://doi.org/10.1111/j.1467-9280.2009.02282.x Give your students the following instructions and questions. Amend them to your liking. Skim each of these three articles: Barry, C. T., McDougall, K. H., Anderson, A. C., Perkins, M. D., Lee-Rowland, L. M., Bender, I., & Charles, N. E. (2019). ‘Check your selfie before you wreck your selfie’: Personality ratings of Instagram users as a function of self-image posts. Journal of Research in Personality, 82, 1-11. https://doi.org/10.1016/j.jrp.2019.07.001 Gosnell, C. L. (2019). 
Receiving quality positive event support from peers may enhance student connection and the learning environment. Scholarship of Teaching and Learning in Psychology. Advance online publication. https://doi.org/https://doi.org/10.1037/stl0000178 Howe, L. C., Goyer, J. P., & Crum, A. J. (2017). Harnessing the placebo effect: Exploring the influence of physician characteristics on placebo response. Health Psychology, 36(11), 1074–1082. https://doi.org/10.1037/hea0000499.supp Research articles published in journals follow some basic conventions that are designed to make them easy for researchers and students to read. Almost all research articles have these six main components, and always in this order. Using the three articles you skimmed, your goal is to identify the basic structure of research articles. For each component, answer the questions given. Abstract In less than 50 words, describe the purpose of the abstract. Introduction (usually not labeled, but it always comes after the abstract) In less than 50 words, describe the purpose of the introduction. The research hypotheses can almost always be found near the end of the introduction. Identify at least one hypothesis from each article. Method In less than 50 words, describe the purpose of the methods section. In the methods section, you will see that all of the articles contain similar information. Identify three different types of information that is common across all three articles. Results In less than 50 words, describe the purpose of the results section. If you don’t understand much of what is written in this section, that’s okay. This section is written for fellow researchers, not Intro Psych students. Copy/paste (use quotation marks!) one sentence from the results section of each article that made little or no sense to you. Discussion In less than 50 words, describe the purpose of the discussion section. References In less than 50 words, describe the purpose of the references section. 
Choose one reference from each article that, based on the title alone, you might be interested in reading. How would you go about getting that article? Researchers almost always read the abstract first. After that, what they read next depends on why they are looking at the article at all. For each of the following scenarios, match the researcher with the section of the article they are likely to read first after the abstract: Introduction, method, results, discussion, references. A. Dr. Akiya Yagi wanted to read more about the conclusions the researchers drew from having done this study. B. Dr. Selva Hernandez-Lopez is doing research on these same psychological concepts, and she’s looking for useful research articles that she may have missed. C. Dr. DeAndre Thomas is looking for different ways to measure a particular psychological concept. D. Dr. Kaitlyn Kronvalds read some information in the abstract that made her wonder about the statistics that were used to analyze the data. E. Dr. Bahiya Cham is about start doing research on a different set of psychological concepts and wants to learn more about the different theories behind those concepts and how those theories are being used to generate hypotheses.
... View more
Labels
-
Research Methods and Statistics
0
0
3,265
jenel_cavazos
Expert
01-07-2021
02:34 PM
Research used to show that money only made us happier up to a certain point. New research suggests that this relationship is changing and getting stronger over time. Why is this happening? http://ow.ly/Hgyt50D2QjZ
... View more
Labels
-
Current Events
-
Emotion
-
Research Methods and Statistics
0
0
1,166
sue_frantz
Expert
01-06-2021
02:40 PM
Learning how to learn assignment

My father used to get so frustrated with one of my brothers. My father would say, with great exasperation, “I talk to you until I’m blue in the face…” Even though my father’s “talk” was—apparently—not very effective, it didn’t keep him from talking. Over and over again. Until he was blue in the face.

How many of us instructors are like my father? We tell our students about the best study strategies until we are blue in the face, and it feels like most of our students continue to use less effective methods. Maybe we should take a different approach.

Carolyn R. Brown-Kramer (2021) asked her Intro Psych students to read one of four research articles, each of which investigated the effectiveness of a study strategy. Two articles were about more effective study strategies: distributed practice (Seabrook et al., 2005) and practice testing (McDaniel et al., 2011). Two articles were about less effective study strategies: rereading (Rawson & Kintsch, 2005) and forming mental images (Schmeck et al., 2014). Students were instructed to “write a three- to four-page paper summarizing and analyzing [their assigned article] critically” and then “drawing specific connections to how they study, how they could use the article’s results to improve their studying behavior, and their plans to adopt (or not to adopt) the strategy about which they had read.” (Contact Brown-Kramer for assignment instructions and scoring rubric.)

What impact did this assignment have on students?

- Over the course of the term, students reported using more of the more effective strategies and fewer of the less effective strategies.
- Students who read the practice testing article (McDaniel et al., 2011) did much better on the exams following this assignment than students who read the other three articles.
- Students who were given this assignment did better on the exams following this assignment and in the course overall than students from an earlier term who were not given this assignment.
- Students who reported using the more effective study strategies, and using them more frequently, did better on the exams and in the course overall.

Brown-Kramer (2021) has provided us with some pretty compelling evidence that there is something we can do as instructors to help students change their study strategies. I wonder which component of this assignment is key. For example, would the application piece be enough, or is the analytic section crucial? Or perhaps the analytic section ensures that students are reading the article carefully. In that case, would other assignment instructions that also ensure careful reading—such as answering a few targeted questions about different sections of the article—be just as effective?

Brown-Kramer’s assignment would fit as part of your coverage of research methods or memory. If you use it as part of your memory coverage, the research methods review would be a terrific application of distributed practice.

References

Brown-Kramer, C. R. (2021). Improving students’ study habits and course performance with a “learning how to learn” assignment. Teaching of Psychology, 48(1), 48–54. https://doi.org/10.1177/0098628320959926

McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103(2), 399–414. https://doi.org/10.1037/a0021782

Rawson, K. A., & Kintsch, W. (2005). Rereading effects depend on time of test. Journal of Educational Psychology, 97(1), 70–80. https://doi.org/10.1037/0022-0663.97.1.70

Schmeck, A., Mayer, R. E., Opfermann, M., Pfeiffer, V., & Leutner, D. (2014). Drawing pictures during learning from scientific text: Testing the generative drawing effect and the prognostic drawing effect. Contemporary Educational Psychology, 39(4), 275–286. https://doi.org/10.1016/j.cedpsych.2014.07.003

Seabrook, R., Brown, G. D. A., & Solity, J. E. (2005). Distributed and massed practice: From laboratory to classroom. Applied Cognitive Psychology, 19(1), 107–122. https://doi.org/10.1002/acp.1066
... View more
Labels
-
Memory
-
Research Methods and Statistics
0
0
2,330
alanna_smith
Community Manager
01-05-2021
08:06 AM
Dr. Kelly Goedert and Dr. Susan Nolan will describe how the use of statistics in psychological science is changing as the field undergoes an open-science revolution. They will highlight ways to update your undergraduate statistics course that center on an ethical approach to analyzing, interpreting, and reporting data, and will offer engaging examples and activities you can use in your classroom.
WATCH THE RECORDING
... View more
Labels
-
Research Methods and Statistics
-
Teaching and Learning Best Practices
-
Virtual Learning
0
1
8,604
sue_frantz
Expert
09-30-2020
10:39 AM
There must be a cognitive bias that explains why it is so hard for us to learn that correlation does not mean causation. Okay, we can learn it fine. Applying it consistently is the hard part.

Last week here in the Pacific Northwest we had a few days of our typical late fall weather: highs in the low 60s (upper teens Celsius) and rainy. And that’s when I discovered that I’m not ready for late fall. Fortunately for me, it’s early fall, and we’re now looking at a 10-day stretch of mid-70s (low 20s Celsius) and sunny.

This morning, I opened my news feed to see this article: “Thinking like a Norwegian may help you cope with a winter lockdown.” Oslo's average temperature in January is 32 degrees Fahrenheit (0 Celsius), and they get about 6 hours of daylight, so I thought, “Perfect! Let’s see what the experimental evidence says. I could use some tips!”

Researchers gave a sample of Norwegians a questionnaire asking about their attitudes about winter, their mental health, and their life satisfaction. Those who had better mental health and higher ratings of life satisfaction had better attitudes about winter. Those who had poorer mental health and lower ratings of life satisfaction had poorer attitudes about winter. This correlational—not experimental—research doesn’t help me at all.

After covering correlations and experiments in Intro Psych, in a synchronous or asynchronous discussion, provide students with this discussion prompt:

Read “Thinking like a Norwegian may help you cope with a winter lockdown.” Is this article describing a correlational study or an experimental study? How do you know? The article suggests that if you change how you think about winter, you will feel better. In other words, they’re saying that how you think about winter will cause you to feel better. Based on this research study alone, is that conclusion warranted? Why or why not?

Monitor the group discussions.
If they’re falling into the trap of thinking “it makes sense that how one thinks about winter would affect one’s mental health and life satisfaction,” prompt students with this:

This was a correlational study. They measured attitudes toward winter and measured levels of mental health and levels of life satisfaction. They found positive correlations. When attitudes toward winter were high, mental health was better (and vice versa). When attitudes toward winter were high, life satisfaction was high (and vice versa). It is possible that those who already have better mental health and higher life satisfaction would see winter more positively. It may not be that attitudes toward winter cause life satisfaction/mental health. It may be that life satisfaction/mental health cause attitudes toward winter. Or maybe there are third factors. Can you think of any third factors that could cause people to have positive attitudes toward winter and have better mental health and higher life satisfaction? Given that this research was correlational, what would have been a better article title?

If you’d like to extend this discussion or spin it off into an assignment, give students this prompt:

While correlations cannot tell us which variable is the cause and which is the effect, experimental research can. Here are two hypotheses. Design an experiment that would test each.

Hypothesis 1: Seeing winter in a more positive way can cause people to feel more satisfied with their lives. Design an experiment that would test this hypothesis. Be sure to identify the independent variable and dependent variable.

Hypothesis 2: Feeling more positive about our lives can cause us to see winter in a more positive light. Design an experiment that would test this hypothesis. Be sure to identify the independent variable and dependent variable.
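For instructors who also teach statistics (or whose students know a little Python), the third-factor point can be made concrete with a short simulation. This is a minimal sketch using made-up data, not data from the Norwegian study: a single hypothetical underlying factor (here labeled "disposition") drives both winter attitudes and life satisfaction, and the two end up positively correlated even though neither causes the other.

```python
import random

random.seed(1)

# Hypothetical third factor: a person's general disposition.
# It influences BOTH variables; the variables never influence each other.
n = 1000
disposition = [random.gauss(0, 1) for _ in range(n)]
winter_attitude = [d + random.gauss(0, 1) for d in disposition]
life_satisfaction = [d + random.gauss(0, 1) for d in disposition]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(winter_attitude, life_satisfaction)
# A substantial positive correlation appears despite no causal link
# between winter attitudes and life satisfaction.
print(f"r = {r:.2f}")
```

Swapping in a different third factor, or removing it (generating the two variables independently), shows the correlation appear and vanish accordingly, which maps directly onto the "can you think of any third factors?" question in the discussion prompt.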
... View more
Labels
-
Research Methods and Statistics
0
0
1,669
Topics
-
Abnormal Psychology
19 -
Achievement
3 -
Affiliation
1 -
Behavior Genetics
2 -
Cognition
40 -
Consciousness
35 -
Current Events
28 -
Development Psychology
16 -
Developmental Psychology
34 -
Drugs
5 -
Emotion
50 -
Evolution
3 -
Evolutionary Psychology
5 -
Gender
19 -
Gender and Sexuality
6 -
Genetics
12 -
History and System of Psychology
5 -
History and Systems of Psychology
7 -
Industrial and Organizational Psychology
50 -
Intelligence
8 -
Learning
68 -
Memory
38 -
Motivation
14 -
Motivation: Hunger
1 -
Nature-Nurture
7 -
Neuroscience
42 -
Personality
29 -
Psychological Disorders and Their Treatment
21 -
Research Methods and Statistics
98 -
Sensation and Perception
43 -
Social Psychology
128 -
Stress and Health
55 -
Teaching and Learning Best Practices
58 -
Thinking and Language
16 -
Virtual Learning
26