Benchmark Quizzes for Intro Chem - The Final Results
In the fall of 2016, I implemented benchmark quizzes in my Organic 1 classes, which I wrote about here. The quizzes covered key learning objectives, were pass/fail, and required essentially 100% correct answers to pass. Although things didn’t go entirely according to plan, the outcomes for the class were extraordinary.
This past semester, I decided to modify this approach for my introductory chemistry classes. I had two lecture sessions, with about 210 students total (70 in the morning session, 140 in the evening session). I built the benchmarks around key learning outcomes for each chapter. As before, the benchmark quizzes were pass/fail, and students had to answer every question correctly to pass. Students had 3 opportunities to pass each quiz, and a sample quiz was posted beforehand. Quizzes were worth 10 points each, 10% of the overall grade. However, in order to unlock their full homework grade (also 10%), students were required to pass 6 of the 10 benchmarks. (See this article for more detail.)
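To make the grading rule concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that homework credit is all-or-nothing when locked (the article linked above describes the actual unlock rule), and all of the names are hypothetical:

```python
# Minimal sketch of the grading scheme described above. The names and the
# all-or-nothing homework lock are illustrative assumptions, not my actual
# gradebook setup (see the linked article for the real unlock rule).

def quiz_score(quizzes_passed, points_per_quiz=10):
    """Each passed benchmark earns full points; failed benchmarks earn zero."""
    return quizzes_passed * points_per_quiz

def homework_score(quizzes_passed, raw_homework_fraction, max_points=100):
    """The full homework grade (10% of the course) unlocks at 6 of 10 benchmarks."""
    if quizzes_passed >= 6:
        return raw_homework_fraction * max_points
    return 0  # assumed: homework credit stays locked below the threshold

# Example: a student who passes 7 benchmarks and earns 85% on homework
print(quiz_score(7))            # 70 (of 100 possible quiz points)
print(homework_score(7, 0.85))  # 85.0 (of 100 possible homework points)
```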
The Logistics
Students took the quizzes each week at the beginning of their lab section. I passed out the main quiz for that week to the entire section. As students turned in their quiz, they could pick up re-quizzes from the previous two weeks to complete. (For example, quiz 1 was available in weeks 1, 2, and 3 – but not afterward.)
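In code, that availability window amounts to a simple rule; here is a sketch (the function name is my own):

```python
def quiz_available(quiz_number, current_week):
    """Quiz n could be taken in weeks n through n+2 (a first attempt
    plus up to two retakes), and not afterward."""
    return quiz_number <= current_week <= quiz_number + 2

assert quiz_available(1, 3)      # quiz 1 still offered in week 3
assert not quiz_available(1, 4)  # but not in week 4
```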
Because each quiz was pass/fail, it didn’t take me long to grade (about an hour per week). I simply sorted the quizzes into pass and fail, then entered the passing grades. I kept hard-copy grade sheets for each benchmark, recording a “1” for students who passed the first week, a “2” for the second week, and a “3” for the third. This let me track performance week to week.
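For anyone who would rather keep this bookkeeping digitally than on paper, here is a sketch of the same encoding; the data structure and names are hypothetical:

```python
# Sketch of the week-passed encoding described above (names are hypothetical).
# For each benchmark, record the week (1, 2, or 3) in which a student first
# passed, or None if they never passed within the three-week window.
from collections import Counter

# benchmark -> {student_id: week of first pass, or None}
grade_sheet = {
    "benchmark_1": {"s001": 1, "s002": 2, "s003": None, "s004": 1},
}

def weekly_pass_counts(sheet, benchmark):
    """Tally first-pass weeks so week-to-week performance is easy to see."""
    return Counter(sheet[benchmark].values())

print(weekly_pass_counts(grade_sheet, "benchmark_1"))
# Counter({1: 2, 2: 1, None: 1})
```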
One of the biggest challenges was staying up to date: I had to post grades each week so that students knew whether they needed to retake a quiz. With 210 students, this was the most labor-intensive part of the experiment.
The Results
Ultimately, I didn’t make it through every benchmark. I made it completely through 6, and gave a 7th twice, but at the high-stress end of the semester I decided to give everyone full credit for the rest. Here are the results:
In general, most students who were going to pass did so in the first week or the second; very few passed in the third week, so a fourth week clearly was not justified. The one glaring exception was week 4, naming ions and compounds. In that case, I think it took most students longer to really master the ion names and the nuances of naming compounds, so even the above-average students needed a couple of extra weeks before they could get the quiz 100% correct.
To be candid, I found these results discouraging. I designed these quizzes to keep students on track with lower-stakes weekly quizzes, rather than having them wait for the exams to realize they were unprepared. I also wanted students to have the opportunity to correct their mistakes if they failed the first time. But some students simply didn’t put any effort into preparing. Perhaps that is because this is a freshman-level class and the only chemistry requirement for several majors; or perhaps the stakes were too low, or the grading scheme too complex.
Compared with previous semesters, my retention rates remained strong (>90%), but scores on the ACS standardized final were slightly lower. Despite the considerable effort, I did not see the effects I had hoped for.
But there were also bright spots. While the benchmarks didn’t motivate every student, some students came to me to figure out what they had missed and practiced with me until they could work the problems correctly. That was exactly what I was after.
Conclusions
So what do I make of this? Using benchmark quizzes had a profound positive effect in my majors organic class, but a negligible effect in my non-majors class. I suspect that much of this has to do with the psychology of the majors versus the non-majors: for highly motivated pre-med students, each benchmark was a challenge to rise to, while the non-majors simply didn’t respond to them the same way.
I’m not ready to give up on the model yet. I think it can still work in the non-majors classes, but I’ve got to tweak it for that group. This fall, I’m planning two adjustments: simplify, and raise the stakes:
| Spring 2017 | Fall 2017 |
|---|---|
| 10 quizzes @ 10 pts. each | 5 quizzes @ 20 pts. each |
| Pass/Fail | Pass/Fail |
| 3 attempts | 3 attempts |
| 6/10 required for full homework credit | No connection to homework |
My hope is that these changes will make it easier for intro students to see the importance of the quizzes and give them higher priority. We’ll see how it goes.