I wrote in my last blog about a major overhaul to grading in my FYC/corequisite course this fall. What have I learned four weeks into the semester? All course activities need to align with the grading strategy—assignments, online discussions, feedback, and peer review.
I am committed to peer review in my FYC courses, for reasons shared by many of you: when well designed, it encourages close reading, audience awareness, and attention to writing choices. But in corequisite classes—particularly when students come from a variety of language backgrounds and classroom experiences—it can be an exercise in frustration. Some students come without drafts, and those who have drafts may resort to cheerleading and repetition of pseudo grammar rules (Don’t start a sentence with “because”), despite explicit instructions and instructor modeling of more appropriate feedback. To address these concerns, I’ve used rubrics to assess peer review, and I’ve provided feedback on the feedback. But with corequisite students in particular, the gap between what I envision and what actually happens remains significant.
So did the alternative grading make a difference?
To receive process credit for the first peer review, students needed to meet basic specifications: prior to class, they had to watch a short video about peer review and read Richard Straub’s classic, “Responding, Really Responding to Other Students’ Writing.” For both the video and the reading, I gave short verification quizzes online (to be completed at home, with open resources). Students also needed a draft of at least 750 words for their literacy narratives; most students had completed this draft during class the week before.
On the morning of the scheduled peer review, all students had an acceptable draft, and 16 of 18 students had completed at least one of the two preparation assignments; 14 completed both.
In class, students received written instructions before meeting with their groups: they were to add three specific questions to their drafts for their reviewers to address, and for each paper in the group, they were to provide 3 to 5 in-line comments or questions, using the comments feature in Google Docs, and offer short answers to the three questions posed by the author. Comments had to be written, but they were welcome to discuss as a group as well.
I allotted a full hour for the groups of three to work; I did not participate in their discussions unless they explicitly invited me to do so. Because students knew the specifications for process points, I did not hover or insist that they stay “on task” the whole time; groups that lost focus for a few moments eventually got back on track.
After class, I skimmed through the comments on the drafts. Despite the preparation and my encouragement to focus on thesis, development, and considerations of a reader’s experience, there were still many comments related to grammar and mechanics in snappy directives: Capitalize this! Run-on! Don’t start with and! There was also a considerable amount of cheerleading: Good job! Your story’s great! Fantastic.
But there were also hints of deeper engagement with the content: more specific encouragement (This part was really amazing), questions and information requests (Could you tell me more about…?), and thoughtful suggestions (Would it help to talk more about how you felt at that moment?).
I recognize how unfamiliar this style of peer review is to my corequisite students. Despite that unfamiliarity, this semester, students made reasonable efforts to meet the specifications I had set before them. Nearly all had prepared. The grading scheme seemed to have made a difference—so process points were awarded.
In the following class, I asked students to complete a reflection to earn additional process credit: Which comments from peer review (and the instructor) were most helpful, and why? They also needed to identify what information or resources they needed going forward. Students had about 15 minutes in class to complete the reflection and earn points for the activity. And the results were eye-opening:
- Some students admitted, honestly, that they had not received useful feedback. They had specific questions but did not get answers to them.
- Others acknowledged they got encouragement from students, but they were not sure what to do with it.
- A few students mentioned specific comments that helped them narrow their focus or hone their thesis.
It was clear to me that the peer review itself fell short of what I had envisioned. At the same time, student reflections demonstrated that deeper purposes were accomplished: students recognized possibilities (even if not yet realized) in peer review, they saw the power of a thesis, and they identified the types of comments that would not help improve a paper (whether given or received).
I won’t chide students who only gave grammar corrections or cheerleading. I won’t use a rubric to assign grades for the peer review. Instead, we will review their collected comments as part of the assigned preparation for our next round. I will ask them to reflect on what they need from peer review and take the initiative to make sure they get it. Students who meet the specifications will receive their process points.
Has my alternative grading scheme transformed peer review with corequisite students? Hardly. But I had full participation in the process for the first time in many semesters, with progress towards more strategic engagement for many. I’ll take that.
What’s working for you in corequisite peer review?