Grades, Students, Stakeholders: A Collision of Stories


We are entering the last week of class before final exams, and once again I sense a growing dread: in two weeks, I must enter one of five letter grades for each student into our data management system. The deliberations and angst that accompany this process have not diminished after more than twenty-five years of classroom experience; in fact, they have increased.

Granted, it really isn’t appropriate to reconsider how a grade is calculated this late in the term. The right time to devise a system for course grades is before the start of the semester, during the construction of the syllabus. Our dean has often reminded us (and I have reminded new teachers) that we are bound to follow the guidelines of the syllabus, so we need to compose that document with thought and care. And I do: each term I make adjustments to course policies, assignments, revision procedures, and the overall grade percentage for each assignment. With each tweak, I wonder whether I have landed on just the right balance, just the right approach. And now, as I do every semester, I am planning for the next set of adjustments.

Part of the problem, of course, is that grades mean different things to different stakeholders. At my community college, those stakeholders include the department and teachers of subsequent courses my students must take, the division, the college as a whole (since part of our funding is determined by successful completion rates in developmental and gatekeeper courses), our transfer institutions (our students generally transfer to one of five or six universities), employers who fund coursework, federal workforce grant programs, parents, and of course, students. In my particular local context, these stakeholders variously interpret a grade as evidence of mastery of learning outcomes, certification of readiness for the next level, completion of a certain number of required activities, engaged participation in the learning process, evidence of progress (in relation to the student’s starting point), an indication of academic promise, and evidence of effort (or even personal worth, which is often how my students see these marks).

Indeed, each stakeholder not only interprets that grade differently but may also use it to make decisions with very real consequences for the student. A grade of C, for example, is generally accepted by a transfer institution, while a D is not. But an employer who requires only that a student pass a course would accept the D. A D will permit a student to take the next course in the sequence at the college, but if that D was something of a “gift” to keep a student from losing financial aid, the student could be sent forward under-prepared for the next course. The desire to help a hard-working student maintain financial aid often motivates such adjustments, which we tacitly accept but rarely discuss. And regardless of the intersecting realities that led to a grade, the final record is a highly decontextualized transcript. A student who has made tremendous gains despite an inappropriate incoming placement could legitimately see a D as a mark of success, but the narrative that defines the grade as a success will not appear on the one official document that the student may present as evidence of learning. Many of my students need more time, and repeating a course might be their best option; unfortunately, an F (and even the less stigmatized R, or “re-enroll,” grade in a developmental class) can mean loss of financial aid, loss of employer support, or problems with visas.

There is an ongoing national discussion about community college student success, a discussion that is telling a story of failure, especially for those who begin in developmental classrooms. That story includes data: an enormous amount of data that have led to the implementation of “data-driven” policies and reforms. We certainly need data. But data don’t make sense of themselves; we need theory, experience, and careful thought in order to use data wisely, as the Community College Data website emphasizes. What we do with these data affects not widgets but students, as Adam Bessie and Dan Carino have demonstrated so beautifully.

The other community college “story” focuses on the students: students who don’t necessarily fit data patterns or trends, students for whom established courses and time sequences don’t quite work, and students whose needs conflict with well-intentioned and data-driven policies and procedures.

The intersection of these competing stories captures the quandaries I face in grading. So in two weeks I will look at my own evidence, my students’ work throughout the term, in the context of my college and my syllabus. And I will think of the students’ stories. And I will enter a grade. But it won’t be easy.


About the Author
Miriam Moore is Assistant Professor of English at the University of North Georgia. She teaches undergraduate linguistics and grammar courses, developmental English courses (integrated reading and writing), ESL composition and pedagogy, and the first-year composition sequence. She is the co-author, with Susan Anker, of Real Essays, Real Writing, Real Reading and Writing, and Writing Essentials Online. She also has over 20 years of experience in community college teaching. Her interests include applied linguistics, writing-about-writing approaches to composition, professionalism for two-year college English faculty, and threshold concepts for composition, reading, and grammar.