Course Evals
03-05-2014 07:30 AM
At my school we call them the SPOT forms: Student Perception Of Teaching. But no matter the acronym, the existence of the course evaluation is nearly universal. I’ve just finished reviewing the fall SPOTs for all of our teachers. There’s a lot of discussion in my department about course evaluations in general and, more specifically, about how accurately they reflect the quality of teaching. Since I look through the SPOTs for all of our writing courses every semester, I have a certain view from above that influences my own thoughts on the process.

Say what you will, course evaluations reflect very specific patterns in ways that are quite useful. For example, there is one set of very regular patterns that emerges (and has emerged for as long as I have worked in writing program administration). Certain comments appear with utter regularity: this course could be improved by writing less, having more time between papers, reading more interesting essays, doing less group work, skipping peer revision. In short, students would improve the course by taking away the very elements that, we feel, make it most successful. I’ve found this specific set of comments at every school I’ve worked at; it seems less bound by institution and more somehow part of the fabric of the writing course.

Of course there are high points too: students raving about their teachers and talking about the changes they’ve experienced as writers and thinkers. Those are a pleasure to read, particularly when your primary job sometimes seems to be solving problems, handling complaints, and putting out fires.

Looking at all of the course evaluations also allows for useful professional development interventions. Whether or not a given course evaluation is an accurate reflection of a particular teacher’s skill or success, in the aggregate the evaluations allow me to see the “outliers,” those whose scores are simply anomalous. These scores then open conversations about what might have happened in a given semester and how a teacher might approach the course differently in the future.

In the end, I don’t know if course evaluations are useful for teachers. I will say they are useful for me as an administrator, not as a Big Brother tool to watch over the workers but rather as a quick indicator of the health of our program as a whole.

I’ll end with a confession: I almost never read my own course evaluations. My scores are usually really terrific, but something in me deeply fears the comments. Go figure.
About the Author
Barclay Barrios is an Associate Professor of English and Director of Writing Programs at Florida Atlantic University, where he teaches freshman composition and graduate courses in composition methodology and theory, rhetorics of the World Wide Web, and composing digital identities. He was Director of Instructional Technology at Rutgers University and currently serves on the board of Pedagogy. Barrios is a frequent presenter at professional conferences and the author of Emerging.