Assessment: The Best In-House Professional Development of The Year (?)
I say “Assessment” and you say . . . what? Did you involuntarily wince or utter a furtive moan? Like many of my colleagues, I often find the assessments required of us to be a waste of time and, on occasion, an insult to our professionalism: after all, my colleagues and I are continually assessing, evaluating, refining, and improving what we know about writing and how we can best teach it. Our ongoing “assessments,” both formal and informal, reveal how complicated, nuanced, and messy the development of writing skills can be, and we resist mandated assessments that reduce that messiness to check boxes and quantities.
We may feel “imposed upon,” as Suzanne Buffamanti, Denise David, and Robert Morris articulated in a 2006 article in TETYC. At a recent scoring workshop for a college-wide critical thinking and writing assessment, however, my colleagues showed me how the imposition can become a boon of sorts, as it did for Buffamanti and her colleagues at SUNY. At my college, a small group of cross-disciplinary faculty spent a full day scoring student performance on the Critical-thinking Assessment Test, a short-answer test developed through a National Science Foundation grant at Tennessee Technological University. The test probes students’ ability to read information and graphs, describe that information, interpret it without drawing unsupported conclusions, and consider additional evidence needed to clarify initial interpretations.
My college began to offer this test several years ago as part of a Quality Enhancement Plan on critical thinking, and we have continued to administer it biannually since then. The test is given to a sample of students in sections of ENG 112, our second-semester FYC course, and it is scored by faculty according to a carefully structured rubric designed by the test developers (those who lead the scoring workshop must complete training before conducting a scoring session). Scoring can take a full day, depending on the number of faculty raters and the total number of tests.
If you are feeling a bit of repulsion, don’t stop reading. What began as a mandate, an imposition, just one more “to-do” to finish before vacation could really commence, evolved into professional development of the very best sort: open-ended discussions of what we value and how the knowledge and skills we are trying to inculcate can be repurposed and applied in other courses that our students take. How did this happen? Quite simply, twelve faculty from various disciplines (including biology, EMS certification, business, education, history, and English) examined student writing together. When we looked at how students read and interpreted graphs, for example, we talked about similar assignments and skills in our own courses, and we discussed the extent to which our instruction transferred from one context to the next. The test provided clear data to inform that discussion: skills taught in ENG 112 should apply readily to the context of the assessment instrument, and yet time and again we saw little evidence that students were actually transferring the skills practiced in class. Examples of failure to apply target skills were discussed, analyzed, and (truthfully) occasionally used for some comic relief. Conversely, rare instances of success were also shared, analyzed, and celebrated.
Eventually, discussion turned to how we as instructors in different areas can encourage our students to apply, to repurpose, to connect. We delved into word choice, style, and clarity, along with awareness of audience and purpose. And in the context of those discussions, the value of this assessment was clear – not for what it told us about our students, but for the opportunity to talk rhetoric, language, transfer, and assignment design with instructors across the curriculum. Our shared vocabulary may not have sounded much like a “Teaching for Transfer” or “Writing about Writing” session at the 4Cs, but the conceptual focus was similar.
Community college faculty rarely have the occasion to work across disciplines on substantive issues of pedagogy or theory; time dedicated to this sort of collaboration seems to me the best possible outcome of a mandated assessment. And if the institution can offer a small stipend, a comfortable room, lots of coffee, and an amazing barbecue lunch as well, so much the better.