Professors are wont to say that the worst part of their (our) job is grading.  The subtext of the complaint is that grading large numbers of essays or exams is tedious and repetitive.  A kind of necessary evil.

The problem is that grading is self-evidently important: it involves decisions about student performance that can have far-reaching personal and professional consequences. And so in my view grading – which is part of a much wider process of student assessment – has to be thought of as a core professional and ethical responsibility rather than a distraction that (*shrugs shoulders*) just happens to go with the territory. If that’s right, then we should be talking more about effective assessment design and assessment process. Perhaps because US higher education as I have experienced it assumes our professionalism by granting individual class instructors a large degree of autonomy over how we teach and assess, we are left to our own devices and don’t seem to talk much with each other about student assessment (at all?).  Maybe we should.

I’ve just finished grading my Spring semester classes and have begun to reflect on my own assessment design and process. In my Bankruptcy class I succumbed to giving the students an end-of-semester “high stakes” final.  I don’t like “high stakes” finals as a general rule (my 1L contracts students get a graded midterm about which I may say more on another occasion).  But this was a big class and I decided to forgo a more continuous approach to assessment and instead provide opportunities along the way (for example, through non-graded quizzes) for students to self-evaluate their own learning.  I also made a very deliberate decision to test breadth rather than depth because the class was very much designed to provide a conceptual and functional understanding of how federal bankruptcy law works across the board. Neither high theory (which I can’t do) nor street level minutiae. Rather something in between. Or as a sociologist might say, neither macro nor micro but meso.

Despite my instinctive dislike of “high stakes” finals, I’m reasonably happy that the test instrument did a good job of assessing student learning, judging by benchmarking the results (once the cloak of anonymity was removed) against what I would describe as my “expectation curve”. My predicted grades on the expectation curve, based on student performance in class, levels of attendance, and my impression of student engagement, were in the vast majority of cases no more than one increment off. There were outliers (a couple of students who did significantly better or worse than expected). There are always outliers. But overall the results were consistent with what I see when I base the grade on multiple pieces of assessment.

Changing tack, I just read a piece by a college English professor, Raymond DiSanza.  Professor DiSanza doesn’t give a “high stakes” final and experiences a lack of student engagement with end-of-semester classes that don’t specifically relate to assessed work. He laments that: “In our culture of assessment and evaluation, students can’t see the value in learning anything on which they’re not going to be assessed.” This too is a familiar refrain. You might blame our students’ instrumentalism on the fact that a surfeit of standardized testing is part of their lived experience. You might blame our students’ instrumentalism (in the law school context at any rate) on the fact that they are paying tuition to study for a credential that is linked pretty directly to a career aspiration.  And I am sceptical that there was ever some past golden age in which all students embraced the joy of learning for the sake of learning.  To me, that notion conjures up the image of a leisured elite that could well afford to learn so as to learn, rather than learn so as to earn. I could be wrong about this, but I think we should be careful not to stigmatize instrumentalism. It’s not so terrible.

More interesting to me is the wider implication of the DiSanza complaint that when students are assessment focused, it detracts from a less instrumental, more engaged, more “wholesome” kind of learning. The wider implication is that there is as much danger in over-assessment as there is in under-assessment.  This was my experience in the UK, where the Quality Assurance Agency for Higher Education rules the roost and dictates, or at least frames, much of what goes on in terms of teaching and assessment practice.  There, multiple points of assessment are increasingly the norm.  So, for example, my corporate law module had four points of assessment (a short IT-based research project, two 2,000 word problem solving essays, and a final exam that counted for no more than 50% of the grade). What tends to happen in that system is that students get amped up about the non-exam portions of the grade and will therefore tune out of your class, and their other classes, around the times when those assignments are due.  Moreover, I had several students who absolutely convinced themselves that the quality of their grade depended on the non-exam portions because “I’m not very good at exams”. This added further to the angst and was a self-perception that I spent considerable time trying to dispel.

I wouldn’t want this to be read as some kind of advocacy piece in defence of the “high stakes” final. Finals have their downsides, of course. The obvious downside is the otherwise excellent student who gets sick or just has “a bad day at the office”. And to be effective and fair measures of student learning, finals must be purposively and well designed, with clear benchmarks in mind.  But if, like me, you are not overly worried about student instrumentalism, a well-designed final that brings together a semester’s worth of material and assesses learning holistically may have as good a chance as anything else of concentrating students’ minds.