Laura’s Writing Theory & Practice Blog 2015-11-15 21:19:00

Using Rubrics to Develop and Apply Grading Criteria
John Bean 2001

Bean begins by explaining that the conflict surrounding the assessment of student writing stems from a disagreement about what constitutes “good writing,” an interesting point. He then summarizes a study done by Diederich in 1974 which found that different teachers value different criteria in writing and, therefore, graded sample papers, well, ummm... differently. As a result of this important study, the composition community has come to realize the importance of using rubrics and “norming sessions” to “reach high levels of agreement on grades.”

The article continues by making clear the differences between analytic/holistic and generic/task-specific rubrics. I was familiar with these terms, but the section served as a good refresher. As a teacher, I definitely rely on analytic over holistic rubrics and prefer task-specific to generic rubrics.

Bean makes no apologies for his support of rubrics and, thus, only lightly touches on the many conflicting views surrounding them. These arguments conjured up in me the image of a movie critic using a rubric to “assess” a film. Not a pretty sight.

Although I admitted to aligning myself with analytic as opposed to holistic rubrics, here might be a good point at which to make a confession: I have read a student’s paper, maybe one that was totally “off the beaten path,” and decided upon finishing it that it “deserved” an A. Perhaps it moved me in a way unlike the others, or it somehow struck a chord in me; whatever the reason, I decided at that moment that it would get a 90% or above. In these rare instances, I remember filling in the 90% at the bottom of the grid rubric and then filling in each of the boxes with individual scores chosen to ensure the paper totaled an A. I’m now wondering whether the paper would have “earned” a similar score had I not intervened in my usual analytic rubric grading practices. I think yes, but I wonder.... This process is reminiscent of Bean’s own left-brain/right-brain grading procedure. I found his idea of “negotiating” a grade refreshing and about as close to “fair” and “accurate” as a person can get. I’m guessing this was a somewhat intuitive process for him, one he refined over time. It sounds like he uses the rubric to justify his “gut” reaction to the piece, something I don’t entirely disagree with.

Here, too, I worry about the abundance of “normalized grading” in standardized testing. I fear that trained graders are more concerned with making sure their grade matches that of another “trained grader” than with ensuring the paper receives an accurate and fair assessment. Writers are human. Graders are human. Writing is one of the most intimate and personal activities in which we can engage. I envision robots grading papers and cringe.... There needs to be a box on the rubric marked “exception to the rules,” whereby scorers can go with their gut instincts.

In his discussion of “norming sessions,” Bean notes the discrepancies between teachers’ and students’ understanding of what constitutes a “high” grade and a “low” grade. Is a C a fairly good grade? I’ve noticed a trend whereby parents and students expect nothing less than a B. Interesting.

Writing Assessment in the Early 21st Century
Kathleen Blake Yancey

A nice companion piece to Bean’s, Yancey’s article addresses the inherent conflict surrounding the assessment of student writing, particularly with regard to standardized testing. It gives an overview of both the history and the current state of writing assessment.

She begins by tracing the history of writing assessment, noting how the goal of such practices was, at least in part, to achieve a “machine-like efficiency” and to do the “fairest job of prediction with the least amount of work and the lowest cost.” Echoes of Bean’s article can be heard in her discussion of “what constitutes good writing” and “what is the best way to assess writing.” Definitely a frustrating topic for students and teachers alike.

When looking at current trends in assessment practices, Yancey brings forth a multitude of highly charged and controversial issues, including the link between critical thinking skills and writing assessment, social inequalities, ESL learners, digital writing, and self-placement.

Yancey ends by acknowledging that while the future of writing assessment remains to be seen, there is sure to be continued interest in outcomes-based assessment, program assessment, and the use of portfolios. Validity and the fostering of student writing development, she reminds us, will remain at the forefront of these practices.