“Writing assessment is thus both hero/ine the practice that brings us into a relationship with our students, and villain, an obstacle to our agency.”
“Writing Assessment in the Early Twenty-First Century,” by Kathleen Blake Yancey, surveys writing assessment practices from earlier eras to the present. Writing assessment has long been defined by a single term: for the last few centuries, it has meant testing. We tested students’ readiness for college, or for a college composition course, through exams ranging from the SATs to placement tests. Though such measures might be low in validity, they offered high reliability. It came down to which assessment seemed fair but was also the cheapest. It’s interesting to see why and how these assessments came about, because I have taken the SATs, and my students took the PSATs just this Wednesday. I thought it was ridiculous then, and I think it’s ridiculous now. The students were given two reading passages with a total of 42 questions to complete in 50 MINUTES. I CAN’T EVEN DO THAT!!!
The second wave of writing assessment (the 1970s–1980s) developed the term we now know as holistic scoring. Holistic scoring began with sampling student writing; it then measured that writing reliably, providing consistent scores. It positioned classroom teachers, rather than testing experts, as the right people to judge student writing. The questions about assessment were also different in the second wave than in the first, asking things like: who is authorized to make these judgments, and what is the overall purpose of writing assessment? I love this thought-provoking question! I work countless hours with my students to help them improve their writing; I know where they started and how far they have come — making me the expert, not someone on the outside looking at one paper and assessing them.
The third wave of writing assessment focused on assessing the program rather than the individual student. It centers on whether the program is working, how to improve it, and showing others why the program should be funded. Showcasing the good and the bad of the curriculum, and why things should or should not exist, was the focus of the third wave. Lastly, one element common to all three waves was the addition of formative assessment.
The current moment in assessment is concerned with critical thinking, with how writing assessments produce racial inequalities, with students from other cultures whose first language might not be English, with digital composing, and lastly, with self-placement.
There is also a debate over outcomes for writing programs. Outcomes, proponents argue, are not objectives but measures of what students know and can do. The WPA has listed four outcome categories: rhetorical knowledge, critical thinking, processes, and knowledge of conventions. They soon added another outcome: the use of digital technologies in writing. Derek Soles and many first-year teachers respond to the idea of outcomes negatively, stating that these outcomes lack philosophies of exposition and expressionism. James Zebroski also argues that these outcomes lack knowledge of composition and rhetoric. The University of Kentucky developed a scoring guide based on five outcomes: Ethos, Structure, Analysis, Evidence, and Conventions. These are all active elements of a robust program assessment.
In 2006 the Spellings Commission focused on the four A’s: access, affordability, accountability, and assessment. The goal for post-secondary education was to give students and parents an opportunity to see the differences and similarities between institutions. The question such assessments wanted to answer was, “What value have colleges and universities added to students?” The interesting part of this section was the Collegiate Learning Assessment (CLA), which requires students to respond to real-world prompts. What is being measured isn’t really clear; it is different from the SAT score, which tries to predict how a student will perform in college, whereas the CLA tries to capture what the student has actually learned. The concept of the CLA is such a far reach, I can’t imagine it being integrated across the USA.
Then there is the AAC&U’s VALUE project, which focuses on faculty assessment of student work. This assessment drew on work created in authentic settings, like classrooms and service-learning centers, and on faculty expertise. To ensure that expertise, faculty from a wide range of institutions were invited to create a scoring guide that could be used to assess electronic portfolios. Despite the faculty involvement, the composition side was concerned about what is perceived as a global effort in a writing world where the local is valued. But then again, going completely local leaves out the larger context.
The article then mentions portfolios, and I don’t know why, but I dread portfolios. As a teacher, every year I am asked to hand in a collection of my work. Yancey mentions the benefits of portfolios to students, since they provide a sort of self-assessment. Yet students often perceive this assessment as not useful, which is relatable. Currently, reflection as both theory and practice suggests it will play a vital role in writing assessment; scholars just don’t know how yet.