“Writing Assessment in the Early Twenty-First Century,” by Kathleen Blake Yancey, points out the strengths and weaknesses of various approaches to writing assessment. Throughout the article she traces the evolution of standardized testing from placement tests that were “typically a multiple choice test of editing skills,” to a phase that evaluated student writing samples using a holistic scoring method, and ultimately to the late 1980s, when the typical vehicle for assessment was a portfolio of writing. She noted that different institutions applied various approaches to portfolio construction, and thus to assessment. Furthermore, she explains that the federal government’s role moved from a disinterested stance to one of developing standards and measurements aimed at influencing the practices of educational institutions. These standards and measurements encourage educational institutions not only to adopt an outcomes-based model but also to adhere to standards that allow for comparability across institutions. Compositionists, who are concerned with local contexts and values, have developed local program assessments to enhance their curricula. Other institutions have developed new models of assessment that are more sophisticated and provide evidence of validity. In recent years many educational institutions, both high schools and universities, have been using portfolios as a centerpiece of assessment, both for student learning and for measuring the effectiveness of their teaching methods.
“War in Translation: Giving Voice to the Women of Syria,” Lina Mounzer’s essay on the urgency of telling the stories of conflict, bled with the wounds inflicted by bullets and oppression. When I finished reading the piece, my heart ached, and I wept. Unlike the convenient soundbites used by the news, she unfolded the meaning of words, in the context of poetic prose, to serve as vessels that transmit the suffering of being subjected to war in Syria. I dare not paraphrase central accounts in the piece, out of respect for the authenticity of the voice and the gravity it conveys from having survived such terrors.
The piece opens with the following: “I have been threatened, beaten, strip-searched, thrown in prison, tortured and made to watch as my mother knelt weeping at the dirty feet of tribal leaders to beg for any information about my kidnapped father. I have waited at countless checkpoints, praying that no one finds the bread, the money, the schoolbooks, the chocolates I have hidden in my bag, on my body, trying to smuggle them through to people on the other side. I have buried seven husbands, three fiancés, fifteen sons and a two-week old daughter I finally agreed to have at 42 for my husband’s sake, to bring life back to his tongue after we laid our two grown, handsome sons to rest, one after the other, and grief took all his words away.”
The media keeps us removed from the human torment. The language that is used is cold and brief. The author, Lina Mounzer, recognizes the danger in the media distancing listeners from the inhumanity taking place: “We know how language itself can wage war against us, by mimicking the same casual dehumanization of a bomb. Everyone you know and love: terrorists. Militants. Strategic targets. Collateral damage. The leveling of your neighborhood: an unfortunate mistake. The razing of your city: the birth pangs of a new Middle East. Seven dead, twenty wounded. Forty-one dead, ninety-three wounded. 1.2 million refugees. 2,000 migrants.”
Unlike the media, throughout the piece the skillful author ensures that the reader understands the nuance and weight that words signify to the victims of the war: “A barrel will no longer ever be a barrel again; shrapnel will always explode from it. The word mustard will forevermore carry a whiff of gas, rashing your skin, smarting your eyes. When you say Sabra, or Shatila, you are not referring to a place, but to a heap of dead bodies shot indiscriminately and tossed aside like worn rags.”
The Yancey article about writing assessment did a really good job expanding on the ideas from some of the other articles we have read this semester. Especially with the readings on ranking and evaluating, I felt this was a great way to build on the conversations that were born out of those readings. Yancey begins by addressing holistic scoring in the 70s and 80s, then describes the growth and acceptance of portfolio writing up through the turn of the century. The more we discuss and read, the more I appreciate the utility of having a portfolio at the end of a term: it aggregates into a sort of illustration of the writer’s growth over the course of the term. I also find that using portfolios as a mode of assessment allows the evaluator to get a much better feel for the student’s voice. As I have talked about in previous posts, I find voice to be an imperative intangible for good writing; writing can only be good if it is authentic, coming from the writer’s true thoughts and perspective. Yancey goes on to talk about the “construct of writing” and “consequential validity.” First, the construct of writing. As it is described in the article, my interpretation is that the construct is essentially a set of guidelines that aid in the assessment of writing by creating a uniform standard across what is being written. I liked what Paretti and Powell said: “The ability to write one kind of document does not automatically guarantee the ability to write another kind of document.” This spoke to how dynamic writing can be. For me, this really hit home. In my previous work, I got very used to writing in a “business formal” kind of way. Emails, reports, plans, implementations: all are things I have been responsible for writing before, and through the time and practice I have under my belt, I have become very good in these modes of writing.
To the above point, however, I do not have a lot of experience in purely academic or thoughtful creative writing. In all honesty, I feel somewhat inferior to my classmates, as most if not all of them, it seems, come from disciplines that fostered these types of writing and the kind of thinking that allows one to see writing through those lenses. This is a major reason why I am so excited to meet every Monday night. The dialogue and the sharing of our ideas and experiences allow each of us to grow as writers in ways we never would have otherwise. Moving along to the idea of consequential validity: this speaks more to how the evaluation is viewed in its ability to help writers learn. Samuel Messick is quoted in the article as saying, “To what degree are decisions made about students accurate? To what degree are the decisions made about students appropriate?” I think this sets a great foundation for the theory of assessing writing. Moving to current modes of evaluation, Yancey references a program at Washington State that examines the assessment of writing and critical thinking. Critical thinking is such an imperative for being able to write at the level we have been discussing all semester long. The next theme, which I think complements our class particularly well, is social inequity and how writing can fight, or perpetuate, these kinds of stereotypes in our society. Considering all of the work we are doing with Equity Unbound, this is a poignant theme for our class and our understanding of writing. For me, these were the most important threads of the reading, as the inquiry into the future of writing assessment is still very raw and hypothetical.
Switching gears, I would like to briefly touch upon “War in Translation” and the collaborative annotation dialogue we have been having with people all around the world who are also doing work within our Equity Unbound network. First off, the reading is compelling. Each description is more vivid and haunting than the last. In the very beginning of the article, we get a look into the fear felt by all, especially the women, in this community. As the article continues, it becomes even harder to fathom what it is to live in a place where freedom is not accessible to everybody. The emphasis placed on Mounzer to learn and write in English as opposed to her native Arabic, to me, sheds light on a more dormant theme: that an escape from such a place can come in many ways. That is not to say she is not proud of the Arabic language and her ability to use it. Rather, English is a sort of vehicle for her to communicate with so many others about the struggles of wartime.
As for the annotations, this is one of the better and more thought-provoking exercises I have been a part of. The brain trust comes together with amazing takes and thoughts from thinkers all around the world, each with a unique perspective. Much like we discussed earlier with the idea of “One Story,” this illustrates the point that each person draws on a unique perspective and life experience to come to their own conclusions. Moreover, I like that people can respond to one another’s thoughts. I think it is this dialogue that makes this exercise so unique and exciting for those who are not only aspiring writers, but aspiring thinkers as well.
Reading this week’s article, “Writing Assessment,” by Kathleen B. Yancey, was interesting for me because I found myself building connections with the content. The article talks about how writing assessments focus not only on evaluating the writing of students, but also on evaluating schools on their teaching pedagogy and approaches. It also states that assessment is at the heart of composition studies, both for students and schools, since it provides students with a sense of reflection on their writing and schools with information on how students are learning to write.
One reason I found myself agreeing with the claims of this article is that it argues writing assessments help student writers with reflection and feedback on their writing, which is important for their skills and knowledge. This takes me back to two forms of writing assessment I have become familiar with over the years as a student of composition. These forms, which the article supports, are as follows: one in which a student’s writing is assessed and evaluated by others, and another in which the student completes an assessment of his or her own writing. Whatever it might be called, we can say that writing assessments are very important for writers, so it is hard to argue that assessment should not be called “the heart” of composition studies. Either way, I think they work as a way of reflecting on our writing as students. The first form of assessment I mentioned above is key when it comes to helping writers further develop their skills and knowledge in the field. The first reason I believe this is that, as a student of writing, I often write for my classes, but I do not often receive feedback that is measured and gauged by a set writing system. What I mean is that I don’t usually receive evaluated feedback on my writing that is measured by a pedagogical system (one that takes into account various aspects of writing) and serves as a standardized format. But luckily, I have taken classes where my work has been part of strict assessment and evaluated along various dimensions: grammar, syntax, voice, word choice, fluidity, clarity of ideas and concepts, outline, sentence construction, style, audience consideration, and the list goes on. Some of these assessments have focused on only a few items from the large list of things that could be evaluated, while others have focused on more.
When my writing is evaluated like this, I always get a type of feedback that is somewhat different from other kinds. This is due to the fact that this type of evaluation is not meant to be personal, but rather general: it applies to the writing of all students, grading on a gauge that takes into account the various things that go into writing.
Similarly, the other type of assessment I recognized in the article, and one I am personally familiar with, is when students are made responsible for their own writing assessments. Although this one might not be as formal as the first one I mentioned, it is still a way for students to reflect on their writing and gain some feedback. Thinking back to classes I took in previous semesters, I did have to complete this type of assessment at the end of the semester. I remember feeling weird about it the first time I was introduced to it, since normally another party is responsible for evaluating you, not you yourself. But with time I became used to it, and I came to learn that it is just as useful in providing me feedback (in this case, personal feedback) on my own writing and my process. In this case, I enjoyed being responsible for my own evaluation and judging my writing fairly, setting aside the fact that I am the writer. It forced me to think about the things I knew were probably incorrect or could be improved for more efficient writing. It also allowed me to be honest with myself about where I feel I am weak and what I need to strengthen in my writing.
Today I am still subject to these two forms of assessment, which come together as a whole in the end, as they work toward the same goal: to provide the student with reflection and feedback on their skills and writing. I agree that writing assessments are important for writers because they give us a better understanding of where we stand as writers, the things we do correctly, and how we can improve, according to the socially accepted and expected standards for writing.
Writing Assessment in the Early Twenty-First Century
Writing assessments have continued to evolve in the early 21st century. Assessments are a valuable method in the pedagogy of writing. They measure students’ writing performance, and they help students improve their skills. I work with high school students, and I find that they have strengths in certain areas of writing and weaknesses in others. Writing assessments enable students to evaluate their work, learn from their errors, and edit when necessary. Writing assignments also enable students to discover which types of writing they are skilled in. For instance, I am skilled in writing poetry, while I am not skilled in writing laboratory reports. “The ability to write one kind of document does not automatically guarantee the ability to write another kind of document; the successful completion of a generic ‘research paper’ does not ensure the successful completion of a journal article or business proposal or laboratory report” (Paretti and Powell 4).
In addition, writing assessments are beneficial for teachers because they provide feedback on their teaching. Sue McLeod recalls: “when I began consulting at other institutions that wanted to start WAC [writing-across-the-curriculum] programs, I always included assessment as part of what I recommended they should do, a feedback loop into the program that would let them know what they were doing well and where they needed to improve” (Yancey 171). Teachers must take on the responsibility of attending professional development workshops and staying informed of new studies and developments in writing pedagogy.
Using Rubrics to Develop and Apply Grading Criteria
Grading students’ assignments without a grading system can be time-consuming and confusing. A teacher must decide whether an answer is worth a whole point or maybe a half point. However, if the grading system is established before the assignment is administered, the teacher will not have to estimate grades; the percentages and letter grades will already be established. Rubrics also empower students. A clear and detailed rubric displays the requirements, instructions, and grading of the assignment. I asked my students if they liked being graded with a rubric. Most preferred it, because a rubric makes clear what they need to do to successfully master the assignment. I am in favor of using rubrics to develop and apply grading criteria. I found myself drawn to the “Generic Writing Using Analytic Method” (Bean 21) and the “Generic Rubric for Summary Writing Using Holistic Method” (Bean 272). These rubrics have criteria that fit the types of Sociology and Investigating Careers assignments I give to my students. Rubrics improve communication between students and teachers in three particular ways:
“The descriptive criteria on the rubric provide helpful feedback to students and allows me to write shorter marginal and end comments.”
“My rubric scores on a paper make my revision conferences with students more efficient because the circled scores for each trait indicate at a glance the main problem areas in the paper.”
“By analyzing the distribution of rubric scores for each trait among the set of papers, I can identify general patterns of strength and weakness in student performance and develop ideas for improving instruction” (Bean 282).
The teacher must be organized when discussing the criteria of an assignment with a class of at least 20 students, as well as when discussing a student’s progress in one-to-one conferences. It is very beneficial to have a system, such as a rubric, established for student-teacher conferencing.
“Important questions that continue to vex us today:
- Who controls the writing assessment?
- What the construct is that is being tested?
- How we define validity and how that matters?”
–Kathleen Blake Yancey, “Writing Assessment in The Twenty-First Century”
What is good writing? Who decides what good writing is? How do we decide what good writing is? In her essay “Writing Assessment in The Twenty-First Century,” Kathleen Blake Yancey provides a history of writing assessments in three waves: 1.) Before the 1970s, there was a focus on testing, which was used to make high-stakes decisions regarding college admissions and college placement; 2.) From the 1970s to the 1980s, there was a focus on the writing process and holistic scoring; and 3.) From the late 1980s until the turn of the century, there was a focus on portfolios. Current waves in writing assessment include writing and critical thinking, writing and social inequalities, and the use of digital technologies in writing.
The first wave, testing, did not work because it was a highly stressful “reductive practice” done to my students and students worldwide. Students were overly concerned with the SAT score and not necessarily with good writing. Highly competitive students spent too much time studying for the writing section of the SAT by memorizing grammatical terms such as misplaced modifiers, dangling modifiers, split infinitives, and passive voice. As Yancey points out, the writing section of the SAT in the 1980s was an “indirect measure” of writing in which students were asked to find errors on a multiple-choice assessment. The assessment did not measure whether students could write clear prose; instead, it measured whether students could identify errors. Therefore, college professors realized that their students doing well on the verbal section of the SAT did not correlate with good writing.
In the second wave of writing assessment, the College Board decided to institute an essay on the SAT. They used holistic scoring to ensure consistent scoring. At first, students were asked to write an argumentative essay; then they were asked to write a rhetorical essay. Students were given 30 minutes to write a formulaic, five-paragraph essay. Once again, this type of assessment did not measure good writing. Now, in 2019, Ivy League colleges do not require students to take the writing section of the SAT. Instead, some colleges such as Princeton require students to submit a graded process essay with the teacher’s comments, which I believe is a step in the right direction.
In terms of current moments of writing assessment, Yancey briefly mentions that there is research at Washington State University to examine critical thinking in writing assessments. I am interested in finding out more about their research because most teachers, including myself, assume that writing requires critical thinking. As for writing and social inequalities, I am very interested in Sandra Murphy’s research on the cultural construct of writing and the validity of scoring non-native speakers. How do we fairly assess non-native writers? I have ESL students in my classes. To ensure their success, I give them extended time on writing assignments and opportunities to rewrite their essays. When I grade their essays, I take into consideration that they are ESL students. However, I do not believe that all teachers believe in this humanistic approach, which also sheds light on social inequalities and writing. We need to make sure that there is equity in the evaluation of writing assessments. We cannot penalize students who are non-native speakers. We need to provide these students with accommodations so that they can experience writing success.
I am most excited about the most current wave of writing assessment: the use of digital technology in writing. Teachers like me need to be taught technological writing tools so they can teach their students. To illustrate, I have successfully used various digital technologies in my classes, such as Google Sites and Voice Recorder. Students like technology. I intend to use more digital tools in my classes to help my students become digital citizens. All students, regardless of whether they go to college, need to learn the use of digital technology in writing. (Please read “Should Everyone Go to College?”: https://www.nytimes.com/2019/01/16/learning/should-everyone-go-to-college.html)
Therefore, writing assessment needs to be more inclusive. It cannot continue to be reductive.
- Who controls the writing assessments? (Decision-makers need input from students, parents, teachers, business leaders, and professors.)
- What the construct is that is being tested? (Decision-makers need to realize that not all students will want to attend a traditional four-year college. The writing assessments need to prepare them for the world. They may not necessarily need to learn how to write a rhetorical essay. However, they need to learn to write a business letter, an email, a resume, or a cover letter.)
- How we define validity and how that matters? (If we ask students, parents, teachers, business leaders, and professors what matters in terms of writing, it may lead us closer to an answer. Moreover, by including the voices of all the stakeholders, writing assessments will be more authentic, more equitable, and more valid.)
So, let’s sit back and wait for the next wave.
The reading for this week’s class was a little tricky for me; the pressure of grad school is kicking in! Thanks to the encouragement and humor of my fellow grad students, there is still hope for me! So let’s jump into some key points I found interesting in the article.
THREE WAVES OF WRITING ASSESSMENT
I found this portion of the article very interesting because I can see how these waves still appear in assessments today, in practice, in my own experience as both an educator and a student.
The first wave we are presented with in the article is:
Testing: Placement Assessment
When reading about this, I could not help but think of my past and current students at the school I work for. Referring back to my course World Englishes, the English language continues to grow and spread as the years pass. With that being said, it is pretty hard to determine the true standard form of English, unless it is in an academic setting. Unfortunately, not every student globally is taught the standard form of English that most colleges and universities are looking for. Here are also some key points I found interesting in this section:
- College, “regular” – first year composition course
- Looking back at my little rant above, this bullet point more so targets international students who come to these American universities with their own understanding of English. The misconception is that they do not understand English (according to the assessments), when in all reality they have a different viewpoint on what standard English is
- How well a student can edit another author’s writing
- How fun was this part of the SAT!
- Pro – can evaluate a student’s level
- Of course I had to add in an upside, because it is fun to play devil’s advocate. But in all seriousness, there is definitely an upside to writing assessments. We are able to see the mile marker for the students, so we as educators know where to start from and how to move forward.
The second wave is: The Writing Process
Out of all the waves, I can say this one is truly my favorite. We actually get to the nitty-gritty of how we are physically, hands-on, prepping the students for these writing assessments.
- Holistic scoring: sampling student writing, scoring guide
- Instead of just throwing the students into the wilderness of the assessment, sample writing and scoring guides are shown. The hardest part about writing is getting started, so imagine being thrown a writing prompt and not knowing what the graders are looking for. Being able to refer to, or even look at, what graders want to see in writing is a heads-up in the game!
- Teachers teaching the prompt questions
- This takes me back to the days of my 11th grade school year, when we were getting prepped for the HSPA. This test determined if we were allowed to move on to 12th grade. Since this test was so dire, an entire week was dedicated to preparing us for it. ELA teachers from all grade levels prepped us for the types of essay prompts we would be tested on, which made a big difference in how I approached the test!
The third wave is: Attention to multiple texts
What I took away from this wave is two points:
- Looking at writing in its many different forms
- Of course there is such a thing as writing outside the world of academia (Happy National Day on Writing!). As I like to continue to stress, there is writing on so many different platforms, and our students are not introduced to them. Of course, this piece of information is a bit dated, but it still applies to our students today!
- “You can write a successful thesis but not a journal article”
What is WAC?
This acronym, WAC, continued to appear throughout the reading, so I thought it noteworthy to jot it down in my notes. When I came back to my notes, I set a reminder to look up what it means.
In its simplest form, Writing Across the Curriculum (WAC) recognizes and supports the use of writing in any and every way and in every and any course offered at a learning institution. A WAC Program in its simplest term is any organized, recognized, and sustained effort–no matter how modest in people, resources, and funding–to help faculty in any and every course use writing more deliberately and more often. https://wac.colostate.edu/resources/wac/intro/programs/
This program reminds me a lot of my current program, AmeriCorps. As said in the article, WAC is a grant-funded program meant to produce results in bettering students’ writing. These results determine future funding. The same goes for AmeriCorps, a government-funded program designed to improve students’ writing and reading abilities.
From the reading, Yancey proposed the three purposes of program assessments as the following:
- To see what the program is doing well
- To determine how the program might improve
- To demonstrate to others why the program should continue to be funded
The Current Moment
To end my blog, I thought that addressing some of the minor themes about The Current Moment would be quite fitting.
- Consider how writing and critical thinking are related
- I am sorry, but this theme is a no-brainer. So many assessments look for the product and not the thought process behind it. Critical thinking is the essence, spirit, and soul behind true writing. If there is no critical thinking behind writing, then we are producing one and the same kind of writer.
- Social inequalities (racial inequities)
- I thought I would give a little example of this from a handout from my World Englishes course (as you can tell, I really enjoy this course!). The handout is about a woman who starts her own business. As a class we identified the formal schemata for this writing. Then we had to look at it from the perspective of a global setting. In most cultures outside of the US, many of the actions described in this writing are outside their cultural norms. How can we expect students to relate to a reading depicting experiences they have never had?
This week’s reading was admittedly the toughest for me personally. It isn’t an uninteresting topic, just one I have little to no experience with. I’ve tried to keep up with any connection to the pedagogical that I can based on my life as a not-educator, but here I’m stumped. I will attempt to speak as thoughtfully as I can, but recognize simultaneously that I’m walking on conceptual eggshells as someone without experience speaking on the subject.
As Yancey sums up a bit into the reading, outcomes can be viewed more as a “statement on what students know they can do” and less as a tool for educators. In this, at least, there remains some agency for the student, as the outcome is then viewed through the lens of the growth of, and academic statement by, the student. I think this is the most positive way of viewing the notion of assessment because, as the article further points out, the result can be more personal and empowering, and less impersonal and prone to viewing students as tools.
But my own positive opinion on assessment seems to waver by this point as I tend to agree more with the dissenting opinions on quantitative assessment moving forward. I was even entertained, I will say, by the use of the word “eviscerate” when describing the effects that assessment has had on the more “qualitative” humanities as the article points out. And this has been true enough for me in my life.
One thread that I did enjoy reading about was outcome assessment given more as a commentary on educational reform than on student placement. The section about the University of Kentucky scoring guide, which functions through revision and denotes what the school itself could change around the students, gave a good example of what scoring could accomplish. The point was that the outcomes were not meant to affect the students themselves or to administer judgment on them, but instead to show the school what needed work and then to help the students with those things.
Later in the article, the question of local vs. national standards and practices was raised, which I also tuned closely into. The question is, and I can’t claim to have much of an “answer” for it myself: who should decide what criteria are focused on at any particular post-secondary school? I’m less inclined toward the somewhat mechanical national scholastic identity, which is sometimes more of a political being than one truly attempting to positively impact students, but the local may be lacking as well. As Yancey points out, what local bodies sometimes lack is greater context. I do believe that greater context can make a big difference in education, since the smaller body may have its own agendas too, which the greater context could circumvent; at that point, the diplomatic call for a meeting in the middle seems to be the answer (however dreamlike that solution may be).
Lastly, the topic of the portfolio bobs to the surface of the conversation, as it has done in several of our readings now. I've become a fan of the portfolio. It offers you a choice of the things you feel are your best, and in the process likely asks you to question why that is. But as the reading indicates, “a portfolio is not a portfolio, is not a portfolio.” What one educator thinks is the right way to port a folio may be counter to what any other thinks is the right way. Is it a question of teacher-knows-best? Or is it more of a self-reflective exercise for the student? I'd lean toward the self-reflective. Even at the risk of the assignment's grade being a little opaque, as portfolio assignments can be, the student may discover something of — dare I say — a voice in their writing as they search through the period's pieces. But this still may not serve the strict interpretation of an “assessment,” as there are likely parameters for any portfolio that one's voice may not fit into. That would then only be a macrocosm of the same conflict present in any individual assignment.
That was all maybe a little vague, but I'm going on ideas and not so much on practice. Still, as I consider myself a future educator, these are discussions I'm fortunate to be a part of in this curriculum. I look forward to our discussion tomorrow and to continuing thence.
“Writing assessment is thus both hero/ine, the practice that brings us into a relationship with our students, and villain, an obstacle to our agency.”
“Writing Assessment in the Early Twenty-First Century” by Kathleen Blake Yancey traces writing assessment practices from earlier eras to the present. Writing assessment has been defined by a set of terms; for the last few centuries, that term has been testing. We have tested students' readiness for college, or for a college composition class, through instruments ranging from the SATs to college placement exams. Though such measures might be low in validity, they offered high reliability. It came down to which assessment was fair but also the cheapest. It's interesting to see why and how these assessments came about, because I have taken the SATs, and my students took the PSATs just this Wednesday. I thought it was ridiculous then, and I think it's ridiculous now. The students were given two reading passages with a total of 42 questions to complete in 50 MINUTES. I CAN'T EVEN DO THAT!!!
The second wave of writing assessment (the 1970s–1980s) developed the term we now know as holistic scoring. Holistic scoring began with sampling student writing; it then measured that writing reliably, providing consistent scoring. It positioned teachers, rather than testing experts, as the right people to judge student writing. The questions about assessment were different in the second wave than in the first, asking things like who is authorized to make the best judgments, and what is the overall purpose of writing assessment. I love this thought-provoking question! I work countless hours with my students to help them improve their writing; I know where they started and how far they have come — making me the expert, not someone on the outside looking at one paper and assessing them.
The third wave of writing assessment focused on program assessment rather than the individual student. It centers on whether the program is working, how to improve it, and showing others why the program should be funded. Showcasing the good and bad of the curriculum, and why things should or should not exist, was the focus of the third wave. Lastly, one common thread across all three waves of assessment was the addition of formative assessment.
The current moment of assessment is looking at critical thinking, at how writing assessments produce racial inequalities, at students from other cultures whose first language might not be English, at digital composing, and lastly at self-placement.
There is also a debate over outcomes for writing programs. Outcomes, people argue, are not objectives but a measure of what students know and can do. The WPA has listed four types of outcomes: rhetorical knowledge, critical thinking, processes, and knowledge of conventions. They soon added another outcome: the use of digital technologies in writing. Derek Soles and many first-year teachers respond to the idea of outcomes negatively, stating that these outcomes lack philosophies of exposition and expressionism. James Zebroski also argues that these outcomes lacked knowledge of composition and rhetoric. The University of Kentucky developed a scoring guide based on five outcomes: Ethos, Structure, Analysis, Evidence, and Conventions. These are all active elements of a robust program assessment.
In 2006 the Spellings Commission focused on the four A's: access, affordability, accountability, and assessment. The goal for post-secondary education was to provide students and parents an opportunity to see the differences and similarities between institutions. The question such assessments wanted to answer was, “What value have colleges and universities added to students?” The interesting part of this section was the Collegiate Learning Assessment (CLA), which requires students to respond to real-world prompts. So what is being measured isn't really clear; it is different from the SAT score, which predicts how a student will perform in college, whereas the CLA aims to determine what the student has actually learned. The concept of the CLA is such a far reach, I can't imagine it being integrated across the USA.
Then there is the AAC&U's VALUE project, which focuses on faculty assessment of student work. This assessment drew on work created in authentic settings like classrooms and service-learning centers, and on faculty expertise. To ensure that expertise, faculty from around the country were invited to create a scoring guide that could be used to assess electronic portfolios. Despite the faculty expertise involved, the composition side was concerned about what is perceived as a global effort in a writing world where the local is valued. But then again, going completely local leaves out the larger context.
The article then mentions portfolios, and I don't know why, but I dread portfolios. As a teacher, every year I am asked to hand in a collection of my work. Yancey mentions the benefits of portfolios to students, who are provided a sort of self-assessment. Students often perceive this assessment as not useful, which is relatable. Currently, reflection as both theory and practice suggests it will play a vital role in writing assessment; we just don't know how yet.