"Why the Research Paper is Not Working" by Barbara Fister and "The Popularity of Formulaic Writing" by Mark Wiley

     I was surprised to find that I had strong reactions to the two articles for this week. Fister's article bothered me at first. This is mainly because I still have a bit of the mentality that if I had to do a research paper, so should everyone else. Looking beyond that, students have much to gain from the skills needed to properly cite and synthesize information. I feel that her article glosses over this fact and instead chooses to focus on how earth-shattering teachers find a misused comma to be.

     The argument about teaching citation format and ensuring it is done correctly seems, to me, to be a waste of time. For the last few years, the media specialist in our school library and I have shown the students how to use easybib or even EBSCOhost. Both have functions that generate the citations you need; all you have to do is copy and paste. So the fact that teachers would still teach it the old-fashioned way is surprising to me. After all, we don't send out letters via carrier pigeon; we have a better system.

     I always viewed a research paper as following a specific set of rules to complete an assignment. It is a task that needs to be completed, meeting a certain set of criteria. The skills required have not changed drastically over time. What makes it so much more difficult for students today to complete a research paper when previous generations somehow managed to do it? Would changing how we view the research paper lead to a "dumbing down" of the research process?

     One thing that really bothered me was when Fister says she doesn't like when students "have to change their topic because they can’t find sources that say exactly what they plan to say." Yeah, that's what makes it research based on evidence and not a blog post based on your opinion. Many students need to understand that their opinions can't stand on their own in an argument.

     Wiley's article discussed Jane Schaffer's formulaic technique for writing. I can definitely see the benefit of giving students a formula to follow when writing, but I'm among those who see it as too restrictive. Struggling writers might latch on to this formula and use it for all types of writing assignments, even those it might not be appropriate for. I would assume that any student who is capable of writing well enough on his or her own would be allowed to abandon the formula in favor of his or her own style. I'm not a fan of the lack of an exit strategy when using the formula. As stated in the article, students should have a variety of writing styles, and this may hamper them.

     One thing I do really like is how it gives a universal language that students and teachers can use when referencing certain parts of a paragraph. "Add a commentary sentence" sounds much better than "Explain this." Obviously a language can be developed for each class, but if the program were adopted school-wide, it would provide consistency.

Weekly Response: Fister’s "Why the Research Paper is Not Working"

I agree and disagree with Fister. She is right that the research paper doesn't work. In HUM 101 we just finished doing a research paper, and the results are somewhat disappointing and just as Fister says: no one can cite sources correctly, and students skim the surface of the sources they read anyway, picking quotes out after the paper is written. Further, the students seem to be able to do "everyday research" much better than academic research. She suggests that we should scrap the formal research paper in freshman year because the students don't like it and aren't successful with it. Hmmm, maybe we should also scrap first year sports, as many students are uncoordinated and not star athletes when they first try a new athletic endeavor. Ridiculous, of course, but the comparison makes sense. 

While I agree that the results of the FYW research paper assignment are not stellar, I do not agree that that is a reason to scrap it. Her advice that we should assign topics the students are interested in of course makes sense. I think most institutions do that by now, no? We did at NJIT. In my class, the students were allowed to choose a brand, product, or issue of interest. We walked through all the steps of the research paper, from brainstorming and idea generation, to preliminary research, proposal, annotated bibliography, drafts, writing center visits, revisions, presentations, and final drafts. (Linear in some cases, I know, and recursive in others. But how to teach 27 freshmen at a time? My best advice: get the administration to cap the class at 18. I digress....)

I agree with Fister about the silliness of worrying about the details of the citations. It is daunting for students. I review how to do in-text citations, and I explain that there is only one way to do them correctly in MLA format, i.e., parenthesis, name, page, parenthesis, period. When it comes to citing the actual sources in the annotated bibliography and the works cited page of the report, I tell the students to use easybib.com and not worry about trying to write the citations themselves. That seems to take the fear out of it for them, and they do fine.

The papers I received were, for the most part, not as awful as Fister suggests. We talked about the research and how it was similar to looking for a dress or a car online. The most difficult task of the project seemed to be the annotated bibliography, as no one had ever heard of such a thing. We used a template and samples, and just about everyone got it right on the second try. Will they ever have to write an annotated bibliography again? Probably not. Will they have to struggle with a new format and new genre and figure out how to do it properly for a professor or boss? Most definitely. It didn't kill them to complete the exercise.

The research presentations were much better than the actual papers, and gave them each a chance to be the experts in the room, talking about something they not only care about, but also know about. I like the whole project. They are far from expert researchers and writers after their first try, but I wouldn't take this first try away from them.

P.S. Fister's article looked like it was 3 pages, but after following all the links (some of which didn't work) it was much longer and more in-depth. I appreciate the thoroughness of her article which, at first glance, looked flimsy.

Response to Straub’s "The Concept of Control in Teacher Response: Defining the Varieties of ‘Directive’ and ‘Facilitative’ Commentary"


In “The Concept of Control in Teacher Response: Defining the Varieties of ‘Directive’ and ‘Facilitative’ Commentary,” Richard Straub discusses facilitative and directive teacher commentary in response to students’ writing and how some types of commentary assume more control than others.

To begin, Straub mentions two pioneering articles written in 1982 by Sommers and by Brannon and Knoblauch. These authors urged teachers to be careful about the amount of control that we exert over students when commenting on their papers. Yet, as Straub points out, many questions still remain in terms of distinguishing facilitative from directive commentary. For one, how much should we try to help students develop their attitudes toward writing, versus how much do we allow students to find their own way? The categorization of these two types of comments is sometimes very vague. What types of comments can be considered directive or facilitative?

Furthermore, Straub asks, how do different comments exert control? Is there even a way to offer guidance without assuming control?

First, Straub points out, directive commentary is easy to distinguish because it is highly critical and focuses on what is wrong and what needs to be changed. As for a method for analyzing comments, Straub suggests looking at their focuses and modes. Facilitative comments focus on global concerns and often make suggestions that deal with the student’s writing process.

Also, “the extent to which a teacher assumes control over student writing is also determined to a great extent by the way he frames his comments—by the mode of commentary he employs.” Straub notes that comments framed as corrections assume greater control than those framed as advice or inquiries. An example is provided which demonstrates how directive commentary can be made more facilitative. It is important that the student be given direction, yet not be told exactly how the writing “should look.” Therefore, a comment can be made about revising an opening paragraph, but that revision is still left to the student.

Ultimately, Straub’s study, as well as this article, goes against the idea that comments are either facilitative and helpful or directive and ineffective. Straub shows that comments can be both, not either/or. He notes that the best commenting styles play to our strengths as teachers and highlight our goals for the classroom.

Writing Theory and Practice 2015-11-23 16:59:00


I like the way Richard Straub set up his article “The Concept of Control in Teacher Response: Defining the Varieties of ‘Directive’ and ‘Facilitative’ Commentary.” What stood out in this article were the different examples he provided. To be honest, I liked the “directive response” example (Straub 227). In my senior seminar class with Dr. Nira Gupta-Casale, I learned that you can help a student, but that student still has to interpret what you are saying. In addition, the student still has to go home and figure out ways to apply everything, and sometimes that can be challenging. Continuing, I like how Peter Elbow says, “I’d be happy to talk more about this in a conference” (Straub 243). The comment lets students know he is willing to clear up any confusion and elaborate. I prefer conferencing myself, so I love when teachers are open and welcoming to students seeing them during office hours. I also thought Anne Gere’s and Jane Peterson’s comments were effective as well.

Continuing, this article once again reminded me of my writing center theory and practice class because it uses terms presented in that class and makes statements that I think a lot of people in the writing center would agree with. I like the stance that “we should not reject all directive styles of response any more than we should all adopt some standard facilitative style” (Straub 246). With Straub’s article, learning was easier because it tied into things discussed in my other class. Moreover, I liked reading Straub’s article a little more than I liked reading “Looking Back as We Look Forward: Historicizing Writing Assessment” by Kathleen Blake Yancey. I do feel like I learned from Yancey’s article, though, and I liked the questions proposed.

 

The Concept of Control in Teacher Response (Straub) & Looking Back As We Look Forward (Yancey)


Yet another powerful article about commenting on students’ papers. The tone of this article was a bit different from the others, though. I thought this author had a lighter and almost humorous tone to his message. He began by saying that despite the expanded scope of our inquiry and our deepened discussions, we have continued to look at responses in dualistic ways. He states, “teacher commentary is either directive or facilitative, authoritative or collaborative, teacher-based or student-based.” In this article he tries to identify the focus and modes of the comment styles labeled “directive,” a controlling style, and “facilitative,” using the comments of known composition teachers. Straub begins by examining several teachers’ comments on students’ papers. Comparing these papers, he found that some of the teachers’ comments are highly controlling. Straub states, “The teacher, like an editor, freely marks up the writing-circling errors, underlining problem areas, and inserting corrections on the student's text.” He asserts that the comments written on these students’ papers don’t tell the students what is wrong with their writing and what needs to be changed. Straub concludes that the more comments a teacher makes on a student’s paper, the more controlling the teacher is likely to be. This applies especially to teachers who make numerous specific comments on local matters. He also concluded that the more a teacher looks at the student’s writing processes and tries to focus on the writer’s development rather than the development of the specific text, the less likely the teacher is to point out specific changes in the text.
He went on to talk more about the different types of comments. For example, he concluded that comments framed as corrections exert greater control over the student than criticisms of the writing. He also added that praise comments are less controlling than criticisms or commands because they place the teacher in the role of the appreciative reader. However, they can still convey the teacher’s values and agenda and carry a certain degree of control over how the student views his or her own text and how he or she revises.
In the end, Straub comes to the conclusion that all teacher comments, regardless of their style or technique, are evaluative, but the question of how teachers exert their power over students still remains.


While reading this I had to pause a couple of times to make sure that I wasn’t rereading last week’s article, “Writing Assessment in the 21st Century,” also by Yancey. It pretty much echoes what she said in that previous article. In this article she also divides the history of writing assessment into three “waves.” The first wave (1950-1970), she states, focused on objective, non-essay testing that prioritized “efficiency and reliability.” The second wave (1970-1986), she claims, moved toward holistic scoring of essays based on rubrics and scoring guides first developed through ETS and AP. The third wave (1986-present) developed to include portfolios and larger, programmatic assessments. Yancey looks at these waves from several perspectives. One is how the concepts of reliability and validity are viewed; another is the local knowledge of the non-expert teacher. Again, just as in the last article, Yancey voices her concerns about the state of writing assessment. She also provides guidance on how to further the practice of writing assessment.


Blog # 9 – Yancey & Straub

Looking Back as We Look Forward: Historicizing Writing Assessment by Kathleen Blake Yancey & The Concept of Control in Teacher Response: Defining the Varieties of “Directive” and “Facilitative” Commentary by Richard Straub

As I read "Looking Back as We Look Forward: Historicizing Writing Assessment" by Kathleen Blake Yancey, I felt like her essay was covering aspects similar to the one we read by her last week. She once again talked about writing assessment and how it used to be testing, then turned into a holistically scored essay, and then took the form of portfolio assessment. Although she was presenting some of the same information, I felt that the way she presented it in this essay was more direct, organized, and easier to read. 

From her essay, I felt like the idea of having students be involved with writing portfolios was helpful. I still understand how, at times, tests are necessary and how scoring an essay can be a helpful way of assessing students. But I feel like it is not always the best way. It is hard to really pick and choose what’s “better” for the students when so much research has been done and it seems like we are not sure about what works best for them just yet. 

I find it extremely interesting how all these readings are full of dos and don’ts when it comes to the teaching of writing. All these essays focus on the “best” way teachers should go about teaching, based on research that has been done. However, while they all sound very convincing, the fact that researchers are always finding new approaches or going back to old forms makes me think that there is no “right” way yet. I have my preferences based on my experience as a student, but even then, I know that other students may think differently than me and will have other preferences. 

In "The Concept of Control in Teacher Response: Defining the Varieties of “Directive” and “Facilitative” Commentary," Richard Straub brings in the idea that teachers should comment on students’ writing in a facilitative way rather than a directive way, to allow students to find things out on their own. He also talks about the idea that teachers must not take over when commenting on students’ writing or when helping students. 
In his essay, a teacher is quoted as saying, “as a teacher, I must be careful not to take over – because the minute I do, the success (if there is one) becomes mine, not his – and the learning is diminished.” I can totally agree with this statement. It sounds like students need just the right amount of commentary to allow them to learn on their own.

Reading the essays this week made me think about what we’ve read so far this semester. I find that these essays often make me think about what I’ve gone through as a student, and they make me wonder how well teachers can interact with every single student to help them in just the way they need.

Weekly Response: Yancey’s "Looking Back as We Look Forward: Historicizing Writing Assessment"

Yancey reviews the history of writing assessment and describes three waves: testing, holistic scoring, and portfolio and program assessment. She asks what we can learn from writing assessment. That is a question we are addressing in our FYW program now in the Writing Committee meetings. Further, Yancey discusses the different realms of educators and testing specialists.

In the first wave, Diederich seems overly confident in the assessment professionals' ability to quantify good writing. Educators, on the other hand, were more concerned with validity than reliability and efficiency. However, it took 20 years for their concerns to create change in the system.

The second wave, spearheaded by White at Cal State in the '70s, included an actual writing test to evaluate writing. Of concern were cost, reliability, and efficiency, but he and his team managed to create an acceptable test. The test, however, was a measurement of more than just writing: scores correlated with wealth and parental education, not just writing ability. Therefore, the second wave had struggles and challenges that led to the third wave.

The third wave is marked by collecting and scoring multiple drafts of writing: portfolio assessment. In this method, faculty read the portfolios in community and have to negotiate the standards and outcomes of the assessment practice. In this wave, writing assessment became recognized as a field. Also, questions emerged regarding who holds the "power" in assessment and how it helps to shape identity in students. I like the discussion of "fictionalizing" and "narrativizing." I find I struggle with this sometimes when grading. It is difficult to tune it out.

Yancey theorizes on what the fourth wave will look like. I think the fourth wave will indeed be electronic portfolios that are visual as well as textual, hyperlinked, and multimedia. I see that happening already.

Laura’s Writing Theory & Practice Blog 2015-11-23 01:22:00

The Concept of Control in Teacher Response: Defining the Varieties of "Directive" and "Facilitative" Commentary
 Richard Straub (1996)

Straub seeks to analyze, and clearly define, both the types of commentary that teachers employ and the “roles” teachers play in the writing classroom.  His aim is to make connections between these comment types and roles in an effort to ensure teachers take charge of their instructional practices.  

Specifically, Straub looks to provide a set of criteria that can be used to easily distinguish between directive and facilitative roles by examining the focus and mode of teacher commentary.  To achieve his goal, Straub analyzes four sets of teacher comments (one set made by none other than Peter Elbow).  He finds that all of the sets use a mix of directive and facilitative strategies.  Straub makes no claims about which commenting styles are better or worse than others or which comments teachers should use more or less frequently.  Rather, he asserts that teachers must become more aware of the choices they make and the potential impact of these choices on students.  

I was interested to hear which set of comments some of our classmates would say they, as students, would prefer to receive and/or which they believe would be most effective/helpful to them as writers.  I also pondered which commenting style I most identified with as a teacher.  

Overall, I felt like the article fell a little flat.  I was hoping for some more concrete findings.  I kept struggling with the notion that everything is relative and that it is nearly impossible to analyze teacher comments on a page out of context.  There are so many factors that must be considered when conducting such a study.  While Straub does make mention of this fact, he glazes over the issue and continues on his endeavor.  Much of the conclusion and advice offered to teachers feels intuitive, like something that evolves naturally over time for most teachers. I see the value in teachers having a heightened awareness of the words they put on a student’s paper, but I question whether teachers can sustain that amount of focus over time.  On the other hand, perhaps an increased awareness for even a limited amount of time will create a shift in the long-term commenting process for teachers.  It can’t hurt.  

Looking Back As We Look Forward:  Historicizing Writing Assessment
Kathleen Yancey (1999)

Appropriately named, Yancey’s article explores the three “waves” of writing assessment that have transpired over the past 50 years or so in an effort to gain insight into the assessment of the future.  Yancey does a fantastic job of clearly describing each of these periods in writing assessment history, and her article prompted many ideas for discussion in me.  Such as…
  • can we evaluate writing without looking at some kind of pre-assessment to check for growth rather than ability level?
  • would there be validity in “testing” a student by having them accurately assess a group of sample essays?  Kind of like “well, I might not be able to do it ‘on-demand,’ but I know what constitutes a good paper…”
  • what other types of assessments can there be...what about a test that asks not for a complete essay or story, but one that prompts students to produce specific kinds of phrases, sentences or paragraphs only?

Toward the beginning of her essay, Yancey poses a set of her own questions regarding assessment.  As a new member of the composition community, I find it a little uncomfortable that there are so many “unknowns” surrounding such a major aspect of my field.  It seems like an impossible feat to come to a consensus on some of these issues anytime soon, if ever.  Some of these unanswered questions seem like a chicken-or-the-egg conundrum whereby we must first determine what makes writing “good” before we can determine if a test is accurately measuring what it seeks to measure, never mind what assessment’s true purpose is in the educational setting or whether it can be measured consistently…

The talk about the cost, time, and effort needed for a truly valid writing assessment was both expected and frustrating.  “The reliable test,” as Yancey says, really struck a nerve in me due to a controversy that I found myself in the middle of recently.  In short, my school pays for a testing service called Linkit to assess students three times a year, which, in turn, is a big part of how TEACHERS are evaluated (student growth).  The company released sample quizzes for student use which I discovered to have numerous ERRORS.  After making some noise, I sat with administrators who told me Linkit had “shut down” the quizzes, but they assured me the “top secret” tests they administer three times a year were completely reliable and error-free.  I volunteered to take the test myself but was not allowed to do so.  During this meeting, I was also given a very sales-pitchy-sounding speech about how highly reliable these tests are in predicting a student’s score on the NJ ASK (which no longer exists) and was shown a chart with statistics showing the mathematical, scientific, algorithm-looking way in which these tests correlate within a .00005 percentage accuracy….blah, blah, blah...Needless to say, the term “reliable test” made me cringe a little.  (By the way, I’m betting the only reason they even met with me is because my husband is on my district’s Board of Ed...otherwise, unfortunately, I doubt they would have addressed my concerns at all.)  UGGGGGHHH...frustrating!!

I found it interesting, but not entirely surprising, how high the correlation of a reading test was to writing ability.   The article also talks about devising a writing assessment that would perform the same tasks as the “objective tests.”  Frankly, last year while exploring some of the released PARCC sample questions, my colleagues and I ran into a LARGE NUMBER of potentially subjective questions.  This is supposed to be “THE” test, and I was appalled to find that I (and many others) DISAGREED with many of their sample answers.  What’s going on in the real PARCC test?  Who is making these questions?  They’re only human and can make mistakes. Their goal seems to be to ensure the questions are aligned with the CCSS, rather than to accurately assess the students’ skills, ability, and knowledge. (Another conversation, perhaps.)

Weekly Response: Straub’s "The Concept of Control in Teacher Response"

This article concentrated on explaining the differences in teacher comments on writing assignments, and how facilitative comments allow students to maintain more control over their writing than directive comments do.  (The examples were very useful.) I felt that the author flip-flopped a couple of times, seeming to feel that facilitative comments are superior to directive comments, and then explaining that teachers will have different styles and that certain ways of using directive comments are appropriate for certain teachers and in certain settings. Straub clearly prefers facilitative comments, but he didn't want to be directive in telling teachers how to comment.

It was in the end notes that Straub best explained the difference between facilitative and directive responses.
In directive commentary, the teacher says or implies, 'Don't do it your way; do it this way.' In facilitative commentary, the teacher says or implies, 'Here's what your choices have caused me to think you're saying-if my response differs from your intent, how can you help me to see what you mean?'
I tend to like a mix of both, when I use comments at all. (I prefer conferences or rubrics.) My students would not appreciate completely facilitative responses. They want to be told what to do to "fix" the paper; they want clear instructions. However, giving them only directives does not teach them to think and become better writers.

Richard Straub’s “The Concept of Control in Teacher Response: Defining the Varieties of ‘Directive’ and ‘Facilitative’ Commentary”

I actually enjoyed this piece a great deal despite the misgivings I had initially when I saw the pages with corrections on them! Straub lays the groundwork simply: teachers should NOT take over a student’s paper, as it then stops being the student’s work. It is difficult to know when one must stop when helping—and, in a teacher’s case, guiding—each student to a successful outcome. By making comments which do not fix or change the work but instead brainstorm with the student-writer, a teacher can “…share responsibility with the writer” (225). 
This question of directive or facilitative responses is a tricky one; quite honestly, I still have trouble discerning one from the other. Straub, however, really makes great strides with the sample composition and the comments by well-known, respected teachers. The study illustrates how different teachers would respond to student writing, and which way or ways proved more effective for students (without writing for them).
I found all four very helpful, albeit with different tools offered and different reactions presented to the piece itself. Peterson’s was a little directive but softened with facilitative words. White’s was even more facilitative, and Gere’s was up on top. Peter Elbow’s was a whole other animal; he acted as a very encouraging reader, but was probably not as much help as the student might have expected! However, he gave suggestions, as they all did, and the paper’s course was then placed back in the student’s hands—where it belonged.

None of these four gave strictly directive commentary, which is encouraging as it displays the trend toward allowing students to make their own choices in their writing. Straub notes one important fact: “…the optimum style of response for any teacher is going to be a function of her personality and teaching style” (247). In the end, it all comes down to the interaction between the teacher and student, and each teacher’s ability to know when to guide and when to stop. The models illustrated in this paper through Straub’s study serve as excellent tools for teachers to model their own comments on. The outcome is sure to benefit all involved in the writing process.