blog 8

"Using Rubrics" thoughts and feedback.


                                                          _________________________

“no essay received less than 5 different grades” I'm not really surprised by the varied results of the 53-teacher experiment. At first it seems shocking, but when I stopped and really thought about what I was reading, it made a lot of sense, especially when the article went on to say that each subgroup that formed shared similar criteria. From experience, I can say that each field of study has its own rules, and those rules vary to a noticeable degree when it comes to writing standards (you could say each one has a characteristic style of writing). So the results really don’t surprise me, and even though the experiment is interesting and does prove the point that there isn’t a single standard, it also seems a tiny bit biased.

It’s funny for me to think of grading criteria that didn’t involve a rubric. I guess that just shows how standardized they’ve become. Or maybe how narrow my experience with them has been? I say this because I never thought of there being “different” rubric styles. I just assumed they were all the same.

Also, I think the use of “flavor” to seriously describe voice is absolutely hysterical (in a good way).

I’m inclined to disagree with the idea that writing can’t be broken down into separate parts. Of course it can. We all know that one student—maybe we’ve even been that one student—who keeps making the same mistake over and over, yet the rest of the paper is fine (more or less). It is possible to excel in one area and be lacking in another. Personally, I’m good at analysis, but have a hard time organizing my thoughts. I often jump from subject to subject with little or no transition or reasoning. To me, it makes sense, but it doesn’t to others. So I don’t believe it’s impossible to separate the components. Writing involves a lot of working parts, and for some students, it’s really hard to get all the parts to work together. A grading guide that breaks things down into smaller, more manageable parts is, I would think, less intimidating than a holistic approach, where everything counts equally—where your flaws might end up cancelling out your strengths.

I felt 14.2 was the most helpful of the examples. I’ve noticed that in an attempt to be universal, rubrics often use vague language (the paper isn’t “balanced” enough, or it was “thin” in some areas).  

“universally agreed-on standards for good writing” a valid point. But I do believe that there are certain qualities that are generally found within “good” writing. Sentence structure is a priority—not necessarily because it has to be correct, but because it has to work within the piece itself. Ideas and thoughts are also priorities, as is organization. These are all aspects of good writing that rubrics attempt to assess. The problem here is the interpretation of the word “good.” I would say critics of the rubric are assuming that “good” is synonymous with “traditional,” “orthodox,” or “academic” writing. Saying a piece is good doesn’t automatically mean that it is the cookie-cutter paper we expect it to be. “Good” writing, then, has become stigmatized and is expected to fit into a very specific mold. However, rubrics are vague enough that a paper can be graded as having good ideas and sentence structure without it being stereotypically good.

“oversimplifies…valued by real readers” also a good point, although I disagree that a rubric inherently implies these things.

I disagree with the comment examples given. A reader is not supposed to “work hard” to fill in gaps of information. I was never given that luxury. I was always told to tell my reader everything they need to know, and to assume they’ve never read anything about what I’m telling them. A reader can (and should) work hard to analyze or interpret a piece, but not to fill in the gaps. And if the organization was bad enough to drop the paper a whole letter grade, then I get the impression that it was moderately disorganized and disrupted the reading process noticeably. It’s good that the “teacher” pointed out that the ideas were “superb,” because that’s important. But organization is important too. And I think comment 2 belittles that importance, and almost coddles the writer. Comment 2 sugarcoats what comment 1 is saying, and that’s well and good, but comment 2 also doesn’t say that the disorganization is why points were lost. To me, it sounds like comment 2 is saying “this was great and the readers will have to adapt to you and keep doing things like this. Also, you earned a B even though I said your work was superb.” It sounds contradictory, in my opinion. My response to this would be “If my work was superb, and the organization wasn’t that big of a deal, why did I only get a B?” Losing readers isn't really at the forefront of a student's mind, let's be realistic here. A student cares about points, and they won't stop caring about points until they have the skills to know they can break the rules and still earn the points. It is only at that point that the writer will worry about losing their readers.

And obviously there won't be a single rubric for every field of study that exists. It's not possible because each field uses writing to achieve something different. Writing is a tool of communication. Different fields communicate different messages.

If you’re going to question grading scales, what’s stopping you from questioning letter grades? They’re the same thing, except number scales show you exactly where your work fell within the guidelines, whereas with the letter system, you have a wide and vague estimation: “I got a B, so I must have done better than a 79, but worse than a 90…”

I’m not really sure what to make of his grading process. It doesn’t seem like something I can agree with. I like that he tries to be fair while keeping in touch with his technical side, but I don’t really agree with separating the two in the way he does. If the technical issues are bad enough to disrupt the reading process, then the paper needs serious revision. The ideas may be good, but if the delivery is hard to understand, then the quality of the ideas is lost. I think a rubric should be used to assess the technical stuff, and a teacher’s comments should be used to discuss the paper holistically. That’s how I assess papers, at least. We’ve said that technicality isn’t everything, but we can’t say that it doesn’t contribute to the holistic quality of the paper. It’s not the most important thing, but it is still important (to a certain degree).

Overall, I felt this piece was interesting and easy to read. Even if I disagreed with some of the things stated in it—especially most of the stuff at the end—I felt it provoked a lot of insight and reflection on my part, which is nice because I didn’t know I even had feelings about rubrics (considering I usually don’t read them). But it also revealed a bit about my own grading beliefs. Although I probably sound overly critical in my reflections, I don’t believe I am as “hard” a grader as I (perceive myself to) come across. I think grading is hard no matter what, and it doesn’t get easier, and that a rubric should, ultimately, be a tool to help you reach a grade, not an all-determining, all-knowing checklist we rely on solely.

Writing Theory and Practice 2015-11-16 17:58:00


I liked John C. Bean’s “Using Rubrics to Develop and Apply Grading Criteria.” I liked the position he took in his article, and I liked his own personal technique as well. I feel like his technique will possibly make more students satisfied, especially since he embraces more than one method of feedback. In addition, I liked how he pointed out some of the same things the reader may be thinking while reading his article. For example, he says, “Although this process might seem time-consuming, I believe it leads to fairer and more thoughtful grades because each paper receives a score from both a holistic and an analytic perspective” (Bean 281). From reading his article, I get the sense that Bean is great at what he does, open-minded, and dedicated to students’ success. I am not sure if other teachers will be willing to do everything he does.

Furthermore, I did not know there were so many different rubrics. Actually, I do not recall any of my teachers using any of the rubrics I really liked from the article. I like when a teacher writes out how they feel about my paper or assignment, and that is why I embraced the “Analytic Rubric with Non-Grid Design” (Bean 277). I also like to know exactly what a teacher is looking for, so I favored the “Task-Specific Rubric for a Genre” as well (Bean 273). Moreover, when Bean started talking about the dilemma with rubrics, his article made me think about a teacher I currently have. Although I liked some of the rubrics proposed, if I become a teacher in the future, I am not sure if I will use them. When I was younger, I believe being given certain numbers did affect me.

Using Rubrics (Bean) & Writing Assessment in the Early 21st Century (Yancey)



Bean begins the article by discussing the subjectivity of evaluation criteria. He states that professional writing teachers grant that the assessment of writing, like any art, involves subjective judgments, but that the situation is not entirely relative either, for communal standards for good writing can be formulated, and readers with different tastes can be trained to assess writing samples with surprisingly high correlation. To illustrate this argument, Bean brought up Diederich’s research on composition, in which Diederich discovered that a diverse group of readers could be trained to increase the correlation of their grading. Bean wrote that by setting descriptions for high, middle, and low achievement in each of the five criterion areas—ideas, organization, sentence structure, wording, and flavor—Diederich was able to train readers to balance their assessments over the five criteria. Bean further adds that, since then, many researchers have refined or refocused Diederich’s criteria and have developed strategies for training readers as evaluators and for displaying criteria to students in the form of rubrics. Further in the article, Bean went on to talk about the different types of rubrics used and their importance to evaluation and the evaluator.
I agree with Bean that rubrics are important because they clarify for students the qualities their work should have, and I like that he values rubrics, but he did not mention how little some teachers reuse rubrics over time. Some teachers develop a rubric for a particular assignment or project, and at the end of that assignment or project, that’s the end of it. The rubric is not reused or applied in different areas. I think rubrics should be designed for repeated use, or used on several tasks. Students should be given a rubric at the beginning of instruction. Then they should complete the work, receive feedback, practice, revise or do another task, continue to practice, and ultimately receive a grade, all using the same rubric. I think this reinforces learning more than anything.




In this article Yancey discusses writing assessment and how it has changed and varied across different time periods. She begins by describing the first wave of writing assessment, early in the century. She wrote that “tests,” which is what assessments were referred to as at the time, were indirect measures—that is, tests that sampled something related to but other than the individual student’s writing, typically a multiple-choice test of editing skills serving as a proxy for writing. She added that the most important question in this first wave of writing assessment was informed by an ideology located in the machine-like efficiency characterizing the early part of the century: “Which measure can do the best and fairest job of prediction with the least amount of work and the lowest cost?”
Yancey also discussed the second wave of writing assessment. She states that this wave, which dated back to the 70s and 80s, was prompted by the explosion of interest in the writing process and by new pedagogies enacting the field’s new understandings of process. Due to these new understandings, holistic scoring was developed. Yancey wrote that this type of assessment relied on a direct measure, or sample, of writing; by developing and using scoring guides that provided a reliability analogous to the reliability of indirect measures, holistic scoring was able to meet the standard of consistent scoring. She further wrote that the questions about assessment dominating this period were very different, then, than those driving the first wave: What roles have validity and reliability played in writing assessment? Who is authorized, and who has the appropriate expertise, to make the best judgments—teachers or experts?
Yancey further discussed the third wave of writing assessment as occurring from the late 1980s up until the turn of the century. She stated that this wave was characterized by attention to multiple texts, the ways we read those texts, and the role of students in helping us understand their texts and the processes they used to produce them. The vehicle for practicing assessment keyed to these principles was typically a portfolio of writing, which Yancey defined as a set of texts selected from a larger archive and narrated, contextualized, and explained by the student himself—or herself. During this period of writing assessment, the questions being asked were, “Whose needs does writing assessment serve?” and “How is it a political and social act?” Yancey also talks about the current moment in writing assessment, but I thought her explanation of writing assessment throughout the different periods was interesting. Yancey not only provides a historical component to her argument, but she also includes the important questions that were raised by shifts in writing assessment in accordance with their time periods.

Blog # 8 – Yancey & Bean

“Writing Assessment in the Early Twenty-First Century” by Kathleen Blake Yancey & “Using Rubrics to Develop and Apply Grading Criteria” by John C. Bean

In the beginning of her essay, “Writing Assessment in the Early Twenty-First Century,” Kathleen Blake Yancey talked about the assessment of students. She shared that compositionists often find themselves at odds with writing assessment and frustrated with it, which shows that assessment is not their favorite task. Yancey presents a summary of the history of writing assessment in her essay. This history shows that students have been assessed through testing, through the writing process, and through attention to multiple texts and the ways those texts are read.

Yancey also talked about student portfolios later in her essay. She talked about digital and printed portfolios. As she talked about this, I remembered a class I took when I first came to Kean. There, we were asked to create a digital portfolio. Back then, it was explained to us that it was a class project but that we could use that portfolio to apply for jobs as well. The professor explained that the portfolio was a way to show potential employers part of our work. I didn’t really understand its importance then. After that class, creating portfolios was not a common task we were asked to do in other classes, so I simply didn’t go back to the portfolio I created. I think we were asked to reflect on our work for that portfolio, and I also think we had several drafts of the same essay included in the portfolio. I think that, overall, the portfolio was good to do in class. But I think that if we were asked to do them in several classes rather than in only a few, I would’ve been able to get more familiar with them.

I appreciated that Yancey’s essay was more up to date; this made it easier for me to understand the points she was trying to make.

“Using Rubrics to Develop and Apply Grading Criteria” by John C. Bean was an essay about the use of rubrics for grading. Bean states that “as teachers, our goal is to maximize the help we give students while keeping our own workloads manageable.” I’ve often heard teachers say that they spend a lot of time grading papers, and using rubrics seems to be a helpful tool for them. But as a student, I’m not sure how I feel about rubrics. I don’t hate them, but they are not my favorite either. I feel like rubrics can be so dry at times. They have so much information that it seems like they cover everything a student could wonder about how they’ll be graded. Yet they often leave me with questions after I read them. I often have to go back and ask my professors questions about the rubric so that I’ll have a better understanding of how I’ll be graded.

I’ve never had a deep connection with rubrics. I’ve understood what they were going to be used for, followed them, and found them important. I feel like a professor could easily say, “I used the same rubric for all students, I posted it online, I went over it with you,” so there shouldn’t really be a problem when it comes to how students feel about how they were graded. The professor wouldn’t be wrong in saying this. But at the same time, I wonder how those rubrics could really say how each student will be graded. Rubrics are not always specific, and teachers need to clarify what is said in them. This makes me think that they are a helpful tool, but perhaps not always the best one for students.
