Wednesday, November 11, 2015

Weekly Response: Bean's "Using Rubrics to Develop and Apply Grading Criteria"

This was a great reading. I'll respond to it by relating my own experiences with holistic grading, norming sessions, and rubrics.

When Bean talked about the controversy surrounding what professors actually want, it reminded me of some of the other articles we read about evaluating student writing. First, we need to decide what we're looking for: voice, organization, content, grammar and spelling...?

Essex County College: When I was at Essex, we looked for only two things: organization (the five-paragraph essay in the exact structure taught in class) and grammar. We would grade each paper holistically, with the student's name hidden, and have a second reader also score it holistically; the sum of the two scores became the grade. There were a number of problems with this system. First, some papers were not bad but didn't follow the exact structure taught in class, so they were marked down. Next, the prose was often stilted and confusing even when technically correct, so the grader couldn't really consider correctness alone. A sentence could have perfect subject-verb agreement yet be awkward in every other way, and it was difficult to mark a paper down for awkwardness. Further, the illusion created by hiding the student's name on each paper was absurd. These were handwritten essays. By midterm and final, I could certainly tell who the author was by the handwriting as well as the voice in the writing. Even as a second reader, reading papers by students I didn't know, I could often tell from the handwriting, voice, and common errors the author's gender, nationality (foreign or American), and race (ELL or Ebonics). Some common errors would also point to the author's first language. I understand the theory behind hiding the names, but it didn't work in practice at all.

NJIT: At NJIT we hold norming sessions every semester before we evaluate 500+ randomly selected student essays to check our FYW courses' effectiveness. The norming sessions are as Bean describes, only sometimes much worse. Depending on who shows up, a session can be a slightly argumentative discussion of which papers are better and why, and of whose criteria are more important and "scholarly" than others'. When certain stuffed shirts attend, there is less discussion, because those people tend to think their opinions are more important and informed than everyone else's. Some groveling attendees agree and defer, while others can't be bothered to engage in a discussion that will only become an argument with said individuals, so the "norming" session becomes an exercise in listening to a select few and letting them decide our criteria for the day. No matter the agreed-upon criteria, we all try to score each line item a 2 or 3 on our 4-point scale. Why? Incentive. Each essay is evaluated twice; if an essay's two scores differ by more than 1 point, it goes back into the communal pile for a third reading. Lots of third readings increase our time on task, and no one wants to stay late. Further, once you have evaluated a paper, your name is attached to it. If many of your papers are sent back for third readings, people hate you by the end of the day and forever after.
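Just to make the mechanics concrete, here is the third-read rule as a small Python sketch. The essay IDs and scores are made up; the only part taken from our actual process is the 1-point spread threshold on the 4-point scale:

```python
def needs_third_read(score_a, score_b, max_spread=1):
    """Two readers score each essay on a 4-point scale; a spread
    greater than max_spread sends the essay back for a third read."""
    return abs(score_a - score_b) > max_spread

# Hypothetical batch: (essay_id, reader 1 score, reader 2 score).
batch = [("essay-001", 3, 2), ("essay-002", 4, 2), ("essay-003", 2, 2)]

third_reads = [essay for essay, a, b in batch if needs_third_read(a, b)]
print(f"{len(third_reads)} of {len(batch)} essays need a third read")
```

The incentive problem is visible right in that threshold: the surest way to stay out of the third-read pile is to cluster your scores at 2 and 3.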

Rubrics in HUM 101: I started using rubrics this semester to evaluate students' essays (projects). I love rubrics! They make evaluation easier and more precise (although Bean doesn't like the illusion of precision; sorry, Bean), and they reduce the need for end comments. I make my own rubrics on the Rubistar website: I let it generate a standard rubric, then edit it. I may have included the link elsewhere in this blog, but it is worth posting again: http://rubistar.4teachers.org/index.php
My rubrics are always grid-based and task-specific, and I have a different one for each project. Each rubric is available online when its project is assigned.
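For the curious, here is a toy sketch of what such a grid reduces to computationally. The criteria and descriptors below are invented placeholders, not my actual HUM 101 rubric:

```python
# A rubric "grid": each criterion maps level scores (1-4) to descriptors.
rubric = {
    "thesis":       {4: "arguable, focused",  3: "clear",          2: "vague",       1: "missing"},
    "evidence":     {4: "well integrated",    3: "relevant",       2: "thin",        1: "absent"},
    "organization": {4: "logical throughout", 3: "mostly logical", 2: "choppy",      1: "hard to follow"},
    "mechanics":    {4: "clean",              3: "minor errors",   2: "distracting", 1: "impedes meaning"},
}

def score_essay(scores):
    """scores maps each criterion to the level the reader chose."""
    total = sum(scores.values())
    possible = 4 * len(rubric)
    return total, possible

total, possible = score_essay({"thesis": 3, "evidence": 2, "organization": 3, "mechanics": 4})
print(f"{total}/{possible} -> {round(100 * total / possible)}%")  # 12/16 -> 75%
```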

Brainstorming: I LOVE Diederich's 1974 experiment with rubric grading, where there was so little agreement among readers' scores. Just love it. I'd like to do a version of that study now, using a rubric to evaluate the HUM 102 research paper that all NJIT second-semester freshmen will soon have to write. If everyone is theoretically writing the same assignment, shouldn't the grading be consistent? I wonder if I could get some professors to use a rubric I create to score their students' essays. Could we get a random sample of, say, 100, and have them all graded by different professors? I know 5, maybe 6, who would likely be willing. I wonder if I could get a research grant? With a grant, I could offer the professors a small incentive. NJIT just gave out research grants last month; I could apply for one for next year. Interesting. Not sure how this works; I'll have to investigate. What if I got permission to look at 200 essays online (free) and had Kean U grad students grade them? There might be less red tape and a lower required incentive. Less politics, too. Now we're getting somewhere... Thesis?
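If the study ever happens, the analysis itself would be simple. A sketch of the agreement check I have in mind, with made-up numbers standing in for real professors' scores:

```python
from itertools import combinations
from statistics import mean

# Made-up data: each essay's rubric total (out of 16) from several readers.
scores = {
    "essay-01": {"prof_a": 14, "prof_b": 9, "prof_c": 12},
    "essay-02": {"prof_a": 8,  "prof_b": 8, "prof_c": 11},
}

def spread(readers):
    """Largest disagreement between any two readers of one essay."""
    return max(abs(x - y) for x, y in combinations(readers.values(), 2))

spreads = {essay: spread(readers) for essay, readers in scores.items()}
print("mean spread:", mean(spreads.values()))
print("high-disagreement essays:", [e for e, s in spreads.items() if s > 3])
```

If everyone really is grading the same assignment consistently, the spreads should be small; Diederich's readers suggest they won't be.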

Action Item - Bean's Left-Brain, Right-Brain Grading: Cool. I'm going to try it. Grade the paper holistically first, then evaluate the parts. Go back and calculate the rubric score later, then norm it against the other papers in the class. It would take slightly more time, but it seems worth it if the grading comes out more consistent.
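A sketch of how I might track the two passes, with a flag for papers where my holistic read and the rubric disagree (the percentages and the 10-point tolerance are my own placeholders, not Bean's numbers):

```python
def needs_reread(holistic_pct, rubric_pct, tolerance=10):
    """Pass 1: grade the whole paper holistically.
    Pass 2: score the parts on the rubric.
    Flag for a re-read if the two passes diverge."""
    return abs(holistic_pct - rubric_pct) > tolerance

for paper, holistic, rubric in [("paper-1", 85, 80), ("paper-2", 70, 88)]:
    if needs_reread(holistic, rubric):
        print(f"{paper}: holistic {holistic} vs rubric {rubric} -> re-read before norming")
```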
