You Don’t Have to Take My Word For It #2: Value-Added and the LA Times

In this next guest post, Tom Hoffman, intrepid teacher and policy wonk at Tuttle SVC (I'll have to ask him what the title of his blog means someday), tackles the publication of teacher scores by the LA Times last week.


Much of the concern about value-added analysis of teachers, as exemplified by this weekend’s already infamous LA Times feature, comes from teachers and other advocates for urban schools that have been singled out under NCLB and Obama administration policies, with no regard for the increased difficulty of serving low-income, minority students.

However, on the macro level, a value-added assessment would have to have a cataclysmically bad design to actually treat disadvantaged schools worse than they are under straight comparisons of proficiency rates.

The weirdness comes in at the micro level, looking at individual teachers. In particular, things may get weird for teachers, administrators and parents under value-added analysis of schools serving students who are not disadvantaged.

In their article, Jason Felch, Jason Song and Doug Smith start by emphasizing the wide range of outcomes based on teacher effectiveness:

Highly effective teachers routinely propel students from below grade level to advanced in a single year. There is a substantial gap at year's end between students whose teachers were in the top 10% in effectiveness and the bottom 10%. The fortunate students ranked 17 percentile points higher in English and 25 points higher in math.
Their first comparison, of a high- and a low-performing teacher in the same school, suggests a divergence of over forty percentile points in the math scores of two classes after one year, depending on their teachers. In this school, students start fifth grade with percentile ranks in the mid-30s.

Whether this accurately represents the difference in the value these two teachers add, I can’t say. It is, at least, a clear distinction.

Their second pair of examples comes from “Third Street Elementary in Hancock Park, one of the most well-regarded schools in the district.”

Here, you have

“(Karen) Caruso, who teaches third grade, ranked among the bottom 10% of elementary school teachers in boosting students’ test scores. On average, her students started the year at a high level —
above the 80th percentile — but by the end had sunk 11 percentile points in math and 5 points in English.”

In contrast with Karen, a teacher named Ms. Polacheck had her students gain

“5 percentile points in math after a year in her class, and 4 points in English. That put her in the top 5% of elementary school teachers.”

Apparently, with this school's demographic, the difference between best and worst amounts to just a 9-percentile swing in English and a 16-point swing in math. The differences between the teachers seem lessened in this context as well: the “bottom 10%” teacher has National Board Certification and is recognized by her school community as a leader and an excellent teacher. The reporters suggest, via anecdote alone, that she may be less academically demanding than the higher-scoring teacher.
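For concreteness, the swing between the two teachers is just the difference of their classes' average percentile changes, using the gains quoted in the article. This is a sketch of that arithmetic only, not the Times' actual value-added model, which controls for prior performance and other factors:

```python
# Average change in percentile rank for each teacher's class,
# as reported in the LA Times article.
caruso = {"math": -11, "english": -5}    # "bottom 10%" teacher
polacheck = {"math": 5, "english": 4}    # "top 5%" teacher

# The best-to-worst swing per subject is the gap between the two changes.
swing = {subject: polacheck[subject] - caruso[subject] for subject in caruso}
print(swing)  # {'math': 16, 'english': 9}
```

Sixteen points in math and nine in English, versus the forty-plus-point divergence at the first, lower-scoring school.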

But what does it mean if the average percentile ranking is 85 in one class and 75 in another, particularly if you're starting at 80? We know that teacher effects fade out over time. It is easy to suspect that for relatively affluent students with more educated parents and other out-of-school resources, these relatively small differences may not have a long-term impact.

Indeed, in twenty-first century America the advantage to sending your child to a relatively affluent elementary school is not so much the test scores as the freedom from worrying about test scores.

As a parent myself, I would not want a teacher to look at one of my daughters and think, “OK, how can I get this student from the 85th percentile to the 90th on the test.” Not only because I don’t trust the test in general, and don’t feel that its values reflect my own; not only because I’d prefer my children to have a broader, human education; but also because these tests aren’t designed to draw subtle distinctions at the extremes.

They’re looking for grade level proficiency cut-offs, and, as we’ve seen in New York and elsewhere, just getting that right is difficult enough. They aren’t meant to draw out the highest-level skills that will differentiate top performers over the longer run.

If we follow this path of emphasizing value-added analysis and publishing the results, we may push teachers to chase test scores past the point of diminishing returns, discarding enrichment and narrowing the curriculum for our high-achieving students as much as we have for our historically low-achieving populations.


Thanks, Tom.

For more information on the response to the LA Times story and value-added scores, please go muck around on Larry Ferlazzo's blog; he has put together a fabulous collection of posts and links, on both the article and value-added scores in general, that will leave you thinking, “Wow… this is our magic bullet?”

For the time-strapped, just watch this video from Dan Willingham. And invite your administrators to do so as well. Popcorn helps.   




And lastly, some shameless promotion: the last two Summer Words here on The Line include:

August 31st: “Mastery Tips for the Beginning of School,” from Kathleen Cushman, author of the fabulous _Fires in the Mind_ and its corresponding blog (go give it some love);


September 5th: “What is a Professional, Anyway?,” a conversation with doctor turned teacher Mike Doyle, blogging at Science Teacher.

…and then school begins, with Common Core English standards adopted, Race to the Top grants won, state exam cut scores gutted, and, in my own backyard, a brand spanking new building schedule applied. 

Should be a very interesting year.
