An Open Letter to Governor Cuomo: Re-think the Regs of APPR

From: Grant Wiggins
Date: 1 Jul 2015

Dear Governor Cuomo:

Throughout my professional life in education, I have been a supporter of teacher accountability. And, as you may know, I sided publicly with the findings in your recent report on the sham of current local teacher-effectiveness ratings in New York schools and districts.

However, I have long written and consulted on the need for transparency in assessment and accountability via tests released after they are given – as the Regents did for over 100 years until recently. You simply cannot expect people to trust a system in which the scores are psychometrically generated and where I cannot see, for myself, what was assessed and what the actual results were. It fails both as credible accountability and as feedback to teachers.

More bluntly: Would you accept such a system for yourself?

The cardinal rule of quality control, as formulated by W. Edwards Deming, is Drive Out Fear. Alas, a one-shot high-stakes test in which no one can even challenge the VAM growth score or (despite its lack of year-to-year reliability) see it in context with the specific item analysis greatly amplifies fear. Instead of building an accountability system that incentivizes success, we are simply beating the prisoners in hopes of improving morale.

The recent edits to the APPR system. The proposed edits to the APPR system, made as part of the recent budget negotiations, are thus unwise and likely to backfire. In particular, I found the following changes to be without much merit:

APPR plans may no longer include evaluations based upon the following factors:

• Parent/student surveys;

• Teacher artifacts or lesson plans;

• Student portfolios (unless there is an SED approved rubric);

• Goal setting;

• District/regionally developed assessments (unless SED approved);

• Any growth or achievement target

This is truly a step backward. It disempowers the teachers, the supervisors, the parents, and the BOCES. To not look closely at the goals and unit plans that frame the lessons observed; to ignore feedback from students and parents; and to further undercut the legitimacy of district and regional assessments is to emasculate local authorities and remove any incentive to “own” reform. At the very least, why not do what NYC has done in its Quality Review reports and use student and parent survey data (as well as external evaluators) to better triangulate the data?

What is an effective teacher? What these unwise and hurried changes have done is – ironically – to bring an essential question more clearly to the surface: just how do we identify an effective teacher? You and I agree that the current rating system does not work. So we need a lengthy and public debate, across all stakeholders, on what constitutes an effective teacher. The newly proposed system reduces the answer to an opaque external test score and a classroom observation – both of which are prone to reliability errors, and both of which focus far too narrowly on only a few aspects – the testable aspects – of student achievement. (Nor, currently, do valid external tests exist that can assess the numerous subjects and courses not covered by the Common Core.)

Over my 35 years of work in educational research and reform – and based on the work done by the National Board for Professional Teaching Standards – I can say with confidence that teacher behavior in the classroom is only a small part of being an effective teacher. Teacher planning, self-assessing, and self-adjusting are all critical criteria – and central to the NBPTS process. Further, only by looking at what teachers assess, how they assess it, and how they act on that feedback can we truly grasp what is effective and ineffective about them. All of us in education have seen teachers who seem effective if we look only at their style, student engagement, articulateness, and subject knowledge. But these are sometimes very misleading indicators – as is a single score, earned once per year.

Consider baseball, your love and mine; here is a simple analogy to make the point. You were a ballplayer and are a Yankees fan. But suppose we “tested” the Yankees on their skills only once per year, on tests developed by experts. Now imagine that the players do not know how they did, either during or after the test. Now imagine that NYSED gives them a value-added score – with test security, so they cannot double-check or question the test results (or the test’s validity). Worse, imagine in addition that the impartial evaluators and internal supervisors (coaches) went to one game where the Yankees were terrible – like the game last week in which they made four errors, left runners stranded, and pitched poorly. By the logic of your plan, we would be obligated to find Manager Girardi “ineffective.” But that is both bad measurement and contrary to common sense. Two weeks later it looks different, doesn’t it? Indeed, the charm of baseball is that a long season of 162 “tests” enables the truth of quality to out. If this is true for highly skilled and trained professional athletes, what about novice young students?

In short, I fear you are making matters worse, not better, with this new round of reductionist rules. And insisting that they be put into operation next year, with no time to really think them through, test them, and refine them, ensures that this effort will backfire – which neither you nor I wish to see.

I hope you will re-think these new criteria of “effective teaching”. I strongly recommend that you put together a blue-ribbon panel of educators to help you develop a definition and set of criteria that will carry a broader professional stamp of approval. And I trust that you realize that your goal of greater accountability for teachers – and thus greater opportunity for kids – can only be achieved if you have a sufficient number of respected educators on your side. When you start losing impartial and informed people like me, who have no skin in this game, your initiative is in trouble.

Sincerely,

Grant Wiggins
