Imagine an evaluation system for teachers where a teacher is considered “Effective” after earning “Needs Improvement” ratings on both of their measures of student learning AND on each of the following criteria: managing student behavior, creating an environment to support learning, engaging students in learning, demonstrating flexibility and responsiveness, and using questioning techniques.

Most casual observers would consider this a pretty weak appraisal system. Or at least, they’d understand it is not a system designed to give anyone accurate feedback or to recognize differences in teaching quality. Perhaps they’d ask how such a system came to be and what its designers were thinking.

If you want to understand how things like this happen in our education system, you can start with a review of the recommendations approved at last night’s DPAS-II Subcommittee meeting. This committee was created by House Joint Resolution #6 and is required to deliver recommendations on changes to the state’s educator evaluation system by March 31, 2016. Prior to leaving the Delaware Dept of Education (DDOE) last week, I was the DDOE’s representative (a non-voting member) on this committee. Yesterday, the subcommittee’s 14 members unanimously approved a series of recommendations that members of the legislature will likely attempt to act upon in April. Here are my reflections on last night’s meeting:

Adults are the focus of these decisions, not students:

The representatives on this committee work each day on behalf of Delaware students and have sacrificed a lot of personal time to serve on this committee. But it is hard to see how that commitment to students and equity is reflected in the committee’s recommendations.

  • The committee will recommend reducing the weight of the Student Improvement (growth) component of educators’ evaluations. The current system is designed such that an educator deemed “Effective” on the other four components, which are based on classroom observations of their teaching (98% of Delaware educators are rated effective on all of these components), but “Unsatisfactory” on their students’ growth will be rated “Needs Improvement.”
  • The committee is also set to recommend that the evaluation rating system designed by the Delaware State Education Association (DSEA) and the Delaware Association for School Administrators (DASA) be piloted in 2016-17 and used statewide the following year. The committee approved this rating system even after I noted the inconsistencies described above several times. Under the proposed system, a teacher’s students could show little growth, yet if the teacher were rated highly on things like “Selecting Instructional Goals,” “Recording Data in a Student Record System,” or “Growing and Developing Professionally,” they could still be considered “Effective.”
  • The committee is also recommending that students’ growth on the state Math/English assessment no longer be required as one of the two measures in a Math or English teacher’s Student Improvement component. This is currently the only statewide, uniform, and objective measure of educator effectiveness in the evaluation system.

Some voices matter more than others in this process:

The committee’s executive summary describes the group as “comprised of representatives from all major educator and stakeholder groups.” But this is a misrepresentation. As one student highlighted during his comments, students are a major stakeholder group without a seat at the table in this discussion. A parent attending the meeting also remarked that having a single parent on the committee (alongside four DSEA representatives and four DASA representatives) to represent the entire state is unacceptable.

Additionally, not all voices are created equal in other aspects of this process. Last year’s DPAS-II Advisory Committee agreed to move the state to annual summative evaluations by 2017-18, and this change was approved by the State Board of Education. This year’s DPAS-II Advisory Committee asked the subcommittee to spend time discussing how this could be done in a way that was not overly burdensome for administrators. Half of the 4,000+ Delaware teachers who responded to the 2015 DPAS-II survey even felt that moving to annual summative evaluations could enhance the DPAS-II system.

But the 14 people (or, at least, the most vocal members) on the subcommittee were not interested in exploring this. So they recommended that annual evaluations “are not necessary and should be rescinded.” Under the current system, only novice teachers (those in their first three years) and teachers on an improvement plan receive annual summative evaluations.

Also, during the last two meetings, the Superintendent representatives (Matt Burrows and Dr. Susan Bunting) and a number of administrators on the committee noted that they wanted the committee to ensure principals and evaluators had the final say on the types of student-growth measures educators in their buildings use and the goals they set. The current policy is that educators and evaluators must discuss and agree on these goals, but administrators have the final say in case of disagreement.

This would undoubtedly have been a point of disagreement between the teachers and administrators on this committee. So the chairs of the committee claimed the matter was too detailed for this committee to discuss and outside the scope of its recommendations. It says a lot about this “representative” process when the expertise and recommendations of the two delegates of Delaware’s 19 superintendents are disregarded in the conversation.

I’ll end with a list of who is represented on this committee (see below). If you are interested in having a say in this discussion, there will likely be an opportunity for public comment when the chair and co-chair of the committee present their recommendations to the legislature and State Board in the coming months.

DPAS-II Sub-Committee Members

  • Jackie Kook, Delaware State Education Association (DSEA) – Chair
  • Dr. David Santore, DE Association for School Administrators (DASA) – Co-Chair
  • Sherry Antonetti, Delaware State Education Association (DSEA)
  • Clay Beauchamp, Delaware State Education Association (DSEA)
  • Rhiannon O’Neal, Delaware State Education Association (DSEA)
  • Kent Chase, DE Association for School Administrators (DASA)
  • Dr. Clifton Hayes, DE Association for School Administrators (DASA)
  • Dr. Charlynne Hopkins, DE Association for School Administrators (DASA)
  • Bill Doolittle, Parent Representative (PTA)
  • David Tull, Delaware School Boards Association
  • Dr. Lisa Ueltzhoffer, Newark Charter – Charter School Representative
  • Dr. Susan Bunting, School Chiefs’ Association (DPAS-II Advisory Committee Chairperson)
  • Donna R Johnson, Delaware State Board of Education (non-voting member)
  • The Honorable David Sokola, Delaware State Senate
  • Tyler Wells, Higher Education
  • Christopher Ruszkowski, Delaware Dept of Education (non-voting member)


4 thoughts on “Reflections after last night’s educator evaluation committee meeting”

  1. I agree with many of your points, but the current system is not providing any variation in teacher ratings (like you said, 98%+ of teachers are rated effective or better). So much focus is placed on student growth because that is one of the few areas where any variability is observed. When you get no variability in the first four components, the one component that does produce it becomes much more important.
    Overall, my reluctance to endorse using student growth from SBAC (for teacher evaluation) is because there are several unanswered questions hanging out there. The first concerns some very basic reliability questions I have regarding the SBAC. Basically, what I need to know before using an assessment to measure student growth is how precise it is. I did receive this information for DCAS, but as of November 2015, I had never seen the standard error of measurement released for SBAC results.
    Another issue that has never been well explained to practitioners is how the target scores for measuring student growth are determined. It took a very long time to get any information about how the targets were set for DCAS. The fact that the targets are norm-referenced rather than criterion-referenced needs to be addressed: under a norm-referenced model, some percentage of teachers will always be determined to be underperforming.
    Finally, my biggest concern is that we have never received information (as far as I am aware) on the year-to-year correlations of student growth, teacher by teacher. In other words, this growth model should produce results showing that a teacher whose class shows little growth in year 1 is also more likely to show little growth in year 2. In reality, the studies I have seen show that these year-to-year correlations of class growth are fairly low (r = 0.3 or 0.4). I have not seen what the correlations are for Delaware’s model.
    Additionally, I haven’t even mentioned that we are often talking about small sample sizes for teachers (especially at the elementary level) and that we know students are not randomly assigned to classrooms. DDOE has produced data showing that our neediest students are placed with the most inexperienced and often ineffective teachers. Many schools have systems in place that allow parents to request teachers, thus creating classrooms comprised of students with more engaged parents.
    In summary, it is important to look at the big picture. I find it hard to get too wound up over actions that will make 99% of teachers rated effective instead of 98%, as this group proposes. Overall, I think too much focus is being placed on the inclusion of student growth when we should instead be talking about why so little variation is reported in the observation/artifact-based portions of DPAS II. This is what I think is the real issue.

  2. Overall, my reluctance to endorse using student growth from SBAC (for teacher evaluation) is because there are several unanswered questions hanging out there.

    Chief among them is that we are using a test designed for students on the adults, against its designed purpose. Policymakers hijacked student measurements and foisted them on teachers to make themselves feel better, with no regard for unintended consequences like, oh, I don’t know… turnover, professional disrespect, lower rates of entry into the profession, etc.

    The ideologues are in control, make no mistake about it, and schools and communities will suffer until they stop.

    http://www.nationalaffairs.com/doclib/20110919_Hess.pdf

  3. The growth targets the DOE set for minority, low-income, ELL, and students with disabilities are, as Penny Schwinn put it, the most aggressive in the nation. But with ESEA now yesterday’s news, do those targets even matter anymore?

    In terms of “managing” student behavior, there are several facets to this. Is the student on an IEP, and are the accommodations being implemented with fidelity? Are they the right accommodations? Is there enough staff in the classroom? How would you measure this? What if a student is unruly but has no disability, just horrible manners, and refuses to change? Can a teacher be blamed for that, and is the system set up so action would be taken against a student like that? If another teacher has a bad environment in their classroom and a student carries that bad behavior into another classroom with a good environment, should that teacher be blamed?

    When those questions are answered, the whole picture changes. That being said, if a teacher is able to implement those five items, you would have a much more engaged classroom. If the environment is not conducive to learning, there are going to be issues.

    In our endless quest to rate and label, we have lost track of the individual student. Yes, with this new system teachers would still be measured based on some sort of collective student outcome. Each child is different. We keep bumping heads and tweaking the system but we have the same results, year after year. Maybe if we just let teachers teach and stop trying to “rate” them and students all the time, we can release the pressure valve that ultimately gets put on the student!
