Teacher-prep rankings return

U: Those rankings aren’t so useful. (Alex Friedrich / MPR)

Remember the National Council on Teacher Quality and the teacher-prep ratings that riled up the education world last year?

Those assessments are back — this time in the form of a rankings system.

The 2014 Teacher Prep Review fully evaluated 39 Minnesota programs on 21 campuses and gave national “Top Ranked” status to the University of Minnesota – Morris, Gustavus Adolphus College, St. Olaf College, University of Minnesota – Duluth, University of Northwestern – St. Paul, and University of St. Thomas.

Those six are among the 107 programs nationwide that received the designation. In all, 15 Minnesota elementary programs and 20 secondary programs received a national ranking.

A statement by the council says four Minnesota programs got no rank because “their performance was in the bottom half of the national sample.”

Kate Walsh, president of the National Council on Teacher Quality, said in the statement:

Given the increasing knowledge and skills expected of teachers, it is indeed disappointing that we could not identify more exemplary programs in Minnesota. However, Minnesota is by no means unique. The dearth of high-quality programs is a national problem that public school educators, state policymakers and advocates, working alongside higher education, must solve together.

A council spokesman said that although the assessment and methodology have stayed the same, the council is not releasing programs’ overall ratings as it did last year, though it does provide ratings for individual elements of each program.

Instead of stars, it’s using circles: a fully black circle means four stars, a circle that’s three-quarters black means three stars, and so on.

The council also looked at two special-education programs in Minnesota.

Here are the “Top Ranked” secondary-education programs, followed by their national rank:

University of Minnesota – Morris – Undergraduate (50)
Gustavus Adolphus College – Undergraduate (57)
St. Olaf College – Undergraduate (57)
University of Minnesota – Duluth – Undergraduate (57)
University of Northwestern – St. Paul – Undergraduate (57)
University of St. Thomas – Undergraduate (57)

These are the highest-ranked elementary programs, followed by their national rank:

Minnesota State University – Mankato – Undergraduate (27)
Bethel University – Undergraduate (78)
University of Minnesota – Morris – Undergraduate (97)
University of St. Thomas – Undergraduate (101)
Augsburg College – Undergraduate (155)

Here’s a link to the full rankings, and here’s a link to the report’s website with Minnesota information.

Associate Professor Misty Sato, who holds the Campbell Chair for Innovation in Teacher Development at the University of Minnesota – Twin Cities, was a critic of the ratings last year.

Here’s her take on the new system and the U’s stance:

Sato (Courtesy of UMN)

The University of Minnesota-Twin Cities campus chose not to participate in the NCTQ data collection for its rating system for three reasons:

1) they are a non-profit organization that is not an accrediting body, and we have already been through a rigorous program review by our accrediting body, CAEP (formerly NCATE); we received top marks for our partnerships with schools and the overall quality of our programs;

2) the methodology used by NCTQ to rate institutions lacks transparency and uses limited data collection; and

3) submitting information about our programs to a non-sanctioned non-profit is not a cost-effective use of university personnel time or resources.

We chose to provide minimum documentation to NCTQ on the advice of our General Counsel.

So, why is our institution ranked by this group if we did not fully participate? We fully expected to see “data insufficient to rank” or “chose to not participate” next to our institution’s name on the NCTQ website.

NCTQ claims to be about providing consumers with information they need to make informed decisions. They want aspiring teachers to know about program quality.

I don’t think this ranking system is going to provide that information.

For example, if I want to be a high school science teacher, I am going to look at the programs that prepare science teachers. NCTQ has lumped ALL secondary programs together. As a consumer, I am not applying to be a “secondary teacher” at the University of Minnesota-Twin Cities. I will be applying to be a science teacher — and our secondary science program has won national awards for innovation in early career support for teachers and has a very good track record for job placement in local school districts.

Ranking all secondary programs together gives that aspiring science teacher or English teacher or French teacher little to no information about what their real licensure program will be like. The ranking system is just not effective at providing consumer-needed information.

NCTQ would also like school districts to be able to use these rankings to make first-cut hiring decisions. These rankings will have little meaning to districts when they are trying to hire a bilingual elementary teacher for an immersion school or a middle school social studies teacher. It is not clear, based on these rankings, who is being prepared to teach what.

We work in partnership with seven of our local school districts. They know the quality of the teachers we prepare for their schools because we meet with them, we talk with them about the kinds of teachers they want to hire, we place our candidates in their schools for student teaching and hold interviewing sessions in the school districts.

More and more we are engaging in early hiring processes for candidates the school leaders identify as an asset to their school. No ranking system is going to imitate what it really means to engage in high-leverage practices for identifying a good fit between a teacher candidate and a school where they can make a difference.

Finally, NCTQ is still using an old model of program evaluation — looking for the best “inputs” by the program that they think might make a difference (we are not sure what those desired inputs are based on their methodology).

For the past five years, with support from the Bush Foundation, we have been moving our program evaluation model at the U of M-Twin Cities toward better understanding the impact that our graduates have on schools and on children when they are hired by our school district partners. We call these “impact” or “outcome” measures, and this is what our accrediting body is aiming toward as measures of program effectiveness. We survey our candidates, we survey our candidates’ employers, and we are getting closer to being able to look at the actual performance of our alumni in our partner district schools as we engage in very careful and collaborative data-sharing practices.

The field of teacher education does not really need more “old models” of program evaluation. We need more resources invested in being able to look at the new models that show candidate impact on schools and P-12 students.