What the UMN-Twin Cities director for educator development says about the teacher-prep ratings

Incomplete. (justified sinner via Flickr)

I’ve just received this note from Mistilina Sato, director of the Educator Development and Research Center at the Twin Cities Campus of the University of Minnesota.

Sato, who holds the Campbell Chair for Innovation in Teacher Development, is among many who’ve taken issue with today’s National Council on Teacher Quality ratings.

As an institution, the University of Minnesota-Twin Cities campus chose not to participate in the NCTQ review process. We voluntarily engage in accreditation reviews in which our programs have been thoroughly reviewed by a panel of peers and experts in teacher education and K-12 schools. The NCATE accreditation review process includes a review of syllabi, a review of data about our candidate performance, a visit to campus, and interviews with K-12 partners and faculty. In those reviews we have, as recently as this year, received high marks and exemplary ratings, especially in the area of K-12 partnerships and our clinical preparation of teachers.

We do not have an interest in participating with this non-profit organization that has established its own rating system. We prefer to be evaluated through a system that has been federally sanctioned and has a long track record of transparent and thorough review: NCATE. Including our institution on their ratings list is not productive for us or for NCTQ, as our non-participation brings their whole rating system into question for its lack of accuracy.

The information NCTQ used to make its ratings of our programs is grossly incomplete. They looked at only a handful of syllabi and possibly our public websites (we are not sure they even looked at those). They decided to provide a rating for our institution even though we chose not to participate. So they have decided to publish a rating that is based on incomplete information.

For example, they rated us on a standard they call “student teaching.” We had to provide, under freedom of information law, the syllabi they requested for courses they identified. This was not our consent to participate in their ratings. After receiving our syllabi for the student-teaching portion of the preparation programs, their ratings used criteria such as how frequently feedback is provided to candidates. They did not ask us for that information, and it is not information provided in a syllabus. They chose to proceed in rating this aspect of our programs without the information needed to make an accurate assessment. The report we received also comments on how we communicate with school districts about our criteria for cooperating teachers and on our capacity to participate in the selection of those teachers. NCTQ did not ask us for this information under the FOIA request, and again, it is not information included in a syllabus. It is available on our website in our clinical handbook; we have detailed memoranda of understanding with our district partners about the criteria for cooperating-teacher selection, and we actively participate in their selection and training. The NCTQ rating does not reflect this information at all.

Our report says “While the program requires observations to be spaced at regular intervals, it fails to meet this standard because it does not provide student teachers with written feedback after five or more observations, does not clearly communicate to school districts the desired characteristics of cooperating teachers, and fails to assert its critical role in the selection of cooperating teachers.” This entire statement is false. They did not have the information available to make this assessment, and yet they have chosen to publish this rating.


  • seth

    So, you refused to provide them info and then criticize them for
    using presumably incomplete info? Really?

    • seth

      What’s so scary about providing info?

  • andrew

    Why should UMN waste its resources assembling information for some random non-profit organization? Why would anyone consider the evaluations of such an organization, one that is willing to publish rankings based on incomplete information? Especially when a little digging reveals that the organization has a specific point of view that it is advocating, and seems more concerned with garnering publicity than anything else?