On Education and Credentialing: “Mak[ing] a Straight-cut Ditch of a Free, Meandering Brook”

“What does education often do?” Henry David Thoreau asked in his journal, answering: “It makes a straight-cut ditch of a free, meandering brook.”

A former high school English/ELA teacher of 18 years, I sat in the first of two training sessions yesterday, and this passage from Thoreau came to mind.

Over the past 15 years, I have been a teacher educator, now a full and tenured professor in my university’s Education Department. Yet, from 9 to 4 yesterday, as representatives from the state department of education trained our full-time and adjunct faculty on the new South Carolina teacher evaluation rubric, adapted from the National Institute for Excellence in Teaching (NIET) standards, I felt more like an elementary student: the so-called training was mostly condescending and entirely unprofessional.

But the unprofessional, I regret to acknowledge, is business as usual for teacher education, a faux-field in higher education, and for K-12 teaching, a faux-profession.

Some of my doctoral courses for an EdD in curriculum and instruction covered educational leadership. In that work, I was always fascinated by what the research often describes as three types of leaders—authoritarian, authoritarian-light, and collegial.

The most chilling of the three is the authoritarian-light, a style built on strategies that manufacture stakeholder buy-in by making it appear that stakeholders are making decisions even though they are actually being coerced to comply with mandates about which they have no real choice.

This is the process I suffered through yesterday as bureaucrats from the state department assured a room of professors and practitioners that the new state rubric for teacher evaluation is backed by research and that we already know and do everything therein.

Again, as a former English/ELA teacher, I struggle to decide whether to describe the experience as Orwellian, a Kafkaesque nightmare of reason, or both.

Training Teacher Educators to Train Teachers to Train Students

Some of the early session dynamics are worth noting upfront.

As part of the authoritarian-light strategies, the facilitators had lots of group work with large sticky paper and markers. Much laughing and chatting included references to the numerous teacher evaluation systems SC has adopted over the past three decades and how everyone in the room knew all this stuff.

We all shared our very E.D. Hirsch moment of knowing all the acronyms for the four or five systems many of us in the room have experienced.

And then the dramatic kicker: But this new rubric and system is different, better, and supported by research!

[Let’s note that no time was taken to acknowledge that this same framing occurred each time all the former systems were introduced.]

In passing, the credibility of the rubric was linked to the fact that the rubric includes footnotes (so do Ann Coulter’s books, by the way) to the incredible work of Danielson and Marzano!

However, when I found the rubric online, I noticed that neither appears in the 23 footnotes.

[Let’s note that no time was taken to examine very powerful and credible counter-evidence refuting the credibility of the cult of Danielson and the cult of Marzano. Also, the cult of Hattie is in footnote 7, a hint to the hokum therein.]

Not to belabor the seven-hour training session, but a few additional points:

  • This rubric is highly touted, yet when we raised concerns about vague terms such as “most” and “some” to distinguish between “proficient” and “needs improvement,” that conversation was mostly brushed aside, except that we discovered if you look under “Description of Qualifying Measures” on page 8, you learn that “most” means “some” (though “some” remains undefined). By any fair evaluation, this rubric fails miserably the basic parameters of high-quality rubrics (interestingly, something I teach in my methods courses).
  • And then there is the rubric’s enormity: 404 bullets over 4 categories and nine pages of small Helvetica font. To navigate these bullets (and we were warned repeatedly to do so “holistically and not as a checklist” even as we walked through the bullets as a checklist and not holistically) with any care at all requires hours for just one lesson, assuming about 2 minutes per bullet. Not only does the rubric fail basic expectations for clearly defined terms (just what the hell are “powerful ideas”?), but also it fails for being incredibly unwieldy and overwhelming.
  • Throughout the training, two key points were emphasized: mastery and teacher impact on student learning. As I will discuss below, we were given no opportunity to explore the serious problems with both, and no time was spent highlighting how the training itself practiced faux-science in the context of each.
  • As we explored the rubric, the facilitators also unpacked key factors that are not expressed in the rubric itself. Even though the language of the rubric under “proficient” references the teacher, the facilitators noted often that moving from “needs improvement” to “proficient” depends on students demonstrating mastery (showing “proficient”), not on teacher behaviors (merely “needs improvement”).

To clarify how problematic this training proved to be, let me briefly describe the last activity: viewing a lesson while the facilitators modeled how to use the rubric.

The lesson was a ninth-grade ELA lesson on inference, and the class was at a “no excuses” charter school, the black and brown children all adorned in matching purple shirts.

Here is the short version: the lesson, we were told, met the upper range of “proficient.”

Yet, what the activity highlighted was quite different from the intent.

The lesson was weak, a reductive attempt to teach inference to mastery that confuses isolated literacy skills with teaching literacy or literature. But this sort of bad lesson is necessary once you reduce teaching to mastery and teacher impact on student learning.

Instead of addressing this substantive problem and ways to conference with the teacher about focusing literacy instruction on rich texts and inviting students to explore those texts with more and more sophistication over a long period of time, the points of emphasis were on transcribing verbatim the lesson (although we could barely hear the audio) so that we could give lots of evidence for the bullet points we were not supposed to view as a checklist.

[Let’s note that no time was allowed to acknowledge that if and when teacher evaluators need detailed evidence of teaching, the video itself is superior to transcribing.]

The Big Point here is that once a rubric is codified by the state as a credentialing instrument, that rubric determines “proficient,” which may also simultaneously be a very bad, uninspiring, and reductive act of teaching.

Within that, as well, we witnessed the faux-science of claiming to embrace concepts while simultaneously contradicting them.

While only a few students out of a class of 20-plus students responded aloud during the lesson (our only potential evidence of learning), that constituted “most” and thus “proficient”—and represents in the Orwellian confines of this rubric “mastery.”

A few students offering one or two comments aloud in no reasonable way constitutes mastery, and there were no efforts to control for anything that justifies claiming this lesson by this teacher was a direct causal agent for the supposed learning. For example, those students willing to share may have come to class already capable of playing the inference game in school.

Teacher education as a bureaucratic mandate has mostly functioned, and currently functions, as faux-science—adopting the language of a certain kind of reductive behavioral psychology without taking the care and time to understand or implement the concept with fidelity.

This is a tragic consequence of the low self-esteem of the field—which becomes a vicious cycle of pretending (badly) to be a field deemed more credible (psychology) but unable to become a credible and independent field unfettered by bureaucracy.

Everything Wrong with Teacher Education Is Everything Wrong with Education

“Schools are increasingly caught up in the data/information frenzy,” concludes Rebecca Smith, adding:

Data hold elusive promises of addressing educational concerns, promising real-time personalized instruction, predicting student growth, and closing the achievement gap of marginalized students (Bernhardt, 2006; Earl & Katz, 2006; Spillane, 2012). Today collections of student data are considered a reliable and a scientific way of measuring academic growth, mobilizing school improvement, and creating accountable, qualified teachers. Influenced by policy, pedagogy, and governing school procedures, data collection has become normalized in schools. Instead of asking what we can do with data, the better questions are: How did the accepted practice of quantifying children become normalized in education? How does our interaction with data govern our thoughts, discourses, and actions? (p. 2)

And as Smith details, the historical roots are deep:

Thorndike (1918), relying on his psychological work, believed scientific measurement utilized in educational settings could create efficient systems where “knowledge is replacing opinion, and evidence is supplanting guess-work in education as in every other field of human activity” (p. 15). To Thorndike, the measurement of educational products was the means by which education could become scientific through rigor, reliability, and precision. (p. 3)

As a logical although extreme consequence of this historical pattern, Common Core represents the false allure of accountability and standards as well as the quantification of teaching and learning within the idealized promise of “common.”

Common Core was doomed from the beginning, like the many iterations of standards before it, because the evidence from the accountability era is quite clear:

There is, for example, no evidence that states within the U.S. score higher or lower on the NAEP based on the rigor of their state standards. Similarly, international test data show no pronounced test score advantage on the basis of the presence or absence of national standards. Further, the wave of high-stakes testing associated with No Child Left Behind (NCLB) has resulted in the “dumbing down” and narrowing of the curriculum.

And thus:

As the absence or presence of rigorous or national standards says nothing about equity, educational quality, or the provision of adequate educational services, there is no reason to expect CCSS or any other standards initiative to be an effective educational reform by itself.

For decades and decades—and then to an extreme over the past thirty years—education and teacher preparation have been mired in doing the same thing over and over while expecting different results.

The quality of education, teaching, and learning is not in any reasonable way connected to the presence or quality of standards, to the ways in which we have chosen to measure and then quantify them.

Training education professionals to use a really bad rubric that will determine if candidates are allowed to teach “proficiently” (which I can define for you: “badly”) is insanity because within a few years, another rubric will be heralded as the greatest thing while teaching and learning are no better—and likely worse—for all the bluster, time, and money wasted.

Education and teacher education are trapped in a very long technocratic nightmare bound to a reductive behaviorism and positivism.

These false gods are useful for control and compliance, but are in no way supportive of educating everyone in a free society.

Technocrats and bureaucrats cut straight ditches; teaching and learning are meandering brooks.

3 thoughts on “On Education and Credentialing: “Mak[ing] a Straight-cut Ditch of a Free, Meandering Brook””

  1. “This is the process I suffered through yesterday as bureaucrats from the state department assured a room of professors and practitioners that the new state rubric for teacher evaluation is backed by research and that we already know and do everything therein.”
    Why do you not all walk out then?

  2. Dr. Dolan passed away a couple weeks ago. This lecture (2014) relates to this post. Dolan: Overview of the Decision-Making 7 Point Continuum

    Dr. W. Patrick Dolan presents a tool for collaboration between a district and its teachers or their union leaders.

  3. I didn’t know how fortunate I was in 1975 when I was asked if I wanted to earn my teaching credential through an urban residency program that paired me up with a master teacher and had me work full time (for a small stipend) for an entire school year in her 5th grade classroom, in an area where child poverty rates were approaching 100 percent.

    Every teacher should have the opportunity to go through a program like this one.

    My master teacher was Adele Stepp and she was great. What I learned from her showed me how to manage a classroom and deal with unruly disruptive students in the calmest possible way, the most important skill of all for a teacher to know: management and control.

    When I joined the U.S. Marines out of high school and went to boot camp, before serving in combat in Vietnam, I was exposed to another method of management and control through terror and fear — a method we find that is used in many corporate charter schools like Eva Moskowitz’s so-called, misnamed Success Academies in New York City (they should be called Bully Boot Camp Academies, we will abuse your child and scar them for life) — but this was a method Adele didn’t teach me because teachers weren’t allowed to use terror in the community based, democratic, transparent, non profit traditional public schools to maintain an environment where learning takes place.

    At the time I thought all teachers went through this type of program to earn a teaching credential. I had no idea about the variety of teacher credentialing programs that were probably all supported by research. I stayed in the classroom teaching for the next thirty years until 2005, and during those decades I was exposed to endless teaching methods, all supported by research, that could be footnoted because it is so easy for a researcher to publish in education journals that might be peer reviewed. That doesn’t mean the methods researched work on all children, or work for all teachers, or are even humane and child friendly.

    Anyway, in Dana Goldstein’s book “The Teacher Wars: A History of America’s Most Embattled Profession”, in one chapter, I think it was Chapter 10, she referred to research based on results in real schools and classrooms over a long period of time that compared the different methods of teacher training and discovered that urban residency programs like the one I went through were by far the best of them all. The worst method was TFA, Teach for America. The details are in her book, a book heavily researched with loads of citations and footnotes.

    I really want to take those representatives from the SC state department of education who ran the workshop you were forced to attend and be their DI in Marine Corps boot camp. If they survived, they could even write a research paper about their experience learning how to become an auto-killing machine that is then sent off to fight in one of their country’s endless wars that feeds profits to the largest weapons industry in the world.
