The Wrong “Scientific” for Education

The release of National Assessment of Educational Progress (NAEP) 2019 scores in math and reading, announced as an "emergency" and "devastating," has thrown gasoline on a rhetorical fire already sweeping across the media: a call for "scientific" research to save public education in the U.S.

While the media and the public seem, both historically and currently, convinced by the rhetoric of "scientific," there is significant irony in the lack of scientific evidence backing claims about the causes of NAEP scores. For example, some have rushed to argue that intensive phonics instruction and grade retention legislation have caused Mississippi's NAEP reading gains, while many have used 2019 NAEP scores to declare the entire accountability era a failure.

Yet none of these claims rests on the necessary scientific evidence. There simply has not been the time or the effort to construct scientific studies (experimental or quasi-experimental) to identify causal factors in NAEP score changes.

Another problem with the rhetoric of "scientific" is that coinciding with that advocacy are some very disturbing contradictory realities.

And let’s not forget that for at least two decades, “scientific” has been central to No Child Left Behind and the Common Core—both of which were championed as mechanisms for finally bringing education into a new era of evidence-based practices.

We must wonder: If "scientific" is the answer to our educational failures, what has happened over the past twenty years of "scientific" being legislated into education? Everyone is shouting that the sky is falling because 2019 NAEP scores are down from 2017 and have remained relatively flat since the early 1990s (30 of the 40 years spanning the accountability era).

First, there is the problem of definition. "Scientific" is shorthand for a very narrow type of quantitative research: experimental and quasi-experimental designs, the gold standard of pharmaceutical and medical research.

To meet the standard of "scientific," then, research in education would have to include random samples of students and control groups in order to draw causal relationships and make generalizations. This process is incredibly expensive in terms of funding and time.

As I noted above, no one has had the time to conduct "scientific" research on 2019 NAEP data, so making causal claims of any kind about why NAEP scores dropped is necessarily not "scientific."

But there is a second, and larger, problem with calling for “scientific” research in education. This narrow form of “scientific” is simply wrong for education.

Experimental and quasi-experimental research seeks to identify causal generalizations. In other words, if we place all students along a bell-shaped curve divided into five segments, the meaty center segment would be where the generalization from a study has the greatest effectiveness. The two adjacent outer segments would show decreasing degrees of effectiveness, leaving the two extreme segments at the far ends of the curve likely showing little or no effectiveness (these students, however, could have learned under instruction not shown to be generally effective).

Yet, in a real classroom, teachers are not serving a random sample of students, and there are no controls to ensure that other factors are not causing different outcomes for students even when the instructional practice has been shown by scientific research to be effective.

No matter the science behind instruction, sick, hungry, or bullied students will not be able to learn.

The truth is, in education, scientific studies are nearly impossible to conduct, are often overly burdensome in terms of expense and time, and are ultimately not adequate for the needs of real teachers and students in real classrooms—where teaching and learning are messy, idiosyncratic, and impacted by dozens of factors beyond the control of teachers or students.

Frankly, nothing works for all students, and a generalization can be of no use to a particular student with an outlier need.

While we are over-reacting to 2019 NAEP reading scores, we have failed to recognize that there has never been a period in the U.S. when reading achievement was adequate; over that history teachers have implemented hundreds of different instructional strategies, reading programs, standards, and high-stakes tests—and we always find the outcomes unsatisfying.

If there is any causal relationship between how we teach and how students learn, it is a cumbersome matrix of factors that has been mostly unexamined, especially by “scientific” methods.

And often, history is a better avenue than science.

The twenty-first century has not been the only era calling for “scientific” in educational practice.

The John Dewey progressivism of the early twentieth century was also characterized by a call for scientific practice. Lou LaBrant, who taught from 1906 until 1971 and rose to the presidency of the National Council of Teachers of English in the 1950s, was a lifelong practitioner of Deweyan progressivism.

LaBrant called repeatedly for closing the "gap" between research and practice, but she also balked at reading and writing programs: singular approaches to teaching all students literacy.

While progressive education and Dewey are often demonized and blamed for educational failure by the mid-twentieth century, the truth is that progressivism has never been widely embraced in the U.S.

Today, however, we should be skeptical of the narrow and flawed call for “scientific” and embrace instead the progressive view of “scientific.”

For Dewey, the teacher must simultaneously teach and conduct research, a practice that eventually would be called action research.

To teach, for progressives, is to constantly gather evidence of learning from students in order to drive instruction; in this context, science means that each student receives the instruction they demonstrate a need for and that produces some outcomes of effectiveness.

In an elementary reading class, some students may be working in read-aloud groups while others receive direct phonics instruction, and still others sit in book clubs reading picture books by choice. None of them, however, would be doing test-prep worksheets or computer-based programs.

The current urge toward “scientific” seems to embrace the false assumption that with the right body of research we can identify the single approach for all students to succeed.

Human learning, however, is as varied as there are humans.

This brings us to the current "science of reading" narrative, which calls for all students to receive intensive systematic phonics, purportedly because scientific research demands it. The "science of reading" narrative also rejects and demonizes "balanced literacy" as not "scientific."

We arrive then back at the problem of definition.

The "science of reading" advocacy is trapped in too narrow a definition of "scientific," one that is fundamentally wrong for educating every student. Ironically, again, balanced literacy is a philosophy of literacy (not a program) that implements Deweyan progressive "scientific" practice: each student receives the reading instruction they need based on the evidence of learning the teacher gathers from previous instruction, evidence then used to guide future instruction.

Intensive phonics for all begins with a fixed mandate regardless of student ability or need; balanced literacy starts with the evidence of the student.

If we are going to argue for “scientific” education in the U.S., we would be wise to change the definition, expand the evidence, and tailor our instruction to the needs of our students and not the propagandizing of a few zealots.

For at least two decades, the U.S. has been chasing the wrong "scientific" as a distraction from the real guiding principle: efficiency. Reducing math and reading to discrete skills and testing those skills as evidence of complex human behaviors is as misleading as arguing that "scientific" research will save education.

Teachers as daily, patient researchers charged with teaching each student as that student needs—that is the better “scientific” even as it is much messier and less predictable.
