Entering higher education in my early 40s after 18 successful years as a high school English teacher, I remain, 17 years later, baffled and even disappointed at the mess of contradictions that characterizes an institution populated by the most educated people possible.
Immediately I had to hold my tongue against the pervasive culture of college professors constantly bemoaning how busy they are. When my high school teaching career ended, I was wearing a wrist brace because I was hand-marking about 4000 essays and 6000 journals per year while teaching five courses and about 100 students (many colleagues taught 20+ more students per year).
I also coached during many of those years, with work days running from about 7:30 AM until 10 or 11 at night.
By contrast, I teach two first-year writing courses each fall (as part of my full load, a minimum of five courses per academic year), a total of 24 students, and my teaching schedule tends to be three days a week, often including a Monday evening class.
The Ivory Tower effect is a bit more accurate than I would prefer.
More disturbing, however, is the power of tradition among academics, a dynamic that works against practices and policies being based on evidence (and thus in a state of flux when that evidence changes).
While the U.S. has a long history of characterizing and even demonizing higher education as some sort of liberal cult, the truth is that the very worst qualities of higher education are from its conservative urges as institutions.
Of course, you can find a disproportionate number of professors who have left-leaning social and philosophical ideologies, but the most powerful department/colleges in higher education are often the most conservative—political science, economics—or the most apt to take non-political poses—the hard sciences.
This disconnect between how higher education is perceived and how higher education exists stems, I think, in part from higher education presenting itself rhetorically as progressive—mission statements, social justice initiatives, etc.
However, with a little unpacking, we can expose that practices and policies often contradict and even work against that rhetoric and those initiatives.
One example that I have addressed again and again is the use of student evaluations of teaching (SET) to drive significantly the promotion, tenure, and reward process.
Consider a few points raised in “Colleges Are Getting Smarter About Student Evaluations. Here’s How” by Kristen Doerer:
“Having a female instructor is correlated with higher student achievement,” Wu said, but female instructors received systematically lower course evaluations. In looking at prerequisite courses, the two researchers found a negative correlation between students’ evaluations and learning. “If you took the prerequisite class from a professor with high student teaching evaluations,” Harbaugh said, “you were likely, everything else equal, to do worse in the second class.”
The team found numerous studies with similar findings. “It replicates what many, many other people found,” said Harbaugh. “But to see it at my own university, I sort of felt like I had to do something about it.”…
Studies since the 1980s have found gender bias in student evaluations and, since the early 2000s, have found racial bias as well. A 2016 study of data from the United States and France found that students’ teaching evaluations “measure students’ gender biases better than they measure the instructor’s teaching effectiveness,” and that more-effective instructors got lower ratings than others did….
Despite the data, at many colleges, particularly research-based institutions, student evaluations are still the main measure, if not the only one, of teaching effectiveness in promotion-and-tenure decisions.
Across universities and colleges in the U.S., failures of diversity and inclusion are pervasive problems. Poor students and students of color are underrepresented in many colleges, especially the so-called elite institutions; women and people of color are similarly underrepresented on faculties.
Nothing rings more true or frustrating than Doerer’s use of “despite the data.”
I have rejected SETs directly in my biannual self-evaluation for merit raises. I have consistently urged the administration and our faculty status committee to end or greatly reduce the influence of SETs.
In all of these situations, I have repeatedly shared the research, the data:
- Boring, A., Ottoboni, K., & Stark, P.B. (2016, January 7). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research.
- Uttl, B., White, C.A., & Gonzalez, D.W. (2017, September). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42.
- MacNell, L., Driscoll, A. & Hunt, A.N. (2015). What’s in a Name: Exposing Gender Bias in Student Ratings of Teaching. Innovative Higher Education, 40(4), 291–303. doi:10.1007/s10755-014-9313-4
- Boring, A., Ottoboni, K., & Stark, P.B. Student evaluations of teaching are not only unreliable, they are significantly biased against female instructors. LSE Impact Blog.
- How Student Evaluations Are Skewed against Women and Minority Professors
- New study could be another nail in the coffin for the validity of student evaluations of teaching
- New analysis offers more evidence against student evaluations of teaching
- Study finds gender perception affects evaluations
- Academic sexism: Research suggests students are biased against female lecturers
- Student evaluations can’t be used to assess professors. They’re discriminatory
- Gender Bias in Student Evaluations, Kristina M. W. Mitchell and Jonathan Martin
- Most institutions say they value teaching but how they assess it tells a different story
And without fail, those with power, who tend to be white men, offer a tepid acknowledgement of the research followed by a quick “But we have to do something.” Doerer includes a response (from a white man) that sounds all too familiar:
Ken Ryalls, president of the IDEA Center, a nonprofit higher-education consulting organization, recognizes the bias but thinks doing away with evaluations isn’t the answer. He opposes efforts to eliminate the voice of students. “It seems ludicrous,” he said, “to have the hubris to think that students sitting in the classroom have nothing to tell us.”
“The argument that you should get rid of student evaluations because there is bias inherently is a bit silly,” he said. “Because basically every human endeavor has bias.”
The “yes, but” dynamic works to maintain the inequitable status quo. And as Ryalls’s comment shows, the “yes, but” response is often a distraction.
No one is arguing to remove the voice of students. But as Doerer’s reporting confronts and as the research base shows, student evaluations of teaching are fraught with student biases that corrupt the teacher evaluation process, effectively discouraging women, people of color, and international faculty from remaining in a hostile environment that carries very real negative career consequences.
For example, calls to end SETs as primary or major instruments for promotion, tenure, and merit pay are often part of a larger examination of how to make student feedback more effective for teaching and learning.
That’s in large part why Oregon decided to try a midterm student-experience survey that only the applicable faculty member can view. An instructor can make changes in the middle of a semester, when students can still benefit, which encourages students to give constructive feedback.
For many years, I have asked students for feedback at midterm, explaining that I would like the opportunity to address their concerns and to identify what is working well, because receiving complaints after a course ends really benefits no one.
Further, when student feedback is for the professor only, it becomes a conversation about improving teaching and learning, and as a professor myself, I am best equipped to interpret student comments. I consistently receive feedback that students intend as negative but that addresses practices I will never change, because the comments misunderstand my role and their roles in the classroom.
Yes, student feedback is valuable, but it likely cannot be simply or easily reduced to numbers, formulas, or even verbatim interpretations of their direct words.
It has taken nearly four decades of high-stakes accountability in K-12 education for people to begin to acknowledge that high-stakes accountability causes far more harm than good.
In higher education, if equity and inclusion are real goals, we can and must seek ways to give students safe and open spaces for providing their professors feedback, and we can and must better support faculty in interpreting that feedback in ways that improve their teaching and student learning. But to reach those goals, we must end the practice of using SETs in significant ways to evaluate faculty.
Higher education must end the tradition of “despite the data” and recognize that rhetoric means less than nothing if contradicted by practices, policies, and a culture of “yes, but.”