For many years, the College Board released average SAT scores with the states ranked by those averages. While the media rushed to make claims about those rankings, as well as about how average SAT scores changed from one year to the next, educational researchers and scholars often fought a losing battle trying to explain the flaws in those rankings and in the many claims about relative educational quality that the media, politicians, and the public embraced.
More recently, however, even the College Board warns that no one “[should] rank or rate teachers, educational institutions, districts or states solely on the basis of aggregate scores derived from tests that are intended primarily as a measure of individual students.” 
Yet many continue to rank and to draw rash conclusions despite that warning, because in the U.S. rankings of all kinds are extremely popular—from our sports to our schools to almost everything else.
The U.S. obsession with ranking seems as much a love/hate relationship as anything, notable in how we constantly seek a better way to rank sports teams (consider the new playoff format for college football) even as we constantly argue about and complain about those rankings.
What we tend to fail to do is question the act of ranking itself or acknowledge what rankings actually do reveal: that any ranking says more about who is doing the ranking, and why, than about what the ranking claims to accomplish.
Researcher Gerald Bracey has warned about ranking:
In any ranking, while someone gets to rank first, someone must rank last…. In order to properly judge a rank, you need to know something about the context in which it occurs. (p. 59)
A key point here is that ranking imposes a judgment of relative quality on people and situations even though such judgments may be either irrelevant or terribly misleading. To explain this, Bracey refers to Olympic athletic events (placing fourth in an Olympic event is losing, although that athlete may be fourth best in the world at the sport), but with my methods students, while addressing assessment, I discuss identifying the top runners out of a group of students.
Unlike administering selected-response testing, identifying the best (and worst) runners can be accomplished under ideally authentic conditions—running a race. But to return to Bracey’s point about “context”: even though determining the best and worst runners can be authentic, that doesn’t mean the process is without bias that directly impacts the resulting rankings.
If we take 30 runners and ask them to run the 40-yard dash, we are likely to get a much different ranking than if we ask them to run a marathon. In other words, who decides the metrics used for ranking, and under what conditions, renders every attempt at ranking deeply biased, relative to a particular setting, and in many ways less useful than it appears.
By changing the parameters for determining the “best runner,” we change who ranks where, and we must also acknowledge that a runner who places first in one class (and is labeled “best”) may place last if moved to another class.
Concurrent with the newly formed college football national championship playoff (the result of decades of using a variety of systems to rank and select only two teams to play for that championship each year—a deeply unsatisfying process), Education Week released its annual Quality Counts ranking of state educational quality and an edu-scholar public influence ranking first offered in 2010.
At the risk of sounding petty, I want to note that although I do not appear on the edu-scholar ranking, since the metrics for that ranking are made public, I would easily rank in the second 100 based on my Klout score, Google Scholar metrics, book publications, and frequent publishing and citations in international, national, state, and local media.
While I am not lobbying here, my own case highlights that dozens of other scholars are likely in the same situation, and that despite the intent behind the rankings (to recognize public work by academics, work often ignored within the academy), the act of reducing people or their work to metrics and then ranking them is often counter-productive to the intended goals.
Should we increase the value placed on public work by academics and scholars? Yes. But labeling and ranking public work by scholars does more harm than good.
Should we shine a bright light on educational quality of schools and states in order to improve that quality? Yes. But labeling and ranking schools and states does more harm than good.
Just as we need to set aside accountability, standards, and high-stakes testing as our only approach to education reform, we need to stop our incessant race to rank in all educational contexts.
Rankings (labeling in order to sort), I contend, are not only poor ways to accomplish those goals; the act of ranking itself is likely harmful to them—much in the same way SAT data have been misinterpreted for years, not because of the data but because of the urge to use those data to rank.
The urge to rank NCAA college basketball teams and then funnel all that into March Madness may in fact be a vibrant and mostly harmless way to do sport and entertainment.
But as Gerald Bracey (an active researcher and public scholar) warned over and over, ranking is mostly a harmful and flawed exercise in the world of education. And since much of education in the U.S. is publicly funded and necessarily part of the political process, rankings are often more about political agendas than about genuinely recognizing accomplishments or prompting reform.
Ranking, as I have noted about grades in education, almost always does more harm than good; thus, ranking is the worst possible process for advocating for or achieving laudable goals, especially in the context of education and scholarship.
Gerald Bracey often stressed that we must never use an assessment or data set for purposes other than the ones for which they were designed.
To clarify, and for full disclosure: I do not need the edu-scholar ranking, since I am an associate professor with tenure currently applying for full professor at my university. My university recognition and status are unlikely to be affected by the ranking, although many of us on the faculty are currently calling for greater acknowledgment of public work by professors. My reason for using myself as an example is that I have the metrics, and that a large number of junior faculty, less secure than I am, are the ones most likely being mis-served by rankings.