U.S. News & World Report released its annual college rankings on Tuesday, Sept. 12. University presidents, administrators, and deans waited with bated breath for the release, as a school’s ranking affects everything from its application numbers to its president’s bonus. But perhaps the weight that prospective students and universities alike invest in these numbers is unjustified.
You need look no further than the first metric to question the methodology: undergraduate academic reputation, which counts for 22.5 percent of a school’s ranking and is determined by how presidents, provosts, and deans of admissions at other schools evaluate their peers. According to U.S. News, the purpose of this metric is “to account for intangibles at peer institutions, such as faculty dedication to teaching.” Though I haven’t surveyed top academics myself, I would speculate that the presidents of most universities could not tell you much about how dedicated the faculty at other institutions are to teaching. Even if President DeGioia had a relatively thorough sense of how he would evaluate peer and competitor schools such as Notre Dame, Duke, Stanford, or Harvard, he would be hard pressed to tell you much about the faculty at Brigham Young University (ranked No. 66), nearly 2,000 miles away in Provo, Utah.
Academic reputation is a dangerous metric because presidents, provosts, and deans of admissions are no more immune than the rest of us to the influence rankings have on perceptions of universities. Reputation is therefore self-perpetuating. Harvard is consistently ranked among the top three national universities, which leads people to see it as a top-three university, which leads academics to evaluate it as a top-three school, which keeps it a top-three school. A school’s academic reputation and its ranking are inseparable.
While we’re on the subject of self-perpetuating metrics, student selectivity (12.5 percent) also feeds off rankings. According to a report by the American Educational Research Association, making the top 25 of the U.S. News rankings is associated with a 6 to 10 percent increase in applications. And, of course, more applications mean the percentage of applicants admitted falls and the school becomes more selective.
But the point is that, despite U.S. News’ claim that “we do our utmost to make sure [our rankings] are [objective],” they are not. Rankings are not objective. They are based on a subjective list of metrics that people in a small office on Thomas Jefferson St. in Georgetown decided were what make the best colleges. It would be just as easy to alter the percentage weightings or the list of factors, and then you would have a completely different list. For example, U.S. News ranks Georgetown at No. 20, but if you decide that the most important factors are the percentage of students receiving aid and the average aid package, as Money’s Best College Rankings does, then Georgetown is not a top-50 college; it’s No. 84. If, on the other hand, you decide that median earnings relative to expected earnings matter most, then you, like Georgetown’s Center on Education and the Workforce, would place Georgetown at No. 1.
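To see how easily the list changes, here is a minimal sketch in Python, using made-up schools, category scores, and weights rather than U.S. News’ actual data or formula: the same inputs produce opposite orderings once the weights shift.

```python
# Hypothetical illustration: the schools, scores, and weights below are invented
# for this example; they are not U.S. News' actual data or methodology.
scores = {
    "School A": {"reputation": 95, "selectivity": 90, "financial_aid": 60},
    "School B": {"reputation": 75, "selectivity": 70, "financial_aid": 95},
}

def rank(weights):
    """Order schools by a weighted sum of their category scores."""
    totals = {
        school: sum(weights[cat] * val for cat, val in cats.items())
        for school, cats in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Weighting reputation and selectivity heavily puts School A on top...
print(rank({"reputation": 0.5, "selectivity": 0.4, "financial_aid": 0.1}))
# ...while weighting financial aid heavily flips the order.
print(rank({"reputation": 0.1, "selectivity": 0.1, "financial_aid": 0.8}))
```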
Because U.S. News has established itself as the authority in college rankings, and because rankings affect the applicant pool, public perception, and alumni giving, colleges have begun to game them. Northeastern University President Richard Freeland led the charge in gaming the rankings. In a Boston Magazine article, Freeland admitted, “You get credit for the number of classes you have under 20 [students], so we lowered our caps on a lot of our classes to 19 just to make sure.” Freeland also began accepting the Common Application to increase the number of applicants, in order to turn more away and boost selectivity. Now, you could argue that a school with smaller classes and a higher-quality undergraduate population is a better university, but you could also argue that the difference between 19 students and 21 students in a class is negligible. You could argue that the difference between a class with an average SAT score of 2100 and one with an average of 2050 is also negligible. And yet, it’s these small differences that make all the difference in rankings.
Remember, too, that rankings, like colleges, are a business. Rankings are as much about drumming up hype and traffic as they are about providing a service. A list with Harvard, Princeton, and Yale at the top is boring because it’s expected. When you look up Money’s rankings on Google, the excerpt reads, “Find out which schools beat out Harvard for the top spots,” because that’s what makes people click. They want to know which colleges beat out the one with the most esteemed reputation. It’s not difficult to imagine that many rankings are created with the press-release pitch in mind.
Rankings are ultimately just a dangerously imperfect attempt to compare very different schools by reducing them to numbers. They let you set Georgetown next to Ohio State so you can talk about them in the same sentence. But maybe you shouldn’t. Georgetown is a medium-sized Jesuit school in D.C. with a dismal football team; Ohio State is a huge state school that bleeds school spirit and boasts a powerhouse football team. How can you evaluate these schools in the same terms when each serves a very different purpose? You can’t, and yet that’s what rankings aim to do.
Each year, U.S. News, Money, and the Princeton Review, among other publications, will continue to publish these rankings, but each is just its own subjective measure of what defines a good college. If you are a prospective freshman applying to college, remember that you can just as easily create your own metrics based on what you need and are looking for in a school. And if you’re a current student or an alum, remember that your college’s ranking is separable from your worth as an individual, from your potential, and from your intelligence. Though preconceptions about the college you attended may follow you, they will not define you.