Sonia Cardenas is the vice president for academic affairs and dean of faculty at Trinity College in Hartford, Connecticut.
It’s college rankings season, when institutions across the country are sorted into lists that purportedly help prospective students choose a college. These rankings are often seen as a proxy for academic quality. In practice, they can be unreliable.
I’m one of more than 4,000 college administrators who are asked to complete a key component of the U.S. News & World Report’s ranking — the peer reputation survey. Peer reputation is one of the most heavily weighted variables in the ranking’s ever-changing algorithm, making up a whopping 20% of a college’s overall score.
It’s worth noting that only about a third of those who receive the peer reputation survey complete it. Perhaps this is why U.S. News uses a two-year weighted rolling average of the responses. To place this peer assessment in context, here is a closer look at the mechanics of completing the survey.
The peer reputation survey asks respondents — such as presidents, provosts and deans of admissions — to rate the academic quality of hundreds of colleges nationwide. In the case of a liberal arts college, where I work, you’re presented with a list of more than 200 other liberal arts institutions. When rating each college, you’re supposed to consider the quality of its curriculum, faculty and graduates.
The task is to rate each college on a five-point scale from “outstanding” to “marginal.” Respondents who are not familiar with an institution can respond with "don't know" or can leave the scale blank. If you’re completing the survey honestly, a blank space will be your response for most colleges.
Judging the quality of an institution at which I’ve never worked, studied or conducted an accreditation review is like asking a culinary expert to rate the signature dishes at hundreds of restaurants where they’ve never eaten. No person — no matter how informed they are — has sufficient knowledge of the academic quality of all other colleges in the country to rate them responsibly on a five-point scale. That’s absurd.
Nor can one entirely discount that a degree of institutional self-interest may lead an administrator to score a competitor less generously. This is an anonymous survey. And for better or worse, the results can have real-world impact on enrollment and tuition revenue.
What the peer reputation survey measures most closely is brand recognition. This helps explain why statistical studies have found that the top predictor of a college’s ranking is its ranking the previous year. This should come as no big surprise.
It doesn’t take a social scientist to see serious methodological flaws in the survey. Any ranking — and certainly one carrying considerable weight in public opinion and family investment — should aspire to be objective. Yet the U.S. News ranking bases one-fifth of every college’s score on the subjective views of college administrators who complete the survey.
The academic quality of the colleges and universities at the top of U.S. News’ lists is undoubtedly high. But are those institutions objectively the best? In the words of Princeton President Christopher Eisgruber, labeling a university best is at the very least “bizarre.”
It’s true that some factors U.S. News considers in its rankings affect the quality of a college's academic offerings. Large differences in student-to-faculty ratio or the percentage of full-time versus part-time faculty are important. What you find, however, is that colleges ranked very differently overall can still score similarly on these individual criteria. For instance, universities currently ranked as 10th, 30th and 105th all have a student-to-faculty ratio of 8:1.
It's high time that the U.S. News ranking eliminated its peer reputation survey, in which participation continues to decline from already low response rates.
The publication would better serve students and their families by presenting objective quantitative indicators for each institution, instead of subjective survey responses — effectively, a sorting mechanism more than a rankings table.
Let prospective students select the factors that matter to them and see where colleges fall in relation to one another. This is consistent with a 2023 study on student views of college rankings, which finds that college-bound students are more interested in where an institution falls overall, such as whether it is ranked in the top 50 national liberal arts colleges (its “neighborhood”), than its exact numerical ranking (or “street number”).
So long as peer reputation is used to rate colleges, students and families are at risk of conflating overall rankings with academic quality. The fact is that 20% of an institution’s ranking is based on a survey that barely makes sense to those who complete it. It’s also an approach that’s scientifically unreliable and publicly irresponsible.
If you want to understand a college’s academic quality, dig into the data and don’t forget to listen when students and graduates speak of their experiences.