I admit that I don't tear into every paper reverse engineering the USNWR rankings on principle, but this paper by Robert L. Jones (hat tip: Tax Prof Blog) seemed worth the read. (The paper is a shorter essay updating a longitudinal research paper from two years ago that I missed.) Both papers examine the ever-so-important "academic reputation" scores in the USNWR rankings that drive every law school to strive to improve its academic reputation. These scores come from surveys sent to every law school dean, academic associate dean, chair of appointments, and newly tenured faculty member; each respondent rates each school from 1 to 5. If you've ever set foot in a law school as a faculty member, you have been in a conversation about how to boost this score. Splashy new hires? Increasing scholarship? Increasing the visibility of scholarship? Conferences? And you will inevitably hear someone say that academic reputation scores are "sticky" -- they do not seem to move quickly or substantially, no matter what schools do.

This paper answers the "why" behind that stickiness and concludes that the scores are not merely sticky -- they are intentionally deflated. First, here's Jones' graph showing that the average academic reputation score has declined since 1998, with a particularly pronounced trend since the disruption to the legal market during the financial crisis, despite all the investments schools have made in expanding scholarship and hiring. Moreover, these scores have declined while judge/lawyer reputation scores have increased.
Why the decline, then? Jones contends that because of the importance of the rankings and the competition they engender, voters act strategically by deflating the scores of competitor schools. The more important the rankings are, the more strategic the voting: no voter has an incentive to give high reputation scores, but every voter has a real incentive to give low ones. Therefore, Jones concludes, the academic reputation scores are worthless. (I could see an argument that if all voters systematically gave lower scores to everyone, then the scores would remain valid as a ranking, much like a grading curve with a 2.7 mean. But we don't know how systematic the strategic voting is. One could imagine that competitor schools over-punish schools that make large, visible investments in academic quality and ignore schools that are not seen as threats.) I have just begun to form thoughts on this theory, but if it's true, it calls into question many firmly held beliefs about expensive practices that are thought to "boost the rankings."