April 09, 2004
Embracing the Rankings? Better Law Schools Through Statistics
Posted by Gordon Smith

Many bytes in the blogosphere have been dedicated to the recently released U.S. News law school rankings. Now, prepare yourself for something completely different: Paul Caron and Rafael Gely of the University of Cincinnati College of Law are about to publish an article entitled "What Law Schools Can Learn from Billy Beane and the Oakland Athletics." Using Michael Lewis' Moneyball as a springboard, the authors hope to "spur other attempts to embrace the market demand for greater accountability and transparency in legal education through more refined measures of organization success and individual contributions to that success."

What do law school rankings have to do with baseball? It's pretty straightforward, really. Start with baseball: if the market for baseball players were efficient, large-market teams would dominate small-market teams. Well, of course, this does happen to some extent. But how does one explain the Oakland A's? According to Lewis, the success of the Oakland A's is due to Billy Beane's superior management, which was based largely on Beane's willingness to evaluate talent in new ways. Rather than judging players subjectively, Beane used a statistical approach, embracing modern measures of player performance. Instead of looking at hitters in terms of batting average, home runs, and runs batted in (RBIs), Beane calculated on-base percentage and slugging percentage. Rather than judging pitchers based on wins and earned run average, Beane focused on statistics that were mostly within the pitcher's control (walks, home runs, and strikeouts). The result for the A's has been astonishing.
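For readers who do not follow baseball statistics, here is a rough, back-of-the-envelope sketch (in Python, with invented season totals for two hypothetical hitters) of how on-base percentage and slugging percentage are computed and why a hitter with a lower batting average can nonetheless be the more productive one. The numbers are mine, not Lewis's or Beane's.

```python
# Standard formulas:
#   batting average     = hits / at-bats
#   on-base percentage  = (hits + walks + hit-by-pitch) /
#                         (at-bats + walks + hit-by-pitch + sacrifice flies)
#   slugging percentage = total bases / at-bats

def batting_average(hits, at_bats):
    return hits / at_bats

def on_base_pct(hits, walks, hbp, at_bats, sac_flies):
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slugging_pct(singles, doubles, triples, homers, at_bats):
    total_bases = singles + 2 * doubles + 3 * triples + 4 * homers
    return total_bases / at_bats

# Hypothetical hitter A: decent average, rarely walks, little power.
# Hypothetical hitter B: lower average, but walks often and hits for power.
a = dict(singles=120, doubles=20, triples=2, homers=8,
         walks=25, hbp=3, at_bats=550, sac_flies=4)
b = dict(singles=80, doubles=28, triples=2, homers=25,
         walks=80, hbp=5, at_bats=520, sac_flies=5)

for name, s in (("A", a), ("B", b)):
    hits = s["singles"] + s["doubles"] + s["triples"] + s["homers"]
    print(name,
          round(batting_average(hits, s["at_bats"]), 3),
          round(on_base_pct(hits, s["walks"], s["hbp"],
                            s["at_bats"], s["sac_flies"]), 3),
          round(slugging_pct(s["singles"], s["doubles"], s["triples"],
                             s["homers"], s["at_bats"]), 3))
```

In this made-up example, hitter B trails hitter A in batting average (.260 to .273) but leads decisively in on-base percentage (.361 to .306) and slugging (.465 to .360): exactly the sort of gap the traditional measures miss.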

The lesson that Lewis takes from Beane is that baseball players should be judged on their performance, not on their talent. Every sports fan can list players from now until sundown who have had all of the right "tools," but never lived up to their supposed potential.

Caron and Gely want to ensure that this lesson is not lost on academia. Indeed, some academics are taking note. They cite to Richard H. Thaler and Cass R. Sunstein, who wrote a review in The New Republic entitled Who's on First:

Lewis is actually speaking here of a central finding in cognitive psychology. In making judgments, people tend to use the "availability heuristic." As Daniel Kahneman and Amos Tversky have shown, people often assess the probability of an event by asking whether relevant examples are cognitively "available." ... Now, it is not exactly dumb to use the availability heuristic. Sometimes it is the best guide that we possess. Yet reliable statistical evidence will outperform the availability heuristic every time. In using data rather than professional intuitions, Beane confirmed this point.

Caron and Gely take this foundation and build upon it, applying the lessons of Billy Beane to law schools. Their paper includes a fascinating history of the market for legal education, divided between the era before U.S. News rankings and the era after. In addition, the authors review the "ranking literature," which has become surprisingly voluminous in the modern era. (By the way, when discussing alternatives to the U.S. News, the authors observe that the law professors who create alternative ranking systems (Leiter, Brennan & LeDuc, Lindgren & Seltzer, Eisenberg & Wells, etc.) inevitably rank their home schools higher than U.S. News does. This tends to make cynics of all of us, and suggests the need for independent ranking agencies.)

The punchline of this paper is that rankings can be our friend. Legal academics should face the fact that rankings are not going away and work to improve the metrics. This is the concluding paragraph of their paper:

Like Michael Lewis, we have told a story about a profession and people we love. We are proud of the work law schools and law professors do in teaching future lawyers and producing legal scholarship to the betterment of American law and society. As institutions and as individuals, we have nothing to fear from the accountability and transparency spotlight. Indeed, we do our best work in the light. We should welcome the opportunity to tell the world what we do and help them measure our performance as teachers and scholars. If we do not, the story will be told by others and it will no longer be our own.

On a descriptive level, I have a hard time resisting their conclusion about the inevitability of rankings. Promoting them under the banner of "transparency and accountability" also appeals to the modern ear. Nevertheless, legal education and baseball are different in at least one crucial respect: while baseball teams largely agree on the ultimate measure of success (winning), law schools have wildly different conceptions of a successful legal education. Alternative ranking systems may account for some of these differences, but the unique experience offered by each law school simply cannot be aggregated into a numerical ranking. This is a point mainly aimed at student use of rankings: prospective law students should be encouraged to use rankings as part of the decisionmaking process, not as the ultimate arbiter of the "best" law school.

More importantly, I am concerned that the institutionalization of rankings would lead to increased homogeneity among law schools. Convergence is a good thing when a goal is universally embraced, but convergence of law school models does not strike me as a good thing. Such convergence has already begun in the U.S. News era. Although I do not know the exact numbers, I would be interested to learn how many law schools have launched specialty programs in the following areas since the U.S. News started ranking specialties:

* Clinical Training
* Dispute Resolution
* Environmental Law
* Healthcare Law
* Intellectual Property Law
* International Law
* Tax Law
* Trial Advocacy

The answer, I suspect, is "lots!" These programs consume resources, dictate hiring priorities, and influence curriculum. In addition, the opportunity costs of the other programs forgone can be enormous. The act of measuring changes the thing being measured: this is Heisenberg's Uncertainty Principle on steroids (as long as we are speaking of baseball ...).

Because of my concerns about unhealthy convergence, I hope for the creation of multiple competing ranking systems, rather than the creation of an "official" ranking institution. Multiple rankings create information noise, but noise does not necessarily equate to a lack of accountability or transparency. Instead, it provides space for law schools to pursue different visions of legal education and forces prospective students to choose more thoughtfully.
