Many thanks to Gordon, Christine and Vic for inviting me to guest blog at Conglomerate. I really appreciate the opportunity. Without further ado, this post argues that SSRN downloads may be a much better proxy for faculty productivity and performance than most people realize. Let me get a couple of ideas onto the table.
Bernie Black and Paul Caron's Ranking Law Schools: Using SSRN to Measure Scholarly Performance systematically compares SSRN download data with three other measures of faculty productivity: (1) citation counts, (2) quantity and placement of faculty scholarship, and (3) reputational surveys. Despite “blogger bias” and other factors that can skew SSRN downloads in ways unrelated to merit (however one defines it), Black and Caron show that SSRN downloads are nonetheless highly correlated with the traditional measures of faculty performance.
In a short commentary on the Black & Caron study, Ted Eisenberg acknowledges the innovation and efficiency of SSRN’s system. But he also points out that SSRN downloads have lower intercorrelations with the reputation, publication, and citation metrics when the cohort is limited to Top 10 to Top 20 law schools—i.e., peer institutions that actually “compete for students and faculty.” Eisenberg therefore contends that SSRN downloads are missing key information that is captured by the other, more established measures (e.g., citation counts use Westlaw’s TP-ALL, which has broader subject matter coverage).
Here is a comparison of intercorrelations drawn from one of Eisenberg’s key tables.
| Measure of Scholarly Performance | Correlation with SSRN Downloads | Correlation with Leiter Reputation |
|---|---|---|
| SSRN Downloads | 1.00 | 0.55* |
| Leiter Reputation | 0.55* | 1.00 |
| US News Reputation | 0.51* | 0.96* |
| Lindgren & Seltzer Top 20 Journals | 0.69* | 0.83* |
| Leiter Top 10 Journals + Books | 0.45* | 0.78* |
| Eisenberg & Wells Citation Count | 0.50* | 0.90* |
| Leiter Citation Count | 0.61* | 0.88* |

\* p ≤ .05
The SSRN correlation coefficients are clearly lower than those for the Leiter reputation ranking, and the other non-SSRN intercorrelations are similarly high (see Eisenberg, Table 1C).
So Eisenberg may be right: SSRN may be missing key information. On the other hand, it is also possible that the other measures of faculty performance—reputation, publications, and citation counts—all share an irrelevant confound that inflates their intercorrelations.
Consider the following causal chain:
- Faculty reputation is a “sticky” variable. The first US News ranking in 1987 (limited to the Top 20) was based on a survey of 183 law school deans. Eighteen years later, the same 16 schools are still the Top 16. They’ve played musical chairs a bit, but the inner circle has stayed the same.
- Law review editors are relatively risk averse and, all else equal, will favor professors from more prestigious law schools. This is the well-known letterhead bias. And, as just noted, the pecking order hasn’t changed too much over time.
- Some law professors, especially those trying to write their way up the hierarchy, are reluctant to cite articles in less prestigious journals. If professors at Top 20 (30, 40?) schools publish in Top 20 (30, 40?) journals, then many professors will, consciously or unconsciously, favor citations to those same journals.
If such a prestige bias exists (#1 affects #2, #2 affects #3, and #3 reinforces #1), then the non-SSRN intercorrelations are destined to be inflated, as the small simulation below illustrates.
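To make the confound argument concrete, here is a minimal simulation sketch in Python. The coefficients and noise levels are invented for illustration and are not Eisenberg’s or Black & Caron’s data; the only point is that when reputation, placements, and citation counts all load on a shared prestige factor, while downloads track only underlying quality, the conventional measures will intercorrelate more highly with one another than any of them does with downloads.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # hypothetical number of schools

# Latent scholarly quality, plus a shared "prestige" confound
quality = rng.normal(size=n)
prestige = rng.normal(size=n)

# Conventional measures: each picks up quality AND the shared prestige factor
reputation = quality + 1.5 * prestige + rng.normal(scale=0.5, size=n)
placements = quality + 1.5 * prestige + rng.normal(scale=0.5, size=n)
citations = quality + 1.5 * prestige + rng.normal(scale=0.5, size=n)

# SSRN downloads: assumed here to track quality with little prestige loading
downloads = quality + rng.normal(scale=0.5, size=n)

print("reputation vs. citations:", round(np.corrcoef(reputation, citations)[0, 1], 2))
print("downloads vs. citations: ", round(np.corrcoef(downloads, citations)[0, 1], 2))
```

On these made-up numbers, the conventional measures should intercorrelate at roughly 0.9 while downloads correlate with citations at roughly 0.5, mirroring the pattern in the table above even though downloads track “true” quality just as faithfully.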
In contrast, SSRN downloads may not suffer from the same prestige bias, or at least not to the same degree. If the abstract makes an intelligent pitch, the article is in your field, and it has not yet been published by a law review (i.e., there is uncertainty as to its substantive or signaling quality), there is a strong incentive to download it. Timeliness, subject matter relevance, and anonymity encourage scholars to read the work of others at less prestigious schools in order to remain competitive.
In such an environment, SSRN downloads can be a good way to identify underplaced scholars. Further, law schools with SSRN downloads far in excess of their US News rankings (see Black & Caron, table 15) may have figured out markers for productive scholars that more elite schools tend to ignore. And all the data is there, right before our eyes.
In my next post, I will tie this topic into Paul Caron and Rafael Gely’s insightful Moneyball analysis.