March 21, 2007
Just Curious About Law Review Rejection Rates
Posted by Gordon Smith

This afternoon I had a conversation with an aspiring lawprof about the relative merits of publishing in a certain specialty journal and various general law reviews. I seem to get these questions a lot, though I have no particular expertise. Then again, does anyone have a particular expertise in this? Sure, people can tell you all sorts of interesting facts about law review citations and rankings (see, e.g., here, here, and here), but if you are an aspiring academic, what you really want to know is whether your publication will catch the eye of a potential future employer/colleague.

The problem with the notion that placement is a proxy for quality is that law reviews don't have standards for publication that might distinguish articles in one journal from articles in another. Nevertheless, most of us tend to make some quality judgments based on the placement of an article. Why?

If you suspect that U.S. News must have something to do with it, you are probably right. Al Brophy has shown a high correlation between law school rankings and law review citations, and citation analysis is one method of ranking law reviews. As to the causal link between law school reputation and law review citations, Brophy writes, "as reputation increases, law reviews are able to have a greater choice of articles. And as citations increase, as faculty see articles cited more frequently, they may have increasing respect for the schools associated with them." (emphasis added)

There it is, bolded so you couldn't miss it. The answer to my question about why we make quality judgments based on the placement of an article is that highly ranked law reviews (i.e., law reviews at highly ranked law schools) have higher rejection rates. Or so we believe.

Quite apart from whether such a belief justifies the inference of quality, is it actually true that higher-ranked law reviews have higher rejection rates?

As far as I know, no one has gathered statistics on rejection rates. ExpressO ranks the Top 100 law reviews in terms of submissions through its service, but these rankings are somewhat skewed by the fact that several top law reviews do not accept ExpressO submissions. They also don't tell us how many articles were accepted at any of the law reviews.

Perhaps former law review editors can help shine some light on this question. If you have recent experience as a law review editor, please provide the following information in the comments: (1) the number of unsolicited submissions received by your law review during the editorial year, (2) the number of offers made by your law review to authors of unsolicited submissions, and (3) the number of unsolicited submissions actually published. Obviously, this is informal, but I suspect that even a few responses would be quite enlightening.

UPDATE: This is one of those things that should go without saying, but just in case. The information that I requested won't be very helpful unless we know the name of the law review and the year to which the data applies. If you could provide that, too, I would be grateful.

Links to weblogs that reference Just Curious About Law Review Rejection Rates:

» Law Review Rejection Rates from Concurring Opinions ...
"Over at the Conglomerate, Gordon Smith asks: Quite apart from whether such a belief justifies the in ..." [more] (Tracked on March 21, 2007 @ 23:28)

1. Posted by anonymous on March 22, 2007 @ 1:28 | Permalink

The rate of rejections is not necessarily a good proxy for "quality."

First, even if low-prestige journals (however defined) have rejection rates similar to those of higher-prestige journals, they may be choosing from a different pool of articles. Authors with a track record of publishing in top-tier journals are unlikely to submit articles to bottom-tier journals. Conversely, authors who don't think they have a shot at the top journals may decide to submit their articles only to lower-ranked journals. (Even with ExpressO, sending out hundreds of articles is not cheap, and not everyone has an institution to pay the costs.)

Second, even if differential submission patterns result in a smaller pool of articles for the lower-ranked journals, those journals often have fewer slots, since they tend to publish less frequently. This will drive down their acceptance rates. However, it still may be the case that the articles they are accepting would not be accepted at higher-ranked journals, given the higher average quality of (or bigger names associated with) the articles typically submitted to those journals.

Finally, lower-prestige journals may not extend offers to articles they don't think they have a chance of getting. (I don't know if this is true, but I would imagine that in some cases journals have a sense that they are just being used to facilitate expedites, and that the chances of actually keeping an article are slim.) If this is true, it would also increase their rejection rate.

2. Posted by Bonnie Shucha on March 22, 2007 @ 8:49 | Permalink

There is a reference source for this data. Joyner's Directory for Successful Publishing in Legal Periodicals lists the percentage of unsolicited manuscripts accepted and the percentage of published manuscripts solicited. It's a bit dated (1997) but should still be illustrative.

3. Posted by Gordon Smith on March 22, 2007 @ 10:07 | Permalink

Bonnie and I just looked at Joyner's together. It does not list the number of unsolicited submissions, but it gives the percentage of unsolicited manuscripts accepted. HLR is listed as accepting 1.5% and YLJ is listed as accepting 2%. Are these numbers plausible?

In volume 119, HLR published 13 unsolicited manuscripts. (There is some guesswork involved here, but I think that is pretty close to the right number. I excluded Essays and Book Reviews, which would increase that number somewhat.) If those 13 represent 1.5% of all unsolicited manuscripts, HLR is receiving only about 867 per year.

Similarly, YLJ published 12 unsolicited manuscripts in volume 115, which would imply that they had received 600 manuscripts.

Even if you double my numbers to account for Essays, Book Reviews, and errors, the numbers are ridiculously low.
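The implied-submissions arithmetic above is easy to check. A minimal sketch in Python (not part of the original comment; the figures are the ones quoted from Joyner's and counted from the two volumes):

```python
# Back-of-the-envelope check: if a journal accepts a fraction `rate`
# of its unsolicited submissions and published `published` of them,
# the implied annual pool of unsolicited submissions is published / rate.

def implied_pool(published: int, rate: float) -> float:
    """Implied number of unsolicited submissions per year."""
    return published / rate

hlr = implied_pool(13, 0.015)  # HLR, vol. 119: roughly 867 per year
ylj = implied_pool(12, 0.02)   # YLJ, vol. 115: 600 per year
```

Even doubling the published-manuscript counts only doubles the implied pools, which is the sense in which the Joyner's percentages look too low.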

4. Posted by Gordon Smith on March 22, 2007 @ 10:16 | Permalink

Anonymous,

First, why do you need to be anonymous to make comments like that? Nothing controversial in what you said.

Second, I agree that lower-ranked journals may have as many submissions as top-ranked journals. If ExpressO is a guide, the Wisconsin Law Review is getting as many as anyone. But as my post indicated, I think a lot of law professors assume without much thought that lower-ranked journals are less competitive because they receive fewer articles. So I am looking for data.

Third, I don't believe that lower-ranked journals have fewer slots. Notice my previous comment. How many general law reviews publish fewer than 12-13 unsolicited manuscripts in a year?

Finally, you might be right about a reluctance to offer certain articles, but I am not sure that happens or how anyone could account for it. It's probably worth remembering that I am not proposing to do a study here. I just want a few data points.

5. Posted by Orin Kerr on March 22, 2007 @ 10:33 | Permalink

Gordon,

FWIW, I agree with Anonymous.

I think authors believe (correctly) that lower-ranked journals are more likely to extend an offer on a given submission than higher-ranked ones; however, I haven't heard this explained in terms of differences in rejection rates.

6. Posted by Gordon Smith on March 22, 2007 @ 10:47 | Permalink

Orin, Just to be clear, I think anonymous may be right, too. While most people don't talk explicitly in terms of rejection rates, I often hear people talk about submitting to the Top 20 or Top 40 or Top 100 law reviews (implying that the top law reviews see everything). And I often hear people reference the "fact" that the top law reviews have more submissions than lower-ranked journals.

7. Posted by J.W. Verret on March 22, 2007 @ 11:30 | Permalink

I think the debate is very interesting. However, as an aspiring academic I am more interested in the perception of law reviews than in the various methods for assessing quality. Why doesn't someone survey law professors, with one ranking for specialty journals and one for general law reviews? In truth, when deciding between journals (and thanks to the Conglomerate's advice, I have received more offers than I deserve), at this point I only really care about how that citation will be perceived on my cv when I go on the market.

8. Posted by Orin Kerr on March 22, 2007 @ 11:33 | Permalink

Got it.

Just to add one data point, before I started teaching I submitted an article but chose not to send it to some of the top journals. I figured that my chances at Harvard, Yale, etc. were vanishingly low, and that on balance it wasn't worth the $5-$10 of printing and mailing costs that I had to foot. It's a different story when submissions are free, either because they're electronic or because a university pays the printing and mailing costs.

9. Posted by Vic on March 23, 2007 @ 9:17 | Permalink

If it's prestige we're after, shouldn't we also look at yield rates? I.e., how often does a journal's offer on an unsolicited submission get accepted?

Of course, some better mid-range journals might be more aggressive about going after good articles, which would lead to higher quality but lower yield rates (since top journals will go after those same articles). Still, might be interesting to compare Tax L Rev or J Corp Law's yield rate against some general law reviews.

10. Posted by James Grimmelmann on March 23, 2007 @ 12:17 | Permalink

Acceptance rates at different law reviews may not be the most useful statistic here. Publication in a top-N journal can still be an accurate signal of quality even if top-N journal editors have no special expertise. Consider the following model:

There are 2N journals, of which N have High prestige and N have Low prestige. An author submits a paper simultaneously to all. Each paper is either Good or Bad. Each journal accepts a paper of quality i \in {G,B} with probability p_i, where p_G > p_B. The author prefers to publish at a High prestige journal, but will publish at a Low prestige journal if that is the only option.

Given this model, a paper of quality i has probability 1 - (1 - p_i)^N of being accepted by (and hence appearing in) at least one High prestige journal. Call this probability X_i; it recurs. The probability of being accepted by at least one Low prestige journal, but not by any High prestige journal, is X_i - X_i^2. Assuming equal-size pools of M each Good and Bad papers initially submitted, MX_G Good papers and MX_B Bad papers will be published by High prestige journals, and M(X_G - X_G^2) Good and M(X_B - X_B^2) Bad papers will be published in Low prestige journals. A paper in a High prestige journal therefore has probability X_G / (X_G + X_B) of being Good; a paper in a Low prestige journal has probability (X_G - X_G^2) / (X_G - X_G^2 + X_B - X_B^2) of being Good.

Let's try some numbers. For N = 100, p_G = .02, and p_B = .005, I get X_G = .867 and X_B = .394. That means that about 70% of papers in High prestige journals are Good, but only about 30% of papers in Low prestige journals are. This signal therefore makes it rational for authors to choose to publish with High prestige journals, which in turn perpetuates their rankings. Again, note that there is absolutely no difference in the editorial process at different journals in this model: only a (weak) ability for editors in general to distinguish good work from bad, and a known differentiation of journals based on prestige.
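As a check on the arithmetic (not part of the original comment), the model's numbers can be reproduced in a few lines of Python, using the parameter values assumed above:

```python
# Model from the comment above: 2N journals (N High prestige, N Low),
# where a paper of quality i is accepted by any given journal with
# probability p_i, independently across journals.

def publication_odds(N: int, p_G: float, p_B: float):
    # X_i = 1 - (1 - p_i)^N: accepted by at least one High prestige journal
    X_G = 1 - (1 - p_G) ** N
    X_B = 1 - (1 - p_B) ** N
    # Accepted by some Low journal but by no High journal: X_i - X_i^2
    good_in_high = X_G / (X_G + X_B)
    good_in_low = (X_G - X_G**2) / ((X_G - X_G**2) + (X_B - X_B**2))
    return X_G, X_B, good_in_high, good_in_low

X_G, X_B, high, low = publication_odds(100, 0.02, 0.005)
# X_G is about 0.867 and X_B about 0.394, so roughly 69% of papers in
# High prestige journals, and roughly 33% in Low prestige journals, are Good.
```

The equal-size pools of M Good and M Bad papers cancel out of both fractions, which is why M never appears in the code.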
