The question in the title does not refer to any of my own papers; rather, I want to *answer* the question from the perspective of an editor. Here, roughly, is how the sausage is made (this is a medium-case scenario; your mileage may vary). Keep in mind that this is a journal with relatively good standards (for number theorists, we are talking somewhere between JNT and Duke).
Day 0: After carefully selecting a suitable journal and performing a final check on your paper for typographical errors, you submit your precious baby to the whims of fate.
Day 20(?): The paper works its way through the editorial system and is assigned to me as an editor.
Day 40: I have had a chance to take a look at the paper and determine whether it is obviously rubbish or not. Moreover, I have identified someone (usually at the level of professor) whom I trust to give an honest opinion of both how interesting the paper is and whether it is suitable for the journal in question. I email that person asking for a quick opinion and any suggestions they may have for possible reviewers.
Day 60: I email the expert again because they have not yet responded to my original request. Often, at this point, the expert will say that they are not qualified to give an opinion, and I return to the previous step.
Day 80: The expert has usually found time to respond, often to suggest another expert to consult (go back two spaces).
Day 100: I have a response from the expert. If they are only lukewarm, I reject the paper. So far, 80% of papers have now been rejected. Measured by the “standards of the industry,” I think that rejecting papers within about 3 months is acceptable to good. If the expert is enthusiastic, they either agree to referee the paper themselves or suggest someone else (often someone younger) to do the job. I then send out a detailed review request, either to the person suggested by the expert or to someone else.
Day 120: The first reviewer declines for one of the standard reasons (busy / not qualified / lazy, and so makes up something about not liking commercial publishers), so I email someone else.
Day 130: They agree to review! I give them three months.
Day 230: I email the reviewer to follow up on my previous email. They start reviewing the paper.
Day 250: The paper is accepted. 25% of the time, the comments consist of minor typographical remarks. 50% of the time, there are a few requests for clarification, references, and corrections of minor inaccuracies. 25% of the time, there are substantial comments and corrections. In the majority of cases, the referees do a conscientious job (some papers don’t need many corrections!).
Some General Remarks:
- Of all the papers I have edited, a small number (at most 2 or 3) have ultimately been rejected because of a fatal mathematical error (i.e., the paper would have been accepted if it had turned out to be correct). In all of those cases, I was the one who found the error.
- I end up rejecting quite a few papers because there is a fixed number of pages I can accept per year. I would anticipate doubling the number of acceptances if there were no such constraint.
- Sometimes papers do fall through the cracks. It can be very hard to find a reviewer for a very technical paper, especially one that builds on previous technical work of the author. Can a paper be rejected on the grounds that no one could be found to review it? I honestly think we may be heading in that direction.
- The main task of the editor is not summary judgement, but administration. It’s not enough to email someone (say, a reviewer) and then consider one’s job done; you have to keep track of when you emailed them, so you know when to email them again (or someone else) if (or frequently when) they don’t respond. (I admit, I’m by no means perfect as a reviewer, either.)
- Any online system set up to coordinate and facilitate communication with authors/editors is more annoying than useful; I work off the grid as much as possible.
For all the various discussions of the future of mathematics journals, the one thing that I personally feel is completely broken at present is the refereeing system. I don’t have any problem with the final outcomes, but it just takes way too long. I have no idea how to fix this, since it’s all anonymous (so incentivizing is hard) and community standards for what constitutes a timely response to email are positively tortoise-like. There’s no reason the first 130 days of the above timeline couldn’t be compressed into less than 30, since the cumulative amount of time the experts actually spend in that stage is measured in minutes, not hours.
Surely most people, including me, will agree wholeheartedly with Nathan. What is surprising is that no alternate models are being tried (are they? I don’t follow these things too closely.)
For example, why don’t we try to pay referees? Many people seem to be against this, and there are indeed obvious pitfalls, but we could try it – I personally find it hard to imagine that it could be strictly worse than the current situation.
Could it be that there are simply too many papers, many of which are neither well written nor very interesting?
One strategy would be to have the speed at which your own papers are reviewed be linked to the speed at which you review papers (the market-driven approach). For example, by reviewing a paper quickly you can be awarded some points (contingent on the quality of the review, I guess), which you can then trade in at participating journals for faster reviewing of your own papers. Perhaps this could be combined with some financial component as well. On the other hand, this doesn’t sound so fun for editors.
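(To make the proposal slightly more concrete, here is a minimal sketch of what such a point ledger might look like. Everything in it is invented for illustration: the class name `KarmaLedger`, the point values, and the fast-track cost are assumptions, not features of any existing system.)

```python
from dataclasses import dataclass, field

# Hypothetical point values -- invented purely for illustration.
POINTS_FOR_PROMPT_REVIEW = 3   # report returned within the agreed deadline
POINTS_FOR_LATE_REVIEW = 1     # report returned, but late
FAST_TRACK_COST = 5            # points to request a faster review of your own paper


@dataclass
class KarmaLedger:
    """Toy ledger tracking refereeing 'karma' per person."""
    balances: dict = field(default_factory=dict)

    def award(self, referee: str, on_time: bool, quality_ok: bool = True) -> int:
        """Credit a referee once a report is in.

        Points are contingent on the editor judging the report acceptable
        ('contingent on the quality of the review').
        """
        if not quality_ok:
            return 0
        points = POINTS_FOR_PROMPT_REVIEW if on_time else POINTS_FOR_LATE_REVIEW
        self.balances[referee] = self.balances.get(referee, 0) + points
        return points

    def redeem_fast_track(self, author: str) -> bool:
        """Spend points at a participating journal for a faster review slot."""
        if self.balances.get(author, 0) >= FAST_TRACK_COST:
            self.balances[author] -= FAST_TRACK_COST
            return True
        return False


# Example: two prompt reports earn 6 points, enough for one fast-track request.
ledger = KarmaLedger()
ledger.award("referee_a", on_time=True)
ledger.award("referee_a", on_time=True)
print(ledger.redeem_fast_track("referee_a"))  # True
```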
In your experience, do papers that are well written and/or very interesting take less time to grind their way through the system?
In the abstract, I like the idea of a karma-driven system like the one you propose, but I’m not sure how it would work in practice. Suppose someone submitted a typical paper together with enough points to get them a three-month review. Could you arrange that? I guess whatever website is keeping track of the points would also know everyone who has a paper submitted to one of the participating journals, which might constitute a pool of more eager referees.
It’s not entirely clear. But I will say my best responses have come when I have found a referee who has looked at the paper of their own accord before I sent it to them. On the flip side, how often are you *excited* to be asked to review a paper?
Certainly the karma system is more of a fantasy than a realistic proposal.
It’s certainly true that excitement is not usually my reaction when asked to referee a paper; perhaps one request in every five or even ten. The exciting ones are when I’ve seen the paper on the arXiv and have been meaning to look at it but of course haven’t found the time. I’m certainly much faster in this case, and I often say no to unexciting requests just because I know that I wouldn’t get around to it in a timely manner. Roughly, I try to cap the number of non-exciting papers I referee in detail to 2-3 times the number I personally inflict upon the world…
Thanks, this is a great post!
I, for one, think that papers that are better written are easier to referee and thus get reviewed somewhat more quickly. Part of the reason is that sometimes you just have to sit around and try to figure out what the author is even trying to do! At least, that’s a problem I face, especially with more technical papers.
It makes me wonder the following: would the system be better if it were easier for the referee to ask questions of the author? Or is this a terrible idea?
I think it would be nice as a reviewer to be able to anonymously ask questions of the author. This would be pretty easy to set up from a technical perspective.
It also might encourage reviewers to procrastinate less: if you want to take advantage of this option, then you need to ask the author before the last minute…
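(For what it’s worth, here is a rough sketch of what such an anonymized question channel might look like, assuming the journal’s system sits in the middle and strips identities. The names `AnonymousRelay`, `ask`, and `answer` are made up for this sketch; nothing like it is implied to exist at any journal.)

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Question:
    """A referee's question, stored without any identifying information."""
    paper_id: str
    text: str
    answer: Optional[str] = None


class AnonymousRelay:
    """Toy relay: the journal system forwards questions to the author
    without revealing who the referee is."""

    def __init__(self) -> None:
        self._threads: Dict[str, List[Question]] = {}

    def ask(self, paper_id: str, question_text: str) -> int:
        """Called by the referee; only the paper id and the text are kept."""
        thread = self._threads.setdefault(paper_id, [])
        thread.append(Question(paper_id, question_text))
        return len(thread) - 1  # index the author answers against

    def answer(self, paper_id: str, index: int, answer_text: str) -> None:
        """Called by the author; the reply is routed back the same way."""
        self._threads[paper_id][index].answer = answer_text

    def thread(self, paper_id: str) -> List[Question]:
        """Both sides (and the editor) can read the accumulated exchange."""
        return self._threads.get(paper_id, [])


# Example exchange on a hypothetical submission.
relay = AnonymousRelay()
i = relay.ask("submission-1234", "In Lemma 3.2, is the constant allowed to depend on n?")
relay.answer("submission-1234", i, "No; see the remark after (3.5). We will clarify this.")
```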
I think that people should be given some credit for refereeing for journals, reviewing for the AMS, and so on. Universities totally ignore this work, and hence we get these kinds of outcomes. University administrations must wake up to the fact that SCI articles are not the only contributions; they are simply the ones that are easy to quantify. So we need some way of quantifying refereeing, organizing conferences, and helping other mathematicians, graduate students, and so on.
One issue is distinguishing between a “good” and a “bad” refereeing job. Perhaps we should have referee reports refereed!
Anyway, how can we tell whether an article is good or not, even once it is published? I said quantification, not counting. The whole mess we are in is that we essentially only “count.” Tools such as the SCI are not a good idea. On the other hand, there are awful professors. (But also many Korean businessmen trying to turn the academic world into a profitable company.)
Why do you only write to *one* expert at the start? I always write to at least two (and usually three). That way, if some do not respond, I will likely get at least one response (preferably at least two). It isn’t foolproof, but it usually avoids 100 days passing before some quick opinions roll in.
Dear BC, it’s one thing to do this for JAMS, but did you also do this for JNT?
Dear Persiflage: You spoke of “between JNT and Duke”, so I thought you were referring to work as an editor at a journal with higher selectivity than JNT. (I don’t know at what journals you are serving as an editor these days.)
That being said, when I was at JNT I had a different way to avoid the 100-day delay: I never used the “preliminary review” step (it had never occurred to me to try). I would make the decision myself with each submission (or pass it to another editor if I didn’t think I could decide). But if JNT has raised its acceptance standards since those days then that method might no longer be feasible. Sorry!
Dear BC,
For JNT, I tend to ask people directly to review the paper. Asking more people does make sense, although I have previously been annoyed to have written a full, detailed Duke referee report only to find that other people had also refereed the paper (with a certain amount of duplicated effort). Even so, the greater issue (for me) lies in the second half of the process.
Ultimately, the difficulty seems to stem from a lack of incentives. Why should someone bother to make the effort to do a decent job of reviewing?
Dear Persiflage: I’m trying to reply to your reply to my reply but can’t see how to do it, so this reply might appear in the wrong place, but here goes: the only incentive I have ever known for doing a good job refereeing a paper is that one finds the results interesting, and so one might help others appreciate the work by contributing improvements: correcting mistakes and getting unclear parts clarified. There’s also the issue of tit-for-tat, but that could be perceived as too idealistic for this world.
Dear BC,
There’s a maximum thread length, thus the difficulty. What’s even worse is that good referees get “rewarded” by being asked to do more refereeing. That’s why I try to use my top guys sparingly.
One issue is that many papers are simply *not* very interesting, and someone still needs to review those.
A different approach to speeding things up would be to have a hierarchy of journals controlled by a single editorial board. You would submit your paper to the editorial board; they would get both quick opinions and a full report and then decide which journal it was worthy of appearing in. The range of quality of the constituent journals would need to be broad enough that you are basically guaranteed acceptance in one of them if the paper is correct, but the top one should also be pretty fancy (Duke level, say), with strict limits on how many papers it publishes per year. One could even copy the CS folks and have the editor, the quick opinionators, and the referee engage in an online discussion about placement after all the data is in…
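(A toy sketch of the placement step, just to fix ideas. The tier names, the interest thresholds, and the annual quota below are all invented; in practice the decision would presumably come out of the online discussion rather than a formula.)

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stable of journals, fanciest first; names and quota are invented.
TIERS = ["Top Journal", "Middle Journal", "Catch-all Journal"]
TOP_TIER_ANNUAL_QUOTA = 40  # strict limit on papers per year in the top journal


@dataclass
class Evaluation:
    correct: bool  # did the full report find the paper correct?
    interest: int  # consensus interest score from the quick opinions, 0-10


def place(evaluation: Evaluation, top_tier_accepted_so_far: int) -> Optional[str]:
    """Decide where in the hierarchy a paper lands.

    A correct paper is guaranteed a home somewhere in the stable; only
    sufficiently interesting papers go to the top journal, and only while
    its yearly quota lasts.
    """
    if not evaluation.correct:
        return None  # the one way to be rejected outright
    if evaluation.interest >= 8 and top_tier_accepted_so_far < TOP_TIER_ANNUAL_QUOTA:
        return TIERS[0]
    if evaluation.interest >= 4:
        return TIERS[1]
    return TIERS[2]


print(place(Evaluation(correct=True, interest=9), top_tier_accepted_so_far=12))  # Top Journal
print(place(Evaluation(correct=True, interest=2), top_tier_accepted_so_far=12))  # Catch-all Journal
```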
If they added BYCAMS (boring yet correct) to their stable of JAMS, TAMS, and PAMS, the AMS could try this pretty easily…
I agree. There are too many journals, controlled in strange ways, with subject matter and standards depending on editorial preferences that one cannot guess. There should be some consolidation into blocks, as the physicists have done with the Physical Review journals. Also, mathematics journals print very small numbers of articles, and mostly original-idea articles; we should have more report-type articles as well. These are also very valuable, since they provide confirmation and applications of ideas.
Another incentive to referee a paper is this: to promote your field. If the paper works on a topic you like, or uses methods you like (or cites papers you like!), then its publication makes your field more active and prominent.