The bio-science journal eLife is trying out a remarkable new approach to peer review: letting authors decide how, or even whether, to revise their manuscripts in response to reviewers. The well-written announcements are here and here. I’m glad to see experimentation with new scientific publishing methods, though I’m skeptical about this one. Fundamentally, the new system wants to liberate the assessment of papers from the arbitrary judgement of small numbers of reviewers, while simultaneously maintaining, and in fact increasing, the gatekeeping power of an even smaller number of editors. I’ll describe this and other thoughts below, and give a suggestion that would (slightly) improve the proposed scheme.
eLife is a relatively new (2012) “prestige” journal — i.e. one that aims to publish high-impact papers, and that has an editorial approval stage before peer review that most papers fail to pass. In other words, unlike most journals, in which any paper that’s appropriate for the scope of the journal is sent out for peer review, at journals like eLife an editor first decides whether the submitted manuscript is good enough and flashy enough to warrant sending to reviewers. Most papers (about 70%) don’t pass the editorial stage.
Before going on, I should point out that my opinion of eLife has been steadily declining over the past few years. At least in areas of biophysics and microbiology, it seems to have a fondness for quite traditional “molecular mechanism” sorts of studies; it’s not as exciting as one would hope. Its bias towards “big name” researchers isn’t any less than that of the other fancy journals, at least in my and several others’ perceptions. Also, in my experience, the reasons offered for its editorial-stage rejections seem quite shallow, which is something particularly relevant for my skepticism about how the new publishing scheme might turn out.
The new path
In this new scheme [link1, link2], as in the standard path, a paper will go to reviewers if an editor decides that it is eLife-worthy. The main difference, strikingly, is that the journal commits at this point to publishing the paper. The reviewers then review the paper as usual, perhaps making minor suggestions, perhaps pointing out major flaws, or perhaps proposing additional experiments. (Nearly everyone who submits papers to any journal, by the way, hates it when reviewers demand additional experiments — they’re often not actually necessary, and are nearly always far more difficult than the reviewer realizes.)
A good thing about this approach is that it counteracts capricious or unreasonably demanding reviewer comments. The reviews are simply suggestions, that the authors can take or leave.
Another positive is that by publishing the reviews (which eLife already does, in fact), the reader can judge how meaningful they are. This shifts the assessment of the paper, somewhat, from a small group of people (the reviewers) to anyone reading the paper, while still supplying some guidance.
I have many qualms, however. (eLife’s blog post is quite thoughtful; some of these are “known” potential pitfalls.)
The main one is that it gives a lot of power to the editor — even more than is already the case. With editorial approval equated with acceptance, the editor, who in general doesn’t read the paper as carefully as a reviewer, is essentially the sole decision maker about publication. One might argue that the editor could carefully read the paper, but given the large number of papers submitted, and the fact that the editors have full time jobs as successful scientists, this seems unlikely. One might also argue that the authors might not want their paper published if the reviewer comments are particularly scathing, but this will probably be rare.
Given this increased power, one could easily imagine that the editors, even more than they do already, will approve papers from bigshots to mitigate the (perceived) risk of publishing something that might embarrass them later.
I wonder as well about the new system from the perspective of reviewers. Reviewing a paper takes hours of work (if one does it well), and one of the main motivations is the hope that your review will improve the quality of published research. If one’s review can be simply dismissed, why bother?
The response to this will be that the reviews are published, so the reader can see and assess the paper in light of them. That’s true, but I wonder how many people will actually read the reviews. For the dozens of eLife papers I’ve read in the past few years, I’ve read perhaps two or three of the reviews and, if anything, my average is probably higher than most people’s.
Another possibility will be that the reviews are not anonymous (which would be unusual). eLife writes “Reviewers will know that it is very likely that their comments will be published and they will have an opportunity to gain recognition for well-crafted and thoughtful advice.” It wasn’t clear whether this means reviewer names will be published; in a prompt response to a comment on the eLife blog post eLife Executive Director Mark Patterson replied that reviewers can choose whether or not to be anonymous. This is good, but I worry again whether this will bias the system towards the elites. There’s really little to be gained by revealing your name as a reviewer, and in contrast many ways to antagonize the people who will be at some point reviewing your papers and grants. I’ve de-anonymized myself for some reviews I’ve written, which in a few cases has led to nice conversations and in one to remarkably angry feedback (from, incidentally, someone at a more prestigious school). Overall, my suggestion is that reviewers keep themselves anonymous unless they are senior faculty at Stanford.
A flaw of the fancy journals is that too many of their papers are flashy, without the substance to back up the flash. In many papers, overblown claims, tenuous links to trendy topics, or (especially) cherry-picked conclusions drawn from tiny datasets wither upon careful reading. That’s understandable: it’s the flash that gets the paper past the editors, and it’s the substance that is supposed to be evaluated by the (fallible) reviewers. Presumably, there are many flashy papers that got past the editors that were thankfully killed by reviewers.
From the perspective of authors, the incentive in eLife’s new scheme is to make one’s paper and cover letter as flashy as possible, even more so than is presently the case, because getting past the editor is tantamount to publication. Obviously, this incentive isn’t a good one for science. The counter-argument is that the authors will get feedback from the reviewers, and if they ignore it, it will be there for the readers to see and judge. But, as noted above, most readers will ignore the reviews. And, moreover, the authors still get to note an eLife paper in their CVs, and most people won’t look beyond that.
One simple change to the proposed process that would help address some of these issues is if the author names and affiliations were hidden from the editor (i.e. blind submission). Of course, it’s often easy to guess who authors are — one would likely guess that a paper on live imaging of gut microbes comes from my lab, since no one else does it, but this is an extreme case. (And moreover other labs could certainly start, and provide some competition!) This would help counter the important concern that the increased editorial power would benefit well-connected and already-famous labs. Of course, it does nothing about the problems of reviewer motivations and author incentives.
The real problem
Fundamentally, the problem with the proposed approach arises from the cake that eLife wants to both have and eat: championing reader review while still being a high-prestige journal. They write, surprisingly: “Rather than the journal name being used as a proxy for the possible quality of an article, the journal becomes a venue for the critical and transparent evaluation of work that is judged to be making important claims for a field.”
There’s a viewpoint (espoused e.g. by Andrew Gelman in many posts) that we should scrap our system of pre-publication peer review completely, and rely on post-publication commentary by whoever cares about the paper. I’m tempted by this, but I don’t agree — it would take too long to delve here into my reasons why, but in short I think this is doomed because there are too many scientists and too many papers. Nonetheless, this is an intellectually justifiable and self-consistent perspective.
It makes little sense, however, for the post-publication review approach to be merged with an editorial gatekeeping approach. In other words: if the readers are the ones entrusted with the task of assessing papers, the papers shouldn’t have to get an editorial stamp of approval first. It’s like being told one can sample a bunch of foods to find the one one likes best, but the foods have first been vetted by a judge who only let through the 30% that were to his or her taste. What’s more, the judge just smelled, rather than actually tasted, the foods, and some were delivered by famous celebrity chefs.
The only way in which this makes sense is if one believes the editor’s discernment is really important. This is perfectly fine. After all, it’s the justification for the fancy journals in the first place, which really have published a lot of remarkable papers over the past century or so. But then one can’t write that the new approach means that the journal name shouldn’t be “used as a proxy for the possible quality of an article” because, by construction, it is!
A valuable experiment
Despite my negativity, I’m appreciative and impressed that eLife is trying this. The initial test will be on the first 300 submissions that choose this publication option. There are a lot of problems with present-day scientific publishing, and it’s great to see a high-profile journal experiment with alternatives to the standard methods. In addition, as detailed on its blog, eLife will be keeping track of many measures describing the outcomes, such as the percentage of referees who accept the invitation to review, and how often readers look at the decision letter. This will be valuable information for any future publishing experiments.
I look forward to reading about the outcomes!
I finished painting a great blue heron, based on this photo, a few days ago. You can suggest changes that should be made, but I’ll likely ignore them!
— Raghuveer Parthasarathy. July 1, 2018