Reading time: 3 minutes

The academic prisoners' dilemma

As of this year, eLife no longer has “accept/reject” decisions after peer review (which I learned via Andrew Gelman):

All papers that have been peer-reviewed will be published on the eLife website as Reviewed Preprints, accompanied by an eLife assessment and public reviews.

Authors will then receive a paper with a full DOI that can be used on funding applications. They will be able to include a response to the assessment and reviews, and decide what to do next:

  • Revise and resubmit
  • Declare the Reviewed Preprint as the final Version of Record

This is as it should be in the age of unlimited digital space.

The quality of public peer review on eLife seems above average: I once received, as the sole peer review of this paper from a double-digit impact factor journal (eLife’s impact factor, per Wikipedia, is 8.7), a single sentence which amounted to “sample size too small”, but with more spelling errors and the same lack of punctuation. If your goal when reading a paper is both critical appraisal and learning, you could do worse than reading this exchange.

But! Eleven reviewed preprints total in the last 5 months seems… low? Am I missing other public reviews? I would, for example, very much like to learn what the reviewers said about this.

More generally, I am worried that this will make eLife the default publication of last resort — trouble for the Infection and Immunitys and Leukemia and Lymphomas of the world, but not exactly the killing blow to Science or Nature or their million offshoots.

The current, bizarre, inefficient, unsustainable system — Byzantine, if you will, though that is too disrespectful of Byzantium — keeps itself alive through force of reputation. Critical thinking is hard, so unless I am on the opposing team and my goal is to tear down your data, I will save many a mental cycle by “trusting the process” and taking the conclusion, the abstract, that one piece of information I need to cite in my own work… at face value. And evidence to the contrary be damned: say “published in NEJM” to a clinician and their ears will perk up.

So we are in a prisoner’s dilemma of sorts. Take a group of one hundred researchers: the average benefit to all of them, and to science in general, would be greater if all published in eLife. But if 90 of the 100 submit to CNS journals or NEJM first and then go down the impact factor list, and only 10 shmucks go straight to eLife, there will be only a handful of “winners”, the state of science will remain what it is, and everyone will end up wasting so… much… time.
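For the game-theory-curious, here is a minimal sketch of that structure in Python. Every payoff number is made up purely for illustration; only the ordering matters: chasing prestige pays more for the individual no matter what the other 99 do, while everyone going straight to an eLife-style venue would leave the whole group better off.

```python
# Toy n-player prisoner's dilemma for the submission game described above.
# All payoff numbers are invented for illustration; only their ordering matters.

N = 100  # researchers in the toy population

def payoff(strategy: str, n_direct: int) -> float:
    """Hypothetical payoff for one researcher.

    strategy  -- "direct" (straight to an eLife-style Reviewed Preprint) or
                 "chase" (CNS/NEJM first, then down the impact-factor list)
    n_direct  -- how many of the N researchers go direct
    """
    shared = 5 * n_direct / N                # everyone gains as more people go direct
    bonus = 1 if strategy == "chase" else 0  # individual prestige temptation
    return shared + 1 + bonus

for n_direct in (100, 10, 0):
    n_chase = N - n_direct
    p_direct = payoff("direct", n_direct)
    p_chase = payoff("chase", n_direct)
    average = (n_direct * p_direct + n_chase * p_chase) / N
    print(f"{n_direct:3d} go direct: direct={p_direct:.2f}  chase={p_chase:.2f}  average={average:.2f}")
```

With these toy numbers, the all-direct world averages 6.00 per researcher, the 90-chasers world 2.40, and all-chase 2.00 — yet chasing beats going direct by a point in every scenario, which is the dilemma in a nutshell.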

It doesn’t have to be this way – and Covid did expedite some reputational decay – so this is as good a time as any to place a chisel in the crack. What’s needed now is some forceful movement of the hammer and, well, I guess people who publish (people who review are equally important, but maybe, just maybe, we will at one point be able to leave that to an algorithm; it would certainly do a better job than most!) are the hammer in this strained analogy.

Should I start with myself? I do have a handful of side projects which are neither industry nor strictly academic — myself having no academic affiliation. Stay tuned.
