The main objectives of peer review are to improve manuscript quality and advise the editor regarding acceptance. But peer review’s two defining characteristics – anonymity and altruism – are ill-suited to reliably fulfilling its aims. Increases in manuscript submissions have led to a ‘tragedy of the reviewer commons’, whereby over-solicited reviewers put less effort into reviews or stop reviewing altogether. I present the problems surrounding peer review, and approaches to slowing the tragedy.
My first experience with peer review was in 1983 at UC Berkeley. I conducted a project investigating whether females of the California oak moth, Phryganidia californica, produce a sex pheromone to attract males. Sure enough, this is what we found. Together with my supervisor, Dr. Jan Volney, I wrote the study up as a research note for the Journal of the Lepidopterists’ Society [1]. Before submitting the manuscript, however, Jan suggested that we get “friendly” reviews from colleagues in the Department of Entomology. Three willing people reviewed what was a two-page, very straightforward study. The ensemble of their comments was extensive, many dealing with the writing(!). These reports were invaluable in improving the manuscript and ultimately publishing the paper.
These were not “real” peer reviews – the reviewers were our friends and colleagues – yet the experience had all the hallmarks of how and why peer review works.
Our reviewers:
Had the time. They were likely not as burdened by reviewing as they would be today.
Took the time necessary. They viewed a thorough review as important.
Were competent to review the paper. Everyone we approached had the needed expertise.
Were constructive. They viewed peer review as an important institution. Their identities were obviously known to us, adding an extra layer of importance to carefully written critique.
Why these reviews worked then is also indicative of why peer review sometimes does not work today: scientists are busy and tend to give reviewing low priority. Some scientists refuse invitations, do not take the necessary time when they have agreed, may not be specialists in the area, and may hide behind the veil of anonymity to either adulate or destroy a paper. These and other problems are by no means givens – many if not most peer reviews do adhere to the positive points above. Nevertheless, there is a growing problem: peer reviews do not always serve their purpose, and sometimes sully it.
Anonymity and altruism
Although the ideal of peer review is to benefit science, and authors consistently express support for peer review [2], it has numerous shortcomings, including: delaying publication, over-influence on editorial decisions, biased reports, and the risk that reviewers use privileged manuscript information for their own ends.
Many of the concerns surrounding peer review stem from its two defining characteristics.
Anonymity
Not being known to authors means that reviewers can provide comments without fear of judgement or reprisal. This should promote more honest reviews. Although many use anonymity as intended, others do not, or even abuse it. This can take several forms:
- Lackluster reports. The reviewer produces a summary or careless report.
- Biased or ‘rogue’ reports. The reviewer biases her report to either support or lambaste a study, based on the authors’ identities or school of thought.
- Delayed or cancelled reports. The reviewer gives low priority to conducting the review, intentionally delays it, or never comes through.
Editors can do little to prevent these issues beyond inviting people they view as conscientious in the first place, and if necessary, arbitrating problematic reviews.
Altruism
Reviewing contributes to the scientific commons with the expectation that others do the same. But conducting a thorough, thoughtful review takes time that could be spent on one’s own research. No ‘higher authority’ checks to see if scientists contribute their fair share to the commons, meaning that invited reviewers are free to decline, or should they agree, conduct their assessment pretty much as they please.
Issues surrounding anonymity and altruism surely existed when I was a student in the 1980s, but were not discussed nearly as much as today. I argue below that the main problem plaguing peer review ultimately comes from continual increases in what are already too many manuscripts.
Alternatives to classic peer review. Some are turning to preprint servers and subject repositories (arXiv, bioRxiv, PeerJ, Zenodo, RePEc, PubMed Central) and to websites with post-publication peer review (e.g., PLoS ONE, F1000, eLetters). These alternatives are not without their own issues, for example, the posting of embarrassing errors that are subsequently corrected (or not), and a lack of clarity regarding whether the preprint is the final (citable) product. Another initiative is collaborative peer review (eLife, Frontiers journals, EMBO, Science), in which reviewers interact to produce a single report [3]. This maintains anonymity and promotes constructive, consistent, quality assessments.
More manuscripts – more experts
According to the Web of Science Core Collection, a total of 2,896,773 articles were published in 2016, about double the number in 2003 [4]. Publons [5] estimates that published articles grew by 2.6% annually from 2013 to 2016, whereas manuscript submissions grew by 6.1% annually. The number of individual authors is also increasing at comparable rates: according to one survey, it almost doubled between 2003 and 2013 [6].
It would seem, therefore, that the pool of reviewers is keeping up with manuscript submissions. But according to the Publons survey, the average number of invitations required to obtain one review increased from 1.94 to 2.38 over the period 2013–2017. Similar trends were found for a set of ecology journals [7].
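As a rough illustration of how these growth rates compound, here is a minimal sketch using the figures quoted above; the extrapolation is mine, not from the Publons report:

```python
# Back-of-the-envelope sketch: how the gap between submission growth
# (6.1%/yr) and article growth (2.6%/yr) compounds, alongside the
# Publons figures for invitations needed per completed review
# (1.94 in 2013 to 2.38 in 2017). Figures from the text; the
# calculation itself is illustrative.

years = 4  # 2013 -> 2017
submission_growth, article_growth = 0.061, 0.026

gap = ((1 + submission_growth) / (1 + article_growth)) ** years
print(f"Submissions outgrew articles by ~{(gap - 1) * 100:.0f}% over {years} years")

invite_growth = (2.38 / 1.94) ** (1 / years) - 1
print(f"Invitations per review rose ~{invite_growth * 100:.1f}% per year")
```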
Then why are journals having greater difficulty in getting preferred reviewers to agree?
In an ideal world, each review received would be reciprocated to the scientific community at some later time by one or more of the co-authors. For example, if a paper with 8 authors received a total of 10 reviews over 4 sequential submissions (that is, at least 3 rejections), then, were the burden divided as equally as possible, 6 of the 8 authors would each review 1 future manuscript and the other 2 would each review 2.
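To make the bookkeeping concrete, here is a minimal sketch using the illustrative numbers above (10 reviews received, 8 coauthors); the function is hypothetical, not an existing tool:

```python
# Divide a manuscript's accumulated 'review debt' as evenly as
# possible among its coauthors. Illustrative only.

def review_debt(reviews_received, n_authors):
    """Return how many future reviews each coauthor 'owes'."""
    base, extra = divmod(reviews_received, n_authors)
    # 'extra' coauthors owe one additional review each
    return [base + 1] * extra + [base] * (n_authors - extra)

print(review_debt(10, 8))  # -> [2, 2, 1, 1, 1, 1, 1, 1]: two owe 2, six owe 1
```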
But things are not so simple. There is no straightforward way to enable or enforce fair reciprocation. For example, if I am co-author on a publication and ‘owe’ one review, then how much time do I have to conduct it, is there any quality control of my review, and what happens if I don’t review the required paper?
Reasons for not pulling one's weight are many and varied. They include: not being invited to review papers in the first place, being invited to review papers outside one’s expertise, and being invited but having conflicts of interest. And some people are just busy. Busy can mean personal or professional imperatives, or prioritizing one task over another. Reviewing takes time and effort and is not directly recompensed. It therefore competes with other projects promising some kind of return (e.g., a published article). ‘Busy’ also makes it easy to think that one’s review is not indispensable – “there are enough competent reviewers out there to do the job”. Declining to review need not be justified, and is easy: just click on the appropriate link or ignore the soliciting email.
Some scientists are more likely to be contacted to review papers than others. The number of authorships, citations, appearances at conferences, and the regularity of reviewing for journals, all contribute to being highly solicited to review. Experience-bias is therefore built into the system: students, postdocs, and young faculty are less likely to be identified as potential reviewers, despite their reviews often being as or even more thorough and useful than those from more senior scientists.
What is the overall effort that goes into peer review? Consider the following simple but plausible calculation. Assume a very low average of 1 review per submission; then anywhere from 1 review (accepted by the first journal approached) to several (if two or three journals were approached) contributed to marshalling a manuscript to publication. If the average is a high 3 reviews per submission, then the total can be anywhere from 3 to more than 10! The reality is probably somewhere in the middle (on average), and the actual number of reviewers will be somewhat less than the number of reviews, since some experts are contacted more than once (following rejection by one journal and submission to another). Nevertheless, the total effort put into reviewing published (and sometimes never-published) papers is considerable, and beyond several reviews, arguably becomes redundant and a waste of reviewers’ time.
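The range quoted above follows from simple multiplication; a minimal sketch, with the low and high assumptions stated in the text:

```python
# Total reviewing effort consumed on a paper's path to publication:
# (reviews per submission) x (journals approached). Values illustrative.

def total_reviews(reviews_per_submission, journals_approached):
    return reviews_per_submission * journals_approached

for per_sub in (1, 3):              # low vs. high reviews per submission
    for n_journals in (1, 2, 4):    # accepted first try vs. after rejections
        print(f"{per_sub} review(s)/submission, {n_journals} journal(s): "
              f"{total_reviews(per_sub, n_journals)} reviews in total")
```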
The tragedy
As mentioned above, reviewing manuscripts is an act of altruism. It takes time and effort and means doing less of one’s own work. It can also mean – in making extensive suggestions and corrections – that reviewers contribute as much as or more than some of the coauthors to the final paper, yet reviewers are, at most, acknowledged [9]. Peer review works when scientists view this sacrifice as both a responsibility and an opportunity to contribute to the betterment of science. Some are also motivated by the expectation that other scientists feel and act the same. They indirectly reciprocate. They cooperate.
But for peer review to work, enough people need to contribute, and specifically the right people. The right people are those scientists who, according to the editors, are competent to review specific manuscripts, reliable, and thorough. The system suffers and eventually breaks down when the right people are over-solicited and decide to review less than their share, or not at all. When the right people contribute less to peer review, less appropriate reviewers take their place.
The tragedy of the reviewer commons [10] occurs when different journals tend to seek the same reviewers [11]. Each journal invites specific people for its own purposes, as if no other journal were doing the same. This happens for the simple reason that editors never communicate their reviewer lists. Imagine the following plausible example. If I am the editor of journal X, I may invite Reviewer Y (who always accepts reviewing assignments) 10 times a year, without knowing that Z other journals are also inviting Y on average (let’s say) 10 times a year. The total number of invitations this person receives in a year is about 10*(Z+1), and as Z becomes large, one can easily understand that this favored reviewer simply cannot make the time to review all of these manuscripts. Worse, this person could become increasingly annoyed, eventually start to refuse assignments, or snap and stop reviewing altogether for certain journals.
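The load on such a favored reviewer is easy to tabulate; a minimal sketch of the 10*(Z+1) formula above (the per-journal rate of 10 invitations per year is the example’s assumption):

```python
# Annual invitations received by a favored reviewer when Z other
# journals behave exactly like journal X (about 10 invites/year each).

def annual_invites(invites_per_journal, z_other_journals):
    return invites_per_journal * (z_other_journals + 1)

for z in (0, 4, 9, 19):
    print(f"Z = {z:2d}: ~{annual_invites(10, z)} invitations/year")
```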
The tragedy affects journals as courts of quality science. Not only do sub-standard reviews result in less informed publication decisions, but articles, when finally published, will be of a lower scientific standard than had they been revised in response to high-quality reviews. Overall, the tragedy negatively impacts science.
Evidence of the tragedy. Fox and colleagues [7] examined trends in the rates at which invited reviewers agreed and submitted their reviews for a sample of 6 journals in ecology and evolution. They found downward trends over time for some but not all of the journals, and that individuals invited repeatedly were less likely to agree to review. The authors also found indirect evidence that some journals manage their reviewer databases to avoid repeat invitations.
The tragedy affects science and the scientific community in a number of ways:
- More desk rejections so that journals can cope with fewer consenting reviewers
- Emergence of journals requiring no reviews or lower scientific standards
- Bias stemming from a smaller (willing) reviewer pool
- Fewer, poorer reviews received
The last of these points is particularly troublesome. Being asked to review a manuscript is recognition of one’s expertise, and it used to be – and for many still is – a great honor. Nevertheless, being asked to review by a major journal such as Science or Nature carries greater kudos than receiving an invitation from a less prestigious (but nevertheless reputable) journal. The obvious danger is that preferred reviewers are more likely to both accept invitations from, and conduct high-level assessments for, prestigious journals compared to their lesser-known counterparts. The latter will have greater difficulty finding highly qualified reviewers, and some who do accept will attach little importance to the quality of their reports.
Slowing the tragedy
The tragedy is a graded phenomenon – not all-or-nothing. Individual journals can contribute to slowing the effect through the development and more judicious use of their own reviewer base. Should many journals do the same, this will materialize as improved science.
Numerous checks slow the tragedy, and many journals apply them without even realizing the underlying mechanism that created the problem in the first place, or the broader positive implications for science.
Fewer go out for review
Until the 1990s, submissions that fell within the scope of a journal were routinely sent for peer review. Reviewers were plentiful and willing. Editors made it a point to give almost all manuscripts their day in court. This situation changed with the growth of the Internet and, more specifically, the increasing importance accorded to citation metrics. The 2000s saw notable increases in submissions to the top-ranked journals and the first signs of reviewer fatigue and disengagement.
Journals increasingly found themselves reviewing manuscripts that had little or no chance of acceptance. They had to balance this against the difficulty of obtaining reviews for those (viable) manuscripts where expert feedback was most needed. Many journals responded to these challenges by increasing their desk rejection rates. The figure below illustrates the challenges of deciding between desk rejection and external review.
As unjust as they may seem, desk rejections have positive effects. They save time and effort for both editors and reviewers, and although disappointing for authors, enable a quick turn-around for submission to a more suitable journal. Desk rejections also help conserve a journal’s reviewer base and result in lower costs since less editorial office time is spent marshaling peer reviews.
In deciding on desk rejection, the chief editor – and often a member of the board – believes that the manuscript has little chance of surviving peer review. In making such decisions, editors have to rely on their experience of the range of manuscripts that are sent out for review at the journal and fail, and those that succeed. Desk rejections therefore necessarily have a subjective component and, in particular, call on the editor’s ability to forecast whether a manuscript slated for desk rejection could become an acceptable paper, were it revised. Of course, without expert reviews the editor is limited in making such forecasts, and it is for this reason that editors tend to err on the side of conservatism and send manuscripts out for review when in doubt.
Nowadays, desk rejection is practiced by many journals. Although desk rejections undoubtedly reduce the tragedy, they are not a panacea, if for no other reason than that rejected authors can simply continue to submit to journals where they have little chance of acceptance, some of which will nevertheless review (and reject) their manuscripts.
Submission fees
Although at first sight a reasonable policy to deter speculative submissions, charging a submission fee is very rarely practiced in biological science journals. The two main impediments are that it requires administration and, more importantly, that even an inexpensive fee could become a significant factor in choosing a journal, resulting in many excellent papers being submitted elsewhere.
Invite younger reviewers
With age comes knowledge, and editors know this all too well. Given the choice between inviting a PhD student and a faculty member, most editors would not think twice and would go with experience. But whereas such expertise is invaluable, there are some notable downsides: biased reports, summary reports, and reports submitted late or never. Over-solicitation may explain some of these shortcomings, and a promising solution is to shift some of the reviewing responsibility to younger scientists [12]. This has the added benefits of training, of accessing the expertise of scientists in an active research phase of their careers, and of likely greater punctuality and thoroughness than older researchers provide.
It should come as no surprise that some young scientists are not equipped to conduct a peer review. Graduate school curricula very rarely include courses on writing scientific articles let alone reviewing. Journals, more experienced scientists and group PIs need to encourage, teach and train. This can be accomplished in two complementary ways.
First, journals should encourage the first steps in peer review. This is not to say that it’s the journal’s responsibility to take a student by the hand from start to finish, but rather that they identify prepared students and postdocs, and/or invite supervisors, requesting that they help their students take the first steps or gain additional experience.
Second, experienced researchers need to train students in peer review. Peer review is often overlooked as part of a student’s education, and is rather viewed as something learned ‘on the job’. Learning on the job has its merits, but also its problems, such as untrained reviewers focusing on minor details and wording their reviews either too critically or too flatteringly. Rather, young scientists need to learn to develop a critical eye whilst being constructive and diplomatic. In reviewing manuscripts, young scientists also become more aware of the strengths and shortcomings of their own writing. Experienced researchers can accompany students and postdocs by co-reviewing manuscripts or by giving lectures or courses on writing and publishing scientific papers.
Diversify the reviewer pool
Previously underrepresented scientific communities are increasingly publishing high-impact, high-quality science in reputable international journals. Yet scientists from these communities do not contribute to peer review in the same proportion as their publications. The principal reason is not that they do not want to review, but rather that some editors are reluctant to approach scientists from unfamiliar communities, or scientists who are simply not registered in their databases. For example, according to Publons [5], scholars from China submitted over 17 million papers between 2013 and 2017 and were responsible for 13.8% of research output, yet performed somewhat fewer of the reviews (8.8%). There are signs, however, that emerging regions – and China in particular – are increasing their contributions to peer review, with growth over the same period of 193% and 224%, respectively.
Ask dedicated editors to conduct reviews
Journals range widely in the responsibilities given to editorial board members. Many journals keep editors out of the manuscript review process, preferring that they focus on arbitration tasks such as initial screening and recommendations on reviewed manuscripts. Contributions by editors to reviewing, whether occasional or regular, will alleviate the tragedy.
Clearly, those rare journals that review exclusively ‘in house’ are unaffected by the tragedy. Moreover, they reduce the tragedy by ‘shielding’ considerably greater numbers of scientists from peer review solicitations. Editors-as-reviewers, however, potentially bring other issues into peer review, such as not being sufficiently qualified to assess certain manuscripts or in introducing bias.
Recompense, acknowledge, publish
Compensating reviewers is arguably the most effective way of curtailing the tragedy. It provides a just reward directly for each contributed review, rather than relying exclusively on the mechanism of altruistic, indirect reciprocity. Providing rewards should, in particular, promote quality reviews, since the service is rendered under a costlier contract than just an automated ‘thank you’, and its quality can be checked.
There are numerous non-mutually exclusive ways to acknowledge, reward or compensate reviewers, including:
- Publish names at end of journal
- PubCreds
- Lowered costs for subsequent publication or on-line access
- Recognition at home institute
- Open peer review
- Monetary compensation
- Professional reviewers
None of these is routinely used, and although the last two have been widely discussed, I know of no journal that monetarily compensates reviewers. Paying reviewers is problematic because it requires administration and accounting. PubCreds, where reviewers earn tokens that are otherwise required for manuscript submission [13], is really only viable if many scientists and journals participate. Publons allows reviewers to ‘claim’ their reviews (with a citable DOI), and the service ensures that the right people get the appropriate credit [14]. Finally, OA platforms with open peer review, such as F1000Research and PeerJ, are gaining considerable support because of the transparency of the process [15-17].
Problematic reviewers
As much as some editors may relish the thought, reprimanding or punishing reviewers is rarely done. Here are some of the issues that irk editors:
- Reviewer never responding or always busy
- Reviewer agreeing to review, but not submitting review
- Tardy review
- Summary review
- Biased review
- Aggressive or personal review
Perhaps the main reason why it is difficult, and even ethically untenable, to question some reviewer behaviors (such as late reports) is that personal or health issues may be involved. The most that an editor can really do for most of the above points is to give the reviewer another chance on a future manuscript, and if the problem persists, place the reviewer on a no-invite list. Heavily biased and aggressive reviews may immediately land the reviewer on a no-invite list or a blacklist.
Problematic reviewers increase the tragedy regardless of whether they review manuscripts or are blacklisted. Punishment as suggested by Hauser and Fehr [18] simply won’t work and, as just mentioned, would be unethical in cases where the reasons for being late are personal. Rather, the most positive and effective means of reducing the issues listed above is for teachers, mentors, and PIs to educate young scientists.
Cascade journals and revising rejected manuscripts
Journals that send what should be desk-rejected manuscripts out for external review increase the tragedy. Over-reviewing is difficult to estimate (but see ref. 19), if for no other reason than that the justification for sending a manuscript out for review is rarely recorded. It is the remit of the journal, and not the community, to consistently administer a peer review policy. Desk rejecting submissions fairly requires editorial time, effort and expertise, and is never applied to all eventually-rejected manuscripts, since editors cannot perfectly predict the outcome of peer review. A manuscript with potential at one journal that is rejected following peer review will possibly experience the same fate in one or more subsequent submissions.
One partial solution to the burden of re-reviewing a previously rejected manuscript is to transfer reviews from journal to journal. Typically, this is done within a journal family under the same publisher, both to facilitate the process and to promote the flourishing of new or lower-standing members of the family (so-called ‘cascade journals’). The authors will be expected to reply to the reviewer comments from the original submission, and some non-family journals that consider reports from a previous submission will nevertheless solicit one or more new reports.
More importantly, following rejection, some authors who want to publish in comparable or even more prestigious journals may not want to prejudice a publication decision, and will make no mention of previous rejection(s). Even without informing the chief editor of a previous submission, taking earlier reviews seriously when revising a manuscript will improve it and increase the chances of subsequent publication success. Thus, both transferring reviews from journal to journal and revising rejected manuscripts before approaching a new journal contribute to reducing the tragedy.
References
1 The study was originally submitted to Pan Pacific Entomologist, but after many months without news, we learned that the chief editor had apparently left his post, and submitted manuscripts were unretrievable. A very unfortunate introduction to the world of publishing!
2 Nicholas, D. et al., 2014. Trust and authority in scholarly communications in the light of the digital transition: setting the scene for a major study. Learned Publishing, 27(2), pp.121–134.
3 King, S.R., 2017. Consultative review is worth the wait. eLife, 6, p.e32012.
4 Note that this is likely to be biased by increased numbers of journals in the 2017 estimate, meaning that the doubling time is actually shorter.
5 Publons. (2018). Global State of Peer Review. Available at: https://publons.com/community/gspr
6 Plume, A. & van Weijen, D., 2014. Publish or perish? The rise of the fractional author…. Research Trends, 38. Available at: https://www.researchtrends.com/issue-38-september-2014/publish-or-perish-the-rise-of-the-fractional-author/
7 Fox, C.W., Albert, A.Y.K. & Vines, T.H., 2017. Recruitment of reviewers is becoming harder at some journals: a test of the influence of reviewer fatigue at six journals in ecology and evolution. Research integrity and peer review, 2, p.3. Note that the authors also found indirect evidence that some journals manage their reviewer databases to avoid repeat invites (Fig. 2C), indicative that they are reacting to the tragedy. Also, the study cannot exclude the possibility that the individual reviewers censused are receiving increasing numbers of total review solicitations over time from other journals.
8 Paine, C.E.T. & Fox, C.W., 2018. The effectiveness of journals as arbiters of scientific impact. Ecology and Evolution, 8(19), pp.9566–9585.
9 This is slowly changing with journals that publish peer reviews alongside articles. Ross-Hellauer, T., 2017. What is open peer review? A systematic review. F1000Research, 6, p.588.
10 Evidently inspired by the classic work of Garrett Hardin. Hardin, G., 1968. The tragedy of the commons. The population problem has no technical solution; it requires a fundamental extension in morality. Science, 162(3859), pp.1243–1248.
11 Hochberg, M.E. et al., 2009. The tragedy of the reviewer commons. Ecology letters, 12(1), pp.2–4.
12 Donaldson, M.R. et al., 2010. Injecting youth into peer-review to increase its sustainability: a case study of ecology journals. Ideas in Ecology and Evolution, 3, pp.1-7; Hochberg, M.E., 2010. Youth and the tragedy of the reviewer commons. Ideas in Ecology and Evolution, 3, pp.8-10.
13 Fox, J. & Petchey, O.L., 2010. Pubcreds: Fixing the Peer Review Process by “Privatizing” the Reviewer Commons. Bulletin of the Ecological Society of America, 91(3), pp.325–333.
14 https://publons.com/home/
15 Ross-Hellauer, T., 2017. What is open peer review? A systematic review. F1000Research, 6, p.588.; Tennant, J.P. et al., 2017. A multi-disciplinary perspective on emergent and future innovations in peer review. F1000Research, 6, p.1151; Polka, J.K. et al., 2018. Publish peer reviews. Nature, 560(7720), pp.545–547; Parker, T.H. et al., 2018. Empowering peer reviewers with a checklist to improve transparency. Nature ecology & evolution, 2(6), pp.929–935.
16 The Royal Society has recently introduced open peer review at two of its journals. https://blogs.royalsociety.org/publishing/publication-of-open-peer-review/
17 Another recently proposed form of post-publication feedback is open, interoperable annotations https://scholarlykitchen.sspnet.org/2018/08/28/all-about-open-annotation/
18 Hauser, M. & Fehr, E., 2007. An incentive solution to the peer review problem. PLoS Biology, 5(4), p.e107.
19 Calcagno, V., Demoinet, E., Gollner, K., et al., 2012. Flows of research manuscripts among scientific journals reveal hidden submission patterns. Science, 338(6110), pp.1065–1069; Paine & Fox, op. cit.