There are tradeoffs involved in participating in open peer review. Here, I highlight what the empirical evidence shows about those tradeoffs.
We are officially entrenched in the era of open science, a movement designed to make all output of scientific research—including raw data, data analysis, and finalized manuscripts—freely available to all members of society. The term “open science” typically gets thrown around with abandon but without clarity. Often, people use the broad term to specifically refer to Open Access (OA) journals, such as PLOS One or Science Advances, which provide their articles free-of-charge to the general public.
However, free-of-charge to the public does not mean free-of-charge to scientists. For the vast majority of OA journals, the financial burden of publishing is shifted from the reader (and the institutional libraries that pay monumental subscription fees) to the scientist. Although the article processing charge typically comes out of the grant funds supporting the research, it generally costs an author between $1,000 and $3,000 to publish an article in an OA journal. And while it's true that many OA journals waive the fee for scientists from underdeveloped countries, there are many cases where the fee is unlikely to be waived yet still presents an insurmountable obstacle: graduate students, for example, rarely have carte blanche access to grant funds. Realistically, then, open access is only open to certain scientists.
As OA journals become more commonplace in various scientific fields, other elements of the open science movement have begun to proliferate, such as the use of open peer review (OPR). Note that OPR currently only refers to manuscript peer review, rather than grant peer review. Few people have entertained the idea of opening up the grant peer review process (outside of grants administered by philanthropic organizations), which is one of the reasons that grant peer review is so challenging to study.
OPR can take many, many forms, as Tony Ross-Hellauer reports in his systematic review of OPR. Ross-Hellauer identifies the following possible elements of Open Peer Review:
Open Identities: Also known as “unblinded” peer review, this is when the reviewers sign their names, so that the authors know who their reviewers are. This is essentially the broadest definition of OPR.
Open Reports: This is when the reviewers’ critiques (or summaries of them) are posted publicly along with the article, so that readers can see the critiques for themselves.
Open Participation: Also known as “crowdsourced” or “public” peer review, this is when readers from the broader community can contribute their own reviews. There’s a huge degree of variety in how this is undertaken, but basically, it aims to increase the number of reviewers for a given manuscript and to expand the pool of reviewers beyond the typical group of well-established scientists.
Open Interaction: This is when reviewers and authors directly interact with one another, as opposed to having all interactions mediated by the journal editor.
Open Pre-Review: This is when the draft of a manuscript submitted to a journal is immediately made publicly available as a “pre-print.” Such pre-prints can often invite comments from public reviewers that can then be integrated into future iterations as the manuscript winds its way through the sometimes interminable peer review process.
Open Final Version: This is when comments are opened up after the final version of an article has been published—sort of like a more regulated version of the perilous comments section of an online news article. This is similar to Open Participation, except that it only occurs after the final manuscript has been accepted for publication (and the authors have paid the money to publish it).
Open Platforms: This is when a platform separate from the journal itself serves as the host for an online article. The platform conducts its own peer review process and then provides a review report. For some platforms, participating journals can access these reports and use them to decide to solicit the authors to publish the manuscript. For others, the platform automatically forwards the report to the author’s (participating) journal of choice.
Clearly, open peer review can mean a lot of different things, but almost all of these approaches have one thing in common: the authors and reviewers are known to each other. As the list above suggests, there are many benefits for authors and for the scientific community at large when the reviewers' identities are known, including:
Enhanced accountability for reviewers’ opinions and words
Increased transparency of the peer review process
More constructive and civil feedback from reviewers when their identities are known
Reduced likelihood of plagiarism, fabricated results, or scientific misconduct
Open reviews can serve as instructive models for graduate students and early career scholars who are new to reviewing manuscripts
Open participation can diversify the pool of potential reviewers, thereby potentially reducing bias in the review process and amplifying the voice of underrepresented scholars
Open pre-review may shorten the time it takes between submission and publication
If more reviewers participate, it may increase the reliability of the aggregated judgment made about a manuscript
There are also some drawbacks for the broader scientific community:
There's evidence that asking reviewers to sign their reviews doesn't improve the quality of their reviews, and might actually result in lower quality reviews
Few reviewers may be willing to participate in OPR before it has become the norm
Reviewers may not give feedback that is as honest or candid if their identities are known
Open participation may invite unfounded or uninformed criticism from scholars without the requisite expertise
So it seems that the potential benefits may outweigh the drawbacks for the larger scientific community, but what about for the reviewers themselves? In other words, should you serve as a reviewer for OPR?
The main concern about OPR for reviewers tends to be that they aren't protected from backlash from the authors or from others who disagree with their opinion. For many reviewers, remaining anonymous ensures that they can be critical and discerning without fear of starting an academic (or political) rivalry that could negatively impact their careers. It's hard to quantify or empirically measure the degree to which OPR reviewers experience such negative repercussions, and whether those repercussions outweigh the benefits to reviewers.
One piece of empirical evidence that has been found, though, is that it may actually take reviewers more time to complete their reviews when their identity is known. So if it requires more time and effort, with potentially greater professional risk, what's the motivation to participate in OPR as a reviewer?
This conundrum has led to another step in the open science movement that is gaining steam: trying to incentivize reviewers to participate in OPR, given the immense time investment required to review a manuscript and the growing shortage of willing and able reviewers. Companies such as Publons aim to create public, verified profiles for reviewers so that their efforts are recognized and rewarded beyond a small line at the end of their CV.
If there is public verification of one’s contributions, this might be a good incentive to engage in OPR, right? Who wouldn’t be enticed by public recognition for an otherwise unremunerated and sometimes thankless job?
Although most academics consider their CV to be precious, if not invaluable, a public reviewer profile falls short of financially compensating reviewers for their time. Plus, in the rat race that is the tenure track, it's unlikely that the academic equivalent of a participation ribbon will motivate already overburdened academics to donate more of their time than they currently do, given that serving as a reviewer carries little weight in the tenure process.
Given the novelty of OPR and its slow but increasing adoption in science, it remains to be seen whether the risks to reviewers’ professional identities and time invested are borne out. It also isn’t clear to what extent having proof of one’s reviewing will serve as an effective professional cachet. Until there’s more data on how OPR affects not just authors but also reviewers, I think scientists ought to be wary of donating their time and resources to an uncertain process. On the other hand, we can’t obtain more data on the effects of open peer review if we don’t have willing participants.
And therein lies the paradox of OPR: We won’t know if it works until more of us try. So for the good of the future of science, perhaps we need to be willing to participate in an experiment of our own collective making.