Should scientists who are at the mercy of the peer review process feel wary of criticizing it? Should those of us in peer review houses avoid throwing scientific stones?
Over the last two years, my research has focused on grant peer review, asking questions like: How do scientists decide which research should be funded? How often do different reviewers agree with one another about which research should be funded? If we assign the same research proposals to different groups of scientists, do those groups reach the same conclusions?
For a process as important to the entire field of science as peer review, there is still surprisingly little empirical evidence that it actually "works." There is, however, considerable evidence from peer review of manuscripts submitted to journals (for example, see here, here, and here) that different people reading the same manuscript reach very different conclusions about whether it should be published.
Many attribute this result (at least in part) to biases against particular authors (for example, women and racial/ethnic minorities) or to psychological habits of mind (for example, confirmation bias or the halo effect). As a result, many have suggested we simply keep authors' identities secret, so that the manuscript is judged on its merits alone. Unfortunately, though, masking authors' identities doesn't seem to substantially improve the reliability or fairness of journal peer review.
But what about grant peer review? Women and underrepresented minorities face the same challenges in obtaining grant funding that they experience in getting published. As a result, the National Institutes of Health (NIH) has begun conducting its own study of what happens when grant applicants' identities are masked, although, as we saw, the evidence from blinded journal peer review doesn't bode well.
So what does this mean for the future of grant peer review? Are we doomed to rely on a process firmly entrenched in the scientific zeitgeist, yet deeply biased and perhaps even inherently flawed?
We must be careful not to cast peer review out as a policy simply because we have found problems with its results. Instead, we need to examine more closely how the processes of peer review shape its outcomes, by studying what happens behind closed peer review meeting doors and by determining effective ways to increase the diversity of the people who get to sit at peer review tables.
As scientists, it's uncomfortable yet crucial to place ourselves under the proverbial microscope. We are beholden to the peer review process for career success and advancement, so objectivity and bravery in evaluating peer review as a policy are difficult but imperative. "Publish or perish" is still the mantra of the tenured and the tenure-seeking (even as tenure lines continue to vanish). Our research is only as valued as the outlets in which it is disseminated.
And yet, it is perfectly plausible to empirically scrutinize a policy, while still firmly believing in its historical, cultural, and scientific value. In other words, those in peer review houses should, in fact, throw stones. We just need to start devising a blueprint for what happens if we come crashing through the floor.