Is peer review like a professional form of hazing?

How the social psychological theory of cognitive dissonance can help us understand the peer review process

One of the most fundamental tenets of science is the process of peer review. Whether submitting a proposal to fund research, an article to be published in a journal, or an application to be hired as a faculty member, scientists rely on their peers to evaluate the merit of their ideas before the scientific community accepts those ideas as sound.

But there is a growing body of research suggesting that peer review is problematic. As I’ve written about before, peer review is subject to cognitive biases that disadvantage scientists outside the majority, including women, people of color, clinical researchers, and scientists at smaller institutions. It’s also a process that has been shown repeatedly (see here, here, here, and here, for a few examples) to be unreliable if not random.

So in the face of evidence that peer review is flawed, why do so many scientists (for example, here, here, and here) defend peer review as a highly effective system? The answer may lie, unironically, in science: One of the most elemental and enduring psychological phenomena, cognitive dissonance, may provide us with some answers.

Cognitive dissonance is a theory developed by social psychologist Leon Festinger in the 1950s. It is based on the principle that humans experience psychological stress or discomfort when they hold two conflicting ideas at the same time. Festinger showed how this works in many experimental studies, including one classic study in which the experimenters asked participants to spend an hour on really tedious tasks, and then asked them to persuade another participant (who was really a member of the research team) that the tasks were exciting and engaging. Some participants were offered $1 to persuade this other person, while others were offered $20 (and a control group was not asked to interact with the other participant).

The participants who were paid $1 rated the tasks as significantly more enjoyable than the participants who were paid $20. Why? Because if they had to convince someone else that a really boring task was actually fun, then they needed a pretty good reason for lying (since most people inherently believe they are moral and ethical). One good reason for lying would be that they were paid a hefty amount to lie (hefty for the 1950s, anyway). But in the absence of a financial incentive, why would a moral person lie? It must be because the task wasn’t actually that bad, and maybe it could even be considered kind of fun. That’s the phenomenon of cognitive dissonance at work: two conflicting ideas (“I’m a good person” and “I just lied to this other person”) are resolved by changing one of them (“I didn’t really lie, because it wasn’t so boring, after all”).

A powerful example of cognitive dissonance in the real world is the phenomenon of hazing. There’s evidence showing that the more intense the hazing a person endures to join a group, the more dependent that person feels on the group; relatedly, the more severe the hazing, the more a person wants newcomers to experience the same hazing.

Peer review is certainly not as detrimental, destructive, and abusive a process as hazing. But there are some parallels we can draw. Members of a fraternity or a sports team likely don’t inherently want to be humiliated, deprived of food, or physically assaulted, so if such things do occur as part of a hazing ritual, cognitive dissonance theory predicts they will rationalize such behaviors, perhaps by thinking the hazing wasn’t actually as bad as they thought it would be.

Members of the scientific community don’t inherently want to believe that the process that is responsible for their career advancement and professional survival is flawed and random, so when presented with evidence that it might be, cognitive dissonance theory predicts they will rationalize such evidence, perhaps by thinking the peer review process isn’t actually as flawed as the evidence suggests.

I’m not suggesting that cognitive dissonance is the only factor at play here. Without a doubt, there are a multitude of reasons why scientists would want to protect and preserve peer review. There are issues related to identity, group membership, federal policy, feasibility, and many others that would influence scientists’ beliefs that peer review is sound.

Instead, I’m suggesting that cognitive dissonance can help us understand why scientists might be not simply unwilling, but unable to face the growing evidence that peer review is flawed. When someone has participated in—or been subject to—a process that might be biased in their favor or even random, this contradicts their belief that they are deserving of that grant, that published manuscript, that job. Enter cognitive dissonance, to help rectify those conflicting thoughts.

But, just like with hazing, cognitive dissonance predicts that the more that people are subject to peer review, the more dependent they will feel upon it and the more they will want newcomers to go through it, too.

Peer review doors, peer review tables, and peer review houses

Should scientists who are at the mercy of the peer review process feel wary of criticizing it? Should those of us in peer review houses avoid throwing scientific stones? 

Over the last two years, my research has focused on grant peer review, asking questions like: How do scientists decide which research should be funded? How often do different reviewers agree with one another about which research should be funded? If we assign the same research proposals to different groups of scientists, do those groups reach the same conclusions?

For a process as important to the entire field of science as peer review, there's still not a whole lot of empirical evidence that it actually "works." There is, however, plenty of evidence from peer review of manuscripts submitted to journals (for example, see here, here, and here) that different people reading the same manuscript come to very different conclusions about whether it should be published. 
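To make "agreement" concrete: researchers who study reviewer reliability typically use a chance-corrected statistic such as Cohen's kappa, which asks how much two raters agree beyond what coin-flipping would produce. Here is a minimal sketch of that calculation on entirely invented accept/reject decisions (the data and variable names are hypothetical, for illustration only):

```python
# Hypothetical accept/reject decisions from two reviewers on ten manuscripts.
reviewer_a = ["accept", "reject", "accept", "accept", "reject",
              "accept", "reject", "reject", "accept", "reject"]
reviewer_b = ["accept", "accept", "reject", "accept", "reject",
              "reject", "reject", "accept", "accept", "reject"]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(a)
    labels = set(a) | set(b)
    # Proportion of manuscripts where the two reviewers made the same call.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Agreement expected by chance, given each reviewer's base rates.
    expected = sum((a.count(label) / n) * (b.count(label) / n)
                   for label in labels)
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(reviewer_a, reviewer_b), 2))  # prints 0.2
```

These two invented reviewers agree on 6 of 10 manuscripts, which sounds decent — but because chance alone predicts 5 of 10, the kappa is only 0.2, conventionally read as slight agreement. That gap between raw and chance-corrected agreement is why the studies cited above are so sobering.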

Lots of people think this result can be attributed (in part) to biases against particular authors (for example, women and racial/ethnic minorities) or to psychological habits of mind (for example, confirmation bias or the halo effect). As a result, many have suggested we simply keep authors' identities secret, so that only the manuscript itself is judged on its merit. Unfortunately, though, masking authors' identities doesn't seem to make a substantial difference in improving the reliability or fairness of journal peer review. 

But what about grant peer review? We see the same challenges for women and underrepresented minorities in obtaining grant funding that they experience in getting published. As a result, the National Institutes of Health (NIH) has begun conducting its own study to see what happens when grant applicants' identities are masked, although, as we saw, the evidence from blinded journal peer review doesn't bode well. 

So what does this mean for the future of grant peer review? Are we doomed to rely on a process firmly entrenched in the scientific zeitgeist, yet deeply biased and perhaps even inherently flawed? 

We must be careful not to cast peer review out as a policy simply because we have found problems with its results. Instead, we need to examine more closely how the processes of peer review shape its outcomes, by studying what happens behind closed peer review meeting doors and by determining effective ways to increase the diversity of the people who get to sit at peer review tables.

As scientists, it's uncomfortable yet crucial to place ourselves under the proverbial microscope. We are beholden to the peer review process for career success and advancement, so objectivity and bravery when evaluating peer review as a policy are difficult but imperative. "Publish or perish" is still the mantra of the tenured and the tenure-seeking (even as tenure lines continue to vanish). Our research is only as valued as the outlets in which it is disseminated. 

And yet, it is perfectly plausible to empirically scrutinize a policy, while still firmly believing in its historical, cultural, and scientific value. In other words, those in peer review houses should, in fact, throw stones. We just need to start devising a blueprint for what happens if we come crashing through the floor.