How the social psychological theory of cognitive dissonance can help us understand the peer review process
One of the most fundamental tenets of science is the process of peer review. Whether submitting a proposal to fund research, an article for publication in a journal, or an application for a faculty position, scientists rely on their peers to evaluate the merit of their ideas before the scientific community accepts those ideas as sound.
But there is a growing body of research suggesting that peer review is problematic. As I’ve written about before, peer review is subject to cognitive biases that disadvantage scientists outside the majority, including women, people of color, clinical researchers, and scientists at smaller institutions. It’s also a process that has been shown repeatedly (see here, here, here, and here, for a few examples) to be unreliable if not random.
So in the face of evidence that peer review is flawed, why do so many scientists (for example, here, here, and here) defend peer review as a highly effective system? The answer may lie, unironically, in science: One of the most elemental and enduring psychological phenomena, cognitive dissonance, may provide us with some answers.
Cognitive dissonance is a theory developed by social psychologist Leon Festinger in the 1950s. It is based on the principle that humans experience psychological stress or discomfort when they hold two conflicting ideas at the same time. Festinger demonstrated how this works in many experimental studies, such as one classic study in which the experimenters asked participants to spend an hour on a really tedious task and then asked them to persuade another participant (who was actually a member of the research team) that the task was exciting and engaging. Some participants were offered $1 to persuade this other person, while others were offered $20 (and a control group was not asked to interact with the other participant).
The participants who were paid $1 rated the task as significantly more enjoyable than the participants who were paid $20. Why? Because if they had to convince someone else that a really boring task was actually fun, then they needed a pretty good reason for lying (since most people inherently believe they are moral and ethical). One good reason for lying would be that they were paid a hefty amount to lie (hefty for the 1950s, anyway). But in the absence of a financial incentive, why would a moral person lie? It must be because the task wasn’t actually that bad, and maybe it could be considered kind of fun. That’s the phenomenon of cognitive dissonance at work: two conflicting ideas (“I’m a good person” and “I just lied to this other person”) are resolved by changing one of them (“I didn’t really lie, because it wasn’t so boring, after all”).
A powerful example of cognitive dissonance in the real world is the phenomenon of hazing. There’s evidence showing that the more intense the hazing a person undergoes to join a group, the more dependent that person feels on the group; relatedly, the more severe the hazing, the more a person wants newcomers to experience the same hazing.
Peer review is certainly not as detrimental, destructive, and abusive a process as hazing. But there are some parallels we can draw. Members of a fraternity or a sports team likely don’t inherently want to be humiliated, deprived of food, or physically assaulted, so if such things do occur as part of a hazing ritual, cognitive dissonance theory predicts they will rationalize such behaviors, perhaps by thinking the hazing wasn’t actually as bad as they thought it would be.
Members of the scientific community don’t inherently want to believe that the process that is responsible for their career advancement and professional survival is flawed and random, so when presented with evidence that it might be, cognitive dissonance theory predicts they will rationalize such evidence, perhaps by thinking the peer review process isn’t actually as flawed as the evidence suggests.
I’m not suggesting that cognitive dissonance is the only factor at play here. Without a doubt, there are a multitude of reasons why scientists would want to protect and preserve peer review. There are issues related to identity, group membership, federal policy, feasibility, and many others that would influence scientists’ beliefs that peer review is sound.
Instead, I’m suggesting that cognitive dissonance can help us understand why scientists might be not simply unwilling, but unable to face the growing evidence that peer review is flawed. When someone has participated in—or been subject to—a process that might be biased in their favor or even random, this contradicts their belief that they are deserving of that grant, that published manuscript, that job. Enter cognitive dissonance, to help rectify those conflicting thoughts.
But, just as with hazing, cognitive dissonance theory predicts that the more people are subject to peer review, the more dependent they will feel upon it, and the more they will want newcomers to go through it, too.