The People Have Spoken!

Last Friday, September 22, I participated in the 2nd Annual Postdoctoral Research Symposium held by the UW-Madison Postdoc Association. Following a keynote by Jo Handelsman, the former Associate Director for Science at the White House Office of Science and Technology Policy under President Obama and the current Director of the Wisconsin Institutes for Discovery, 17 postdoctoral researchers (myself included) took the stage in the Genetics-Biotechnology Auditorium at UW-Madison to share their ongoing research.

There were three judges evaluating and scoring each talk: Stephanie Carpenter, UWPA President and Postdoc in the Dept. of Psychology; Eric Hamilton, University Relations Specialist, Office of Vice Chancellor for University Relations; and Natasha Kassulke, Manager of Strategic Communications, Office of the Vice Chancellor for Research and Graduate Education. There were also more than 50 audience members in attendance who were able to score each talk (from 1 to 10) in real time using their smartphones.

I presented a piece of my dissertation research (published here) that examines how one type of discourse during grant peer review meetings, "score calibration talk," helps us understand why collaborative discussion improves agreement in the scores assigned by reviewers within a panel, but worsens agreement in the scores assigned by different panels. My talk received the highest average score from the audience members, and I was awarded the People's Choice Award by the UWPA! 

It is such a tremendous honor to be recognized by a group of my peers spanning so many different fields of science—from psychology to entomology to chemical engineering to forest ecology and beyond! Without a doubt, public speaking is one of my favorite professional activities, and putting together engaging, creative, and fun visual presentations to accompany those talks is something I truly love to do. 

You can see most of the talk (except for the first 5-10 seconds) and the visual presentation right here.

What is open peer review—and should I be doing it?

There are tradeoffs involved in participating in open peer review. Here, I highlight what the empirical evidence shows about those tradeoffs. 

We are officially entrenched in the era of open science, a movement designed to make all output of scientific research—including raw data, data analysis, and finalized manuscripts—freely available to all members of society. The term “open science” typically gets thrown around with abandon but without clarity. Often, people use the broad term to specifically refer to Open Access (OA) journals, such as PLOS One or Science Advances, which provide their articles free-of-charge to the general public.

However, free-of-charge to the public does not mean free-of-charge to scientists. For the vast majority of OA journals, the financial burden of publishing is shifted from the reader (and the institutional libraries that pay monumental subscription fees) to the scientist. Although the article processing charge typically comes out of funds from the grant that supports the research, it usually costs an author between $1,000 and $3,000 to publish an article in an OA journal. And while it’s true that many OA journals waive the fee for scientists from underdeveloped countries, there are many cases where the fee is unlikely to be waived but still presents an insurmountable obstacle—for example, for graduate students who don’t have carte blanche access to grant funds. Realistically, then, open access is only open to certain scientists.

As OA journals become more commonplace in various scientific fields, other elements of the open science movement have begun to proliferate, such as the use of open peer review (OPR). Note that OPR currently only refers to manuscript peer review, rather than grant peer review. Few people have entertained the idea of opening up the grant peer review process (outside of grants administered by philanthropic organizations), which is one of the reasons that grant peer review is so challenging to study.

OPR can take many, many forms, as Tony Ross-Hellauer reports in his systematic review of OPR. Ross-Hellauer identifies the following possible elements of Open Peer Review:

  • Open Identities: Also known as “unblinded” peer review, this is when the reviewers sign their names, so that the authors know who their reviewers are. This is essentially the broadest definition of OPR. 

  • Open Reports: This is when the reviewers’ critiques (or summaries of them) are posted publicly along with the article, so that readers can see the critiques for themselves.

  • Open Participation: Also known as “crowdsourced” or “public” peer review, this is when readers from the broader community can contribute their own reviews. There’s a huge degree of variety in how this is undertaken, but basically, it aims to increase the number of reviewers for a given manuscript and to expand the pool of reviewers beyond the typical group of well-established scientists.  

  • Open Interaction: This is when reviewers and authors directly interact with one another, as opposed to having all interactions mediated by the journal editor.

  • Open Pre-Review: This is when the draft of a manuscript submitted to a journal is immediately made publicly available as a “pre-print.” Such pre-prints can often invite comments from public reviewers that can then be integrated into future iterations as the manuscript winds its way through the sometimes interminable peer review process.

  • Open Final Version: This is when comments are opened up after the final version of an article has been published—sort of like a more regulated version of the perilous comments section of an online news article. This is similar to Open Participation, except that it only occurs after the final manuscript has been accepted for publication (and the authors have paid the money to publish it).

  • Open Platforms: This is when a platform separate from the journal itself serves as the host for an online article. The platform conducts its own peer review process and then provides a review report. For some platforms, participating journals can access these reports and use them to decide to solicit the authors to publish the manuscript. For others, the platform automatically forwards the report to the author’s (participating) journal of choice.

Clearly, open peer review can mean a lot of different things, but almost all of these approaches have one thing in common: the authors and reviewers are known to each other. As the list above alludes to, there are a lot of benefits for authors and for the scientific community at large when the reviewers’ identities are known, including:

  • Enhanced accountability for reviewers’ opinions and words

  • Increased transparency of the peer review process

  • More constructive and courteous feedback from reviewers who know they will be identified

  • Reduced likelihood of plagiarism, fabricated results, or scientific misconduct

  • Open reviews can serve as instructive models for graduate students and early career scholars who are new to reviewing manuscripts

  • Open participation can diversify the pool of potential reviewers, thereby potentially reducing bias in the review process and amplifying the voice of underrepresented scholars

  • Open pre-review may shorten the time it takes between submission and publication

  • If more reviewers participate, it may increase the reliability of the aggregated judgment made about a manuscript
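The last bullet point is, at bottom, a statistical claim: the average of many independent scores is less noisy than the average of a few. A toy simulation (purely illustrative; the score distribution and numbers are made up, and `panel_mean` is my own invented name) sketches the idea:

```python
import random
import statistics

random.seed(42)

def panel_mean(n_reviewers):
    # Each hypothetical reviewer scores a manuscript on a noisy 1-10-ish scale.
    return statistics.mean(random.gauss(6, 2) for _ in range(n_reviewers))

# The spread of the averaged score across many simulated panels shrinks
# roughly as 1/sqrt(n_reviewers):
for n in (2, 5, 10):
    means = [panel_mean(n) for _ in range(2000)]
    print(f"{n:>2} reviewers: sd of panel mean = {statistics.stdev(means):.2f}")
```

With two reviewers per panel, the panel averages bounce around considerably; with ten, they cluster much more tightly, which is what “more reliable aggregated judgment” means in practice.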

There are also some drawbacks for the broader scientific community, chief among them the professional risks to individual reviewers discussed below.

So it seems that the potential benefits may outweigh the drawbacks for the larger scientific community, but what about for the reviewers themselves? In other words, should you serve as a reviewer for OPR?

The main concern of OPR for reviewers tends to be that they aren’t protected from backlashes from the authors or from others who disagree with their opinion. For many reviewers, remaining anonymous ensures that they can be critical and discerning without fear of starting an academic (or political) rivalry that can negatively impact their careers. It’s hard to quantify or empirically measure the degree to which OPR reviewers experience such negative repercussions, and whether they outweigh the benefits to reviewers.

One piece of empirical evidence that has emerged, though, is that it may actually take reviewers more time to complete their reviews when their identity is known. So if it requires more time and effort, with a potentially greater professional risk, what’s the motivation to participate in OPR as a reviewer?

This conundrum has led to another step in the open science movement that is gaining steam: trying to incentivize reviewers to participate in OPR, given the immense time investment required to review a manuscript and the increasing shortage of willing and able reviewers. Companies such as Publons aim to create public and verified profiles for reviewers so that their efforts are recognized and rewarded beyond a small line at the end of their CV.

If there is public verification of one’s contributions, this might be a good incentive to engage in OPR, right? Who wouldn’t be enticed by public recognition for an otherwise unremunerated and sometimes thankless job?

Although most academics consider their CV to be precious if not invaluable, a public reviewer profile falls short of financially compensating reviewers for their time. Plus, in the rat race that is the tenure track, it’s unlikely that the academic equivalent of a participation ribbon is going to motivate already overburdened academics to donate more of their time than they currently do, given that serving as a reviewer bears little weight in the tenure process. 

Given the novelty of OPR and its slow but increasing adoption in science, it remains to be seen whether the risks to reviewers’ professional identities and time invested are borne out. It also isn’t clear to what extent having proof of one’s reviewing will serve as an effective professional cachet. Until there’s more data on how OPR affects not just authors but also reviewers, I think scientists ought to be wary of donating their time and resources to an uncertain process. On the other hand, we can’t obtain more data on the effects of open peer review if we don’t have willing participants.

And therein lies the paradox of OPR: We won’t know if it works until more of us try. So for the good of the future of science, perhaps we need to be willing to participate in an experiment of our own collective making.

5 Easy Punctuation Rules for Academics (or Anyone!) to Instantly Improve Your Writing

Academic writing is notorious for being dense, convoluted, and verbose. That sentence, on the other hand, features one of the easiest ways to make your writing more clear—the Oxford comma.

Below, I list five easy punctuation rules to help make your writing as clear and easy-to-follow as possible. Each of these rules is easy to understand, and you can implement them with no exceptions each and every time you write.

(1) Always use the Oxford comma.

The “Oxford comma” is the comma that comes before the final item in a list of three or more things. For example, in the first sentence of this article, the comma before “and verbose” is an Oxford comma.

A comma signifies a slight pause to a reader. When we read the first sentence of this blog aloud, we naturally pause after each comma. The prosody, or pitch, of our voices falls at the end of each item in a list, and the voices in our heads do the same while we read silently. Omitting the Oxford comma often makes your reader double back to reread the sentence, since the missing comma fails to signal the pause that tells the reader the final item is part of the list.

Although there is some very heated debate surrounding whether and when to use the Oxford comma, it’s one of the easiest ways to ensure you avoid ambiguity or confusion in your writing. Plus, a recent $10 million labor lawsuit predicated on the interpretation of a contract that lacks an Oxford comma serves as a cautionary tale against lazy comma usage.

(2) Use commas to separate independent clauses.

Clear sentences typically convey one main idea (also called an “independent clause”). But reading sentence after sentence with just one independent clause can get more monotonous than a marathon reading of See Spot Run. It’s often necessary to have multiple independent clauses in a single sentence, because ideas in academic writing are often complicated. It’s also necessary because we will bore our readers to death not just with our ideas, but also with our writing style.

The easiest way to tell whether you need a comma is to check whether you’re introducing a new subject-verb pair. If you have one subject that is doing two different things, you don’t need a comma:

Donald turned on his Android phone and launched his Twitter app.

Here, the completely fictitious “Donald” character is doing two things: turning on his phone and launching Twitter. No comma needed.

If you have a new subject with that second verb, though, you need a comma first:

Donald extended his diminutive hand out towards his wife, and she swatted it away.

Here, you need a comma before “and she swatted,” because this is a new independent clause. Without the comma, the sentence starts to get more convoluted and difficult to follow.

(3) Use commas with “which” but not with “that.”

Are you sensing a pattern yet? This is the last comma rule, I promise. When you use the relative pronoun “which,” it should introduce a non-essential clause. This means you need a comma, as in the example below:

Paul lived in a garish house in Wisconsin, which has the best cheese in the United States.

Here, the fact that the state of Wisconsin has the best cheese in the U.S. is not essential to understanding the sentence about the totally made-up character “Paul.” The second part of the sentence is just added detail, so I used the word “which,” with a comma.

On the other hand, when you use the relative pronoun “that,” it should introduce an essential clause, meaning you wouldn’t correctly understand the sentence without it, as in the example below:

Paul proposed a health care bill that actually was a tax cut for the rich in a not-so-convincing disguise.

Here, the “health care bill” could be any old health care bill. It’s not clear which (hypothetical, imaginary) health care bill I’m talking about. It’s only after the word “that” that it becomes clear what I’m referring to, so I used the word “that” (not “which”) with no comma.

(4) Correctly use em-dashes.

We are free from the world of commas, at last. But, you ask, what is an em-dash? An em-dash is the long dash that you use when you want to insert something into a sentence. It often serves the same purpose as two commas—sorry, you’re never actually free from comma world!—or a pair of parentheses. And voilà: there is your example of em-dashes!

The biggest mistakes people make when using em-dashes are:

  1. Putting spaces before or after an em-dash. There is no space either before or after any em-dash.

  2. Using an en-dash or a bare hyphen instead of the em-dash. An em-dash (—) is not a hyphen; it’s longer. An en-dash (–) is shorter than an em-dash, and a plain hyphen (-) is shorter still; hyphens connect individual words (e.g., in the word “em-dash”). Using a single hyphen instead of an em-dash is confusing, since hyphens join words rather than separate clauses and ideas.

  3. Using two hyphens instead of an em-dash. In many word processing applications, you can type two hyphens (--) and it will autocorrect to an em-dash. If it doesn’t autocorrect, though, you need to manually type one in. On a Mac, this is as easy as typing Option + Shift + - (i.e., the minus key). On a PC, you can type Alt + 0151 (yes, type out the numbers), and it will spit out an em-dash for you.

In sum, every time you use an em-dash, make sure it’s a single long line—without spaces before or after it. This will signify to your reader that you’re inserting something into the sentence.
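For the programmatically inclined, the three marks are distinct characters with distinct Unicode code points, and the “--” autocorrect is just a text substitution. A minimal sketch in Python (the function name `fix_dashes` is my own invention, not any standard tool):

```python
HYPHEN = "\u002d"   # -  hyphen-minus: joins words, as in "em-dash"
EN_DASH = "\u2013"  # –  en-dash: used for ranges, e.g., pp. 10–20
EM_DASH = "\u2014"  # —  em-dash: sets off inserted clauses, no surrounding spaces

def fix_dashes(text):
    # Mimic the word-processor autocorrect that turns "--" into an em-dash.
    return text.replace("--", EM_DASH)

print(fix_dashes("Commas--sorry--are everywhere."))  # → Commas—sorry—are everywhere.
```

If your editor doesn’t autocorrect, knowing the code points also lets you insert the character directly in most tools.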

(5) Use semicolons to separate items in long lists.

In academic writing (and in lots of other writing), we need lists whose items themselves have multiple parts. A simple Oxford comma won’t do. When each item in a list requires one or more commas of its own, semicolons are needed to separate the individual items. Here’s an example:

Some of the most senior administrators on the team included Mike, from Indiana; Reince, from Wisconsin; Steve, from Virginia; Rex, from Texas; and Jeff, from Alabama.

Here, the semicolons serve to separate the individual people (and the states they are from) who serve as senior administrators on this mythical, make-believe team.

And there you have it—five easy ways that punctuation can make your writing clearer, easier to read, and more user-friendly. As a handy reminder, below is a little summary for you.

  • Add a comma before the final item in every list (the Oxford comma)

  • Add a comma when you introduce a new subject and verb

  • Add a comma before “which” but not before “that”

  • Take out any spaces before or after dashes, and use one long line for the dash

  • Use semicolons to separate items in a list when you use commas as part of the items

Is peer review like a professional form of hazing?

How the social psychological theory of cognitive dissonance can help us understand the peer review process

One of the most fundamental tenets of science is the process of peer review. Whether submitting a proposal to fund research, an article to be published in a journal, or an application to be hired as a faculty member, scientists rely on their peers to evaluate the merit of their ideas in order to be accepted by the scientific community as sound.

But there is a growing body of research suggesting that peer review is problematic. As I’ve written about before, peer review is subject to cognitive biases that disadvantage scientists outside the majority, including women, people of color, clinical researchers, and scientists at smaller institutions. It’s also a process that has been shown repeatedly (see here, here, here, and here, for a few examples) to be unreliable if not random.

So in the face of evidence that peer review is flawed, why do so many scientists (for example, here, here, and here) defend peer review as a highly effective system? The answer may lie, unironically, in science: One of the most elemental and enduring psychological phenomena, cognitive dissonance, may provide us with some answers.

Cognitive dissonance is a theory developed by social psychologist Leon Festinger in the 1950s. It is based on the principle that humans experience psychological stress or discomfort when they possess two conflicting ideas at the same time. Festinger showed how this works in many different experimental studies, such as one classic study where the experimenters asked participants to spend an hour doing a really tedious task, and then afterwards asked them to persuade another participant (who was really a member of the research team) that the tasks were exciting and engaging. Some participants were offered $1 to persuade this other person, while others were offered $20 (and a control group was not asked to interact with the other participant).

The participants who were paid $1 rated the tasks as significantly more enjoyable than the participants who were paid $20. Why? Because if they had to convince someone else that a really boring task was actually fun, then they needed a pretty good reason for lying (since most people inherently believe they are moral and ethical). One good reason for lying would be that they were paid a hefty amount to lie (hefty for the 1950s, anyway). But in the absence of a financial incentive, why would a moral person lie? It must be because the task wasn’t actually that bad, and maybe it could even be considered kind of fun. That’s the phenomenon of cognitive dissonance at work: two conflicting ideas (“I’m a good person” and “I just lied to this other person”) are resolved by changing one of them (“I didn’t really lie, because it wasn’t so boring, after all”).

A powerful example of cognitive dissonance in the real world is the phenomenon of hazing. There’s evidence showing that the more intensely a person experiences hazing into a group, the more dependent that person feels on the group; relatedly, the more severe the hazing, the more a person wants newcomers to experience the same hazing.

Peer review is certainly not as detrimental, destructive, and abusive a process as hazing. But there are some parallels we can draw. Members of a fraternity or a sports team likely don’t inherently want to be humiliated, deprived of food, or physically assaulted, so if such things do occur as part of a hazing ritual, cognitive dissonance theory predicts they will rationalize such behaviors, perhaps by thinking the hazing wasn’t actually as bad as they thought it would be.

Members of the scientific community don’t inherently want to believe that the process that is responsible for their career advancement and professional survival is flawed and random, so when presented with evidence that it might be, cognitive dissonance theory predicts they will rationalize such evidence, perhaps by thinking the peer review process isn’t actually as flawed as the evidence suggests.

I’m not suggesting that cognitive dissonance is the only factor at play here. Without a doubt, there are a multitude of reasons why scientists would want to protect and preserve peer review. There are issues related to identity, group membership, federal policy, feasibility, and many others that would influence scientists’ beliefs that peer review is sound.

Instead, I’m suggesting that cognitive dissonance can help us understand why scientists might be not simply unwilling, but unable to face the growing evidence that peer review is flawed. When someone has participated in—or been subject to—a process that might be biased in their favor or even random, this contradicts their belief that they are deserving of that grant, that published manuscript, that job. Enter cognitive dissonance, to help rectify those conflicting thoughts.

But, just like with hazing, cognitive dissonance theory predicts that the more people are subject to peer review, the more dependent they will feel upon it and the more they will want newcomers to go through it, too.

Peer review doors, peer review tables, and peer review houses

Should scientists who are at the mercy of the peer review process feel wary of criticizing it? Should those of us in peer review houses avoid throwing scientific stones? 

Over the last two years, my research has focused on grant peer review, asking questions like: How do scientists decide which research should be funded? How often do different reviewers agree with one another about which research should be funded? If we assign the same research proposals to different groups of scientists, do those groups reach the same conclusions?

For a process that is as important to the entire field of science as peer review is, there's still not a whole lot of empirical evidence that peer review actually "works." There's a lot of evidence, actually, from peer review of manuscripts submitted to journals (for example, see here, here, and here) that different people reading the same manuscript come to very different conclusions about whether it should be published. 

Lots of people think this result can be attributed (in part) to biases against particular authors (for example, women and racial/ethnic minorities) or to psychological habits of mind (for example, confirmation bias or the halo effect). As a result, many have suggested we simply keep authors' identities secret, so that only the manuscript itself is judged on its merit. Unfortunately, though, masking authors' identities doesn't seem to make a substantial difference in improving the reliability or fairness of journal peer review. 

But what about grant peer review? We see the same challenges for women and underrepresented minorities in obtaining grant funding that they experience in getting published. As a result, the National Institutes of Health (NIH) has begun conducting its own study to see what happens when grant applicants' identities are masked, although, as we saw, the evidence from blinded journal peer review doesn't bode well.

So what does this mean for the future of grant peer review? Are we doomed to rely on a process firmly entrenched in the scientific zeitgeist, yet deeply biased and perhaps even inherently flawed? 

We must be careful not to cast peer review out as a policy simply because we have found problems with its results. Instead, we need to examine more closely how the processes of peer review shape its outcomes, by studying what happens behind closed peer review meeting doors and by determining effective ways to increase the diversity of the people who get to sit at peer review tables.

As scientists, it's uncomfortable yet crucial to place ourselves under the proverbial microscope. We are beholden to the peer review process for career success and advancement, so objectivity and bravery in evaluating peer review as a policy are difficult but imperative. "Publish or perish" is still the mantra of the tenured and the tenure-seeking (even as tenure lines continue to vanish). Our research is only as valued as the outlets in which it is disseminated.

And yet, it is perfectly plausible to empirically scrutinize a policy, while still firmly believing in its historical, cultural, and scientific value. In other words, those in peer review houses should, in fact, throw stones. We just need to start devising a blueprint for what happens if we come crashing through the floor.  

Dissertation research discussed in Science magazine

Just a few weeks after defending my dissertation, my dissertation research was featured in an article written by Jeffrey Mervis in Science. The article focuses on a current research project underway at the National Institutes of Health (NIH) that is evaluating whether masking the identities of applicants submitting their grants to NIH helps reduce inequitable outcomes for female scientists and scientists of color. The article quotes me and my two mentors, Dr. Molly Carnes (Director, Center for Women's Health Research) and Dr. Anna Kaatz (Director of Computational Sciences, Center for Women's Health Research).