Back to the Future: (Re)turning from Peer Review to Peer Engagement

by Rebecca Kennison

KEY POINTS

  • Scholarly communication — with the exception of traditional (e.g., blind and double-blind) peer review — prizes the open exchange of ideas.
  • The aim of peer review should be engagement, not judgment.
  • Reviews that improve the quality of a work and thus advance the field are not merely service to the community, but contributions to existing scholarship, and need to be rewarded accordingly; an open and transparent review process is the first step in enabling such reviews to be properly recognized.

INTRODUCTION

What is the real purpose of peer review? And is that purpose immutable?

These questions lie at the heart of the debates about peer review and its value in academic publishing. Who should be considered peers — and are they the ones actually doing the review? Is the purpose of that review to improve the work — or to evaluate and judge it? If the goal is to improve the work, why does the review need to be anonymous? Who, after all, needs to be protected in an exchange of ideas between equals? And should not that important peer-to-peer work be publicly acknowledged and explicitly rewarded? For that to happen, there must be a shift in the philosophy that underpins peer review and its attendant practices, one that replaces peer judgment with peer engagement as the primary value of these peer exchanges.

THE RISE AND FALL OF PEER REVIEW

Learned societies were created for the purpose of facilitating scholarly engagement with others who shared similar interests, and journals arose as a forum to allow wider discussion of ideas and discoveries, enabled by editors who were also members of those communities and who acted as moderators of the discussion. What we now commonly understand to be “peer review” as a standard practice in academic publishing — in which discussants have become referees — is comparatively recent. In sports, the role of a referee is not to act as part of the team working together to move the ball forward but simply to ensure everyone follows the rules of the game, whatever those may be, and to certify the results. In the academy, refereeing has much the same task, that of evaluation, assessment, and judgment.

Peer review, in this refereeing sense, became the norm during the rapid growth in higher education after World War II and the concurrent explosion in Cold War-driven research funding. There are good historical reasons for the turn to reliance on peers to handle the bulk of the review process. The rising number of manuscripts that resulted from the increased numbers of academics meant that individual editors could no longer keep up with the workload and often felt they themselves lacked sufficient expertise in emerging or niche fields to determine the validity or quality of the work submitted to them. The expense of print-based publishing necessarily limited the number of pages any given editor could produce in a given year, and editors increasingly turned to their academic colleagues to provide them with a justification they could use for either accepting or rejecting the flood of manuscripts submitted to the journal or the press. The perceived (or real) biases of individual editors could be mitigated by the (hopefully) impartial reviews from those who, like the authors, were also experts in the field.

It is the failure of that last laudable goal of peer review — that every manuscript submitted for publication receives an impartial assessment by several experts in the same field — that has raised the most criticism. Most reviews are performed anonymously, either “blind” or “double blind,” with the editors and editorial staff the only ones who know with certainty who the authors and the reviewers are. How good a match of expertise has the editor made? Why choose one reviewer and not another? What if numerous reviewers decline to review the manuscript? Are the ones who end up reviewing qualified to do so? (Is, for example, a grad student truly the peer of a tenured faculty member? Or vice versa?) What do reviewers have to gain from slowing down their competitors by asking for extensive revisions or by rejecting theories or ideas that are contrary to their own? And can any human being, no matter what his or her expertise, truly be impartial, especially in a highly competitive world such as academia, where skepticism and critique, considered hallmarks of the top-notch scholar, can easily become criticism? For better or for worse, with a single editor making the decision as to acceptance or rejection, as was most often the case before peer review took primacy of place, you knew the qualifications and the biases of that individual. Anonymous peer review removed that certainty. Now you were writing for … whom exactly? Richard Smith famously described anonymous peer review as “a court with an unidentified judge” that “makes us think immediately of totalitarian states and the world of Franz Kafka” (Smith, 1999). Almost everyone who has published a scholarly article or book has horror stories to tell about highly critical and sometimes brutal reviews from reviewers who seemed more keen on destroying the idea and devastating the author than on encouraging a new concept and its creator to grow and thrive.

Critique — and, yes, criticism — of the peer review process has been around for decades. But what could be done to change the system?

Then came the World Wide Web. Coincidentally (or not!), complaints about the “traditional” peer review process, especially about its gatekeeping function, rose along with the Internet, a system built on open standards designed to foster interconnected communication. Solutions developed in response to those criticisms — such as open peer review or post-publication peer review — are now easily enabled by online technology. Even more fundamentally, the shift from print to online publication has removed the once very real requirement that an editor concern herself with page counts. Why not simply accept every piece of scholarship deemed by the community as adhering to the standards of research in the field, as some publishers already do? Why continue to use a process that was developed to exclude rather than include and to employ peer review systems whose main function is a final judgment (accept or reject)?

To put the question another way, does traditional peer review even make sense in this online world? Or, to return to the question with which we began: What is the real purpose of peer review? Is it to foster collaboration on important questions and topics? Or is the purpose of peer review to limit the number of items published?

If the purpose of peer review is only to publish work that a small number of experts would have liked to have published themselves or, conversely, to exclude what otherwise might be quality work that is not in alignment with “received wisdom,” whether because of intellectual bias or because of lack of expertise, then the traditional closed system works well. If the purpose, however, is to encourage the best possible work by the best possible minds, then the process looks quite different: it becomes peer engagement rather than peer review.

PEER REVIEW VERSUS PEER ENGAGEMENT

We all value, at all stages of the research process, the input of those whose opinion we respect, whether we agree with them or not. Some of that input makes its way into a final published piece. Some does not. Some happens after a work has been published, when it is discussed by the community, whether formally (e.g., through book reviews and in subsequent work) or informally (e.g., in blogs, on social media, and at conferences). All of this interaction is peer engagement. With the exception of traditional peer review, which usually happens anonymously, this engagement takes place in an open forum — between colleagues in a department, among members of a lab, in Q&A sessions, over drinks at a conference, on the Internet. The qualms usually raised about open peer review or open commentary concerning the lack of anonymity do not seem to come into play in any of those interactions. Why is that? Perhaps because the purpose of those exchanges is to engage rather than to judge. And perhaps because those exchanges happen at a different stage of the process, namely, either when the scholar is still in the middle of working out and reworking his or her findings and arguments well in advance of submitting a piece for publication or when the scholar is discussing recently published work in order to launch new projects.

Open peer review has had mixed success, yes — in large part because it demands engagement. Reviewers who know that their names will be associated with their comments or who know that their reviews will be made publicly available tend to take more time and effort with their reviews. The higher decline rates among reviewers in an open peer review environment are likewise explained by potential reviewers’ perception of the extra effort it takes to produce a quality review, one the reviewer would be comfortable with anyone seeing (van Rooyen, Delamothe and Evans, 2010). But should not that already be the goal of any review? If the primary aim of the review is merely to include the work in or exclude it from a certain publishing venue, all you, as an editor, would need from a reviewer is a thumbs-up or a thumbs-down. Most reviewers are asked to provide much more than that — to provide feedback that, as one editor put it, is not only helpful to the author and the editor but is presented in a manner that is professional, pleasant, objective, timely, empathetic, realistic, and organized (Lucey, 2013). That takes thought. And time. And effort.

But are not thought, time, and effort what anyone wants and expects from a review of any kind? Should not that effort then be rewarded? Open review is a first step toward putting in place the same recognition and rewards now given for other kinds of publications and toward tracking the impact those reviews can have on advancing the field. The complications of changing the reward system notwithstanding — see, for example, the recent editorial by Lisa Adkins and Maryanne Dever bemoaning the challenges in Australia (Adkins and Dever, 2015) — what must come first are established practices of openness and transparency. Without them, no one can develop the level-playing-field metrics needed to persuade administrators to reward work beyond what is already easily trackable, that is, publications.

Following on logically from my arguments for a return to peer engagement and for a turn to open review, let me pose one final question. In the 21st-century publishing environment, why should such engagement happen only at the point when the author thinks he or she is finished with the piece, but before publication? Technology allows anyone anywhere to access, read, and comment on work in progress or work considered published — and permits the author to respond, rework, revise, and update at will as well. Concerns about version control, about the potential pressure on authors to respond to all comments, or about researchers never being able to move on to new research because of the need to constantly revisit and revise older work have meant that broad adoption of post-publication peer review has met with some resistance. Even so, some new publications, such as F1000Research and ScienceOpen, are devoted entirely to post-publication peer review. The Winnower collapses the distinction between pre-publication and post-publication entirely in an open peer-review system that allows any format of work to be reviewed at any stage, with the author being the final decision-maker as to when a work is finished; at that point, the author publishes the piece by assigning it a DOI, archiving that version on the site, and beginning to acquire metrics to show its impact. MLA Commons, the collaboration, publishing, and archiving platform hosted by the Modern Language Association, works similarly. This trend toward open collaboration, open publication, and open review indicates a willingness by many to engage openly with their community. The challenge of how best to do so comes not in the theory but in the practice.

Even for those who value openness and transparency, rewards and incentives matter. Because reviews and reviewers in the current system are anonymous, peer review is relegated to the category of “service” rather than recognized as a weightier contribution to the field. Open peer review, whenever that review happens in the process, enables a shift in how that activity is recognized and rewarded. Thoughtful, substantive reviews that are seen to strengthen or advance the field could be counted as publications in and of themselves, not merely as service to the community. Just as excellence in teaching is rewarded, so could excellence in writing reviews that improve ideas and enrich the field be rewarded, through an approach to peer review that is pedagogical and collegial rather than competitive. Engagement, rather than judgment, would make the process more welcoming. And who would not welcome that?

REFERENCES

Adkins, L., & Dever, M. (2015). Editorial: academic labour on-the-move. Australian Feminist Studies, 30(84), 105–108.

Lucey, B. (2013, September 27). Peer review: how to get it right – 10 tips [Web log post].

Smith, R. (1999). Opening up BMJ peer review: a beginning that should lead to complete transparency. British Medical Journal, 318(7175), 4–5.

van Rooyen, S., Delamothe, T., & Evans, S. J. W. (2010). Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial. British Medical Journal, 341, c5729.

This Opinion Piece (DOI: 10.1002/leap.1001) originally appeared in January 2016 as an Early View (the online version of record published before inclusion in an issue) in the journal Learned Publishing as part of a special issue on peer review. Subscribers to Learned Publishing can access the full issue on the Wiley Online site.