Tag Archives: Peer review

Reviewer motivation: observations from Rubriq


A recent post by Elsevier on a study by Chetty et al. (http://www.elsevier.com/reviewers/reviewers-update/how-small-changes-can-influence-reviewer-behavior) and a follow-up post on the Scholarly Kitchen (http://scholarlykitchen.sspnet.org/2014/05/28/what-motivates-reviewers-an-experiment-in-economics/) addressed an issue core to the Rubriq team: motivating reviewers. The study confirmed what we already knew about the effects of shorter deadlines and of paying for reviewers’ time. However, the social component it discussed could have been an experiment of its own. Instead of relying on email reminders alone, a peer-critique element could be introduced, depending on the system used. We hope to pursue that idea further with our network of Rubriq reviewers, but as a supplement to payments, not a replacement.

It’s always interesting to see the comments that come out of discussions about paying reviewers. As some of the comments on the Scholarly Kitchen article suggest, the act of providing a $100 honorarium/stipend is quickly extrapolated into a commercialized market steeped in corruption. However, the assumption that money = evil conflicts with many other discussions we’ve seen about how reviewers are expected to work for free, receiving no reward for their time in a monetary model that favors the journal/publisher. There is a big difference between providing a moderate payment as an incentive to meet a deadline on a single review and paying a full-time salary with large bonus incentives.

When we first launched Rubriq, we worried that we would face community criticism over our policy of paying reviewers. However, we have received very little negative feedback from the researchers we have contacted. Fewer than 10% of our reviewers have either opted out of receiving payment or are ineligible due to residence or employment restrictions; the rest are happy to accept a monetary reward for their time and effort.

We recently surveyed our reviewers to find out more about their motivations. The leading reason was to receive financial compensation for their work and time. But this 33.2% was a narrow victory over intrinsic motivations such as gaining more peer review experience (32.5%), supporting the mission behind Rubriq (15.1%), helping to create standards for their field (12.8%), and even getting to read new papers in their field (3.2%). As with most decisions, we assume that a researcher decides to review an article based on multiple factors, not just a single motivator. In our survey we asked reviewers to select just one reason as their primary motivation, but in future research we may change to a ranked sort or a percent distribution. Because the decision to review has many potential sources of influence, we feel it is unlikely that these same researchers would accept a shift to a completely commercialized review market.

[Chart: primary motivations reported by Rubriq reviewers]

Payment could also help compensate for other elements that might be lacking. Papers that are submitted to a journal but appear low on the novelty/interest scale can languish there for months waiting to attract reviewer attention. Outsourcing peer review of these “challenging” papers to a service like Rubriq would make journals more efficient without requiring across-the-board reviewer payments: journals could still find willing reviewers for the sexier papers at no cost, and pay for the benefit of a faster, more efficient process for the rest. For reviewers, knowing that they are being compensated for their time can help ease the frustration of reviewing some of these more challenging papers.

Despite the positive reception of our policy to pay reviewers, we are still exploring other types of reward options. Recognition is one aspect we are pursuing, and we hope to work with the CASRAI/ORCID project to integrate reviews done through Rubriq into a researcher’s record. In addition to recognition programs, we asked our reviewers about alternate types of payment. Although our current direct-payment option is the most preferred, reviewers have also expressed interest in models in which an equivalent donation is made to a charity of their choice, or in which the funds are deposited into a central account for use by the researcher’s lab, department, or institution.

Quality of review is another compelling topic. We were happy to see our own finding that quality was not adversely affected by speed upheld by the Chetty et al. study. At Rubriq we have even seen an increase in review quality within our system, which we attribute both to our standardized scorecard structure and to our in-house quality check of all reviews. The scorecard provides reviewers with a tested instrument that guides them through a comprehensive assessment of the paper, and the standardized format enables reviewers to become more efficient over time while maintaining consistent review quality. Our internal QA check assesses each review for its ability to highlight the strengths and weaknesses of the manuscript. In addition to ensuring that the customer receives a high-quality assessment, this adds a built-in control to screen out any reviewers who are simply filling in the blanks to get a check. If a reviewer “phones it in,” he or she is given the chance to amend the scorecard with a more detailed, informative assessment, and is replaced on that manuscript if he or she chooses not to do so.

Some have expressed concern that compensating reviewers for their time will result in the loss of objectivity and honesty, but this fear is based on a very shortsighted perspective. Researchers and publishers do not come to us for glowing reviews; they come to us for honest assessments. Accordingly, we happily pay reviewers for comprehensive, thoughtful reviews regardless of the reviewer’s overall opinion of the work. Our future hinges on our ability to consistently provide this honest feedback to the research community, and because we are not a publisher, there is no financial incentive to bias our reviews or reviewer selection towards positive opinions.

Another concern about compensating reviewers is the associated cost to the system. We have found that 15 million hours of researchers’ time are lost each year to redundant reviews (and under the current system that number will grow). Yet the cost of having this much time diverted from researchers’ primary mission seems to be willfully ignored in discussions of compensating reviewers. “Free” peer review isn’t free; it is simply paid for in a different currency. This is one of the primary motivations behind Rubriq: if a standardized, in-depth review is performed once and is transferable from journal to journal, then the associated costs are transparent and are paid only once. In addition, this system is not simply pulling money out of the research community; it is also putting compensation back into the hands of the researchers in the community who perform the evaluations.

My favorite quotable from the Elsevier post was “… as editors, we shouldn’t believe that the performance of our journals is something we can’t change. We can greatly improve the quality of our journals’ review process through simple policy changes and active editorial management.”

I hope more journal editors adopt this attitude because it could be one of the most impactful and meaningful shifts for the industry. When journals treat all authors as valuable customers and act accordingly to improve services, I think everyone wins. Overall these articles offer a nice validation of our efforts at Rubriq, confirming that we can improve turnaround speed without sacrificing quality.  We welcome your comments or questions.


Lisa Pautler, Rubriq |  lisa.pautler@rubriq.com


Filed under How Things Work, Reading & Reference

Rubriq Presentation from SSP Annual Meeting 2013

Missed us at SSP? Want to know the latest things we’re cooking up at Rubriq? In this video Keith Collier re-presents all of his slides from the SSP session (Concurrent 4E: The Future of Peer Review: Game Changers). This presentation gives a detailed (~20 min) overview of our independent peer review service, as well as a preview of one of our new free author tools. It was designed for the SSP audience, which is primarily journals, editors, and publishers.

From the SSP 2013 Annual Meeting – http://www.sspnet.org/events/past-events/2013-annual-meeting/schedule/


June 19, 2013 · 7:51 pm

How we found 15 million hours of lost time

Lost time in the current peer review process

Rubriq wants to recover lost hours from redundant reviews so they can be put back into research. In the current journal submission process, rejection is common, yet reviews are rarely shared from one journal to the next.  Even when reviews are passed along, the lack of an industry-wide standard means that each journal in the chain solicits its own reviews before making a decision. All of this leads to reviewers repeating work that has already been done on the same manuscript by other colleagues.  We estimate that over 15 million hours are spent on redundant or unnecessary reviews – every year. 

Here’s a video that helps illustrate the key issues:

(once it starts, you can click “HD” in the right-hand corner to view it at the highest resolution)

So how did we get to that number of 15 million hours each year?

The two key metrics for finding wasted time are quantity (how many manuscripts are reviewed and then rejected?) and time (how long does each submission take to be reviewed?). While there are 28,000 peer-reviewed journals, we only use 12,000 in our calculations, since that is roughly the number of high-quality journals included in Thomson Reuters’ Web of Science. The figure below shows how we calculated both quantity and time; descriptions and citations for the key steps in the process follow:

[Figure: Rubriq calculation of time lost to peer review]

 

Calculation & Source Details:

1.  3,360,207 (English-language, STM) submissions per year

  • Although the Ware & Mabe STM Report1 showed that there are over 28,000 peer-reviewed journals, we focused our scope on just the 12,000 English-language STM journals identified in that same report, as they are the current focus for Rubriq.
  • The average number of submissions per journal in the Thomson Reuters data2 is 280 (total ScholarOne submissions divided by the count of ScholarOne journal sites). Multiplying that average (before rounding) by the 12,000 journals gives 3,360,207 submissions per year.
  • Note that this is submission-based data, not paper-based. A single manuscript that was rejected by one journal but then accepted by another within the same year would go through two review cycles and is therefore counted as two separate submissions.

2.  1,344,099 (40%) accepted submissions per year

  • The Thomson Reuters data2 reports a 37% acceptance rate based on all submissions received and accepted within their system, while the Ware PRC report3 estimated an average of 50%.
  • We feel the Thomson Reuters data is more accurate than the PRC data based on how the information was collected and how the calculations were made. Combined with our own internal data and personal interviews with some of the largest STM publishers, we selected 40% as the best representation for this group of journals. 40% of our total submission number equals 1,344,099 accepted papers.

3.  705,652 (21%) submissions per year rejected WITHOUT Review

  • The Ware PRC report3 stated 21% as its estimate for submissions that are rejected without going through peer review, also known as a “desk rejection”.
  • Although there is lost time and an opportunity cost to the author when this occurs and they have to try again with another journal, we are currently focused only on time spent on peer review, so we do not factor this group into our calculation of wasted time.

4.  1,310,496 (39%) submissions per year rejected WITH Review

  • The number of submissions that are sent to peer review but are then rejected is our key starting metric for calculating lost hours (why? See the References/Links section below for some background material). We use the two preceding calculations to find this number.
  • If 21% were rejected without review and 40% were accepted, then the remaining 39% were rejected after the peer review process. Applying 39% to our total gives us 1,310,496.

5.  11.5 average reviewer hours spent per submission

  • Data from Ware’s 2011 review of peer review4 provided us with a median of five hours spent per review.
  • The Ware PRC report3 states that an average of 2.3 reviewers is used for each submission.
  • Five hours * 2.3 reviewers equals 11.5 average review hours per submission.
  • Note that this number only accounts for the time spent per submission by reviewers; it does not include time spent by the journal or publisher coordinating the review process (e.g., recruiting reviewers, editorial checks of reviews, review software costs) or other time spent processing these papers (e.g., screening, editorial review, technical checks, other operational time).

6. 15,070,706 hours per year spent on redundant reviews

  • 11.5 hours per submission * 1,310,496 submissions that were reviewed but then rejected = over 15 million hours. Every year.
  • Since there are only 8,760 hours in a year, you can also think of it as 1,720 years (if it were all one reviewer working 24 hours per day).
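
Curious to check the arithmetic yourself? Below is a minimal Python sketch of the same back-of-the-envelope calculation. It uses the rounded inputs quoted in the steps above, so its totals land slightly below the unrounded figures we used, but it still comes out at roughly 15 million hours.

```python
# Back-of-the-envelope reproduction of the lost-hours estimate.
# Inputs are the rounded figures cited in steps 1-5 above, so the
# results differ slightly from the unrounded numbers in our figure.

JOURNALS = 12_000          # English-language STM journals (Ware & Mabe)
SUBS_PER_JOURNAL = 280     # average submissions per journal (Thomson Reuters)
ACCEPT_RATE = 0.40         # accepted after review (step 2)
DESK_REJECT_RATE = 0.21    # rejected without review (step 3)
HOURS_PER_REVIEW = 5       # median reviewer hours per review (step 5)
REVIEWERS_PER_SUB = 2.3    # average reviewers per submission (step 5)

total_submissions = JOURNALS * SUBS_PER_JOURNAL                   # ~3.36 million
rejected_after_review = total_submissions * (1 - ACCEPT_RATE - DESK_REJECT_RATE)
hours_per_submission = HOURS_PER_REVIEW * REVIEWERS_PER_SUB       # 11.5 hours
lost_hours = rejected_after_review * hours_per_submission

print(f"Submissions per year:      {total_submissions:,.0f}")
print(f"Rejected after review:     {rejected_after_review:,.0f}")
print(f"Lost reviewer hours/year:  {lost_hours:,.0f}")
print(f"Equivalent reviewer-years: {lost_hours / 8760:,.0f}")
```

With these rounded inputs the script reports roughly 1.31 million submissions rejected after review and just over 15 million lost reviewer hours per year, or about 1,720 reviewer-years, matching the figures above to within rounding.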

 

References/Links:

1. M. Ware, M. Mabe, The STM Report: An overview of scientific and scholarly journal publishing (International Association of Scientific, Technical, and Medical Publishers, Oxford, UK, 2012; http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf).

2. Thomson Reuters, Global Publishing: Changes in submission trends and the impact on scholarly publishers (April 2012; http://scholarone.com/about/industry_insights/).

3. M. Ware, Peer review: benefits, perceptions, and alternatives (Publishing Research Consortium, London, UK, 2008; http://www.publishingresearch.net/documents/PRCsummary4Warefinal.pdf).

4. M. Ware, Peer review: recent experience and future directions, New Review of Information Networking 16(1), 23-53 (2011); http://dx.doi.org/10.1080/13614576.2011.566812.

 

Have other questions? Found a better number with your own calculations? Feel free to add your comments here on our blog!

 


Filed under Uncategorized