There has been a lot of interest in Rubriq over the last few weeks, which of course is great. But I’m finding that many people are getting hung up on our plan to pay reviewers and missing the larger and more important elements of Rubriq.
Yes, it’s true we are experimenting with paying reviewers and we believe direct compensation could have an important and positive effect on reviewer turnaround time and quality of reviews. But that is only one element of reviewer compensation and we plan to develop better training and rewards for reviewers over time.
The core of Rubriq isn’t direct compensation of reviewers, but the development of the Rubriq scorecard, which allows reviewers to rate a paper’s Quality of Research, Quality of Presentation, Novelty, and Interest. The scorecard is at the heart of our model, and we hope to present results from our validation tests at the Peer Review Congress this fall.
It’s the scorecard that provides a new way of thinking about peer review. We don’t make recommendations on accept/reject decisions, nor do we claim to provide a “valid science” stamp. Each journal will have to map what it believes is valid science to our R-Score; those are decisions that editors can make. But the scorecard ratings and comments provide a number of advantages:
- It’s a way to stratify and organize papers in a mega-OA journal, allowing for better initial filtering by readers after publication
- It’s a better way to kick off and organize post-publication peer review, which can now build on the ratings and comments collected during pre-publication peer review
- It’s standardized and thus portable, so individual journal editors can map the standard scorecard to their own peer review process.
- It provides a mechanism for directing papers to journals based on the quality and importance of the research. The current tools for finding journals are based on keyword/semantic matching, which I believe exacerbates the journal-loops problem. Without information about the quality and novelty of the paper, suggesting journals (other than the mega-OA journals) probably causes more problems than it solves.
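To make the mapping idea concrete, here is a minimal sketch of how a journal editor might map the standard scorecard to a local acceptance rule. The four dimension names come from the post; the 1–5 scale, the simple-average R-Score, and the threshold rule are all illustrative assumptions, not Rubriq’s actual formula.

```python
from dataclasses import dataclass

@dataclass
class Scorecard:
    # The four Rubriq scorecard dimensions (scale of 1-5 is an assumption).
    quality_of_research: float
    quality_of_presentation: float
    novelty: float
    interest: float

    def r_score(self) -> float:
        # Hypothetical aggregate: an unweighted average of the four ratings.
        return (self.quality_of_research + self.quality_of_presentation
                + self.novelty + self.interest) / 4


def journal_accepts(card: Scorecard, min_research: float, min_r_score: float) -> bool:
    # Each journal maps the standard scorecard to its own criteria;
    # this two-threshold rule is one made-up example of such a mapping.
    return card.quality_of_research >= min_research and card.r_score() >= min_r_score


card = Scorecard(quality_of_research=4.5, quality_of_presentation=3.5,
                 novelty=4.0, interest=4.0)
print(journal_accepts(card, min_research=4.0, min_r_score=3.5))  # True under these thresholds
```

Because the scorecard is standardized, two journals can apply very different thresholds to the same review without the paper being re-reviewed.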
I recently met Nikolaus Kriegeskorte, a neuroscientist from Cambridge. We were both on a debate panel at the Annual AAP/PSP Meeting, where we were arguing opposite sides of the question of whether post-publication peer review would be better than pre-publication peer review. I recommend Nikolaus’ paper on post-publication peer review, published in Frontiers in Computational Neuroscience.
What struck me, both during the debate and in reading his paper, was the structure he proposed for organizing post-publication peer review. He proposed the following areas: Justification of Claims, Importance, and Originality. He also proposed using scales and error bars to show the ratings and number of raters over time. It’s very similar to our approach at Rubriq and reinforced our ideas about pre-publication peer review “kicking off” post-publication peer review.
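The scales-with-error-bars display can be sketched in a few lines: as ratings accumulate, the mean estimate stabilizes and the error bar shrinks. This is a minimal illustration of that statistical behavior, not code from either Rubriq or Kriegeskorte’s proposal.

```python
import math

def rating_summary(ratings):
    """Return (mean, standard error of the mean) for a list of ratings.

    The standard error shrinks roughly as 1/sqrt(n), which is what an
    error bar narrowing over time conveys as more raters weigh in.
    """
    n = len(ratings)
    mean = sum(ratings) / n
    if n < 2:
        # A single rating gives no information about spread.
        return mean, float("inf")
    variance = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    return mean, math.sqrt(variance / n)


# Two raters disagree; the wide error bar flags the uncertainty.
print(rating_summary([3.0, 5.0]))  # (4.0, 1.0)
```

Pre-publication ratings could seed such a display, with post-publication raters added to the same running summary.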
We have just completed a technical validation test of the scorecard, and we are also wrapping up our beta tests with publishers in March. We look forward to presenting our findings to the community.