Beyond Biology: Rubriq scorecards for new fields

Some of the first questions we heard after launching Rubriq were "but what about [my area of study]?" We designed the original Rubriq scorecard (see a sample here) specifically for the biological and medical sciences, but hoped to eventually expand into other areas. A new partnership project with our sister company AJE prompted us to move up our timeline.

Since most of our team members have a life sciences background, we found outside experts in other fields to help us customize the new scorecards. We are happy to announce that we have created customized Rubriq scorecards for the following fields:

  • Social Sciences: Qualitative
  • Social Sciences: Quantitative
  • Humanities
  • Physical Sciences
  • Math and Computer Sciences
  • Engineering and Materials Science
  • Statistical Analysis “add-in” component (for any scorecard)

The full Rubriq Report contains three components: the scorecard, the iThenticate plagiarism report, and a custom journal recommendation. However, the journal recommendations depend on our ability to run advanced searches across the published literature. Right now we can easily do that for all fields covered by PubMed, but we don't yet have the same access in other areas of study, so we are not yet able to provide journal recommendations for papers in these new fields. As part of our development of the new JournalGuide tool, we are expanding our article databases in these areas; once that work is complete, we will be able to provide journal recommendations.

Although the new scorecards were only recently added to our system, Rubriq has already received manuscripts in all of the new fields. Moving into new areas is exciting, but it also means that we need to expand our search for reviewers. Although more than 1,000 reviewers have already signed up with Rubriq, very few are outside our original scope. So if you have colleagues in these newly added areas of study, let them know that we're looking for reviewers! They can visit our site to see the requirements and send us any questions.

— Lisa Pautler, Director of Marketing

Leave a comment

Filed under Product/Service Updates

Some updates about Research Square (our parent company)

Research Square is the parent company of Rubriq, AJE, and JournalGuide. Our combined team of researchers, software developers, publishing industry veterans, and other specialists recently moved to a fantastic new office in downtown Durham. You can read more about it here. We're planning an official ribbon-cutting ceremony before the end of the year. If you're in the Raleigh/Durham area and would like to join us, drop us a line!

Research Square was recently the recipient of several awards, highlighting both our overall growth as well as the workplace itself. We received multiple distinctions from Inc. Magazine, including membership on the prestigious Inc. 5000 list and the Inc. Hire Power award. The Hire Power designation ranks us in the top 10 of North Carolina companies creating new jobs, as well as the top 10 in our category.

In addition to accolades from Inc. Magazine, we were also listed in the Triangle Business Journal Fast 50, which highlights the fastest-growing private companies in the area. But we don't just get honors for growth. Research Square was also recognized for our family-friendly work culture. Carolina Parenting, which owns Carolina Parent, Charlotte Parent, and Piedmont Parent, highlights companies that support and encourage working parents in North Carolina. They selected Research Square for their 2013 N.C. Family-Friendly 50 Companies list.

As part of the Research Square family, our Rubriq team has been spending quite a bit of time helping our fledgling sister brand, JournalGuide. Although the public launch is expected by January, you can see the beta now. In addition to helping to create JournalGuide, the Rubriq team also recently integrated the Rubriq scorecard into the new Scholarly Editing Assistant site offerings. This new site is a collaboration primarily between our sister company AJE and Thomson Reuters. We have also been working on some special projects directly with journals. Look for updates about our brand new projects, as well as more information on our work with Thomson Reuters, here on our blog in the weeks to come!

Lisa Pautler, Director of Marketing

Leave a comment

Filed under Product/Service Updates

Rubriq Presentation from SSP Annual Meeting 2013

Missed us at SSP? Want to know the latest things we’re cooking up at Rubriq? In this video Keith Collier re-presents all of his slides from the SSP session (Concurrent 4E: The Future of Peer Review: Game Changers). This presentation gives a detailed (~20 min) overview of our independent peer review service, as well as a preview of one of our new free author tools. It was designed for the SSP audience, which is primarily journals, editors, and publishers.

From the SSP 2013 Annual Meeting –

Leave a comment

June 19, 2013 · 7:51 pm

How we found 15 million hours of lost time

Lost time in the current peer review process

Rubriq wants to recover lost hours from redundant reviews so they can be put back into research. In the current journal submission process, rejection is common, yet reviews are rarely shared from one journal to the next.  Even when reviews are passed along, the lack of an industry-wide standard means that each journal in the chain solicits its own reviews before making a decision. All of this leads to reviewers repeating work that has already been done on the same manuscript by other colleagues.  We estimate that over 15 million hours are spent on redundant or unnecessary reviews – every year. 

Here’s a video that helps illustrate the key issues:

(once it starts, you can click "HD" in the right-hand corner to view at the highest resolution)

So how did we get to that number of 15 million hours each year?

The two key metrics for finding wasted time are quantity (how many manuscripts are reviewed and then rejected?) and time (how long does each submission take to be reviewed?). While there are 28,000 peer-reviewed journals, we only use 12,000 in our calculations, since that is roughly the number of high-quality journals included in Thomson Reuters' Web of Science. The figure below shows how we calculated both quantity and time; descriptions and citations for the key steps in the process follow:

Rubriq calculation of time lost to peer review - click to view larger


Calculation & Source Details:

1.  3,360,207 (English-language, STM) submissions per year

  • Although the STM report¹ showed that there are over 28,000 peer-reviewed journals, we focused our scope on just the 12,000 English-language STM journals identified in that same report, as they are the current focus for Rubriq.
  • The average number of submissions per journal was shown in the Thomson Reuters data² as 280 (total ScholarOne submissions divided by the count of ScholarOne journal sites). Applying that (unrounded) average to each of the 12,000 journals gives 3,360,207 submissions per year.
  • Note that this is submission-based data, not paper-based. A single manuscript that was rejected by one journal but then accepted by another within the same year would go through two review cycles and thus be counted as two separate submissions.

2.  1,344,099 (40%) accepted submissions per year

  • The Thomson Reuters data² report a 37% acceptance rate based on all submissions received and accepted within their system, but the PRC report³ estimated an average of 50%.
  • We feel the Thomson Reuters data is more accurate than PRC data based on how the information was collected and how calculations were made. Combined with our own internal data and personal interviews with some of the largest STM publishers, we selected 40% as the best representation for this group of journals. 40% of our total submission number equals 1,344,099 accepted papers.

3.  705,652 (21%) submissions per year rejected WITHOUT Review

  • The PRC report³ stated 21% as its estimate for submissions that are rejected without going through peer review, also known as a "desk rejection".
  • Although there is time lost and an opportunity cost to the author when this occurs and they have to try again with another journal, we are currently focused only on time spent on peer review, so we do not factor this group into our calculation of wasted time.

4.  1,310,496 (39%) submissions per year rejected WITH Review

  • The number of submissions that are sent to peer review but are then rejected is our key starting metric for calculating lost hours (why? See our “Additional Reading” section below for some background material). We use the two preceding calculations to find this number.
  • If 21% were rejected without review, and 40% were accepted, then the remaining submissions were rejected after the peer review process.  Applying 39% to our total gives us 1,310,496.

5.  11.5 average reviewer hours spent per submission

  • Data from the Ware report⁴ provided a median of five hours spent per review.
  • The PRC report³ states that an average of 2.3 reviewers is used for each submission.
  • Five hours × 2.3 reviewers equals 11.5 average review hours per submission.
  • Note that this number only takes into account the time spent per submission by reviewers – it does not include time spent by the journal or publisher in coordinating the review process (e.g., recruiting reviewers, editorial check of reviews, review software costs) or other time spent processing these papers (e.g., screening, editorial review, technical check, other operational time).

6. 15,070,706 hours per year spent on redundant reviews

  • Assuming 11.5 hours per submission * 1,310,496 submissions that were reviewed but then rejected = over 15 million hours. Every year.
  • Since there are only 8,760 hours in a year, you can also think of it as equaling 1,720 years (if it was all one reviewer working 24 hrs per day).
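The steps above boil down to a few multiplications. Here is a minimal sketch of the arithmetic in Python; note that the published 280-submissions-per-journal figure is rounded, so these totals land slightly below the exact figures quoted in the text (3,360,207 and 15,070,706):

```python
# Sketch of the lost-hours calculation described above. The 280 average is
# rounded, so results are a hair under the exact figures in the text.
journals = 12_000                 # English-language STM journals (STM report)
avg_submissions = 280             # average ScholarOne submissions per journal
total_submissions = journals * avg_submissions          # ~3.36 million/year

accept_rate = 0.40                # blended Thomson Reuters / PRC estimate
desk_reject_rate = 0.21           # rejected without review (PRC report)
reviewed_reject_rate = 1 - accept_rate - desk_reject_rate   # 0.39

reviewed_then_rejected = total_submissions * reviewed_reject_rate

hours_per_review = 5              # median hours per review
reviewers_per_submission = 2.3    # average reviewers per submission
hours_per_submission = hours_per_review * reviewers_per_submission  # 11.5

lost_hours = reviewed_then_rejected * hours_per_submission
print(f"{lost_hours:,.0f} redundant review hours per year")      # ~15 million
print(f"equivalent to {lost_hours / 8760:,.0f} reviewer-years")  # ~1,720
```

Swapping in your own acceptance or desk-rejection rates shows how sensitive the total is: every percentage point moved from "accepted" to "rejected with review" adds roughly 386,000 hours.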



1. M. Ware, M. Mabe, The STM Report: An overview of scientific and scholarly journal publishing (International Association of Scientific, Technical, and Medical Publishers, Oxford, UK, 2012).

2. Thomson Reuters, Global Publishing: Changes in submission trends and the impact on scholarly publishers (April 2012).

3. M. Ware, Peer review: benefits, perceptions, and alternatives (Publishing Research Consortium, London, UK, 2008).

4. M. Ware, Peer Review: Recent Experience and Future Directions, New Review of Information Networking, 16:1, 23-53 (2011).


Have other questions? Found a better number with your own calculations? Feel free to add your comments here on our blog!



Filed under Uncategorized

See us at STM

Are you attending the STM Spring Conference in Washington DC next week (April 30th-May 2nd)?

Rubriq will again participate in one of the event’s very popular Flash Sessions. Keith Collier will represent Rubriq in the session on “Peer Review, Payment & Analytics”, which starts at 4:30pm on Wednesday the 1st. It’s the last session of the day and is immediately followed by the conference reception, where you should be able to track Keith down for a chat.

The overall theme of the conference is “Proactive Scholarly Publishing: New Business Models, Partners and Customer Relationships,” which fits in perfectly with where we are in the evolution of Rubriq. Now that phase two is underway, we are continuing to grow and develop our different publisher and journal partnerships, and even explore some brand new opportunities.

So far we’ve invited several hundred journals to join the Rubriq network, and we are continuing to expand our invitations to all journals in the biological and medical fields. If your journal has not received an email with login information, you can fill out our request form online. Look for updates on the journal page of our site in the month of May to see some of the first journals to come on board.

— Lisa (@RubriqNews)

Leave a comment

Filed under Events

Phase Two of Beta is Here!

We are excited to announce that Rubriq has successfully completed the first phase of beta testing, and the second phase is now underway!

As of this second phase, a total of eight publishers are now providing feedback on all aspects of the Rubriq report and system, and will help develop the features and options designed for journal editors. You can find out more about journals in the Rubriq network on our new Journals page.

This second phase of the Rubriq beta launch introduces two new services for authors in addition to the Rubriq scorecard. The first is a plagiarism check. Rubriq has selected iThenticate to provide a complete plagiarism report, as it is widely recognized as the industry standard. This will not only help authors resolve any issues prior to submission, but can also be attached to a manuscript for use by any journal.

The second new service is the journal recommendation report. After the scorecard is completed, a list of journals is compiled that most closely matches the paper’s scores and content.  These data-driven suggestions are then checked for accuracy and relevance by a Rubriq team member who is experienced in journal selection, and who is also a published researcher.  Authors will be able to filter and sort all of these journal recommendations by their own preferences, and therefore can make a well-informed decision about the best place to submit for a higher likelihood of success. 

Rubriq is also expanding from its initial three areas of study (immunology, cancer research, and microbiology) to cover over 200 biological and medical fields. Information about journals in these fields has been compiled into a proprietary database, which is used in the journal recommendation process.

Selected journals will soon receive invitations to claim their profiles, update their data, and become active in the Rubriq network. There is no cost for journals to join or participate in the Rubriq network. Journals that do not receive invitations by the end of April can request membership on the Rubriq website using this request form.

We are now accepting manuscript submissions, and continue to welcome new reviewers to the Rubriq network from all STM fields. The Rubriq team thanks its beta partners and Advisory Panel for their feedback and insights that have brought us to this new chapter. 

Lisa Pautler, Director of Marketing / @RubriqNews

Leave a comment

Filed under Product/Service Updates

The Potential of the Scorecard

There has been a lot of interest in Rubriq over the last few weeks, which of course is great. But I’m finding that many people are getting hung up on our plan to pay reviewers and missing the larger and more important elements of Rubriq.

Yes, it’s true we are experimenting with paying reviewers and we believe direct compensation could have an important and positive effect on reviewer turnaround time and quality of reviews.  But that is only one element of reviewer compensation and we plan to develop better training and rewards for reviewers over time.

The core of Rubriq isn’t direct compensation of reviewers, but the development of the Rubriq scorecard that allows reviewers to rate the paper’s Quality of Research, Quality of Presentation, Novelty, and Interest.  The scorecard is at the heart of our model and we hope to present our numerous validation tests at the Peer Review Congress this fall.  

It’s the scorecard that provides a new way of thinking about peer review. We don’t make recommendations on accept/reject decisions, nor do we claim to provide a “valid science” stamp. Each journal will have to map what it considers valid science to our R-Score. Those are decisions that editors can make. But the scorecard ratings and comments provide a number of advantages:

  • It’s a way to stratify and organize papers in a mega-OA journal, allowing for better initial filtering after publication
  • It’s a better way to kick off and organize post-publication peer review. Now post-publication peer review can build off of the ratings and comments that were collected in pre-publication peer review
  • It’s standardized and thus portable, so individual journal editors can map the standard scorecard to their own peer review process
  • It provides a mechanism for directing papers to journals based on the quality and importance of the research. The current tools for finding journals are based on keyword/semantic matching, which I believe exacerbates the journal loops problem. Without accounting for a paper’s quality and novelty, suggesting journals (other than the mega-OA journals) probably causes more problems than it solves.

I recently met Nikolaus Kriegeskorte, a neuroscientist from Cambridge. We were both on a debate panel at the Annual AAP/PSP Meeting, where we argued opposite sides of the “post-publication peer review would be better than pre-publication peer review” question. I do recommend Nikolaus’ paper on post-publication peer review published in Frontiers in Computational Neuroscience.

What struck me, both during the debate and in reading his paper, was the structure he proposed to organize post-publication peer review.  He proposed the following areas: Justification of Claims, Importance, and Originality.  He also proposed using scales and error bars to show the ratings and number of raters over time.  It’s very similar to our approach at Rubriq and reinforced our ideas around pre-publication peer review “kicking-off” post publication peer review.

We have just completed a technical validation test of the scorecard and we are also wrapping up our beta tests with publishers in March.  We look forward to presenting our feedback to the community.


Leave a comment

Filed under How Things Work