
Who reviews for Rubriq? (Part 1)

Rubriq provides high-quality, expert peer review of academic manuscripts in two weeks. By industry standards, our reviews are returned very quickly, but quality is always our number one priority. Quality peer review depends on finding the right experts for each manuscript we receive, so we make sure that every manuscript is reviewed by researchers who have the expertise to critically evaluate its research topic, study design, methodology, and data.

Who is qualified to be a Rubriq Reviewer?
Rubriq Reviewers are the same academic reviewers that journals use for their peer reviews. They are required to have strong publication records and significant peer-review experience. Our reviewers must have completed a doctoral-level degree at an accredited university (or hold a tenure-track professorship) and must be actively publishing as postdoctoral or faculty-level researchers at top research universities or institutes around the world.

How are peer reviewers selected, vetted, and invited to review a manuscript for Rubriq?
Rubriq peer reviewers are carefully selected and vetted by Peer Review Coordinators (PRCs) before they are invited to review manuscripts. Rubriq’s PRCs are PhD-level scientists who graduated from top US universities (Duke University, Johns Hopkins University, and the University of North Carolina at Chapel Hill). Most PRCs have 3-5 years of postdoctoral experience and have published multiple first-author papers.

There are two ways that researchers can be invited to review a manuscript for Rubriq:

  1. When we receive a manuscript to review, we search our existing reviewer database for reviewers who have the right expertise to review it. All reviewers in the database have been vetted thoroughly for their qualifications, publication records, and current research positions (see the qualifications above). If reviewers have chosen to use a personal email address (Gmail, for example) rather than an institutional email address for correspondence with us, then they must provide additional proof of identification (such as a picture of their institutional ID) before they are added to our database. If we have appropriate experts among our current reviewers, we invite those reviewers to review the manuscript.
  2. When we do not have well-matched reviewers in our existing database, our team searches the current literature in that field to identify researchers with the expertise and qualifications to review the manuscript. Researchers with the right qualifications are sent a personal invitation to review the manuscript; those who agree become Rubriq reviewers, are added to the reviewer database, and can be invited for future reviews. Many researchers we invite to review manuscripts are willing to suggest other potential reviewers. Those suggested reviewers are also vetted thoroughly before we invite them to review a manuscript.

Regardless of how we identify a potential reviewer for a manuscript, PRCs ensure that there are no apparent conflicts of interest (such as co-authorships) with the authors of the manuscript before inviting researchers to review the manuscript.

Who are Rubriq’s current reviewers?
To date, Rubriq has over 3,400 reviewers from all over the world in our reviewer database. The majority of our reviewers (83%) are academics and clinicians from the US, Canada, the UK, and Europe.

[Chart: Reviewer locations]

[Chart: Reviewer degrees]

More than half of our reviewers (62%) are tenure-track faculty or clinicians, and 32% are postdoctoral fellows or research associates. Almost half (49%) of current Rubriq reviewers are tenure-track professors and clinicians from the US, Canada, the UK, and Europe.

[Chart: Reviewer positions]

Rubriq reviewers hold faculty positions at some of the top universities in the world and serve as academic editors or editors-in-chief for well-known journals. In Part 2 of this series we will break down which institutions and universities our reviewers work at, so stay tuned!


Cynthia Nagle, PhD


Rubriq & Scientific Reports Fast-Track Trial

One of the main criticisms of the publishing process is the time it takes to go through peer review. More specifically, it’s the unknown amount of time the process will take that makes peer review feel like a “black box.” Peer review is also one of the largest pain points for reviewers, who may be inundated with requests from journals.

We developed Rubriq to save reviewers and authors time without compromising on rigor. Our scorecards help busy reviewers provide high-quality, rigorous feedback in an easy-to-use format. The resulting Rubriq Report gives authors clear feedback on how to improve and journals clear indicators of whether a manuscript would be acceptable for publication.

It’s for these reasons that we are pleased to launch another experimental use of Rubriq. Starting this week, Rubriq and NPG’s Scientific Reports will be collaborating on a limited trial to fast-track submissions through the peer review process. Much like priority mail, where you can pay extra to guarantee delivery by a certain date, fast-track will do the same for authors submitting papers to Scientific Reports. This gives researchers the option to pay an additional fee for expedited handling without compromising the journal’s standards.

The service will guarantee that authors receive a decision within three weeks of the complete submission and quality check of their papers. Rubriq will provide the peer review reports, while editors at Scientific Reports will make the final decision. The same editorial criteria for acceptance will be applied to any paper submitted to Scientific Reports, whether fast-tracked or not.

We at Research Square are pleased to be partnering with NPG on this trial. We both have similar missions to advance science and enable researcher success. NPG has the highest standards of peer review in the industry, and it’s great validation that Rubriq passed NPG’s testing prior to embarking on our trial.

We hope to learn a lot from the trial about how to improve the author and reviewer experiences. We believe minimizing the uncertainty surrounding the peer review process and speeding up the time to a publication decision will be of real value to researchers, giving them more time to focus on making discoveries. We are excited to launch the Rubriq fast-track trial with Scientific Reports this week.


Rubriq adds Sound Research Stamps

[Image: Sound Research Stamps]

Rubriq is excited to announce the addition of Sound Research Stamps to our scorecards. These stamps are based on direct responses from the reviewers who have evaluated the manuscript and let authors and journal editors know when a manuscript meets the standards of sound research. They provide even more context and clarity for the detailed feedback that reviewers give through the Rubriq Scorecard.


New Rubriq Sound Research Stamp creates additional publication options for researchers

December 2014, Durham, North Carolina

In 2012 Research Square launched Rubriq, a peer review service that is fast, objective, and portable, to better enable researchers to meet their publication goals. Since its launch, Rubriq has performed over 2,000 peer reviews, far more than most journals perform over the same period. The new Rubriq Sound Research Stamp represents another step toward a better model for publishing verified research results to maximize speed and impact. With the launch of the Sound Research Stamp, Rubriq is able to certify that a research article is fundamentally sound and suitable for publication. This feature gives researchers a quick path to sharing their results in a sound science journal and enables a future where self-publication is standard practice.

Rubriq’s independent process helps researchers by creating a standard format and scoring system for reviews, allowing them to bypass the traditional and time-consuming process of moving from journal to journal.

‘By decoupling validation from the publication process and creating a standardized peer review methodology, we give researchers more options to disseminate their work and, more importantly, more time to focus on their next discovery,’ said Shashi Mudunuri, CEO of Research Square. ‘The Sound Research Stamp enables researchers to more quickly share their results through a variety of channels. Savvy journal editors will offer streamlined publication paths to attract researchers that have the Sound Research Stamp, and we are happy to facilitate those connections.’

Rubriq scorecards already provide comprehensive feedback on research that has been thoroughly and independently peer reviewed. Now, the Sound Research Stamp will enable editors and researchers to immediately see whether the research is ready for publication in a sound science journal. The service is launching with two different stamps.

Sound Research Certified is for work that the reviewers believe could be published with minor revisions, or even exactly as it is.

Sound Research Potential is for articles that reviewers believe will be publishable but will require some larger issues to be addressed before publication.

‘Good peer review should create an in-depth, thoughtful response to the work,’ said Jody Plank, the Product Manager for Rubriq, ‘and Rubriq scorecards contain a wealth of information. But there’s also a need, particularly in open access publishing, for a clear assessment of the work. The stamps will give both researchers and editors an immediate indication of whether the article is suitable for publication.’

Crucially, the stamps make no judgment as to the importance of the work; that is for the wider scientific community to assess. Instead, the stamp indicates that the manuscript represents methodologically sound research with publication-quality presentation.

Rubriq has created the stamps as a direct response to the needs of open access publishers, who require a streamlined peer review system but are also looking to maintain high quality in their published works. Sound Research Stamps are a simple, concise statement of the quality of a manuscript for journals that publish technically sound research regardless of novelty.

–Ends–

About Research Square

Research Square is a for-benefit company that is focused on helping researchers succeed by creating tools and services that improve scientific communication. Through the AJE, Rubriq, and JournalGuide brands, Research Square provides a complete solution for authors who are preparing a manuscript for publication. To learn more about Research Square, visit http://www.researchsquare.com. More information is available about Rubriq and the Sound Research Stamp at http://www.rubriq.com.


Welcome, JournalGuide!


Today marks the official public launch of JournalGuide, a free tool that helps authors find the best journal match for their research. JournalGuide grew out of the Rubriq team’s need for a comprehensive journal database and powerful article-level search tools. Our team continues to use this data to provide customized journal recommendations as part of the complete Rubriq Report. We saw an opportunity to take what was originally an internal tool and make it a free resource for all authors. By doing so, we also created the opportunity to collect author ratings and input to make it even more helpful.

User accounts for JournalGuide and Rubriq are linked with a central login, and all journal profile data are shared between the two. Journals that created profiles on www.rubriq.com will see that data displayed live on JournalGuide. Be sure to visit www.journalguide.com to try it out, rate a journal or two, and share your feedback.


Rubriq Presentation from SSP Annual Meeting 2013

Missed us at SSP? Want to know the latest things we’re cooking up at Rubriq? In this video Keith Collier re-presents all of his slides from the SSP session (Concurrent 4E: The Future of Peer Review: Game Changers). This presentation gives a detailed (~20 min) overview of our independent peer review service, as well as a preview of one of our new free author tools. It was designed for the SSP audience, which is primarily journals, editors, and publishers.

From the SSP 2013 Annual Meeting – http://www.sspnet.org/events/past-events/2013-annual-meeting/schedule/


June 19, 2013 · 7:51 pm

How we found 15 million hours of lost time

Lost time in the current peer review process

Rubriq wants to recover lost hours from redundant reviews so they can be put back into research. In the current journal submission process, rejection is common, yet reviews are rarely shared from one journal to the next.  Even when reviews are passed along, the lack of an industry-wide standard means that each journal in the chain solicits its own reviews before making a decision. All of this leads to reviewers repeating work that has already been done on the same manuscript by other colleagues.  We estimate that over 15 million hours are spent on redundant or unnecessary reviews – every year. 

Here’s a video that helps illustrate the key issues:

(once it starts, you can click “HD” in the right-hand corner to view it in the highest resolution)

So how did we get to that number of 15 million hours each year?

The two key metrics for finding wasted time are quantity (how many manuscripts are reviewed and then rejected?) and time (how long does each submission take to be reviewed?). While there are 28,000 peer-reviewed journals, we only use 12,000 in our calculations, since that is roughly the number of high-quality journals included in Thomson Reuters’ Web of Science. The figure below shows how we calculated both quantity and time, and the descriptions and citations for the key steps in the process follow:

[Figure: Rubriq calculation of time lost to peer review]

 

Calculation & Source Details:

1.  3,360,207 (English-language, STM) submissions per year

  • Although the Mark Ware STM report [1] showed that there are over 28,000 peer-reviewed journals, we limited our scope to the 12,000 English-language STM journals identified in that same report, as they are the current focus for Rubriq.
  • The average number of submissions per journal in the Thomson Reuters data [2] was approximately 280 (total ScholarOne submissions divided by the count of ScholarOne journal sites). Multiplying that average across the 12,000 journals gives 3,360,207 submissions per year.
  • Note that this is submission-based data, not paper-based. A single manuscript that was rejected by one journal but then accepted by another within the same year would go through two review cycles and is therefore counted as two separate submissions.

2.  1,344,099 (40%) accepted submissions per year

  • The Thomson Reuters data [2] report a 37% acceptance rate based on all submissions received and accepted within their system, but the Mark Ware PRC report [3] estimated an average of 50%.
  • We feel the Thomson Reuters data are more accurate than the PRC data based on how the information was collected and how the calculations were made. Combined with our own internal data and personal interviews with some of the largest STM publishers, we selected 40% as the best representation for this group of journals. Applying 40% to our total submission number gives 1,344,099 accepted papers.

3.  705,652 (21%) submissions per year rejected WITHOUT Review

  • The Mark Ware PRC report [3] gave 21% as its estimate for submissions that are rejected without going through peer review, also known as a “desk rejection”.
  • Although there is lost time and an opportunity cost to the author when this occurs and they have to try again with another journal, we are currently focused only on time spent on peer review, so we do not factor this group into our calculation of wasted time.

4.  1,310,496 (39%) submissions per year rejected WITH Review

  • The number of submissions that are sent to peer review but are then rejected is our key starting metric for calculating lost hours (why? See our “Additional Reading” section below for some background material). We use the two preceding calculations to find this number.
  • If 21% were rejected without review, and 40% were accepted, then the remaining submissions were rejected after the peer review process.  Applying 39% to our total gives us 1,310,496.

5.  11.5 average reviewer hours spent per submission

  • Data from Mark Ware’s peer review study [4] provided us with an average (median) of five hours spent per review.
  • The Mark Ware PRC report [3] states that an average of 2.3 reviewers is used for each submission.
  • Five hours × 2.3 reviewers equals 11.5 average review hours per submission.
  • Note that this number only takes into account the time spent per submission by reviewers – it does not include time spent by the journal or publisher in coordinating the review process (e.g., recruiting reviewers, editorial check of reviews, review software costs) or other time spent processing these papers (e.g., screening, editorial review, technical check, other operational time).

6. 15,070,706 hours per year spent on redundant reviews

  • Multiplying 11.5 hours per submission by the 1,310,496 submissions that were reviewed but then rejected gives over 15 million hours. Every year. (A short script reproducing this arithmetic follows this list.)
  • Since there are only 8,760 hours in a year, you can also think of it as 1,720 years of reviewing (if it were all one reviewer working 24 hours per day).
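
For readers who want to check the math, here is a minimal sketch in Python of the same arithmetic, using the rounded inputs quoted above. Because the inputs are rounded (for example, 280 submissions per journal), the totals come out slightly below the published figures, which appear to have been computed from unrounded averages.

```python
# A minimal sketch of the lost-hours arithmetic described above, using the
# rounded inputs quoted in this post. Totals differ slightly from the published
# figures, which appear to use unrounded averages.

JOURNALS = 12_000               # English-language STM journals [1]
SUBMISSIONS_PER_JOURNAL = 280   # average submissions per ScholarOne journal site [2]
ACCEPT_RATE = 0.40              # blended acceptance estimate used in this post
DESK_REJECT_RATE = 0.21         # rejected without review ("desk rejection") [3]
HOURS_PER_REVIEW = 5            # median reviewer hours per review [4]
REVIEWERS_PER_SUBMISSION = 2.3  # average reviewers per submission [3]

submissions = JOURNALS * SUBMISSIONS_PER_JOURNAL                             # ~3.36 million/year
rejected_after_review = submissions * (1 - ACCEPT_RATE - DESK_REJECT_RATE)   # the 39% slice
hours_per_submission = HOURS_PER_REVIEW * REVIEWERS_PER_SUBMISSION           # 11.5 hours
lost_hours = rejected_after_review * hours_per_submission

print(f"Submissions per year:          {submissions:,.0f}")
print(f"Rejected after review (39%):   {rejected_after_review:,.0f}")
print(f"Reviewer hours on rejections:  {lost_hours:,.0f}")
print(f"Equivalent reviewer-years:     {lost_hours / 8760:,.0f}")  # 8,760 hours in a year
```

Running it gives roughly 3.36 million submissions, about 1.31 million rejected after review, and just over 15 million reviewer hours per year, or about 1,720 reviewer-years.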

 

 References/Links:

1. M. Ware, M. Mabe, The STM Report: An overview of scientific and scholarly journal publishing (International Association of Scientific, Technical, and Medical Publishers, Oxford, UK, 2012; http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf)

2. Thomson Reuters, Global Publishing: Changes in submission trends and the impact on scholarly publishers (April 2012; http://scholarone.com/about/industry_insights/)

3. M. Ware, Peer review: benefits, perceptions, and alternatives (Publishing Research Consortium, London, UK, 2008; http://www.publishingresearch.net/documents/PRCsummary4Warefinal.pdf)

4. M. Ware, Peer Review: Recent Experience and Future Directions (New Review of Information Networking, 16:1, 23-53, 2011; http://dx.doi.org/10.1080/13614576.2011.566812)

 

Have other questions? Found a better number with your own calculations? Feel free to add your comments here on our blog!

 
