Original Article
Peering at peer review revealed high degree of chance associated with funding of grant applications

https://doi.org/10.1016/j.jclinepi.2005.12.007

Abstract

Background and Objectives

There is a persistent degree of uncertainty and dissatisfaction with the peer review process, underscoring the need to validate the current grant awarding procedures. This study compared the CLassic Structured Scientific In-depth two-reviewer critique (CLASSIC) with an all-panel-members' independent ranking method (RANKING). Eleven reviewers reviewed 32 applications for a pilot project competition at a major university medical center.

Results

The degree of agreement between the two methods was poor (kappa = 0.36). The top-rated project in each stream would have failed the funding cutoff with a frequency of 9% and 35%, respectively, depending on which pair of reviewers had been selected. Four of the top 10 projects identified by RANKING had a greater than 50% chance of not being funded under the CLASSIC method. Ten reviewers provided optimal consistency for the RANKING method.

Conclusions

This study found that there is a considerable amount of chance associated with funding decisions under the traditional method of assigning the grant to two main reviewers. We recommend using the all reviewer ranking procedure to arrive at decisions about grant applications as this removes the impact of extreme reviews.

Introduction

Peer review, like democracy, has been referred to as the worst system except for all the others [1] (echoing Sir Winston Churchill, Hansard, November 11, 1947: "democracy is the worst form of government except for all those others that have been tried"). On the negative side, peer review has been said to stifle innovation, encourage cronyism and even pilfering of ideas, if not downright plagiarism [2], [3], [4], [5], [6], [7], [8], [9], [10], [11]. On the positive side, peer review may be seen as a rational and fair tool to manage the allocation of scarce research resources and as a way to promote scientific accountability [11]. Importantly, feedback from the peer-review process, even if negative, may provide applicants with helpful suggestions to modify their methods or identify which parts of the grant are difficult to comprehend. The peer-review process evokes much emotion, generally positive among people receiving favorable reviews [12], [13], [14] and negative among those not being so favorably reviewed [15], [16]. But, without unbiased and reliable peer review, important research may remain unfunded and never performed. Moreover, discouraged investigators may abandon research altogether. Swift [2] provides several examples where important research was turned down because of the classic peer review process. Horrobin [3], [17] has been vocal about the dangers of suppressing therapeutic innovation through peer review.

At first glance, the peer-review process appears to be worthwhile but, in practice, fairness is difficult to achieve [18], [19]. One of the challenges in peer review is to identify peers who do not have a conflict of interest either because they are collaborators or competitors [10], [11], [20], [21]. Thus, research into peer review is needed to serve the best interests of science [22].

As an object of scientific study, the peer review process is only a few decades old [23], [24]. Most of the literature has focused on the peer-review process for submissions to journals [25]. A Cochrane review [25] found no evidence that peer review was a secure mechanism to detect bias or error in manuscripts. Gender bias in manuscript review has also been studied and, although male editors were more successful in recruiting male reviewers, there was no gender bias in acceptance rates [26].

There has been less written on peer review of grant applications. In Sweden, concern was voiced after a 1997 study [27], which showed a gender bias against women competing for postdoctoral awards. Gender bias was not observed in Great Britain and in the United States [28], [29], although women applied less frequently than men for postdoctoral positions. Two studies [30], [31] quantified the difference in funding decisions about grants across review panels. A 1981 study [30] found a 25% reversal rate when grants were rereviewed by an independent panel, and a 1997 study [31] found disagreement on fundability for 27% of grants sent to two independent panels. A systematic review [32] concluded that despite inconsistencies about funding decisions, generally there was good agreement between individual panel members as to the overall quality of the application.

Peer-review processes for grant funding across nations are quite similar [33], [34], [35]. The grant is allocated to a committee, usually chosen by the applicant. The committee has a core set of members, who rotate every 2 to 4 years depending on the agency, plus other ad hoc members. Often there is an attempt to be representative of geography, gender, race, or language. For example, in a bilingual country like Canada, grants can be submitted in either English or French. The administrative officer in conjunction with the committee chair and perhaps another scientific officer assigns the applications to the appropriate reviewers. As reviewers identify conflicts or other perceived obstacles to a fair process, proposals are reassigned until all applications have been assigned to two or, rarely, three reviewers. At the same time, external reviewers, who provide written assessments but do not have any decision-making capacity, are sought.

As reviewers are expected to write in-depth reviews, they usually do not have time to read any applications other than those assigned to them. To overcome this reality, additional panel members are sometimes identified as readers. As a result, the funding decision is usually based on a consensus score of two (or at most three) reviewers. The external ratings play very little role except in situations of extreme discordance between internal reviewers' scoring [36], [37]. Similarly, overall committee discussion is usually dominated by the in-depth reviewers. As funding becomes more and more difficult to obtain, only projects where both reviewers agree on a very high rating will get funding. In other words, one mediocre review will likely eliminate any chance of funding [37].

Despite science's preoccupation with accurate measurement, there is no precise method of measuring the quality of grant proposals: what counts as “good enough” for funding is left to the subjective opinion of a very small number of reviewers [38].

Given this situation, the need to identify alternatives to the current procedures for awarding grants was identified. The purpose of this study, therefore, was to compare two methods of peer review on the probability of funding a given application: (1) the usual two-reviewer grant review method, and (2) an all-reviewer ranking method.

Section snippets

Methods

For the past 2 years, the McGill University Health Center (MUHC) Research Institute has held a pilot project competition to stimulate clinical research by encouraging applications from new investigators and new investigative teams. Funds were available for five projects in each stream, and the successful applicants were expected to eventually submit a full proposal to an external funding agency. The research community received explicit instructions about content, format, and evaluation criteria.

Statistical methods

The ranks assigned to each project by each unconflicted reviewer were summed and averaged (RANKING method). Descriptive statistics of the distribution of ranks were calculated, as was the average of the two CLASSIC reviews. Also calculated was the sum of ranks considering all possible pairs of reviewers. This was done to mimic the approximately random fashion by which projects are assigned to reviewers under normal circumstances. With 11 committee members, there are 55 possible pairings of reviewers.
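The all-pairs analysis described above can be sketched in a few lines. The rank matrix and funding cutoff below are invented for illustration (the study's actual ranks are not reproduced); the pairing count C(11, 2) = 55 matches the paper:

```python
from itertools import combinations

# Hypothetical rank matrix: rows are 11 reviewers, columns are projects;
# entries are the rank each reviewer assigned (1 = best). Values are invented.
ranks = [
    [1, 3, 2], [2, 1, 3], [1, 2, 3], [3, 1, 2],
    [1, 3, 2], [2, 3, 1], [1, 2, 3], [2, 1, 3],
    [1, 3, 2], [3, 2, 1], [1, 2, 3],
]
n_reviewers = len(ranks)
pairs = list(combinations(range(n_reviewers), 2))  # C(11, 2) = 55 pairings

def fraction_funded(project, cutoff):
    """Fraction of reviewer pairs whose mean rank meets the funding cutoff,
    mimicking the CLASSIC assignment of a grant to two main reviewers."""
    funded = sum(
        1 for i, j in pairs
        if (ranks[i][project] + ranks[j][project]) / 2 <= cutoff
    )
    return funded / len(pairs)

for p in range(3):
    print(f"project {p}: funded under {fraction_funded(p, cutoff=1.5):.0%} of pairs")
```

Running the loop over all projects shows how the same project can pass or fail the cutoff depending purely on which pair of reviewers it drew, which is the source of the chance the paper quantifies.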

Results

Seventeen applications were received from new investigators and 15 from new teams. The focus of the request for applications was for clinical, epidemiologic, health services, or population health research, and so a review panel with expertise in these areas was selected. Prior to the day of the meeting, panel members provided their independent rankings for each project for which they had no conflict (RANKING method). For the CLASSIC method, an attempt was made to have two reviews for each

Discussion

This study found that there is a considerable amount of variability in project evaluation under the CLASSIC method of assigning the grant to two main reviewers. There was poor agreement between the CLASSIC method based on two reviewers, and the RANKING method based on all members reviewing and ranking projects. The frequency with which projects met the cutoff for funding based on all possible pairs of reviewers never reached 100%; even the top three projects in each stream would have failed to
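The agreement statistic reported between the two methods is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal from-scratch sketch follows; the fund/no-fund labels are invented for illustration and do not reproduce the study's data:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels of the same items."""
    labels = sorted(set(a) | set(b))
    n = len(a)
    # Observed proportion of agreement.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Invented funding decisions for 10 projects: 1 = funded, 0 = not funded.
classic = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
ranking = [1, 0, 1, 0, 1, 0, 0, 1, 1, 0]
print(round(cohens_kappa(classic, ranking), 2))
```

A kappa in the 0.3-0.4 range, as these toy labels produce and as the study reports (0.36), is conventionally read as only fair agreement.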

References (41)

  • K. Resch et al. A randomized controlled study of reviewer bias against an unconventional therapy. J R Soc Med (2000)
  • Anonymous. Bad peer reviewers: small proportions of referees are undermining the scientific process, especially in biology. Some of the problems are getting worse, partly because of changes in scientific publishing. Nature (2001)
  • D.F. Horrobin. The philosophical basis of peer review and the suppression of innovation. JAMA (1990)
  • D. Grant. Peer review is a two way process. Nature (1997)
  • W.E. Stehbens. Basic philosophy and concepts underlying scientific peer review. Med Hypotheses (1999)
  • H. Zuckerman, R.K. Merton. Patterns of evaluation in science. Minerva (1971)
  • E.J. Weber et al. Author perception of peer review: impact of review quality and acceptance on satisfaction. JAMA (2002)
  • B.J. Sweitzer et al. How well does a journal's peer review process function? A survey of authors' opinions. JAMA (1994)
  • J.M. Garfunkel et al. Effect of acceptance or rejection on the author's evaluation of peer review of medical manuscripts. JAMA (1990)
  • M. Pupique. Peer review my foot! J Biol Rhythms (2002)