Original Article
Peering at peer review revealed high degree of chance associated with funding of grant applications
Introduction
Peer review, like democracy, has been referred to as the worst system except for all the others [1] (cf. Sir Winston Churchill, Hansard, November 11, 1947: democracy is the worst form of government except for all those others that have been tried). On the negative side, peer review has been said to stifle innovation, encourage cronyism, and even enable the pilfering of ideas, if not downright plagiarism [2], [3], [4], [5], [6], [7], [8], [9], [10], [11]. On the positive side, peer review may be seen as a rational and fair tool to manage the allocation of scarce research resources and as a way to promote scientific accountability [11]. Importantly, feedback from the peer-review process, even if negative, may provide applicants with helpful suggestions to modify their methods or identify which parts of the grant are difficult to comprehend. The peer-review process evokes much emotion, generally positive among people receiving favorable reviews [12], [13], [14] and negative among those not so favorably reviewed [15], [16]. But without unbiased and reliable peer review, important research may remain unfunded and never be performed. Moreover, discouraged investigators may abandon research altogether. Swift [2] provides several examples where important research was turned down because of the classic peer-review process. Horrobin [3], [17] has been vocal about the dangers of suppressing therapeutic innovation through peer review.
At first glance, the peer-review process appears to be worthwhile but, in practice, fairness is difficult to achieve [18], [19]. One of the challenges in peer review is to identify peers who do not have a conflict of interest either because they are collaborators or competitors [10], [11], [20], [21]. Thus, research into peer review is needed to serve the best interests of science [22].
As an object of scientific study, the peer-review process is only a few decades old [23], [24]. Most of the literature has focused on the peer-review process regarding submissions to journals [25]. A Cochrane review [25] found no evidence that peer review was a secure mechanism to detect bias or error in manuscripts. Gender bias in manuscript review has also been studied and, although male editors were more successful in recruiting male reviewers, there was no gender bias in acceptance rates [26].
There has been less written on peer review of grant applications. In Sweden, concern was voiced after a 1997 study [27], which showed a gender bias against women competing for postdoctoral awards. Gender bias was not observed in Great Britain and in the United States [28], [29], although women applied less frequently than men for postdoctoral positions. Two studies [30], [31] quantified the difference in funding decisions about grants across review panels. A 1981 study [30] found a 25% reversal rate when grants were rereviewed by an independent panel, and a 1997 study [31] found disagreement on fundability for 27% of grants sent to two independent panels. A systematic review [32] concluded that despite inconsistencies about funding decisions, generally there was good agreement between individual panel members as to the overall quality of the application.
Peer-review processes for grant funding across nations are quite similar [33], [34], [35]. The grant is allocated to a committee, usually chosen by the applicant. The committee has a core set of members, who rotate every 2 to 4 years depending on the agency, and other ad hoc members. Often there is an attempt to be representative of geography, gender, race, or language. For example, in a bilingual country like Canada, grants can be submitted in either English or French. The administrative officer, in conjunction with the committee chair and perhaps another scientific officer, assigns the applications to the appropriate reviewers. As reviewers identify conflicts or other perceived obstacles to a fair process, proposals are reassigned until all applications have been allocated to two, or rarely three, reviewers. At the same time, external reviewers, who provide written assessments but do not have any decision-making capacity, are sought.
As reviewers are expected to write in-depth reviews, they usually do not have time to read any applications other than those assigned to them. To overcome this reality, additional panel members are sometimes identified as readers. As a result, the funding decision is usually based on a consensus score of two (or at most three) reviewers. The external ratings play very little role except in situations of extreme discordance between the internal reviewers' scores [36], [37]. Similarly, overall committee discussion is usually dominated by the in-depth reviewers. As funding becomes more and more difficult to obtain, only projects where both reviewers agree on a very high rating will be funded. In other words, one mediocre review will likely eliminate any chance of funding [37].
Despite science's preoccupation with accurate measurement, there is no precise method of measuring the quality of grant proposals—whether a proposal is “good enough” for funding is left to the subjective opinion of a very small number of reviewers [38].
This situation highlighted the need for alternatives to the current procedures for awarding grants. The purpose of this study, therefore, was to compare two methods of peer review on the probability of funding a given application: (1) the usual two-reviewer grant review method, and (2) an all-reviewer ranking method.
Methods
For the past 2 years, the McGill University Health Center (MUHC) Research Institute has had a pilot project competition to stimulate clinical research by encouraging applications from new investigators and new investigative teams. Funds were available for five projects in each stream, and the successful applicants are expected to eventually submit a full proposal to an external funding agency. The research community received explicit instructions about content, format, and evaluation criteria.
Statistical methods
The ranks assigned to each project by each unconflicted reviewer were summed and averaged (RANKING method). Descriptive statistics of the distribution of ranks were calculated, as was the average of the two CLASSIC reviews. Also calculated was the sum of ranks considering all possible pairs of reviewers. This was done to mimic the approximately random fashion by which projects are assigned to reviewers under normal circumstances. With 11 committee members, there are 55 possible pairings of reviewers.
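The pairing calculation above can be sketched briefly. The following is a minimal illustration, not the authors' analysis code, and the rank values are hypothetical: it averages one project's ranks across all unconflicted reviewers (RANKING method) and then scores the same project under every one of the C(11, 2) = 55 possible two-reviewer pairs, mimicking the roughly random assignment of projects to reviewer pairs.

```python
from itertools import combinations
from statistics import mean

# Hypothetical ranks assigned to one project by 11 unconflicted
# reviewers (1 = best); values are illustrative, not study data.
ranks = [3, 5, 2, 7, 4, 6, 1, 8, 3, 5, 4]

# RANKING method: average rank across all unconflicted reviewers.
ranking_score = mean(ranks)

# All possible two-reviewer pairs: C(11, 2) = 55 pairings.
pairs = list(combinations(range(len(ranks)), 2))

# Score the project under each possible pair of assigned reviewers.
pair_scores = [mean([ranks[i], ranks[j]]) for i, j in pairs]
```

The spread of `pair_scores` shows how much a project's fate could depend on which two reviewers it happened to draw, which is exactly the chance element the study set out to quantify.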
Results
Seventeen applications were received from new investigators and 15 from new teams. The focus of the request for applications was clinical, epidemiologic, health services, or population health research, and so a review panel with expertise in these areas was selected. Prior to the day of the meeting, panel members provided their independent rankings for each project for which they had no conflict (RANKING method). For the CLASSIC method, an attempt was made to have two reviews for each application.
Discussion
This study found that there is a considerable amount of variability in project evaluation under the CLASSIC method of assigning the grant to two main reviewers. There was poor agreement between the CLASSIC method, based on two reviewers, and the RANKING method, based on all members reviewing and ranking projects. The frequency with which projects met the cutoff for funding based on all possible pairs of reviewers never reached 100%; even the top three projects in each stream would have failed to meet the funding cutoff under some pairings of reviewers.
References (41)
Another step towards reshaping peer review at the NIH. Lancet (1999)
Peer review of grant applications: a harbinger for mediocrity in clinical research. Lancet (1996)
Moron peer review. Curr Biol (1999)
Something rotten at the core of science? Trends Pharmacol Sci (2001)
Galileo's peers. Pathology (2000)
Peer review of grant applications: what do we know? Lancet (1998)
How reliable is peer review? An examination of operating grant proposals simultaneously submitted to two similar peer review systems. J Clin Epidemiol (1997)
et al. Peer review in medical journals. Chest (1987)
Innovative research and NIH grant review. J NIH Res (1996)
Referees and research administrators: barriers to scientific research? Br Med J (1974)
A randomized controlled study of reviewer bias against an unconventional therapy. J R Soc Med
Anonymous. Bad peer reviewers: small proportions of referees are undermining the scientific process, especially in biology. Some of the problems are getting worse, partly because of changes in scientific publishing. Nature
The philosophical basis of peer review and the suppression of innovation. JAMA
Peer review is a two way process. Nature
Basic philosophy and concepts underlying scientific peer review. Med Hypotheses
Patterns of evaluation in science. Minerva
Author perception of peer review: impact of review quality and acceptance on satisfaction. JAMA
How well does a journal's peer review process function? A survey of authors' opinions. JAMA
Effect of acceptance or rejection on the author's evaluation of peer review of medical manuscripts. JAMA
Peer review my foot! J Biol Rhythms