Abstract
This paper examines the peer review procedure of a national science funding organization (the Swiss National Science Foundation) against the three most frequently studied criteria: reliability, fairness, and validity. The data analyzed consist of 496 applications for project-based funding in biology and medicine from the year 1998. Overall reliability is fair, with an intraclass correlation coefficient of 0.41 and sizeable differences between biology (0.45) and medicine (0.20). Multiple logistic regression models reveal only scientific performance indicators as significant predictors of the funding decision, while all potential sources of bias (gender, age, nationality, and academic status of the applicant, requested amount of funding, and institutional environment) are non-significant predictors. Bibliometric analysis provides evidence that the decisions of a public funding organization for basic project-based research are in line with the future publication success of applicants. The paper also argues for an expansion of approaches and methodologies in peer review research, by focusing increasingly on process rather than outcome and by including a more diverse set of methods, e.g., content analysis. Such an expansion will be necessary to advance peer review research beyond the abundantly treated questions of reliability, fairness, and validity.
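The reliability figures above are intraclass correlation coefficients (ICC), which quantify how much of the variance in reviewer scores is attributable to differences between applications rather than disagreement among reviewers. As a minimal sketch, assuming a one-way random-effects model (ICC(1,1)) with a complete applications-by-reviewers score matrix, the coefficient can be computed from the between- and within-application mean squares; the scores below are invented for illustration and are not data from the study:

```python
# Sketch of a one-way random-effects intraclass correlation, ICC(1,1):
# ICC = (MS_between - MS_within) / (MS_between + (k - 1) * MS_within)
# Assumes every application is rated by the same number of reviewers (k)
# with no missing scores. All data here are invented for illustration.
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """ratings: n_applications x k_reviewers matrix of scores."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-application mean square (variance of application means)
    ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    # Within-application mean square (disagreement among reviewers)
    ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical example: 5 applications, each scored by 3 reviewers
scores = np.array([
    [5, 4, 5],
    [2, 3, 2],
    [6, 5, 6],
    [3, 3, 4],
    [1, 2, 1],
])
print(round(icc_oneway(scores), 2))  # → 0.9
```

In this toy example reviewers agree closely, so the ICC is high; the study's value of 0.41 indicates that less than half of the score variance reflects systematic differences between applications.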
Cite this article
Reinhart, M. Peer review of grant applications in biology and medicine. Reliability, fairness, and validity. Scientometrics 81, 789–809 (2009). https://doi.org/10.1007/s11192-008-2220-7