# Measuring Research Capacity: Development of the PACER Tool

* Stephen K. Stacey
* Melanie Steiner-Sherwood
* Paul Crawford
* Joseph W. LeMaster
* Catherine McCarty
* Tanvir Turin Chowdhury
* Amanda Weidner
* Peter H. Seidenberg

## Abstract

Evaluating research activity in research departments and education programs is conventionally accomplished through measurement of research funding or bibliometrics. This limited perspective restricts a more comprehensive evaluation of a program’s actual research capacity, ultimately hindering efforts to enhance and expand it. The objective of this study was to conduct a scoping review of the existing literature pertaining to the measurement of research productivity in research institutions and, using those findings, to create a standardized research measurement tool, the Productivity And Capacity Evaluation in Research (PACER) Tool. The evidence review identified 726 relevant articles in a literature search of PubMed, Web of Science, Embase, ERIC, CINAHL, and Google Scholar with the keywords “research capacity” and “research productivity.” Thirty-nine English-language studies applicable to research measurement were assessed in full, and 20 were included in the data extraction. Capacity/productivity metrics were identified, and the relevance of each metric was data-charted according to 3 criteria: the metric was objective, organizational in scale, and applicable to varied research domains. This produced 42 research capacity/productivity metrics spanning 8 categories. With the expertise of a Delphi panel of researchers, research leaders, and organizational leadership, 31 of these 42 metrics, covering 7 categories (bibliometrics, impact, ongoing research, collaboration activities, funding, personnel, and education/academics), were included in the final PACER Tool. This multifaceted tool enables research departments to benchmark research capacity and research productivity against other programs, monitor capacity development over time, and provide valuable strategic insights for decisions such as resource allocation.

* ADFM/NAPCRG Research Summit 2023
* Benchmarking
* Bibliometrics
* Capacity Building
* Efficiency
* Health Care Quality Indicators
* Health Personnel
* Leadership
* Research Personnel
* Resource Allocation
* Systematic Review

## Introduction

Effective research can have a profound impact, leading to significant advancements in new technologies, medicines, and evidence-based policies. In recent years, the use of research metrics has gained significant attention as a way to assess the quality and impact of research, improving the ability to build research productivity and capacity in primary care.1,2 Measuring the impact and quality of scientific research, however, remains a challenge for researchers, institutions, and funding agencies.3–6 There are no standard guidelines for which research metrics are most informative, making it difficult to assess the relative effectiveness of different research organizations.1 A standardized data set would allow for comparison between research organizations, and within organizations over time. As a solution to this problem, the Building Research Capacity (BRC) Steering Committee commissioned a study to assemble a panel of research metrics. BRC comprises members from the North American Primary Care Research Group and the Association of Departments of Family Medicine.
Since 2016, BRC has offered resources to departments of family medicine to enhance and expand research, including consultations and leadership training through a research leadership fellowship.7 The development and monitoring of research capacity is a topic of significant practical interest to the committee, which has compiled a list of research metrics that have proved useful in providing consultations to clinical research departments and teaching fellows. Starting with this list as a template, the BRC Steering Committee commissioned a scoping review to investigate other metrics in the scientific literature that have been shown to be relevant and to collect a list of research assessment resources. The objective of this review was to generate a structured collection of metrics, termed the *Productivity And Capacity Evaluation in Research (PACER) Tool*.

## Methods

We performed a scoping review using the method outlined by Arksey and O'Malley and further developed by Levac et al.8,9 We aimed to identify previously reported metrics or tools that have been used as indicators to track, report, or develop research capacity and productivity in medicine. Arksey and O’Malley8 identified a process consisting of 6 steps:

1. Identifying the research question
2. Identifying relevant studies
3. Selecting studies
4. Charting the data
5. Collating, summarizing, and reporting results
6. Consulting (optional)

The scoping review checklist described by Cooper et al10 was used to guide the process. A medical librarian performed a literature search of PubMed, Web of Science, Embase, ERIC, CINAHL, and Google Scholar using the keywords “research capacity” and “research productivity”; further search details are given in the Supplemental Material. Forward and backward citation searching was performed to identify additional articles. No timeline restrictions were imposed, and only peer-reviewed articles were included. Deduplicator, part of the Systematic Review Accelerator package, was used to remove duplicates from the results of the database searches, producing a final list of citations, which were then uploaded to Rayyan, a web and mobile app for systematic reviews.11 This article follows the PRISMA-ScR checklist.12

## Results

For the study selection, 2 authors (SKS and PC) screened the titles and abstracts of 726 articles to determine their relevance to research capacity and/or productivity (Figure 1). Articles were selected if they met 3 criteria: 1) they developed or assessed a research tool or metric; 2) the tool or metric was objective in nature; and 3) the assessment was organizational in scope. If the primary screeners disagreed, a third screener (CM) adjudicated. Before article screening, the authors completed training to ensure consistency. After the screening round, 39 articles were selected for full-text eligibility assessment. These articles were retrieved in full and underwent independent analysis by 2 authors (SKS plus MS-S, PC, JWL, CM, TTC, or PHS) to determine study inclusion. Conflicts between reviewers were resolved by discussion. Reasons for exclusion included no evaluation of research metrics (n = 4), subjective metrics only (n = 5), not a peer-reviewed article (n = 2), and not organizational in scope (n = 8). Ultimately, 20 articles were selected for data extraction (Figure 1).
Figure 1. PRISMA flow diagram. ([view figure](http://www.jabfm.org/content/37/Supplement2/S173/F1))

For the 20 included studies, the following information was recorded on a data-charting form: article title, authors, publication year, study objective, study type, target population, sample, data collection method, study duration, location of study, and study limitations. For studies that evaluated a tool or instrument for research capacity evaluation, the following additional data were recorded: name of tool/instrument, whether the tool/instrument was original or adapted, description of the tool, how it was developed, if and how it was validated, number of metrics captured, description of metrics, and how the tool performed. Key takeaways from the data extraction are summarized in Table 1. These data were used to generate an initial list of metrics that were objective, organizational in scale, and relevant to varied research domains.

From the included articles, we extracted a set of 42 separate items that formed the first draft of the PACER Tool. Through qualitative content analysis, each of the 42 metrics was grouped into 8 domains of research capacity:

1. Bibliometrics
2. Impact
3. Ongoing research
4. Collaboration activities
5. Funding
6. Personnel
7. Education/academics
8. Recognition

Table 1. Summary of Findings from Data Extraction ([view table](http://www.jabfm.org/content/37/Supplement2/S173/T1))

Using the Delphi method, we submitted the initial tool to a panel of 31 research leaders (eg, deans, administrators, department chairs) to provide feedback, content expertise, and additional perspectives on the preliminary draft.31 The panel was chosen from among experts known to the BRC Steering Committee and represented various areas of expertise, including medicine (n = 21, from family medicine, internal medicine, psychiatry, pain and addiction medicine, and sports medicine), business administration (n = 2), finance (n = 1), research operations (n = 3), and population health (n = 4). The feedback from the Delphi panel was discussed by the authors, and unanimous consensus among the authors on necessary changes led to a second draft of the PACER Tool. This draft was sent to the panel for further comment, and the process was repeated a third time. After consensus was achieved by incorporating panelists’ feedback, the final PACER Tool was created.

The Delphi panel reported that the initial tool was too complex and requested simplification. This resulted in the removal of 13 metrics, including items such as internal publications, non-peer-reviewed publications, and book chapters. The “recognition” category was removed after the Delphi panel determined that each of the identified metrics in that category (eg, internal awards and speaking invitations) was either infeasible or irrelevant. Panel members also noted the need for more data on the impact of research; in response, we added “number of citations” and “median h-index” to the PACER Tool. The panel further commented on how each metric was described, which led to revisions for clarity, and suggested we make clear that organizations should not be expected to track every metric in the PACER Tool simultaneously, as doing so would be infeasible for most organizations.
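To make the added impact metrics concrete: the median h-index can be computed mechanically once per-paper citation counts are available for each researcher. The following is a minimal Python sketch under that assumption; the researcher names and citation counts are hypothetical, and a real implementation would pull these data from a bibliographic database.

```python
from statistics import median

def h_index(citations: list[int]) -> int:
    """Return the largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar, so h can grow
        else:
            break
    return h

# Hypothetical per-paper citation counts for a small department.
department = {
    "researcher_a": [25, 8, 5, 3, 0],  # h-index 3
    "researcher_b": [40, 12, 2],       # h-index 2
    "researcher_c": [1, 1, 0],         # h-index 1
}

print(median(h_index(c) for c in department.values()))  # -> 2
```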
The final PACER Tool consists of 31 numeric metrics that, when taken as a whole, shed light on domains of research capacity and productivity that are amenable to such analysis (Table 2).

Table 2. Productivity and Capacity Evaluation in Research (PACER) Tool ([view table](http://www.jabfm.org/content/37/Supplement2/S173/T2))

## Discussion

Research metrics are important for academic institutions because they allow institutions to evaluate the productivity and impact of departments, teams, and individual researchers.2,22 By following relevant metrics, institutions can identify strengths and weaknesses and allocate resources more effectively. Bibliometric indicators, including citation counts, h-index, and impact factor, have become widely accepted measures of scientific productivity.32,33 However, they do not reflect the quality or validity of the research, and they can be influenced by factors such as the popularity of the research topic, the size of the research community, and the publishing practices of the field.29,34,35 With enough data, each of these metrics could conceivably be normalized by discipline, career stage, and other factors, allowing more effective comparisons over time and between institutions.

Quantifying research capacity through measurements like bibliometrics or external funding often requires contextualization, which demands the collection of additional data.36 To assess whether any such data would be useful, we must be able to evaluate their effectiveness in measuring excellence of scientific output.25 Such an evaluation can seem circular, however, because it requires a prior definition of what constitutes excellence. Given the numerous possible metrics and the complex parameter landscape, it is worthwhile to define a priori what, at a minimum, may render a metric practical. To that end, Kreiman and Maunsell29 posited that useful research metrics should possess the following characteristics:

1. Quantitative
2. Based on robust data
3. Based on data that are rapidly updated and retrospective
4. Presented with distributions and confidence intervals
5. Normalized by number of contributors
6. Normalized by discipline
7. Normalized for career stage
8. Impractical to manipulate
9. Focused on quality over quantity

These requirements necessitate that multiple metrics be obtained simultaneously. For example, to normalize quantitative bibliometric data by number of contributors or career stage, one would need to compare the data with additional data regarding the quantity and demographics of researchers. What is called for, then, is not a single metric but a panel of metrics that, when taken together, creates a reasonably comprehensive picture of an organization’s research productivity and capacity. To normalize research data by discipline, a panel of metrics would need to be widely used. Such data would also need to be available to researchers so research productivity could be compared within and across organizations to discover and track trends.
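Discipline normalization, in particular, can be illustrated with a small example. The sketch below scales a raw citation count by a field-and-year baseline, in the spirit of field-weighted citation indicators; the baseline values are invented for illustration and would in practice come from a large bibliographic database.

```python
# Hypothetical field-average citations per paper for a given publication year.
FIELD_BASELINE = {
    ("family medicine", 2021): 4.2,
    ("oncology", 2021): 11.8,
}

def normalized_impact(citations: int, field: str, year: int) -> float:
    """Citations relative to the field/year average; 1.0 means 'at field average'."""
    return citations / FIELD_BASELINE[(field, year)]

# The same raw count of 8 citations reads very differently by discipline:
print(round(normalized_impact(8, "family medicine", 2021), 2))  # 1.9 (above average)
print(round(normalized_impact(8, "oncology", 2021), 2))         # 0.68 (below average)
```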
As the scientific landscape continues to evolve, research metrics will play an increasingly important role in shaping the future of scientific research.1,2 A robust research data set could serve multiple purposes, including 1) equipping department chairs and deans with a set of practical measures to monitor research development; 2) allowing third-party organizations to compare research productivity at the organization or network level; and 3) providing researchers with a data set to evaluate the research economy (ie, how scarce resources of funding, personnel, and publications are allocated).2,37 Currently, no widely adopted set of research indicators exists that could serve these purposes.

The PACER Tool was developed to meet the need, identified by our team and supported by our scoping review, for robust and comprehensive research capacity measurement systems. It provides a system of metrics that can be used to benchmark, monitor, and compare research productivity and capacity in various research settings. In particular, the PACER Tool gives research programs, funders, and researchers themselves a standardized way to benchmark research capacity and productivity, allowing for comparison across programs and within programs over time. Use of the PACER Tool will enable leaders to form a detailed evaluation of the capacity and productivity of their research enterprise and make evidence-based resourcing decisions for their own organizations. In addition, once such data become widely available, they could be used for benchmarking research enterprises across organizations. Consistent, widespread use of PACER data would allow researchers to answer important questions in research capacity development. For example, PACER data could be used to estimate the average number of new publications an organization could expect if it were to focus resources on adding more junior researchers or having fewer senior researchers.

Although the PACER Tool provides an array of metrics, it may be infeasible for an organization to obtain all data contained within the tool. Many members of the Delphi panel agreed, with one commenting that “some [measures] might be zero or not adopted, such as patents and [institutional review board] applications.” Another mentioned that using “a select subset of metrics would be best.” In response, the individual metrics in the PACER Tool are grouped by category, allowing users to focus on the domains that are most important and/or practical for their organizations. For example, a department trying to assess whether increased funding leads to more high-impact publications could monitor aspects of the Bibliometrics, Impact, and Funding categories, whereas an organization concerned with increasing the proportion of faculty with academic rank may want to focus on the Personnel and Education/academics categories.

One limitation of this study is that it may not be applicable to commercial entities or countries with emerging research systems. All authors and Delphi panel members were from academic departments in the US and Canada, although we sought perspectives from a wide array of experts across disciplines, including nonmedical fields. In addition, the review identified no non-English studies, which suggests a need for further research to extend these results to departments in non-English-speaking countries.
A limitation of the PACER Tool itself is that it conveys only quantitative data. Many areas of research capacity building, such as quality or leadership, may be more amenable to qualitative analysis. In addition, the PACER Tool does not assess important indicators that may be more applicable to smaller units (eg, metrics that focus on personal or team growth) or to scales larger than a single organization (eg, national policies or journal-level bibliometrics).

The ultimate goal of monitoring metrics such as those contained in the PACER Tool is to facilitate effective research. Organizations can use metrics in the PACER Tool to plot, trend, and compare data to generate a visible “research economy.” The PACER Tool represents a robust, multidimensional set of metrics, but research assessment is a complex and evolving field. The tool should be viewed as a starting point and may require further refinement and adaptation to specific research contexts. Further contextualization with qualitative data will continue to be important. Ongoing feedback and evaluation from colleagues in multiple disciplines and organizations, as well as ongoing validation and improvement of the metrics, will help ensure the continued relevance and usefulness of the PACER Tool.

## Conclusion

The PACER Tool offers an adaptable, multifaceted approach for monitoring research performance. By incorporating a diverse set of metrics across multiple domains, it addresses many of the limitations of existing research metrics that focus only on bibliometrics and funding. This will enable organizations to evaluate the productivity and impact of research departments, teams, and individual researchers more effectively.

## Acknowledgments

Database searching assistance was provided by a reference librarian affiliated with Louisiana State University Health Sciences Library. The Scientific Publications staff at Mayo Clinic provided editorial consultation, proofreading, and administrative and clerical support. This work reflects the opinions of the author (PC) and does not represent the views of the Department of Defense or the Uniformed Services University of the Health Sciences.

## Appendix

### Search Strategy

Databases searched: PubMed, Web of Science, Embase, ERIC, CINAHL, and Google Scholar.

* PubMed: ((research) AND (capacity building OR productivity [MeSH Terms])) AND (tool[Title/Abstract] OR indicator[Title/Abstract] OR metric[Title/Abstract])
* Embase: ('research'/exp OR 'research') AND ('capacity building'/exp OR 'capacity building') AND ('tool':ti,ab OR 'indicator':ti,ab OR metric:ti,ab)
* ERIC: ((research) AND (("capacity building")) AND (((faculty)))) AND ((TI tool OR AB tool) OR (TI indicator OR AB indicator) OR (TI metric OR AB metric))
* CINAHL: ((MH "Research+") OR (MH "Publishing+")) AND ("capacity building") AND ((TI tool OR AB tool) OR (TI indicator OR AB indicator) OR (TI metric OR AB metric))
* Google Scholar (via Publish or Perish): research AND medical AND faculty AND capacity AND tool

## Notes

* This article was externally peer reviewed.
* *Conflict of interest:* The authors report no financial conflicts of interest.
* *Funding:* No funding was received for this study.
* To see this article online, please go to: [http://jabfm.org/content/37/S2/S173.full](http://jabfm.org/content/37/S2/S173.full).
* Received for publication February 23, 2024.
* Revision received March 28, 2024.
* Accepted for publication April 1, 2024.

## References

1. Kilmarx PH, Maitin T, Adam T, et al. Increasing effectiveness and equity in strengthening health research capacity using data and metrics: recent advances of the ESSENCE mechanism. Ann Glob Health 2023;89:38.
2. Myers BA, Kahn KL. Practical publication metrics for academics. Clin Transl Sci 2021;14:1705–12.
3. Cooke J, Nancarrow S, Dyas J, Williams M. An evaluation of the 'Designated Research Team' approach to building research capacity in primary care. BMC Fam Pract 2008;9:37.
4. Ekeroma AJ, Shulruf B, McCowan L, Hill AG, Kenealy T. Development and use of a research productivity assessment tool for clinicians in low-resource settings in the Pacific Islands: a Delphi study. Health Res Policy Syst 2016;14:9.
5. Frontera WR, Fuhrer MJ, Jette AM, et al. Rehabilitation medicine summit: building research capacity. J Spinal Cord Med 2006;29:70–81.
6. Wootton R. A simple, generalizable method for measuring individual research productivity and its use in the long-term analysis of departmental performance, including between-country comparisons. Health Res Policy Syst 2013;11:2.
7. Ewigman B, Davis A, Vansaghi T, et al. Building research & scholarship capacity in departments of family medicine: a new joint ADFM-NAPCRG initiative. Ann Fam Med 2016;14:82–3.
8. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005;8:19–32.
9. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci 2010;5:69.
10. Cooper S, Cant R, Kelly M, et al. An evidence-based checklist for improving scoping review quality. Clin Nurs Res 2021;30:230–40.
11. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev 2016;5:210.
12. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018;169:467–73.
13. Sandstrom U, Sandstrom E. A metric for academic performance applied to Australian universities 2001-2004. Accessed Oct 2, 2023. Available at: [http://eprints.rclis.org/10577/1/2metrics.pdf](http://eprints.rclis.org/10577/1/2metrics.pdf).
14. Humphries D, Ma M, Collins N, et al. Assessing research activity and capacity of community-based organizations: refinement of the CREAT instrument using the Delphi method. J Urban Health 2019;96:912–22.
15. Gill SD, Gwini SM, Otmar R, Lane SE, Quirk F, Fuscaldo G. Assessing research capacity in Victoria's south-west health service providers. Aust J Rural Health 2019;27:505–13.
16. Holden L, Pager S, Golenko X, Ware RS. Validation of the research capacity and culture (RCC) tool: measuring RCC at individual, team and organisation levels. Aust J Prim Health 2012;18:62–7.
17. Lee SA, Byth K, Gifford JA, et al. Assessment of health research capacity in Western Sydney Local Health District (WSLHD): a study on medical, nursing and allied health professionals. J Multidiscip Healthc 2020;13:153–63.
18. Rahman M, Fukui T. Biomedical research productivity: factors across the countries. Int J Technol Assess Health Care 2003;19:249–52.
19. Huang JS. Building research collaboration networks - an interpersonal perspective for research capacity building. J Res Adm 2014;45:89–112.
20. Rubio DM. Common metrics to assess the efficiency of clinical research. Eval Health Prof 2013;36:432–46.
21. Sarre G, Cooke J. Developing indicators for measuring research capacity development in primary care organizations: a consensus approach using a nominal group technique. Health Soc Care Community 2009;17:244–53.
22. Cooke J. A framework to evaluate research capacity building in health care. BMC Fam Pract 2005;6:44.
23. Bates I, Akoto AY, Ansong D, et al. Evaluating health research capacity building: an evidence-based tool. PLoS Med 2006;3:e299.
24. Matus J, Wenke R, Hughes I, Mickan S. Evaluation of the research capacity and culture of allied health professionals in a large regional public health service. J Multidiscip Healthc 2019;12:83–96.
25. Patel VM, Ashrafian H, Ahmed K, et al. How has healthcare research performance been assessed? A systematic review. J R Soc Med 2011;104:251–61.
26. Cole DC, Boyd A, Aslanyan G, Bates I. Indicators for tracking programmes to strengthen health research capacity in lower- and middle-income countries: a qualitative synthesis. Health Res Policy Syst 2014;12:17.
27. Bilardi D, Rapa E, Bernays S, Lang T. Measuring research capacity development in healthcare workers: a systematic review. BMJ Open 2021;11:e046796.
28. Kotsemir M, Shashnov S. Measuring, analysis and visualization of research capacity of university at the level of departments and staff members. Scientometrics 2017;112:1659–89.
29. Kreiman G, Maunsell JH. Nine criteria for a measure of scientific output. Front Comput Neurosci 2011;5:48.
30. Matus J, Walker A, Mickan S. Research capacity building frameworks for allied health professionals - a systematic review. BMC Health Serv Res 2018;18:716.
31. Boulkedid R, Abdoul H, Loustau M, Sibony O, Alberti C. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PLoS One 2011;6:e20476.
32. Godin B. On the origins of bibliometrics. Scientometrics 2006;68:109–33.
33. Roldan-Valadez E, Salazar-Ruiz SY, Ibarra-Contreras R, Rios C. Current concepts on bibliometrics: a brief review about impact factor, Eigenfactor score, CiteScore, SCImago journal rank, source-normalised impact per paper, H-index, and alternative metrics. Ir J Med Sci 2019;188:939–51.
34. Blakeman K. Bibliometrics in a digital age: help or hindrance. Sci Prog 2018;101:293–310.
35. Chandrashekhar Y, Narula J. Challenges for research publications: what is journal quality and how to measure it? J Am Coll Cardiol 2015;65:1702–5.
36. Belter CW. Bibliometric indicators: opportunities and limits. J Med Libr Assoc 2015;103:219–21.
37. Lodato JE, Aziz N, Bennett RE, Abernethy AP, Kutner JS. Achieving palliative care research efficiency through defining and benchmarking performance metrics. Curr Opin Support Palliat Care 2012;6:533–42.