Abstract
Background: Practice facilitation supports practice change in clinical settings. Despite its widespread use, little is known about how facilitators enable change.
Objective: This study identifies which implementation strategies practice facilitators used and the frequency of their use in a study to improve the quality of cardiovascular care in primary care.
Design: Cross-sectional analysis of data collected by practice facilitators in the Healthy Hearts Northwest (H2N) study.
Participants: Notes collected by facilitators in the H2N study.
Approach: We coded these field notes for a purposeful sample of 44 practices to identify Expert Recommendations for Implementation Change (ERIC) strategies used with each practice and calculated the proportion of practices where each implementation strategy was coded at least once. Strategies were categorized as foundational (used in 80% to 100% of practices), moderately used (20% to <80% of practices), rarely used (1% to <20% of practices), or absent (0%).
Key Results: We identified 26 strategies used by facilitators. Five strategies were foundational: Develop and/or implement tools for quality monitoring, Assess barriers that may impede implementation, Assess for readiness or progress, Develop and support teams, and Conduct educational meetings.
Conclusions: Commonly used strategies can help guide development of the core components of practice facilitation strategies.
- Cardiovascular Risk Factors
- Cross-Sectional Studies
- Implementation Science
- Primary Health Care
- Quality Improvement
Introduction
Implementation strategies are “methods or techniques used to enhance the adoption, implementation, and sustainability of a clinical program or practice”.1 The Expert Recommendations for Implementation Change (ERIC) team sought to improve the conceptual clarity, relevance, and comprehensiveness of implementation strategies; the resulting 73 implementation strategies are designed to be used alone or in combination in implementation research and practice.2 Because each implementation strategy represents a potentially complex and variable set of activities,2 there is a need for further understanding of the underlying components when operationalizing a specific implementation strategy.1,2
Within the ERIC compendium of strategies, facilitation is an implementation strategy defined as “a process of interactive problem solving and support that occurs in a context of a recognized need for improvement and a supportive interpersonal relationship.”3,4 It is a strategy undertaken by a trained facilitator who provides support to both individuals and teams as they navigate the complex process of change and overcome contextual barriers.4 Facilitation has also been described as the active ingredient that aligns the proposed innovation or improvement to the individuals and teams involved and the context in which they work, thereby enabling successful implementation.4 Facilitation has also been recognized as a “meta-strategy,” one that might use some or many of the other ERIC-defined strategies when implementing changes in health care settings.5
Practice facilitation (PF) is an established approach to facilitation specifically designed to support practice change in ambulatory clinical settings.5–10 Practice facilitation involves trained facilitators who work with ambulatory primary care clinical practices and health care teams to identify and address challenges in implementing evidence-based interventions. Nguyen et al11,12 described 4 key components of practice facilitation: remaining flexible to align with practice and organizational priorities, building relationships, providing value through information technology expertise, and building capacity and creating efficiencies.
Despite widespread use of practice facilitation as an implementation strategy in primary care and its evidence of effectiveness for improving implementation of clinical interventions and guidelines, operationalizing facilitation requires an understanding of how a facilitator enables successful implementation.13–15 Reported explanations of the activities and principles underlying facilitation to date are largely based on organizational theory5 or on summarizing retrospective assessments of practice facilitation efforts.16 Hemler et al17 used diverse data sources, including direct observation of facilitation efforts, and found that across on-the-ground quality improvement cooperatives, facilitators employed a variety of strategies to help practices conduct quality improvement when they encountered challenges in accessing performance monitoring data. Clearly, a greater understanding is needed about how a facilitator engages ambulatory primary care teams, and which implementation strategies might be commonly used during practice facilitation.17
The objective of this study is to identify the implementation strategies employed by practice facilitators and the frequency with which each strategy was used over the course of a large-scale implementation effort designed to improve the quality of cardiovascular disease preventive care in primary care settings, the AHRQ-funded EvidenceNOW Healthy Hearts Northwest (H2N) study.18 These findings will advance our knowledge of practice facilitation and provide guidance for the design and evaluation of future efforts that use practice facilitation as an implementation strategy.
Methods
Setting
A detailed description and the primary research results of the H2N study have been previously published.18,19 Briefly, H2N was a large, pragmatic clinical trial of interventions designed to increase quality improvement capacity with a focus on improving practice performance on measures of cardiovascular disease risk across 209 smaller primary care practices in Washington, Oregon, and Idaho. All practices received practice facilitation (ie, coaching) from 1 of 17 practice facilitators over 15 months of active support. The objectives of the facilitation were to 1) build quality improvement (QI) capacity, and 2) develop and implement an improvement plan for improving practice performance on measures of cardiovascular disease risk for each practice using “Plan-Do-Study-Act” (PDSA) cycles. The facilitation protocol included at least 5 face-to-face quarterly practice facilitation visits, with at least monthly contact (in-person visits, telephone calls, or e-mails) between the in-person visits. Facilitators participated in two 2-day in-person training sessions and in monthly coaching calls to harmonize their approach.
Data Sources
Practice facilitators kept written field notes and observations about their facilitation activities in a web-based intervention tracker after each encounter, whether in-person or virtual. The notes included a combination of discrete field data about the activities completed as well as free-text fields where practice facilitators were encouraged to record observations about the clinics, staff, and quality improvement activities. This study was conducted as a secondary analysis of deidentified data and did not require institutional review board approval. For this analysis we used the open-text data recorded by practice facilitators for each selected practice. We also used data describing practice characteristics (eg, geographic location, number of clinicians) from the H2N practice questionnaire completed by an office manager in each practice at the start of the study.
Selecting Practices for Analysis
Because of the large number of practices and the volume of field notes kept by the facilitators, we used practice characteristics data from the baseline H2N practice questionnaire to select a subset of practices for analysis. We based our selection on 2 of the 3 components in the conceptual framework for practice improvement proposed by Solberg.20 This framework posits that the presence of 3 elements facilitates improvement efforts: change process capability, prioritization of efforts to improve, and care process content. The study questionnaire asked about the practice’s self-reported priority for improving cardiovascular disease preventive care in the next year. Practices were categorized as either low (1 to 5) or high priority (6 to 10). The questionnaire also included an adapted version of the Change Process Capability Questionnaire (CPCQ).21–23 The CPCQ was developed and validated by Solberg and colleagues to measure organizational capability to manage change within a primary care practice setting.24 We calculated CPCQ scores and categorized practices’ CPCQs as high (7 to 28) or low (−28 to +6). We randomly selected 6 practices from each of the 8 strata created by the 3 baseline characteristic categories for analysis (high/low priority, high/low CPCQ, and Qualis or Oregon Rural Practice-based Research Network facilitation organization). We reviewed the completeness of coaching notes for each of the sampled practices and excluded practices for which coaching notes were not available for 3 or more consecutive months of the 15-month study period (n = 4), resulting in 44 practices with complete data for analysis.
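The stratified sampling described above (2 priority levels × 2 CPCQ levels × 2 facilitation organizations = 8 strata, 6 practices drawn per stratum) can be sketched in code. This is an illustrative reconstruction only, not the study’s actual procedure or data; the practice records, field names, and score values below are hypothetical.

```python
import random

# Hypothetical pool of practices: 26 per stratum (~209/8 from the study),
# with example questionnaire scores chosen to land in each category.
practices = []
for priority in (2, 8):              # self-reported priority: low (1-5), high (6-10)
    for cpcq in (-5, 15):            # adapted CPCQ score: low (-28 to +6), high (7 to 28)
        for org in ("Qualis", "ORPRN"):
            for _ in range(26):
                practices.append({"priority": priority, "cpcq": cpcq, "org": org})

def stratum(p):
    """Assign a practice to one of the 8 sampling strata."""
    return ("high" if p["priority"] >= 6 else "low",
            "high" if p["cpcq"] >= 7 else "low",
            p["org"])

# Group practices by stratum.
by_stratum = {}
for p in practices:
    by_stratum.setdefault(stratum(p), []).append(p)

# Randomly draw 6 practices from each of the 8 strata.
random.seed(0)  # seed only for a reproducible illustration
sample = [p for members in by_stratum.values() for p in random.sample(members, 6)]
print(len(by_stratum), len(sample))  # 8 strata, 48 practices before exclusions
```

In the study itself, 4 of the 48 sampled practices were then excluded for incomplete coaching notes, yielding the 44 analyzed.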
Coding Scheme
We created a coding scheme to identify implementation strategies documented by practice facilitators during the H2N study. The coding scheme used for categorizing practice facilitator implementation strategies is based on the ERIC framework, which identifies 73 distinct implementation strategies.2 The research team reviewed the 73 ERIC implementation strategies and removed 42 that the team judged not relevant to the H2N intervention (eg, Use financial strategies was eliminated because changing clinical financial structures was beyond the scope of practice facilitation in the H2N study). The final ERIC strategy list was also reviewed for completeness and gaps by the practice facilitation supervisor for the H2N study, who verified that it reflected activities included in facilitation training or guidance provided to facilitators during the study. The primary coder (JS) did preliminary coding of a sample of 6 practices. The full research team (LMB, MP, AC) dual-coded, reviewed, and reconciled random samples of text from these 6 practices. During this review, 2 implementation strategy codes (Conduct cyclical small tests of change, Facilitation) were removed because they represented prescribed implementation strategies of the planned intervention for the study rather than implementation strategies chosen by the facilitator. One implementation strategy needed further specification: the code based on the ERIC strategy “Assess for readiness and identify barriers and facilitators” was disaggregated into 3 separate implementation strategies: Assess for readiness or progress, Assess barriers that may impede implementation, and Assess facilitators that enhance implementation. Finally, 2 codes were used interchangeably in the coding and were merged in the analysis: Capture and share local knowledge and Promote network weaving.
One ERIC implementation strategy, Organize clinician implementation team meetings, was renamed Develop and support teams to allow coding to capture support for implementation teams that included nonclinicians. This resulted in a final codebook with 30 implementation strategy codes.
Coding
An experienced qualitative researcher (JS) applied the coding framework to identify implementation strategy codes present in the practice facilitator notes, working through all content for each practice until all session notes from the 44 practices had been coded. A second coder (GK) independently coded a subset of practice session notes (24 note entries randomly selected from across 13 practices). In 66.7% of the notes reviewed together, the second coder identified at least 1 code the primary coder had not applied (an average of 1.1 additional codes per note). Using this figure as an estimate, we calculated that the primary coder identified an average of 87.7% of codes. Representative quotes for each implementation strategy are displayed in the Appendix.
Analysis
For each implementation strategy, we calculated the number and proportion of practices where that implementation strategy was coded at least once at any time during the 15-month intervention period. We also calculated the proportion of practices for which the implementation strategy was coded at least once, for each of the 5 quarters of the intervention period. We then categorized each implementation strategy as commonly used (used at least once in 80% to 100% of practices), moderately used (used at least once in 20% to <80% of practices), rarely used (used at least once in 1% to <20% of practices), or absent (0%).
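As a concrete sketch of this categorization rule (illustrative code, not from the study; the example count is hypothetical):

```python
def categorize(pct_practices):
    """Map the % of practices where a strategy was coded at least once
    to the frequency categories used in the analysis."""
    if pct_practices >= 80:
        return "commonly used"      # 80% to 100% of practices
    if pct_practices >= 20:
        return "moderately used"    # 20% to <80%
    if pct_practices >= 1:
        return "rarely used"        # 1% to <20%
    return "absent"                 # 0%

# Hypothetical example: a strategy coded at least once in 33 of 44 practices.
pct = 100 * 33 / 44
print(categorize(pct))  # moderately used (33/44 = 75%)
```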
Results
Primary care practice characteristics are summarized in Table 1. There were 21 (48%) rural and 23 (52%) urban practices included in our analysis (Table 1). A third (32%) of practices employed 6 or more clinicians, 18% of practices were federally qualified health centers, and 50% were independent primary care practices.
Of the 30 implementation strategies in the codebook, we identified 26 that were used by practice facilitators in this study (Table 2). Among the 26 strategies, 5 were commonly used, 9 were moderately used, and 12 were rarely used (Table 2). One commonly used implementation strategy, Develop and/or implement tools for quality monitoring, was used with 100% of the practices analyzed. Several of the moderately used implementation strategies were used with more than 70% of practices, including Use data experts and Distribute educational materials. Moderately used implementation strategies that were used with fewer than 40% of practices included Assess facilitators that enhance implementation, Train for leadership, and Promote adaptability. The rarely used implementation strategies included Audit and provide feedback, Facilitate relay of clinical data to clinicians, Remind clinicians, Involve patients/consumers and family members, and Obtain and use patients/consumers and family feedback.
We examined the proportion of practices with which each implementation strategy was used for each of the 5 quarters of the 15-month practice facilitation effort (Table 2). We found that the commonly used strategies were generally used with a high proportion of practices across multiple quarters. The rarely used strategies were used only with a small number of practices in each quarter.
We explored whether there was an association between the baseline practice characteristics used for sampling practices and the implementation strategies that were commonly used. The group of commonly used implementation strategies remained commonly used within all practice subgroups (eg, high CPCQ vs low CPCQ) (data not shown).
Limitations
Our study has several potential limitations. There was variability in the amount of information recorded by the practice facilitators in their field notes as well as in the types of information recorded. As a result, there may be ERIC strategies that facilitators employed but that we were unable to identify, or instances where the field notes did not capture a specific strategy employed within a practice, resulting in an underestimation of the frequency of use. Building our coding framework from the ERIC strategies may have limited the types of implementation activities we were able to identify. We were limited in the number of practices we were able to include due to time and resource constraints. Our selection of practices was also limited to those that had complete data on the variables used to stratify and select our sample: the priority score, CPCQ score, and coaching organization. It is possible that selecting practices for analysis based on characteristics other than the 3 used in this study might result in a different distribution of ERIC strategies employed by these facilitators. The quantitative summary of implementation strategies employed by practice facilitators may not fully demonstrate the variation and nuance of implementation strategy selection and application in practice facilitation efforts. An additional limitation is that we did not obtain information or perspectives from clinicians or other team members in the participating practices. Thus, we are unable to assess what additional resources or support the practices may have needed for this study’s quality improvement effort.
Discussion
This study identified 5 commonly used implementation strategies enacted by practice facilitators in a study designed to test the effectiveness of practice facilitation for improving quality of cardiovascular disease preventive care in primary care settings.21 The commonly used strategies were employed with a large proportion of practices across the duration of the practice facilitation effort. This suggests that these strategies may be important to an effective practice facilitation approach. It is more challenging to interpret the temporal pattern for moderately and rarely used strategies because of the relatively small numbers. How do these findings align with previously published implementation research findings on facilitation? Practice facilitation has been conceptualized as an interactive, problem-solving approach to supporting improvement,23 and 3 of the commonly used strategies (Develop and support teams, Assess for readiness or progress, and Assess barriers that may impede implementation) reflect interactions between the facilitator and the practice that support a problem-solving approach. Walunas et al24 found that the most successful practice facilitation approaches include elements of training the individuals within the practice, which is reflected in facilitators’ frequent use of the Develop and support teams and Conduct educational meetings implementation strategies. This also aligns with the health care facilitation logic model and mechanistic map developed by Kilbourne and colleagues, who describe the importance of activities that promote engagement and acceptance of the facilitator by the involved social system within the care setting.25
In a simulation study by Waltz et al,24 implementation scientists and implementation practitioners were asked to rate each of the 73 ERIC implementation strategies in terms of perceived feasibility and perceived importance. The commonly used strategies we identified in our study were all rated as highly feasible by both implementation scientists and implementation practitioners in the Waltz study. While our study did not gather information from facilitators about their perceptions of the feasibility of strategies, our findings may suggest avenues for future research to explore facilitator and practice characteristics that influence selection of implementation strategies. Furthermore, future research might examine whether the use of practice facilitation approaches that enact most or all of the commonly used strategies is associated with improvements in patient outcomes. This would further demonstrate the value of commonly used practice facilitation strategies.
Waltz and colleagues also used responses from implementation scientists and implementation practitioners to group related strategies into conceptual categories. Three of the commonly used implementation strategies our study identified (Assess for readiness or progress, Assess barriers that may impede implementation, and Develop and/or implement tools for quality monitoring) are included in the evaluative and iterative strategies category described by Waltz et al.24 This aligns with the previously noted conceptualization of facilitation as an interactive and iterative problem-solving process.26–28 The remaining commonly used strategies map to the Waltz et al training and educating stakeholders category (Conduct educational meetings), or the developing stakeholder interrelationships category (Develop and support teams). These 3 categories of implementation strategies identified by Waltz may serve as a framework for the implementation strategies that form the foundation of practice facilitation.
We identified 9 implementation strategies moderately used by practice facilitators and 12 implementation strategies rarely used by practice facilitators. The facilitators’ use of these strategies with some, but not all, practices suggests that facilitators tailor implementation efforts to local context. This would be consistent with the finding of a study of practice facilitation by Nguyen et al12 that “remaining flexible and aligning with organizational priorities” was viewed as a critical approach for practice facilitators. It is also consistent with the recognition that deploying a single set of ERIC implementation strategies for an intervention may not be appropriate for all settings and that tailoring the selection of implementation strategies to the setting is important.29–31 This is also consistent with Kilbourne and colleagues’ mechanistic map of facilitation, whereby contextual factors are predictive of the facilitation activities undertaken.25
Implications
Given the limited resources of many health care settings and the popularity of facilitation as an approach to improving quality and outcomes of care, our findings, which describe the activities of practice facilitation, address a critical gap in understanding the dynamic process by which facilitators engage with primary care practices.25 These commonly used strategies may form a framework for the design of practice facilitation interventions in both research and practice.25 If our findings are confirmed in future research, applications could include preparation and training of facilitators designed to ensure a high level of competency in the 5 commonly used strategies. However, future implementation science research is needed to expand our knowledge of how facilitation works to improve care in primary care settings. From a methodologic perspective, this will require consensus among researchers on what observations or data should be routinely collected in studies that use practice facilitation, ideally grounded in a theory of how and why facilitation is effective.
Conclusions
The 5 implementation strategies identified as commonly used by practice facilitators align with the conceptualization of practice facilitation as an interactive, problem-solving approach to supporting change in ambulatory practice.4,5 The commonly used implementation strategies may form a framework for the design of practice facilitation interventions in both research and practice and inform the training of facilitators who would use these strategies.
Acknowledgments
We acknowledge Judith Schaffer, who contributed to design and data analysis for this project.
Appendix.
Notes
This article was externally peer reviewed.
Funding: Healthy Hearts Northwest: An EvidenceNOW collaborative: Supported by grant number R18HS023908 from the Agency for Healthcare Research and Quality. Institute of Translational Health Sciences: Supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1 TR002319.
Conflict of interest: None.
To see this article online, please go to: http://jabfm.org/content/37/3/444.full.
- Received for publication August 18, 2023.
- Revision received December 15, 2023.
- Accepted for publication January 2, 2024.