Abstract
Objective: Chronic disease collaboratives help practices redesign care delivery. The North Carolina Improving Performance in Practice program provides coaches to guide implementation of 4 key practice changes: registries, planned care templates, protocols, and self-management support. Coaches rate progress using the Key Driver Implementation Scales (KDIS). This study examines whether higher KDIS scores are associated with improved diabetes outcomes.
Methods: We analyzed clinical and KDIS data from 42 practices. We modeled whether higher implementation scores at year 1 of participation were associated with improved diabetes measures during year 2. Improvement was defined as an increase in the proportion of patients with hemoglobin A1C values <9%, blood pressure values <130/80 mmHg, and low-density lipoprotein (LDL) levels <100 mg/dL.
Results: Statistically significant improvements in the proportion of patients who met the LDL threshold were noted with higher “registry” and “protocol” KDIS scores. For hemoglobin A1C and blood pressure values, none of the odds ratios were statistically significant.
Conclusions: Practices that implement key changes may achieve improved LDL control among their patients with diabetes. Our data support the importance of registry implementation and protocol use as key elements of improving patient care. The KDIS tool is a pragmatic option for measuring practice changes that are rooted in the Chronic Care Model.
The US health system requires substantial change to deliver safe, efficient, and effective patient care.1 In the 2001 Crossing the Quality Chasm report, the Institute of Medicine specifically states that systems must be “redesigned” because existing systems fail to support high-quality care for chronic diseases. To aid in redesign efforts, national and state-level organizations have created programs in which practice staff and providers receive instruction and assistance in implementing quality improvement (QI) strategies in their clinical settings.2–8 Such programs often are called chronic disease collaboratives; teams of clinicians and office staff are taught experientially how to implement key drivers of practice changes that are rooted in the Chronic Care Model (CCM).9,10
To date, observational studies regarding the impact of collaborative participation on outcomes suggest that participation can positively affect some process and outcome measures.3–5,7,9 However, since collaboratives involve simultaneously learning many new skills and implementing several facets of chronic disease care, it is challenging to tease out which specific facets are of value.11 In addition, how well such activities are actually implemented in clinical settings is poorly understood in clinical research,12,13 and the Patient Centered Outcomes Research Institute has identified implementation challenges as a key barrier to the widespread adoption of potentially effective interventions.14
To both overcome barriers to adoption and accurately assess the effectiveness of an intervention, measures are needed that validly and reliably capture how well interventions are implemented at the organizational level.15 Although some work has been done to create such implementation measures in evaluations of chronic care collaboratives, this work was done several years after the work in the practices commenced.12,16 Thus, to date there is little information linking prospectively collected implementation assessments with improvements in patient outcomes.
Using a sample of practices involved in the North Carolina Improving Performance in Practice (IPIP) program, a statewide QI project in North Carolina, we examined whether the extent of implementation of 4 key drivers of practice change was associated with improved population-level outcomes for diabetes care as indicated by measures of serum glycohemoglobin (A1C), low-density lipoprotein (LDL), and blood pressure (BP).
Methods
Setting
The North Carolina IPIP program is a nationally supported, state-based QI program that is rooted in the CCM.9,10 IPIP combines 2 improvement designs: a “1-to-many” or collaborative design17 and a “1-to-1” practice coaching design.6 By participating in IPIP, primary care providers and staff are introduced to QI methods with the help of an onsite QI practice coach. The practice team learns how to implement and monitor their QI efforts and participates in learning networks with peer organizations that share practice improvement strategies. All primary care practices in North Carolina are eligible to participate in the IPIP program. Practices receive $2000 for participating and providers can obtain continuing medical education credits. The work also provides a mechanism to fulfill requirements for Part IV Maintenance of Certification.
The IPIP organization chose nationally endorsed clinical quality measures to evaluate the impact of their diabetes QI program on the following patient population-level outcomes: A1C, LDL, and BP. However, unlike groups such as the National Committee for Quality Assurance, which puts forth performance measurement thresholds for practices to reach, IPIP establishes performance goals for practices to aim for based on the experiences of the better-performing practices.
IPIP's first year included a small cohort of practices and 2 practice coaches that could pilot test many of the nascent program components. After evaluating experiences over the first year, the national team, in collaboration with state-level IPIP stakeholders and international experts in systems improvement, agreed that a more formal guidance document, called a change package, was essential to enabling the change processes within practices. To provide a measurement tool to capture the implementation of change package activities, the IPIP leadership simplified and sequenced 6 elements of the CCM to 4 key drivers of practice change,6 resulting in the development of the Key Driver Implementation Scales (KDIS).
The KDIS ordinal ratings are used by practice coaches to document a practice's adoption and the extent of implementation of the 4 key drivers on a monthly basis. The KDIS prospectively captures the extent of implementation of (1) a disease registry, (2) the use of planned care templates to standardize items that are addressed with every diabetic patient at every visit, (3) comprehensive care protocols to guide global diabetic care beyond what is included in the planned care templates, and (4) self-management support (SMS) systems within a practice. In general, a KDIS score of 0 indicates that the practice has had no activity in the respective practice change variable, while a 1 indicates that a particular item, such as a type of registry or a specific planned care template, has been selected for use in the practice. A score of 2 signifies that staff roles for an activity have been assigned or that an item such as a disease registry has been installed. A score of 3 signals that a practice is actually testing an item, while a 4 indicates that a large percentage of the practice is using the item. Practice-wide dissemination is indicated by a score of 5. A sample of the scale, limited to the registry item, is provided in Table 1. A full description of the key drivers and scale are available at forces4quality.org/af4q/download-document/3470/960.
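To make the rubric concrete, the scale can be thought of as an ordinal rating applied independently to each of the 4 key drivers. The following minimal sketch (all names are illustrative paraphrases, not part of the official IPIP instrument) encodes the milestone associated with each score:

```python
# Illustrative encoding of the KDIS rubric described above; the driver
# names and level wording are paraphrased, not the official instrument.
KEY_DRIVERS = ["registry", "planned_care_template", "protocol", "sms"]

KDIS_LEVELS = {
    0: "No activity on this key driver",
    1: "A specific item has been selected for use in the practice",
    2: "Staff roles assigned, or the item has been installed",
    3: "The practice is testing the item",
    4: "A large percentage of the practice is using the item",
    5: "Practice-wide dissemination",
}

def check_rating(driver: str, score: int) -> str:
    """Return the milestone for a driver's score, rejecting invalid input."""
    if driver not in KEY_DRIVERS:
        raise ValueError(f"unknown key driver: {driver}")
    if score not in KDIS_LEVELS:
        raise ValueError(f"KDIS scores are integers 0-5, got {score}")
    return KDIS_LEVELS[score]
```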
The practice coaches assign the first KDIS scores soon after getting started with a new practice and subsequently submit these data each month to the state director. When the KDIS was first created, it was generally expected that the practices would focus on the 4 key drivers in sequence, starting with the development of a disease registry, followed by the use of a care protocol and planned care templates, and finally the development of SMS tools for patients, with the understanding that these concepts overlap and that practices start at different levels along the improvement continuum. This sequencing referred to when work on each element of practice change was to start, not when it was to finish.
The KDIS scores not only allow for individual and aggregate practice data review but also provide data to the IPIP program leadership for use in continuous evaluation of the program. The scores also can be used to demonstrate to program funding agencies that practice-level changes are occurring and are doing so at a time far earlier than when patient-level outcome measures can be generated. To quote a key IPIP stakeholder, the KDIS scores capture practice changes that occur “while the clinical data are catching up.”
Data Sources
We collected 2 sets of data for 42 practices that participated in the diabetes track of the IPIP program, starting in February 2008 or later: (1) monthly KDIS scores, as described above, and (2) monthly population-level clinical data that included numerators and denominators used to calculate the percentage of a practice's diabetic patients whose values of A1C were <9%, LDL <100 mg/dL, and in-office BP measurements <130/80 mmHg.
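As an illustration of how these monthly reports translate into the population-level measures, the sketch below (with hypothetical column names, not the IPIP data pipeline) computes each proportion from its numerator and denominator:

```python
import pandas as pd

# Hypothetical monthly clinical report: one row per practice per month.
# Numerators count patients meeting a threshold (e.g., A1C < 9%,
# LDL < 100 mg/dL); denominators count eligible diabetic patients.
reports = pd.DataFrame({
    "practice_id": [1, 1, 2, 2],
    "month":       [10, 11, 10, 11],
    "a1c_num":     [55, 58, 40, 41],
    "a1c_den":     [80, 82, 60, 60],
    "ldl_num":     [48, 50, 30, 33],
    "ldl_den":     [80, 82, 60, 60],
})

# Percentage of each practice's diabetic patients meeting each threshold.
for measure in ("a1c", "ldl"):
    reports[f"{measure}_pct"] = reports[f"{measure}_num"] / reports[f"{measure}_den"]
```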
Analysis
To be included in our analysis, practices needed to have (1) participated with a practice coach for at least 13 months starting in February 2008 or beyond, (2) submitted clinical data reports in months 10, 11, or 12, and (3) submitted another clinical data report at some point during their second year of participation with their coach. Our data collection timeline is presented in Figure 1.
For our analysis we calculated a KDIS score representing the score at year 1 for each of the 4 key drivers within each clinic. To reduce the influence of any single spurious or missing monthly value, this score represents the average of an individual practice's KDIS scores at months 10, 11, and 12. The distribution of 1-year scores among the 42 practices is represented in Figure 2.
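A minimal sketch of this calculation (again with illustrative column names) averages each practice's month 10 to 12 scores per key driver; months that were not reported simply drop out of the mean:

```python
import pandas as pd

# One row per practice per month, with a 0-5 KDIS score per key driver.
kdis = pd.DataFrame({
    "practice_id": [1, 1, 1, 2, 2, 2],
    "month":       [10, 11, 12, 10, 11, 12],
    "registry":    [3, 4, 4, 2, 2, 3],
    "protocol":    [2, 2, 3, 1, 1, 1],
})

year1_scores = (
    kdis[kdis["month"].isin([10, 11, 12])]
    .groupby("practice_id")[["registry", "protocol"]]
    .mean()  # missing months are simply absent from the average
)
```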
To test for associations between the year 1 score and subsequent improvements in population-level clinical outcomes, we created a model for each outcome estimating whether higher KDIS scores at the year 1 mark were associated with improvements in a practice's clinical data during the second year of IPIP participation. Improved clinical data were defined as any increase in the proportion of diabetic patients with hemoglobin A1C levels <9%, BP values <130/80 mm Hg, and LDL levels <100 mg/dL during the second year of practice coach involvement. Within the model, this increase can be detected through an odds ratio of >1.
In our model we controlled for clinical outcomes at the end of year 1. We used the means of the clinical outcomes measures at months 10, 11, and 12, again choosing a range of time points for robustness and to capture data that suggested practice engagement with a practice coach for at least 1 year. We also included time and the interaction between time and the KDIS scores to capture changes in the association between KDIS scores and clinical outcome over time. We ran a repeated measures logistic regression to account for the repeated measures within clinics over time. The outcome variable was the proportion of patients who met a clinical threshold out of the total number of eligible patients seen (eg, those with diabetes). We clustered the data analysis at the practice level and across time using the method described by Williams.18 This structure assumes that within-clinic model residuals closer in time are more highly correlated than those further apart. Model estimates were used to construct odds ratios of improvement in clinical outcomes, defined as an increase in the proportion of patients meeting a clinical threshold from year 1 to year 2. Because of how we applied our eligibility criteria and created our analytical model, there were no missing data other than for one practice that did not report LDL data. This practice was not included when we ran the model for LDL improvement by level of KDIS implementation. All analyses were done using SAS software version 9.2 (SAS Institute Inc., Cary, NC). The Biomedical Institutional Review Board at the University of North Carolina reviewed and approved this project.
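Our analyses were done in SAS, but for readers who want a concrete picture of the model structure, the following sketch fits a roughly analogous generalized estimating equations (GEE) logistic regression in Python with statsmodels on synthetic data. It is an approximation under assumed variable names, not the study's code: proportions are weighted by their denominators as case weights, and an autoregressive working correlation stands in for the assumption that within-clinic residuals closer in time are more highly correlated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the year 2 data: one row per practice per month.
rng = np.random.default_rng(0)
n_practices, n_months = 20, 6
df = pd.DataFrame({
    "practice_id": np.repeat(np.arange(n_practices), n_months),
    "month": np.tile(np.arange(13, 13 + n_months), n_practices),  # year 2
})
df["kdis_registry"] = np.repeat(rng.integers(0, 6, n_practices), n_months)
df["baseline"] = np.repeat(rng.uniform(0.3, 0.7, n_practices), n_months)
df["n_patients"] = rng.integers(40, 120, len(df))
logit = -0.5 + 0.15 * df["kdis_registry"] + 0.8 * df["baseline"]
p = 1 / (1 + np.exp(-logit))
df["prop_ldl"] = rng.binomial(df["n_patients"], p) / df["n_patients"]

# Repeated-measures logistic model: year 1 KDIS score, time, their
# interaction, and the end-of-year-1 clinical baseline as covariates.
model = smf.gee(
    "prop_ldl ~ kdis_registry * month + baseline",
    groups="practice_id",
    data=df,
    time=df["month"],                            # orders within-practice rows
    family=sm.families.Binomial(),               # logistic link
    cov_struct=sm.cov_struct.Autoregressive(),   # nearer months more alike
    weights=df["n_patients"],                    # denominators as case weights
)
result = model.fit()

# Exponentiated coefficients are odds ratios; an OR > 1 corresponds to
# improvement, i.e., higher odds of patients meeting the threshold.
print(np.exp(result.params))
```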
Results
The demographics of the 42 practices included in the model are listed in Table 2. Of note, 43% of the practices had ≤3 providers; 73% were staffed by family practitioners and, on average, 23% of patients were covered by Medicaid. Of all the practices, 62% were located in rural counties in North Carolina. To provide some context regarding the study practices' starting points with regard to their clinical diabetes data, we also calculated the number of the practices whose data would have made them eligible for recognition in the Diabetes Recognition Program supported by the National Committee for Quality Assurance and the number that reached organization-specific goals for these measures set by IPIP.
The distribution of KDIS scores as measured at the end of 1 year of engagement with a practice coach is presented in Figure 2. Practices achieved deeper implementation of some key driver elements than of others. For example, 31 of 42 practices achieved a KDIS score of ≥3 on the registry item by the end of 1 year, while only 15 of 42 achieved a KDIS score of ≥3 for the SMS item during the same interval. Of note, in most cases KDIS scores remained the same or improved during the second year of participation with a practice coach (data not shown).
Table 3 presents the odds ratios for practices having a higher percentage of their patients with LDL clinical measures under control at the 2-year mark compared with the 1-year data. Statistically significant improvements in the proportion of patients who met the LDL threshold at the 2-year mark were noted for practices that achieved a KDIS score of 4 or 5 on the registry and protocol items. In accordance with this, the point estimate trends suggest that improved LDL control is associated with increasingly higher degrees of implementation of these activities in a dose-response relationship. When we modeled all key drivers increasing together (“All 4 Drivers” column in Table 3), similar improvements in the LDL population measures were noted.
Our model also suggests that practices without any activity or improvement in the KDIS score for the “protocol” key driver saw worsening of their LDL performance measure (Table 3). When we used the same analysis method to analyze KDIS scores against improvements in A1C and BP values, none of the odds ratios were statistically significant, nor were there any apparent trends (results are available from the authors on request).
Discussion
Our analysis suggests that improved outcomes for patients with diabetes may be associated with a practice's ability to implement key drivers of practice change. Most notable is that practices that implement a disease registry, use their registry data to plan patient care, and produce performance reports by the end of a single year of involvement with IPIP may realize improvements in population-level LDL control during the second year of involvement. Our data also suggest that similar improvements in population-level LDL control may be realized when disease protocols are agreed on and implemented widely within a practice.
We did not find improvements in clinical outcomes when SMS or planned care templates were more extensively implemented. For SMS in particular, this may be due in part to the limited time interval we used in our analysis and the fact that the practice coaches were instructed to guide practices to implement all 4 key drivers in sequence, with SMS improvements being last. It is notable that the activities that conferred a score of 1 or 2 for SMS were not those that would be expected to affect patient behavior change and thus clinical outcome data. As noted in Figure 2, the vast majority of practices received KDIS scores of 0, 1, or 2, indicating that practices either had no efforts at SMS during year 1 or were just in the early planning periods of rolling out SMS activities.
Aside from our small sample size and inadequate power, we cannot explain why there were no signals in the data indicating that the use of planned care templates may be associated with improved clinical outcomes, as seen elsewhere in the literature.11 The current leadership of the IPIP organization continues to regard all 4 key drivers as independently important for securing practice change. Thus further evaluation of the relationship between this specific key driver and outcomes should continue as greater numbers of practices make progress on implementation.
It is unclear why our results indicate an improvement in population-level LDL control but not control of A1C and/or BP. Other investigators have used these same outcome measures in cohort designs to study the effect of QI interventions and found no improvement in any of these 3 variables,19 improvements in LDL and BP but not A1C,20 or improvements in A1C and BP but not LDL.21 Improving patient- or population-level clinical outcomes involves complex issues that rely on patient-, practice-, and system-level factors, and partial implementation of key drivers may have only small effects on outcome measures. However, improved lipid control at a population level is noteworthy, and future evaluations should attempt to uncover why certain outcome measures improve faster or to a greater extent than others.
Our data add further support to the importance of registry implementation and the institution of care protocols as key elements of improving patient care. Perhaps the capacity of a practice to develop and meaningfully use a patient registry, disease protocols, or both reflects a host of contextual factors, or serves as a surrogate measure of the practice's potential to affect patient outcomes. Evaluating implementation of these key drivers over a longer period of time may allow us to determine whether other key drivers have important effects on patient outcomes.
Since prior studies indicate that the improvement process is long and slow22,23 and requires tremendous effort,24 our limited findings of association between patient outcomes and the KDIS's key driver measures suggest that the KDIS holds promise as a tool for measuring practice transformation and implementation. Programs such as IPIP, which support practice transformation, need strategies to assess implementation, provide timely data to program stakeholders, and understand the success of their practice support programs. Although more complex evaluation tools have been designed to capture this type of data,12 the KDIS tool provides a pragmatic alternative that seems to capture meaningful variation in practice change without placing a significant burden on practices or coaches. Of note, since the time of our study, the IPIP program has evolved into the North Carolina Area Health Education Centers Practice Support Program and has taken on an expanded set of practice support responsibilities, including assisting practices with the meaningful use of electronic health records and with patient-centered medical home initiatives. Even with these expanded responsibilities, all 4 components of the KDIS tool remain of great value to program leadership and continue to be used by practice support teams to measure the implementation of key drivers.
Limitations
Our analysis is based on 42 primary care practices in a single state, giving us limited power to detect smaller implementation effects. Furthermore, none of our clinics have ≥3 years of data, and only 16 have >2 years. Because it may take months of work to affect the KDIS scores, and even longer for clinical outcomes to change within a practice, our data may tell only the beginning of the story. In addition, because IPIP was a QI project, some practices' numerators and denominators reflected sample data abstractions only, especially early in the practices' learning process, and thus did not reflect their full populations of diabetic patients. However, when we re-ran the LDL improvement model without the practices that used sample data, the results from the remaining 34 practices were essentially the same with regard to the implementation depth of a registry, disease protocols, and planned care templates.
Our results may also be biased by the limitations of our study design. The North Carolina IPIP is a QI project; thus rigorous study design was not a priority during implementation. We do not have a comparison group or data with which to assess the effect of secular trends. In addition, we are aware of anecdotal reports from IPIP coaches that, although most of the KDIS tool and taxonomy worked well, there was sometimes confusion about the difference between “protocols” and “planned care templates,” especially when specific electronic medical record systems and vendors used the term protocol in the manner that IPIP used planned care templates. Finally, the types of practices that chose to undergo a QI project may not be broadly representative of all primary care practices.
The KDIS was developed to supplement QI work with health care delivery systems and was not subjected to the rigorous processes typically involved in instrument development, such as interrater reliability testing. However, a single trainer within the IPIP organization provided instruction to all 9 staff who served as practice coaches during the interval of this study, and this group regularly reviewed the tool and measurement methods as part of their monthly conference calls with the IPIP program leadership. As is typical of the QI interventions implemented by IPIP, the tool was carefully piloted with a small number of experienced coaches before being used more broadly in the field.
Conclusions
Our work suggests that the degree of implementation of 4 key drivers of practice change may be associated with improvement in selected outcomes for patients with diabetes. LDL control seems most likely to change in a short time frame and to be associated most strongly with the implementation of a patient registry and care protocols. However, patterns in our results suggest that other important associations between practice changes and clinical outcomes may emerge over a longer study period. Although our findings are based on a small number of diverse practices in a single state, our work is a first step in using a practical rating system, rooted in the CCM, to independently measure the extent of practice change implementation. Such data will be needed for practices and practice support programs to monitor transformation progress, and for researchers and policy makers to understand the effectiveness of practice change interventions while controlling for depth of implementation.
Notes
This article was externally peer reviewed.
Funding: This research was supported by the Agency for Healthcare Research and Quality (Award Number R18 HS019131). Additional support was provided by the National Institutes of Health/National Institute of Environmental Health Sciences training grant no. T32ES007018 (AW).
Conflict of interest: none declared.
Disclaimer: Statements in this article should not be construed as endorsements by the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services.
- Received for publication February 19, 2013.
- Revision received August 19, 2013.
- Accepted for publication August 26, 2013.