More Extensive Implementation of the Chronic Care Model is Associated with Better Lipid Control in Diabetes =========================================================================================================== * Jacqueline R. Halladay * Darren A. DeWalt * Alison Wise * Bahjat Qaqish * Kristin Reiter * Shoou-Yih Lee * Ann Lefebvre * Kimberly Ward * C. Madeline Mitchell * Katrina E. Donahue ## Abstract *Objective:* Chronic disease collaboratives help practices redesign care delivery. The North Carolina Improving Performance in Practice program provides coaches to guide implementation of 4 key practice changes: registries, planned care templates, protocols, and self-management support. Coaches rate progress using the Key Drivers Implementation Scales (KDIS). This study examines whether higher KDIS scores are associated with improved diabetes outcomes. *Methods:* We analyzed clinical and KDIS data from 42 practices. We modeled whether higher implementation scores at year 1 of participation were associated with improved diabetes measures during year 2. Improvement was defined as an increase in the proportion of patients with hemoglobin A1C values <9%, blood pressure values <130/80 mmHg, and low-density lipoprotein (LDL) levels <100 mg/dL. *Results:* Statistically significant improvements in the proportion of patients who met the LDL threshold were noted with higher “registry” and “protocol” KDIS scores. For hemoglobin A1C and blood pressure values, none of the odds ratios were statistically significant. *Conclusions:* Practices that implement key changes may achieve improved patient outcomes in LDL control among their patients with diabetes. Our data confirm the importance of registry implementation and protocol use as key elements of improving patient care. The KDIS tool is a pragmatic option for measuring practice changes that are rooted in the Chronic Care Model. 
* Chronic Disease
* Diabetes Mellitus
* Primary Health Care
* Quality Improvement

The US health system requires substantial change to deliver safe, efficient, and effective patient care.1 In the 2001 *Crossing the Quality Chasm* report, the Institute of Medicine specifically states that systems must be “redesigned” because existing systems fail to support high-quality care for chronic diseases. To aid in redesign efforts, national and state-level organizations have created programs in which practice staff and providers receive instruction and assistance in implementing quality improvement (QI) strategies in their clinical settings.2–8 Such programs often are called chronic disease collaboratives; teams of clinicians and office staff are taught experientially how to implement key drivers of practice changes that are rooted in the Chronic Care Model (CCM).9,10 To date, observational studies regarding the impact of collaborative participation on outcomes suggest that participation can positively affect some process and outcome measures.3–5,7,9 However, because collaboratives involve simultaneously learning many new skills and implementing several facets of chronic disease care, it is challenging to tease out which specific facets are of value.11 In addition, how well such activities are actually implemented in clinical settings is poorly understood in clinical research,12,13 and the Patient Centered Outcomes Research Institute has identified implementation challenges as a key barrier to the widespread adoption of potentially effective interventions.14 To both overcome barriers to adoption and accurately assess the effectiveness of an intervention, measures are needed that validly and reliably capture how well interventions are implemented at the organizational level.15 Although some work has been done to create such implementation measures in evaluations of chronic care collaboratives, this work was done several years after the work in the practices
commenced.12,16 Thus, to date there is little information linking prospectively collected implementation assessments with improvements in patient outcomes. Using a sample of practices involved in the North Carolina Improving Performance in Practice (IPIP) program, a statewide QI project, we examined whether the extent of implementation of 4 key drivers of practice change was associated with improved population-level outcomes for diabetes care as indicated by measures of serum glycohemoglobin (A1C), low-density lipoprotein (LDL), and blood pressure (BP).

## Methods

### Setting

The North Carolina IPIP program is a nationally supported, state-based QI program that is rooted in the CCM.9,10 IPIP combines 2 improvement designs: a “1-to-many” or collaborative design17 and a “1-to-1” practice coaching model design.6 By participating in IPIP, primary care providers and staff are introduced to QI methods with the help of an onsite QI practice coach. The practice team learns how to implement and monitor their QI efforts and participates in learning networks with peer organizations that share practice improvement strategies. All primary care practices in North Carolina are eligible to participate in the IPIP program. Practices receive $2000 for participating, and providers can obtain continuing medical education credits. The work also provides a mechanism to fulfill requirements for Part IV Maintenance of Certification. The IPIP organization chose nationally endorsed clinical quality measures to evaluate the impact of its diabetes QI program on the following patient population-level outcomes: A1C, LDL, and BP. However, unlike organizations such as the National Committee for Quality Assurance, which puts forth performance measurement thresholds for practices to reach, IPIP establishes performance goals for practices to aim for based on the experiences of the better-performing practices.
IPIP's first year included a small cohort of practices and 2 practice coaches who could pilot test many of the nascent program components. After evaluating experiences over the first year, the national team, in collaboration with state-level IPIP stakeholders and international experts in systems improvement, agreed that a more formal guidance document, called a change package, was essential to enabling the change processes within practices. To provide a measurement tool to capture the implementation of change package activities, the IPIP leadership simplified and sequenced 6 elements of the CCM into 4 key drivers of practice change,6 resulting in the development of the Key Driver Implementation Scales (KDIS). The KDIS ordinal ratings are used by practice coaches to document a practice's adoption and extent of implementation of the 4 key drivers on a monthly basis. The KDIS prospectively captures the extent of implementation of (1) a disease registry, (2) the use of planned care templates to standardize items that are addressed with every diabetic patient at every visit, (3) comprehensive care protocols to guide global diabetic care beyond what is included in the planned care templates, and (4) self-management support (SMS) systems within a practice. In general, a KDIS score of 0 indicates that the practice has had no activity on the respective practice change variable, while a 1 indicates that a particular item, such as a type of registry or a specific planned care template, has been selected for use in the practice. A score of 2 signifies that staff roles for an activity have been assigned or that an item such as a disease registry has been installed. A score of 3 signals that a practice is actually testing an item, while a 4 indicates that a large percentage of the practice is using the item. Practice-wide dissemination is indicated by a score of 5. A sample of the scale, limited to the registry item, is provided in Table 1.
A full description of the key drivers and scale is available at forces4quality.org/af4q/download-document/3470/960.

[Table 1.](http://www.jabfm.org/content/27/1/34/T1) Sample of Registry Item of the Practice Assessment Scales/Key Driver Implementation Scale* in Improving Performance in Practice

The practice coaches assign the first KDIS scores soon after getting started with a new practice and subsequently submit these data each month to the state director. When the KDIS was first created, it was generally expected that practices would focus on the 4 key drivers in sequence, starting with the development of a disease registry, followed by the use of a care protocol and planned care templates, and finally the development of SMS tools for patients, with the understanding that these concepts overlap and that practices start at different levels along the improvement continuum. The sequencing applied to when practices started, not finished, work on each element of practice change. The KDIS scores not only allow for individual and aggregate practice data review but also provide data to the IPIP program leadership for use in continuous evaluation of the program. The scores also can be used to demonstrate to program funding agencies that practice-level changes are occurring, and are doing so far earlier than patient-level outcome measures can be generated.
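As a schematic illustration only, the 0-to-5 rubric can be encoded as a simple lookup; the labels below paraphrase the scale described above and are not IPIP's official wording, and the threshold helper mirrors the ≥3 grouping reported in the Results:

```python
# Paraphrased sketch of the 0-5 KDIS rubric; labels are illustrative,
# not IPIP's official wording.
KDIS_LEVELS = {
    0: "no activity on this key driver",
    1: "item (e.g., a registry type or planned care template) selected",
    2: "staff roles assigned, or item (e.g., a registry) installed",
    3: "practice is testing the item",
    4: "a large percentage of the practice is using the item",
    5: "practice-wide dissemination",
}


def is_deeper_implementation(score: float, threshold: float = 3) -> bool:
    """Mirror the grouping of scores >=3 as deeper implementation."""
    return score >= threshold
```

For example, 31 of 42 practices reached this threshold on the registry item at 1 year.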
To quote a key IPIP stakeholder, the KDIS scores capture practice changes that occur “while the clinical data are catching up.”

### Data Sources

We collected 2 sets of data for 42 practices that participated in the diabetes track of the IPIP program starting in February 2008 or later: (1) monthly KDIS scores, as described above, and (2) monthly population-level clinical data that included the numerators and denominators used to calculate the percentage of a practice's diabetic patients with A1C values <9%, LDL levels <100 mg/dL, and in-office BP measurements <130/80 mmHg.

### Analysis

To be included in our analysis, practices needed to have (1) participated with a practice coach for at least 13 months starting in February 2008 or beyond, (2) submitted clinical data reports in months 10, 11, or 12, and (3) submitted another clinical data report at some point during their second year of participation with their coach. Our data collection timeline is presented in Figure 1.

[Figure 1.](http://www.jabfm.org/content/27/1/34/F1) Data collection timeline. BP, blood pressure; KDIS, Key Drivers Implementation Scale; LDL, low-density lipoprotein.

For our analysis we calculated a KDIS score representing the score at year 1 for each of the 4 key drivers within each clinic. To reduce the effect of spurious outcomes and missing values, this score represents the average of an individual practice's KDIS scores at months 10, 11, and 12. The distribution of 1-year scores among the 42 practices is presented in Figure 2.

[Figure 2.](http://www.jabfm.org/content/27/1/34/F2) Frequency distribution of Key Drivers Implementation Scale (KDIS) scores attained at 1 year (averaged across months 10, 11, and 12 after coaching commenced), by key driver.
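A minimal sketch of the year-1 score computation described above; the function name and the handling of an unreported month are our own assumptions:

```python
def year1_kdis_score(monthly_scores: dict) -> float:
    """Average a practice's KDIS scores over months 10-12.

    `monthly_scores` maps month-of-participation -> KDIS score for one
    key driver. Months without a submitted score are skipped (an
    assumption; the paper states only that averaging the 3 months
    reduces the effect of spurious outcomes and missing values).
    """
    window = [monthly_scores[m] for m in (10, 11, 12) if m in monthly_scores]
    if not window:
        raise ValueError("no KDIS scores reported in months 10-12")
    return sum(window) / len(window)
```

For a practice scoring 3, 4, and 4 in months 10 through 12, the year-1 registry score would be 11/3 ≈ 3.67.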
To test for associations between the year 1 score and subsequent improvements in population-level clinical outcomes, we created for each outcome a model to estimate whether higher KDIS scores at the year 1 mark were associated with subsequent improvements in a practice's clinical data during the second year of IPIP participation. Improved clinical data were defined as any increase in the proportion of diabetic patients with hemoglobin A1C levels <9%, BP values <130/80 mmHg, and LDL levels <100 mg/dL during the second year of practice coach involvement. Within the model, this increase is detected as an odds ratio >1. In our model we controlled for clinical outcomes at the end of year 1, using the means of the clinical outcome measures at months 10, 11, and 12, again choosing a range of time points for robustness and to capture data reflecting practice engagement with a practice coach for at least 1 year. We also included time and the interaction between time and the KDIS scores to capture changes in the association between KDIS scores and clinical outcomes over time. We ran a repeated measures logistic regression to account for the repeated measures within clinics over time. The outcome variable was the proportion of patients who met a clinical threshold out of the total number of eligible patients seen (eg, those with diabetes). We clustered the data analysis at the practice level and across time using the method described by Williams.18 This structure assumes that within-clinic model residuals closer in time are more highly correlated than those further apart. Model estimates were used to construct odds ratios of improvement in clinical outcomes, defined as an increase in the proportion of patients meeting a clinical threshold from year 1 to year 2. Because of how we applied our eligibility criteria and created our analytical model, there were no missing data other than for one practice that did not report LDL data.
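The modeling itself was done in SAS; as a back-of-the-envelope illustration of the quantity being estimated, the odds ratio of meeting a clinical threshold in year 2 versus year 1 can be computed directly from population-level numerators and denominators. This is a simplified sketch with hypothetical numbers, ignoring the covariate adjustment and the practice-level clustering (Williams' method) used in the actual model:

```python
def odds(numerator: int, denominator: int) -> float:
    """Odds that a patient in the population meets the clinical threshold."""
    p = numerator / denominator
    return p / (1 - p)


def improvement_odds_ratio(year1, year2):
    """Unadjusted odds ratio of meeting the threshold in year 2 vs year 1.

    Each argument is (patients meeting threshold, eligible patients).
    An odds ratio >1 corresponds to the paper's definition of
    improvement: an increase in the proportion of patients meeting
    the threshold.
    """
    return odds(*year2) / odds(*year1)


# Hypothetical practice: 60/100 patients with LDL <100 mg/dL in year 1,
# 70/100 in year 2 -> odds ratio (0.7/0.3) / (0.6/0.4) = 14/9, about 1.56.
ratio = improvement_odds_ratio((60, 100), (70, 100))
```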
This practice was not included when we ran the model for LDL improvement by level of KDIS implementation. All analyses were done using SAS software version 9.2 (SAS Institute Inc., Cary, NC). The Biomedical Institutional Review Board at the University of North Carolina reviewed and approved this project.

## Results

The demographics of the 42 practices included in the model are listed in Table 2. Of note, 43% of the practices had ≤3 providers; 73% were staffed by family practitioners; and, on average, 23% of patients were covered by Medicaid. Of all the practices, 62% were located in rural counties in North Carolina. To provide some context regarding the study practices' starting points with regard to their clinical diabetes data, we also calculated the number of practices whose data would have made them eligible for recognition in the Diabetes Recognition Program supported by the National Committee for Quality Assurance and the number that reached organization-specific goals for these measures set by IPIP.

[Table 2.](http://www.jabfm.org/content/27/1/34/T2) Practice Characteristics (n = 42)

The distribution of KDIS scores as measured at the end of 1 year of engagement with a practice coach is presented in Figure 2. Practices were able to achieve deeper implementation of some key driver elements than of others. For example, 31 of 42 practices achieved a KDIS score of ≥3 on the registry item by the end of 1 year, while only 15 of 42 achieved a KDIS score of ≥3 for the SMS item during the same interval. Of note, in most cases KDIS scores remained the same or improved during the second year of participation with a practice coach (data not shown). Table 3 presents the odds ratios of practices having a higher percentage of patients with LDL measures under control at the 2-year mark compared with the 1-year data.
Statistically significant improvements in the proportion of patients who met the LDL threshold at the 2-year mark were noted for practices that achieved a KDIS score of 4 or 5 on the registry and protocol items. Consistent with this, the point estimate trends suggest that improved LDL control is associated with increasingly higher degrees of implementation of these activities in a dose-response relationship. When we modeled all key drivers increasing together (“All 4 Drivers” column in Table 3), similar improvements in the LDL population measures were noted.

[Table 3.](http://www.jabfm.org/content/27/1/34/T3) Change in the Proportion of Patients* Meeting the Low-Density Lipoprotein Goal as a Function of Key Drivers Implementation Scale (KDIS) Score at Year 1†

Our model also suggests that practices without any activity or improvement in the KDIS score for the “protocol” key driver saw worsening of their LDL performance measure (Table 3). When we used the same analysis method to analyze KDIS scores against improvements in A1C and BP values, none of the odds ratios were statistically significant, nor were there any apparent trends (results are available from the authors on request).

## Discussion

Our analysis suggests that improved outcomes for patients with diabetes may be associated with a practice's ability to implement key drivers of practice change. Most notable is that practices that implement a disease registry, use their registry data to plan patient care, and produce performance reports by the end of a single year of involvement with IPIP may realize improvements in population-level LDL control during the second year of involvement. Our data also suggest that similar improvements in population-level LDL control may be realized when disease protocols are agreed on and implemented widely within a practice. We did not find improvements in clinical outcomes when SMS or planned care templates were more extensively implemented.
For SMS in particular, this may be due in part to the limited time interval we used in our analysis and the fact that the practice coaches were instructed to guide practices to implement all 4 key drivers in sequence, with SMS improvements coming last. It is notable that the activities that conferred a score of 1 or 2 for SMS were not those that would be expected to affect patient behavior change and thus clinical outcome data. As noted in Figure 2, the vast majority of practices received SMS scores of 0, 1, or 2, indicating that practices either had no SMS efforts during year 1 or were just in the early planning stages of rolling out SMS activities. Beyond our small sample size and inadequate power, we cannot explain why there were no signals in the data to indicate that the use of planned care templates may be associated with improved clinical outcomes, as seen elsewhere in the literature.11 The current leadership of the IPIP organization continues to view all 4 key drivers as crucial and independently important for securing practice change; thus, further evaluation of the relationship between this specific key driver and outcomes should continue as greater numbers of practices make progress on implementation. It is unclear why our results indicate an improvement in population-level LDL control but not in control of A1C or BP. Other investigators have used these same outcome measures in cohort designs to study the effect of QI interventions and found no improvement in any of these 3 variables,19 improvements in LDL and BP but not A1C,20 or improvements in A1C and BP but not LDL.21 Improving patient- or population-level clinical outcomes involves complex issues that rely on patient-, practice-, and system-level factors, and partial implementation of key drivers may have only small effects on outcome measures.
However, improved lipid control at a population level is noteworthy, and future evaluations should attempt to uncover why certain outcome measures improve faster or to a greater extent than others. Our data add further support to the importance of registry implementation and the institution of care protocols as key elements of improving patient care. Perhaps the ability of a practice to develop and meaningfully use a patient registry, disease protocols, or both reflects a host of contextual factors, or serves as a surrogate measure of a practice's potential to affect patient outcomes. Evaluating implementation of these key drivers over a longer period of time may allow us to determine whether other key drivers have important effects on patient outcomes. Since prior studies indicate that the improvement process is long and slow22,23 and requires tremendous effort,24 our limited findings of association between patient outcomes and the KDIS's key driver measures suggest promise for the KDIS as a tool for measuring practice transformation and implementation. Programs such as IPIP, which support practice transformation, need strategies to assess implementation, provide timely data to program stakeholders, and understand the success of their practice support programs. Although more complex evaluation tools have been designed to capture this type of data,12 the KDIS provides a pragmatic alternative that seems to capture meaningful variation in practice change without creating significant burden on practices or coaches. Of note, since the time of our study, the IPIP program has evolved into the North Carolina Area Health Education Centers Practice Support Program and has taken on an increased set of practice support responsibilities, including assisting practices with the meaningful use of electronic health records and patient-centered medical home initiatives.
Even with these expanded responsibilities, all 4 components of the KDIS tool remain of great value to program leadership and continue to be used by practice support teams to measure the implementation of key drivers.

### Limitations

Our analysis is based on 42 primary care practices in a single state. With only 42 practices, we have limited power to detect smaller implementation effects. Furthermore, none of our clinics have ≥3 years of data, and only 16 have >2 years. Because it may take months of work to affect the KDIS scores and even longer for clinical outcomes to change within a practice, our data may be able to tell us only the beginning of the story. In addition, because IPIP is a QI project, some practice numerators and denominators reflected sample data abstractions only, especially early in the practices' learning process, and thus did not reflect their full population of diabetic patients. However, when we re-ran the LDL improvement analysis model without the practices that used sample data, the results from the remaining 34 practices were essentially the same with regard to the implementation depth of a registry, disease protocols, and planned care templates. Our results may also be biased by the limitations of our study design. The North Carolina IPIP is a QI project; thus rigorous study design was not a priority during implementation. We do not have a comparison group or data to suggest the effect of secular trends. In addition, we are aware of anecdotal reports from IPIP coaches that, although most of the KDIS tool and taxonomy worked well, there was sometimes confusion in understanding the difference between “protocols” and “planned care templates,” especially when specific electronic medical record systems and vendors used the term *protocol* in the manner that IPIP used *planned care templates*. Finally, the types of practices that chose to undergo a QI project may not be broadly representative of all primary care practices.
The KDIS was developed to supplement QI work with health care delivery systems and was not subject to the rigorous processes typically used in instrument development, such as interrater reliability testing. However, a single trainer within the IPIP organization provided instruction to all 9 staff who served as practice coaches during the time interval of this study, and this group regularly reviewed the tool and measurement methods as part of their monthly conference calls with the IPIP program leadership. As is typical of many of the QI interventions implemented by IPIP, the tool was carefully piloted with a small number of experienced coaches before being used more broadly in the field.

## Conclusions

Our work suggests that the degree of implementation of 4 key drivers of practice change may be associated with improvement in selected outcomes for patients with diabetes. LDL control seems most likely to change in a short time frame and to be associated most strongly with the implementation of a patient registry and care protocols. However, patterns in our results suggest that other important associations between practice changes and clinical outcomes may emerge over a longer study period. Although our findings are based on a small number of diverse practices in a single state, our work is a first step in using a practical rating system, with roots in the CCM, to independently measure the extent of practice change implementation. Such data will be needed for practices and practice support programs to monitor transformation progress, and for researchers and policy makers to understand the effectiveness of practice change interventions while controlling for depth of implementation.

## Notes

* This article was externally peer reviewed.
* *Funding:* This research was supported by the Agency for Healthcare Research and Quality (Award Number R18 HS019131).
Additional support was provided by the National Institutes of Health/National Institute of Environmental Health Sciences training grant no. T32ES007018 (AW).
* *Conflict of interest:* none declared.
* *Disclaimer:* Statements in this presentation should not be construed as endorsements by the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services.
* Received for publication February 19, 2013.
* Revision received August 19, 2013.
* Accepted for publication August 26, 2013.

## References

1. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academies Press; 2001.
2. Cretin S, Shortell SM, Keeler EB. An evaluation of collaborative interventions to improve chronic illness care. Framework and study design. Eval Rev 2004;28:28–51.
3. Landon BE, Hicks LS, O'Malley AJ, et al. Improving the management of chronic disease at community health centers. N Engl J Med 2007;356:921–34.
4. Newton WP, Lefebvre A, Donahue KE, Bacon T, Dobson A. Infrastructure for large-scale quality-improvement projects: early lessons from North Carolina Improving Performance in Practice. J Contin Educ Health Prof 2010;30:106–13.
5. Chin MH, Cook S, Drum ML, et al. Improving diabetes care in midwest community health centers with the health disparities collaborative. Diabetes Care 2004;27:2–8.
6. Margolis PA, DeWalt DA, Simon JE, et al. Designing a large-scale multilevel improvement initiative: the improving performance in practice program. J Contin Educ Health Prof 2010;30:187–96.
7. Daniel DM, Norman J, Davis C, et al. A state-level application of the chronic illness breakthrough series: results from two collaboratives on diabetes in Washington State. Jt Comm J Qual Saf 2004;30:69–79.
8. Bricker PL, Baron RJ, Scheirer JJ, et al. Collaboration in Pennsylvania: rapidly spreading improved chronic care for patients to practices. J Contin Educ Health Prof 2010;30:114–25.
9. Wagner EH, Austin BT, Von Korff M. Organizing care for patients with chronic illness. Milbank Q 1996;74:511–44.
10. Bodenheimer T, Wagner EH, Grumbach K. Improving primary care for patients with chronic illness. JAMA 2002;288:1775–9.
11. Walsh J, McDonald KM, Shojania KG, et al. Closing the quality gap: a critical analysis of quality improvement strategies. Technical reviews, no. 9. AHRQ publication no. 04-0051-3. Rockville, MD: Agency for Healthcare Research and Quality; 2005.
12. Pearson ML, Wu S, Schaefer J, et al. Assessing the implementation of the chronic care model in quality improvement collaboratives. Health Serv Res 2005;40:978–96.
13. Weiner BJ, Lewis MA, Linnan LA. Using organizational theory to understand the determinants of effective implementation of worksite health promotion programs. Health Educ Res 2009;24:292–305.
14. Methodology Committee of the Patient-Centered Outcomes Research Institute (PCORI). Methodological standards and patient-centeredness in comparative effectiveness research. JAMA 2012;307:1636–40.
15. Atkins D, Kupersmith J. Implementation research: a critical component of realizing the benefits of comparative effectiveness research. Am J Med 2010;123:e38–45.
16. Pollard C, Bailey KA, Petitte T, Baus A, Swim M, Hendryx M. Electronic patient registries improve diabetes care and clinical outcomes in rural community health centers. J Rural Health 2009;25:77–84.
17. Kilo CM. Improving care through collaboration. Pediatrics 1999;103:384–93.
18. Williams DA. Extra-binomial variation in logistic linear models. Appl Stat 1982;31:144–8.
19. Thomas KG, Thomas MR, Stroebel RJ, et al. Use of a registry-generated audit, feedback, and patient reminder intervention in an internal medicine resident clinic: a randomized trial. J Gen Intern Med 2007;22:1740–4.
20. Cleveringa FG, Gorter KJ, van den Donk M, Rutten GE. Combined task delegation, computerized decision support, and feedback improve cardiovascular risk for type 2 diabetic patients: a cluster randomized trial in primary care. Diabetes Care 2008;31:2273–5.
21. DiPiero A, Dorr DA, Kelso C, Bowen JL. Integrating systematic chronic care for diabetes into an academic general internal medicine resident-faculty practice. J Gen Intern Med 2008;23:1749–56.
22. Crabtree BF, Nutting PA, Miller WL, Stange KC, Stewart EE, Jaen CR. Summary of the National Demonstration Project and recommendations for the patient-centered medical home. Ann Fam Med 2010;8(Suppl 1):S80–90.
23. Nutting PA, Miller WL, Crabtree BF, Jaen CR, Stewart EE, Stange KC. Initial lessons from the first National Demonstration Project on practice transformation to a patient-centered medical home. Ann Fam Med 2009;7:254–60.
24. Jaen CR, Ferrer RL, Miller WL, et al. Patient outcomes at 26 months in the patient-centered medical home National Demonstration Project. Ann Fam Med 2010;8(Suppl 1):S57–67.