Abstract
Purpose: Quality improvement (QI) initiatives have been implemented to facilitate transition to a chronic disease management approach in primary health care. However, the effect of QI initiatives on diabetes clinical processes and outcomes remains unclear. This article reports the effect of Partnerships for Health, a QI program implemented in Southwestern Ontario, Canada, on diabetes clinical process and outcome measures and describes program participants' views of elements that influenced their ability to reach desired improvements.
Methods: As part of an external, concurrent, comprehensive, mixed-methods evaluation of Partnerships for Health, a before/after audit of 30 patient charts per program physician (n = 35) and semistructured interviews with program participants (physicians and allied health providers) were conducted.
Results: The proportion of patients (n = 998) with a documented test/examination for the following clinical processes significantly improved (P ≤ .005): glycosylated hemoglobin (A1c), cholesterol, albumin-to-creatinine ratio, serum creatinine, glomerular filtration rate, electrocardiogram, foot/eye/neuropathy examination, body mass index, waist circumference, and depression screening. Data showed intensification of treatment and significant improvement in the number of patients at target for low-density lipoprotein (LDL) and blood pressure (BP) (P ≤ .001). Mean LDL and BP values decreased significantly (P ≤ .01), and an analysis of patients above glycemic targets (A1c >7% at baseline) showed a significant decrease in mean A1c values (P ≤ .01). Interview participants (n = 55) described using a team approach, improved collaborative and proactive care through better tracking of patient data, and increased patient involvement as elements that positively influenced clinical processes and outcomes.
Conclusions: QI initiatives like Partnerships for Health can result in improved diabetes clinical process and outcome measures in primary health care.
The prevalence of type 2 diabetes and related health care costs are increasing worldwide and threatening the ability of countries to markedly improve the health of their populations.1,2 To help address this problem, primary health care reform strategies have been developed to transition from the traditional acute care approach to a chronic disease management approach.3–5 It is hypothesized that a chronic disease management approach will enhance clinical processes, improve clinical outcomes, lead to a healthier population, and decrease the burden of chronic illnesses such as diabetes.6 In Canada, 3 strategies have received considerable attention: (1) the development of new funding models to support a team-based approach7; (2) the adoption of electronic medical records (EMRs) to improve documentation, surveillance, and provider collaboration8,9; and (3) the promotion of clinical practice guidelines (CPGs) to encourage early screening and optimal treatment.10,11 Implementation of these strategies has unfortunately faced many challenges, highlighting the need for more effective and innovative programs to support primary health care providers in improving their approach to diabetes care.7–9,12
Despite the shortage of evidence to support the widespread use of quality improvement (QI) programs in health care, governments and other agencies around the world have launched QI initiatives to try to accelerate health care transformation.13,14 Thus far, evaluations of QI initiatives targeting diabetes have relied primarily on anecdotal and self-reported data, yielding varied results.6,13,15,16 The challenges in evaluating complex programs like QI initiatives have been well documented, and the need for more rigorous study designs has been identified.17–21 Because randomized controlled trials are often not feasible or are inadequate for evaluating programs implemented in naturalistic environments, nonexperimental designs must be strengthened by using external evaluation teams, collecting data concurrently with program implementation, and providing a clear scope of the program.22–29 An effective way to incorporate these key elements into a single research study is to use a comprehensive mixed-methods evaluation design.23,30
The purpose of this study was to use an external, concurrent, mixed-methods, multimeasure evaluation design to: (1) determine the effect of a QI program (Partnerships for Health) on clinical process and outcome measures for diabetes; (2) assess how the level of program involvement affected the results; and (3) obtain the views of program participants regarding the elements that influenced improvement in diabetes clinical processes and outcomes.
Background
In Southwestern Ontario, Canada, a government-funded program (Partnerships for Health) that applied the concepts of the Chronic Disease Prevention and Management Framework31 and QI methodologies32,33 was implemented between 2008 and 2011 to improve diabetes care in the region.31 Similar to the Chronic Care Model34 and the Expanded Chronic Care Model,35 the concepts of the Chronic Disease Prevention and Management Framework outline the need to enhance evidence-based, planned, and integrated collaborative chronic care in primary health care settings by promoting a population-based approach and emphasizing interactions between patients and practice teams.31 In terms of QI methodologies, the program embraced the Institute for Healthcare Improvement's Breakthrough Series approach32 of bringing multiple teams together 2 or 3 times (ie, learning sessions) over 6 to 15 months to learn from experts, and each other, as they plan and test practice changes.32,36 Furthermore, this approach encourages team members who attend learning sessions, usually 2 or 3 people, to work with additional team members in their organization to test and implement changes between the learning sessions (ie, action periods). Lastly, the program embedded the Model for Improvement as a strategy to test and evaluate changes on a few patients before implementing them at a practice or organizational level.33
The Partnerships for Health program targeted primary health care teams (family physicians, as well as practice- and community-based allied providers and administrative staff) and featured educational activities (series of offsite learning sessions); supportive activities (teleconferences, onsite practice coaching, web-based tools, and onsite information technology support); and reporting activities (QI efforts and clinical data). The program emphasized and facilitated (1) the establishment of a team-based approach, (2) enhanced use of information technology systems to better adhere to Canadian Diabetes CPGs10 and to participate in population-based tracking/surveillance, and (3) the promotion of patient self-management. Participation was voluntary. Program implementers have provided an overview of the program at http://www.questforquality.ca/content.asp?id=112.
Methods
Study Design
As part of an external, concurrent, comprehensive, mixed-methods, multimeasure evaluation, an audit of 30 patient charts per program physician (n = 35) was conducted before and 12 months after the program (accounting for clustering at the physician level) to measure the impact of the overall program and of the level of program involvement on diabetes clinical processes and outcomes. Semistructured individual interviews with program participants, including physicians and allied health providers, were conducted 12 months after the program to obtain their views of the elements that influenced improvement in diabetes clinical processes and outcomes. The study was approved by The University of Western Ontario Research Ethics Board.
Chart Audit
Measures
The proportion of patients with a documented albumin-to-creatinine ratio (ACR) test was the primary outcome measure because of the short evaluation timeline and suggestions by QI experts that changes in clinical processes occur more promptly than changes in outcomes during QI. ACR was selected because it has been shown to be tested in less than 50% of patients.37,38 Secondary clinical process measures were selected in accordance with recommendations from nationally published diabetes CPGs10 and were captured as binary variables. A test/examination was considered complete when it was documented at least once in a patient's chart during the period before or after the program. Treatment intensification was defined as follows: (1) glycemic: adding an oral agent or insulin and/or increasing the dose of an oral agent or the total dose of insulin; (2) hypertension: adding and/or increasing the dose of an antihypertensive agent; and (3) cholesterol: adding and/or increasing the dose of a statin and/or switching to a more potent agent. Clinical outcome measures recommended by CPGs10 were recorded as continuous values. If more than one value was documented, the last value recorded in each period (before and after) was used. If no value was documented after the program, the baseline value was not carried forward and that patient's data were excluded from analyses. A full list of the measures is provided in Tables 1, 2, and 3.
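As an illustration only (not the evaluators' actual extraction procedure), the sketch below shows how the coding rules described above could be applied to a hypothetical long-format lab file: a binary "documented at least once per period" flag for process measures, the last recorded value per period for outcome measures, and exclusion of patients with no postprogram value. All column names and values are invented.

```python
import pandas as pd

# Hypothetical long-format lab data: one row per recorded A1c value.
labs = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3],
    "period":     ["pre", "pre", "post", "pre", "post", "pre"],
    "date": pd.to_datetime(["2009-01-10", "2009-06-02", "2010-03-15",
                            "2009-02-20", "2010-01-05", "2009-04-11"]),
    "a1c":        [7.4, 7.1, 6.9, 8.0, 7.6, 7.2],
})

# Process measure: documented at least once in each period (binary).
documented = (labs.groupby(["patient_id", "period"])["a1c"]
                  .size().unstack(fill_value=0) > 0)

# Outcome measure: last value recorded in each period (continuous).
last_values = (labs.sort_values("date")
                   .groupby(["patient_id", "period"])["a1c"]
                   .last().unstack())

# No carry-forward: drop patients with no postprogram value (patient 3).
analysis_set = last_values.dropna(subset=["post"])
print(documented, analysis_set, sep="\n\n")
```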
Data Collection
Physicians who consented to the evaluation were asked to generate a patient list using the International Classification of Diseases, Ninth Revision, diagnostic code 250 and to randomly assign study numbers to patients using a random list generated in Microsoft Excel (Microsoft Corp, Redmond, WA) by the evaluation team. An external auditor reviewed charts at the practice until 30 eligible patient charts had been audited. The audit before the program covered the 12 months before the physician's start date in the program, and the audit after the program covered the 12 months following that start date.
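The random list for assigning study numbers was generated in Excel; as a minimal sketch of the same idea, the snippet below shuffles study numbers over a made-up patient list. The identifiers and seed are illustrative, not taken from the study.

```python
import random

# Hypothetical patient identifiers pulled from an EMR search on
# ICD-9 diagnostic code 250 (the IDs here are invented).
patients = ["PT-0001", "PT-0002", "PT-0003", "PT-0004", "PT-0005"]

random.seed(42)  # fixed seed so the example is reproducible
study_numbers = dict(zip(patients,
                         random.sample(range(1, len(patients) + 1),
                                       k=len(patients))))

# Charts would then be audited in ascending study-number order
# until the target number of eligible charts was reached.
for pid, num in sorted(study_numbers.items(), key=lambda kv: kv[1]):
    print(f"study #{num:03d} -> {pid}")
```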
Sample and Eligibility
For patient charts to be eligible for audit, (1) the patient's physician had to be part of a practice site that participated in the program; (2) the patient's physician had to provide written, informed consent for an external auditor to review randomly selected patient charts; (3) the patient had to have a diagnosis of type 2 diabetes; (4) the patient had to be ≥18 years of age; and (5) there had to be at least one documented visit during the periods before and after the program. Charts of patients with type 1 diabetes, gestational diabetes, or prediabetes were excluded.
The evaluation team was not involved in the recruitment of program participants; therefore, sample size estimates were based on (1) the number of program physicians; (2) a 25% change in the proportion of patients with a documented ACR test (primary outcome; α = 0.05 and β = 0.10); (3) an intraclass correlation coefficient of 0.21; and (4) 21 physicians each contributing 45 charts. Because the physician consent rate was higher than anticipated, the number of charts per physician was reduced to 30.
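As a rough illustration of a cluster-adjusted sample size calculation of this kind (not a reconstruction of the study's actual calculation), the sketch below computes an unadjusted two-proportion sample size and inflates it by the usual design effect, 1 + (m − 1) × ICC. The baseline and target proportions (p1, p2) are assumptions chosen only to make the arithmetic concrete, so the resulting numbers will not match those reported above.

```python
from math import ceil
from scipy.stats import norm

# Assumed (illustrative) proportions of patients with a documented ACR test.
p1, p2 = 0.50, 0.75          # before vs. after; the 0.50 baseline is an assumption
alpha, beta = 0.05, 0.10     # two-sided alpha, type II error (90% power)
icc = 0.21                   # intraclass correlation coefficient
m = 45                       # planned charts per physician (cluster size)

z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(1 - beta)

# Unadjusted sample size per group for comparing two proportions.
n_unadj = ((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

# Inflate for clustering at the physician level (design effect).
deff = 1 + (m - 1) * icc
n_adj = n_unadj * deff

print(f"unadjusted n per group ~ {ceil(n_unadj)}")
print(f"design effect = {deff:.2f}")
print(f"cluster-adjusted n ~ {ceil(n_adj)} charts, "
      f"or about {ceil(n_adj / m)} physicians at {m} charts each")
```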
Data Analysis
Descriptive statistics were performed using SPSS software version 17 (IBM, Chicago, IL). Multilevel regression analyses were conducted using Stata software (StataCorp, College Station, TX). Multilevel mixed-effects logistic regression (xtmelogit) was used for the dichotomous measures, and multilevel mixed-effects linear regression (xtmixed) was used for the continuous measures. Both types of analyses controlled for within-subject data and clustering at the physician level. Additional analyses were completed on a subsample of patients identified as above CPG targets: hemoglobin A1c (A1c) >7%, blood pressure (BP) >130/80 mmHg, or low-density lipoprotein (LDL) >2.0 mmol/L. A stratified sample analysis was done for scores after the program to compare measures for patients of physicians with different levels of program involvement. Group A physicians were involved in educational activities (at least one learning session), supportive activities, and reporting activities, whereas group B physicians were involved or affected by local practice change only (at the same practice sites as group A physicians but did not attend any educational activities). P < .05 was considered statistically significant.
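The models named above are Stata commands; as an illustration of the same modeling structure in another environment, the sketch below fits a multilevel linear model for a continuous outcome using Python's statsmodels, with a random intercept per physician and a patient-level variance component for the repeated before/after values. The data are simulated and the column names are invented; this is not the study's analysis code, and the dichotomous measures (fit with Stata's xtmelogit) are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the chart audit file: 35 physicians,
# 30 patients each, one A1c value per period (pre/post).
rng = np.random.default_rng(0)
rows = []
for phys in range(35):
    phys_effect = rng.normal(0, 0.3)
    for pat in range(30):                      # patient labels reused within physician
        pat_effect = rng.normal(0, 0.5)
        for period, shift in [("pre", 0.0), ("post", -0.1)]:
            rows.append({"physician_id": phys, "patient_id": pat,
                         "period": period,
                         "a1c": 7.2 + phys_effect + pat_effect
                                + shift + rng.normal(0, 0.4)})
df = pd.DataFrame(rows)

# Multilevel linear model (analogous in spirit to Stata's xtmixed):
# fixed effect of period, random intercept for physician, and a
# nested patient variance component for the repeated measurements.
model = smf.mixedlm(
    "a1c ~ C(period, Treatment(reference='pre'))",
    data=df,
    groups=df["physician_id"],
    vc_formula={"patient": "0 + C(patient_id)"},
)
print(model.fit().summary())
```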
Interviews
Data Collection
To obtain the views of participants regarding the elements that influenced improved diabetes processes and outcomes, individual interviews were conducted approximately 12 months after the program by 1 of 2 trained interviewers. A semistructured interview guide was used. All individual interviews were tape-recorded and transcribed verbatim. All transcripts were reviewed for accuracy before starting the analysis process.
Sample
A purposive sampling approach was used, and maximum variation was sought according to health care providers' professional role, team/practice, and location (urban/rural). To be eligible for an interview, program participants had to have attended at least one learning session (group A physicians, allied providers, or administrative staff).
Data Analysis
Data analyses were conducted concurrently with data collection. J. Paquette-Warren, M. Tyler, and R. Caruso independently reviewed the transcripts and field notes to identify key concepts and themes. Coding templates were created as themes emerged from the data. The coding templates were used in a second round of analysis to verify that no key concepts or themes were missed. During the final 3 rounds of the iterative process with independent and team analyses, quotations were pulled from the data and cleaned. Crystallization and immersion39 were used to identify overarching themes (NVivo 8; QSR International, Doncaster, Victoria, Australia). Trustworthiness and credibility were maximized by verbatim transcription, field notes, and independent and team analyses.40
Results
Chart Audits
Thirty-five physicians consented to the chart audit (79% of program physicians), and 998 randomly selected patient charts were audited. Physician and patient demographics are provided in Table 4. Physician characteristics were comparable to national averages with the exception of a larger proportion of physicians practicing in new funding models and using EMRs.7,41
Clinical Processes
Clinical process measures showed statistically significant improvements in testing and documentation for all measures (ACR [primary outcome]; A1c [annual and quarterly]; cholesterol; serum creatinine; glomerular filtration rate; electrocardiogram; foot, eye, and neuropathy exams; body mass index; waist circumference; and depression screening) except blood pressure (Table 1). Odds ratios were higher for group A physicians for all significantly different clinical processes except glomerular filtration rate and waist circumference; however, none of these differences reached statistical significance. Intensification of glycemic, hypertension, and/or cholesterol treatment occurred in the entire sample, and more intensification was evident in the subsample of patients who exceeded CPG targets (Table 2).
Clinical Outcomes
There were significant improvements in clinical outcome measures, including the percentage of patients at target for LDL and BP, mean LDL cholesterol, mean systolic BP, and mean diastolic BP (Table 3). There was no significant change in the proportion of patients at target A1c, and although there was a small significant increase in mean A1c for the entire sample, the increase was not clinically relevant. Overall, mean A1c, systolic BP, diastolic BP, and LDL cholesterol significantly improved in the subsample of patients above the CPG A1c target (Table 3).
Qualitative Interviews
Fifty-five interviews were conducted with physicians (n = 7), allied providers (n = 38), and administrative staff (n = 10) to capture their views of the elements that influenced diabetes clinical processes and outcomes.
Elements Influencing Clinical Processes
Participants described how the program gave them the knowledge, skills, and confidence to change how they deliver care: “When the practice leaders are confident and experienced with the [Chronic Disease Prevention and Management] Framework, it gives you the confidence to change your practice.” The elements identified as contributing to improved clinical processes were: (1) using a team approach and better care coordination; (2) establishing a better tracking mechanism for more proactive care and adherence to CPGs; and (3) being more patient-centered: “In terms of improving my practice as a whole, I really feel that it's taken a whole team … to start tracking more accurately.” “We have now become more patient-centric … depending on what the patients are wishing to do and how we can help them.” Participants explained how increasing patient involvement and supporting self-management seemed to improve patient knowledge, adherence to treatment, and skills in self-management: “I think that if you talked to patients they would feel more in charge of their own health and what they're doing. I think that's huge.” They expressed a strong opinion that these changes had positively affected clinical processes and outcomes and would lead to further improvements over time: “Patients are being seen on a regular basis…. They get the screening tests done, medications are being added or adjusted. I think it also points out [to patients] that it's really important for them to manage their diabetes. They're getting probably more referrals to other health care providers to help manage their diabetes. So ultimately it should improve their care.”
Elements Influencing Clinical Outcomes
The elements that influenced achieving better clinical outcomes were identified as: (1) having accurate data to inform clinical decisions; (2) health status and room for improvement; and (3) the progressive nature of diabetes. “Some target numbers will not change just due to the progressive nature of the disease…. If we did absolutely everything right for a certain number of people we would still lose control over time.” Other elements included: (1) provider and patient comfort with treatment intensification; (2) comorbidities, side effects, and treatment interactions; and (3) personal goals and environmental factors. “Some patients have a beautiful A1c, but … they are low 3 or 4 times a week…. We change medication and they get rid of their lows, but their A1c is now 7.3 instead of 6.8…. We want to see it under 7, but at what cost to the patient?”
Beyond clinical outcomes were themes about improvement in patients' general health, mental health, and quality of life: “I think patient satisfaction is one thing. Patient involvement and self-management are components that are improving the overall health of the patient. Even if we don't see it with their diabetes numbers, patients are going to see advantages in their quality of life.”
Discussion
Our study used a mixed-methods approach (external chart audit and individual interviews) to assess the effect of a QI program on diabetes clinical processes and outcomes and to capture the views of program participants regarding the elements that influenced improvements in diabetes clinical processes and outcomes. At 12 months after the program, chart audit results showed significantly improved diabetes clinical processes and outcomes for the monitoring and management of glycemic control and related diabetes complication risk factors (cholesterol and hypertension). These findings were consistent with program participants' views that the program gave them the knowledge, skills, and confidence to change practice and that a team-based, coordinated approach; better patient tracking and proactive care to meet CPG recommendations; and a more patient-centered approach influenced clinical processes and outcomes. Interview data revealed a lack of accurate patient data and limited provider and patient comfort with treatment intensification as perceived barriers to improving patient outcomes, but chart audit results showed evidence of intensification of treatment for hyperglycemia, hypertension, and dyslipidemia. It may be that the perceived increase in patient involvement and active self-management helped to outweigh or overcome some of the identified barriers.
Chart audit results for the entire sample showed a small (7.2% to 7.3%) but statistically significant rise in A1c. This corresponds with program participants' concerns that the progressive nature of the disease influences outcomes. However, chart audit data revealed a well-controlled patient population at baseline, with a mean overall A1c of 7.2%, providing little opportunity for physicians to initiate clinically relevant improvements.10 Likewise, the interview results highlighted how health status and room for improvement influenced decisions to intensify treatment. Additional analysis supported the concept of clinical inertia and the qualitative themes by showing a significant reduction in A1c (from 8.2% to 8.0%; P < .01) for patients above the CPG target at baseline. Other barriers to improving clinical outcomes were identified as comorbidities, treatment interactions, and side effects; personal goals; and environmental factors.
This article has important implications for practice-based QI initiatives like Partnerships for Health that aim to improve the quality of care by shifting from an acute care approach to a focused chronic disease management approach. Evaluation of QI initiatives is critical to inform policy makers and to influence how funding and support are provided to primary health care providers. Previous literature has yielded unclear results regarding the effectiveness of QI initiatives at improving diabetes care, showing positive change in clinical process measures and intensification of medication but no change in clinical outcomes.36,42–44 Recent chart audit studies showed inconsistent improvement in A1c, BP, and LDL, depending on the length of the period after the initiative.16,45–47 This study had a relatively short period after the program (1 year), yet significant improvements were found. Future research could employ a longer period to examine the impact over time and to determine the spread and sustainability of the results reported herein. Such research could provide evidence to determine whether program participants' views related to the continued improvement of clinical outcomes resulting from enhanced clinical processes (eg, team approach, collaborative and proactive care) and other program benefits (eg, increased patient knowledge and involvement, better general health status, mental health, and quality of life) have merit.
Limitations
Participation in the program and in the evaluation was voluntary, and demographics showed that the proportions of consenting physicians with EMRs and practicing in a new funding model were higher than the national averages.7,41 This introduces a potential participant bias (early adopters of health care reform strategies), which may have affected the results and limits generalizability. The evaluation design did not include a control group, precluding the opportunity to account for influential factors beyond the program. Groups with different levels of involvement in the program were compared, but, again, program involvement was voluntary, not randomly allocated, and although odds ratios showed increasing trends with a higher level of involvement, no statistical differences were found. Including all program participants in the evaluation irrespective of their level of involvement may have affected estimations of impact. Given the positive findings and the lack of difference between groups, it seems that the program's approach of targeting select members of a practice site to participate in educational activities and encouraging them to work with colleagues at the practice site to redesign care is adequate to yield positive change in clinical processes and outcomes. Future studies are needed to determine the ideal ratios related to the level of involvement of multiple team members in a practice site and should include a comparison to control practices. Lastly, the length of the period after the program may have affected the estimated impact of the program; a longer follow-up period could have minimized the effect of this limitation.
Conclusions
QI initiatives like Partnerships for Health can lead to improved diabetes clinical processes and outcomes, and these improvements can be detected as early as 1 year after the program. Additional studies are needed to assess the sustainability and spread of the improvements found and to identify the program features that support the elements perceived by program participants as critical in achieving positive results. This information will contribute to replicating and enhancing the results reported here in future QI initiatives in primary health care.
Acknowledgments
The authors acknowledge the cooperation, support, and dedication of the Partnerships for Health team, steering committee, and participants. The authors also acknowledge the editorial assistance of Jordan Tompkins and the research assistance of Marie Tyler and Rosie Caruso at the Centre for Studies in Family Medicine, The University of Western Ontario, London, Ontario, Canada.
Notes
This article was externally peer reviewed.
Funding: Support for the Partnerships for Health Project was provided by the Government of Ontario. The initiative was sponsored by the South West Local Health Integration Network, with the South West Community Care Access Centre serving as the transfer payment agency, providing oversight of the project agreement and funding arrangement with the Ontario Government Ministry of Finance. SH holds the Canadian Diabetes Association Chair in Diabetes Management and the Dr. Ian McWhinney Chair of Family Medicine Studies. AT holds the Canada Research Chair in Health Services Research. MS is funded by the Dr. Brian W. Gilbert Canada Research Chair in Primary Health Care Research.
Conflict of interest: none declared.
Disclaimer: The views and opinions expressed herein do not necessarily represent the official policies of the Government of Ontario.
- Received for publication August 17, 2012.
- Revision received May 10, 2013.
- Accepted for publication May 28, 2013.