Abstract
Purpose: HHS’ Million Hearts campaign focused the delivery system on the ABCS clinical quality measures (appropriate Aspirin use, Blood pressure control, Cholesterol control, and Smoking cessation counseling). AHRQ’s EvidenceNOW project funded 7 collaboratives to test different ways to improve performance and outcomes on the ABCS within small primary care practices. The Heart of Virginia Healthcare (HVH) collaborative designed 1 of the approaches in EvidenceNOW.
Methods: Two hundred sixty-four eligible practices were recruited to participate and randomized to 3 cohorts in a stepped wedge design, and 173, employing 16 different EHRs, remained for the duration of the initiative. The practice support curriculum was delivered by trained practice coaches to enhance overall practice function and improve performance on the ABCS metrics. The intervention consisted of a kickoff meeting, 3 months of intensive support, 9 months of ongoing support, and access to online learning materials and expert faculty. The mean practice contact time with coaches was 428 minutes, but the standard deviation was 426 minutes.
Results: Overall, the short HVH intervention had small but statistically significant positive average effects on appropriate use of aspirin and other antithrombotics, small negative effects on blood pressure control (except among practices that did not attend the kickoff), and small negative effects on smoking cessation counseling.
Conclusions: The intervention phase was truncated due to difficulty in recruiting a sufficient number of practices. This undoubtedly contributed to the lack of substantial improvements in the ABCS. Other likely contributing factors were our inability to provide real time feedback on metrics and the frequency with which major practice disruptions occurred. Future efforts to improve primary care practice function should allow adequate time for both practice recruitment and external support.
- Cardiovascular Diseases
- Disease Management
- Heart Disease Risk Factors
- Primary Health Care
- Quality Improvement
- Virginia
Introduction
Health and Human Services’ Million Hearts campaign focused the delivery system on ABCS clinical quality measures (related to appropriate Aspirin use, Blood pressure control, Cholesterol control, and Smoking cessation counseling).1 The rationale for this campaign is that heart disease is the number 1 cause of death in the United States, whereas stroke is the number 5 cause.2 Findings from population studies show that addressing these risk factors can yield substantial gains in life expectancy and reduce the burden of cardiovascular disease.3,4 Although primary care practices deliver most chronic disease prevention and care,5 their overall performance on risk factors for cardiovascular disease is suboptimal.6–11
AHRQ’s EvidenceNOW project funded 7 collaboratives to test different ways to improve performance and outcomes on the ABCS within small primary care practices.12 This reflects AHRQ’s longstanding emphasis on practice facilitation13 as well as considerable evidence that primary care practices do improve when provided practice facilitation.14 The Heart of Virginia Healthcare (HVH) project designed 1 of the approaches in EvidenceNOW.15
Methods
Study Design
The purpose of AHRQ’s EvidenceNOW initiative was to test whether the performance of small primary care practices on standard ABCS metrics could be improved, without payment incentives, through coaching and technical assistance. We specifically sought to address practice well-being before working on quality improvement. Our rationale for this approach is that physician burnout is increasing, with multiple consequences including poorer care quality.16 Therefore, care of the patient requires care of the clinician.17 Guidance on how to achieve this while improving care quality is found in case studies of high-functioning primary care practices that use team care models and simplified workflows.18 General internist Christine Sinsky has created a simple model to implement this in primary care,19 and the AMA has a website with dozens of modules to help practices simplify processes, create care teams, reduce stress, and improve care.20
The Heart of Virginia Healthcare (HVH) project conducted a stepped-wedge cluster randomized trial to execute “Restoring Primary Care in Virginia” (the intervention) to improve the quality of care delivered by small and medium-size primary care practices. “Restoring Primary Care in Virginia” was a short intensive intervention, using practice coaches for 3 months and ongoing support from coaches and academic medicine faculty for 9 months after that. The guiding principle was to address practice function and clinician and staff well-being as a preamble to quality improvement work. Practices chose areas of focus from a faculty-prepared toolkit detailing a range of workflow redesign activities, emphasizing functional practice improvements and pathways to specific improvements in ABCS metrics.
Practice Selection
Two hundred sixty-four primary care practices in Virginia with fewer than 10 clinicians were originally recruited to the project because AHRQ required all collaboratives to recruit at least 250 practices.21 Our recruitment strategies included presentations at statewide meetings of family physicians and general internists, reaching out to community family physicians active in medical student education, and collaboration with health systems with large numbers of primary care practices. Some practices dropped out over time for various reasons, and others were unable to deliver or permit usable data to be extracted, so our final ABCS analysis file was obtained from 173 practices. The CONSORT (Consolidated Standards for Reporting Trials) flow diagram (Figure 1) describes the filtering from 264 to 173, by stepped-wedge cohort. The unit of analysis throughout was the practice-quarter. The final number of usable practice-quarter observations was 1033, 312 of which were baseline in the stepped-wedge framework.
Practices spanned independent (30%), federally qualified health centers (13%), and hospital system owned (57%). Table 1 shows the distribution across ownership type and intervention cohort.
Study Design
All practices were randomized to staggered intervention cohorts in a stepped-wedge design, displayed in Figure 2. The stepped wedge enables each cohort of practices to serve as a control group before the intervention begins for them. Our design randomly assigned enrolled practices to 1 of 3 cohorts at baseline (2015, quarter 4), so that each practice’s data are sometimes control, sometimes intervention, and sometimes from the maintenance period. The intervention period comprised the 3 months of active contact with practice coaches, who worked with practices directly, in person or by phone. Maintenance effectively continued until the end of the project in the first quarter of 2018 and was the period in which practices could reach out to HVH project faculty on their own and access online resources designed to supplement the toolkit. The ABCS data were collected at baseline and during each subsequent measurement period.
Description of the Intervention
Our intervention had 3 phases and several supporting elements. The first phase was a kickoff event in which practices randomized to a given cohort met to understand the goals and rationale for the project and the key elements of simplifying practice processes, creating team care models, and improving reimbursement through more effective documentation. The second phase was a 3-month period during which practice coaches met with key practice personnel and ascertained what the practices wanted to work on with regard to redesign as well as to improving their ABCS measures. Our coaches were experienced practice facilitators from the Quality Improvement Organization that serves Virginia and Maryland and had earlier helped nearly 1000 primary care practices choose and implement EHRs. They received training on strategies to simplify practice workflows and conduct quality improvement activities. They had weekly meetings with project faculty to share experiences and create solutions to problems. A final 9-month phase provided the opportunity for continuing support from our coaches as well as consultations with experienced physician faculty. Supporting elements included a comprehensive toolkit for practice redesign and quality improvement (included as an online Appendix), a private online chat room for posing questions or sharing practical learnings, and a compendium of relevant resource materials.
Clinical Data Extraction
The HVH project designed a mixed research method to collect clinical data (ie, ABCS) and organizational data. The clinical data were extracted from practices’ electronic health record systems (EHRs). The organizational data were gathered by surveying clinicians and staff, as described in the Survey section below.
There were 16 different EHRs in use in our final sample of small Virginia practices during the study period. The EHRs and the practices themselves had widely varying capacities to deliver ABCS data to the HVH project in a timely and useful fashion.22 Therefore, the HVH project used 4 different approaches to extract ABCS data: (1) the HVH team visited practices in person to extract ABCS data from locally based EHRs, typically by exporting deidentified continuity of care documents (CCDs), and computed the ABCS scores ourselves; (2) the HVH team was granted credentials to access practices’ cloud-based EHRs and generated custom reports with ABCS metrics; (3) the HVH team provided metric specifications and guided practices to generate ABCS data themselves and send the data back to the HVH team; and (4) the HVH team worked with third parties (eg, hospital systems’ IT departments) to obtain practices’ ABCS data in report form.
Survey Design
The HVH project conducted individual-level surveys to investigate clinicians’ and staff members’ perceptions of adaptive reserve (AR), that is, practice flexibility and ability to take on change. The HVH project also implemented a practice-level survey, the Change Process Capability Questionnaire (CPCQ), to examine a lead clinician’s or practice manager’s perception of practice capacity and characteristics. Both surveys were completed in 2016 and 2017. More detail about survey administration and response rates can be found in Cuellar et al (2018).21
Outcome Variables
Aspirin Use by High-risk Individuals (A)
Aspirin use by high-risk individuals measures the percentage of patients aged 18 years and older with Ischemic Vascular Disease with documented use of aspirin or other antithrombotic (as defined by the National Committee for Quality Assurance (NCQA) measure #0068 and equivalently the Physician Quality Reporting System (PQRS) measure #204).23
Blood Pressure Control (B)
Blood pressure control measures the percentage of patients aged 18 through 85 years who had a diagnosis of hypertension and whose blood pressure was adequately controlled (<140/90 mm Hg) during the measurement year (ie, NCQA measure #0018 or equivalently PQRS measure #236).24
Cholesterol Management (C)
Cholesterol management measures the percentage of adult patients who were prescribed or were already on statin medication therapy during the measurement period among 3 high-risk groups: patients aged 21 years and older previously diagnosed with, or currently having an active diagnosis of, clinical atherosclerotic cardiovascular disease (ASCVD); patients aged 21 years and older with a fasting or direct low-density lipoprotein cholesterol (LDL-C) level ≥190 mg/dL; or patients aged 40 to 75 years with a diagnosis of diabetes and a fasting or direct LDL-C level of 70 to 189 mg/dL (ie, Centers for Medicare and Medicaid Services Group Practice Reporting Option (CMS GPRO) measure PREV-13).25
Smoking Cessation (S)
Smoking cessation measures the percentage of patients aged 18 years and older who were screened about tobacco use 1 or more times within 24 months AND who received cessation counseling intervention if identified as a tobacco user (ie, NCQA measure #0028, PQRS measure #226).26
HVH Intervention Variables
Intensive
Intensive is a binary variable that takes the value 1 when the HVH team worked with practice coaches to provide patient-centered outcomes research education and to help implement major elements of the Sinsky model of primary care practice redesign.18 Intensive periods lasted 3 months. The start and end times of intensive periods are illustrated in the stepped-wedge design of Figure 2. Once an intensive period started, we forced the intensive variable to remain “on” for the remainder of the study period; that is, a practice that received the intervention was treated as intervened through the end of the study, consistent with an “intent to treat” approach. The coefficient on intensive represents the average effect of the intervention on a specific dependent variable, an ABCS measure, across all practices.
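As a minimal sketch, the intent-to-treat coding described above can be expressed as follows (illustrative Python; the function name and the example start quarter are ours, not from the study):

```python
def intensive_indicator(start_quarter, n_quarters=10):
    """Build the per-quarter Intensive dummy for one practice.

    The variable is 0 before the practice's intensive phase begins and
    is forced to stay 1 ("on") from that quarter through the end of the
    study, per the intent-to-treat convention.
    """
    return [1 if q >= start_quarter else 0 for q in range(1, n_quarters + 1)]

# A cohort whose intensive phase begins in study quarter 4:
intensive_indicator(4)  # [0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
```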
Maintenance
Maintenance represents the time period during which the HVH team used the online platform, conference calls, and Skype visits with faculty to assist practices with monitoring and incorporating new patient-centered outcomes research measures (ABCS) and with workflow redesign questions. The time points of maintenance are also depicted in Figure 2. Once a maintenance period started, Maintenance stayed “on” through the entire study. The coefficient on maintenance represents the marginal impact on a dependent variable of the maintenance period activities.
Independent Variables
Kickoff
Kickoff measures whether the practice participated in the kickoff training event (yes = 1; no = 0). The topics of the event included PCOR education, the Sinsky team care model, enhanced coding for reimbursement, keys to a highly functioning team, and avoiding provider burnout.
Coach Time
Coach time, collected by the HVH coach team, measures how many minutes the HVH coach team worked with the practice in total, counting both in-person and telephone minutes.
Ownership
Ownership measures the practice’s ownership type and was confirmed by phone. The practices were categorized as independent, owned by a hospital system, or a Federally Qualified Health Center (FQHC). The variable was converted to 3 dummy variables, with independent practices as the reference group.
Control Variables
CPCQ
The Change Process Capability Questionnaire (CPCQ) consists of 2 sets of questions introduced by AHRQ.27 The first set has 18 questions measuring practices’ approaches to quality improvement. The second set has 14 questions measuring strategies that practices have used to improve care quality. The 5-point Likert-type scale ranges from strongly disagree (1) to strongly agree (5). The HVH project adopted the second set of questions and rescaled the responses to range from −2 to 2. The 14 questions were summed to a single score (Cronbach's α = 0.93). We interpret the CPCQ score for the practice as reflective of the practice’s culture or overall attitude toward change.
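The CPCQ scoring just described can be sketched as follows (illustrative Python; the example item responses are invented):

```python
def cpcq_score(responses):
    """Sum the 14 CPCQ strategy items after recentering each 5-point
    Likert response (1..5) to the -2..2 range used by HVH."""
    assert len(responses) == 14
    return sum(r - 3 for r in responses)

# A practice answering "agree" (4) to every strategy item scores +14;
# answering "neutral" (3) throughout scores 0.
cpcq_score([4] * 14)  # 14
```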
AR
The original Adaptive Reserve (AR) questionnaire consists of 23 questions on a 5-point Likert-type scale ranging from strongly disagree (1) to strongly agree (5).28 The HVH project adopted 18 questions (excluding questions 4, 6, 9, 11, and 18) from the original AR instrument and converted each response to a value between 0 and 1. Because AR was collected in the individual-level survey, we first averaged each individual’s 18 responses and then averaged those individual scores within each practice (Cronbach's α = 0.95).
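The two-stage AR averaging can be sketched as follows (illustrative Python; the rescaling formula and example responses are our assumptions about a linear 1-to-5 → 0-to-1 mapping):

```python
def practice_ar_score(responses_by_individual):
    """Two-stage AR averaging: rescale each 5-point Likert response
    (1..5) to 0..1, average the items within each respondent, then
    average the respondent scores within the practice."""
    individual_means = [
        sum((r - 1) / 4 for r in items) / len(items)
        for items in responses_by_individual
    ]
    return sum(individual_means) / len(individual_means)

# Two respondents, each answering 18 AR items:
practice_ar_score([[5] * 18, [3] * 18])  # (1.0 + 0.5) / 2 = 0.75
```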
ACO
Accountable Care Organization (ACO) was collected by the practice-level survey. ACO measures whether the practice participates in an ACO (yes =1; no = 0).
Medicare
Medicare, collected by the practice-level survey, measures the percentage of a practices’ patients who had Medicare coverage.
Medicaid
Medicaid, collected by the practice-level survey, measures the percentage of patients who had Medicaid coverage.
Practice size
Practice size, collected by the practice-level survey, measures the total number of clinicians in the practice and was defined as 1 or solo practice, 2 to 5 clinicians, or greater than or equal to 6 clinicians. Solo practices are the reference group.
Location
Location measures whether the practice is located in an urban or rural area (urban area =1; rural area = 0) based on definitions provided by the Office of Rural Health Policy.29
Cohort
Cohorts 1, 2, and 3 were assigned by the stepped-wedge design. The cohort variable was converted to 3 dummy variables. Cohort 1 is the reference group.
Measurement Period
Measurement period (MP) is the study quarter and ranges from 1 to 10.
Statistical Approach
Random effects models are the preferred way to control for possible intracohort correlation within a stepped-wedge design.30 Our analysis begins with parsimonious models that estimate the effects of Intervention and Maintenance on the ABCS, controlling for MP and Cohort. The analysis then moves to more complex models that estimate the effects of Intervention, Maintenance, and the other independent variables of interest on the ABCS, controlling for MP, Cohort, ACO, payer mix, location, practice size, CPCQ, and AR. We used linear regression, and the dependent variable was the percentage of patients meeting the quality measure. Analyses were performed using Stata (Version 12, StataCorp).
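For concreteness, the parsimonious specification can be written as follows (our notation; the article does not print the equation explicitly):

```latex
Y_{pt} = \beta_0 + \beta_1\,\text{Intensive}_{pt} + \beta_2\,\text{Maintenance}_{pt}
       + \gamma\,\text{MP}_t + \sum_{c=2}^{3} \delta_c\,\text{Cohort}_{cp}
       + u_p + \varepsilon_{pt}
```

where \(Y_{pt}\) is the percentage of practice \(p\)'s patients meeting a given ABCS measure in quarter \(t\), \(u_p\) is the practice-level random effect, and \(\varepsilon_{pt}\) is the idiosyncratic error. The richer models add the practice covariates (ownership, ACO, payer mix, location, size, CPCQ, and AR) to the right-hand side.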
Results
Table 2 reports baseline descriptive statistics, using the preintervention observations from each participating practice. Performance on the 4 clinical variables of interest (ABCS) ranged from 60% to 79%, representing the practice-level average percentage of patients receiving appropriate care across all practices. Practices exhibited a wide range in ABCS values, from as low as zero to as high as 100%, and large standard deviations in baseline performance on all but blood pressure control. About half of the practices attended their respective cohort’s kickoff event. Both the CPCQ and AR surveys revealed wide variation across the sample as well. The total coach time (in minutes) with the practice coaches who delivered the QI intervention was 421 minutes on average but ranged from zero to over 2000. Approximately 8% of participating Virginia practices were FQHCs, 60% were from hospital systems, and almost 70% were in at least 1 ACO. Almost 30% of these primary care practices’ patients were Medicare enrollees, and 17% were Medicaid. A majority of participating practices had between 2 and 5 clinicians.
Tables 3–5 report statistical models that control for practice characteristics (cohort, ownership, payer mix, size, urban/rural location), as well as CPCQ, AR, and whether the practice attended the kickoff session. The first column of each table tests for average effects. Column (2) in each table tests for differential impacts of the intervention on the outcome measures across ownership types by interacting the intensive and maintenance phase indicators with FQHC and system status indicators. If the average effect were identical across ownership types, the FQHC and system interactions would be insignificant.
Table 3, column (1) reports that the average effect of both the intensive and maintenance phases on appropriate aspirin or other antithrombotic use was significantly positive (1.47 and 1.53 percentage points, respectively; P ≤ .05). Column (2) suggests that the intervention effect is closer to 8.5% for independents and FQHCs (the impact is identical for them) but −6.7% for system practices. This negative result may be tempered by the large positive coefficient on system ownership among the control variables, indicating that system practices on average scored 21 to 27% higher at baseline than independents on aspirin use. Although system practice performance may have deteriorated postintervention, they still outperformed independent practices on average. Overall, our conclusion from Table 3 is that there was an immediate (intervention) and lingering (maintenance) positive average effect of the HVH intervention on aspirin performance in general, that it was strongest for independent practices, and that it was small compared with baseline practice variation.
Turning to blood pressure control, we first note from Table 2 that the standard deviation of BP performance is roughly half that of aspirin, though the means are almost identical; there is considerably more uniformity in attention to BP control among small Virginia primary care practices. Table 4, column (1) reports that the average impact of the HVH intervention was to lower BP control in the maintenance phase by approximately 1%. Column (2) makes clear that this average effect was driven by the FQHCs, whose performance dropped relative to independents. Again, baseline FQHC performance on BP control was better than that of other practice types (by 42%), so the slight dip post-HVH is outweighed by superior performance throughout. Our inference from these results is that the impact of the HVH intervention did not persist, and any shorter-term impacts were concentrated among FQHC practices.
Testing for the impact of the HVH intervention on cholesterol control was complicated by the fact that the guidelines for cholesterol management were being revised while practices were being recruited and the intervention was being implemented. We chose a version of the measure that was at least in use at the time, the C control measure included in the GPRO reporting set, but it was not, at the time of our study, programmed into EHRs as a standard meaningful use metric. Therefore, even the most EHR-savvy practices could not track their progress on this metric in real time. As a result, we were not surprised to observe zero estimated average effects of HVH on this metric in either the intensive or the maintenance phase (results available on request).
Estimated effects of HVH on smoking cessation counseling (Table 5) were slightly negative (approximately 1%) in the maintenance period on average, and no differential FQHC or system effect was observed.
We also note that practices in Cohort 3 performed 20 to 22 points lower than average, approximately a 25% differential on this metric, suggesting that practices in this cohort had much room for improvement. However, we did not find any differential impact of the intervention on the separate cohorts (results not shown).
Discussion
Overall, the short HVH intervention had small positive average effects on appropriate use of aspirin and other antithrombotics, small negative effects on blood pressure control, and small negative effects on smoking cessation counseling. These small effects were dwarfed by variation in performance across primary care practices in Virginia. Our conclusion is that the intensive phase of the intervention was probably too short to engender lasting change in process and results. Another factor limiting the impact of the intervention was that technical data extraction difficulties and hospital IT system delays prevented the HVH team from reporting ABCS performance and movement to participating practices in real time during the intervention, and in many cases even the maintenance, periods. Another limitation was that competing priorities, including some for which practices had financial incentives that the EvidenceNOW project was not able to offer, likely affected the effectiveness of the intervention efforts. Additional contributing factors were delays in recruiting the target number of practices, which then resulted in a truncated intervention period, and the multiple major disruptions (for example, changes in staff, ownership, or EHR) that degraded practice engagement with the HVH initiative. Our inability to recruit large numbers of independent practices meant that most of our recruited practices were system owned, and there is some evidence that such practices may not perform as well on important quality measures compared with independent practices.31,32 Multiple major disruptions were widespread in at least 1 other EvidenceNOW cooperative,33 suggesting that practice support initiatives must be designed to respond and to work to re-engage after a disruption. Therefore, our intervention fidelity, although not formally measured, was probably significantly compromised by these multiple factors.
Finally, we found that our physician experts and our learning community were minimally used, whereas another cooperative reported that these elements were key to cardiovascular measure improvement.34 The other EvidenceNOW collaboratives had mixed results, with some showing improvements on one35 or none34 of the measures, although others showed 3% to 5% improvements on multiple measures.36–38
Conclusion
The intervention period of the HVH project was too short, given the complexities of QI from a good average starting point and given competing priorities of small practices trying hard to survive in today’s rapidly evolving environment of meaningful use, value-based payment models, and the implementation of the Medicare Access and CHIP Reauthorization Act (MACRA).
Acknowledgments
The authors are grateful to Daniel Mora, Moji Zare, and Meng-Hao Li for expert programming and to Iwona Kicinger for assistance with the practice surveys.
Appendix
Notes
This article was externally peer reviewed.
Funding: Supported by AHRQ Grant 1R18HS023913-01.
Conflict of interest: None.
To see this article online, please go to: http://jabfm.org/content/35/5/979.full.
- Received for publication January 15, 2021.
- Revision received October 1, 2021.
- Revision received April 11, 2022.
- Accepted for publication April 25, 2022.