Abstract
Purpose: Becoming certified as a patient-centered medical home now requires practices to measure how effectively they provide continuity of care. To understand how continuity can be improved, we studied the association between provider practice characteristics and interpersonal continuity using the Usual Provider Continuity Index (UPC).
Methods: We conducted a mixed-methods study of the relationship between provider practice characteristics and UPC in 4 university-based family medicine clinics. For the quantitative part of the study, we analyzed data extracted from monthly provider performance reports for 63 primary care providers (PCPs) between July 2009 and June 2010. We tested the associations of 5 practice parameters with UPC: (1) clinic frequency; (2) panel size; (3) patient load (ratio of panel size to clinic frequency); (4) attendance ratio; and (5) duration in practice (number of years working in the current practice). Clinic, care team, provider sex, and provider type (physicians versus nonphysician providers) were analyzed as covariates. Simple and multiple linear regressions were used for statistical modeling. Findings from the quantitative part of the study were validated using qualitative data from provider focus groups, which were analyzed with sequential thematic coding.
Results: There were strong linear associations between UPC and both clinic frequency (β = 0.94; 95% CI, 0.62–1.27) and patient load (β = −0.37; 95% CI, −0.48 to −0.26). A multiple linear regression including clinic frequency, patient load, duration in practice, and provider type explained more than 60% of the variation in UPC (adjusted R2 = 0.629). UPC for nurse practitioners and physician assistants was more strongly dependent on clinic frequency and was at least as high as it was for physicians. Focus groups identified 6 themes as other potential sources of variability in UPC.
Conclusions: Variability in UPC between providers is strongly correlated with variables that practice managers can modify. Our study suggests that patients assigned to nurse practitioners and physician assistants experience continuity similar to that of patients assigned to physicians.
Interpersonal continuity (IC), a fundamental principle of primary care, is defined by the Institute of Medicine as the product of “personal interactions that include trust and partnership between patients and clinicians.”1 Numerous studies have demonstrated the benefits of enhanced IC, including increased patient and provider satisfaction,2–5 healthier patient behaviors,6 increased receipt of preventive and screening services,7–11 reduced hospitalization rates,12,13 decreased emergency department and intensive care unit utilization,14–16 decreased overall health costs,17 and reduced elderly mortality.18,19 Despite broad consensus regarding these benefits, practice characteristics that improve IC remain poorly understood, and practice trends such as remote patient interactions and team-based care are leading to new dimensions of continuity outside of traditional face-to-face encounters.20 With the advent of formal systems to evaluate and certify patient-centered medical homes (PCMHs), primary care practices increasingly are being asked to assess and report measures of continuity of care.
Decades of research have employed a variety of ways to measure IC.21 Before electronic health records (EHRs), there were few practical methods for collecting accurate continuity data, requiring investigators to infer continuity from chart review, claims analysis, or survey data.22 These barriers to accurate measurement of IC limited investigators' ability to perform rigorous analyses. As a result, there is little consensus on how to improve continuity or whether a benchmark or target continuity rate exists.
The Usual Provider Continuity Index (UPC) is a measure of how often patients see their self-identified primary care provider (PCP).21,23 It is calculated by first determining the population of active patients who are assigned to a particular provider. The monthly UPC for that provider then is defined as the number of monthly clinic visits during which these patients see the assigned PCP divided by their total visits to the clinic that month. Measures of UPC now are required for several systems of PCMH certification, suggesting that primary care practices will need to measure and improve performance in this area in the coming years.24,25
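Written as a formula (a restatement of the definition above; the notation is ours), the monthly UPC for provider $p$ in month $m$ is
\[
\mathrm{UPC}_{p,m} \;=\; \frac{\text{visits in month } m \text{ at which patients on } p\text{'s panel were seen by } p}{\text{all clinic visits in month } m \text{ by patients on } p\text{'s panel}} .
\]
For example, a provider whose active panel made 100 clinic visits in a given month, 61 of them with that provider, would have a monthly UPC of 61% for that month.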
In 2008, the Oregon Health & Science University (OHSU) Department of Family Medicine established a system using data from EHRs for automated monthly collection of the UPC for each provider and clinic team. This has provided a robust database of prospectively collected continuity data, allowing for analyses of the determinants of continuity that were not feasible before EHR implementation. The objective of our study was to investigate whether certain provider practice parameters are associated with higher IC, thereby suggesting ways for practices to improve continuity as they transform into PCMHs.
Methods
Design
We conducted a sequential, explanatory, mixed-methods study26 of the effects of several provider practice parameters on IC as measured by UPC. We used retrospective data from 12 monthly UPC reports and department personnel records for quantitative analysis. The monthly reports included information about panel size and clinic frequency as well as UPC data, and the personnel records contained information about provider sex and duration in practice; however, we also wanted to learn about other potential factors that might affect continuity of care. Therefore, we conducted provider focus groups under an expert panel paradigm for qualitative analysis after the quantitative analysis was complete, allowing us to ask providers about other factors that might impact the UPC score.
Setting
The OHSU Department of Family Medicine operates 4 academic family medicine clinics. These include one federally qualified health center and one rural health clinic. All the clinics are recognized as level 3 PCMHs by Oregon's designation system,24 which is similar to that of the National Committee for Quality Assurance.25 Each clinic is divided into care teams, which consist of physicians and mid-level providers (nurse practitioners [NPs] and physician assistants [PAs]), residents (with the exception of the rural health clinic), nurse coordinators, medical assistants, and ancillary staff. The EPIC electronic health record is used in each of the clinics.
Two years before the start of this project, each of the clinics engaged in a comprehensive quality improvement project to ensure that the PCP field in every patient's health record was accurate and up to date. Patients are asked to identify their PCP each time they have an encounter with the clinic, including phone calls, laboratory visits, nursing visits, and provider visits, so the PCP field is verified and kept current at every patient encounter.
One year before the project, the department began to track and report the UPC rate for every provider on a monthly basis. This provides a quantitative assessment of the availability of each provider to the patients for whom he or she is the registered PCP. Data are disseminated in monthly provider performance reports. Because every PCP is a member of a discrete team, the UPC for each team is also tracked monthly.
Subjects for Quantitative Analysis
The unit of analysis for this study was the individual provider, specifically the individual faculty physicians (MDs and DOs) and mid-level providers (NPs and PAs) at each of the 4 clinics. The study period was from July 1, 2009, to June 30, 2010. The inclusion criteria were faculty and clinical fellows who had documented clinic sessions at any of the 4 clinics and who had a registered patient panel for which they were the designated PCP during the study period. To identify all eligible providers, we searched department records of all providers who had documented clinic sessions at any of the 4 clinics during the study period; we expanded the search to include all faculty members identified by departmental personnel records. This yielded 124 potentially eligible providers. Of these 124 individuals, 61 were excluded for the following reasons:
- Thirty-two faculty members were physicians without an assigned patient panel (15 consulting specialists, 7 locum tenens physicians, 5 nonclinical faculty, 3 exclusive residency preceptors, and 2 nonclinical fellows).
- Two were physicians who had a mixed primary care and specialty sports medicine referral practice.
- Five were nonphysician providers without a registered patient panel (ie, 2 acupuncturists, 2 clinical social workers, and 1 PA who was assigned no patients).
- Twenty-two were providers who left the department during the study period (5 physicians, 9 fellows who recently graduated, and 8 mid-level providers).
In total, the final analysis set included 63 providers comprising 45 physicians (including 6 fellows) and 18 mid-level providers. Provider data were historic in nature and de-identified before analysis; thus an exemption was granted by the OHSU Institutional Review Board.
Data Collection and Outcome Variable
System-wide EHR use allowed for continuous data collection with respect to provider panel sizes, clinic frequency, and visit volumes. Our outcome variable was the mean monthly provider UPC for each provider over the 12-month study period.
Predictor Variables
Clinic frequency, panel size, patient load, attendance ratio, and duration in practice, along with several covariates, were the practice parameters investigated as potential independent predictors of UPC.
Clinic Frequency
Clinic frequency was defined as the number of monthly half-day clinic sessions for a given provider. Providers are scheduled to work in the clinics in 4-hour blocks of time, which are referred to as half-day sessions. Counts were obtained from monthly provider performance reports and averaged over the 12-month study period to yield a single mean monthly half-day count for each provider.
Panel Size
End-of-month patient panel size for each provider was obtained from monthly provider performance reports. Panel size values were determined from the total number of patient charts in the EHR with a given provider listed in the PCP field. Patients who have not seen their PCP in 3 or more years are dropped from the provider's panel, so end-of-month panel size reflects only active patients.
Patient Load
Patient load was defined as the ratio of panel size to clinic frequency (the panel-to-half-day ratio). This variable normalizes panel size for part-time providers and was obtained by dividing the mean monthly panel size by the mean monthly half-day clinic session count for each provider.
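As a worked example, using the mean values reported in the Discussion (panel size, 540 patients; clinic frequency, 14 monthly half-days):
\[
\text{patient load} \;=\; \frac{\text{panel size}}{\text{clinic frequency}} \;=\; \frac{540}{14} \;\approx\; 38.6 \text{ patients per half-day session.}
\]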
Attendance Ratio
Clinic attendance ratio was calculated for each provider by dividing the mean monthly half-day session count by the expected monthly half-day count as indicated by the provider's contracted clinical full-time equivalent. A 1.0 clinical full-time equivalent corresponds to 8 clinic half-day sessions per week for mid-level providers and 7 half-day sessions per week for physicians, who have 1 half-day per week designated for resident precepting. This measure did not discriminate by reason for absence from the clinic.
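In formula form,
\[
\text{attendance ratio} \;=\; \frac{\text{observed mean monthly half-days}}{\text{expected monthly half-days from contracted clinical FTE}} .
\]
As an illustration (the 4-weeks-per-month conversion is our assumption for the example, not a value from the study), a physician contracted at 0.5 clinical FTE would be expected to work 3.5 half-days per week, or roughly 14 half-days per month; if that physician actually averaged 12 half-days per month, the attendance ratio would be 12/14 ≈ 0.86.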
Duration in Practice
Duration in practice for each provider was defined as the total number of years practicing in OHSU Family Medicine at the end of the study period (June 30, 2010) based on department personnel records.
Other Covariates
Clinic and care team assignments for each provider were obtained from monthly performance reports. Provider type (physician vs. mid-level) and sex were obtained from departmental personnel records.
Statistical Analysis
Stata statistical software version 11.0 (StataCorp, College Station, TX) was used for all statistical analyses. Simple linear regression was used to assess the effect of each of the individual predictors on the outcome variable (UPC). One-way analysis of variance with Bonferroni-adjusted pairwise comparisons was used to assess variability in UPC by clinic and team assignments. Two-sample t tests were used to compare UPC and predictor variables by provider type (physician vs. mid-level) and sex. Multiple linear regression modeling was performed using the backward elimination method, Mallows' Cp criterion, and adjusted R2 to assess multiple predictors simultaneously and to identify the best set of independent predictors for UPC.
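The analyses were run in Stata; for readers who wish to reproduce the modeling approach in open-source tools, the final multivariable model (see Results) could be specified as in the following minimal sketch in Python with pandas and statsmodels. The column names and data values here are hypothetical placeholders, not the study data or the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical provider-level data: one row per provider, each value a
# 12-month mean. These are synthetic numbers for illustration only.
df = pd.DataFrame({
    "upc":              [62.0, 55.3, 70.1, 48.9, 66.4, 58.2, 72.5, 60.8],  # mean monthly UPC (%)
    "clinic_frequency": [14, 10, 18, 8, 16, 12, 20, 13],                   # monthly half-day sessions
    "patient_load":     [38.6, 45.0, 30.2, 52.1, 33.8, 41.5, 28.0, 40.0],  # panel size / clinic frequency
    "duration":         [7.2, 3.0, 15.4, 1.5, 10.0, 5.5, 12.0, 6.0],       # years in current practice
    "provider_type":    ["physician", "physician", "midlevel", "physician",
                         "physician", "midlevel", "midlevel", "physician"],
})

# Final model form reported in Table 3: clinic frequency, patient load,
# duration in practice, provider type, and a clinic frequency x provider
# type interaction. (The paper reached this model by backward elimination,
# comparing Mallows' Cp and adjusted R2; only the final fit is shown here.)
model = smf.ols(
    "upc ~ clinic_frequency + patient_load + duration"
    " + C(provider_type) + clinic_frequency:C(provider_type)",
    data=df,
).fit()

print(model.summary())  # coefficients, 95% CIs, and adjusted R-squared
```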
Qualitative Methods, Subjects, and Analysis
The qualitative portion of our study used provider focus groups under an expert panel paradigm.26,27 We introduced our hypotheses and proposed quantitative analysis predictor variables to provider groups to solicit their expert opinions regarding our methods and quantitative aims and to generate additional hypotheses for future study. Particular attention was paid to unique characteristics of clinics or individual providers that could limit the validity of our quantitative findings, as well as provider commentary on shifting perceptions of IC. Focus groups using a standardized script were conducted and audio-recorded by 1 of the authors (TM) during scheduled faculty meetings at each of the 4 clinics. Physician and mid-level provider participants were not formally identified at the time of the focus groups. Audio recordings were transcribed by 1 of the authors (TM), with anonymity of respondents maintained. Transcripts were independently coded into themes, subthemes, and representative quotations by 2 of the authors (TM and JS), with subsequent joint reconciliation of themes.
Results
Descriptive Analysis
A total of 63 providers and 15 care teams from 4 clinics were included in our quantitative analysis. Among these providers, 21 were female physicians, 24 were male physicians, 16 were female mid-level providers, and 2 were male mid-level providers. Thus, 58.7% (n = 37) were female and 28.6% (n = 18) were mid-level providers. A descriptive summary of outcome and predictor variables is given in Table 1. There were significant differences in UPC, clinic frequency, and patient load by provider type. There were no significant differences in UPC or predictor variables by provider sex. There were no significant differences in UPC between teams in a given clinic.
Clinic 1 had a significantly lower mean provider UPC relative to the other 3 clinics (56.1% vs. 65.4%; P < .05). For this reason we examined whether the relationships between predictor variables and UPC differed among the clinics. Linear plots of UPC on predictor variables stratified by clinic revealed similar trends in UPC across all clinics, with the only discrepancy being a lower baseline UPC for clinic 1, suggesting no effect modification by clinic.
Univariate Analysis
Simple linear regressions of UPC on predictor variables are summarized in Table 2. Clinic frequency was significantly associated with increased UPC (Figure 1), whereas both panel size and patient load were significantly associated with decreased UPC (Figure 2). There was no significant association between UPC and clinic attendance ratio. Although there was no significant linear association between UPC and duration in practice, a strong association was observed between patient load and duration in practice (β = 1.75; r = 0.55; P < .001); thus duration in practice was investigated as a potential confounder in multivariate analysis.
Multivariate Analysis
Multiple linear regression modeling identified a set of independent predictors: clinic frequency, patient load, duration in practice, provider type, and the interaction between clinic frequency and provider type (Table 3). Clinic frequency and patient load were the primary modifiable predictors of UPC. Duration in practice was included as a significant confounder because of its association with patient load. Provider type was included as an effect modifier because the univariate analysis suggested potentially different associations between UPC and clinic frequency by provider type. The final model explains >60% of the variation in UPC across our population of clinicians (adjusted R2 = 0.629; P < .0001).
Effect modification by provider type is further represented in Figure 3, which demonstrates the differential association between UPC and clinic frequency among physician and mid-level providers, holding patient load and duration in practice constant.
Qualitative Analysis
Physician and mid-level providers (n = 35) from each of the 4 clinics participated in focus groups. Six general themes were identified during sequential coding of provider responses (Table 4). Themes 1 to 3 identify potential sources of variation in UPC that are intrinsic to specific clinics, providers, or patient populations. Themes 4 and 5 focus on alternative perceptions of IC beyond face-to-face encounters between a single patient and their personal PCP. Theme 6 addresses the potential relationship between clinic absences and IC.
Discussion
The purpose of our study was to (1) define baseline UPC measures for 63 providers in our 4 clinics after a full year of careful measurement and (2) understand differences in UPC among providers based on provider practice characteristics. Our findings should be useful to clinic managers and physician leaders seeking to improve IC within their practices, but this method of analysis is only valid when patients' PCP assignments are known to be highly accurate and frequently updated. We identified patient load and clinic frequency as major modifiable predictors of UPC, both of which can be manipulated to achieve higher UPC. Based on our model, a provider's IC as measured by UPC can be improved by adjusting how often the provider is in the clinic and the size of his or her practice panel. A physician provider near the mean of all studied parameters (clinic frequency, 14 monthly half-days; panel size, 540 patients; patient load, 38.6; duration in practice, 7.2 years) can expect a UPC of 62.0%, nearly identical to our observed mean (61.0%). If this same provider were to add 4 half-days per month without a change in panel size, UPC would increase to 67.7%. Our qualitative analysis suggests that clinic scheduling patterns (subtheme 2.5) may account for additional unexplained variability in UPC. Providers in our focus groups believed continuity was likely to be more important for visits related to chronic or ongoing care, suggesting that these visits might be evaluated differently than acute care visits when assessing UPC (subtheme 4.1).
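The arithmetic behind this scenario can be made explicit. With panel size held at 540 patients, adding 4 monthly half-days lowers patient load from
\[
\frac{540}{14} \approx 38.6 \quad\text{to}\quad \frac{540}{18} = 30.0 ,
\]
and the predicted UPC values (62.0% and 67.7%) then follow from applying the regression coefficients in Table 3 (not reproduced here) to the old and new values of clinic frequency and patient load.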
An important finding of our study is that IC for mid-level providers as measured by UPC is at least as good as it is for physicians in this practice setting, where patients are allowed to choose physicians or mid-level providers as their PCP. Focus group participants hypothesized that intrinsic practice differences may contribute to the variability in UPC trends by provider type, including approach to patient care (subtheme 2.1), scope of practice (subtheme 2.3), and breadth of nonclinical duties (subtheme 2.4). Most of the physicians in this study perform hospital and maternity care and see patients in the office, whereas mid-level providers work only in the office setting. A specific limitation of our study is that it included only providers in academic practices, which calls into question whether our findings would be reproducible in community practices. Physicians in our practices have substantially more teaching and other academic duties than the mid-level providers. It is possible that the practices of mid-level providers in academic clinics may more closely resemble community-based physician practices, where physicians are less likely to have these academic responsibilities.
Another key finding in our study was that longer duration in practice improves UPC, but this occurs only after adequately controlling for practice size because more experienced providers care for larger panels with fewer clinic sessions as their practices mature. Qualitative analysis suggested that providers with more years in practice might achieve continuity that is not accounted for in UPC calculations, such as during resident precepting (subtheme 2.3). In addition, providers with more mature relationships with their patients may be able to achieve the benefits of IC despite less frequent patient visits.
One of the more consistent themes in our qualitative analysis was a concern about variability in UPC between providers of different sexes. Female providers in particular were concerned that different rates of maternity and well-woman visits (subtheme 2.2), as well as extended absences in the form of maternity leave (subtheme 6.2), would lead to lower UPC among female providers. Instead, our quantitative analysis found no significant differences in UPC or any of our predictor variables by provider sex.
The evolving nature of IC was discussed at length during the provider focus groups. Participants believed that efforts to quantify continuity should account for alternative forms of patient interactions, such as phone calls, e-mails, or interactions through the EHR (subtheme 4.2). These were thought to be meaningful interactions that enhance IC despite the absence of a face-to-face encounter. Providers also stated that the value of IC needs to be reassessed in light of the growing reliance on team-based care emphasized by the PCMH model (subtheme 5.1). Further research is needed to investigate the value of continuity with a team of providers versus an individual provider.
Conclusions
Our study represents a novel approach to the assessment of IC in family medicine and was made possible by our system-wide EHR, a tool that is still new for many practices. Our results improve the understanding of predictors of continuity while furthering efforts to establish benchmark UPC rates. Our methods should be reproducible in similar clinics or health systems and could help practicing family physicians address requirements for measuring and benchmarking continuity in the certification process to become a PCMH. Further research in this area might address how IC changes over time, how quality improvement efforts might improve continuity performance, and how clinic teams can expand on the proven value of relationship-based care in family medicine. Our mixed-methods study design allowed qualitative validation of our quantitative findings and suggests a number of key elements for future study.
Acknowledgments
The authors are grateful to Ms. LeNeva Spires for editing and publication assistance.
Notes
This article was externally peer reviewed.
Funding: Support for the project was provided by the Departments of Family Medicine and Public Health and Preventive Medicine, Oregon Health & Science University.
Prior presentation: These results were presented as part of the master's thesis for a Master of Public Health degree (TSM).
Conflict of interest: none declared.
- Received for publication November 15, 2012.
- Revision received March 12, 2013.
- Accepted for publication March 19, 2013.