Abstract
Background: Primary care medical practices are increasingly asked by payers, employers, and government agencies to report quality data, but the process of doing so is not well delineated.
Methods: Providers and office staff in a diverse sample of 8 primary care practices in North Carolina constituted the study population. We conducted interviews and disseminated self-administered questionnaires in practices that were successfully reporting data to one or more of 4 reporting programs. Our measures included responses to open-ended and Likert-scale questions about experiences and potential facilitators and barriers, as well as subscales of the Practice Assessment Tool and the Culture of Group Practices instrument.
Results: Study practices had stronger change histories, higher information and quality emphases, and lower business emphases than historical comparison practices. Motivation to participate, a leader who catalyzes the process, and establishment of new systems characterized successful practices. Staff time, information technology challenges, and resistance from some providers were common barriers. Practices achieve a sustainability state when numerous barriers have been successfully overcome and tangible results have been achieved from the process.
Conclusions: Implementing and sustaining quality reporting requires a complex set of motivators, facilitators, and strategies to overcome inherent barriers that can present themselves to practices that seek to implement changes in this direction.
Primary care medical practices are increasingly confronted with requests from payers, employers, and government monitoring agencies to report quality data.1 In 2007, Medicare began the Physician Quality Reporting Initiative (PQRI), which pays physicians a bonus for reporting a standard set of ambulatory care measures.2 Other data-reporting activities include initiatives by private insurance companies, government-sponsored collaboratives, and programs to support development of the “medical home.”1,3
In a consensus report from the US Agency for Healthcare Research and Quality (AHRQ), 6 barriers to quality data collection and reporting were identified: (1) data gathering inefficiency, (2) variation among performance measurement systems, (3) organizational and cultural barriers, (4) technological barriers, (5) economic pressures, and (6) competing priorities.4 Strong leadership, a culture that values quality, information technology (IT), and external incentives have been posited to help practices overcome these barriers.5–6
Much is unknown, however, about the process by which quality improvement and performance data reporting can be implemented and sustained, the impact of such activities on practice staff, or approaches through which practices can most successfully initiate and sustain a quality focus. Some information has been reported about the implementation of quality initiatives in large, high-performing health systems6; however, the majority of primary care practices are small, meet quality targets less regularly,5 and have been little studied. Furthermore, electronic health record (EHR) implementation can be especially challenging in smaller practices, resulting in difficulty collecting and reporting data.6
To better understand the process by which practices initiate, support, and maintain performance data reporting, we conducted an in-depth study of 8 diverse practices, each of which was successfully participating in one or more of 4 performance data reporting programs. Through a combination of quantitative and qualitative data collection and analysis, we hoped to identify characteristics, motivators, facilitators, and strategies that helped overcome barriers to successful reporting and to link those factors together into a theoretical model, outlining the steps necessary to initiate and sustain quality data reporting in primary care practices.
Methods
Programs Studied
The 8 practices in this project participated in one or more of 4 data reporting programs:
Community Care of North Carolina (CCNC).9 An integrated Medicaid program comprising 16 independently managed networks, CCNC collects quality data yearly via chart audit of a sample of patients from each practice. The major disease foci are diabetes and asthma. Training occurs during optional quarterly meetings. Quality achievement is recognized, but financial rewards are not provided.
Improving Performance in Practice (IPIP).10 A state-based, nationally led quality improvement initiative, IPIP employs quality improvement consultants to assist practices in redesign and quality improvement. Practice staff participate in quarterly network meetings and receive continuing education credit for participation; practices receive one-time incentive payments for initial participation and data report generation.
The PQRI.11 Started as a pilot on July 1, 2007, the PQRI represents Medicare's first step toward linking payment to quality. In 2007, participating practices had to report data on at least 80% of the relevant visits for a minimum of 3 (of 74) quality measures to receive up to 1.5% of the Medicare-allowable charges during the reporting period (a sketch of this incentive arithmetic follows the program descriptions). Quality information is entered as “G” codes or CPT II codes on visit billing claim forms.
Bridges To Excellence (BTE).12 BTE is a not-for-profit organization that designs and creates programs to encourage quality improvement; in North Carolina, BTE operates as a 3-year pilot program sponsored by Blue Cross/Blue Shield. Physicians can earn rewards in 3 distinct areas: diabetes, cardiac disease, and office systems that support chronic disease management. Reporting requirements and financial incentives vary by area.
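To make the PQRI incentive arithmetic concrete, the following is a minimal sketch of the 2007 rules as stated above; the function name, variable names, and dollar figures are hypothetical illustrations, not software used by Medicare or by the study practices.

```python
# Minimal sketch of the 2007 PQRI incentive rules described above.
# Eligibility: report on at least 80% of relevant visits for at least
# 3 quality measures; bonus: up to 1.5% of Medicare-allowable charges.
# All names and figures are illustrative, not an actual Medicare system.

def pqri_bonus(reporting_rates, allowable_charges, bonus_rate=0.015):
    """Return the bonus for a reporting period, or 0.0 if ineligible.

    reporting_rates: per-measure fraction of relevant visits reported
    allowable_charges: total Medicare-allowable charges for the period
    """
    qualifying = [rate for rate in reporting_rates if rate >= 0.80]
    if len(qualifying) < 3:
        return 0.0
    return bonus_rate * allowable_charges

# Example: 3 measures reported on 85%, 90%, and 82% of relevant visits
print(pqri_bonus([0.85, 0.90, 0.82], allowable_charges=250_000))  # 3750.0
```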
Practice Selection
To identify practices that were successfully participating in the above reporting systems, we conducted a telephone survey of more than 100 practices in the North Carolina Network Consortium, a statewide consortium of primary care practice-based research networks, and solicited recommendations from key informants in organizations involved in quality data work. Based on these recommendations, we purposively selected 8 practices that represented a range of practice size, specialty, ownership, and performance program participation. Seven agreed to participate; the eighth declined and was replaced. Our final sample included 4 for-profit practices, 3 nonprofit practices, and one teaching practice.
Data Collection
This project used a multimethod design (interviews and self-administered questionnaires). Because the project was inductive in nature, qualitative and quantitative methods were used simultaneously.13
Interviews
Interviews were conducted by an interdisciplinary team consisting of health services researchers (PDS, SZ), a quality improvement specialist (TR), a qualitative researcher (SZ), and economists (LS, SS, SH). In each practice, individual interviews were conducted with the practice manager, the lead physician (practice champion), and (if different from these individuals) the person in the practice (often a nurse) who was responsible for quality improvement. If those interviews identified one or more other persons who were also involved in the process of data gathering and reporting (eg, someone working in information management or in medical records), they were also interviewed. In addition, we conducted a group interview over lunch, to which other providers in the practice were invited; this interview typically included between 4 and 8 persons. For each interview, one member of the research team led the interview while others took notes, and all interviews were audiotaped for later review.
The interview questions addressed the following areas: history of the practice's involvement in quality improvement and data reporting; quality measures reported; logistic issues related to gathering, extracting (from records), and reporting data; reviewing and acting on the data; engaging physicians and other office staff; barriers encountered and problems overcome; impact of participation on billing; outside recognition; and perceived quality of care. Questions about physician attitudes toward participation in quality improvement efforts and data reporting programs were drawn from the work of Young, Meterko, and colleagues14,15; potential barriers and challenges to data collection and reporting were derived from the report of the 2006 AHRQ National Conference on Health Care Data Collection and Reporting15; and questions about factors leading to adoption and sustainability of quality improvement activities by practices were derived from the work of Bray et al.16 In addition to eliciting open-ended responses to these items, we measured the intensity of facilitators, barriers, and attitudes on a 4-point Likert scale (1 = very easy/not a problem; 2 = somewhat easy/a little problematic; 3 = somewhat difficult/fairly problematic; 4 = very difficult/very problematic).
Questionnaires
In addition to interviews, providers and office staff completed self-administered questionnaires. In practices with 5 or fewer personnel in each category (physician, nurse practitioner, physician assistant, nurse, administrator), all providers and staff were asked to complete the questionnaire; in larger practices, up to 10 respondents were randomly selected from a stratified sample that equally included individuals with varying lengths of employment. Completed forms were placed in sealed envelopes and returned to the practice administrator, who sent them by prepaid overnight mail to the investigative team.
Selected subscales from 2 instruments assessed organizational attributes and styles hypothesized to relate to quality monitoring and reporting. Four subscales from the Practice Assessment Tool developed by Ohman-Strickland et al17 were administered: quality of communication (4 items), participatory decision making (8 items), perception of stress/chaos in the practice (6 items), and history of recent change in the practice (3 items). All subscales are item means ranging from 1 to 5.17 Five subscales from the Culture of Group Practices instrument of Kaissi et al18 were administered to providers only (ie, physicians, nurse practitioners, and physician assistants): information emphasis (4 items), quality emphasis (6 items), business emphasis (4 items), innovativeness (3 items), and provider autonomy (3 items). Each is reported as an item mean ranging from 1 = not at all to 4 = to a great extent.18 To provide a comparison with our study results, we used published data from 51 family practices in New Jersey and eastern Pennsylvania reported by Ohman-Strickland et al17 and from 88 primary care group practices in North Dakota, South Dakota, and Wisconsin reported by Kaissi et al.18
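As an illustration of how these subscale scores are derived, the brief sketch below averages Likert item responses into subscale means, as the instruments specify; the item values and dictionary layout are hypothetical stand-ins for actual questionnaire data.

```python
# Minimal sketch of subscale scoring: each subscale score is the mean of
# its Likert items (1-5 on the Practice Assessment Tool; 1-4 on the
# Culture of Group Practices instrument). Item values are hypothetical.
from statistics import mean

item_responses = {
    "change_history": [4, 3, 4],           # 3 items, 1-5 scale
    "information_emphasis": [3, 3, 2, 4],  # 4 items, 1-4 scale
}

subscale_scores = {name: mean(items) for name, items in item_responses.items()}
for name, score in subscale_scores.items():
    print(f"{name}: {score:.2f}")  # change_history: 3.67, information_emphasis: 3.00
```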
Data Analysis
Quantitative data were analyzed using SAS software, version 9.1 (SAS Institute, Cary, NC). Means and standard deviations were calculated and, when appropriate, differences in means between study practices and more representative samples from the literature were evaluated using two-sample t tests.
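Because only summary statistics (means, standard deviations, and sample sizes) are available for published comparison samples, such a t test is computed from summary data rather than raw responses. A minimal sketch follows, using the change history means and sample sizes reported in Table 2 with placeholder standard deviations; our analysis used SAS, so the scipy call below is an equivalent illustration, not our actual code.

```python
# Illustration of a two-sample t test computed from summary statistics,
# as needed when only published means are available for the comparison
# group. Means and sample sizes follow the change history comparison in
# Table 2 (3.47 vs 3.13; n = 8 vs n = 51); the standard deviations are
# placeholders, since the originals are not reproduced here.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=3.47, std1=0.45, nobs1=8,    # study practices
    mean2=3.13, std2=0.50, nobs2=51,   # historical comparison practices
)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```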
Interview data were transcribed and compiled into narrative case reports for each practice. Using standard qualitative methods,19 data were initially read by 4 members of the research team (JH, TR, PS, SZ), who identified codes based on the data (eg, “champion,” “engaging leadership,” “teamwork,” “staff time/effort,” “IT challenges,” “difficulties changing physician behavior”). The research team achieved consensus on the codes, coded all interviews, and resolved discrepancies in coding. The codes and related text were then visually examined and discussed by the team to identify larger themes (eg, desire to improve quality of care), which are reported as results.
Results
Description and Organizational Characteristics of the Practices
Table 1 provides descriptive data on the practices studied. They included one large primary care group; 3 small practices (one internal medicine, one pediatrics, one family medicine); 2 federally qualified community health centers; one hospital-affiliated rural nonprofit practice; and one academic/teaching practice. Four of the practices were participating in the Medicare PQRI pilot, 7 in CCNC, 3 in IPIP, and 2 in BTE.
Table 1. Characteristics of the Primary Care Practices Studied
When practices were surveyed about organizational attributes, their mean scores were similar to those of published comparison practices in communication, decision making, and stress/chaos. There was a trend of borderline statistical significance for study practices to score higher on the change history scale than comparison practices (3.47 vs 3.13; P = .063). Table 2 displays these findings.
Table 2. Self-Assessment of Organizational Attributes and Culture among the Primary Care Practices Studied (n = 8)
When the study practices were evaluated using 5 subscales from the Culture of Group Practice instrument, they scored higher than historical comparison practices in information emphasis (3.00 vs 2.28; P < .001) and quality emphasis (3.03 vs 2.61; P = .005), and they scored lower in business emphasis (2.42 vs 2.80; P = .022). Scores on the innovativeness and autonomy subscales did not differ between study practices and historical comparison practices.
Motivation to Participate in Data Reporting Programs
During qualitative interviews, all practices reported that desire to improve quality of care was a motivation for participating in quality data reporting programs. In one practice, the physician champion joined IPIP to improve his own asthma care and “contribute to the community effort.” In another practice, a lead nurse was motivated by county-level data demonstrating a high prevalence of diabetes.
Another common theme was that quality improvement and data reporting would lead to future financial rewards. As one lead physician stated, “Pay for performance seems inevitable, and we wanted to prepare our practice for it.” Though current reimbursement levels were less of a motivator, 2 practices did note that the BTE program provided sufficient financial reimbursement to motivate them to participate in other programs.
Desire to gain a competitive edge vis-à-vis other practices was another motivator. One practice put it this way: “If we are providing quality of care, we want to separate ourselves out and be recognized.” Staff in another practice spoke with pride about developing new models of care and expressed eagerness to demonstrate their results to others.
Getting Started
Previous participation in a chronic disease collaborative or the CCNC program was often cited as having raised awareness and helped prepare practices for other initiatives. Two practices noted that receiving results from CCNC audits motivated them to develop systems to improve and track diabetes care. Another practice, which was part of a multisite organization, was motivated by a system-wide quality effort that compared it with other sites.
Most practices described either an individual or a program that got them started. In several practices, the catalyst was a lead nurse or practice manager with previous quality improvement experience, often in hospital settings. In 3 of the practices, the catalyst was a physician “champion” who became motivated because of involvement at the statewide level or an interest in IT.
Provider and staff buy-in was viewed as essential to program success. As one staff member put it, “The providers set the tone and empower the staff, and the staff members carry out the work.”
Overcoming Logistic Challenges
Figure 1 graphically displays the Likert-scale responses to questions about logistic issues related to gathering data, the quality measures, reviewing and acting on the data, engaging others, and other concerns. Gathering data from records was somewhat or very difficult for the majority of study practices; problematic areas included variability in coding and recording, inconsistency of reporting requirements, staff time conflicts, engaging physicians and nurses, office flow disruption, conflict with other competing priorities, concerns about provider productivity, and insufficient financial incentives. Engaging administrative leadership and staff was less problematic.
Figure 1. Degree of difficulty or problematic nature of collecting and reporting quality indicators in specific areas, as reported by our 8 study practices.
In qualitative interviews, 3 prominent logistic problem areas were identified: (1) staff time and effort, (2) IT challenges, and (3) difficulties changing physician behavior. All practices felt that the time and effort required for data retrieval was a significant barrier, whether from paper charts or electronic records. For example, when one practice wanted to report the percentage of their diabetics who were given advice to quit smoking, they first had to run a query of all their diabetics and then manually go through the clinic notes in each electronic record to look for evidence of smoking cessation advice. Another practice developed an electronic system to report PQRI quality codes; however, the physicians had to extract the data manually, which required opening the laboratory information system and toggling back to the data entry form. “You can spend 20 minutes at the end of your day doing this coding,” one physician reported. Practices participating in more than one initiative reported added burden: “It's almost like a mania. The clinicians and staff are being driven to a frazzle.”
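The two-step extraction just described, querying a registry for all diabetics and then searching each chart note for evidence of smoking cessation advice, can be sketched as follows. The record layout and keyword list are hypothetical, and in the practices studied the note review was performed manually rather than by keyword matching.

```python
# Hypothetical sketch of the two-step extraction described above: query
# a registry for all patients with diabetes, then scan free-text notes
# for evidence of smoking cessation advice. The record layout and
# keywords are illustrative; in the practices studied this was manual.
KEYWORDS = ("quit smoking", "smoking cessation", "tobacco cessation")

def advised_to_quit(notes):
    """Crude keyword match standing in for a manual chart review."""
    return any(kw in note.lower() for note in notes for kw in KEYWORDS)

def cessation_advice_rate(registry):
    """Fraction of diabetic patients with documented cessation advice."""
    diabetics = [p for p in registry if "diabetes" in p["problems"]]
    if not diabetics:
        return 0.0
    advised = sum(advised_to_quit(p["notes"]) for p in diabetics)
    return advised / len(diabetics)

registry = [
    {"problems": {"diabetes"}, "notes": ["Advised patient to quit smoking."]},
    {"problems": {"hypertension"}, "notes": ["BP stable; continue meds."]},
    {"problems": {"diabetes"}, "notes": ["A1c 7.2; diet plan reviewed."]},
]
print(f"{cessation_advice_rate(registry):.0%}")  # prints 50%
```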
The majority of practices (6 of 8) reported problems with the IT systems themselves. Many measures could not be queried as discrete fields from an EHR, registry, or flow sheet. One practice reported a 12-month period during which their stand-alone disease registry would frequently “crash”; another lost audit data representing 80 hours of work by a licensed practical nurse when a laptop that had not been backed up crashed. In practices using paper charts, registry encounter forms or flow sheets were often underutilized by physicians, so nursing staff had to extract the needed data manually. Although 5 of 8 practices used an EHR, 2 had to contract with an outside consultant to develop templates for reporting quality data, whereas others used manual chart audits. A quality improvement nurse commented, “I'm sure that the EHR vendor could develop a query to do this, if we paid them enough.”
Getting different computer systems to talk to each other was often a challenge. One practice reported paying an IT consultant for 40 hours of labor to program the entry of laboratory system data into the EHR. In another practice, some diabetes measures were captured electronically (glycosylated hemoglobin and blood pressure) while others were the responsibility of the medical assistant (foot examination), the physician (smoking cessation), and the medical records staff (eye examination).
Engaging physicians and nurses was not always easy—especially physicians who were “set in their ways.” Data feedback helped obtain this buy-in. As one practice manager reported, “Once you start to measure quality, the first thing the providers do is question the measures. Once you get over that hurdle, providers are a competitive group…and you can move quality.” The process of engagement seemed to reach maturity when quality reporting became “the thing you do.”
Data inconsistency caused problems as well. For example, one practice had to train its physicians to consistently record under “feet” instead of “extremity” so that the data would be captured by the quality reporting program. Others noted discrepancies between their own interpretation of a program's guidelines and the program's interpretation. One practice, for example, had to create a report on smoking cessation counseling 3 times before it was in a format accepted by the quality organization.
Establishing a System for Reviewing and Acting on the Data
Most of the practices had a meeting structure in place through which quality issues were discussed. However, practices reported difficulty finding enough time to review and act on quality data reports: 4 of 7 reporting practices rated data review, and 3 of 7 rated taking action, as “somewhat difficult.” Three practices reported holding only one meeting on quality data during the previous 3 months. Of these, one practice had consciously reduced meetings to quarterly to minimize the costs associated with meeting participation.
When meetings did occur, it was difficult for clinicians to attend; commonly cited barriers included patient emergencies and hospital rounds. One practice solved this problem by assigning a physician assistant to cover for the other clinicians.
A common theme was teamwork, with increased responsibilities for nursing staff. “Initially,” one interviewee stated, “providers are burdened by a new reporting activity. But after a while it takes less effort because [they] figure out how to give it to nursing.” For example, to help one practice improve their rates of performing diabetic foot exams, the lead physician spent approximately 20 hours designing and implementing a new system whereby nurses performed the examination and recorded it on a flow sheet.
Consequently, study respondents noted a need to provide incentives not only to providers but also to nurses. When one practice received reimbursement from Blue Cross/Blue Shield for diabetes care, the extra funds were shared with the teams that had earned them.
Typically, the practice manager served a crucial facilitator role. One respondent described the practice manager this way: “She gets very excited about these kinds of things. She presents the data in a fun way—she puts time into preparing it for you, in charts, so that we have clarity. If it isn't something we can handle, she helps us work it out.”
Perceived Effects on Provider Productivity and Practice Finances
Scores on our quantitative questions (Figure 1) indicated that many practices felt that quality data reporting programs lacked sufficient financial incentives and that the time involved in these activities took staff away from other priorities. In qualitative interviews, several practices said that quality data collection initially slowed productivity but that overall productivity increased over time. Use of a disease registry and clinical practice guidelines enabled several practices to bring back diabetics more frequently, order more laboratory tests, and more confidently code higher-level visits. “Good income for good medicine” is how one provider put it. Others felt that the effect on practice income was minimal.
All 3 PQRI-participating practices expressed concerns that the program's financial incentive was inadequate. Reflecting on the time and energy spent relative to the anticipated reimbursement, one physician commented, “They are taking money out of my pocket.”
A Model of Development and Sustainability of a Culture of Quality in Primary Care
Based on these interviews and analyses, we propose a model to describe the process by which practices develop and sustain a culture of quality measurement and reporting (Figure 2). According to our model, conditions that help prepare a practice include a focus more on quality than on income and one or more providers or staff with prior experience in quality improvement. A catalyst is essential; usually this takes the form of a committed leader, though occasionally it is an institutional mandate. The practice then embarks on a process of infrastructure development, which, to be successful, requires leadership support, adequate data entry and reporting resources, and regular staff meetings to discuss and act on reports. The practice achieves a “culture of quality,” or sustainability state, when numerous barriers have been successfully overcome; staff acknowledge that tangible constructive change results from the process; favorable billing or other financial benefits are realized; the process enhances the practice's sense of self and/or its reputation in the community; and strategic partnerships that promote quality improvement are in place.
Figure 2. Factors involved in the development and maintenance of quality assessment, improvement, and reporting in a primary care practice.
Discussion
Primary care leaders and policy makers have recommended that providers gather and report data on quality indicators and use those data to improve service delivery.20–22 This study provides evidence from a small, diverse sample of North Carolina practices that practices seeking to develop successful data reporting systems consistently face a common set of issues: securing effective leadership, maintaining the motivation of practice staff, overcoming numerous IT hurdles, and developing systems that assure data quality and support program sustainability.
Leadership was identified as a key factor. Successful leaders often had prior experience with quality improvement, had links with external organizations, and were passionate about developing a culture of quality. Commitment on the part of other providers and support staff was also important. These findings are in line with other reports that strong leadership and a culture of quality were associated with overcoming barriers to implementing care management processes.4,5 Practices that plan to participate in pay-for-quality programs will need to identify such leaders, and quality programs may need to invest in leadership training and support.
The use of electronic health records seemed to be both a facilitator and a barrier. Practices that invested time and money in building queries and interfaces to facilitate reporting found EHRs to be facilitative. However, many practices, especially small ones, struggled to obtain quality data in an automated way, often without adequate IT support and resources. Thus, EHRs per se may not stimulate or facilitate quality improvement or reporting, a conclusion that is mirrored by a report that EHR use was not significantly associated with improved quality of care.7 Instead, electronic systems will only lead to practice change if they are implemented in systems that have motivated stakeholders, sufficient resources, external pressures to change, and opportunities for change.23 Furthermore, practices investing in EHRs should choose systems that can report data easily, and national and international eHealth initiatives should incorporate this functionality into common standards across EHR platforms.24
Each of the 4 programs we studied included unique quality measure specifications and reporting requirements. Because of this, practices that participated in more than one program simultaneously reported added burden. Thus, primary care practices may not have the capacity to participate in multiple programs unless efforts are made to increase consistency across programs. A useful model is the British National Health Service, which has successfully implemented a nationwide system, including 146 common measures and involving 10 chronic conditions, that is used both to increase provider salaries and improve quality of care.25
This work is preliminary and has several limitations. Though they were the best comparison data we could identify, the data used in Table 2 cannot be considered to come from practices equivalent to ours. The comparison practices were located in different states, and their data were collected several years before ours; we therefore cannot rule out the possibility of regional differences and/or secular trends. In addition, the comparison practices in the studies by Ohman-Strickland et al17 and Kaissi et al18 most likely were not entirely representative of primary care practices. However, any selection bias would likely have made the comparison practices more similar to the “early adopters” we studied, so the differences we observed likely underestimate the true differences between our practices and primary care practices overall.
Conclusion
Implementing and sustaining quality reporting requires a complex set of motivators, facilitators, and strategies to overcome inherent barriers that can present themselves to practices that seek to implement changes in this direction. Future research and program development should seek to better understand these issues and the policy, programmatic, and practice-based activities that can help all practices achieve a culture of quality.
Notes
This article was externally peer reviewed.
Funding: Work for this project was conducted under Task Order Contract #HHSA290200710014 from the US Agency for Healthcare Research and Quality.
Conflict of interest: none declared.
- Received for publication April 27, 2010.
- Revision received March 17, 2011.
- Accepted for publication March 23, 2011.