Abstract
Purpose: Practice facilitators (“facilitators”) can play an important role in supporting primary care practices in performing quality improvement (QI), but they need complete and accurate clinical performance data from practices' electronic health records (EHRs) to help them set improvement priorities, guide clinical change, and monitor progress. Here, we describe the strategies facilitators use to help practices perform QI when complete or accurate performance data are not available.
Methods: Seven regional cooperatives enrolled approximately 1500 small-to-medium-sized primary care practices and 136 facilitators in EvidenceNOW, the Agency for Healthcare Research and Quality's initiative to improve cardiovascular preventive services. The national evaluation team analyzed qualitative data from online diaries, site visit field notes, and interviews to discover how facilitators worked with practices on EHR data challenges to obtain and use data for QI.
Results: We found facilitators faced practice-level EHR data challenges, such as a lack of clinical performance data, partial or incomplete clinical performance data, and inaccurate clinical performance data. We found that facilitators responded to these challenges, respectively, by using other data sources or tools to fill in for missing data, approximating performance reports and generating patient lists, and teaching practices how to document care and confirm performance measures. In addition, facilitators helped practices communicate with EHR vendors or health systems in requesting data they needed. Overall, facilitators tailored strategies to fit the individual practice and helped build data skills and trust.
Conclusion: Facilitators can use a range of strategies to help practices perform data-driven QI when performance data are inaccurate, incomplete, or missing. Support is necessary to help practices, particularly those with EHR data challenges, build their capacity for conducting the data-driven QI required of them by practice transformation and performance-based payment programs. It is questionable how practices with data challenges will perform in such programs without this kind of support.
- Electronic Health Records
- EvidenceNOW
- Health Information Technology
- Practice Facilitation
- Primary Care Practice
- Quality Improvement
Practice facilitators (“facilitators”), also known as practice coaches or quality improvement (QI) consultants, provide support to primary care practices conducting quality improvement and practice transformation work.1–7 The goal of facilitation is to help practices transform into independent learning systems capable of continuous and self-directed improvement. To this end, facilitators work with practice staff to help them implement evidence-based guidelines and techniques to improve care delivery and disease management,8–13 build the motivation and skill sets needed to enact workflow and other practice changes,2,3,8,14–16 and, importantly, use clinical and other data to set improvement priorities and monitor progress over time.17–19
Facilitators may use multiple sources of data to guide practices in their change processes, but robust clinical quality improvement depends on having continual access to accurate patient-, clinician-, and practice-level electronic health record (EHR) data.19–21 Practice- and clinician-level clinical performance data are needed to monitor improvements in relation to the QI changes that practices implement, while patient-level data are needed to identify care gaps and perform patient outreach to aid in improving clinical performance.21–24 These data should be available through practices' EHRs or data systems. However, small independent practices, particularly those with limited resources, often do not have the data they need for QI and/or require support to access and use EHR data for QI.25–36 Little is known about how facilitators respond to practice-level EHR data challenges that create barriers to performing data-driven QI.18,35,37
This study addresses the following questions: what are the challenges facilitators face in using EHR performance data for QI, and how do they help practices with these challenges perform data-driven QI? We analyzed qualitative data across the 7 regional cooperatives participating in the national initiative EvidenceNOW. Cooperatives embedded approximately 136 facilitators in 1500 United States-based primary care practices with the goal of implementing sustainable improvements in practice capacity and cardiovascular preventive care delivery, as measured by 4 clinical quality measures (CQMs). Here, we describe the strategies facilitators employed across interventions to assist practices challenged in accessing full, accurate performance data from their EHRs to conduct data-driven QI.
Methods
Setting
In 2015, the Agency for Healthcare Research and Quality (AHRQ) launched the EvidenceNOW initiative. AHRQ funded 7 regional cooperatives across 12 states to help improve cardiovascular disease (CVD) preventive care in small-to-medium-sized primary care practices (those with 15 or fewer clinicians). Cooperatives, collaborations of public and private health care organizations, enrolled over 1500 practices (approximately 250 each), including approximately 5000 clinicians. All cooperatives used practice facilitation as their main intervention strategy to help practices improve delivery of the ABCS: aspirin use in high-risk individuals, blood pressure control, cholesterol management, and smoking cessation counseling. As part of EvidenceNOW, AHRQ also funded a national evaluation, Evaluating System Change to Advance Learning and Take Evidence to Scale (ESCALATES), to identify cross-cooperative lessons.
Study Sample
Cooperatives employed approximately 136 facilitators, the majority of whom worked with between 10 and 20 practices (see Table 1). EvidenceNOW practices were required to report quarterly CQMs; facilitators were expected to use these measures and other EHR data to provide performance feedback and QI support.
Data Sources
We used 3 sources of qualitative data for this study. First, ESCALATES created a private online diary38 for each cooperative, a web-based platform where ESCALATES team members interacted with cooperative staff. Each cooperative identified 14 to 41 people to participate on its diary: facilitators, data specialists, project managers, and project investigators. Diaries were initiated in July 2015 (Year 1 of the grant) and concluded in January 2018. Facilitators' efforts to help practices with data-driven QI emerged early on as a key topic. Second, ESCALATES conducted annual 2- to 4-day onsite visits with each cooperative. Year 1 visits focused on cooperative-level data collection. Year 2 visits, conducted from July 2016 to April 2017, focused on implementation of the intervention; we observed 44 facilitators working in 62 practices and wrote field notes. Third, we interviewed 33 facilitators individually and 3 groups of facilitators (16 total) during Year 2 using a semistructured interview guide (see Appendix 1) that explored the tools and strategies facilitators used to assist practices with QI. We identified data challenges in Year 1 and continued exploring themes through subsequent data collection in Year 2.
We also present select quantitative data results from our baseline Practice Survey to describe facilitator context. The survey assessed practice characteristics and health information technology (HIT), among other internal and external characteristics. Cooperatives administered surveys to practices before or at the intervention start date and collected surveys from September 2015 to April 2017. One practice leader per practice filled out the survey. We received complete data from 1182 practices (78.3%).
Data Management and Analysis
Starting in Year 1, the ESCALATES qualitative team reviewed all diary entries together on a weekly basis. Toward the end of Year 1, we created a small workgroup to analyze diary data related to facilitator challenges using EHR data for QI. One researcher (JRH) searched for quotations related to “data” and “EHRs” and filtered output by relevance to practice facilitation. Two researchers (JRH and BFC) met biweekly to read these data together, using an immersion-crystallization approach to identify themes within and across cooperatives.39–41 An additional researcher (JDH) independently re-sorted the raw data into these themes to validate the categorizations. This small group discussed interpretive differences until reaching consensus.
During Year 2, we conducted a rolling analysis of the facilitator observations and interviews conducted during site visits. Using a similar immersion-crystallization approach, the group refined and added to previously identified themes with each round of analysis. When themes reached saturation, we included them in our findings. On finalizing analysis, we created a manuscript workgroup that included interested ESCALATES and cooperative members. Three cooperatives compared our themes to their data and experiences and provided feedback.
Our qualitative team entered all qualitative data, including diaries, site visit field notes, and interview transcripts, into Atlas.ti software42 for data management and analysis. Our quantitative team created descriptive statistics for Practice Survey responses.
Results
Table 1, above, describes key EvidenceNOW facilitator and practice characteristics that shaped the context in which facilitators worked.
From our qualitative data, we found that facilitator workload varied by cooperative organization and intervention design: facilitators met with practices weekly, biweekly, or monthly during active intervention time frames, and visit duration was usually 1 hour, although meetings could vary from a short check-in to a several-hour observation or chart audit. Facilitators followed up in-person meetings with emails and phone calls.
Although facilitators worked across states and regions, our quantitative results from the Practice Survey showed that most practices had fewer than 10 clinicians and that approximately 40% were clinician owned. Although most practices across cooperatives participated in meaningful use (MU), most did not perform their own reporting; only about a quarter had an in-house clinician or staff member who created CQM reports. Furthermore, only about a third reported discussing their clinical quality data “often.”
These practice characteristics present a glimpse into the practice environments in which facilitators supported data-driven QI. Despite having MU-certified EHRs, practices were largely unaccustomed to producing reports from their EHRs themselves, and few had used these reports in QI. Building on these survey results, we found qualitatively that not only were practices unaccustomed to producing reports on their own, but, as facilitators reported, many practices could not access or produce full or accurate ABCS reports from their EHRs for QI. As shown in Figure 1 and described in detail below, facilitators employed different strategies for performing data-driven QI based on the timing of their work with the practice and the EHR data challenges the practice faced. In Figure 1, we present facilitator strategies and EHR data challenges as ideal types; in reality, EHR data challenges often overlapped, and facilitators tailored strategies to the individual practice.
Facilitators developed initial data-QI strategies, whereby they used available practice data; interim data-QI strategies, if needed, whereby facilitators created and used provisional data; and continuous data-QI strategies, performed concurrently with other strategies, whereby they performed activities to help practices think about and use data for QI rather than solely for reporting needs. Facilitators adapted these strategies to the EHR data practices had available and to the challenges they had in obtaining full, accurate ABCS measures. Specifically, practices faced 3 types of EHR data challenges: lack of ABCS data, whereby practices could not access ABCS measures or produce other ABCS-related reports; partial or incomplete ABCS data, whereby practices did not have full measures, but could produce limited reports; or inaccurate ABCS data, whereby practices received their measures, but found them inaccurate. These challenges unfolded temporally for some (eg, lack of ABCS data at first, then inaccurate ABCS data, as cooperatives connected practices to external data platforms), whereas they overlapped for others (lack of data on 1 or 2 measures due to measures not being programmed or updated in their EHRs and inaccurate data on other measures). Based on findings that emerged from the qualitative data, we describe facilitator strategies for working within these conditions in detail below.
Initial Data-QI Support
For Practices with Lack of ABCS Data
Facilitators reported in diaries and during site visits that practices lacked access to ABCS performance measures or related EHR data due to limitations in EHR capability or in their access to reporting functions. While waiting for data they assumed would soon be available from the practice's health system, EHR vendor, or external data platform, facilitators worked on techniques that could indirectly improve ABCS outcomes or practice capacity. For example, one facilitator said the following:
I was able to present to the team the option of looking at the clinical improvement side while we wait for the data IT issue to be resolved. This brought forth great brainstorming and excitement from the team. We discussed the Care Team, using Self-Management and Motivational Interviewing with hypertension…. (cooperative 1, facilitator, diaries, 3/16/17).
Without performance data, facilitators worked on workflows and “pain points” identified by practices. They found they could strengthen relationships with practices by working on practice needs, which they aligned with EvidenceNOW goals. Facilitators reported that they found this strategy particularly useful in cases where practices were reluctant to select a specific ABCS measure to work on without first seeing their performance data.
For Practices with Partial ABCS Data
A common refrain throughout EvidenceNOW was that “Some data are better than no data” (cooperative 5, diaries, 4/28/17). Facilitators reported they used whatever data they had available, such as “canned” reports or annual Physician Quality Reporting System reports, to approximate the denominators and numerators of ABCS measures. Some facilitators noted that they helped practices create patient lists to target at-risk patients and track improvement, although these lists did not meet ABCS measure specifications:
The site has limited access to internal reporting functionality and queries (user access to this functionality is locked down and determined by headquarters). The site does have access to running patient list of hypertensive patients with elevated BP during their last encounter. They will be using this query to target patients needing additional follow up to achieve adequate control. Improvement will be tracked by counting the number of patients on this list and aiming to reduce the number. (cooperative 4, facilitator, diaries, 5/18/2016)
Facilitators, even those working with practices that had no data challenges, noted in diary entries and interviews that they used these types of reports and patient lists to help clinic staff monitor patient status and plan upcoming patient visits. But for practices lacking performance measures, facilitators discussed using these data to manually track improvements and estimate performance measures as well.
For Practices with Inaccurate ABCS Data
Facilitators and cooperative members reported in the diaries and during site visits that practices' ABCS measures could be inaccurate for multiple reasons: flawed measure logic, issues in data extraction and mapping, or lack of clinical documentation. Identifying the source of error could immediately help practices improve a performance measure:
The lead clinician kept assuring us that she and her team were doing what they needed to do and asked us to look into [the measure] more…. After going through this process for over an hour, we found that only 1 patient from our list legitimately needed to be put on aspirin…. [T]he lead clinician was relieved because she said she knew that the numbers we presented her could not be accurate…. She wanted us to do that for all her other measures…. (cooperative 7, facilitator, diaries, 5/3/16)
Many facilitators reported performing chart audits to check patient data against measure numerators and denominators, although some facilitators shared that they were not comfortable doing this without additional information technology support. Facilitators across cooperatives realized that validating measures was a necessary step to ensure measure accuracy, build trust in the data, and increase clinician support and engagement. If errors were due to incomplete documentation of care, facilitators could extend teaching proper documentation into teaching other techniques for using EHR data for QI, as described below.
Interim Data-QI Support
For Practices with Lack of or Partial ABCS Data
Facilitators reported in the diaries and during site visits that they helped practices find alternate data sources or methods to approximate performance data while waiting for reports or dashboards to be delivered. For some practices, these “provisional” data were the best data they would obtain during the intervention period. Some facilitators reported using the Healthcare Effectiveness Data and Information Set (HEDIS), the Uniform Data System (UDS), American Heart Association's Million Hearts materials, older Physician Quality Reporting System data, or publicly available regional data to produce benchmarks:
Saw an article on [State] health rankings by county and thought it might be interesting to look at the health factors (particularly adult smoking) that were calculated…. [I]t would not hurt to show a practice where their county sits in comparison with others. Especially since many practices do not have their own baseline data reports yet. It could motivate them on working to improve those numbers over the next year. (cooperative 7, facilitator, diaries, 3/16/16)
Facilitators used these data to guide the design of improvement goals. Facilitators also discussed using other “stand ins” for missing measures. One cooperative's facilitators used mean scores from their partner health information exchange to estimate the cholesterol measure. Others helped practices estimate the percentage of their patient population at risk for CVD. This facilitator, for example, explained how her practices were using the Framingham Heart Chart to indirectly work on the cholesterol measure:
I would not say [practices are working on the] cholesterol [measure] specifically. The heart chart's probably the best asset we have in that sense…. Every single one of them has embraced that full on. [One practice] just asked me to order 400 of them for them…. We've worked on the workflow for that. In that sense [we're working on the cholesterol measure]…. (cooperative 6, facilitator, interview, 10/26/16)
By measuring patient risk scores and recording them in tables or patient charts, some facilitators helped practices track at-risk patients and create proxy performance data. Other facilitators said they used similar techniques, such as creating ABCS checklists, intended to help clinicians flag care needs but which could also be used to estimate clinical performance and track improvement. Using clinical tools in unintended ways helped practices with EHRs that lacked reporting functionality obtain data.
If facilitators had advanced HIT skills and/or worked with HIT specialists and had access to practices' back-end or server data, they could build custom measures from limited queries or create patient lists from server data.
In the absence of EHR data or functional reporting tools, some cooperatives' facilitators performed chart audits to help practices generate data for QI and evaluation.
Continuous Data-QI Support
For All Practices
While facilitators and practices performed the data-QI activities discussed above, facilitators also helped practices petition and negotiate with health systems, EHR vendors, or external data providers. Practices needed this help because they often did not know what their EHRs could do, where to find functions within the EHR, or what they might need for QI:
I have a 1-provider clinic that has an office manager who was tasked with reporting for [EvidenceNOW]. When we started this process, she sat down and said, “I have no idea what we are even looking for.” I walked her through the screens to the existing quality reports and we did not find what we needed. We decided to call technical support for her EHR. She said, “I do not even know what to ask for, can you please explain to them what we need?” So I explained it to them as we sat together. (cooperative 4, facilitator, diaries, 5/18/16)
Facilitators said practices were often initially uncomfortable talking about and using data. Facilitators helped expand practices' expectations of vendors and their health systems in terms of the data and support these entities should provide. In addition, some facilitators encouraged practices with extremely limited EHRs to upgrade EHRs or connect to a CQM-calculation registry, software platform, data warehouse, or health information exchange, and helped the practice connect to and use these resources.
For Practices with Inaccurate ABCS Data
In addition to the above, facilitators were able to use inaccuracies in performance measure reports as teachable moments, particularly when errors were due to documentation. Demystifying the EHR and teaching proper documentation helped practices catch the “spark” of using data for QI:
[O]ne office was surprised at how low their [smoking cessation] baseline was. Sitting around the table at the meeting, they discovered that they really do not know where to document that cessation counseling was given. They are going to reach out to their vendor for some white articles or guidance on how to document this. So, overall I am seeing some interest spark in some offices in looking deeper into where documentation goes and CDS [Clinical Decision Support] use and/or development of their own rules. (cooperative 2, facilitator, diaries, 2/18/16)
Facilitators reported that practices tended to become more engaged in data-driven QI as they learned how to embed EHR tools in daily workflows to help them improve patient care. In this way, practices began to appreciate data as part of quality improvement. Facilitators noted that practice staff mastery over EHRs and personalization of EHR use for improvement goals helped them become active data users rather than passive producers of annual reports.
Discussion
Facilitators working within the EvidenceNOW initiative were tasked with helping small-to-medium-sized primary care practices in different parts of the United States implement best practices to improve cardiovascular preventive care delivery, via the ABCS, and increase practice capacity for change. Although quarterly submission of ABCS data to cooperatives was required for EvidenceNOW, facilitators often faced challenges in accessing accurate ABCS data for QI purposes.36 Facilitators adjusted their strategies for working with practices on data-driven QI based on the type of data that practices had available and the time frame of the intervention.
Facilitators were resourceful and flexible when working without adequate data: they directed QI activities toward overall practice improvement or CVD-related workflows to indirectly affect the ABCS; they created data to use provisionally for assessment and measurement while helping practices communicate with vendors and other data platforms to procure the data they needed for QI; and they helped identify and address errors in CQMs. They helped practices redesign workflows for accurate and consistent care documentation. Facilitators blended strategies to provide initial- or interim-level support while also guiding practices to become active users of data for improving patient care to the best extent possible.
Practices in EvidenceNOW could face different challenges for different ABCS measures at the same time. Although facilitators were skillful and creative, data-driven QI and feedback became compromised43 without valid, continuously available data.21,22,24 In practices with multiple challenges or extremely limited EHRs, facilitators could spend large portions of the intervention working on obtaining or correcting EHR data and documentation workflows instead of being able to help practices manage data-driven QI. EvidenceNOW suggests that facilitators need not only a combination of management and communication skills to motivate practices and the QI skills to help advance the change process,44 which they are traditionally trained in,45,46 but they also need strong HIT skills to help practices access and use their data.47
Facilitators are part of the workforce supporting primary care practices in their efforts to implement QI, work on practice transformation, and prepare for performance payment programs. Practices in EvidenceNOW faced challenges that are likely to exist in greater degrees for practices that do not have QI experience or belong to support networks and health systems. As practices with data-QI challenges transition to mandatory programs such as the Quality Payment Program of the Medicare Access and CHIP Reauthorization Act of 2015,48,49 which requires practices to use CQMs and other EHR data for quality improvement, the demand for data-driven QI support may increase. For instance, although 5,000 clinicians participated in EvidenceNOW, 600,000 are expected to participate in the Quality Payment Program, which began January 1, 2017.50 It may be necessary for stakeholders to invest in facilitation as a cost of improving primary care. Already shown to produce sizable cost and health savings and returns,51,52 facilitators can further contribute to gains by helping practices access and use accurate EHR data for QI.
Limitations
Facilitator characteristics varied widely within and between regional cooperatives. Some facilitators were new, whereas some had had prior relationships with practices. Having an established, trusting relationship with a practice seasoned in QI may have enabled that facilitator to be more flexible and creative in designing data-QI strategies. Facilitators also had different institutional support and knowledge across cooperatives. For instance, some cooperatives hired HIT-skilled facilitators or paired facilitators with HIT specialists, whereas others hired facilitators trained in EHR use by Regional Extension Centers. Skill set and team configuration may have shaped the type of HIT support facilitators provided to practices.
Depth and availability of qualitative data also varied between cooperatives. Diary posts by facilitators or those representing facilitators varied by cooperatives, and we were not able to observe and individually interview all facilitators within each cooperative. We also do not know how generalizable our findings are to facilitators who work for and within health systems.
Lastly, the practices enrolled in EvidenceNOW may be subject to self-selection bias. Practices that volunteered to participate may have higher QI capacity and be earlier EHR adopters than practices that did not participate. The trends and challenges we identified are likely to be intensified outside the EvidenceNOW sample.
Conclusion
Facilitators can play an important role in supporting primary care practices as they enact quality improvement and transformation processes. Due to ongoing and rapid changes in care delivery expectations, practices across the United States will need to become active users of their EHR data for QI. With greater stakeholder investment, facilitators can extend their support to larger numbers of practices with EHR data challenges, helping them meet the demands of required performance programs and achieve their practice improvement goals. Without support, it is uncertain how practices, especially those experiencing challenges using their EHR data for QI, will perform in these programs.
Acknowledgments
Many talented people contributed to this manuscript workgroup. We are extremely grateful for their time, work, and feedback. Listed in alphabetical order: Bijal Balasubramanian, PhD, MBBS; Virginia Brooks, MHA, CPHQ; Melinda Davis, PhD, CCRP; Claire Diener, BA; Jennifer Hayes, BHA; Clarissa Hsu, PhD; Emily Hurd, BA; Kyle Knierim, MD; Cynthia Perry, PhD, FNP-BC; Kurt Stange, MD, PhD; Shannon Sweeney, PhD, MPH; Rikki Ward, MPH; Tanisha Tate Woodson, PhD; and Bernadette Zakher, MBBS.
Appendix: Sample Practice Facilitator Observation Guide and Interview
Note: While interview questions were largely consistent for facilitators across cooperatives, we tailored several questions to the specific cooperative based on our previous knowledge.
PRACTICE FACILITATOR OBSERVATION GUIDE/INTERVIEW EN ROUTE TO/FROM VISIT
[On the way to the practice] First, can you tell me a little about yourself?
Probes:
What is your background?
Where do you live? Not exactly, but in general.
How near or far is the clinic from your home base?
Do you work on multiple projects? How do you divide your time?
What training did you receive to prepare you for this project?
[On the way to the practice] Can you please tell me a little about the community we're going to?
Probes:
What type of people live there?
What type of work do people do in this area?
What kind of resources are available for people?
[On the way to the practice] Can you please tell me a little about the practice we're going to visit?
Probes:
What is your experience / history with this practice?
What have they been working on with you (in general), with EvidenceNOW?
What is the goal of this visit?
How does this visit fit into the extended plan for this practice?
What have you done to prepare for this visit?
Describe the drive to the practice.
Describe terrain and the communities through which you travel.
Note: see if audio-recording device can be turned on for car-ride back
[On the way back] Tell me your thoughts about that meeting.
Probes:
What are you thinking about for the next visit with the practice?
What resources do you think the practice needs?
What do you feel like is helping the practice change or keeping them from making changes?
What, if anything, strikes you as remarkable or unusual about the visit?
When you think about your work, what were the most important things that you did in the visit today?
What were the most important things the practice did?
[On the way back] Thinking about the practice(s) that we visited together, how does this practice (or these practices) compare with other EvidenceNOW practices you work with?
Probes:
Similarities? Differences?
How do you adapt your approach for different kinds of practices?
Can you provide some examples or a story to illustrate this?
PF OBSERVATION IN THE PRACTICE
Describe the visit from beginning to end, paying special attention to:
External support strategies: PF, performance feedback/benchmarking, expert consultants, learning collaboratives, web modules, etc
Practice implementation strategies and tools/resources used with the practice
Meeting organization, flow, flexibility, interaction and agenda setting
PF's relationship with the people in the practice
PF's relationship with other EvidenceNOW staff (AD, HIT, etc) involved in the clinic and their roles
PRACTICE FACILITATOR INTERVIEW
Introduction
Confirm audio-recording. Thank you for taking the time to talk with us today. We would like to hear about your experiences on the EvidenceNOW project. Observing the work you do in a clinic will give us a greater understanding of what happens in a particular clinic visit. We also want to understand, more broadly, the range of work you do and the relationships you are building. We are hoping you can help us better understand these areas over the course of the interview.
Could you walk me through a typical day or week working on [EvidenceNOW initiative]?
How do you prepare for a visit?
What are you doing when not in a practice?
How have your work and the intervention evolved over the waves?
Are you working on projects besides [EvidenceNOW initiative]? What is your role?
In the context of [EvidenceNOW initiative], could you tell me about the tools you use and the thought process you have when working with practices?
How much flexibility is there in the work?
What is the process by which improvisation is happening?
When you think about your work, what are the most important things that you do?
Can you tell me about any role for data (quality measures, patient lists, etc.) in the work you do? How do you get those data?
Can you tell me about the role of leadership and building a QI team in the work you do? Who do you meet with in the practice?
In the context of [EvidenceNOW initiative], could you tell me about the HIT tools you use and work on with the practice?
What has been your role in the connection of practices to the dashboards?
How have the dashboards been functioning?
How would you assess the skillset of your practices in terms of data and EHR/HIT?
Would you talk a little bit more about the specific practices you are working with on [EvidenceNOW initiative]?
How many practices do you work with? How many of these had you worked with before? To what extent?
Can you walk us through a couple of practices?
How do you and the practice decide what to work on?
How are you going about helping them with their goals?
What strategies have you used frequently?
Are practices more responsive to some than others?
How much do you see burnout in practices? What increases joy?
Tell me about how your role functions with other elements of the intervention, like clinical experts, trainings, and peer-to-peer learning.
How have you been communicating these resources to practices?
How are they using them, if at all?
What has been your greatest success in [EvidenceNOW initiative] so far?
Your greatest challenge?
What do you see as the lasting impact of [EvidenceNOW initiative]? For your organization?
What do you think practices have learned through working on [EvidenceNOW initiative]?
How do you think [EvidenceNOW initiative] will affect primary health care in the region?
Will you continue to work with [EvidenceNOW initiative] partners and organizations after [EvidenceNOW initiative]?
What advice would you give to other people in your position trying to support practices in similar ways?
What information would be helpful in thinking about implementing this work in other regions?
How do you envision this work happening in this region in the future?
Notes
This article was externally peer reviewed.
Funding: This research was supported by grant R01HS023940-01 from the Agency for Healthcare Research and Quality (AHRQ).
Conflict of interest: The authors declare that they have no competing interests.
To see this article online, please go to: http://jabfm.org/content/31/3/398.full.
- Received for publication July 5, 2017.
- Revision received October 31, 2017.
- Accepted for publication December 10, 2017.