Abstract
Purpose: Near-miss events represent an opportunity to identify and correct errors that jeopardize patient safety. This study was undertaken to assess the feasibility of a near-miss reporting system in primary care practices and to describe initial reports and practice responses to them.
Methods: We implemented a web-based, anonymous near-miss reporting system in 7 diverse practices, collecting and categorizing all reports. At the end of the study period, we interviewed practice leaders to determine how the near-miss reports were used for quality improvement (QI) in each practice.
Results: All 7 practices successfully implemented the system, reporting 632 near-miss events in 9 months and initiating 32 QI projects based on the reports. The most frequent events reported were breakdowns in office processes (47.3%); of these, filing errors were most common, with 23.8% of these errors judged by external coders to be at high risk for an adverse event. Electronic medical records were the primary or secondary cause of the error in 7.8% and 14.4% of reported cases, respectively. The pattern of near-miss events across these diverse practices was similar.
Conclusions: Anonymous near-miss reporting can be successfully implemented in primary care practices. Near-miss events occur frequently in office practice, primarily involve administrative and communication problems, and can pose a serious threat to patient safety; they can, however, be used by practice leaders to implement QI changes.
Near-miss events, or errors that are corrected before a patient is harmed, represent an opportunity to identify and correct flaws that jeopardize patient safety. Because more than half of all ambulatory medical visits occur in primary care, improved attention to near-miss events could markedly improve overall patient safety.1 Others have demonstrated that error- and event-reporting systems can be implemented in primary care; however, these rarely focus on near misses or the coordination of near-miss reports with quality improvement (QI).2–9
Barriers to reporting events include the additional workload burden, concern over punitive action, lack of confidence that positive change will result, and psychological barriers to admitting an error.10–14 Anonymous reporting systems may increase the number of error reports and reduce concerns about punitive actions but might reduce the detail of the events.15,16 There is value in including all office staff in a reporting system, but this strategy may require frequent reminders to keep reporting volumes from dwindling.17
While errors occur frequently in primary care, few seem to result in significant harm to patients, consistent with the “near-miss” nature of many of these errors.18–22 Nevertheless, given the volume of ambulatory visits, even these relatively infrequent adverse events may be associated with a substantial portion of inpatient admissions and other patient harm.23 A systematic approach to identify and correct near-miss events in primary care could be an important strategy to improve patient safety.
To demonstrate that such a system can be successfully adopted by a broad range of primary care practices, we designed and implemented an anonymous, practice-wide near-miss reporting and improvement tracking system in 7 diverse primary care medical practices. Our goals were to assess the feasibility of regular reporting, better understand the types of near-miss events that occur in ambulatory practices, and observe how medical practices use near-miss reports to initiate QI changes.
Methods
Participants
We recruited 7 diverse practices in western North Carolina to participate in this 1-year study. Practices included 2 family medicine residency practices, a federally qualified health center, a county-owned health department, and 3 private practices (2 family medicine, 1 pediatrics). Together the study practices employed more than 70 medical providers and 200 clinical support staff, provided >2000 office visits per month, and represented the full scope of primary care services (pediatric, geriatric, adult, and obstetric care) in both rural and urban settings. All but 1 used electronic medical records. Table 1 summarizes descriptive data on these practices.
Near-miss Reporting System
Our operational definition of a near-miss event was “an event/situation in which a negative outcome could have occurred but did not, either by chance or because the problem was identified and corrected before a negative outcome occurred.”24 All staff members were invited to anonymously report near-miss events using an online form that had been adapted from previous studies and field tested, with an average completion time of 2 minutes per report (see Appendix).25 The online form did not include any patient identifiers, was available electronically from any Internet-enabled computer, and stored reports on a central computer in an encrypted format. Staff attended a standardized, 1-hour orientation and, during the study period, received an automated e-mail message every 2 weeks inviting them to report any near-miss event they could recall from the previous 2 weeks. Project participation was phased in over 2 months from September 15 to November 30, 2010, and data collection was terminated at the end of June 2011; thus, the project period lasted 7 to 9 months, depending on practice site.
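For readers interested in how identifier-free, encrypted report capture of this kind might be structured, the following is a minimal sketch under stated assumptions: the record fields, file layout, and use of the Python cryptography library are illustrative choices of ours and do not represent the software actually built for this study.

```python
# Hypothetical sketch of anonymous, encrypted near-miss report storage.
# Field names and the use of the `cryptography` library are illustrative
# assumptions, not the study's actual reporting system.
import json
from dataclasses import dataclass, asdict

from cryptography.fernet import Fernet  # third-party: pip install cryptography


@dataclass
class NearMissReport:
    practice_id: str    # practice-level identifier only; no patient or reporter identifiers
    event_month: str    # coarse date such as "2010-11"
    category_hint: str  # reporter's own label, e.g., "filing"
    narrative: str      # what happened and how it was caught


KEY = Fernet.generate_key()  # in a real deployment the key would be managed centrally


def store_report(report: NearMissReport, path: str) -> None:
    """Serialize the report and append it, encrypted, to the central store."""
    token = Fernet(KEY).encrypt(json.dumps(asdict(report)).encode("utf-8"))
    with open(path, "ab") as f:
        f.write(token + b"\n")


if __name__ == "__main__":
    store_report(
        NearMissReport(
            practice_id="practice_3",
            event_month="2010-11",
            category_hint="filing",
            narrative="Lab result filed to the wrong chart; caught at check-out.",
        ),
        "near_miss_reports.enc",
    )
```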
Near-miss Event Reports
Before being forwarded to the project's central computer, each near-miss report was reviewed by a designated individual in the practice (usually the medical director), who (1) excluded from the study any events that were adverse events causing patient harm; (2) ensured the absence of patient-identifying data in the responses forwarded for analysis; and (3) reviewed the incident for possible initiation of QI efforts in the practice. There was no attempt to standardize across practices which near-miss reports would be assigned for QI; during the structured interviews after the study, medical directors and practice administrators reported that they concentrated on events that seemed likely to recur, would have potentially serious consequences if harm reached the patient, and seemed to be within their control to change.
QI from Event Reports
Initiation of QI around near misses was encouraged as part of the project. At the time of enrollment in the study, the 7 practices differed significantly in how they approached performance improvement. Several had robust QI teams in place, whereas others reported no formal performance improvement processes; none had incorporated near-miss reporting into the QI process. As part of study orientation, practice leaders received a brief overview of how to initiate a Plan-Do-Study-Act (PDSA) cycle and how to use the PDSA tracking software included in the near-miss system. After 3 months of successful reporting, each practice was expected to initiate at least 1 improvement process based on the near-miss reports from the practice. All near-miss reports within each practice were reviewed every 2 months during a QI committee meeting. At the end of the project period, leaders from each practice participated in a structured group interview to gather additional information about how they actually responded to the information contained in the near-miss reports.
Each practice was reimbursed $5,000 for identifying a core implementation team, participating in planning meetings and the all-staff orientation, and completing the baseline survey. An additional $1,500 per month was given to each practice when it reported at least 10 near-miss events and identified at least 1 near-miss event to remediate and track. Staff themselves did not receive any direct monetary inducement to submit reports, but several practices introduced small team-based rewards if the practice overall met the monthly reporting target. During the structured group interviews after the reporting period, practice leaders did not indicate that staff had felt pressured to report.
Data Analysis
After standardized training, a team of 6 physician coders coded the narrative portions of the near-miss reports using a published taxonomy of ambulatory care errors.26 For each report, the primary error was defined as “the breakdown in process, or knowledge/skill deficit that led to the reported problem.” In addition, up to 4 associated or “cascade” errors and up to 4 contributing factors and possible preventive measures were coded using the same taxonomy. The coders also provided their own subjective ratings, on a scale of 0 to 100, of the potential seriousness of the near-miss event, where 0 indicated “not very serious” and 100 indicated “extremely serious.” They also rated the likelihood of harm and potential cost to the patient had the error actually occurred, as well as the estimated cost to the practice to remedy the system problem identified in the near-miss report, all on a 3-point scale, where 0 = “none/minimal,” 1 = “some,” and 2 = “a lot.” Before coding, study leadership and coders met to achieve a common understanding of what would qualify as “very serious” or “a lot” of harm or cost. To ensure reliability of the coding and rating, 10% of the reports were coded independently by a second coder without knowledge of the first coder's results. Coder agreement was 70% at the finest level of detail coded (3 levels of the 5-level taxonomy) and 87% at 2 levels of detail.
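The reliability check described above amounts to percent agreement between two coders on the double-coded 10% sample, computed at successively coarser levels of the hierarchical taxonomy. The sketch below illustrates that calculation; the dot-delimited code strings are an assumption for illustration only, not the published taxonomy's actual notation.

```python
# Hypothetical sketch of percent agreement at different taxonomy depths.
# The dot-delimited code format (e.g., "office_process.filing.chart") is an
# illustrative assumption, not the taxonomy's actual notation.

def agreement(pairs: list[tuple[str, str]], depth: int) -> float:
    """Share of double-coded reports on which both coders agree when each
    hierarchical code is truncated to its first `depth` levels."""
    matches = sum(
        a.split(".")[:depth] == b.split(".")[:depth] for a, b in pairs
    )
    return matches / len(pairs)


# Each tuple is (first coder's code, second coder's code) for one report.
double_coded = [
    ("office_process.filing.chart", "office_process.filing.lab"),
    ("investigations.reporting.lab", "investigations.reporting.lab"),
    ("office_process.flow.checkin", "office_process.flow.checkin"),
]

print(f"Agreement at 3 levels: {agreement(double_coded, 3):.0%}")
print(f"Agreement at 2 levels: {agreement(double_coded, 2):.0%}")
```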
Quantitative data were analyzed using SAS 9.1 software (SAS Institute, Inc., Cary, NC). Continuous data are reported as means and standard deviations (SDs), whereas categorical data are reported as frequencies and percentages. The study protocol was reviewed and approved by the institutional review board of Margaret R. Pardee Hospital.
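The descriptive summaries named above (means and SDs for the continuous coder ratings; frequencies and percentages for the categorical error codes) could be reproduced along the following lines. This is an illustrative Python/pandas equivalent with hypothetical column names, not the SAS code used in the study.

```python
# Hypothetical pandas equivalent of the descriptive analysis.
# Column names ("primary_error_category", "severity") are assumed for illustration.
import pandas as pd

reports = pd.DataFrame({
    "primary_error_category": ["filing", "filing", "reporting_results", "ordering_meds"],
    "severity": [52, 48, 72, 61],  # coder rating on the 0-100 scale
})

# Continuous variable: mean and standard deviation
print(reports["severity"].agg(["mean", "std"]))

# Categorical variable: frequencies and percentages
counts = reports["primary_error_category"].value_counts()
print(pd.DataFrame({"n": counts, "%": (100 * counts / counts.sum()).round(1)}))
```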
Results
A total of 632 near misses were reported by the 7 practices. The most common categories of reported near-miss events, overall and by practice, are summarized in Table 2. The most common types of errors were breakdowns in office processes (47.3%), such as filing (25.3%), chart data entry errors (15.0%), problems with patient flow (2.2%), and problems with appointments and referrals (4.8%). The second most common category of errors was in ordering (6.2%), implementing (7.1%), or reporting the results of (12.2%) investigations, representing 25.5% of all near-miss reports. The pattern of near-miss events was similar across practices. Errors involving clinical knowledge or performance represented a very small percentage of errors (1.9%).
Table 3 reports coder ratings of near-miss severity, likelihood of an adverse event (AE) if the near miss had not been identified, the potential financial cost if the near-miss event had resulted in an AE, and the estimated cost to the practice to remedy the problem. Filing errors, the most common single near miss reported, had a mean severity rating of 51.8 (SD, 30.7), with 23.8% (n = 38) of these errors judged to be at high risk for leading to an AE had the error not been identified. Among all error types, those related to reporting the results of investigations were rated as potentially most serious, with a mean severity score of 72.0 (SD, 28.3). Errors involving medications and treatments (ordering, dispensing, or implementing) represented only 14% of near-miss reports but were rated as the second most severe (mean range, 59.1–63.0; SD range, 29.3–31.0).
Practices reported that the electronic medical record (EMR) was a secondary cause of 14.4% of the errors, including 21.9% of the filing errors; one example was an EMR interface that failed to deliver the results of an important test to the ordering provider, resulting in a delay in addressing those results. The EMR also was implicated in 40% of errors involving ordering medications or treatments and 4.3% of errors involving communication with other health care providers sharing in patients' care. These computer-related errors had the third highest mean severity rating (mean, 59.2; SD, 25.2). The EMR was the perceived primary cause of 49 errors (7.8% of the total sample).
By the end of the study period, each of the practices had initiated at least 1 practice improvement process directly tied to the near-miss reports. Table 4 summarizes these practice improvement efforts.
Discussion
This study reports the results of the successful introduction of near-miss reporting in 7 primary care practices, each of which generated a substantial number and broad array of events and initiated performance improvement activities as a result.
The most frequent near-miss events recorded involved relatively mundane office processes such as charting data, filing, and computer operation, which is consistent with previous reports.26 Somewhat surprisingly, however, our data showed that administrative errors were frequently judged to have the potential to lead to significant patient AEs, which supports our approach of encouraging all office staff to participate in near-miss reporting. The events judged to be associated with the highest potential cost were those involving dispensing medication or implementing treatment (30% judged to involve “a lot” of potential cost) and handling test results (22% judged as “a lot”).
EMRs were directly linked to 14% of near-miss events, including 40% of the errors related to prescribing. Among the filing, data retrieval, and prescribing errors, 21.9%, 28.4%, and 40% of near-miss events, respectively, were attributed to EMR use. This finding reflects what others have found: while EMRs can reduce errors, they can also cause them.27–29 Additional study of this important finding is needed to redesign EMRs to reduce error rates.
Participating practices appear to have used the data generated from these near-miss reports to implement meaningful practice changes and improvements. Each initiated at least 1 continuous quality improvement (CQI) project as a result of the study, and each identified at least 1 important safety improvement made as a result of a near-miss report. Our interview data suggest that practice leaders used immediate action or rapid PDSA cycles to avert potentially dangerous situations identified by near-miss reports; many expressed surprise at the type and frequency of near-miss errors occurring in their practices. Indeed, the relatively large volume of near-miss reports generated by each practice suggests the importance of developing a systematic approach to process improvement driven not only by potential for harm but also by frequency of occurrence.
Although our project included a cash bonus to practices for their participation, this did not seem to be what sustained near-miss reporting: the per capita reporting rate did not appear to vary according to whether a practice offered an internal reporting incentive. In fact, study practices have continued to log near-miss reports even after the project officially ended and the cash bonuses stopped. Practice leader buy-in and encouragement seems to be a key element of a successful reporting system, as has been demonstrated in hospital settings.30
This study has several important limitations. Although we purposively chose practices to represent a diversity of size, ownership, specialty, and range of clinical services, our sample was small; therefore, the results cannot be generalized to all US primary care practices. Similarly, the frequency and types of near-miss reports in this sample cannot be used to estimate the frequency of actual near-miss events. Furthermore, even under the conditions of the study, some underreporting likely occurred. In addition, because event reporting was anonymous, we could not be certain that some events were not reported more than once (ie, by different individuals).
Our project involved only near-miss reports. We took great care to exclude AEs (in which harm came to the patient) because of concerns about legal liability associated with data sharing. The reported likelihood of harm resulting from a near-miss event is, therefore, an estimate. According to leaders in our participating practices, near-miss events that affect patient outcomes are rare; it is therefore possible either that the subjective estimates were exaggerated or that patients suffer low-level AEs more often than they report. The reporting system did not invite patients to report errors, as some have suggested.31
The reporting phase of our project lasted only 7 to 9 months. This short time frame is insufficient to support broad conclusions about how practice change may result from near-miss reporting or how enduring those changes will prove to be. This important question requires further study over a longer period.
Conclusions
We demonstrated that an anonymous near-miss reporting system can be successfully implemented in a diverse group of primary care practices in a region. The reports generated indicate that near-miss events occur frequently in office practice, primarily involve administrative and communication problems, and occasionally pose a significant risk of patient harm. Practice leaders in our project found these reports helpful and used this information to implement meaningful practice improvement. Further study is needed to determine whether these improvements can be sustained.
Acknowledgments
The near-miss reporting and tracking system was developed by Scott Pierson of WindSwept Solutions, Austin, Texas.
Appendix
Notes
This article was externally peer reviewed.
Funding: This study was funded by grant no. PS/R21 HS19558-01 from the US Agency for Healthcare Research and Quality (“Ambulatory Near Miss Reporting and Tracking to Improve Patient Safety,” Steven Crane, MD, Principal Investigator, September 2010–2011).
Conflict of interest: none declared.
- Received for publication February 2, 2014.
- Revision received March 30, 2015.
- Accepted for publication April 14, 2015.