Doing what we all agree we ought to do with every patient we see—and spreading this across all primary care practices—is a strategic challenge for our discipline and our profession. How to do this is a major focus of current research and practice. How does the study of Mold et al1 fit within this burgeoning field?
Focusing on prevention is important. A key feature of the new model of care in family medicine is prevention2; despite good intentions, however, delivery of preventive care falls below guidelines in almost all practices. Prevention is also tough: unlike efforts to improve care for chronic diseases such as diabetes, prevention requires attention to every patient who walks in the door, greatly increasing the scale of the office interventions needed to improve quality. Moreover, some physicians do not prioritize prevention as highly as others3; even among those who value it strongly, emphasis often differs. Physicians define prevention differently, use different guidelines for decisions, or stress one or another flavor of preventive counseling. Such differences can make it difficult to build the group practice consensus necessary for practice transformation, probably more difficult than in improving chronic disease care.
Mold et al1 anchor their work in a broader set of preventive services: immunizations, mammography, and colorectal cancer screening. Although not comprehensive, the set of measures reflects a nod toward prevention focused on the “whole person.” In contrast, many funding agencies and the researchers they support have focused on single categories of prevention, such as cancer. Indeed, with a narrower focus, it is easier to obtain improved rates of delivery of preventive services.4,5 Unfortunately, a focus on single conditions misses the potential benefit of other kinds of preventive care for patients, such as immunizations or screening for blood pressure. Despite the difficulty, we must build “whole person” prevention into studies of implementation of preventive services if we are to make the case for the value of a generalist function in our health care system.
Another strength of the Mold et al1 article is the use of the right laboratory. As has been recently underscored,6 practice-based research networks are ideal for studying the spread of quality improvement. A key terrain feature for studies of dissemination is the ongoing and substantial change in practices; compared with studies in traditional academic settings, practice networks capture that variability, so their findings are more generalizable. It is important to keep in mind, however, that some biases remain inherent in the use of practice-based research networks. First, practices typically volunteer for specific studies. Mold's sample of practices represents 24% of the total network. Thus, the participating practices represent early adopters, and what is true of dissemination for this group may not be true for the early majority or other groups. Public policy ultimately needs to address the initially unwilling. Moreover, there is large variation in the “tightness” of practice networks. Many practice-based networks have been organized for the purpose of research and do not include substantial organizational or financial linkages, whereas hospital-based clinics often have organizational, financial, and informatics links among practices, which can either slow or accelerate practice change.
Mold et al1 use a combination of well-established techniques to spread innovations, individualizing the intervention to the practices. Feedback with benchmarks, academic detailing, and standing orders are well established though of modest effectiveness; of particular interest in this study is the use of peer Oklahoma practices as the benchmark. Pragmatically, whatever will motivate practices is appropriate. Practice facilitators, called Practice Enhancement Assistants in this study, are another key strategy. Originally described in England7 and New England,8 and similar to agricultural extension agents, facilitators are increasingly recognized as playing a critical role in the dissemination of best practices9; their personal characteristics, continuity, and ability to be assertive about changing key office systems are critical to outcomes (Oldham J, personal communication).10 Mold et al1 describe the number of visits and the time spent with each practice but do not describe the specifics of what was done with the practices or other kinds of support. That 5 different facilitators were involved with 12 practices over the 6 months suggests the difficulty of this approach and may have limited the facilitators' impact.
It is also worth noting interventions that have facilitated spread in other studies but were not used here. Regular sharing of data across participants has been a foundation of successful quality improvement projects in many areas of medicine; the Mold et al1 study does this with benchmarking at the outset but not in an ongoing fashion. This study also seems to put less emphasis on developing teams or on teaching practices a “change model” and rapid small tests of change, approaches that have been successful in improving preventive care in pediatric practices11,12 and in Institute for Healthcare Improvement “breakthrough”-style collaboratives. More fundamentally, there is less emphasis on learning directly from other practices, which has been critical to the success of collaboratives in the UK National Health Service, although practice facilitators do some of this. Finally, the study included, de facto, no financial or other incentives, such as continuing medical education or credit toward Board certification, that might speed dissemination.
Some will ask why the primary outcome of this article is the adoption of key office systems rather than the actual rate of delivery of preventive services. Although it is ultimately critical to demonstrate improvement in the delivery of specific preventive care, the focus on key drivers of improvement in the quality of care is reasonable given the state of research in improving preventive care. We know from multi-method studies of the process of care in primary care offices that office systems are critical for improving the quality of care.13,14 This study includes reminders at the point of care and standing orders for nursing; there is excellent evidence of efficacy for both. A strategy for prevention (in this study, reflected by the focus on implementing wellness visits) is also important, although the US Preventive Services Task Force15 and others16 emphasize including prevention in every visit. Analogy to the improvement of quality for chronic care17 suggests several other potentially important drivers of improvement: patient reminders, communicating to patients that needed preventive care is due18; self-management, setting up office systems to help patients take responsibility for their own preventive care; community support, including media and community engagement along with systematized referral systems for needed preventive services such as colonoscopy; and systems to improve the fidelity of interventions, tracking to make sure that care processes occur consistently.
How should we assess whether interventions to improve preventive care are successful? A key confounding issue is the baseline organization of the practice with regard to systems of care. The basic design of this study, with its focus on a single doctor and nurse in each practice, muddies this issue, but the authors do use a modification of the Assessment of Chronic Illness Care19 to assess practice organization. Although the validity of extending the Assessment of Chronic Illness Care to preventive care is not well studied, the extension is plausible. Another key confounder is the use of electronic health records, which can act as a barrier to improving care because of the financial and organizational cost of implementation or a lack of functionality. In studies of care improvement, it is important that the current status of information systems be described in detail. A final issue is the duration of the trial. Changing practice systems is challenging, and most successful trials last at least 12 months, whereas this trial was limited to 6 months. Beyond initial spread, sustainability is critical, and several trials have noted that improvements in preventive care decline over time.20–22
This study raises another, broader issue: should we emphasize the randomized controlled trial as the key strategy for research on practice improvement? Of course, randomization forces prospective design and data collection and protects against unmeasured confounding influences. There are, however, significant theoretical and practical limitations to randomized designs. From a statistical perspective, it is important to account for the variance attributable to the practices, and doing so means substantially increasing the number of practices in trials. Also important is the challenge of standardizing the intervention. It seems clear that successful improvement strategies have scores of attributes that may influence outcomes; standardizing these attributes and assessing which are critical to outcomes is hard. Finally, the extra time necessary to design and fund a large randomized trial may slow progress over the long term.
What alternative is there? Prospective case studies can play an important role. Proposed methodological standards for such case studies have been published23; key features include describing the context in sufficient detail and making the intervention reproducible by others. Investigator and editorial willingness to publish failures is also critical. As a discipline and as a profession, we need to learn how to learn about practice improvement. A strategy of many small steps, and a willingness to learn from our failures, will go a long way.
Notes
Funding: none.
Conflict of interest: none declared.
See Related Article on Page 334.