Growing evidence that certain interventions can significantly lower morbidity and mortality (eg, mammography, immunizations, β-blockers for acute myocardial infarction) has focused attention on the challenges of implementation: translating this evidence into practice. Clinicians do not always offer recommended services to patients, and patients do not always readily accept them. Approximately 50% of smokers report that their physician has never advised them to quit.1 One out of 4 patients with an acute myocardial infarction is discharged without a prescription for a β-blocker.2 Only 40% of patients with atrial fibrillation receive warfarin.3 Conversely, tests that are known to be ineffective, such as routine chest radiographs, urinalyses, and preoperative blood work, are ordered routinely.4
Tools for behavior change
Various programs have been developed to bridge the gap between what should be practiced and what is actually done, but few have been uniformly successful. Passive education, such as conferences or the publication of clinical practice guidelines, has been shown consistently to be ineffective.5 More active strategies to implement guidelines, such as educational outreach, feedback, reminder systems, and continuous quality improvement, offer greater promise and have captured the interest of physicians, health systems, hospitals, managed care plans, and quality improvement organizations.6 To date, however, research on whether these methods produce meaningful change in practice patterns or patient outcomes has yielded mixed results.
One such study appears in this issue of the Journal. McBride and colleagues7 compared 4 strategies for improving preventive cardiology services at 45 Midwestern primary care practices. The control practices attended an educational conference and received a kit of materials. The other 3 groups attended a similar conference but also received a practice consultation, an on-site prevention coordinator, or both. The study used surrogate measures (provider behaviors rather than health outcomes such as lipid levels and blood pressure) to gauge effectiveness, and the results were positive. Patient history questionnaires, problem lists, and flow sheets were used more often by the combined intervention group than by the conference-only control group. Other behaviors, such as documentation of risk factor screening and management in the medical record, improved across all intervention groups. The authors apparently did not examine whether patients in the intervention groups had improved outcomes, such as better control of risk factors or a lower incidence of heart disease. With only 10 or 11 practices per group, the statistical power and duration of follow-up needed to make such comparisons were probably lacking.
The need for discretion in quality improvement
How should we apply these results? When there is evidence that a particular strategy improves outcomes (in this case, practice consultations and on-site coordinators), should physicians immediately adopt that approach in their own practices? Although McBride and colleagues achieved full participation in their project, it is doubtful that practices nationwide would have the necessary resources. The practice consultation included 3 meetings and 2 follow-up visits, and the on-site coordinator devoted 4.5 hours per week per physician. Moreover, this is only one of many studies reporting a promising quality improvement strategy. No clinician could adopt the full range of strategies that researchers and health systems have advocated. Even if that were possible for one disease, it would be impossible for all of the conditions encountered in primary care for which quality improvement is needed, such as preventive care, heart disease, diabetes, asthma, and depression.
Practices face trade-offs when considering quality improvement.8 Although there are exceptions (improved systems of care for one disease can have spillover benefits for other conditions), quality initiatives in one area tend to draw time, resources, and motivation away from others. Before reconfiguring practice operations, the astute clinician must judge not only whether available resources can support the effort but also whether the strategy offers the best use of those resources. In the case of the study by McBride and colleagues, physicians might ask whether the proven benefit, improved chart documentation, justifies a change when data on patient outcomes are lacking. Even if improved health outcomes are likely, they should judge whether applying the same effort to another aspect of care, perhaps for another disease, would help patients even more.
In weighing these choices, physicians should not rely on the results of a single study. It is best to step back and examine the evidence as a whole, reviewing the results of multiple studies of the same strategy. For example, a Cochrane group analyzed 18 systematic reviews of various methods for disseminating and implementing evidence in practice. Although some interventions were consistently effective (eg, educational outreach, reminders, multifaceted interventions, interactive education), others were rarely or never effective (eg, educational materials, didactic teaching) or inconsistently effective (eg, audit, feedback, local opinion leaders, local adaptation, patient-mediated interventions).9 Similarly, a recent review of 58 studies of strategies for improving preventive care found that most interventions were effective in some studies but not others.10