TABLE 3
EFFECT OF REPEATED CHART PROMPTS ON PRESCRIBING RATES, ADJUSTING FOR PHYSICIAN FACTORS AND CLUSTERING* OF PATIENTS BY PHYSICIAN (N = 453†)
Variable | Total Antibiotic Prescriptions, OR (95% CI) | Unnecessary Antibiotic Prescriptions, OR (95% CI)
---|---|---
Intervention | 0.57 (0.27, 1.17) | 0.76 (0.42, 1.40)
Male | 1.33 (0.66, 2.68) | —
Practices in a city with a population of 25,000 or less | 1.58 (0.73, 3.44) | 1.13 (0.58, 2.22)
Sees >150 patients/week | 2.17 (0.87, 5.41) | 1.55 (0.78, 3.07)
Works in solo practice | 0.43 (0.18, 1.05) | —
In practice for 20 years or more | 1.68 (0.72, 3.92) | 2.20 (1.09, 4.43)
Diagnosis of strep throat, tonsillitis, or pharyngitis | 7.56 (3.89, 14.71) | 3.06 (1.66, 5.65)
* The average number of patients per physician cluster was 3 (range 1 to 8).
† The number of observations is less than 621 because not all physicians completed the practice survey and some who did reply left some questions unanswered.
Figure. FAMILY PHYSICIANS WHO WERE CONTACTED AND WHO COMPLETED THE STUDY
Discussion
Repeated chart prompts reminding family physicians to use a clinical scoring approach when managing children and adults presenting with URTI and a sore throat did not affect unnecessary antibiotic prescribing or overall antibiotic use. Problems encountered in conducting this community-based trial may have contributed to the negative result.
Sixty-seven physicians (41%) agreed to be randomized but failed to complete the study. These losses after randomization, together with the differing sizes of the patient clusters per physician, produced differences in the characteristics of the treating physicians between the 2 groups. Characteristics associated with higher antibiotic prescribing rates were more common in the intervention group. As a result, despite the randomized design, the 2 patient groups were not comparable at baseline in their likelihood of receiving an antibiotic prescription. We therefore controlled for these physician characteristics in the analysis. The large number of physician dropouts also meant that the planned sample size was not achieved, leaving the study with insufficient power to detect the hypothesized effect size.
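One standard way to quantify how this clustering erodes power, not reported in the analysis itself, is the design effect for clustered observations; the intracluster correlation used below is purely illustrative, since no ICC was reported:

$$\mathrm{DEFF} = 1 + (\bar{m} - 1)\,\rho,$$

where $\bar{m}$ is the average cluster size and $\rho$ is the intracluster correlation coefficient. With the observed average of 3 patients per physician and a hypothetical $\rho$ of 0.1, $\mathrm{DEFF} = 1 + 2(0.1) = 1.2$, so the 453 observations would carry the statistical information of only about $453/1.2 \approx 378$ independent patients.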
We had planned the sample size to detect a 30% decrease in unnecessary antibiotic use. The adjusted analysis produced point estimates of a 23% decrease in unnecessary antibiotic use and a 43% decrease in overall antibiotic use. These point estimates are the same whether or not the clustering is taken into account; however, the more appropriate clustered analysis increases the estimated sample variance, yielding wider confidence intervals. The lower bound of the 95% confidence interval shows that the study could not rule out a reduction in unnecessary antibiotic use as large as 58%. Therefore, although the study failed to find a statistically significant effect of the intervention, it also lacked the power to rule out a clinically important reduction in unnecessary antibiotic use.
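The percentage decreases quoted above follow directly from the adjusted odds ratios in Table 3, treating the odds ratio as an approximation of the relative rate (the interpretation the text adopts):

$$\text{decrease} \approx (1 - \mathrm{OR}) \times 100\%.$$

For overall antibiotic use, $1 - 0.57 = 0.43$, a 43% decrease; for unnecessary use, $1 - 0.76 = 0.24$, roughly the 23% quoted; and the lower confidence limit of 0.42 corresponds to $1 - 0.42 = 0.58$, the 58% reduction that could not be excluded.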
We gave information about the clinical scoring approach to physicians in the control group. Doing so may have reduced the study’s ability to detect an effect of the intervention. We did not include a group that had not been exposed to the information because we believed that mailed information was the equivalent of “standard” care in terms of changing physician behavior. Mailed information is a common method of informing physicians about new clinical evidence but has a limited ability to influence clinical behavior.34 Nevertheless, the rate of antibiotic prescribing in the control group was somewhat lower than is generally reported in the literature.9 This finding may reflect volunteer bias or the Hawthorne effect. Perhaps more likely, asking the control group to complete encounter forms for multiple patients may have inadvertently reminded them of the score, so that the control group was contaminated by repeated clinical prompts.
Some of the problems encountered in this study have been noted by other investigators conducting community-based research in primary care.37 The difficulty of retaining community-based physicians resulted in substantial losses after randomization. These losses occurred even though physicians qualified for randomization only by mailing back a reply card indicating that they wished to participate, which suggests at least some motivation,37 and they also received a modest cash honorarium. Some physicians returned the study package stating that their circumstances had changed and they would be unable to participate. Many who initially agreed to participate never replied despite 3 mailed reminders. The extent of the dropouts did not become apparent until late in the study. In retrospect, it might have been advisable to telephone physicians soon after randomization to detect problems early; physicians randomly selected from the general membership listing could then have replaced those who dropped out.
This study found that repeated reminders to physicians to use a clinical score in the management of patients with a sore throat did not reduce unnecessary antibiotic use. The problems encountered in this community-based intervention trial may have contributed to the negative result. Studies of prescribing behavior may need to stratify physicians before randomization on characteristics related to prescribing, such as patient volume and experience. Including a group that receives no information is probably necessary to give the greatest chance of detecting an effect. Finally, particular attention and resources are needed to ensure the retention, and if necessary the replacement, of community-based family physicians participating in research studies.