MIAMI BEACH — More information on risk-adjusted mortality, a greater focus on patient satisfaction, and a better user experience are coming to online hospital quality comparison sites, according to a physician specializing in quality improvement.
Current comparison systems are imperfect, Dr. Peter Lindenauer said. “Even relatively strong proponents of public reporting [think] that the information we have available today is insufficient to make the decisions, and many continue to rely on word-of-mouth recommendations from doctors who are familiar with the hospitals themselves.”
Enhancements are planned for Leapfroggroup.org, whynotthebest.org, and HospitalCompare.hhs.gov.
“Once we have risk-adjusted mortality, process-based measures, and patient satisfaction ratings, it will get harder and harder to ignore the ratings,” said Dr. Lindenauer, director of the Center for Quality of Care Research at Baystate Medical Center, Springfield, Mass.
Many quality comparisons are based on hospital mortality, but “there is limited power to discriminate good and bad hospitals on the basis of mortality,” Dr. Lindenauer said. One study, for example, found that hospital caseloads of most surgical procedures—with the exception of coronary artery bypass grafting—were not high enough to show a statistically significant difference in surgical mortality between institutions (JAMA 2004;292:847–51).
Risk adjustment of outcomes would provide more accurate comparisons of mortality and other outcomes, but “it's hard to do and it's expensive,” said Dr. Lindenauer, who is also on the medicine faculty at Tufts University, Boston.
Often, patients are not aware of the value of risk-adjusted outcomes data or make choices based on other factors. For example, when former President Bill Clinton had quadruple bypass surgery in September 2004, he chose New York-Presbyterian Hospital/Columbia University Medical Center, even though the institution's risk-adjusted CABG mortality was about two times the state average. “Like other patients, he did not choose his hospital based on publicly reported data. It's likely that his decision was influenced by the usual referral patterns from the local hospital at which he was first admitted,” Dr. Lindenauer said.
Debate continues over the relative importance for hospital performance reviews of outcome measures, structural measures (such as the availability of intensivists or computerized physician order entry), or processes (such as use of beta-blockers for acute MI), Dr. Lindenauer said. The Pennsylvania Health Care Cost Containment Council, for example, focuses on outcomes and reports all hospital-acquired infections in the state. In contrast, the Leapfrog hospital ratings are structure based and examine such factors as intensive care unit staffing, nursing staff, and use of electronic medical records. Whynotthebest.org, in turn, is built largely around process-of-care measures.
Dr. Amir Jaffer, chief of the division of hospital medicine at the University of Miami, asked Dr. Lindenauer whether he would recommend one Web site as best for consumers. “At this point in time, I don't think there is any one site,” Dr. Lindenauer replied. “The HospitalCompare site is a key one, but not the most user friendly.” He also pointed to the California Hospital Outcomes project (CalHospitalCompare.org) as a resource.
The effects of publicly reported quality ratings on a hospital's reputation can go both ways. “Hospitals want to avoid embarrassment, but it's becoming increasingly common for hospitals to promote their performance ratings on their Web sites,” Dr. Lindenauer said, citing Aventura (Fla.) Hospital and Medical Center, which features its ratings on its Web site (www.aventurahospital.com), as an example.