“One option we have considered is aggregating measures in a graphical display, such as star ratings,” states Medicare’s proposed rule in the Federal Register. Set to go into effect on Oct. 1, this simple statement buried in the fine print of a 339-page release has already set off a firestorm of speculation and a strongly worded response from the Association of American Medical Colleges:
“The AAMC strongly opposes the use of a star rating system, which may make inappropriate distinctions for hospitals whose performance is not statistically different.”
There are already dozens of quality measures listed on Medicare’s Hospital Compare website, but they are difficult to sort through and not clearly useful. For example, if your grandmother were having chest pain and you had to take her to a nearby hospital, would you choose one based on the frequency of readmissions, timeliness of care, cleanliness of the facility, frequency of accidentally punctured lungs or some other metric?
When I enter my ZIP code, the two closest hospitals appear: the recently acquired and renamed Brigham & Women’s Faulkner Hospital and Brigham & Women’s Hospital. Both are fine institutions, and both are likely to receive the highest overall star rating. While the Faulkner is significantly closer and would seem like the obvious choice, 81 percent of patients gave BWH a 9 or 10 rating out of 10, compared with only 74 percent of Faulkner patients. However, if surgical treatment were needed, the Faulkner shows the same rate of “serious complications” as the national average, while the Brigham carries a “Worse than U.S. National Rate” listing.
The point is, when choosing a hospital, patients often don’t know what type of treatment they will need. They just need treatment, and they need it to be prompt, (cost-)effective and safe. While admittedly glossing over the fine and varied details of medical care, star ratings accomplish the goal of conveying the overall quality of an institution in an easy-to-understand, quickly digestible manner.
Rather than opposing a star rating system, I would encourage my colleagues at the Association of American Medical Colleges to embrace a patient-friendly system for hospital ratings. On our physician review system, which integrates with Google reviews, we have consistently found that patients are eager to choose primary care physicians, specialists, chiropractors and cosmetic treatments based on Google stars and reviews. In fact, with an ever-increasing share of searches coming from mobile devices, more and more patients are choosing their PCP or specialist from their smartphone. It only makes sense for the same system to be available for hospital comparison.
The initial Medicare star rating system will undoubtedly weight certain types of quality care more heavily than others, which will cause an uproar and a temporary, shortsighted drive by hospitals to improve those particular metrics. Over time, however, the system will be revised and improved to more accurately reflect the type and quality of care patients receive.
There also doesn’t need to be a single star rating; instead, there could be star ratings in various categories such as safety, timeliness, cost and even national or physician reputation (on which the U.S. News rankings are largely based). Physicians and hospital groups should embrace the coming age of hospital shopping and comparison and seek to guide its best practices. Rather than focusing on delaying implementation, we should focus on how best to mold the system to reflect actual care because, soon enough, there will be an app for it.
Zachary Landman, MD, is the chief medical officer for Doctorbase, a developer of scalable mobile health solutions, patient portals and patient engagement software. He earned his medical degree from UCSF School of Medicine. As a resident surgeon at Harvard Orthopaedics, he covered Massachusetts General Hospital, Brigham and Women’s Hospital and Beth Israel Deaconess Medical Center.