Will Medicare quality metrics hurt general radiologists?


Do Medicare quality metrics accurately capture the performance of radiologists? Perhaps not, and this could be a problem not only for general radiologists but also for patients, according to a study in the June issue of the Journal of the American College of Radiology.

A team led by Dr. Andrew Rosenkrantz of NYU Langone Medical Center found that practice size and subspecialization of radiologists were the strongest predictors of high performance scores. But these measures do not necessarily reflect the accomplishments of the majority of U.S. radiologists.

"[Generalists] seem to be inherently disadvantaged by such measures," the group wrote. "In light of the fact that generalists represent about half of radiologists nationwide and disproportionately serve patients in smaller and more rural communities, this performance disparity merits closer attention, because these well-intended ... scoring systems could inadvertently incentivize practice changes that compromise patient access."

Physicians who participate in the Merit-Based Incentive Payment System (MIPS) program receive positive or negative payment adjustments based on their total scores across a range of quality measures. The U.S. Centers for Medicare and Medicaid Services (CMS) has sought to give doctors flexibility in how they participate, allowing them to select quality measures to submit for scoring and to deliver the data in a variety of ways, such as claims-based reports, registry reports, or Qualified Clinical Data Registry (QCDR) reports.
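To make the mechanics concrete, below is a minimal, hypothetical sketch in Python of how a MIPS-style composite score could translate into a payment adjustment. The measure names, point values, performance threshold, and adjustment scale are illustrative assumptions, not the actual CMS methodology.

```python
# Minimal, hypothetical sketch of MIPS-style composite scoring.
# Measure names, point values, the performance threshold, and the
# adjustment scale are illustrative assumptions, not CMS's actual methodology.

from dataclasses import dataclass

@dataclass
class MeasureScore:
    name: str          # quality measure the physician chose to report
    points: float      # points earned on that measure
    max_points: float  # maximum points available for that measure

def composite_score(measures):
    """Roll per-measure points up into a 0-100 composite score."""
    earned = sum(m.points for m in measures)
    available = sum(m.max_points for m in measures)
    return 100.0 * earned / available if available else 0.0

def payment_adjustment(score, threshold=75.0, max_adjustment=0.04):
    """Map the composite score to a positive or negative payment adjustment.

    Scores above the threshold earn a bonus and scores below it a penalty,
    scaled linearly here for simplicity.
    """
    if score >= threshold:
        return max_adjustment * (score - threshold) / (100.0 - threshold)
    return -max_adjustment * (threshold - score) / threshold

# Example: a radiologist reporting two hypothetical measures.
reported = [
    MeasureScore("radiation dose documentation", 9.0, 10.0),
    MeasureScore("report turnaround time", 7.0, 10.0),
]
score = composite_score(reported)            # 80.0
print(f"{payment_adjustment(score):+.3%}")   # +0.800%, a small bonus
```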

To assess radiologists' performance on Medicare quality measures and explore physician characteristics that may influence their scores, Rosenkrantz's team used 2015 CMS data that included 15,045 radiologists reporting 40,427 quality scores; scores came from 25 claims measures, 18 registry measures, and two QCDR measures. The group adjusted for differences in reported measures with a method similar to what CMS uses for its value modifier program (JACR, June 2018, Vol. 15:6, pp. 842-849).

The researchers found that larger practice size and subspecialization were associated with higher performance scores.

"The strongest predictor was ... practice size, which may be related to larger practices having more robust infrastructures for measure identification and reporting," they wrote. "It may also be driven in part by the more subspecialized nature of larger practices."

Rosenkrantz and colleagues cautioned that the MIPS scoring methodology grades physicians "on a curve": payment bonuses depend on outperforming other physicians, which can be difficult for measures on which benchmark scores are already high across the country.

"Maximizing one's score under MIPS will be facilitated by identifying and targeting measures without such high national benchmark performance but for which individual physicians have performed well," the group wrote.

In any case, better quality metrics are needed to accurately capture radiologists' performance, Rosenkrantz and colleagues concluded.

"[We] believe that our findings highlight the continued need for radiology professional societies and CMS to work together to develop more meaningful and diverse quality measures for radiologists," they wrote.
