Benchmarking can help radiology practices evaluate productivity


It's natural to want to compare one's performance against that of others or against some standard. Radiologists often chat among themselves about the number of exams per year they read or maybe the number of relative value units (RVUs) they generate. While there are inherent problems with some of these comparisons, measuring and monitoring productivity can be beneficial to a radiology practice and to the individual radiologist.

The work RVU component of the Medicare resource-based relative value scale (RBRVS) is the most commonly used metric for productivity measurement. Work RVUs are readily obtainable from the practice's billing system; if the system does not report them directly, then they are easily calculated using a spreadsheet to multiply procedure volume by the RVUs per procedure available in a table found on the Medicare website. The work RVUs per procedure are the same, regardless of whether the procedure is performed in a hospital or a freestanding imaging center.
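The spreadsheet calculation described above can be sketched in a few lines of Python. The CPT codes and work RVU weights below are illustrative placeholders, not actual values from the Medicare fee schedule tables:

```python
# A minimal sketch of the spreadsheet-style calculation: total work RVUs
# are procedure volume multiplied by the work RVU weight for each code.
# Codes and weights here are illustrative placeholders only; real values
# come from the Medicare Physician Fee Schedule RVU files.
procedure_volume = {"71046": 1200, "74177": 450, "70553": 300}
work_rvu_per_code = {"71046": 0.22, "74177": 1.74, "70553": 2.29}

total_work_rvus = sum(
    volume * work_rvu_per_code[code]
    for code, volume in procedure_volume.items()
)
print(f"Total work RVUs: {total_work_rvus:,.2f}")  # prints: Total work RVUs: 1,734.00
```

The same multiplication works whether the source data comes from a billing system export or a manually maintained volume spreadsheet.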

Sandy Coffta from Healthcare Administrative Partners.

Work RVUs are a measure of clinical effort. They do not take into account any nonclinical activities such as practice administration, teaching and training, research, or practice development. They also do not account for the valid and important clinical activities of time spent speaking with a patient or referring physician, or the differences in complexity between cases with the same procedure code.

Obtaining an independent benchmark value against which to compare your own performance is somewhat problematic. Surveys conducted by various organizations such as the American College of Radiology (ACR), the Radiology Business Management Association, or the Medical Group Management Association attempt to produce benchmarking statistics. However, unless the survey rigorously gathers very specific data through questions posed in just the right way, these survey benchmarks are quite meaningless.

Practices across the country have very different work cultures that can affect their productivity. Any such nationally aggregated data would have to be broken down finely by practice setting, group size, modality mix, hours worked per day, days worked per year, and so on in order to be comparable to another practice. Unfortunately, participation in these surveys is not as widespread as one would hope, so such finely tuned data are usually not available.

Practical approaches

One practical approach to using productivity data measured in work RVUs is to compare performance within a single practice or against oneself over time. This removes the difficulty of using an independent value derived from unknown sets of practice patterns that do not necessarily reflect those in your own practice. The practice can review its own data to determine its own set of benchmarks and goals for productivity. Most importantly, the practice can define how to calculate the productivity measure in a way that is most relevant to the practice's circumstances.

Since work RVUs are a measure of clinical activity, it makes sense to compute RVU production for those hours or days actually worked to derive a per diem RVU rate. This removes the variability introduced when one physician might have more or less time off than others, or when a physician is less than full-time in the practice. Similarly, nonclinical time should be removed from the calculation. Here is an example of the calculation:

Sample calculation of radiologist productivity

             Total RVUs   Days worked   Administrative days   Adjusted days worked   RVUs per workday
Doctor A         11,071           225                  (27)                    198              55.91
Doctor B         11,794           241                                          241              48.94
Doctor C         10,795           225                                          225              47.98
Doctor D          5,667           119                                          119              47.62
Doctor E         11,039           232                                          232              47.58
Doctor F          9,982           223                                          223              44.76
Doctor H         10,610           239                                          239              44.39
Doctor G         10,018           228                                          228              43.94
Doctor I         10,232           240                                          240              42.63
Total            91,208         1,972                  (27)                  1,945

Mean   = 46.89 (total RVUs ÷ total adjusted days worked: 91,208 ÷ 1,945)
Median = 47.58 (middle value of the individual per-workday rates)

In our sample group, the productivity ranges from 5,667 to 11,794 work RVUs per year, the doctors do not all work the same number of days, and one of the doctors spends time as the practice's managing partner. Looking only at the raw data (Total RVUs), it would seem that Doctor B is the most productive since she has the highest total RVUs. However, when we consider the number of days she worked in the year, she is second in the ranking.

The managing partner (Doctor A) had fewer total RVUs than Doctor B, but after adjusting for his administrative days, he is the highest producer. Doctor D generated the lowest number of RVUs, but he works only half-time and ranks fourth in the practice's productivity. The data presented here is for illustrative purposes only and does not represent a radiology group's actual productivity.
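For groups that prefer scripting to a spreadsheet, the per-diem calculation shown in the sample table can be sketched in a few lines of Python, using the illustrative figures from the table:

```python
from statistics import median

# Per-workday RVUs = total work RVUs / (days worked - administrative days).
# Figures are the illustrative sample data from the table above.
doctors = {
    "A": (11071, 225, 27),   # (total RVUs, days worked, administrative days)
    "B": (11794, 241, 0),
    "C": (10795, 225, 0),
    "D": (5667, 119, 0),
    "E": (11039, 232, 0),
    "F": (9982, 223, 0),
    "G": (10018, 228, 0),
    "H": (10610, 239, 0),
    "I": (10232, 240, 0),
}

per_day = {
    doc: rvus / (days - admin)
    for doc, (rvus, days, admin) in doctors.items()
}

total_rvus = sum(r for r, _, _ in doctors.values())
adjusted_days = sum(d - a for _, d, a in doctors.values())
group_mean = total_rvus / adjusted_days  # aggregate rate: 91,208 / 1,945
group_median = median(per_day.values())  # middle individual per-workday rate
```

Note that the group mean here is a weighted (aggregate) rate, total RVUs divided by total adjusted days, rather than a simple average of the nine individual per-workday rates; the two differ slightly whenever the doctors work different numbers of days.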

An alternative approach would be to include all of the nonclinical aspects of the practice in the productivity measurement. This would involve assigning RVU values to administration, teaching, practice development, etc. and then including those activities in the RVU totals. A system for gathering the information for this reporting would have to be devised and implemented. Duszak and Muroff1 provide a good overview of these nonclinical considerations, as well as examples for reporting productivity metrics.

Presentation of the productivity data can be done in a table, such as shown above, or more creatively using graphs or an individualized report card. Some practices choose to present productivity data in a blinded fashion, while others identify the data for each physician by name. In our example, the doctors could be told their own doctor letter to keep the results anonymous, or the names of each doctor could be listed for full disclosure. This is one of the decisions for the group to make when considering the collection and reporting of productivity data.

Within a practice, there might also be different benchmarks for various subspecialties. If the group is large enough, subspecialists can be compared with each other rather than with a composite benchmark for the group as a whole. Regardless of the reporting method, it is important for the group to reach a consensus on the goals of productivity measurement, the measurement methods to be used, the target benchmark to be attained, and the consequences (if any) of not attaining the target. This last consideration should be carefully thought out before embarking on the program, and perhaps should not be applied until some time after the group has become comfortable with the measurement program and has been able to adjust the measurement process to best meet the group's needs.

The subject of measuring radiologist productivity has been covered in many articles over the years. There is no right or wrong system for a particular group, only the one the group's members feel is best for them. The measurement system should be adjusted dynamically as the group's needs and circumstances change.

1For more in-depth information on this topic, the articles by Drs. Duszak and Muroff in the June 2010 Journal of the American College of Radiology (Part 1 and Part 2) are well worth reading, and their lists of references point to additional material.

Sandy Coffta is vice president of client services at Healthcare Administrative Partners.

The comments and observations expressed are those of the author and do not necessarily reflect the opinions of AuntMinnie.com.
