Can radiology AI win with a bad hand?


More has been written about artificial intelligence (AI) and its potential than about any other technology in radiology. More vendors also offer AI algorithms than offer all of the imaging modalities combined. This is fascinating for a technology that has yet to prove its value, has not shown a viable return on investment (ROI), and has not even been embraced by the imaging community.

The PACSman, Mike Cannavo.

To put it in poker terms, AI is like a seven-deuce off-suit hand playing against a pair of aces. The technology can still win, depending on the turn of the cards, but a lot is riding on the river card to determine the final outcome.

Vendors are starting to come around to accept the reality that AI will not replace radiologists. This is especially important given that a recent study found that nearly two-thirds of radiology AI software that has received the CE Mark in Europe showed no peer-reviewed evidence of efficacy. The remaining one-third of the available software applications focused only on diagnostic accuracy. Less than 20% of the total number of products demonstrated potential clinical impact on diagnostic thinking, patient outcome, or costs.

Higher levels of efficacy may also be necessary for health insurance reimbursement. The algorithm would need to deliver a higher level of performance than a radiologist would otherwise produce without it. More clinical studies may be required if a vendor needs to provide higher-level validation to justify the cost of using the algorithm. This in turn might drive up the price of the software in a market that is already price sensitive.

AI developers are also getting better at explaining what their algorithms do and how they do it. Sadly, these messages are often still lost on an audience that is concerned about declining reimbursement and the impact AI might have on it. Thorny practical matters also still remain, such as figuring out how AI will be paid for when used and, more importantly, who will pay for it.

High stakes

It's crucial to demonstrate an ROI for any clinical system. Most of the recent discussion around ROI in AI has centered on the added payment available under the U.S. Centers for Medicare and Medicaid Services (CMS) New Technology Add-on Payment (NTAP) program. Unfortunately, only a few vendors have qualified for NTAP dollars so far.

Those just applying for NTAP funds this year will be lucky to get a year's worth of added payment from the program before it is phased out in 2023. There is talk about diagnosis-related group (DRG) assignments being reconsidered and having AI factored into the cost, but that's still in the discussion phase.

The often-misunderstood NTAP program has even led to a mini civil war between competing vendors after Viz.ai received NTAP status in September 2020 for its AI algorithm for detecting large vessel occlusion (LVO) strokes.

Logic dictates that if one vendor qualifies for a payment for a particular algorithm, then all other vendors offering the same product in the same area should qualify for payment as well. But logic has no place here. It depends on when the vendor applied to the program.

The one vendor that was granted NTAP funds applied over two years ago, and as the first to do so, it had to demonstrate that it met the newness, cost, and clinical improvement criteria by showing that the product reduced time to treatment, drove high specialist engagement, and yielded improved patient outcomes.

Since one company's NTAP approval, a handful of other startup companies have claimed the new reimbursement applies to their software as well. These solutions range from stroke triage to radiology prioritization and even to computer-aided detection (CAD), according to Niall Brennan, a former chief data officer at CMS and a former advisory board member to Viz.ai.

Playing the odds

The question is, does the NTAP code apply to everyone, and will it be the panacea to jumpstart medical imaging AI? In his article published online in December on MedCity News, Brennan wrote:

"To date, no other AI technologies have been approved by CMS. However, CMS has stated that the intent of NTAP is not to provide an advantage or reward to a single vendor. Therefore, if other vendors provide evidence of substantial similarity, their technology should also be able to achieve NTAP."

Seems clear cut, right? The article continues, "Where things start to get problematic is if, instead of CMS determining substantial similarity, other vendors unilaterally assert substantial similarity in order to 'piggyback' on an existing NTAP."

Brennan concludes that "Without official word from CMS, hospitals have to choose whether to take a risk -- as the decision lies with individual hospitals' risk tolerance as to the technology they want to use and submit for NTAP payments ... and ensure that the correct codes are being submitted for the correct services."

So there you have it. Heads you win, tails you get audited ...

Dr. Ameer Hassan of the University of Texas Rio Grande Valley also discussed NTAP payments for stroke AI software in an article published online last month in the Journal of Neurointerventional Surgery. He wrote that "The NTAP decision specifies a single product as qualifying for the additional payment, and CMS has to determine whether this applies to other products or not; to date, no other products have been deemed eligible by CMS."

In any event, the bottom line remains that the significant initial investment for radiology AI means that it needs to show some form of payback in order to be used on a widespread basis.

A way to win?

Unless you perform a large volume of studies that would benefit from the use of a CT algorithm stroke protocol, lung CT algorithm, or others, you may be hard-pressed to justify the purchase of AI software without obtaining additional revenue. There are significant advantages to using AI, but no clear path for how it will pay for itself in hard dollars.

Small hospitals make up over 70% of all facilities in the U.S. (excluding outpatient diagnostic centers, which add another 5,000+ facilities), and many simply cannot or are unwilling to make the investment required by many AI firms. This is where an AI company can do well in the market by offering the use of algorithms on a "per-click" basis, an approach similar to how many PACS studies are charged today.

In this scenario, vendors could charge end users a low one-time sign-on fee and require a minimum volume commitment, with algorithm costs based on volume. Smaller facilities could then access as many algorithms as they desire without incurring AI's current high per-algorithm entry costs.
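To make the math concrete, here is a minimal back-of-the-envelope sketch of such a per-click model. Every number in it -- the sign-on fee, the per-study charge, and the minimum volume -- is purely hypothetical and illustrative, not any vendor's actual pricing.

```python
# A rough per-click cost sketch with made-up numbers; the fee structure and
# dollar amounts are hypothetical, not any vendor's actual pricing.
def annual_per_click_cost(studies_per_year: int,
                          signup_fee: float = 5_000.0,    # hypothetical one-time sign-on fee
                          per_study_fee: float = 3.0,     # hypothetical per-click charge
                          minimum_studies: int = 10_000   # hypothetical volume commitment
                          ) -> float:
    """First-year cost: sign-on fee plus per-study fees on at least the committed volume."""
    billed_studies = max(studies_per_year, minimum_studies)  # commitment acts as a floor
    return signup_fee + billed_studies * per_study_fee

# Example: a small hospital running 12,000 qualifying studies a year.
print(annual_per_click_cost(12_000))  # 5,000 + 12,000 * 3 = 41,000
```

The point of the sketch is simply that the facility's exposure scales with use rather than with a large up-front license, which is what makes the model attractive to smaller hospitals.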

Selecting the best game

An end user can select the AI vendor and algorithm(s) they want to use, either directly from the vendor or through a clearinghouse/marketplace. Exams can be defaulted to a particular algorithm based on the DICOM header information (i.e., a CT head protocol gets a CT head algorithm), the patient's imaging data (current and prior studies, plus any additional desired clinical data) sent to the cloud, the study processed, and a report sent back. A facility can use as many algorithms as it wishes in this manner.
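As a minimal sketch of the header-based routing step, the snippet below reads only the DICOM header of a study and matches it against a small routing table. The routing rules, algorithm names, and file name are hypothetical placeholders, and the actual upload to a cloud service or marketplace is left out; this is not any vendor's API.

```python
# Minimal DICOM-header routing sketch; rules and algorithm names are hypothetical.
from typing import Optional
import pydicom

# Hypothetical mapping of (modality, keyword in study description) -> algorithm.
ROUTING_RULES = [
    ("CT", "HEAD",  "ct-head-stroke-algorithm"),
    ("CT", "CHEST", "ct-lung-nodule-algorithm"),
    ("MR", "BRAIN", "mr-brain-triage-algorithm"),
]

def pick_algorithm(dicom_path: str) -> Optional[str]:
    """Choose an AI algorithm from the DICOM header, or None if no rule matches."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)  # header only, skip pixel data
    modality = getattr(ds, "Modality", "")
    description = str(getattr(ds, "StudyDescription", "")).upper()
    for rule_modality, keyword, algorithm in ROUTING_RULES:
        if modality == rule_modality and keyword in description:
            return algorithm
    return None

if __name__ == "__main__":
    algo = pick_algorithm("example_ct_head.dcm")  # hypothetical local study file
    if algo:
        print(f"Send study (plus priors and clinical data) to cloud algorithm: {algo}")
    else:
        print("No matching algorithm; study stays on the normal worklist.")
```

In practice the same decision would be made by a routing engine or marketplace gateway rather than a script, but the logic -- match header fields, pick an algorithm, forward the study -- is the same.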

While not nearly as ideal as AI fully integrated with a PACS/enterprise imaging system (EIS), any delays with this approach should not be significant enough to affect the radiologist's workflow. The study remains on the PACS worklist just long enough for the AI interpretation to come back -- typically within 10 minutes. This obviously excludes stroke/LVO studies, which need to be processed in real time. However, this model would let a facility be more selective in its use of AI until reimbursement becomes more real and/or AI becomes part of the standard of care.

The major costs with AI come from creating the integration with the PACS/EIS. You can still use AI without an interface; it's just not as elegant. In an ideal world, you would want a seamless AI interface to the PACS. It's unrealistic, though, to think this will happen between every AI vendor and every PACS/EIS provider. Instead, the required data could be uploaded to a cloud-based site, the algorithm run, the results sent back, and then imported into the study as an attachment (report).

Obtaining protected health information (PHI) typically requires interfacing with multiple clinical systems. True AI requires as much of this data as possible to generate a diagnosis. A few companies are testing interfaces that can pull the required data together from various clinical information systems so it can be sent to the cloud for processing. It will be a while before you see this on a widespread basis, though.

There are many companies at the radiology AI poker table. Some have a big stack of chips while others seem to be all-in with almost every hand they play. The size of the chip stack doesn't make the player (or product) better; it just keeps them in the game a bit longer.

The best hand doesn't always win, either. Many companies have bluffed their way into winning while holding a bad hand. It all depends on what the other guy thinks you have. Does he raise, call, or fold?

Radiology AI is at the stage of a poker tournament where players are slowly getting whittled down. It will be interesting to see how each plays the game and who ultimately sits at the final table.

Michael J. Cannavo is known industry-wide as the PACSman. After several decades as an independent PACS consultant, he worked as both a strategic accounts manager and solutions architect with two major PACS vendors. He has now made it back safely from the dark side and is sharing his observations.

His healthcare consulting services for end users include PACS optimization services, system upgrade and proposal reviews, contract reviews, and other areas. The PACSman is also working with imaging and IT vendors developing market-focused messaging as well as sales training programs. He can be reached at [email protected] or by phone at 407-359-0191.

The comments and observations expressed are those of the author and do not necessarily reflect the opinions of AuntMinnie.com.
