U.K. radiographers need boost to explain AI to patients


U.K. radiographers are on shaky ground when it comes to explaining decisions aided by artificial intelligence (AI) algorithms to patients, and the issue could be holding back wider use of the technology, researchers suggested in an article published July 1 in Radiography.

Radiographers were asked in a national survey if they trusted AI algorithms to help alleviate the current backlog of unreported images in the U.K. A majority said yes, but when asked how confident they would be explaining AI decisions to others, responses were tepid, researchers found.

"Confidence does not equate to competence and therefore education of the workforce and increased transparency of the systems are suggested," wrote corresponding author Clare Rainey, a diagnostic radiography lecturer at Ulster University in Newtownabbey, Northern Ireland.

The U.K.'s National Health Service (NHS) is under significant pressure due to a national shortage of radiographers, and AI support in reporting could help minimize the backlog of unreported images, according to the authors. In this study, the researchers explored the perceptions of reporting radiographers regarding AI and aimed to identify factors that affect their trust in these systems.

Rainey and colleagues distributed a survey open to all U.K. radiographers through social media (LinkedIn and Twitter) and made it available from February to April 2021. They received 411 full survey responses and focused their analysis on 86 respondents who indicated that image reporting was a part of their role.

When asked if they understood how an AI algorithm makes its decisions, 62% (n = 53) of reporting radiographers selected "agree" responses, indicating confidence in their understanding, according to the results.

On the other hand, 59.3% of respondents (n = 51) disagreed that they would be confident in explaining AI decisions to other healthcare practitioners. Similarly, only 29.1% (n = 25) agreed that they would be confident explaining AI decisions to patients and carers.

The respondents were also asked to rate their trust in AI for diagnostic image interpretation decision support on a scale of 0 to 10; the mean score was 5.28 and the median was 5.

Finally, the respondents were asked which features of an AI system may offer assurance of trustworthiness. The most popular choices were "indication of the overall performance of the system" (n = 76) and "visual explanation" (n = 67), the authors wrote.

That radiographers are confident in understanding how AI systems reach decisions but less confident in explaining the process to others points to a hurdle in implementing the technology, the authors wrote. Ultimately, only 10.5% (n = 9) of reporting radiographers said they were currently using AI as part of their reporting.

"Developers should engage with clinicians to ensure they have the information they need to allow for appropriate trust to be built. Awareness of how clinicians interact with an AI system may promote responsible use of clinically useful AI in the future," Rainey et al concluded.
