AI boosts radiologist perception training

A team in Europe has launched a novel web application to help radiology trainees gain skills in detecting lung nodules on chest x-rays. In an article published October 24 in the British Journal of Radiology, the group provided a step-by-step description of how to use it.

Significantly, the app leverages deep learning-based image segmentation in combination with computer-simulated lung nodules to automatically generate cases, lead author Jens Borgbjerg, MD, PhD, of Akershus University Hospital in Oslo, Norway, told AuntMinnie.com.

“For the novice, beginning interpretation of cross-sectional imaging can be a daunting task,” he noted.

Self-directed learning is a mainstay of radiological education, and systematic approaches to image interpretation are generally regarded as a cornerstone of trainee education. However, software tools for this type of training are scarce, largely desktop-based, and often involve labor-intensive case curation, Borgbjerg explained.

As a potential alternative, researchers have begun to explore the use of AI for artificially generating clinical cases. The concept is that AI can simulate lesions on top of previous imaging studies, which could eliminate the need for identifying real clinical cases.

To that end, Borgbjerg and colleagues aimed to develop and make available a purely web-based application for perception training in lung nodule detection on chest x-rays.

In brief, the researchers constructed a deep-learning model for lung segmentation based on open-access data from three chest x-ray data sets. In total, the data sets included 513 normal images and 154 with small lung nodules. Next, they converted the model to a web-based format, a step only recently made possible by new JavaScript libraries (TensorFlow.js, for instance) that enable deep-learning models to run locally in browsers, the authors noted.
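To make the in-browser inference step concrete, here is a minimal TensorFlow.js sketch of loading a converted segmentation model and running it on a chest x-ray drawn into a canvas. The model URL, input size, and 0.5 threshold are illustrative assumptions, not details published by the authors.

```typescript
import * as tf from '@tensorflow/tfjs';

// Assumed values for illustration; the article does not publish these details.
const MODEL_URL = 'https://example.org/lung-seg/model.json';
const SIZE = 256;

// Load a segmentation model previously converted with the tensorflowjs converter.
const model = await tf.loadGraphModel(MODEL_URL);

// Produce a binary lung mask from a chest x-ray rendered on a canvas element.
function segmentLungs(canvas: HTMLCanvasElement): tf.Tensor2D {
  return tf.tidy(() => {
    // Grayscale pixels, scaled to [0, 1], shaped [1, SIZE, SIZE, 1].
    const gray = tf.browser.fromPixels(canvas, 1).toFloat().div(255);
    const input = tf.image.resizeBilinear(gray.expandDims(0) as tf.Tensor4D, [SIZE, SIZE]);
    const probs = model.predict(input) as tf.Tensor4D;
    // Threshold per-pixel probabilities into a 0/1 mask (0.5 is an assumed cutoff).
    return probs.squeeze([0, 3]).greater(0.5).toInt() as tf.Tensor2D;
  });
}
```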

The researchers then combined the segmentation model with an algorithm for generating artificial lung nodules that allowed “on-the-fly” procedural insertion of lung nodules in the chest x-rays. Lastly, they integrated this functionality into an existing zero-footprint web-based DICOM viewer and created a dynamic HTML page through which users can specify parameters to generate cases. The result is what Borgbjerg and colleagues call the “Perception Trainer.”
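The authors' nodule-simulation algorithm is not reproduced in this article, but the gist of procedural insertion can be sketched as adding a soft, roughly Gaussian density at a chosen position inside the segmented lung mask. The NoduleSpec shape, the Gaussian falloff, and every parameter below are illustrative assumptions, not the paper's method.

```typescript
// Illustrative stand-in for the paper's nodule-simulation algorithm:
// an additive Gaussian blob approximating a projected spherical density.
interface NoduleSpec {
  radiusPx: number;  // approximate nodule radius in pixels
  intensity: number; // peak added brightness, on a 0-255 scale
}

// Insert one synthetic nodule centered at (cx, cy), a point assumed to have
// been sampled from inside the lung mask produced by the segmentation model.
function insertNodule(
  pixels: Uint8ClampedArray, // grayscale image, row-major, length = width * height
  width: number,
  height: number,
  cx: number,
  cy: number,
  spec: NoduleSpec,
): void {
  const sigma = spec.radiusPx / 2;
  const reach = Math.ceil(spec.radiusPx * 2);
  for (let dy = -reach; dy <= reach; dy++) {
    for (let dx = -reach; dx <= reach; dx++) {
      const x = cx + dx;
      const y = cy + dy;
      if (x < 0 || x >= width || y < 0 || y >= height) continue;
      // Soft radial falloff so the nodule blends into surrounding lung tissue.
      const gain = spec.intensity * Math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma));
      pixels[y * width + x] = Math.min(255, pixels[y * width + x] + gain);
    }
  }
}
```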

Screenshots of the imaging viewer interface before (a-b) and after (c-d) having clicked “Review placed markers.” The “Case assessment widget” has been magnified in (b,d). One marker was incorrectly placed (c), which is signified with a red cross. One nodule was not identified (c) and was highlighted by a yellow circle through the mouse-over function in the “List of synthetic lesions.” Image courtesy of the British Journal of Radiology.

“The application demonstrates a novel approach for on-the-fly generation of cases in chest x-ray lung nodule detection employing deep-learning-based segmentation and lung nodule simulation,” the authors wrote.

The app allows users to specify the characteristics of lung nodules to be inserted into chest x-rays and further produces automated feedback regarding the user’s performance. The project website includes written and video instructions on how to use the app.
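The automated feedback can be pictured as a distance check between user-placed markers and the known centers of the inserted nodules: a marker within some radius of a nodule counts as a hit, unmatched markers as false positives, and unmatched nodules as misses. The scoreCase function and its hitRadius default below are hypothetical, not taken from the app's code.

```typescript
interface Point { x: number; y: number; }

// Compare user-placed markers against known synthetic-nodule centers.
// hitRadius (in pixels) is an assumed tolerance, not a value from the paper.
function scoreCase(markers: Point[], nodules: Point[], hitRadius = 15) {
  const found = new Set<number>(); // indices of nodules already matched
  let falsePositives = 0;
  for (const m of markers) {
    const idx = nodules.findIndex(
      (n, i) => !found.has(i) && Math.hypot(m.x - n.x, m.y - n.y) <= hitRadius,
    );
    if (idx >= 0) found.add(idx); // correct detection
    else falsePositives++;        // marker with no matching nodule
  }
  return {
    truePositives: found.size,
    falsePositives,
    missed: nodules.length - found.size, // nodules the user never marked
  };
}
```

Counts like these correspond to the red-cross and yellow-circle feedback described in the figure caption above.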

An estimated 60% to 80% of radiological errors are attributable to perceptual errors, and radiology educators have encouraged the development and dissemination of easily accessible, interactive approaches to improve training, the authors noted.

“While our presented application facilitates the acquisition of implicit perceptual skills, trainees need to obtain explicit knowledge of pathology and imaging techniques,” they wrote.

Hence, they suggested the application can be a helpful adjunct in radiology education, providing an easily accessible option for high-volume, high-intensity training of perceptual skills.

Moreover, the work could serve as a basis for similar tools, they noted. Specifically, given that the imaging viewer is primarily built for volumetric datasets, adapting the approach to CT (for example, liver or lung segmentation with insertion of simulated liver metastases or lung lesions) is a promising avenue for further development, the group wrote.

“We anticipate that the description and availability of our developed solution with open-sourced codes may help facilitate radiological education and stimulate the development of similar AI-augmented educational tools,” Borgbjerg and colleagues concluded.

