Large language-image model segments organs, detects cancer

Sunday, November 26 | 1:50 p.m.-2:00 p.m. | S4-SSIN01-6 | Room S401

In this presentation, researchers will share how large language-image models can automatically segment 25 abdominal organs on CT exams and detect six tumor types.

Setting out to investigate the utility of large language-image models, the researchers developed what they call the CLIP-Driven Universal Model, which integrates text embeddings learned from Contrastive Language-Image Pretraining (CLIP) models into segmentation models.
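To give a concrete sense of how such text conditioning might be wired into a segmentation network, here is a minimal PyTorch sketch. It is not the authors' implementation: it assumes a frozen CLIP text encoder supplies one embedding per organ label (random placeholders stand in below), and a small "controller" layer turns each embedding into the weights of a per-class 1 x 1 x 1 convolution applied to backbone features. The names and sizes (CLIPConditionedHead, TEXT_DIM, FEAT_CH) are illustrative, not drawn from the presentation.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 25   # one label per abdominal organ; illustrative
TEXT_DIM = 512     # CLIP ViT-B/32 text embedding size
FEAT_CH = 48       # channels of the backbone's output features

# In practice these would be frozen CLIP text embeddings of prompts such as
# "a computerized tomography of a {organ name}"; random placeholders here.
text_embeddings = torch.randn(NUM_CLASSES, TEXT_DIM)

class CLIPConditionedHead(nn.Module):
    """Segmentation head whose per-class conv parameters are generated
    from CLIP text embeddings (a hypothetical simplification)."""
    def __init__(self, feat_ch: int, text_dim: int):
        super().__init__()
        # Maps one text embedding to one conv weight vector plus a bias.
        self.controller = nn.Linear(text_dim, feat_ch + 1)

    def forward(self, feats: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, D, H, W) volumetric features from any CNN backbone.
        params = self.controller(text_emb)                    # (num_classes, C + 1)
        weight = params[:, :-1].view(-1, feats.shape[1], 1, 1, 1)
        bias = params[:, -1]
        # One binary mask logit per class, driven by its text embedding.
        return nn.functional.conv3d(feats, weight, bias)

head = CLIPConditionedHead(FEAT_CH, TEXT_DIM)
feats = torch.randn(1, FEAT_CH, 32, 64, 64)   # dummy backbone output
logits = head(feats, text_embeddings)          # (1, 25, 32, 64, 64)
print(logits.shape)
```

Because the class heads are generated from language embeddings rather than learned as fixed output channels, a design along these lines can in principle absorb new labels or datasets by adding prompts, which is one way to read the extensibility the researchers describe.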

“The proposed CLIP-Driven Universal Model, integrating image representation with language, possessed remarkable efficiency, generalizability, transferability, and extensibility in organ segmentation and tumor detection,” presenter Zongwei Zhou and colleagues wrote.

The researchers developed their model using 3,410 CT scans from 14 public datasets. After assessing generalizability and transferability on 117 CT scans from two external datasets, they independently evaluated the CLIP-Driven Universal Model on 5,308 CT scans with 21 annotated organs from a dataset assembled at Johns Hopkins Hospital.

In that test, the model yielded strong segmentation performance, high computational efficiency, and generalizable results.

“The high-performing AI can facilitate the creation of datasets significantly larger than current medical datasets, thus promoting larger-scale studies for early cancer detection in real-world settings,” the researchers wrote.

Check out this Sunday afternoon presentation to get all of the details.
