Book excerpt from 'The Future of Seeing' by Daniel Sodickson

Medical imaging has had a dramatic effect on the human experience, and it will continue to do so, according to a new book, "The Future of Seeing: How Imaging is Changing Our World," by NYU Langone Health researcher Daniel Sodickson, MD, PhD.

The work was published October 14 by Columbia University Press. In this excerpt, Sodickson describes the impact AI could have on the future of imaging. 

From Chapter 11: No More Tunnel Vision  

… Let us now consider the possible shape of tomorrow’s AI-powered medical imaging devices. To begin with, in addition to a brain, smart medical scanners will gain a memory. Previous images and other relevant information will be used to tailor the new data that is gathered, making imaging sessions progressively faster, and focusing them explicitly on detecting departures from your baseline state of health. One of the reasons pancreatic cancer is so deadly is that it grows gradually under the radar, and it is generally not detected until it has already started to spread. Imaging machines that are specifically tuned to detect small changes over time could provide the early warning that is critical for survival. Prostate cancer, by contrast, can percolate uneventfully for years or decades after it is discovered. The key is to detect when it starts to become aggressive and life-threatening.
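(A concrete, if simplified, illustration: the sketch below, in Python, shows one way a change-attuned machine might flag departures from a baseline scan. Everything in it -- the threshold, the noise model, the assumption that the two scans are already co-registered and intensity-normalized -- is an illustrative simplification, not a description of any existing scanner.)

```python
import numpy as np

def change_map(baseline, current, noise_sigma=0.05, z_thresh=5.0):
    """Flag voxels that differ from a registered baseline scan by more
    than the expected scan-to-scan noise.

    Assumes baseline and current are already co-registered and
    intensity-normalized -- a strong assumption in practice.
    """
    z = (current - baseline) / noise_sigma   # change expressed in noise units
    return np.abs(z) > z_thresh              # boolean mask of suspicious voxels

# Toy example: a stable "organ" with one small simulated new lesion.
rng = np.random.default_rng(0)
baseline = rng.normal(1.0, 0.2, size=(64, 64, 64))
current = baseline + rng.normal(0.0, 0.05, size=baseline.shape)
current[30:33, 30:33, 30:33] += 0.5          # the new lesion

mask = change_map(baseline, current)
print(f"{int(mask.sum())} voxels flagged as changed")
```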

Here, MRI has already been shown to allow noninvasive monitoring in place of regular invasive biopsies, and change-attuned machines could make longitudinal monitoring faster, cheaper, and more accessible. That is why my NYU colleague Hersh Chandarana -- an expert in imaging of the abdomen and pelvis -- is obsessed with scanners of the future. Patricia Johnson, a physicist in our radiology department, is working with Hersh and his abdominal imaging colleague Angela Tong to make scanners intelligent enough to decide automatically, after just a few minutes, whether a patient’s risk of prostate cancer is high enough to warrant continued scanning. My graduate student, Arda Atalik, is working on emulating depth perception by using contextual information derived from prior scans or text reports to enable ultrafast imaging with small numbers of projections. And Lavanya Umapathy, a postdoctoral fellow in my laboratory, is exploring how giving neural networks memory of a patient’s prior images can make the networks better at estimating disease risk by enabling them to detect which features of current scans are new and which have remained stable over time. The longer-term goal is to change the way scanners are operated, and even the way they are constructed.
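(To give a flavor of how memory can enter a neural network, here is a minimal sketch in PyTorch: a toy model with a shared encoder for prior and current scans, whose risk estimate depends explicitly on what has changed between them. The architecture and every parameter in it are my own illustration of the general idea, not the actual model used in our laboratory.)

```python
import torch
import torch.nn as nn

class LongitudinalRiskNet(nn.Module):
    """Toy two-timepoint risk estimator: a shared encoder embeds the
    prior and current scans, and the head reasons over both embeddings
    plus their difference -- i.e., over what is new versus stable."""

    def __init__(self, channels=1, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(          # shared weights for both scans
            nn.Conv3d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.head = nn.Sequential(             # risk from [prior, current, change]
            nn.Linear(3 * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, prior_scan, current_scan):
        e_prior = self.encoder(prior_scan)
        e_curr = self.encoder(current_scan)
        features = torch.cat([e_prior, e_curr, e_curr - e_prior], dim=1)
        return torch.sigmoid(self.head(features))   # risk score in [0, 1]

model = LongitudinalRiskNet()
prior = torch.randn(1, 1, 32, 32, 32)    # toy image volumes
current = torch.randn(1, 1, 32, 32, 32)
print(model(prior, current))
```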

Image quality for the first generation of new memory-enabled imaging machines -- let’s call them type 1 scanners of the future -- will likely equal or exceed the current state of the art. The machines themselves will likely resemble today’s scanners in their overall bulk and expense. However, their operation will be both newly efficient and newly data-rich. They will be bristling with peripheral sensors that will track in detail each patient’s movements, as well as the machines’ own internal states. They will gather data in continuous, multisensory streams, adapting on the fly as conditions shift. They will be the medical equivalent of self-driving cars, always monitoring inside and out while hunting for worrisome changes that might affect their precious cargo.

In the current episodic model of medical imaging, the task of arranging distinct episodes into a storyline is left largely to the mind of the physician. Soon, though, our scanners will be able to tell their own stories. They themselves will be able to sound the alarm at any concerning changes. Are you on a slow trajectory toward Alzheimer’s disease? Has there been an ill-defined but clearly uncharacteristic shift in your body’s structures since the last time you were scanned? These are the kinds of questions that smart, adaptive, and predictive imaging could answer.

Eventually, memory and predictive power will allow the design of medical scanners themselves to change, making them smaller, cheaper, and more accessible, and delivering in earnest on the promises of the current less-is-more trend. When image quality inevitably suffers, as the physics of tomography dictates that it must when certain engineering compromises are made, the use of image memory, representations, and correlated sensor streams will help to rescue degraded images. Perhaps more importantly, though, these coordinated tools will also allow us not to focus so single-mindedly on producing visually pleasing images. In a revolutionary break from the tyranny of image quality that has defined the progress of imaging for millennia, new imaging devices will arise with the specific purpose of providing early warning, as our own senses evolved to do for us. Cheap scanners will be deployed in doctors’ offices or in drugstores. Rather than producing images that require advanced training to interpret, they will provide simple answers, like any other medical test. They will tell you if you are continuing in your baseline state of health, or if instead you might need to go in for some more advanced imaging. Think of these cheap, answer-oriented devices as type 2 scanners of the future.
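(For the mathematically inclined, here is a minimal sketch of how image memory can rescue a degraded acquisition: a toy reconstruction from too few measurements, regularized toward a remembered prior image. The matrix, dimensions, and regularization weight are illustrative stand-ins for a real tomographic system, not any particular scanner’s method.)

```python
import numpy as np

def reconstruct(A, y, x_prior, lam=0.1):
    """Least-squares reconstruction from underdetermined measurements
    y = A @ x, pulled toward a remembered prior image x_prior:

        minimize ||A x - y||^2 + lam * ||x - x_prior||^2

    Closed form: x = (A^T A + lam I)^{-1} (A^T y + lam x_prior).
    """
    n = A.shape[1]
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ y + lam * x_prior
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(1)
n, m = 100, 30                                 # 100 unknowns, only 30 measurements
x_true = rng.normal(size=n)
x_prior = x_true + 0.05 * rng.normal(size=n)   # yesterday's image, slightly off
A = rng.normal(size=(m, n))                    # toy stand-in for a tomographic operator
y = A @ x_true

x_no_prior = reconstruct(A, y, np.zeros(n), lam=1e-6)   # nearly unregularized
x_with_prior = reconstruct(A, y, x_prior, lam=0.5)

print("error without prior:", np.linalg.norm(x_no_prior - x_true))
print("error with prior:   ", np.linalg.norm(x_with_prior - x_true))
```

In this toy problem, thirty measurements cannot pin down a hundred unknowns on their own, so the unregularized reconstruction misses badly; anchoring the solution to yesterday’s image recovers most of the lost information.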

But why stop at the drugstore? I fully expect that the latticework of sensors built into future scanners will eventually escape the bounds of our time-honored tubes entirely. If sensors can be trained to operate like auxiliary senses within high-end medical imaging devices, then why can they not also operate on their own, informed by memory of previous images? Cheap sensors -- type 3 scanners of the future -- may not be able to produce state-of-the-art cross-sectional images in a traditional tomographic sense, but they may still be able to provide early warning of changes to your health.

Consider this analogy. Once you have carefully inspected the layout of a room -- once you have built up a sufficiently robust representation of it in your mind -- you can generally close your eyes and navigate pretty effectively by touch, sound, and other senses. You may not be quite as agile as you were when you had your eyes open, but you can get by. At the very least, you should be able to sense when someone new enters the room. Now invert the scene. Instead of you training your senses on the room around you, imagine that the room is focusing attention on you. Imagine that the walls, the furniture, even your clothes are instrumented with a diverse network of sensors with built-in spatial awareness. Imagine that all the room’s senses are on the alert for internal changes, collectively attuned to signatures of approaching disease. You can also think of type 3 scanners of the future as a new twist on self-driving cars: not smart vehicles steering you through a changing environment, but smart environments looking out for concerning changes in you.
