Leveraging the benefits of the cloud, artificial intelligence (AI) shows promise for delivering image interpretation services to the 4 billion people worldwide who live in areas without regular access to imaging or enough physicians to interpret the images, according to Moore. A neuroradiologist, engineer, and informaticist, Moore joined Google in December 2016 as vice president of healthcare for Google's Cloud platform and is the company's senior healthcare leader.
While keeping a lower profile than companies such as IBM with its Watson AI program, Google has been researching and developing AI technology for medical imaging applications for several years. In a keynote talk on Tuesday at C-MIMI, Moore discussed several Google AI initiatives and shared the firm's vision for medical imaging AI.
Big, siloed data
Medical imaging has undergone a complete analog-to-digital transformation over the past 20 years, resulting in an abundance of data that could be mined for new insights, Moore said. An estimated 2 trillion images are generated per year worldwide, and a little under half of those are produced in the U.S. That annual imaging volume translates into 450 petabytes of storage.
"Historically that's doubled about every five years, although in the last two or three years we've seen that [growth rate] actually accelerate," he said.
For medicolegal reasons, images must be stored for various periods of time. Overall, Moore estimates that approximately 4.5 exabytes of image data are stored worldwide.
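Moore's figures can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below assumes the 450 petabytes refers to the worldwide annual volume, and it models medicolegal retention with a simplified exponential-growth curve; the per-image size and the 20-year retention window are illustrative assumptions, not figures from the talk.

```python
# Rough sanity check of the storage figures cited above.
# The average image size is derived from the talk's numbers, not stated in it.

IMAGES_PER_YEAR = 2e12           # ~2 trillion images generated worldwide per year
ANNUAL_STORAGE_PB = 450          # petabytes of new image data per year (assumed worldwide)
DOUBLING_PERIOD_YEARS = 5        # historical growth rate Moore cites

# Implied average size per image (1 PB = 1e15 bytes)
avg_bytes_per_image = ANNUAL_STORAGE_PB * 1e15 / IMAGES_PER_YEAR
print(f"Implied average image size: {avg_bytes_per_image / 1e3:.0f} KB")  # 225 KB

def cumulative_exabytes(retention_years: int) -> float:
    """Total archive if each earlier year's volume was smaller by the
    doubling rate -- a simplification, not Moore's exact model."""
    total_pb = sum(
        ANNUAL_STORAGE_PB * 2 ** (-(y / DOUBLING_PERIOD_YEARS))
        for y in range(retention_years)
    )
    return total_pb / 1000  # 1 EB = 1000 PB

print(f"~{cumulative_exabytes(20):.1f} EB retained over 20 years")
```

Under these assumptions the model lands in the low single-digit exabytes, the same order of magnitude as Moore's 4.5-exabyte estimate.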
Unfortunately, the data are most often siloed in PACS archives across institutions, and sometimes even within institutions for those who have multiple PACS, he said. These archives contain 10 to 20 years of longitudinal data and could potentially be correlated with the electronic health record (EHR), pathology, and outcomes data.
"An open question for all of us in the research community is, are these large clinical image archives actually useful for research?" he said. "Many of us have grown up only doing research on research protocol scans, and have not paid too much attention to clinical archives."
Moore and colleagues have spent the past couple of years investigating whether clinical imaging archives can be leveraged for radiomics. As an example of that work, the group reported in the September issue of Radiology that automated brain volume measurements could be accurately computed from thick-section, clinical-quality MR images.
Diagnostic radiology is currently at its human limits, Moore said. A radiologist typically reads about 50 patient studies a day, with an average of approximately 435 images per study. That represents an average of 1.52 seconds per image.
"As we have evermore powerful machines and higher-resolution images, we have not only this avalanche of data, but images that we need to be responsible for with taking care of patients," Moore said.
The only traditional way to increase the scale of radiology is to train more radiologists.
"We are blessed here in the U.S. and in the developed world to have a relative abundance of well-trained radiologists," he said. "I will say that's a bit of a bubble; that is decidedly not the case throughout the world."
This shortfall serves as ample motivation to increase access to the interpretation of medical imaging around the world, he said. Google's vision as a company is to organize the world's information and make it universally accessible and useful, Moore said. These goals can be accomplished in medical imaging with the cloud and AI.
"In a medical context, my mission is to connect the cloud and AI to develop incredible, powerful solutions to assist caregivers and providers in serving their patients and to scale access to care," he said.
Google has developed a number of AI technologies that can be used in medical imaging applications. In November 2015, the company released its TensorFlow open-source software for general machine learning. Google has also developed advanced techniques such as AutoML, a method that automates the development and optimization of algorithms.
"As we look at medical imaging in general and medicine in general, we are going to need thousands and thousands of algorithms to even approach what a radiologist or a physician does on a given day," Moore said. "This is an answer to begin approaching how we scale on the development of these algorithms and speed them up."
Google has also begun utilizing "sparse" architecture in machine learning, which allows the inclusion of more parameters in algorithms and shorter training time, Moore said.
Data privacy and security are critical in healthcare, and Google has created differential privacy tools that allow researchers from different institutions to aggregate their data while still respecting patient privacy, he said.
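The talk did not describe Google's privacy tools in detail, but the core idea of differential privacy can be illustrated with the textbook Laplace mechanism: before a site contributes an aggregate statistic (here, a simple patient count), it adds calibrated noise so that no individual's presence in the data can be inferred. This is a generic sketch of the concept, not Google's implementation.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A count query has sensitivity 1 (adding or removing one patient
    changes it by at most 1), so Laplace noise with scale 1/epsilon
    is sufficient.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# Each institution releases only its noisy count; an aggregator can
# sum these without ever seeing exact per-site patient numbers.
site_counts = [128, 342, 57]
noisy_total = sum(private_count(n, epsilon=1.0) for n in site_counts)
print(f"Aggregated (noisy) total: {noisy_total:.1f}")
```

Smaller values of epsilon add more noise and give stronger privacy; the noise averages out across many sites, so aggregate statistics remain useful.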
Google's first imaging project
Google's first AI initiative in medical imaging targeted diabetic retinopathy, which is the fastest-growing cause of blindness around the world. In India, there is a shortage of 127,000 eye doctors, and 45% of patients experience vision loss before diagnosis, Moore said.
Diabetic retinopathy is diagnosed on retinal fundus images, and Google adapted a deep convolutional neural network to read these fundus images. In testing, the algorithm yielded results that were on par with human ophthalmologists.
"We think that as we move this forward operation-wise -- and we are actively doing that with a number of partners and vendors to do this in the developing world -- we can have a significant impact in screening for disease in this area, and to really provide access to care," he said.
The second imaging AI project for Google involved digital pathology; in particular, detecting breast cancer metastases on digitized pathology slides. In initial research published earlier this year, the performance of the company's deep-learning model was equivalent to or better than that of pathologists in a number of areas.
Importantly, this model learned to deliver good and stable performance with only 270 samples for training, according to Moore. Typically, thousands or even hundreds of thousands of images are needed to train such algorithms.
"With the new processing units operating with TensorFlow, we've been getting some incredible examples of these models [achieving good and stable performance] with much smaller amounts of data," he said. "We don't know if it's generalizable to other datasets or modalities, but we've had some encouraging use cases with even small numbers [of training samples]... with this new architecture and these new tools that are coming forward. [This] could open up some interesting possibilities for us as a field as we struggle with very, very large datasets to aggregate."
A Google spokesperson also noted that more data might allow the model to perform even better.
There are substantial opportunities for joining pathology, genotype, radiology, laboratory, and electronic health record data for machine-learning purposes, Moore said.
"The combination of these tools and parameters feeding in [to an algorithm] will ultimately produce machine learning-enabled outcomes," he said. "I think it's an incredible opportunity for us all, and we all have to pay attention not to be siloed in our particular specialty or area."
From a global perspective, medical imaging is a story of scarcity and error, Moore said. Two-thirds of the world's population lack meaningful access to medical imaging, according to the World Health Organization. Furthermore, approximately 4 billion individuals live in radiology-scarce zones -- meaning images either cannot be acquired or no physician is available to interpret them, according to research published in a 2014 paper in the Journal of the American College of Radiology.
In addition, at least 43 million people worldwide are affected by medical errors each year, and the greatest burden is felt in low-income countries, Moore said.
"From a global perspective, it's my hope and my desire that [machine learning] will transform issues of access and quality of medical imaging," Moore said. "But it will have to be done in a very careful manner, with detailed pathology and methodology, and in our traditional processes of publishing peer-reviewed projects and expert review."
Moore said that a cloud platform will be necessary to implement this worldwide. Underserved areas could connect to the cloud platform in a number of ways, perhaps even using high-altitude balloons placed in the stratosphere to provide an aerial worldwide wireless network -- as envisioned in Google's Project Loon, he said.
"My company likes to hold us to a standard of having a billion users for any new product that we design," he said. "We have the opportunity here as a community to affect 4 billion."
Copyright © 2017 AuntMinnie.com