"CAD was originally thought to decrease interpretation time, and also potentially improve polyp detection sensitivity," said Dr. Christopher Beaulieu, who is a professor of radiology and California's Stanford University. "More recently it's been realized that there's a lot of variability between observers, and evening that out with CAD could be helpful."
Beaulieu spoke about the factors that are driving the need for CAD in VC screening at Stanford's 2006 International Symposium on Multidetector-Row CT in San Francisco.
With the technological improvements that have come in recent years, reading times for CT colonography (or virtual colonoscopy) have decreased across the board, he said, but interpretation times in a recent study were still running 11-16 minutes per case, even with experienced practitioners. That may be too long.
"It's very difficult in busy practice to spend even 10 minutes on a patient's exam," Beaulieu said. And in the context of finding lesions in a screening population, the problem becomes clear -- routine exams need to be fast and accurate for VC screening to make sense in a low-prevalence screening population.
Screening produces many negative exams. Statistically, only one to two patients out of 20 -- or about 5%-10% of average-risk screening patients -- might have a 10-mm polyp for which they would need a referral to colonoscopy, Beaulieu said. Fifteen minutes' interpretation time per case, combined with a statistically normal prevalence of clinically significant polyps, means 2.5 hours' reading time or 13,000 images to find a single 10-mm polyp that would need to be removed at conventional colonoscopy.
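Beaulieu's back-of-envelope numbers can be checked directly. The sketch below assumes round figures implied by the talk (a 10% prevalence, 15 minutes per case, and an image count per exam inferred from the 13,000-image total); none of these constants come from a published protocol.

```python
# Screening-yield arithmetic from the talk (assumed round numbers).
cases_per_positive = 10    # ~10% prevalence -> ~1 significant polyp per 10 exams
minutes_per_case = 15      # upper end of the reported 11-16 min reading times
images_per_case = 1300     # assumed per-exam image count (inferred, not quoted)

reading_hours = cases_per_positive * minutes_per_case / 60
images_reviewed = cases_per_positive * images_per_case

print(reading_hours)    # hours of reading per 10-mm polyp found
print(images_reviewed)  # images reviewed per 10-mm polyp found
```

Running the arithmetic reproduces the figures in the talk: 2.5 hours and 13,000 images per clinically significant polyp.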
The preponderance of negative results might be called the "dissatisfaction of search," Beaulieu said. "It's certainly not what radiologists are used to doing all day long when we have a lot of abnormal cases. Radiologists' skills are probably better for problem solving than for pushing through miles and miles of normal colon."
CAD gets better
What can CAD offer? Quite a lot if used correctly, Beaulieu said. Large multicenter CAD studies aren't available yet, but studies in smaller datasets have shown fairly high sensitivities ranging from the 70s all the way to 100%. The naturally high contrast between air and the colon wall allows any number of methods to be used to map the surface of the colon and detect polypoid lesions that protrude from it.
For example, Beaulieu and his colleagues at Stanford developed the surface-normal overlap CAD scheme, which achieved 92% sensitivity for polyps 10 mm and larger in a 2004 study, based on the observation that in spherical structures such as the colonic lumen, large numbers of surface normals intersect near the center of the structure.
Casting normals and "seeing where they intersect" produces a reliable pattern recognition process for detecting polypoid candidates and rejecting normal haustral folds, Beaulieu said of Stanford's SNO method (IIEE Transactions in Medical Imaging, June 2004, Vol. 23:6, pp. 661-675).
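The geometric observation behind SNO can be illustrated with a toy NumPy sketch: inward normals cast from a spherical (polyp-like) surface all converge at one point, while normals from a cylindrical (fold-like) surface meet the axis but spread along it, so no single location accumulates overlap. This is only the underlying geometry, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def inward_normal_hits(points, normals, depth):
    """Cast each inward surface normal a fixed depth; return the endpoints."""
    return points + depth * normals

# Toy spherical "polyp" of radius 5 mm centered at the origin.
theta = rng.uniform(0, np.pi, 500)
phi = rng.uniform(0, 2 * np.pi, 500)
sphere_pts = 5.0 * np.stack([np.sin(theta) * np.cos(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(theta)], axis=1)
sphere_normals = -sphere_pts / np.linalg.norm(sphere_pts, axis=1, keepdims=True)
hits = inward_normal_hits(sphere_pts, sphere_normals, depth=5.0)
# Every normal lands at the center -> a strong surface-normal-overlap peak.
print(np.abs(hits).max())

# Toy haustral-fold-like cylinder: normals reach the axis but spread along it,
# so overlap at any single point stays low and the fold is rejected.
z = rng.uniform(-10, 10, 500)
ang = rng.uniform(0, 2 * np.pi, 500)
cyl_pts = np.stack([5.0 * np.cos(ang), 5.0 * np.sin(ang), z], axis=1)
cyl_normals = np.stack([-np.cos(ang), -np.sin(ang), np.zeros_like(ang)], axis=1)
cyl_hits = inward_normal_hits(cyl_pts, cyl_normals, depth=5.0)
print(cyl_hits[:, 2].std())  # endpoints scattered along the fold axis
```

In the full method, such intersection "votes" are accumulated in a 3D grid, and peaks flag polyp candidates while folds and flat wall score low.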
Later tests in small datasets have produced sensitivities of up to 100% for clinically significant lesions, so "from a technical standpoint it's certainly promising," he said.
National Institutes of Health (Summers et al) and University of Chicago (Yoshida et al) researchers have also used surface curvature-based methods, Beaulieu noted. Other groups have approached automated detection using the convexity and sphericity of colonic structures, volumetric and surface shape and texture, tissue intensity, and other features. Measuring wall thickness was an early proposal that "didn't turn out to be nearly sensitive enough," Beaulieu said.
The main goal is to identify combinations of features that work together to reliably flag true polyps while rejecting false positives. False-positive detections and flat polyps present enduring challenges to CAD developers.
Moving into practice
"Several groups have started to implement CAD, not in a true clinical setting like has been done for breast or lung imaging, but in a preclinical way," Beaulieu said. And there is already some evidence that CAD can help radiologists find more polyps.
A collaborative study paired VC data from the Mayo Clinic with the National Institutes of Health's CAD scheme, which uses support-vector machines to determine feature classifiers, and a smoothed leave-one-out (SLOO) cross-validation method for obtaining error estimates.
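The classification step can be sketched with scikit-learn: train a support-vector machine on per-candidate shape features and estimate error with leave-one-out cross-validation. The features and labels below are synthetic stand-ins, and plain leave-one-out is used in place of the NIH group's smoothed (SLOO) variant, which is not shown.

```python
# Sketch: SVM candidate classifier with leave-one-out error estimation.
# Synthetic two-feature data; not the NIH feature set or SLOO method.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)
n = 60
# Hypothetical features per candidate (e.g., sphericity, mean curvature),
# drawn so true polyps score higher than folds on both.
polyps = rng.normal([0.8, 0.6], 0.1, size=(n // 2, 2))
folds = rng.normal([0.4, 0.2], 0.1, size=(n // 2, 2))
X = np.vstack([polyps, folds])
y = np.array([1] * (n // 2) + [0] * (n // 2))

clf = SVC(kernel="rbf", C=1.0)
# Each fold holds out one candidate, trains on the rest, and tests on it.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(scores.mean())  # per-candidate accuracy estimate
```

Leave-one-out is attractive for small polyp datasets like these because every candidate serves as a test case without shrinking the training set much.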
In the study, two readers examined 40 VC cases and found 18 polyps, Beaulieu said. "It turned out CAD found four polyps that were missed by two radiologists, and seven polyps missed by one or both radiologists," he said; the 67% sensitivity without CAD became a potential sensitivity of 89% with it. "I say 'potential' because they didn't actually have the radiologists look at these CAD cases and say what they would have done with them."
In another small study, Beaulieu and his group found that CAD improved sensitivity from 61% to 82% for polyps 5 mm and larger. CAD didn't decrease overall reading time but did decrease the time needed to find the first polyp, he said. The group also found that CAD equalized the performance of the less-skilled readers with those who had more experience, he said, "so there was a definite decrease in interobserver variability."
A recent paper from the U.K. compared the commercially available ColonCAR software (version 1.2, Medicsight, London, U.K.) against three expert human readers. The CAD system identified 26 (81%) of 32 polyps, compared with an average sensitivity of 70% for the expert reviewers. All polyps missed by expert 1 (n = 4) and expert 2 (n = 3), and 12 (86%) of 14 polyps missed by expert 3, were detected by CAD. CAD also generated 13 false positives per case, but 91% were "easily dismissed by the readers," Beaulieu noted (American Journal of Roentgenology, March 2006, Vol. 186:3, pp. 696-702).
Complicating the development of VC CAD is the use of oral contrast for fluid and fecal tagging, which is becoming the standard of care in VC. "The algorithms have to be modified or the oral contrast somehow has to be subtracted for your algorithms to work," Beaulieu said. Work is continuing on both fronts.
As for getting CAD into VC practice, Medicsight's ColonCAR 1.2 received 510(k) clearance in 2004, and other systems are expected to follow. Regulatory issues appear to be moving along without major roadblocks at this time, Beaulieu said.
False positives remain problematic, but the ease of interaction with the system is likely to mean more than absolute false-positive numbers, he said. The maximum acceptable number of false positives might turn out to be 10 or perhaps even 20 per case, "but it's really going to depend on what viewing tools you can use to make a decision as to whether those locations are true polyps or false positives," he said. "The question is whether radiologists will trust CAD, and that will take some time."
But no matter what computers might be trained to do, "it certainly won't preclude us from needing to view the data directly, and it's certainly not going to take our jobs away," Beaulieu said.
CAD has shown that it has potential to improve sensitivity, but multicenter evaluations will be needed to prove it on a larger scale; efforts are underway by the NIH and others to organize larger studies, Beaulieu said. The data also suggest that CAD will probably decrease interobserver variability, he added, though it may not shorten exam times by much.
"When you look at the way radiologists read datasets, at least in California ... I think there's no question that computer algorithms will be part of the future for CTC," Beaulieu said.
By Eric Barnes
AuntMinnie.com staff writer
September 13, 2006
VC visualization tool combines advanced features, August 17, 2006
2D primary reading plus CAD has an edge in VC study, July 11, 2006
VC CAD improves results for readers at all levels, April 7, 2006
New VC reading schemes could solve old problems, October 13, 2005
CAD aids polyp detection in 64-slice study, May 12, 2006
Copyright © 2006 AuntMinnie.com