Part II: Medical image processing has room to grow

By Eric Barnes, AuntMinnie.com staff writer

September 4, 2006 -- The second installment of our series on 3D and virtual reality, based on talks at the 2006 Computer Assisted Radiology and Surgery (CARS) meeting in Osaka, Japan, discusses fundamental concepts in the development of volume-image modeling.

The emergence of ultrafast computing, together with rapid advances in the software needed to render, reconstruct, and segment image data, has enabled the development of sophisticated 3D patient models that offer an accurate graphic representation of numerical image data from any modality, or a combination of imaging modalities.

When combined with physiological processes, any number of which can be mapped to the anatomy over time, the result can be an intuitive marriage of function and anatomy that is very useful for evaluating biological processes in specific anatomic regions.

"The images become measurements," said Richard Robb, Ph.D., professor of biophysics and computer science at the Mayo Clinic College of Medicine in Rochester, MN. "They may be subjective measurements, mental measurements, but more and more they're becoming actual quantitative measurements about what's going on with the patients. Those images and measurements can then be used to guide interventional procedures and to intervene effectively."

Robb made his remarks in July at a seminar on visualization and virtual reality in medicine, part of the 2006 Computer Assisted Radiology and Surgery (CARS) meeting in Osaka, Japan.

With the aid of mapping algorithms, the so-called structure-feature models have evolved to do more work in less space than the original image data, said Robb, who is also the director of the Mayo Clinic's Biomedical Imaging Resource.

The models use algorithms to convert data voxels into polygons, yielding surface models or, less frequently, solid models that are patient-specific and have a smaller data footprint than the original voxels, he said. Because models have surfaces, they are ideal for applying other kinds of information that can be read quantitatively to aid in medical decision-making, Robb said.

Data modeling uses several kinds of algorithms to convert image data into polygons. Images courtesy of Richard Robb, Ph.D.

"If carefully done, they can be extremely accurate and useful tools for looking at the interior or the exterior, and they are ideal for training and education," he said. "They can tell us something about anatomy, pathology, and physiology."

Algorithms bend image data

The literature documents many different kinds of algorithms that have been developed to perform these functions. The tiling algorithms commonly used to create surface models come in several flavors, including marching cubes, recursive subdivision, and active surface algorithms. And while the mathematical formulas underlying them are complex, the goal is to create an accurate representation of the surface using polygons.
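The idea behind these tiling algorithms can be illustrated with a 2D analogue of marching cubes, often called marching squares: each small cell of the image grid is inspected, and where the iso-surface crosses the cell, a geometric primitive is emitted (line segments in 2D, triangles in 3D). The sketch below is a deliberately simplified, non-optimized illustration of the concept, not any production algorithm:

```python
import math

def marching_squares(field, level):
    """Minimal 2D analogue of marching cubes: walk each 2x2 cell of a
    scalar field and emit line segments approximating the iso-contour.
    (Real marching cubes does the same with 3D cells and triangles.)"""
    segments = []
    rows, cols = len(field), len(field[0])
    for i in range(rows - 1):
        for j in range(cols - 1):
            # Corner values of this cell: top-left, top-right, bottom-right, bottom-left.
            corners = [(i, j), (i, j + 1), (i + 1, j + 1), (i + 1, j)]
            values = [field[r][c] for (r, c) in corners]
            inside = [v >= level for v in values]
            if all(inside) or not any(inside):
                continue  # cell entirely inside or outside: no surface here

            def interp(p0, p1, v0, v1):
                # Linearly interpolate the crossing point along a cell edge.
                t = (level - v0) / (v1 - v0)
                return (p0[0] + t * (p1[0] - p0[0]), p0[1] + t * (p1[1] - p0[1]))

            crossings = []
            for k in range(4):
                k2 = (k + 1) % 4
                if inside[k] != inside[k2]:
                    crossings.append(interp(corners[k], corners[k2], values[k], values[k2]))
            # Pair up crossings into segments (ignoring the rare ambiguous cases).
            for a, b in zip(crossings[::2], crossings[1::2]):
                segments.append((a, b))
    return segments

# A radial "blob" field: the contour at level 1.0 approximates a circle of radius 5.
field = [[5.0 / max(1e-6, math.hypot(x - 8, y - 8)) for x in range(16)]
         for y in range(16)]
segs = marching_squares(field, 1.0)
```

Running this on the synthetic field yields a ring of short segments tracing the circle of radius 5, exactly the way a 3D tiling algorithm traces an organ boundary with triangles.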

Active surface modeling can be used to grow surfaces. "You start with a polygon inside the image voxels and basically grow it out of the edge of the volume and that becomes your surface," Robb said. Alternatively, a shrinking surface process can be applied "where you have your image volume inside a number of points which you algorithmically shrink onto the surface, and when all these points hit the surface you are left with a boundary."
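The shrinking-surface process Robb describes can be sketched in a few lines: points are placed on a boundary outside the object and moved inward until the image value first exceeds a threshold. This is a toy 2D illustration under stated assumptions (a synthetic disk-shaped "image"), not an actual active-surface implementation:

```python
import math

def shrink_wrap(image_value, center, start_radius, n_points, threshold, step=0.05):
    """Shrinking-surface sketch: points start on a circle enclosing the
    object and step inward until they hit the boundary (the first
    location where the image value reaches the threshold)."""
    cx, cy = center
    boundary = []
    for k in range(n_points):
        angle = 2 * math.pi * k / n_points
        r = start_radius
        while r > 0 and image_value(cx + r * math.cos(angle),
                                    cy + r * math.sin(angle)) < threshold:
            r -= step
        boundary.append((cx + r * math.cos(angle), cy + r * math.sin(angle)))
    return boundary

# Hypothetical "image": a disk of radius 3 with value 1 inside, 0 outside.
disk = lambda x, y: 1.0 if math.hypot(x - 8, y - 8) <= 3.0 else 0.0
pts = shrink_wrap(disk, center=(8, 8), start_radius=6.0, n_points=36, threshold=0.5)
```

All 36 points come to rest on the disk boundary, leaving exactly the kind of closed boundary the quotation describes; the growing-surface variant simply reverses the direction of travel.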

Above and below, in a surface model, image data are converted to polygonal surface data. Tiling algorithms use polygons to create variations in the surface anatomy. Images courtesy of Richard Robb, Ph.D.

The adaptive deformation model also uses polygons created from voxels, which have a smaller data footprint and are therefore easier to manipulate than voxels, Robb said.

Adaptive deformation shapes the model in a binary fashion to correspond to the shape of an atlas, or to anatomic data from a single patient. "You can use it to get the best model you can get with your polygonal budget," he said.

The polygonal budget dictates the model's precision. For example, if the display can render 50,000 polygons per second, that is the maximum number of polygons available to create the surface. The goal, therefore, is to spread the available polygons intelligently across the surface to get the best possible representation, a job performed by the adaptive deformation and adaptive tiling algorithms, Robb said.
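The budget-spreading idea can be reduced to a simple allocation rule: give each surface patch a share of the polygon budget proportional to how curved it is. The sketch below is a toy version of that allocation step only (the patch curvature values are invented for illustration), not the full adaptive tiling algorithm:

```python
def allocate_budget(curvatures, budget):
    """Spread a fixed polygon budget across surface patches in
    proportion to their curvature, so highly curved regions get more
    triangles and flat regions get fewer."""
    total = sum(curvatures)
    alloc = [max(1, round(budget * c / total)) for c in curvatures]
    # Rounding can over- or undershoot the budget; adjust the largest patch.
    alloc[alloc.index(max(alloc))] += budget - sum(alloc)
    return alloc

# Hypothetical patches: two nearly flat regions and two sharply curved ones.
alloc = allocate_budget([0.1, 0.1, 2.0, 3.0], budget=50000)
```

The two curved patches absorb most of the 50,000-polygon budget while the flat patches get only a token share, which is the qualitative behavior Robb describes.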

In the brain model demonstrated above, polygons are distributed via Delaunay triangulation, which represents the brain surface by fitting triangles onto it -- more triangles where the curvature is higher (i.e., the radius of curvature is smaller) and fewer triangles in flat areas.

"We intelligently distribute them on the surface to get an image without ever exceeding the polygonal budget," Robb said.

Structure-feature fusion

The goal of structure-feature fusion modeling is to add function to anatomy through time, Robb said. The result is more than a 3D model; it can be called a 4D or even 5D model, depending on the number of functions added to the anatomic model. Many different kinds of functions can be associated with anatomy through time, including kinetics, elasticity, electrical activity, absorption and diffusion, local and regional flow, pressure, temperature, viscosity, strain, and more.

"Any kind of any kind of functional parameters that affect tissues you can think of as a feature you might want to associate with anatomy through time, and add that to a model," Robb said.

Structure-feature fusion models add function to anatomy through time. Many different kinds of functions can be associated in this manner, including kinetics, elasticity, electrical mapping, absorption and diffusion, flow, pressure, temperature, viscosity, and strain. Image courtesy of Naoki Suzuki, Jikei University, Tokyo.

Texture mapping makes models more realistic; the technique is particularly useful in educational applications, he said. The process involves mapping a pattern onto a 3D surface that has been included in a model, generally by taking 1D or 2D functions and applying them to the polygonal surface of the model.

"This often increases the perceived resolution -- not the actual resolution -- but by adding texture, we often get a better impression of the depth, size, and type of structure in the model. It increases the information content of the surface," Robb said. In a heart model, for example, a myocardial infarction can be studied via changes in regional velocity and volume, "which can be studied quantitatively and parametrically," he said.

In another example, the Visible Human Project, anatomic data are rendered as digital sections of anatomic regions based on biophysical properties of the tissues being modeled. Colors from each section of the anatomy are mapped onto it, and the anatomic shapes are created by deformation algorithms with applied volume texturing. Such models are especially useful for simulating surgical procedures, and can be used as surgical planning tools or for education, Robb said.

Parametric displays use color and texture to denote functional measures. Spatial and temporal data are mapped to the geometric surface. In a parametric map of the myocardium, for example, color is used to emphasize individual parts of the myocardium, and indicate how far each region has moved through the cardiac cycle.

Above, anatomic data are rendered as digital sections of the torso based on biophysical properties of the tissues being modeled. Colors from each section of the anatomy are mapped onto it; the anatomic shapes are created by deformation algorithms with applied volume texturing. These models are especially useful for surgical planning and rehearsal, and as educational tools. Images courtesy of Richard Robb, Ph.D.

In virtual colonoscopy data, a map of the colonic lumen becomes four-dimensional by mapping an assessment of microvascular density for individual colonic polyps. If the microvascular density is high, the polyp could have metastatic potential, Robb said.

Performing such analyses with simple CT value thresholding is "not necessarily reproducible, and it certainly has some specificity errors, but you can begin getting an estimation of microvascular density," he said. "It's just 3D spatial information, but we've added a feature that tells us something more about the tissue than just the geometry alone."
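The simple CT-value thresholding Robb mentions reduces to counting the fraction of voxels in a polyp region of interest whose attenuation exceeds a cutoff. The sketch below shows that calculation with invented, illustrative Hounsfield values; as the quotation warns, this approach is crude and not necessarily reproducible:

```python
def microvascular_density_estimate(roi_values, hu_threshold):
    """Crude microvascular density proxy via CT-value thresholding:
    the fraction of voxels in a polyp ROI whose attenuation meets or
    exceeds the threshold (e.g., contrast-enhanced vessels)."""
    hits = sum(1 for v in roi_values if v >= hu_threshold)
    return hits / len(roi_values)

# Hypothetical polyp ROI: mostly soft tissue, a few enhanced voxels.
roi = [40, 45, 42, 120, 130, 44, 41, 125, 43, 46]
density = microvascular_density_estimate(roi, hu_threshold=100)
```

Here 3 of 10 voxels exceed the threshold, giving an estimated density of 0.3 -- the extra "feature" layered on top of the purely geometric endoluminal view.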

In a beating heart, a patient-specific model could be said to have five dimensions: three spatial dimensions, a fourth that maps the heartbeat through the cardiac cycle, and a fifth in the form of an electrophysiology activation map. The timing of the activation, mapped in different colors onto a left ventricle for example, can illuminate the contractile sequence and reveal arrhythmias and abnormal heartbeats; this information can then be used to plan an ablative procedure.
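The color mapping of activation timing comes down to a transfer function from time to color, so that the earliest-activating region (a candidate ablation target, as in the figure below) renders red. The activation times here are invented for illustration:

```python
def activation_color(t, t_min, t_max):
    """Map an electrophysiology activation time onto a red-to-blue
    ramp: the earliest site is pure red, the latest pure blue."""
    s = (t - t_min) / (t_max - t_min)   # 0 = earliest, 1 = latest
    return (1.0 - s, 0.0, s)            # (R, G, B)

# Hypothetical activation times (ms) at four ventricular sites.
times = [12.0, 40.0, 55.0, 90.0]
colors = [activation_color(t, min(times), max(times)) for t in times]
```

Painting each surface patch with its color then produces the kind of activation map described in the text, with the red (earliest) site standing out as the ablation target.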

Top, 3D assessment of microvascular density for individual colonic polyps can help assess the risk of malignancy in 3D endoluminal view of a colonic polyp. Bottom, parametric mapping adds the electrophysiology cycle as a color map superimposed on the left ventricular chamber of the heart. Early activation site in red can be used for targeting ablation therapy. Images courtesy of Richard Robb, Ph.D.

Color can also be used to map the movement of different regions of the myocardium throughout the cardiac cycle.

Parametric color mapping shows regional maximum movement of the myocardium throughout the cardiac cycle. Movement is greater at the base and apex. Images courtesy of Richard Robb, Ph.D.
Motion-derived parameters are used to separate infarcted from noninfarcted myocardium. Images courtesy of Richard Robb, Ph.D.

Patient-specific modeling requires several steps, some better developed than others, Robb said, summing up the process. "We scan the desired volume of the body, we segment specific subvolumes from that, we transform those into surface shells (or solid models), we tile with adaptive deformable polygons, map desired structures or functions onto those polygons, and then render that map surface for display and use," he said.
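The steps Robb enumerates can be sketched as a schematic pipeline, with each stage reduced to a stub that simply passes its result to the next. Every function name and argument below is illustrative, not from an actual system:

```python
# Schematic of the patient-specific modeling pipeline Robb summarizes:
# scan -> segment -> surface shell -> adaptive tiling -> feature mapping -> render.
def scan(body_region):            return {"volume": body_region}
def segment(volume, organ):       return {"subvolume": organ, "source": volume}
def to_surface(subvolume):        return {"shell": subvolume}
def tile(shell, budget):          return {"polygons": budget, "surface": shell}
def map_features(surface, feats): return {"surface": surface, "features": feats}
def render(model):                return f"display({sorted(model['features'])})"

def patient_specific_model(body_region, organ, budget, features):
    vol = scan(body_region)
    sub = segment(vol, organ)
    shell = to_surface(sub)
    surf = tile(shell, budget)
    model = map_features(surf, features)
    return render(model)

out = patient_specific_model("thorax", "left ventricle", 50000,
                             ["activation time", "wall motion"])
```

Each stub stands in for a substantial body of work -- segmentation and feature mapping in particular are, as Robb notes, less mature than the rendering stages -- but the data flow matches the summary in the text.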

Virtual reality

Virtual reality modeling goes beyond the structure-feature fusion model to create highly interactive computer-generated scenes so realistic and responsive that they evoke the real experience, Robb said.

"The development of virtual anatomy over time started with simple geometry like organs," he said.  "We added elasticity and deformable models; we began adding physiology like muscle dynamics. We are into the microanatomy world, too, where we're looking at (basal) ganglia cells and organelles.... Where we are going but haven't achieved yet is the real properties of biochemistry, where we can understand for example the immune system and model it, or the endocrine system. Getting realistic patient-specific data is the problem."

Virtual reality modeling will be covered in Part III of this series.


Part I: Medical image processing has room to grow, July 4, 2006

3D lab efficiency may depend on who's in control, June 22, 2006

Technologists take advantage of 3D opportunity, April 25, 2006

Centralized 3D labs hold benefits for community practices, July 18, 2005

Integrated 2D/3D offers workflow, clinical gains, June 17, 2005

Copyright © 2006 AuntMinnie.com
