Part III: Medical image processing has room to grow

The third and final installment of our series on 3D and virtual reality, based on presentations at the 2006 Computer Assisted Radiology and Surgery (CARS) meeting in Osaka, Japan, discusses the development of virtual reality imaging applications.

Virtual reality in medicine has light years to go before it will approach the kind of seamless structural and functional integration that the doctors on "Star Trek" wielded so handily in their deep-space medical suites on 1960s television.

But things are moving fast if not at warp speed, and progress toward developers' most cherished goals is being made right here on earth. Researchers at Japan's Institute for High Dimensional Medical Imaging at Jikei University School of Medicine in Tokyo, for example, are among those working to create the virtual reality future of medical imaging.

A virtual factory for advanced application development, the institute has invested in a broad array of medical virtual reality projects encompassing virtual surgery, 4D imaging applications, telemedicine, and 3D databases.

Tokyo's Institute for High Dimensional Medical Imaging at Jikei University School of Medicine encompasses a broad range of medical virtual reality projects. All images courtesy of Naoki Suzuki, Ph.D.

At a seminar on virtual reality (VR) in medicine held concurrently with the 2006 Computer Assisted Radiology and Surgery (CARS) meeting in Osaka, Dr. Naoki Suzuki described several projects at the institute aimed at shaping the ways in which diagnostic imaging will be used in patient care.

"Our vision is to utilize VR techniques to improve medical simulation and navigation," Suzuki said.

As many as 35 researchers at the main Tokyo campus are engaged in VR, telesurgery, robotics, and 4D imaging projects -- the latter hampered, Suzuki said, by a lack of advanced display technologies.

Research into 4D -- defined as 3D imaging plus functional parameters -- remains in the early stages "because we don't have a way to show 4D images in our features, (which) you now see on 2D screens," he said. "We need a new display to show real 4D information."

4D imaging adds functional parameters to the 3D imaging space.

Research overlap is inevitable, Suzuki said. Broadly defined, VR has important applications in medical education, surgery, and 4D imaging, encompassing the following:

  • Diagnosis using 3D and 4D datasets
  • Data fusion in the operating room
  • Virtual surgery
  • Treatment simulation
  • Analysis of body locomotion in a 4D space
  • Medical education in virtual space

A key challenge they all share is the trade-off between real-time performance and image quality in surgical simulation and navigation, Suzuki said. The faster a VR system operates, the lower the image quality it can sustain.

"We always have to think about the balance between image reconstruction (quality) and its speed," he said. "At the center, we can do interactive manipulation of 4D images, for example picking up a moving heart or (manipulation) inside organs. The ultimate goal is to depict all human activities in the 4D space. We would like to visualize the whole-body movement of skeletal and muscular systems."

Central to the group's work is image-guided surgery, using data fusion as a part of surgical navigation in which the patient's internal 3D organ structure is superimposed onto the surgeons' field-of-view.
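The geometric core of such an overlay is projecting registered 3D model points into the surgeon's camera view. The sketch below (hypothetical Python/NumPy with made-up intrinsics, not the institute's software) shows the basic pinhole projection:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D organ-model points into the 2D camera image (pinhole model)."""
    cam = R @ points_3d.T + t.reshape(3, 1)  # world -> camera coordinates
    uvw = K @ cam                            # camera -> image plane
    return (uvw[:2] / uvw[2]).T              # perspective divide -> pixels

# Toy example: a registered organ point one meter in front of the camera.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # intrinsics (assumed)
R, t = np.eye(3), np.zeros(3)                                 # registration pose
print(project_points(np.array([[0.05, 0.0, 1.0]]), K, R, t))  # -> [[360. 240.]]
```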

Virtual surgery systems enable the user to design surgical procedures on a 3D model reconstructed from the patient's own imaging data, whether CT, MRI, or another modality. By means of force-feedback devices, the system must then transmit realistic tactile sensations back to the user during organ manipulation, creating as real an experience as possible for a surgeon or medical student.

To make surgical simulations more realistic, "we are applying haptic systems to virtual surgery," Suzuki said.
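The talk didn't detail the force model, but a common starting point for haptic feedback is a penalty method: when the tool tip penetrates a surface, a spring force proportional to the penetration depth pushes it back out. A minimal sketch, with a spherical organ proxy standing in for a real mesh:

```python
import numpy as np

STIFFNESS = 500.0  # N/m, illustrative organ-surface stiffness

def haptic_force(tool_pos, organ_center, organ_radius):
    """Penalty method: push the haptic stylus out of a spherical organ proxy.

    Force is proportional to penetration depth, directed along the surface
    normal -- the basic loop behind spring-style force feedback.
    """
    offset = tool_pos - organ_center
    dist = np.linalg.norm(offset)
    penetration = organ_radius - dist
    if penetration <= 0:       # tool is outside the organ: no contact
        return np.zeros(3)
    normal = offset / dist     # outward surface normal at contact
    return STIFFNESS * penetration * normal

# Tool tip 1 cm inside a 5 cm organ sphere -> 5 N restoring force along +x.
print(haptic_force(np.array([0.04, 0.0, 0.0]), np.zeros(3), 0.05))
```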

Virtual surgery systems incorporate the patient's own image data into interactive systems, with subtle force-feedback devices that transmit realistic tactile sensations back to the user during organ manipulation.

Virtual surgery suite, main control room.

One project used an MRI dataset from a healthy volunteer to make a standard reconstruction model for virtual surgery, Suzuki said. Grid marks enable quantitative evaluation for cases in which measurement is critical. "Then we can trust that the internal structure is adequate; we can do surgical simulation or inject some material," Suzuki said.

Above and below, reconstructed liver and stomach from an MRI of a normal volunteer, with texture-mapping marks at right available for quantitative assessment of image data.

The institute is also building telesurgery simulation systems for use with the robotic da Vinci Surgical System (Intuitive Surgical, Sunnyvale, CA), all in cooperation with Dr. Makoto Hashizume and colleagues at the Center for Integration of Advanced Medicine and Innovative Technology at Kyushu University Hospital in Fukuoka, Japan.

The project required the entire da Vinci surgical system and multiple forceps, including long-tip forceps, Cadiere forceps, round-tip scissors, and a cautery spatula, to be reconstructed as 3D VR data and included in the surgeon's view.

One VR application used da Vinci to perform a cholecystectomy on a deformable liver and gallbladder reconstructed from the patient's thin-section CT data. Manipulation of the forceps models was controlled by Phantom haptic devices (SensAble Technologies, Woburn, MA), Suzuki noted.

A second project recreated a colectomy procedure in a patient-specific deformable colon model created from CT data. To represent deformations corresponding to the shape and size of the colon, the mesentery model was reconstructed with 0.8-mm-radius spheres, the duodenum with 1.0-mm spheres, and the vessels with 1.0-mm spheres.
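One plausible reading of this sphere-based construction (the article doesn't give the institute's solver) is a mass-spring system over the sphere centers. A toy sketch that relaxes a three-sphere chain after one sphere is displaced:

```python
import numpy as np

def step_sphere_chain(pos, vel, rest_len, k=50.0, damping=2.0, dt=1e-3):
    """One explicit-Euler step of a chain of spheres joined by springs.

    pos, vel: (n, 3) arrays of sphere centers and velocities (unit masses).
    rest_len: resting distance between neighboring sphere centers.
    """
    force = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest_len) * d / length  # Hooke's law along the link
        force[i] += f
        force[i + 1] -= f
    vel = (vel + dt * force) * (1.0 - damping * dt)  # damped integration
    return pos + dt * vel, vel

# Three 1.0-mm spheres with the middle one displaced; the springs restore
# the neighbor spacing over the simulated interval.
pos = np.array([[0.0, 0, 0], [1.2e-3, 0.5e-3, 0], [2e-3, 0, 0]])
vel = np.zeros_like(pos)
for _ in range(1000):
    pos, vel = step_sphere_chain(pos, vel, rest_len=1e-3)
print(pos.round(5))
```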

Telesurgery cholecystectomy simulation using the da Vinci robot featured a deformable liver and gallbladder reconstructed from thin-section CT data. Project conducted in cooperation with Dr. Makoto Hashizume, Kyushu University Hospital, Fukuoka, Japan.

Virtual surgery application simulates colectomy in a deformable colon model. Anisotropic deformations of a vessel model and mesentery were represented by analyzing the direction of vessel structures. Forceps manipulation was controlled by SensAble Technologies' Phantom haptic devices.

The most difficult virtual organ surgical model is the beating heart. "We can catch the moving heart and look down the apex ... when the artery is moving," Suzuki said. "Every image should be quantified...."

Kinematic and geometric modeling of the da Vinci manipulator requires that the shape and length of each link of the system be measured in detail and reconstructed for the 3D VR system. Haptic navigation ties each link of the surgical system to its virtual representation in a master-slave relationship. Feedback is presented both on monitors and as tactile force feedback delivered through the manipulator to the operator's hands.

"With inverse-kinematics computation, the corresponding motion of connected links is generated according to the 3D tip position of the forceps," Suzuki explained in his notes.

Simulation system for robotic surgery: the surgical robot's motion is simulated and rehearsed preoperatively, subject to the kinematic constraints at the trocar site and the inverse kinematics of the robot.

To develop the simulation system, the robot's motion was simulated and rehearsed preoperatively with the kinematic constraints at the trocar site, Suzuki said. Preoperative planning also kept the robotic arms away from the surgical assistant. Integration with a haptic interface allowed surgeons to push and drag the arms of the virtual surgical robot, with the motions applied consistently and quantifiably to the virtual representation.

Human locomotion

Another project sought to analyze human body locomotion in four dimensions. "Ten years ago we tried to construct master models to move skeletal systems," Suzuki said. "We placed muscle systems in unique muscle bundles. We produced very accurate muscle models but we could not use them in real-time because data volume was so heavy.... So we could not visualize real human motions; now we are downsizing the (functionality) to simplify the muscles as muscle streams, and we can easily handle whole-body movement in real-time."

Toward this goal, Suzuki's group has studied the interaction of bone and muscle, and their mutual interference during locomotion. Precise representations of the human gait and muscle-bone-joint interactions could be useful in various applications ranging from diagnosis of muscle and bone pathology to physical therapy planning.

The simplified, stream-based muscle model not only has the shape and volume of real muscles, it is activated like real muscles, Suzuki said. MRI data from a healthy volunteer were used to create a "digital dummy," which incorporates 200 bones and 600 muscle models in a full skeletal model.
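The article doesn't specify how a muscle stream is represented, but a natural reduction is a polyline of via points from origin to insertion, with contraction estimated from the change in path length -- a hypothetical sketch:

```python
import numpy as np

def path_length(points):
    """Total length of a muscle 'stream' modeled as a polyline of via points."""
    return float(sum(np.linalg.norm(b - a) for a, b in zip(points, points[1:])))

def contraction_ratio(rest_path, current_path):
    """Fractional shortening of the stream, a crude stand-in for activation."""
    return 1.0 - path_length(current_path) / path_length(rest_path)

# Illustrative origin, via point, and insertion coordinates in meters.
rest = np.array([[0.0, 0, 0], [0.1, 0.02, 0], [0.3, 0.0, 0]])
flexed = np.array([[0.0, 0, 0], [0.09, 0.03, 0], [0.25, 0.0, 0]])
print(f"contraction: {contraction_ratio(rest, flexed):.1%}")  # ~15.0%
```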

"We tried to align the movement of humans with 600 muscles but it was impossible -- very detailed movements we cannot design," he said. "First we had to know the nature of activity of muscles in the whole body so we made a partial model."

Thus the project was limited to muscle streams and bones in the legs. A skeletal model was superimposed onto video footage of a moving leg. Surface markers were placed on the subjects, and open MRI was used to quantify the movement of each surface marker relative to the location of joints and bones as the subject moved. Finally, the internal image data were tied precisely to the surface movements.

Body surface movement was measured using a commercial motion-capture system (Vicon). The motion-capture camera was calibrated from the correspondence of marker positions in the 2D image and in 3D space. Motion-capture devices are quite familiar to Hollywood types making computer graphics, Suzuki said. But science and entertainment have different goals.
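Calibration from 2D-3D marker correspondences is classically done with the direct linear transform (DLT), which fits a 3x4 projection matrix from six or more markers; whether the institute used DLT isn't stated, but a minimal version looks like this:

```python
import numpy as np

def dlt_calibrate(pts3d, pts2d):
    """Direct linear transform: fit a 3x4 projection matrix P (up to scale)
    from six or more 3D-to-2D marker correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)  # smallest singular vector = flattened P

def project(P, pt3d):
    """Apply the projection matrix to one 3D point; return pixel coordinates."""
    uvw = P @ np.append(pt3d, 1.0)
    return uvw[:2] / uvw[2]

# Synthetic check: project six markers with a known camera, then recover it.
P_true = np.array([[800.0, 0, 320, 10], [0, 800, 240, 20], [0, 0, 1, 2]])
markers = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 2], [1, 1, 3.3],
                    [0.5, 0.2, 1.5], [0.3, 0.8, 2.5]], dtype=float)
image = np.array([project(P_true, m) for m in markers])
P_est = dlt_calibrate(markers, image)
print(np.allclose(project(P_est, [0.7, 0.4, 1.8]),
                  project(P_true, [0.7, 0.4, 1.8])))  # True
```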

Muscle and bone model based on imaging data tracks mutual interferences during locomotion.

"In medicine, we don't need motion that looks like humans," Suzuki said. "We have to know the real activity of humans. Each marker on surface is moving and the movement is different according to the person. When the person is skinny or fat, the movement is different."

To recreate the movement of bones, muscles, and joints in human locomotion, researchers superimposed MRI skeletal and muscle data onto a video of the subject's legs in motion.

A more complex image capture system was needed to provide detailed 4D observation, analysis, and modeling of whole-body locomotion. The researchers, having tried unsuccessfully to contract out the development of a large ring camera to accomplish this goal, built the camera themselves. The resulting Dynamic Spatial Video Camera (DSVC) multicamera device can readily slice time and motion at any moment from multiple angles.

The DSVC consists of a 4-meter-diameter ring for the subject to move in, equipped with 60 video cameras. The ring was hung under a motor-controlled crane that moves along a rail attached to the laboratory ceiling. For analysis, a PC clustering scheme steers image data from every two cameras into one PC. Time-sequential images were used to observe a rhythmic gymnastics session by rotating the viewpoint clockwise.
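The "rotating viewpoint" playback amounts to indexing a frames[camera][time] array along a diagonal: advance time while stepping to the next camera around the ring. A toy sketch (hypothetical data layout, not the institute's cluster software):

```python
# Frames are indexed frames[camera][t]; 60 cameras around the 4 m ring.
NUM_CAMERAS = 60

def bullet_time_sequence(frames, start_cam=0, cams_per_step=1):
    """Play time forward while stepping the viewpoint clockwise around
    the ring: frame t comes from camera (start_cam + t * cams_per_step)."""
    n_frames = len(frames[0])
    return [frames[(start_cam + t * cams_per_step) % NUM_CAMERAS][t]
            for t in range(n_frames)]

# Toy frames labeled (camera, time); the output walks around the ring.
frames = [[(cam, t) for t in range(5)] for cam in range(NUM_CAMERAS)]
print(bullet_time_sequence(frames))  # [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```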

Above, the Dynamic Spatial Video Camera (DSVC); below, time-sequential images of a rhythmic gymnastics session with the viewpoint rotating clockwise.

Once the time-sequential images are acquired, any moment of movement from any angle can be stopped and examined retrospectively, Suzuki said. It takes great effort and a long time to visualize whole-body movement on CT or MRI data in this manner, but the results have been very successful to date, providing useful educational materials.

"When we get all the images, we can reconstruct the surface," he said. "Then we adjust the models to see the inner structures ... in a very free way. We have to add more functions to estimate each (surface) wall. Then we can see whole-body skeletal movements in real-time someday."

An auspicious start

In his closing remarks, co-moderator Richard Robb, Ph.D., praised the breadth and depth of the projects under way at the Tokyo-based institute. It's important to acknowledge the team's progress, he said, while paying attention to the work that lies ahead before medical virtual reality becomes clinical reality.

"We still have some progress to make, and some problems," Robb said.

Chief among them are standards -- in some areas lacking, in others too numerous -- that will be essential if multiple developers and end users are to work together. There are shortcomings in computer languages, software, and interfaces, and single-application systems hamper communication, he said.

As for the debate between open-source and proprietary code, Robb said he thinks there's a place for both in systems development. The graphical user interface is often the most neglected component of a system; developers forget about the people who are going to use it, he said.

The same problem applies to hardware interfaces. Plug-and-play devices need to be developed in such a way that they can be plugged into multiple systems, Robb said. DICOM is off to a good start, he said, but its functionality will need significant expansion to accommodate interoperability and optimal communication.

Interactivity is always a challenge, Robb said, echoing Suzuki's earlier remarks about the trade-off between performance speed and image quality. Systems such as robotic surgery manipulators may not need to perform in real-time to be adequate to the task, Robb said, but their latency should not be so great as to interfere with the normal medical procedure.

Finally, even with lower-cost storage and fast data transfers, data overload is an enduring problem, Robb said. Gigabyte-sized imaging datasets are being fed into networks, storage, and virtual-reality systems, but "once we get to the point where we're doing (VR) interventions, we will have to precondition data so we're only using what is necessary for the task," he said.

By Eric Barnes
AuntMinnie.com staff writer
October 23, 2006

Related Reading

Part II: Medical image processing has room to grow, September 4, 2006

Part I: Medical image processing has room to grow, July 4, 2006

3D lab efficiency may depend on who's in control, June 22, 2006

Technologists take advantage of 3D opportunity, April 25, 2006

Centralized 3D labs hold benefits for community practices, July 18, 2005

Copyright © 2006 AuntMinnie.com
