The Practice of Ultrasound: Part 3 -- 4G ultrasound
February 10, 2012 -- AuntMinnie.com presents the third in a series of columns on the practice of ultrasound from Dr. Jason Birnholz, one of the pioneers of this modality.
    Fellow UltraSounder,

    The Eugene P. Pendergrass New Horizons lecture at the last RSNA conference proved once again to be dazzling and awe-inspiring for clinical imagers. Dr. Gregory Sorensen shared how the latest forms of data processing and image display for CT, MR, and PET of the brain make it possible to go beyond visualizing pathology into revealing the basic mechanisms of disease states.

    Dr. Jason Birnholz.
    This same theme was reiterated in Dr. Jeffrey Petrella's Annual Oration in Diagnostic Radiology, which focused on Alzheimer's disease and applying imaging research and practice to the ongoing search for therapy of this pervasive condition.

    I have often wondered, though, why the rich data mines of ultrasound have never been explored with an equal level of technical sophistication. All of the imaging modalities have access to the same processing and analysis tools, and ultrasound is an especially popular field for electrical engineering doctoral thesis work.

    Unfortunately, packets of low-power, broad-bandwidth sonic energy undergo very complex scattering events, which change in unpredictable ways with small variations in pulse generation and with equally minuscule changes in tissue histology, target geometry, and body location. This has always made technical work in ultrasound more of a semi-intuitive art form than in other types of medical imaging.

    I also suspect that two other recent deterrents to significant new product development are that radiology has never taken ultrasound very seriously and that many clinical facilities are perfectly satisfied with the current performance of ultrasound in its traditional applications.

    A new horizon?

    In a quest to discover what's on the horizon for ultrasound, I decided to interview two of the smartest electrical engineers I know working in ultrasound instrument development: Glen McLaughlin, PhD (founder of Zonare Medical Systems), and Jacques Souquet, PhD (founder of SuperSonic Imagine). It is really fascinating that they both identified the same factor limiting the growth of ultrasound and took the lead in providing a new form of ultrasound imaging equipment to the clinical community.

    But before I share what I learned from my interviews, I first want to present an entirely arbitrary list of the "generations" of ultrasound equipment to illustrate how present-day ultrasound is built upon the successive achievements of the past.

    The first generation, G1, utilized manual scan B-mode equipment with a single, relatively large, typically circular transducer attached to a position-sensing gantry that limited probe motion to a line path. These devices progressed from black and white (bi-stable) images of organ margins to grayscale displays including parenchyma in the mid 1970s. Scan conversion went from analog to partly digital, and the scanning arm was replaced by rotating transducers for rapid, almost continuous viewing of a limited part of the scanning field.

    The second generation, G2, included the early array systems. A single, large transducer was replaced by an array of multiple small elements, which introduced a lot of new problems for system designers and clinical users. The beam pattern of individual small elements is really poor, and individual elements will not tolerate the high power and shock excitation of larger, single-element transducers.

    Array systems progressed from sequentially firing groups of elements to simulate the motion of a larger transducer surface -- the so-called "rectilinear array" -- to the phased array, which uses all of the elements together and achieves beam steering through subtle time delays in the excitation of adjacent elements. These systems were typically referred to as "electronic sector scanners." There was a surge of progress during G2 in transducer materials, probe construction, and in understanding modes of excitation.

    G3 began in 1982 with the introduction of "computed sonography," but it might best be referred to as the "beamformer" era. The number of elements in an array is loosely analogous to the f-number of an optical lens: more elements means a higher f-stop and better depth of field in focus. G3 probes had many more elements than their predecessors and achieved dynamic focusing on a composite of all the received energy, simulating exact transmit focusing in its physical absence. The result was better spatial resolution and better contrast with less clutter and other forms of noise.
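The dynamic receive focusing at the heart of the beamformer era can be sketched in a few lines. This is an illustrative delay-and-sum calculation, not any vendor's implementation; the array geometry and names are my own assumptions.

```python
# Sketch of receive-side delay-and-sum focusing: for a focal point at
# depth z on the array axis, each element at lateral offset x delays its
# echoes so that contributions from the focus add in phase.

import math

C = 1540.0  # speed of sound in soft tissue, m/s

def receive_delays(element_positions_m, focus_depth_m, c=C):
    """Per-element delays (s) that align echoes from an on-axis focus."""
    delays = []
    for x in element_positions_m:
        path = math.sqrt(focus_depth_m ** 2 + x ** 2)  # element-to-focus
        delays.append((path - focus_depth_m) / c)      # extra travel time
    return delays

# 64-element array at 0.3-mm pitch, centered on the beam axis
PITCH = 0.3e-3
elements = [(i - 31.5) * PITCH for i in range(64)]
delays = receive_delays(elements, focus_depth_m=0.03)  # focus at 3 cm
# Dynamic focusing recomputes these delays for every depth as the
# echoes stream in, simulating exact focus everywhere on receive.
```

The outermost elements have the longest path to the focus and therefore need the largest delay; the center element needs essentially none.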

    Late G3 advances

    Big improvements in G3 were achieved in the early 1990s, including correction for signal phase aberration, harmonic imaging, and chirp signals. Bats and dolphins both use chirp acoustics. The idea is this: the shorter the sonic pulse in time, the broader the bandwidth of the signal, and the broader the bandwidth, the better the axial resolution.

    A perfect (infinitely brief) impulse will have an infinitely broad bandwidth. The problem is producing a really broad bandwidth signal from real transducer materials, which reverberate, exhibit inertia, and have limits to the power they can tolerate for repetitive firing in clinical use. Chirp signals are an elegant solution: the signal is long in time and low in peak power, as the excitation frequency is swept over the range where electromechanical transduction will occur.

    This sculpted and contoured signal can be decoded with a filter that simulates a high peak power, short pulse having the same bandwidth. One of the areas where this has had the biggest impact is in high-frequency applications, such as in the breast, where attenuation of higher frequencies with depth limited usage. Other forms of signal coding and manipulation developed originally for military sonar and commercial radar have also been migrated into the ultrasound arena.
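The chirp-and-decode idea can be demonstrated numerically. The sketch below, with arbitrary parameters not taken from any real probe, builds a long linear frequency sweep and correlates it with a matched filter (its own time-reversed replica); the energy collapses into a short, high-amplitude peak whose width is set by the bandwidth, not the pulse length.

```python
# Illustrative chirp pulse compression: a long, low-peak-power sweep is
# decoded by matched filtering into the equivalent of a short pulse.

import math

FS = 50e6          # sample rate, Hz
DURATION = 10e-6   # chirp length: long in time, low in peak power
F0, F1 = 2e6, 8e6  # swept band; the 6-MHz bandwidth sets resolution

n = int(round(FS * DURATION))
chirp = []
for i in range(n):
    t = i / FS
    # instantaneous phase of a linear sweep from F0 up to F1
    phase = 2 * math.pi * (F0 * t + 0.5 * (F1 - F0) / DURATION * t * t)
    chirp.append(math.sin(phase))

def compress(sig):
    """Correlate sig with its matched filter (time-reversed replica)."""
    m = len(sig)
    out = []
    for lag in range(-(m - 1), m):
        acc = 0.0
        for k in range(m):
            if 0 <= k + lag < m:
                acc += sig[k] * sig[k + lag]
        out.append(acc)
    return out

compressed = compress(chirp)
# The peak sits at zero lag (index n - 1) and towers over the sidelobes:
# the 10-microsecond transmit behaves like a much shorter, louder pulse.
```

A production system would do this with fast convolution rather than the brute-force loop shown here; the principle is the same.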

    Breaking the sonic barrier

    Getting back to my interviews with McLaughlin and Souquet, both perceived that all of the previous generations of ultrasound form images on a line-by-line basis. You fire off a pulse, wait until all of the echoes return to the probe, fire the next, and so on, building up an image of some part of the body line by line. The consequence is that, for any combination of pulse repetition rate and image line density, you can have a big, detailed field-of-view at a slow imaging rate or a small, often rather noisy field at a high rate, but not both.

    Processing each line of data separately also contributes to the lateral resolution problems that degrade the distal part of imaging fields. The essence of the new form of ultrasound is that the whole field-of-view is insonified at once with a plane wave of ultrasound, and then all of the returns are captured and analyzed. Electronic processing happens at nearly the speed of light, and there has been a steady increase in the computational speed of transforms and filtering operations. It follows that a great deal of processing can occur, especially with parallel engines, during the dead time that was wasted on prolonged passive listening in all of the previous ultrasound generations.
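The tradeoff is easy to quantify with back-of-envelope arithmetic. The round-trip travel time to the deepest target caps the pulse repetition frequency; conventional systems pay that wait once per image line, while plane-wave insonification pays it once per frame. The numbers below are illustrative.

```python
# Frame-rate arithmetic behind the line-by-line bottleneck.

C = 1540.0                    # speed of sound in soft tissue, m/s
DEPTH = 0.15                  # 15-cm imaging depth
LINES = 128                   # line density of a conventional frame

round_trip = 2 * DEPTH / C    # seconds you must listen per transmit
prf = 1.0 / round_trip        # maximum pulse repetition frequency, Hz

line_by_line_fps = prf / LINES  # G1-G3: one transmit event per line
plane_wave_fps = prf            # 4G: one insonification per frame
# Roughly 40 frames/sec for the detailed conventional image versus
# several thousand for the plane-wave approach -- the "not both" bind.
```

Note that the plane-wave figure matches the "several thousand frames per second" regime discussed below for ultrafast Doppler and elastography.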

    In a lot of ways, this is like and just as significant as when (Dr.) Roger Bannister broke the four-minute mile. That is one of the reasons I like to think of 4G as a really new (fourth) generation of ultrasound instrumentation.

    4G steps

    My impression is that these two new 4G companies are following overlapping paths that are a lot like a fuzzy distinction between medicine and surgery, or between systemic versus local world views, or between long-exposure, high-contrast full-focus pictures versus high-speed stop-action shots in photography. Ultimately 4G will reconcile both regimes, but for the present McLaughlin seems to favor sampling and analysis protocols aimed at enhanced image quality with the potential for quantitative tissue characterization from coded, full-field insonification.

    A common visual example might be in recognizing and grading cirrhosis from the ultrasonic parenchymal macrotexture. Think of the ultrasound image as a surrogate for a biopsy section stained for elastin. McLaughlin used the term "virtual histology," which is a good description. Just as there are many kinds of tissue stains that are used routinely in pathology, there are multiple wave-propagation factors and filter operations ready for clinical use.

    Souquet has been exploring the consequences of imaging at several thousand frames per second. One result has been ultrafast Doppler for studying minute and local variations in blood-flow patterns within vessels or a chamber of the heart. The other is shear-wave elastography.

    With shear-wave elastography, a shaped ultrasonic pulse from the transducer at the skin surface excites shear waves that propagate through tissue perpendicular to the pulse path at roughly a thousandth of the velocity of longitudinal ultrasonic waves. The target area is interrogated at several thousand frames per second, enabling detection of the transverse waves as they propagate, from minute variations in tissue position. Shear-wave velocity is related directly to tissue stiffness. This is a measurement that works throughout the imaging field and does not involve any kind of surface force or vibration. In the same example, a shear-wave elastogram of the liver provides an overlay on the B-mode image of the amount of fibrosis present.
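The stiffness readout rests on a simple relation: modeling tissue as a nearly incompressible elastic solid, Young's modulus follows from the measured shear-wave speed as E = 3ρc². The density and example speeds below are illustrative assumptions, not clinical cutoffs.

```python
# Tissue stiffness from shear-wave speed, under the usual
# incompressible-elastic-solid assumption: E = 3 * rho * c_s**2.

RHO = 1060.0  # approximate soft-tissue density, kg/m^3

def youngs_modulus_kpa(shear_speed_m_s, rho=RHO):
    """Young's modulus (kPa) from measured shear-wave speed (m/s)."""
    return 3.0 * rho * shear_speed_m_s ** 2 / 1000.0

soft = youngs_modulus_kpa(1.2)   # a slow wave: compliant parenchyma
stiff = youngs_modulus_kpa(3.0)  # a fast wave: markedly stiffer tissue
# Stiffness scales with the square of speed, so modest velocity
# differences map to large, easily graded stiffness contrasts.
```

This quadratic scaling is what makes the technique attractive for grading fibrosis: a 2.5x change in speed corresponds to more than a 6x change in the stiffness estimate.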

    New transducers

    Both McLaughlin and Souquet commented on newly developed silicon-based transducer materials that generate very broadband signals at a fraction of the power required by current transducers. This is important because transducer heating is a big safety and regulatory concern. They both believe that these new materials, especially as 2D or area arrays, won't make a big difference for G3 devices but will enable significant improvements in the evolving 4G experience through its unique system architecture.

    Another facet of the 4G digital processing and the high-speed, high-capacity computational components available now is that instruments can be physically smaller and more mobile than the top-of-the-line analog-digital hybrids of G3.

    McLaughlin's preference has been for a range of point-of-care uses, such as the emergency room and, especially, pediatric facilities that operate with minimized ionizing radiation exposure protocols. Souquet has a personal interest in serial monitoring of therapy, such as tumor ablation or chemotherapy, typically performed someplace removed from the ultrasound suite. He refers to this application as "theragnostic" imaging.

    Looking forward and giving thanks

    I would imagine that other manufacturers are busy working on their own versions of plane-wave 4G equipment. Every generation of ultrasound equipment has expanded the role of ultrasound and created new clinical applications. For the manufacturer, every generational improvement has meant confronting more theoretical complexity and having to design entirely new implementation and manufacturing techniques.

    When a new method first appears, its potential is almost never appreciated generally. Recognition expands when the new technique goes beyond what could be achieved by the very best of whatever it is surpassing. NMR is a good example for those who remember its introduction.

    I have been blessed by being involved in the very start of G2 and G3, and now, with more than nine months of clinical experience in the 4G world, I am convinced that this will be the most significant advance in ultrasound ever. "The real trick of 4G technology is how to be able to obtain diagnostic image quality that is temporally coherent and shift invariant," McLaughlin said.

    That's engineer-speak for an HD ultrasound image that is a true depiction of tissue features, visible at high speed. That, after all, is what we have always wanted.

    It is time that those of us lucky enough to identify ourselves as clinical ultrasound people applaud the individuals and teams of engineers, physicists, programmers, signal processors, and materials scientists who have made it possible for us to work with this fabulous form of diagnostic imaging, and whose efforts over the past 50 years continue to help us improve and deliver medical care for everyone.

    Dr. Jason Birnholz is a graduate of Johns Hopkins School of Medicine and did his diagnostic radiology residency at Massachusetts General Hospital. He was awarded an advanced academic fellowship from the James Picker Foundation and has been a professor of radiology.

    The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnie.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.


