May 8, 2014 -- On April 30, I entered a conference room at the U.S. Centers for Medicare and Medicaid Services (CMS) outside Baltimore to attend a meeting of a Medicare Evidence Development and Coverage Advisory Committee (MEDCAC) panel. The panel was convened to advise CMS about Medicare coverage of CT-based lung cancer screening. There was no way I could miss this historic event.
I had seen my first screen-detected lung cancer in the early 1970s while a surgical resident at the Mayo Clinic, and spent two decades participating in lung cancer screening research, guideline development, and policy. The MEDCAC process was expected to culminate in implementation of population-based lung cancer screening that would signal the end of decades of inaction and delay. U.S. citizens would finally gain the benefit of lifesaving screening.
At 3:30 p.m., I sat slumped in my chair, stunned and disoriented. Other participants stared into space disconsolately, tears streaming down their faces. We had just witnessed what I can only characterize as an appalling travesty of the scientific process and public health policy.
Conflicts of interest?
Thirty-eight prominent clinical and scientific organizations have recommended that CMS cover screening for people at high risk of lung cancer. However, the principles of so-called evidence-based medicine posit that such clinical experts have various biases, such that only methodologists, without bias, should assess and interpret relevant data and make final policy decisions.
Attendees at the MEDCAC meeting were sternly admonished that federal rules required frank and complete disclosure of potential conflicts of interest. I was puzzled, as my quick review of the bios of the panel members showed that four of them (Mark Grant, PhD; Dr. Jo Carol Hiatt; Dr. Curtis Mock; and Dr. Michael Gould) worked for healthcare payors (Blue Cross, UnitedHealthcare, and Kaiser).
I am certain they would argue that they had no conflicts with respect to participation, because the question at hand was whether CMS -- not the payors they work for -- would have to pay for screening. However, since standard practice in the U.S. is that private companies typically provide services approved by CMS, this argument appears dubious.
One particularly egregious example was the choice of panelist Dr. Curtis Mock, vice president of the Medicare Advantage section of UnitedHealthcare. If CT screening is implemented by CMS, UnitedHealthcare would then have to pay for the diagnostic workup in those individuals found to have a nodule on the screening CT.
Mock subsequently cast the lowest possible score of 1, registering his high degree of uncertainty as to the value of screening in Medicare-age patients (which could mean that United would avert a major financial obligation). I don't see how such a multimillion-dollar conflict of interest could have been resolved, short of excluding Mock from participation in the MEDCAC process.
Another panelist (Dr. Jeffrey Rich) neglected to disclose that he was a former director of Medicare.
Excluding clinical experts
Although payor conflicts were ignored, individuals with clinical and hands-on research experience were excluded from the panel. The result was a group with few clinicians and only one nonvoting member (Gould) with substantial experience in the care of lung cancer patients.
Were the MEDCAC panelists true experts in methodology? One committee member (Harry Burke, PhD) repeatedly asserted that there was no statistically significant reduction in mortality for the National Lung Screening Trial (NLST) subset over age 65. He would not accept the repeated admonition from the NLST biostatistician that it was improper to interpret subset data in that way in the context of a randomized clinical trial.
Burke refused to recant, and he appeared to convince other panelists of the validity of his interpretation of the data. A college freshman giving such an answer on a biostatistics 101 exam would flunk.
Another tenet of evidence-based medicine is that only "high-level" data are considered. Thus, the panel focused its consideration very narrowly; there was no mention of the abundant data from multiple Japanese prefectural cohort and case-control studies, or serious consideration of the results of prospective screening of more than 60,000 individuals by the International Early Lung Cancer Action Program (I-ELCAP).
Their questions and discussion were sharply focused primarily on data from a single study, the U.S. National Cancer Institute's NLST, the only randomized, controlled trial with sufficient accrual size and duration of follow-up to demonstrate a benefit in both lung cancer-specific and all-cause mortality reduction.
But principles of evidence-based medicine were jettisoned when they conflicted with the opinions of panelists. Chair Dr. Rita Redberg of the University of California, San Francisco (UCSF) repeatedly cited major risks of radiation carcinogenesis from diagnostic CT scans, and she even offered specific estimates of quantitative cancer risk per number of CT scans. Such estimates appeared to be accepted by other panelists.
There is, in fact, no high-level evidence that diagnostic CT scans cause cancer in adults -- not from any meta-analysis; randomized, controlled trial; cohort study; or case-control or observational study. There is not even case-report evidence. All evidence cited by Redberg comes from modeling studies, typically based on radiation exposure from atomic bomb explosions or nuclear accidents and interpreted under the discredited linear no-threshold theory.
The panel's lack of clinical experience combined with the extreme limitation of evidence considered suggests to me an analogy in which a group of city dwellers is allowed to set forestry policy after studying aerial photographs, without ever having walked into the woods.
What went right
There are two important positives to the MEDCAC process. First, the conference was videotaped in its entirety and can be reviewed by anyone interested in fact-checking the assertions I and others might make about the conduct of the meeting.
Second, the 330 comments from interested individuals and the slides of presenters can be viewed on the conference website, representing a valuable virtual multiauthor textbook on lung cancer screening. Although the CMS process presumes that panelists would carefully review such material, I seriously doubt whether this occurred.
For example, Mock told his colleagues on the panel that he did not know whether follow-up CT scans after initial screening CT were low-dose or standard high-dose (8 mSv). Had he read the submitted comments, such uncertainty would have been unlikely.
The next part of the conference was the public presentation of evidence. Four presenters were permitted 15 minutes each, and a larger group four minutes each. Subsequent public commenters were allowed no more than one minute, and in a number of instances they were peremptorily cut off even earlier.
First up was Dr. Peter Bach, who has consistently maintained over the preceding decade that lung cancer screening is ineffective and dangerous. He failed to disclose his former position as senior advisor to the director of CMS or multiple other financial conflicts freely available on the Internet.1 During a break, I spoke with Tamara Syrek Jensen, the acting director of CMS' Coverage and Analysis Group, and informed her of Bach's undisclosed financial information. No action was taken.
The thrust of Bach's presentation and multiple later comments is that he has "changed his position" on the efficacy of lung cancer screening somewhat, but insists that screening be tightly restricted to NLST age and smoking cessation criteria and only be performed in "centers of excellence" capable of collecting registry data. Such limitations would substantially reduce the number of people who would be covered by CMS, as well as cut costs to CMS and private payors.
In the area of registries, I was informed that one of the panelists (Gould), a recent co-author of Bach's, has a pending research grant application to fund such a registry. He did not disclose this potential conflict of interest.
The next stage was discussion and questioning of speakers. In my opinion, Redberg was consistently curt and dismissive of speakers with whom she disagreed, while allowing extra time and latitude to speakers she appeared to favor.
I found this particularly offensive in the cases of Dr. Christine Berg and Dr. Claudia Henschke, the principal investigators of the NLST and I-ELCAP trials, respectively, and the two people in the room with the most knowledge and experience on the subject at hand. Because the conference was videotaped, readers may make their own interpretation of this allegation by their own review of the recordings.2,3
Before and after the negative vote on whether to support CMS coverage of CT screening, panelists explained their reasoning. I was startled that panelists appeared to have accepted, without question, the assertion (by Dr. Steven Woolf) that "thousands of iatrogenic deaths" were expected with population screening, based upon "16 iatrogenic deaths in the NLST study."
In fact, there is no evidence that even a single death in NLST was a complication of screening. There were 16 deaths that occurred within 60 days of a screening CT scan, but no report has attributed a single one of those deaths to a procedure following a CT false-positive study. If Berg had been allowed more latitude to testify, this false perception might have been corrected.
The committee also appeared to accept, without question, the assertion that the chance of cure following a CT screening-based diagnosis of lung cancer was 1 in 5 (Bach). Again, if Berg had had the opportunity to comment, the panel might have learned that most lung cancer deaths in NLST occurred after CT screening had ceased, and that 52% -- that is, 1 in 2, not 1 in 5 -- of screen-detected lung cancer patients survived more than five years in NLST.
Henschke might also have shown the panel that in I-ELCAP and in European and Japanese screening studies, five- to 15-year actuarial survival of patients after a screen-detected diagnosis of lung cancer is even higher: 80% or more, or 4 of 5, survive. Extrapolation of the committee's gross underestimation of survival benefit into calculation of derived estimates -- for example, number needed to screen, cost-effectiveness, and benefit-risk ratios -- resulted in gratuitously inaccurate assumptions, leading directly to their low confidence vote scores.
MEDCAC panelists also did not appear to understand the implications of there being "no study-wide diagnostic algorithm in NLST," or that the incidence of false positives and invasive interventions in patients with benign nodules has been far lower in studies using effective diagnostic algorithms, such as the I-ELCAP and Danish randomized trials.
In the final analysis, I consider that this disconcerting output from a panel of methodologists, MBAs, and policy wonks represents a very strong indictment against the ability of self-confessed evidence-based medicine experts to make intelligent and fair decisions in areas in which they have no clinical experience. This is particularly true after limited review of available evidence, which has been, in turn, collected by still other methodologists similarly lacking experience in the subject at hand.
I am confident that, as in a current series of TV commercials, if early grade-school students had been empaneled by CMS and asked by a moderator, "Which is better to have, a big tumor or a little tumor?" they would have outperformed the MEDCAC panel and quickly reached a consensus: "Small is better. It's not complicated!"
The MEDCAC opinion would be laughable were it not for its dire potential consequences to public health. Unless ignored or revised by CMS, this misinformed and misinterpreted report will predictably result in further delay and the unnecessary suffering and death of thousands of people known to be at high risk of lung cancer, disproportionately affecting underserved populations and members of minority groups who do not have the resources to pay for screening.
Panel member David Howard, PhD, may have best understood and characterized the panel's decision.
"I just worry that we might be setting a threshold so high that no new technology could pass it," he said. "I don't think we need to become the Amish of medical technology and freeze things as they were in 2014."
His advice was ignored.
The comments and observations expressed herein are those of the author and do not necessarily reflect the opinions of AuntMinnie.com.