A response to Harding et al.

The ploy used by Harding et al. in their new study is classic: explain why your approach is scientifically unsupportable, and then go ahead and use it, claiming there is no alternative.

"Clinicians are correct to be wary of ecological studies because of the ecological fallacy," the authors wrote in their study. "Ultimately, however, decisions must be made on the evidence that is available and not unachievable ideals."

There is a reason why statisticians warn of the "ecological fallacy." The "fallacy" to which they refer is inferring results for individuals from observations of groups, without any proof of cause and effect that such inferences have any validity. It is entirely possible that the authors could have looked at car sales and come to similar conclusions about their relationship to breast cancer.

Dr. Daniel Kopans from Massachusetts General Hospital.

As has been the error in many of the "studies" being used to reduce access to screening,1,2 the fundamental failure of this analysis is that the authors have no idea which women were actually screened and which cancers were detected by screening. They are linking what women report in surveys (notoriously inaccurate) to reports in the SEER database that do not contain the patient-specific information they would need for their evaluation to have any validity.

The real story is that poor peer review has resulted in nonscience being published. To avoid "unachievable ideals" is to simply say they were not interested in facts. There are numerous studies that have looked directly at patient data and outcomes that have shown that screening is related to a decline in deaths and a reduction in advanced cancers.

Harding et al. have built a true house of cards on the underlying fallacy of the analysis. You cannot, legitimately, use incomplete and unrelated data to draw the conclusions they are making. The percentage of women claiming to have had a mammogram within the previous two years cannot be used to establish a relationship with the size of the cancers in the region. It cannot be used to claim that the rate of advanced cancers has been unaffected. They admit this, but then go ahead and do it anyway, correctly assuming that the reviewers will overlook the underlying "fallacy."

How is it possible to know the relationship between screening and what happens to the rate of advanced cancers when you only look at one year? Trends require the passage of time. They have no idea whether or not the rate of advanced cancers declined with the use of mammography in any of the counties since they have no longitudinal data. If you start with fallacious relationships, expanding your conclusions is simply expanding the fallacies.

One also has to wonder what the authors have done with the data. Between 2000 and 2010 (10 years of follow-up), the SEER breast cancer death rate fell from 26.6 deaths per 100,000 women to 21.9 deaths per 100,000 women -- a decline of 18% (almost 2% per year) -- yet the authors claim there was no decline in deaths over the same period using SEER data. Something is wrong. This should have been recognized in peer review and the fundamental discrepancy addressed.

The claim of massive overdiagnosis of invasive cancers has been manufactured. No one has ever seen an invasive breast cancer disappear on its own, yet it has been claimed there are tens of thousands each year. When we looked directly at patient data, we found that more than 70% of the women who died of breast cancer in major Harvard Medical School teaching hospitals were among the 20% of women who were not participating in screening.3

It is clear that the material in this paper demonstrates nothing more than what the authors admit is their problem -- "the ecological fallacy."

Enough is enough. The effort to reduce access to screening has been relentless by a small group that has continued to use specious arguments and scientifically flawed analyses to support their agenda. At some point, peer reviewers need to read more carefully and stop the publication of scientifically unsupportable and misleading material.

References

  1. Bleyer A, Welch HG. Effect of three decades of screening mammography on breast-cancer incidence. N Engl J Med. 2012;367(21):1998-2005.
  2. Jørgensen KJ, Zahl PH, Gøtzsche PC. Breast cancer mortality in organised mammography screening in Denmark: comparative study. BMJ. 2010;340:c1241.
  3. Webb ML, Cady B, Michaelson JS, et al. A failure analysis of invasive breast cancer: most deaths from disease occur in women not regularly screened. Cancer. 2014;120(18):2839-2846.

Dr. Kopans is a professor of radiology at Harvard Medical School and a senior radiologist in the department of radiology, breast imaging division, at Massachusetts General Hospital.

The comments and observations expressed herein are those of the author and do not necessarily reflect the opinions of AuntMinnie.com.
