Kopans: Breast cancer overdiagnosis grossly exaggerated


In this Second Opinion article, breast imaging expert Dr. Daniel Kopans offers his rebuttal of a study published last November in the New England Journal of Medicine, which raised questions about the value of screening mammography and claimed that one-third of detected breast cancers represent "overdiagnosis."

Most physicians and women do not realize that there is a concerted effort to try to reduce access to mammography screening. This has, in fact, been going on since the 1970s, but it has intensified following the "out of the blue" guidelines issued by the U.S. Preventive Services Task Force (USPSTF) in 2009.

This group of individuals, none of whom provided care for women with breast cancer, and only one of whom had any experience with breast cancer screening data, discouraged women from starting screening until the age of 50, and extended the time between screens to every two years instead of annually.

Dr. Daniel Kopans from Massachusetts General Hospital.

Physicians and women remain unaware that the data provided to the USPSTF show that the most lives are saved by annual screening beginning at age 40. There are, in fact, no data showing that the age of 50 has any biological or scientific basis as a threshold. That threshold has been established through nonscientific analyses and data manipulation; none of the parameters of screening changes abruptly at the age of 50 or any other age.

Randomized, controlled trials are the only way to prove that screening saves lives, and everyone agrees that the randomized, controlled trials of screening have proved that lives are saved by screening beginning at the age of 40. The major effort to reduce access to mammography has now re-formed around the so-called "harms" of screening, after an earlier attempt in the 1990s.

In its 2009 recommendation, the USPSTF expressed concern about women being recalled unnecessarily from screening for additional evaluation (recalls that, unfortunately, are inaccurately called "false positives"). But the task force failed to inform women and their physicians that approximately 10% of screened women, or fewer, are recalled, and that the vast majority of these are found to have nothing of consequence on the basis of a few extra pictures or an ultrasound.

Only 1% to 2% of screened women are advised to have a biopsy, and 20% to 40% of these are found to have breast cancer. If a woman waits until she has a lump that a clinician feels should be biopsied, the yield of cancer is 15%, and these cancers are usually larger, later stage, and less curable than those found by mammography.

Nevertheless, the inexperienced USPSTF concluded that avoiding recalls was more important than saving lives. The task force was willing to sacrifice as many as 100,000 lives among women now in their 30s, lives that could be saved by annual screening beginning at the age of 40.

In an effort to bolster the USPSTF position, a number of scientifically and methodologically flawed papers have been published. These have made it into print on the strength of poor peer review in journals that have undeclared but clear biases against screening.

A failure of peer review

The most recent effort to reduce access to screening is based on the suggestion that screening leads to the detection of cancers that would never become clinically evident. Authors have suggested that thousands of breast cancers each year would disappear on their own if left undiscovered, despite the fact that there is not one credible report of an invasive breast cancer "disappearing" without some intervention.

Perhaps one of the more egregious papers to suggest massive overdiagnosis was published recently in the New England Journal of Medicine (November 22, 2012, Vol. 367:21, pp. 1998-2005). Authors Dr. Archie Bleyer and Dr. H. Gilbert Welch claimed that in 2008 as many as 70,000 cancers detected by screening would never have become clinically evident and that screening was leading to massive overdiagnosis of breast cancer.

This paper is an excellent example of a failure of peer review. The article is not based on science, since it did not have direct patient data, but is instead based on registry numbers and assumptions, estimates, and extrapolations that prove to be incorrect.

The paper's primary conclusion -- that mammography leads to massive overdiagnosis -- is specious simply because the authors had no way of knowing which members of the study cohort had mammography, and no idea which cancers were detected by mammography. The authors did not even mention the multiple scientific studies, based on direct patient data, that have shown that mammography screening leads to little if any overdiagnosis [1-6]. In fact, the only way to accurately measure overdiagnosis is through randomized, controlled trials, and these have shown that the overdiagnosis rate is at most 10% [7] and most likely less than 1% [8].

The authors based their conclusion on their estimate of what breast cancer incidence would have been had screening not been available from 1974 to 2008. They estimated this "baseline" using data from 1976 to 1978. However, that baseline is highly doubtful, since they had no way of accounting for the surge in screening that followed the publicity surrounding the breast cancer diagnoses of Margaretta "Happy" Rockefeller and Betty Ford in 1974.

The authors also ignored 40 years of data collected prior to any general screening and provided by the Connecticut Tumor Registry, which showed that the rate of invasive breast cancers had been increasing steadily at 1% per year [9]. This was four times the estimate used by the authors, who claimed that in the absence of screening the incidence would have increased at a rate of only 0.25% per year.

Not only are 40 years of data far more reliable than three, but the prolonged prevalence peak (extending from 1980 to 2006) that was produced as women began to participate in screening has since returned toward baseline [10], and incidence is once again increasing at a rate of 1% per year. Bleyer and Welch simply ignored this return to a steadily increasing baseline that is unrelated to screening.

Another critical error is that the authors combined the number of cases of ductal carcinoma in situ (DCIS) with the numbers of small invasive cancers. No knowledgeable analysts would do this, since DCIS constitutes a series of lesions that are highly controversial and their relation to invasive cancers is debated. The addition of DCIS to invasive cancers simply dilutes and corrupts the estimates.

If we use a 1% per year increase in the incidence of invasive cancers, starting from 102 invasive cancers per 100,000 women in 1980, then by 2008, in the absence of screening, there would have been about 132 invasive breast cancers per 100,000 women in the population. U.S. Surveillance, Epidemiology, and End Results (SEER) Medicare data show that there were only 128 cases of invasive breast cancer per 100,000 women in 2008 [11] -- fewer than predicted.
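The extrapolation arithmetic can be sanity-checked in a few lines. This is an illustrative sketch, not the authors' actual method: the 1% annual growth rate and the 1980 starting incidence of 102 per 100,000 are taken from the article, and the exact projection depends on whether the 1% increase is compounded annually or applied linearly over the 28 years from 1980 to 2008.

```python
# Sanity check of the baseline extrapolation described above (an
# illustrative sketch; inputs are the figures quoted in the article).

BASE_1980 = 102.0    # invasive cancers per 100,000 women in 1980
RATE = 0.01          # assumed annual increase in incidence (1%)
YEARS = 2008 - 1980  # 28 years of extrapolation

# Compounded annually: base * (1 + rate)^years
compounded = BASE_1980 * (1 + RATE) ** YEARS

# Applied linearly: base * (1 + rate * years)
linear = BASE_1980 * (1 + RATE * YEARS)

print(f"compounded: {compounded:.1f} per 100,000")  # roughly 134.8
print(f"linear:     {linear:.1f} per 100,000")      # roughly 130.6

# Either way, the projected no-screening baseline exceeds the 128 per
# 100,000 that SEER reported for 2008, which is the article's point.
observed_2008 = 128.0
assert observed_2008 < linear < compounded
```

Both variants land in the neighborhood of the article's figure of about 132 per 100,000, and both exceed the observed 128, so the conclusion does not hinge on which form of the growth rate is used.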

This means that using the authors' methods and the correct extrapolation, not only was there no overdiagnosis of invasive breast cancer, there were actually fewer cases than would have been expected. Bleyer has calculated that had they used the 1% per year estimate, there would still have been 34,000 overdiagnosed cancers in 2008 [11].

What he failed to appreciate was that 57,000 cases of DCIS were predicted for 2008 [12]. If the authors remove DCIS from their estimate, as they should have done, then by their own calculation there were 23,000 fewer invasive breast cancers than would have occurred in the absence of screening (57,000 - 34,000 = 23,000). They would have to agree that not only is there no overdiagnosis of invasive cancers but, in fact, we can wonder whether the apparent decline in invasive breast cancer is due to the removal of DCIS lesions over the preceding years.

DCIS is found almost exclusively by mammography. There have been major efforts to try to address the issues that surround DCIS with regard to diagnosis and treatment. This is not new [13] and is not worthy of a publication that only confuses the issues.

This scientifically unsupportable paper has been used to raise concerns among women and their physicians and will no doubt discourage some women from participating in screening. The analysis in this paper is fundamentally flawed: Using the authors' methods, but with a more accurate extrapolation, there is no evidence of overdiagnosis of invasive breast cancer and the paper should be withdrawn by the New England Journal of Medicine.

Dr. Kopans is a professor of radiology at Harvard Medical School and senior radiologist in the department of radiology, breast imaging division, at Massachusetts General Hospital in Boston. He first expressed his opinions on the Welch study in an opinion piece published online March 29 in the Journal of the American College of Radiology.

The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnie.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.

References

  1. Njor SH, Olsen AH, Blichert-Toft M, Schwartz W, Vejborg I, Lynge E. Overdiagnosis in screening mammography in Denmark: population-based cohort study. BMJ. 2013;346:f1064.
  2. Yen MF, Tabar L, Vitak B, Smith RA, Chen HH, Duffy SW. Quantifying the potential problem of overdiagnosis of ductal carcinoma in situ in breast cancer screening. Eur J Cancer. 2003;39(12):1746-1754.
  3. Puliti D, Duffy SW, Miccinesi G, et al. Overdiagnosis in mammographic screening for breast cancer in Europe: a literature review. J Med Screen. 2012;19(Suppl 1):42-56.
  4. Kopans DB, Smith RA, Duffy SW. Mammographic screening and 'overdiagnosis.' Radiology. 2011;260(3):616-620.
  5. Paci E, Warwick J, Falini P, Duffy SW. Overdiagnosis in screening: is the increase in breast cancer incidence rates a cause for concern? J Med Screen. 2004;11(1):23-27.
  6. Duffy SW, Agbaje O, Tabar L, et al. Overdiagnosis and overtreatment of breast cancer: estimates of overdiagnosis from two trials of mammographic screening for breast cancer. Breast Cancer Res. 2005;7(6):258-265.
  7. Zackrisson S, Andersson I, Janzon L, Manjer J, Garne JP. Rate of overdiagnosis of breast cancer 15 years after end of Malmo mammographic screening trial: follow-up study. BMJ. 2006;332:689-692.
  8. Yen AM, Duffy SW, Chen TH, et al. Long-term incidence of breast cancer by trial arm in one county of the Swedish Two-County Trial of mammographic screening. Cancer. 2012;118(23):5728-5732.
  9. Garfinkel L, Boring CC, Heath CW Jr. Changing trends. An overview of breast cancer incidence and mortality. Cancer. 1994;74(Suppl 1):222-227.
  10. Centers for Disease Control and Prevention. Use of mammography among women 40 years of age and over. http://www.cdc.gov/nchs/data/health_policy/mammography.pdf. Published July 2009. Accessed April 29, 2013.
  11. National Cancer Institute. SEER stat fact sheets: breast. http://seer.cancer.gov/statfacts/html/breast.html. Accessed April 29, 2013.
  12. Autier P, Boniol M. Effect of screening mammography on breast cancer incidence. N Engl J Med. 2013;368(7):677-679.
  13. American Cancer Society. Breast cancer facts & figures 2007-2008. http://www.cancer.org/acs/groups/content/@nho/documents/document/bcfffinalpdf.pdf. p. 9. Accessed April 29, 2013.