LETTERS TO THE EDITOR

Re: “Counterpoint: Overdiagnosis in Breast Cancer Screening”

Robert Smith [1] has misrepresented the extent of overdiagnosis in the Canadian National Breast Screening Study [2] because he has used the wrong denominator in his calculations. It makes no sense to relate our estimate of the number of cases overdiagnosed to the total number of breast cancers ascertained throughout our follow-up period. The extent of overdiagnosis must be related to the number of breast cancers detected by screening, which is what we did [2]; hence our estimate of 22%, which increases to 35% if in situ cancers are included. It is also strange that he cited data from the Swedish Two-County Trial, which cannot be used to determine the extent of overdiagnosis, because the control group was screened at the end of the screening period in the intervention group.

Anthony Miller, MD
University of Toronto, Dalla Lana School of Public Health
155 College Street, Toronto, ON M5T 3M7, Canada
e-mail: [email protected]

REFERENCES
1. Smith RA. Counterpoint: overdiagnosis in breast cancer screening. J Am Coll Radiol 2014;11:648-52.
2. Miller AB, Wall C, Baines CJ, Sun P, To T, Narod SA. Twenty-five year follow-up for breast cancer incidence and mortality of the Canadian National Breast Screening Study: randomised screening trial. BMJ 2014;348:g366.
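For readers following the arithmetic, the two denominator conventions disputed in this exchange can be made explicit. The sketch below is illustrative only: it uses the figures quoted in these two letters (an estimate of 22% based on roughly 103 excess invasive cancers, and a 3.7% difference in total cases at 25 years), and the back-calculated count of screen-detected cancers is an approximation rather than the trial’s reported figure.

\[
\text{overdiagnosis (Miller)} = \frac{\text{excess cancers in the screened arm}}{\text{cancers detected by screening}},
\qquad
\text{overdiagnosis (as Smith reads it)} = \frac{\text{excess cancers in the screened arm}}{\text{all cancers ascertained during follow-up}}.
\]

Under the first convention, an excess of about 103 invasive cancers yielding 22% implies a denominator of roughly \(103/0.22 \approx 470\) screen-detected cancers; under the second, relating the between-arm difference to all cases ascertained over 25 years of follow-up gives the much smaller 3.7% figure cited in the reply that follows.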
Author’s Reply

A randomized controlled trial of breast cancer screening in which the control group is not screened at the end of the trial is, in theory, ideal for estimating the extent of overdiagnosis. But more than this single feature is required: at the end of the study, the invited subjects must stop screening and never be screened again in their natural lives, and the control subjects must also never be screened. If the randomization truly produced two equal groups, then once all study subjects have died, any excess number of breast cancers in the invited group compared with the control group may be judged to be overdiagnosis. This is fine in theory, but none of the randomized controlled trials truly fulfill these requirements, because neither study subjects nor policymakers are ever so cooperative.

After the Canadian National Breast Screening Study (CNBSS) screening centers closed in 1988, organized screening was initiated in 4 provinces (British Columbia, Alberta, Ontario, and Nova Scotia) in quick succession (1988-1991) [1]. Although the design of the CNBSS did not include screening the control group at the end of the trial, Canadian policymakers had other ideas. Because a policy of screening was implemented in 4 provinces within 3 years of the last screening round, any attempt to measure overdiagnosis is compromised by different effects of lead time in each study arm.

How important is the influence of lead time? According to Duffy and Parmar [2], an average lead time of 40 months inflates the observed excess incidence by 37% after 20 years of follow-up, an effect that falls below 10% only after 25 years. Miller favors measuring overdiagnosis at 15 years of follow-up because this is when a difference of 103 excess cancers (22%) between the two arms becomes constant. Not only does the difference not appear constant in Figure 4 of Miller et al’s report [3], but, as reported in Table 1, the difference in the total number of breast
cancer cases at 25 years of follow-up is substantially diminished, to only 3.7%.

If overdiagnosis is defined as a cancer that never would have become apparent in a woman’s natural life had she not undergone screening, then measuring the difference in incidence at the furthest point of follow-up makes more sense, with one important caveat: once both study arms are undergoing screening, the comparison of long-term rates becomes a comparison of the unknown fractions of both progressive and nonprogressive cancers in each arm.

The point here is not which denominator is correct but rather that neither is correct. At 15 years of follow-up, overdiagnosis is overestimated, and at 25 years of follow-up, it may be underestimated. In short, it simply is not possible to estimate overdiagnosis with any measurable confidence from the CNBSS data. However, if the difference between the two arms at 25 years of follow-up is only a nonsignificant 3.7%, and one arm not only had an imbalance of cancers and risk at the start of the study but also underwent nearly 200,000 more mammographic examinations, it is hard to see that overdiagnosis is a very significant problem, let alone one even remotely close to the high rates claimed by Miller and colleagues.

Robert A. Smith, PhD
American Cancer Society
250 Williams Street, Atlanta, GA 30303
e-mail: [email protected]

REFERENCES
1. Canadian Partnership Against Cancer. Organized breast cancer screening programs in Canada: report on program performance in 2007 and 2008. Toronto, Ontario, Canada: Canadian Partnership Against Cancer; 2013.