Practical Radiation Oncology (2011) 1, 83–84


Commentary

Finding the answers we need: comparative effectiveness

Lee N. Newcomer, MD⁎

UnitedHealthcare, Edina, Minnesota

Received 17 February 2011; accepted 18 February 2011

See Related Article on page 72.
Conflicts of interest: None.
⁎ 5901 Lincoln Drive, Edina, MN 55436. E-mail address: [email protected].

I am always surprised by resistance to comparative effectiveness (CE) research. Simply asking the question, "What is the most effective therapy for a patient with this particular problem?" is the essence of the work. What clinician or patient wouldn't want to know the answer? The rhetoric concerning this topic, however, is disturbing. Critics argue that CE will restrict choices and access to certain technologies. The assertion is almost certainly true, but who wants access to a technology that has been proven to be inferior? Skeptics believe that CE will stifle innovation. Paradoxically, CE will create a consistent standard for innovators as they invent new technologies; they will seek breakthroughs rather than waste precious resources on incremental or insignificant changes. Finally, critics are concerned that payments for new technologies will diminish with CE evaluation. This assumption may or may not be correct, but pricing will be more rational because comparative effectiveness will quantify the benefits of the new technology.

One thing is certain: some technologies will win and some will lose in a comparative effectiveness analysis. The losers should be deleted from the options available to clinicians. Unfortunately, that will be bad news for the investors and physicians who are supporting an inferior technology. It will not be bad news for patients.

Gathering these data is no easy task. The general public and clinicians are right to ask for assurance that the work is done accurately and without bias. Bekelman and colleagues have written an excellent review demonstrating how that can happen.1 There are many options that can be used to find relevant, valid, and reproducible data for the analysis. Randomized controlled trials, the gold standard for these comparisons, often fail to accrue enough patients. The reasons for failure to accrue are complex: physicians often refuse to enroll patients because they believe one technology is superior, patients perceive new technology as inherently superior and want to avoid randomization to standard therapy, and enrolling in a trial is a great deal of extra work for all parties. For readers who doubt those reasons, try finding a randomized trial in prostate cancer comparing proton therapy with intensity-modulated radiation therapy or brachytherapy.

A randomized controlled trial, although the most desirable approach, isn't an absolute necessity for high-quality comparative effectiveness research. Research has demonstrated that carefully controlled observational series are as powerful predictors of outcomes as randomized trials.2 The key to the adaptive trials and observational series described in Bekelman's paper is collecting enough essential information to ensure that the patients are standardized. It's equally important to understand who has been excluded from an observational series. Perhaps the most chilling example of this error was the original observational series for high-dose chemotherapy with autologous bone marrow rescue: the patients in the original series were highly selected without specific criteria. The resulting enthusiasm for the outcomes demonstrated in the early series led to thousands of women receiving a treatment that was later proved to be harmful in a randomized trial.3 Rigorous definitions of patient selection in the original observational series could have prevented this disaster. Comparative effectiveness research must be carefully conceived, as Bekelman suggests, or it will be misleading.

Finally, comparative research is focused on patient outcomes. One can't assume that a higher dose of radiation to a target field will result in better survival or better tolerance. Outcomes of significance to patients (survival, disease-free survival, quality of life, or specific symptoms related to the targeted field) will be the focal points of this research. All of this effort comes back to the question posed at the beginning of this article: "What therapy is most effective for a patient with this condition?" Bekelman's call to get started is good advice.


References

1. Bekelman JE, Shah A, Hahn SM. Implications of comparative effectiveness research for radiation oncology. Pract Radiat Oncol. 2011;1:72-80.
2. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000;342:1878-1886.
3. Brownlee S. Bad science and breast cancer. Available at: http://discovermagazine.com/2002/aug.
