Volume 24, Number 1

January 2014

Comparative Effectiveness Research in Oncology: The Promise, Challenges, and Opportunities

Introduction

Comparative effectiveness research (CER) and its methodologies are not new, but the term has recently been popularized, with heightened awareness of the importance of this type of research. In 2009, the Institute of Medicine (IOM) published the “Initial National Priorities for Comparative Effectiveness Research.”1 In this report, CER was defined as the “generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor a clinical condition, or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels.” CER has gained further national attention with the funding of several research projects since 2009 by the American Recovery and Reinvestment Act, and with the creation of the Patient-Centered Outcomes Research Institute, which has substantial funding (over $500 million per year by 2014) for CER.2 With the continued rapid development of diagnostic and therapeutic advances, there is a clear and urgent need for CER in oncology. Facing an increasing number of options, patients need research evidence to help them make informed decisions. For example, men with early prostate cancer have options ranging from active surveillance to radical prostatectomy to radiation therapy. Further, for prostatectomy, there are different techniques (open vs robotic); for radiation, options include brachytherapy, intensity-modulated radiation therapy, proton radiation, and stereotactic body radiation therapy. The comparative quality of life, treatment-associated morbidity, cancer control, and survival outcomes of this wide range of modern treatment options are not well known.

It is, therefore, not surprising that treatment of localized prostate cancer is one of the highest-priority CER topics on the IOM list.1 CER is also highly relevant to policymakers, who need research data to inform coverage decisions. There are several consistent themes in CER, and some are relatively unique to this field of research. These are briefly described in this article and in more detail in the remaining articles of this issue.

1053-4296/13/$ - see front matter © 2014 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.semradonc.2013.08.001

Stakeholder Involvement

Unlike the traditional model in which researchers shoulder the full responsibility for conceiving research ideas and designing studies, CER emphasizes collaboration with stakeholders early and throughout the research process, from study design to dissemination of findings. According to one definition, a stakeholder is “an individual or group who is responsible for, or affected by, health- and health care–related decisions that can be informed by research evidence.”3 As the IOM definition of CER indicates, stakeholders for CER in oncology generally include patients, physicians, payers, and policymakers. A major reason for including stakeholders in CER is to ensure that the research topic and study end points are important and relevant to these end users, avoiding scenarios in which significant time and money are spent conducting studies that ultimately do not change clinical practice or patient decision making. In an innovative effort, the Agency for Healthcare Research and Quality assembled a stakeholder group that included representatives from patient groups, payers and policymakers (including the Centers for Medicare & Medicaid Services and private insurers), and physician representatives from oncology-related specialty groups.4 This group met for the first time in 2010 and was charged with generating a list of the highest-priority CER topics in oncology. This is an example in which researchers and a funding agency asked the stakeholders about research they consider important. Subsequent stakeholder meetings focused on operationalizing priority topics into feasible studies, and on providing continued feedback and guidance to researchers of funded and ongoing studies.

This direct collaboration between researchers and stakeholders to identify priority research topics, design studies (including selecting outcomes relevant to the stakeholders), and continue working together throughout study conduct embodies the overall goal of CER: to provide useful research evidence that ultimately informs these stakeholders. A similar effort is seen in the Alliance for Clinical Trials in Oncology, a National Cancer Institute–sponsored clinical trials cooperative group. The standing Patient Advocates Committee of the Alliance, and the integration of patient advocates throughout its scientific committees, help ensure that clinical trials are designed with patient input, with results that will be relevant to patients. This close involvement of patients also facilitates the conduct of other types of CER in the cooperative group setting.

CER Methodology

The article by Meyer et al in this issue provides an overview of the most common study designs used for CER. Although the randomized controlled trial remains an important study design for answering CER questions, it has important limitations, including the significant cost and time required from study conception to completion and reporting of results, and a lack of generalizability. CER emphasizes results representative of outcomes experienced by “real-world” patients with a particular condition. The concern about the generalizability of clinical trials arises from the often restrictive enrollment criteria of trials, a physician preference to enroll younger and healthier patients, and the low participation rate of eligible patients. For example, in the Prostate Cancer Intervention Versus Observation Trial, patients with early prostate cancer were randomized between radical prostatectomy and watchful waiting.5 This was one of the most important randomized trials in prostate cancer; however, to illustrate the issue of generalizability, the trial screened 13,022 patients, found 5,023 eligible, and ultimately randomized 731. Whether the results of these 731 patients represent those of all patients with prostate cancer is unknown. In general, it is unclear whether a benefit demonstrated in a controlled clinical trial setting, in a highly selected group of patients, translates to a similar benefit in everyday community oncologic practice for all patients with a specific disease. In addition, for many CER questions, randomized trials may not be feasible owing to patient and physician biases leading to an unwillingness to participate. There is also recognition that, in an environment of limited research resources, not all relevant CER questions can be answered by randomized trials.

In the article by Neuman et al in this issue, several examples further demonstrate this “gap” between the efficacy of treatments measured in clinical trials and the effectiveness of the same treatments in “real-world” settings. As a result, discussions of CER often include observational studies, including prospective cohorts and analyses of large administrative databases such as the Surveillance, Epidemiology, and End Results–Medicare linked data. The strengths of observational studies can complement the weaknesses of randomized trials, and vice versa. Although observational studies, especially population-based prospective cohorts, can provide results that are more timely and generalizable than clinical trial data, they are limited by the patient selection biases inherent in a nonrandomized design. The continued development of sophisticated analytic methods, especially as they apply to CER, can partially mitigate these limitations. Importantly, no study design is perfect, and for the highest-priority CER topics, multiple different types of studies are needed and can provide complementary results to fully inform stakeholders. Meyer et al describe the strengths and limitations of different CER study designs in more detail, whereas the articles by Chen, Aneja, Neuman, and Hirsch provide examples of study designs applied to CER questions in Radiation, Surgical, and Medical Oncology.
Cost-Effectiveness

Decision analysis is another useful methodology for CER. As described in the article by Sher and Punglia, a major strength of decision analysis is the ability to take the best available data (including prospective and retrospective studies, and even expert opinion) and incorporate patient preferences for different outcomes to model comparative outcomes in a variety of clinical scenarios. The power of this methodology is thus the ability to incorporate specific patient characteristics and preferences to arrive at an individualized “best” option for each patient. These models can also incorporate costs to assess the cost-effectiveness of different interventions. One major reason for the recent national emphasis on CER is the recognition that escalating health care costs in the United States are unsustainable6; a sizable portion of these rising costs is due to cancer care.7,8 Cost-effectiveness studies are especially relevant to the assessment of new treatments and technologies. If newer and costlier technologies are indeed better than existing technologies by providing an improvement in relevant patient outcomes, which needs to be demonstrated first, then an understanding of the costs associated with this incremental benefit becomes a central question for both policymakers and patients, who shoulder part or all of the cost burden. Even if cost-effectiveness is not a formal part of policy or coverage decisions for a new technology, these analyses can provide critical information regarding the value provided by different treatments and technologies.
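Cost-effectiveness comparisons of the kind described above typically reduce to an incremental cost-effectiveness ratio (ICER): the additional cost of the new intervention divided by its additional health benefit, conventionally measured in quality-adjusted life-years (QALYs). The minimal sketch below illustrates the standard calculation; all dollar amounts and QALY values are purely hypothetical and are not drawn from any study cited in this issue.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars spent
    per additional quality-adjusted life-year (QALY) gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical example: a new technology costing $40,000 vs an
# existing one costing $25,000, yielding 5.5 vs 5.0 QALYs.
ratio = icer(40_000, 25_000, 5.5, 5.0)
print(f"${ratio:,.0f} per QALY gained")  # $30,000 per QALY gained
```

Whether a given ratio represents good value is a separate policy judgment (thresholds vary by health system), which is why these analyses inform, rather than dictate, coverage decisions.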

Technology Assessment and a Race Between Research and Diffusion

Technology assessment is a common topic for CER in oncology, with numerous examples throughout Radiation, Surgical, and Medical Oncology (see articles by Chen, Aneja, Neuman, and Hirsch, in this issue). One important issue facing the United States health care system is the rapid diffusion of new technology, often before research is conducted to demonstrate a benefit in patient outcomes compared with older or current technology. In prostate cancer, the use of robotic prostatectomy has increased dramatically, replacing the older open surgical technique9; similarly, intensity-modulated radiation therapy almost completely replaced the older conformal radiation technique over a span of 8-10 years.10,11 Once a technology has become widely adopted, CER is no longer possible or relevant. Neuman et al describe a randomized trial examining the comparative patient outcomes of sentinel lymph node biopsy vs axillary dissection for breast cancer. Before results from this trial were reported, sentinel node biopsy had already been widely disseminated into clinical practice and adopted as a standard of care. As this trend of rapid diffusion continues, researchers have a limited window, perhaps as little as 8-10 years, to produce meaningful research results that can still inform patients and other stakeholders. This is an important consideration when selecting CER study designs and end points (ie, overall survival may not always be feasible), and is another reason for collaborating with stakeholders early during study design. Importantly, advances in oncologic treatments do not always involve increasing costs. As Aneja and Yu detail in their article on radiation therapy, the development and continued study of stereotactic radiation, hypofractionation, and brachytherapy have the potential to significantly reduce not only the cost of treatment but also treatment time and the burden on patients.

Should policymakers take a firm stance requiring CER evidence before allowing coverage for a new treatment or technology? This can be problematic because it would prevent patients from accessing promising new treatments for many years while research evidence accumulates. As described in the article by Chen, “Coverage with Evidence Development” is a compromise in which reimbursement is provided for new technologies (allowing patients to have access) with a requirement for CER data collection. After a defined period, these data are analyzed, and if the new technology is found to be ineffective, reimbursement can stop. This type of policy has the potential to be a win-win for patients and policymakers, who are the primary stakeholders for CER. However, caution is needed. Because this type of policy allows, and perhaps encourages, the diffusion of a new technology, the design of the associated study must be sound, because there is unlikely to be an opportunity to conduct additional CER studies once the technology becomes widely adopted. Further, one potential consequence of this policy is removal of the need or incentive to conduct randomized trials. Even with a nontrial design, CER studies evaluating a new technology should include a comparator arm of current technology to assess whether the new technology provides an incremental benefit. Given these considerations, coverage with evidence development should be used selectively and reserved for the most promising technologies, for which patient access cannot wait until after research is conducted.

Dissemination of Research Findings

The article by Lawrence from the Agency for Healthcare Research and Quality provides a perspective on how data from CER studies can be used to inform policy and patient decision making. As the central goal of CER is to provide data that will assist stakeholders in making informed decisions, which lead to improved health outcomes,1 CER does not end when the research is complete. To fulfill the goal of CER, dissemination of research findings into the hands of patients and policymakers is of the highest importance. This goes beyond the traditional methods of dissemination through manuscript publications and scientific meeting presentations, and it requires a significant amount of additional effort and funding. As direct-to-consumer advertising has helped fuel the demand for and rapid diffusion of new technologies in oncology, researchers need to put forth a concerted effort to make CER data more accessible to patients to inform their decision making. One powerful mechanism for dissemination is described in the Neuman article. The partnership between the Alliance for Clinical Trials in Oncology cooperative group and the Commission on Cancer, which provides accreditation to cancer centers across the United States, provides a direct channel for research evidence to be incorporated into clinical practice guidelines, which can then be used to evaluate the performance of cancer centers seeking accreditation. This is a strong incentive for the adoption of evidence-based cancer care.

In the North Carolina Prostate Cancer Comparative Effectiveness & Survivorship Study, a population-based cohort of patients with localized prostate cancer diagnosed from 2011-2013, patients were asked about their perceptions of the relative efficacy and quality-of-life effects of different treatment options.12 Significantly more patients (42%) reported that robotic prostatectomy provides the best chance of cure among all treatment options, whereas only 14% chose open prostatectomy. Furthermore, more patients believed that there would be less urinary and sexual dysfunction after robotic than after open prostatectomy. Although research on the comparative patient outcomes of open vs robotic prostatectomy is still evolving, there is currently no evidence that robotic prostatectomy is more effective or causes less long-term morbidity than open prostatectomy. These findings demonstrate that patients can have a favorable perception of a new treatment technology even before definitive research evidence exists; they highlight the challenges researchers face against the background of other sources of information for patients; and they further emphasize the importance of disseminating research findings if CER is to have an effect on clinical practice.

Building the Infrastructure for CER

The recognition of the importance of CER has led to a rapid response by the research community to build the infrastructure needed to conduct this type of research, and the funding to support it. In addition to the Patient-Centered Outcomes Research Institute, current efforts by cancer-specific groups such as the Radiation Oncology Institute,12 the American College of Surgeons (Neuman, in this issue), and the American Society of Clinical Oncology (Hirsch, in this issue), along with the continued development of electronic medical record systems,13 will add tremendously to the current capabilities for CER. The hope is that these efforts will lead to a culture in the United States that closely connects researchers with patients, payers, and policymakers, who collaborate to maintain a sustainable health care system that provides high-quality care to cancer patients.

Ronald C. Chen, MD, MPH
Department of Radiation Oncology
Cecil G. Sheps Center for Health Services Research
Lineberger Comprehensive Cancer Center
University of North Carolina at Chapel Hill
Chapel Hill, NC

References
1. Institute of Medicine: Initial National Priorities for Comparative Effectiveness Research. Washington, DC, The National Academies Press, 2009
2. Pearson SD: Cost, coverage, and comparative effectiveness research: The critical issues for oncology. J Clin Oncol 30:4275-4281, 2012
3. Concannon TW, Meissner P, Grunbaum JA, et al: A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med 27:985-991, 2012
4. Greenberg C, Wind J, Chang G, et al: Stakeholder engagement for comparative effectiveness research in cancer care: Experience of the DEcIDE Cancer Consortium. J Comp Eff Res, 2013
5. Wilt TJ: The Prostate Cancer Intervention Versus Observation Trial: VA/NCI/AHRQ Cooperative Studies Program #407 (PIVOT): Design and baseline results of a randomized controlled trial comparing radical prostatectomy with watchful waiting for men with clinically localized prostate cancer. J Natl Cancer Inst Monogr 24:184-190, 2012
6. Spiro T, Lee EO, Emanuel EJ: Price and utilization: Why we must target both to curb health care costs. Ann Intern Med 157:586-590, 2012
7. Meropol NJ, Schrag D, Smith TJ, et al: American Society of Clinical Oncology guidance statement: The cost of cancer care. J Clin Oncol 27:3868-3874, 2009
8. Mariotto AB, Yabroff KR, Shao Y, et al: Projections of the cost of cancer care in the United States: 2010-2020. J Natl Cancer Inst 103:117-128, 2011
9. Hu JC, Gu X, Lipsitz SR, et al: Comparative effectiveness of minimally invasive vs open radical prostatectomy. JAMA 302:1557-1564, 2009
10. Goldin G, Sheets N, Meyer A, et al: Comparative effectiveness of intensity modulated radiation therapy and conventional conformal radiation therapy in the treatment of prostate cancer after radical prostatectomy. JAMA Intern Med 173(12):1136-1143, 2013
11. Sheets NC, Goldin GH, Meyer AM, et al: Intensity-modulated radiation therapy, proton therapy, or conformal radiation therapy and morbidity and disease control in localized prostate cancer. JAMA 307:1611-1620, 2012
12. Chen R, Nielsen M, Reeve B, et al: Perceptions regarding prostate cancer (CaP) treatment options: Results from the North Carolina Prostate Cancer Comparative Effectiveness and Survivorship Study (NC ProCESS). J Clin Oncol 31(suppl), 2013 (abstr 6530)
13. Miriovsky BJ, Shulman LN, Abernethy AP: Importance of health information technology, electronic health records, and continuously aggregating data to comparative effectiveness research and learning health care. J Clin Oncol 30:4243-4248, 2012
