Editorial

Should comparative effectiveness research ignore industry-funded data?

Adam G Dunn* & Enrico Coiera
Centre for Health Informatics, Australian Institute of Health Innovation, The University of New South Wales, Sydney, Australia
*Author for correspondence: [email protected]




Keywords: clinical trials • complexity • conflicts of interest • evidence reversals • industry funding • publication bias • reporting bias • systematic reviews

Background
Industry-sponsored research makes up a substantial proportion of the evidence that clinicians rely on to determine the comparative efficacy of drug interventions, yet industry-sponsored evidence can make interventions look safer and more effective than they really are. This problematic influence on the quality of clinical evidence might prompt suggestions to ignore industry-funded data in comparative effectiveness research. Doing so, however, would leave critical and potentially dangerous gaps in the evidence base of many drug classes. Here, we consider whether improvements in the transparency, sharing and surveillance of evidence might allow us to make full use of all sources of clinical evidence, regardless of provenance.

Industry-sponsored trials make up a substantial proportion of all clinical trials. For new drug interventions especially, evaluations of drug safety and efficacy are critically reliant on industry sponsorship. At the same time, there is clear evidence that industry-sponsored trials and data syntheses are systematically different from those produced without industry sponsorship or financial conflicts of interest. Should we thus ignore industry data entirely, or instead accept a level of bias in evidence that sometimes makes interventions seem safer and more effective than they really are? There is a third option – the development of new forms of evidence surveillance that can detect early signals of biases in the process of evidence production and translation, and trigger efforts to repair the evidence base.


The provenance of clinical evidence
From studies examining how and when trials are funded, undertaken and published, it is clear that the system of evidence production is largely inefficient. The public funding of clinical research is not well aligned with the burden of disease [1,2]. Only around 66% of registered trials are published [3], and when results are published, about half of the efficacy outcomes and less than half of the harm outcomes are correctly reported [4]. There were 93,986 trials registered on ClinicalTrials.gov with completion dates between 2007 and 2013 (as of 5 May 2014). Trials that were completely or partially sponsored by industry made up 41% of these registrations, and they were more likely to be labeled as completed (74% compared with 56%; p < 0.001, Fisher's exact test) and were larger (median enrollment 78 compared with 60; p < 0.001, two-sample Kolmogorov–Smirnov test). Thus, nearly half (48%) of all completed trials in the period were sponsored wholly or partially by industry. It is therefore very difficult to argue that we should completely ignore data from industry-sponsored research in a system where nearly half of the clinical evidence produced relies on this funding, the rest is poorly aligned with the burden of disease, and substantial proportions never reach the public domain because of missing or inadequate reporting.
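To make the registry comparison above concrete, here is a minimal sketch in Python of how such an analysis might be run. It assumes a flat export of ClinicalTrials.gov records with hypothetical column names (sponsor_class, status and enrollment); the filename and field values are illustrative, not the registry's actual schema.

```python
# Sketch of the registry comparison reported above. Assumes a hypothetical
# flat CSV export of ClinicalTrials.gov records with columns:
# 'sponsor_class' ('industry' or 'other'), 'status' and 'enrollment'.
import pandas as pd
from scipy.stats import fisher_exact, ks_2samp

trials = pd.read_csv("ctgov_2007_2013.csv")  # hypothetical export filename

industry = trials[trials["sponsor_class"] == "industry"]
other = trials[trials["sponsor_class"] == "other"]

# Completion rates: 2x2 contingency table, tested with Fisher's exact test
table = [
    [(industry["status"] == "Completed").sum(),
     (industry["status"] != "Completed").sum()],
    [(other["status"] == "Completed").sum(),
     (other["status"] != "Completed").sum()],
]
odds_ratio, p_completed = fisher_exact(table)

# Enrollment distributions: two-sample Kolmogorov-Smirnov test
ks_stat, p_enrollment = ks_2samp(industry["enrollment"].dropna(),
                                 other["enrollment"].dropna())

print(f"Completed: {table[0][0] / len(industry):.0%} vs "
      f"{table[1][0] / len(other):.0%} (p = {p_completed:.3g})")
print(f"Median enrollment: {industry['enrollment'].median():.0f} vs "
      f"{other['enrollment'].median():.0f} (p = {p_enrollment:.3g})")
```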




The impact of bias on the quality of clinical evidence
Even though industry-sponsored trials are often larger and tend to meet higher standards of quality than trials without industry sponsorship [5], they are also less likely to be published, more likely to report positive results [3] and more likely to have conclusions that do not match the results [6]. For some conditions, industry-sponsored trials are less likely to consider head-to-head comparisons [7,8] and less likely to consider safety outcomes [8]. These systematic biases have been linked to the slow withdrawal of unsafe drugs, with consequences that included widespread harm [9,10]. There is also some evidence that systematic manipulation of reviews by authors with conflicts of interest may have contributed to the slow change in policy and practice for at least one intervention [11]. It is unknown exactly how many of the withdrawals and restrictions that occur each year are delayed as a consequence of industry influence over the design, reporting and synthesis of evidence.

Systematic reviews are designed to fairly and reliably represent the entire evidence base for an intervention or condition, but they are not a panacea for mitigating the biases associated with industry sponsorship. Biases specific to industry sponsorship may pass through systematic reviews undetected by standard tools [12], and systematic reviews themselves may be influenced by the authors' financial conflicts of interest [13].






It is therefore also very difficult to argue that we should unquestioningly rely on data from industry-sponsored research in a system where the biases that are introduced are systemic and can flow through evidence translation into policy decisions and clinical decision making.

The impact of missing evidence on the quality of clinical evidence
Even though the rate of clinical evidence production is accelerating [14], for many conditions and population groups there is still not enough evidence to determine which treatment is safest and most cost effective. These gaps in evidence disproportionately affect comparative effectiveness research, where head-to-head trial designs make up only 22% of the trials registered in the US [15]. A lack of evidence can lead to problems in the quality of care. Evidence reversals – where established practices are found to be ineffective or unsafe – are surprisingly common [16]. Delays in identifying evidence reversals are not only associated with the negative influence of biased research, but can also be the direct consequence of a lack of published evidence. For example, it took a decade before issues related to cardiovascular risk were identified for rosiglitazone [10], partially because of the lack of definitive studies assessing safety. For oseltamivir, the evidence supporting its use in the prophylaxis and treatment of influenza is still in dispute more than 15 years after approval [17].

Evidence surveillance
Improvements in transparency and advances in evidence surveillance may provide an escape route from a false choice: reluctantly accepting the potentially biased evidence that comes with industry influence, or ignoring it and not having enough evidence to compare the safety and efficacy of available treatments. The trend towards transparency arguably started with the registration of trials a decade ago [18], and has continued recently with a push to mandate the sharing of results. In 2012, we considered what an open-source community of clinical trial data sharing might look like, and where the motivations for sharing might come from [19]. Around the same time, Doshi et al. [20] shared their experience of problems with access to clinical study reports in relation to oseltamivir. In the 2 years since, we have seen the development of the AllTrials movement; statements from GlaxoSmithKline, Sanofi and Novartis that they will release patient-level data from their own trials; and legislation from the European Parliament to make the registration and sharing of trial results compulsory from 2016. In the near future, we can expect to see the summary results of nearly all new clinical trials available in several large repositories and linked to registrations. This means that, for the first time, humans and data-mining algorithms will have free and open access to examine and synthesize results across trials.

Even after we automate systematic reviews [21], the systematic differences between industry-sponsored trials and all others may not be detected by current risk-of-bias tools [6]. Evidence reversals would thus still rely on the foresight of individuals willing to challenge the consensus through meta-analyses or large clinical trials. As an alternative, automated surveillance of trial registrations and reporting could provide signals whenever further investigation is warranted. For example, automated monitoring of trial registrations would provide a more direct evaluation of the gaps in the research agendas of industry and non-industry funders [7,8], and direct measures of publication bias [3]. Discrepancies between registration and publication are already used to identify potential reporting bias [22], and verification of registration against reporting could be built into the peer-review process. Finally, new applications of machine learning to published manuscripts may enable the detection of spin in reporting [23] and of problematic selective citation in reviews [24]. In each case, the tools of evidence surveillance could be used to help mitigate the influence of biases, or to provide the information needed to better target research funding where the gaps and biases are most problematic.
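As an illustration of how registration-versus-publication verification might be automated, the following is a minimal sketch under stated assumptions: the trial identifiers and outcome strings are invented placeholders, and the normalized exact-match rule is deliberately naive compared with the outcome matching a production screening tool would need.

```python
# Minimal sketch of screening for discrepancies between registered and
# published primary outcomes. All trial IDs and outcome strings below are
# hypothetical placeholders; real screening would need far more robust
# outcome matching than a normalized exact comparison.
import re

def normalize(text: str) -> str:
    """Crude normalization: lower-case and collapse punctuation and whitespace."""
    return " ".join(re.sub(r"[^a-z0-9 ]", " ", text.lower()).split())

def flag_discrepancy(registered: str, published: str) -> bool:
    """Flag a trial for review when the registered and published primary
    outcomes differ after normalization."""
    return normalize(registered) != normalize(published)

trials = [
    # (trial ID, primary outcome as registered, primary outcome as published)
    ("NCT00000001", "Change in HbA1c at 24 weeks", "Change in HbA1c at 24 weeks"),
    ("NCT00000002", "All-cause mortality at 1 year", "Hospitalization at 1 year"),
]

for trial_id, registered, published in trials:
    if flag_discrepancy(registered, published):
        print(f"{trial_id}: registered and published primary outcomes differ")
```

Even a rule this crude would surface candidates for the kind of manual verification that could be built into peer review; the hard problem is matching outcomes that have been reworded rather than switched.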

Conclusion The development of an ongoing and systematic approach to the surveillance of gaps and biases in clinical evidence may negate the need to discount or distrust evidence from industry-funded trials. Together with the release of patient-level data from all trials, these developments may help us to make the most of all clinical research data, regardless of what motivated their production.

References

1. Gross CP, Anderson GF, Powe NR. The relation between funding by the National Institutes of Health and the burden of disease. N. Engl. J. Med. 340, 1881–1887 (1999).
2. Stuckler D, King L, Robinson H, McKee M. WHO's budgetary allocations and burden of disease: a comparative analysis. Lancet 372, 1563–1569 (2008).
3. Bourgeois FT, Murthy S, Mandl KD. Outcome reporting among drug trials registered in ClinicalTrials.gov. Ann. Intern. Med. 153, 158–166 (2010).
4. Chan A, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA 291, 2457–2465 (2004).
5. Califf R, Zarin D, Kramer J, Sherman R, Aberle L, Tasneem A. Characteristics of clinical trials registered in ClinicalTrials.gov, 2007–2010. JAMA 307, 1838–1847 (2012).
6. Lundh A, Sismondo S, Lexchin J, Busuioc OA, Bero L. Industry sponsorship and research outcome. Cochrane Database Syst. Rev. 12, MR000033 (2012).
7. Dunn AG, Mandl KD, Coiera E, Bourgeois FT. The effects of industry sponsorship on comparator selection in trial registrations for neuropsychiatric conditions in children. PLoS ONE 8, e84951 (2013).
8. Dunn AG, Bourgeois FT, Murthy S, Mandl KD, Day RO, Coiera E. The role and impact of research agendas on the comparative-effectiveness research among antihyperlipidemics. Clin. Pharmacol. Ther. 91, 685–691 (2012).
9. Topol EJ. Failing the public health – rofecoxib, Merck, and the FDA. N. Engl. J. Med. 351, 1707–1709 (2004).
10. Mullard A. The long Avandia endgame. Lancet 378, 113 (2011).
11. Wang AT, McCoy CP, Murad MH, Montori VM. Association between industry affiliation and position on cardiovascular risk with rosiglitazone: cross sectional systematic review. BMJ 340, c1344 (2010).
12. Bero L. Industry sponsorship and research outcome: a Cochrane review. JAMA Intern. Med. 173(7), 1–2 (2013).
13. Bes-Rastrollo M, Schulze MB, Ruiz-Canela M, Martinez-Gonzalez MA. Financial conflicts of interest and reporting bias regarding the association between sugar-sweetened beverages and weight gain: a systematic review of systematic reviews. PLoS Med. 10, e1001578 (2013).
14. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 7, e1000326 (2010).
15. Bourgeois FT, Murthy S, Mandl KD. Comparative effectiveness research: an empirical study of trials registered in ClinicalTrials.gov. PLoS ONE 7, e28820 (2012).
16. Prasad V, Vandross A, Toomey C et al. A decade of reversal: an analysis of 146 contradicted medical practices. Mayo Clin. Proc. 88, 790–798 (2013).
17. Krumholz HM. Neuraminidase inhibitors for influenza. BMJ 348, g2548 (2014).
18. Dickersin K, Rennie D. Registering clinical trials. JAMA 290, 516–523 (2003).
19. Dunn AG, Day RO, Mandl KD, Coiera E. Learning from hackers: open-source clinical trials. Sci. Transl. Med. 4, 132cm5 (2012).
20. Doshi P, Jefferson T, Del Mar C. The imperative to share clinical study reports: recommendations from the Tamiflu experience. PLoS Med. 9, e1001201 (2012).
21. Tsafnat G, Dunn A, Glasziou P, Coiera E. The automation of systematic reviews. BMJ 346, f139 (2013).
22. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 302, 977–984 (2009).
23. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA 303, 2058–2064 (2010).
24. Jannot A-S, Agoritsas T, Gayet-Ageron A, Perneger TV. Citation bias favoring statistically significant studies was present in medical research. J. Clin. Epidemiol. 66, 296–301 (2013).


Financial & competing interests disclosure This work was supported by funding from the National Health and Medical Research Council (Project Grant 1045065) and the NHMRC Centre for Research Excellence in E-Health. The study sponsor played no role in the preparation of the manuscript. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed. No writing assistance was utilized in the production of this manuscript.


