CLINICAL TRIALS

COMMENTARY

Clinical Trials 2014; 11: 15–18

Commentary on Berlin et al.

Ben Goldacre

Research Fellow, Department of Non Communicable Disease Epidemiology, London School of Hygiene & Tropical Medicine, London, UK. Author for correspondence: Ben Goldacre, Research Fellow, Department of Non Communicable Disease Epidemiology, London School of Hygiene & Tropical Medicine, Keppel St, London WC1E 7HT, UK. Email: [email protected]

DOI: 10.1177/1740774513515768

The public discourse on sharing individual patient data (IPD) from clinical trials has shifted rapidly, and the article by Berlin et al. [1] in this issue exemplifies the best and worst of this transition.

The benefits of sharing IPD are clear. It allows third parties to verify trialists' own analyses of their own data, especially where flawed analytic techniques have been used in the initial published results. It permits meta-analysis of pooled IPD from multiple trials, which can give more accurate point estimates of benefits. That same technique offers greater statistical power for network meta-analyses on relative benefits of treatments and, crucially, greater power to conduct subgroup analyses, which can in turn help predict the best or worst responders to a treatment. Where adverse event data have been well coded, an IPD meta-analysis can permit more powerful exploratory analyses of side effects. It facilitates innovation, by allowing new hypotheses to be explored in existing data. Finally, where data are shared on abandoned products, it presents an opportunity to identify signals that suggest potentially beneficial new uses for old treatments.

All these research questions could be answered by running new large trials, instead of repurposing old data, but that would be an inefficient use of resources, the most important being patients. People often tolerate both risk and inconvenience to participate in trials, expecting that the results of their own experience, pooled with others, will help improve treatment decisions for patients like them in the future. Failing to use the data is a betrayal of past participants, but may also present ethical challenges for future work, because it is arguably unacceptable to expose new patients to randomization where the clinical question being interrogated could be easily answered with data that have already been collected.

Over the past year, there have been broadly positive statements on data sharing from various bodies including the European Medicines Agency [2], the European Federation of Pharmaceutical Industries and Associations and the Pharmaceutical Research and Manufacturers of America [3], GlaxoSmithKline [4], Roche [5], Leo Pharma [6], and more, with an Institute of Medicine review on best practice to come, as well as a Food and Drug Administration consultation. This represents strong progress.

But we should not be so naive as to imagine that IPD sharing will happen soon, and there is a great deal we can learn from the ongoing failure to ensure that even summary results from clinical trials are reported. Iain Chalmers, the founder of the Cochrane Collaboration, described 7 years ago how a rush of positive activity on publication bias in the late 1990s soon deteriorated into failure, despite various codes of conduct and promises from major drug companies [7]. There is an active campaign at http://www.alltrials.net seeking to address this ongoing issue, almost three decades after the phenomenon of trial results being withheld from doctors and patients was first quantitatively documented [8].

The current National Institute for Health Research (United Kingdom) review summarizes dozens of studies on publication bias: overall, the chances of a trial being published are roughly half [9]. While much of this evidence covers trials from the past, it is far from historical, as these trials cover the risks and benefits of the treatments in use today. The problem also persists for current trials: the most recent study still finds that only half of all trials on clinicaltrials.gov have posted results 3 years after completion [10], and legislation on this issue has failed. The Food and Drug Administration Amendments Act 2007 required trials to post results on clinicaltrials.gov within 12 months, but offered no routine public audit of compliance: the only published paper on compliance estimates that 78% of trials failed to post results as required [11].

At the same time, access to Clinical Study Reports is becoming actively worse. These long documents are created for industry trials and are little known in the academic community, but contain a wealth of detail
on methods and results that is often missing from other sources: a recent study by the Institute for Quality and Efficiency in Health Care (IQWiG), the German cost-effectiveness agency, estimates that Clinical Study Reports contain twice as much information on benefits and harms as academic papers on trials [12]. From 2010, the European Medicines Agency began releasing Clinical Study Reports on request, after a European Ombudsman ruling of maladministration against the agency for withholding such information [13]. However, the companies AbbVie and InterMune have now sued the European Medicines Agency [14], with the support of the European Federation of Pharmaceutical Industries and Associations and the Pharmaceutical Research and Manufacturers of America, and have obtained an interim ruling that is currently preventing the European Medicines Agency from releasing any further Clinical Study Reports to any applicant on any treatment [15].

These episodes are not merely historical context for current plans and promises on IPD sharing; they also represent important statistical context. It is inevitable that only a subset of all trials will be made available in IPD form, if only for practical reasons such as information technology challenges, data loss, or problems with consent forms. As a consequence, we may simply see the same problem of biased dissemination as before, but with the false reassurance of greater methodological rigor: IPD meta-analysis, conducted to high standards of competence, but on an incomplete and biased subset of the data. Only full access to summary results on all trials can provide the statistical context needed to assess this source of bias, and identify whether the shared IPD provides a complete or representative dataset.

Poor progress with summary results also shows the importance of addressing arguments made against transparency and ensuring that a proportionate approach is taken. Berlin et al. explain that we must be aware of the opportunity costs from industry resources being spent on sharing data instead of new research work; but the marginal extra workload of making one existing dataset available is trivial, compared to the enormous task of conducting an entire clinical trial to collect new data.

Berlin et al. also suggest that third parties accessing trial data must be competent and prove that their analyses will be of high quality. This is welcome, but it raises questions about how such adjudications will be applied, and whether the process can be independent and transparent. Furthermore, while any focus on improving analytic standards is welcome, an emphasis on secondary analyses seems misplaced, since researchers commonly conduct and publish problematic analyses of their own trial data, for example, by switching primary outcomes [16], or by using questionable methods to handle missing data [17]. While prior deposition of statistical analysis plans and protocols, with universal reporting of these preplanned analyses, is a good suggestion, this is a much more pressing issue for existing primary analyses by trialists on their own data, which are often regarded by clinicians as canonical, than for notional secondary analyses by third parties, which have not yet begun to happen.

Berlin et al. join others in arguing that clinicians may be confused by poor-quality or contradictory analyses of IPD, but here again, it is unclear whether the concern is proportionate to the scale of the problem. There are 28,100 scientific journals in print today, with over a million articles published each year [18] and over 23 million papers indexed in PubMed. Work of poorer quality is routinely conducted and published already: it is managed – to a reasonable degree – in the academic ecosystem of evidence synthesis and critical appraisal before it can impact on practice. Any harm that might theoretically arise from a fractional increase in the total quantity of weak academic publications must be balanced against the huge benefits of wider access to IPD.

It has also been argued that popular health scares, such as those around vaccination, might be fuelled by flawed analyses of IPD if these were to be more widely shared [19]. This is a surprising analysis of anti-vaccination campaigns, since they are often actively driven by a widespread popular belief that drug companies, regulators, and the research community are conspiring to withhold information about the risks and benefits of treatments from patients and the public. Actively withholding such information is unlikely to resolve these concerns, and public trust in medicine and the regulatory process is likely to be enhanced more through transparency than secrecy.

Patient consent for research represents a more complex area, with opportunities for rapid positive steps. It is currently being suggested in various forums on IPD sharing that the wording of previous consent forms may pose a barrier to transparency. This may be true in some cases. However, data from any one trial are already worked on by a wide range of actors from varied institutions, including Clinical Research Organization staff, company statisticians, nurses engaged in routine clinical care, physicians acting as consultants, academic collaborators, commercial medical writers with statistical skills, and more. It is not clear that an academic conducting a pooled meta-analysis would necessarily be excluded by any wording that can include all the preceding individuals and organizations. In any case, this is a matter for further research, and a survey of the limits on IPD sharing in a representative sample of consent forms would be very welcome, as would survey
data or qualitative work on previous and future participants' beliefs and expectations. It may also be prudent to take swift global action and ensure that all consent forms from now on, at least, give explicit permission for IPD to be appropriately shared.

Finally, it is odd to note that much current discussion on IPD sharing seems to neglect the rich history and experience in this field. The first IPD meta-analysis was published in 1970 [20], with numerous notable examples since, for example, the influential evaluation of breast cancer treatment by the Early Breast Cancer Trialists' Collaborative Group [21]. There is also three decades' worth of experience from epidemiologists and industry researchers sharing access to the full electronic health records of National Health Service (United Kingdom) patients for observational research [22], which raises many of the same issues concerning patient confidentiality.

None of the criticisms above should detract from the welcome positive statements on transparency from Berlin et al. or other industry representatives. However, while the practical challenges are discussed, there is one clear threat to patient care that deserves urgent attention. The battle for access to summary results from clinical trials has lasted for three decades, without resolution, and it is now clear that during the long delay, large quantities of withheld trial results from even a decade ago have been irretrievably lost. Although data sharing proposals from the European Medicines Agency and the European Federation of Pharmaceutical Industries and Associations only relate to IPD from new trials after 2014, it is clear that retrospective access is important, as the treatment decisions in current clinical practice are informed by the trials of the past, conducted on the medicines we are using today.

Therefore, while the wider community discusses details around the implementation of IPD sharing, such as who should hold and manage access, there is one precautionary step that will help prevent any further deterioration. We could identify all existing IPD from clinical trials, past and present; take simple archive copies, in whatever chaotic formats they may be found; and store them under lock and key, respecting the trialists' wishes on wider sharing, until such time as new norms are agreed. Only an ambitious and urgent program of data archiving will protect patients from the harm that will otherwise arise from deletions and accidental losses while we debate the right way forward. Hopefully, this time, it might take less than three decades.

Funding

Ben Goldacre holds a Wellcome Clinical Research Fellowship at LSHTM.

Conflict of interest

I am a cofounder of AllTrials, a campaign group that calls for all trials to be registered, with full summary results reported. I receive income from public speaking and writing for lay audiences about problems in science and medicine, including access to trial results.

References

1. Berlin JA, Morris S, Rockhold F, et al. Bumps and bridges on the road to responsible sharing of clinical trial data. Clin Trials 2014; 11(1): 7–12.
2. European Medicines Agency. Draft policy 70: Publication and access to clinical-trial data, 2013. Available at: http://www.ema.europa.eu/ema/doc_index.jsp?curl=pages/includes/document/document_detail.jsp?webContentId=WC500144730&;murl=menus/document_library/document_library.jsp&mid=0b01ac058009a3dc
3. Pharmaceutical Research and Manufacturers of America, European Federation of Pharmaceutical Industries and Associations. Principles for responsible clinical trial data sharing, 2013. Available at: http://phrma.org/sites/default/files/pdf/PhRMAPrinciplesForResponsibleClinicalTrialDataSharing.pdf
4. Nisen P, Rockhold F. Access to patient-level data from GlaxoSmithKline clinical trials. N Engl J Med 2013; 369(5): 475–78.
5. Kmietowicz Z. Roche says it will not relinquish control over access to clinical trial data. BMJ 2013; 346: f1374.
6. Adams B. LEO Pharma to release clinical trial data. Pharmafile, 2013. Available at: http://www.pharmafile.com/news/181363/leo-pharma-release-clinical-trial-data
7. Chalmers I. From optimism to disillusion about commitment to transparency in the medico-industrial complex. J R Soc Med 2006; 99(7): 337–41.
8. Simes RJ. Publication bias: The case for an international registry of clinical trials. J Clin Oncol 1986; 4(10): 1529–41.
9. Song F, Parekh S, Hooper L, et al. Dissemination and publication of research findings: An updated review of related biases. Health Technol Assess 2010; 14(8): iii, ix–xi, 1–193.
10. Huser V, Cimino JJ. Linking ClinicalTrials.gov and PubMed to track results of interventional human clinical trials. PLoS One 2013; 8(7): e68409.
11. Prayle AP, Hurley MN, Smyth AR. Compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov: Cross sectional study. BMJ 2012; 344: d7373.
12. Wieseler B, Wolfram N, McGauran N, et al. Completeness of reporting of patient-relevant clinical trial outcomes: Comparison of unpublished clinical study reports with publicly available data. PLoS Med 2013; 10(10): e1001526.
13. Goldacre B. Bad Pharma: How Medicine is Broken, and How We Can Fix It. Fourth Estate, London, 2013.
14. Bodoni S. AbbVie, InterMune sue to block clinical-trial data release. Bloomberg, 2013. Available at: http://www.bloomberg.com/news/2013-03-11/abbvie-sues-eu-regulator-to-block-clinical-trial-data-release.html
15. AllTrials. Medical researchers denied clinical trial information, 2013. Available at: http://www.alltrials.net/news/medical-researchers-denied-clinical-trial-information/
16. Vedula SS, Li T, Dickersin K. Differences in reporting of analyses in internal company documents versus published trial reports: Comparisons in industry-sponsored trials in off-label uses of gabapentin. PLoS Med 2013; 10(1): e1001378.
17. O'Connor AB. The need for improved access to FDA reviews. JAMA 2009; 302(2): 191–93.
18. Ware M, Mabe M. An Overview of Scientific and Scholarly Journal Publishing. International Association of Scientific, Technical and Medical Publishers, 2012. Available at: http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf
19. Eichler HG, Abadie E, Breckenridge A, et al. Open clinical trial data for all? A view from regulators. PLoS Med 2012; 9(4): e1001202.
20. International Anticoagulant Review Group. Collaborative analysis of long-term anticoagulant administration after acute myocardial infarction. Lancet 1970; 295(7640): 203–09.
21. Early Breast Cancer Trialists' Collaborative Group (EBCTCG), Darby S, McGale P, et al. Effect of radiotherapy after breast-conserving surgery on 10-year recurrence and 15-year breast cancer death: Meta-analysis of individual patient data for 10,801 women in 17 randomised trials. Lancet 2011; 378(9804): 1707–16.
22. Williams T, van Staa T, Puri S, et al. Recent advances in the utility and use of the General Practice Research Database as an example of a UK Primary Care Data resource. Ther Adv Drug Saf 2012; 13(2): 89–99.
