CURRENT OPINION REVIEW

Cost-efficiency of knowledge creation: randomized controlled trials vs. observational studies

Rafael Struck, Georg Baumgarten, and Maria Wittmann

Purpose of review

This article reviews traditional and current perspectives on randomized, controlled trials (RCTs) and observational studies relative to the economic implications for public healthcare stakeholders.

Recent findings

It takes an average of 17 years to bring 14% of original research into clinical practice. Results from high-quality observational studies may complement limited RCTs in primary and secondary literature bases, and enhance the incorporation of sound evidence into clinical guidelines. Observational findings from comprehensive medical databases may offer valuable clues on the effectiveness and relevance of public healthcare interventions. Major expenditures associated with RCTs relate to recruitment, inappropriate site selection, conduct and reporting. Application of business strategies and economic evaluation tools to the planning and conduct of RCTs may enhance clinical trial site performance.

Summary

Considering the strengths and limitations of each study type, clinical researchers should explore the contextual worthiness of either design in promulgating knowledge. They should focus on quality of conduct and reporting, which may liberate limited public and private clinical research funding.

Keywords

cost-efficiency, evidence-based, knowledge promulgation, observational studies, randomized controlled trials

INTRODUCTION

Professional communities engaged in Evidence-Based Medicine and Comparative Effectiveness Research are increasingly concerned with incorporating the results of clinical research into clinical practice [1]. For this purpose, the National Institutes of Health (NIH), among others, have devised a translational roadmap favoring three major pathways that promote cost-efficient creation, synthesis, transfer and utilization of medical knowledge. Initially, a composite of basic research discoveries and identified unmet clinical needs fuels the translation of preclinical research to clinical practice using observational studies, case-control series and phase I/II randomized, controlled trials (RCTs). Promising outcomes then advance to studies of larger scope. The majority of knowledge promulgation involves phase III RCTs and observational studies [2,3]. The outcomes from these RCTs and observational studies are then synthesized in systematic reviews and pooled in meta-analyses. Both information formats provide the evidentiary foundation of clinical guidelines and knowledge tools designed to aid clinical decision-making [3,4].

Recent estimates cite an average of 17 years to bring 14% of original research into clinical practice [5]. The reasons for the research time lag and waste of resources are multiple, often relating to problems with the production, synthesis, transfer and utilization of medical knowledge [3,6]. Studies are often not based on existing evidence [7] or are poorly designed [6]. Study results are often published too slowly [8], not at all [9,10,11] or reported inappropriately [6], and are likely to be false or to have inflated results [12,13]. Consistent models that quantify the economic and social impact of health research are sparse [14,15], and the incorporation of clinical trial

Department of Anesthesiology and Intensive Care Medicine, University Hospital of Bonn, Bonn, Germany

Correspondence to Maria Wittmann, MD, Department of Anesthesiology and Intensive Care Medicine, University Hospital of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn, Germany. Tel: +49 228 287 14134; e-mail: [email protected]

Curr Opin Anesthesiol 2014, 27:190-194 (Volume 27, Number 2, April 2014)
DOI: 10.1097/ACO.0000000000000060
ISSN 0952-7907; www.co-anesthesiology.com

Copyright © Lippincott Williams & Wilkins. Unauthorized reproduction of this article is prohibited.


KEY POINTS

- Comprehensive administrative databases grant cost-efficient access to representative patient populations, enable long-term follow-up, and rarely raise ethical concerns.
- Results of high-quality observational studies may complement RCT data in systematic reviews and meta-analyses.
- Reporting of observational studies according to STROBE guidelines should be considered. Systematic reviews often misuse the STROBE statement, however, to assess the quality of observational studies.
- Major costs incurred by RCTs relate to recruitment, site selection, conduct and monitoring. Only about half of RCTs achieve their recruitment target, and only half of these are completed on time.
- Of 137 000 trials recorded in the ClinicalTrials.gov database (accessed December 2012), fewer than 10% had published results.

results into systematic reviews and/or meta-analyses remains problematic [16]. Community practitioners tend to fall back on shared experiences, depicted as 'collectively reinforced mindlines', rather than strictly adhering to evidence-based guidelines [17]. Understanding the strengths and limitations of RCTs and observational studies as sources of knowledge promulgation may help enhance the translational pathway for beneficial healthcare interventions [18]. This article reviews traditional and current perspectives on both study designs relative to the economic implications for public healthcare stakeholders.

COST-EFFICIENCY OF OBSERVATIONAL STUDIES: FILLING THE KNOWLEDGE GAPS WITH OBSERVATIONAL RESEARCH

The exponential rise of healthcare costs (against declining resources) [19,20] and the lack of incorporation of cost-intensive research into clinical decision models highlight the importance of finding cost-containing solutions to current clinical research practices [18]. In this regard, traditional clinical research methods, promoting cost-intensive RCTs as primary study designs to support evidence-based conclusions, have been widely accepted [21,22-32]. The hierarchical classification of study methods likely evolved from early comparisons of treatment effect estimates of RCTs with historic observational studies, with the latter study design displaying a tendency toward overestimation of effect [33]. These early findings were challenged a decade ago, when several reviews, comparing variations in treatment effect between the two research designs in similar settings and with similar control arms, could not detect significant differences [23,26,34,35]. Furthermore, evidence suggested that agreement of treatment effects may depend on the quality of the studies chosen for comparison [36]: comparing RCTs with high-quality observational studies was more likely to yield similar results than comparisons of studies of mixed quality [36].

The above controversy is likely related to an increase in the quality of applied observational study designs and statistical methods [22,23,26,37]. In this regard, the use of a uniform strategy for treatment allocation may overcome some of the issues associated with 'confounding by indication' [37]. Restrictive cohort designs harness some features of RCTs, such as baseline assessment of prognostic risk factors, well defined eligibility criteria and intention-to-treat strategies [22,37]. Subsequent application of appropriate statistical techniques, such as multivariate analyses and propensity score methods, may account for more than one influential variable and provide enough statistical control to draw observational studies nearer to experimental validity [21,37]. However, both statistical approaches require that the confounding variables are known to the investigator.

In conclusion, the results of high-quality observational trials may complement limited or controversial RCTs when published as systematic reviews and meta-analyses [37]. RCTs that are incorporated into systematic reviews and meta-analyses, and subsequently become the basis for recommended clinical practice, should adhere to reporting guidelines.
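The propensity-score adjustment mentioned above can be illustrated with a minimal sketch. The cohort below is entirely invented: a binary severity stratum confounds treatment allocation ('confounding by indication'), the propensity score is estimated as the treatment frequency within each stratum, and inverse-probability weighting recovers the within-stratum effect that the crude comparison hides. Real analyses would model the propensity with logistic regression over many covariates; as noted above, the approach only corrects for confounders that are actually measured.

```python
from statistics import mean

# Invented observational cohort: (stratum, treated, recovered).
# Severe patients are treated far more often AND do worse at baseline.
rows  = [("mild", 1, 1)] * 4                              # mild, treated: 4/4 recover
rows += [("mild", 0, 1)] * 12 + [("mild", 0, 0)] * 4      # mild, control: 12/16 recover
rows += [("severe", 1, 1)] * 8 + [("severe", 1, 0)] * 8   # severe, treated: 8/16 recover
rows += [("severe", 0, 1)] * 1 + [("severe", 0, 0)] * 3   # severe, control: 1/4 recover

# Crude comparison ignores severity and makes the treatment look harmful.
crude = (mean([y for _, t, y in rows if t == 1])
         - mean([y for _, t, y in rows if t == 0]))

# Propensity score e(x) = P(treated | stratum), estimated from the data itself.
strata = {s for s, _, _ in rows}
e = {s: mean([t for s2, t, _ in rows if s2 == s]) for s in strata}

# Inverse-probability weighting balances severity between the two arms.
def ipw_mean(arm):
    w = [(1 / e[s]) if arm == 1 else (1 / (1 - e[s]))
         for s, t, _ in rows if t == arm]
    y = [yi for _, t, yi in rows if t == arm]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

adjusted = ipw_mean(1) - ipw_mean(0)
print(f"crude effect:    {crude:+.2f}")     # -0.05: treatment looks harmful
print(f"adjusted effect: {adjusted:+.2f}")  # +0.25: benefit emerges
```

In this toy dataset the crude comparison reverses the sign of the true effect; weighting each patient by the inverse of their estimated treatment probability restores it.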
Such guidelines serve as tools to enhance the transparency and comparability of trial reports and thereby reduce reporting bias and related duplication of study results [38]. Likewise, the increasing number of observational studies [19] has raised concern over the quality of reporting of such studies [39,40]. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) initiative presented a checklist of 22 items with the aim of guiding researchers, reviewers and publishers in the reporting of epidemiological studies [41]. However, more than 50% of systematic reviews misuse the STROBE statement as a 'tool to assess the study quality' [42,43]. A variety of additional instruments exist that allow assessment of the methodological quality of observational studies [44].

The current objectives of observational research are multifaceted. In contrast to RCTs, observational designs are a more affordable form of investigation in a clinical setting. As a consequence, the efficient and effective translation of evidence into clinical decision-making may be enhanced and related costs reduced. The spectrum of application includes the prospective evaluation of patient population and disease characteristics, the assessment and comparison of costs or effectiveness associated with diagnostics, treatments and technologies, the investigation of adherence to guidelines, the postmarketing surveillance of therapeutic agents, the detection of responsive subpopulations, the characterization of risk factors and the level of risk, the identification of relevant sources of uncertainty, the provision of information on individual or group activity costs, and the formulation of hypotheses to be tested in subsequent experiments [19]. Comprehensive administrative databases allow many of these issues to be addressed in a cost-efficient manner: they give access to large representative populations as well as small subpopulations and enable long-term follow-up of patients, while maintaining ethical guidelines [21]. Increasing network and sample sizes and extending study periods, however, may limit observational study feasibility. It is therefore advisable to keep the number of investigated variables and outcome parameters small while devising a clear research question [19].
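As a sketch of why such databases are cost-efficient, the snippet below queries a toy in-memory stand-in for an administrative database; the table layout, variable names and records are all invented. A single aggregate query answers an exposure-stratified long-term follow-up question that would otherwise require years of prospective recruitment:

```python
import sqlite3

# Toy stand-in for a comprehensive administrative database (all fields invented).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE admissions (
                   patient_id INTEGER, year INTEGER,
                   technique TEXT, readmitted_1y INTEGER)""")
con.executemany("INSERT INTO admissions VALUES (?, ?, ?, ?)", [
    (1, 2010, "general", 1), (2, 2010, "general", 0), (3, 2010, "regional", 0),
    (4, 2011, "regional", 0), (5, 2011, "general", 1), (6, 2011, "regional", 1),
])

# One aggregate query yields exposure-stratified follow-up rates across
# every patient the system has ever recorded.
rates = {t: (n, r) for t, n, r in con.execute(
    "SELECT technique, COUNT(*), AVG(readmitted_1y) "
    "FROM admissions GROUP BY technique")}
for technique, (n, rate) in sorted(rates.items()):
    print(f"{technique:8s} n={n}  1-year readmission rate {rate:.2f}")
```

The marginal cost of such an analysis is close to zero once the database exists, which is exactly the economic argument made above; the scientific caveats (confounding, selection bias) of course remain.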


COST-EFFICIENCY OF INTERVENTIONAL TRIALS: THE NEED TO IMPROVE RECRUITMENT AND REPORTING

Practicing the methodological rigor of RCTs consumes resources and requires the detailed attention of participating investigators [45,46]. The success of an RCT is vulnerable to multiple factors. Retention and recruitment of trial participants within an anticipated timeframe is considered a complex intervention [45] and represents the hallmark of the scientific and economic success of RCTs [46]. However, it has been postulated that only half of RCTs achieve their recruitment targets, and even fewer are completed on time [46]. In particular, two consecutive reviews demonstrated poor recruitment in 187 publicly funded multicenter RCTs between 1994 and 2008. The authors noted that approximately half of the trials required extension, with only a moderate subsequent success rate. Furthermore, many trials experienced recruitment delays related to organizational barriers. The additional activation of trial centers, which was commonly observed, did not impact trial success. According to these authors, local and central trial staff problems, overestimation of the number of participants available or eligible, and patient denial of consent were the main reasons for failure. Other barriers to recruitment related to the identification of appropriate trial sites, conflicting trials, funding, principal investigator changes, ethics committee queries, local clinical practice and study drug supply [47,48]. Of note, support by clinical trial units and initially higher sample size calculations, with respect to achieving a minimum of 80% statistical power, had a positive impact on recruitment. Trials planned with smaller sizes, ranging from 1 to 200 patients, were more successful in achieving recruitment goals than larger trials aiming to enroll a minimum of 1000 participants [48]. The latter finding supports the observation that slightly over 60% of interventional trials registered at the ClinicalTrials.gov website (from 2000 through 2010) had anticipated enrollment of fewer than 100 participants [11,49]. This is an alarming trend, as expert opinion cautions against the interpretation of 'true associations' obtained from RCTs based on small datasets, which entail inappropriately selected significance thresholds, suboptimal power, early termination strategies and flexible analyses; the effect sizes are likely to be inflated and the subsequent reports biased [12,48].

The development of effective recruitment strategies has thus far proven difficult, despite the identification of potential recruitment barriers. Recently, a business model that incorporates marketing strategies has been suggested as an option to overcome these barriers [50]. In the style of industrial management, the model emphasizes that establishing RCTs should involve proper communication of the scientific value, legitimate and prestigious sources of funding, simple and complete processes that increase site performance (tailored design, protocol, data capture forms and monitoring approaches), public buy-in strategies and appropriate communication codes.
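The interplay between assumed effect size, target power and recruitment burden noted above can be made concrete with the standard normal-approximation sample-size formula for comparing two proportions. This is a simplified sketch (no continuity correction, no allowance for dropout or interim analyses), and the rates chosen are illustrative only:

```python
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two proportions."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for a two-sided 5% test
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a drop in complication rate from 15% to 10%:
print(n_per_arm(0.15, 0.10))              # hundreds of patients per arm at 80% power
print(n_per_arm(0.15, 0.10, power=0.90))  # 90% power costs substantially more
```

Raising the power from 80% to 90%, or halving the expected risk difference, inflates the per-arm target sharply, which is one reason recruitment goals are so often underestimated.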
Investing in recruitment, retention and training of trial staff, collaborators and participants may foster further commitment and positively affect RCT recruitment rates [50,51].

Not every principal investigator or physician scientist is equipped with economic experience or tools, and thus relies on self-educated estimates of the projected costs and profits associated with the conduct of an RCT. In order not to incur debt, the amount of sponsorship should be negotiated on the basis of accurate expense projections covering anticipated and actual preparatory and operational services. Simple financial decision-support tools may be useful to evaluate the economic feasibility and dimension of RCT performance. In a publicly sponsored Canadian multicenter trial, the actual cost and profitability as assessed by an interactive spreadsheet revealed that prescreening and baseline costs were substantially underestimated and the cost of patient visits overestimated [51,52]. The time required to perform minor nonscheduled tasks, such as resolving queries, updating charts or communicating with patients, was also underestimated [51,52].

Industrial and academic experts increasingly question the value and cost-efficiency of on-site monitoring, which consumes up to 30% of the resources of large clinical trials [51,52]. Instead, computer systems that centralize the management of data and detect fraud via statistical methods are now being promoted as cost-efficient and value-adding alternatives [51,52,54,55]. However, risk-based monitoring strategies require predetermination of proficient trial site capabilities and continuous verification of sustainable human resources [51,52,56]. Moreover, site selection based on feasibility questionnaires and prestudy visits may be deceiving and may fail to capture the actual picture as trials progress. Finally, the cost involved in activating additional trial sites in order to meet anticipated recruitment targets should be considered. A monitor should safeguard the training of trial participants and assess trial site qualifications [56].

The majority of biomedical research funding is supported by the pharmaceutical industry [49,57], which certainly may influence human clinical research. Reporting trial failure has a significant impact on stock returns [58], and as a consequence, pharmaceutical companies may choose not to report trial results or may terminate trials early after interim analyses, raising the issue of reporting bias [12,13]. A recent multivariate analysis of almost 137 000 trials recorded in the ClinicalTrials.gov database as of December 2012, and of 18 000 journal publications with trial registration numbers, found that less than one-tenth of registered studies and less than one-seventh of completed studies had published results [59].
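A financial decision-support tool of the kind described above can be sketched as a toy per-site budget model. Every figure and cost category here is invented for illustration, not taken from the cited trial; the point is the structure: fixed costs, per-patient costs that scale with screen failures, and a negotiated per-patient reimbursement.

```python
# Hypothetical per-site trial budget (every figure invented for illustration).
FIXED = {"ethics submission": 4000, "staff training": 6000, "archiving": 2000}
PER_PATIENT = {"prescreening": 250, "baseline visit": 400,
               "follow-up visits": 600, "query resolution": 150}
FEE_PER_PATIENT = 1800          # negotiated sponsor reimbursement per enrolment

def site_balance(n_enrolled, screens_per_enrolment=3):
    """Site profit/loss; prescreening effort scales with screen failures too."""
    cost = sum(FIXED.values())
    cost += n_enrolled * screens_per_enrolment * PER_PATIENT["prescreening"]
    cost += n_enrolled * (PER_PATIENT["baseline visit"]
                          + PER_PATIENT["follow-up visits"]
                          + PER_PATIENT["query resolution"])
    return n_enrolled * FEE_PER_PATIENT - cost

for n in (10, 30, 60):
    print(f"{n:3d} patients: optimistic {site_balance(n, 1):+7d} EUR, "
          f"realistic {site_balance(n):+7d} EUR")
```

Under the optimistic feasibility assumption of one screen per enrolment the site breaks even at 30 patients; with a realistic three screens per enrolment, every additional patient deepens the loss, echoing the underestimated prescreening and nonscheduled-task costs reported above.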
Moreover, only one-fifth of RCTs completed between 2005 and 2010 were published. Studies with industrial sponsorship had poor publishing rates but still outnumbered those funded by the NIH or governments [11,59]. Analyses of large RCTs also confirm this trend [10]. Furthermore, fewer than 25% of 1523 RCTs published between 1963 and 2004 adequately cited results of prior research [7]. In addition, the dissemination of study results appears to be affected by distorted journal publishing practices that promote 'publishability over truth' and display variations in acceptance rates by subject field and journal index [8,9].
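The centralized, statistics-based monitoring promoted above can be sketched as follows. The per-site data and the two decision rules (a site mean far from the pooled mean, or implausibly low within-site variability) are invented simplifications of the published approaches, not the cited algorithms themselves:

```python
from statistics import mean, stdev

# Invented per-site values of one recorded variable (e.g. a baseline lab value).
sites = {
    "site_A": [0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.9, 1.1],
    "site_B": [1.0, 0.9, 1.1, 1.2, 0.8, 1.0, 1.1, 0.9],
    "site_C": [1.0] * 8,                                  # implausibly uniform
    "site_D": [1.8, 1.9, 2.0, 1.7, 1.9, 1.8, 2.0, 1.9],  # shifted: unit error? fraud?
}

pooled = [x for xs in sites.values() for x in xs]
m, s = mean(pooled), stdev(pooled)

flags = {}
for name, xs in sites.items():
    z = (mean(xs) - m) / (s / len(xs) ** 0.5)   # distance of site mean from pool
    if abs(z) > 3:
        flags[name] = "mean far from the other sites"
    elif stdev(xs) < 0.1 * s:
        flags[name] = "variability implausibly low"

for name, reason in sorted(flags.items()):
    print(f"{name}: {reason}")
```

Screening every site on every variable this way costs almost nothing, so expensive on-site visits can be directed only to the sites that are flagged, which is the economic case made for central statistical monitoring.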

CONCLUSION

RCTs represent the gold standard for testing the efficacy of an intervention, but they are expensive and the results often lack clinical utility [3]. In contrast, observational studies consume fewer resources and may complement limited RCT datasets in meta-analyses [37]. Comprehensive medical databases may offer valuable clues on the effectiveness and relevance of public healthcare interventions and frame the characteristics of populations with unmet clinical needs [19,21]. However, observational studies are prone to heterogeneity, selection bias and confounding errors. Considering the strengths and limitations of each study type, clinical researchers should explore the contextual worthiness of either design in promoting knowledge [18,27] and strengthen their focus on the quality of conduct [45,46,51,53,57] and reporting [10,11,42,43,49] in order to avoid waste of resources. Major costs incurred by RCTs likely relate to failure of recruitment, inappropriate site selection, conduct [51] and reporting [59]. Strengthening performance in these tasks may substantially economize and liberate public and private funding resources. Care should be taken with risk-based [55,56] and statistical monitoring approaches [52,54], as they may ignore the impact of adequate trial site qualifications. From the viewpoint of institutional investigators, the use of business strategies [50] and economic evaluation tools [53] when establishing RCTs may enhance performance or assess profitability.

Acknowledgements

None of the authors received payments or services, either directly or indirectly (i.e. via his or her institution), from a third party in support of any aspect of this work. None of the authors, or their institution(s), have had any financial relationship, in the 36 months prior to submission of this work, with any entity in the biomedical arena that could be perceived to influence or have the potential to influence what is written in this work. Also, no author has had any other relationships, or has engaged in any other activities, that could be perceived to influence or have the potential to influence what is written in this work.

Conflicts of interest

There are no conflicts of interest.

REFERENCES AND RECOMMENDED READING

Papers of particular interest, published within the annual period of review, have been highlighted as:
& of special interest
&& of outstanding interest

1. Luce BR, Drummond M, Jonsson B, et al. EBM, HTA, and CER: clearing the confusion. Milbank Q 2010; 88:256-276.
2. Westfall JM, Mold J, Fagnan L. Practice-based research: 'Blue Highways' on the NIH roadmap. JAMA 2007; 297:403-406.





3. Green LW, Ottoson JM, García C, Hiatt RA. Diffusion theory and knowledge dissemination, utilization, and integration in public health. Annu Rev Public Health 2009; 30:151-174.
4. Graham ID, Logan J, Harrison MB, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof 2006; 26:13-24.
5. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med 2011; 104:510-520.
6. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet 2009; 374:86-89.
7. Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann Intern Med 2011; 154:50-55.
8. Nosek BA, Bar-Anan Y. Scientific Utopia: I. Opening scientific communication. Psychol Inq 2012; 23:217-243.
9. Nosek BA, Spies JR, Motyl M. Scientific Utopia II. Restructuring incentives and practices to promote truth over publishability. Perspect Psychol Sci 2012; 7:615-631.
10. Jones CW, Handler L, Crowell KE, et al. Nonpublication of large randomized clinical trials: cross sectional analysis. BMJ 2013; 347:f6104. &&
    Important recent analysis on the frequency of nonpublication of trial results and the frequency with which results are unavailable in the ClinicalTrials.gov database.
11. Dickersin K, Rennie D. The evolution of trial registries and their use to assess the clinical trial enterprise. JAMA 2012; 307:1861-1864.
12. Ioannidis JPA. Why most discovered true associations are inflated. Epidemiology 2008; 19:640-648.
13. Ioannidis JP. Why most published research findings are false. PLoS Med 2005; 2:e124.
14. Banzi R, Moja L, Pistotti V, et al. Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Res Policy Syst 2011; 9:26.
15. Bornmann L. Measuring the societal impact of research. EMBO Rep 2012; 13:673-676.
16. Rosas SR, Schouten JT, Cope MT, Kagan JM. Modeling the dissemination and uptake of clinical trials results. Res Eval 2013; 22:179-186. &&
    The authors present a model that allows the progress of primary research into clinical practice guidelines to be monitored.
17. Gabbay J, le May A. Evidence based guidelines or collectively constructed 'mindlines?'. BMJ 2004; 329:1013.
18. Glasgow RE, Emmons KM. How can we increase translation of research into practice? Types of evidence needed. Annu Rev Public Health 2007; 28:413-433.
19. Tavazzi L. Do we need clinical registries? Eur Heart J 2013; 35:7-9. &
    The editorial promotes the use of observational studies for specific research questions and provides examples of benefit from using comprehensive administrative databases in cardiology.
20. Emanuel EJ. The future of biomedical research. JAMA 2013; 309:1589-1590. &
    Provides insight into why NIH funding of biomedical research is decreasing. Underpins the importance of streaming medical research along the translation pathway and finding cost-lowering treatment alternatives.
21. Hershman DL, Wright JD. Comparative effectiveness research in oncology methodology: observational data. J Clin Oncol 2012; 30:4215-4222.
22. Concato J. Observational versus experimental studies: what's the evidence for a hierarchy? NeuroRx 2004; 1:341-347.
23. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 2000; 342:1887-1892.
24. Kunz R. Randomized trials and observational studies: still mostly similar results, still crucial differences. J Clin Epidemiol 2008; 61:207-208.
25. Yang W, Zilov A, Soewondo P, et al. Observational studies: going beyond the boundaries of randomized controlled trials. Diabetes Res Clin Pract 2010; 88 (Suppl 1):S3-S9.
26. Ligthelm RJ, Borzì V, Gumprecht J, et al. Importance of observational studies in clinical practice. Clin Ther 2007; 29:1284-1292.
27. Korn EL, Freidlin B. Methodology for comparative effectiveness research: potential and limitations. J Clin Oncol 2012; 30:4185-4187.
28. Rothwell PM. External validity of randomised controlled trials: 'To whom do the results of this trial apply?'. Lancet 2005; 365:82-93.
29. Black N. Why we need observational studies to evaluate the effectiveness of healthcare. BMJ 1996; 312:1215-1218.
30. Marko NF, Weil RJ. The role of observational investigations in comparative effectiveness research. Value Health 2010; 13:989-997.
31. Luce BR, Kramer JM, Goodman SN, et al. Rethinking randomized clinical trials for comparative effectiveness research: the need for transformational change. Ann Intern Med 2009; 151:206-209.
32. Glasziou P, Vandenbroucke J, Chalmers I. Assessing the quality of research. BMJ 2004; 328:39-41.



33. Sacks H, Chalmers TC, Smith H Jr. Randomized versus historical controls for clinical trials. Am J Med 1982; 72:233-240.
34. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med 2000; 342:1878-1886.
35. Britton A, McKee M, Black N, et al. Choosing between randomised and non-randomised studies: a systematic review. Health Technol Assess 1998; 2:1-6.
36. Furlan AD, Tomlinson G, Jadad A, et al. Methodological quality and homogeneity influenced agreement between randomized trials and nonrandomized studies of the same intervention for back pain. J Clin Epidemiol 2008; 61:209-231.
37. Shrier I, Boivin J-F, Steele RJ, et al. Should meta-analyses of interventions include observational studies in addition to randomized controlled trials? A critical examination of underlying principles. Am J Epidemiol 2007; 166:1203-1209.
38. McGauran N, Wieseler B, Kreis J, et al. Reporting bias in medical research: a narrative review. Trials 2010; 11:37.
39. Pocock SJ, Collier TJ, Dandreo KJ, et al. Issues in the reporting of epidemiological studies: a survey of recent practice. BMJ 2004; 329:883.
40. Tooth L, Ware R, Bain C, et al. Quality of reporting of observational longitudinal research. Am J Epidemiol 2005; 161:280-288.
41. Von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Prev Med 2007; 45:247-251.
42. Poorolajal J, Cheraghi Z, Irani AD, Rezaeian S. Quality of cohort studies reporting post the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement. Epidemiol Health 2011; 33:33.
43. da Costa BR, Cevallos M, Altman DG, et al. Uses and misuses of the STROBE statement: bibliographic study. BMJ Open 2011; 1:e000048.
44. Sanderson S, Tatt ID, Higgins JP. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Int J Epidemiol 2007; 36:666-676.
45. Tramm R, Daws K, Schadewaldt V. Clinical trial recruitment: a complex intervention? J Clin Nurs 2013; 22:2436-2443. &
    This research article frames in detail the sources of complexity associated with clinical trial recruitment.
46. Fletcher B, Gheorghe A, Moore D, et al. Improving the recruitment activity of clinicians in randomised controlled trials: a systematic review. BMJ Open 2012; 2:e000496.
47. McDonald AM, Knight RC, Campbell MK, et al. What influences recruitment to randomised controlled trials? A review of trials funded by two UK funding agencies. Trials 2006; 7:9.
48. Sully BGO, Julious SA, Nicholl J. A reinvestigation of recruitment to randomised, controlled, multicenter trials: a review of trials funded by two UK funding agencies. Trials 2013; 14:166. &
    Recent update on recruitment performance and barriers of publicly sponsored multicenter RCTs.
49. Califf RM, Zarin DA, Kramer JM, et al. Characteristics of clinical trials registered in ClinicalTrials.gov. JAMA 2012; 307:1838-1847.
50. McDonald AM, Treweek S, Shakur H, et al. Using a business model approach and marketing techniques for recruitment to clinical trials. Trials 2011; 12:74.
51. Eisenstein EL, Collins R, Cracknell BS, et al. Sensible approaches for reducing clinical trial costs. Clin Trials 2008; 5:75-84.
52. Pogue JM, Devereaux PJ, Thorlund K, Yusuf S. Central statistical monitoring: detecting fraud in clinical trials. Clin Trials 2013; 10:225-235. &
    Useful statistical approach to reduce monitoring cost associated with on-site source data verification and fraud.
53. Holler B, Forgione DA, Baisden CE, et al. Interactive financial decision support for clinical research trials. J Healthcare Finance 2011; 37:25-37.
54. Venet D, Doffagne E, Burzykowski T, et al. A statistical approach to central monitoring of data quality in clinical trials. Clin Trials 2012; 9:705-713.
55. Walden A, Nahm M, Barnett ME, et al. Economic analysis of centralized vs. decentralized electronic data capture in multi-center clinical studies. Stud Health Technol Inform 2011; 164:82-88.
56. Ansmann EB, Hecht A, Henn DK, et al. The future of monitoring in clinical research - a holistic approach: linking risk-based monitoring with quality management principles. GMS Ger Med Sci 2013; 11:1-8.
57. Shore BJ, Nasreddine AY, Kocher MS. Overcoming the funding challenge: the cost of randomized controlled trials in the next decade. J Bone Jt Surg 2012; 94:101-106.
58. Hwang TJ. Stock market returns and clinical trial results of investigational compounds: an event study analysis of large biopharmaceutical companies. PLoS One 2013; 8:e71966.
59. Shamliyan TA, Kane RL. Availability of results from clinical research: failing policy efforts. J Epidemiol Glob Health. [Epub ahead of print]. doi:10.1016/j.jegh.2013.08.002. &&
    This report highlights the importance of clinical registries for clinical research transparency. The review of interventional registries until December 2012 demonstrated overall poor reporting.
