Commentary

Comparative effectiveness research: a view from the other side of the pond

Michael D Rawlins*

*National Institute for Health & Clinical Excellence, London, UK, and London School of Hygiene & Tropical Medicine, Keppel Street, London, WC1E 7HT, UK; [email protected]

Investigating the comparative effectiveness of competing interventions – comparative effectiveness research (CER) – is both a noble endeavor and a public good. It is a component of what, in most countries, is generally called 'health technology assessment' (HTA). HTA is concerned with answering four questions:

■■ Does a particular health technology work?

■■ For whom?

■■ How does its effectiveness compare with alternatives?

■■ And at what cost?

CER seeks answers to the first three of these questions, but the antagonism to CER by some US politicians [101] astounds those of us who are mere observers of American healthcare politics. Nevertheless, the establishment of the Patient-Centered Outcomes Research Institute (PCORI) promises to enhance healthcare both in the USA and globally. Indeed, it is encouraging that PCORI, as part of its agenda, will be examining the methodological challenges facing CER.

The publication, in this issue of the journal, of a quartet of articles focusing on study designs for CER is therefore timely. The theme running throughout these articles emphasizes the importance of observational – as well as experimental – designs in providing robust evidence about the comparative effectiveness of competing interventions. The only surprising omission is the absence of a discussion of the role and importance of 'effect size' in the evaluation of the effectiveness of health technologies. It has been accepted, even by the most rigorous methodologists [1], that observational studies in the form of historical controlled trials can provide reliable evidence of effectiveness when there is a 'dramatic' response. Indeed, under such circumstances, the use of historical controls may make randomized controlled trials unnecessary [2]. The adoption, for example, of imatinib (Gleevec®, Glivec®) for the treatment of chronic myeloid leukemia was initially based on comparisons with historical controls [3].

However, there are other issues relating to comparative effectiveness that will need to be resolved as the techniques of CER mature. They include:

■■ The comparator(s)

■■ Risk–benefit assessment

■■ Mixed treatment comparisons

■■ Comorbidities

The comparator(s)

In CER, identifying the right comparator – the 'competing intervention', as it is sometimes called – is critical. As a general rule, the comparator should normally be 'current best practice' or the 'current standard of care', which may include 'best supportive care'. Difficulties arise where there is no consensus among clinical specialists about the form of 'current best practice'. The difficulties are even greater when 'current best practice' includes products that are used 'off-label' or are 'unlicensed'. This, of course, applies particularly to pharmaceuticals given to children where, in hospitals, 'off-label' or 'unlicensed' prescribing may account for more than 70% of prescriptions for pediatric use [4,5]. Difficulties also arise when a particular approach has already been widely adopted by the relevant clinical community in the absence of good evidence for its clinical effectiveness. In an ideal world, the results of CER would be available before, rather than after, the widespread adoption of a particular diagnostic or therapeutic strategy. However, the world is not 'ideal', and when the results of CER challenge perceived wisdom, they will sometimes be uncomfortable.

Risk–benefit assessment

Balancing the benefits and harms of two or more competing interventions is often a major component of CER. The eternal problem confronting decision-makers is that the harms are usually expressed differently from the benefits. For example, what incidence of gastrointestinal hemorrhage offsets the benefits of NSAIDs in the treatment of osteoarthritis? A reliable approach to a quantitative analysis of the balance between benefits and harms remains elusive. It should, obviously, take account of the views of patients and their families, but no reliable and consistent methodology has been developed by either drug regulatory authorities or HTA agencies to do this. To be sure, lip service is sometimes paid to the (often informal) views of a small number of patients and their families, but it is largely 'tokenism'. It would, of course, be possible to express both benefits and harms (or disbenefits) as changes in 'health utilities', but such an approach has rarely been attempted. The problem for PCORI is that the use of health utilities for this purpose is very close to expressing benefits and harms as 'quality-adjusted life years' gained or lost (respectively). If PCORI is uncomfortable about using a metric originally designed for the assessment of cost–effectiveness, it could adapt other generic quality-of-life measures such as the SF-36.
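The arithmetic of such a common currency can be made concrete. The sketch below is a minimal illustration, not a method proposed in this commentary: it expresses a treatment's benefit and its principal harm on a single health-utility scale, and every number and function name in it is hypothetical.

```python
# Minimal sketch: expressing benefits and harms on a common
# health-utility scale (net QALYs per treated patient).
# All numbers below are hypothetical, for illustration only.

def expected_qaly_change(utility_gain: float, duration_years: float,
                         harm_rate: float, harm_utility_loss: float,
                         harm_duration_years: float) -> float:
    """Net expected QALY change per treated patient.

    benefit = utility gain while on treatment, over its duration
    harm    = probability of the adverse event times the utility
              lost while it lasts
    """
    benefit = utility_gain * duration_years
    expected_harm = harm_rate * harm_utility_loss * harm_duration_years
    return benefit - expected_harm

# Hypothetical NSAID example echoing the text: symptom relief in
# osteoarthritis versus a small risk of gastrointestinal hemorrhage.
net = expected_qaly_change(
    utility_gain=0.06,        # utility improvement from pain relief
    duration_years=1.0,       # one year of treatment
    harm_rate=0.01,           # 1% annual risk of GI hemorrhage
    harm_utility_loss=0.30,   # utility decrement during the event
    harm_duration_years=0.1,  # ~5 weeks of reduced quality of life
)
print(f"Net QALY change per patient-year: {net:+.4f}")
```

A negative result would signal expected net harm on this scale; the difficulty the text identifies is not the arithmetic but agreeing on the utility values themselves.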

Mixed treatment comparisons

The papers in this issue largely accept that the methodology underpinning CER must embrace both experimental and observational designs. For this approach to be most valuable, methodological developments in mixed treatment comparisons are essential. Simple direct comparisons of one treatment versus another are desirable but unachievable in many, if not most, instances. Because the US FDA prefers placebo-controlled trials of pharmaceutical products to active comparator-controlled trials, many new products become available without a direct comparison with 'current best practice'. In the absence of direct comparative data, many HTA agencies use indirect comparisons, in which the comparative effectiveness of two products is derived from the results of placebo-controlled trials of each, and a comparison between them is then imputed indirectly. There is, however, increasing use of 'mixed treatment comparisons' (also known as 'network meta-analyses'). In this method, the results of trials with either placebo or active comparator controls are incorporated into a single model that allows multiple comparisons to be imputed [6–8]. The technique has been used to good effect in, for example, comparing the effectiveness of 12 new-generation antidepressants based on the results of 117 randomized controlled trials [9]. Further methodological development of this approach is needed, however, to improve its reliability as well as to allow the incorporation of the results of observational studies.
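The logic of an indirect comparison is simple enough to show in a few lines. The sketch below implements the standard Bucher adjusted indirect comparison – a well-known technique, though not one the commentary names explicitly: if treatments A and B have each been compared with placebo, the A-versus-B effect is the difference of the two placebo-anchored effects, with their variances summed. All effect estimates here are invented for illustration.

```python
import math

# Minimal sketch of a Bucher adjusted indirect comparison.
# Given placebo-anchored effects (e.g., log odds ratios) for
# treatments A and B, estimate A versus B indirectly:
#   d_AB = d_AP - d_BP,  var(d_AB) = var(d_AP) + var(d_BP)

def indirect_comparison(d_ap, se_ap, d_bp, se_bp):
    """Return the indirect A-vs-B estimate, its SE, and a 95% CI."""
    d_ab = d_ap - d_bp
    se_ab = math.sqrt(se_ap**2 + se_bp**2)  # variances add
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Hypothetical log odds ratios versus placebo (negative = fewer events).
d_ab, se_ab, (lo, hi) = indirect_comparison(
    d_ap=-0.60, se_ap=0.15,   # drug A vs placebo
    d_bp=-0.35, se_bp=0.20,   # drug B vs placebo
)
print(f"A vs B: log OR = {d_ab:.2f} (SE {se_ab:.2f}), "
      f"95% CI {lo:.2f} to {hi:.2f}")
```

Note that the indirect estimate is less precise than either of its placebo-anchored inputs, since the variances add; mixed treatment comparisons are attractive partly because they pool all available direct and indirect evidence in a single model.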

Comorbidities

As discussed by Greenfield and Kaplan in this series, one of the main weaknesses of conventional randomized controlled trials is their uncertain generalizability. Especially in trials designed for licensing purposes, participants tend to be homogeneous patient populations with few (or no) comorbid conditions. In general use, however, patients treated with pharmaceutical products will tend to be much more diverse and – especially in older people – they will often suffer from three or more comorbidities [7]. In these circumstances, the extent to which such patients will benefit from particular products may be uncertain. One proposed solution [10] to this difficulty is to undertake 'pragmatic' trials among patient populations more closely resembling the ultimate users. Such study designs will indicate whether the product is effective – on average – amongst the population for which it is most likely to be used. Such studies certainly have merit, but they will not provide information about subgroups of patients with specific comorbidities. Designing pragmatic trials with sufficient power to provide definitive answers in all potential subgroups is impractical [7]; a rough sample-size calculation, sketched below, shows why. Methodological research is therefore needed to provide better evidence for treating people with comorbidities. Observational studies, once a product has been in wider use, seem at present to be the only reliable methodological approach. Even this is less than ideal, for it means that substantial numbers of patients with comorbidities will need to be treated in routine care before we know whether a product provides benefit or harm in their particular circumstances.
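The sketch below makes that impracticality concrete using the standard normal-approximation sample-size formula for comparing two proportions; the response rates, significance level, power, and number of subgroups are all hypothetical choices made for the example.

```python
import math

# Minimal sketch: why powering every comorbidity subgroup is impractical.
# Standard normal-approximation sample size for comparing two proportions
# at two-sided alpha = 0.05 (z = 1.96) and 80% power (z = 0.84).

def n_per_arm(p1: float, p2: float, z_alpha: float = 1.96,
              z_beta: float = 0.84) -> int:
    """Patients per arm needed to detect p1 versus p2."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical: 50% response on the comparator, 60% on the new product.
per_arm = n_per_arm(0.50, 0.60)
print(f"Per arm, overall trial: {per_arm}")

# To answer the same question definitively *within* each of, say,
# eight comorbidity subgroups, every subgroup needs that many patients:
subgroups = 8
print(f"Total for {subgroups} adequately powered subgroups: "
      f"{2 * per_arm * subgroups}")
```

Requiring a definitive answer in each of eight subgroups multiplies the trial roughly eightfold – before allowing for the smaller effects or rarer outcomes likely within subgroups.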

Conclusion

The quartet of thoughtful publications in this issue of the Journal of Comparative Effectiveness Research provides a necessary, but not sufficient, research agenda to support CER in the future. The need for such research is exemplified by the bad-tempered debate, currently in full flow on both sides of the Atlantic, about the merits of screening for breast cancer by routine mammography [11,12]. Despite numerous experimental and observational studies [13] involving a total of more than 600,000 women, there is still argument about whether, and for whom, routine mammography is worthwhile. As the work of PCORI moves forward, its activities will be watched closely by the global evidence-based medicine and HTA communities. Much is being asked of it and I, for one, do not expect to be disappointed!

Financial & competing interests disclosure
M Rawlins has been chairman of the National Institute for Health and Clinical Excellence since 1999. The author has no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed. No writing assistance was utilized in the production of this manuscript.

References

1. Doll R, Peto R. Randomised controlled trials and retrospective controls. Br. Med. J. 280(6206), 44 (1980).
2. Glasziou P, Chalmers I, Rawlins M, McCulloch P. When are randomised controlled trials unnecessary? Picking signal from noise. BMJ 334(7589), 349–351 (2007).
3. Garside R, Round A, Dalziel K, Stein K, Royle P. The effectiveness and cost–effectiveness of imatinib in chronic myeloid leukaemia. Health Technol. Assess. 6(33), 1–162 (2002).
4. Jong GW, van der Linden PD, Bakker EM et al. Unlicensed and off-label drug use in a paediatric ward of a general hospital in the Netherlands. Eur. J. Clin. Pharmacol. 58(4), 293–297 (2002).
5. Lindell-Osuagwu L, Korhonen MJ, Saano S, Helin-Tanninen M, Naaranlahti T, Kokki H. Off-label and unlicensed drug prescribing in three paediatric wards in Finland and review of the international literature. J. Clin. Pharm. Ther. 34(3), 277–287 (2009).
6. Caldwell DM, Ades AE, Higgins JP. Simultaneous comparisons of multiple treatments: combining direct and indirect evidence. BMJ 331(7521), 897–900 (2005).
7. Rawlins MD. Therapeutics, Evidence and Decision-Making. Hodder Arnold, London, UK (2011).
8. Jansen JP, Fleurence R, Devine B et al. Interpreting indirect treatment comparisons and network meta-analysis for health-care decision making: report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: part 1. Value Health 14(4), 417–428 (2011).
9. Cipriani A, Furukawa TA, Salanti G et al. Comparative efficacy and acceptability of 12 new-generation antidepressants: a multiple-treatments meta-analysis. Lancet 373(9665), 746–755 (2009).
10. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J. Chronic Dis. 20(8), 637–648 (1967).
11. Quanstrum KH, Hayward RA. Lessons from the mammography wars. N. Engl. J. Med. 363(11), 1076–1078 (2010).
12. Hackshaw A. Benefits and harms of mammography screening. BMJ 344, d8279 (2012).
13. Gøtzsche PC, Nielsen M. Screening for breast cancer with mammography. Cochrane Database Syst. Rev. (1), CD001877 (2011).

■■ Website
101. Gingrich N, Rawlins MD. Economist Debates. This house believes that the widespread use of comparative effectiveness reviews and cost/benefit analyses will stifle medical innovation and lead to an unacceptable rationing of healthcare (2009). © The Economist Newspaper Limited (2011). www.economist.com/debate/overview/155 (Accessed February 2012)

