Psychology & Health, 2015, Vol. 30, No. 1, 1–7, http://dx.doi.org/10.1080/08870446.2014.960653

EDITORIAL


Risk of bias in randomised controlled trials of health behaviour change interventions: Evidence, practices and challenges

The major strength of a randomised controlled trial (RCT) is the degree to which it can establish a causal relationship between an experimental treatment and the outcome (i.e. internal validity), as randomisation should ensure that potential confounders are equally distributed over the treatment and control arms. Internal validity in RCTs can be threatened, however, by multiple sources of bias. For example, poor randomisation procedures (e.g. failure to conceal allocations until treatments have been assigned) can lead to selection bias when more high-risk individuals are channelled into the experimental arm. There is a range of potential sources of bias in RCTs, and several well-known tools for assessing the risk of bias provide useful overviews of these sources and of strategies for reducing them (e.g. Guyatt et al., 2011; Higgins, Altman, & Sterne, 2011). These tools are also influential: trials scoring high on risk-of-bias assessments should have a smaller chance of publication and, if published, of being included in 'best evidence' systematic reviews (Johnson, Low, & MacDonald, 2015), and thus of their interventions influencing policy and practice.

Systematic reviews suggest that many health behaviour change (HBC) trials suffer from a moderate to high risk of bias (e.g. Oberjé, de Kinderen, Evers, van Woerkum, & de Bruin, 2013; Poobalan, Aucott, Precious, Crombie, & Smith, 2009); more so than, for example, drug trials (Crocetti, Amin, & Scherer, 2010). Moreover, risk-of-bias scores have been found to explain heterogeneity in effect sizes, especially in trials with subjective outcome measures (Savović et al., 2012; Wood et al., 2008). Since replication studies of HBC intervention evaluations are uncommon, invalid inferences due to bias may not be easily discovered and rectified. Despite these concerns, the behaviour change intervention literature has paid little attention to the sources and consequences of bias in trials, or to strategies that may be effective in reducing the risk of bias. This paucity of research on risk of bias in HBC trials is in turn reflected in widely used instruments for assessing the risk of bias, such as the Cochrane risk-of-bias tool (Higgins et al., 2011), which seems to be based mostly on evidence from non-behavioural trials.

The objective of this special issue is to reflect on the evidence, practices and challenges in relation to reducing the risk of bias in HBC trials. We hope that it will both improve scientific practice (i.e. enhance the quality of HBC trial design, reporting and synthesis) and have an agenda-setting effect (i.e. inspire empirical research into which sources of bias actually affect HBC trials, which strategies are effective for mitigating them, and thus which criteria should be used for grading the quality of the evidence from HBC trials).
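To make the selection-bias mechanism concrete, here is a minimal simulation sketch (invented numbers, not material from this issue) of a trial in which allocation is either properly concealed or foreseeable by recruiters who channel higher-risk patients into the experimental arm:

```python
# Sketch: how failed allocation concealment biases a trial's effect estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
risk = rng.normal(0, 1, n)   # baseline risk factor (higher = worse outcome)
true_effect = 0.5            # the treatment truly improves the outcome by 0.5

def run_trial(concealed: bool) -> float:
    if concealed:
        treat = rng.random(n) < 0.5   # proper randomisation: arms comparable
    else:
        # Foreseeable allocations: higher-risk patients are preferentially
        # enrolled into the experimental arm, so the arms differ at baseline.
        treat = risk + rng.normal(0, 1, n) > 0
    outcome = -risk + true_effect * treat + rng.normal(0, 1, n)
    return outcome[treat].mean() - outcome[~treat].mean()

print(f"concealed allocation:   estimated effect = {run_trial(True):+.2f}")
print(f"unconcealed allocation: estimated effect = {run_trial(False):+.2f}")
# The first estimate is close to the true +0.50; the second is biased because
# the experimental arm started out with higher-risk patients.
```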



The articles in this special issue

The article by de Bruin, McCambridge, and Prins (2015) is presented first because it includes a literature review on 'Common sources of bias' and 'Common strategies for reducing the risk of bias', which can serve as an introduction for those less familiar with this literature. The aim of the article is not just to review the literature, however; it also presents a methodology for exploring whether the relatively high risk of bias often reported in reviews of HBC RCTs can simply be attributed to poor trial design and incomplete trial reporting (as is commonly done), or whether there are also valid reasons for not implementing common risk-of-bias strategies. The authors propose that HBC trial authors routinely complete a risk-of-bias justification table (RATIONALE) when designing their trial, detailing what has and has not been done to address the risk of bias and why (the rationale), and publish this with their trial protocol and/or evaluation paper. This practice could lead to more consistent adoption of widely recognised strategies for reducing the risk of bias (i.e. improved trial design) and more comprehensive reporting of the risk-of-bias strategies applied, and it would turn each trial report into a case study providing valuable information about the applicability of the risk-of-bias literature to HBC trials (e.g. Could all common risk-of-bias strategies be applied? Were risk-of-bias strategies applied that are not covered by commonly used risk-of-bias tools?). The authors apply the RATIONALE approach to one trial (a case study) to illustrate this.
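To give a flavour of the proposed table, a single entry might look something like the following (a hypothetical row for illustration, not taken from de Bruin et al., 2015):

    Bias domain:      Blinding of participants
    Common strategy:  Double-blind treatment delivery
    Applied:          No
    Rationale:        Participants inevitably know whether they received the
                      face-to-face counselling; outcome assessors were therefore
                      blinded instead, and objective outcome measures were used.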
Tarquinio, Kivits, Minary, Coste, and Alla (2015) conduct a scoping review of the literature but approach the topic from a different angle: they ask whether RCTs are the appropriate methodology for evaluating 'complex interventions' (which HBC interventions often are). Based on a review and several informative case studies (also drawing on the psychotherapy domain), the strengths and limitations of using RCTs are debated. Key issues are striking the balance between internal and external validity, the role of context and the transferability of results, the problem of double-blinding in RCTs, and how all of these relate to complex interventions and to HBC interventions in particular. The authors also discuss how RCTs can be adapted to better capture the complex nature of HBC intervention evaluations, for example through cluster-RCTs, process evaluations, 'realistic randomized controlled trials' and non- or quasi-experimental designs. This article gives a valuable overview of the literature for those interested not only in establishing intervention efficacy, but also in producing intervention evaluations that are closely related to real-life practice.

The third paper also includes a literature review and case studies, but focuses on the risk of bias, and strategies for mitigating it, when evaluating the cost-effectiveness of HBC interventions. Demonstrating that an intervention is cost-effective is an additional challenge beyond demonstrating its efficacy and effectiveness, but can be essential to the widespread implementation of a potent intervention. Evers, Hiligsmann, and Adarkwah (2015) focus on trial-based (not model-based) economic evaluations, and highlight 11 sources of bias that can occur in such RCTs, specifically with respect to the economic evaluation. Note that, just as Tarquinio et al. (2015) do, Evers and colleagues touch upon the generalisability of results from RCTs, observing that trial-based evaluations may demonstrate cost-efficacy rather than cost-effectiveness, depending on the trial design. The text is organised in three sections: pre-trial bias, bias during the trial, and bias that may occur after the trial. This paper gives an excellent overview of the topic, including a brief introduction to cost-effectiveness analyses, and is recommended reading for anybody planning to subject their HBC intervention to a cost-effectiveness evaluation.


The fourth article, by McCambridge (2015), discusses the literature on one particular source of bias that has received considerable attention in the health psychology literature in recent years: the question–behaviour effect. This is one of those sources of bias that seem specific to the evaluation of educational, psychological or behavioural (i.e. non-pharmacological) interventions, and that may not be adequately captured by commonly used risk-of-bias tools. McCambridge explains how this area of research has developed more generally, and then focuses on how the question–behaviour effect might affect the results of RCTs of HBC interventions. The author also briefly addresses research participation effects in HBC intervention trials more generally, before making the closing point that we need a much better understanding, based on more and more rigorous studies, of how participation in research (and the question–behaviour effect more specifically) influences HBC trial outcomes.

The next four articles in this special issue apply meta-analysis, or focus on advancing the methodology of meta-analysis, in order to study and improve the methodological quality of HBC trials. Both Ayling, Brierley, Johnson, Heller, and Eiser (2015) and Bishop, Fenge-Davies, Kirby, and Geraghty (2015) examine the role of the support provided to control groups in HBC trials. Previous studies suggested that the active content of support provided to control groups in HBC trials may vary considerably between studies, and may affect control group success rates and, consequently, trial effect sizes (de Bruin, Viechtbauer, Hospers, Schaalma, & Kok, 2009; de Bruin et al., 2010). Ayling and colleagues examine this issue in trials promoting self-management among young people with type 1 diabetes. They found that standard care was poorly described in most articles, but that this information could be reliably collected from the authors directly. Moreover, standard care quality (essentially, the sum score of behaviour change techniques (BCTs) provided to control group patients) varied considerably between trials, and meta-analyses revealed possible trends towards improved glycaemic control and psychological outcomes among patients exposed to higher quality standard care. Notwithstanding the modest sample size available for the meta-analyses and the focus on change scores from baseline to follow-up (note that in some studies patients were treatment-experienced at baseline, and thus exposed to standard care prior to the trial, which may restrict the amount of change observed during the trial), this study shows that reporting of control group support in scientific manuscripts should be improved, and that the quality, and possibly the effectiveness, of the standard care provided to control groups varies considerably between studies and may thus influence trial effect sizes.

Bishop et al. (2015) examined the role of BCTs provided to control and intervention groups in trials aiming to increase adherence to physical activity in musculoskeletal pain. Although they did not contact study authors for additional information, the authors did obtain and code any referenced material that could be found online. Interestingly, Bishop and colleagues also added 'context effects' to the mix, examining whether intervention and control group support shared the following five contextual features: practitioners' characteristics, the patient–practitioner relationship, intervention credibility, superficial treatment characteristics such as delivery modality, and the environment. The rationale behind this approach is that if such contextual factors vary between intervention and control groups, they may unwittingly influence patient behaviour and trial results.


Meta-analysis revealed that several contextual factors, and the number of unique intervention BCTs (i.e. BCTs provided to intervention but not to control groups), explained heterogeneity in effect sizes. Hence, this study demonstrates that we should pay attention to both the content and the context of control group support when interpreting, synthesising and generalising intervention effects.

Attrition, and especially differential attrition, has been suggested to be an important source of bias in HBC trials (Amico, 2009). Crutzen, Viechtbauer, Spigt, and Kotz (2015) randomly select a sample of 60 HBC RCTs and explore the absolute and relative attrition rates. In a meta-regression analysis, the relative attrition rate is used as the dependent variable, and several factors hypothesised to influence it are examined (e.g. the intensity of the intervention relative to the control group support, and follow-up duration). Interestingly, they find that attrition is 10% higher in intervention arms (95% CI: 1.01–1.20, p = .02), but that this is unrelated to any of the predictors. Given that a previous meta-analysis of HBC trials in which one of the authors was involved (Dr Viechtbauer, in de Bruin et al., 2010) revealed that attrition may not be random (i.e. patients doing better were more likely to drop out), differential attrition might indeed be an important source of bias in HBC trials. This illustrates the importance of applying intention-to-treat analyses with appropriate data imputation methods (Higgins et al., 2011).
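For readers less familiar with the quantity being pooled here, the following minimal sketch computes per-trial relative attrition rates (the risk ratio of dropout in intervention versus control arms) and a DerSimonian–Laird random-effects summary; the trial counts are invented for illustration and are not Crutzen et al.'s data:

```python
# Sketch: pooling relative attrition rates across trials (random effects).
import numpy as np

# (dropouts_intervention, n_intervention, dropouts_control, n_control)
trials = np.array([
    (30, 150, 25, 150),
    (45, 200, 38, 195),
    (12,  80, 11,  85),
    (60, 300, 52, 310),
])
d1, n1, d0, n0 = trials.T.astype(float)

log_rr = np.log((d1 / n1) / (d0 / n0))   # log relative attrition rate
var = 1/d1 - 1/n1 + 1/d0 - 1/n0          # its approximate sampling variance

# DerSimonian-Laird random-effects pooling
w = 1 / var
fixed = np.sum(w * log_rr) / np.sum(w)
q = np.sum(w * (log_rr - fixed) ** 2)
tau2 = max(0.0, (q - (len(w) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_star = 1 / (var + tau2)
pooled = np.sum(w_star * log_rr) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))

rr, lo, hi = np.exp([pooled, pooled - 1.96 * se, pooled + 1.96 * se])
print(f"pooled relative attrition rate: {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# A pooled value of 1.10 would correspond to the reported 10% higher
# attrition in intervention arms.
```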
Last but not least, Johnson et al. (2015) focus on the question of how meta-analyses of health promotion interventions incorporate methodological quality (including the risk of bias) into their analyses. Based on a survey of 200 meta-analyses, they conclude that many do assess methodological quality, but that very few incorporate it in their analyses. Johnson and colleagues were particularly interested in whether methodological quality was used in moderator analyses (an 'interactive approach'), for example when examining whether a particular behaviour change technique or modality of intervention delivery explains heterogeneity in effect sizes. They find that this is rarely reported in the meta-analyses reviewed; but when it is reported, the moderator results remained significant in higher quality studies, or were present among higher but not lower quality studies. Johnson and colleagues also make the important argument that, instead of discarding trials that score lower on certain methodological quality criteria (i.e. assuming that this strengthens the evidence base), it is much more informative to include all trials in the meta-analysis and actually establish whether and how methodological quality influences trial results.
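To illustrate what such an interactive approach can look like, here is a minimal sketch of an inverse-variance weighted meta-regression with methodological quality as a moderator; the effect sizes, variances and quality ratings are hypothetical, not data from Johnson et al. (2015):

```python
# Sketch: quality as a moderator in meta-regression, instead of excluding
# lower quality trials.
import numpy as np

effect  = np.array([0.42, 0.35, 0.18, 0.51, 0.10, 0.29])  # trial effect sizes
var     = np.array([0.02, 0.03, 0.04, 0.02, 0.05, 0.03])  # their variances
quality = np.array([1, 1, 0, 1, 0, 0])                    # 1 = higher quality

X = np.column_stack([np.ones_like(effect), quality])  # intercept + moderator
W = np.diag(1 / var)                                  # inverse-variance weights

# Weighted least squares: beta = (X'WX)^-1 X'Wy
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effect)
se = np.sqrt(np.diag(np.linalg.inv(X.T @ W @ X)))

print(f"effect in lower quality trials: {beta[0]:.2f}")
print(f"difference for higher quality:  {beta[1]:+.2f} "
      f"(z = {beta[1] / se[1]:.2f})")
# A significant moderator tells reviewers how methodological quality relates
# to effect size, information that excluding trials would simply discard.
```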


Conclusions and recommendations

Several conclusions can be drawn from the studies included in this special issue, and these seem relevant to those conducting trials and meta-analyses, as well as to general readers of this literature, including practitioners and policy-makers consulting the evidence to support their decisions.

First, for trial designers: in line with the CONSORT, TIDieR and WIDER statements (Abraham, Johnson, de Bruin, & Luszczynska, 2014; Boutron et al., 2008; Hoffmann et al., 2014), it is essential that we continue to improve the comprehensiveness of our trial reports: how we handle the risk of bias, what support is provided to control and intervention groups, and the relevant contextual variables. Moreover, it does seem that many HBC trials could be better designed, at least from the perspective of this special issue (i.e. in the measures taken to protect against the risk of bias in effectiveness and cost-effectiveness trials), but also by considering adapted RCT designs that may allow for a more real-life test of the intervention. Trial designers are also urged to consider using their trial as a case study, to reflect on whether and how common strategies for reducing the risk of bias can be applied in their study (and if not, why not), or to experimentally test strategies for reducing the risk of bias in their trial (e.g. a randomised experiment within the larger trial, as in the case study by de Bruin et al., 2015).

Second, for those conducting systematic reviews of HBC interventions: the studies in this special issue suggest that it could be worthwhile to explore in detail the content and context of the support given to control groups, and the differences therein between studies. This may be crucial for the accurate interpretation, comparison and generalisation of the studies synthesised. Moreover, although assessing the risk of bias has become routine practice in many systematic reviews, which is a good thing, studies in this special issue caution reviewers against 'simply' applying these criteria to the HBC trials reviewed and then excluding the lower quality studies. Instead, given the paucity of evidence on the key sources of bias in HBC trials and on the strategies for mitigating them, each meta-analysis has the opportunity to make an important contribution to this evidence base by examining whether and how a particular source of bias is related to trial effect sizes. Systematic reviewers could also consider adapting the risk-of-bias tool to the literature under review: some common measures, such as blinding participants to group assignment, might not be possible, but alternative or additional measures might have been taken to reduce the risk of bias in these trials, and it may be worthwhile considering (or evaluating the success of) these.

Finally, for those consulting the trial literature, and particularly the systematic review literature, to support policies and practice: the data and opinions expressed in this special issue illustrate that many HBC interventions are complex; that addressing the risk of bias in such trials may be more difficult than in pharmacological trials (e.g. because bias-reducing measures such as double-blinding are not possible, but probably also because the resources available for a typical HBC trial can easily be 100 times smaller than those available for a typical drug trial); that many systematic reviews grade the quality of the evidence from HBC trials with the same criteria they apply to simpler interventions; and that, at present, we do not seem to have an established, evidence-based approach to grading the quality of the evidence from HBC trials. This also seems to be the key challenge for the field: to develop a sound evidence base (through experiments, case studies, meta-analyses and so forth) and to reach consensus about how to score the risk of bias, and by extension the quality of the evidence, of HBC interventions, in order to identify those interventions that are worth disseminating on a large scale.

A final note: although the title of this special issue reads 'Risk of bias …', some of the studies focus on broader methodological issues (e.g. methodological quality, including the risk of bias) or on methodological topics for which it is not immediately clear whether they should be considered a source of bias (e.g. variability in the treatment-as-usual provided to control groups in different trials).


References

Abraham, C., Johnson, B. T., de Bruin, M., & Luszczynska, A. (2014). Enhancing reporting of behavior change intervention evaluations. Journal of Acquired Immune Deficiency Syndromes, 66, S293–S299. doi:10.1097/QAI.0000000000000231

Amico, K. R. (2009). Percent total attrition: A poor metric for study rigor in hosted intervention designs. American Journal of Public Health, 99, 1567–1575. doi:10.2105/AJPH

Ayling, K., Brierley, S., Johnson, B., Heller, S., & Eiser, C. (2015). How standard is standard care? Exploring control group outcomes in behaviour change interventions for young people with type 1 diabetes. Psychology & Health, 30(1), 85–103.

Bishop, F., Fenge-Davies, A., Kirby, S., & Geraghty, A. (2015). Context effects and behaviour change techniques in randomised trials: A systematic review using the example of trials to increase adherence to physical activity in musculoskeletal pain. Psychology & Health, 30(1), 104–121.

Boutron, I., Moher, D., Altman, D. G., Schulz, K. F., & Ravaud, P.; CONSORT Group. (2008). Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: Explanation and elaboration. Annals of Internal Medicine, 148, 295–309.

Crocetti, M. T., Amin, D. D., & Scherer, R. (2010). Assessment of risk of bias among pediatric randomized controlled trials. Pediatrics, 126, 298–305. doi:10.1542/peds.2009-3121

Crutzen, R., Viechtbauer, W., Spigt, M., & Kotz, D. (2015). Differential attrition in health behaviour change trials: A systematic review and meta-analysis. Psychology & Health, 30(1), 122–134.

de Bruin, M., McCambridge, J., & Prins, J. (2015). Reducing the risk of bias in health behaviour change trials: Improving trial design, reporting, or bias assessment criteria? A review and case study. Psychology & Health, 30(1), 8–34.

de Bruin, M., Viechtbauer, W., Hospers, H. J., Schaalma, H. P., & Kok, G. (2009). Standard care quality determines treatment outcomes in control groups of HAART-adherence intervention studies: Implications for the interpretation and comparison of intervention effects. Health Psychology, 28, 668–674. doi:10.1037/a0015989

de Bruin, M., Viechtbauer, W., Schaalma, H. P., Kok, G., Abraham, C., & Hospers, H. J. (2010). Standard care impact on effects of highly active antiretroviral therapy adherence interventions: A meta-analysis of randomized controlled trials. Archives of Internal Medicine, 170, 240–250. doi:10.1001/archinternmed.2009.536

Evers, S., Hiligsmann, M., & Adarkwah, C. (2015). Risk of bias in trial-based economic evaluations: Identification of sources and bias-reducing strategies. Psychology & Health, 30(1), 52–71.

Guyatt, G. H., Oxman, A. D., Vist, G., Kunz, R., Brozek, J., Alonso-Coello, P., … Schünemann, H. J. (2011). GRADE guidelines: 4. Rating the quality of evidence–study limitations (risk of bias). Journal of Clinical Epidemiology, 64, 407–415. doi:10.1016/j.jclinepi.2010.07.017

Higgins, J. P. T., Altman, D. G., & Sterne, J. A. C. (2011). Chapter 8: Assessing risk of bias in included studies. In J. P. T. Higgins & S. Green (Eds.), Cochrane handbook for systematic reviews of interventions, Version 5.1.0.

Hoffmann, T., Glasziou, P., Boutron, I., Milne, R., Perera, R., Moher, D., … Michie, S. (2014). Better reporting of interventions: Template for intervention description and replication (TIDieR) checklist and guide. British Medical Journal, 348, g1687. doi:10.1136/bmj.g1687

Johnson, B., Low, R., & MacDonald, H. (2015). Panning for the gold in health research: Incorporating studies' methodological quality in meta-analysis. Psychology & Health, 30(1), 135–152.

McCambridge, J. (2015). From question–behaviour effects in trials to the social psychology of research participation. Psychology & Health, 30(1), 72–84.

Oberjé, E. J., de Kinderen, R. J., Evers, S. M., van Woerkum, C. M., & de Bruin, M. (2013). Cost-effectiveness of medication adherence-enhancing interventions: A systematic review of trial-based economic evaluations. Pharmacoeconomics, 31, 1155–1168. doi:10.1007/s40273-013-0108-8

Poobalan, A. S., Aucott, L. S., Precious, E., Crombie, I. K., & Smith, W. C. (2009). Weight loss interventions in young people (18 to 25 year olds): A systematic review. Obesity Reviews, 11, 580–592. doi:10.1111/j.1467-789X.2009.00673

Savović, J., Jones, H., Altman, D., Harris, R., Jüni, P., Pildal, J., … Sterne, J. (2012). Influence of reported study design characteristics on intervention effect estimates from randomised controlled trials: Combined analysis of meta-epidemiological studies. Health Technology Assessment, 16, 1–82.

Tarquinio, C., Kivits, J., Minary, L., Coste, J., & Alla, F. (2015). Evaluating complex interventions: Perspectives and issues for health behaviour change interventions. Psychology & Health, 30(1), 35–51.

Wood, L., Egger, M., Gluud, L. L., Schulz, K., Jüni, P., Altman, D. G., … Sterne, J. A. C. (2008). Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: Meta-epidemiological study. British Medical Journal, 336, 601–605. doi:10.1136/bmj.39465.451748.AD

Marijn de Bruin
[email protected]
