
A critical interpretive synthesis of the most commonly used self-report measures in Australian mental health research

Australasian Psychiatry, 1–6. © The Royal Australian and New Zealand College of Psychiatrists 2016. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav. DOI: 10.1177/1039856215626646. apy.sagepub.com

Jennifer Bibb, PhD Candidate, National Music Therapy Research Unit, Melbourne Conservatorium of Music, University of Melbourne, Parkville, VIC, Australia
Felicity A Baker, Professor, National Music Therapy Research Unit, Melbourne Conservatorium of Music, University of Melbourne, Parkville, VIC, Australia
Katrina Skewes McFerran, Professor, Head of Music Therapy, National Music Therapy Research Unit, Melbourne Conservatorium of Music, University of Melbourne, Parkville, VIC, Australia

Corresponding author: Jennifer Bibb, National Music Therapy Research Unit, Melbourne Conservatorium of Music, University of Melbourne, 151 Barry Street, Parkville 3010, VIC, Australia. Email: [email protected]

Abstract

Objective: To critically examine the self-report measures most commonly used in Australian mental health research in the last 10 years.
Method: A critical interpretive synthesis was conducted using seven outcome measures that were identified as the most popular in 43 studies from three mental health journals.
Results: Results suggest that the amount and type of language used in outcome measures is important both in increasing the accuracy of the data collected and in fostering positive experiences of data collection for participants.
Conclusions: Results indicate that many of the measures most often used in Australian mental health research may not align with the contemporary philosophy of mental health clinical practice in Australia.
Keywords: outcome measures, self-report, mental health research

Contemporary mental health research is influenced by Recovery principles and emphasises the importance of consumers’ perspectives informed by their lived experience.1 Based on the principles of empowerment and collaboration, Recovery brings lived experience together with the skills and knowledge of mental health practitioners to challenge conventional philosophies of power and knowledge in health care.2 Thus, the role of consumers as active participants in mental health research has been increasingly apparent in recent years.

Self-report tools aim to measure subjective components of health and wellbeing such as psychosocial wellbeing, happiness or loneliness. According to some researchers, self-report measures can foster experiences of self-efficacy and empowerment in comparison to clinician-reported measures, since they emphasise the user’s perspective.3,4

When asked about their experiences of completing self-report measures, consumers in two studies placed great value on completing a questionnaire to measure mental health outcomes.4,5 In a study interviewing young people and their parents about clinical outcome measures used in child and adolescent mental health care, parents reported that answering questions on measures enabled them to reflect on their child’s condition.5 Young people in the same study reported that completing a written measure was ‘easier’ and ‘less embarrassing’ than talking to practitioners about their feelings (p.522).

Self-report measurement in mental health research

There is some debate regarding the appropriateness of self-report measures in mental health research. One objection made by some consumers in the study by Stasiak et al. was the lack of ability to report on the complexity of mental health or capture more specific issues.5 The continuous fluctuation in the acuity of mental illness can make it difficult to accurately track the progress of mental health and functioning.3 The other main critique of self-report measures has been the potential for participants to be dishonest in their responses or, alternatively, to misunderstand the questions asked of them. In the study by Stasiak et al., young people reported that it was ‘easy’ and ‘tempting’ to be dishonest when completing self-report measures, particularly when asked about sensitive topics by clinicians they did not trust (p.527).5

To determine the effectiveness of treatment outcomes in community mental health settings, it is essential that self-report outcome measures accurately capture consumers’ perspectives.6 Currently, it is assumed that these types of measures are understood by consumers and therefore provide an accurate representation of their perceptions. A critical examination of the measures most commonly used in this field of research is needed to challenge the assumptions that underpin our current beliefs about self-report outcome measures. Without this, we are unable to confidently propose a ‘true picture’ of what works in community mental health care in Australia. The intention of this study was to critically examine the self-report measures most commonly used in Australian mental health research in the last 10 years.

Method

Critical interpretive synthesis

A critical interpretive synthesis (CIS) is a specific approach to qualitative literature synthesis. Unlike integrative literature reviews aiming to find ‘truth’ or ‘effects’,7 a CIS is interpretive and focuses on critically examining the literature and challenging the way conclusions have been constructed.8 Our methodology for conducting this CIS was adopted from the recommendations made by Skewes McFerran et al. for ‘doing a CIS’ (see Table 1; Skewes McFerran et al., unpublished data, 2015). This methodology was adapted to reflect the specific focus of the current CIS, which is to examine self-report outcome measures used in community mental health research studies published in the last 10 years.

Approaching the outcome measures and the literature

Rather than identifying a research question that was fixed from the outset, the study was guided by the intention to critically examine the self-report measures, which provided a ‘compass’ to guide the review ‘journey’.9 Emerging ideas then influenced the rise of new questions and concepts that were pursued throughout the analysis.8

Table 1. Methodology adapted from Skewes McFerran et al. (unpublished data, 2015)

1 Approaching the outcome measures and the literature
2 Gathering the outcome measures from the literature
3 Interrogating the outcome measures
4 Synthesising the outcome measures

Gathering the outcome measures from the literature

Literature was first gathered to identify the outcome measures most commonly used in published research studies. Studies conducted in Australia between 2005 and 2015 with adult mental health consumers living in the community were systematically collected. Inclusion criteria were research studies conducted in Australia with adults, using standardised pre-post self-report outcome measures in a community mental health context. To limit the search, Australia’s two leading mental health journals were selected – the Australian and New Zealand Journal of Psychiatry (impact factor 3.765) and Australasian Psychiatry (impact factor 0.556) – along with the leading international journal on community-based programs, the Community Mental Health Journal (impact factor 1.146). The final search terms included ‘community’ and ‘Australia’. All references resulting from the search were imported into an EndNote library (EndNote X5, Thomson Reuters). Three stages of screening were used to determine the final selection of included studies (see Figure 1).

Interrogating the outcome measures

The names of the outcome measures used in the 43 included studies were extracted. Outcome measures that were tallied three or more times were considered the most ‘popular’ tools used in the three journals between 2005 and 2015. These outcome measures were included in the review and are presented in Table 2. A total of 87 tools were used within the 43 studies, with most studies using more than one. Copies of the tools were then sourced, and information was extracted from each tool and entered into an Excel spreadsheet. The aim of the analysis was to discover the similarities and differences in the data extracted from the outcome measures.
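As a minimal sketch of the tallying step described above (the review itself used an EndNote library and an Excel spreadsheet rather than code), the example below counts how many included studies used each tool and keeps those appearing three or more times. The study names and tool lists are hypothetical and purely illustrative.

```python
from collections import Counter

# Hypothetical extraction: each included study mapped to the self-report tools it used.
tools_by_study = {
    "Study A": ["K10", "DASS"],
    "Study B": ["K10", "RAS"],
    "Study C": ["K10", "WHOQoL-BREF"],
    # ... one entry per included study (43 in the review)
}

# Tally the number of articles in which each tool appears.
tally = Counter(tool for tools in tools_by_study.values() for tool in tools)

# Tools tallied three or more times were treated as the most 'popular'.
popular = {tool: count for tool, count in tally.items() if count >= 3}
print(sorted(popular.items(), key=lambda item: item[1], reverse=True))
```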

Results

Synthesising the outcome measures

There were three main factors that appeared to influence the popularity of the outcome measures used by Australian mental health researchers:



Figure 1.  Flowchart of study selection.

Table 2. Most commonly used self-report outcome measures

Tool                                                              Number of articles
Kessler 10 (K10)                                                  7
Recovery Assessment Scale (RAS)                                   4
Stages of Recovery Inventory (STORI)                              4
World Health Organisation Quality of Life – Brief (WHOQoL-BREF)   4
Beck Anxiety Inventory (BAI)                                      3
Beck Depression Inventory (BDI)                                   3
Depression Anxiety and Stress Scales (DASS)                       3

• Outcome measure development;
• Outcome measure aims;
• Outcome measure structure.

These three influencing factors will now be discussed in detail.

Outcome measure development

The measures identified in the current synthesis all had consumer input into their development. It is important, however, to distinguish between tools which were piloted on consumers and collaborative research where consumers were equal partners or co-investigators in the development process.10 The Beck Depression Inventory (BDI) and the Recovery Assessment Scale (RAS) were developed collaboratively with consumers, who had equal input into their development. The BDI was developed by collecting consumers’ own descriptions of their symptoms and using them to develop a scale reflecting the severity of each symptom.11 Similarly, the RAS was created from the analysis of consumers’ ‘recovery stories’ from an initial participatory research study and then piloted with 1800 community mental health consumers.12 The remaining five tools were piloted with consumers after the questions and structure of the tools had initially been developed by the researchers. It is surprising that only two of the top seven most commonly used self-report tools in Australian mental health research were developed from a consumer perspective, especially given the recent emphasis on consumer involvement in research and its importance for learning about contemporary developments in Recovery.

Outcome measure aims

The intended purpose of the outcome measures was identified by sourcing the original articles in which the authors described their development. Two tools (the RAS and the Stages of Recovery Instrument [STORI]) were designed to measure mental health Recovery, and in one case (the RAS) Recovery was defined by the consumers constructing the tool. The remaining tools aimed to measure mental health symptoms (four tools) and quality of life (one tool), as defined by researchers or clinicians. Outcome measures used in mental health research are traditionally developed by researchers to reflect clinically defined priorities.13 As such, their content is influenced by what researchers believe to be a good outcome.

Outcome measure structure

In order to analyse the structure of the outcome measures, the researchers chose to focus on two aspects: the number of words per item and the complexity of language used in the tools. This was in direct response to the very elements that consumers in the authors’ previous study had appeared to struggle with (Bibb et al., unpublished data, 2015). Consumers in that study reported difficulties with following a sentence through to the end and grasping what the words in standardised tools meant.

The average number of words per item was calculated for each outcome measure by manually counting the words and calculating means. Averages ranged from 2 to 15 words per question. The most commonly used tool (K10) had the largest mean number of words per question (15). The longest question in the K10 had 18 words: ‘during the last 30 days, about how often did you feel so nervous that nothing could calm you down?’ Although 18 words sounds lengthy, the question is well structured and uses simple language. The tool with an average of two words per question was the BDI, where questions were structured under themes with graded item-response choices. Short sentences were provided under each theme to describe levels of severity for that theme or symptom. This is seemingly helpful for participants who are literate and can scan the topic, looking through the possible options that suit them. However, a consumer with severe mental illness who may have limited mental capacity would need to read and process each response option separately.6

Two outcome measures included in the review (the World Health Organization Quality of Life – Brief [WHOQoL-BREF] and the Depression Anxiety Stress Scales [DASS]) used jargon or professional language in their questions. These tools had five or more words identified by the primary author of this review as professional terminology, including terms such as ‘physical environment’, ‘bodily appearance’, ‘leisure activities’ and ‘perspired’. Although Microsoft Word readability statistics suggested that the language used in these measures could be understood by an 11–13-year-old (Flesch-Kincaid Grade Levels 4.8 and 5.6, respectively), this metric is based on the American education system and does not take into account the capacity of the average person with a mental illness. While the average person in the general population may understand the meaning of these phrases, a mental health consumer is likely to become overwhelmed and confused if they do not understand the meaning of complex words. Using simple language and avoiding unnecessary clinical terms may increase participant comprehension and, in turn, yield meaningful answers to outcome measures.

Australian policy states that contemporary mental health care should ‘challenge traditional notions of professional power and expertise’ (p.2).2 Using professional and confusing language in outcome measures increases the division of power between consumers and researchers and potentially disempowers consumers through the research process (Bibb et al., unpublished data, 2015). If measures are not adapted or developed to be accessible to consumers, we risk capturing the perceptions of populations who do not in fact represent the majority of consumers with mental illness in Australia.13
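The two structural checks described above (mean words per item and a readability estimate) can be approximated with a short script. This is a rough sketch only: the second item is a hypothetical placeholder rather than an actual questionnaire item, the syllable counter is a crude heuristic, and the Flesch-Kincaid Grade Level uses the standard published formula, so results will only approximate the Microsoft Word figures reported above.

```python
import re

def mean_words_per_item(items):
    """Average number of words per question/item, as counted manually in the review."""
    return sum(len(item.split()) for item in items) / len(items)

def count_syllables(word):
    """Very rough vowel-group heuristic; real readability tools use dictionaries and rules."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Standard Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

items = [
    # K10 item quoted in the text above.
    "During the last 30 days, about how often did you feel so nervous "
    "that nothing could calm you down?",
    # Hypothetical placeholder item, for illustration only.
    "Over the past month, how often have you felt calm and settled?",
]
print(round(mean_words_per_item(items), 1))            # mean words per item
print(round(flesch_kincaid_grade(" ".join(items)), 1))  # approximate grade level
```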

Conclusion

A CIS was conducted to critically examine the self-report measures most commonly used in Australian mental health research in the last 10 years. Seven outcome measures were identified as the most popular and were included in this review. The strengths and weaknesses of each tool are summarised in Table 3 to provide the reader with an accessible reference when choosing outcome measurement tools for use in community mental health research. Results from the current review suggest that the amount and type of language used in measures is important both in increasing the accuracy of the data collected and in fostering positive experiences of data collection for participants. Results also indicate that many of the measures most often used in mental health research may not align with the contemporary philosophy of mental health practice in Australia, which emphasises lived experience and consumer input. We recommend that the development of consumer-oriented outcome measures be a prominent goal for researchers in this field.

The authors are cautious not to suggest that the results of this study are generalisable to all studies. The review is clearly located in three purposively selected journals and may or may not have relevance beyond them. Nonetheless, the findings are interesting.

Disclosure

The authors report no conflict of interest. The authors alone are responsible for the content and writing of the paper.



Table 3. Strengths and weaknesses of the included outcome measures

Kessler 10 (K10)
Strengths: • Only 10 items • Well-structured questions, simple language
Weaknesses: • Developed in USA – may have different cultural understandings of language • Developed by researchers and piloted on consumers • Intended use with non-clinical populations • Based on ‘last 4 weeks’ • Average of 15 words per question – long questions

Recovery Assessment Scale (RAS)
Strengths: • Developed in 2000s • Developed collaboratively with consumers • Outcome of ‘recovery’ defined by consumers • Intended use with clinical populations • Based on the ‘current moment’ • Minimal use of jargon/professional language
Weaknesses: • Developed in USA • Lengthy – 41 items

Stages of Recovery Inventory (STORI)
Strengths: • Developed in 2000s, so has incorporated Recovery principles • Developed in Australia • Intended use with clinical populations • Based on the ‘current moment’ • Minimal use of jargon/professional language
Weaknesses: • Developed by researchers and piloted on consumers • Lengthy – 50 items • Reverse coding • Graded item-response options

World Health Organisation Quality of Life – Brief (WHOQoL-BREF)
Strengths: • Cross-cultural use
Weaknesses: • Developed by researchers and piloted on consumers • Intended use with non-clinical populations • Reverse coding • Graded item-response options • Uses jargon/professional language

Beck Anxiety Inventory (BAI)
Strengths: • Intended use with clinical populations • Minimal use of jargon/professional language
Weaknesses: • Developed in USA – may have different cultural understandings of language • Developed by researchers and piloted on consumers

Beck Depression Inventory (BDI)
Strengths: • Developed collaboratively with consumers • Intended use with clinical populations • Average of only 2 words per question – short questions • Minimal use of jargon/professional language
Weaknesses: • Developed in 1972 • Developed in USA • Items structured under themes with graded item-response choices – difficult to understand • Reverse coding

Depression Anxiety and Stress Scales (DASS)
Strengths: • Developed in Australia • Intended use with clinical populations
Weaknesses: • Developed by researchers and piloted on consumers • Lengthy – 42 items • Uses jargon/professional language



References

1. Banfield MA, Barney LJ, Griffiths KM, et al. Australian mental health consumers’ priorities for research: Qualitative findings from the SCOPE for research project. Health Expectations 2012; 17: 365–375.
2. Commonwealth of Australia. A national framework for recovery-oriented mental health services: Guide for practitioners and providers. Canberra: Commonwealth of Australia, 2013.
3. Trauer T. Outcome measurement in chronic mental illness. International Review of Psychiatry 2010; 22(2): 99–113.
4. Guthrie D, McIntosh M, Callaly T, et al. Consumer attitudes towards the use of routine outcome measures in a public mental health service: A consumer-driven study. International Journal of Mental Health Nursing 2008; 17: 92–97.
5. Stasiak K, Parkin A, Seymour F, et al. Measuring outcome in child and adolescent mental health services: Consumers’ views of measures. Clinical Child Psychology and Psychiatry 2012; 18(4): 519–535.
6. Greeno CG, Colonna-Pydyn C and Shumway M. The need to adapt standardized outcome measures to community mental health. Social Work in Public Health 2007; 23(2): 125–138.
7. Harden A and Thomas J. Mixed methods and systematic reviews: Examples and emerging issues. In: Tashakkori A and Teddlie C (eds) Handbook of mixed methods in the social and behavioral sciences. 2nd ed. New York: Sage, 2010, pp.749–774.
8. Dixon-Woods M, Cavers D, Agarwal S, et al. Conducting a critical interpretive synthesis of the literature on access to healthcare by vulnerable groups. BMC Medical Research Methodology 2006; 6(1): 35–48.
9. Eakin JM and Mykhalovskiy E. Reframing the evaluation of qualitative health research: Reflections on a review of appraisal guidelines in the health sciences. Journal of Evaluation in Clinical Practice 2003; 9: 187–194.
10. Green CA, Estroff SE, Yarborough BJH, et al. Directions for future patient-centered and comparative effectiveness research for people with serious mental illness in a learning mental health care system. Schizophrenia Bulletin 2014; 40(Suppl. 1): S1–S94.
11. Beck AT. Depression: Causes and treatment. Philadelphia: University of Pennsylvania Press, 1972.
12. Corrigan PW, Salzer M, Ralph RO, et al. Examining the factor structure of the Recovery Assessment Scale. Schizophrenia Bulletin 2004; 30: 1035–1042.
13. Kroll T, Wyke S, Jahagirdar D, et al. If patient-reported outcome measures are considered key health-care quality indicators, who is excluded from participation? Health Expectations 2011; 17: 605–607.

