This article was downloaded by: [Selcuk Universitesi] On: 02 January 2015, At: 06:32 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Applied Neuropsychology: Adult Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/hapn21

Effort Test Performance in Clinical Acute Brain Injury, Community Brain Injury, and Epilepsy Populations

Natalie E. Hampson (a), Steven Kemp (b), Anthony K. Coughlan† (b), Chris J. A. Moulin (c) & Bipin B. Bhakta (d)

(a) Department of Neuropsychology, Salford Royal NHS Foundation Trust, Salford, United Kingdom
(b) Department of Neuropsychology, St. James's University Hospital, Leeds, United Kingdom
(c) LEAD CNRS UMR 5022, Dijon, France
(d) Academic Department of Rehabilitation Medicine, Faculty of Medicine and Health, University of Leeds, Leeds, United Kingdom

Published online: 12 Sep 2013.

To cite this article: Natalie E. Hampson, Steven Kemp, Anthony K. Coughlan†, Chris J. A. Moulin & Bipin B. Bhakta (2014) Effort Test Performance in Clinical Acute Brain Injury, Community Brain Injury, and Epilepsy Populations, Applied Neuropsychology: Adult, 21:3, 183-194, DOI: 10.1080/09084282.2013.787425

To link to this article: http://dx.doi.org/10.1080/09084282.2013.787425


APPLIED NEUROPSYCHOLOGY: ADULT, 21: 183–194, 2014
Copyright © Taylor & Francis Group, LLC
ISSN: 2327-9095 print/2327-9109 online
DOI: 10.1080/09084282.2013.787425

Effort Test Performance in Clinical Acute Brain Injury, Community Brain Injury, and Epilepsy Populations

Natalie E. Hampson
Department of Neuropsychology, Salford Royal NHS Foundation Trust, Salford, United Kingdom


Steven Kemp and Anthony K. Coughlan†
Department of Neuropsychology, St. James's University Hospital, Leeds, United Kingdom

Chris J. A. Moulin LEAD CNRS UMR 5022, Dijon, France

Bipin B. Bhakta Academic Department of Rehabilitation Medicine, Faculty of Medicine and Health, University of Leeds, Leeds, United Kingdom

Effort tests have become commonplace within medico-legal and forensic contexts and their use is rising within clinical settings. It is recognized that some patients may fail effort tests due to cognitive impairment and not because of poor effort. However, investigation of the base rate of failure among clinical populations other than dementia is limited. Forty-seven clinical participants were recruited and comprised three subgroups: acute brain injury (N = 11), community brain injury (N = 20), and intractable epilepsy (N = 16). Base rates of failure on the Word Memory Test (WMT; Green, 2003) and six other less well-validated measures were investigated. A significant minority of patients failed effort tests according to standard cutoff scores, particularly patients with severe traumatic brain injury and marked frontal-executive features. The WMT was able to identify failures associated with significant cognitive impairment through the application of profile analysis and/or lowered cutoff levels. Implications for clinical assessment, effort test interpretation, and future research are discussed.

Key words: effort, malingering, symptom validity, WMT

For the interpretation of neuropsychological tests to be valid, the examinee must apply full effort; otherwise, clinicians risk making a Type I error by concluding that someone is brain-damaged when this is not the case. There is currently considerable interest in the potential for feigning and exaggeration of symptoms as a threat to the validity of neuropsychological test results, particularly where personal gain is involved (e.g., Rogers, 2008a). To date, the majority of the literature has focused on populations in which feigning is thought to be most prevalent, such as medico-legal settings and disability benefit assessments.

† Dr. Tony Coughlan sadly died during the preparation of this article. The authors are extremely grateful for his contribution to this research.

Address correspondence to Natalie E. Hampson, Department of Neuropsychology, Clinical Sciences Building, Salford Royal Hospital, Stott Lane, Salford, M6 8HD, United Kingdom. E-mail: natalie. [email protected]


Although difficult to establish with certainty, numerous studies have sought to quantify the base rate of malingering across a variety of populations, with suggested rates of approximately 40% in those with mild traumatic brain injury (TBI) receiving disability payments or in litigation (Larrabee, 2003), up to 90% in criminal assessments (Ardolf, Denney, & Houston, 2007), and 8% in medical and psychiatric cases (Mittenberg, Patton, Canyock, & Condit, 2002), suggesting the economic burden of such false claims could be substantial. There are also clinical considerations: If feigning cannot be accurately identified, it may prevent or delay treatment for those genuinely in need. A label of malingering is also highly pejorative, and wrongful accusation or diagnosis can have a wide-ranging impact on the individual and their life.

Recently developed clinical guidelines have highlighted the importance of standardized symptom validity tests (commonly referred to as "effort" tests) within neuropsychological assessment. For example, the National Academy of Neuropsychology (NAN) position paper views such tests as central to understanding participant responses in neuropsychological assessment (Bush et al., 2005), and the British Psychological Society (BPS) also emphasizes the value and use of such assessments within UK practice (McMillan et al., 2009).

Belanger, Curtiss, Demery, Lebowitz, and Vanderploeg's (2005) meta-analysis of factors moderating outcome in mild TBI revealed that people not in litigation recovered within 3 months of injury, whereas those in litigation often continued to report symptoms or got worse over time. The recent Rohling et al. (2011) meta-analysis also showed no measurable effect of mild TBI at 3 months postinjury on neuropsychological tests, which are much more difficult than tests of effort.
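What a failed effort test means depends heavily on these base rates. As a hedged illustration (the two base rates loosely mirror the litigation and medical estimates cited above, but the sensitivity and specificity figures are invented, not taken from any published test), Bayes' rule shows how the same instrument carries very different error risks across settings:

```python
# Hypothetical sketch: how the base rate of poor effort changes what a
# "fail" means. Sensitivity and specificity values below are invented;
# the base rates loosely echo the litigation (~40%) and medical (~8%)
# estimates discussed in the text.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(poor effort | test failed), computed via Bayes' rule."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

for setting, base_rate in [("litigation sample", 0.40), ("routine clinical sample", 0.08)]:
    ppv = positive_predictive_value(sensitivity=0.90, specificity=0.90, base_rate=base_rate)
    print(f"{setting}: P(poor effort | fail) = {ppv:.2f}")
```

With identical test accuracy, a failure in the high-base-rate setting is strong evidence of poor effort (about .86 under these assumptions), whereas in the low-base-rate clinical setting nearly half of failures (about .44) would be wrongly attributed to poor effort, which is precisely the Type I risk at issue in clinical populations.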
Effort tests are typically memory tests designed to appear demanding, but they are actually simple to complete, even for people who have substantial cognitive impairments. Therefore, it is not credible that a mild TBI could cause people to fail extremely easy symptom validity subtests or cause more impairment than that seen in children with developmental disabilities (Green, Flaro, & Courtney, 2009). Numerous studies have identified that more people with mild TBI fail effort tests and score lower on other neuropsychological tests compared with those with more severe brain injury who pass such tests (e.g., Constantinou, Bauer, Ashendorf, Fisher, & McCaffrey, 2005; Green, Iverson, & Allen, 1999; Green, Rohling, Lees-Haley, & Allen, 2001; Meyers, Volbrecht, Axelrod, & Reinsch-Boothby, 2011; Moss, Jones, Fokias, & Quinn, 2003; Stevens, Friedel, Mehren, & Merten, 2008; West, Curtis, Greve, & Bianchini, 2011). For example, Fox (2011) reported no correlation between commonly used neuropsychological tests and objectively determined
brain damage among those who failed effort tests. However, for those who passed effort tests, the expected relationship between cognitive test data and brain damage was found. Therefore, poor effort seems to be the only reasonable explanation for people with a mild TBI to fail well-validated effort tests. However, questions remain regarding the level of failure in genuinely impaired populations. Although the recent NAN and BPS position papers provide guidance, there is still no gold standard to assess test effort. Without such a standard, it is difficult to assess validity adequately. Therefore, it is vital to know the accuracy of published cutoff scores if appropriate decisions are to be made regarding whether or not someone is putting forth their best effort. However, cutoff scores on effort tests have been criticized for increasing the probability that a person with genuine impairment could be classified as applying suboptimal effort (e.g., Greve, Ord, Curtis, Bianchini, & Brennan, 2008; Haines & Norris, 1995). Although most well-established symptom validity test manuals acknowledge the problem of high failure rates in truly impaired populations, this issue has often been managed by removing such populations from validation studies. This minimizes the very real concern that some people with genuine impairments will fail, and clinicians might wrongly conclude poor effort if tests are to be routinely used and guidance for their interpretation is not followed accurately. Therefore, it is important that the base rate of failure among clinical groups with more severe impairments is available. 
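The trade-off a cutoff score imposes can be made concrete with a small sketch. All scores and cutoffs below are hypothetical (they are not values from the WMT or any other published test): raising the cutoff catches more poor effort (higher sensitivity) but wrongly flags more genuinely impaired patients (lower specificity), and vice versa.

```python
# Hypothetical sketch of a pass/fail cutoff on an effort-test score.
# None of these scores or cutoffs come from a published test manual.

def classification_rates(poor_effort_scores, genuine_scores, cutoff):
    """Return (sensitivity, specificity) for a "fail if score < cutoff" rule.

    sensitivity: proportion of poor-effort examinees correctly flagged.
    specificity: proportion of genuinely impaired, good-effort examinees
                 who pass (i.e., are not wrongly flagged).
    """
    sensitivity = sum(s < cutoff for s in poor_effort_scores) / len(poor_effort_scores)
    specificity = sum(s >= cutoff for s in genuine_scores) / len(genuine_scores)
    return sensitivity, specificity

# Invented percent-correct scores for two groups of ten examinees.
poor_effort = [55, 60, 62, 70, 72, 75, 78, 80, 88, 91]
genuine = [68, 74, 80, 82, 85, 88, 90, 92, 95, 97]

for cutoff in (70, 82.5, 90):
    sens, spec = classification_rates(poor_effort, genuine, cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

In this invented sample, raising the cutoff from 70 to 90 lifts sensitivity from 30% to 90% but drops specificity from 90% to 40%; fixing a target specificity level and deriving the cutoff from it is the logic behind the adjusted-cutoff proposals discussed in this article.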
The Word Memory Test (WMT) is one of the most popular and well-investigated measures of symptom validity currently available, with the authors stating that differences in scores cannot be explained on the basis of actual cognitive deficits apart from those with the most extreme forms of learning and memory impairments, such as those with dementia (Green, 2003; Green et al., 2001). Despite increasingly widespread use, relatively few studies have provided base rate data in clinical populations. Some researchers have also implied that people with less extreme impairments could fail effort tests for cognitive reasons (e.g., Batt, Shores, & Chekaluk, 2008; Bowden, Shores, & Mathias, 2006). However, such studies have been criticized due to their design and nonstandard administration procedures (e.g., Flaro, Green, & Robertson, 2007; Rohling & Demakis, 2010), including the fact that the correct procedure for interpreting the WMT was not applied appropriately, as only the first trial was utilized, so judgments about failure rates are likely to be inaccurate. In addition, alongside the research involving those in litigation, some studies have attempted to provide data from clinical populations who have significantly impaired cognitive functioning. For example, Merten, Bossink, and
Schmand (2007) reported a base rate failure of 50% to 58% when using standard cutoff scores and no additional investigations in 24 people with "clinically obvious symptoms," as judged by a clinician as demonstrating bradyphrenia, repetitive speech, and word-finding difficulties. Despite Flaro et al. (2007) and Rohling and Demakis (2010) providing evidence that people with milder injuries cannot fail the WMT for cognitive reasons, and that incomplete WMT administration and interpretation may be inflating failure rates in genuinely impaired participants, the Bowden et al. (2006) and Merten et al. (2007) studies do indicate a need for further research in more severely impaired clinical populations, and their results have led to recent attempts to refine the cutoff scores and methods of test interpretation for certain clinical groups.

Greve et al. (2008) studied the classification accuracy of the WMT, the Portland Digit Recognition Test (Binder, 1993), and the Test of Memory Malingering (TOMM; Tombaugh, 1996) in detecting malingering in 109 people with TBI. The authors concluded that the WMT misclassified 19 of these participants, and that although the measure was sensitive to malingering when using the established cutoffs, this led to reduced specificity and unacceptably high rates of false positives. As such, adjusted cutoff scores associated with 90% and 98% specificity levels were proposed. Axelrod and Schutte (2011) also identified lower specificity in a clinical sample of 153 participants when using a shorter version of the WMT (Medical Symptom Validity Test [MSVT]; Green, 2004) in comparison to the TOMM and California Verbal Learning Test-Second Edition list-learning task (Delis, Kramer, Kaplan, & Ober, 2000), with reduced cutoff levels again being suggested. However, the Greve et al.
(2008) sample included participants with mild TBI, so it is possible that those who were judged as false positives are actually false negatives, given that many of the participants were not severely impaired and that some may have had incentives to malinger that were not considered within the study criteria. It could also be argued that the TOMM may be less sensitive than the WMT, rather than the WMT having lower specificity than the TOMM. For example, Blaskewitz, Merten, and Kathmann (2008) reported the TOMM only identified 68% of children who were simulating, versus a detection rate of 90% with the MSVT, indicating different sensitivity rates across the two measures. Green (2011) and Armistead-Jehle and Gervais (2011) have also identified similar findings using the nonverbal version of the MSVT (NV-MSVT; Green, 2008). As such, other researchers have proposed an alternative approach to evaluating performance on the WMT and related measures (i.e., the MSVT and NV-MSVT) in
people with established impairments that includes the retention of published cutoff levels and analysis of the profile of scores across the various subtests (Howe & Loring, 2009; Singhal, Green, Ashaye, Shankar, & Gill, 2009). For example, Henry, Merten, Wolf, and Harth (2010) identified a maximum false-positive rate of 1 out of 65 in 44 people with neurological conditions and 21 people with dementia diagnoses when applying profile analysis to the NV-MSVT. More recently, Green, Montijo, and Brockhaus (2011) also highlighted using profile analysis with the WMT in people being screened for dementia, and they reported far fewer failures when compared with interpretation using standard cutoff scores alone.

Although profile analysis is now included in the advanced interpretation computer program (Green, 2009) designed to allow users to automatically apply these methods to the interpretation of WMT data, profile analysis represents a recent finessing of effort test interpretation and is not yet routinely used clinically. For example, Axelrod and Schutte's (2010) argument that profile analysis is not useful was based on a potentially incomplete understanding of profile analysis, as a significant number of their participants had a diagnosis of mild TBI with no neurological impairment. Only when a person has an established clinical condition known to produce severe cognitive impairments would a "genuine cognitive impairment profile" be interpreted as reflecting legitimate severe impairment. Although Axelrod and Schutte (2011) do go on to acknowledge this as an issue, by extension, profile analysis may be incorrectly applied, or not applied at all, in clinical practice with genuinely impaired populations.

For good reasons, use of symptom validity testing is becoming standard clinical practice, with recent UK and U.S. guidelines promoting its use. However, there is limited research regarding the performance of clinical populations with established diagnoses other than dementia.
Hence, we know more about the sensitivity of these measures than we know about their specificity, and clinicians are now at risk for making a Type II error (i.e., assuming insufficient test effort and overlooking brain damage). The current research is an exploratory study intended to provide initial data regarding effort test performance among three distinct and clinically prevalent groups. We compared performance relative to standard cutoffs on the WMT (Green, 2003), along with several other less well-established measures that have been applied as tests of effort. Additionally, the study considers whether applying adjusted cutoff scores or ‘‘genuine cognitive impairment profile’’ analysis affects the failure rate, as suggested by a number of recent authors.
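The "genuine cognitive impairment profile" idea referred to above can be sketched roughly as follows. The subtest grouping and the numeric thresholds here are illustrative assumptions, not the published WMT algorithm, which is specified in the test manual (Green, 2003) and the advanced interpretation program (Green, 2009):

```python
# Illustrative sketch of profile analysis on WMT-style subtests.
# The subtest grouping, the 82.5% cutoff, and the 30-point difference
# threshold are assumptions for illustration, not the published criteria.

def mean(xs):
    return sum(xs) / len(xs)

def genuine_impairment_profile(easy, hard, fail_cutoff=82.5, diff_threshold=30.0):
    """Decide whether a below-cutoff performance looks like genuine impairment.

    easy: percent-correct scores on the easy (effort-sensitive) subtests.
    hard: percent-correct scores on the genuinely demanding memory subtests.

    The logic: a genuinely impaired patient may fall below the standard
    cutoff on the easy subtests but should score far lower still on the
    hard subtests; a flat profile is more suggestive of poor effort.
    """
    fails_standard_cutoff = mean(easy) < fail_cutoff
    much_worse_on_hard_subtests = (mean(easy) - mean(hard)) >= diff_threshold
    return fails_standard_cutoff and much_worse_on_hard_subtests

# Hypothetical severely impaired patient: easy subtests depressed but still
# well above near-floor hard subtests.
print(genuine_impairment_profile(easy=[75, 70, 72.5], hard=[40, 35, 30]))
# Hypothetical poor-effort pattern: easy and hard subtests equally low.
print(genuine_impairment_profile(easy=[60, 55, 58], hard=[55, 50, 52]))
```

As the article stresses, only when a patient has an established condition known to produce severe impairment would the first pattern be read as legitimate; the same profile in a mild TBI claimant would not license that conclusion.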


METHOD


Participants

A total of 47 participants (32 men, 15 women) were recruited during a 10-month period, encompassing three groups: Group 1 (acute brain injury) included 11 inpatients on a National Health Service postacute neurological rehabilitation ward; Group 2 (community brain injury) included 20 people in residential community rehabilitation services run by a registered charity; Group 3 (intractable epilepsy) included 16 outpatients attending a National Health Service regional center for epilepsy surgery (see Table 1). Participants were recruited in accord with the stipulations of the ethics committee that granted study approval. Namely, potential participants were identified by the treating consultant following inclusion/exclusion criteria. All participants were required to be older than 18 years of age and have a definitive diagnosis of brain injury or epilepsy (identified through computed tomography and/or magnetic resonance imaging brain scans and/or brain electroencephalography [EEG]). Participants were also required to have a good grasp of the English language.

Potential participants were excluded if the clinical team determined that they did not have the mental capacity to consent, if the team judged them as not capable of participating in the study for any reason, if they were using any substances that might influence cognitive test scores (e.g., current drug or alcohol misuse), or if they had experienced a seizure in the last 24 hr. The presence of comorbidity judged by the clinical team to be a significant influence (e.g., serious somatic or psychiatric illness) or visual, motor, or language dysfunction that precluded administration of computerized tasks (e.g., hemiparesis, aphasia) was also used as the basis for exclusion from the study. Exclusion criteria also included ongoing litigation (assessed via direct questioning of participants and/or confirmation by a treating professional). However, none of the participants required exclusion on this basis, likely because all participants presented for clinical reasons to UK National Health Services or were receiving rehabilitation from a registered charity.

None of the participants in the inpatient group were receiving any benefits for their injury, but all of the patients in the epilepsy group were receiving or had received some form of state benefit, ranging from disability payments to free prescription medications. The community participants had either already been awarded prior litigation payouts for their injuries, which were now being used to fund their care, or were having their care fees paid by local statutory services. In addition to having a confirmed significant neurological condition and no current litigation, the participants showed no signs of suboptimal performance, uncooperativeness, or negative response bias within their clinical presentation or history. This is in line with Henry et al.'s (2010) definition of "bona fide" neurology patients.

TABLE 1
Brain Injury Data for the Three Participant Groups

                             Group 1 (Inpatient)   Group 2 (Community)   Group 3 (Epilepsy)
                             N = 11                N = 20                N = 16
Time since diagnosis
  Mean                       1.1 months            11.9 years            22.9 years
  Range                      0–2 months            1–30 years            2–43 years
Nature of brain injury
  Hypoxic injury             —                     3 (15.0%)             —
  Traumatic injury           4 (36.3%)             12 (60.0%)            —
  Stroke                     2 (18.2%)             1 (5.0%)              —
  Intracerebral hemorrhage   1 (9.1%)              1 (5.0%)              —
  Subarachnoid hemorrhage    1 (9.1%)              1 (5.0%)              —
  Infection                  1 (9.1%)              2 (10.0%)             —
  Tumor                      2 (18.2%)             —                     6 (37.5%)
  Cavernoma                  —                     —                     1 (6.2%)
  Hippocampal sclerosis      —                     —                     9 (56.3%)
Location of brain lesion
  Diffuse                    1 (9.0%)              9 (45%)               —
  Right hemisphere           2 (18.2%)             2 (10%)               —
  Left hemisphere            4 (36.4%)             2 (10%)               —
  Primarily frontal          4 (36.4%)             7 (35%)               1 (6%)
  Bitemporal                 —                     —                     1 (6%)
  Right temporal             —                     —                     7 (44%)
  Left temporal              —                     —                     7 (44%)
Potential participants were contacted by the research team, provided with the study information sheet, and given time to consider whether to participate in the study. A total of 56 participants were identified by their treating consultant; of the 9 who declined to participate, 4 were from Group 1, 1 was from Group 2, and 4 were from Group 3. No external incentive was provided to any of the participants. As can be seen in Table 1, location of brain injury varied depending on the participant’s subgroup. The epilepsy sample had focal lesions (mainly left or right temporal lesions), whereas the community brain injury and acute brain injury subgroups had more diffuse and anterior brain damage. Due to the obvious overlap between brain injury and epilepsy, only those participants with epilepsy as their primary presenting problem, or due to an in-situ tumor resulting in monitoring at an epilepsy clinic, were included in the epilepsy group. None of the patients in the acute brain injury group were judged to be in posttraumatic amnesia (PTA) by the treating consultant. All patients with TBI in Group
1 and Group 2 had PTA for more than 24 hr and Glasgow Coma scores
