Archives of Clinical Neuropsychology 30 (2015) 377–386

Does True Neurocognitive Dysfunction Contribute to Minnesota Multiphasic Personality Inventory-2nd Edition-Restructured Form Cognitive Validity Scale Scores?

Phillip K. Martin*, Ryan W. Schroeder, Robin J. Heinrichs, Lyle E. Baade

University of Kansas School of Medicine, Wichita, USA

*Corresponding author at: University of Kansas School of Medicine, 7829 E. Rockhill, Suite 105, Wichita, KS 67206, USA. Tel.: +1 316 293 3850; fax: +1 855 476 0305. E-mail address: [email protected].

Accepted 5 May 2015. Advance Access publication on 7 June 2015. doi:10.1093/arclin/acv032

Abstract

Previous research has demonstrated that RBS and FBS-r identify non-credible reporters of cognitive symptoms, but the extent to which these scales might be influenced by true neurocognitive dysfunction has not been previously studied. The present study examined the relationship between these cognitive validity scales and neurocognitive performance across seven domains of cognitive functioning, both before and after controlling for PVT status, in 120 individuals referred for neuropsychological evaluations. Variance in RBS, but not FBS-r, was significantly accounted for by neurocognitive test performance across most cognitive domains. After controlling for PVT status, however, relationships between neurocognitive test performance and validity scales were no longer significant for RBS and remained non-significant for FBS-r. Additionally, PVT failure accounted for a significant proportion of the variance in both RBS and FBS-r. Results support both the convergent and discriminant validity of RBS and FBS-r. As neither scale was impacted by true neurocognitive dysfunction, these findings provide further support for the use of RBS and FBS-r in neuropsychological evaluations.

Keywords: Minnesota Multiphasic Personality Inventory-2nd Edition-Restructured Form; Cognitive dysfunction; Validity scale; Malingering

Introduction

The professional literature documents similarities and differences between cognitively based performance validity tests (PVTs) and self-report symptom validity tests (SVTs) (Larrabee, 2012). Both can assess the validity of cognitive complaints; however, one assesses the credibility of cognitive performances while the other assesses the credibility of self-reported cognitive symptoms. While associations have been found between PVTs and SVTs, they are not perfectly related (Ruocco et al., 2008; Van Dyke, Millis, Axelrod, & Hanks, 2013). Thus, it is often beneficial to utilize both types of validity scales when attempting to determine the credibility of observed and reported cognitive deficits.

The Minnesota Multiphasic Personality Inventory-2nd Edition (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) and the newer Minnesota Multiphasic Personality Inventory-2nd Edition-Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) are commonly used tests that assess a variety of clinical issues, including symptom validity. The MMPI-2-RF, specifically, has multiple standard validity scales designed to detect exaggeration or feigning of symptoms (Ben-Porath & Tellegen, 2008). Two of these validity scales, the Symptom Validity Scale (FBS/FBS-r) and the Response Bias Scale (RBS), are of notable relevance to neuropsychologists given their sensitivity to the presentation of non-credible cognitive symptoms in particular.

The FBS of the MMPI-2 was developed by Lees-Haley, English, and Glenn (1991) for the purpose of identifying personal injury litigants reporting implausible symptoms. The original FBS consists of 43 MMPI-2 items rationally chosen to discriminate between a group of credible personal injury litigants and two groups of malingerers: a non-credible personal injury group and a group of medical outpatients instructed to simulate mental illness. Lees-Haley and colleagues noted that the scale content reflected both an exaggeration of post-injury distress and a minimizing of pre-injury personality problems. Given the expanded empirical support and popularity of FBS, a 30-item corollary to the scale, the FBS-r, was provided with the publication of the MMPI-2-RF. Despite being shorter in length, research on FBS-r has shown that the scale is largely comparable with, if not slightly more sensitive than, the original FBS (Gervais, Ben-Porath, Wygant, & Sellbom, 2010; Jones & Ingram, 2011; Tellegen & Ben-Porath, 2008).

RBS was created specifically to identify individuals with non-credible memory complaints (Gervais, Ben-Porath, Wygant, & Green, 2007). Gervais and colleagues administered the MMPI-2 and various PVTs to disability claimants and counseling clients. Regression analyses predicted freestanding PVT outcome and identified 28 predictor items that were combined to develop RBS. Next, a validation of RBS was performed using another sample of patients who either passed or failed performance validity testing. Mean RBS scores were significantly different between the PVT pass and PVT fail groups, and linear regression analyses indicated that RBS had incremental validity over F, Fp, and FBS in discriminating between passing and failing of a freestanding PVT. Unlike FBS, which was revised from the MMPI-2 to the MMPI-2-RF, the composition of RBS is identical across these two versions of the MMPI.

Since the initial validation of FBS-r and RBS, follow-up studies have further supported their use in discriminating between credibly presenting patients and non-credibly presenting patients. For example, Jones, Ingram, and Ben-Porath (2012) found that scores on both RBS and FBS-r increased with increasing frequency of PVT failure. Furthermore, Schroeder, Baade, and colleagues (2012) found that FBS-r and RBS both identified >40% of individuals failing PVTs. Additionally, Jones and Ingram (2011) found that both FBS-r and RBS outperformed the other MMPI-2-RF validity scales in discriminating patients passing and failing PVTs. Taken together, such studies confirm an association between PVT performance and the cognitive symptom validity indicators of the MMPI-2-RF.

Given that both FBS-r and RBS have been found to relate to exaggerated cognitive impairment, one might ask if these scales are similarly associated with true cognitive dysfunction. Indeed, questions of whether validity tests might be influenced by specific cognitive deficits and whether validity tests might be related to actual cognitive abilities have been posed (Larrabee, 2012). An association between genuine cognitive dysfunction and FBS-r and RBS cannot be excluded simply on a rational basis, since some cognitive ability is required to comprehend and appropriately respond to the test items, and since both scales contain item content pertaining to difficulties with attention, concentration, reading comprehension, cognitive efficiency, memory, speech, and mental clarity (MMPI-2-RF; Ben-Porath & Tellegen, 2008).

A small handful of studies have examined the impact of cognitive deficits on the MMPI-2-RF validity scales. Youngjohn, Wershba, Stevenson, Sturgeon, and Thomas (2011) found that there were no significant differences between scores on the MMPI-2-RF validity scales across patients grouped according to traumatic brain injury severity. However, despite its potential relationship to cognitive symptom report, RBS was not examined in this study. Additionally, performance validity was not controlled for; thus, it is unclear whether the MMPI-2-RF validity scale scores would have increased as a function of brain injury severity if those demonstrating a non-credible performance (41.5% of the sample) were excluded or if validity were held constant.

The authors are aware of only one study examining the relationship between MMPI-2-RF cognitive validity scale scores and actual cognitive performance while controlling for performance validity. In that study, Gervais and colleagues (2010) found that the MMPI-2-RF over-reporting validity scales were associated with subjective memory complaints, as measured by a mean Memory Complaints Inventory (Green, 2004) score, but were not significantly correlated with objective memory test performance as measured by various California Verbal Learning Test (Delis, Kramer, Kaplan, & Ober, 1987) scores. However, correlations between MMPI-2-RF validity scales and other cognitive domains, including intelligence, Working Memory, Mental Processing Speed, Verbal Ability, Visuospatial Ability, and Executive Functioning, were not examined. Furthermore, Gervais et al.'s study was comprised of non-head-injured patients, who are reasonably less likely to have cognitive deficits than head-injured or other neurologic patients. Thus, it is still unknown whether dysfunction in various specific domains of cognitive functioning might result in elevated MMPI-2-RF cognitive validity scale scores in an impaired neuropsychological sample.

The purpose of the present study is to examine whether specific cognitive deficits result in increasing MMPI-2-RF cognitive validity scale scores in a sample of neuropsychological patients. Given the scales' content pertaining to cognitive problems as well as their demonstrated convergence with cognitive PVTs, it is quite conceivable that credible individuals with increasing degrees of cognitive dysfunction could produce elevated scores on RBS and FBS-r. As these scales are intended to elevate due to exaggerated, but not genuine, cognitive dysfunction, such an occurrence would threaten the construct validity of the scales and undermine the accuracy of their clinical interpretation. However, if the finding were that increasing cognitive dysfunction does not result in increased cognitive validity scale scores, this would provide further evidence for the use of these validity scales in neuropsychological evaluations of cognitively impaired individuals.


Methods

Participants

Patients were identified retrospectively from an archival database consisting of individuals who underwent a comprehensive neuropsychological evaluation in an outpatient clinic in Southeastern Kansas. Only patients who completed the tests utilized in the current study were retained for the sample. Individuals were excluded from the study sample if they had diagnoses of dementia or intellectual disability because of the increased likelihood of false-positive findings on many validity tests, which is not uncommon with these disorders (Dean, Victor, Boone, Philpott, & Hess, 2009; Marshall & Happe, 2007). Furthermore, individuals with diagnoses of somatoform or conversion disorder (n = 7) were excluded given that such patients tend to produce high scores on FBS-r and RBS due to excessive report of physical and cognitive dysfunction (Sellbom, Wygant, & Bagby, 2012). However, because these patients comprise a notable proportion of adult neuropsychological referrals, additional analyses were conducted including these individuals in the sample. Finally, individuals who responded inconsistently on the MMPI-2-RF, as defined by a VRIN or TRIN T score ≥80 (Ben-Porath & Tellegen, 2008), were excluded.

Overall, the final study sample contained 120 individuals. This sample was split between individuals for whom an external incentive was apparent (n = 60) and individuals for whom no external incentive was identified (n = 60). Placement in the no external incentive group did not guarantee a lack of incentive. Rather, it merely indicated that no external incentive was inherent in the referral question or explicitly stated upon interview of the examinee. The type and frequency of various identified external incentives can be seen in Table 1. Diagnostic impressions following evaluations were mixed and can be seen in Table 2. Demographic information is available in Table 3.

Table 1. Presence and type of external incentive

External incentive                        Total (n = 120)    PVT fail group (n = 27)    PVT pass group (n = 93)
No external incentive identified          60                 8                          52
External incentive identified             60                 19                         41
  Seeking or maintaining disability       29                 8                          21
  Civil litigation                        14                 6                          8
  Worker's compensation                   8                  4                          4
  Criminal litigation                     4                  0                          4
  Military related                        4                  0                          4
  Avoiding testifying                     1                  1                          0

Note: PVT = Performance Validity Test.

Study patients were classified according to number of failed PVTs, using the PVTs with associated cut-offs listed in Table 4. While 11 PVTs were administered to the patients overall, individual patients were administered only a select number of these measures, and the exact PVTs administered varied by patient. The percentage of study patients who were administered each PVT can be seen in Table 5. On average, patients were administered approximately five PVTs, and the mean number of failures by those in the PVT fail group was three (see Table 6). Individuals who passed all PVTs and who were administered at least three PVTs were placed in the PVT pass group (n = 93), and individuals failing two or more PVTs were placed in the PVT fail group (n = 27). As it is common for both credible and non-credible individuals to fail one PVT (e.g., Victor, Boone, Serpa, Buehler, & Ziegler, 2009), patients failing one, and only one, PVT were excluded from the study (n = 31).
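For illustration, the grouping rule just described can be expressed as a short script. This is a minimal sketch of that rule only, not the authors' code; the function name and return values are hypothetical.

```python
from typing import Optional

def classify_pvt_status(n_administered: int, n_failed: int) -> Optional[str]:
    """Classify a patient by performance validity test (PVT) results.

    Mirrors the grouping rule described above:
      - "pass": no PVT failures and at least three PVTs administered
      - "fail": two or more PVT failures
      - None:   exactly one failure, or too few PVTs to classify (excluded)
    """
    if n_failed >= 2:
        return "fail"
    if n_failed == 0 and n_administered >= 3:
        return "pass"
    return None  # excluded from the study sample

print(classify_pvt_status(5, 0))  # "pass"
print(classify_pvt_status(5, 1))  # None (excluded)
print(classify_pvt_status(4, 3))  # "fail"
```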

Procedure

All patients were administered a comprehensive neuropsychological battery minimally consisting of either the third or fourth edition of the Wechsler Adult Intelligence Scale (WAIS-III or WAIS-IV) and either the 3rd or 4th Edition of the Wechsler Memory Scale (WMS-III or WMS-IV). Most patients were administered the WAIS-IV (n = 113) and WMS-IV (n = 112), whereas very few patients were administered the WAIS-III (n = 7) and WMS-III (n = 8). All patients were also administered selected measures from the Delis-Kaplan Executive Functioning System (D-KEFS), a number of PVTs, and the MMPI-2 or MMPI-2-RF. For the minority of individuals administered the MMPI-2, the MMPI-2 was converted to the MMPI-2-RF.

For this study, multiple cognitive domains were coded and these included: Verbal Ability, Perceptual Reasoning, Working Memory, Processing Speed, Learning, Memory Recall, and Executive Functioning. Verbal Ability was assessed with the Verbal Comprehension Index (VCI) of either the WAIS-IV or WAIS-III.

Table 2. Frequencies of diagnostic impressions separated by PVT classification

Diagnosis                       PVT fail group (n = 27)    PVT pass group (n = 93)
Depressive disorder             6                          23
Cognitive disorder NOS          1                          15
Moderate-severe TBI             2                          11
Anxiety disorder                1                          11
Mild TBI                        10                         7
Multiple sclerosis              1                          6
Mild cognitive impairment       1                          5
Anoxia/hypoxia                  1                          3
CVA
  Right                         0                          2
  Left                          0                          2
No diagnosis                    0                          2
Chronic pain                    0                          2
Bipolar disorder                1                          1
ADHD                            0                          1
Epileptic seizures              0                          1
Brain abscess                   0                          1
Cancer history                  1                          0
Electrical injury               1                          0
Encephalitis                    1                          0

Notes: PVT = Performance Validity Test, TBI = traumatic brain injury, CVA = cerebrovascular disease, ADHD = attention deficit hyperactivity disorder.

Table 3. Demographic information

                        Total sample (n = 120)    PVT fail group (n = 27)    PVT pass group (n = 93)
Age                     45.70 ± 15.09             46.19 ± 13.52              45.56 ± 15.58
Education               14.26 ± 2.58              12.93 ± 2.13               14.65 ± 2.59
Male/female (%)         51.7/48.3                 51.9/48.1                  51.6/48.4
Ethnicity (%)
  Caucasian             92.5                      88.9                       93.5
  African American      2.5                       7.4                        1.1
  Hispanic              0.8                       0.0                        1.1
  Native American       0.8                       3.7                        0.0
  Middle Eastern        2.4                       0.0                        3.3
  Unknown               0.8                       0.0                        1.1

Note: PVT = Performance Validity Test.

The VCI is a composite index consisting of the subtests Similarities, Vocabulary, and Information and assesses verbal comprehension and reasoning, word knowledge, verbal concept formation, and fund of knowledge (Wechsler, 2008, 1997a, 1997b). Perceptual Reasoning was assessed with the Perceptual Reasoning Index (PRI) on the WAIS-IV or the Perceptual Organization Index (POI) on the WAIS-III. The PRI is a composite index consisting of the subtests Block Design, Matrix Reasoning, and Visual Puzzles, and the POI consists of the subtests Block Design, Matrix Reasoning, and Picture Completion. Both indices are interpreted as measuring the analysis and synthesis of abstract visual stimuli, nonverbal concept formation and reasoning, visual perception and organization, and fluid intelligence (Wechsler, 1997a, 1997b, 2008). Working Memory was assessed with the Working Memory Index (WMI) on either the WAIS-IV or WAIS-III. The WMI on the WAIS-IV is a composite index consisting of the subtests Digit Span and Arithmetic, and the WMI on the WAIS-III consists of these subtests in addition to Letter-Number Sequencing. The WMI assesses attention, working memory, and mental manipulation (Wechsler, 2008, 1997a). Processing Speed was assessed with the Processing Speed Index (PSI) on either the WAIS-IV or WAIS-III. The PSI on the WAIS-IV is a composite index consisting of the subtests Coding and Symbol Search, and the PSI on the WAIS-III consists of the subtests Digit Symbol-Coding and Symbol Search. The PSI measures mental processing speed, short-term visual memory, visual-motor coordination, attention, cognitive flexibility, and visual scanning ability (Wechsler, 1997a, 2008). Learning was calculated by averaging the scaled scores of Logical Memory I (LM I) and Verbal Paired Associates I (VPA I) from either the WMS-IV or the WMS-III (Wechsler, 1997b, 2009a). On LM I, individuals are asked to repeat details from two stories immediately following their presentation. On VPA I, individuals are read a list of word pairs and then, immediately following the presentation of this list, they are asked to provide the associated word when given one word from the pair.



Table 4. Cut-off points for administered PVTs

PVT                                                    Cut-off                        Evidenced by
1. Test of memory malingering
     Trial 2                                           <45                            Tombaugh (1996)
     Retention                                         <45                            Tombaugh (1996)
     Albany consistency index                          ≥10 IR                         Gunner, Miele, Lynch, and McCaffrey (2012); Schroeder and colleagues (2013)
2. Word Memory Test                                                                   Green (2005)
     Immediate memory                                  ≤82.5%
     Delayed memory                                    ≤82.5%
     Consistency                                       ≤82.5%
3. Reliable digit span (WAIS-III or WAIS-IV)           ≤6                             Schroeder, Baade, and colleagues (2012); Schroeder, Peck, Buddin, Heinrichs, and Baade (2012); Schroeder, Twumasi-Ankrah, Baade, and Marshall (2012)
4. Sentence repetition                                 ≤10                            Schroeder and Marshall (2010)
5. Rey 15-item plus recognition combination score      <20                            Boone, Salazar, Lu, Warner-Chacon, and Razani (2002)
6. Word choice                                         ≤43                            Wechsler (2009b)
7. Dot counting test                                   ≥19                            Boone and colleagues (2002)
8. Finger tapping (1st three trials dominant hand)     ≤35 (males), ≤28 (females)     Arnold and colleagues (2005)
9. Validity indicator profile                          Fail                           Frederick (2003)
10. Coin-in-the-hand test                              ≥2 errors                      Schroeder, Baade, and colleagues (2012); Schroeder, Peck, and colleagues (2012); Schroeder, Twumasi-Ankrah, and colleagues (2012)
11. WMS-III weighted consistency                       ≤39.5                          Bortnik and colleagues (2010)

Notes: PVT = Performance Validity Test, IR = inconsistent responses, WAIS-III = Wechsler Adult Intelligence Scale-3rd Edition, WAIS-IV = Wechsler Adult Intelligence Scale-4th Edition, WMS-III = Wechsler Memory Scale-3rd Edition.

Table 5. Percentage of patients administered each PVT by group

PVT                                                    Total sample (%)    PVT fail group (%)    PVT pass group (%)
Reliable digit span (WAIS-III or WAIS-IV)              100                 100                   100
Test of memory malingering                             73.3                92.6                  67.7
Word choice                                            71.7                63.0                  74.2
Finger tapping (1st three trials dominant hand)        68.3                66.7                  68.8
Dot counting test                                      60.8                63.0                  60.2
Word Memory Test                                       51.7                70.4                  46.2
Sentence repetition                                    45.0                40.7                  46.2
Rey 15-item plus recognition combination score         20.8                29.6                  18.3
WMS-III weighted consistency                           6.7                 18.5                  3.2
Coin-in-the-hand test                                  5.0                 11.1                  3.2
Validity indicator profile                             4.2                 3.7                   4.3

Notes: PVT = Performance Validity Test, WAIS-III = Wechsler Adult Intelligence Scale-3rd Edition, WAIS-IV = Wechsler Adult Intelligence Scale-4th Edition, WMS-III = Wechsler Memory Scale-3rd Edition.

Table 6. Average number of PVTs administered and failed

                                PVT fail group    PVT pass group
Number of PVTs administered     5.59 ± 1.25       4.90 ± 1.26
Number of PVTs failed           3.07 ± 1.30       0.00 ± 0.00

Note: PVT = Performance Validity Test.

Memory Recall was calculated by averaging the scaled scores of Logical Memory II (LM II) and Verbal Paired Associates II (VPA II) from either the WMS-IV or the WMS-III (Wechsler, 1997b, 2009a). Both tests require individuals to retain and retrieve previously learned information following a 20–30 min delay. Executive Functioning was calculated by averaging the scaled scores of Tower Total Achievement, Trail Making Switching, and Verbal Fluency Category Switching from the D-KEFS (Delis, Kaplan, & Kramer, 2001). Tower is a measure of visuospatial planning. Trail Making Switching requires individuals to alternate between connecting numbers and letters and is a measure of sequencing and cognitive flexibility. Verbal Fluency Category Switching requires individuals to name objects from two disparate categories in an alternating fashion and is a measure of cognitive set shifting and semantic verbal fluency.
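As a concrete illustration of how the Learning, Memory Recall, and Executive Functioning composites described above are formed, the sketch below averages the relevant scaled scores for a single patient. The variable names and example scores are hypothetical and are not taken from the study data.

```python
from statistics import mean

# Hypothetical scaled scores (mean = 10, SD = 3) for one patient.
scores = {
    "LM_I": 9, "VPA_I": 8,      # immediate recall subtests -> Learning
    "LM_II": 7, "VPA_II": 8,    # delayed recall subtests -> Memory Recall
    "Tower": 10, "TrailMaking_Switching": 9, "VerbalFluency_CategorySwitching": 11,  # D-KEFS -> Executive Functioning
}

learning = mean([scores["LM_I"], scores["VPA_I"]])
memory_recall = mean([scores["LM_II"], scores["VPA_II"]])
executive_functioning = mean([scores["Tower"],
                              scores["TrailMaking_Switching"],
                              scores["VerbalFluency_CategorySwitching"]])

print(f"Learning = {learning}, Memory Recall = {memory_recall}, "
      f"Executive Functioning = {executive_functioning:.1f}")
```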



Data Analyses

To determine the proportion of the variance of RBS and FBS-r accounted for by cognitive domain and by PVT status, squared Pearson correlations were analyzed. To determine the proportion of the variance of RBS and FBS-r accounted for by true cognitive ability, squared partial correlations between each MMPI-2-RF validity indicator and each cognitive domain were analyzed while controlling for PVT status. Given that multiple statistical calculations were performed, results for these analyses were considered statistically significant if p < .01 to reduce the likelihood of Type I error.
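A minimal sketch of these two analyses is shown below. It implements the standard first-order partial correlation formula directly rather than reproducing the authors' software, and the array names and simulated data are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

def variance_accounted_for(x, y):
    """Squared Pearson correlation: proportion of variance in y associated with x."""
    r, p = stats.pearsonr(x, y)
    return r ** 2, p

def partial_variance_accounted_for(x, y, z):
    """Squared first-order partial correlation of x and y, controlling for z:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
    """
    r_xy = stats.pearsonr(x, y)[0]
    r_xz = stats.pearsonr(x, z)[0]
    r_yz = stats.pearsonr(y, z)[0]
    r_xy_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
    return r_xy_z ** 2

# Simulated stand-ins: PVT status (0 = pass, 1 = fail), a cognitive index, and RBS T scores.
rng = np.random.default_rng(0)
pvt_fail = rng.integers(0, 2, size=120)
working_memory = 100 - 20 * pvt_fail + rng.normal(0, 13, size=120)
rbs_t = 70 + 12 * pvt_fail + rng.normal(0, 14, size=120)

print(variance_accounted_for(working_memory, rbs_t))                    # zero-order r^2 and p
print(partial_variance_accounted_for(working_memory, rbs_t, pvt_fail))  # r^2 after controlling for PVT status
```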

Results

Table 7. Means and SDs for cognitive indices and MMPI-2-RF validity indicators across PVT fail and PVT pass groups

                         PVT fail              PVT pass
                         M         SD          M         SD          t        Cohen's d    p-value
Verbal Ability           91.70     15.04       102.99    14.38       3.553    0.654        .001
Perceptual Reasoning     91.15     12.53       102.56    13.72       3.877    0.714        <.001
Working Memory           80.93     12.13       102.11    13.46       7.353    1.354        <.001
Processing Speed         82.11     11.73       98.22     15.86       3.897    0.902        <.001
Learning                 7.43      2.68        10.30     2.47        5.218    0.961        <.001
Memory Recall            6.81      2.68        10.26     2.57        6.086    1.121        <.001
Executive Funct.         7.49      2.87        10.23     2.45        4.910    0.904        <.001
RBS                      82.41     12.90       69.87     14.28       4.099    0.755        <.001
FBS-r                    75.85     13.36       63.97     12.04       4.405    0.811        <.001

Notes: PVT = Performance Validity Test, Verbal Ability = VCI from either the WAIS-IV or WAIS-III (Standard Score), Perceptual Reasoning = PRI/POI from either the WAIS-IV or WAIS-III (Standard Score), Working Memory = WMI from either the WAIS-IV or WAIS-III (Standard Score), Processing Speed = PSI from either the WAIS-III or WAIS-IV (Standard Score), Learning = Average of Logical Memory I and VPA I from either the WMS-IV or WMS-III (Scaled Score), Memory Recall = Average of Logical Memory II and VPA II from either the WMS-IV or WMS-III (Scaled Score), Executive Funct. (Executive Functioning) = Average of Tower, Trail Making Switching, and Category Fluency Verbal Switching from the D-KEFS (Scaled Score), RBS = Response Bias Scale from the MMPI-2-RF (t-score), FBS-r = Symptom Validity Scale from the MMPI-2-RF (t-score).

Table 8. Variance of RBS and FBS-r accounted for by PVT status

              RBS        FBS-r
PVT status    0.125*     0.136*

Notes: PVT = Performance Validity Test, RBS = Response Bias Scale, FBS-r = Symptom Validity Scale. Squared correlations (r²) are used for the above effect sizes. *p < .001.


Means and SDs across cognitive indices and MMPI-2-RF validity indicators are provided for both the PVT fail group and the PVT pass group in Table 7. PVT fail designation was associated with significantly lower test performance across all cognitive domains at the p < .001 level except for Verbal Ability, for which group differences were significant at p = .001. Group differences were also significant for both RBS and FBS-r when the Type I error rate was set at α = .01. The effect sizes for these differences ranged from medium to large, with six of nine comparisons yielding a large effect. Similarly, PVT status accounted for a significant proportion of the variance of both RBS (12.5%) and FBS-r (13.6%) (see Table 8).

Examination of Table 9 reveals that RBS was generally significantly associated with test performance across cognitive domains at p < .01, with the relationship between RBS and Perceptual Reasoning being the single exception. Cognitive test performance accounted for as much as 9.5% of the variance of RBS. Conversely, cognitive test performance did not account for a significant proportion of the variance of FBS-r. When controlling for PVT status, cognitive test performance was no longer associated with RBS and remained unassociated with FBS-r (Table 10). In accounting for RBS variance, only one cognitive domain, Verbal Ability, even approached significance (p = .029).

Similar patterns emerged after including the seven individuals diagnosed with somatoform disorder who were initially excluded from the analyses. As illustrated by Table 11, cognitive functioning contributed to both RBS and FBS-r in this expanded sample; however, these associations were no longer significant after controlling for PVT status (Table 12). Finally, Table 13 illustrates the relationship between cognitive functioning and the MMPI-2-RF cognitive validity indicators while controlling for PVT status when considering individuals with and without an identified external incentive separately. As noted, neither RBS nor FBS-r was significantly related to cognitive functioning across these two groups.
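For readers who want to reproduce effect sizes of this kind from summary statistics, the sketch below computes a pooled-SD Cohen's d and the proportion of variance explained by a dichotomous grouping (a squared point-biserial correlation recovered from t). The paper does not state which d variant was used, so values computed this way may differ somewhat from those reported in Table 7.

```python
import math

def cohens_d_pooled(m1, s1, n1, m2, s2, n2):
    """Cohen's d for two independent groups using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m2 - m1) / math.sqrt(pooled_var)

def r_squared_from_t(t, df):
    """Squared point-biserial correlation: share of variance in a continuous
    measure accounted for by group membership, recovered from an independent-samples t."""
    return t**2 / (t**2 + df)

# Working Memory summary values from Table 7 (PVT fail: n = 27; PVT pass: n = 93).
print(cohens_d_pooled(80.93, 12.13, 27, 102.11, 13.46, 93))  # ~1.61 with a pooled SD; Table 7 reports 1.35, suggesting a different d formula
# RBS group difference: t = 4.099 with df = 118 corresponds to ~12.5% of variance (Table 8).
print(r_squared_from_t(4.099, 118))  # ~0.125
```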


Table 9. Variance of RBS and FBS-r accounted for by cognitive test performance before controlling for PVT status

                           RBS        FBS-r
Verbal Ability             0.083*     0.023
Perceptual Reasoning       0.048      0.042
Working Memory             0.095*     0.044
Processing Speed           0.070*     0.024
Learning                   0.063*     0.017
Memory Recall              0.081*     0.014
Executive Functioning      0.085*     0.009

Notes: PVT = Performance Validity Test, RBS = Response Bias Scale, FBS-r = Symptom Validity Scale. Squared correlations (r²) are used for the above effect sizes. *p < .01.

Table 10. Variance of RBS and FBS-r accounted for after controlling for PVT status

                           RBS        FBS-r
Verbal Ability             0.040      0.002
Perceptual Reasoning       0.013      0.008
Working Memory             0.020      0.000
Processing Speed           0.020      0.000
Learning                   0.013      0.013
Memory Recall              0.019      0.006
Executive Functioning      0.029      0.004

Notes: PVT = Performance Validity Test, RBS = Response Bias Scale, FBS-r = Symptom Validity Scale. Squared partial correlations (pr²) are used for the above effect sizes. Correlations are not significant at p < .01.

Table 11. Variance of RBS and FBS-r accounted for by cognitive test performance before controlling for PVT status (somatoform included in sample)

                           RBS        FBS-r
Verbal Ability             0.085*     0.025
Perceptual Reasoning       0.046      0.027
Working Memory             0.107**    0.055*
Processing Speed           0.071*     0.023
Learning                   0.059*     0.009
Memory Recall              0.072*     0.009
Executive Functioning      0.073*     0.004

Notes: PVT = Performance Validity Test, RBS = Response Bias Scale, FBS-r = Symptom Validity Scale. Squared correlations (r²) are used for the above effect sizes. *p < .01, **p < .001.

Table 12. Variance of RBS and FBS-r accounted for after controlling for PVT status (somatoform included in the sample)

                           RBS        FBS-r
Verbal Ability             0.042      0.004
Perceptual Reasoning       0.012      0.004
Working Memory             0.029      0.005
Processing Speed           0.023      0.000
Learning                   0.012      0.003
Memory Recall              0.015      0.006
Executive Functioning      0.024      0.006

Notes: PVT = Performance Validity Test, RBS = Response Bias Scale, FBS-r = Symptom Validity Scale. Squared partial correlations (pr²) are used for the above effect sizes. Correlations are not significant at p < .01.



Table 13. Variance of RBS and FBS-r accounted for after controlling for PVT status in individuals with and without an external incentive

                           External incentive        No external incentive
                           RBS        FBS-r          RBS        FBS-r
Verbal Ability             0.031      0.000          0.056      0.011
Perceptual Reasoning       0.024      0.009          0.005      0.013
Working Memory             0.000      0.002          0.083      0.007
Processing Speed           0.052      0.006          0.005      0.002
Learning                   0.003      0.001          0.022      0.007
Memory Recall              0.005      0.002          0.036      0.002
Executive Functioning      0.028      0.003          0.030      0.030

Notes: PVT = Performance Validity Test, RBS = Response Bias Scale, FBS-r = Symptom Validity Scale. Squared partial correlations (pr²) are used for the above effect sizes. Correlations are not significant at p < .01.

Discussion

The results from the current study help to clarify the relationship between neurocognitive performance and invalid reporting of cognitive symptomatology on the MMPI-2-RF. It is reasonable to suspect that neurocognitive test performance might relate to scores on RBS and FBS-r because each of these scales, first, contains item content relating to various domains of cognitive functioning; second, has been shown to elevate in individuals feigning or exaggerating cognitive symptoms; and third, requires some cognitive ability to read and appropriately respond to the questions. While research has demonstrated these scales to discriminate between individuals failing PVTs and those with true cognitive impairment, such findings do not rule out the possibility that both true cognitive impairment and exaggerated cognitive impairment might contribute to elevations on RBS and FBS-r. Such an occurrence would be problematic as it would obfuscate the clinical interpretation of these scales and make it difficult to determine whether a given elevation were the result of genuine cognitive difficulty or invalid responding.

To determine the contribution of cognitive functioning to the MMPI-2-RF cognitive validity scales, the present study examined the relationships between cognitive test performance and RBS and FBS-r of the MMPI-2-RF both before and after controlling for PVT results. Relationships between cognitive test performance and RBS were generally significant when not controlling for PVT status. While such findings demonstrate convergence between RBS and neurocognitive test performance, they do not necessarily establish an association between the scale and true cognitive functioning. Previous research has demonstrated that a substantial percentage of neuropsychological patients do not perform to the best of their ability on testing (Mittenberg, Patton, Canyock, & Condit, 2002) and that performance validity explains a significant proportion of the variance in neuropsychological test performance (Meyers, Volbrecht, Axelrod, & Reinsch-Boothby, 2011). Such phenomena were observed in the present study, as 22.5% of sample patients performed non-credibly. Across cognitive domains, these individuals performed significantly worse than those passing PVTs, with group differences more often than not yielding large effect sizes. Together, these findings strongly reinforce the position that cognitive test performance is not commensurate with cognitive ability when test validity is not first established. Therefore, to more accurately determine the impact of true cognitive functioning on RBS and FBS-r, squared correlations were analyzed, this time controlling for the variance accounted for by cognitive test validity.

FBS-r

FBS-r was unrelated to cognitive test performance and, after controlling for performance validity, remained unrelated. Such findings are not surprising given that FBS-r has been found to assess post-injury distress and under-reporting of pre-incident personality problems and is not necessarily expected to assess either true or feigned deficits in cognitive functioning specifically. Interestingly, while FBS-r did not relate to cognitive test performance, of the two MMPI-2-RF validity scales examined in this study, it was the more strongly associated with performance validity. In fact, 13.6% of the variance in FBS-r scores was explained by whether patients passed or failed PVTs. Such findings further support the use of FBS-r as an indicator of non-credible clinical presentation.
RBS

Prior to controlling for PVT status, RBS was significantly associated with cognitive test performance across the domains of Verbal Ability, Working Memory, Processing Speed, Learning, Memory Recall, and Executive Functioning.

A significant proportion of the variance in RBS scores (12.5%) was also explained by PVT status, an unsurprising finding given that RBS items were selected based upon their ability to distinguish between those failing and passing the Word Memory Test. However, when controlling for PVT status, thus removing the confounding effect of test validity from the model, variance of RBS was no longer accounted for by cognitive ability. Such findings indicate that genuine cognitive ability does not impact one's endorsement of RBS items and that deficits in actual cognitive ability are not expected to contribute to elevated scores on RBS. These results indicate that while high scores on RBS may be anticipated in individuals with low scores on cognitive testing, such occurrences are due to the impact of test invalidity on both cognitive test performance and RBS, and not due to a direct association between cognition and RBS. Additionally, the results demonstrate that patients providing credible test performance are not susceptible to inflated RBS scores as a function of cognitive impairment. Finally, the finding that RBS is related to PVT status reinforces the results of previous studies (e.g., Gervais et al., 2007; Schroeder, Baade, et al., 2012; Schroeder, Peck, et al., 2012; Schroeder, Twumasi-Ankrah, et al., 2012; Tarescavage, Wygant, Gervais, & Ben-Porath, 2013) demonstrating that RBS discriminates between those passing and failing PVTs across a variety of contexts.

Conclusions

Although other studies have previously demonstrated an association between PVT status and the MMPI-2-RF cognitive validity indicators, the current study is novel and additive in that it examined the relationship between RBS and FBS-r and true cognitive dysfunction. Thus, while previous research has supported the convergent validity of the MMPI-2-RF cognitive validity scales, the discriminant validity of these scales had not been established prior to the current study. By demonstrating convergence between the MMPI-2-RF validity indicators and PVT performance and a lack of convergence between these scales and valid neurocognitive test performance, this study provides strong evidence for the construct validity of FBS-r and RBS and supports their use in the evaluation of individuals reporting cognitive impairment. Both scales are associated with invalid approaches to cognitive testing as determined by failures on two or more PVTs, and, equally as critical, neither scale is associated with genuine cognitive functioning. These findings held true even when patients diagnosed with somatoform disorder were included in the analyses and occurred irrespective of the presence or absence of an external incentive. Thus, clinicians may be reassured that elevations on RBS and FBS-r are likely the result of invalid clinical presentation and not the result of true cognitive deficits.

While the present study utilized a sample of neuropsychological referrals, many of whom were referred due to cognitive decline, the exact degree of cognitive decline in these individuals is unknown. Thus, while the current study provides information regarding the contribution of cognitive functioning to FBS-r and RBS, it does not directly speak to the impact of specific degrees of cognitive decline on these scales. While Youngjohn and colleagues (2011) found that FBS-r was not impacted by brain injury severity, it appears equally important to determine that these scales are similarly unaffected by impairment severity in other neurological populations, given that domains and extent of impairment differ according to pathology. As the current study used a mixed neuropsychological referral sample, the results may be appropriately generalized to patients experiencing a variety of conditions affecting cognitive functioning. Given the relatively high average education and predominance of Caucasian individuals in this sample, though, future researchers are encouraged to replicate these findings across demographic groups, as previous research has demonstrated education and race to occasionally exert influence over scores on objective personality scales.

Conflict of Interest

None declared.

References

Arnold, G., Boone, K. B., Lu, P., Dean, A., Wen, J., Nitch, S., et al. (2005). Sensitivity and specificity of Finger Tapping Test scores for the detection of suspect effort. The Clinical Neuropsychologist, 19, 105–120.
Ben-Porath, Y. S., & Tellegen, A. (2008). MMPI-2 Restructured Form: Manual for administration, scoring, and interpretation. Minneapolis: University of Minnesota Press.
Boone, K., Lu, P., & Herzberg, D. (2002). The Dot Counting Test. Los Angeles: Western Psychological Services.
Boone, K. B., Salazar, X., Lu, P., Warner-Chacon, K., & Razani, J. (2002). The Rey 15-Item Recognition Trial: A technique to enhance sensitivity of the Rey 15-Item Memorization Test. Journal of Clinical and Experimental Neuropsychology, 24, 561–573.
Bortnik, K. E., Boone, K. B., Marion, S. D., Amano, S., Ziegler, E., Victor, T. L., et al. (2010). Examination of various WMS-III Logical Memory scores in the assessment of response bias. The Clinical Neuropsychologist, 24, 344–357.
Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen, A., & Kaemmer, B. (1989). Minnesota Multiphasic Personality Inventory-2: Manual for administration and scoring. Minneapolis: University of Minnesota Press.


Dean, A. C., Victor, T. L., Boone, K. B., Philpott, L. M., & Hess, R. A. (2009). Dementia and effort test performance. The Clinical Neuropsychologist, 23, 133–152.
Delis, D., Kaplan, E., & Kramer, J. (2001). Delis-Kaplan Executive Function System. San Antonio, TX: The Psychological Corporation, Harcourt Brace & Company.
Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (1987). California Verbal Learning Test (CVLT) manual. San Antonio, TX: The Psychological Corporation.
Frederick, R. I. (2003). Validity Indicator Profile manual. Texas: Pearson.
Gervais, R. O., Ben-Porath, Y. S., Wygant, D. B., & Green, P. (2007). Development and validation of a Response Bias Scale (RBS) for the MMPI-2. Assessment, 14, 196–208.
Gervais, R. O., Ben-Porath, Y. S., Wygant, D. B., & Sellbom, M. (2010). Incremental validity of the MMPI-2-RF over-reporting scales and RBS in assessing the veracity of memory complaints. Archives of Clinical Neuropsychology, 25, 274–284.
Green, P. (2004). Memory Complaints Inventory. Edmonton, AB, Canada: Green's Publishing.
Green, P. (2005). Manual for the Word Memory Test for Windows (Rev. ed.). Edmonton, AB, Canada: Green's Publishing.
Gunner, J. H., Miele, A. S., Lynch, J. K., & McCaffrey, R. J. (2012). The Albany Consistency Index for the Test of Memory Malingering. Archives of Clinical Neuropsychology, 27(1), 1–9.
Jones, A., & Ingram, M. V. (2011). A comparison of selected MMPI-2 and MMPI-2-RF validity scales in assessing effort on cognitive tests in a military sample. The Clinical Neuropsychologist, 25, 1207–1227.
Jones, A., Ingram, M. V., & Ben-Porath, Y. S. (2012). Scores on the MMPI-2-RF scales as a function of increasing levels of failure on cognitive symptom validity tests in a military sample. The Clinical Neuropsychologist, 26, 790–815.
Larrabee, G. (2012). Performance validity and symptom validity in neuropsychological assessment. Journal of the International Neuropsychological Society, 18, 1–7.
Lees-Haley, P. R., English, L. T., & Glenn, W. J. (1991). A fake bad scale on the MMPI-2 for personal injury claimants. Psychological Reports, 68, 203–210.
Marshall, P., & Happe, M. (2007). The performance of individuals with mental retardation on cognitive tests assessing effort and motivation. The Clinical Neuropsychologist, 21, 826–840.
Meyers, J. E., Volbrecht, M., Axelrod, B. N., & Reinsch-Boothby, L. (2011). Embedded symptom validity tests and overall neuropsychological test performance. Archives of Clinical Neuropsychology, 26(1), 8–15.
Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24(8), 1094–1102.
Ruocco, A. C., Swirsky-Sacchetti, T., Chute, D. L., Mandel, S., Platek, S. M., & Zillmer, E. A. (2008). Distinguishing between neuropsychological malingering and exaggerated psychiatric symptoms in a neuropsychological setting. The Clinical Neuropsychologist, 22, 547–564.
Schroeder, R. W., Baade, L. E., Peck, C. P., VonDran, E. J., Brockman, C. J., Webster, B. K., et al. (2012). Validation of MMPI-2-RF validity scales in criterion group neuropsychological samples. The Clinical Neuropsychologist, 26, 129–146.
Schroeder, R. W., Buddin, W. H., Jr., Hargrave, D. D., VonDran, E. J., Campbell, E. B., Brockman, C. J., et al. (2013). Efficacy of Test of Memory Malingering Trial 1, Trial 2, the Retention trial, and the Albany Consistency Index in a criterion group forensic neuropsychological sample. Archives of Clinical Neuropsychology, 28, 21–29.
Schroeder, R. W., & Marshall, P. S. (2010). Validation of the Sentence Repetition Test as a measure of suspect effort. The Clinical Neuropsychologist, 24, 326–343.
Schroeder, R. W., Peck, C. P., Buddin, W. H., Heinrichs, R. J., & Baade, L. E. (2012). The Coin-in-the-Hand Test and dementia: More evidence for a screening test for neurocognitive symptom exaggeration. Cognitive and Behavioral Neurology, 25(3), 139–143.
Schroeder, R. W., Twumasi-Ankrah, P., Baade, L. E., & Marshall, P. S. (2012). Reliable digit span: A systematic review and cross-validation study. Assessment, 19, 21–30.
Sellbom, M., Wygant, D., & Bagby, M. (2012). Utility of the MMPI-2-RF in detecting non-credible somatic complaints. Psychiatry Research, 197, 295–301.
Tarescavage, A. M., Wygant, D. B., Gervais, R. O., & Ben-Porath, Y. S. (2013). Association between the MMPI-2 Restructured Form (MMPI-2-RF) and malingered neurocognitive dysfunction among non-head injury disability claimants. The Clinical Neuropsychologist, 27(2), 313–335.
Tellegen, A., & Ben-Porath, Y. S. (2008). Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF): Technical manual. Minneapolis: University of Minnesota Press.
Tombaugh, T. (1996). Test of Memory Malingering. Toronto, Canada: MultiHealth Systems.
Van Dyke, S. A., Millis, S. R., Axelrod, B. N., & Hanks, R. A. (2013). Assessing effort: Differentiating performance and symptom validity. The Clinical Neuropsychologist, 27, 1234–1246.
Victor, T. L., Boone, K. B., Serpa, J. G., Buehler, J., & Ziegler, E. A. (2009). Interpreting the meaning of multiple symptom validity test failure. The Clinical Neuropsychologist, 23, 297–313.
Wechsler, D. (1997a). Wechsler Adult Intelligence Scale (3rd ed.). San Antonio, TX: The Psychological Corporation.
Wechsler, D. (1997b). Wechsler Memory Scale (3rd ed.). San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2008). Wechsler Adult Intelligence Scale (4th ed.). San Antonio, TX: Pearson Assessment.
Wechsler, D. (2009a). Wechsler Memory Scale (4th ed.). San Antonio, TX: Pearson Assessment.
Wechsler, D. (2009b). Advanced clinical solutions for the WAIS-IV and WMS-IV. San Antonio, TX: Pearson.
Youngjohn, J. R., Wershba, R., Stevenson, M., Sturgeon, J., & Thomas, M. L. (2011). Independent validation of the MMPI-2-RF somatic/cognitive and validity scales in litigants tested for effort. The Clinical Neuropsychologist, 25, 463–476.
