
Evaluating the Validity Indices of the Personality Assessment Inventory–Adolescent Version

Assessment, 2015, Vol. 22(4), 490-496. © The Author(s) 2014. Reprints and permissions: sagepub.com/journalsPermissions.nav. DOI: 10.1177/1073191114550478. asm.sagepub.com

Justin K. Meyer1, Sang-Hwang Hong2, and Leslie C. Morey1

Abstract

Past research has established strong psychometric properties for several indicators of response distortion on the Personality Assessment Inventory (PAI). To date, however, it has been unclear whether the response distortion indicators of the adolescent version of the PAI (PAI-A) operate in an equally valid manner. The current study examined several response distortion indicators on the PAI-A to determine their relative efficacy at detecting distorted responding, including both positive distortion and negative distortion. Protocols of 98 college students asked to either overreport or underreport were compared with those of 98 age-matched individuals sampled from the clinical standardization sample and the community standardization sample, respectively. Comparisons between groups were accomplished through the examination of effect sizes and receiver operating characteristic curves. All indicators, including several newly developed ones, demonstrated the ability to distinguish between actual and feigned responding. This study provides support for the ability of distortion indicators developed for the PAI to also function appropriately on the PAI-A.

Keywords: PAI, PAI-A, Personality Assessment Inventory, adolescents, response distortion, malingering

Modern psychological assessment depends on the accuracy of information gathered about clients and on the ability to detect distorted self-report. Because self-report measures are among the most commonly used assessment tools in modern psychology, there is a legitimate concern that clients may intentionally distort their responses to achieve some goal. For example, an individual may positively distort a response to a face-valid questionnaire item in order to secure a job, or negatively distort a response in an attempt to qualify for disability benefits. These concerns have been addressed by including scales and indices within self-report measures designed to detect such distortions (see Rogers, 2008, for a comprehensive review). The Personality Assessment Inventory (PAI; Morey, 1991) is one such assessment tool, offering a number of built-in validity scales as well as several supplementary indices developed to help identify distortion sets. In total, six primary indicators of efforts at response distortion have received significant research attention: three for positive distortion and three for negative distortion.

Positive distortion indicators include the Positive Impression Management (PIM) scale (Morey, 1991), the Defensiveness Index (DEF; Morey, 1993, 1996), and a discriminant function developed by Cashel et al. (Cashel, Rogers, Sewell, & Martin-Cannici, 1995). The PIM scale (Morey, 1991) assesses the presentation of an unrealistically favorable impression or the denial of relatively minor faults. Elevations can be indicative of positive impression management, but other factors, such as a lack of insight or an elevated self-appraisal, can also increase scores. The DEF (Morey, 1993, 1996) is composed of eight configural features of the PAI profile that tend to be observed much more frequently in the profiles of individuals instructed to present a positive impression than in actual normal or clinical individuals. The Cashel discriminant function (Cashel et al., 1995) is calculated using six different scales of the PAI and is the result of an analysis, conducted with prison inmates, that was designed to optimally distinguish between defensive and honest responding.

1Texas A&M University, College Station, TX, USA
2Chinju National University of Education, Jinju, Republic of Korea

Corresponding Author: Justin K. Meyer, Texas A&M University, 312 Psychology Building, College Station, TX 77843-4235, USA. Email: [email protected]

Downloaded from asm.sagepub.com at UNIV OF OTTAWA LIBRARY on August 5, 2015


Negative distortion indicators include the Negative Impression Management (NIM) scale (Morey, 1991), the Malingering Index (MAL; Morey, 1993, 1996), and a discriminant function developed by Rogers et al. (RDF; Rogers, Sewell, Morey, & Ustad, 1996). The NIM scale (Morey, 1991) was designed to indicate the possibility that the results of the test portray a more negative impression of the individual than might otherwise be merited, given his or her response style. The MAL (Morey, 1993, 1996) is composed of eight configural features of the PAI profile that tend to be observed much more frequently in the profiles of individuals simulating mental disorder (particularly severe mental disorders) than in actual clinical patients. The Rogers discriminant function (Rogers et al., 1996) is calculated using 20 different scales and subscales of the PAI and was designed to distinguish the PAI profiles of bona fide patients from those simulating such patients (including both naïve and "coached" simulators).

These six indicators have been shown to be effective at accurately detecting distorted responding on the PAI (Morey & Lanier, 1998). However, while the usefulness of the distortion indicators of the PAI is now well established (see Hawes & Boccaccini, 2009, for a review), little has been done to explore the same capabilities of the PAI's adolescent version, the PAI-A (Morey, 2007). Some preliminary support has emerged recently suggesting that the same validity indicators developed for the PAI can also have utility on the PAI-A (Meyer & Morey, 2012; Rios & Morey, 2013). For example, Rios and Morey (2013) examined the ability of the standard PAI-A validity indicators to detect students feigning attention deficit/hyperactivity disorder, showing that these indicators were capable of distinguishing between genuine and feigned responders.
The current study was developed to further examine the effectiveness of several distortion indicators of the PAI-A. In addition to the six PAI distortion indicators described above, three more recently developed indices were also examined. The first of these is an indicator of intentionally overreported responding called the Negative Distortion Scale (NDS; Mogge, Lepage, Bell, & Ragatz, 2010). The NDS was created using a sample of psychiatric inpatients and was predicated on the idea that respondents who intentionally overreport on the PAI tend to endorse relatively rare symptoms of psychopathology. Unlike other indicators of response distortion, which rely on calculations based on scales and subscales of the PAI, the NDS is calculated using individual items. The scale comprises 15 items from 12 separate PAI scales that have low endorsement among inpatient populations but high endorsement among overreporting respondents. Because several items of the PAI were omitted when constructing the PAI-A, only 14 corresponding items make up the NDS as it applies to the PAI-A.

The second and third experimental indices were developed using the Korean-language version of the PAI in South Korea (Hong & Kim, 2001). These indicators, the Hong Malingering Index and the Hong Defensiveness Index, were designed as discriminant functions to detect negative impression management and positive impression management, respectively. The Hong Malingering Index is calculated using five PAI scales (ICN [inconsistency], NIM [negative impression], ARD [anxiety-related disorders], PAR [paranoia], and WRM [warmth]), and the Hong Defensiveness Index is calculated using seven PAI scales (INF [infrequency], PIM [positive impression], ANX [anxiety], NON [nonsupport], RXR [treatment rejection], DOM [dominance], and WRM [warmth]). While these indicators demonstrated strong psychometric properties in South Korean samples, there does not appear to be any research on their use in an American population.

The purpose of this study was to examine the operating characteristics of these nine measures of response distortion (the six PAI indicators and the three experimental indices) when attempting to differentiate underreporting response styles from normal response styles, and intentionally overreported or exaggerated response styles from clinical response styles. Two experimental groups (a simulated underreporting group and a simulated overreporting group) were compared with two control groups (the community standardization sample and the clinical standardization sample from the PAI-A professional manual) to determine the operating characteristics of each measure of response distortion.

Method

Participants

The study consisted of 196 total participants. Of this total, 98 were college students recruited from an undergraduate introductory psychology class at Texas A&M University. The remainder were selected randomly from age-matched portions of the standardization samples: the community norm sample (as a comparison for the simulated underreporting group) and the clinical norm sample (as a comparison for the simulated overreporting group). The average age of all participants was 17.74 years (SD = 0.474), with all individuals being either 17 or 18 years old, and there were no significant age differences between groups. Within the simulated underreporting group, the majority of participants were Caucasian (60%), followed by Hispanic (23%), African American (6%), and Asian (2%); the majority of participants were female (71%). Within the simulated overreporting group, the most common ethnicity was Caucasian (50%), followed by Hispanic (32%), Asian (12%), and African American (4%); the majority of participants were female (68%).


Within the community standardization sample, the majority of participants were Caucasian (63%), followed by Hispanic (15%), African American (15%), and Asian (6%); there were equal numbers of males and females. Within the clinical standardization sample, the majority of participants were Caucasian (62%), followed by African American (16%), Hispanic (14%), and Asian (6%); the majority of participants were male (58%). Although students in both the underreporting and overreporting groups likely had some information about mental disorder and self-report tests by virtue of their enrollment in a course in introductory psychology, they were not given any special instruction about simulating mental disorder or about the use of validity scales on self-report tests. Thus, the simulators in this study should be considered "naïve" as opposed to "sophisticated" or "coached" in their approach to test distortion (Rogers, 2008).

Measures

Personality Assessment Inventory–Adolescent. The PAI-A (Morey, 2007) is a 264-item self-report questionnaire designed to provide comprehensive assessment information in a variety of contexts in which psychopathology, personality, and psychosocial environment are of clinical concern. The PAI-A was designed as an adolescent counterpart to the adult version and uses the same scales with some modifications. It is intended for use with adolescents between the ages of 12 and 18 years. The PAI-A community standardization sample was made up of 707 adolescents aged 12 to 18 years who were enrolled in junior high school, senior high school, or college. The clinical standardization sample included 1,160 adolescents being treated in clinical or correctional settings. Items on the PAI-A were selected using examinations of differential item functioning (Holland & Wainer, 1993) between adolescent and adult samples to identify items that were interpretively comparable across these samples, in an effort to ensure continuity of interpretation across the two instruments. Because the constructs and item selection were designed to yield an instrument that directly parallels the PAI, the PAI-A is scored and generally interpreted in the same manner as the PAI itself.

Procedures and Design

This study was approved by the university's institutional review board. Students participating in the study were randomly assigned to one of two groups: a simulated underreporting group (n = 48) and a simulated overreporting group (n = 50). All participants completed the self-administered version of the PAI-A. Participants in the simulated underreporting group were asked to imagine that they were applying for a job and attempting to maximally impress a potential employer. They were not, however, explicitly told to minimize psychopathology. Participants in the simulated overreporting group were asked to respond to the PAI-A as if they were someone with a severe mental illness, although they were not told to simulate any particular disorder. Participants received course credit for their participation.

These dissimulated data were then compared with the age-matched samples taken from the community standardization sample (n = 48) and the clinical standardization sample (n = 50) to determine the effectiveness of each of the nine response distortion indicators. By comparing the values of the positive distortion indicators in the simulated underreporting group with those in the community standardization sample, and the values of the negative distortion indicators in the simulated overreporting group with those in the clinical standardization sample, it can be determined whether each distortion indicator can detect dissimulated responding. For example, if an individual who is intentionally overreporting psychopathology (as in the simulated overreporting group) produces values that significantly exceed even those of individuals with genuine clinical psychopathology (as in the clinical standardization sample), this suggests that the individual is likely exaggerating his or her symptoms. Thus, the primary goal of the present study was to determine whether each of the nine response distortion indicators could detect significant differences between genuine and dissimulated responders.
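The group comparisons just described rest on two statistics reported in the Results: a pooled-standard-deviation Cohen's d and the area under the ROC curve (AUC). As a minimal illustration (using hypothetical scores drawn at random, not the study data), both can be computed as follows:

```python
# Sketch of the two group-comparison metrics used in this design, on
# hypothetical indicator scores: a pooled-SD Cohen's d and a nonparametric
# AUC, i.e., the probability that a randomly chosen simulator outscores a
# randomly chosen comparison case.
import random
import statistics

rng = random.Random(0)
simulators = [rng.gauss(60, 10) for _ in range(50)]  # e.g., PIM, simulated underreporters
comparison = [rng.gauss(48, 10) for _ in range(48)]  # age-matched community comparison

def cohens_d(a, b):
    """Mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

def auc(a, b):
    """AUC = P(score from a > score from b), with ties counted as half."""
    wins = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    return wins / (len(a) * len(b))

d = cohens_d(simulators, comparison)
area = auc(simulators, comparison)
```

With a true mean difference of roughly one pooled standard deviation, d lands near 1 and the AUC near .80, the same order of magnitude as the stronger indicators in Tables 1 and 2.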

Results

The results presented in Table 1 show that significant and substantial effects of the underreporting response set were found. Effects for PIM and for the DEF appeared to be specific to efforts at positive impression management, as the simulated overreporting group did not differ significantly from the normative clinical group on these indicators. In contrast, students in both the simulated underreporting and simulated overreporting groups differed significantly from their respective normative comparison groups on the Cashel discriminant function, with both simulation groups scoring higher on this indicator than their relevant comparison groups. This result was expected, however, given previous research indicating that the Cashel discriminant function is likely a better indicator of impression management in general than one specific to positive impression management; Morey and Lanier's (1998) study, for example, demonstrated that the Cashel discriminant function was elevated for both underreporters and overreporters on the PAI. The Hong Defensiveness Index also appeared to be somewhat sensitive to both underreporting and overreporting. While there was a significant difference between the simulated underreporting group and the normative community group (with higher scores in the


Table 1. Comparison of Standard and Experimental Personality Assessment Inventory–Adolescent Version Validity Indices for Student Underreporter and Community Normative Samples.

| Index | Underreporter (n = 48) M | SD | Community (n = 48) M | SD | t | d | AUC | SE | Cutoff | Sensitivity | Specificity |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Positive Impression Management (PIM) | 59.92 | 10.05 | 48.44 | 9.74 | 5.68 | 1.16 | .805 | .044 | 56T | .792 | .687 |
| Defensiveness Index (DEF) | 5.33 | 1.86 | 2.63 | 1.95 | 6.96 | 1.42 | .844 | .042 | 4.5 | .875 | .750 |
| Cashel discriminant function (CDF) | 144.56 | 8.92 | 133.11 | 13.86 | 4.80 | 0.98 | .788 | .049 | 139.7 | .729 | .723 |
| Negative Impression Management (NIM) | 44.40 | 6.74 | 49.83 | 10.76 | −2.95 | −0.07 | .286 | .053 | 46.5T | .583 | .417 |
| Malingering Index (MAL) | .521 | .772 | .458 | .713 | 0.41 | 0.09 | .523 | .059 | 0.50 | .667 | .333 |
| Rogers discriminant function (RDF) | −1.44 | .671 | −1.22 | .973 | −1.29 | −0.28 | .448 | .060 | −1.44 | .500 | .500 |
| Hong Defensiveness Index | −0.21 | 1.16 | −1.56 | 1.28 | 5.40 | 1.11 | .809 | .046 | −.637 | .750 | .723 |
| Hong Malingering Index | −2.13 | 0.68 | −1.77 | 0.94 | −2.15 | −0.46 | .390 | .060 | −2.04 | .500 | .500 |
| Negative Distortion Scale (NDS) | 4.46 | 3.56 | 5.42 | 4.03 | −1.24 | −0.26 | .420 | .059 | 4.50 | .563 | .438 |

Note. Underreporting scale names are underlined whereas overreporting scale names are italicized. All independent-samples t tests and AUC (area under receiver operator curve) values were significant at p < .001. NIM and PIM cutoff values are presented in standardized T-score format.

Table 2. Comparison of Standard and Experimental Personality Assessment Inventory–Adolescent Version Validity Indices for Student Overreporter and Clinical Normative Samples.

| Index | Overreporter (n = 50) M | SD | Clinical (n = 50) M | SD | t | d | AUC | SE | Cutoff | Sensitivity | Specificity |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Positive Impression Management (PIM) | 41.86 | 10.39 | 46.16 | 13.49 | −1.78 | −0.04 | .401 | .057 | 44.5T | .460 | .540 |
| Defensiveness Index (DEF) | 1.68 | 1.61 | 1.92 | 1.95 | −0.67 | −0.14 | .476 | .058 | 1.5 | .520 | .480 |
| Cashel Discriminant Function (CDF) | 151.31 | 19.35 | 131.95 | 18.94 | 5.01 | 1.02 | .763 | .048 | 133.32 | .500 | .500 |
| Negative Impression Management (NIM) | 87.20 | 15.56 | 54.40 | 11.49 | 11.91 | 2.40 | .943 | .021 | 69T | .880 | .878 |
| Malingering Index (MAL) | 2.40 | 1.25 | .900 | .931 | 6.82 | 1.36 | .828 | .041 | 1.5 | .760 | .760 |
| Rogers Discriminant Function (RDF) | 1.01 | 0.87 | −1.08 | 1.03 | 10.76 | 2.19 | .936 | .024 | .107 | .900 | .830 |
| Hong Defensiveness Index | −4.66 | 1.23 | −1.66 | 1.42 | −11.18 | −2.29 | .055 | .022 | −1.46 | .500 | .500 |
| Hong Malingering Index | 1.15 | 0.94 | −1.29 | 1.00 | 12.38 | 2.51 | .962 | .016 | .230 | .880 | .872 |
| Negative Distortion Scale (NDS) | 24.72 | 8.24 | 6.46 | 4.43 | 13.82 | 2.76 | .973 | .011 | 13.5 | .900 | .915 |

Note. Underreporting scale names are underlined whereas overreporting scale names are italicized. All independent-samples t tests and AUC (area under the receiver operator curve) values were significant at p < .001. NIM and PIM cutoff values are presented in standardized T-score format.

simulated underreporting group than in the normative group), the simulated overreporting group had particularly low scores on this index when compared with the normative clinical group. The results presented in Table 2 show that significant and substantial effects of the overreporting response set were also found. All three standard negative distortion indicators appeared to be specific to overreporting, as the simulated underreporting group did not differ significantly from the normative community group on any of these indicators. The Hong Malingering Index appeared to be specific to overreporting as well, as there were no significant differences between the simulated underreporting group and the normative community sample. Likewise, the NDS also appeared to be highly specific to overreporting.

A more detailed comparison of the characteristics of these distortion indicators was provided using Cohen's d as a measure of effect size and receiver operating characteristic (ROC) curves to evaluate the sensitivity and specificity of the measures at different cutoff scores. The estimated area under the ROC curve (AUC) is useful as a metric of the overall performance of a measure; a perfect test would yield an AUC of 1.00, while a test performing no better than chance would yield an AUC of .500 (Streiner & Norman, 2008). The AUC values for the positive impression indicators were high, and the AUC values for the negative impression indicators were particularly impressive. In general, the AUC results parallel the relative performance of the effect sizes. For application purposes, it is important to understand the characteristics of different cutting scores on these measures. Tables 1 and 2 also provide the sensitivity and specificity of some example cutting scores. As a general guideline in the absence of base rate information (i.e., assuming a base rate of .50), a useful starting place for establishing a cutting score is the score that yields the fewest incorrect decisions (i.e., where the sum of sensitivity plus specificity


Table 3. Intercorrelations Across All Groups for All Distortion Indicators.

| Index | PIM | DEF | CDF | NIM | MAL | RDF | Hong Def. | Hong Mal. | NDS |
|---|---|---|---|---|---|---|---|---|---|
| PIM | 1 | | | | | | | | |
| DEF | .832** | 1 | | | | | | | |
| CDF | .358** | .331** | 1 | | | | | | |
| NIM | −.520** | −.482** | .255** | 1 | | | | | |
| MAL | −.404** | −.308** | .246** | .777** | 1 | | | | |
| RDF | −.319** | −.279** | .309** | .656** | .522** | 1 | | | |
| Hong Defensiveness Index | .679** | .645** | −.083 | −.842** | −.646** | −.696** | 1 | | |
| Hong Malingering Index | −.557** | −.498** | .266** | .947** | .748** | .653** | −.814** | 1 | |
| NDS | −.465** | −.396** | .316** | .910** | .752** | .677** | −.779** | .896** | 1 |

Note. Underreporting scale names are underlined whereas overreporting scale names are italicized. n = 196. PIM = Positive Impression Management; DEF = Defensiveness Index; CDF = Cashel Discriminant Function; NIM = Negative Impression Management; MAL = Malingering Index; RDF = Rogers Discriminant Function; NDS = Negative Distortion Scale. **p < .001.

are maximized); these cutting scores are the cutoffs provided in the tables and were selected solely as the optimal balance between sensitivity and specificity in this study. Because only the NIM and PIM scales were included in the initial standardization of the PAI-A, only these scales are presented in standardized T-score format; all others are presented as raw scores.

To examine whether the nine measures converged in their identification of distorted responding, correlations between these measures were calculated and are presented in Table 3. This table reveals that the indicators tended to be positively correlated with other indicators of the same response set and negatively correlated with indicators of the opposite response set. The one exception was the Cashel discriminant function, which was positively (and roughly equally) correlated with indicators of both underreporting and overreporting response sets. Although these measures tended to be correlated, each measure of the different response sets seemed to contribute some unique variance to the assessment of response distortion, suggesting that the alternative measures could be used in a complementary fashion. For example, the Hong Defensiveness Index's correlations with PIM and the DEF were somewhat weaker than the correlation between PIM and the DEF, suggesting that the Hong Defensiveness Index can in fact provide incremental utility beyond these other two indices. To support this interpretation, two logistic regressions were conducted with forced entry of the three relevant distortion indicators for each of the contrasts of primary interest (underreporting vs. normal, and overreporting vs. clinical). Both models yielded significant fit. Among the positive impression indicators, the Defensiveness Index and the Cashel discriminant function each contributed significant unique variance, capturing a total of 83% of the variance in positive distortion.
Among the negative impression indicators, the NDS and the Rogers discriminant function each contributed significant unique variance, capturing a total of 92.7% of the variance in negative distortion. These results suggest that the valid variance in these different indicators is not redundant, and that each may contribute unique information in the assessment of response distortion.
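The cutting-score guideline described in the Results (choose the score where the sum of sensitivity and specificity is maximized, sometimes called Youden's J) can be sketched as follows. The scores below are hypothetical and the helper function is ours, not part of the PAI scoring software; it assumes higher scores indicate distortion.

```python
# Minimal sketch of selecting an example cutting score by maximizing
# sensitivity + specificity (Youden's J) over observed score values.
def best_cutoff(distorted, genuine):
    """Return (cutoff, sensitivity, specificity) maximizing sens + spec.

    A case scoring at or above the cutoff is classified as distorted.
    """
    best = None
    for c in sorted(set(distorted) | set(genuine)):
        sens = sum(x >= c for x in distorted) / len(distorted)
        spec = sum(x < c for x in genuine) / len(genuine)
        if best is None or sens + spec > best[1] + best[2]:
            best = (c, sens, spec)
    return best

# Hypothetical indicator scores (not the study data)
distorted = [7, 8, 9, 9, 10, 11, 12]
genuine = [3, 4, 5, 5, 6, 7, 8]
cut, sens, spec = best_cutoff(distorted, genuine)  # → cutoff 7
```

In practice one would scan the full ROC curve rather than a toy list, but the selection rule is the same one used for the cutoffs reported in Tables 1 and 2.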

Discussion

The results of this study provide support for the usefulness of the response distortion indicators of the PAI on the PAI-A. The six standard PAI indices performed comparably to previous research examining their utility on the adult version (Morey & Lanier, 1998), suggesting that each of the six indicators of response distortion on the PAI is useful in identifying individuals instructed to manipulate their performance on the PAI-A. In addition, despite a modification of the NDS to accommodate the reduced item pool of the PAI-A (i.e., only 14 items were included in the calculation of the Negative Distortion Scale as opposed to the standard 15), there was no loss of performance in this indicator's ability to detect distorted responding. Taken together, these results suggest that each of the standard PAI-A indicators can be used in the same manner as its PAI counterpart, and in similar settings and environments where the detection of response distortion is paramount. In addition, although the Hong Defensiveness Index and the Hong Malingering Index were examined here on the PAI-A, it is likely that they could also have utility on the PAI.

Of the standard PAI negative distortion indicators, the NIM scale and the Rogers discriminant function appeared to perform the best based on their effect sizes and AUC values. The Malingering Index, while not performing quite as well as NIM or the Rogers discriminant function, also demonstrated a strong effect size and a high AUC value, suggesting that it too is useful at the detection of overreported


response sets. Of course, in practice, it is better to examine all indicators of potentially overreported responding rather than a single index, and the NIM scale, Malingering Index, and Rogers discriminant function should be used in combination with one another to provide the clearest possible conclusion regarding distorted responding.

With respect to the standard PAI indicators of positive distortion, the performance of these metrics was also impressive, although as a group they performed less well than the negative distortion indicators. This result is generally consistent with research using other instruments (e.g., Bagby, Buis, & Nicholson, 1995) in which the effect sizes for defensiveness manipulations are typically lower than in malingering simulations. Nonetheless, all three positive distortion indicators were of use in distinguishing simulated underreporting protocols from normal protocols. The DEF appeared to perform the best according to the criteria of effect size and the ROC analyses, followed by the PIM scale. This is an interesting finding when compared with their functioning on the adult version of the PAI, where PIM typically outperforms the DEF (e.g., Morey & Lanier, 1998). The Cashel discriminant function appeared to perform less well than the other two standard positive distortion metrics. However, the Cashel discriminant function had the unique characteristic of elevating in the presence of both underreported and overreported response sets, suggesting that it may be useful as an indicator of efforts at impression management in general.

In addition, the three experimental PAI distortion indicators were examined within the PAI-A and performed impressively well at identifying potentially distorted responding. While the Hong Malingering Index, the Hong Defensiveness Index, and the NDS are still being examined for their utility on the adult version of the PAI, this study suggests that they certainly have use on the PAI-A.
In particular, the NDS outperformed all other indicators of negative distortion used in this study, suggesting that it is extremely sensitive and specific to negative distortion and perhaps making it a critical tool in settings where negative distortion among adolescents is a serious concern, such as attention deficit/hyperactivity disorder medication evaluations. It is also intriguing that the Hong indices, despite having been developed using South Korean students to provide normative data, appeared to perform as well as or even better than the indicators specifically designed for American individuals. The Hong Defensiveness Index outperformed the Cashel discriminant function as an indicator of positive distortion and performed comparably to the PIM scale overall. The Hong Malingering Index, like the NDS, also appeared to outperform the standard PAI indicators of negative distortion. Overall, this study provides excellent support for these experimental indicators and suggests that further research on each of these three indicators is

warranted for both the PAI and the PAI-A, such as research into each indicator's level of incremental validity over the standard indices. For example, the findings of this study (based on the intercorrelations between distortion indicators) seemed to support the incremental utility of the Hong Defensiveness Index over the PIM scale or the DEF, and of the Hong Malingering Index over the Rogers discriminant function.

Further research on recommended cut scores is also warranted. Though example cutoff scores were provided in Tables 1 and 2, it is important to note that these cutoffs will be differentially useful in different settings. For example, in a screening context sensitivity is typically of paramount concern, while in a confirmatory application specificity is the primary consideration. One must also recognize that variations in the base rate of distorted responding will affect the efficiency of any cutting score. Estimates of the diagnostic efficiency of these cutting scores at different base rates can be calculated using Bayes' theorem together with the sensitivity and specificity estimates. Values of diagnostic efficiency (positive predictive power and negative predictive power) are useful in that they estimate the probability that obtained positive and negative index results, respectively, are accurate in identifying distorted responding. These estimates take into account the fact that base rates influence test utility, and the base rate of distorted responding may vary widely across settings; for example, the rate of positive distortion may be very low among anonymous research participants but very high in personnel selection or custody evaluation situations.

It is important to stress that the dissimulators examined in this study, both positive and negative, were undergraduate college students who were relatively naïve to the nuances of mental disorder and of self-report tests.
Previous research has suggested that different aspects of the simulations (e.g., naïve vs. sophisticated, simulating specific disorders vs. global maladjustment, coaching simulators about disorders or about the operation of validity scales, or strength of incentives to avoid detection by validity scales) can affect the performance of these indicators (Coleman, Rapport, Millis, Ricker, & Farchione, 1998; Jelicic, Merckelbach, Candel, & Geraerts, 2007; Rogers, Ornduff, & Sewell, 1993; Rogers et al., 1996). Nonetheless, the nature of different contexts where distortion is a concern is likely to vary appreciably; some respondents who are motivated to distort their performance may be naïve about these issues while others may be quite sophisticated. This study provides information about the performance of these indices in a population that is bright but generally naïve, although these results may have limited generalizability to other types of respondents. However, it is worth noting that several of the indices were developed under quite different methodologies from the one used in this study. Despite these methodological differences, these indices did in fact generalize to the population studied here.
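The base-rate adjustment discussed above is a direct application of Bayes' theorem. The helper below is our own illustrative sketch (not part of any PAI scoring software), applied to the NIM sensitivity (.880) and specificity (.878) reported in Table 2:

```python
# Positive and negative predictive power for a cutting score, given its
# sensitivity and specificity and an assumed base rate of distortion.
def predictive_power(sensitivity, specificity, base_rate):
    """Return (PPV, NPV) via Bayes' theorem at the given base rate."""
    tp = sensitivity * base_rate              # true positives
    fp = (1 - specificity) * (1 - base_rate)  # false positives
    tn = specificity * (1 - base_rate)        # true negatives
    fn = (1 - sensitivity) * base_rate        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# NIM cutoff from Table 2: sensitivity .880, specificity .878
ppv_50, npv_50 = predictive_power(0.880, 0.878, 0.50)  # PPV ≈ .88
ppv_10, npv_10 = predictive_power(0.880, 0.878, 0.10)  # PPV ≈ .45
```

The drop in positive predictive power from roughly .88 at a 50% base rate to under .50 at a 10% base rate illustrates why a cutoff that looks strong in a simulation design must be interpreted cautiously in low-base-rate settings.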

Downloaded from asm.sagepub.com at UNIV OF OTTAWA LIBRARY on August 5, 2015


An important limitation of the current study was its exclusive use of adolescents at the higher end of the age range assessed by the PAI-A, a consequence of the attempt to compare age-matched participants between groups. The PAI-A is designed for use with adolescents from age 12 to 18 years, and while this study provides good support for the ability of the PAI-A distortion indicators to detect impression management among 17- and 18-year-olds, it is crucial to examine their performance among younger adolescents as well. In addition, the dissimulated responders in this study were all college students, and different results might be obtained in other populations, such as those of lower socioeconomic status or those with potentially higher rates of genuine mental health concerns. Future research should also examine the distortion indicators of the PAI-A while controlling for gender and ethnicity, as there were some notable differences in the demographic makeup of the groups used in this study. Though the current study chose to focus on an age-matched sample, it is essential to consider gender and cultural factors in any psychometric research.

The results of this study support the conclusion that, in a bright but relatively naïve population, all nine of these indicators of distorted responding have promise for the examination of protocol validity. Though the PAI-A still lacks the level of research support of the adult version, this study provides evidence that the ability of the PAI-A to detect dissimulated responding may be as strong as that of the PAI.

Declaration of Conflicting Interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The authors received no financial support for the research, authorship, and/or publication of this article.

References

Bagby, R. M., Buis, T., & Nicholson, R. A. (1995). Relative effectiveness of the standard validity scales in detecting fake-bad and fake-good responding: Replication and extension. Psychological Assessment, 7, 84-92.

Cashel, M. L., Rogers, R., Sewell, K., & Martin-Cannici, C. (1995). The Personality Assessment Inventory and the detection of defensiveness. Assessment, 2, 333-342.

Coleman, R. D., Rapport, L. J., Millis, S. R., Ricker, J. H., & Farchione, T. J. (1998). Effects of coaching on detection of malingering on the California Verbal Learning Test. Journal of Clinical and Experimental Neuropsychology, 20, 201-210.

Hawes, S. W., & Boccaccini, M. T. (2009). Detection of overreporting of psychopathology on the Personality Assessment Inventory: A meta-analytic review. Psychological Assessment, 21, 112-124.

Holland, P. W., & Wainer, H. (1993). Differential item functioning. Hillsdale, NJ: Erlbaum.

Hong, S. H., & Kim, Y. H. (2001). Detection of random response and impression management in the PAI: II. Detection indices. Korean Journal of Clinical Psychology, 20, 751-761.

Jelicic, M., Merckelbach, H., Candel, I., & Geraerts, E. (2007). Detection of feigned cognitive dysfunction using special malinger tests: A simulation study in naïve and coached malingerers. International Journal of Neuroscience, 117, 1185-1192.

Meyer, J. K., & Morey, L. C. (2012, March). Applying supplemental PAI validity indices to the PAI-A. Poster presented at the annual meeting of the Society for Personality Assessment, Chicago, IL.

Mogge, N. L., Lepage, J. S., Bell, T., & Ragatz, L. (2010). The negative distortion scale: A new PAI validity scale. Journal of Forensic Psychiatry & Psychology, 21, 77-90.

Morey, L. C. (1991). The Personality Assessment Inventory professional manual. Odessa, FL: Psychological Assessment Resources.

Morey, L. C. (1993, August). Defensiveness and malingering indices for the PAI. Paper presented at the meetings of the American Psychological Association, Toronto, Ontario, Canada.

Morey, L. C. (1996). An interpretive guide to the Personality Assessment Inventory. Odessa, FL: Psychological Assessment Resources.

Morey, L. C. (2007). Personality Assessment Inventory-Adolescent professional manual. Lutz, FL: Psychological Assessment Resources.

Morey, L. C., & Lanier, V. W. (1998). Operating characteristics of six response distortion indicators for the Personality Assessment Inventory. Assessment, 5, 203-214.

Rios, J., & Morey, L. C. (2013). Detecting feigned ADHD in later adolescence: An examination of three PAI-A negative distortion indicators. Journal of Personality Assessment, 95, 594-599.

Rogers, R. (2008). Clinical assessment of malingering and deception (3rd ed.). New York, NY: Guilford Press.

Rogers, R., Ornduff, S. R., & Sewell, K. W. (1993). Feigning specific disorders: A study of the Personality Assessment Inventory (PAI). Journal of Personality Assessment, 60, 554-561.

Rogers, R., Sewell, K. W., Morey, L. C., & Ustad, K. L. (1996). Detection of feigned mental disorders on the Personality Assessment Inventory: A discriminant analysis. Journal of Personality Assessment, 67, 629-640.

Streiner, D. L., & Norman, G. R. (2008). Health measurement scales: A practical guide to their development and use. Oxford, England: Oxford University Press.
