Psychological Assessment, 2015, Vol. 27, No. 2, 726-732
© 2015 American Psychological Association. 1040-3590/15/$12.00 http://dx.doi.org/10.1037/pas0000077

BRIEF REPORT

Clinical Assessment of Organizational Strategy: An Examination of Healthy Adults

Pia Banerjee and Desiree A. White
Washington University in St. Louis

During the assessment of patients with cognitive difficulties, clinicians often examine strategic processing, particularly the ability to use organization-based strategies to efficiently complete various tasks. Several commonly used neuropsychological tasks are currently thought to provide measures of organizational strategic processing, but empirical evidence for the construct validity of these strategic measures is needed before interpreting them as measuring the same underlying ability. This is particularly important for the assessment of organizational strategic processing because the measures span cognitive domains (e.g., memory strategy, language strategy) as well as types of organization. In the present study, 200 adults were administered cognitive tasks commonly used in clinical practice to assess organizational strategic processing. Factor analysis was used to examine whether these measures of organizational strategic processing, which involved different cognitive domains and types of organization, could be operationalized as measuring a unitary construct. A very good-fitting model of the data demonstrated no significant shared variance among any of the strategic variables from different tasks (root mean square error of approximation < .0001, standardized root-mean-square residual = .045, comparative fit index = 1.000). These findings suggest that organizational strategic processing is highly specific to the demands and goals of individual tasks even when tasks share commonalities such as involving the same cognitive domain.
In the design of neuropsychological batteries involving the assessment of organizational strategic processing, it is recommended that various strategic measures across cognitive domains and types of organizational processing be selected as guided by each patient's individual cognitive difficulties.

Keywords: strategy, strategic processing, organization, organizational strategic processing, factor analysis

This article was published Online First January 5, 2015. Pia Banerjee and Desiree A. White, Department of Psychology, Washington University in St. Louis. Pia Banerjee is now affiliated with the Departments of Neurology and Psychiatry at the University of California-Los Angeles. This research was supported by a National Science Foundation Graduate Research Fellowship. No financial conflicts of interest exist with regard to the reported study. The authors thank Dr. Thomas Rodebaugh, Suzin Blankenship, Sydney Ariagno, Hannah Fox, Alicia Janos, Cara Levitch, Brian Richter, and Dakari Quimby for their contributions to the study. Correspondence concerning this article should be addressed to Pia Banerjee, UCLA Semel Institute, 760 Westwood Plaza #C8-746, Los Angeles, CA 90095. E-mail: [email protected]

Strategic processing, broadly defined, is the higher-order cognitive processing that is performed to facilitate efficient attainment of a goal. Through strategic processing, we optimize completion of a wide variety of activities, such as reading, organizing information, and solving problems. As such, during neuropsychological assessment, it is crucial that the efficiency and sophistication of the strategies used by patients with cognitive difficulties be examined (Kaplan, 1990) to better understand how a patient may be approaching a specific task and to determine whether teaching more efficient strategies may be an appropriate recommendation for rehabilitation. However, there is little empirical evidence beyond reliability statistics to guide clinicians and researchers in determining which measures provide the best assessment of strategic processing. One critical step in providing guidance to clinicians on which measures to select for neuropsychological assessment is to evaluate the construct validity of measures thought to assess strategic processing on the basis of the current working definitions of this construct. However, to our knowledge, no thorough analysis of this type has been conducted to date.

The current investigation was focused on examining the construct validity of commonly used measures of organizational strategic processing (OSP), the ability to use categorization-based approaches to efficiently complete tasks. The construct validity of different measures of OSP has rarely been addressed despite substantial research of this type with other cognitive constructs (e.g., attention, memory) and recent calls to formalize cognitive ontologies (Bilder, 2011). One complicating factor is that the conceptualization of OSP as a construct is not well established. Researchers generally agree that organizational strategies involve the assembly of information into meaningful frameworks (e.g., Banerjee, Grange, Steiner, & White, 2011; Taconnat et al., 2009), must be effortful and goal-directed (Bjorklund & Harnishfeger, 1990; Moscovitch, 1994), and may involve interactions among


basic executive abilities such as response inhibition, shifting of cognitive set, and manipulation of information in working memory (Miyake et al., 2000). However, more specific definitions have not been established. It is not surprising that OSP eludes easy description, because it spans cognitive domains and varies based on the type of organization. For example, organizing words on the basis of semantic category enhances memory during a word-list learning task, whereas organizing words on the basis of nonsemantic, phonemic characteristics enhances language fluency during a word-generation task. For the proper assessment of OSP, it is critical to understand whether these tasks, which are currently thought to measure OSP, are truly measuring the same underlying ability and can be used interchangeably.

Factor analysis is a robust method of determining whether OSP is a unitary construct. In factor analysis, variables from multiple measures (i.e., indicators) are used to examine a possible construct, and the commonality among indicators is statistically extracted to produce a factor (or factors) that is purer than the indicators alone. To our knowledge, only two studies have been conducted using factor analysis to examine OSP. In one study, Maeda, Tagashira, and Miura (2003) examined 16 strategies used by Japanese students to learn English vocabulary, such as repetitively writing words with their definitions. Confirmatory factor analysis was used to examine three possible factors: organization, repetition, and imagery. Results confirmed that the proposed three-factor model best represented the strategies examined, and all seven strategies involving organizing words and definitions in a meaningful manner loaded together. Likewise, a factor analytic study conducted by Van Ede and Coetzee (1996) identified an organizational strategy factor. The investigators examined the memory strategies used by college students to learn course material. Exploratory factor analysis of the memory strategies used yielded four factors: rehearsal, verbal elaboration, imagery, and organization. The organization factor comprised study approaches involving the classification of course material to indicate relationships among topics (e.g., grouping historical events with who was president at the time).

Both studies found that all organization-based measures loaded onto a single factor, suggesting that OSP is a unitary ability. However, each study examined OSP within a single cognitive domain (i.e., memory strategies for studying, vocabulary strategies for second-language acquisition), examined only semantic organizational strategies, and did not use tasks commonly used in clinical research and practice.

In the current study, factor analysis was conducted to determine whether OSP is a unitary construct across cognitive domains and types of organizational strategies. To do so, variables that have been referred to as measures of OSP from various commonly used neuropsychological tasks were examined. Healthy adults were selected as the population of interest because it is necessary to establish the construct validity of measures of OSP in a healthy sample before applying this information to clinical populations. Three models of OSP were examined: (a) a one-factor model, (b) a two-factor model based on cognitive domain (memory vs. language fluency), and (c) a two-factor model based on semantic versus nonsemantic processing. We also aimed to identify the neuropsychological measures that provide the best assessment of


OSP by determining which measures had the strongest loadings for good-fitting models in our factor analysis.

Method

The study sample included 200 undergraduates recruited from the Department of Psychology participant pool at Washington University in St. Louis. Participants ranged from 18 to 22 years of age (M = 19.5, SD = 1.1) and comprised 64% females and 40% minorities. Estimated Full-Scale IQ from the Wechsler Test of Adult Reading (Wechsler, 2001) ranged from 96 to 119 (M = 109.4, SD = 4.8).

For inclusion in the study, strategic variables were selected from well-established, highly reliable, and standardized neuropsychological tasks. Some tasks commonly thought to measure OSP were excluded because strategies were required to be apparent to the observer, which is not feasible with the Wisconsin Card Sorting Test (Heaton, 1981), as well as spontaneously generated (i.e., not prompted), which is not the case with Category Switching on the Verbal Fluency subtest of the Delis-Kaplan Executive Function System (DKEFS; Delis, Kaplan, & Kramer, 2001). Tasks were administered in a consistent order during a 1-hr session using standard administration.
The 11 selected strategic variables were as follows: (a) chance-adjusted semantic cluster score on Trials 1-5 from the California Verbal Learning Test-Second Edition (CVLT-II; Delis, Kramer, Kaplan, & Ober, 2000); (b and c) mean cluster size and cluster ratio (Lanting, Haugrud, & Crossley, 2009; Troyer, Moscovitch, & Winocur, 1997) from the Phonemic Fluency-Letter F subtest of the DKEFS (Delis et al., 2001); (d and e) mean cluster size and cluster ratio (Lanting et al., 2009; Troyer et al., 1997) from the Phonemic Fluency-Letter N test (Borkowski, Benton, & Spreen, 1967) using instructions from the DKEFS (Delis et al., 2001); (f and g) mean cluster size and cluster ratio (Troyer et al., 1997) from the Category Fluency-Animal subtest of the DKEFS (Delis et al., 2001); (h) organization score from the Boston Qualitative Scoring System (BQSS; Stern et al., 1999) for the Rey-Osterrieth Complex Figure Test (Rey-O; Meyers & Meyers, 1995); (i) semantic pairs ratio (Brandling-Bennett, 2007) on Trials 1-4 from the Verbal Paired Associates I (VPA) subtest of the Wechsler Memory Scale-Fourth Edition (WMS-IV; Wechsler, 2009); and (j and k) mean cluster size and cluster ratio (Ross, Foard, Hiott, & Vincent, 2003) from Part 1 of the Ruff Figural Fluency Test (RFFT; Ruff, 1988). The strategic variables for the fluency tasks included mean cluster size because this variable was utilized in seminal papers establishing clustering methodology (Troyer, 2000; Troyer et al., 1997), as well as cluster ratio to account for the total number of items generated (Lanting et al., 2009). Descriptions of the neuropsychological tasks and strategic variables are provided in Table 1. In addition, Table 1 illustrates that each strategic variable represented either an episodic memory or a language fluency strategy, as well as either a semantic or a nonsemantic organization strategy, indicating the factor with which each variable was associated in the proposed two-factor models.
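As a concrete illustration of how the two fluency-derived variables relate to each other, they can be computed from an ordered response list once consecutive related words have been identified. The sketch below is illustrative only: the `same_cluster` predicate stands in for the full phonemic-clustering rules of Troyer et al. (1997), and the toy first-two-letters rule used in the example is an assumption for demonstration, not part of any published scoring system.

```python
def fluency_cluster_scores(words, same_cluster):
    """Mean cluster size and cluster ratio for a fluency response list.

    A cluster is a run of 2+ consecutive words judged related by
    `same_cluster`, matching the variable definitions in Table 1:
      mean cluster size = words in clusters / number of clusters
      cluster ratio     = words in clusters / total words generated
    """
    runs, current = [], [words[0]] if words else []
    for prev, word in zip(words, words[1:]):
        if same_cluster(prev, word):
            current.append(word)       # extend the current run
        else:
            runs.append(current)       # close the run, start a new one
            current = [word]
    if current:
        runs.append(current)
    clusters = [r for r in runs if len(r) >= 2]
    clustered = sum(len(c) for c in clusters)
    if not clusters:
        return 0.0, 0.0
    return clustered / len(clusters), clustered / len(words)

# Toy stand-in predicate (an assumption, NOT a published scoring rule):
# treat words sharing their first two letters as phonemically clustered.
first_two = lambda a, b: a[:2] == b[:2]

size, ratio = fluency_cluster_scores(
    ["nine", "night", "nice", "nap", "note", "noble"], first_two)
print(size, round(ratio, 2))  # 2.5 0.83
```

In this toy example, "nine-night-nice" and "note-noble" form two clusters containing five of the six words, so the two scores diverge exactly as the paper intends: cluster size reflects how large clusters are, whereas cluster ratio adjusts for total output.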

Results

Statistical analyses included Pearson correlation analysis, confirmatory factor analysis, and exploratory factor analysis of the



Table 1
Summary of Neuropsychological Tasks and Strategic Variables

| Task | Task description | Strategic variable | Variable description | Cognitive domain | Organization type |
|---|---|---|---|---|---|
| CVLT-II | Participants orally presented with a 16-item word list over five learning trials; words represent four semantic categories. | Cluster score | Total chance-adjusted semantic cluster score for Trials 1-5. | Episodic memory | Semantic |
| Fluency-Letter F | Participants given 60 sec to name as many words as possible beginning with F; strategic variables involved phonemic clusters: 2+ consecutively reported words sharing a phonemic subcategory (e.g., rhymes, homophones). | Cluster size | Number of words in phonemic clusters divided by number of phonemic clusters. | Fluency | Nonsemantic |
| | | Cluster ratio | Number of words in phonemic clusters divided by total number of words generated. | Fluency | Nonsemantic |
| Fluency-Letter N | Participants given 60 sec to name as many words as possible beginning with N; strategic variables involved phonemic clusters: 2+ consecutively reported words sharing a phonemic subcategory (e.g., rhymes, homophones). | Cluster size | Number of words in phonemic clusters divided by number of phonemic clusters. | Fluency | Nonsemantic |
| | | Cluster ratio | Number of words in phonemic clusters divided by total number of words generated. | Fluency | Nonsemantic |
| Fluency-Animal | Participants given 60 sec to name as many animals as possible; strategic variables involved semantic clusters: 2+ consecutively reported words sharing a semantic subcategory (e.g., pets). | Cluster size | Number of words in semantic clusters divided by number of semantic clusters. | Fluency | Semantic |
| | | Cluster ratio | Number of words in semantic clusters divided by total number of words generated. | Fluency | Semantic |
| Rey-O | Participants copied a complex geometric figure and recalled the figure after a 30-min delay. | Organization | Organization score from the BQSS, examining fragmentation and planning. | Episodic memory | Nonsemantic |
| VPA | Participants orally presented with a list of 14 word pairs over four learning trials and asked to recall the second word of each pair when prompted with the first word; list contains 4 semantically related pairs and 10 unrelated pairs. | Pairs ratio | Ratio of correctly recalled semantically related pairs to correctly recalled unrelated pairs. | Episodic memory | Semantic |
| RFFT | Participants given 60 sec to draw as many different designs as possible by connecting dots on a provided stimulus; strategic variables involved clusters: 3+ consecutive designs using a rotation strategy (e.g., systematic rotation of a pattern) or enumerative strategy (e.g., systematic addition or removal of a line). | Cluster size | Number of designs in clusters divided by number of clusters. | Fluency | Nonsemantic |
| | | Cluster ratio | Number of designs in clusters divided by total number of designs generated. | Fluency | Nonsemantic |

strategic variables. Means, standard deviations, and ranges for each general performance and strategic variable are reported in Table 2.

Before conducting confirmatory factor analysis, several preliminary analyses were undertaken. For example, Pearson correlations revealed no statistically significant relationships between the demographic characteristics of our sample and the strategic variables of interest (p > .05 for all correlations). Performance of the sample was at least average (>25th percentile), with substantial variability, on the general performance variables (e.g., total word recall on Trials 1-5 of the CVLT-II, total number of designs generated on the RFFT) and the strategic variables from each neuropsychological task. Relatively low but significant Pearson correlations, r = .17 to .41, p < .05, were found between most general performance variables, as has been noted in previous studies (Miyake et al., 2000), after correcting for multiple comparisons using the Benjamini-Hochberg false discovery rate method of error control (Benjamini & Hochberg, 2000). These findings demonstrated adequate general performance by our sample and indicated that additional analysis of the strategic variables was appropriate.
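The false discovery rate correction applied above can be illustrated with the standard Benjamini-Hochberg step-up procedure. Note that the cited Benjamini and Hochberg (2000) paper concerns an adaptive variant; the simpler non-adaptive version is sketched here, as an assumption, to show the general logic.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up FDR control.

    Sort the m p-values, find the largest k such that
    p_(k) <= (k / m) * q, and reject the hypotheses with the
    k smallest p-values. Returns booleans (True = rejected)
    in the original input order.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank  # keep the largest rank passing the threshold
    reject = [False] * m
    for i in order[:k_max]:
        reject[i] = True
    return reject

print(benjamini_hochberg([0.01, 0.20, 0.02, 0.03, 0.04]))
# -> [True, False, True, True, True]
```

With q = .05 and five tests, the sorted thresholds are .01, .02, .03, .04, and .05; the four smallest p-values each fall at or below their thresholds, so all but p = .20 are declared significant.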


Table 2
Raw Scores for General and Strategic Task Performance

| Task | Variable | M | SD | Range |
|---|---|---|---|---|
| CVLT-II | Words recalled | 59.6 | 7.7 | 39-75 |
| | Cluster score | 2.1 | 2.4 | -2.2 to 11.1 |
| Fluency-F | Words generated | 15.5 | 3.8 | 6-26 |
| | Cluster size | 2.1 | 0.7 | 0-4.0 |
| | Cluster ratio | 0.4 | 0.2 | 0-0.9 |
| Fluency-N | Words generated | 10.3 | 3.0 | 3-20 |
| | Cluster size | 1.5 | 1.0 | 0-4.0 |
| | Cluster ratio | 0.3 | 0.2 | 0-1.0 |
| Fluency-Animal | Words generated | 24.8 | 5.0 | 11-41 |
| | Cluster size | 3.2 | 0.7 | 2.0-6.0 |
| | Cluster ratio | 0.8 | 0.1 | 0.4-1.0 |
| Rey-O | Delayed recall | 23 | 5.6 | 7-34 |
| | Organization | 11.6 | 1.7 | 8.0-16.9 |
| VPA | Recalled pairs | 47.5 | 7.0 | 20-56 |
| | Pairs ratio | 1.3 | 0.4 | 0.9-2.9 |
| RFFT | Designs generated | 18 | 5.5 | 6-34 |
| | Cluster size | 3.2 | 2.0 | 0-9.0 |
| | Cluster ratio | 0.4 | 0.3 | 0-1.0 |

Pearson correlation analysis was also used preliminarily to examine relationships among all strategic variables. Only the following four correlations were statistically significant: mean cluster size with cluster ratio from Phonemic Fluency-Letter F, r = .52, p < .001; mean cluster size with cluster ratio from Phonemic Fluency-Letter N, r = .76, p < .001; mean cluster size with cluster ratio from Category Fluency-Animal, r = .37, p < .001; and mean cluster size with cluster ratio from the RFFT, r = .70, p < .001. Thus, significant shared method variance was observed, but no statistically significant correlations were identified among strategic variables from different tasks. These results suggested that method variance should be specified in the confirmatory factor analysis. Unlike correlation analysis, confirmatory factor analysis allows shared variance to be modeled and extracted, enabling the identification of any remaining commonalities that may be masked by significant method variance in correlation analysis.

Confirmatory factor analysis was then conducted to examine the three proposed models of OSP. Because of the non-normality of the data, maximum likelihood parameter estimation with robust standard errors and a mean-adjusted Satorra-Bentler χ² statistic, which is robust to non-normality (referred to as MLM in Mplus; Muthén & Muthén, 1998-2011), was used. Model fit was evaluated by examining the Satorra-Bentler χ² statistic, the standardized root-mean-square residual (SRMR), the root mean square error of approximation (RMSEA), the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the comparative fit index (CFI; see Byrne, 2012; Hu & Bentler, 1999). Good model fit is indicated by a nonsignificant χ², RMSEA < .05, SRMR < .05, and CFI > .95.

The 11 strategic variables served as indicators in the confirmatory factor analysis, and a one-factor model with all indicators loading onto a single factor was examined first.
Because of the significant shared method variance identified in earlier correlation analyses, residuals between indicators from the same tasks were correlated (i.e., mean cluster size and cluster ratio were correlated within each of the following tasks: Phonemic Fluency-Letter F, Phonemic Fluency-Letter N, Category Fluency-Animal, and RFFT). All


specified correlations between residuals were determined to be significant at p < .001, consistent with the correlation analysis. Findings from the one-factor model demonstrated that none of the factor loadings onto a single factor were statistically significant, and all residual variances were significant at p < .001. However, it is important to note that this nonsignificant model accounting for shared method variance was an extremely good fit with the data (χ² = 34.29, p = .76, RMSEA < .0001, SRMR = .045, CFI = 1.000, AIC = 5,899, BIC = 6,018). In fact, the goodness-of-fit indices suggested that it was very unlikely that any other model could provide a statistically significant improvement. Although it is unusual to encounter nonsignificant factor loadings with good model fit, these findings demonstrated that a very good-fitting model of the data was one in which none of the indicators shared significant variance other than shared method variance due to derivation from the same tasks. Figure 1 shows the standardized values for this model with significantly correlated residuals for indicators from the same tasks.

To further examine the finding of shared method variance within tasks but little shared variance across strategic variables from different tasks, a correlation-only model was run in which no factors were specified and only indicators originating from the same tasks were included. Therefore, the model included 8 of the 11 strategic indicators and specified only the four correlations between residuals noted previously (i.e., mean cluster size and cluster ratio within Phonemic Fluency-Letter F, Phonemic Fluency-Letter N, Category Fluency-Animal, and RFFT). Strategic indicators from the CVLT-II, Rey-O, and VPA were not included because they did not share method variance with any other variable.
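The conventional cutoff logic of Hu and Bentler (1999) used to evaluate these models can be expressed directly. The function below is only a sketch of those criteria; the function and parameter names are ours, and χ² nonsignificance is represented by its p value.

```python
def good_fit(chi2_p, rmsea, srmr, cfi, alpha=0.05):
    """Conventional good-fit criteria (Hu & Bentler, 1999):
    nonsignificant chi-square (p > alpha), RMSEA < .05,
    SRMR < .05, and CFI > .95. All must hold."""
    return chi2_p > alpha and rmsea < 0.05 and srmr < 0.05 and cfi > 0.95

# Indices reported for the one-factor model with correlated residuals:
print(good_fit(chi2_p=0.76, rmsea=0.0001, srmr=0.045, cfi=1.000))  # True
```

Plugging in the indices reported for the one-factor model with correlated residuals satisfies every criterion, which is why the model is described as an extremely good fit despite its nonsignificant loadings.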
The no-factor correlation model was a very good fit with the data (χ² = 20.07, p = .69, RMSEA < .0001, SRMR = .044, CFI = 1.000), indicating that there were no common factors to be accounted for among the eight included strategic indicators and corroborating the finding that little variance remained to be explained across strategic indicators after accounting for shared method variance due to derivation from the same tasks.

Because the nonsignificant one-factor model with the 11 strategic indicators was a very good fit with the data after shared method variance was accounted for, it was not expected that either of the proposed two-factor models would produce improved goodness-of-fit indices. Nonetheless, both models were examined as proposed. After repeated attempts (e.g., using reference-variable vs. fixed-factor methods of scaling), neither model converged within 1,000 iterations.

To ensure thorough examination of the data, exploratory factor analysis was also conducted to identify one-, two-, or three-factor models that might fit the data better than the nonsignificant one-factor model with specified method variance. In contrast with confirmatory factor analysis, which is theoretically driven, exploratory factor analysis is data driven and does not permit a priori specification of relationships between indicators and factors or between residuals. Each indicator is allowed to load onto any factor to determine the appropriate number of factors (Brown, 2006). Because exploratory factor analysis cannot account for covariation among indicators due to sources other than the factors, shared method variance could not be specified. Promax rotation was selected because statistically significant correlations were expected between any identified factors, given that each would involve strategic processing. After 1,000 iterations, convergence on a solution was not achieved for any model, which was not surprising given the very good model fit found with the nonsignificant one-factor model.

Figure 1. One-factor model of OSP with correlated residuals for indicators derived from the same tasks. Indicator variables were as follows: CVLT-II, chance-adjusted semantic cluster score; Phonemic Fluency-Letter F task, mean cluster size and cluster ratio; Phonemic Fluency-Letter N task, mean cluster size and cluster ratio; Category Fluency-Animal task, mean cluster size and cluster ratio; Rey-O, organization score; VPA, semantic pairs ratio; RFFT, mean cluster size and cluster ratio.

Discussion

The present study was conducted to determine whether 11 neuropsychological variables commonly referred to as measures of OSP do in fact measure the same underlying ability. Confirmatory factor analysis was used to evaluate whether OSP is best defined (a) as a unitary construct, (b) on the basis of cognitive domain, or (c) on the basis of type of organizational processing. Exploratory factor analysis was also used to examine the possibility of other good-fitting models of OSP. Findings from the factor analysis revealed that the 11 strategic indicators examined, which have been commonly used by researchers and clinicians to measure OSP, did not share significant variance beyond that attributable to shared method variance within tasks. Moreover, a one-factor model with nonsignificant factor loadings that accounted for shared method variance was an excellent fit with the data. That is, a very good-fitting model was one in which none of the

strategic variables derived from different tasks shared significant variance.

One potential explanation for the findings is that OSP was common to all (or at least some) of the strategic indicators in the factor analysis but did not vary within the undergraduate sample. If the students comprising the sample performed quite similarly on the strategic measures, then the lack of variability would make it impossible to detect factors using factor analysis. However, this possibility is unlikely given the wide range of scores obtained across the strategic tasks administered. Nonetheless, additional research is needed using healthy samples that differ from the participants in the current study in terms of demographic variables such as age, education, and socioeconomic status. After OSP has been better conceptualized and understood in healthy populations, similar studies should then be conducted with clinical populations.

It is also possible that the construct of OSP simply does not exist or that none of the tasks in the present study assess OSP. These explanations would likely be met with significant criticism because researchers and clinicians widely acknowledge the concept of OSP and utilize the tests included in the present


study as measures of OSP. As such, we believe the most parsimonious explanation of our findings is that OSP exists but differs depending on the specific tasks administered, regardless of commonalities across cognitive domains or types of organization. In other words, OSP may be specific to the demands and goals of a given task.

Although Maeda et al. (2003) and Van Ede and Coetzee (1996) identified an OSP factor that was not identified in the current study, the task specificity of OSP is supported by the findings of the present study as well as by those of the two previous studies. Both of the previous studies examined the use of strategies for only one specific task (e.g., semantic groupings of words and definitions to aid vocabulary learning), and these strategies would be expected to load together onto a single factor if OSP is specific to the task. In addition, it should be kept in mind that these two previous studies examining OSP with factor analysis did not use standardized neuropsychological tasks, which may also account for differences in the findings. To our knowledge, the present study is the first to examine OSP using standardized neuropsychological measures across different cognitive domains and types of organizational processing.

The view that OSP is specific to the task is highly relevant to the design of assessment batteries. Investigators and clinicians often appear to use measures of OSP interchangeably, selecting only one or two measures with the thought that they can serve as generalizable measures of a patient's overall OSP ability. However, on the basis of our findings, it should be kept in mind that a single task may not provide a generalizable assessment of OSP. For example, if a patient appears to have difficulty organizing verbal information, it may be best to examine the semantic cluster score from the CVLT-II.
However, caution should be used in generalizing findings from this task to the patient's ability to organize nonverbal or nonsemantic information, or in generalizing findings to strategic ability in cognitive domains other than memory. Further research is needed to examine the ecological validity of the currently available measures of OSP, especially if OSP is in fact specific to the task, to provide more robust guidance on which specific measures of OSP provide the most accurate assessment of real-world abilities. At present, to more thoroughly assess a patient's strategic abilities, it is recommended that clinical batteries include various existing strategic measures that are targeted to a patient's particular difficulties, a researcher's specific aims, or a rehabilitation therapist's precise treatment goals.

References

Banerjee, P., Grange, D. K., Steiner, R. D., & White, D. A. (2011). Executive strategic processing during verbal fluency performance in children with phenylketonuria. Child Neuropsychology, 17, 105-117. http://dx.doi.org/10.1080/09297049.2010.525502

Benjamini, Y., & Hochberg, Y. (2000). On the adaptive control of the false discovery rate in multiple testing with independent statistics. Journal of Educational and Behavioral Statistics, 25, 60-83. http://dx.doi.org/10.3102/10769986025001060

Bilder, R. M. (2011). Neuropsychology 3.0: Evidence-based science and practice. Journal of the International Neuropsychological Society, 17, 7-13. http://dx.doi.org/10.1017/S1355617710001396

Bjorklund, D. F., & Harnishfeger, K. K. (1990). Children's strategies: Their definition and origins. In D. F. Bjorklund (Ed.), Children's strategies: Contemporary views of cognitive development (pp. 309-324). Hillsdale, NJ: Erlbaum.

Borkowski, J. G., Benton, A. L., & Spreen, O. (1967). Word fluency and brain damage. Neuropsychologia, 5, 135-140. http://dx.doi.org/10.1016/0028-3932(67)90015-2

Brandling-Bennett, E. M. (2007). Categorization during typical development. Dissertation Abstracts International: B. The Sciences and Engineering, 67(10-B), 6046.

Brown, T. A. (2006). Confirmatory factor analysis for applied research. London, United Kingdom: Guilford Press.

Byrne, B. M. (2012). Structural equation modeling with Mplus: Basic concepts, applications, and programming. New York, NY: Taylor & Francis.

Delis, D. C., Kaplan, E., & Kramer, J. H. (2001). Delis-Kaplan Executive Function System. San Antonio, TX: The Psychological Corporation.

Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (2000). CVLT-II: California Verbal Learning Test-Second Edition. Adult version manual. San Antonio, TX: The Psychological Corporation.

Heaton, R. K. (1981). Wisconsin Card Sorting Test manual. Odessa, FL: Psychological Assessment Resources.

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55. http://dx.doi.org/10.1080/10705519909540118

Kaplan, E. (1990). The process approach to neuropsychological assessment of psychiatric patients. Journal of Neuropsychiatry, 2, 72-87.

Lanting, S., Haugrud, N., & Crossley, M. (2009). The effect of age and sex on clustering and switching during speeded verbal fluency tasks. Journal of the International Neuropsychological Society, 15, 196-204. http://dx.doi.org/10.1017/S1355617709090237

Maeda, H., Tagashira, K., & Miura, H. (2003). Vocabulary learning strategy use and learning achievement by Japanese high school EFL learners. Japanese Journal of Educational Psychology, 51, 273-280. http://dx.doi.org/10.5926/jjep1953.51.3_273

Meyers, J. E., & Meyers, K. R. (1995). Rey Complex Figure Test and Recognition Trial: Professional manual. Odessa, FL: Psychological Assessment Resources.

Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., Howerter, A., & Wager, T. D. (2000). The unity and diversity of executive functions and their contributions to complex "frontal lobe" tasks: A latent variable analysis. Cognitive Psychology, 41, 49-100. http://dx.doi.org/10.1006/cogp.1999.0734

Moscovitch, M. (1994). Memory and working with memory: Evaluation of a component process model and comparisons with other models. In D. L. Schacter & E. Tulving (Eds.), Memory systems (pp. 269-310). Cambridge, MA: MIT Press.

Muthén, L. K., & Muthén, B. O. (1998-2011). Mplus user's guide (6th ed.). Los Angeles, CA: Author.

Ross, T. P., Foard, L. E., Hiott, B. F., & Vincent, A. (2003). The reliability of production strategy scores for the Ruff Figural Fluency Test. Archives of Clinical Neuropsychology, 18, 879-891. http://dx.doi.org/10.1093/arclin/18.8.879

Ruff, R. (1988). Ruff Figural Fluency Test professional manual. Odessa, FL: Psychological Assessment Resources.

Stern, R. A., Javorsky, D. J., Singer, E. A., Singer-Harris, N. G., Somerville, J. A., Duke, L. M., & Kaplan, E. (1999). The Boston Qualitative Scoring System for the Rey-Osterrieth Complex Figure. Odessa, FL: Psychological Assessment Resources.

Taconnat, L., Raz, N., Tocze, C., Bouazzaoui, B., Sauzeon, H., Fay, S., & Isingrini, M. (2009). Ageing and organisation strategies in free recall: The role of cognitive flexibility. The European Journal of Cognitive Psychology, 21, 347-365. http://dx.doi.org/10.1080/09541440802296413

Troyer, A. K. (2000). Normative data for clustering and switching on verbal fluency tasks. Journal of Clinical and Experimental Neuropsychology, 22, 370-378. http://dx.doi.org/10.1076/1380-3395(200006)22:3;1-V;FT370

Troyer, A. K., Moscovitch, M., & Winocur, G. (1997). Clustering and switching as two components of verbal fluency: Evidence from younger and older healthy adults. Neuropsychology, 11, 138-146. http://dx.doi.org/10.1037/0894-4105.11.1.138

Van Ede, D., & Coetzee, C. (1996). The Metamemory, Memory Strategy and Study Technique Inventory (MMSSTI): A factor analytic study. South African Journal of Psychology/Suid-Afrikaanse Tydskrif vir Sielkunde, 26, 89-95. http://dx.doi.org/10.1177/008124639602600204

Wechsler, D. (2001). WTAR: Wechsler Test of Adult Reading manual. San Antonio, TX: The Psychological Corporation.

Wechsler, D. (2009). Wechsler Memory Scale-Fourth Edition technical and interpretive manual. San Antonio, TX: Pearson.

Received February 22, 2014
Revision received August 21, 2014
Accepted November 21, 2014
