Neuropsychological Rehabilitation, 2015 Vol. 25, No. 3, 419– 447, http://dx.doi.org/10.1080/09602011.2014.941295

Test–retest consistency of Virtual Week: A task to investigate prospective memory

Giovanna Mioni1,2, Peter G. Rendell3, Franca Stablum2, Luciano Gamberini2, and Patrizia S. Bisiacchi2

1 École de Psychologie, Université Laval, Québec, QC, Canada
2 Department of General Psychology, University of Padova, Padova, Italy
3 School of Psychology, Australian Catholic University, Melbourne, Australia

(Received 23 October 2013; accepted 2 July 2014)

The present study reports the test–retest consistency of Virtual Week, a well-known measure of prospective memory (PM) performance. PM is the memory associated with carrying out actions at a specific moment in the future. Patients with neurological disorders, as well as healthy older adults, often report PM dysfunctions that affect their everyday living. In Experiment 1, 19 younger and 20 older adults undertook the standard version of Virtual Week (version A). Older adults showed lower performance than younger participants; however, the discrepancy between groups was eliminated at retest. Experiment 2 was conducted to investigate whether memory for the content of the PM tasks determined the improvement observed in older adults at retest in Experiment 1. To this end we created a parallel version (version B) in which we varied the content of the PM actions. Fifty older adults were assigned to one of two experimental conditions: version A at test and version B at retest, or vice versa (25 participants in each condition). Results showed no group differences in PM performance between version A and version B; moreover, no test–retest effect was found. The study confirmed that Virtual Week is a reliable measure of PM performance and also provided a new parallel version that can be useful in clinical settings.

Keywords: Prospective memory; Virtual Week; Test–retest; Ageing.

Correspondence should be addressed to Giovanna Mioni, École de Psychologie, Pavillon Félix-Antoine-Savard, 2325, rue des Bibliothèques, Université Laval, Québec G1V 0A6, Canada. E-mail: [email protected]

Peter Rendell's contribution was supported by an Australian Research Council Discovery Grant. The authors gratefully acknowledge all the participants who kindly took part in the study and Chiara Seminati, Veronica Rossetti, Rosita Garbuio and Irene Scarpa, who cooperated with the present study. The authors also acknowledge the help of Trevor Daniels with programming Virtual Week and the help of Kathryn Biernacki in analysing the data.

© 2014 Taylor & Francis

INTRODUCTION

Prospective memory (PM) is the memory associated with carrying out intended actions at a specific moment in the future (Ellis & Kvavilashvili, 2000; Kliegel, McDaniel, & Einstein, 2008; McDaniel & Einstein, 2000). It is a highly complex process that requires formulating plans and intentions, retaining the information, and then executing the planned intention at the appropriate future moment (Kliegel, McDaniel et al., 2008; McDaniel & Einstein, 2000). According to Einstein and McDaniel's model (1990), there are two types of PM targets, event-based and time-based. With an event-based PM target, a person performs an action when a specific event occurs (e.g., passing on a message when a friend calls); with a time-based PM target, a person forms a self-generated intention to perform an action at a specific time in the future (e.g., remembering an appointment with a friend at 4:00 p.m.). Event-based PM tasks are considered to be less cognitively demanding than time-based PM tasks because they require less self-initiated retrieval, the event providing an intrinsic external cue that helps recall the task to be performed (McDaniel & Einstein, 1993; McDaniel, Guynn, Glisky, & Routhieaux, 1999; McFarland & Glisky, 2009). PM relies on retrospective memory for learning and retaining the "content" of tasks to be remembered (i.e., "what"), but it also involves executive functions (i.e., initiation, planning, monitoring, and inhibition of ongoing activities) (Groot, Wilson, Evans, & Watson, 2002; McDaniel et al., 1999). Adequate PM abilities have functional and safety implications for many activities of everyday living, such as remembering to take medication, turning up to a meeting at a specific time, or remembering to pass on a phone message. Patients as well as healthy older adults rate PM dysfunctions as their most salient area of concern compared to other memory problems in daily activities.
PM failures can limit personal independence, creating a need to rely on a carer for prompting (Fleming, Shum, Strong, & Lightbody, 2005). Researchers have extensively investigated the effects of ageing on PM (Bisiacchi, Tarantino, & Ciccola, 2008; Cona, Arcara, Tarantino, & Bisiacchi, 2012; Henry, MacLeod, Phillips, & Crawford, 2004; McDaniel, Einstein, & Rendell, 2008; Mioni & Stablum, 2014; Phillips, Henry, & Martin, 2008) and the impact of PM impairment on activities of daily living in various clinical populations, such as patients with traumatic brain injury (Mioni, Stablum,


McClintock, & Cantagallo, 2012; Mioni, Rendell, Henry, Cantagallo, & Stablum, 2013; Shum, Levin, & Chan, 2011), people with Parkinson's disease (Foster, Rose, McDaniel, & Rendell, 2013; Katai, Maruyama, Hashimoto, & Ikeda, 2003; Kliegel, Altgassen, Hering, & Rose, 2011), mild cognitive impairment and Alzheimer's disease (Kixmiller, 2002; Thompson, Henry, Rendell, Withall, & Brodaty, 2010), and multiple sclerosis (Rendell, Jensen, & Henry, 2007; West, McNerney, & Krauss, 2007).1 However, there has been limited research into the assessment of PM impairment, particularly in terms of the psychometric properties of assessment tools. Reliable and valid PM assessments with normative data are necessary for health professionals working with people with these types of neurological disorders, as well as with older adults. Below, we briefly review the most common tools for assessing PM in clinical settings (see also Mioni, McClintock, & Stablum, 2014) and present the potential and psychometric characteristics of a well-known PM task, Virtual Week (Rendell & Craik, 2000), which is the subject of the present study. The Rivermead Behavioural Memory Test (RBMT; Wilson, Cockburn, & Baddeley, 1985, 2003) is probably the most commonly used tool to assess PM and includes three event-based tasks. Although the RBMT has been widely used in clinical settings and in several neuropsychological studies, it provides only a limited range of scores and is unlikely to be sensitive to mild or moderate deficits (Mathias & Mansfield, 2005; Mills et al., 1997; Shum, Fleming, & Neulinger, 2002); moreover, the test has the limitation of not assessing time-based PM performance (Wilson et al., 1985). The Cambridge Behavioural Prospective Memory Test (CBPMT; Groot et al., 2002) includes four time-based and four event-based tasks to be administered over a 40-minute period. Participants are allowed to use any strategy to remember the tasks.
The CBPMT showed significant correlations with a PM questionnaire (the Everyday Memory Questionnaire; Sunderland, Harris, & Baddeley, 1983) as well as with executive and neuropsychological measures (Groot et al., 2002); good reliability and validity were also observed (Wilson, Emslie, & Foley, 2004). Wilson and colleagues revised the CBPMT and created the CAMPROMPT, which has three time-based and three event-based tasks to be completed in 30 minutes (Wilson, Emslie, Watson, Hawkins, & Evans, 2005). Delprado and colleagues (2012) used the CAMPROMPT with people with mild cognitive impairment and reported inter-item reliability with a Cronbach's alpha coefficient of .75, indicating good internal consistency. However, the CAMPROMPT is still limited in the number of PM trials included, and no alternative versions have been developed to evaluate rehabilitation interventions.

1 For a more detailed presentation of PM impairment in clinical populations, see West (2008) and Kliegel, Jäger, Altgassen, and Shum (2008).

The Memory for Intentions


Screening Test (MIST; Raskin, 2004) includes four event-based and four time-based tasks to be performed in 30 minutes while completing a word-search puzzle that serves as the ongoing task. The eight PM activities are balanced in terms of delay interval (2- or 5-minute delay), cue (time-based or event-based), and response modality (verbal or physical response). The MIST also includes an eight-item multiple-choice recognition test and a more naturalistic task, to be performed in daily life after a 24-hour delay. The MIST has been widely used with healthy older adults and various clinical populations (see Raskin, 2009, for a review). Split-half reliability was measured by Woods et al. (2008) as .70 using the Spearman-Brown coefficient. While the inter-item reliability of the individual trials was reported to be relatively poor (Cronbach's alpha = .47), the reliability of the six subscales was judged to be better (Cronbach's alpha = .88). Finally, the Royal Prince Alfred Prospective Memory Test (RPA-ProMem; Radford, Lah, Say, & Miller, 2011) includes three alternative forms, each of which has two time-based and two event-based tasks to be performed within the session or at a later time. No differences were found between scores on the three parallel forms of the RPA-ProMem, and good reliability (Spearman correlation r = .71) was found between the three forms (Radford et al., 2011). Most previous PM paradigms have little resemblance to real-life situations and are not adequate for measuring the outcomes of rehabilitation interventions because no alternative forms are provided (apart from the RPA-ProMem). McDaniel and Einstein (2007) have pointed out that most measures of PM lack reliability, with reliability for some tasks as low as 20%.
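Split-half estimates such as the one reported by Woods et al. (2008) are conventionally stepped up to full-test length with the Spearman-Brown formula, r_full = 2r_half / (1 + r_half). The following is a minimal illustrative sketch; the odd/even item split and the toy data are our own, not taken from any of the studies discussed here:

```python
# Split-half reliability with Spearman-Brown correction.
# Illustrative sketch only: the binary item scores used below are
# invented, not data from any study discussed in the text.

def pearson_r(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(scores):
    """scores: one list of 0/1 item scores per participant.
    Splits items into odd/even halves, correlates the half scores,
    then applies the Spearman-Brown step-up formula."""
    odd = [sum(p[0::2]) for p in scores]
    even = [sum(p[1::2]) for p in scores]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)
```

With a half-test correlation of .54, for example, the corrected full-test estimate is 2(.54)/1.54 ≈ .70, the order of magnitude reported for the MIST.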
Moreover, neuropsychologists need to obtain information about how patients perform in the routines of everyday life, and laboratory-based PM measures may not provide such information (Burgess et al., 2006; Knight & Titov, 2009). In fact, many neuropsychologists have recognised the limitations of conventional tests and are looking for new approaches to measuring PM and functional disabilities (Knight & Titov, 2009). To address the discrepancy between performance on neuropsychological tests and performance in everyday life, researchers have developed tasks that can provide a bridge between conventional neuropsychological tests and behavioural observation. Virtual tasks can simulate the activities of everyday life in a controlled setting (Knight & Titov, 2009; see also Trawley, Law, & Logie, 2011; Trawley, Law, Brown, Niven, & Logie, 2013). Virtual Week (Rendell & Craik, 2000) was developed as a laboratory measure of PM that closely represents PM activities in everyday life in a board game format, where each circuit of the board represents one virtual day. As participants circuit the board they pick up cards describing events relevant to the time of day (e.g., meals and shopping); they have to select an option (e.g., what to eat and what to buy), and the choice determines the dice


rolling consequences. The dice rolling, token moving, and event-card decisions provide the backdrop, or ongoing task, which is a defining feature of a PM task. Participants engage with 10 PM tasks during each virtual day. Of these tasks, four are regular, four are irregular, and two are time-check. The regular and time-check PM tasks are repeated every virtual day and simulate activities that occur as part of a normal healthcare routine. The regular tasks include two event-based tasks (i.e., taking antibiotics at breakfast and dinner) and two time-based tasks (i.e., taking asthma medication at 11 a.m. and 9 p.m.). In the time-check tasks participants are required to do a lung test at 2 minutes and at 4 minutes on the chronometer, or stopclock. The irregular tasks are different each virtual day, also include two event-based (e.g., buying bus tickets when shopping) and two time-based (e.g., phoning the plumber at 4 p.m.) tasks, and simulate the kinds of occasional tasks that occur in everyday life.
Virtual Week is a very promising task for investigating PM performance and has been used extensively in normal ageing (Aberle, Rendell, Rose, McDaniel, & Kliegel, 2010; Henry, Rendell, Phillips, Dunlop, & Kliegel, 2012; Margrett, Reese-Melancon, & Rendell, 2011; Rendell & Craik, 2000; Rendell et al., 2011; Rose, Rendell, McDaniel, Aberle, & Kliegel, 2010) and in different clinical populations: those with abnormal ageing (Ozgis, Rendell, & Henry, 2009; Thompson et al., 2010; Will et al., 2009), patients with multiple sclerosis (Kardiasmenos, Clawson, Wilken, & Wallin, 2008; Rendell, Jensen, et al., 2007; West et al., 2007), people with schizophrenia (Henry, Rendell, Kliegel, & Altgassen, 2007; Henry, Rendell, Rogers, Altgassen, & Kliegel, 2011), substance users (Leitz, Morgan, Bisby, Rendell, & Curran, 2009; Paraskevaides et al., 2010; Rendell, Gray, Henry, & Tolan, 2007; Rendell, Mazur, & Henry, 2009), Parkinson's patients (Foster et al., 2013), and patients with brain damage (Kim, Craik, Luo, & Ween, 2009; Mioni et al., 2013). Virtual Week has been shown to be sensitive to PM deficits in each of these groups. The reliability of Virtual Week was investigated by Rose et al. (2010) in a study involving younger and older adults. Across the entire sample, internal consistency estimates ranged from .84 to .94 for the regular, irregular and time-check tasks. Further, the split-half reliability of the overall Virtual Week measure was estimated at .74 in a clinical group with schizophrenia and .66 in controls (Henry et al., 2007), and the split-half reliability for regular, irregular and time-check tasks was .85, .71, and .71 for multiple sclerosis (MS) patients and .79, .75, and .73, respectively, for controls (Rendell et al., 2012).
Cronbach's alpha for all PM tasks was .89 for traumatic brain injury (TBI) patients and .62 for controls (Mioni et al., 2013), and .89 for patients with Parkinson's disease and .81 for controls (Foster et al., 2013). Together, evidence from clinical and


non-clinical groups suggests that Virtual Week is a reliable indicator of PM function. Test–retest performance, however, has never been investigated with Virtual Week. This is an important feature, particularly in clinical settings, for evaluating the efficacy of rehabilitation training. This study primarily aimed to examine the effect of test–retest in younger and older adults performing Virtual Week twice, one month apart. In addition, we conducted a preliminary investigation of the internal consistency and test–retest reliability of Virtual Week as a measure of PM.

EXPERIMENT 1

Method

Participants

Twenty older adults (aged 65–84 years, M = 73.75, SD = 5.22; 16 women) and 19 younger adults (aged 22–27 years, M = 23.95, SD = 1.22; 9 women) took part in Experiment 1. Older adults were volunteers from the community of Padova and were screened for visual acuity, the presence of neurological trauma, use of psychoactive medication, and dementia with the Mini Mental State Exam (MMSE; Folstein, Folstein, & McHugh, 1975). Older adults who scored below 25 on the MMSE were excluded from participation. The mean MMSE score of the selected older participants was 28.25 (SD = 1.62). Younger participants were volunteers recruited at the Department of General Psychology, Padova, Italy. The younger adults had significantly more years of education than the older adults, t(37) = 9.17, p < .001, d = 2.96; younger adults, M = 16.84, SD = 1.61; older adults, M = 7.90, SD = 3.95.

Materials

Prospective memory task: Computer Virtual Week. Virtual Week is a board game (Rendell & Craik, 2000; Rendell & Henry, 2009) in which participants simulate going through the course of a week (in this study we used five consecutive days, Monday to Friday). Participants move around the board with the roll of a dice. The time of the virtual day is displayed on a virtual clock calibrated to the position of the token on the board: the clock advances 15 minutes every two squares (e.g., Griffiths et al., 2012; Henry et al., 2012; Mioni et al., 2013; Rendell et al., 2011). As participants circuit the board, they have decisions to make and things to remember to perform. Participants have to select an "Event Card" each time the token lands on or passes a square labelled "E". This card describes specific activities and three options relevant to the virtual time of day. Each option involves


Figure 1. English version of Virtual Week computer screen display.

having to roll either a set number, an even (or odd) number, or any number on the dice. Two clocks are presented on the board (Figure 1). The clock above the dice is the stopclock, which starts at the beginning of each virtual day and represents real time. The clock under the dice is the virtual clock, which moves 15 minutes every two squares. Participants completed five days with 10 PM tasks per day: four regular (repeated), four irregular (non-repeated), and two time-check tasks (repeated). The four activities that occurred regularly every virtual day simulated health activities; two of the four regular tasks are time-based (i.e., triggered by the virtual time of day; "Use asthma inhaler at 11 a.m. and 9 p.m."), and two are event-based (i.e., triggered by information shown on an Event Card; "Take antibiotics at breakfast and dinner"). The four irregular tasks simulate activities that are new every day; as with the regular tasks, half were time-based (e.g., "Pick up the laundry at 4 p.m.") and half were event-based (e.g., "Buy the bus tickets at lunch"). The two time-check tasks were also related to health activities: checking lung capacity when the stopclock displayed 2:00 min and 4:00 min after the start of each day. The version used in the present study is an adaptation of the original version, translated into Italian and already used with traumatic brain injury (TBI) patients (Mioni et al., 2013). Minor changes were made to the standard version to reduce some cultural differences that were evident


between the Australian and Italian populations (e.g., for the event card "breakfast" the Australian version required the player to select from "crunchy peanut butter", "porridge and apple juice" or "cereal and orange juice"; peanut butter and porridge are not typical Italian breakfast foods and were substituted with the more familiar options of "biscuits" or "cakes"). Three event cards (out of a total of 50) that referred to activities at the university were also changed to activities more familiar to the Italian population ("Music store", "Florist" and "Post office"). As in Rendell and Craik (2000) and subsequent Virtual Week studies, participants were given pre-game instructions and then a practice virtual day to ensure they understood all features of the game. Accuracy on the PM tasks was the primary measure, but performance was also analysed in terms of time (minutes) to execute each virtual day at test and retest. Accuracy was calculated as the proportion of correct responses. A response was scored as correct if the target item was remembered at the correct Event Card (event-based tasks) or at the correct virtual time (time-based tasks), before the next roll of the dice. For time-check tasks, responses were scored as correct if performed within 10 seconds of the target time. Time to execute virtual days represents the time (in minutes) taken to perform each virtual day.

Procedure

Older adults were tested individually in their own homes, while younger participants were tested at the Department of General Psychology, Padova. Each participant undertook two experimental sessions that lasted approximately two hours each. During the first session (Test), participants performed Virtual Week, and older adults also performed the MMSE. After one month, participants performed Virtual Week again (Retest). Virtual Week was presented on a 15-inch computer screen with participants seated at a distance of approximately 60 cm. Participants gave their informed consent to participate in the study. The study was approved by the psychology ethics committee of the University of Padova and was conducted according to the principles expressed in the Declaration of Helsinki.

Results

Analysis of correct responses

Participants' performance was analysed in terms of the proportion of correct responses.2 This was the number of correct responses, expressed as a proportion

2 To control for the different years of education between younger and older adults, an ANCOVA was carried out with years of education as a covariate. Results showed only a main effect of years of education, F(1, 36) = 7.24, p = .01, η²p = .16.
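The accuracy scoring described in the Materials section (each category scored as the proportion of scheduled tasks performed correctly, with time-check responses accepted within 10 seconds of the target) can be sketched as follows. The record format and function names are our own illustration, not the actual Virtual Week implementation:

```python
# Illustrative sketch of the accuracy scoring described in the text.
# The record format and helper names are hypothetical, not taken from
# the actual Virtual Week software.

def time_check_correct(response_s, target_s, tolerance_s=10):
    """A time-check response counts as correct if performed within
    10 seconds of the target stopclock time."""
    return abs(response_s - target_s) <= tolerance_s

def proportion_correct(trials):
    """trials: list of dicts with a boolean 'correct' flag, one per
    scheduled PM task in a category (e.g., irregular time-based).
    Returns the proportion of correctly performed tasks."""
    if not trials:
        return 0.0
    return sum(t["correct"] for t in trials) / len(trials)

# Example: two time-check targets per day (120 s and 240 s on the
# stopclock); the response times are invented.
responses = [(118, 120), (262, 240)]   # (response, target) in seconds
trials = [{"correct": time_check_correct(r, t)} for r, t in responses]
```

Here the first response falls within the 10-second window and the second does not, giving a proportion correct of .50 for that category.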


TABLE 1
Experiment 1: Mean (M) and standard deviation (SD) for PM accuracy at test and retest sessions for younger and older adults

PM task          PM target      Younger adults M (SD)   Older adults M (SD)
Test session
  Regular        Event-based    .91 (.09)               .85 (.21)
  Regular        Time-based     .86 (.15)               .61 (.34)
  Irregular      Event-based    .95 (.09)               .86 (.10)
  Irregular      Time-based     .77 (.24)               .45 (.23)
  Time check     –              .58 (.26)               .22 (.58)
Retest session
  Regular        Event-based    .99 (.03)               .94 (.02)
  Regular        Time-based     .83 (.17)               .77 (.20)
  Irregular      Event-based    .95 (.07)               .96 (.07)
  Irregular      Time-based     .73 (.23)               .67 (.22)
  Time check     –              .60 (.20)               .36 (.23)

of the PM tasks scheduled for each of the four categories of tasks: regular event, regular time, irregular event, and irregular time (Table 1). Data were analysed with a 2 × 2 × 2 × 2 mixed ANOVA with the between-group variable group (younger, older) and within-group variables PM task (regular, irregular), PM cue (event-based, time-based) and session (test, retest). For this and subsequent analyses, post hoc tests were performed using the Bonferroni correction. Results showed significant main effects of group, F(1, 37) = 6.71, p = .014, η²p = .15; PM task, F(1, 37) = 10.71, p = .002, η²p = .22; PM cue, F(1, 37) = 66.55, p < .001, η²p = .64; and session, F(1, 37) = 15.13, p < .001, η²p = .29. There were two sets of significant interactions that were followed up separately. First, there were interactions of group × session, F(1, 37) = 19.16, p < .001, η²p = .34, and group × PM cue, F(1, 37) = 8.28, p = .007, η²p = .18, and a three-way interaction of these variables, group × PM cue × session, F(1, 37) = 4.92, p = .033, η²p = .12 (see Figure 2). To further assess this three-way interaction, separate 2 × 2 × 2 ANOVAs were run for event- and time-based tasks with the variables group, session and PM task. Second, there was an interaction between PM task and PM cue, F(1, 37) = 21.44, p < .001, η²p = .37, which was followed up with tests of simple effects. No other interactions were significant. The follow-up of the three-way interaction revealed, for the separate analysis of event-based tasks, a main effect of session, F(1, 37) = 14.92, p < .001, η²p = .29; no main effect of group, F(1, 37) = 1.17, p = .286, η²p = .03; but a significant group × session interaction, F(1, 37) = 8.03,


Figure 2. Experiment 1. Mean proportion of correct prospective memory (PM) responses for PM tasks as a function of event- and time-based cued tasks, test and retest session for the younger and older adults. The error bars indicate ±1 SE.

p = .007, η²p = .18. Tests of simple effects revealed that in the test session, younger adults performed significantly better on event-based tasks than older adults, F(1, 37) = 4.42, p = .042, η²p = .11; however, this difference was not significant in the retest session, F(1, 37) = 2.32, p = .136, η²p = .06. Further tests of simple effects revealed that older adults performed better on event-based tasks in the retest session than in the test session, F(1, 37) = 23.01, p < .001, η²p = .38. The younger adults did not differ between the test and retest sessions, F(1, 37) = 0.52, p = .477, η²p = .01. For time-based tasks, there were main effects of session, F(1, 37) = 7.49, p = .009, η²p = .17, and group, F(1, 37) = 7.95, p = .008, η²p = .18, and a significant interaction between group and session, F(1, 37) = 16.27, p < .001, η²p = .31. Tests of simple effects revealed that younger adults performed better than older adults on time-based tasks in the test session, F(1, 37) = 15.06, p < .001, η²p = .29. This difference was not significant in the retest session, F(1, 37) = 0.95, p = .336, η²p = .03. Further tests of simple effects revealed that older adults performed better on time-based tasks in the retest session than in the test session, F(1, 37) = 23.53, p < .001, η²p = .39; however, this difference was not significant for younger adults, F(1, 37) = 0.82, p = .371, η²p = .02.


The interaction between PM task and PM cue was followed up with tests of simple effects, which showed that event-based tasks were performed more accurately than time-based tasks for both regular tasks, F(1, 37) = 29.08, p < .001, η²p = .44 (regular event M = .92, SD = .14; regular time M = .77, SD = .25), and irregular tasks, F(1, 37) = 81.38, p < .001, η²p = .67 (irregular event M = .93, SD = .09; irregular time M = .66, SD = .26). Further tests of simple effects revealed that while regular time-based tasks were performed more accurately than irregular time-based tasks, F(1, 37) = 20.34, p < .001, η²p = .36, this difference was not significant between regular and irregular event-based tasks, F(1, 37) = 0.39, p = .538, η²p = .01. Data from the time-check tasks were analysed separately with a 2 × 2 mixed ANOVA with between-groups variable group (younger, older) and within-groups variable session (test, retest).3 Results showed a significant effect of group, F(1, 37) = 17.67, p < .001, η²p = .32, with older adults (M = .30, SD = .27) performing less accurately than younger adults (M = .59, SD = .23). There was also a significant effect of session, F(1, 37) = 4.62, p = .038, η²p = .11, with performance improving overall in the retest session (M = .48, SD = .25) compared to the test session (M = .40, SD = .33). There was no interaction between group and session, F(1, 37) = 2.94, p = .095, η²p = .07. In sum, the analysis of PM accuracy on regular and irregular tasks showed no test–retest effect for younger adults. However, older adults improved their performance in the retest session; older adults were less accurate than younger adults at the test session, but the two groups were equally accurate at the retest session. On the time-check task, the younger participants were consistently better than the older participants at both test and retest, and all participants improved their performance at the retest session.

Time to execute virtual days

Time to execute virtual days represents the time (in minutes) taken to perform each virtual day. Data were analysed with a 2 × 5 × 2 mixed ANOVA with the between-group variable group (younger, older) and within-group variables days (Monday, Tuesday, Wednesday, Thursday, Friday) and session (test, retest).4 Significant effects of group, F(1, 37) = 91.77, p < .001, η²p = .71; days, F(4, 148) = 26.58, p < .001, η²p = .42; and session, F(1, 37) = 116.92, p < .001, η²p = .76, were found. The interactions group × session, F(1, 37) = 20.57, p < .001, η²p = .36; group × days, F(4, 148) = 4.14,

3 An ANCOVA was also carried out with years of education as a covariate. The main effect of education did not reach significance, F(1, 36) = .33, p = .56, η²p = .01.
4 An ANCOVA was also carried out with years of education as a covariate. Results showed a significant main effect of education, F(1, 36) = 5.16, p = .03, η²p = .12.


Figure 3. Experiment 1. Time to execute virtual days for younger and older adults at test and retest sessions. The error bars indicate ±1 SE.

p = .003, η²p = .10; and session × days, F(4, 148) = 6.98, p < .001, η²p = .16, were significant. Interestingly, the group × session × days interaction was also significant, F(4, 148) = 5.05, p < .001, η²p = .12 (Figure 3). Post hoc analysis showed that younger adults were always faster than older adults in executing the virtual days. Older adults significantly decreased the time taken to execute the virtual days from Monday to Friday in the test session, stabilising their performance after Wednesday. No differences were found between days in the retest session. For younger adults, no differences were found between days at either the test or the retest session.

Reliability

The internal consistency reliability coefficients (Cronbach's alpha) for each of the task categories at test and retest were computed separately for younger and older adults and are reported in Table 2. Test–retest correlations were also computed separately for younger and older adults and are reported in Table 3.
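The coefficients in Table 2 are Cronbach's alpha, α = k/(k − 1) × (1 − Σσ²_item / σ²_total), computed over the participants-by-items score matrix for each task category. A minimal sketch with invented 0/1 item scores (not data from this study):

```python
# Cronbach's alpha for a participants-by-items score matrix.
# The example matrices used in testing are invented for illustration.

def variance(xs):
    """Population variance, as commonly used when computing alpha."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(matrix):
    """matrix: list of rows, one per participant, one column per item
    (e.g., 20 regular event-based trials scored 0/1)."""
    k = len(matrix[0])                 # number of items
    items = list(zip(*matrix))         # transpose to per-item columns
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(row) for row in matrix])
    return (k / (k - 1)) * (1 - item_var / total_var)
```

Test–retest reliability, by contrast, is simply the Pearson correlation between each participant's score at test and the corresponding score at retest (Table 3).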

Discussion

The present study was conducted to investigate group differences in PM performance at test and retest in younger and older adults who performed Virtual Week twice, one month apart. We also conducted a preliminary investigation


TABLE 2
Reliability of Virtual Week: Cronbach's alpha assessing internal consistency for younger and older adults at test and retest sessions

                      Regular        Regular       Irregular     Irregular
                      event-based    time-based    event-based   time-based    Time check   All tasks
Number of items       20             20            20            20            10           50
Test session
  Younger adults      .770           .529          .554          .822          .704         .644
  Older adults        .582           .900          .716          .878          .865         .915
Retest session
  Younger adults      .708           .704          .551          .813          .557         .830
  Older adults        .609           .719          .250          .761          .797         .858

TABLE 3
Test–retest Pearson's correlation coefficients for Virtual Week tested one month apart. Experiment 1: version A of Virtual Week at test and retest. Experiment 2: versions A and B of Virtual Week, completed in counterbalanced order

                                       Test–retest A and A (Exp. 1)       Test–retest A and B (Exp. 2)
PM task       Tasks per day/total      Young (n = 19)    Old (n = 20)     All old (n = 50)
Regular       4/20                     .393*             .665**           .362**
Irregular     4/20                     .410*             .696**           .550**
Event         4/20                     .128              .397*            .305*
Time          4/20                     .581**            .736**           .603**
Time Check    2/10                     .676**            .542**           .564**
All PM        10/50                    .613**            .805**           .682**

*p < .05; **p < .001

of the test–retest reliability and internal consistency of Virtual Week (Rendell & Craik, 2000; Rendell & Henry, 2009). Virtual Week has shown good psychometric properties when investigated with younger and older adults (Rose et al., 2010) and clinical populations (Foster et al., 2013; Henry et al., 2007; Mioni et al., 2013; Rendell et al., 2012), but its test–retest reliability had never been investigated. The internal consistency at test and retest was high for both younger and older adults, and consistent with previous Virtual Week studies (Henry et al., 2007; Mioni et al., 2013; Rendell et al., 2012; Rose et al., 2010). Importantly, this study provides the first test–retest correlations, which were relatively high for both older and younger adults.


Analyses of PM performance showed that older adults were substantially less accurate than younger adults at the test session, showing an age-related decrement in PM performance. The discrepancy between younger and older participants was particularly evident for irregular tasks and when the cue was time-based. A significant effect of PM task was found, indicating that participants were more accurate when the task was repeated every day (regular tasks) than for activities that were new every day (irregular tasks). In regular PM tasks, the cues are presented in a consistent routine (i.e., take medication every day at breakfast); therefore, the preceding situational context might provide a richer, more extensive set of cues for triggering retrieval (Kvavilashvili & Fisher, 2007; Rose et al., 2010). A significant effect of PM cue was also found; participants were less accurate when performing time-based compared to event-based activities, confirming that time-based tasks are more demanding, probably due to the greater self-initiated retrieval required to monitor the time and the absence of external cues to help recall the PM activity (Einstein & McDaniel, 1990; McDaniel & Einstein, 2000, 2007). Analysis of time-check tasks also showed lower performance in older compared to younger adults; this might be explained by the non-focality of the time-check task. PM cues are more focal when the ongoing task involves processing features of the PM cues than when ongoing task processing is more peripheral (non-focal cues; Kliegel, Jäger, & Phillips, 2008; Rendell, McDaniel, Forbes, & Einstein, 2007). Moreover, time-check tasks are tied to the chronometer, which represents the real time of the game. Time perception is reduced in older adults (Block, Zakay, & Hancock, 1998); this might also have contributed to the lower performance observed in older adults on time-check tasks (Mioni & Stablum, 2014).
Analysis of the time to execute Virtual Week showed no differences in younger adults between the test and retest sessions, indicating that younger participants were familiar with the experimental procedure. Older adults significantly reduced the time to execute the virtual days within the test session from Monday to Friday. The longer time older adults took to execute each day compared to younger adults is likely because older adults are less familiar with technological devices (Ellis & Allaire, 1999) and have slower speed of processing (Salthouse, 2000; Yordanova, Kolev, Hohnsbein, & Falkenstein, 2004). Interestingly, the significant reduction in the time older adults took to execute each day indicates a learning effect and greater familiarity with the experimental procedure by the end of the test session. This improvement (i.e., reduction in time to complete each virtual day) was maintained at the retest session; in fact, older adults were significantly faster at retest than at test. An effect of session (test vs. retest) was also found on PM accuracy. Older adults were significantly less accurate than younger adults at the test session but, interestingly, the discrepancy between younger and older adults was
eliminated at the retest session. In particular, older adults showed a marked improvement on irregular time-based targets. Better performance at retest was also observed for time-check tasks; in this case, both groups (older and younger) showed the improvement.

The improvement in PM accuracy at retest was not expected and needs further consideration. It is possible that, at the test session, our older adults (with fewer years of education) were less familiar with the experimental procedure and the demands of a computerised task and dedicated more cognitive resources to understanding the procedure; therefore, fewer cognitive resources were available to implement and execute the PM activities. On the other hand, participants performed the same activities at test and retest, so they may have remembered, or at least been more familiar with, the content of the PM activities at the retest session. In fact, some participants, before starting the retest session, clearly remembered and repeated the content of some of the irregular tasks with no prompt from the experimenter. This might be expected for regular tasks, which were repeated several times within each session. Therefore, we hypothesised that the higher level of accuracy observed in older adults during the retest session was not only due to participants becoming more familiar with the experimental procedure but was mainly due to familiarity with the content of the PM actions. To test this hypothesis we developed two parallel versions of Virtual Week (version A and version B) and tested two groups of older adults one month apart.

EXPERIMENT 2

Experiment 2 was conducted to investigate whether the results obtained in Experiment 1 were due to learning of the Virtual Week procedure or to acquisition of the content of the PM actions. The standard version used in Experiment 1 was included (version A) and we developed a second, parallel version of Virtual Week (version B). The two versions were equivalent in the number of PM tasks to be performed each virtual day; only the content of the PM tasks differed between version A and version B. We also investigated the reliability and internal consistency of the parallel Virtual Week version.

Methods

Participants

Fifty older adults (61–82 years, M = 68.12, SD = 4.98; 29 women) were included in Experiment 2. Twenty-five participants performed version A (11 women) and 25 performed version B (18 women) at test. Participants who performed version A at test performed version B at retest and vice versa (Group A-B
and Group B-A). Participants of Group A-B were 68.40 years old (SD = 5.07), had 11.23 years of education (SD = 5.12), and had an MMSE score of 28.14 (SD = 1.15). Participants of Group B-A were 67.84 years old (SD = 4.98), had 10.56 years of education (SD = 3.58), and had an MMSE score of 28.52 (SD = 1.25). No differences in age, years of education or MMSE scores were found between the two groups (all ps > .270).

Materials

Prospective memory task: Computer Virtual Week version A and version B. A parallel version of the original Virtual Week (version A, used in Experiment 1) was created (version B). The virtual days, as presented by the 10 event cards per day, had a similar structure, but the specific content of the activities in each event card differed between versions A and B. Regular PM tasks were the same in version A and version B; in both versions participants were required to undertake everyday health duties: "Take antibiotics at breakfast and dinner"; "Use asthma inhaler at 11 a.m. and 9 p.m."; and "Check the lung capacity at 2 and 4 minutes". New irregular PM tasks were created for version B. We modified only the content of the PM action, not the PM cue: the time or the event card at which the PM action was to be performed was the same in version A and version B. Thus, if participants performing version A were required on Monday to call the bank at 12 p.m., then in version B participants were required on Monday to call the doctor at 12 p.m. (time-based PM tasks). Similarly, for the event-based PM tasks, the position of the PM cue in the virtual day was the same in each version. Thus, if participants in version A on Monday had to return the book when at the library, which was the sixth event card that occurred in the virtual afternoon, then in version B, on Monday, participants were required to return the DVD at the video store, which was the sixth event card that occurred in the virtual afternoon.

Recognition test of PM task content. In Experiment 2, Virtual Week had the added feature that, immediately following each virtual day, participants completed a recognition test to assess their retrospective memory for the various PM tasks. Successful PM performance requires executing the intended action at the appropriate moment (i.e., the prospective component) as well as remembering the specific action to be performed (i.e., the retrospective component) (Einstein & McDaniel, 1996).
Therefore, PM failure might be due to forgetting the content of the PM action or to failing to retrieve and execute the intended action (Einstein & McDaniel, 1996; Kliegel, Eschen, & Thöne-Otto, 2005; Maylor, Smith, Della Sala, & Logie, 2002; Mioni et al., 2012; Mioni & Stablum, 2014). The recognition test was introduced to further evaluate the source of the PM forgetting. The test required matching each intended action with its
cue. Participants were presented with a list of actions, some of which were required during the virtual day while others were distractors. For each action, there was a pull-down menu listing possible PM cues (e.g., when shopping, at university) displayed on the screen. Participants were required to identify the required actions and connect each action with the right cue. The proportion of correct responses on the retrospective memory test was calculated for each PM task (regular time-based, regular event-based, irregular event-based, irregular time-based and time check).

Procedure

As in Experiment 1, participants were tested individually in two experimental sessions that lasted approximately two hours each. Participants were tested at the Department of General Psychology, at Centre San Pio X, Padova, Italy, or in their own homes. A possible effect of location was investigated and no differences were found (p > .05) between participants tested at the different locations. During the first session (test session) participants performed the MMSE and Virtual Week. After one month, participants performed Virtual Week again (retest session). During the test session participants were randomly assigned to one of the two versions, A or B. The version of Virtual Week was counterbalanced between participants; thus, half of the participants first performed Virtual Week version A and half started with Virtual Week version B. During the retest session, participants who had started with version A performed version B, and vice versa. The PM task (Virtual Week) was presented on a 15 inch computer screen with participants seated at a distance of approximately 60 cm. Participants gave their written consent to participate in the study.

Results

Preliminary analyses were conducted on PM performance (proportion of correct responses) between version A and version B to investigate possible differences between the two versions. A mixed ANOVA of PM performance at the test session revealed no main effect of PM version (A, B) (p = .877) and no interaction with PM task (regular, irregular) or PM cue (event, time) (all ps ≥ .406). Independent t-tests on PM performance at the test session for the time-check task also showed no differences between versions A and B (p = .915). These analyses were repeated at the retest session, and once again there was no main effect of PM version (A, B) (p = .640) and no interaction with PM task or PM cue (all ps ≥ .484). Independent t-tests showed that the time-check task also did not vary between version A and version B at retest (p = .586). Further preliminary analyses of PM performance at test and retest were conducted including the between-groups variable of version order (version A first, version B first); results revealed that this was not a
main effect (p = .846) and did not interact with the other variables, PM task (regular, irregular), PM cue (event, time), or session (test, retest) (all ps ≥ .106). In summary, PM performance did not differ between version A and version B at either the test or the retest session, and the analysis of test and retest sessions revealed that PM accuracy did not vary according to whether participants completed version A at test and version B at retest or the reverse order.

Analysis of correct responses. Data were analysed with a 2 × 2 × 2 repeated measures ANOVA with the within-groups variables PM task (regular, irregular), PM cue (event-based, time-based) and session (test, retest).⁵ There was a main effect of PM task, F(1, 49) = 44.72, p < .001, ηp² = .48, and of PM cue, F(1, 49) = 212.90, p < .001, ηp² = .81, but no main effect of session, F(1, 49) = .04, p = .842, ηp² = .001. The only significant interaction was the two-way PM task × PM cue interaction, F(1, 49) = 15.89, p < .001, ηp² = .24. This interaction was analysed with tests of simple effects, which revealed that event-based tasks were performed more accurately than time-based tasks for both the regular tasks, F(1, 49) = 163.85, p < .001, ηp² = .77 (regular event M = .95, SD = .01; regular time M = .64, SD = .21), and the irregular tasks, F(1, 49) = 193.23, p < .001, ηp² = .79 (irregular event M = .89, SD = .16; irregular time M = .49, SD = .21). Further tests of simple effects showed that regular tasks were performed more accurately than irregular tasks for both event-based tasks, F(1, 49) = 21.73, p < .001, ηp² = .31, and time-based tasks, F(1, 49) = 39.84, p < .001, ηp² = .45. Data from the time-check tasks were analysed separately and a t-test was conducted between accuracy at the test and retest sessions.
Results showed no significant effect of session, t(49) = 1.74, p = .08, although participants were slightly less accurate at test (M = .36, SD = .23) than at retest (M = .41, SD = .21).

Analysis of retrospective performance. The proportion of correct responses on the recognition test of PM content for each type of PM task at test and retest is displayed in Table 4. As for PM accuracy, the recognition test of PM content was analysed with a 2 × 2 × 2 repeated measures ANOVA with the within-groups variables PM task (regular, irregular), PM cue (event-based, time-based) and session (test, retest). There was no main effect of session, F(1, 48) = 2.55, p = .11, ηp² = .05, but there were main

⁵ Analyses were also conducted controlling for the effect of age. Participants were divided into two groups: old adults aged 61–67 years and old-old adults aged 68–82 years. No differences in years of education or MMSE scores were found between the two groups (all ps > .360). Results showed a significant main effect of age (p < .05), indicating that old-old adults were less accurate than old adults. Age did not interact with any other variable (all ps > .211).


TABLE 4
Experiment 2: Mean (M) and standard deviation (SD) for PM accuracy and for accuracy on the recognition task at the test and retest sessions; t-test values and effect size indices (Cohen's d) are also indicated

PM task        PM target      PM accuracy   Recognition accuracy      t        d
                              M (SD)        M (SD)

Test session
  Regular      Event-based    .95 (.08)     .98 (.14)                1.20     .26
  Regular      Time-based     .63 (.18)     .91 (.27)                6.76*   1.22
  Irregular    Event-based    .91 (.09)     .95 (.10)                2.91*    .42
  Irregular    Time-based     .49 (.22)     .82 (.23)                8.93*   1.46
               Time Check     .36 (.23)     .97 (.07)               17.99*   3.58

Retest session
  Regular      Event-based    .95 (.08)     .98 (.05)                2.38*    .39
  Regular      Time-based     .66 (.22)     .91 (.16)                7.82*   1.29
  Irregular    Event-based    .87 (.11)     .88 (.11)                 .59     .90
  Irregular    Time-based     .51 (.23)     .79 (.16)                9.43*   1.41
               Time Check     .42 (.19)     .93 (.15)               14.66*   2.97

*p < .001
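The effect sizes in Table 4 are consistent with Cohen's d computed as the mean difference divided by the pooled standard deviation of the two measures (a reconstruction; the article does not state the formula used):

```latex
% Cohen's d for the recognition-vs-PM accuracy comparison,
% pooling the two standard deviations in the denominator.
d = \frac{M_{\text{recognition}} - M_{\text{PM}}}
         {\sqrt{\left(SD_{\text{recognition}}^{2} + SD_{\text{PM}}^{2}\right)/2}}
```

For example, the irregular time-based row at test gives d = (.82 − .49) / √((.23² + .22²)/2) ≈ 1.46, matching the tabled value.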

effects of PM task, F(1, 48) = 21.65, p < .001, ηp² = .31, and PM cue, F(1, 48) = 25.05, p < .001, ηp² = .34. No significant interactions were found (all ps ≥ .162). Interestingly, t-tests were also conducted between PM accuracy and accuracy on the recognition test, separately at test and retest. Participants were more accurate on the recognition test of PM content (involving matching the PM action and the PM cue) than at performing the PM actions (see Table 4), except for the event-based tasks, where participants were close to ceiling on both PM accuracy and the recognition task.

Time to execute virtual days. Data were analysed with a 5 × 2 ANOVA with the within-groups variables of day (Monday, Tuesday, Wednesday, Thursday, Friday) and session (test, retest). There were main effects of day, F(4, 120) = 34.43, p < .001, ηp² = .53, and session, F(1, 30) = 6.17, p = .019, ηp² = .17. There was also a significant day × session interaction, F(4, 120) = 3.56, p = .009, ηp² = .11. These findings indicate that participants decreased the time taken to execute each virtual day at the test session, with performance stable after Wednesday; at retest their performance was stable from Tuesday. Participants were faster at the retest session only on Monday and Friday and performed equally fast on the other days.

Reliability of version A and version B. The reliability coefficients (Cronbach's alpha) were analysed separately for versions A and B at test

TABLE 5
Experiment 2: Reliability of Virtual Week: Cronbach's alpha assessing internal consistency for older adults performing version A and version B at test (upper part) and retest (lower part)

                    Regular                     Irregular
                    Event-based   Time-based   Event-based   Time-based   Time check   All tasks
Number of items     20            20           20            20           10           50

Test session
  Version A         .37           .64          .40           .66          .64          .78
  Version B         .29           .76          .41           .74          .39          .71

Retest session
  Version A         .70           .56          .39           .21          .69          .51
  Version B         .80           .66          .52           .39          .79          .60

and retest session, for each of the task categories (Table 5). The test–retest correlations were also calculated and are reported in Table 3.
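For reference, the internal-consistency coefficients in Table 5 are presumably standard Cronbach's alpha values; a sketch of the usual formula (the article does not spell it out) is:

```latex
% Cronbach's alpha for a scale of k items, where s_i^2 is the variance of
% scores on item i across participants and s_T^2 is the variance of the
% total (summed) score across participants.
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} s_i^2}{s_T^2}\right)
```

Here each PM trial is an item, so k corresponds to the "Number of items" row of Table 5 (e.g., k = 50 for the all-tasks coefficient).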

Discussion

Experiment 2 was primarily conducted to investigate whether the better performance obtained by older adults at the retest session in Experiment 1 was due to learning of the Virtual Week procedure or to acquisition of the content of the PM actions. As in Experiment 1, internal consistency and test–retest reliability were also investigated. Two parallel versions were employed: version A was the standard Virtual Week (Mioni et al., 2013; Rendell & Craik, 2000), the same used in Experiment 1; in version B we changed the content of the event cards and of the PM tasks (both event- and time-based). The internal consistency was high, confirming the data from Experiment 1 and consistent with previous studies of Virtual Week (Henry et al., 2007; Mioni et al., 2013; Rendell et al., 2012; Rose et al., 2010). Moreover, the test–retest correlations were also sufficiently high for versions A and B.

The results for PM accuracy showed that, at each testing session, version A and version B were equivalent, and no group differences were found in the PM accuracy of participants using either version. Significant effects of PM task and PM target were found, confirming the results observed in Experiment 1. Participants were less accurate when the activities were new every day (irregular tasks), in particular when the target was time-based compared to event-based (Mioni et al., 2013; Rendell et al., 2011). As in Experiment 1, at the test session participants reduced the time taken to perform each virtual day, indicating greater familiarity and an acquisition of
competence with the experimental procedure. Participants were also faster in performing Virtual Week at the retest session compared to the test session, indicating a learning effect and that the competence acquired with the experimental procedure during the test session was maintained at retest. Interestingly, no effect of session was found; participants were equally accurate at test and retest independent of the version used. This confirms our prediction: the improvement in PM performance observed in Experiment 1 was driven by remembering the content of the PM actions from the test to the retest session. In Experiment 1 the greatest improvement was observed in irregular time-based tasks, which are the tasks that are new every day (irregular PM tasks with higher demands on retrospective memory) and whose target requires more self-initiated processing (time-based). Repeating the same tasks at test and retest increased performance in older adults in Experiment 1, but when participants performed different PM tasks at test and retest, performance was equivalent (Experiment 2).

As in Experiment 1, an effect of session was found for time-check tasks, indicating that participants were more accurate at retest compared to test. The time-check task is the most demanding, given that the target is non-focal and tied to "real time", but it is the task that gains most from repetition across the test and retest sessions. It is possible that participants became more familiar (and faster) with the experimental procedure and better able to deal with the required PM activity, resulting in their being more precise and accurate on the time-check task.

Experiment 2 also reports the results for the recognition test of PM task content, assessing the retrospective memory component of the task. The assumption is that remembering the content of the PM intention is a necessary prerequisite for the realisation of PM actions.
Interestingly, a number of previous studies have found that older adults fail to carry out intentions despite remembering their contents upon later questioning (Einstein & McDaniel, 1996; Kliegel et al., 2005; Maylor et al., 2002; Mioni et al., 2012; Mioni & Stablum, 2014). Our results are in line with these previous findings and confirm that the PM impairment observed in older adults seems to be caused by failure to execute the PM action at the expected moment (lapses of attention; Mioni & Stablum, 2014) rather than by forgetting the content of the PM action. This conclusion is based on the finding that participants were more accurate on the recognition test of PM content, showing superior performance on this index of retrospective memory compared to PM. This suggests that the retrospective memory processes involved in encoding and retention of intention contents are intact, whereas the executive processes underlying self-initiated intention retrieval or execution at the appropriate moment in the future are impaired (Kliegel, Martin, McDaniel, & Einstein, 2002; McDaniel & Einstein, 2000).

GENERAL DISCUSSION

Adequate PM abilities are necessary for everyday activities such as remembering to buy food, attend appointments, pay bills, and take medication. PM impairment is often observed in older adults and in clinical populations (Henry et al., 2004; Kliegel, Jäger, Altgassen, & Shum, 2008; Maylor, 2008; McDaniel et al., 2008). PM failures can be frustrating and have the potential to limit people's independence and force them to rely on a caregiver for prompting. Therefore, it is important to develop reliable tools to evaluate PM performance in clinical populations as well as in healthy older adults.

In the present study, we compared PM performance at test and retest with Virtual Week (Rendell & Craik, 2000), a well-known PM measure widely used with healthy older adults and clinical populations (Rendell & Henry, 2009). As observed by McDaniel and Einstein (2007), most PM measures have the limitation of including only a small number of specific PM tasks or trials and of not being representative of daily life activities (see also Burgess et al., 2006; Knight & Titov, 2009). Virtual Week includes a relatively large number of PM tasks (10 for each virtual day) and there is emerging evidence of it being a reliable indicator of PM function in both clinical and non-clinical populations (Rendell & Henry, 2009).

In Experiment 1, younger and older participants performed standard Virtual Week twice, one month apart. Older adults were less accurate than younger participants, confirming an age-related PM decline (Einstein & McDaniel, 1990; Henry et al., 2004; Kliegel, Jäger et al., 2008; Maylor, 2008; McDaniel et al., 2008). Moreover, a significant effect of session was found, indicating that older adults reduced the time taken to execute Virtual Week and were more accurate at the retest session. We hypothesised that the improvement observed in older adults was mainly due to remembering the content of the PM actions from the test to the retest session.
Thus, we developed a parallel version (version B) in which only the content of the PM activities differed from the standard version (version A) used in Experiment 1. Experiment 2 was conducted with older adults, half of whom performed version A at the test session and version B at the retest session, while the rest performed the reverse condition. Results showed no differences between the two versions at the test session, indicating that the versions are equivalent; moreover, no effect of session was found, indicating that the better performance obtained in Experiment 1 was mainly due to remembering the content of the PM actions. Older adults still reduced the time taken to execute Virtual Week, indicating an acquisition of competence with the experimental procedure. Interestingly, in Experiment 2, analyses of retrospective memory were also conducted. Participants showed better performance when completing the recognition test at the end of each day than when completing the PM tasks during
the day. These results seem to indicate that PM dysfunction might be caused by failure to retrieve and execute the intended action at the appropriate moment rather than by forgetting the content of the PM actions (Einstein & McDaniel, 1996; Kliegel et al., 2005; Maylor et al., 2002; Mioni & Stablum, 2014).

In the present study, we found high internal consistency, in line with previous studies of Virtual Week (Henry et al., 2007; Mioni et al., 2013; Rendell et al., 2012; Rose et al., 2010), and moderate to high test–retest reliability. The internal consistency was very high in Experiment 1 for younger (.64 and .83) and for older (.91 and .86) adults. In Experiment 2 internal consistency was high for both version A (test = .64 and retest = .83) and version B (test = .91 and retest = .86). In Experiment 1, when using the same version A, the older adults had relatively high test–retest reliability (.80) while the young adults had a moderate to high test–retest correlation (.61). In Experiment 2, which involved only older adults, the test–retest reliability was high (.68) when using versions A and B (or vice versa) at test and retest. These trends are promising. In light of the moderate to high test–retest reliability when using version A and version B, future research should consider changing the PM task content as well as the PM cue for version B. In the current study, using the same cue but different content for irregular PM tasks across versions A and B of Virtual Week may have caused some interference that reduced the consistency across the parallel versions.

The strength of Virtual Week as a measure of PM is the relatively high number of specific PM trials and the good reliability, specifically the internal consistency. A critical feature is the inclusion of PM tasks that vary in their relative task demands: regular vs. irregular tasks and time-based vs. event-based.
In the context of clinical practice, a differentiated profile of impairment on Virtual Week may therefore be informative, not only with regard to the degree of PM impairment per se, but also the particular circumstances in which PM impairment is more likely to arise (and consequently the manner in which rehabilitation efforts should be targeted). Also important is the inclusion of the recognition task at the end of each virtual day to investigate the retrospective memory component of PM performance.

Limitations of Virtual Week might be the use of a computer device and the duration of the session. Regarding the use of a computer device, older participants might be less familiar with it than younger adults; however, computers and technological devices are becoming more common in everyday life and more older adults are becoming familiar with them (Torres, 2011). In the present study, older adults needed more time to complete the first virtual days (i.e., Monday and Tuesday) but then reduced the time taken to execute each virtual day, indicating greater familiarity and confidence with the experimental procedure. The second limitation concerns the duration of the experimental
session. The standard version used in the present study includes five virtual days; this version might be too long to be included in a neuropsychological evaluation. Clinicians should consider using a reduced version (three days) that has already been used with clinical groups and has demonstrated good reliability (Henry et al., 2007; Mioni et al., 2013).

To summarise, the present study was conducted to investigate group differences in PM performance at test compared to retest with Virtual Week. In Experiment 1 younger and older adults were tested with the standard Virtual Week twice, one month apart. Older adults were less accurate than younger participants, confirming an age-related PM decline. This age-related PM decline was attenuated at the retest session; in fact, the older participants were then as accurate as younger adults. This improvement was hypothesised to be due to remembering the content of the PM actions from the test to the retest session. To investigate this hypothesis, two parallel versions were compared, with half of the participants undertaking version A at test and version B at retest and the other half completing the versions in the reverse order (Experiment 2). There were no differences in PM performance between versions A and B at the test session, indicating that the two versions were equivalent. No effect of session was found, indicating that the improvement observed in older adults in Experiment 1 was due to remembering the PM content rather than to participants becoming more familiar with the experimental procedure. The analysis of the recognition task showed that the PM impairment observed in older adults is mainly due to failure to execute the PM action at the expected moment rather than to forgetting the content of the PM action. Our results also confirm that Virtual Week is a good measure for investigating PM performance. The study also provides promising evidence of relatively high internal consistency (McDaniel & Einstein, 2007; Rose et al., 2010) and test–retest reliability.

REFERENCES

Aberle, I., Rendell, P. G., Rose, N. S., McDaniel, M. A., & Kliegel, M. (2010). The age prospective memory paradox: Young adults may not give their best outside of the lab. Developmental Psychology, 46, 1444–1453. doi: 10.1037/a0020718
Bisiacchi, P. S., Tarantino, V., & Ciccola, A. (2008). Aging and prospective memory: The role of working memory and monitoring processes. Aging Clinical and Experimental Research, 20, 1–9.
Block, R. A., Zakay, D., & Hancock, P. A. (1998). Human aging and duration judgments: A meta-analytic review. Psychology and Aging, 13, 584–596.
Burgess, P. W., Alderman, N., Forbes, C., Costello, A., Coates, L. M., Dawson, D. R., . . . Channon, S. (2006). The case for development and use of "ecologically valid" measures of executive functions in experimental and clinical neuropsychology. Journal of the International Neuropsychological Society, 12, 1–6. doi: 10.1017/S1355617706060310

Cona, G., Arcara, G., Tarantino, V., & Bisiacchi, P. S. (2012). Age-related differences in the neural correlates of remembering time-based intentions. Neuropsychologia, 50, 2692–2704. doi: 10.1016/j.neuropsychologia.2012.07.033
Delprado, J., Kinsella, G., Ong, B., Pike, K., Ames, D., Storey, E., . . . Rand, E. (2012). Clinical measures of prospective memory in amnestic mild cognitive impairment. Journal of the International Neuropsychological Society, 18, 295–304. doi: 10.1017/S135561771100172X
Einstein, G. O., & McDaniel, M. A. (1990). Normal aging and prospective memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 717–726. doi: 10.1037/0278-7393.16.4.717
Einstein, G. O., & McDaniel, M. A. (1996). Retrieval processes in prospective memory: Theoretical approaches and some new empirical findings. In M. Brandimonte, G. O. Einstein, & M. A. McDaniel (Eds.), Prospective memory: Theory and applications (pp. 115–141). Mahwah, NJ: Lawrence Erlbaum.
Ellis, R. D., & Allaire, J. C. (1999). Modelling computer interest in older adults: The role of age, education, computer knowledge, and computer anxiety. Human Factors: The Journal of the Human Factors and Ergonomics Society, 41, 345–355. doi: 10.1518/001872099779610996
Ellis, J., & Kvavilashvili, L. (2000). Prospective memory in 2000: Past, present, and future directions. Applied Cognitive Psychology, 14, 1–9. doi: 10.1002/acp.767
Fleming, J. M., Shum, D., Strong, J., & Lightbody, S. (2005). Prospective memory rehabilitation for adults with traumatic brain injury: A compensatory training programme. Brain Injury, 19, 1–13. doi: 10.1080/02699050410001720059
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198. doi: 10.1016/0022-3956(75)90026-6
Foster, E. R., Rose, N. S., McDaniel, M. A., & Rendell, P. G. (2013). Prospective memory in Parkinson disease during a Virtual Week: Effects of both prospective and retrospective demands. Neuropsychology, 27, 170–181. doi: 10.1037/a0031946
Griffiths, A., Hill, R., Morgan, C., Rendell, P. G., Karimi, K., Wanagaratne, S., & Curran, H. V. (2012). Prospective memory and future event simulation in individuals with alcohol dependence. Addiction, 107(10), 1809–1816. doi: 10.1111/j.1360-0443.2012.03941.x
Groot, Y. C. T., Wilson, B. A., Evans, J., & Watson, P. (2002). Prospective memory functioning in people with and without brain injury. Journal of the International Neuropsychological Society, 8, 645–654. doi: 10.1017/S1355617702801321
Henry, J. D., MacLeod, M. S., Phillips, L. H., & Crawford, J. R. (2004). A meta-analytic review of prospective memory and aging. Psychology and Aging, 19, 27–39. doi: 10.1037/0882-7974.19.1.27
Henry, J. D., Rendell, P. G., Kliegel, M., & Altgassen, M. (2007). Prospective memory in schizophrenia: Primary or secondary impairment? Schizophrenia Research, 95, 179–185. doi: 10.1016/j.schres.2007.06.003
Henry, J. D., Rendell, P. G., Phillips, L. H., Dunlop, L., & Kliegel, M. (2012). Prospective memory reminders: A laboratory investigation of initiation source and age effects. Quarterly Journal of Experimental Psychology, 65, 1274–1287. doi: 10.1080/17470218.2011.651091
Henry, J. D., Rendell, P. G., Rogers, P., Altgassen, M., & Kliegel, M. (2011). Prospective memory in schizophrenia and schizotypy. Cognitive Neuropsychiatry, 17, 133–150. doi: 10.1080/13546805.2011.581536
Kardiasmenos, K. S., Clawson, D. M., Wilken, J. A., & Wallin, M. T. (2008). Prospective memory and the efficacy of a memory strategy in multiple sclerosis. Neuropsychology, 22, 746–754. doi: 10.1037/a0013211
Katai, S., Maruyama, T., Hashimoto, T., & Ikeda, S. (2003). Event based and time based prospective memory in Parkinson's disease. Journal of Neurology, Neurosurgery, and Psychiatry, 74, 704–709. doi: 10.1136/jnnp.74.6.704

Kim, H. J., Craik, F. I. M., Luo, L., & Ween, J. E. (2009). Impairments in prospective and retrospective memory following stroke. Neurocase, 15, 145–156. doi: 10.1080/13554790802709039
Kixmiller, J. S. (2002). Evaluation of prospective memory training for individuals with mild Alzheimer's disease. Brain and Cognition, 49, 237–241.
Kliegel, M., Altgassen, M., Hering, A., & Rose, N. S. (2011). A process-model based approach to prospective memory impairment in Parkinson's disease. Neuropsychologia, 49, 2166–2177. doi: 10.1016/j.neuropsychologia.2011.01.024
Kliegel, M., Eschen, A., & Thöne-Otto, A. I. T. (2005). Planning and realization of complex intentions in traumatic brain injury and normal aging. Brain and Cognition, 56, 43–54. doi: 10.1016/j.bandc.2004.05.005
Kliegel, M., Jäger, T., Altgassen, M., & Shum, D. (2008). Clinical neuropsychology of prospective memory. In M. Kliegel, M. A. McDaniel, & G. O. Einstein (Eds.), Prospective memory: Cognitive, neuroscience, developmental and applied perspectives (pp. 283–302). Mahwah, NJ: Lawrence Erlbaum.
Kliegel, M., Jäger, T., & Phillips, L. (2008). Adult age differences in event-based prospective memory: A meta-analysis on the role of focal versus non-focal cues. Psychology and Aging, 23, 203–208. doi: 10.1037/0882-7974.23.1.203
Kliegel, M., Martin, M., McDaniel, M. A., & Einstein, G. O. (2002). Complex prospective memory and executive control of working memory: A process model. Psychologische Beiträge, 4, 303–318.
Kliegel, M., McDaniel, M. A., & Einstein, G. O. (2008). Prospective memory: Cognitive, neuroscience, developmental and applied perspectives. Mahwah, NJ: Lawrence Erlbaum.
Knight, R. G., & Titov, N. (2009). Use of virtual reality tasks to assess prospective memory: Applicability and evidence. Brain Impairment, 10, 3–13. doi: 10.1375/brim.10.1.3
Kvavilashvili, L., & Fisher, L. (2007). Is time-based prospective remembering mediated by self-initiated rehearsals? Role of cues, ongoing activity, age and motivation. Journal of Experimental Psychology: General, 136, 112–132. doi: 10.1037/0096-3445.136.1.112
Leitz, J. R., Morgan, C. J. A., Bisby, J. A., Rendell, P. G., & Curran, H. V. (2009). Global impairments of prospective memory following acute alcohol. Psychopharmacology, 205, 379–387. doi: 10.1007/s00213-009-1546-z
Margrett, J. A., Reese-Melancon, C., & Rendell, P. G. (2011). Examining collaborative dialogue among couples: A window into prospective memory processes. Zeitschrift für Psychologie/Journal of Psychology, 219, 100–107. doi: 10.1027/2151-2604/a000054
Mathias, J. L., & Mansfield, K. M. (2005). Prospective and declarative memory problems following moderate and severe traumatic brain injury. Brain Injury, 19, 271–282. doi: 10.1080/02699050400005028
Maylor, E. A. (2008). Commentary: Prospective memory through the ages. In M. Kliegel, M. A. McDaniel, & G. O. Einstein (Eds.), Prospective memory: Cognitive, neuroscience, developmental and applied perspectives (pp. 217–233). Mahwah, NJ: Lawrence Erlbaum.
Maylor, E. A., Smith, G., Della Sala, S., & Logie, R. H. (2002). Prospective and retrospective memory in normal aging and dementia: An experimental study. Memory and Cognition, 30, 871–884. doi: 10.3758/BF03195773
McDaniel, M. A., & Einstein, G. O. (1993). The importance of cue familiarity and cue distinctiveness in prospective memory. Memory, 1, 23–41. doi: 10.1080/09658219308258223
McDaniel, M. A., & Einstein, G. O. (2000). Strategic and automatic processes in prospective memory retrieval: A multiprocess framework. Applied Cognitive Psychology (Special issue: New perspectives in prospective memory), 14, S127–S144. doi: 10.1002/acp.775
McDaniel, M. A., & Einstein, G. O. (2007). Prospective memory: An overview and synthesis of an emerging field. Thousand Oaks, CA: Sage.

McDaniel, M. A., Einstein, G. O., & Rendell, P. G. (2008). The puzzle of inconsistent age-related declines in prospective memory: A multiprocess explanation. In M. Kliegel, M. A. McDaniel, & G. O. Einstein (Eds.), Prospective memory: Cognitive, neuroscience, developmental and applied perspectives (pp. 141–160). Mahwah, NJ: Lawrence Erlbaum.
McDaniel, M. A., Guynn, M. J., Glisky, E. L., & Routhieaux, B. C. (1999). Prospective memory: A neuropsychological study. Neuropsychology, 13, 103–110. doi: 10.1037/0894-4105.13.1.103
McFarland, C. P., & Glisky, E. I. (2009). Frontal lobe involvement in a task of time-based prospective memory. Neuropsychologia, 47, 1660–1669. doi: 10.1016/j.neuropsychologia.2009.02.023
Mills, V., Kixmiller, J. S., Gillespie, A., Allard, J., Flynn, E., Bowman, A., & Brawn, C. M. (1997). The correspondence between the Rivermead Behavioural Memory Test and ecological prospective memory. Brain and Cognition, 35, 322–325.
Mioni, G., McClintock, S. M., & Stablum, F. (2014). Understanding, assessing and treating prospective memory dysfunctions in traumatic brain injury patients. In F. Sadaka (Ed.), Traumatic brain injury (Chapter 18). New York: InTech. doi: 10.5772/57307
Mioni, G., Rendell, P. G., Henry, J., Cantagallo, A., & Stablum, F. (2013). An investigation of prospective memory functions in people with traumatic brain injury using Virtual Week. Journal of Clinical and Experimental Neuropsychology, 35, 617–630. doi: 10.1080/13803395.2013.804036
Mioni, G., & Stablum, F. (2014). Monitoring behaviour in a time-based prospective memory task: The involvement of executive functions and time perception. Memory, 22, 536–552. doi: 10.1080/09658211.2013.801987
Mioni, G., Stablum, F., McClintock, S. M., & Cantagallo, A. (2012). Time-based prospective memory in severe traumatic brain injury patients: The involvement of executive functions and time perception. Journal of the International Neuropsychological Society, 18, 697–705. doi: 10.1017/S1355617712000306
Ozgis, S., Rendell, P. G., & Henry, J. D. (2009). Spaced retrieval significantly improves prospective memory performance of cognitively impaired older adults. Gerontology, 55, 229–232. doi: 10.1159/000163446
Paraskevaides, T., Morgan, C. J. A., Leitz, J. R., Bisby, J. A., Rendell, P. G., & Curran, H. V. (2010). Drinking and future thinking: Acute effects of alcohol on prospective memory and future simulation. Psychopharmacology, 208, 301–308. doi: 10.1007/s00213-009-1731-0
Phillips, L. H., Henry, J. D., & Martin, M. (2008). Adult aging and prospective memory: The importance of ecological validity. In M. Kliegel, M. A. McDaniel, & G. O. Einstein (Eds.), Prospective memory: Cognitive, neuroscience, developmental and applied perspectives (pp. 161–185). Mahwah, NJ: Lawrence Erlbaum.
Radford, K. A., Lah, S., Say, M. J., & Miller, L. A. (2011). Validation of a new measure of prospective memory: The Royal Prince Alfred Prospective Memory Test. The Clinical Neuropsychologist, 25, 127–140. doi: 10.1080/13854046.2010.529463
Raskin, S. (2004). Memory for Intentions Screening Test. Journal of the International Neuropsychological Society, 10(Suppl. 1), 110.
Raskin, S. (2009). Memory for Intentions Screening Test: Psychometric properties and clinical evidence. Brain Impairment, 10, 23–33.
Rendell, P. G., & Craik, F. I. M. (2000). Virtual and Actual Week: Age-related differences in prospective memory. Applied Cognitive Psychology, 14, 43–62. doi: 10.1002/acp.770
Rendell, P. G., & Henry, J. D. (2009). A review of Virtual Week for prospective memory assessment: Clinical implications. Brain Impairment, 10, 14–22. doi: 10.1375/brim.10.1.14
Rendell, P. G., Gray, T. J., Henry, J. D., & Tolan, A. (2007). Prospective memory impairment in 'ecstasy' (MDMA) users. Psychopharmacology, 194, 497–504. doi: 10.1007/s00213-007-0859-z

Rendell, P. G., Henry, J. D., Phillips, L. H., de la Piedad Garcia, X., Booth, P., Phillips, P., & Kliegel, M. (2012). Prospective memory, emotional valence, and multiple sclerosis. Journal of Clinical and Experimental Neuropsychology, 34, 738–749. doi: 10.1080/13803395.2012.670388
Rendell, P. G., Jensen, F., & Henry, J. D. (2007). Prospective memory in multiple sclerosis. Journal of the International Neuropsychological Society, 13, 410–416. doi: 10.1017/S1355617707070579
Rendell, P. G., Mazur, M., & Henry, J. D. (2009). Prospective memory impairment in former users of methamphetamine. Psychopharmacology, 203, 609–616. doi: 10.1007/s00213-008-1408-0
Rendell, P. G., McDaniel, M. A., Forbes, R. D., & Einstein, G. O. (2007). Age-related effects in prospective memory are modulated by ongoing task complexity and relation to target cue. Aging, Neuropsychology, and Cognition, 14, 236–256. doi: 10.1080/13825580600579186
Rendell, P. G., Phillips, L. H., Henry, J. D., Brumby-Rendell, T., de la Piedad Garcia, X., Altgassen, M., & Kliegel, M. (2011). Prospective memory, emotional valence and ageing. Cognition and Emotion, 25, 916–925.
Rose, N. S., Rendell, P. G., McDaniel, M. A., Abele, I., & Kliegel, M. (2010). Age and individual differences in prospective memory during a "Virtual Week": The role of working memory, task regularity and cue focality. Psychology and Aging, 25, 595–605. doi: 10.1037/a0019771
Salthouse, T. A. (2000). Aging and measures of processing speed. Biological Psychology, 54, 35–54.
Shum, D., Fleming, J. M., & Neulinger, K. (2002). Prospective memory and traumatic brain injury: A review. Brain Impairment, 3, 1–16. doi: 10.1375/brim.3.1.1
Shum, D., Levin, H., & Chan, R. C. K. (2011). Prospective memory in patients with closed head injury: A review. Neuropsychologia, 49, 2156–2165. doi: 10.1016/j.neuropsychologia.2011.02.006
Sunderland, A., Harris, J. E., & Baddeley, A. D. (1983). Do laboratory tests predict everyday memory? A neuropsychological study. Journal of Verbal Learning and Verbal Behavior, 22, 341–357. doi: 10.1016/s0022-5371(83)90229-3
Thompson, C., Henry, J. D., Rendell, P. G., Withall, A., & Brodaty, H. (2010). Prospective memory function in mild cognitive impairment and early dementia. Journal of the International Neuropsychological Society, 16, 318–325. doi: 10.1017/S1355617709991354
Torres, A. C. S. (2011). Cognitive effects of video games on old people. International Journal on Disability and Human Development, 10, 55–58.
Trawley, S. L., Law, A. S., Brown, L. A., Niven, E. H., & Logie, R. H. (2013). Prospective memory in a virtual environment: Beneficial effects of cue saliency. Journal of Cognitive Psychology, 26, 39–47. doi: 10.1080/20445911.2013
Trawley, S. L., Law, A., & Logie, R. H. (2011). Multitasking: Multiple, domain-specific cognitive functions in a virtual environment. Memory and Cognition, 39, 1561–1574. doi: 10.3758/s13421-011-0120-1
West, R. (2008). The cognitive neuroscience of prospective memory. In M. Kliegel, M. A. McDaniel, & G. O. Einstein (Eds.), Prospective memory: Cognitive, neuroscience, developmental and applied perspectives (pp. 261–279). Mahwah, NJ: Lawrence Erlbaum.
West, R., McNerney, M. W., & Krauss, I. (2007). Impaired strategic monitoring as the locus of a focal prospective memory deficit: A case study. Neurocase, 13(2), 115–126. doi: 10.1080/13554790701399247
Will, C. M., Rendell, P. G., Ozgis, S., Pierson, J. M., Ong, B., & Henry, J. D. (2009). Cognitively impaired older adults exhibit comparable difficulties on naturalistic and laboratory prospective memory tasks. Applied Cognitive Psychology, 23, 804–812. doi: 10.1002/acp.1514

Wilson, B. A., Cockburn, J., & Baddeley, A. D. (1985). The Rivermead Behavioural Memory Test. London: Pearson Assessment.
Wilson, B. A., Cockburn, J., & Baddeley, A. D. (2003). The Rivermead Behavioural Memory Test (2nd ed.). London: Pearson Assessment.
Wilson, B. A., Emslie, H. C., & Foley, J. A. (2004). A new test of prospective memory: The CAMPROMPT. Journal of the International Neuropsychological Society, 10, 44.
Wilson, B. A., Emslie, H., Watson, J. P., Hawkins, K., & Evans, Y. G. (2005). Cambridge Prospective Memory Test. Oxford: Thames Valley Test Company.
Woods, S. P., Moran, L. M., Dawson, M. S., Carey, C. L., Grant, I., & the HIV Neurobehavioral Research Center (HNRC) Group (2008). Psychometric characteristics of the Memory for Intentions Screening Test. The Clinical Neuropsychologist, 22, 864–878. doi: 10.1080/13854040701595999
Yordanova, J., Kolev, V., Hohnsbein, J., & Falkenstein, M. (2004). Sensorimotor slowing with ageing is mediated by a functional dysregulation of motor-generation processes: Evidence from high-resolution event-related potentials. Brain, 127, 351–362.

