Behavioural Brain Research 264 (2014) 51–63


Research report

Reinstatement of encoding context during recollection: Behavioural and neuroimaging evidence of a double dissociation

Erin I. Skinner a,∗, Michelle Manios b, Jonathan Fugelsang b, Myra A. Fernandes b

a Department of Psychology, Langara College, Vancouver, British Columbia, Canada
b Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada

Highlights

• Participants studied words paired with famous faces or places, or scrambled images.
• Brain activation compared during recollection of words presented alone.
• Recollection was higher for words studied with meaningful visual contexts.
• Double dissociation of brain activation found in the FFA and PPA.
• Results provide strong evidence of cortical reinstatement during recollection.

Article info

Article history: Received 10 December 2013; Received in revised form 19 January 2014; Accepted 25 January 2014; Available online 1 February 2014.

Keywords: Memory; fMRI; Recollection; Familiarity; Context; Cortical reinstatement

Abstract

In both a behavioural and a neuroimaging study, we examined whether memory performance and the pattern of brain activation during a word recognition task differed depending on the type of visual context presented during encoding. Participants were presented with a list of words, each paired with a picture of a famous face, a famous scene, or a scrambled image, to study for a later recognition test. During the recognition test, participants made ‘remember’, ‘know’, or ‘new’ responses to words presented alone. In the neuroimaging experiment, the retrieval phase was scanned using event-related fMRI and brain activation was compared for remember and know responses given to words studied with famous faces and famous scenes. Behaviourally, in both studies, memory was enhanced if initial encoding was accompanied by a meaningful image (famous face or famous scene) relative to a scrambled image, which contained no semantic information. At the neural level, whole-brain analysis showed a double dissociation during recollection: BOLD signal in the right fusiform gyrus (within the Fusiform Face Area) was higher for remember responses given to words studied with famous faces compared to famous scenes, and BOLD signal in the left parahippocampus (within the Parahippocampal Place Area) was higher for words studied with famous scenes relative to famous faces. No such differential activation was found for know responses. Results suggest that participants spontaneously integrate items and meaningful contexts at encoding, improving subsequent item recollection, and that context-specific brain regions implicated during encoding are recruited during retrieval for the recollective, but not the familiarity, memory process. © 2014 Elsevier B.V. All rights reserved.

Research on context-dependent memory and transfer-appropriate processing has provided strong evidence for an overlap in the memory processes engaged at encoding and retrieval (for examples see [26] and [39]).

∗ Corresponding author at: Department of Psychology, 100W 49th Avenue, Langara College, Vancouver, British Columbia, V5Y 2Z6, Canada. Tel.: +604 323 5248. E-mail address: [email protected] (E.I. Skinner).
http://dx.doi.org/10.1016/j.bbr.2014.01.033

These ideas have been extended to the biological level, articulated in the ‘cortical reinstatement hypothesis’. According to popular forms of this model, cues provided at retrieval activate patterns of activity in the hippocampus, which in turn reinstate the pattern of activity produced at encoding (see, for example, [2]). This suggests that recollection involves both content-independent brain regions, including the hippocampus, and content-specific brain regions that vary depending on the specific processes engaged during encoding [21,33]. Studies using functional neuroimaging have provided strong support for the reinstatement hypothesis, demonstrating overlap between the brain regions engaged at encoding and retrieval [18,27,28,42,47]. For example, [27] showed that the recognition of visual words,


initially paired with sounds at encoding, activated primary and secondary auditory cortex at retrieval. These results suggest that context information is stored in brain regions involved in the original processing of the context and that these cortical patterns of activity are reactivated at retrieval (for a review of such data see [5]). In the current study we tested this hypothesis, using both behavioural and neuroimaging data, by examining whether memory performance, and a priori predicted regions of brain activation during a word recognition task, differed depending on the type of visual context presented during encoding.

Related research has incorporated dual-process theories of memory to determine whether the reinstatement of context-specific brain regions is found exclusively during recollection [20,45]. Dual-process theories suggest that recognition may involve the retrieval of detailed contextual information about the learning episode, known as recollection, or a more nonspecific sense that an item has been previously encountered, known as familiarity [12,51]. At the behavioural level, research has shown that recollection and familiarity are differentially affected by divided attention at encoding, levels of processing, and speeded responding [51]. In addition, neuropsychological studies suggest that damage to the hippocampus impairs recollection and spares familiarity, whereas damage to the surrounding temporal lobe has been shown to impair familiarity and spare recollection [4,53]. Neuroimaging data provide converging evidence of this double dissociation [25], and additional research suggests that recollection and familiarity also differ in the frontal and parietal brain regions recruited [35,43].

If the reactivation of context-specific brain regions during retrieval represents the reinstatement of encoding context, such activation should be found during recollection, but not familiarity, as only the former process is believed to involve the retrieval of rich contextual detail. To test this hypothesis, Wheeler and Buckner [45] employed the remember-know paradigm, originally developed by [41]. In this procedure, participants study a list of items and, during a recognition test, are asked to state whether they ‘remember’ the item (i.e., if they can recall specific details about it from the study episode), whether they ‘know’ an item (i.e., if it is familiar but lacks specific details from the study episode), or whether they deem the item to be ‘new’, and not from the study list. Remember responses are believed to align with recollective memory processes, whereas know responses support familiarity-based recognition [50]. In Wheeler and Buckner’s [45] study, participants studied words (e.g., dog) with accompanying related pictures (a picture of a dog), and on a later scanned recognition test using event-related fMRI, made remember, know, or new responses to the words presented alone. They found that activity in a region of the left inferior temporal cortex, known to be activated during the perception of visual information based on a previous experiment, was higher for remember than know responses, supporting their prediction. Additional research suggests that such reinstatement effects can be differentiated from content-insensitive regions activated during recollection, known as the ‘core recollection network’, including the hippocampus, which are activated regardless of the encoding context [21].
In our recent work [38], we used a recognition paradigm that manipulates the meaningfulness of the context information present at encoding to provide additional evidence that context-sensitive brain regions are selectively engaged during recollection. In this paradigm, participants view target words presented with visual context information rich in meaning (such as the picture of a face) or low in meaning (such as a scrambled face) and subsequently perform a remember-know recognition test. Our behavioural work has shown that memory is higher for words studied with pictures of intact faces than for words studied with pictures of scrambled or inverted faces, and that these effects are specific to remember, and not know, responses [37].

We have additionally found that while meaningful encoding contexts increase subsequent recollection in younger adults, older adults fail to show this recollection benefit unless instructed to make an arbitrary link between the context and study word [36]. We suggested that younger adults spontaneously use elaborative processes to bind item and meaningful contexts, creating rich memory traces later retrieved using recollective memory processes.

We used this paradigm in conjunction with fMRI to provide strong support for the cortical reinstatement hypothesis [38]. As the processing of face information is known to elicit activation in a region of the fusiform gyrus known as the fusiform face area (FFA; [23,29]), we were able to define, a priori, the specific region of the brain that should be activated during recollection. We found that activation in the right fusiform gyrus was higher for remember responses given to words studied with faces than for remember responses given to words studied with scrambled faces; a comparison of know responses showed no such difference. In addition, a regression analysis demonstrated that activation in the right fusiform gyrus increased as the relative recollection benefit for words studied with faces, as compared to scrambled faces, increased. These results further demonstrate that the brain regions used to process context information at encoding are recruited at retrieval, and extend this theory by suggesting that the extent of activation in context-specific brain regions is related to recollection performance. They additionally bolster our suggestion that participants spontaneously bind meaningful context information with study words at encoding and that this context information is retrieved exclusively during recollection.

In the current experiments, we extended this paradigm to investigate recollection benefits associated with the reactivation of context-specific brain regions. In both a companion behavioural and an event-related fMRI study, participants studied words paired with either famous faces, famous scenes, or scrambled images and made remember, know, or new judgments to words presented alone. Famous face and famous scene contextual stimuli were chosen for two reasons. First, by choosing two different classes of stimuli that activate well-defined brain regions we can extend our previous work, which showed only a single dissociation. Such double dissociations of context reactivation have been shown in previous work. For example, [20] found that brain activation at retrieval was higher in left occipital and anterior fusiform regions when study words were integrated with scene information at encoding, whereas activation was higher in the ventromedial frontal cortex when study words were integrated into a sentence [21]. Similarly, Woodruff and colleagues [47] showed a double dissociation in the lateral and anterior fusiform regions depending on whether recognized words were studied with visual word or picture information. However, by using very specific contextual stimuli, pictures of famous faces and famous scenes, we can show a double dissociation using extremely well-defined context-specific brain regions. Research suggests that face and scene information are processed by distinct neural regions, known as the fusiform face area (FFA) and parahippocampal place area (PPA; [29]).
In addition, the visual imagery of famous face and scene information is known to elicit activation in these distinct neural regions, suggesting that such stimuli are particularly apt for demonstrating a powerful double dissociation of reinstatement effects. [19] used face and scene stimuli to demonstrate that stimulus-specific temporal and frontal lobe activation at encoding is related to subsequent memory performance; however, that study did not examine activation at retrieval. Following this approach, we believe that by using context-specific brain regions identified a priori, we can make specific inferences regarding the nature of observed context-specific reactivation at retrieval, which will augment the findings of studies examining more global patterns of brain activation (as in [21]).


Second, we examined whether contexts that are more familiar would enhance the recollective benefit reported in our prior work. Although our prior behavioural work showed recollection was higher for words studied with novel faces than with scrambled faces, we failed to show this behavioural benefit within the scanner. We suggested that the novelty of the fMRI environment interfered with the use of elaborative encoding processes in some participants, lowering recollection. Previous research suggests that we are better able to form associations between item and context information if the stimuli have a pre-existing memory representation, and so we used this to our advantage in the current study. For example, [32] found that the reinstatement of encoding context influences memory for famous faces but not for unfamiliar faces, suggesting that it is easier to associate item and context information for familiar face stimuli. As such, we reasoned that participants would be more likely to show enhanced recollection for studied words if the study context consisted of highly familiar information, such as famous face and famous scene stimuli. In addition, in all of our previous behavioural work, meaningful contextual stimuli were consistently operationally defined as pictures of faces. By including pictures of famous scenes we can replicate and extend our previous findings using a different form of meaningful context, thereby demonstrating that it is meaningful context at encoding, not face contexts per se, that produces recollection benefits.

The current study consists of both a behavioural companion and a neuroimaging experiment. In both experiments, participants studied words presented visually, accompanied by pictures of famous faces, pictures of famous scenes, or scrambled images (the famous face and scene pictures scrambled). Thus, the context information was either high or low in meaningful content, while the basic visual features (luminance and contrast) of the context were kept constant. For the subsequent recognition test, participants gave a remember, know, or new response to words presented alone. This study is novel in that it includes both behavioural and neuroimaging evidence to examine how famous face and famous scene contextual stimuli affect subsequent recollection and familiarity. We hypothesized that participants would spontaneously engage in elaborative processes that bind meaningful context information to the study word at encoding, building context-rich memory traces retrieved through recollective processes. In addition, if such binding occurs, and the context information is subsequently reactivated during the recollective, but not familiarity, memory process, we should find that cortical reinstatement of context-specific brain regions occurs during remember, but not know, responses.

1. Companion behavioural experiment

As suggested by our previous work [37], providing visual contexts rich in meaningful content at study may promote the use of elaborative processes that bind item and context, thereby enhancing recollection at retrieval. In our previous work, we used face stimuli as a form of highly meaningful context as they contain both semantic and perceptual complexity. However, there is evidence to suggest that individuals develop specialized face processing skills that may not apply to other meaningful stimuli [14]. It is thus important to demonstrate that meaningful context at encoding improves subsequent recollection using contexts other than pictures of faces. The behavioural companion experiment was performed outside of the scanner to determine whether words studied with context information rich in meaning, either famous faces or famous scenes, would be more likely to be recognized using recollective memory processes than words studied with context information low in meaning (scrambled images). By using famous faces and famous scenes we were able to use two forms of meaningful encoding context that contain both semantic and perceptual complexity while


varying in their specific content. In addition, using famous faces and famous scenes may strengthen the recollective memory benefit, as participants may be more likely to form associations between item and context information if the context has a pre-existing memory representation. We hypothesized that, despite equating for luminance and contrast levels of images, the rate of remember responses would be higher for words studied with famous faces and famous scenes than for words studied with scrambled images. Know responses, which do not require retrieval of contextual details, were expected to be unaffected by the manipulation of context.

1.1. Method

1.1.1. Participants
Eighteen undergraduate students enrolled in psychology classes at the University of Waterloo (9 females), ranging in age from 18 to 29 years (M = 20.67, SD = 2.89), took part in the experiment and received course credit for participating. All participants were fluent English speakers and reported normal or corrected-to-normal vision and hearing. Years of education ranged from 13 to 16 years (M = 14.00, SD = 1.03). The National Adult Reading Test-Revised (NART-R; Blair & Spreen, 1989; Nelson, 1982) was administered to estimate full scale IQ (FSIQ), based on the number of errors in pronunciation during vocabulary reading. Participants had a mean FSIQ score of 110.34 (SD = 8.79). Participants also completed the Digit Span Forward and Backward tests (Wechsler, 1997), obtaining mean scores of 8.61 (SD = 2.53) and 8.22 (SD = 2.29), respectively.

1.1.2. Materials
1.1.2.1. Words. Stimuli for the memory tasks were medium- to high-frequency words chosen from CELEX, a lexical database [3]. Three hundred and thirty words were randomly chosen for use in three different study-test sessions. All test lists were equated on letter length (M = 6.31) and word frequency (M = 486 occurrences per million; [3]). An additional 24-item word list, with the same characteristics as the words in the experimental session, was used in the practice session.

1.1.2.2. Images. One hundred and thirty-two pictures were obtained from the internet, consisting of 66 famous faces (22 males) and 66 famous scenes. Each was displayed in colour, set on a white background, in a standard size of 400 × 384 pixels. One hundred and thirty-two scrambled images were created in Matlab 7.06 software by randomizing the pixels in each of the 66 face and 66 scene images; thus, for each famous face and for each famous scene there was a corresponding scrambled image. This procedure altered the spatial frequency of the image while preserving luminance and contrast. All photos of faces were presented in front view and showed head and shoulders only.

Normative data on fame status for each face and scene were obtained in a pilot study conducted with 9 naïve participants. Images were displayed in colour on white paper with 2 rows of 3 images (6 images total per page), with a mix of faces and scenes per page. The first 3 participants were presented with 84 famous scenes and 66 famous faces. Twenty-eight scene images were removed: 17 because of 0% identification, 1 because it was a duplicate, and 10 because they received only 1 correct identification (33%). Additionally, 6 face images were replaced with new images; 1 was a duplicate. The resulting 132 images (66 faces, 66 scenes) were presented to 6 new pilot participants. All face images had a recognition rate of 33% or greater, and all scene images had a recognition rate of 16% or greater. The overall mean recognition rate for these 6 participants was 78.96 (SD = 17.10) for famous faces and 69.23 (SD = 22.12) for famous scenes. An additional 5 famous face images (3 males and 2 females), 5 famous scene images, and 5 scrambled images were chosen for use in the practice session.
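The scrambled context images described above were created in Matlab; as a concrete illustration of the same manipulation, a minimal Python sketch is given below (the file names are hypothetical, and the exact Matlab routine used by the authors is not specified in the text). Randomly permuting pixel locations preserves the pixel intensity distribution, and hence luminance and contrast, while destroying the spatial structure that carries semantic content.

```python
import numpy as np
from PIL import Image

def scramble_image(path_in: str, path_out: str, seed: int = 0) -> None:
    """Randomly permute the pixel positions of an image.

    Because every pixel is kept (only its location changes), the overall
    luminance and contrast of the image are preserved while its spatial
    structure, and hence any semantic content, is destroyed.
    """
    rng = np.random.default_rng(seed)
    img = np.asarray(Image.open(path_in).convert("RGB"))
    h, w, c = img.shape
    flat = img.reshape(-1, c)                      # one row per pixel
    shuffled = flat[rng.permutation(len(flat))]    # random new positions
    Image.fromarray(shuffled.reshape(h, w, c)).save(path_out)

# Hypothetical usage: one scrambled counterpart per famous face or scene image.
# scramble_image("face_01.png", "face_01_scrambled.png")
```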


1.1.3. Procedure
Three different study-test list combinations were created such that each word was paired with either a picture of a famous face, a picture of a famous scene, or a scrambled image, or served as a lure, across lists, counterbalanced across participants. The order of presentation of the word lists for the three study-test sessions was also counterbalanced across participants. This Task Order factor was included in the analysis to ensure that memory performance did not differ depending on the specific item-context pairs found within each list. Stimulus presentation and response recording were controlled by an IBM PC, using E-Prime v.1.2 software (Psychology Software Tools Inc., Pittsburgh, PA).

Participants were randomly assigned to a task order, were tested individually, and completed the experiment in approximately 1 h. All participants began by performing the NART-R and the Digit Span Forward and Backward tests. Participants were then given a short practice block consisting of 15 study trials in which 5 famous face image-word, 5 famous scene image-word, and 5 scrambled image-word items were presented, in random order, separated by a fixation cross, using the same timings and procedure as in the experimental trials (described below). Subsequently, Remember-Know test response instructions were given (see below), and participants made responses to 9 words on the practice recognition test (2 words studied with famous faces, 2 words studied with famous scenes, 1 word studied with a scrambled image, and 4 new words), presented in random order, centrally, on the computer screen.

Following practice, participants completed the three study-test sessions, designed to increase the number of trials associated with each encoding trial type (words paired with a famous face, a famous scene, or a scrambled image) while lessening the memory demands of each individual recognition task. For each of the 3 study phases, a trial began with a picture of a famous face, a famous scene (context-rich conditions), or a scrambled image (context-weak condition) appearing on the screen for 1000 ms, centered in the upper area of the screen (screen coordinates: X = 324, Y = 180). A word presented in 28-point bold Arial font then appeared directly below the picture for 2000 ms (X = 324, Y = 379), after which both the picture and the word disappeared, followed by a 500-ms fixation cross presented centrally. In each of the three study phases, 66 trials were presented (22 famous face-word, 22 famous scene-word, and 22 scrambled-word), along with 22 central fixation crosses, with trial type randomized. All stimuli were presented in a fully illuminated room on a 17-in. (43.18 cm) computer screen, and the viewing angles of the picture and word stimuli were approximately 16.6° and 5.7°, respectively. Participants were asked to memorize the words for an upcoming memory test. To ensure that participants encoded the context (famous face, famous scene, or scrambled image) during study, they were also asked to identify the image on each study trial as a face image, a scene image, or a scrambled image by making a keypress. Participants were not provided specific instructions on how to process the contextual stimuli. Each trial lasted 3.5 s, and participants were asked to make their classification response during this time.
After each study phase, participants counted backwards by threes from a 3-digit number (e.g., 123) for 30 s to reduce recency effects. During the ensuing test phase, 110 words (22 studied with famous faces, 22 studied with famous scenes, 22 studied with scrambled images, and 44 lures) were presented in a randomized order. These were presented in the centre of the screen in the same font and size as at study. Participants were asked to make a Remember, Know, or New response by pressing one of three keys (1, 2 or 3 on the numeric keyboard). The word remained on the screen for

3750 ms, followed by a fixation cross for 250 ms. Participants could make their response anytime within the 4000 ms of each recognition trial. Participants were given a short break (approximately 2 to 5 min) between each study-test session.

Test instructions for the Remember-Know task were as follows. Participants were told that they would see words sequentially, and that some but not others were from the study list. If they recognized a word as having been on the study list, they had two options, ‘R’ or ‘K’. They were told to report ‘R’ for Remember, by pressing the ‘1’ key, if they could recall specific details associating the word with the study episode. They were given examples of such details: remembering an image, thought, or feeling they had associated with the word during study, the temporal order, or the picture presented with the word during study. If, however, they did not recall a specific study detail associated with the word, they were told to report ‘K’ for Know, by pressing the ‘2’ key. To clarify the ‘K’ memory response, participants were also given the example of meeting someone on the street whom they knew they had met before, but not being able to determine the specific instance in which they had met them. Participants were instructed to respond ‘N’ for New, by pressing the ‘3’ key, for words they believed were not presented on the study list. Participants were then asked if they understood the distinction between ‘R’ and ‘K’ responses and, after the practice session, were asked to provide examples of details accompanying an R response, in order to ensure that they understood the difference between ‘R’ and ‘K’ and were not responding on the basis of response confidence.

Following the experiment, participants were shown the famous faces and famous scenes presented in the study and were asked to identify each famous face or famous scene. Stimuli were presented in colour, on paper, in flip chart format, with six images per 8.5 × 11 sheet of paper, and participants handwrote a label for each image. The labels were then coded for accuracy and a proportion correct was calculated for the famous face and famous scene images.

1.2. Results

1.2.1. Identification task performance
The mean identification accuracy, measured as hit rate minus false alarm rate, for the images presented at encoding was .95 (SD = .04) for pictures of famous faces, .93 (SD = .05) for pictures of famous scenes, and .93 (SD = .08) for scrambled images. A repeated measures ANOVA revealed no main effect of Context on identification accuracy, F(2,34) = 1.53, MSE = .00, p > .05.

Although speed of responding was not emphasized during the identification task, we nonetheless examined these data. The main effect of Context was significant, F(2,34) = 27.83, MSE = 10579.22, p < .01. Simple effects tests showed that identification of famous face (M = 1110.33, SD = 446.66) and famous scene (M = 1108.72, SD = 408.36) images was slower than for scrambled images (M = 888.00, SD = 354.31), F(1,17) = 33.39, MSE = 8897788.00, and F(1,17) = 32.211, MSE = 876929.39, ps < .001, respectively. However, there was no difference in response time to identify famous faces and famous scenes, F(1,17) = .01, MSE = 46.72, p > .05.

1.2.2. Memory task performance
1.2.2.1. Accuracy. Table 1 shows the mean overall accuracy score for each memory measure and condition. We analyzed recognition accuracy, measured as number of hits/22 minus number of false alarms/44, for each word type.
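A minimal sketch of this corrected-accuracy scoring (hits out of 22 studied words per condition, false alarms out of 44 lures); the response counts in the usage lines are hypothetical.

```python
def corrected_accuracy(hits: int, false_alarms: int,
                       n_old: int = 22, n_lures: int = 44) -> float:
    """Hit rate minus false-alarm rate, as used for overall, Remember and
    Know accuracy (computed separately for each study condition)."""
    return hits / n_old - false_alarms / n_lures

# Hypothetical counts for one participant, words studied with famous faces:
overall = corrected_accuracy(hits=16, false_alarms=6)    # ~ .59
remember = corrected_accuracy(hits=11, false_alarms=2)   # ~ .45
know = corrected_accuracy(hits=5, false_alarms=4)        # ~ .14
```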
Data were analyzed in a 3 (Word type: encoded with a famous face, famous scene, or scrambled image) × 3 (Task order) ANOVA. There was a main effect of Word type, F(2,30) = 23.79, MSE = .01, p < .001. Simple effects contrasts showed that accuracy was higher for words studied with famous faces and with famous scenes relative to words studied with scrambled images, F(1,15) = 31.51, MSE = .70, p < .001, and F(1,15) = 16.37, MSE = .26, p < .005, respectively.


Table 1
Mean memory performance and response time in milliseconds for words studied with famous faces, famous scenes and scrambled images (standard deviations in parentheses).

Experiment and condition    Overall accuracy   Remember accuracy   Know accuracy   IRK familiarity   Remember RT   Know RT

Companion experiment
  Famous faces              .59 (.21)          .43 (.18)           .15 (.15)       .41 (.20)         1312 (356)    1680 (435)
  Famous scenes             .51 (.16)          .33 (.17)           .18 (.16)       .35 (.15)         1370 (427)    1648 (355)
  Scrambled                 .39 (.20)          .21 (.19)           .18 (.14)       .29 (.17)         1403 (367)    1598 (315)

fMRI experiment
  Famous faces              .47 (.16)          .37 (.18)           .10 (.15)       .26 (.15)         1300 (340)    1714 (389)
  Famous scenes             .45 (.13)          .30 (.16)           .15 (.14)       .29 (.11)         1342 (440)    1707 (371)
  Scrambled                 .40 (.15)          .25 (.15)           .15 (.17)       .25 (.15)         1344 (377)    1671 (339)

Note: Overall accuracy was calculated as hit rate minus false alarm rate, Remember accuracy as Remember hit rate minus Remember false alarm rate, and Know accuracy as Know hit rate minus Know false alarm rate. Response times are for correct responses only. Familiarity was calculated using the independent remember-know (IRK) procedure.

Overall accuracy was also higher for words studied with famous faces than with famous scenes, F(1,15) = 15.87, MSE = .10, p < .005. There was no effect of Task order and no Word type × Task order interaction.

We then analyzed accuracy of Remember responses (number of correct Remember responses/22 minus number of false Remember responses/44, for each word type) and Know responses (number of correct Know responses/22 minus number of false Know responses/44, for each word type) in 3 × 3 repeated measures ANOVAs (see Table 1 for means). For Remember responses, there was a main effect of Word type, F(2,30) = 24.20, MSE = .01, p < .001. Simple effects contrasts showed that Remember accuracy was higher for words studied with famous faces and famous scenes compared to words studied with scrambled images, F(1,15) = 30.31, MSE = .87, p < .001, and F(1,15) = 27.22, MSE = .23, p < .001, respectively. Remember accuracy was also higher for words studied with faces than for words studied with scenes, F(1,15) = 12.13, MSE = .87, p < .005. There was no effect of Task order and no Word type × Task order interaction. For Know responses there was no main effect of Word type, F(2,30) = 0.97, MSE = .01, p > .05, and no significant interactions.

We additionally calculated independent remember-know (IRK) measures of familiarity (see footnote 1). There was a main effect of Word type, F(2,30) = 7.63, MSE = .07, p < .05, and simple effects contrasts found that IRK familiarity was higher for words paired with images of famous faces at encoding as compared to scrambled images, F(1,15) = 20.13, MSE = .26, p < .001. Familiarity was also higher for words studied with famous faces than famous scenes, F(1,15) = 4.56, MSE = .07, p = .05. Familiarity did not differ for words studied with famous scenes or scrambled images, F(1,15) = 2.54, MSE = .06, p > .05.

1.2.2.2. Reaction time. Although RT was not emphasized at retrieval, we examined these data in two separate 3 × 3 repeated measures ANOVAs for correct Remember and Know responses. There was no effect of Word type for either Remember, F(2,30) = 1.72, MSE = 22,354, p > .05, or Know, F(2,30) = 1.25, MSE = 24,252, p > .05, responses, and no effect of Task order, nor any interactions.

1.2.3. End task performance
We examined the mean hit rate for correct identification of the images used in our study. There was no significant difference in identification of famous faces (M = .67, SD = .24) and famous scenes (M = .57, SD = .22), t(17) = 2.07, p > .05. We additionally correlated the proportion of correct identifications with overall, remember, and know accuracy for words studied with each stimulus type.

1 Based on the assumption that recollection and familiarity are independent processes, one must divide the number of know responses by the opportunities available to make a know response (i.e., 1 − recollection) to gain an accurate measure of familiarity (see [52] for further details).
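For concreteness, the correction described in this footnote amounts to the following; the rates in the usage example are hypothetical.

```python
def irk_familiarity(know_rate: float, remember_rate: float) -> float:
    """Independent remember-know (IRK) estimate of familiarity: know responses
    scaled by the opportunity to give them (1 - recollection)."""
    return know_rate / (1.0 - remember_rate)

# e.g., a Know rate of .15 when the Remember rate is .50
print(irk_familiarity(0.15, 0.50))  # 0.30
```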

Remember accuracy for words studied with famous faces was not significantly correlated with later face identification, r(16) = .04; however, the correlations between famous face end-task identification and both Know accuracy, r(16) = .71, p = .001, and overall accuracy, r(16) = .55, p < .05, for words studied with famous faces were significant. In addition, the correlation between end-task identification of famous scenes and overall accuracy for words viewed with famous scenes was significant, r(16) = .61, p < .01. The correlations between end-task identification of famous scenes and Remember accuracy, r(16) = .27, and Know accuracy, r(16) = .32, for words studied with famous scenes were not significant.
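These end-task correlations are ordinary Pearson correlations computed across participants; a minimal sketch with synthetic values (not the study's data) is shown below.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# One value per participant (n = 18): proportion of famous faces identified on
# the end task, and Know accuracy for words studied with famous faces.
face_identification = rng.uniform(0.4, 1.0, size=18)
know_accuracy_faces = rng.uniform(0.0, 0.4, size=18)

r, p = pearsonr(face_identification, know_accuracy_faces)
print(f"r({len(face_identification) - 2}) = {r:.2f}, p = {p:.3f}")
```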

1.3. Discussion

The results show that memory performance, and recollection in particular, benefits when words are presented with pictures of famous faces or famous scenes, as compared to scrambled images. This supports our hypothesis that recollection increases when items are encoded with a context high in meaningful content. These results replicate and extend our previous work by showing that recollection benefits occur with different forms of meaningful context information, specifically famous scenes. The findings of this study and our previous work converge on the notion that participants use a strategic process in which meaningful context information is bound to item information at encoding, developing rich memory traces. These rich memory traces are subsequently retrieved using the recollective memory process. Providing participants with a meaningful framework in which to process item information, through the provision of external context information, changes the way in which item information is subsequently remembered. That this occurred without any specific experimental instruction to bind the word and context information suggests that participants use these strategies spontaneously. Importantly, our paradigm shows that one does not need to re-present the context at retrieval to observe enhanced recollection.

In this study, we found that IRK measures of familiarity were higher for words studied with famous faces as compared to scrambled images. This was an unexpected finding, as our previous work has shown that even when recollection and familiarity are considered independent processes, familiarity is unaffected by the manipulation of study context. There is some controversy regarding how the remember-know paradigm maps onto the recollective and familiarity memory processes. Some models suggest that remember and know responses are direct representations of recollection and familiarity [13]; if so, our study demonstrates a change in recollection, not familiarity. In contrast, [17] has proposed that recollection and familiarity represent independent processes, and if this model more accurately represents how remember and know


responses reflect the dual recognition processes, our study shows a change in both memory processes. Alternatively, other models have suggested that remember and know responses do not represent different forms of memory retrieval but rather different levels of confidence based on a single memory process [6,7], or that responses are based on a strength signal incorporating both recollection and familiarity processes [46]. While it was not a goal of our study to differentiate between these models, this finding may lend some support to models based on signal strength. However, we emphasize that this is an aberrant finding not shown in our previous work (nor was it found in the fMRI study reported below). In addition, we did not find that IRK familiarity was higher for words studied with famous scenes than scrambled images. Future studies are thus needed before any definitive conclusions can be drawn.

We used famous faces and famous scenes in this study, rather than unfamiliar faces or scenes, because we hypothesized that the recollection benefits observed would be higher if the contextual stimuli had pre-existing memory representations. In order to investigate this hypothesis, we calculated the size of the effect (Cohen's d) for remember accuracy and compared it to our previous work. The effect size comparing remember accuracy for words studied with faces as compared to scrambled images in our previous work [37] was d = 0.54. In the current study, the effect size comparing remember accuracy for words studied with famous faces and scrambled images was much larger, d = 1.31. Thus, although we did not directly compare recollection for words studied with unfamiliar and famous faces in this study, this comparison lends support to the hypothesis that participants are better able to use familiar contextual stimuli at encoding to increase subsequent recollection.

Although we found that recollection was higher for words studied with famous scenes than scrambled images, we also found that recollection was higher for words studied with famous faces as compared to words studied with famous scenes. One plausible explanation might be that participants are more familiar with the famous faces than the famous scenes, making it easier to encode the face information in memory. However, the finding that there was no difference in famous face and famous scene identification in the End Task suggests that the difference is not due to higher recognition of the famous face stimuli. In addition, although some of the correlations between End Task identification of famous faces and famous scenes and memory for words studied with famous faces and scenes were significant, Remember accuracy for words studied with famous faces and for words studied with famous scenes did not show significant correlations. This further suggests that the difference found in recollection of words studied with famous faces and famous scenes was not due to greater recognition of the famous face stimuli. Another plausible explanation is that participants spent a different amount of time encoding the famous face than the famous scene stimuli, leading to higher recollection at retrieval. However, there were no differences in the time to identify the famous face and famous scene stimuli at encoding, suggesting that the contextual stimuli were processed for equivalent time periods. Rather, we believe that this difference may be due to differences in how the famous face and famous scene stimuli were processed.
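A sketch of one common formulation of Cohen's d (pooled-SD, independent-groups form) is given below for reference; the values reported above may have been computed with a different variance term (e.g., the SD of within-subject difference scores), so the function is illustrative only, and the variable names in the usage comment are hypothetical.

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d using a pooled standard deviation (independent-groups form)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    pooled_sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2.0)
    return (x.mean() - y.mean()) / pooled_sd

# Usage (arrays of per-participant Remember accuracy, one value per person):
# d = cohens_d(remember_famous_faces, remember_scrambled)
```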
Research suggests that the encoding of faces may involve specialized processing, such as more configural (as compared to featural) processing than non-face objects (for a review see [24]). There is evidence to suggest that while inverting faces impairs subsequent recognition memory, smaller or no such inversion effects have been found for scenes [48,49]. In addition, recent research suggests that memory for faces is higher at both short- (20 min) and long-term (3 week) retention intervals [34]. Given these differences in face and non-face processing, it is possible that famous faces are more easily integrated into context-item pairs than famous scenes. Alternatively, it may be that participants are more likely to engage

in elaborative processes that bind the item-context information at encoding when famous faces are used as the context. Thus, although our results suggest that recollection benefits when meaningful contextual stimuli are present at encoding, these benefits appear to vary depending on the context. How and why different forms of context are associated with varying levels of recollection benefit is an avenue to explore in future research.

2. fMRI experiment

Our second experiment used fMRI to determine whether the recognition of items studied with meaningful context information would involve the cortical reinstatement of the brain regions originally used to process that information. Participants studied words presented visually, accompanied by pictures of famous faces, famous scenes, or scrambled images (encoding was not scanned). In the subsequent recognition test, participants gave a remember, know, or new response to words presented alone. The test phase was scanned using event-related fMRI. In order to determine the specific brain regions involved in the recollection of context information, we contrasted brain activation for remember and know responses given to words studied with famous faces and famous scenes. This allowed us to identify the brain regions involved in the recollection of specific encoding contexts, rather than the brain regions involved in recollection per se. We hypothesized that if meaningful encoding contexts are bound to the target word and retrieved through recollective memory processes, activation in the FFA should be higher during remember responses to words studied with famous faces as compared to words studied with famous scenes, and conversely, activation in the PPA should be higher during remember responses to words studied with famous scenes as compared to words studied with famous faces. According to dual-process theory, know responses are not influenced by the availability of contextual detail; thus, there should be no significant effect of context on these responses, nor any differences in the pattern of brain activation in the FFA or PPA for know responses to words studied with famous faces compared to famous scenes.

Our previous work additionally showed that the level of activation in context-specific brain regions, specifically the fusiform gyrus, increased as the relative difference in recollection accuracy between words studied with intact faces and words studied with scrambled faces increased [38]. We performed two regression analyses to determine whether context-specific reactivation was related to recollection accuracy. The first analysis identified brain regions that increased in activation as recollection memory performance for words studied with famous faces, as compared to scrambled images, increased. The second analysis identified brain regions that increased in activation as recollection memory performance for words studied with famous scenes, as compared to scrambled images, increased. We hypothesized that activation in the FFA and PPA would increase as the relative recollection benefit for words studied with famous faces and famous scenes, as compared to scrambled images, increased across participants.

2.1. Method

2.1.1. Participants
Nineteen naïve participants were included in this experiment (9 females). One male was excluded from the study due to excessive head movement in the scanner; an additional male participant was run in their place.
All were students enrolled in classes at the University of Waterloo, ranged in age from 17 to 28 years (M = 22.39 years, SD = 2.97), and received $50 remuneration for participating in the study. All were fluent English speakers and had normal or corrected-to-normal vision and hearing. Years of


education ranged from 12 to 21 years (M = 16.56, SD = 2.53). The mean estimate of full scale IQ, based on the number of errors in pronunciation of words on the NART-R, was 113.90 (SD = 11.02). Mean performance on the Digit Span Forward test was 9.28 (SD = 1.90) and on the Digit Span Backward test was 8.22 (SD = 1.93).

2.1.2. Materials
The word lists, faces, scenes and scrambled images were the same as those used in the companion behavioural study.

2.1.3. Procedure
Participants were randomly assigned to a task order, were tested individually, and completed the experiment in approximately 1.5 h. Participants completed the NART-R and the Digit Span Forward and Backward tests either before or after the experimental phase of the study, in counterbalanced order. Participants were given the same task instructions as in the companion behavioural study, and completed a short practice session outside the scanner consisting of a block of the encoding and retrieval tasks using the same timings and procedure as in the companion behavioural experiment. This was done to ensure that all participants understood the experimental tasks prior to entering the scanner. Following the practice session, participants entered the scanner and an anatomical scan was obtained. Participants then completed each of the 3 study-test cycles, with face and scene lists counterbalanced across participants, with two-minute breaks between each. The study phases were not scanned. During each of the test phases, the 66 studied words along with 44 lure words were presented in a randomized order with the same procedure and timings as in the companion behavioural study. Participants were instructed to use their dominant hand to make their response. Following the study, participants performed the same identification End Task as in the companion behavioural study outside of the scanner.

2.1.4. fMRI scanning parameters
At the beginning of the session, a whole-brain T1-weighted anatomical image was collected for each participant (TR = 7.5 ms; TE = 3.4 ms; voxel size, 1 × 1 × 1 mm3; FOV, 240 × 240 mm2; 150 slices; no gap; flip angle, 8°). The test phase of each study-test cycle was scanned using an event-related design. Functional data were collected using gradient echo-planar T2*-weighted images acquired on a Philips 1.5 Tesla machine (TR = 2000 ms; TE = 30 ms; slice thickness = 5 mm; 28 slices; FOV = 200 × 200 mm2; voxel size = 2.75 × 2.75 × 5 mm3; flip angle = 70°). An experimental run consisted of 28 slices/volume. Each of the three memory retrieval runs consisted of 269 time points, which included an initial 5 time points during which the scanner reached a steady state prior to data collection. Stimuli were presented on an Avotec Silent Vision™ (Model SV-7021) fibre-optic visual presentation system with binocular projection glasses, controlled by a computer running E-Prime software (version 1.1 SP3, Psychology Software Tools, Pittsburgh, PA) synchronized to trigger pulses from the magnet.

2.1.5. fMRI data analysis
Processing and analysis were performed using the Analysis of Functional NeuroImages (AFNI, version 2009 12 31 1431) software package (Cox & Hyde, 1997). The first 5 data points in all fMRI time series, corresponding to presentation of a blank screen in our paradigm, were omitted from analysis to ensure magnetization had reached steady state.
For the event-related (memory retrieval) runs, between-slice timing differences caused by slice acquisition order were adjusted, and time series were spatially co-registered to a reference scan to correct for head motion using a 3D Fourier transform interpolation, using a functional volume that minimized the amount of head motion to < 2 mm. Memory retrieval data were then converted to units of percent change and runs were concatenated using the 3dcalc and 3dTcat commands in AFNI.

Individual participant data were analyzed using the 3dDeconvolve program in AFNI. Participant data were sorted into the following response types: (1) RFace: words studied with a famous face and correctly identified with a remember response, (2) RScene: words studied with a famous scene and correctly identified with a remember response, (3) RScrambled: words studied with a scrambled image and correctly identified with a remember response, (4) KFace: words studied with a famous face and correctly identified with a know response, (5) KScene: words studied with a famous scene and correctly identified with a know response, (6) KScrambled: words studied with a scrambled image and correctly identified with a know response, and (7) Fix: baseline fixation crosses. GLTs were used to contrast the selected memory responses to baseline (Fix). New items given remember and know responses (false alarms), misses, and correct rejections were also identified, but were not used in the analyses. A tent function was used to model the data, with the function estimated at 7 time points. Events of interest were time-locked to the stimulus onset. Each participant's data were extracted and transformed into a common space based on the Talairach and Tournoux (1988) atlas, and spatial smoothing was performed using an isotropic Gaussian blur with a full width at half maximum (FWHM) of 6 mm to increase the signal-to-noise ratio. Original 3 × 3 × 5 mm voxels were resampled to 2 × 2 × 2 mm prior to group analysis.

A whole-brain analysis was performed on the averaged group data. Two voxel-wise, two-factor ANOVAs, using Fix as baseline, with Response Type (Face, Scene, and Scrambled) as a fixed factor and participants as a random factor, were conducted to compare activation for (1) RFace to RScene, and (2) KFace to KScene responses. We used the Talairach atlas (Talairach & Tournoux, 1988) in AFNI and the automated Talairach Daemon (Lancaster et al., 2000) to label the brain regions of activation identified by the analyses.

As in our previous work [38], a regression analysis was additionally performed to identify regions of activation in the RFace > RScrambled and RScene > RScrambled contrasts that correlated significantly with behavioural performance on the recognition memory task. For this analysis, memory performance was measured as a difference score: for the RFace > RScrambled analysis, remember accuracy for words studied with faces minus remember accuracy for words studied with scrambled images, and for the RScene > RScrambled analysis, remember accuracy for words studied with scenes minus remember accuracy for words studied with scrambled images. This difference score reflects the relative recollection advantage (or disadvantage) for words studied with faces or scenes as compared to scrambled images, for each participant. The 3dRegAna program in AFNI was used for this analysis.
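As an illustration of the trial sorting described above, the sketch below assigns each correctly recognized retrieval trial to one of the seven response types and writes per-condition onset files in a one-line-per-run format of the kind typically supplied to event-related deconvolution. The data structures and file layout are assumptions made for illustration, not the authors' actual scripts.

```python
from collections import defaultdict

CONDITIONS = ("RFace", "RScene", "RScrambled", "KFace", "KScene", "KScrambled", "Fix")

def sort_trials(trials):
    """Assign each retrieval trial to one of the seven response types.

    `trials` is a list of dicts with keys: 'onset_s' (seconds from run start),
    'context' ('face', 'scene', 'scrambled', 'new', or 'fixation') and
    'response' ('remember', 'know', 'new', or None for fixation events).
    """
    label_map = {("remember", "face"): "RFace", ("remember", "scene"): "RScene",
                 ("remember", "scrambled"): "RScrambled",
                 ("know", "face"): "KFace", ("know", "scene"): "KScene",
                 ("know", "scrambled"): "KScrambled"}
    onsets = defaultdict(list)
    for t in trials:
        if t["context"] == "fixation":
            onsets["Fix"].append(t["onset_s"])
            continue
        label = label_map.get((t["response"], t["context"]))
        if label is not None:                    # false alarms, misses and correct
            onsets[label].append(t["onset_s"])   # rejections are set aside, as in the paper
    return onsets

def write_onset_files(onsets_per_run, prefix="stim"):
    """Write one text file per condition, with one line of onsets per run
    (an asterisk marks a run with no events of that type)."""
    for cond in CONDITIONS:
        with open(f"{prefix}_{cond}.1D", "w") as fh:
            for run in onsets_per_run:           # list of dicts, one per run
                times = run.get(cond, [])
                fh.write(" ".join(f"{t:.1f}" for t in times) or "*")
                fh.write("\n")
```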
2.2. Results

2.2.1. Behavioural data
2.2.1.1. Identification task performance. Mean identification accuracy for image identification at encoding, measured as hit rate minus false alarm rate, was .94 (SD = .05) for pictures of famous faces, .91 (SD = .07) for pictures of famous scenes, and .98 (SD = .01) for scrambled images. A repeated measures ANOVA revealed a main effect of Context on identification accuracy, F(2,34) = 11.63, MSE = .03, p < .05. Simple effects contrasts showed that identification accuracy was higher for the scrambled images as compared to both the famous face and famous scene images, F(1,17) = 13.60, MSE = .03, p < .005, and F(1,17) = 25.00, MSE = .01,


p < .001, respectively. There was no difference in identification accuracy between famous faces and famous scenes, F(1,17) = 3.19, MSE = .02, p > .05.

Although speed of responding was not emphasized during the identification task, we nonetheless examined these data. The RT in milliseconds to make a correct response was 1208.50 (SD = 579.44) for famous faces, 1179.97 (SD = 596.17) for famous scenes, and 898.19 (SD = 505.44) for scrambled images. The main effect of Context was significant, F(2,34) = 9.52, MSE = 529506.26, p = .001, and simple effects tests showed that identification of famous face and famous scene images was significantly slower than for scrambled images, F(1,17) = 15.78, MSE = 1733211.68, and F(1,17) = 27.58, MSE = 1429176.89, ps < .005, respectively. However, the time to identify pictures of famous faces and famous scenes did not differ, F(1,17) = 0.09, MSE = 14649.01, p > .05.

2.2.2. Memory task performance
2.2.2.1. Accuracy. As in the companion behavioural experiment, overall accuracy, Remember accuracy, Know accuracy, and IRK familiarity were analyzed in a 3 (Word type: encoded with a famous face, famous scene, or scrambled image) × 3 (Task order) ANOVA. Table 1 shows the means and standard deviations for each memory measure and condition.

The analysis of overall accuracy showed an effect of Word type, F(2,30) = 4.94, MSE = .005, p < .05. Simple effects contrasts showed that overall memory accuracy was higher for words studied with famous faces and famous scenes as compared to words studied with scrambled images, F(1,15) = 6.97, MSE = .09, p < .05, and F(1,15) = 8.54, MSE = .5, p < .05, respectively. Overall accuracy for words studied with famous faces and words studied with famous scenes did not differ, F(1,15) = 0.52, MSE = .01, p > .05.

We then analyzed accuracy of Remember responses. There was a main effect of Word type, F(2,30) = 12.48, MSE = .005, p < .001. Simple effects contrasts showed that Remember accuracy was higher for words studied with famous faces as compared to famous scenes and scrambled images, F(1,15) = 6.12, MSE = .09, p < .05, and F(1,15) = 28.85, MSE = .27, p < .001, respectively. Remember accuracy was also higher for words studied with famous scenes as compared to scrambled images, F(1,15) = 5.56, MSE = .05, p < .05. The analysis for Know accuracy showed no effect of Word type, F(2,30) = 3.17, MSE = .005, p > .05, or Task order, and no significant interactions. In addition, the analysis for IRK familiarity showed no main effect of Word type, F(2,30) = 1.09, MSE = .01, p > .05, or Task order, and no interactions.

2.2.2.2. Response time. Although RT was not emphasized at retrieval, we examined these data in two separate repeated measures ANOVAs for correct Remember and Know responses. There was no effect of Word type for either Remember, F(2,30) = 0.51, MSE = 21,970, p > .05, or Know, F(2,30) = 0.35, MSE = 26,527, p > .05, responses. There were no effects of Task order or any interactions.

2.2.3. End task performance
We analyzed the mean hit rate for correct identification of the images, performed at the end of the experiment, with a paired-samples t-test. There was no significant difference in recognition of famous faces (M = .71, SD = .19) and scenes (M = .69, SD = .18), t(17) = .29, p > .05. We examined whether there was a correlation between correctly named famous faces or famous scenes on the End Task and memory performance. The only significant correlation was between End Task identification of famous scenes and Know accuracy for words studied with famous scenes, r(16) = .51, p < .05.

Table 2
Coordinates and cluster volume for brain regions showing differences in activation for RFace responses compared to RScene responses.

Brain region                          x      y      z    Volume

RFace > RScene
  Right superior frontal gyrus        9     60     29     7824
  Right medial frontal gyrus          5    −21     70      730
  Right insula                       38    −25      2      461
  Right superior frontal gyrus       41     20     50      355
  Right precentral gyrus             61     −4     20      352
  Left middle frontal gyrus         −32     62      8      782
  Left superior frontal gyrus        −9     71     27      418
  Left precentral gyrus             −55      5     13      383
  Right fusiform                     42    −45    −17      598
  Left middle temporal gyrus        −47    −67     13     1846
  Left cerebellum                   −41    −56    −47      714
  Left superior temporal gyrus      −62    −28      7      683
  Left parahippocampal gyrus        −15    −55     −1      679
  Left superior temporal gyrus      −32     18    −37      620
  Right inferior parietal lobule     58    −29     44     3862
  Right precuneus                    25    −70     50      702
  Right cuneus                       18    −88     24      558
  Right insula                       48    −33     20      537
  Right angular gyrus                50    −62     37      489
  Left postcentral gyrus            −52    −20     51      387
  Right lingual gyrus                 9    −89     −4      715

RScene > RFace
  Right middle frontal gyrus         24      6     45     4165
  Right insula                       35     11     11      852
  Left ventral anterior nucleus     −14     −5     14      780
  Left cingulate gyrus              −14    −20     32      748
  Left cingulate gyrus               −3     −5     37      679
  Left cingulate gyrus               −4    −20     26      574
  Left paracentral lobule           −16    −34     57      493
  Left parahippocampal gyrus        −14      5    −20      496
  Left parahippocampal gyrus        −43    −27    −26      357
  Left inferior temporal gyrus      −60    −19    −21      420
  Right precuneus                    20    −49     49      450
  Left parietal lobule              −34    −39     46      669

Note: The Talairach coordinates represent the peak for the given region. We used a significance threshold of p < .05 and a cluster size of 350 or more contiguous voxels.

2.2.4. fMRI data
Using a significance threshold of p < .05 and a cluster size of 350 or more contiguous voxels, direct comparisons of activation for remember responses for words studied with famous faces (RFace) and remember responses for words studied with famous scenes (RScene) showed that RFace responses were associated with increased activity in the right superior frontal, right medial frontal, right insula, right superior frontal, right precentral, left middle frontal, left superior frontal, left precentral, left middle temporal, left cerebellum, left superior temporal, left parahippocampal, right inferior parietal, right precuneus, right cuneus, right insula, right angular, left postcentral, and right lingual regions (see Table 2). Conversely, RScene responses were associated with increased activation in the right middle frontal, right insula, left ventral anterior nucleus, left cingulate, left paracentral, left inferior temporal, left parahippocampal, right precuneus, and left parietal regions. Importantly, the analysis also showed a double dissociation of brain activation: increased activation was found for RFace relative to RScene responses in the right fusiform gyrus within the FFA (see Fig. 1), and increased activation for RScene relative to RFace responses was found within the left parahippocampus, or the PPA (see Fig. 2).

The General Linear Model (GLM) analysis comparing activation during KFace and KScene responses is reported in Table 3. Although this analysis did show differential activation in various brain regions, including frontal and parietal regions, it did not show significant activation in the fusiform or parahippocampal gyri. This was true even when the threshold was lowered to p < .10.


Fig. 1. Brain areas with differences in activity for remember responses given to words studied with famous faces and words studied with famous scenes on averaged anatomical scans. Areas in orange represent regions with higher activity for RFace than RScene responses, p < .05. Axial slices are at z = −22 mm to z = −14 mm from the AC–PC line and the cross-hairs identify increased activity in the right fusiform gyrus (42, −45, −17).

A regression analysis was used to identify regions in which the level of activation across participants correlated significantly with behavioural performance on the recognition task, using a threshold of p < .05 and a cluster size of 350 or more contiguous voxels. Both analyses revealed increased activation in the right thalamus as the difference score increased (x, y, z = 3, −14, 2 for the RFace > RScrambled contrast and x, y, z = 3, −13, 2 for the RScene > RScrambled contrast). However, no regions in the fusiform or parahippocampal gyrus were identified.
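The logic of this across-participant regression can be sketched as follows: for each voxel, the contrast estimate is correlated with a behavioural difference score, and the resulting map is thresholded voxelwise before cluster-extent correction. The Python code below illustrates this with simulated data; it is not the analysis code used in the study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_participants, n_voxels = 18, 5000

# Simulated per-participant contrast estimates (e.g., RFace minus RScrambled betas)
voxel_betas = rng.normal(0.0, 1.0, size=(n_participants, n_voxels))

# Simulated behavioural difference scores (recollection benefit per participant)
behaviour = rng.normal(0.0, 1.0, size=n_participants)

# Voxelwise Pearson correlation between activation and the behavioural score
r_values = np.array([stats.pearsonr(voxel_betas[:, v], behaviour)[0]
                     for v in range(n_voxels)])

# Convert r to t and apply the voxelwise p < .05 threshold
t_values = r_values * np.sqrt((n_participants - 2) / (1 - r_values ** 2))
p_values = 2 * stats.t.sf(np.abs(t_values), n_participants - 2)
print(f"{int((p_values < 0.05).sum())} voxels pass p < .05 before cluster thresholding")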

2.3. Discussion
We examined how the presence of meaningful visual context information during encoding of target words influenced later recollection of the words presented alone at retrieval. Replicating our previous behavioural work, we observed a recollection benefit for words studied with pictures of famous faces and famous scenes as compared to scrambled images. These results provide further evidence that recognition memory for target information, and recollection in particular, increases when the target item is encoded with meaningful context information. In addition, we were able to demonstrate such effects inside the fMRI scanner environment, unlike in our previous neuroimaging work [38].

At the neural level, the whole-brain analysis of fMRI data showed a double dissociation in brain activation: activation in the right fusiform gyrus (i.e., FFA) was higher for remember responses given to words studied with famous faces compared to famous scenes, and activation in the left parahippocampus (i.e., PPA) was higher for words studied with famous scenes relative to famous faces. This finding provides strong evidence for the cortical reinstatement hypothesis, suggesting that item recognition is accompanied by reactivation of the brain regions originally involved in encoding context information, even when that context is not provided at retrieval (see also [21,47]). Importantly, we did not find a significant increase in fusiform or parahippocampal activity when know responses were contrasted, even at a lowered threshold, supporting the hypothesis that sensory-specific reactivation is specific to recollection. The fact that we identified other brain regions showing differential activation for know responses given to words studied with famous faces and famous scenes suggests that there were sufficient know responses in our study to observe differences in brain activation; nevertheless, no context-specific activation was observed. Such results are in line with previous work showing reactivation of content-specific brain regions during recollection [21,47,45]. Recent research shows that the location of the functionally defined FFA and PPA varies widely across individuals [11].


Fig. 2. Brain areas with differences in activity for remember responses given to words studied with famous scenes and words studied with famous faces on averaged anatomical scans. Areas in blue represent regions with higher activity for RScene than RFace responses, p < .05. Axial slices are at z = −25 mm to z = −19 mm from the AC–PC line and the cross-hairs identify increased activity in the left parahippocampal gyrus (−41, −29, −21).

Some authors have critiqued the approach of applying the term FFA to any activation found in the fusiform gyrus, as this is a large cortical region that may support many different types of visual processes [44]. However, the activation found in the fusiform gyrus during recollection of words studied with famous faces is similar to the functionally defined FFA reported by Kanwisher et al. [23] (peak activation 40, −55, −10) and is close to a region identified in a meta-analysis of fourteen studies of activation during the recognition of faces ([22]; peak activation 41, −37, −21). It is additionally similar to the region identified in a meta-analysis of 105 studies that compared brain activation during the processing of neutral or emotional faces to baseline [10]. This supports the hypothesis that the activation represents the retrieval of face (context) information. Activation in the parahippocampal region has been implicated in the processing of scenes [8], though its precise localization varies widely across studies [30,15,16], with some research suggesting these differences reflect the kind of processing carried out on the scene; its location is also subject to considerable variation across participants [11]. It has recently been suggested that the PPA is more involved in encoding scenic layouts than the scenes themselves [9]. Nonetheless, our coordinates for PPA activity, though more lateral and inferior to those reported in some other studies, are still within the parahippocampal region, and are distinct from the FFA activation associated with words paired with famous faces. We thus suggest that the activation found in the parahippocampal gyrus is evidence of context-specific reactivation of scene processing. This double dissociation provides strong evidence that context-specific brain regions implicated during encoding are recruited during retrieval.
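As a rough illustration of the coordinate comparisons made above, the short Python sketch below computes the Euclidean distance (in mm) between our fusiform peak and the published peaks cited in the text. It ignores differences between normalization templates across studies, so the distances are only approximate.

import numpy as np

# Peak reported for RFace > RScene in the right fusiform gyrus (Table 2 / Fig. 1)
our_fusiform_peak = np.array([42, -45, -17])

# Published peaks cited in the text, treated here as directly comparable
kanwisher_ffa = np.array([40, -55, -10])   # Kanwisher et al. [23]
joseph_meta = np.array([41, -37, -21])     # Joseph [22] meta-analysis

for label, peak in [("Kanwisher et al. [23]", kanwisher_ffa),
                    ("Joseph [22]", joseph_meta)]:
    distance = np.linalg.norm(our_fusiform_peak - peak)
    print(f"Distance to {label} peak: {distance:.1f} mm")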

3. General discussion
In two experiments we found that providing meaningful context information at encoding increased subsequent recognition of the target item presented alone. That the increase in memory performance was found for remember, and not know, responses suggests that the memory boost is specific to recollection. Neurally, we found that activation in the fusiform gyrus was higher for remember responses given to words studied with famous faces than with famous scenes and, conversely, that activation in the parahippocampus was higher for remember responses given to words studied with famous scenes than with famous faces. No such activation patterns were found for know responses. The results point toward a strong double dissociation of context-specific reactivation and provide compelling evidence that this reactivation occurs only during recollection.

Table 3
Coordinates and cluster volume for brain regions showing differences in activation for KFace responses compared to KScene responses.

Brain region                        x      y      z    Volume
KFace > KScene
  Left middle frontal             −38      0     54      710
  Left superior frontal           −16     35     56      603
  Left superior temporal          −61    −23     −1      863
  Right postcentral gyrus          30    −33     64      727
  Right superior parietal          21    −63     56     4640
  Right supramarginal gyrus        56    −53     33      571
  Left postcentral gyrus          −21    −37     65     2614
  Left postcentral gyrus          −53    −21     42      495
  Left superior parietal          −25    −57     53      968
  Right cerebellum                 14    −31    −47      844
  Right cerebellum                  1    −48    −56      628
  Left cerebellum                 −15    −42    −42      475
KScene > KFace
  Right middle frontal             33     36    −13     1165
  Right superior frontal           30     68     −5      393
  Left medial frontal             −19     30     30      763
  Left medial frontal              −7     11     47      398
  Right inferior temporal          61    −33    −26      752
  Left uncus                      −26     −4    −19      379
  Left lingual gyrus              −26    −42      1      429
  Right cerebellum                 28    −69    −30      641
  Left cerebellum                 −34    −40    −29     1070
  Left cerebellum                 −32    −75    −32      760
  Right culmen                     49    −43    −27      653
  Right mamillary body              5     −8    −16      811
  Left putamen                    −26     10      2      372
  Left semilunar lobule            −5    −73    −36     1335

Note: Talairach coordinates represent the peak for the given region. We used a significance threshold of p < .05 and a cluster size of 350 or more contiguous voxels.

To our knowledge, this is the first time a double dissociation of this nature has been found for such well-defined brain regions as the FFA and PPA. It should be noted that in the fMRI study, as in the behavioural study, recollection for words studied with famous faces was higher than for words studied with famous scenes. It is thus possible that differences in cortical activation when comparing these responses are due, at least in part, to differences in recollection performance, or strength, rather than to context reinstatement. However, given that the fusiform gyrus is not generally associated with the recollection process (see, for example, [35]), and that we found activation in the neural regions associated with each class of stimuli (i.e., we also found activation in the parahippocampus for words studied with famous scenes), we believe that the activation in the fusiform gyrus more likely reflects context-specific reactivation rather than increased recollection.

In addition to showing differential activation in the ventral processing stream, remember responses for words studied with famous faces and famous scenes elicited differential activation in the left temporal lobe. Activation was higher for remembered words studied with famous faces in the left middle and superior temporal lobe, whereas activation was higher for remembered words studied with famous scenes in the left inferior temporal lobe. Kanwisher et al. [23] observed activation of middle and superior temporal regions during the passive viewing of faces and suggested that this activation may represent the evaluation of emotional information, as research with macaques has shown that facial expressions elicit neural responses in the superior temporal region. Other research with macaques has found that cells in the inferior temporal cortex respond to scene stimuli and may be involved in encoding the relative positions of objects within a scene [1]. It is possible that the differential activation found in the temporal lobe for words studied with famous faces and famous scenes represents additional context-specific reactivation, with activation in the superior temporal lobe representing the retrieval of facial emotions and activation in the inferior temporal lobe representing the retrieval of scene locations, though future studies will be needed to specifically address this hypothesis.


Unlike our previous work [38], we did not find that activation in context-specific brain regions increased as the recollection benefit for words studied with meaningful context information increased. We previously suggested that the extent to which participants show context-specific reactivation, and subsequent recollection benefits, may depend on the extent to which they engage in processes at encoding that bind item and context information. The failure to replicate this finding may indicate that context-specific reactivation is independent of differences in participant performance. However, it is also possible that the variability in participant performance required to observe such effects was not present in the current study. In our previous work approximately half of the participants showed a recollection benefit for words studied with faces, and we took advantage of this variability to perform the regression analysis; in the current study, the majority of participants showed such recollection benefits, indicating reduced variability. Thus, future research will be needed to further examine the relationship between cortical reinstatement and memory performance. Nonetheless, we did find that activation of the thalamus co-varied with recollection benefits. The thalamus may act as a relay station aiding response output, and recent research shows that thalamic activation is associated with recognition accompanied by recall of contextual details [31]. The increased activation of the thalamus may thus reflect greater retrieval of contextual detail during remember responses for words studied with famous faces and famous scenes, as compared to remember responses for words studied with scrambled images.

Before concluding, it should be noted that alternative models suggest that recollection and familiarity recruit different brain regions not because they are qualitatively different processes, but rather because they represent different levels of memory strength (see, for example, [40]). Although this model is discussed predominantly with respect to the medial temporal lobe, if recollection and familiarity represent different levels of a single memory signal, the differential recruitment during remember and know responses found in this study would reflect differences in memory strength rather than different memory processes, which we do not believe to be plausible. In a recent study, content-specific reactivation was found when 'remember' responses were contrasted with 'know' responses of varying confidence [21]. These results, in addition to ours showing a double dissociation in the content areas reactivated at retrieval, are more in line with neural models that associate differential brain activation with dual memory processes (see, for example, [33]).

In conclusion, the current study demonstrates two novel findings. First, we replicated our previous work by showing that recollection is higher for target items when they are encoded with meaningful context information, and extended this recollection benefit to include famous scenes and famous faces as contextual stimuli.
Second, and more importantly, we were able to establish a double dissociation of cortical reinstatement using well-established brain regions: the FFA and PPA. When meaningful context information (a famous face or famous scene) was presented at study, activation in the fusiform gyrus (FFA) and parahippocampal gyrus (PPA) increased during subsequent recollection of the item information presented alone. Importantly, this pattern was present only for remember and not for know responses, in line with dual-process theories of recognition. These results suggest that patterns of brain activation at retrieval depend on the type of context information presented at study, and that this effect is specific to the recollection process.


Acknowledgement
This research was funded by a Discovery grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) awarded to MF, and an NSERC Discovery grant awarded to JF.

References
[1] Aggelopoulos NC, Rolls ET. Scene perception: inferior temporal cortex neurons encode the positions of different objects in the scene. Eur J Neurosci 2005;22(11):2903–16, http://dx.doi.org/10.1111/j.1460-9568.2005.04487.x.
[2] Alvarez P, Squire LR. Memory consolidation and the medial temporal lobe: a simple network model. Proc Natl Acad Sci USA 1994;91:7041–5, http://dx.doi.org/10.1073/pnas.91.15.7041.
[3] Baayen RH, Piepenbrock R, Gulikers L. The CELEX lexical database [CD-ROM]. Philadelphia, PA: Linguistic Data Consortium, University of Pennsylvania [Distributor]; 1995 (Release 2).
[4] Bowles B, Crupi C, Mirsattari SM, Pigott SE, Parrent AG, Pruessner JC, Köhler S. Impaired familiarity with preserved recollection after anterior temporal-lobe resection that spares the hippocampus. Proc Natl Acad Sci USA 2007;104(41):16382–7, http://dx.doi.org/10.1073/pnas.0705273104.
[5] Danker JF, Anderson JR. The ghosts of brain states past: remembering reactivates the brain regions engaged during encoding. Psychol Bull 2010;136(1):87–102, http://dx.doi.org/10.1037/a0017937.
[6] Donaldson W. The role of decision processes in remembering and knowing. Mem Cognit 1996;24(4):523–33, http://dx.doi.org/10.3758/BF03200940.
[7] Dunn JC. Remember-know: a matter of confidence. Psychol Rev 2004;111(2):524–42, http://dx.doi.org/10.1037/0033-295X.111.2.524.
[8] Epstein R, Kanwisher N. A cortical representation of the local visual environment. Nature 1998;392:598–601, http://dx.doi.org/10.1038/33402.
[9] Epstein RA, Ward EJ. How reliable are visual context effects in the parahippocampal place area? Cereb Cortex 2010;20(2):294–303, http://dx.doi.org/10.1093/cercor/bhp099.
[10] Fusar-Poli P, Placentino A, Carletti F, Landi P, Allen P, Surguladze S, Politi P. Functional atlas of emotional faces processing: a voxel-based meta-analysis of 105 functional magnetic resonance imaging studies. J Psychiatry Neurosci 2009;34(6):418–32.
[11] Frost MA, Goebel R. Measuring structural–functional correspondence: spatial variability of specialised brain regions after macro-anatomical alignment. Neuroimage 2012;59(2):1369–81, http://dx.doi.org/10.1016/j.neuroimage.2011.08.035.
[12] Gardiner JM. Functional aspects of recollective experience. Mem Cognit 1988;16(4):309–13, http://dx.doi.org/10.3758/BF03197041.
[13] Gardiner JM, Parkin AJ. Attention and recollective experience in recognition memory. Mem Cognit 1990;18(6):579–83, http://dx.doi.org/10.3758/BF03197100.
[14] Gliga T, Csibra G. Seeing the face through the eyes: a developmental perspective on face expertise. Prog Brain Res 2007;164:323–39, http://dx.doi.org/10.1016/S0079-6123(07)64018-7.
[15] Golomb JD, Albrecht AR, Park S, Chun MM. Eye movements help link different views in the scene-selective cortex. Cereb Cortex 2011;21(9):2094–102, http://dx.doi.org/10.1093/cercor/bhq292.
[16] He C, Peelen MV, Han Z, Lin N, Caramazza A, Bi Y. Selectivity for large nonmanipulable objects in scene-selective visual cortex does not require visual experience. Neuroimage 2013;79:1–9, http://dx.doi.org/10.1016/j.neuroimage.2013.04.051.
[17] Jacoby LL. A process dissociation framework: separating automatic from intentional uses of memory. J Mem Lang 1991;30(5):513–41, http://dx.doi.org/10.1016/0749-596X(91)90025-F.
[18] Kuhl BA, Rissman J, Chun MM, Wagner AD. Fidelity of neural reactivation reveals competition between memories. Proc Natl Acad Sci USA 2011;108(14):5903–8, http://dx.doi.org/10.1073/pnas.1016939108.
[19] Kuhl BA, Rissman J, Wagner AD. Multi-voxel patterns of visual category representation during episodic encoding are predictive of subsequent memory. Neuropsychologia 2012;50(4):458–69, http://dx.doi.org/10.1016/j.neuropsychologia.2011.09.002.
[20] Johnson JD, Rugg MD. Recollection and the reinstatement of encoding-related cortical activity. Cereb Cortex 2007;17:2507–15, http://dx.doi.org/10.1093/cercor/bhl156.
[21] Johnson JD, Suzuki M, Rugg MD. Recollection, familiarity, and content-sensitivity in lateral parietal cortex: a high-resolution fMRI study. Front Hum Neurosci 2013;7, http://dx.doi.org/10.3389/fnhum.2013.00219.
[22] Joseph JE. Functional neuroimaging studies of category specificity in object recognition: a critical review and meta-analysis. Cogn Affect Behav Neurosci 2001;1(2):119–36, http://dx.doi.org/10.3758/CABN.1.2.119.
[23] Kanwisher N, McDermott J, Chun MM. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci 1997;17(11):4302–11.
[24] Maurer D, Le Grand R, Mondloch CJ. The many faces of configural processing. Trends Cogn Sci 2002;6(6):255–60, http://dx.doi.org/10.1016/S1364-6613(02)01903-4.
[25] Montaldi D, Spencer TJ, Roberts N, Mayes AR. The neural system that mediates familiarity memory. Hippocampus 2006;16(5):504–20, http://dx.doi.org/10.1002/hipo.20178.
[26] Morris C, Bransford JD, Franks JJ. Levels of processing versus transfer appropriate processing. J Verb Learn Verb Behav 1977;16(5):519–33, http://dx.doi.org/10.1016/S0022-5371(77)80016-9.
[27] Nyberg L, Habib R, McIntosh AR, Tulving E. Reactivation of encoding-related brain activity during memory retrieval. Proc Natl Acad Sci USA 2000;97:11120–4, http://dx.doi.org/10.1073/pnas.97.20.11120.
[28] Nyberg L, Petersson KM, Nilsson LG, Sandblom J, Aberg C, Ingvar M. Reactivation of motor brain areas during explicit memory for actions. Neuroimage 2001;14:521–8, http://dx.doi.org/10.1006/nimg.2001.0801.
[29] O'Craven KM, Kanwisher N. Mental imagery of faces and places activates corresponding stimulus-specific brain regions. J Cogn Neurosci 2000;12(6):1013–23, http://dx.doi.org/10.1162/08989290051137549.
[30] Park S, Chun MM. Different roles of the parahippocampal place area (PPA) and retrosplenial cortex (RSC) in panoramic scene perception. Neuroimage 2009;47(4):1747–56, http://dx.doi.org/10.1016/j.neuroimage.2009.04.058.
[31] Pergola G, Ranft A, Mathias K, Suchan B. The role of the thalamic nuclei in recognition memory accompanied by recall during encoding and retrieval: an fMRI study. Neuroimage 2013;74:195–208, http://dx.doi.org/10.1016/j.neuroimage.2013.02.017.
[32] Reder LM, Victoria LW, Manelis A, Oates JM, Dutcher JM, Bates JT, Gyulai F. Why it's easier to remember seeing a face we already know than one we don't: preexisting memory representations facilitate memory formation. Psychol Sci 2013;24(3):363–72.
[33] Rugg MD, Vilberg KL. Brain networks underlying episodic memory retrieval. Curr Opin Neurobiol 2013;23(2):255–60, http://dx.doi.org/10.1016/j.conb.2012.11.005.
[34] Sato W, Yoshikawa S. Recognition memory for faces and scenes. J Gen Psychol 2013;140(1):1–15, http://dx.doi.org/10.1080/00221309.2012.710275.
[35] Skinner EI, Fernandes MA. Neural correlates of recollection and familiarity: a review of neuroimaging and patient data. Neuropsychologia 2007;45(10):2163–79, http://dx.doi.org/10.1016/j.neuropsychologia.2007.03.007.
[36] Skinner EI, Fernandes MA. Age-related changes in the use of study context to increase recollection. Aging Neuropsychol Cognit 2009;16(4):377–400, http://dx.doi.org/10.1080/13825580802573052.
[37] Skinner EI, Fernandes MA. Effect of study context on item recollection. Q J Exp Psychol 2010;63(7):1318–34, http://dx.doi.org/10.1080/17470210903348613.
[38] Skinner EI, Grady CL, Fernandes MA. Reactivation of context-specific brain regions during retrieval. Neuropsychologia 2010;48(1):156–64, http://dx.doi.org/10.1016/j.neuropsychologia.2009.08.023.
[39] Smith SM, Vela E. Environmental context-dependent memory: a review and meta-analysis. Psychon Bull Rev 2001;8(2):203–20, http://dx.doi.org/10.3758/BF03196157.
[40] Squire LR, Wixted JT, Clark RE. Recognition memory and the medial temporal lobe: a new perspective. Nat Rev Neurosci 2007;8(11):872–83, http://dx.doi.org/10.1038/nrn2154.
[41] Tulving E. Memory and consciousness. Can J Psychol 1985;32:130–47, http://dx.doi.org/10.1037/h0080017.
[42] Vaidya CJ, Zhao M, Desmond JE, Gabrieli JE. Evidence for cortical encoding specificity in episodic memory: memory-induced re-activation of picture processing areas. Neuropsychologia 2002;40(12):2136–43, http://dx.doi.org/10.1016/S0028-3932(02)00053-2.
[43] Vilberg KL, Rugg MD. Memory retrieval and the parietal cortex: a review of evidence from a dual-process perspective. Neuropsychologia 2008;46(7):1787–99, http://dx.doi.org/10.1016/j.neuropsychologia.2008.01.004.
[44] Weiner KS, Grill-Spector K. The improbable simplicity of the fusiform face area. Trends Cogn Sci 2012;16(5):251–4, http://dx.doi.org/10.1016/j.tics.2012.03.003.
[45] Wheeler ME, Buckner RL. Functional-anatomical correlates of remembering and knowing. Neuroimage 2004;21:1337–49, http://dx.doi.org/10.1016/j.neuroimage.2003.11.001.
[46] Wixted JT, Stretch V. In defense of the signal detection interpretation of remember/know judgments. Psychon Bull Rev 2004;11(4):616–41.
[47] Woodruff CC, Johnson JD, Uncapher MR, Rugg MD. Content-specificity of the neural correlates of recollection. Neuropsychologia 2005;43:1022–32, http://dx.doi.org/10.1016/j.neuropsychologia.2004.10.013.
[48] Wright AA, Roberts WA. Monkey and human face perception: inversion effects for human faces but not for monkey faces or scenes. J Cogn Neurosci 1996;8(3):278–90, http://dx.doi.org/10.1162/jocn.1996.8.3.278.
[49] Yin RK. Looking at upside-down faces. J Exp Psychol 1969;81(1):141–5, http://dx.doi.org/10.1037/h0027474.
[50] Yonelinas AP. Consciousness, control, and confidence: the 3 Cs of recognition memory. J Exp Psychol Gen 2001;130(3):361–79, http://dx.doi.org/10.1037/0096-3445.130.3.361.

[51] Yonelinas AP. The nature of recollection and familiarity: a review of 30 years of research. J Mem Lang 2002;46(3):441–517, http://dx.doi.org/10.1006/jmla.2002.2864.
[52] Yonelinas AP, Jacoby LL. Noncriterial recollection: familiarity as automatic, irrelevant recollection. Conscious Cogn 1996;5(1-2):131–41, http://dx.doi.org/10.1006/ccog.1996.0008.


[53] Yonelinas AP, Kroll NA, Quamme JR, Lazzara MM, Sauvé M, Widaman KF, Knight RT. Effects of extensive temporal lobe damage or mild hypoxia on recollection and familiarity. Nature Neurosci 2002;5(11):1236–41, http://dx.doi.org/10.1038/nn961.
