
To cite this article: Walenchok, S. C., Hout, M. C., & Goldinger, S. D. (2013). What does that picture sound like to you? Oculomotor evidence for phonological competition in visual search. Visual Cognition, 21(6), 718–722. doi:10.1080/13506285.2013.844970



What does that picture sound like to you? Oculomotor evidence for phonological competition in visual search

Stephen C. Walenchok¹, Michael C. Hout², and Stephen D. Goldinger¹

¹Department of Psychology, Arizona State University, Tempe, AZ, USA
²Department of Psychology, New Mexico State University, NM, USA

Please address all correspondence to Stephen C. Walenchok, Department of Psychology, Arizona State University, PO Box 871104, Tempe, AZ 85287, USA. E-mail: [email protected]

© 2013 Taylor & Francis http://dx.doi.org/10.1080/13506285.2013.844970

Consider an experiment wherein, on each trial, you are shown a picture of some object (e.g., a hammer) as a visual search target, and must then find an image of a hammer among a background of other depicted objects. For obvious reasons, such studies of attentional guidance in visual search have primarily focused on the visual features of objects (e.g., Wolfe, 2007; Wolfe, Cave, & Franzel, 1989). However, several studies have shown that other, nonvisual object features can influence attentional guidance and interference from distractors. These include conceptual and semantic factors (Dahan & Tanenhaus, 2005; Huettig & Altmann, 2005), the specificity of target descriptions in categorical search (Schmidt & Zelinsky, 2009), and phonological similarity (Gorges, Oppermann, Jescheniak, & Schriefers, 2013; Meyer, Belke, Telling, & Humphreys, 2007). In the present study, we further examined the phonological dimension, testing whether distractor object names are implicitly activated during visual search, as indicated by potential interference from distractors whose names partially overlapped with the targets.

In an experiment similar to the foregoing description, we investigated two key questions, embodied in two key manipulations. The first was whether phonological interference (if present) would be greater when targets were specified with verbal labels rather than with visual icons. The second was whether cognitive load, operationalized by having participants search for either one or three potential targets per trial, would modulate interference from distractors.

Given these manipulations, we had two main predictions. First, we expected the greater memory demands of multiple-target search to encourage participants to encode targets in less memory-taxing verbal representations, rather than holding images in memory. We predicted that these verbal representations would produce phonological interference when targets and distractors shared phonological onsets. Second, we predicted that verbal target cues would produce greater interference than visual target cues, owing to the lack of guidance from internal visual templates. Previous findings supported these predictions when participants were given only verbal target cues (Walenchok, Hout, & Goldinger, 2013). Here, we conducted two new eye-tracking experiments to determine the nature of this interference.

In both experiments, participants were first familiarized with the names of all stimuli. For the main search task, participants were given either visual (Experiment 1) or verbal (Experiment 2) target cues. Within each experiment, participants quickly determined target presence or absence for either one or three potential targets (low and high target load, respectively), with search sets of 12, 16, or 20 items. Only one target could be present in multiple-target search (Figure 1A). Our main variable of interest was competition: Target(s) and distractors shared /bi/ phonological onsets in the experimental condition (e.g., "beaker", "beast", and "beanie"). Three control conditions completed the design: (1) /bi/ target onset(s) with distractors drawn from a heterogeneous pool, each having a different onset; (2) target(s) drawn from the heterogeneous pool, with all distractors having /bi/ onsets; and (3) both target(s) and distractors drawn from the heterogeneous pool. Both RTs and eye movements were recorded.
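To make the factorial structure concrete, here is a minimal sketch of the within-experiment design space: target load (1 vs. 3) × set size (12, 16, 20) × competition (one experimental and three control conditions). All names and the trial-specification format are our own illustrative assumptions, not the authors' experiment code.

```python
from itertools import product
import random

TARGET_LOADS = [1, 3]        # low vs. high target load
SET_SIZES = [12, 16, 20]     # total items in the search display

# Competition conditions: which items carry the shared /bi/ onset?
COMPETITION = [
    "targets_bi_distractors_bi",          # experimental condition
    "targets_bi_distractors_hetero",      # control 1
    "targets_hetero_distractors_bi",      # control 2
    "targets_hetero_distractors_hetero",  # control 3
]

def make_trial(load, set_size, competition, target_present):
    """Build one hypothetical trial specification."""
    return {
        "target_load": load,
        "set_size": set_size,
        "competition": competition,
        # At most one target may appear in the display, even under load 3.
        "n_targets_in_display": 1 if target_present else 0,
    }

# Full crossing of the within-experiment factors (one replication):
trials = [
    make_trial(load, size, comp, target_present=random.random() < 0.5)
    for load, size, comp in product(TARGET_LOADS, SET_SIZES, COMPETITION)
]
print(len(trials), "design cells; example:", trials[0])
```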

The following analyses report the effects of target load and competition, our primary variables of interest. In the RTs, we observed a main effect of competition with verbal target cues, F(3, 11) = 9.76, p = .002, η²p = .73, as participants were slower to find targets that shared phonological onsets with the distractors. We also observed a Target load × Competition interaction with image target cues, F(3, 12) = 4.30, p = .028, η²p = .52. As Figure 1B indicates, this effect emerged when people searched for multiple, but not single, targets.

In the eye movements, three variables were analysed: (1) mean distractor fixations, (2) mean distractor fixation durations, and (3) the proportion of total items fixated per trial.
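As a consistency check on the reported statistics (our addition, not the authors'), partial eta squared can be recovered from each F ratio and its degrees of freedom:

\[
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
         = \frac{F \cdot df_{\text{effect}}}{F \cdot df_{\text{effect}} + df_{\text{error}}}.
\]

For the RT main effect of competition with verbal cues, for instance,

\[
\eta_p^2 = \frac{9.76 \times 3}{9.76 \times 3 + 11} = \frac{29.28}{40.28} \approx .73,
\]

which matches the reported value; the same identity reproduces every effect size in this report.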

Figure 1. (A) Sequence of events in a multiple-target search trial. (B) Search time (RT). (C) Mean distractor fixations, given that these distractors had previously been fixated. (D) Mean distractor fixation durations for fixated distractors. (E) Proportion of total items fixated, as a function of competition and target load.
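To make the three dependent measures concrete, the sketch below computes them from a hypothetical fixation log. The record format, the toy data, and all names are our illustrative assumptions; the article does not describe its analysis pipeline at this level of detail.

```python
from collections import defaultdict

# Hypothetical fixation records: (trial_id, item_id, is_distractor, duration_ms)
fixations = [
    (1, "beaker", True, 240), (1, "beast", True, 310), (1, "beaker", True, 180),
    (1, "hammer", False, 200), (1, "beanie", True, 150),
]
set_size = {1: 12}  # total items in each trial's display

distractor_visits = defaultdict(lambda: defaultdict(list))
fixated_items = defaultdict(set)
for trial, item, is_distractor, dur in fixations:
    fixated_items[trial].add(item)
    if is_distractor:
        distractor_visits[trial][item].append(dur)

for trial in fixated_items:
    visits = distractor_visits[trial]
    counts = [len(v) for v in visits.values()]
    durations = [d for v in visits.values() for d in v]
    mean_fixations = sum(counts) / len(counts)       # (1) visits per fixated distractor
    mean_duration = sum(durations) / len(durations)  # (2) mean dwell per distractor fixation
    prop_fixated = len(fixated_items[trial]) / set_size[trial]  # (3) items fixated at least once
    print(f"trial {trial}: {mean_fixations:.2f} fixations/distractor, "
          f"{mean_duration:.0f} ms mean dwell, {prop_fixated:.2f} of items fixated")
```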

In the analysis of distractor fixations, we again observed a main effect of competition with verbal target cues, F(3, 11) = 5.43, p = .015, η²p = .60. We also observed a Target load × Competition interaction with verbal cues, F(3, 11) = 5.06, p = .019, η²p = .58, indicating a greater tendency to fixate distractors that were phonologically similar to the targets in multiple-target search (Figure 1C). The analysis of distractor fixation durations revealed a main effect of competition with verbal target cues, F(3, 11) = 10.46, p = .001, η²p = .74; participants fixated distractors longer in the presence of phonological competitors (Figure 1D). The analysis of proportional fixations revealed a Target load × Competition interaction with verbal target cues, F(3, 11) = 4.05, p = .036, η²p = .53. However, as Figure 1E indicates, the means ran opposite to prediction: In multiple-target search, participants fixated fewer distractors in the phonological competition condition than in the control conditions.

The latter analysis, together with the other eye-movement results, indicates that, although fewer items were fixated overall under phonological competition in multiple-target search, participants tended to return to those distractor items more often and to remain fixated on them longer, on average. These effects emerged primarily when participants searched for multiple, but not single, targets, and when targets were specified with labels rather than with images. This pattern conforms to our previous experiments (Walenchok et al., 2013) and suggests that when people search for several targets specified with images, they can guide attention with great efficiency and are relatively immune to distractor names. When targets are specified verbally, however, participants must consider the distractor items more carefully. In such conditions, search is affected by overlapping object names, despite the ostensibly visual nature of the task.

REFERENCES

Dahan, D., & Tanenhaus, M. K. (2005). Looking at the rope when looking for the snake: Conceptually mediated eye movements during spoken-word recognition. Psychonomic Bulletin & Review, 12(3), 453–459. doi:10.3758/BF03193787

Gorges, F., Oppermann, F., Jescheniak, J. D., & Schriefers, H. (2013). Activation of phonological competitors in visual search. Acta Psychologica, 143, 168–175. doi:10.1016/j.actpsy.2013.03.006

Huettig, F., & Altmann, G. T. M. (2005). Word meaning and the control of eye fixation: Semantic competitor effects and the visual world paradigm. Cognition, 96, B23–B32. doi:10.1016/j.cognition.2004.10.003

Meyer, A. S., Belke, E., Telling, A. L., & Humphreys, G. W. (2007). Early activation of object names in visual search. Psychonomic Bulletin & Review, 14(4), 710–716. doi:10.3758/BF03196826

Schmidt, J., & Zelinsky, G. J. (2009). Search guidance is proportional to the categorical specificity of a target cue. Quarterly Journal of Experimental Psychology, 62(10), 1904–1914. doi:10.1080/17470210902853530

Walenchok, S. C., Hout, M. C., & Goldinger, S. D. (2013). Is an image worth a phonological representation? Investigating the effect of target-distractor phonological similarity in multiple-target search. Journal of Vision, 13, 687. doi:10.1167/13.9.687


Wolfe, J. M. (2007). Guided Search 4.0: Current progress with a model of visual search. In W. Gray (Ed.), Integrated models of cognitive systems (pp. 99–119). New York, NY: Oxford University Press.

Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided Search: An alternative to the Feature Integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419–433. doi:10.1037/0096-1523.15.3.419

The other-race effect in face recognition is sensitive to face format at encoding

Mintao Zhao¹ and Isabelle Bülthoff¹,²

¹Max Planck Institute for Biological Cybernetics, Tübingen, Germany
²Korea University, Seoul, South Korea

People recognize own-race faces better than faces from other races (Meissner & Brigham, 2001). This other-race effect (ORE) has frequently been demonstrated with participants learning and recognizing static, front-view face images. However, in everyday life, people often see faces from different viewpoints and in motion. In the present study, we tested the hypothesis that adding these types of information during face encoding may reduce, if not eliminate, the other-race effect.

The ORE has been attributed to stronger holistic processing (i.e., faces are processed as gestalts rather than as collections of independent face parts) for own-race than for other-race faces (Michel, Caldara, & Rossion, 2006; Tanaka, Kiefer, & Bukach, 2004). The greater engagement of holistic processing for own-race faces is significantly associated with the ORE (DeGutis, Mercado, Wilmer, & Rosenblatt, 2013). Recently, a categorization-individuation model has been proposed to explain the ORE: People tend to individuate own-race faces, but categorize other-race faces as outgroup members.

Please address all correspondence to Mintao Zhao, Max Planck Institute for Biological Cybernetics, Tübingen, Germany. E-mail: [email protected]

© 2013 Taylor & Francis http://dx.doi.org/10.1080/13506285.2013.844971
