Developmental Science 17:6 (2014), pp 826–827

DOI: 10.1111/desc.12200

COMMENTARY

Redundant constraints on human face perception?

Linda B. Smith and Swapnaa Jayaraman

Psychological and Brain Sciences, Indiana University, USA

This is a commentary on Wilkinson, Paikan, Gredebäck, Rea & Metta (2014). Human face perception is relevant to many species-important tasks and to their development; these tasks go beyond the recognition and discrimination of individuals and groups, and include speech perception, language learning, emotional development, and social behavior. Given the general importance of face perception and the information that faces convey to humans, it makes sense that human face perception – and its development – is highly constrained. One starting point for theoretical discussions of these constraints is the result, reported in two empirical studies, that newborns are biased to look at very simple ‘face-like’ arrays (Goren, Sarty & Wu, 1975; Johnson, Dziurawiec, Ellis & Morton, 1991), a bias that does not characterize the better studied preferences of older infants (Johnson et al., 1991). The neonatal bias is often interpreted in terms of a face template and innate face-specific visual processes. The alternative to this claim about innateness has been more amorphous: that the neonatal preference arises from biases that are somehow more general. Within the context of this debate, the BCM provides a new hypothesis about a mechanism that might underlie neonatal face preferences, one that emerges from general principles of binocularity and current neurophysiological evidence about the underlying visual circuitry. The plausibility of the hypothesis as a mechanism is demonstrated through robotic simulations that are capable of yielding the preference; moreover, the account yields testable (and disconfirmable) hypotheses. There are critical open questions concerning whether neonatal vergence and binocular circuitry (see Braddick & Atkinson, 2011) are up to the task. Nonetheless, this clear and novel account with its ‘proof of concept’ simulations is a contribution that should advance the research agenda – and our understanding – of infant visual biases.
Here we consider broader implications of the idea that the body and its morphology may provide important constraints on the development of visual face processing.

Brains extend to the periphery

The response properties of the human visual system are remarkably well aligned to the statistical regularities of the natural visual world, a fact that suggests that the visual system has evolved to optimize processing of those natural statistics (Simoncelli, 2003). Human faces, with their specific properties, are (and probably have been) an important part of visual scenes for human beings, and thus optimized sensitivity to the configural properties of faces is not surprising. However, any optimization of statistically prevalent or important properties necessarily depends on both the sensory surface and the internal circuitry. Recent theoretical advances in computational biology make clear that different locations of sensors create different statistical regularities in the neurally received input and require different solutions and neural circuitry to optimize responsivity to specific environmental regularities (e.g. Lungarella & Sporns, 2006). Current understanding of human binocular circuitry is one example. Because the eyes are separated in space and have different views, the internal circuitry must find and selectively integrate corresponding signals from the two retinas, and it does so by putting a gain on binocularly correlated inputs. In this way, the body’s morphology selects and alters the environmental statistics, enhancing some regularities (or configurations) over others. As the BCM shows, these properties of the sensory surface are thus relevant to how humans process visual information about faces.

Brains and bodies co-evolved

The brain, the body, and the environment form a real-time dynamic system. Internal neural activity is – in the moment – directly influenced by the body and its spatially distributed sensors as well as by the immediate environment (Chiel & Beer, 1997). Internal neural activity also generates behavior, and by contributing to action and behavior, neural activity in one part of the brain can drive neural activity elsewhere – not only within the brain but also by looping through the sensorimotor environment (e.g. Ghazanfar, Nielsen & Logothetis, 2006). In brief, the brain, body, and environment form a complex system of outside-in and inside-out dependencies. We also know that within individuals, neural activity changes the functional and most likely also the structural patterns of connectivity in the brain (see Sporns, 2011). These interdependencies among brain, body, and environment were also operational over evolutionary time; thus selective pressures might be best understood as operating on the brain-body-environment system as a whole. Because brain, body, and behavior co-evolved, it seems unlikely that hypothesized constraints can be teased apart in any meaningful way as adaptation versus exaptation, or as being specific to a certain kind of stimulus versus having more general consequences. Further, complex dynamic systems that are multiply constrained are more robust, more stable, and better able to withstand perturbations (Sporns, 2011; Thelen & Smith, 1994). Therefore, and especially given the importance of face perception in so many human endeavors, multiple overlapping redundant constraints make sense. If this idea is right, then perhaps we should work less at pitting hypothesized constraints against each other and harder at carefully understanding the multiple mechanisms, their operating characteristics, and their dependencies.

Address for correspondence: Linda B. Smith, Department of Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, USA; e-mail: [email protected]

© 2014 John Wiley & Sons Ltd

Starting biases are not the whole story

Although infants begin with special sensitivities to and interest in faces, face perception and processing are not mature until adolescence. Mature face perception is characterized by the ability to identify a large number of individual faces; to categorize faces into subgroups by age, gender, and race; and to read the intentions and goals of another from facial gestures (see Calder, Rhodes, Johnson & Haxby, 2011). All of these developments appear to be dependent (albeit to varying degrees) on experience looking at faces. Thus, a neonatal bias to look at faces might seem critical to the development of face perception. But do infants need to be biased to look at faces? It seems likely that faces are highly prevalent in infants’ visual environments (see Sugden, Mohamed-Ali & Moulson, 2014). Infant immaturity and the need for continuous care impose strong constraints, filling infants’ visual environments with faces. If faces are highly prevalent in the infant visual environment, the question emerges as to just how much actual work a neonatal bias to look at faces does in developing face perception, especially since the bias disappears and is not evident beyond very early infancy. We are reminded of the stepping reflex in infants: an early behavioral pattern that looks like coordinated walking, then disappears, and does not play an explanatory role in the processes through which walking develops (see Thelen & Smith, 1994). Thus, the function – if any – of the neonatal bias to look at very simple ‘face-like’ stimuli is also open to debate.

In summary, the BCM presents an elegant model of how the sensory surface – and the location of the sensors – may matter, and of how the brain extends into the periphery. All this matters for understanding how and why the human visual system is as it is, and it might explain the neonatal bias to look at very simple ‘face-like’ arrays.

References

Braddick, O., & Atkinson, J. (2011). Development of human visual function. Vision Research, 51(13), 1588–1609.

Calder, A., Rhodes, G., Johnson, M., & Haxby, J. (Eds.) (2011). Oxford handbook of face perception. Oxford: Oxford University Press.

Chiel, H.J., & Beer, R.D. (1997). The brain has a body: adaptive behavior emerges from interactions of nervous system, body and environment. Trends in Neurosciences, 20(12), 553–557.

Ghazanfar, A.A., Nielsen, K., & Logothetis, N.K. (2006). Eye movements of monkey observers viewing vocalizing conspecifics. Cognition, 101(3), 515–529.

Goren, C., Sarty, M., & Wu, P. (1975). Visual following and pattern discrimination of face-like stimuli by newborn infants. Pediatrics, 56(4), 544–549.

Johnson, M., Dziurawiec, S., Ellis, H., & Morton, J. (1991). Newborns’ preferential tracking of face-like stimuli and its subsequent decline. Cognition, 40(1–2), 1–19.

Lungarella, M., & Sporns, O. (2006). Mapping information flow in sensorimotor networks. PLoS Computational Biology, 2(10), e144.

Simoncelli, E.P. (2003). Vision and the statistics of the visual environment. Current Opinion in Neurobiology, 13(2), 144–149.

Sporns, O. (2011). Networks of the brain. Cambridge, MA: MIT Press.

Sugden, N.A., Mohamed-Ali, M.I., & Moulson, M.C. (2014). I spy with my little eye: typical, daily exposure to faces documented from a first-person infant perspective. Developmental Psychobiology, 56(2), 249–261.

Thelen, E.S., & Smith, L.B. (1994). A dynamic systems approach to the development of cognition and action. Cambridge, MA: MIT Press.

Wilkinson, N., Paikan, A., Gredebäck, G., Rea, F., & Metta, G. (2014). Staring us in the face? An embodied theory of innate face preference. Developmental Science, this issue.
