Exp Brain Res DOI 10.1007/s00221-014-3945-6

Research Article

Reach and Grasp reconfigurations reveal that proprioception assists reaching and hapsis assists grasping in peripheral vision

Lauren A. Hall · Jenni M. Karl · Brittany L. Thomas · Ian Q. Whishaw

Received: 26 November 2013 / Accepted: 3 April 2014 © Springer-Verlag Berlin Heidelberg 2014

Electronic supplementary material  The online version of this article (doi:10.1007/s00221-014-3945-6) contains supplementary material, which is available to authorized users.

L. A. Hall · J. M. Karl (*) · B. L. Thomas · I. Q. Whishaw
Canadian Centre for Behavioural Neuroscience, Department of Neuroscience, University of Lethbridge, Lethbridge, AB T1K 3M4, Canada
e-mail: [email protected]

Abstract  The dual visuomotor channel theory proposes that prehension consists of a Reach that transports the hand in relation to an object's extrinsic properties (e.g., location) and a Grasp that shapes the hand to an object's intrinsic properties (e.g., size and shape). In central vision, the Reach and the Grasp are integrated, but when an object cannot be seen, the movements can decompose, with the Reach first used to locate the object and the Grasp postponed until it is assisted by touch. Reaching for an object in a peripheral visual field is an everyday act, and although changes in Grasp aperture with target eccentricity have been reported, it is not known whether the configuration of the Reach and the Grasp also changes. The present study examined this question by asking participants to reach for food items at 0°, 22.5°, or 45° from central gaze. Participants made 15 reaches for a larger round donut ball and a smaller blueberry, and hand movements were analyzed using frame-by-frame video inspection and linear kinematics. Perception of the targets was degraded: participants could not identify objects in peripheral vision, although they did recognize their difference in size. The Reach to peripheral targets featured a more dorsal trajectory, a more open hand, and less accurate digit placement. The Grasp featured hand adjustments or target manipulations after contact, which were associated with a prolonged Grasp duration. Thus, Grasps in peripheral vision did not consist only of a simple modification of visually guided reaching but included the addition of somatosensory assistance. The kinematic and behavioral changes argue that proprioception assists the Reach and touch assists the Grasp in peripheral vision, supporting the idea that Reach and Grasp movements are used flexibly in relation to sensory guidance depending upon the salience of target properties.

Keywords  Dual visuomotor channels · Peripheral vision · Prehension · Reach · Grasp · Haptic

Introduction

The dual visuomotor channel theory (Jeannerod et al. 1995) proposes that reaching for an object consists of two movements. The Reach transports the hand to an object and is guided by the target's extrinsic properties (e.g., location), whereas the Grasp shapes the digits for object purchase and is guided by the target's intrinsic properties (e.g., size, shape). Temporal integration of the two movements into a single prehensile act occurs in foveal vision (Prablanc et al. 1979). Foveal vision and the Reach are tightly coupled, because a target object is visually fixated as the reach begins and disengaged as the object is touched (de Bruin et al. 2008; Sacrey and Whishaw 2012).

Visual guidance of reaching can be replaced by somatosensory guidance when somatosensory cues provide online information about the intrinsic and extrinsic properties of an object, as occurs when subjects reach for an object located on the body or held in the mouth (Edwards et al. 2005; Karl et al. 2012b; Pettypiece et al. 2010). Visual integration of the Reach and the Grasp is proposed to be mediated by dorsal stream occipitoparietal-frontal pathways (Binkofski et al. 1999; Cavina-Pratesi

et al. 2010b; Jeannerod 1999; Rizzolatti et al. 1998; Tanné-Gariépy et al. 2002; Vesia et al. 2013), and accordingly, somatosensory cuing must include use of these pathways (Dijkerman and de Haan 2007; Fiehler and Rösler 2010; Fiehler et al. 2009; Karl and Whishaw 2013).

If target properties are unknown, as occurs when reaching for an unseen and unknown target, e.g., when blindfolded, the configuration of the Reach and the Grasp changes (Karl et al. 2012a). Proprioceptive guidance first directs the Reach to the target, after which haptic contact with the target guides the Grasp (Karl et al. 2012a; Karl and Whishaw 2013). This finding illustrates that the configuration of the Reach and the Grasp can be adjusted when the salience of target cues changes.

Reaching to a peripheral visual field is another everyday act, as occurs when reaching for one object while looking at another. The reduced acuity of peripheral vision, however, produces uncertainty about a target's visual properties (Bedell and Johnson 1984; Loftus et al. 2004; Schlicht and Schrater 2007). Kinematic measures show that the Grasp is modified such that peak digit aperture, the maximum opening of the digits in preparation to Grasp, increases with target eccentricity (Brown et al. 2005; Schlicht and Schrater 2003, 2007). It is not known whether the configuration of the Reach and the Grasp also changes, and it is this question that is examined in the present study. A potential change in reaching strategy to peripheral targets is relevant to the question of how the Reach and the Grasp are integrated across the wide range of conditions in which they are used (Goodale and Milner 1992; Milner and Goodale 2006).

For the experiment, participants repeatedly reached for either a blueberry or a round donut ball, which was placed on one of two pedestals.
One pedestal was located directly in front of the participant (0°) and the other at either 22.5° or 45° from the central pedestal, ipsilateral to the reaching hand. Different groups of participants were used for each target location to control for the influence of learning (Brown et al. 2005; Karl et al. 2013). Because reaching intent influences grasping, participants were asked to Grasp the food item and bring it toward their mouth as if they were going to eat it (Ansuini et al. 2008; Sartori et al. 2011; Valyear et al. 2011). Because reaching differs between dominant and nondominant visual fields (Gonzalez et al. 2008), participants reached only to their dominant peripheral visual field.

Linear kinematics were used to describe arm trajectory, digit aperture, and reaching time, whereas frame-by-frame behavioral analysis was used to describe object contact, hand shape, and Grasp strategy in relation to food item, visual condition, and trial. Separate groups of participants were asked only to identify the target objects and to indicate their size by shaping the thumb and index finger.


Materials and methods

Participants

Seventy-three young adults (mean age = 21.7 years) were recruited for the study from a second-year class at the University of Lethbridge. Forty-five participants were randomly assigned to one of three groups of 15 for the object identity experiment: central vision, 22.5° peripheral vision, and 45° peripheral vision. Twenty-eight participants were randomly assigned to one of three groups for the reaching experiment: central vision (0°, n = 9), 22.5° peripheral vision (n = 10), and 45° peripheral vision (n = 9). All participants provided informed consent and self-reported no history of neurological or motor disorders. All participants were selected because they were right-handed for writing, and all reported normal or corrected-to-normal vision.

Procedures

Participants were positioned in a comfortable seated upright posture (Karl et al. 2012a). One self-standing pedestal was placed directly in front of the participant. Another was placed to the participant's side at an eccentricity of either 22.5° or 45°, ipsilateral to the hand used for reaching. Both pedestals were adjusted for each participant to allow full extension of the arm while reaching. Using a sample object (neither a blueberry nor a donut), all participants were shown the general location where the reaching target would be placed on the pedestal. At the beginning of each trial, participants placed their right hand in the start position, with the index finger and thumb touching and resting on the dorsal aspect of the right thigh (Jakobson and Goodale 1991; Pettypiece et al. 2010).

Targets

The main reaching targets were two food items, a blueberry and a donut ball. The donut ball, known to Canadians as a Timbit, is a well-known food item.
Based on measurements of ten target items, the blueberry was approximately 10.32 ± 0.66 mm in diameter and the donut ball approximately 27.51 ± 1.51 mm in diameter. To instruct the participants, other target items were used, including a small black paper clip and a small red toy car.

Object identity experiment

For pretraining, participants were asked to look at the central pedestal, and a target object, a nonfood item, was briefly placed on the pedestal. The participants were first asked to identify the target object and were then instructed to Grasp the target and hold it upright with the thumb and index finger. They were then asked to put the object down and indicate its size with the thumb and index finger at the location where they had previously held the target.

Four target objects were then used for the object identity and size experiment: the paper clip, the car, the blueberry, and the donut ball. Each object was placed, in a haphazard sequence, on the appropriate target pedestal for that group. For all conditions, the participants were asked to look at the central platform as each object was presented. After the object was removed, the participants were asked first to identify the object verbally and then to indicate its size using the index finger and thumb.

Fig. 1  Reach-to-Grasp task. a Central vision: a participant reached for a target placed on the pedestal directly in front of them. b Peripheral vision: a participant foveated the top of the pedestal directly before them and reached for a target placed either at 22.5° or 45° in the visual field ipsilateral to the reaching hand. c Measures of grasp aperture relative to reach duration: peak aperture, first contact aperture, and final grasp aperture were measured as the distance between the thumb and primary grasping digit over the duration of a reach

Reaching experiment

Participants were instructed to reach for the object and bring it up toward their mouth as if they were going to eat it. A reaching target, either the blueberry or the round donut ball, was placed on the appropriate pedestal (Karl et al. 2012a). Reaching was initiated after a verbal "one, two, three, GO" command, upon which participants reached for the object, brought it up to their mouth, and then handed it to the experimenter. They then returned their hand to the start position on their knee to prepare for the next trial. Kinematic and behavioral data were recorded from the right hand as participants reached for and grasped the objects.

Groups of participants were tested with the target either in the 0° position, which allowed them to foveate the target as they reached, as illustrated in Fig. 1a, or in the peripheral vision conditions, as illustrated in Fig. 1b, during which they were required to foveate the platform directly in front of them.

Kinematic data collection and analysis

An Optotrak Certus Motion Capture System and NDI First Principles software (Northern Digital Inc.) were used to record hand kinematic data. Infrared-emitting diodes (IREDs) were placed on the right hand: one each on the thumb, index finger, middle finger, and wrist (Schmidt et al. 2009). For each trial, the positions of the IREDs were recorded at 200 Hz for 4 s. Recordings commenced coincident with the verbal cue to start reaching. To account for differences in IRED placement between participants, control aperture measures were taken as participants pressed either the index finger and thumb or the middle finger and thumb together; these were subsequently subtracted from all relevant kinematic measurements for each trial and participant.

Figure 1c depicts the kinematic measures of interest in this study. Aperture, on the y axis, was defined as the distance between the IRED positions on the thumb and the index finger, or occasionally the thumb and the middle finger when the index finger was not used in the Grasp of the object. The curved line illustrates the idealized change in aperture over the duration of a reach, beginning with the index finger and thumb touching in the start position and concluding with the object being held.

1. Peak aperture was defined as the maximum aperture observed between the onset of movement and first contact with the object.
2. First contact aperture was defined as the aperture observed when the participant first touched the object.

3. Final Grasp aperture was defined as the aperture observed when the object was fully grasped and held.

Behavioral data collection and analysis

A high-speed digital video camera, time-synchronized with the Optotrak system and operating at 100 frames/s, was positioned to the left side of the participant, approximately perpendicular to the midline of the body. It recorded a reach-side view of the participant from the lower leg to the chin. Another video camera, operating at 300 frames/s, was positioned to record a similar lateral view of the participant. Representative still frames were captured and cropped using the screenshot program Snipping Tool (Microsoft Windows). Pictures were adjusted for brightness and contrast in Adobe Photoshop (V.12.0 × 64) but not altered in any other way. Offline, frame-by-frame analysis of the time-synchronized high-speed video record was used to score further behavioral measures, which were then compared to the kinematic data.

1. Peak trajectory height was defined as the maximum vertical distance attained by the knuckle of the index finger relative to the top of the pedestal between movement onset and final Grasp. Using frame-by-frame video analysis, still frames were captured, and distances were measured and scaled in Adobe Photoshop; a virtual line demarking the horizontal axis of the pedestal top was used to scale all measurements.
2. Movement times of important movement events were determined using both video frame counts converted to seconds and time-synced video-kinematic data.
3. Time of movement onset was defined as the instant of first discernible movement of the hand toward the object.
4. Time of peak aperture was defined as the time from movement onset to peak aperture.
5. Time of first contact was defined as the time from movement onset to first contact with the object.
6. Time of final Grasp was defined as the time from movement onset to final Grasp of the object.
7. Digit contact locations were determined through frame-by-frame video analysis using previously defined methods (Karl et al. 2013). Video data were reviewed for all participants for both target objects, and the location of first contact was marked on the target object. The average first contact location in the central vision condition was regarded as the "reference" first contact location. The distance between the actual contact point and the reference contact point


was measured in Photoshop to quantify the deviation of the contact point from the reference.
8. Digit to make first contact was scored by reviewing the video record and reporting which part of the hand (palm, thumb, index, middle, or ring finger) first contacted the target.
9. Visual grasp strategy. A visual grasp strategy was identified as one in which the digits were preshaped in an over Grasp before object contact, were closed to a grasping configuration as the target was approached, and grasped the target directly, without adjustments, as the target was contacted (Karl et al. 2012a).
10. Haptic grasp strategy. A Grasp was considered haptic if the hand was not shaped or oriented prior to first contact and/or featured readjustments of hand shape, digit orientation, or digit contact location after first contact. Such Grasps included an adjust, in which the target is grasped, slightly released, and grasped again; a capture, in which a digit or the palm rests on the target while the other digits close to Grasp; a manipulate, in which the target is moved by one or more digits before being grasped; and a release, in which a digit touched the target and the hand was then withdrawn and reshaped to Grasp (Karl et al. 2012a).

Behavioral scoring

Initially, three authors agreed on the rating criteria (above) and confirmed interrater reliability with a sample of 20 reaches to each of the target objects (interrater reliability exceeded 90 %). Then, a video presentation was prepared and shown to a large class of second-year university students (n = 263). The viewers were given two examples of visually guided reaches for the donut and two examples of nonvisually guided reaches for the donut and were instructed to observe the trajectory of the hand and the manner of grasping.
Then, 15 video clips, consisting of 5 visual grasps, 5 nonvisual grasps, and 5 peripheral vision grasps, were presented in a random sequence, and the participants were asked to indicate, using a class clicker system, whether each reach was visual or nonvisual. No special instructions concerning reach trajectories or grasp strategies were given, and the participants had not been shown a reach from peripheral vision prior to the test. It was expected that if the peripheral reaches resembled visual reaches, the participants would indicate that they were visual, and if they resembled nonvisual reaches, the participants would indicate that they were nonvisual.
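The aperture measures defined above reduce to simple operations on the marker coordinates: a per-frame thumb-digit distance, correction by the participant's control aperture, and a maximum taken between movement onset and first contact. As an illustration only (the study used NDI First Principles software; the function names, marker trajectories, and frame indices below are hypothetical), the computation can be sketched as:

```python
import numpy as np

def aperture_series(thumb_xyz, digit_xyz, control_aperture=0.0):
    """Per-frame aperture: Euclidean thumb-digit distance (mm),
    minus the participant's control aperture (digits pressed together)."""
    return np.linalg.norm(thumb_xyz - digit_xyz, axis=1) - control_aperture

def peak_aperture(aperture, onset_frame, contact_frame):
    """Maximum aperture between movement onset and first contact,
    returned together with the frame at which it occurred."""
    window = aperture[onset_frame:contact_frame + 1]
    return window.max(), onset_frame + int(window.argmax())

# Synthetic example: a 4 s recording at 200 Hz (800 frames) in which the
# hand opens and closes along one axis (values in mm; not real data).
fs = 200
t = np.arange(4 * fs) / fs
thumb = np.zeros((len(t), 3))
index = np.zeros((len(t), 3))
index[:, 0] = 10 + 60 * np.exp(-((t - 1.2) ** 2) / 0.1)  # widest at t = 1.2 s

ap = aperture_series(thumb, index, control_aperture=10.0)
peak, frame = peak_aperture(ap, onset_frame=100, contact_frame=400)
# Time of peak aperture (s) follows from the frame count and sampling rate:
time_of_peak = (frame - 100) / fs
```

First contact aperture and final Grasp aperture are simply `ap` evaluated at the contact and final-Grasp frames; the same frame-to-seconds conversion applies to the video-based timing measures, with the camera's 100 frames/s in place of the 200 Hz kinematic sampling rate.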


Statistical analysis

For the object identity experiment, data for the blueberry and the donut ball were analyzed. For the reaching experiment, data for the first and last trial of each condition for each participant were analyzed. The results were evaluated using a mixed analysis of variance (mixed ANOVA) in SPSS (V.19). Vision (0° vs. 22.5° vs. 45°) and sex (male vs. female) served as between-subjects factors. Object (blueberry, donut) and trial (first, last) served as within-subject factors when analyzing peak trajectory height, digit contact locations, aperture (peak, first contact, and final grasp), and time (to peak trajectory, to peak aperture, to first contact, and to final grasp). A p value of
