Journal of Gerontology 1976, Vol. 31, No. 2, 164-169
Matching of Successive Auditory Stimuli as a Function of Age and Ear of Presentation¹
Jeffrey W. Elias, PhD,² and Merrill F. Elias, PhD³

Subjects in three age groups matched pairs of tone stimuli as "same" or "different." Combinations of stimuli were presented in succession to the same ear or to different ears. Matching times increased with age but were not related to slowing of simple RT. Word stimuli were matched more rapidly than tone stimuli for all conditions, but RT scores suggested that subjects were matching in terms of the physical properties of word and tone stimuli rather than verbal-nonverbal dimensions. "Same" and "different" reaction times differed depending on ear of stimulus presentation. Sex differences were nonsignificant and no Sex by Age interaction was observed.

¹This research was supported, in part, by PHS Research Grant HD 08220 from NICHD to MFE. Thanks are extended to Dr. P. K. Elias, S. Cohen, N. Lass, R. Cohan, the research committee of Toomey Abbott Towers, and the Wagon Wheel Club for their assistance in various aspects of the research project. Data were collected when JEW was a visiting Research Associate at the All-University Gerontology Center, Syracuse.
²Dept. of Psychology, Texas Tech Univ., Lubbock 79409.
³Dept. of Psychology and All-University Gerontology Center, Syracuse Univ., Syracuse 13210.

PREVIOUS laboratory studies indicate that males may be superior to females on tasks requiring some component of spatial ability (Elias & Kinsbourne, 1974; McGlone & Davidson, 1973; Schaie & Strother, 1968). Elias and Kinsbourne (1974) found that elderly females (ages 63 to 67) were inferior to elderly males on a visual-spatial matching task but superior on a visual-verbal matching task in which subjects were required to match stimuli as rapidly as possible. It seems important to determine whether this age by sex interaction is observed for a broad range of laboratory tasks comparing spatial and verbal stimuli. If the sex by age by verbal-nonverbal interactions are limited to specific matching tasks and to specific stimuli defined operationally as verbal or nonverbal, then a question may be raised as to whether interactions with age may not be unique to specific stimulus properties and matching paradigms rather than generalizable to a verbal-nonverbal dimension per se. Thus, one question raised in the present experiment was whether Elias and Kinsbourne's (1974) sex by age by verbal-nonverbal interaction would be observed for a matching task involving auditory stimuli.

A second question raised in the present experiment was whether there would be an age by sex by ear of stimulus presentation interaction. This question was prompted by the literature on hemisphere specialization ("laterality effects") for verbal and nonverbal stimuli. The left cerebral hemisphere appears superior to the right for the recognition of some types of material generally classified as verbal, and the right hemisphere of the brain appears to be superior for the recognition of shapes and melodies (Kimura, 1964; Satz, Achenbach, & Fennell, 1965). Each cerebral hemisphere appears to receive auditory information primarily from the side of the body contralateral to that hemisphere (Kimura, 1973). Based on this relationship between stimulus input, hemispheric function, and verbal-nonverbal stimuli, the following questions were asked: (a) will right ear word presentations result in more rapid word matching than left ear word presentations; (b) will left ear tone presentations result in more rapid matching times than right ear tone presentations; (c) will the above relationships be different for males and females and for different age groups?

METHOD
Fig. 1. Sequence of events for the presentation of stimuli on each trial. A set is made up of combination 1 and combination 2 presented in succession to the same ear or to different ears. Each combination is made up of a pair of words (stimulus 1 and 2) or a pair of tones (stimulus 1 and 2).
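The set structure summarized in this caption can be sketched in code. This is an illustrative sketch only, not part of the original study or apparatus; the stimulus values (a 2,250 Hz standard followed by a 2,000 or 2,500 Hz comparison, with the words "mid," "low," and "high" as analogues) and timing constants are taken from the Stimuli description, while the function and constant names are hypothetical.

```python
# Illustrative sketch (hypothetical names): how a set of two stimulus
# combinations maps onto the correct "same"/"different" response.

# Stimulus values from the text: the standard ("mid", 2,250 Hz) always
# comes first within a combination.
TONE_HZ = {"mid": 2250, "low": 2000, "high": 2500}

# Timing constants from the text (msec).
STIMULUS_DURATION = 500    # each tone or word
WITHIN_PAIR_GAP = 100      # between the two stimuli of a combination
BETWEEN_COMBINATIONS = 500 # between combination 1 and combination 2

def make_combination(comparison):
    """A combination is the standard 'mid' followed by 'low' or 'high'."""
    assert comparison in ("low", "high")
    return ("mid", comparison)

def correct_response(combination_1, combination_2):
    """A set is two successive combinations; the correct response is
    'same' if they are identical and 'different' otherwise."""
    return "same" if combination_1 == combination_2 else "different"

# Examples matching the text: mid-low then mid-low is "same";
# mid-low then mid-high is "different".
print(correct_response(make_combination("low"), make_combination("low")))
print(correct_response(make_combination("low"), make_combination("high")))
```

One-half of the sets in the experiment required each response, so a balanced trial list would draw equally from identical and non-identical pairings of these combinations.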
Subjects. — There were 8 males and 8 females in each age group: young (18-28), middle-aged (38-54), and elderly (63-77). Elderly subjects were healthy, noninstitutionalized persons with retired or semi-retired working status. Subjects were active participants in the social and civic affairs of the community, with "blue collar" and "white collar" employment histories, and were right-handed as classified by the Harris Lateral Dominance Test (Harris, 1958). Questioning revealed no knowledge of hearing difficulties or neurological abnormalities.

Stimuli. — Stimulus combinations were presented in succession either to the same ear or to different ears. Each combination consisted of two successive tones or two successive words. The two tones or words making up a combination were separated by an interval of 100 msec. Tones were 500 msec in duration with a rise time of 25 msec. Time between combinations was 500 msec. Words and tones were presented on separate blocks of trials. For each tone combination, the 2,250 Hz tone was always presented first (Stimulus 1) and followed by either the 2,000 Hz tone or the 2,500 Hz tone (Stimulus 2). A single combination was always presented to the same ear; successive combinations were presented either to the same ear or to different ears. The word stimuli, approximately 500 msec in duration, were designed as analogues of the tone stimuli. The word "mid" (analogous to the 2,250 Hz tone) always preceded the word "high" (analogous to the 2,500 Hz tone) or "low" (analogous to the 2,000 Hz tone). The word pairs making up each combination were recorded on a master tape at the normal speech tone and pace of the first author. The stimulus tape was made from the master tape so that identical stimuli and temporal sequences were used in all presentations.

Two combinations made up a set. A set required either a "same" or a "different" response depending on whether combinations 1 and 2 were the same (e.g., mid-low — mid-low) or different (e.g., mid-low — mid-high). Subjects responded to the tone analogues of these sets in the same manner, e.g., "same" was the appropriate response to a 2,250 Hz-2,500 Hz combination followed by a 2,250 Hz-2,500 Hz combination; "different" was the appropriate response to a 2,250 Hz-2,500 Hz combination followed by a 2,250 Hz-2,000 Hz combination. A schematic diagram of the flow of stimulus events making up a set is shown in Fig. 1. One-half of the sets required a "same" response and one-half required a "different" response. The interval between trials (one set and a response) was approximately seven seconds. Tone stimuli could be identified only in terms of the physical properties of the tones, while words could be identified on the basis of name and physical properties.

Preliminary testing indicated that no single auditory level was optimal for subjects in all age groups. Thus, each subject was asked to adjust the stimulus volume to a clearly audible but comfortable level. Subjects who had difficulties hearing despite adjustment procedures were eliminated from the sample prior to the experiment.

Apparatus. — A Gerbrands Model RT-6 reaction time apparatus was used to record reaction time. The subject's console contained two precision telegraph keys which were held in a depressed position with the index finger and released to respond. Responses (matching times) were recorded to the nearest msec with a Model S-1 Standard Electric timer. Stimuli were recorded on Scotch 212 tape at 36.98 cm per second and played from a Sony Model TC 210 tape recorder through Pioneer S-20 earphones.

Procedure. — Prior to testing on the matching task, simple reaction time data were obtained for each hand. This score represented a measure of the time required to respond to a single tone rather than to a pattern of tones or word stimuli. The subject held the key in a
depressed position and released it upon detection of a 3,800 Hz tone presented simultaneously to both ears. Presentation of a pure tone to both ears was necessary to avoid ipsilateral-contralateral signal-hand detection biasing (Simon, Craft, & Small, 1971). The command "ready" preceded the presentation of the tone at an interval varying from 0.5 to 1.5 seconds. Following a minimum of 15 warm-up trials for each hand, 15 trials were given with each hand, with hand order counterbalanced across subjects. The median of the 15 trials was used as the SRT score for each hand. SRT did not differ between hands, and thus right and left hand scores were combined.

The matching RT task required the subjects to listen through earphones and to identify the sets presented on each trial as "same" or "different." The designation of the response keys as "same" or "different" was counterbalanced within age and sex groups. Counterbalancing was necessary since contralateral or ipsilateral signal-hand biases occur in some cases (Simon et al., 1971). Prior to the recording of matching RTs, subjects were given a block of ten practice trials with both words and tones. Once the experiment began, subjects were not informed of their RT on each trial, but they were informed of incorrect responses. RTs for trials on which responses were incorrect were replaced by correct response times to the identical set of stimulus combinations given at a later time (Davis & Schmitt, 1973). Subjects were informed that the purpose of the experiment was not to determine how many errors would be made under various conditions, but to determine how rapidly they could respond without making a mistake.

The first stimulus combination (e.g., mid-low) was presented only to the left or right ear; the second combination (e.g., mid-high) was presented to the same or opposite ear. This procedure resulted in 26 left ear-left ear (LL), 26 left ear-right ear (LR), 26 right ear-right ear (RR), and 26 right ear-left ear (RL) presentations for word stimuli and an equal number for tone stimuli. Ear presentation conditions were randomized across trials. One-half of each series of stimuli within an ear condition required a "same" response, and one-half required a "different" response. The LL and RR stimulus presentation conditions were of major concern. The LR and RL conditions were regarded as foil conditions. This method of presentation was necessary to prevent subjects from expecting the second combination of stimuli to arrive consistently at the same ear as that of the first combination. Such an expectancy could produce a left or right hemisphere preparation for words and bias attention to the left or right ear prior to arrival of the stimulus (Kinsbourne, 1970).

Experimental sessions generally lasted about 1.5 to 2 hours. To minimize fatigue and to insure high levels of motivation, subjects could request rest periods at any time. The testing atmosphere was relaxed, and subjects were urged to ask questions and to discuss their response strategies when the experiment was completed.

Design and analyses. — A completely factorial design was used with two between-subjects factors (age and sex) and three within-subjects factors (words versus tones; same versus different responses; left versus right ear presentations). Median RTs were calculated for each subject for each cell of the factorial design. These scores were used as raw scores in analyses of variance. Inspection of the variance-covariance matrices indicated that RT scores did not violate the homogeneity assumptions underlying repeated measurements analyses of variance (ANOVA) (Winer, 1962). Following significant interactions, multiple contrasts (α = 0.05) were done with the Tukey "a" test (Winer, 1962).

RESULTS

Fig. 2. Mean RT for the matching RT task and for the simple RT task (SRT) for men and women in three age groups.

Simple RT. — Fig. 2 shows RT for the matching RT task and the simple RT (SRT) task. There was a significant difference among age
groups for SRT, F(2,42) = 12.84, p < .01, when scores were averaged for males and females. The main effect for Sex was not significant for SRT (p > .05), and the Age x Sex interaction was not significant (p > .05). It may be seen (Fig. 2) that the difference in SRTs between the middle-aged and elderly groups was not significant for men.

Matching RT. — The regression of SRT (covariate) on matching RT (variate) was not significant (p > .05). Thus, matching RT means were not adjusted statistically for SRT means via covariance analyses (Winer, 1962). It may be seen in Fig. 2 that matching RTs increased with age, F(2,42) = 10.82, p < .001, and were significantly shorter for word than for tone stimuli, F(1,42) = 28.39, p < .001. The Sex main effect was not significant, and Sex did not interact with any other factor (p values > .05). Table 1 shows mean RT under all combinations of experimental conditions with the exception of sex. The main effect for "same" versus "different" judgments was not significant (p > .05), although differences between means were in the direction reported previously (e.g., Nickerson, 1972), i.e., "same" RTs were shorter than "different" RTs (450 versus 460 msec) averaging over all conditions. The following interactions failed to reach significance (p > .05): Ear x Word-Tone, Age x Same-Different x Ear, and Age x Same-Different x Ear x Word-Tone. Consequently, tests of interactions and multiple contrasts within word and tone conditions were inappropriate. The Same-Different x Ear interaction was significant, F(1,126) = 19.96, p < .001. This interaction may be described as follows. Ignoring all other factors including age, means for the LL and RR presentations were 432 and 454 msec, respectively, for "same" judgments (p < .005); means for the LL and RR conditions were 487 and 443 msec, respectively, for "different" judgments (p < .05). Within ear of presentation conditions, the 55 msec difference between "same" (432 msec) and "different" (487 msec) judgments for LL presentations was significant (p < .05). However, a small and nonsignificant difference of 11 msec between "same" and "different" judgments (454 versus 443 msec) was observed for RR presentations.

Table 1. Means (msec) and Standard Deviations (SD) for Words and Tones as a Function of Age, Same-Different Judgments, and Ear of Presentation. [The table reports M and SD for each age group (63-77, 38-54, 18-28) under each Words/Tones x Same/Different x LL/RR condition; the individual cell entries were not recoverable from this copy.]
Note: The Age x Same-Different x Word-Tone x Ear interaction was not significant. The Word-Tone x Ear interaction was also not significant; means averaged over the Age and Same-Different factors were as follows: Words-LL (423 msec); Words-RR (429 msec); Tones-LL (495 msec); Tones-RR (468 msec).

Error data. — The mean percentage of errors on the matching task was 2.5%, with homogeneous within-cell percentages. These error data were not further analyzed.

DISCUSSION

Matching times for verbal stimuli were shorter than those for tone stimuli for men and women of all three age groups, but no Age x Verbal-Nonverbal interaction was observed. This finding is not consistent with findings for a previous experiment by Elias and Kinsbourne (1974) in which elderly females were inferior to elderly males on a nonverbal matching task, but superior on a verbal matching task. However, direct comparisons between the auditory stimuli used here and the visual stimuli used by Elias and Kinsbourne are complicated by the fact that it is difficult, if not impossible, to equate demands on short-term memory and other task parameters. Nevertheless, these data indicate that verbal-nonverbal by sex by age interactions for matching tasks are task specific, or alternatively, that subjects in the present study matched words and tones on some basis other than a verbal-nonverbal dimension.

It is highly likely that subjects in the present experiment responded to both words and tones on the basis of their auditory-nonverbal characteristics. Specifically, words may not have been matched on the basis of word meaning, but rather on the basis of the initial L and H sounds. This hypothesis is supported by (a) verbal reports to this effect from subjects, (b) the lack of left and right hemisphere superiority for words and tones respectively, and (c) the relatively short RTs to words. Word stimuli were 500 msec in duration. If all subjects were waiting the 500 msec necessary to hear the entire word before responding, RTs would be considerably slower than those reflected in Table 1. These data indicate that differences in matching times for words versus tones may have been related to task difficulty, i.e., subjects could respond to the initial sounds of words without regard to meaning, but they had to listen to the entire tone, or most of it, to make a "same" or "different" judgment. Thus, findings in the present experiment cannot be interpreted in terms of differences in matching for verbal and nonverbal stimulus dimensions represented by words and tones. Moreover, while stimuli in the Elias-Kinsbourne study were defined operationally as differing along a verbal-nonverbal dimension, there was no evidence that subjects utilized these stimulus dimensions. It seems quite clear that criteria for determining whether or not stimuli are actually matched on the basis of verbal and nonverbal dimensions are of critical importance to investigations of Age x Verbal-Nonverbal x Sex interactions.

Left versus right ear matching times do not support contemporary hypotheses concerning shorter matching times for tonal pattern stimuli with more direct access to the right hemisphere. In fact, the relationship between the RR and LL means (Table 1, tones) was opposite to that which would be observed if there were a right hemisphere advantage for tone matching. This finding may be related to the fact that it is more difficult to obtain laterality effects with a reaction time paradigm and an interstimulus interval than it is for studies requiring the recall of stimuli presented dichotically to both ears simultaneously (Geffen, Bradshaw, & Wallace, 1971).

"Laterality effects" were observed for "same" and "different" judgments. A review of the literature indicates that dominance of the left or right hemisphere for classifying stimuli as same or different is not as well agreed upon as it is for verbal versus spatial dimensions (Davis & Schmitt, 1973; Egeth & Epstein, 1972). In the present experiment, only LL ear presentations were associated with significant differences in the time required to make "same" and "different" judgments. If results for "same" and "different" judgments for the left and right ears are interpreted in the manner of other laterality studies (Cohen, 1972; Davis & Schmitt, 1973; Kimura, 1973), it would seem that the right hemisphere (via the contralateral ear) offers some advantage for "same" judgments, but left hemisphere presentations are not associated with any advantage for "same" versus "different" judgments. Clearly, this conclusion cannot be reached on the basis of findings in the present experiment, but the possibility is worth examining with less complex auditory tasks specifically designed for this purpose. Similar relationships between the hemispheres and "same"-"different" judgments have been suggested by Bamber (1969) and by Davis and Schmitt (1973) for the visual matching of letters. The rationale underlying a possible superiority of the right hemisphere for "same" judgments is related to the issue of serial and parallel information processing, an issue which is beyond the scope of this report. The reader may wish to consult reviews by Bamber (1969) and Nickerson (1972).

The largest differences in matching times between "same" and "different" judgments for left ear presentations (right hemisphere) were observed for the elderly subjects under the condition in which they matched tone stimuli requiring the response "different." However, the Age x Word-Tone x Same-Different interaction must be considered a chance finding in view of the fact that it did not reach significance (p > .05).

The slopes of the curves for matching RT and simple RT are different (Fig. 2), and differences between means are quite large. Thus, consistent with findings for previous experiments (Elias & Kinsbourne, 1974; Talland, 1965), slowing of motor response does not appear to account for the increase in matching time with age. Introspective reports by older subjects suggest that they required a longer time to rehearse the rules for making a "same" or "different" response than younger subjects. If this is true, it may account, in part, for increasing matching times with increasing age. Further studies in which memory load is manipulated by varying the number of alternatives in a set of stimuli are necessary to test this hypothesis.

SUMMARY

Previous experiments suggest that verbal stimuli are processed more efficiently if they have more direct access to the left hemisphere and nonverbal stimuli are processed more efficiently if they have more direct access to the right hemisphere. Further, it appears that males are superior to females on some tasks demanding spatial ability and females are superior to males on some tasks placing heavy demands on verbal ability. Moreover, for visual matching tasks using a reaction time paradigm, it seems that a Sex by Verbal-Nonverbal interaction may be exaggerated for elderly persons. Thus, it seemed important to test the generality of the sex by age interaction for an auditory matching task, and to raise questions concerning differences in matching efficiency for different age and sex groups when stimuli are provided more direct access to the right or left cerebral hemisphere by virtue of ear of presentation. Male and female subjects were placed in three age groups and asked to match pairs of tone stimuli as "same" or "different" as rapidly as possible without making a mistake. Sometimes combinations of stimuli were presented to the same ear twice; other times one member of each pair was presented to a different ear. Time required for matching stimulus combinations increased with age, and was not related to a simple slowing in motor response. Word stimuli were matched more rapidly than tone stimuli, but no Word-Tone x Age x Sex interaction was observed. Inspection of RT scores suggested that subjects based matching judgments on the physical properties of both word and tone stimuli rather than verbal and nonverbal (physical) properties of the stimuli. Sex differences were not significant and no Sex x Age interaction was observed. Differences in findings between this study and a previous experiment using the visual modality may be related to memory load differences and differences in other factors in addition to mode of stimulus input. Alternatively, differences may be due to the fact that verbal-nonverbal stimulus dimensions designed by the experimenter may not be utilized by the subject as a basis for matching.

REFERENCES

Bamber, D. Reaction times and error rates for "same"-"different" judgments of multidimensional stimuli. Perception & Psychophysics, 1969, 6, 169-174.
Cohen, G. Hemispheric differences in a letter classification task. Perception & Psychophysics, 1972, 11, 139-142.
Davis, R., & Schmitt, V. Visual and verbal coding in the interhemispheric transfer of information. Acta Psychologica, 1973, 37, 229-240.
Egeth, H., & Epstein, J. Differential specialization of the cerebral hemispheres for the perception of sameness and difference. Perception & Psychophysics, 1972, 12, 218-220.
Elias, M., & Kinsbourne, M. Age and sex differences in the processing of verbal and nonverbal stimuli. Journal of Gerontology, 1974, 29, 162-171.
Geffen, G., Bradshaw, J., & Wallace, G. Interhemispheric effects on reaction time to verbal and nonverbal visual stimuli. Journal of Experimental Psychology, 1971, 87, 415-422.
Harris, A. J. Harris Tests of Lateral Dominance. Psychological Corp., New York, 1958.
Kimura, D. Left-right differences in the perception of melodies. Quarterly Journal of Experimental Psychology, 1964, 16, 355-358.
Kimura, D. The asymmetry of the human brain. Scientific American, 1973, 228, 70-78.
Kinsbourne, M. The cerebral basis of lateral asymmetries in attention. Acta Psychologica, 1970, 33, 193-201.
McGlone, J., & Davidson, W. The relation between cerebral speech laterality and spatial ability. Neuropsychologia, 1973, 11, 105-112.
Nickerson, R. S. Binary classification reaction time: A review of some studies of information processing capabilities. Psychonomic Monograph Supplements, 1972, 4, 275-317.
Satz, P., Achenbach, K., & Fennell, E. Order of report, ear asymmetry and handedness in dichotic listening. Cortex, 1965, 1, 377-396.
Schaie, K. W., & Strother, C. R. A cross-sequential study of age changes in cognitive behavior. Psychological Bulletin, 1968, 70, 671-680.
Simon, J. R., Craft, J. L., & Small, A. M. Reactions toward the apparent source of an auditory stimulus. Journal of Experimental Psychology, 1971, 89, 203-206.
Talland, G. A. Initiation of response and reaction time in aging, and with brain damage. In A. T. Welford & J. E. Birren (Eds.), Behavior, aging and the nervous system. Charles C Thomas, Springfield, 1965.
Winer, B. J. Statistical principles in experimental design. McGraw-Hill, New York, 1962.