Relationship among the physiologic channel interactions, spectral-ripple discrimination, and vowel identification in cochlear implant users

Jong Ho Won, Elizabeth L. Humphrey, Kelly R. Yeager, Alexis A. Martinez, Camryn H. Robinson, Kristen E. Mills, and Patti M. Johnstone
University of Tennessee Health Science Center, Knoxville, Tennessee 37996

Il Joon Moon
Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, School of Medicine, Sungkyunkwan University, Seoul, 135-710, South Korea

Jihwan Woo (a)
Department of Biomedical Engineering, University of Ulsan, Ulsan, 680-749, South Korea

(a) Author to whom correspondence should be addressed. Electronic mail: [email protected]

(Received 6 May 2014; revised 25 August 2014; accepted 29 August 2014)

The hypothesis of this study was that broader patterns of physiological channel interactions in the local region of the cochlea are associated with poorer spectral resolution in the same region. Electrically evoked compound action potentials (ECAPs) were measured for three to six probe electrodes per subject to examine the channel interactions in different regions across the electrode array. To evaluate spectral resolution at a confined location within the cochlea, spectral-ripple discrimination (SRD) was measured using narrowband ripple stimuli with the bandwidth spanning five electrodes: two electrodes apical and two electrodes basal to the ECAP probe electrode. The relationship between the physiological channel interactions, spectral resolution in the local cochlear region, and vowel identification was evaluated. Results showed that (1) there was within- and across-subject variability in the widths of ECAP channel interaction functions and in narrowband SRD performance, (2) significant correlations were found between the widths of the ECAP functions and narrowband SRD thresholds, and between mean bandwidths of ECAP functions averaged across multiple probe electrodes and broadband SRD performance across subjects, and (3) the global spectral resolution reflecting the entire electrode array, not the local region, predicts vowel identification.

© 2014 Acoustical Society of America. [http://dx.doi.org/10.1121/1.4895702]

PACS number(s): 43.64.Me, 43.64.Pg, 43.66.Fe [ICB]

J. Acoust. Soc. Am. 136 (5), November 2014, Pages: 2714–2725

I. INTRODUCTION

Spectral resolution describes the maximum number of spectral peaks in a complex acoustic sound that the auditory system can resolve. Individuals with cochlear implants (CIs) can differ substantially in their spectral resolution even with the same type of CI sound processor, implant device, and sound coding strategy. It is, therefore, of great importance to better characterize spectral resolution in CI users. Spectral-ripple discrimination (SRD) has been widely used for various types of CI research, because it can offer a time-efficient, nonlinguistic measure of spectral resolution for both adult and pediatric CI populations (e.g., Henry and Turner, 2003; Won et al., 2007; Drennan et al., 2010; Won et al., 2010; Won et al., 2011a; Won et al., 2011b; Anderson et al., 2011; Jones et al., 2013). The SRD test was first introduced to examine the frequency resolving power (i.e., spectral resolution) in normal-hearing listeners (Supin et al., 1994). In this test, listeners' just-noticeable difference for the inversion of ripple phase in the frequency domain,


created by exchanging the positions of spectral peaks and valleys, is measured at a fixed spectral modulation depth. Spectral-rippled noise stimuli have also been used to evaluate the frequency resolving power of CI users. Henry et al. (2005) compared SRD thresholds in normal-hearing, hearing-impaired, and CI listeners and demonstrated that SRD performance was best in normal-hearing listeners, followed by hearing-impaired listeners, and poorest in CI listeners. SRD has been shown to be predictive of speech perception in quiet (Henry and Turner, 2003; Henry et al., 2005) and in noise (Won et al., 2007; Anderson et al., 2011), and basic music perception abilities such as complex-tone pitch discrimination, melody, and timbre identification (Won et al., 2010) in CI users. These previous studies suggest that there are considerable practical applications of the SRD test as a measure of spectral resolution for CI users. In the SRD test, individuals with CIs are typically tested by presenting acoustic signals in the sound field while using their own CI sound processors. In this regard, performance on the SRD test reflects the combined effects of the CI sound processor, the electro-neural interface, and the central nervous system. Thus, it is important to understand potential factors that influence SRD test performance.


With regard to the effects of the CI sound processor, previous studies have shown that SRD is sensitive to CI sound processor manipulations. For example, CI subjects showed improved SRD performance when they were fit with sound coding strategies that were designed to enhance the fidelity of spectral information of sound (Drennan et al., 2010). The SRD test is also useful for evaluating different designs of implant electrode (EL) arrays. For example, using the SRD test, Golub et al. (2012) compared CI users implanted with long EL arrays and short EL arrays that were designed to provide electro-acoustic stimulation. Golub et al. (2012) demonstrated significantly better SRD performance in the latter group. In addition, differences in the number of active ELs (Henry and Turner, 2003) and EL separation (Won et al., 2011b) have been shown to affect SRD performance in CI users. There is an increasing drive to better understand the effects of different patterns of electro-neural interfaces on CI outcomes (e.g., Bierer, 2010; Pfingst et al., 2011; Garadat et al., 2012; Noble et al., 2013; Long et al., 2014). Differences in electro-neural interfaces across CI subjects are mainly attributed to a wide range of differences in neural survival patterns (Fayad and Linthicum, 2006; Khan et al., 2005), structural changes in the cochlea due to deafness and implantation (e.g., Clark et al., 2014), and EL positions (Bierer, 2010; Noble et al., 2013; Long et al., 2014). These factors have critical implications for the spectral processing of CI users. If the survival of auditory neurons is significantly reduced or absent near certain ELs, both threshold and maximum comfortable levels are typically increased for those ELs (Shannon et al., 2002). Such high levels of stimulation excite more distant neurons, and thus the spectral information of incoming signals is delivered in a distorted fashion. If the impedance pathways are not uniform along the cochlear length, the current flow becomes irregular, leading to inaccurate electric stimulation (Shepherd et al., 1994; Saunders et al., 2002). The goal of the current study was to understand the relationship between performance in SRD and different patterns of peripheral electro-neural interfaces. When evaluating such a relationship, it is important to factor out the effects of possible confounding variables on peripheral neural measures, such as the effects of various sound processor settings or central processing variability across subjects. To address the influence of CI sound processor settings upon results, previous studies have attempted to evaluate the relationship between the SRD test and measures of spatial (i.e., cochlear place) resolution using psychoacoustic experiments with direct-stimulation testing paradigms. For example, Anderson et al. (2011) evaluated spatial resolution using bandwidths of the spatial tuning curves (estimated via direct stimulation) and compared them to SRD thresholds measured in an octave band in the same frequency region. Anderson et al. (2011) demonstrated a significant correlation between the two measures and suggested that the electro-neural interfaces in the local cochlear region, inferred from the spatial tuning curves, were important factors in determining performance in the octave-band SRD test. Along similar lines, Jones et al. (2013) measured spatial resolution using a channel interaction index measure

(Boëx et al., 2003) and demonstrated substantial variations in the amount of channel interactions across the electrode array, reflecting different patterns of electro-neural interfaces along the length of the cochlea. More importantly, Jones et al. (2013) demonstrated a strong relationship between the interaction indices and SRD thresholds in CI users. However, these experiments are time-consuming due to the large amounts of data that are typically required to adequately measure spatial tuning curves or channel interaction indices using direct stimulation, rendering such approaches impractical for most clinical uses. It should also be noted that central processing affects performance in the SRD test (e.g., Won et al., 2011a; Lopez Valdes et al., 2014). However, it is not known whether a similar or completely different type of central processing is used for the SRD test as compared to the measures of spatial tuning curves or the channel interaction indices, because all of these measures require a subjective behavioral response from the subject. The current study took a different approach to evaluate the relationship between the peripheral electro-neural interfaces and SRD outcomes in CI users, while controlling for potential effects of various CI sound processor settings and central processing variability across subjects. To achieve this goal, the current study characterized individual differences in electro-neural interfaces physiologically via objective measures of electrically evoked compound action potentials (ECAPs). The ECAP is a synchronous physiological response from auditory-nerve fibers in response to electric stimulation by CIs (for reviews, see Hughes, 2012). In the current study, the ECAP was used to characterize channel interactions in individual CI subjects without any influence of CI sound processing strategy or the influence of central processing. Channel interactions occur for all CI users but in a very different way for each individual CI user (Boëx et al., 2003; Abbas et al., 2004; Hughes and Stille, 2010; Jones et al., 2013) due to different patterns of spread of excitation in auditory-nerve fibers evoked by electric stimulation across individual CI users (for reviews, see Wilson and Dorman, 2008; Zeng et al., 2008). To describe physiological channel interactions, Abbas et al. (2004) introduced a forward-masking stimulus paradigm, where masker and probe pulses were delivered through different ELs. This technique exploited the assumption that the auditory-nerve response to the probe EL depended on the extent of overlap in the stimulated neural populations by the probe and masker ELs. Specifically, the ECAP amplitudes from the probe EL were examined as a function of masker EL position to estimate the degree of overlap between the stimulated neural populations in response to the probe and masker ELs. Such ECAP measures provide information about peripheral spatial resolution of hearing via CIs. Information gained from the physiological channel interactions using the ECAP forward-masking stimulus paradigm has provided valuable insights into how peripheral processing affects behavioral outcomes for CI users. For example, Hughes (2008) demonstrated a significant relationship between the peripheral spatial resolution inferred from the relative separation of ECAP excitation patterns for two


stimulating ELs, and CI subjects' ability to discriminate those two electrodes on the basis of pitch difference. The current study used a similar approach in an effort to link the physiologic and behavioral measures, but the narrowband SRD test was used to evaluate the spectral resolution in the local regions of the EL array. The use of the narrowband SRD test was motivated by the findings of Anderson et al. (2011), where SRD was measured for octave-band ripple stimuli in four contiguous octaves in 15 CI users. Anderson et al. (2011) showed substantial variations in SRD thresholds across the four different frequency ranges. Anderson et al. (2011) argued that differences in individual EL placement, patterns of current spread, and neural survival patterns were likely responsible for the variations found in octave-band SRD thresholds. This finding, obtained with band-limited SRD stimuli, mirrors the differences found in the patterns of spread of excitation reflected by ECAPs both within and across CI subjects (e.g., Cohen et al., 2003; Hughes and Abbas, 2006a; Hughes and Stille, 2008; Tang et al., 2011). In addition, Hughes and Stille (2010) showed that the shape of the ECAP amplitude functions depended upon the location of the probe EL within a subject. This finding suggests that electro-neural interfaces in different local regions of the cochlea could impact behavioral outcomes. The primary hypothesis of this study was that higher channel interactions would be associated with poorer narrowband SRD in the same local region of the cochlea. To test this hypothesis, physiologic channel interaction patterns were measured with the ECAP using a previously established forward-masking paradigm (Abbas et al., 2004; Hughes and Abbas, 2006a,b; Hughes and Stille, 2010). More specifically, channel interaction patterns were measured for three to six probe EL locations in the implanted cochlea under the controlled conditions afforded by direct stimulation. The degree of channel interactions was reported in the form of the bandwidth of the ECAP channel interaction functions, as described in Sec. II. In order to evaluate spectral

resolution in the local regions of the cochlea for the same group of CI subjects, SRD was measured using narrowband rippled noise stimuli with a bandwidth that covered the frequency range allocated to five ELs: two more apical and two more basal to the ECAP probe EL. Results supported the hypothesis with a significant negative correlation between the bandwidths of the ECAP channel interaction functions and narrowband SRD thresholds. A secondary goal of the present study was to evaluate whether the patterns of channel interactions were associated with speech perception performance. This issue was examined based upon the claim made by Azadpour and McKay (2012) that CI users might rely on acoustic cues other than fine spectral detail to identify speech. Azadpour and McKay compared spatial resolution at EL14 to speech recognition scores in eight CI subjects and showed that the two measures were not correlated. As argued in Jones et al. (2013), the absence of correlation in these two metrics served as an indication that it is not possible to predict speech perception based upon spatial resolution at any one electrode, because information over the entire acoustic spectrum has to be utilized to understand speech. To further examine this issue, vowel identification was measured in the same group of CI subjects who participated in the ECAP recordings. Our predictions were twofold: (1) bandwidths of the ECAP channel interaction functions for one probe EL may not be correlated with performance on vowel identification; however, (2) mean bandwidths of the ECAP channel interaction functions across three to six probe ELs may be correlated with vowel identification performance.

II. METHODS

A. Subjects

Nine postlingually deafened adult CI subjects participated in this study, one of whom was a bilateral CI user. The bilateral CI user (C08) was tested with each CI; thus, data are reported for 10 implanted ears.


TABLE I. Demographics for cochlear implant (CI) subjects. Except for C09, all subjects were implanted with the Nucleus® 5 device; C09 was implanted with the Freedom device. Except for C06, who was mapped with the SPEAK™ strategy, the remaining subjects' clinical sound coding strategy was the ACE™ strategy. EL: electrode. SRD: spectral-ripple discrimination. VI: vowel identification.

Subject | Age (yrs) | Age at implantation (yrs) | Etiology | Probe ELs tested for SRD | Probe ELs tested for ECAP | Stimulation rate (a) | Clinical pulse width (μsec) | Number of maxima | Experiment participated
C02 | 56 | 49 | AIED (b) | 6, 11, 16 | 6, 11, 16 | 1200 | 25 | 8 | ECAP, SRD, VI
C03 | 86 | 82 | Unknown | 6, 11, 16 | 6, 11, 16 | 900 | 50 | 8 | ECAP, SRD, VI
C04 | 72 | 64 | Unknown | 6, 11, 16 | not tested | 900 | 25 | 8 | SRD
C05 | 57 | 50 | Meniere's | 6, 11, 16 | 6, 11, 16 | 2400 | 12 (c) | 10 | ECAP, SRD, VI
C06 | 43 | 31 | Meningitis | 6, 11, 16 | not tested | 250 | 25 | 8 | SRD
C08(L) | 57 | 48 | Genetic | 17 | 6, 11, 17 | 1800 | 25 | 10 | ECAP, SRD
C08(R) | 57 | 50 | Genetic | 6, 8, 11, 13, 16 | 6, 8, 11, 13, 15, 19 | 1800 | 25 | 10 | ECAP, SRD, VI
C09 | 66 | 60 | Vehicle accident | 6, 8, 11, 13, 16 | 6, 8, 11, 13, 16, 19 | 1800 | 20 (c) | 10 | ECAP, SRD, VI
C10 | 50 | 46 | Infection | 2, 4, 6, 11, 16 | 2, 4, 6 | 900 | 25 (c) | 10 | ECAP, SRD, VI
C11 | 77 | 71 | Unknown | 6, 11, 16 | 6, 11, 16 | 900 | 25 | 8 | ECAP, SRD, VI

(a) Clinical stimulation rate in units of pulses per second per electrode.
(b) Autoimmune inner ear disease.
(c) These three subjects encountered compliance issues while trying to obtain robust ECAP responses. Therefore, pulse width was increased to 25 μsec for C05 and C09, and to 50 μsec for C10.


Table I shows individual CI subject information. The number of implanted ears for each experiment (SRD, ECAP, and vowel identification) varied slightly because some subjects were not able to participate in all experiments due to time constraints. The rightmost column of Table I indicates the experiments in which each subject participated. All subjects were native American English speakers and experienced CI users (with a minimum of 4 yr of experience with their implants). Because Hughes and Stille (2010) demonstrated different patterns of physiological channel interactions estimated with ECAPs for different internal devices, only subjects implanted with the Nucleus® 5 or Freedom™ internal devices were included in this study. Likewise, only subjects who used the ACE™ sound coding strategy (except for C06, who was mapped with the SPEAK™ strategy) were recruited, to minimize any potential effects of sound processing strategy on SRD, since Drennan et al. (2010) showed significant effects of sound coding strategies on SRD. All experimental procedures followed the regulations set by the National Institutes of Health and were approved by the University of Tennessee Health Science Center's Institutional Review Board. Psychoacoustic tasks were performed using each subject's "everyday" map, in which parameters of electric stimulation (e.g., the number of maxima, pulse width, and rate) were established during regular clinical appointments. Additional SmartSound™ features were not utilized during data collection. Electrode impedances were evaluated across the electrode array for all subjects prior to ECAP testing and were confirmed to be within normal limits. In this paper, EL numbers are shown using the convention employed by Cochlear Ltd.: EL1 is most basal (highest frequency channel) and EL22 is most apical (lowest frequency channel). All psychoacoustic experiments were conducted in a double-walled sound-attenuating (IAC) booth. Stimuli were presented in the free field via a Crown D45 amplifier and a free-standing loudspeaker (Bowers & Wilkins CM5) placed at head level, positioned at 0° azimuth and 0° elevation. The level of the stimuli was set at 65 dBA. CI subjects were seated 1 m from the loudspeaker and were asked to face it during the course of the experiment.

B. ECAP testing

The ECAPs were measured using the Custom Sound EP 4.0, provided by Cochlear Americas (Centennial, CO). All subjects were tested with their own sound processors that were interfaced with the programming POD. In the present study, ECAPs were measured using a standard forward-masking subtraction paradigm to characterize the ECAP spatial excitation or masking patterns in the implanted cochlea (Abbas et al., 2004; Hughes and Abbas, 2006a,b; Hughes and Stille, 2010). This was done by customizing the "spread of excitation" feature under the Advanced NRT. A minimum of three probe ELs were tested for each individual subject as shown in Table I. Additional probe ELs were tested for subjects C08 and C09. Masker and probe stimuli consisted of single, cathodic-leading, biphasic current pulses. Testing was

initiated on each subject using the pulse width established during regular clinical programming sessions. However, if compliance limits were found to be a concern during ECAP acquisition, the pulse width was increased; this occurred in three implanted ears. A stimulation rate of 80 Hz, monopolar stimulation relative to the extra-cochlear ball electrode (MP1), recording relative to the extra-cochlear case electrode (MP2), 60-dB gain, a 400-μsec masker-probe interval, an 8-μsec probe and masker inter-phase gap, and a 122-μsec delay were used. The number of averages was set to 100. The recording EL was fixed in location, typically two or four EL positions apical to the probe EL. The masker EL varied in random order across all the remaining ELs in the array. Stimulus levels for each probe EL were primarily determined based upon a subjective loudness rating. Subjects were asked to rate the loudness level on a scale ranging from 0 to 10. The desired current level was reached when loudness perception reached a level of 8, which corresponded to "loud but comfortable." While evaluating the loudness rating, ECAP waveform morphology was also monitored using the Advanced NRT feature in the Custom Sound EP 4.0 to ensure that an ECAP response was observed at the selected level. Stimulus current levels for the masker ELs were set to the same current levels as the probe EL. The focus of the ECAP testing in the present study was to characterize the spread of excitation (SOE) within and across CI subjects. Thus, ECAP amplitudes were normalized to the ECAP amplitude obtained with the masker and probe on the same EL, where the maximum ECAP amplitudes were expected. In this paper, the normalized ECAP amplitudes are referred to as the ECAP channel interaction function. Finally, the bandwidth of the ECAP channel interaction function was measured in number of ELs at 85% of the normalized ECAP amplitudes, following a method similar to that presented by Hughes and Abbas (2006a,b). The left and right sides of the ECAP channel interaction functions were derived using the slope between the data points above and below 85%. If normalized ECAP amplitudes remained higher than 85% at the edge of the array, EL1 or EL22 was used as the left or right side of the function to derive the bandwidth. Hughes and Abbas (2006a,b) quantified the bandwidth of the functions at 75% of the normalized ECAP amplitudes, but about 30% of the ECAP channel interaction functions in the current study showed normalized ECAP amplitudes higher than 75% at the edge of the array. To reduce the rate of occurrence of such ECAP data, the criterion was increased to 85%. With this value, only 13% of the ECAP channel interaction functions showed normalized ECAP amplitudes higher than 85% at the edge of the EL array.
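To make the bandwidth computation concrete, the sketch below implements the 85% criterion on a normalized ECAP channel interaction function in Python. The function name, default electrode limits, and toy Gaussian-shaped example data are ours and purely illustrative; the 85% criterion, the linear interpolation between the data points straddling the criterion, and the fallback to EL1 or EL22 when the function never drops below the criterion follow the description above, with higher EL numbers treated as more apical per the Cochlear convention stated in Sec. II A.

```python
import numpy as np

def ecap_bandwidth(masker_els, norm_amps, probe_el, criterion=0.85,
                   el_min=1, el_max=22):
    """Width (in electrodes) of a normalized ECAP channel interaction
    function at the given criterion (0.85 = the 85% point used here).

    norm_amps are ECAP amplitudes already normalized to the condition with
    masker and probe on the same electrode.  On each side of the probe, the
    criterion crossing is found by linear interpolation between the two
    data points straddling it; if the function stays above the criterion
    out to the end of the array, EL1 or EL22 is used as that edge.
    """
    els = np.asarray(masker_els, dtype=float)
    amps = np.asarray(norm_amps, dtype=float)
    order = np.argsort(els)
    els, amps = els[order], amps[order]

    def crossing(side_els, side_amps, edge_el):
        # Walk outward from the probe until the amplitude drops below the
        # criterion, then interpolate the electrode position of the crossing.
        for i in range(1, len(side_amps)):
            hi, lo = side_amps[i - 1], side_amps[i]
            if lo < criterion <= hi:
                frac = (hi - criterion) / (hi - lo)
                return side_els[i - 1] + frac * (side_els[i] - side_els[i - 1])
        return float(edge_el)  # never dropped below the criterion

    apical = els >= probe_el   # higher EL numbers are more apical
    basal = els <= probe_el
    apical_edge = crossing(els[apical], amps[apical], el_max)
    basal_edge = crossing(els[basal][::-1], amps[basal][::-1], el_min)
    return apical_edge - basal_edge

# Hypothetical example: probe on EL11 with a toy Gaussian-shaped function.
maskers = np.arange(1, 23)
amps = np.exp(-0.5 * ((maskers - 11) / 4.0) ** 2)
print(round(ecap_bandwidth(maskers, amps, probe_el=11), 2))  # about 4.5 ELs
```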

C. Spectral-ripple discrimination

SRD thresholds were collected using a method similar to that previously described by Won et al. (2007). A custom-designed MATLAB (The Mathworks, Natick, MA) graphical user interface program running on a PC was used to present stimuli and record responses from subjects for the SRD test. To create rippled noise stimuli, 2555 tones were


spaced equally on a logarithmic frequency scale with a bandwidth of 100–8000 Hz, which covers the entire frequency range of the ACE™ or SPEAK™ sound coding strategy. The ripple peaks were spaced equally on a logarithmic frequency scale with a 13 dB peak-to-valley ratio. The spectral modulation starting phase for the two standard ripple stimuli was randomly selected from a uniform distribution (0 to 2π rad), and for each corresponding inverted ripple stimulus, the phase was determined by adding π/2 to the phase of the standard ripple stimulus. The stimuli had 500 ms total duration. The order of presentation of the three ripple stimuli was randomized on every trial, and the subject's task was to select the "oddball." Feedback was not provided. Ripple density was varied in equal-ratio steps of 1.414 in an adaptive 2-up, 1-down procedure with 13 reversals that converges to the 70.7% correct point. Thus, the ripple density was increased by a factor of 1.414 after two consecutive correct responses and decreased by a factor of 1.414 after a single incorrect response, while the ripple depth was always fixed at 13 dB. Stimuli were equated to the same root-mean-square level, and a level attenuation of 1–8 dB (in 1-dB increments) was randomly selected for each interval in the three-interval task. The threshold for each adaptive run was calculated as the geometric mean of the last eight reversals. The spectral-ripple discrimination threshold for each test condition was the geometric mean of three adaptive runs. Here, higher SRD thresholds indicate better performance. Both the broadband and narrowband SRD tasks were administered for each CI subject. For the broadband condition, the original ripple stimuli with a bandwidth of 100–8000 Hz were used. For the narrowband conditions, three to six different frequency ranges were tested depending on the probe ELs that were used for the ECAP testing for each individual subject. To create the narrowband ripple stimuli, the original ripple stimuli were passed through a bandpass filter (12th-order Butterworth). The lower and upper cutoff frequencies for the bandpass filter were set to cover the frequency range associated with five stimulating ELs. For example, if the narrowband SRD test was performed to examine the local spectral resolution centered on EL#N, the lower cutoff frequency (fL) for the bandpass filter was set to the fL of EL#N-2, as specified in the subject's map. Likewise, the upper cutoff frequency (fH) for the bandpass filter was set to the fH of EL#N+2. The broadband condition was always completed first, followed by the narrowband conditions. For brevity, each narrowband condition is referred to by its probe EL number in the rest of this paper. For example, the narrowband SRD test for probe EL6 refers to the testing condition where the passband of the bandpass-filtered ripple stimuli covered the frequency range associated with EL4 through EL8.
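As an illustration of the adaptive procedure just described, the following Python sketch implements the 2-up/1-down track with equal-ratio steps of 1.414, 13 reversals, the geometric mean of the last eight reversals, and the geometric mean of three runs. The `run_trial` callback, the starting ripple density, and all names are hypothetical placeholders; in the actual test each trial would present two standard and one inverted ripple stimulus and return whether the listener picked the oddball.

```python
import numpy as np

def srd_adaptive_run(run_trial, start_density=0.5, step=1.414, n_reversals=13):
    """One 2-up/1-down adaptive track for spectral-ripple discrimination.

    run_trial(density) must present one three-interval trial at the given
    ripple density (ripples/octave) and return True if the listener picked
    the inverted "oddball".  Density rises by `step` after two consecutive
    correct responses and falls by `step` after one incorrect response,
    tracking 70.7% correct; the run threshold is the geometric mean of the
    last eight of 13 reversals.  The starting density is a placeholder.
    """
    density, reversals = start_density, []
    streak, last_direction = 0, None
    while len(reversals) < n_reversals:
        if run_trial(density):
            streak += 1
            if streak < 2:
                continue                 # one correct response: no level change yet
            direction, streak = +1, 0    # two correct in a row: harder (denser ripples)
        else:
            direction, streak = -1, 0    # one incorrect: easier (sparser ripples)
        if last_direction is not None and direction != last_direction:
            reversals.append(density)    # track changed direction at this density
        last_direction = direction
        density *= step if direction > 0 else 1.0 / step
    return float(np.exp(np.mean(np.log(reversals[-8:]))))

def srd_threshold(run_trial, n_runs=3):
    """Threshold for one condition: geometric mean of three adaptive runs."""
    return float(np.exp(np.mean(np.log([srd_adaptive_run(run_trial)
                                        for _ in range(n_runs)]))))
```

For the narrowband conditions, the same track would be run on ripple stimuli first passed through the 12th-order Butterworth bandpass filter described above (e.g., designed with scipy.signal.butter(12, [fL, fH], btype='bandpass', fs=fs, output='sos')).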

D. Vowel identification

Eight vowels (/u, ʊ, ʌ, ɑ, æ, ɪ, ɛ, i/) were synthesized in the /hVd/ context using the SynthWorks software


(Scicon R&D, Inc., Beverly Hills, CA), which implements the Klatt synthesizer (Klatt and Klatt, 1990). The stimuli were 200 ms long with a linear increase in amplitude over the first 30 ms. The fundamental frequency for the vowels was set to 100 Hz. First and second formant frequencies were based on average frequencies for male speakers (Wright and Souza, 2011). The third formant frequency for each vowel was estimated using regression formulas proposed by Nearey (1989), and the fourth formant frequency was fixed at 3500 Hz. Formant bandwidths were calculated from the algorithm described in Johnson et al. (1993). Stimuli were equated to the same root-mean-square value and presented at 65 dBA through a loudspeaker in the sound field. A MATLAB graphical user interface running on a PC was used to present vowel stimuli to CI subjects. Subjects responded by clicking on virtual buttons labeled with each vowel in the /hVd/ context on a computer screen. Feedback was not provided. Thus, the identification test used an eight-alternative, forced-choice paradigm with chance performance at 12.5%. Before actual testing, subjects were conditioned to each vowel and completed one trial for practice. Upon completion of the vowel identification test, a total percent correct score was calculated from 80 vowel presentations.

III. RESULTS

A. Spectral-ripple discrimination

Figure 1 shows SRD thresholds for individual CI subjects. The thresholds are shown for three different narrowband conditions: The frequency ranges centered on EL6, EL11, and EL16. Narrowband SRD thresholds were shown to vary across subjects as well as within subjects as a function of the probe EL. For example, C02, C04, and C11 showed better SRD performance in the low frequency channel (EL16) than in the high frequency channel (EL6).

FIG. 1. Narrowband spectral-ripple discrimination thresholds as a function of narrowband frequency ranges centered on three different electrodes (ELs). Data for nine individual cochlear implant subjects are represented by different symbols and lines. For C08(L), data for EL17 are shown for the testing condition of EL16. L: left ear. R: right ear.


However, C03 and C06 showed better SRD performance in the high frequency EL than in the low frequency EL. Subject C05 showed similar performance for probe EL6 and EL16, but for EL11, performance was slightly worse than for the other two probe ELs. Some subjects (C08, C09, and C10) showed consistently poor performance across the three different ELs. To evaluate the pattern of narrowband SRD performance, a repeated-measures analysis of variance (ANOVA) was done with the main factor of probe ELs. The main effect of probe ELs did not reach significance [F(2,16) = 0.70, p = 0.51], suggesting that there was no systematic pattern in narrowband SRD performance across the nine CI subjects. Overall, these results are consistent with Anderson et al. (2011), where considerable within-subject variability was observed in the octave-band SRD thresholds for 15 CI subjects. These results are also consistent with Jones et al. (2013), where considerable variability was found in channel interactions in different regions of the electrode array. It is possible that CI subjects, particularly those who showed poor SRD performance, might have used an overall intensity cue instead of the global spectral shape for SRD. Won et al. (2011b) examined this issue by comparing CI users' performance on the SRD test to a phenomenological computational model, where specific perceptual mechanisms were tested. When the model was set to perform using an overall intensity cue for SRD, the maximum SRD threshold achieved by the model was 0.1 ripples/octave. Note that a spectral modulation depth of 30 dB was used for the model evaluation in Won et al. (2011b), whereas the current study implemented the SRD test using a spectral modulation depth of 13 dB. Thus, when factoring in the difference in the spectral modulation depths between the two studies, the maximum threshold that would have been achieved using an overall intensity cue for the SRD

test in the current study would be less than 0.1 ripples/octave. Therefore, it is highly unlikely that CI users in the current study performed the SRD tests by simply discriminating a single intensity difference between spectral-ripple stimuli.

B. Spread of excitation pattern estimated by ECAPs

Figure 2 displays the ECAP data for two representative CI subjects (C02 in the upper panels and C09 in the lower panels). The left, middle, and right columns represent the data for the probe EL6, 11, and 16, respectively. Overall, C02 showed narrower patterns than C09, indicating variability in the degree of physiological channel interactions across CI subjects. On an individual subject level, C02 showed a broad pattern for EL6, but a narrow pattern for ELs 11 and 16, indicating within-subject variability in the degree of channel interactions across the electrode array.

C. Correlations

Figure 3(A) shows the distribution of narrowband SRD thresholds for the 34 data points that were collected from all subjects (Table I). More data points fell at the lower SRD thresholds, so the distribution was skewed to the right. The Lilliefors test (Lilliefors, 1967) showed that the distribution of narrowband SRD thresholds was not close to normal (p < 0.01). Figure 3(B) shows the distribution of the bandwidths of the 30 ECAP channel interaction functions that were collected from all subjects. The ECAP bandwidths ranged between 1.16 and 14 electrodes, with a peak in the histogram at around 5.3 electrodes of bandwidth. Following logarithmic transformation of the bandwidth data, the Lilliefors test showed that the distribution was close to normal (p = 0.11).
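For reference, the normality checks described above can be reproduced with standard tools. The sketch below is only illustrative: the placeholder arrays stand in for the measured thresholds and bandwidths, and the Lilliefors test is taken from statsmodels rather than from whatever software the study used.

```python
import numpy as np
from statsmodels.stats.diagnostic import lilliefors

# Hypothetical placeholder data standing in for the measured values.
rng = np.random.default_rng(0)
srd_thresholds = rng.lognormal(mean=-1.2, sigma=0.6, size=34)   # ripples/octave
ecap_bandwidths = rng.lognormal(mean=1.7, sigma=0.5, size=30)   # electrodes

# Raw narrowband SRD thresholds tested against a normal distribution.
stat, p_srd = lilliefors(srd_thresholds, dist='norm')
print(f"SRD thresholds: Lilliefors p = {p_srd:.3f}")

# ECAP bandwidths after the logarithmic transformation used in the text.
stat, p_bw = lilliefors(np.log(ecap_bandwidths), dist='norm')
print(f"log ECAP bandwidths: Lilliefors p = {p_bw:.3f}")
```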

FIG. 2. Individual examples of electrically evoked compound action potential (ECAP) channel interaction functions. The ECAP normalized amplitudes as a function of masker electrode (EL) are displayed. The ECAP channel interaction functions for probe EL6, 11, and 16 are shown in the left, center, and right columns, respectively. Top and bottom rows represent data for subject C02 and C09, respectively. The bandwidth (BW) of the ECAP channel interaction function at 85% of the normalized amplitudes is indicated on each panel.


FIG. 3. (A) The distribution of narrowband spectral-ripple discrimination thresholds (number of data points = 34). (B) The distribution of the bandwidths of the electrically evoked compound action potential (ECAP) channel interaction functions (number of data points = 30).

The hypothesis predicted that larger bandwidths of the ECAP channel interaction functions would be associated with lower narrowband SRD thresholds, resulting in a negative correlation between the two measures. A comparison of narrowband SRD thresholds with the ECAP bandwidths for the same probe ELs is shown in Fig. 4. Here, 8 implanted ears with a total of 25 data points were used to elucidate the relationship between narrowband SRD thresholds and bandwidths of the ECAP channel interaction functions. The data points for each individual subject were connected in order to visualize the relationship between narrowband SRD thresholds and the corresponding ECAP bandwidths. On an individual subject level, C02, C03, C05, C08(R), and C11 showed decreasing narrowband SRD thresholds with increasing bandwidths of the ECAP channel interaction functions. These five ears showed negative slopes for the relationship between the two measures. The other two implanted ears, C09 and C10, did not show such a relationship. It was not feasible for C08(L) to evaluate the relationship, because

FIG. 4. Comparison of narrowband spectral-ripple discrimination (SRD) thresholds with bandwidths of the electrically evoked compound action potential (ECAP) channel interaction functions. Subjects are represented by different symbols. With all data points, bandwidths of ECAP channel interaction functions showed a significant correlation with narrowband SRD thresholds (r = −0.41, p = 0.04, N = 26), using logarithmic regression.


there was only one data point available for C08(L). Thus, on an individual subject level (i.e., within-subject analysis), five out of seven ears showed the pattern of the relationship between narrowband SRD thresholds and bandwidths of the ECAP channel interaction functions that was predicted by the hypothesis. The relationship between the two measures was also evaluated across subjects. Visual inspection of the data indicated a compressive relationship between ECAP bandwidths and SRD thresholds; therefore, the relationship between the two measures was modeled using a logarithmic function. The regression analysis using a logarithmic function showed a significant relationship (r = −0.41, p = 0.04, N = 26) between ECAP bandwidths and narrowband SRD thresholds. The negative correlation coefficient indicates that narrowband SRD performance is reduced with increasing spread of excitation, as characterized by the bandwidth of the ECAP channel interaction functions, which implies that spectral envelope sensitivity in the local region of the implanted cochlea is constrained by the spread of excitation in that cochlear region. The results also support the view that the SRD test assesses the spectral resolution of CI subjects. Additional correlational analyses were performed to better understand the relationship among the ECAP channel interaction functions, SRD performance, and vowel identification. For these analyses, the Pearson correlation coefficients were computed. Here, the Bonferroni corrections were not applied due to the increased risk of a type II error for the number of comparisons made (Benjamini and Hochberg, 1995, as cited in Hughes and Stille, 2010). In Table II, correlations among ECAP bandwidths, SRD, and vowel identification are reported. In Fig. 5, the relationship of the mean bandwidths of the ECAP channel interaction functions averaged across three probe ELs with broadband SRD thresholds [Fig. 5(A)] and vowel identification scores [Fig. 5(B)] is shown. For C08(R) and C09, mean bandwidths of the ECAP functions averaged across six probe ELs were used for these analyses. These two comparisons were performed to determine if CI subjects with a lesser degree of channel interactions across the EL array show better performance on the broadband SRD or vowel identification tests, which involve using the entire acoustic spectrum. In Fig. 5(A), a significant correlation was found between mean bandwidths of ECAP channel interaction functions and broadband SRD thresholds (r = −0.72, p = 0.04, N = 8), which is generally consistent with the significant correlation found between ECAP bandwidths and narrowband SRD thresholds across subjects (see Fig. 4). This was as expected, because spectral resolution over the entire acoustic spectrum is thought to depend on the overall amount of channel interactions across the EL array (Jones et al., 2013). In contrast, the correlation between mean bandwidths of ECAP channel interaction functions and vowel identification failed to reach significance (r = 0.38, p = 0.41, N = 7). Note that the non-significant correlation between mean bandwidths of ECAP channel interaction functions and vowel identification is consistent with previous studies (e.g., Hughes and Abbas, 2006a; Hughes and Stille, 2008; Tang et al., 2011).
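A minimal sketch of the across-subject analysis described in this section is given below, under the assumption that the "logarithmic regression" is an ordinary least-squares fit of narrowband SRD threshold against the natural logarithm of ECAP bandwidth, with r taken as the correlation from that fit. The paired arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired data: ECAP bandwidth (electrodes) and narrowband SRD
# threshold (ripples/octave) for the same probe electrode.
ecap_bw = np.array([2.1, 3.5, 5.0, 6.2, 8.4, 10.7, 13.5])
srd_thr = np.array([1.20, 0.95, 0.60, 0.55, 0.40, 0.35, 0.30])

# Logarithmic model: threshold = a + b * ln(bandwidth).
res = stats.linregress(np.log(ecap_bw), srd_thr)
print(f"r = {res.rvalue:.2f}, p = {res.pvalue:.3f}")
print(f"threshold ~ {res.intercept:.2f} + {res.slope:.2f} * ln(BW)")
```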


TABLE II. Correlations among ECAP bandwidths, spectral-ripple discrimination (SRD), and vowel identification. Significant correlations (p < 0.05) are marked with an asterisk.

Comparison | r | p | Number of samples
ECAP bandwidths (a) vs narrowband SRD for the same EL | −0.41* | 0.04 | 26
Mean bandwidths of ECAP functions (b) vs broadband SRD | −0.72* | 0.04 | 8
Mean bandwidths of ECAP functions (b) vs vowel identification | 0.38 | 0.41 | 7
Broadband SRD vs vowel identification | 0.83* | 0.022 | 7
Mean narrowband SRD vs vowel identification | 0.58 | 0.17 | 7
Mean narrowband SRD vs broadband SRD | 0.69 | 0.061 | 8
ECAP bandwidths for probe EL6 vs broadband SRD | 0.13 | 0.76 | 8
ECAP bandwidths for probe EL6 vs vowel identification | 0.26 | 0.58 | 7
Broadband SRD (c) vs vowel identification | 0.86* | 0.029 | 7

(a) A logarithmic function was used to compute the regression coefficient. See Fig. 4.
(b) See Fig. 5.
(c) A partial correlation coefficient between broadband SRD and vowel identification, factoring out the effects of peripheral spatial resolution reflected by the mean bandwidths of ECAP channel interaction functions averaged across three to six probe ELs.

A significant correlation was found between the broadband SRD thresholds and vowel identification scores (r = 0.83, p = 0.022, N = 7), consistent with Henry et al. (2005), indicating that spectral resolution over the entire EL array contributes to vowel identification for CI subjects. A moderate, positive correlation was found between the mean narrowband SRD thresholds averaged across three to six probe ELs and vowel identification scores (r = 0.58, p = 0.17, N = 7), but it failed to reach significance. Similarly, a moderate, positive correlation was found between the mean narrowband SRD thresholds averaged across three to six probe ELs and broadband SRD thresholds (r = 0.69, p = 0.061, N = 8), but it failed to reach statistical significance at the 0.05 level. It may be possible that too few EL sites were sampled, or that the subject sample size was too small, to observe significant correlations with the mean narrowband SRD thresholds, given that moderate correlation coefficients were found with vowel identification and broadband SRD. To determine if the spread of excitation about any one EL can predict spectral resolution across the EL array or vowel identification, correlations of the bandwidths of the ECAP channel interaction functions for probe EL6 with broadband SRD thresholds or vowel identification scores were assessed. Probe EL6 was chosen because the largest

number of data points was available for these correlational analyses using EL6. These analyses showed that the correlation magnitudes were small and the associated p-values were far greater than 0.05 (see Table II). This indicates that the spread of excitation about one EL cannot explain the spectral resolution over the entire EL array or vowel identification. Finally, a partial correlation analysis was conducted to determine the extent to which the broadband SRD thresholds correlated with vowel identification scores independent of the degree of physiological channel interactions measured by the bandwidth of the ECAP functions. When the mean bandwidths of ECAP channel interaction functions averaged across three to six probe ELs were factored out, the correlation between the broadband SRD thresholds and vowel identification scores still remained significant (r = 0.86, p = 0.029).
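The partial correlation reported above can be computed with the standard residual method: correlate broadband SRD and vowel identification after regressing each on the mean ECAP bandwidth. The sketch below uses hypothetical placeholder data; the study does not state which implementation was used.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Pearson correlation between x and y after removing the linear effect
    of the control variable z from each (residual method).  Note that the
    p-value returned by pearsonr on the residuals uses n - 2 degrees of
    freedom; a strict partial-correlation test would use n - 3."""
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    rx = x - np.polyval(np.polyfit(z, x, 1), z)   # residuals of x given z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)   # residuals of y given z
    return stats.pearsonr(rx, ry)

# Hypothetical placeholder data for seven implanted ears.
broadband_srd = np.array([0.8, 1.5, 2.3, 0.6, 1.1, 2.8, 1.9])   # ripples/octave
vowel_scores  = np.array([55., 70., 86., 48., 62., 90., 78.])   # percent correct
mean_ecap_bw  = np.array([9.0, 6.5, 4.0, 10.5, 7.5, 3.2, 5.1])  # electrodes

r, p = partial_corr(broadband_srd, vowel_scores, mean_ecap_bw)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```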

IV. DISCUSSION

A. Variability in the local spectral resolution and channel interactions

There is general agreement about the importance of understanding the effects of variability in electro-neural interface, channel interactions, or spread of excitation in the local regions of the implanted cochlea on perceptual

FIG. 5. Broadband spectral-ripple discrimination (SRD) thresholds (A) and vowel identification scores (B) as a function of mean bandwidths of electrically evoked compound action potential (ECAP) channel interaction functions.


outcomes in CI users (e.g., Bierer, 2010; Anderson et al., 2011; Pfingst et al., 2011; Garadat et al., 2012; Jones et al., 2013; Noble et al., 2013; Long et al., 2014). For example, a recent study by Jones et al. (2013) measured channel interaction indices and SRD thresholds and showed that (1) considerable variability in channel interactions was found in different regions of the electrode array within subjects; and (2) the pattern of channel interactions varied across subjects. Along similar lines, Anderson et al. (2011) showed substantial variability in SRD thresholds when the bandwidth of spectral-ripple stimuli was restricted to an octave-wide band in different frequency regions. More importantly, Anderson et al. (2011) demonstrated a significant correlation between the bandwidths of the spatial tuning curves and SRD thresholds in an octave-wide band in the same frequency region, supporting the hypothesis that the integrity of the auditory nerve and the efficiency of the electro-neural interface influence the spatial tuning curve and subsequently affect SRD performance. Using the channel interaction index and SRD data, Jones et al. (2013) also demonstrated findings that support this view: SRD performance decreased with increasing mean interaction index, and strong correlations were found between the interaction indices and SRD thresholds. The aforementioned studies motivated the formulation of the hypotheses for the present study. The present study demonstrated that when broadband spectral-ripple stimuli were passed through a bandpass filter to restrict the frequency range available to subjects to perform the discrimination task, variability was observed in narrowband SRD performance in different frequency ranges (i.e., in different regions of the EL array) within subjects. In the present study, only three different regions of the EL array were examined; hence, much more variability in narrowband SRD performance might be expected if broader regions of the EL array were evaluated. Our results are consistent with those of Anderson et al. (2011) in showing that octave-band SRD performance differed across various frequency regions within subjects. The variability in bandwidth of ECAP channel interaction functions found within subjects in the present study is also consistent with the substantial within-subject variability found in channel interactions in different regions of the electrode array reported by Jones et al. (2013). Most importantly, the present study demonstrated that narrowband SRD thresholds of CI subjects decreased with increasing bandwidths of the ECAP channel interaction functions (Fig. 4), supporting the hypothesis that higher channel interactions were associated with poorer spectral resolution in the same local region of the cochlea. On an individual subject level, five out of seven ears showed the predicted direction of the relationship between bandwidths of the ECAP channel interaction functions and narrowband SRD thresholds in the same local region of the cochlea. In contrast, two out of seven ears did not show the pattern of the relationship between narrowband SRD thresholds and bandwidths of the ECAP channel interaction functions that was predicted by the hypothesis (C09 and C10 in Fig. 4). Note that these two ears generally showed poor narrowband SRD performance without much variability in narrowband SRD thresholds. Also, there was less variability in


bandwidths of ECAP channel interaction functions for these two ears compared to other implanted ears. From the perspective of the across-subject analysis, when the relationship between narrowband SRD thresholds and bandwidths of ECAP channel interaction functions was modeled using a logarithmic regression function, a significant correlation (r = −0.41, p = 0.04, N = 26) was found between the two measures. The relationship was best modeled using a logarithmic function because the distribution of narrowband SRD thresholds was not normal and peaked at around 0.3 ripples/octave. It is possible that a higher correlation would be found between the two measures if narrowband SRD thresholds were distributed normally. We speculate that a different set of narrowband SRD stimuli that use higher spectral modulation depths and different acoustic bandwidths to restrict the use of stimulating ELs could lead to a more normal distribution of narrowband SRD thresholds. In the current study, a significant negative correlation was found between mean bandwidths of the ECAP channel interaction functions averaged across three to six probe ELs and broadband SRD thresholds [see Fig. 5(A), r = −0.72, p = 0.04, N = 8]. This is largely consistent with Jones et al. (2013), where the relationship between broadband SRD performance and mean interaction index was evaluated. Using seven implanted ears, Jones et al. (2013) showed significant negative correlations of −0.97, −0.77, and −0.92 between broadband SRD and the interaction index at electrode separations of one EL, three ELs, and five ELs, respectively. Given the small sample size for both studies, the correlation coefficient of −0.72 was not statistically different from the correlation coefficients of −0.77 or −0.92 using Fisher's z-transformation (Fisher, 1921). It is possible that the two measures, the bandwidth of the ECAP channel interaction functions and the channel interaction index, may evaluate different aspects of the spread of excitation occurring in the cochlea, but to the best of our knowledge, such information has not been reported in the literature. Cohen et al. (2003), however, evaluated the relationship between the bandwidths of ECAP amplitudes of partially masked probe ELs and bandwidths of forward-masking profiles and showed that these two measures were not correlated. Thus, future studies should explore the relationship between channel interaction indices (Boëx et al., 2003; Jones et al., 2013) and ECAP bandwidths (Hughes and Abbas, 2006a,b) in the same group of CI users, since the channel interaction index is a subjective behavioral measure whereas the ECAP measure is objective and physiologically based. Although it is difficult to assess the effects of such procedural differences on the measurement of the degree of channel interactions, it is possible that central factors affecting channel interaction might be reflected differently in the two measures. However, the comparable findings between the current study and Jones et al. (2013) highlight that ECAPs offer several advantages over psychophysical channel interaction index measures for characterizing channel interactions. The test time for ECAP channel interaction functions is substantially shorter than for interaction index measures. In Jones et al. (2013), total testing time for collecting 46 interaction indices was about 20 h per subject. In contrast, it took about 5 min to obtain the ECAP amplitude function for one probe EL after establishing stimulation levels for the probe and masker ELs in the current study. Furthermore, the speed at which ECAPs can be measured without sedation makes them viable assessments in infants, making it feasible to study outcome measures in very young pediatric populations with CIs.
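The comparison of correlation coefficients across the two studies mentioned above used Fisher's z-transformation; a minimal sketch of the standard test for two independent correlations is given below, with the coefficients and sample sizes taken from the text (−0.72 with N = 8 ears here; −0.77 with seven ears in Jones et al., 2013).

```python
import numpy as np
from scipy import stats

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed test of the difference between two independent Pearson
    correlations using Fisher's z-transformation."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2.0 * stats.norm.sf(abs(z))

# -0.72 (N = 8 ears, current study) vs -0.77 (7 ears, Jones et al., 2013)
z, p = compare_correlations(-0.72, 8, -0.77, 7)
print(f"z = {z:.2f}, p = {p:.2f}")   # well above 0.05: not significantly different
```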


B. Relationship among spectral resolution, channel interaction, and vowel identification

Previous studies took different approaches to evaluate the contribution of spectral resolution to speech perception in CI users. For example, speech perception ability was measured as a function of the number of active ELs or vocoder channels (e.g., Friesen et al., 2001; Xu et al., 2005); speech perception was measured with different amounts of simulated spread of excitation (Bingabr et al., 2008); and place-pitch sensitivity (Donaldson and Nelson, 2000) and electrode discrimination (Henry et al., 2000) were compared to speech perception ability in CI users. These studies suggested that spectral resolution in CI users is negatively affected by broad activation patterns of the electrically stimulated auditory-nerve fibers; hence, CI users' ability to utilize the spectral information provided by multiple ELs is degraded, resulting in poorer speech perception ability. However, Azadpour and McKay (2012) presented a different view, suggesting that CI users do not rely on fine spectral structure when identifying speech. This claim was made based upon their finding that spatial resolution about one electrode was not correlated with speech perception scores in eight CI subjects. In the present study, bandwidths of the ECAP channel interaction functions for probe EL6 were compared to broadband SRD and vowel identification scores (see Sec. III C). The relationship did not reach significance, replicating Azadpour and McKay's finding that spatial resolution about any one EL cannot explain psychoacoustic performance that requires the full spectrum of input sounds. However, when mean bandwidths of the ECAP amplitude functions were averaged across three to six probe ELs, a significant correlation was found with broadband SRD performance [Fig. 5(A)]. Given the strong correlation between broadband SRD and vowel identification scores (r = 0.83, p = 0.022, N = 7) in the current study, it is plausible to expect a higher correlation when physiological channel interactions are examined across ELs that span the entire array. Thus, the claim made by Azadpour and McKay describes only one aspect of the relationship between spectral resolution and speech perception outcomes for CI users, namely that spectral resolution about one EL does not explain speech perception. In order to examine the relationship between speech perception outcomes and spectral resolution across the entire array, the present study highlights that psychoacoustic experiments which reflect spectral resolution over the entire array should be used. In the current study, the significant correlation between vowel identification scores and broadband SRD thresholds (r = 0.83, p = 0.022, N = 7) accounted for 69% of the shared variance between these two measures across seven implanted ears. Similarly, the significant correlation between

broadband SRD thresholds and mean bandwidths of the ECAP channel interaction functions (r = −0.72, p = 0.04, N = 8) accounted for 52% of the shared variance between these two measures. In contrast, mean bandwidths of the ECAP channel interaction functions were not predictive of vowel identification performance, consistent with previous reports (e.g., Hughes and Abbas, 2006a; Hughes and Stille, 2008; Tang et al., 2011). Note that the masker level was set equal to the probe level for the ECAP recordings in the current study. Thus, the probe levels were equally loud across EL sites, but the masker levels were not equally loud to CI subjects. For the vowel identification test, in contrast, CI subjects used their sound processors with most-comfortable levels generally loudness balanced across ELs. It is therefore possible that holding the masker level equal to the probe level confounded the ability to observe a significant correlation with vowel identification, because the channel interactions that occurred during vowel identification (i.e., electric stimulation set by the sound processor) would have differed from those that occurred for the ECAP recordings due to differences in relative stimulation levels across ELs. The partial correlation analysis between vowel identification and broadband SRD thresholds, while controlling for the effects of physiological channel interactions (or peripheral spatial resolution) reflected by the mean bandwidths of the ECAP channel interaction functions, showed that even after factoring out the effects of physiological channel interactions, the size of the shared variance between vowel identification scores and broadband SRD thresholds remained the same. This observation suggests that (1) the contribution of physiological channel interactions to the relationship between broadband SRD and vowel recognition may be small, and (2) the lack of a significant relationship between physiological channel interactions reflected by ECAPs and vowel identification might be due to the fact that ECAP measures do not reflect central processing. Although the present study did not demonstrate a direct link between the physiological channel interactions (or peripheral spatial resolution) and vowel identification, the significant relationship between narrowband SRD thresholds and ECAP bandwidths for the corresponding probe ELs (Fig. 4) suggests that neural encoding of sound at the level of the cochlea is critical and subsequently affects speech perception outcomes for CI users.

C. Implications for cochlear implant research

There is an increasing awareness of the importance of understanding the biological conditions of local regions of the implanted cochlea (e.g., Pfingst et al., 2011). The present study demonstrated that the narrowband SRD test and the measurement of the bandwidth of ECAP channel interaction functions may provide a quick assessment of local spectral resolution for CI users. An acoustic bandwidth covering five ELs was used to implement the narrowband SRD test in the current study, but it is also feasible to use a smaller bandwidth to make the narrowband SRD test more specific to spectral resolution in a confined region of the cochlea. Such information would be particularly valuable for implementing advanced signal-processing strategies such as current focusing, current steering, or a combination of the two (e.g., Litvak et al., 2003; Bierer, 2010) that aim to improve spectral resolution. For example, to determine the optimal sites for current-focusing techniques, one could evaluate narrowband SRD and direct the focused current to the local regions with better spectral resolution. Conversely, for local regions across the electrode array identified as having poor spectral resolution, sound processing strategies that emphasize temporal information might be the more effective method of stimulation.

A critical barrier to improving clinical outcomes with CIs is the inability to customize CI processing to the unique electro-neural interface of individual users, together with limited knowledge of how patient-related factors contribute to the formation of channel interactions and to speech perception outcomes. Therefore, it will be important for future studies to investigate the relationship between differences in the biological infrastructure of individual patients' implanted ears and the neural coding and perception of sound. For this purpose, an individual CI subject's biological information must be utilized, because the biological conditions in the implanted ears vary substantially among patients and from one stimulation site to another in each patient. In this regard, the current study suggests that the ECAP channel interaction functions would provide important biological information for such future research efforts. In particular, combining ECAP measures with biophysical computational models would provide an opportunity to investigate how patient-specific biological conditions (e.g., fiber diameter, electrode-to-fiber distance, electrode locations) in the implanted ears affect the degree of physiological channel interactions, the neural coding of speech in the auditory-nerve fibers, and speech perception outcomes. Such efforts have already been undertaken (e.g., Woo et al., 2010; Choi and Wang, 2014) and show a promising path for CI research.
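The site-selection idea described above (evaluate narrowband SRD for each local region, then decide where current focusing or temporal-emphasis processing is more appropriate) can be illustrated with a short sketch. The region labels, thresholds, and cutoff below are hypothetical; the study does not prescribe a specific decision rule, so this is only one way such a selection might be organized.

```python
# Hypothetical narrowband SRD thresholds (ripples/octave) for regions
# centered on different probe electrodes; higher values indicate better
# local spectral resolution. Values and the cutoff are invented.
narrowband_srd = {"EL4": 2.8, "EL8": 1.1, "EL12": 2.3, "EL16": 0.7}

GOOD_RESOLUTION_CUTOFF = 1.5  # assumed criterion, not taken from the study

for region, threshold in narrowband_srd.items():
    if threshold >= GOOD_RESOLUTION_CUTOFF:
        plan = "candidate for current focusing (good local spectral resolution)"
    else:
        plan = "emphasize temporal information (poor local spectral resolution)"
    print(f"{region}: SRD = {threshold:.1f} ripples/oct -> {plan}")
```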

V. SUMMARY

(1) Within-subject variability was found in the bandwidths of ECAP channel interaction functions across three to six probe electrodes.
(2) There was no systematic change in performance in the narrowband spectral-ripple discrimination tests within subjects. The pattern of change in performance in the narrowband spectral-ripple discrimination tests also varied substantially across subjects.
(3) Within subjects, narrowband spectral-ripple discrimination performance decreased with increasing bandwidths of ECAP channel interaction functions for five out of seven implanted ears.
(4) Across subjects, bandwidths of ECAP channel interaction functions were significantly correlated with performance in the narrowband spectral-ripple discrimination tests in the same local regions of the cochlea.
(5) Broadband spectral-ripple discrimination was associated with mean bandwidths of ECAP channel interaction functions averaged across three to six probe electrodes, but not with the bandwidth of the ECAP amplitude function about one probe electrode.
(6) The narrowband SRD test and the measure of ECAP channel interaction functions offer an efficient way to evaluate local spectral resolution across the electrode array.

ACKNOWLEDGMENTS

This study was supported by the University of Tennessee Health Science Center, the Hearing Health Foundation, and the Todd M. Bader Grant of the Barbara Epstein Foundation. We would like to thank the cochlear implant subjects who participated in this study for their dedicated efforts. We would also like to thank Dr. Michelle Hughes for her valuable discussions on setting up the ECAP recordings and Dr. Richard Wright for providing the synthesized vowel stimuli for this study.

Abbas, P. J., Hughes, M. L., Brown, C. J., Miller, C. A., and South, H. (2004). "Channel interaction in cochlear implant users evaluated using the electrically evoked compound action potential," Audiol. Neuro-Otol. 9, 203–213.
Anderson, E. S., Nelson, D. A., Kreft, H., Nelson, P. B., and Oxenham, A. J. (2011). "Comparing spatial tuning curves, spectral ripple resolution, and speech perception in cochlear implant users," J. Acoust. Soc. Am. 130, 364–375.
Azadpour, M., and McKay, C. M. (2012). "A psychophysical method for measuring spatial resolution in cochlear implants," J. Assoc. Res. Otolaryngol. 13, 145–157.
Benjamini, Y., and Hochberg, Y. (1995). "Controlling the false discovery rate: A practical and powerful approach to multiple testing," J. R. Stat. Soc. B 57, 289–300.
Bierer, J. A. (2010). "Probing the electrode-neuron interface with focused cochlear implant stimulation," Trends Amplif. 14, 84–95.
Bingabr, M., Espinoza-Varas, B., and Loizou, P. C. (2008). "Simulating the effect of spread of excitation in cochlear implants," Hear. Res. 241, 73–79.
Boëx, C., de Balthasar, C., Kos, M. I., and Pelizzone, M. (2003). "Electrical field interactions in different cochlear implant systems," J. Acoust. Soc. Am. 114, 2049–2057.
Choi, C. T. M., and Wang, S. P. (2014). "Modeling ECAP in cochlear implants using the FEM and equivalent circuits," IEEE Trans. Magn. 50(2), 49–52.
Clark, G. M., Clark, J., Cardamone, T., Clarke, M., Nielsen, P., Jones, R., Arhatari, B., Birbilis, N., Curtain, R., Xu, J., Wagstaff, S., Gibson, P., O'Leary, S., and Furness, J. (2014). "Biomedical studies on temporal bones of the first multi-channel cochlear implant patient at the University of Melbourne," Cochlear Implants Int. 15, Suppl. 2, S1–S15.
Cohen, L. T., Richardson, L. M., Saunders, E., and Cowan, R. S. (2003). "Spatial spread of neural excitation in cochlear implant recipients: Comparison of improved ECAP method and psychophysical forward masking," Hear. Res. 179(1–2), 72–87.


Donaldson, G. S., and Nelson, D. A. (2000). "Place-pitch sensitivity and its relation to consonant recognition by cochlear implant listeners using the MPEAK and SPEAK speech processing strategies," J. Acoust. Soc. Am. 107, 1645–1658.
Drennan, W. R., Won, J. H., Nie, K., Jameyson, E., and Rubinstein, J. T. (2010). "Sensitivity of psychophysical measures to signal processor modifications in cochlear implant users," Hear. Res. 262, 1–8.
Fayad, J. N., and Linthicum, F. H., Jr. (2006). "Multichannel cochlear implants: Relation of histopathology to performance," Laryngoscope 116(8), 1310–1320.
Fisher, R. A. (1921). "On the 'probable error' of a coefficient of correlation deduced from a small sample," Metron 1, 3–32.
Friesen, L. M., Shannon, R. V., Baskent, D., and Wang, X. (2001). "Speech recognition in noise as a function of the number of spectral channels: Comparison of acoustic hearing and cochlear implants," J. Acoust. Soc. Am. 110, 1150–1163.
Garadat, S. N., Zwolan, T. A., and Pfingst, B. E. (2012). "Across-site patterns of modulation detection: Relation to speech recognition," J. Acoust. Soc. Am. 131, 4030–4041.
Golub, J. S., Won, J. H., Drennan, W. R., Worman, T. D., and Rubinstein, J. T. (2012). "Spectral and temporal measures in hybrid cochlear implant users: On the mechanism of electroacoustic hearing benefits," Otol. Neurotol. 33(2), 147–153.
Henry, B. A., McKay, C. M., McDermott, H. J., and Clark, G. M. (2000). "The relationship between speech perception and electrode discrimination in cochlear implantees," J. Acoust. Soc. Am. 108, 1269–1280.
Henry, B. A., and Turner, C. W. (2003). "The resolution of complex spectral patterns by cochlear implant and normal-hearing listeners," J. Acoust. Soc. Am. 113, 2861–2873.
Henry, B. A., Turner, C. W., and Behrens, A. (2005). "Spectral peak resolution and speech recognition in quiet: Normal hearing, hearing impaired, and cochlear implant listeners," J. Acoust. Soc. Am. 118, 1111–1121.
Hughes, M. L. (2012). "Electrically evoked compound action potential," in Objective Measures in Cochlear Implants, edited by M. L. Hughes (Plural Publishing, San Diego, CA), Chap. 7, pp. 101–121.
Hughes, M. L., and Abbas, P. J. (2006a). "The relation between electrophysiologic channel interaction and electrode pitch ranking in cochlear implant recipients," J. Acoust. Soc. Am. 119(3), 1527–1537.
Hughes, M. L., and Abbas, P. J. (2006b). "Electrophysiologic channel interaction, electrode pitch ranking, and behavioral threshold in straight versus perimodiolar cochlear implant electrode arrays," J. Acoust. Soc. Am. 119, 1538–1547.
Hughes, M. L., and Stille, L. J. (2008). "Psychophysical versus physiological spatial forward masking and the relation to speech perception in cochlear implants," Ear Hear. 29(3), 435–452.
Hughes, M. L., and Stille, L. J. (2010). "Effect of stimulus and recording parameters on spatial spread of excitation and masking patterns obtained with the electrically evoked compound action potential in cochlear implants," Ear Hear. 31, 679–692.
Johnson, K., Flemming, E., and Wright, R. (1993). "The hyperspace effect: Phonetic targets are hyperarticulated," Language 69, 505–528.
Jones, G. L., Won, J. H., Drennan, W. R., and Rubinstein, J. T. (2013). "Relationship between channel interaction and spectral-ripple discrimination in cochlear implant users," J. Acoust. Soc. Am. 133, 425–433.
Khan, A. M., Handzel, O., Burgess, B. J., Damian, D., Eddington, D. K., and Nadol, J. B. (2005). "Is word recognition correlated with the number of surviving spiral ganglion cells and electrode insertion depth in human subjects with cochlear implants?," Laryngoscope 115, 672–677.
Klatt, D. H., and Klatt, L. C. (1990). "Analysis, synthesis, and perception of voice quality variations among female and male talkers," J. Acoust. Soc. Am. 87, 820–857.


Lilliefors, H. (1967). "On the Kolmogorov-Smirnov test for normality with mean and variance unknown," J. Am. Stat. Assoc. 62, 399–402.
Litvak, L. M., Krubsack, D. A., and Overstreet, E. H. (2003). "Method and system to convey the within-channel fine structure with a cochlear implant," Patent US7317945, Advanced Bionics Corporation.
Long, C. J., Holden, T. A., McClelland, G. H., Parkinson, W. S., Shelton, C., Kelsall, D. C., and Smith, Z. M. (2014). "Examining the electro-neural interface of cochlear implant users using psychophysics, CT scans, and speech understanding," J. Assoc. Res. Otolaryngol. 15, 293–304.
Lopez Valdes, A., McLaughlin, M., Viani, L., Walshe, P., Smith, J., Zeng, F. G., and Reilly, R. B. (2014). "Objective assessment of spectral ripple discrimination in cochlear implant listeners using cortical evoked responses to an oddball paradigm," PLoS One 9(3), e90044.
Nearey, T. M. (1989). "Static, dynamic, and relational properties in vowel perception," J. Acoust. Soc. Am. 85, 2088–2113.
Noble, J. H., Labadie, R. F., Gifford, R. H., and Dawant, B. M. (2013). "Image-guidance enables new methods for customizing cochlear implant stimulation strategies," IEEE Trans. Neural Syst. Rehabil. Eng. 21, 820–829.
Pfingst, B. E., Bowling, S. A., Colesa, D. J., Garadat, S. N., Raphael, Y., Shibata, S. B., Strahl, S. B., Su, G. L., and Zhou, N. (2011). "Cochlear infrastructure for electrical hearing," Hear. Res. 281, 65–73.
Saunders, E., Cohen, L., Aschendorff, A., Shapiro, W., Knight, M., Stecker, M., Richter, B., Waltzman, S., Tykocinski, M., Roland, T., Laszig, R., and Cowan, R. (2002). "Threshold, comfortable level and impedance changes as a function of electrode-modiolar distance," Ear Hear. 23, 28S–40S.
Shannon, R. V., Galvin, J. J., and Başkent, D. (2002). "Holes in hearing," J. Assoc. Res. Otolaryngol. 3, 185–199.
Shepherd, R. K., Matsushima, J., Martin, R. L., and Clark, G. M. (1994). "Cochlear pathology following chronic electrical stimulation of the auditory nerve: II. Deafened kittens," Hear. Res. 81, 150–166.
Supin, A., Popov, V. V., Milekhina, O. N., and Tarakanov, M. B. (1994). "Frequency resolving power measured by rippled noise," Hear. Res. 78, 31–40.
Tang, Q., Benitez, R., and Zeng, F. G. (2011). "Spatial channel interactions in cochlear implants," J. Neural Eng. 8, 046029.
Wilson, B. S., and Dorman, M. F. (2008). "Cochlear implants: A remarkable past and a brilliant future," Hear. Res. 242, 3–21.
Won, J. H., Clinard, C. G., Kwon, S. Y., Dasika, V. K., Nie, K., Drennan, W. R., Tremblay, K. L., and Rubinstein, J. T. (2011a). "Relationship between behavioral and physiologic spectral-ripple discrimination," J. Assoc. Res. Otolaryngol. 12, 375–393.
Won, J. H., Drennan, W. R., Kang, R. S., and Rubinstein, J. T. (2010). "Psychoacoustic abilities associated with music perception in cochlear implant users," Ear Hear. 31, 796–805.
Won, J. H., Drennan, W. R., and Rubinstein, J. T. (2007). "Spectral-ripple resolution correlates with speech reception in noise in cochlear implant users," J. Assoc. Res. Otolaryngol. 8, 384–392.
Won, J. H., Jones, G. L., Drennan, W. R., Jameyson, E. M., and Rubinstein, J. T. (2011b). "Evidence of across-channel processing for spectral-ripple discrimination in cochlear implant listeners," J. Acoust. Soc. Am. 130, 2088–2097.
Woo, J., Miller, C. A., and Abbas, P. J. (2010). "The dependence of auditory nerve rate adaptation on electric stimulus parameters, electrode position, and fiber diameter: A computer model study," J. Assoc. Res. Otolaryngol. 11(2), 283–296.
Xu, L., Thompson, C. S., and Pfingst, B. E. (2005). "Relative contributions of spectral and temporal cues for phoneme recognition," J. Acoust. Soc. Am. 117, 3255–3267.
Zeng, F.-G., Rebscher, S., Harrison, W., Sun, X., and Feng, H. (2008). "Cochlear implants: System design, integration, and evaluation," IEEE Rev. Biomed. Eng. 1, 115–142.
