Hearing Research 326 (2015) 66–74


Research paper

Effects of steep high-frequency hearing loss on speech recognition using temporal fine structure in low-frequency region

Bei Li a, Limin Hou b, Li Xu c, Hui Wang a, Guang Yang a, Shankai Yin a,*, Yanmei Feng a,*

a Department of Otolaryngology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai 200233, China
b College of Communication and Information Engineering, Shanghai University, Shanghai 200000, China
c School of Rehabilitation and Communication Sciences, Ohio University, Athens, OH 45701, USA

Article info

Article history: Received 20 September 2014; Received in revised form 6 April 2015; Accepted 9 April 2015; Available online 25 April 2015

Abstract

The present study examined the effects of steep high-frequency sensorineural hearing loss (SHF-SNHL) on speech recognition using acoustic temporal fine structure (TFS) in the low-frequency region where absolute thresholds appeared to be normal. In total, 28 participants with SHF-SNHL were assigned to 3 groups according to the cut-off frequency (1, 2, and 4 kHz, respectively) of their pure-tone absolute thresholds. Fourteen age-matched normal-hearing (NH) individuals were enrolled as controls. For each Mandarin sentence, the acoustic TFS in 10 frequency bands (each 3 ERB wide) was extracted using the Hilbert transform and was further lowpass filtered at 1, 2, or 4 kHz. Speech recognition scores were compared among the NH and 1-, 2-, and 4-kHz SHF-SNHL groups using stimuli with varying bandwidths. Results showed that speech recognition with the same TFS-speech stimulus bandwidth differed significantly across groups and between filtering conditions. Sentence recognition in quiet was better than that in noise. Compared with the NH participants, nearly all the SHF-SNHL participants showed significantly poorer sentence recognition within their frequency regions with "normal hearing" (defined clinically by normal absolute thresholds) in both quiet and noisy conditions. These deficits may result from disrupted auditory nerve function in the "normal hearing" low-frequency regions. © 2015 Elsevier B.V. All rights reserved.

1. Introduction

Sensorineural hearing loss (SNHL) is one of the most common types of hearing loss. About one-third of people over 65 years of age suffer from some degree of SNHL (WHO, 2014). Individuals with SNHL often complain of speech recognition difficulties, especially in noisy backgrounds. Decreased audibility provides at least part of the explanation. However, there is increasing evidence that supra-threshold processing deficits may also contribute to the speech perception problem (Feng et al., 2010; Horwitz et al., 2002; Leger et al., 2012; Lorenzi et al., 2009; Yin et al., 2008).

Abbreviations: ANOVA, analysis of variance; E, envelope; ERB, equivalent rectangular bandwidth; MHINT, Mandarin version of Hearing in Noise Test; NH, normal-hearing; SHF-SNHL, steep high-frequency sensorineural hearing loss; SNHL, sensorineural hearing loss; TFS, temporal fine structure.
* Corresponding authors. Tel.: +86 21 64834143. E-mail addresses: [email protected] (S. Yin), [email protected] (Y. Feng).
http://dx.doi.org/10.1016/j.heares.2015.04.004

This is supported by the fact that individuals whose auditory sensitivity is restored via hearing-aid amplification still show speech perception problems (Kerber and Seeber, 2012; Stephens et al., 1996). However, the interpretation of such results in terms of supra-threshold auditory processing deficits is often limited by a mismatch between the effective listening bandwidth (the frequency region where absolute thresholds are within the "normal" range) and the stimulus bandwidth (the frequency region covered by the acoustic signal without attenuation). The degree of hearing loss varies across frequency, so the effective listening bandwidth of SNHL listeners differs from that of normal-hearing (NH) listeners. To clarify their specific contributions, stimulus bandwidth and listening bandwidth should therefore be studied with appropriate controls.

Steep high-frequency SNHL (SHF-SNHL), with normal auditory sensitivity in the low-frequency region, is a special case of SNHL. The absolute pure-tone thresholds at and below the cut-off frequency (the highest frequency with a normal threshold in the pure-tone audiogram) are normal but deteriorate markedly beyond the cut-off frequency. This provides an ideal model for studying supra-threshold processing ability more rigorously, independent of the effect of elevated thresholds.


When the signals are limited to the effective listening bandwidth, the absolute pure-tone thresholds of SHF-SNHL participants are the "same" as those of NH participants. Thus, recognition performance may reflect solely the contribution of supra-threshold auditory processing skills. Results from previous studies demonstrate that there are indeed supra-threshold processing deficits in the frequency region where absolute thresholds are within normal limits, as shown by poorer-than-normal temporal, frequency, and intensity discrimination thresholds (Florentine et al., 1993; Leger et al., 2012; Lorenzi et al., 2009; Nelson and Freyman, 1986; Schroder et al., 1994; Simon and Yund, 1993). However, most of these studies used non-linguistic signals, such as tones or noise, and the relationship between these psychoacoustic measures and speech perception remains unclear.

Any acoustic signal can be decomposed into an envelope (E) and a temporal fine structure (TFS). E comprises the relatively slow variations in amplitude over time, whereas TFS comprises the rapid oscillations with a rate close to the center frequency of the band (Moore, 2008). Unless otherwise indicated, "E" and "TFS" refer specifically to the acoustic E and acoustic TFS in this paper. Several studies have shown that TFS information is important for speech perception, especially in a noisy background (Hopkins and Moore, 2009; Hopkins et al., 2008), although a recent study has questioned this (Apoux and Healy, 2013). In view of the important role of TFS in speech recognition and the speech perception problems of SNHL participants in noisy conditions, it has been assumed that TFS perception problems may exist in SNHL participants (Gnansia et al., 2009; Moore, 2008). This has been confirmed in many studies showing that SNHL listeners are less able to take advantage of TFS cues than NH individuals (Ardoint et al., 2010; Bernstein and Brungart, 2011; Hopkins and Moore, 2011; Hopkins et al., 2008; Moore, 2008; Strelcyk and Dau, 2009). Further studies have shown that high-frequency hearing loss may reduce TFS sensitivity even in frequency regions where absolute thresholds are within normal limits (Leger et al., 2012; Lorenzi et al., 2009). However, these studies did not separate the effects of hearing loss and age, both of which are thought to be important for speech recognition using TFS.

Thus, the first goal of this study was to explore whether there are TFS processing deficits in the low-frequency region with seemingly normal auditory sensitivity in more stringently selected SHF-SNHL participants. Importantly, the separate effects of the stimulus bandwidth and the listening bandwidth on speech perception using TFS were examined. SHF-SNHL participants with varying effective listening bandwidths (i.e., audiogram cut-off frequencies of 1, 2, and 4 kHz) were recruited to study the effect of listening bandwidth, and TFS stimuli lowpass filtered at different frequencies (1, 2, and 4 kHz) were used to study the effect of stimulus bandwidth. To avoid interactions between the listening bandwidth and the stimulus bandwidth, only stimuli whose lowpass cut-off frequency was lower than or equal to the audiogram cut-off frequency of the SHF-SNHL participant were used. TFS cues were extracted from Mandarin sentences using the Hilbert transform to form "TFS-speech". TFS-speech recognition scores were compared between the SHF-SNHL groups and age-matched NH participants. Individual and mean psychometric functions for TFS-speech recognition were also plotted.
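To make the E/TFS decomposition referred to above concrete, the following is a minimal sketch of how a single band-limited signal is split into its Hilbert envelope and temporal fine structure. It is an illustrative re-expression in Python (the processing in this study was done in MATLAB), not the authors' code.

```python
# Minimal sketch: decomposing one band-limited signal into envelope (E) and
# temporal fine structure (TFS) via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

def envelope_and_tfs(band_signal):
    """Return (E, TFS) of a single analysis band."""
    analytic = hilbert(band_signal)        # analytic signal x(t) + j*H{x}(t)
    envelope = np.abs(analytic)            # E: slow amplitude variations
    tfs = np.cos(np.angle(analytic))       # TFS: rapid, unit-amplitude oscillations
    return envelope, tfs

# Example: a 1-kHz tone with a 4-Hz amplitude modulation, sampled at 16 kHz.
fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
x = (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
E, TFS = envelope_and_tfs(x)
# E approximates the 4-Hz modulator; TFS approximates the 1-kHz carrier.
```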
The second goal of this study was to extend the investigation of the effects of hearing loss on speech perception to a tonal language, Mandarin. Mandarin has four distinctive tones that are characterized by syllable-level fundamental frequency contour patterns. Previous studies indicated that, compared with E information, TFS cues play a dominant role in Mandarin tone recognition (Kong and Zeng, 2006; Wang et al., 2011; Xu and Pfingst, 2003).


Moreover, recent work suggests that language experience shapes the ability to use E and TFS cues (Cabrera et al., 2014), and adding TFS cues also yielded significant improvements in speech and tone recognition in Mandarin-speaking users of cochlear implants (Chen et al., 2013), although the benefit remains debated (Han et al., 2009; Schatzer et al., 2010). Notably, hearing-impaired listeners showed remarkable deficits in using TFS cues for tone perception (Wang et al., 2011). Together with previous work (Xu and Pfingst, 2003), these findings suggest that the perception of TFS cues is critical for Mandarin recognition and that cochlear damage may have a stronger detrimental impact on the ability to process TFS cues in listeners of a tonal language. However, there has been little research on Mandarin sentence recognition using TFS cues to date. Thus, we specifically examined whether and to what degree SHF-SNHL affected recognition of lowpass filtered Mandarin sentences using TFS cues.

2. Materials and methods

2.1. Participants

In total, 42 individuals were recruited to participate in the study: 28 SHF-SNHL participants and 14 NH individuals. The SHF-SNHL participants were recruited from the Department of Otolaryngology at Shanghai Jiao Tong University Affiliated Sixth People's Hospital. The NH individuals were recruited from the staff at the same hospital. The study was approved by the Ethics Committee of the Affiliated Sixth People's Hospital of Shanghai Jiao Tong University. Signed consent forms were provided by all participants before starting the experiments.

All participants were native Mandarin-speaking listeners. The NH participants had pure-tone thresholds of 25 dB HL or less at octave frequencies between 250 and 8000 Hz in both ears. Participants with SHF-SNHL were selected rigorously using the following criteria: 1) symmetrical SNHL (i.e., differences in absolute thresholds between ears ≤ 15 dB at all frequencies) for more than 6 months; 2) pure-tone thresholds ≤ 25 dB HL at and below the audiogram cut-off frequency; 3) slope of hearing loss ≥ 15 dB/octave above the audiogram cut-off frequency; and 4) type A or Ad tympanogram. The SHF-SNHL participants were assigned to one of three groups according to the cut-off frequency of their audiograms (i.e., the 1-, 2-, and 4-kHz groups, with effective listening bandwidths of 1, 2, and 4 kHz, respectively). The individual and average thresholds of the test ear for the SHF-SNHL and NH participants are shown in Table 1, and the group mean thresholds are shown in Fig. 1.

2.2. Stimuli and procedures

The original speech material was the Mandarin version of the Hearing in Noise Test (MHINT) (Wong et al., 2007). The MHINT contains 14 lists; each list contains 20 sentences, and each sentence contains 10 key words. Scores are expressed as the percentage of key words repeated correctly. The MHINT sentences were recorded by a male speaker. Each MHINT sentence was first bandpass filtered, using zero-phase, third-order Butterworth filters, into 10 adjacent frequency bands, each 3 equivalent rectangular bandwidths (ERB) wide, spanning a frequency range of 80–8858 Hz. The cut-off frequencies of the bands were 80, 205, 372, 596, 899, 1315, 1893, 2716, 3924, 5782, and 8858 Hz. The Hilbert transform was applied to the signal in each band to decompose it into E and TFS. The TFS of each band was multiplied by the root-mean-square power in that band. The "power-weighted" TFS signals were then summed over the 10 frequency bands to form the so-called "TFS-speech" signal.
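The sketch below illustrates the TFS-speech generation just described, assuming a sampled sentence and its sampling rate as inputs. The band edges are those listed in the text; the use of filtfilt for zero-phase filtering and an 8th-order Butterworth as a stand-in for the steep (216 dB/oct) output lowpass filter are assumptions, and this is not the authors' MATLAB code.

```python
# Illustrative sketch of the TFS-speech generation (Section 2.2).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

BAND_EDGES = [80, 205, 372, 596, 899, 1315, 1893, 2716, 3924, 5782, 8858]  # Hz

def tfs_speech(x, fs, lowpass_hz):
    """Build 'TFS-speech' and restrict it to a given stimulus bandwidth.

    fs must exceed 2 * 8858 Hz so that the highest band edge is below Nyquist.
    """
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(BAND_EDGES[:-1], BAND_EDGES[1:]):
        # Band-pass into one 3-ERB-wide band; filtfilt gives zero-phase filtering.
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        band = filtfilt(b, a, x)
        analytic = hilbert(band)
        tfs = np.cos(np.angle(analytic))      # TFS of the band
        rms = np.sqrt(np.mean(band ** 2))     # root-mean-square power weighting
        out += rms * tfs                      # sum weighted TFS over the 10 bands
    # Lowpass to the desired stimulus bandwidth (1, 2, or 4 kHz); an 8th-order
    # Butterworth is a stand-in for the steep filter described in the paper.
    b, a = butter(8, lowpass_hz / (fs / 2), btype="lowpass")
    return filtfilt(b, a, out)

# Hypothetical usage with a sentence sampled at 22.05 kHz:
# y = tfs_speech(sentence, fs=22050, lowpass_hz=2000)
```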


Table 1
Age and audiometric thresholds for each participant.

Listener      Group         Age (years)   Audiometric thresholds (dB HL) at each frequency (kHz)
                                           0.25   0.5    1      2      4      8
NH01          NH            33             10      0      5      5      5     10
NH02                        28              5      5     10      5     10     10
NH03                        30             10     10     10      5     15     15
NH04                        56             25     20     20     10      5     20
NH05                        44             15     10     10      5      5     15
NH06                        59             10     20     20     25     20     25
NH07                        56             15     10     15     10     15     20
NH08                        60             20     15     10     15     20     25
NH09                        64             25     15     15     20     25     15
NH10                        53             20     10     15     10     25     15
NH11                        50             15     10     15     20     15     10
NH12                        48             10     15     15     20     15     20
NH13                        53             10     20     15     25     15     20
NH14                        65             15     15     15     20     20     20
Mean                        49.93          14.64  12.5   13.57  13.21  14.29  17.14

SHF-SNHL01    4 kHz group   54              0      5     10     10     25     45
SHF-SNHL02                  42             10     10     15     25     20     30
SHF-SNHL03                  38             10     15     15     15     20     35
SHF-SNHL04                  54             10     10     10     15     25     35
SHF-SNHL05                  60              0      0     15     10     20     30
SHF-SNHL06                  36              5      5     10     15      5     30
SHF-SNHL07                  43             25     15     10     25     25     35
SHF-SNHL08                  28              5     10      5     10     15     30
SHF-SNHL09                  52             20     20     15     15     25     40
SHF-SNHL10                  56             15     25     25     25     25     45
SHF-SNHL11                  55              5     10     20     25     20     40
Mean                        47.09           9.55  11.36  13.64  17.27  20.45  35.91

SHF-SNHL12    2 kHz group   56             25     25     15     20     30     35
SHF-SNHL13                  62             25     15     20     25     35     45
SHF-SNHL14                  33             25     25     15     20     30     30
SHF-SNHL15                  36             20     15     20     20     30     35
SHF-SNHL16                  63             20     15     15     10     30     30
SHF-SNHL17                  45              0      0      5      5     30     45
SHF-SNHL18                  48              5      5      5      5     35     40
SHF-SNHL19                  68             20     25     15     20     60     60
SHF-SNHL20                  32              5     10     10     10     65     85
Mean                        49.22          16.11  15     13.33  15     38.33  45

SHF-SNHL21    1 kHz group   58             10     10     20     35     35     50
SHF-SNHL22                  51             25     25     25     35     55     70
SHF-SNHL23                  41              5      5     15     30     35     45
SHF-SNHL24                  63             10     10     15     35     40     55
SHF-SNHL25                  70             15     15     15     35     50     70
SHF-SNHL26                  25             15      5     25     40     65     55
SHF-SNHL27                  45             15     10     15     35     45     60
SHF-SNHL28                  50             15     15     10     30     50     70
Mean                        50.38          13.75  11.88  17.5   34.38  46.88  59.38

NH01 = normal-hearing participant No. 1, and so on. SHF-SNHL01 = steep high-frequency sensorineural hearing loss participant No. 1, and so on.

The resulting stimuli were lowpass filtered at 1, 2, or 4 kHz (the 1-, 2-, and 4-kHz stimulus bandwidths; filter slope 216 dB/oct) to restrict the spectrum to the effective listening bandwidths of the SHF-SNHL participants.

The possible contribution of E cues recovered from the acoustic TFS cues through the differential attenuation produced by cochlear filters (e.g., Ghitza, 2001; Gilbert and Lorenzi, 2006; Zeng et al., 2004) was assessed in a pre-experiment. The TFS-speech was passed through a bank of 30 gammatone auditory filters, each 1 ERB wide (Irino and Patterson, 1997), with center frequencies ranging from 90 to 7065 Hz. In each band, the E was extracted using the Hilbert transform and lowpass filtered (cut-off frequency = ERB/2, 62 dB/oct slope) using a Butterworth filter. The E was then used to amplitude-modulate a sinusoid at the center frequency of the original band, but with a random starting phase. Reconstructed E-speech was produced by summing the 30 amplitude-modulated sinusoids. Our preliminary experiments showed that this reconstructed E-speech was unidentifiable: for each of the 8 NH participants in the pre-experiment, the recognition score was 0. All signal processing was performed with MATLAB (ver. 7.0).

The TFS-speech was presented at 75 dB (A) in both quiet and noisy conditions and was delivered unilaterally through Sennheiser HD580 headphones, with a masker (speech-shaped noise with the same long-term spectrum as the MHINT sentences) presented to the contralateral ear at 45 dB (A). For sentence recognition in noise, the speech-shaped noise was presented at 70 dB (A) in the same ear as the speech signal. The noise began 500 ms before the sentence and continued for 500 ms after the sentence had finished. Participants had no experience with any of these tests before this study.
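As a rough illustration of the recovered-envelope check described in the pre-experiment above, the sketch below reconstructs "E-speech" from a TFS-speech signal. The 1-ERB gammatone filters are approximated here by 1-ERB-wide Butterworth bandpass filters, and the ERB-rate spacing of the 30 center frequencies is an assumption (the paper states only the 90–7065 Hz range); this is not the authors' MATLAB implementation.

```python
# Sketch of the pre-experiment check on envelope cues recovered from TFS-speech.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def erb(f):                      # ERB width (Glasberg & Moore formula)
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def erb_number(f):               # frequency -> ERB-number scale
    return 21.4 * np.log10(0.00437 * f + 1.0)

def inv_erb_number(e):           # ERB-number scale -> frequency
    return (10.0 ** (e / 21.4) - 1.0) / 0.00437

def reconstructed_e_speech(tfs_sig, fs, n_bands=30, f_lo=90.0, f_hi=7065.0):
    """Recover band envelopes from TFS-speech and re-impose them on tones."""
    centres = inv_erb_number(np.linspace(erb_number(f_lo), erb_number(f_hi), n_bands))
    rng = np.random.default_rng(0)
    t = np.arange(len(tfs_sig)) / fs
    out = np.zeros_like(tfs_sig, dtype=float)
    for fc in centres:
        lo, hi = fc - erb(fc) / 2.0, fc + erb(fc) / 2.0
        # 1-ERB-wide bandpass as a stand-in for a gammatone filter.
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        band = filtfilt(b, a, tfs_sig)
        env = np.abs(hilbert(band))                       # Hilbert envelope
        b_lp, a_lp = butter(4, (erb(fc) / 2.0) / (fs / 2), btype="lowpass")
        env = filtfilt(b_lp, a_lp, env)                   # smooth to ERB/2
        phase = rng.uniform(0, 2 * np.pi)                 # random starting phase
        out += env * np.sin(2 * np.pi * fc * t + phase)   # amplitude-modulated tone
    return out
```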

Before the formal tests, participants practiced as many times as they wished and were given feedback to become familiar with the processed stimuli. In the formal tests, they were allowed to replay each sentence as many times as they wished and were instructed to repeat the sentence as accurately as possible. No feedback was provided in the formal tests. Each SHF-SNHL participant was tested only with lowpass filtered TFS-speech whose power spectrum fell within the participant's effective listening bandwidth. For example, the 4-kHz group was tested using TFS-speech with 1-, 2-, and 4-kHz stimulus bandwidths, in quiet and in noise separately; the 2-kHz group was tested using TFS-speech with 1- and 2-kHz stimulus bandwidths; and the 1-kHz group was tested only with the 1-kHz stimulus bandwidth. The test items for the groups are listed in Table 2. The complete set of tests required approximately 1–2 h and was typically completed in several sessions, each lasting less than 1 h. The order of the sentence recognition tests using TFS was randomized across participants to avoid potential order effects.

3. Results

3.1. Absolute pure-tone thresholds within the effective listening bandwidth for the SHF-SNHL and NH groups

Mean absolute pure-tone thresholds were averaged across the effective listening bandwidth for the SHF-SNHL groups. The mean thresholds were 14.38 ± 4.79, 14.86 ± 7.51, and 14.45 ± 5.01 dB HL for the 1-, 2-, and 4-kHz SHF-SNHL groups, respectively. The mean threshold across all tested frequencies was 14.23 ± 4.75 dB HL for the NH group. One-way analysis of variance (ANOVA) indicated that the mean thresholds within the effective listening bandwidth did not differ significantly across groups (F(3, 38) = 0.025; p = 0.995).

Fig. 1. Mean auditory thresholds (dB HL) for the normal-hearing (NH) group and for the steep high-frequency sensorineural hearing loss (SHF-SNHL) groups with 4-, 2-, and 1-kHz audiogram cut-off frequencies.

3.2. Matched ages between NH and SHF-SNHL groups

Age was 49.93 ± 12.11, 50.38 ± 13.96, 49.22 ± 13.72, and 47.09 ± 10.20 years for the NH group and the 1-, 2-, and 4-kHz SHF-SNHL groups, respectively. A one-way ANOVA showed that age did not differ significantly across groups (F(3, 38) = 0.146; p = 0.931).
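The group comparisons in Sections 3.1 and 3.2 are plain one-way ANOVAs across the four groups. A sketch of such a check is shown below, using the ages listed in Table 1 as the input vectors; the function and variable names are illustrative and not from the paper.

```python
# Sketch of the one-way ANOVA used to verify that groups were matched.
from scipy.stats import f_oneway

def group_comparison(nh, g4k, g2k, g1k):
    """One-way ANOVA across the NH and 4-, 2-, 1-kHz SHF-SNHL groups."""
    f_stat, p_value = f_oneway(nh, g4k, g2k, g1k)
    return f_stat, p_value

# Ages taken from Table 1:
ages_nh = [33, 28, 30, 56, 44, 59, 56, 60, 64, 53, 50, 48, 53, 65]
ages_4k = [54, 42, 38, 54, 60, 36, 43, 28, 52, 56, 55]
ages_2k = [56, 62, 33, 36, 63, 45, 48, 68, 32]
ages_1k = [58, 51, 41, 63, 70, 25, 45, 50]
print(group_comparison(ages_nh, ages_4k, ages_2k, ages_1k))
# The paper reports F(3, 38) = 0.146, p = 0.931 for the age comparison.
```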

Table 2
TFS-speech items tested in the various groups.

Group          Stimulus bandwidth of the TFS-speech
               1 kHz    2 kHz    4 kHz
NH group         √        √        √
4 kHz group      √        √        √
2 kHz group      √        √
1 kHz group      √
3.3. Speech recognition as a function of stimulus bandwidth

A logistic function was fitted for each individual subject to facilitate visualization and comparison of the psychometric functions of the TFS-speech recognition scores at the different stimulus bandwidths. Only data from the NH and 4-kHz SHF-SNHL groups were analyzed here, because only these listeners were tested with TFS-speech of 1-, 2-, and 4-kHz stimulus bandwidths. Psychometric functions were plotted for the individual and mean TFS-speech recognition scores as a function of the stimulus bandwidth of the TFS-speech (Figs. 2 and 3). Generally, the averaged psychometric function for the NH group lay above that for the 4-kHz SHF-SNHL group in both quiet and noisy conditions. However, as the stimulus bandwidth decreased from 4 to 1 kHz, the distance between the two psychometric functions decreased in both quiet and noisy conditions; this change was more obvious in the noisy condition.

The slope of the psychometric function at its midpoint (%/kHz) was also compared between the two groups using an independent two-sample t-test. The variances of the two groups were homogeneous in both quiet (F = 4.229, p = 0.051) and noisy (F = 1.727, p = 0.202) conditions, and the slopes of the psychometric functions of the NH and 4-kHz SHF-SNHL groups differed significantly in both quiet (t = 3.185, p = 0.004) and noisy (t = 4.963, p < 0.001) conditions.
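The paper does not give the exact parameterization of its logistic fits; the sketch below assumes a standard two-parameter logistic in percent correct and shows how a midpoint slope (%/kHz) could be derived from the fitted parameters. The example input values are hypothetical, not data from the study.

```python
# Sketch of fitting a logistic psychometric function to per-listener scores
# versus stimulus bandwidth (Section 3.3). Parameterization is an assumption.
import numpy as np
from scipy.optimize import curve_fit

def logistic(bw_khz, midpoint, slope):
    """Percent correct as a function of stimulus bandwidth (kHz)."""
    return 100.0 / (1.0 + np.exp(-slope * (bw_khz - midpoint)))

def fit_listener(bandwidths_khz, scores_pct):
    (midpoint, slope), _ = curve_fit(
        logistic, bandwidths_khz, scores_pct, p0=[2.0, 1.0], maxfev=10000)
    # Slope of the fitted function at its midpoint, in %/kHz:
    # d/dx of 100/(1+exp(-k(x-m))) evaluated at x = m equals 25*k.
    return midpoint, 25.0 * slope

# Hypothetical usage for one listener tested at 1-, 2-, and 4-kHz bandwidths:
# mid, slope_at_mid = fit_listener(np.array([1, 2, 4]), np.array([20, 55, 85]))
```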


Fig. 2. Individual psychometric functions for the normal-hearing (NH) group and the 4-kHz steep high-frequency sensorineural hearing loss (SHF-SNHL) group. Scores in quiet are shown in the left panel and those in noise in the right panel. The individual data of the NH group are shown in cyan solid lines, whereas those shown in magenta dashed lines are the individual data of the 4-kHz SHF-SNHL group.

Fig. 3. Mean psychometric functions for the normal-hearing (NH) group and the 4-kHz steep high-frequency sensorineural hearing loss (SHF-SNHL) group. Scores in quiet are shown in the left panel and those in noise in the right panel. The mean data for the NH group and the 4-kHz SHF-SNHL group are shown in cyan and magenta lines, respectively.

3.4. Speech recognition of the NH and SHF-SNHL groups

The TFS-speech recognition scores of each group at the various stimulus bandwidths are shown in Fig. 4. Generally, the TFS-speech recognition scores in quiet were better than those in noise, and the scores decreased as the stimulus bandwidth decreased from 4 to 1 kHz for all groups. The TFS-speech recognition scores of the NH group were better than those of the SHF-SNHL groups in both quiet and noisy conditions.

The percent correct scores were arcsine-transformed to avoid non-uniform variance in the raw scores (Studebaker, 1985). A two-way ANOVA was performed separately within each TFS-speech stimulus bandwidth (three times) to test the effects of the group factor (listening bandwidth) and the filtering condition factor (quiet vs. noise) on sentence recognition. The main-factor model was used because the interaction between the two factors was not significant (F(1, 46) = 2.725, p = 0.106; F(2, 62) = 0.095, p = 0.910; F(3, 76) = 0.086, p = 0.968 for the 4-, 2-, and 1-kHz stimulus bandwidth TFS-speech, respectively). The F and p values for the main-factor model are shown in Table 3. The analysis showed that, for each TFS-speech stimulus bandwidth, the group (listening bandwidth) and the filtering condition both significantly affected sentence recognition. Post hoc comparisons were performed with the least significant difference method when more than two groups were involved. For the 2-kHz stimulus bandwidth TFS-speech, the difference between the NH group and each of the 4- and 2-kHz SHF-SNHL groups was statistically significant (both p < 0.001), but no significant difference was found between the 4- and 2-kHz SHF-SNHL groups (p = 0.464). For the 1-kHz stimulus bandwidth TFS-speech, post hoc comparisons showed that the scores differed significantly between the NH group and each of the 2- and 1-kHz SHF-SNHL groups (both p < 0.01), and between the 4- and 2-kHz SHF-SNHL groups (p < 0.05).

Fig. 4. Sentence recognition scores using temporal fine structure (TFS) for the normal-hearing (NH) group and the steep high-frequency sensorineural hearing loss (SHF-SNHL) groups according to the stimulus bandwidth of TFS-speech. Scores in quiet are shown in the left panel and those in noise in the right panel. In both panels, significant differences in sentence recognition using TFS between the SHF-SNHL groups and the NH group are indicated by asterisks, and significant differences within the SHF-SNHL groups are indicated by five-pointed stars.

3.5. Effect of off-frequency listening

In the present study, the stimulus bandwidths were matched to the listening bandwidths of the SHF-SNHL individuals.

However, the NH listeners might be able to make efficient use of speech cues in the "off-frequency" bands, depending on the stimulus characteristics and the filter slope. The ability to use information from frequency regions above the cut-off frequency of the lowpass filter is referred to as "off-frequency listening". Here, we considered the potential effect of listening in the off-frequency band by the NH individuals: if the NH individuals could receive more information through the off-frequency band than the SHF-SNHL individuals, this could also account for the significant differences between the groups. Thus, a supplementary experiment was conducted to address this issue.

Eight NH participants (thresholds below 20 dB HL at octave frequencies between 250 and 8000 Hz in both ears) were recruited. All of these participants were naïve to the speech materials and were aged between 20 and 40 years. The original speech or the TFS-speech signals were lowpass filtered at 1, 2, and 4 kHz (216 dB/oct). Additionally, a steady-state noise, highpass filtered (216 dB/oct) at the same cut-off frequency, was added to the signals at a signal-to-noise ratio of +12 dB (Lorenzi et al., 2009). For example, a 1-kHz highpass filtered steady-state noise was added to the 1-kHz lowpass filtered original sentences or TFS sentences; similarly, a 2-kHz highpass filtered steady-state noise was added to the 2-kHz lowpass filtered original sentences or TFS sentences. The participants practiced with intact and TFS sentences until they achieved stable scores for both kinds of sentences before the formal tests were started. For each processed speech signal, one MHINT list (20 sentences) was tested.

The recognition scores of this supplementary experiment are presented in Table 4. Because the recognition scores for the lowpass filtered original sentences, with or without added highpass filtered noise, were all 100% correct, further statistical analysis of those conditions was omitted.
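The construction of the off-frequency control stimuli described above can be sketched as follows: lowpass filtered speech mixed with highpass filtered steady-state noise scaled to a +12 dB signal-to-noise ratio. The 8th-order Butterworth filters below are a stand-in for the steep (216 dB/oct) filters used in the study, and the function name is illustrative.

```python
# Sketch of the off-frequency control stimuli in Section 3.5.
import numpy as np
from scipy.signal import butter, filtfilt

def off_frequency_stimulus(speech, noise, fs, cutoff_hz, snr_db=12.0):
    """Lowpass speech below cutoff_hz plus highpass noise above it at snr_db."""
    b_lp, a_lp = butter(8, cutoff_hz / (fs / 2), btype="lowpass")
    b_hp, a_hp = butter(8, cutoff_hz / (fs / 2), btype="highpass")
    s = filtfilt(b_lp, a_lp, speech)   # keep speech below the cut-off
    n = filtfilt(b_hp, a_hp, noise)    # keep noise above the cut-off
    # Scale the noise so that the speech-to-noise ratio equals snr_db.
    s_rms, n_rms = np.sqrt(np.mean(s ** 2)), np.sqrt(np.mean(n ** 2))
    n *= (s_rms / n_rms) / (10.0 ** (snr_db / 20.0))
    return s + n
```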

A two-way ANOVA was performed to examine the effects of the cut-off frequency of the lowpass filter and of adding noise in the off-frequency bands. The results showed that the effect of adding noise in the off-frequency bands was not significant (F(1, 47) = 0.041, p = 0.840). Thus, the results of this supplementary experiment demonstrated that off-frequency listening made little if any contribution to sentence recognition with the lowpass filtered original or TFS sentences in the NH listeners. This indicates that the differences in sentence recognition scores between the NH and SHF-SNHL participants using TFS cues are not attributable to potential differences in the contribution of the off-frequency bands.

4. Discussion

The first goal of this research was to establish whether SHF-SNHL would impair sentence recognition using TFS in the low-frequency region in Mandarin-speaking listeners. The present data indicate that SHF-SNHL listeners show poorer-than-normal recognition scores for Mandarin sentences presented in the low-frequency region where their absolute thresholds are considered clinically normal. This result is consistent with, and extends, a previous study (Lorenzi et al., 2009). The main extension is that the contributions of listening bandwidth and stimulus bandwidth could be assessed separately, because SHF-SNHL participants with varying listening bandwidths were included and TFS-speech with varying stimulus bandwidths was presented. Speech recognition decreased as the TFS-speech stimulus bandwidth decreased from 4 to 1 kHz for the NH group and each SHF-SNHL group. Moreover, speech recognition measured with the 1-kHz stimulus bandwidth TFS-speech decreased as the listening bandwidth decreased from 4 to 1 kHz. The mechanism underlying the degraded supra-threshold processing in the low-frequency region with normal absolute thresholds remains unclear. Research has shown that age and age-related factors play

Table 3
Two-way ANOVA results in each stimulus bandwidth of TFS-speech.

Stimulus bandwidth of TFS-speech   Value   Group factor       Filtering condition factor
4 kHz                              F       F(1,47) = 56.763
                                   p
2 kHz                              F
                                   p
1 kHz                              F
                                   p
