

Article title

Sentence Recognition in the Presence of Competing Speech Messages Presented in Audiometric Booths with Reverberation Times of 0.4 and 0.6 Seconds

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
This study examined whether differences in reverberation time (RT) between typical sound field test rooms used in audiology clinics affect speech recognition in multi-talker environments. Separate groups of participants listened to target speech sentences presented simultaneously with 0 to 3 competing sentences through four spatially separated loudspeakers in two sound field test rooms having RT = 0.6 s (Site 1: N = 16) and RT = 0.4 s (Site 2: N = 12). Speech recognition scores (SRSs) for the Synchronized Sentence Set (S3) test and subjective estimates of perceived task difficulty were recorded. The results indicate that the change in room RT from 0.4 to 0.6 s did not significantly influence SRSs in quiet or in the presence of one competing sentence. However, this small change in RT affected SRSs when 2 and 3 competing sentences were present, resulting in mean SRSs that were about 8–10% better in the room with RT = 0.4 s. Perceived task difficulty ratings increased with task complexity, with average ratings similar across test sites for each level of sentence competition. These results suggest that site-specific normative data must be collected for sound field rooms if clinicians wish to use two or more directional speech maskers during routine sound field testing.
Year
Pages
3–14
Physical description
Bibliography: 33 items, charts
Authors
  • American University of Beirut – Medical Center, Department of Otolaryngology-Head and Neck Surgery, P.O. Box 11-0236, Beirut, Lebanon, ks05@aub.edu.lb
Bibliography
  • 1. Abouchacra K.S. (2000), Synchronized sentence set (S3) [Compact Disk #3], U.S. Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, Maryland.
  • 2. Abouchacra K.S., Breitenbach J., Mermagen T., Letowski T. (2001), Binaural Helmet: Improving speech recognition in noise with spatialized sound, Human Factors, 43, 584-594.
  • 3. Abouchacra K.S., Letowski T., Besing J., Koehnke J. (2009), Clinical application of the Synchronized Sentence Set (S3), Proceedings of the 16th International Congress of Sound and Vibration, Kraków.
  • 4. American Speech-Language-Hearing Association (1991), Sound Field Measurement Tutorial. Working Group on Sound Field Calibration of the Committee on Audiological Evaluation, American Speech-Language-Hearing Association Supplement Jan, 25-38.
  • 5. Begault D.R., Erbe T. (1994), Multichannel spatial Auditory display for speech communications, Journal of the Audio Engineering Society, 42, 819-826.
  • 6. Bolia R., Nelson W., Ericson M., Simpson B. (2000), A speech corpus for multitalker communications research, Journal of the Acoustical Society of America, 107, 1065-1066.
  • 7. Bregman A.S. (1990), Auditory Scene Analysis: The perceptual organization of sound, MIT Press, Boston.
  • 8. Bronkhorst A.W. (2000), The Cocktail Party Phenomenon: A review of research on speech intelligibility in multiple-talker conditions, Acustica - Acta Acustica, 86, 117-128.
  • 9. Brungart D.S. (2001), Informational and energetic masking effects in the perception of two simultaneous talkers, Journal of the Acoustical Society of America, 109, 1101-1109.
  • 10. Cherry E.C. (1953), Some experiments on the recognition of speech, with one and with two ears, Journal of the Acoustical Society of America, 25, 975-979.
  • 11. Cooke M. (2006), A glimpsing model of speech perception in noise, Journal of the Acoustical Society of America, 119, 1562-1573.
  • 12. Cooke M., Barker J., Cunningham S., Shao X. (2006), An audio-visual corpus for speech perception and automatic speech recognition, Journal of the Acoustical Society of America, 120, 2421-2424.
  • 13. Darwin C., Hukin R. (2000a), Effectiveness of spatial cues, prosody, and talker characteristics in selective attention, Journal of the Acoustical Society of America, 107, 970-977.
  • 14. Darwin C., Hukin R. (2000b), Effects of reverberation on spatial, prosodic and vocal-tract size cues to selective attention, Journal of the Acoustical Society of America, 108, 335-342.
  • 15. Dirks D.D., Wilson R.H. (1969), The effect of spatially separated sound sources on speech intelligibility, Journal of Speech and Hearing Research, 12, 5-38.
  • 16. Dirks D.D., Stream R.W., Wilson R.H. (1972), Speech audiometry: earphone and sound field, Journal of Speech and Hearing Disorders, 37, 62-76.
  • 17. Durlach N.I., Mason C.R., Kidd Jr. G., Arbogast T., Colburn H.S., Shinn-Cunningham B.G. (2003), Note on informational masking, Journal of the Acoustical Society of America, 113, 1984-1987.
  • 18. Ericson M.A., Brungart D.S., Simpson B.D. (2004), Factors that influence intelligibility in multitalker speech displays, The International Journal of Aviation Psychology, 14, 313-334.
  • 19. Festen J.M., Plomp R. (1990), Effects of fluctuating noise and interfering speech on the speech-reception threshold for impaired and normal hearing, Journal of the Acoustical Society of America, 88, 1725-1736.
  • 20. Kidd Jr. G., Arbogast T.L., Mason C.R., Gallun F.J. (2005), The advantage of knowing where to listen, Journal of the Acoustical Society of America, 118, 3804-3815.
  • 21. Kidd Jr. G., Mason C.R., Richards V.M., Gallun F.J., Durlach N.I. (2008), Informational masking, [in:] Auditory Perception of Sound Sources, Yost W.A., Popper A.N., Fay R.R. [Eds.], pp. 143-190, Springer Science + Business Media, New York.
  • 22. Killion M.C., Niquette P.A., Gudmundsen G.I., Revit L.J., Banerjee S. (2004), Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners, Journal of the Acoustical Society of America, 116, 2395-2405.
  • 23. MacKeith N.W., Coles R.R.A. (1971), Binaural advantages in hearing of speech, Journal of Laryngology and Otology, 85, 214-232.
  • 24. Mendel L.L., Danhauer J.L. [Eds.] (1997), Audiologic evaluation and management and speech perception assessment, Singular Publishing Group, Inc., San Diego.
  • 25. Moore T. (1981), Voice communication jamming research. AGARD Conference Proceedings 331: Aural Communication in Aviation, 2:1-2:6, Neuilly-Sur-Seine, France.
  • 26. Nabelek A., Letowski T.R., Tucker F.M. (1989), Reverberant overlap- and self-masking in consonant identification, Journal of the Acoustical Society of America, 86, 1259-1265.
  • 27. Nilsson M., Soli S.D., Sullivan J. (1994), Development of the Hearing In Noise Test for the measurement of speech reception thresholds in quiet and noise, Journal of the Acoustical Society of America, 95, 1085-1099.
  • 28. Peissig J., Kollmeier B. (1997), Directivity of binaural noise reduction in spatial multiple noise-source arrangements for normal and impaired listeners, Journal of the Acoustical Society of America, 101, 1660-1670.
  • 29. Rochlin G. (1993), Status of sound field audiometry among audiologists in the United States, Journal of the American Academy of Audiology, 4, 2, 59-68.
  • 30. Schneider B.A., Li L., Daneman M. (2007), How competing speech interferes with speech comprehension in everyday listening situations, Journal of the American Academy of Audiology, 18, 478-591.
  • 31. Shinn-Cunningham B.G. (2005), Influences of spatial cues on grouping and understanding sound, Proceedings of Forum Acusticum, 29 August - 2 September 2005.
  • 32. Shinn-Cunningham B., Best V. (2008), Selective attention in normal and impaired hearing, Trends in Amplification, 12, 283-299.
  • 33. Taylor B. (2003), Speech-in-noise tests: How and why to include them in your basic test battery, The Hearing Journal, 56, 40-46.
Document type
YADDA identifier
bwmeta1.element.baztech-article-BUS8-0020-0001