PROFILE: What does the impaired ear tell the brain?
14 March 2012
What does it mean to 'hear' in our everyday lives? Imagine yourself at a café talking to a friend. While you are listening to your friend you might also hear the clinking of glasses and cutlery on plates, some snippets of conversation from the table beside you, and some music being played for ambiance. The acoustic signal that reaches our ears is a mix of all these sources, combined with sound reflections from the walls, tables and floors. From this confusing sound soup the auditory system is able to separate the different sources and pick out the relevant information. The way we solve this task, often referred to as the 'cocktail party problem', is still not understood. People with normal hearing deal with these situations effortlessly but the hearing impaired often have great difficulties in situations where several people talk at the same time, particularly in noisy and reverberant environments. This can lead to reduced social interaction, such as the avoidance of group situations, and a significant decrease in the quality of life.
It has been estimated that by the year 2025 there will be 100 million people in the EU with a moderate-to-severe hearing loss that requires treatment (Shield, 2006). Recent advances in hearing aid technology have focused on improving the signal-to-noise ratio delivered to the listener. However, despite enormous technological progress in this area, the actual benefit varies widely across individual listeners. While some listeners show very good performance, others continue to experience difficulties, even though reduced audibility has been fully compensated for. Thus, in a complex acoustical environment, while many hearing-impaired people can hear friends talking, they still cannot understand them.

How to solve the 'cocktail party problem'?
In order to choose the right compensation strategy and hearing aid for an individual hearing-impaired person, we need to understand the source of this variability among impaired listeners. Are there other limitations besides audibility? How can they be characterised and where are they 'located' in the brain? How can the performance of the individual listener be predicted and how do we choose the best compensation strategy for the individual? In the past, the processing capabilities of hearing aids were limited. Today, modern digital hearing aids can implement advanced signal processing algorithms. The problem we face today is not technological – it's a lack of detailed knowledge about hearing and hearing impairment.
In audiological research, the threshold of hearing is the gold standard for quantifying hearing impairment. However, this threshold does not reflect the ongoing neural degeneration caused by age or noise exposure and fails to account for the many other consequences of a hearing impairment (eg Kujawa and Liberman, 2009).
Past efforts confined to individual academic disciplines have not solved the 'cocktail party problem'. For example, many researchers in digital signal processing and communication technology are unfamiliar with recent developments in hearing science or with knowledge about hearing-impaired listeners. Conversely, researchers in hearing science and audiology tend to concentrate on psychoacoustics or neural processing, lacking insight into state-of-the-art signal processing strategies. An understanding of the processes underlying speech communication can only be achieved by truly integrating current knowledge, concepts and techniques from the disciplines of auditory neuroscience, psychophysics, speech sciences (both perception and production), acoustics, cognitive science, signal processing and biomedical engineering.

Multidisciplinary analysis of human hearing in realistic acoustic environments
In collaboration with various partners, mainly in Europe and North America, we at the Centre for Applied Hearing Research (CAHR) at the Technical University of Denmark investigate the auditory signal processing strategies in normal-hearing and hearing-impaired listeners and the perceptual consequences of different types of hearing impairment. The centre was established in 2003 and has been supported by the three Danish hearing aid manufacturers Oticon, Widex and GN ReSound, as well as the Danish Research Foundation and several European research programmes (eg HearCom; INSPIRE). A major goal at CAHR is to provide new insights relevant for technical and clinical applications, such as advanced compensation strategies in hearing instruments and novel objective hearing tests (eg for newborn screening). A substantial part of our research focuses on the fundamental processes of signal coding and the representation of sounds (like speech and music) in the brain. Based on these investigations, the centre has developed biologically inspired computational auditory processing models that estimate the transformation from an acoustical input signal entering the ear to an 'internal (neural) representation' in the brain, at various stages of processing in the auditory system (eg Jepsen and Dau, 2011). Such modelling has been successfully applied in audio coding and automatic speech recognition.
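The general idea of such a model front end can be illustrated with a minimal sketch. This is not the CAHR model itself (see Jepsen and Dau, 2011, for the actual model), but a generic, simplified chain of the kind described above: a bandpass filterbank standing in for cochlear frequency analysis, half-wave rectification as a crude hair-cell nonlinearity, and low-pass filtering to extract the temporal envelope in each channel. All centre frequencies and cutoffs here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

def internal_representation(x, fs, centre_freqs=(250, 500, 1000, 2000, 4000)):
    """Crude sketch of an auditory-model front end (illustrative only):
    bandpass filterbank -> half-wave rectification -> envelope low-pass."""
    channels = []
    for fc in centre_freqs:
        # Roughly half-octave-wide band around the centre frequency
        low, high = fc / 2 ** 0.25, fc * 2 ** 0.25
        b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
        band = lfilter(b, a, x)
        rectified = np.maximum(band, 0.0)        # crude hair-cell nonlinearity
        be, ae = butter(1, 150 / (fs / 2))       # envelope low-pass (~150 Hz)
        channels.append(lfilter(be, ae, rectified))
    return np.array(channels)                    # (n_channels, n_samples)

fs = 16000
t = np.arange(fs) / fs                           # one second of signal
tone = np.sin(2 * np.pi * 1000 * t)              # 1 kHz pure tone
rep = internal_representation(tone, fs)
print(rep.shape)                                 # (5, 16000)
```

Feeding a 1 kHz tone through this chain, the channel centred at 1 kHz carries the most energy, which is the basic sense in which such a representation makes the spectral content of the input explicit for later processing stages.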
Most experiments in the speech sciences are conducted in sound booths and involve situations that are very different from the normal communication environments we experience in our daily lives. Together with our partners in several European training and research networks, we are developing labs where we can simulate realistic multisensory environments where sound content, source location information, environmental effects (such as reverberation) as well as the visual content are faithfully recreated. By exploring differences across individual hearing-impaired listeners, we can establish relations between fundamental functions of hearing (like temporal, spectral and spatial resolution) and their connection to speech and music perception. Simulating the acoustical environment of real-life situations allows us to directly measure the benefits and consequences of different hearing aid processing strategies for an impaired listener.
Specifically, we consider how signal processing, such as dynamic range compression and noise suppression, affects the properties of the sound and associated perceptual attributes like degree of externalisation, source localisation, distance, apparent source width, and speech intelligibility.
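To make the first of these concrete: dynamic range compression reduces the level difference between loud and soft sounds so that soft speech components remain audible without loud sounds becoming uncomfortable. The sketch below is a hypothetical, static per-sample compressor, not any manufacturer's algorithm; the threshold and ratio values are illustrative assumptions.

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=3.0):
    """Static dynamic range compression sketch (illustrative only).
    Levels above the threshold are attenuated according to the ratio;
    levels below the threshold pass unchanged."""
    eps = 1e-12                                   # avoid log of zero
    level_db = 20 * np.log10(np.abs(x) + eps)
    overshoot = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -overshoot * (1.0 - 1.0 / ratio)    # reduce the overshoot
    return x * 10 ** (gain_db / 20)

loud = compress(np.array([0.9]))                  # above threshold: attenuated
quiet = compress(np.array([0.05]))                # below threshold: unchanged
print(loud, quiet)
```

Real hearing aid compressors additionally apply attack and release time constants and operate per frequency band, which is precisely why their effect on attributes such as externalisation and apparent source width has to be measured perceptually, as described above.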
We believe that this work will make significant contributions to our understanding of how speech and music are perceived and will significantly impact hearing-instrument processing, speech and audio coding, automatic speech recognition, and text-to-speech synthesis. However, substantial progress in solving the 'cocktail party problem' is only possible through a continued emphasis on cross-disciplinary research and education and international collaboration.

References
Kujawa S G and Liberman M C (2009), 'Adding insult to injury: cochlear nerve degeneration after "temporary" noise-induced hearing loss', Journal of Neuroscience 29, 14077-85
Shield B (2006), Evaluation of the social and economic costs of hearing impairment. A report for Hear-It
Jepsen M L and Dau T (2011), 'Characterizing auditory processing and perception in individual listeners with sensorineural hearing loss', Journal of the Acoustical Society of America 129, 262-281

Professor Torsten Dau
Tel: +45 4525 3936