Dhruv Jain receives Google funding for enhanced sound awareness for deaf and hard-of-hearing individuals

The grant will support Jain’s work to develop a comprehensive auditory scene understanding system.
Prof. Dhruv Jain

Dhruv Jain, assistant professor of computer science and engineering at the University of Michigan, has received a $75,000 grant from the Google Academic Research Awards program for his project titled “Enhancing Auditory Scene Understanding for Deaf and Hard of Hearing People.” Working together with CSE master’s student Leo Wu, Jain aims to transform how deaf and hard-of-hearing (DHH) individuals perceive and interact with their acoustic environments.

The Google Academic Research Awards program aims to support cutting-edge research in computer science and related fields. Each year, Google funds a select number of proposals that demonstrate potential for significant technological impact and relevance to current challenges in the field.

Jain and Wu’s project seeks to address the limitations of existing sound awareness systems by leveraging state-of-the-art machine learning models to provide DHH users with nuanced auditory scene information, such as a sound’s source, distance, and importance, or the sequence of events that produced it. These detailed auditory cues can play a critical role in ensuring safety, facilitating communication, and enhancing users’ overall awareness of their surroundings.

Current sound awareness systems are limited, often focusing on classifying sounds into predefined categories. As a result, they fail to provide DHH users with the depth and richness of information necessary for a comprehensive understanding of their auditory surroundings.

Jain and Wu aim to address this gap through a multi-phase research plan that includes understanding the specific auditory cues DHH individuals need, designing prompts for an advanced auditory scene understanding model, and building a mobile application to deliver this information in real time in various contexts.

Through this work, the research team aims to deliver a tool that gives DHH individuals a richer, more actionable understanding of the sounds around them.

“The hope is that conveying more holistic semantic auditory information will provide DHH individuals with actionable cues and support them in performing everyday tasks,” Jain explained. “For example, a user would be able to discern the order of events, such as a kettle boiling followed by a microwave beep or oil sizzling, the relative loudness of sounds, and specific traits of recognized sounds (e.g., the sizzling pattern) that may assist them in the cooking process.”

Furthermore, their research will contribute valuable insights into the design of auditory systems, advancing the fields of machine learning and human-computer interaction more generally. It will pave the way for more sophisticated models capable of interpreting audio data in meaningful ways for DHH and hearing users alike, in the process fostering a more inclusive and accessible world.
