Abstract

The research presented in this dissertation focuses on sound localization and audio visualization methods in Virtual Reality (VR) for Deaf and Hard-of-Hearing (DHH) persons. Most VR applications and devices are designed for hearing persons, making it harder for DHH persons to use VR. This dissertation starts with a brief overview of sensory substitution systems and the importance of audio visualization in DHH persons' daily lives. A background survey is conducted to understand DHH persons' requirements in VR and to address some challenges encountered at the early stages of designing an assistive haptic VR system for DHH persons. We continue by describing the different development phases of our proposed VR assistive systems, including hardware and software designs. We evaluate a haptic VR suit that helps deaf persons complete sound-related VR tasks. Following these results, we introduce a novel portable system for deaf persons called "EarVR" that analyzes 3D sounds in a VR environment and indicates the direction of the sound source closest to the user in real time using two vibro-motors placed on the user's ears. Then, we propose a novel audio visualization method in VR called "Omni-directional particle visualization" to investigate deaf persons' reaction times to visual stimuli and compare the results with those of other methods. Finally, we introduce a new system called "EarVR+," an upgraded version of EarVR that adds two LEDs to provide visual feedback. Our methods enhance traditional VR devices with additional haptic and visual feedback, which aids spatial sound localization in VR for deaf persons.
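The EarVR description above names a concrete idea: locate the 3D sound source closest to the user and convey its direction through two ear-mounted vibro-motors. The sketch below illustrates one possible form of such a mapping; the function names, coordinate convention, and sine-based intensity curve are assumptions for illustration only, not the implementation described in the dissertation.

```python
import math

def nearest_source(listener_pos, sources):
    """Return the sound-source position closest to the listener (Euclidean distance)."""
    return min(sources, key=lambda s: math.dist(listener_pos, s))

def vibro_intensities(listener_pos, listener_yaw, sources):
    """Map the azimuth of the nearest source to (left, right) motor intensities in [0, 1].

    Hypothetical mapping: listener faces +z, yaw is in radians; a source to the
    right drives the right motor harder, a source to the left drives the left motor.
    """
    sx, _, sz = nearest_source(listener_pos, sources)
    lx, _, lz = listener_pos
    # Azimuth of the source relative to the listener's facing direction.
    azimuth = math.atan2(sx - lx, sz - lz) - listener_yaw
    # Wrap to [-pi, pi]; negative = source on the left, positive = on the right.
    azimuth = (azimuth + math.pi) % (2 * math.pi) - math.pi
    right = max(0.0, math.sin(azimuth))
    left = max(0.0, -math.sin(azimuth))
    return left, right

if __name__ == "__main__":
    # Listener at the origin facing +z; one source ahead-left, one behind-right.
    print(vibro_intensities((0, 0, 0), 0.0, [(-2.0, 0.0, 2.0), (5.0, 0.0, -5.0)]))
```

In practice, the vibration strength would also need to reflect distance or loudness and be updated continuously as the user turns; this sketch only shows the direction-to-motor mapping step.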

Reference

Mirzaei, M. (2023). Auditory sensory substitution in virtual reality: for people with hearing impairments [Dissertation, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2023.113962