Abstract

Information visualization techniques play an important role in Virtual Reality (VR) because they improve task performance, support cognitive processes, and ultimately increase the sense of immersion. Deaf and Hard-of-Hearing (DHH) persons have special needs for information presentation because they perceive VR environments differently. Therefore, the requirements for presenting information in VR to this group of users deserve particular attention. Previous research showed that adding special features and using haptic methods helps DHH persons perform VR tasks better. In this paper, we propose a novel omni-directional particle visualization method and evaluate multi-modal presentation methods in VR for DHH persons, namely audio, visual, haptic, and a combination of all three (AVH). Additionally, we compare the results with those of persons without hearing problems. The information presentation methods in our study focus on spatial object localization in VR. Our user studies show that both DHH persons and persons without hearing problems completed VR tasks significantly faster using AVH. We also found that DHH persons completed visually oriented VR tasks faster than persons without hearing problems when using our proposed visualization method. Our results suggest that the benefit of audio for persons without hearing problems and the benefit of vision for DHH persons balance the AVH results between the two groups. Finally, our qualitative and quantitative evaluation indicates that both groups of participants preferred and enjoyed the AVH modality more than the other modalities.

Reference

Mirzaei, M., Kan, P., & Kaufmann, H. (2021). Multi-modal Spatial Object Localization in Virtual Reality for Deaf and Hard-of-Hearing People. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR). IEEE. https://doi.org/10.1109/VR50410.2021.00084