Abstract
Haptic feedback is essential for immersive Virtual Reality (VR) experiences, yet traditional handheld controllers offer limited tactile realism. Mobile Encountered-Type Haptic Displays (mETHDs) address this by autonomously positioning physical props "just in time" to enable realistic touch interactions. However, safely navigating these robots around users wearing Head-Mounted Displays presents a significant control challenge. This thesis proposes an end-to-end Deep Reinforcement Learning (DRL) framework for the safe and responsive positioning of a mETHD in both single- and multi-user scenarios. The proposed architecture processes 2D LiDAR data via a 1D Convolutional Neural Network to ensure robust obstacle avoidance. To manage the complexity of multi-user environments, the control problem is decomposed into hierarchical policies: a strategic policy that predicts user intent and a navigation policy that executes safe movement. Quantitative evaluation in simulation demonstrates that the DRL approach significantly outperforms traditional static and heuristic baselines in both single- and multi-user scenarios. The learned policies reduced safety-critical interventions by over 88% and shortened haptic positioning times by up to a factor of six. These gains are achieved through high-frequency proactive control, which allows the mETHD to operate safely in close proximity to users. Beyond immersive environments, the methodologies presented here lay the groundwork for intelligent, proactive robotic systems capable of operating safely alongside humans in broader Human-Robot Interaction contexts.
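To make the described architecture concrete, the following is a minimal NumPy sketch of the navigation-policy forward pass: a 1D convolutional feature extractor over a planar LiDAR scan feeding a bounded velocity head. All layer sizes, kernel counts, and weights here are illustrative placeholders, not the trained network from the thesis (which learns these parameters via DRL).

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels, stride=2):
    """Valid-mode 1D convolution of a single-channel signal with a bank of
    kernels, followed by ReLU. Shapes: x (n,), kernels (c, k) -> (c, n_out)."""
    c, k = kernels.shape
    n_out = (len(x) - k) // stride + 1
    out = np.empty((c, n_out))
    for i in range(n_out):
        out[:, i] = kernels @ x[i * stride : i * stride + k]
    return np.maximum(out, 0.0)

def navigation_policy(scan, params):
    """Toy forward pass: two conv layers extract spatial obstacle features
    from the raw range readings; a linear head with tanh maps them to a
    bounded 2D velocity command. Weights are random placeholders."""
    h = conv1d_relu(scan, params["k1"])       # layer 1: (8, 178)
    h = h.mean(axis=0)                        # collapse channels for simplicity
    h = conv1d_relu(h, params["k2"])          # layer 2: (16, 88)
    return np.tanh(params["w"] @ h.ravel())   # (2,) command in [-1, 1]

# A simulated 360-beam planar LiDAR scan, ranges in metres.
scan = rng.uniform(0.2, 5.0, size=360)
n1 = (360 - 5) // 2 + 1                       # 178 features after layer 1
n2 = (n1 - 3) // 2 + 1                        # 88 features after layer 2
params = {
    "k1": rng.standard_normal((8, 5)) * 0.1,
    "k2": rng.standard_normal((16, 3)) * 0.1,
    "w":  rng.standard_normal((2, 16 * n2)) * 0.01,
}
velocity = navigation_policy(scan, params)
print(velocity.shape)  # (2,): e.g. linear and angular velocity
```

Convolving along the beam axis lets the network react to local obstacle patterns regardless of where they appear in the scan, which is what motivates a 1D CNN over a plain fully connected encoder for this input.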
Reference
Mittermair, V. (2025). Safe Position Control of an Encountered-Type Haptic Display through Reinforcement Learning [Diploma Thesis, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2025.126051
