Abstract
This article presents a brain–computer interface (BCI) coupled with an augmented reality (AR) system to support human–robot interaction in controlling a robotic arm for pick-and-place tasks. BCIs can process steady-state visual evoked potentials (SSVEPs), brain signals elicited by flickering visual stimuli. These visual stimuli may be conveyed to the user through AR systems, expanding the range of possible applications. The proposed approach leverages the capabilities of the NextMind BCI to enable users to select objects within the reach of the robotic arm. By using projected AR to display a visual anchor associated with each object in the scene, the NextMind device can detect when users focus their gaze on one of them, thus triggering the robotic arm's pick-up action. The proposed system has been designed with the needs and limitations of mobility-impaired people in mind, to support them in controlling a robotic arm for pick-and-place tasks. Two different approaches for positioning the visual anchors are proposed and analyzed. Experimental tests involving users show that both approaches are well received. The system's performance is highly robust, allowing users to select objects easily, quickly, and reliably.
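To make the interaction flow concrete, the following minimal Python sketch outlines the selection-to-action loop the abstract describes: each object in the arm's workspace is paired with a projected visual anchor, the BCI reports which anchor the user is focusing on, and the corresponding pick-and-place command is issued. All names here (SceneObject, MockBCIClient, MockRobotArm, run_selection_loop) are hypothetical placeholders for illustration only; the actual system relies on the NextMind SDK and a projected-AR pipeline that are not reproduced here.

```python
# Illustrative sketch of the anchor-selection-to-action loop; not the authors' implementation.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    position: tuple   # (x, y, z) of the object within the arm's workspace
    anchor_id: int    # id of the projected visual anchor (SSVEP stimulus)

class MockBCIClient:
    """Stand-in for the SSVEP-based BCI: reports which anchor the user focuses on."""
    def wait_for_focus(self, anchor_ids):
        # A real client would decode the SSVEP response to the flickering anchors;
        # here we simply simulate a selection of the first anchor.
        return anchor_ids[0]

class MockRobotArm:
    """Stand-in for the robotic arm controller."""
    def pick_and_place(self, position, drop_off):
        print(f"Picking object at {position}, placing it at {drop_off}")

def run_selection_loop(objects, bci, arm, drop_off=(0.0, 0.4, 0.1)):
    """Pair one anchor with each object, wait for the user's gaze selection,
    then trigger the pick-and-place action for the selected object."""
    anchors = {obj.anchor_id: obj for obj in objects}
    selected_id = bci.wait_for_focus(list(anchors))
    arm.pick_and_place(anchors[selected_id].position, drop_off)

if __name__ == "__main__":
    scene = [SceneObject("cup", (0.2, 0.1, 0.0), anchor_id=1),
             SceneObject("box", (0.3, -0.1, 0.0), anchor_id=2)]
    run_selection_loop(scene, MockBCIClient(), MockRobotArm())
```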
Reference
De Pace, F., Manuri, F., Bosco, M., Sanna, A., & Kaufmann, H. (2024). Supporting Human–Robot Interaction by Projected Augmented Reality and a Brain Interface. IEEE Transactions on Human-Machine Systems, 54(5), 599–608. https://doi.org/10.1109/THMS.2024.3414208