Exploring Multimodal Interaction Strategies for Co-Located Mixed Reality Human-Robot Collaboration
Abstract
This paper investigates how Mixed Reality (MR) technology can enhance human-robot interaction (HRI) in the workplace. We developed a system combining a Microsoft HoloLens and a Universal Robots UR5 that allows users to execute pick-and-place tasks through two distinct interaction approaches: heading-based (HB) selection and hand-to-finger (H2F) selection. The MR interface pairs real-time visualisation with intuitive interaction methods such as voice commands and gestures to keep operation simple and effective. In a study with sixteen participants, HB selection outperformed H2F in task completion time, accuracy, and user satisfaction; H2F, however, performed better on precision tasks, suggesting that blended approaches may be worthwhile. The research shows how MR can overcome the shortcomings of conventional interfaces, such as steep learning curves and high mental workload. The findings indicate that MR-enhanced HRI is applicable in several domains, including industrial robotics, education, and healthcare. Future work will examine adaptive elements such as object detection and advanced learning models so that MR systems can operate effectively in complex, changing environments. By connecting the physical and virtual worlds, MR technologies enable people and robots to collaborate in ways that are more effective, more efficient, and easier to use.