A Functional Taxonomy of Visual Augmented Reality for Human-Robot Interaction

Abstract

Augmented reality (AR) technologies are increasingly being introduced to Human-Robot Interaction (HRI) to mediate the interaction between human and robot. Indeed, better technical support and improved framework integration allow the design and study of novel scenarios that augment interaction with AR. While some literature reviews have been published, no classification has yet been devised for the role of AR in HRI. AR constitutes a vast field of research in HCI, and as it gains traction in HRI, it is timely to articulate the current knowledge about the functionalities of AR in HRI. Here we propose a multidimensional taxonomy for AR in HRI that distinguishes the type of perception augmentation, the functional role of AR, and the type of augmentation artifact. We place sample publications within the taxonomy to demonstrate its utility. Lastly, we derive from the taxonomy research gaps in current AR-for-HRI work and provide suggestions for exploration beyond the current state of the art.

Publication
Proceedings of the International Conference on Human-Robot Interaction (HRI2022)