### Abstract

Mapping teleoperator motions to a robot is a key problem in teleoperation. Due to differences between workspaces, such as object locations, it is particularly challenging to derive smooth motion mappings that fulfill different goals (e.g., picking objects or passing through key points). Indeed, most state-of-the-art methods rely on mode switches, leading to a discontinuous, low-transparency experience. In this paper, we propose a unified formulation for position, orientation and velocity mappings based on the poses of objects of interest in the teleoperator and robot workspaces, and apply it in the context of bilateral teleoperation. Two possible implementations of the proposed mappings are studied: an iterative approach based on locally-weighted translations and rotations, and a neural network approach. Evaluations are conducted both in simulation and using two torque-controlled Franka Emika Panda robots. Our results show that, despite longer training times, the neural network approach provides faster mapping evaluations and lower interaction forces for the teleoperator, which are crucial for continuous, real-time teleoperation.
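The idea of a locally-weighted translation mapping can be illustrated with a minimal sketch: a teleoperator position is shifted by a convex blend of per-object displacements, weighted by Gaussian proximity to each object in the teleoperator workspace. This is an illustrative form only; the function name, the Gaussian weighting, and the bandwidth `sigma` are assumptions for exposition, and the paper's exact iterative scheme and its rotational component are not reproduced here.

```python
import numpy as np

def locally_weighted_map(x, src_objs, dst_objs, sigma=0.2):
    """Map a teleoperator position x into the robot workspace by blending
    per-object translations with normalized Gaussian weights.

    src_objs, dst_objs: (N, 3) arrays of corresponding object positions
    in the teleoperator and robot workspaces. Illustrative sketch only.
    """
    x = np.asarray(x, dtype=float)
    src = np.asarray(src_objs, dtype=float)
    dst = np.asarray(dst_objs, dtype=float)
    d2 = np.sum((src - x) ** 2, axis=1)        # squared distances to objects
    w = np.exp(-d2 / (2.0 * sigma ** 2))       # Gaussian proximity weights
    w /= w.sum()                               # normalize to a convex blend
    return x + w @ (dst - src)                 # weighted sum of translations

# Near an object, the mapping reproduces that object's displacement,
# so grasp points are preserved; in between, translations blend smoothly.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
dst = np.array([[0.1, 0.0, 0.0], [1.0, 0.2, 0.0]])
print(locally_weighted_map([0.0, 0.0, 0.0], src, dst))
```

A neural network realization of the same mapping would regress the displacement field directly from sampled correspondences, trading training time for faster evaluation, consistent with the comparison reported in the abstract.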

### Bibtex reference

```bibtex
@article{Gao21RAL,
  author  = "Gao, X. and Silv{\'e}rio, J. and Pignat, E. and Calinon, S. and Li, M. and Xiao, X.",
  title   = "Motion Mappings for Continuous Bilateral Teleoperation",
  journal = "{IEEE} Robotics and Automation Letters ({RA-L})",
  year    = "2021",
  volume  = "6",
  number  = "3",
  pages   = "5048--5055",
  doi     = "10.1109/LRA.2021.3068924"
}
```