Dong, Z., Li, Z., Yan, Y., Calinon, S. and Chen, F. (2022)
Passive Bimanual Skills Learning from Demonstration with Motion Graph Attention Networks
IEEE Robotics and Automation Letters (RA-L), 7:2, 4917-4923.

Abstract

Enabling household robots to passively learn task-level skills from human demonstration could substantially boost their application in daily life. In this work, we propose a Learning from Demonstration (LfD) scheme that captures human unimanual and bimanual demonstrations with a motion capture suit and virtual reality (VR) trackers, and transfers the demonstrated skills to a humanoid with a learnable graph attention network (GAT) based model. The model, trained on human hand trajectories and target object poses, yields the movement policy as a trajectory generator: given the end-effector poses and optionally the object's initial pose as input, it outputs the Cartesian trajectories for the robot end-effectors to execute the task. Tests on synthetic data and three real-robot experiments indicate that the policy can learn unimanual and coordinated bimanual, interactive and non-interactive manipulation skills within a unified scheme.
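The abstract describes a policy that aggregates information across hand and object poses with graph attention. The following is not the authors' model, but a minimal NumPy sketch of a single graph-attention aggregation step over a fully connected three-node graph (left hand, right hand, object); the feature sizes, weights, and node layout are illustrative assumptions.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_layer(H, W, a):
    """One graph-attention aggregation step (fully connected graph).

    H: (N, F) node features, W: (F, F') shared projection,
    a: (2*F',) attention vector. Returns projected/aggregated
    features and the per-node attention weights.
    """
    Z = H @ W                                  # project node features
    N = Z.shape[0]
    out = np.zeros_like(Z)
    alphas = np.zeros((N, N))
    for i in range(N):
        # unnormalized attention scores of node i over all nodes
        e = np.array([leaky_relu(a @ np.concatenate([Z[i], Z[j]]))
                      for j in range(N)])
        alphas[i] = softmax(e)                 # normalized weights
        out[i] = alphas[i] @ Z                 # weighted aggregation
    return out, alphas

rng = np.random.default_rng(0)
# Hypothetical nodes: left-hand pose, right-hand pose, object pose (7-D each)
H = rng.standard_normal((3, 7))
W = rng.standard_normal((7, 8)) * 0.1
a = rng.standard_normal(16) * 0.1
out, alphas = gat_layer(H, W, a)
```

In the paper's setting, such aggregated features would feed a trajectory generator; here the sketch only shows how attention weights couple the two end-effectors and the object.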

Bibtex reference

@article{Dong22RAL,
	author="Dong, Z. and Li, Z. and Yan, Y. and Calinon, S. and Chen, F.",
	title="Passive Bimanual Skills Learning from Demonstration with Motion Graph Attention Networks",
	journal="{IEEE} Robotics and Automation Letters ({RA-L})",
	year="2022",
	volume="7",
	number="2",
	pages="4917--4923",
	doi="10.1109/LRA.2022.3152974"
}