Pignat, E. and Calinon, S. (2019)
Bayesian Gaussian Mixture Model for Robotic Policy Imitation
IEEE Robotics and Automation Letters (RA-L), 4:4, 4452-4458.


A common approach to learning robotic skills is to imitate a policy demonstrated by a supervisor. One known problem is that, owing to the compounding of small errors and perturbations, the robot may drift away from the states where demonstrations were given. If no strategy is employed to provide a guarantee on how the robot will behave when facing unknown states, catastrophic outcomes can occur. An appealing approach is to use Bayesian methods, which offer a quantification of the action uncertainty given the state. Bayesian methods are usually more computationally demanding and require more complex design choices than their non-Bayesian alternatives, which limits their application. In this work, we present a Bayesian method that is simple to set up, computationally efficient, and adaptable to a wide range of problems. These advantages make this method very convenient for imitation of robotic manipulation tasks in the continuous domain. We exploit the provided uncertainty to fuse the imitation policy with other policies. The approach is validated on a Panda robot with three tasks using different control input/state pairs.
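The fusion of policies mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes each policy outputs a Gaussian action distribution and combines them by a product of Gaussians, so that each policy is weighted by its precision (inverse covariance) and more certain policies dominate the fused command. All function and variable names are hypothetical.

```python
import numpy as np

def fuse_gaussian_policies(means, covs):
    """Fuse Gaussian action distributions via a product of Gaussians.

    Each policy contributes proportionally to its precision (inverse
    covariance), so policies that are uncertain in a given state have
    little influence on the fused action.
    """
    precisions = [np.linalg.inv(S) for S in covs]
    fused_cov = np.linalg.inv(sum(precisions))
    fused_mean = fused_cov @ sum(P @ m for P, m in zip(precisions, means))
    return fused_mean, fused_cov

# Two 2-D action estimates: a confident policy and a very uncertain one.
m1, S1 = np.array([1.0, 0.0]), np.eye(2) * 0.1   # confident (low variance)
m2, S2 = np.array([0.0, 0.0]), np.eye(2) * 10.0  # uncertain (high variance)
mean, cov = fuse_gaussian_policies([m1, m2], [S1, S2])
# The fused mean lies close to the confident policy's mean.
```

In an imitation setting, this lets the learned policy take over where demonstrations were given (low uncertainty) while another policy, such as a conservative controller, dominates elsewhere.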

Bibtex reference

@article{Pignat19RAL,
	author="Pignat, E. and Calinon, S.",
	title="{B}ayesian {G}aussian Mixture Model for Robotic Policy Imitation",
	journal="{IEEE} Robotics and Automation Letters ({RA-L})",
	year="2019",
	volume="4",
	number="4",
	pages="4452--4458"
}


Pignat, E. and Calinon, S. (2019). Bayesian Gaussian Mixture Model for Robotic Policy Imitation. IEEE Robotics and Automation Letters (RA-L), 4:4, 4452-4458.

Source code

Source code related to this publication is available as part of PbDlib.
