A robot learning to roll out pizza dough

The aim of robot learning by imitation is to provide user-friendly means of transferring skills to robots by exploiting the users' natural teaching abilities. Imitation is not simply a matter of recording and replaying movements: the learned skills need to be generalized to new situations.

Machine learning can help to extract relevant patterns from multiple demonstrations of the skill (the invariant characteristics of the task). The research challenge is to develop tools with good extrapolation capability that work with small datasets. In learning from demonstration, the robot should be able to start generalizing the task early in the interaction. This can be done in several ways: with techniques that learn the underlying structure of the task, that extract the intent behind the actions, or that capture how the robot, the user and the environment can modify the task.

The user can increase the robot's learning speed by providing several relevant examples of the same task. When the user scaffolds the environment and introduces variability, the robot can then extract which parts of the movement matter most, and how the movement is modulated by external cues such as the positions of objects.
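As an illustration of how variability across demonstrations can reveal the important parts of a movement, the sketch below (in Python with NumPy, using hypothetical variable names and assuming the demonstrations have already been time-aligned) computes per-timestep statistics: low variance across demonstrations indicates portions that must be reproduced closely, while high variance indicates portions that can be adapted more freely.

```python
import numpy as np

def demonstration_statistics(demos):
    """Estimate which parts of a movement matter most from several demonstrations.

    demos: array of shape (n_demos, n_timesteps, n_dims) holding time-aligned
    end-effector trajectories recorded during kinesthetic teaching
    (the alignment step, e.g. by resampling, is assumed to be done beforehand).

    Returns the per-timestep mean trajectory and covariance matrices.
    Timesteps with small covariance are consistently reproduced across
    demonstrations and can be interpreted as invariant (important) parts
    of the task; timesteps with large covariance tolerate variation.
    """
    demos = np.asarray(demos)
    mean = demos.mean(axis=0)                       # (n_timesteps, n_dims)
    centered = demos - mean                         # (n_demos, n_timesteps, n_dims)
    # Covariance at each timestep, estimated over the demonstrations
    cov = np.einsum('dti,dtj->tij', centered, centered) / (demos.shape[0] - 1)
    return mean, cov

# Example with 5 synthetic demonstrations of a 2-D movement
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
nominal = np.stack([np.sin(2 * np.pi * t), t], axis=1)
demos = nominal + 0.05 * rng.standard_normal((5, 100, 2))
mean, cov = demonstration_statistics(demos)
importance = 1.0 / np.trace(cov, axis1=1, axis2=2)  # high where variance is low
```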

The more complex the task is, the more difficult it is for the user to predetermine and keep track of its possible variations. To reduce this cognitive load, it is thus relevant to consider machine learning approaches that can not only interpolate between multiple demonstrations, but also extrapolate the task to new situations that may be far from the observed ones. Such extrapolation capability would remove the requirement of carefully covering all the possible situations in which a motion can be used.

This video presents an example in which five demonstrations of rolling out pizza dough are provided to the robot by kinesthetic teaching. The controller of the robot compensates for gravity to facilitate the demonstrations. The user can in this way move the robot as if it had no weight and no motors in its joints, while the robot records proprioceptive information about the position of its arm, as well as visual information about the pizza dough from an external camera. After the demonstrations, the robot knows how to move the rolling pin toward the dough and how to change the amplitude and direction of the movement with respect to the current shape of the dough.
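A minimal sketch of such a recording loop is shown below. All interface names (`robot.joint_positions`, `robot.gravity_torques`, `robot.set_joint_torques`, `camera.dough_features`) are hypothetical placeholders for the actual hardware drivers, not an API from the original work.

```python
import time

def kinesthetic_teaching(robot, camera, duration=10.0, rate_hz=100):
    """Record one kinesthetic demonstration under gravity compensation.

    `robot` and `camera` are hypothetical interfaces: `robot.joint_positions()`
    returns the current joint configuration, `robot.gravity_torques(q)` the
    torques that cancel the arm's own weight, and `camera.dough_features()`
    the position, orientation and elongation of the dough extracted by
    image processing.
    """
    log = []
    dt = 1.0 / rate_hz
    t_end = time.time() + duration
    while time.time() < t_end:
        q = robot.joint_positions()
        # Command only the gravity-compensation torques, so the user can
        # move the arm freely as if it were weightless.
        robot.set_joint_torques(robot.gravity_torques(q))
        # Log proprioception together with the visual information about the dough.
        log.append((q, camera.dough_features()))
        time.sleep(dt)
    return log
```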

The underlying goal of this task is to locally change the rolling motion so that it becomes parallel to the minor axis of the dough shape (the eigenvector associated with the smallest eigenvalue) extracted by image processing. This adaptation rule is not preprogrammed into the robot. Instead, the robot learns how to locally modify the end-effector trajectory with respect to the dough's position, orientation and elongation.
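A minimal sketch of this shape analysis, assuming the dough has already been segmented into a binary mask from the camera image (the segmentation step itself is not shown): principal component analysis of the dough pixels yields the dough's center, orientation and elongation, and the rolling direction is taken parallel to the minor axis.

```python
import numpy as np

def dough_rolling_direction(mask):
    """Estimate the dough's center, rolling direction and elongation.

    `mask` is a 2-D boolean array marking dough pixels. The principal axes
    of the pixel distribution describe the dough's shape: rolling parallel
    to the minor axis (the eigenvector with the smallest eigenvalue)
    spreads the dough along its narrow direction, toward a rounder shape.
    """
    ys, xs = np.nonzero(mask)
    points = np.stack([xs, ys], axis=1).astype(float)
    center = points.mean(axis=0)
    cov = np.cov((points - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    minor_axis = eigvecs[:, 0]               # direction of smallest spread
    elongation = np.sqrt(eigvals[1] / eigvals[0])
    return center, minor_axis, elongation
```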

