### Abstract

This paper studies the exploitation of action-level learning (imitation) in the context of optimal control. The cost functions used to control a robot in optimal control methods are analogous to goal-level learning (emulation) in animals. However, imitating the robot's own or others' (e.g., a human's) previous experiences (demonstrations) can help the system improve its performance. We propose to use demonstrations more efficiently by predicting an initialization for the optimal control problems (OCPs) and by adding an imitation term to the cost functions. While the predicted initial guess starts the OCPs close to their local optima, the imitation term guides the optimization, resulting in a faster convergence rate. We test our algorithm on a physical assistance task in which a robot helps a human perform a sit-to-stand (STS) motion. We formulate this task as two optimal control problems: the first OCP predicts the human's desired assistance and the second controls the robot. We conduct two experiments. In Experiment 1, we only vary the human's disability type; in Experiment 2, we assume the human's mass and height vary as well. Our proposed method reduces the number of iterations by more than 90% and 70% for the human assistance prediction and the robot controller, respectively.
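The two ideas in the abstract (warm-starting the OCP with a predicted initial guess, and adding an imitation term to the cost) can be illustrated with a minimal sketch. This is not the paper's implementation: the quadratic task cost, the demonstration point, the weight `w`, and the use of the demonstration itself as the predicted warm start are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical goal-level (emulation) cost: reach a target state.
goal = np.array([1.0, 2.0])

def task_cost(x):
    return np.sum((x - goal) ** 2)

# Illustrative imitation term: stay close to a demonstrated state.
# The weight w trades off emulation against imitation.
demo = np.array([0.8, 1.9])
w = 0.1

def combined_cost(x):
    return task_cost(x) + w * np.sum((x - demo) ** 2)

# Cold start from the origin vs. a warm start from the demonstration
# (standing in for the output of a learned initialization predictor).
cold = minimize(combined_cost, np.zeros(2), method="BFGS")
warm = minimize(combined_cost, demo, method="BFGS")

# The combined optimum lies between the goal and the demonstration.
print(cold.x, warm.x, cold.nit, warm.nit)
```

Both solves reach the same optimum, `(goal + w * demo) / (1 + w)`; the warm start typically needs fewer iterations, which is the effect the paper quantifies on the STS assistance task.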

### Bibtex reference

@inproceedings{Razmjoo21ICAR,
author="Razmjoo Fard, A. and Lembono, T. S. and Calinon, S.",
title="Optimal Control combining Emulation and Imitation to Acquire Physical Assistance Skills",
year="2021",
booktitle="Proc.\ {IEEE} Intl Conf.\ on Advanced Robotics ({ICAR})",
pages=""
}