Kormushev, P., Calinon, S., Saegusa, R. and Metta, G. (2010)
Learning the skill of archery by a humanoid robot iCub
In Proc. of the IEEE Intl Conf. on Humanoid Robots (Humanoids), Nashville, TN, USA, pp. 417-423.

Abstract

We present an integrated approach allowing the humanoid robot iCub to learn the skill of archery. After being instructed how to hold the bow and release the arrow, the robot learns by itself to shoot the arrow in such a way that it hits the center of the target. The primary focus of the approach is on learning the bi-manual coordination for achieving the goal of the task. Two learning algorithms are proposed and compared: one with Expectation-Maximization based Reinforcement Learning, and one with vector feedback recursive regression. Both algorithms are used to modulate and coordinate the motion of the two hands, while an inverse kinematics controller is used for the motion of the arms. The secondary focus is on the image processing part, namely, how to recognize where the arrow hits the target. An algorithm based on Gaussian Mixture Models is proposed for color-based detection of the target and the arrow's tip. The approach is evaluated on the 53-DOF humanoid robot iCub.

Bibtex reference

@inproceedings{Kormushev10Hum,
  author="Kormushev, P. and Calinon, S. and Saegusa, R. and Metta, G.",
  title = "Learning the skill of archery by a humanoid robot iCub",
  booktitle = "Proc. {IEEE} Intl Conf. on Humanoid Robots ({H}umanoids)",
  month = "December",
  year = "2010",
  address = "Nashville, TN, USA",
  pages="417--423"
}

Video

After being instructed how to hold the bow and release the arrow, the robot learns by itself to aim and shoot arrows at the target. It learns to hit the center of the target in only 8 trials.

The learning algorithm, called ARCHER (Augmented Reward Chained Regression), was developed and optimized specifically for problems like archery training, which have a smooth solution space and prior knowledge about the goal to be achieved. In the case of archery, we know that hitting the center corresponds to the maximum reward we can get. Using this prior information about the task, we can view the position of the arrow's tip as an augmented reward. ARCHER uses a chained local regression process that iteratively estimates new policy parameters which have a greater probability of leading to the achievement of the goal of the task, based on the experience so far. An advantage of ARCHER over other learning algorithms is that it makes use of richer feedback information about the result of a rollout.
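The sketch below illustrates one way to read the chained regression idea in Python: the best rollouts so far are used to fit a local linear map from the arrow's hit position (the augmented reward) to the policy parameters, and that map is evaluated at zero error to propose the next parameters. This is a simplified, assumed reading of ARCHER, not the authors' implementation; the function name, the number of retained rollouts, and the plain least-squares fit are all illustrative choices.

    import numpy as np

    def archer_update(thetas, errors, n_best=3):
        """One ARCHER-style update (illustrative sketch, not the paper's code).

        thetas : (N, D) array of policy parameters tried so far
        errors : (N, 2) array of arrow hit positions relative to the target
                 center (the augmented reward; (0, 0) means a bullseye)
        Returns a new parameter vector estimated to drive the error to zero.
        """
        # Rank rollouts by distance to the center and keep the best few.
        dist = np.linalg.norm(errors, axis=1)
        best = np.argsort(dist)[:n_best]
        T, E = thetas[best], errors[best]

        # Fit a local linear model theta ~ W @ [error, 1] around the best
        # rollouts, then evaluate it at error = 0 to extrapolate to the goal.
        A = np.hstack([E, np.ones((len(best), 1))])
        W, *_ = np.linalg.lstsq(A, T, rcond=None)
        theta_new = W[-1]          # prediction of the model at zero error
        return theta_new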

For the archery training, the ARCHER algorithm is used to modulate and coordinate the motion of the two hands, while an inverse kinematics controller is used for the motion of the arms. After every rollout, the image processing part automatically recognizes where the arrow hits the target; the detected hit position is then sent as feedback to the ARCHER algorithm. The image recognition is based on Gaussian Mixture Models for color-based detection of the target and the arrow's tip.
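A minimal sketch of such color-based detection is given below, assuming scikit-learn's GaussianMixture: a GMM is fitted to sample pixel colors of an object class (the target or the arrow's tip) and pixels with high likelihood under that model are grouped into a detection. The color space, threshold, and component count are illustrative assumptions, not values from the paper.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_color_model(sample_pixels_rgb, n_components=3):
        """Fit a GMM to sample pixel colors of one object class,
        e.g. the arrow's tip (illustrative parameters)."""
        gmm = GaussianMixture(n_components=n_components, covariance_type="full")
        gmm.fit(sample_pixels_rgb.reshape(-1, 3).astype(float))
        return gmm

    def detect(image_rgb, gmm, log_lik_threshold=-12.0):
        """Return the centroid (x, y) of pixels whose color likelihood under
        the GMM exceeds a threshold, or None if nothing is detected."""
        h, w, _ = image_rgb.shape
        scores = gmm.score_samples(image_rgb.reshape(-1, 3).astype(float))
        mask = scores.reshape(h, w) > log_lik_threshold
        if not mask.any():
            return None
        ys, xs = np.nonzero(mask)
        return xs.mean(), ys.mean()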

The experiments are performed on the 53-DOF humanoid robot iCub. The distance between the robot and the target is 3.5 m, and the height of the robot is 104 cm.

Authors of the video:
Petar Kormushev, Sylvain Calinon, Ryo Saegusa and Giorgio Metta
Italian Institute of Technology
