### Abstract

We present a human-robot interface for projecting information onto arbitrary planar surfaces by sharing a visual understanding of the workspace. A compliant 7-DOF robotic arm endowed with a pico-projector and a depth sensor was used for the experiment. The perceptual capabilities allow the system to detect geometric features of the environment, which are used to superimpose undistorted projections on planar surfaces. The proposed scenario consists of a first phase in which the user physically interacts with the gravity-compensated robot to choose the place where the projection will appear. In the second phase, the robotic arm autonomously superimposes visual information in the selected area and actively adapts to perturbations. We also present a proof of concept for managing occlusions and tracking the position of the projection whenever obstacles enter the projection field.
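Projecting an undistorted image on a tilted plane typically comes down to pre-warping the image with a homography between the projector frame and the target surface. The sketch below is illustrative only (it is not the paper's implementation, and the corner coordinates are made up): it estimates the 3x3 homography from four point correspondences with the direct linear transform, using plain numpy.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    via the direct linear transform (needs 4+ correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to a 2-D point (homogeneous division)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Corners of the projector image (source) and the quadrilateral where the
# detected target plane appears in projector coordinates (destination).
# The destination values here are hypothetical, for illustration.
src = [(0, 0), (848, 0), (848, 480), (0, 480)]
dst = [(60, 40), (800, 70), (760, 450), (90, 430)]
H = homography_dlt(src, dst)
```

Warping the image with `H` before projection cancels the perspective distortion introduced by the oblique surface; in practice the destination quadrilateral would come from the plane detected by the depth sensor.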

### Bibtex reference

@inproceedings{DeTommaso12ICSR,
author="De Tommaso, D. and Calinon, S. and Caldwell, D. G.",
title="Using Compliant Robots as Projective Interfaces in Dynamic Environments",
booktitle="Intl Conf. on Social Robotics ({ICSR})",
series="LNAI",
publisher="Springer-Verlag",
volume="7621",
year="2012",
pages="338--347"
}

### Video

In this work we present a novel active interface with perception and projection capabilities for simplifying the skill transfer process. During learning, the real workspace is used as a tangible interface to help the user better understand what the robot has learned so far, to display information about the task, or to provide feedback and guidance. The user can thus incrementally visualize and assess the learner's state while focusing on the skill transfer, without disrupting the continuity of the teaching interaction. We show here a proof of concept based on an experimental setup in which a pico-projector and a Kinect RGB-D camera are mounted on the end-effector of a 7-DOF robotic arm.

While a fixed camera/projector system has a static field of view, the proposed robotic setup can project and detect at various places and under various angles. This allows the system to actively handle occlusions, so the user does not need to worry about staying in the field of view during the interaction. Such a configuration also offers adaptive multiresolution tracking and projection. For example, detecting users in the surroundings requires a different field of view than detecting objects close to the robot. Similarly, to project on a large surface (e.g., to give an overview of the objects involved in an assembly task), the robot can move back to increase its field of view; then, if precise information about the positioning of an object is needed (e.g., checking the alignment of screws and threads), the robot can move closer to the area of interest.
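One simple way to detect such occlusions from the depth sensor, sketched below under assumptions of our own (this is not the paper's pipeline, and all values are synthetic), is to fit a plane to the projection surface and flag depth points that lie noticeably in front of it:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through an Nx3 point cloud."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def occlusion_mask(points, coeffs, tol=0.02):
    """Flag points more than `tol` metres closer to the sensor than the plane."""
    a, b, c = coeffs
    predicted_z = a * points[:, 0] + b * points[:, 1] + c
    return (predicted_z - points[:, 2]) > tol

# Synthetic depth data: a flat surface 1.5 m away, with a few points
# (e.g. a hand entering the projection field) 30 cm in front of it.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(200, 2))
z = np.full(200, 1.5)
z[:5] = 1.2                          # occluding points
pts = np.c_[xy, z]

coeffs = fit_plane(pts[5:])          # fit on the unoccluded surface
mask = occlusion_mask(pts, coeffs)   # True where an obstacle intrudes
```

Once an occluded region is identified, the projected content (or the arm itself) can be shifted so the projection lands on an unobstructed part of the surface.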

Video credit: Davide De Tommaso