EPFL STUDENT PROJECT PROPOSALS
The descriptions below are available for either semester projects or master thesis projects (the content will be adjusted accordingly). Suggestions of other projects (or variants of existing projects) are also welcome, as long as they fit within the group's research interests.
ROBOT ASSISTANCE FOR STANDING UP AND SITTING DOWN
The project will explore the problem of assisting a user in standing up and sitting down. This problem will be modeled as two kinematic chains, where only planar movements are considered. One kinematic chain will represent a humanoid robot and the other the user to be assisted. Both are assumed to have their feet fixed on the ground and to be connected at a common end-effector point representing their hands, so the resulting system is a closed kinematic chain in which only some of the articulations can be controlled.
The assistance skill consists of moving the user from a static sitting pose to a static standing pose through the contact point. It involves challenging aspects of anticipation, shared control, initiation of movement, leader-follower behaviors, and haptic communication. This project will concentrate on the dynamical aspects of such assistance, which requires taking into account inertia and the movement of the center of mass with respect to the feet.
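As a rough illustration of the planar modeling involved, the sketch below computes the forward kinematics and the center of mass of a single planar kinematic chain with its base fixed on the ground (the function names are illustrative, and each link's mass is assumed, for simplicity, to be concentrated at its midpoint):

```python
import numpy as np

def fk_planar(q, lengths):
    """Forward kinematics of a planar chain with a fixed base.

    q: joint angles (radians), lengths: link lengths.
    Returns the (n+1, 2) array of joint positions, base at the origin.
    """
    angles = np.cumsum(q)  # absolute orientation of each link
    steps = np.stack([lengths * np.cos(angles),
                      lengths * np.sin(angles)], axis=1)
    return np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])

def com_planar(q, lengths, masses):
    """Center of mass, with each link's mass lumped at its midpoint."""
    pts = fk_planar(q, lengths)
    midpoints = 0.5 * (pts[:-1] + pts[1:])
    return masses @ midpoints / np.sum(masses)
```

In the project, two such chains (robot and user) would be coupled at a shared hand contact point, and the horizontal position of the center of mass relative to the feet would enter the dynamic balance constraints.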
TENSOR FACTORIZATION FOR MULTI-TASK LEARNING
Manipulation skills in robotics can be encoded as a weighted superposition of movement primitives, where the problem consists of learning a dictionary of movement primitives together with the superposition weights. The current limitation is that each skill is learned individually, which restricts the ability to transfer knowledge across skills.
The project proposes to address this limitation by relying on tensor methods, which will be used by the robot to devise a common dictionary to learn multiple skills in an incremental manner. Tensor methods are extensions of standard linear algebra techniques to arrays of higher dimension (typically, extension of singular value decomposition to arrays of more than two dimensions, capturing correlations between different dimensions in a compact way).
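As a concrete illustration of extending the SVD to arrays of more than two dimensions, the sketch below implements a truncated higher-order SVD (a Tucker decomposition), assuming the data is available as a dense NumPy array; the function names are illustrative only:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front, then flatten."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD (Tucker decomposition sketch).

    Returns the core tensor and one factor matrix per mode; the
    factor matrices play the role of a per-mode dictionary."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    """Rebuild the full tensor from the core and factor matrices."""
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(
            np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T
```

With full ranks the reconstruction is exact; truncating the ranks yields the compact representation that captures correlations between the different dimensions.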
This project takes place within the LEARN-REAL project (learning physical manipulation skills with simulators using realistic variations).
GAUSSIAN PROCESS FOR IMPLICIT SURFACE REPRESENTATION
There are several ways of representing the environment and obstacles surrounding a robot, including geometric shapes, occupancy voxel grids, and implicit surface representations. The latter have several advantages, such as providing gradients that let the robot know how to avoid an obstacle, move along a surface, or establish contact with the closest surface. They also provide a measure of uncertainty that can be exploited within an optimal control strategy. Implicit surface representations are typically implemented as Gaussian processes, where standard kernel functions such as radial basis functions are used to measure distances. This project proposes to investigate the extension to other kernels that would better take into account the geometry of the problem.
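The baseline setup can be sketched as follows: Gaussian process regression with a radial basis function kernel, trained on points labeled with a common occupancy convention (0 on the surface, negative inside, positive outside). The posterior mean gives the implicit surface function and the posterior variance the associated uncertainty (hyperparameter values here are illustrative):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5):
    """Radial basis function (squared exponential) kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_implicit_surface(X, y, Xq, noise=1e-6):
    """GP posterior mean/variance of an implicit surface function.

    X: training points; y: occupancy labels (0 on the surface,
    -1 inside, +1 outside); Xq: query points."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Kq = rbf_kernel(Xq, X)
    mean = Kq @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Kq, np.linalg.solve(K, Kq.T))
    return mean, var
```

The gradient of the posterior mean (available in closed form for the RBF kernel) is what the robot would follow to approach or slide along the surface; replacing `rbf_kernel` with a geometry-aware kernel is the object of the project.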
This project takes place within the MEMMO project (memory of motion).
MOTION OPTIMIZATION WITH A LEGIBILITY PERSPECTIVE
Standard motion optimization problems in robotics rely on cost functions that measure how well a task is executed (e.g., positions to reach, via-points to pass through, orientations to maintain). In a manipulation task, the typical goal is to generate movements that execute the task efficiently. We propose to extend the definition of these costs to include human-robot interaction aspects.
The project will investigate the problem of generating robot movements that allow an external observer to quickly understand the robot's intention. This requires investigating legibility costs that can be used to generate motions that reduce ambiguity (e.g., by exaggerating the movement to make it more legible to a user interacting with the robot). The side image shows how a legible movement can be generated to reduce ambiguity in the intended movement (here, to emphasize that the blue object will be grasped instead of the orange object).
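One simple way to quantify legibility, sketched below, is to infer how probable each candidate goal is given the partial trajectory observed so far, by comparing the cost of the observed path (plus cost-to-go) against the optimal straight-line cost; the exact cost used in the project may well differ, and `beta` is an illustrative rationality temperature:

```python
import numpy as np

def goal_posterior(traj, start, goals, beta=5.0):
    """Probability of each candidate goal given a partial trajectory.

    traj: (T, d) array of observed positions; start: initial position;
    goals: list of candidate goal positions."""
    scores = []
    for g in goals:
        # cost so far + straight-line cost-to-go, vs. the optimal cost
        c_obs = (np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
                 + np.linalg.norm(g - traj[-1]))
        c_opt = np.linalg.norm(g - start)
        scores.append(np.exp(-beta * (c_obs - c_opt)))
    scores = np.array(scores)
    return scores / scores.sum()
```

A legibility cost would then reward trajectories whose prefixes assign high posterior probability to the intended goal early on, which naturally produces the exaggerated motions mentioned above.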
This project takes place within the ROSALIS project (robot skills acquisition through active learning and social interaction strategies).
SUBSPACE LEARNING FOR ROBOT CONTROL APPLICATIONS
In robotics, movements can be represented as a dynamical system describing the evolution of the robot's state over time. Most often, the system is nonlinear, and the standard approach is to linearize it so that, locally, a linear system can be considered. Another approach, originally proposed by Koopman, is to augment the original set of variables composing the state space so that the nonlinear system can be expressed as a linear system in this augmented state space.
To do this, several approaches have been proposed, such as forming this augmented state space with polynomial or Fourier expansions of the original signal, or learning this augmented state space with autoencoders. Often, these approaches consider systems that estimate the next state based on the current state.
A promising approach recently proposed in the reference below is to consider a history of previous states. It suggests going beyond standard polynomial or Fourier basis functions by exploiting delay coordinates as basis functions, in the form of a factorization of a Hankel matrix. The resulting algorithm is surprisingly short and simple to implement. This project proposes to explore this approach in a robot control task with the 7-axis Panda robot (Franka Emika).
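The core of the approach can indeed be written in a few lines; the sketch below (with illustrative function names, in the style of DMD with delay coordinates) stacks a scalar signal into a Hankel matrix, factorizes it with an SVD, and fits a linear one-step map in the reduced delay coordinates:

```python
import numpy as np

def hankel(x, delays):
    """Stack delayed copies of a 1D signal into a Hankel matrix.

    Each column is a length-`delays` window of the signal."""
    n = len(x) - delays + 1
    return np.stack([x[i:i + n] for i in range(delays)])

def fit_delay_linear_model(x, delays, rank):
    """Linear dynamics in delay coordinates (DMD-style sketch).

    Factorize the Hankel matrix with an SVD, keep `rank` modes, and
    fit a matrix A mapping each reduced column to the next one."""
    H = hankel(x, delays)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Z = np.diag(s[:rank]) @ Vt[:rank]          # reduced delay coordinates
    A = Z[:, 1:] @ np.linalg.pinv(Z[:, :-1])   # one-step linear map
    return A, U[:, :rank], Z
```

For a signal whose delay vectors span a low-dimensional subspace (e.g., a pure oscillation is exactly rank 2), the fitted linear map reproduces the dynamics in that subspace.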
Keywords: dynamical systems, robot control, subspace learning, delay embedding, time series analysis
SMARTPHONE INTERFACE USING AUGMENTED REALITY
We have developed an augmented reality interface running on Android smartphones to display a virtual robot (left image). The app relies on ARCore, Google's toolkit for building augmented reality applications on Android and iOS devices. This toolkit is used to estimate the location of the phone and to render 3D graphics on top of the camera's image displayed on the screen.
The aim of the project is to extend this interface to let the user move the virtual robot to a desired configuration (see right images as an illustration). Several options can be considered, such as clicking on the robot articulations and dragging them to their desired positions, or drawing a stick figure on top of the image, which is then interpreted to set the desired pose of the robot.
The Android smartphone will be programmed in Java (knowledge of either Java or C++ is required for the project).
The developed interface will be tested to change the pose of a real 7-axis Panda robot (Franka Emika), by first setting and visualizing the motion of the virtual robot on the smartphone, and then running the motion on the real robot. A basic interface between the mobile phone and the robot is already available for the project (using the ROS middleware). The proposed approach will finally be evaluated with inexperienced users to determine whether it is accurate and easy to use.
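Mapping a dragged articulation to a robot configuration is an inverse kinematics problem; as a minimal illustration (a planar chain rather than the real 7-axis arm in 3D, and in Python rather than the Java used on the phone), damped least-squares inverse kinematics can be sketched as:

```python
import numpy as np

def planar_ik(q, lengths, target, iters=200, damping=1e-2):
    """Damped least-squares inverse kinematics for a planar chain.

    Iteratively moves the end-effector toward `target` (a 2D point),
    as when a user drags the virtual robot's hand on the screen."""
    q = np.array(q, dtype=float)
    for _ in range(iters):
        angles = np.cumsum(q)
        steps = np.stack([lengths * np.cos(angles),
                          lengths * np.sin(angles)], axis=1)
        pts = np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])
        err = target - pts[-1]
        # Jacobian of the end-effector position w.r.t. the joint angles:
        # column i is the cross product of the joint axis with the lever arm
        J = np.stack([[-(pts[-1, 1] - pts[i, 1]) for i in range(len(q))],
                      [pts[-1, 0] - pts[i, 0] for i in range(len(q))]])
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), err)
    return q
```

The damping term keeps the update well-behaved near singular configurations, which matters when the user drags the hand toward the edge of the workspace.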
Keywords: augmented reality, smartphone interfaces, robotics, inverse kinematics, machine learning