In imitation learning, multivariate Gaussians are widely used to encode robot behaviors. However, such approaches cannot properly represent end-effector orientation, because the distance metric in the space of orientations is not Euclidean.

In this work we present an extension of common imitation learning techniques to Riemannian manifolds. This generalization enables the encoding of joint distributions that include the full robot pose. We show that Gaussian conditioning, the Gaussian product, and nonlinear regression can all be achieved with this representation. The proposed approach is illustrated with examples on a 2-dimensional sphere, with an example of regression between two robot end-effector poses, as well as with an extension of the Task-Parameterized Gaussian Mixture Model (TP-GMM) and Gaussian Mixture Regression (GMR) to Riemannian manifolds.
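To make the manifold operations underlying such an extension concrete, the sketch below implements the exponential and logarithmic maps on the 2-sphere, the two basic tools needed to move between the manifold and the tangent space where Gaussian statistics are computed. This is an illustrative sketch only, not the authors' implementation; function names are chosen here for clarity.

```python
import numpy as np

def exp_map(p, v):
    """Exponential map on the unit sphere S^2: maps a tangent vector v
    at base point p to a point on the sphere along the geodesic."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return p
    return np.cos(norm_v) * p + np.sin(norm_v) * v / norm_v

def log_map(p, q):
    """Logarithmic map on S^2: maps a point q on the sphere to the
    tangent space at p; its norm is the geodesic distance from p to q."""
    cos_theta = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros_like(p)
    u = q - cos_theta * p
    return theta * u / np.linalg.norm(u)

# Example: geodesic distance between the north pole and a point on the equator.
p = np.array([0.0, 0.0, 1.0])
q = np.array([1.0, 0.0, 0.0])
v = log_map(p, q)
print(np.linalg.norm(v))       # geodesic distance, pi/2
print(exp_map(p, v))           # recovers q
```

With these two maps, data on the sphere can be projected into a tangent space, where Euclidean operations such as computing means and covariances apply, and the results mapped back to the manifold.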

The work is accompanied by source code, which can be downloaded here.