Funded PhD position at INSA-Lyon

Deep Learning and Robotics

Supervisors

Christian Wolf (LIRIS, INSA-Lyon)
Laetitia Matignon (LIRIS, Univ. Lyon 1)
Julie Digne (LIRIS, CNRS)

Where?

Lyon, France
INSA-Lyon (institution),
LIRIS (laboratory),
within the Remember project and its working group.

When, duration

Begin: September 2021
Duration: 36 months

Figure 1: Existing work from the group: imbuing deep neural networks with inductive bias and spatial memory. Left: topological maps (ECCV 2020); right: metric maps (ECML-PKDD 2020).

We are a strong group targeting excellent research, with publications in top-level conferences and journals.
We encourage applications from candidates with an excellent academic record.

Topic

This thesis will deal with methodological contributions (models and algorithms) for training real and virtual agents to learn to solve complex tasks autonomously. Intelligent agents require high-level reasoning skills, awareness of their environment, and the ability to make the right decisions at the right time [1]. The required decision-making policies are complex because they involve large observation and state spaces, partial observability, and highly nonlinear, intricate interdependencies. We believe that learning such policies will depend on the algorithm's ability to learn compact, spatially and semantically structured memory representations, capable of capturing complex regularities of the environment and of the task at hand.

A key requirement is the ability to learn these representations with minimal human intervention and annotation, as the manual design of complex representations is practically impossible. This requires the efficient use of raw data and the discovery of patterns through different learning formalisms: supervised, unsupervised, or self-supervised learning, learning from reward, or from intrinsic motivation [6,7], etc.

Another key issue is the choice of network structure (inductive bias). Past work of the group explored spatial maps with topological [1] or metric [2] structure (Figure 1), and current work investigates transformers, which we have recently applied successfully to video processing [4].
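To make the notion of a metric spatial memory concrete, the sketch below shows one simple way an agent could scatter observed features into an egocentric 2D grid, with the agent at the grid centre. This is a minimal illustration under assumed conventions (cell size, max-pooling write rule, feature layout), not the implementation used in the group's EgoMap work [2].

```python
import numpy as np

def write_to_egomap(egomap, features, positions, cell_size=0.1):
    """Scatter per-point features into an egocentric 2D metric grid.

    egomap:    (H, W, C) spatial memory; the agent sits at the centre cell.
    features:  (N, C) feature vectors, e.g. extracted by a CNN.
    positions: (N, 2) metric (x, y) offsets of each point from the agent.
    """
    H, W, C = egomap.shape
    # Convert metric offsets to grid indices, centred on the agent.
    rows = (positions[:, 0] / cell_size + H // 2).astype(int)
    cols = (positions[:, 1] / cell_size + W // 2).astype(int)
    # Discard points that fall outside the map.
    inside = (rows >= 0) & (rows < H) & (cols >= 0) & (cols < W)
    # Max-pooling write rule: keep the strongest feature per cell,
    # so repeated observations of a cell do not overwrite each other.
    for r, c, f in zip(rows[inside], cols[inside], features[inside]):
        egomap[r, c] = np.maximum(egomap[r, c], f)
    return egomap
```

A policy network can then read from such a map (e.g. with convolutions or attention) to make decisions grounded in the spatial layout of the environment; the inductive bias lies in forcing the memory to respect metric geometry rather than learning it from scratch.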

Explainability of the developed models will also be a key issue and will be explored [5,8].

References of the group

[1] Edward Beeching, Jilles Dibangoye, Olivier Simonin and Christian Wolf. Learning to plan with uncertain topological maps. To appear in European Conference on Computer Vision (ECCV), 2020.

[2] Edward Beeching, Jilles Dibangoye, Olivier Simonin and Christian Wolf. EgoMap: Projective mapping and structured egocentric memory for Deep RL. To appear in European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2020.

[3] Edward Beeching, Christian Wolf, Jilles Dibangoye and Olivier Simonin. Deep Reinforcement Learning on a Budget: 3D Control and Reasoning Without a Supercomputer. To appear in International Conference on Pattern Recognition (ICPR), 2020.

[4] Brendan Duke, Abdalla Ahmed, Christian Wolf, Parham Aarabi and Graham W. Taylor. SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation. To appear in International Conference on Computer Vision and Pattern Recognition (CVPR), 2021 (oral presentation).

[5] Théo Jaunet, Romain Vuillemot and Christian Wolf. DRLViz: Understanding Decisions and Memory in Deep Reinforcement Learning. In Computer Graphics Forum (Proceedings of Eurovis), 2020.

[6] A. Aubret, L. Matignon and S. Hassas. A survey on intrinsic motivation in reinforcement learning. arXiv preprint arXiv:1908.06976.

[7] A. Aubret, L. Matignon and S. Hassas. ELSIM: end-to-end learning of reusable skills through intrinsic motivation. To appear in European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2020.

[8] Corentin Kervadec, Grigory Antipov, Moez Baccouche and Christian Wolf. Roses Are Red, Violets Are Blue... but Should VQA Expect Them To? To appear in International Conference on Computer Vision and Pattern Recognition (CVPR), 2021.