System Identification: Laurent Bako
Machine Learning: Christian Wolf
Control Theory: Madiha Nadri
Control Theory: Vincent Andrieu
In the context of control of agents such as UAVs (drones) and mobile robots, this PhD position addresses fundamental contributions at the crossroads of Artificial Intelligence (AI) / Machine Learning (ML) and Control Theory (CT). The two fields, while distinct, have a long history of interaction, and as both mature their overlap becomes increasingly evident. CT provides differential, model-based approaches to stabilization and estimation problems. These model-driven approaches are powerful because they are based on a thorough understanding of the system and can leverage established physical relationships. However, nonlinear models usually need to be simplified, and they have difficulty accounting for noisy data and unmodeled uncertainties.
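To make the model-based viewpoint concrete, the following is a minimal sketch of stabilization by state feedback on a toy double-integrator model (an assumption for illustration, not the project's actual drone dynamics; the gains K are hand-picked rather than obtained by a design method such as LQR):

```python
# Discrete-time system x[k+1] = A x[k] + B u[k], stabilized by u = -K x.
# Toy double-integrator (position, velocity) with time step dt -- an
# illustrative stand-in for a physically derived model.
dt = 0.1
A = [[1.0, dt],
     [0.0, 1.0]]
B = [0.5 * dt**2, dt]
K = [1.0, 1.5]  # hand-tuned feedback gains (assumption)

def step(x):
    """One closed-loop step: compute u = -K x, then apply x <- A x + B u."""
    u = -(K[0] * x[0] + K[1] * x[1])
    return [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
            A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]

x = [1.0, 0.0]  # start 1 m away from the target, at rest
for _ in range(200):
    x = step(x)

# The closed-loop matrix A - B K has eigenvalues inside the unit circle,
# so the state decays toward the origin.
print(x)
```

The point of the sketch is the dependence on an explicit model (A, B): the same knowledge that makes the controller analyzable is exactly what is hard to obtain for noisy, partially unmodeled systems.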
Machine Learning, on the other hand, aims to learn complex models from (often large amounts of) data and can provide data-driven models for a wide range of tasks. Markov Decision Processes (MDPs) and Reinforcement Learning (RL) have traditionally provided a mathematically grounded framework for control applications, in which agents learn policies from past interactions with an environment. In recent years, this methodology has been combined with deep neural networks, which serve as high-capacity function approximators and model the discrete or continuous policy function, a function of the agent's accumulated reward, or both.
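The RL loop described above can be sketched in its simplest form with a tabular value function standing in for the deep network (a toy 5-state chain environment is assumed here for illustration, not the drone tasks targeted by the project):

```python
import random

# Tabular Q-learning on a toy 5-state chain MDP: the agent starts at state 0
# and receives reward 1 on reaching the goal state 4. The table Q plays the
# role that a deep network plays in deep RL: approximating accumulated reward.
random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = (+1, -1)             # move right / left along the chain
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def env_step(s, a):
    """Environment transition: clamp to the chain, reward 1 at the goal."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

for _ in range(500):           # episodes of interaction with the environment
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection from the current value estimates.
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r, done = env_step(s, a)
        # One-step temporal-difference update toward r + gamma * max_a Q(s', a).
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# Greedy policy extracted from the learned values: move right toward the goal.
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(GOAL)}
print(policy)
```

Replacing the table Q with a neural network, and the argmax with a learned continuous policy, gives the deep RL setting the project builds on.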
The PhD project proposes fundamental research, with planned algorithmic contributions on the integration of models, prior knowledge, and learning into control and the perception-action cycle.
The PhD candidate will participate in the ongoing collaboration between INSA-Lyon, University Lyon 1 and Onera (the French Aerospace Lab). Research exchanges (multi-month stays) with international partners are planned and encouraged.
In collaboration with our industrial partners, the obtained results will be applied to concrete problems in drone and robot control in complex environments requiring planning as well as fine-grained control. The objective is to learn stable strategies, which allow drones to solve a high-level goal (navigation, visual search) while at the same time maintaining stable flight paths.
 G. Shi, X. Shi, M. O'Connell, R. Yu, K. Azizzadenesheli, A. Anandkumar, Y. Yue, and S.-J. Chung. Neural Lander: Stable Drone Landing Control Using Learned Dynamics. arXiv:1811.08027, 2018.
 T. Lesort, N. Díaz-Rodríguez, J.-F. Goudou, and D. Filliat. State Representation Learning for Control: An Overview. arXiv:1802.04181, 2018.
 O. Ogunmolu, X. Gu, S. Jiang, and N. Gans. Nonlinear Systems Identification Using Deep Dynamic Neural Networks. arXiv:1610.01439, 2016.
 T. Söderström and P. Stoica. System Identification. Prentice Hall, 1988.
 E. Kaufmann, M. Gehrig, P. Foehn, R. Ranftl, A. Dosovitskiy, V. Koltun, and D. Scaramuzza. The Beauty and the Beast: Optimal Methods Meet Learning for Drone Racing. arXiv:1810.06224, 2018.