My research addresses the question of how an agent (a robot or a computer program) can learn to operate in an unknown environment.
- Sensori-motor theories. In this framework, the regularities an agent must learn from its interaction with the environment are the sensory consequences of its own actions. From a theoretical point of view, this changes the structure of the data to be learned, but it may also be the only way to acquire genuine knowledge rather than mere representations. However, it precludes learning from a pre-collected database (since actions, and thus a body, are necessary for learning, which relates to embodiment) and therefore requires autonomous learning.
- Autonomous learning. Learning in interaction with an unknown environment requires several properties: learning must be online (one example at a time, with no replay), unsupervised (i.e. without labels), and continual (new tasks keep appearing, so the data are not i.i.d.). Machine learning rarely studies these properties in combination, although some of its methods can be adapted. Developmental learning, inspired by neuroscience and cognitive science and consisting in the progressive, hierarchical acquisition of interaction skills with the environment, seems more promising.
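As an illustration of the online constraint (this sketch is not from the text; the function name and toy data are hypothetical), a streaming k-means update processes each sample once and discards it, never storing a dataset:

```python
import numpy as np

def online_kmeans_step(prototypes, counts, x):
    """One online (streaming) k-means update: assign a single sample x
    to its nearest prototype and move that prototype toward it."""
    k = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    counts[k] += 1
    # Decaying step size 1/counts[k] gives the running mean of the cluster.
    prototypes[k] += (x - prototypes[k]) / counts[k]
    return k

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 2))   # 3 prototypes in a 2-D sensory space
counts = np.ones(3)
# Samples arrive one at a time (three loosely separated toy clusters).
for _ in range(1000):
    x = rng.normal(size=2) + rng.choice([-2.0, 0.0, 2.0])
    online_kmeans_step(prototypes, counts, x)
```

The memory footprint is constant in the number of samples, which is the property that matters for an agent learning over its whole lifetime.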
- Multisensory fusion. Learning sensori-motor contingencies requires combining various types of information (sensory and motor). Moreover, sensory data are themselves usually multimodal, coming from multiple sensors or mixing several kinds of information or structure within a single sensor. Such learning must determine which pieces of information to merge and how much confidence to give to each.
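The confidence-weighting problem has a classical minimal form (a sketch, not the method described in the text): inverse-variance fusion of two independent Gaussian estimates of the same quantity, where the less noisy sensor automatically receives the higher weight.

```python
def fuse(mu1, var1, mu2, var2):
    """Precision-weighted fusion of two independent Gaussian estimates.
    Weights are the inverse variances, so a reliable sensor dominates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)          # fused estimate is more certain than either
    return mu, var

# A precise sensor (variance 0.1) reading 10.0 against a noisy
# sensor (variance 1.0) reading 12.0: the fused value stays near 10.
mu, var = fuse(10.0, 0.1, 12.0, 1.0)
```

The fused variance is always smaller than either input variance, which formalizes the intuition that merging modalities should reduce, not dilute, confidence.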
- Artificial curiosity. Sensori-motor learning must answer the question of which action to perform, and why. Artificial curiosity offers an interesting framework in which the agent maximizes an internal (meta-)reward based on its own learning progress. This formalism can be coupled with active perception mechanisms that choose actions reducing the uncertainty of the current perception.
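One common way to instantiate such an internal reward (a minimal sketch under assumptions of my own; the function name and window size are illustrative, not from the text) is to reward the recent decrease of the agent's prediction error, i.e. its learning progress:

```python
import numpy as np

def learning_progress_reward(errors, window=5):
    """Intrinsic reward = recent drop in prediction error.
    Positive while the agent is still improving on a skill,
    near zero once the skill is mastered (or unlearnable)."""
    if len(errors) < 2 * window:
        return 0.0
    older = np.mean(errors[-2 * window:-window])
    recent = np.mean(errors[-window:])
    return older - recent

# Toy error curve that falls quickly then plateaus: the reward is
# large during the fast-learning phase and fades afterwards, so a
# curious agent naturally moves on to the next learnable activity.
errors = [1.0 / (1.0 + 0.5 * t) for t in range(40)]
early_reward = learning_progress_reward(errors[:12])
late_reward = learning_progress_reward(errors)
```

Rewarding progress rather than raw error avoids the classic trap of an agent fixating on pure noise, which stays maximally unpredictable but yields no progress.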