Learning Complex Motion Planning Policies

Duration: 2021–2023

Description

In this project, we aim to address motion planning for robotic systems such as multi-legged walking robots in complex tasks like traversing unstructured terrain or climbing a wall. The proposed approach exploits the capabilities of learnable locomotion controllers to build a set of locomotion skills applicable to complex motion tasks. We plan to investigate biologically inspired locomotion control based on coupled neural oscillators to develop neural locomotion controllers with high plasticity that are capable of learning multiple gaits. We further plan to employ precise motion planning to synthesize motion planning policies using learning techniques such as hierarchical temporal memory. As a complementary approach, we propose to combine global, less accurate models with more precise local models using inter-basin actions, and to employ deep reinforcement learning methods to provide smooth transitions between the locally stable regions of the individual motion controllers. We aim to provide a comprehensive analysis of the proposed approaches and to verify them experimentally in scenarios with real robotic systems.
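
To give a flavor of the coupled-oscillator idea behind the locomotion controllers, the sketch below shows a minimal central pattern generator in Python: one phase oscillator per leg, with Kuramoto-style coupling that pulls the phases toward the offsets of a chosen gait. The gait tables, gains, and the mapping from phase to joint setpoints are illustrative assumptions, not the controller developed in the project.

# Minimal sketch of a coupled-oscillator CPG for a six-legged robot.
# Each leg is driven by one phase oscillator; pairwise coupling drives the
# phase differences toward the offsets of a chosen gait (tripod or wave).
# The gait tables, gains, and output mapping are assumed for illustration.
import numpy as np

N_LEGS = 6
FREQ = 1.0          # stepping frequency [Hz]
K_COUPLING = 4.0    # coupling gain pulling phases toward the gait pattern
DT = 0.01           # Euler integration step [s]

# Desired phase offsets per leg (radians) -- assumed gait tables.
GAITS = {
    "tripod": np.array([0.0, 0.5, 0.0, 0.5, 0.0, 0.5]) * 2 * np.pi,
    "wave":   np.arange(N_LEGS) / N_LEGS * 2 * np.pi,
}

def cpg_step(phases, gait, dt=DT):
    """One Euler step of the coupled phase oscillators (Kuramoto-style)."""
    offsets = GAITS[gait]
    dphi = np.full(N_LEGS, 2 * np.pi * FREQ)
    for i in range(N_LEGS):
        for j in range(N_LEGS):
            # Coupling term drives phase differences toward the gait offsets.
            dphi[i] += K_COUPLING * np.sin(
                phases[j] - phases[i] - (offsets[j] - offsets[i])
            )
    return (phases + dphi * dt) % (2 * np.pi)

def leg_commands(phases, swing_amp=0.3, lift_amp=0.2):
    """Map oscillator phases to simple hip-swing / leg-lift setpoints [rad]."""
    swing = swing_amp * np.cos(phases)
    lift = lift_amp * np.maximum(0.0, np.sin(phases))  # lift only during swing
    return swing, lift

if __name__ == "__main__":
    phases = np.random.uniform(0, 2 * np.pi, N_LEGS)
    for _ in range(500):                      # let the network synchronize
        phases = cpg_step(phases, gait="tripod")
    rel = (phases - phases[0]) % (2 * np.pi)  # phases relative to leg 0
    print("relative phases [cycles]:", np.round(rel / (2 * np.pi), 2))

In the project's framing, the learnable part would adapt the coupling structure and phase offsets so that multiple gaits can be acquired and switched, rather than fixing them by hand as in this example.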

Related publications