DTC: Deep Tracking Control

We have combined trajectory optimization and reinforcement learning to achieve versatile and robust perceptive legged locomotion.

Abstract: Legged locomotion is a complex control problem that requires both accuracy and robustness to cope with real-world challenges. Legged systems have traditionally been controlled using trajectory optimization with inverse dynamics. Such hierarchical model-based methods are appealing due to intuitive cost function tuning, accurate planning, generalization, and most importantly, the insightful understanding gained from more than one decade of extensive research. However, model mismatch and violation of assumptions are common sources of faulty operation. Simulation-based reinforcement learning, on the other hand, results in locomotion policies with unprecedented robustness and recovery skills.
Yet, all learning algorithms struggle with sparse rewards emerging from environments where valid footholds are rare, such as gaps or stepping stones. In this work, we propose a hybrid control architecture that combines the advantages of both worlds to simultaneously achieve greater robustness, foot-placement accuracy, and terrain generalization. Our approach utilizes a model-based planner to roll out a reference motion during training. A deep neural network policy is trained in simulation, aiming to track the optimized footholds. We evaluate the accuracy of our locomotion pipeline on sparse terrains, where pure data-driven methods are prone to fail. Furthermore, we demonstrate superior robustness in the presence of slippery or deformable ground when compared to model-based counterparts. Finally, we show that our proposed tracking controller generalizes across different trajectory optimization methods not seen during training. In conclusion, our work unites the predictive capabilities and optimality guarantees of online planning with the inherent robustness attributed to offline learning.
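
To make the hybrid scheme in the abstract more concrete, here is a minimal, hypothetical sketch in Python of how a tracking objective of this kind could be shaped: a model-based planner supplies optimized footholds, and a learned policy is rewarded for landing near them while following a velocity command. The function names, weights, and Gaussian shaping below are illustrative assumptions, not the authors' implementation.

import numpy as np

def foothold_tracking_reward(measured, planned, sigma=0.05):
    """Close to 1 when each measured foothold lands near its planned
    (optimized) target; decays smoothly with the tracking error in meters."""
    errors = np.linalg.norm(measured - planned, axis=-1)
    return float(np.mean(np.exp(-(errors / sigma) ** 2)))

def step_reward(measured_footholds, planned_footholds,
                base_velocity, commanded_velocity,
                w_track=1.0, w_vel=0.5):
    """Per-step reward combining foothold tracking with velocity following."""
    r_track = foothold_tracking_reward(measured_footholds, planned_footholds)
    r_vel = float(np.exp(-np.sum((base_velocity - commanded_velocity) ** 2)))
    return w_track * r_track + w_vel * r_vel

# Example: four feet, planned vs. measured xy foothold positions in meters.
planned = np.array([[0.35, 0.20], [0.35, -0.20], [-0.35, 0.20], [-0.35, -0.20]])
measured = planned + np.random.normal(scale=0.02, size=planned.shape)
print(step_reward(measured, planned,
                  base_velocity=np.array([0.48, 0.0]),
                  commanded_velocity=np.array([0.5, 0.0])))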

Authors: Fabian Jenelten, Junzhe He, Farbod Farshidian, and Marco Hutter

Video: Fabian Jenelten

#ANYmal #leggedrobot #robot #robotics #robotdog #AI #reinforcementlearning #rl #rescue #innovation #armasuisse #arche2023 #scienceresearch #stepping
Comments

To say this is impressive would be an understatement. I look forward to what comes next and wonder what the state of the art will be in 10 years.

TheCrassEnnui_

The most natural movement I've ever seen, so good

gloudsdu

Your work on walking on a soft, deformable floor when the camera perceives a rigid surface is amazing. I am floored.

BalajiSankar

Truly impressive. I had previously tried to implement a walking gait for a walking robot, but the MPC controller never worked outside of simulation.
Seeing this paper motivates me to try it again.

avinashthakur

This is way more impressive than ANY other demo I've seen lately from anyone. Just don't get greedy and let governments take up all your contracts.

divinusfilius

Great video and impressive work! Can't wait to see more about swarm planning and coordination in the future, thanks for sharing.

csmith

It's starting to look serious.
I still wonder about the battery autonomy of such a robot.

omnianti

Do these robots have some kind of pressure sensors on their limbs? It seems to me that this would make it much easier to navigate on soft and uncertain ground. Although the design is not perfect yet, it moves incredibly well on a hard surface, and I am sure it will be very useful in helping people.

Skora

So with artificial training environments with realistic physics, you can literally train this thing fast-forwarded, giving it thousands of years of experience in days or something?? That's so cool. We're all fked.

divinusfilius

They are ahead of many other robotics companies, including Tesla, when it comes to locomotion... but what about autonomy? The robot needs to do this autonomously for this to be impressive. And self-charge/self-sustain.

waynephillips

Apart from the very impressive results, can you say what you have used to simulate/train the trajectory logic?

VK-qhpr

It stumbles in a very "environment-aware" way; it looks like an actual creature :o

Suushidesu

I feel like this will soon be child's play for anyone, with how fast AI is improving. Model in the slipperiness and sliding footholds in the 3D simulation and train a new AI from the ground up just to see if it works. I believe we've learned from other new AI systems that "human intervention", like how these robots always 'step' rhythmically, even when 'standing still', is something that holds them back in the end.

I don't know what type of AI system this bot is using, but it definitely seems like an advanced iteration of a fairly old way of doing things in the AI space. Don't be afraid to step out of your comfort zone and have a completely separate AI try to learn things solely from 3D environments, with little to no 'humans think it should be done this way' intervention.

Ree

I can see the IRS ordering 200,000 of these; they will find you even under your house. Attach some lasers to the front, and you WILL pay your taxes.

divinusfilius