Stanford Seminar - Robot Learning in the Era of Large Pretrained Models

February 23, 2024
Dorsa Sadigh, Stanford University

In this talk, I will discuss how interactive robot learning can benefit from the rise of large pretrained models such as foundation models. I will introduce two perspectives. First, I will discuss the role of pretraining when learning visual representations, and how language can guide the learning of grounded visual representations useful for downstream robotics tasks. I will then discuss the choice of datasets during pretraining: specifically, how we can guide large-scale data collection, and what constitutes high-quality data for imitation learning. I will discuss some recent work on guiding data collection to enable compositional generalization of learned policies. Finally, I will end the talk by discussing a few creative ways of tapping into the rich context of large language models and vision-language models for robotics.
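The idea of language guiding grounded visual representations is often realized with a CLIP-style contrastive objective that pulls matched image and caption embeddings together. The function below is a minimal NumPy sketch of such a symmetric InfoNCE loss; the function name, shapes, and temperature value are illustrative assumptions, not details from the talk.

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning image and text embeddings.

    img_emb, txt_emb: (N, D) arrays where row i of each is a matched pair.
    Illustrative sketch of language-guided representation learning.
    """
    # L2-normalize so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (N, N) similarity matrix
    labels = np.arange(len(logits))         # matched pairs lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)               # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

Minimizing this loss makes each image embedding most similar to its own caption's embedding, which is what makes the resulting visual features "grounded" in language and reusable for downstream robot policies.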

Comments

The Age of Robots has arrived. With the unified interface for large-model robot hardware, we can undoubtedly achieve the same astonishing results as GPT language systems.

cardianlfan

great talk!
The discussion on guided data collection is pretty informative

cedricmanouan

It'd be good to hear the skipped content, including the reinforcement learning material and more details on the Take 2 items. Great presentation!

jaiberjohn

robot learning is 76% understanding education level i assume so artificial intelligence use study education quickly 😅
