Vision-Language-Action models for autonomous driving at Wayve


*About Oleg Sinavski's session on advancing autonomous driving with Vision-Language-Action (VLA) models*

Join Oleg Sinavski, Principal Applied Scientist at Wayve, as he presents the latest advancements in autonomous driving through Vision-Language-Action (VLA) models. Learn how Wayve integrates visual perception with natural language processing to create explainable, end-to-end driving systems.

*Highlights of the session include:*

- An overview of Wayve's innovative Lingo-1 and Lingo-2 models.
- The ability of VLA models to interpret complex driving scenarios and generate actionable driving commands (see the sketch after this list).
- Demonstrations of these models in real-world applications.
- The challenges and solutions in developing autonomous vehicles that can reason and act like humans.
- Insights into the future of autonomous driving technology from a leading expert in the field.
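
To make the VLA idea concrete, here is a minimal, hypothetical sketch of the interface such a model exposes: camera frames and a language prompt go in, a driving command and a natural-language rationale come out. All names here (`VLAModel`, `DrivingAction`, `predict`) are illustrative assumptions, not Wayve's actual Lingo-1/Lingo-2 API.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical types: a VLA model consumes images and text, and emits
# both a driving action and a natural-language rationale.

@dataclass
class DrivingAction:
    steering: float      # normalized steering angle, -1.0 (left) to 1.0 (right)
    acceleration: float  # normalized, -1.0 (full brake) to 1.0 (full throttle)

@dataclass
class VLAOutput:
    action: DrivingAction
    explanation: str     # language rationale for the chosen action

class VLAModel:
    """Illustrative stand-in for a vision-language-action model."""

    def predict(self, frames: List[bytes], prompt: str) -> VLAOutput:
        # A real model would encode the camera frames and the prompt with a
        # shared backbone and decode an action plus a rationale. This sketch
        # returns a fixed placeholder so the example stays runnable.
        return VLAOutput(
            action=DrivingAction(steering=0.0, acceleration=-0.2),
            explanation="Slowing down: pedestrian near the crosswalk.",
        )

model = VLAModel()
frames = [b"<camera frame bytes>"]  # placeholder sensor input
out = model.predict(frames, prompt="Why are you braking?")
print(out.action, out.explanation)
```

In this sketch, the action and the explanation come from a single call, mirroring the session's framing of VLA models as explainable, end-to-end driving systems.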
Comments

Very cool! I can imagine a human interacting with this while driving, speaking commands like "slow down," "speed up," or "where should I stop for lunch?"

joliver