Active Vision Based Embodied-AI Design For Nano-UAV Autonomy | Ph.D. Defense of Nitin J. Sanket

Currently, the state of the art in aerial robot autonomy relies on sensors that directly perceive the world in 3D and on massive amounts of computation to process this information. This is in stark contrast to the methods used by small living beings such as birds and bees: they use exploratory and active movements to gather more information and simplify the perception task at hand. Using this active-vision-based philosophy, we achieve state-of-the-art autonomy on nano-quadrotors with minimal on-board sensing and computation.

In particular, I showcase four methods of achieving activeness on an aerial robot: 1. by moving the agent itself; 2. by employing an active sensor; 3. by moving a part of the agent's body; 4. by hallucinating active movements. Next, to make this work practically applicable, I show how hardware and software co-design can be performed to optimize the form of active perception to be used. Finally, I present the world's first prototype of a RoboBeeHive, which shows how to integrate multiple competencies centered around active vision in all its glory.

Reference:
Nitin Jagannatha Sanket
Active Vision Based Embodied-AI Design For Nano-UAV Autonomy

Affiliation:
Comments

Great presentation, and amazing work. Feeling inspired!

akshayelangovan

Congrats and thank you for sharing this. I learned a lot~

yizhou

Excellent!!! Thank you very much for sharing this :) :) :)

rajasarkar

amazing stuff, really in-depth approach

AbhishekSingh-uuml

The most useless, one-time, ugly solution I have ever seen

haluk