Representations vs Algorithms: Symbols and Geometry in Robotics

Nicholas Roy, MIT

Abstract: In the last few years, the ability of robots to understand and operate in the world around them has advanced considerably. Examples include the growing number of self-driving car systems, the considerable body of work on robot mapping, and the growing interest in home and service robots. However, one limitation is that robots most often reason and plan using purely geometric models of the world, such as point features, dense occupancy grids, and action cost maps. To plan and reason over long length scales and timescales, and to carry out more complex missions, robots need to be able to reason about abstract concepts such as landmarks, segmented objects, and tasks (among other representations). I will talk about recent work on joint reasoning over semantic and physical representations, and what these joint representations mean for planning and decision making.
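To make the contrast in the abstract concrete, here is a minimal sketch (illustrative only, not code from the talk) of the two kinds of representation it describes: a dense geometric occupancy grid, and a small symbolic layer of named landmarks abstracted from it. All names (`grid`, `landmarks`, `is_occupied`, `near_landmark`) are invented for this example.

```python
import numpy as np

# Geometric representation: a dense occupancy grid, one cell per unit of space.
# 1.0 = occupied, 0.0 = free. A planner over this model reasons cell by cell.
grid = np.zeros((100, 100))
grid[40:60, 40:60] = 1.0  # a square obstacle in the middle of the map

# Symbolic representation: named landmarks abstracted from the geometry.
# A planner can reason over these few symbols instead of 10,000 cells.
landmarks = {
    "obstacle_1": {"center": (50, 50), "extent": (20, 20)},
}

def is_occupied(cell):
    """Geometric query: look up a single grid cell."""
    return grid[cell] > 0.5

def near_landmark(cell, name, radius=25):
    """Symbolic query: is a cell within `radius` cells of a named landmark?"""
    cx, cy = landmarks[name]["center"]
    return (cell[0] - cx) ** 2 + (cell[1] - cy) ** 2 <= radius ** 2

print(is_occupied((50, 50)))                  # True: inside the obstacle
print(near_landmark((60, 60), "obstacle_1"))  # True: close to the landmark
```

The point of the joint representation the talk describes is that the symbolic layer (landmarks, objects, tasks) is grounded in, and kept consistent with, the geometric layer, so long-horizon plans can be made over a handful of symbols rather than thousands of grid cells.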
Comment:

PaulTopping: Excellent talk! Seems at the right level to make real progress in AI and robotics. Understanding of the relationship between so-called symbolic AI and everything else is going to be key, IMHO. More of this please!