SLAM Robot Mapping - Computerphile

Thanks to Jane Street for their support...
More links & stuff in full description below ↓↓↓

JANE STREET...
Applications for Summer 2023 internships are open and filling up... Roles include: Quantitative Trading, Software Engineering, Quantitative Research and Business Development... and more.
Positions in New York, Hong Kong, and London.


This video was filmed and edited by Sean Riley.


Comments

I made a little robot that used SLAM mapping once. That is, it slammed into walls to find out where they were

rallekralle

“The IMU has physics inside”
I’m stealing this line for my next UAV LIDAR lecture

jonwhite

I remember over a decade ago using a periodically rotating ultrasonic distance sensor on a small Lego robot to do a very basic SLAM, where (luckily) all of the relevant rooms were perfect 1m squares and all we needed to know was where we were in relation to the center of the room.

It's amazing how far tech has come and how incredibly diverse and useful it is! I love seeing multi-camera SLAM systems like those used on Skydio drones and inside-out tracked VR headsets.

StormBurnX

One of the coolest applications of the Kalman filter is SLAM modelling. Really gives you a sense of how flexible the Kalman filter is.

Mutual_Information
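The Kalman-filter view of SLAM mentioned in the comment above can be illustrated with a minimal 1-D sketch. This is not a real SLAM filter (which would jointly estimate robot pose and landmark positions); all numbers here are hypothetical, just to show the predict/update cycle:

```python
# Minimal 1-D Kalman filter: fuse noisy odometry (predict step)
# with noisy position measurements (update step).

def kf_predict(x, p, u, q):
    """Motion step: move by u, process noise variance q inflates uncertainty."""
    return x + u, p + q

def kf_update(x, p, z, r):
    """Measurement step: observe position z with measurement variance r."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1.0                          # initial estimate and variance
for u, z in [(1.0, 1.2), (1.0, 2.1), (1.0, 2.9)]:
    x, p = kf_predict(x, p, u, q=0.1)    # commanded move of 1.0 per step
    x, p = kf_update(x, p, z, r=0.5)     # noisy observation of true position

print(round(x, 2))   # estimate lands near the true position of 3.0
print(p)             # variance has shrunk well below the prior of 1.0
```

The same gain-weighted blend of prediction and measurement is what a full SLAM back end does, just over a much larger joint state.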

Amazing video. I love to see that ROS is used for all of these applications; it helps the research grow, and the results are impressive. Every year new amazing things happen.

antonisvenianakis

Makes you appreciate the human brain. I go down the garden on the right and back on the other side, and I am quite happy that I have closed the loop. On the way, things will have moved: the wheelbarrow, the washing in the wind. I can see a programmer absolutely pulling their hair out trying to get a robot to assign permanence to the right landmarks.

andrewharrison

Would be very interesting to have a deeper dive into this. Like, how do they extract features from the images, and how do they handle uncertainty in the data?

klaasvaak

SLAM is something I've been studying and working on for some time, so it's very cool to see it discussed! It is so useful in so many cases for automation, as you aren't dependent on external data such as GPS. It is extremely useful both for the maps it produces and because you can use the results for path planning and obstacle avoidance.

Gaivs

I only took an introductory course in robotics in college, but I remember learning a bit about SLAM. I personally think it's one of the coolest problems in computer vision.

LimitedWard

First video I've seen where something was said about the IMU... Thank you :)

gogokostadinov

I have a suggestion for map management: treat the robot as being inside a 3x3xN cuboid of cells. Cell 2x2x2 is the one the robot is ALWAYS in; cells are addressed as RxCxD (row x column x depth) from 0x0x0, which is always the first place it started recording. Cell 2x2x2 gets the most attention. The cells directly adjacent are for estimating the physics of any interaction (such as a ball knocked off an overhead walkway that it needs to evade); a change versus the stored recent state indicates objects and/or living things in the local area to be wary of, and otherwise the dedicated threads just keep checking. The cells from 4 onwards along dimension N are for vision-based threads; they default to empty space unless a map of the cell happens to be available. Whenever the robot reaches the normalised 1.0 edge of the cell it's in, the RxCxD index changes with it and its normalised position flips from 1.0 to -1.0 (and vice versa). Keeping to a normalised space for analysing physics and reachable locations simplifies the maths, since it only has to deal with what is in RAM rather than whatever map file(s) are on disk, which are loaded/unloaded by whatever thread moves the robot's position. The files just need to be stored something like this:



The underscores indicate the next float (in this example anyway), so in the above case the 3 values would be read into integers, cast to floats via pointer arithmetic or unions, then treated as the absolute position of the map in relation to what is stored (which is faster than parsing text, obviously). Combined with the extent of the cell in its non-normalised form, this gives an exponential increase in absolute positioning range while keeping RAM and CPU/GPU/TPU usage reasonable by not using doubles for physics and mapping of the cells themselves, which no doubt translates to even faster processing of those cells. For the count N in 3x3xN I would go with 10, so that it's not looking too far ahead but also not ignoring as much as a 3x3x3 arrangement would; 10 is also easier for manual maths and percentages. Using fixed-width map names also makes it easier to work out in your own head roughly where the robot saw something in relation to where it first started mapping, since the column vs row vs depth fields are neatly aligned, ready for a quick read by human eyes.

zxuiji
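The cell-handover idea in the comment above (step the integer cell index and flip the normalised coordinate from 1.0 to -1.0) can be sketched roughly as follows. This is a one-axis toy, not taken from any real system; the function name and overshoot handling are my own assumptions:

```python
# Sketch of the normalised-cell handover described above.
# Position within a cell is kept in [-1.0, 1.0]; crossing an edge
# steps the integer cell index and wraps the coordinate to the
# opposite sign, carrying any overshoot.

def step_axis(cell_idx, norm_pos, delta):
    """Advance one axis by delta in normalised units."""
    norm_pos += delta
    while norm_pos > 1.0:    # crossed the +1.0 edge: enter the next cell
        norm_pos -= 2.0      # 1.0 becomes -1.0, overshoot preserved
        cell_idx += 1
    while norm_pos < -1.0:   # crossed the -1.0 edge: enter the previous cell
        norm_pos += 2.0
        cell_idx -= 1
    return cell_idx, norm_pos

cell, pos = 0, 0.9
cell, pos = step_axis(cell, pos, 0.3)   # crosses the +1.0 edge
print(cell, round(pos, 2))              # -> 1 -0.8
```

The same wrap would be applied per axis (R, C, D), with the map-file loader keyed off the integer cell index.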

Long time since I have seen Brady. That was nice.

Petch

You should really go into depth regarding the mathematical intricacies of probabilistic robotics! It's an awesome field of math and engineering!

VulpeculaJoy

Can you use the loop closure to add some constant to the IMU guesswork so the errors are likely smaller next time?

skaramicke

I do so very much enjoy peeking under the hood into the robotics software through these videos.

Nethershaw

What is the single-board computer being used there?

douro

We recently programmed a small Mindstorms robot to steer through a course; it used lidar scans to map the environment and then found its path step by step by means of PRM (the probabilistic roadmap method). That was fun.
2D only but you have to start somewhere I guess :)

alwasitacatisaw
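The PRM approach mentioned above can be sketched in a few lines: sample random collision-free points, connect nearby ones into a graph, then search the graph. The 2-D world, obstacle shape, sample count, and connection radius below are all hypothetical, and a real PRM would also reject edges that cross obstacles, which this sketch skips:

```python
# Minimal 2-D probabilistic roadmap (PRM) sketch.
import random
from collections import deque
from math import dist

random.seed(0)
obstacle = lambda p: 0.4 < p[0] < 0.6 and p[1] < 0.7   # toy wall with a gap

start, goal = (0.1, 0.1), (0.9, 0.1)
nodes = [start, goal] + [
    p for p in ((random.random(), random.random()) for _ in range(200))
    if not obstacle(p)                      # keep only collision-free samples
]
radius = 0.15
edges = {i: [j for j, q in enumerate(nodes)
             if i != j and dist(nodes[i], q) < radius]
         for i in range(len(nodes))}
# (A real PRM would also discard edges whose segment crosses an obstacle.)

# BFS over the roadmap from node 0 (start) toward node 1 (goal).
seen, queue = {0}, deque([0])
while queue:
    i = queue.popleft()
    if i == 1:
        break
    for j in edges[i]:
        if j not in seen:
            seen.add(j)
            queue.append(j)

print("path found" if 1 in seen else "no path at this sample density")
```

With enough samples the roadmap almost surely connects start and goal through the gap in the wall, which is the "probabilistic" part of the name.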

My robot vacuum cleaner has been doing this for the last five years. Clever little bugger. £200 to never have to push the hoover around again. 👍

sarkybugger

When will you cover the beer pissing attachment?

hugofriberg

So you could in theory hive-mind these maps, so you have multiple robots and potentially drones that exchange map data with each other, right? That could be a very powerful way to quickly teach new robots the landmarks and rooms they might have to navigate, or to complete loops more quickly as robots work together on the same loops.

SyntheticFuture