The Truth About Self Driving Cars


Almost a decade ago, a sizable list of tech companies, collectively wielding over $100 billion in investment, asserted that within five years the once-unimaginable dream of fully self-driving cars would become a normal part of everyday life. These promises, of course, have not come to fruition. Despite this abundance of funding, research, and development, expectations are beginning to shift as the dream of fully autonomous cars is proving far more complex and difficult to realize than automakers had anticipated.

THE LAYERS OF SELF DRIVING
Much like humans driving a vehicle, autonomous vehicles operate using a layered approach to information processing. The first layer combines multiple satellite-based systems, vehicle speed sensors, inertial navigation sensors, and even terrestrial signals such as cellular triangulation and Differential GPS, summing the vehicle's movement vector as it traverses from its starting waypoint to its destination. The next layer detects and maps the environment around the vehicle, both to follow a navigation path and to avoid obstacles. At present, the primary mechanisms of environment perception are laser navigation, radar navigation, and visual navigation.
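
For a concrete sense of that first, positioning layer before moving on to the perception mechanisms, here is a minimal Python sketch of summing a movement vector and blending it with a satellite fix; the sensor readings, update interval, and blending weight are hypothetical placeholders, not figures from the video.

```python
# Minimal dead-reckoning + satellite-blending sketch (illustrative values only).
import math

def dead_reckon(x, y, speed_mps, heading_rad, dt_s):
    """Advance the estimated position using wheel speed and heading alone."""
    return (x + speed_mps * math.cos(heading_rad) * dt_s,
            y + speed_mps * math.sin(heading_rad) * dt_s)

def fuse_with_gps(dr_pos, gps_pos, gps_weight=0.3):
    """Blend the dead-reckoned estimate with a (noisier) satellite fix."""
    return tuple((1 - gps_weight) * d + gps_weight * g
                 for d, g in zip(dr_pos, gps_pos))

# One 0.1 s update step along the route toward the next waypoint.
pos = (0.0, 0.0)
pos = dead_reckon(*pos, speed_mps=13.9, heading_rad=0.05, dt_s=0.1)
pos = fuse_with_gps(pos, gps_pos=(1.42, 0.06))
print(pos)
```

A real system would use a proper estimator such as a Kalman filter rather than a fixed blend weight; the point here is only how several positioning sources are summed into one movement estimate.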

LIDAR
In laser navigation, a LIDAR system emits a continuous laser beam or pulse toward a target, and the reflected signal is received back at the sensor. By measuring the reflection time, signal strength, and frequency shift of the reflected signal, spatial point-cloud data of the target is generated. Since the 1980s, early computer-based experiments with autonomous vehicles have relied on LIDAR technology, and even today it is used as the primary sensor for many experimental vehicles. These systems can be categorized as single-line, multi-line, or omnidirectional.
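
As a rough illustration of that time-of-flight principle, the sketch below converts one reflected pulse into a point of the spatial cloud; the timing and beam angles are made-up example values.

```python
# Toy time-of-flight conversion: one reflected pulse -> one point in the cloud.
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_to_point(round_trip_s, azimuth_rad, elevation_rad):
    """Turn a pulse's round-trip time and beam angles into an (x, y, z) point."""
    r = C * round_trip_s / 2.0  # halve it: the light travels out and back
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A pulse returning after ~200 ns corresponds to a target roughly 30 m away.
print(pulse_to_point(200e-9, azimuth_rad=0.1, elevation_rad=0.0))
```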

RADAR
The long-range radars used by autonomous vehicles tend to be millimeter-wave systems that can provide centimeter accuracy in position and movement determination. These systems, known as frequency-modulated continuous-wave radar, or FMCW, continuously radiate a modulated wave and use changes in the phase or frequency of the reflected signal to determine distance.
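
A back-of-the-envelope sketch of that relationship is below; the chirp bandwidth, sweep time, and beat frequency are assumed example figures, not parameters quoted in the video.

```python
# Back-of-the-envelope FMCW range: the beat frequency between the transmitted
# chirp and its echo is proportional to target distance.
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(beat_hz, bandwidth_hz, sweep_time_s):
    """Range implied by the beat between a linear FMCW chirp and its echo."""
    return C * beat_hz * sweep_time_s / (2.0 * bandwidth_hz)

# Assumed example: a 1 GHz chirp swept over 50 microseconds, echo beating at 400 kHz.
print(fmcw_range(beat_hz=400e3, bandwidth_hz=1e9, sweep_time_s=50e-6))  # ~3.0 m
```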

VISUAL PERCEPTION
Visual perception systems attempt to mimic how humans drive by identifying objects, predicting their motion, and determining their effect on the immediate path a vehicle must take. Many within the industry, including Tesla, the leader of the vision-only movement, believe that a camera-centric approach, when combined with enough data and computing power, can push artificial intelligence systems to do things that were previously thought impossible.

AI
At the heart of the most successful visual perception systems is the convolutional neural network, or CNN. Its ability to classify objects and patterns within the environment makes it an incredibly powerful tool. As the system is exposed to real-world driving imagery, either through collected footage or from test vehicles, more data is gathered, and the cycle of human labeling of the new data and retraining the CNN is repeated. This allows the network to gauge distance and infer the motion of objects, as well as the expected paths of other vehicles, based on the driving environment.
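
To make that label-and-retrain cycle concrete, here is an illustrative, deliberately tiny PyTorch sketch of a CNN classifier being updated on a batch of newly human-labeled frames; the class count, image size, and network shape are hypothetical and not drawn from any production system.

```python
# Illustrative sketch only: a tiny CNN updated on freshly labeled camera crops.
# Requires PyTorch.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=3):  # e.g. vehicle / pedestrian / cyclist
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                 # x: (batch, 3, 64, 64) image crops
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One pass of the cycle: newly collected frames plus fresh human labels.
images = torch.randn(8, 3, 64, 64)       # stand-in for camera crops
labels = torch.randint(0, 3, (8,))       # stand-in for human annotations
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```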

At the current state of the technology, the fatal flaw of autonomous vehicle advancement has been the pipeline by which these systems are trained. A typical autonomous vehicle has multiple cameras, each capturing tens of images per second. The sheer scale of this data, which still requires human labeling and subsequent retraining, becomes a pinch point in the overall training process.
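
Some rough, assumed arithmetic shows why: with hypothetical figures of eight cameras each capturing 30 frames per second, a single vehicle produces hundreds of thousands of frames per hour of driving, far more than human labelers can keep up with.

```python
# Rough arithmetic behind the "pinch point". The camera count and frame rate
# are assumed values in the spirit of "multiple cameras, tens of images per second".
cameras = 8                  # assumed number of cameras on the vehicle
frames_per_second = 30       # assumed capture rate per camera
seconds_per_hour = 3600

frames_per_hour = cameras * frames_per_second * seconds_per_hour
print(f"{frames_per_hour:,} frames per vehicle-hour")  # 864,000 frames
```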

DANGERS
Even within the realm of human-monitored driver assistance, in 2022 the National Highway Traffic Safety Administration received reports of over 400 crashes involving automated technology during the preceding 11 months. Several noteworthy fatalities have even occurred, with detection and decision-making systems identified as a contributing factor.

COUNTERPOINT
While the argument could be made that human error statistically causes far more accidents than autonomous vehicles do, including the majority of driver-assisted accidents, when autonomous systems do fail they tend to fail in situations that a human driver would otherwise handle easily. Despite autonomous vehicles being able to react and make decisions faster than a human, the environmental perception those decisions rest on is still so far from the capabilities of the average human that only a minority of the public trusts them.

--
SUPPORT NEW MIND ON PATREON
COMMENTS

I found the comparison with insects especially interesting and agreeable. For me it really resonated with the fact that humans are just really complex, and driving is so different from person to person, it’s almost cultural.

benji

Watching a lot of dash cam videos has ingrained in me the importance of predicting what's going on around you on the road. Having a dash cam gives people a sense of "I'm in the right and I can prove it." They tend to keep doing the right thing and ignore people doing the wrong thing, and this often leads to trouble. It is incredibly important to predict potential violations of the rules and avoid them. Self-driving cars will need to adapt to this. It's not something you can visually see, but somehow you can sense it. At some point their reactions might get fast enough to respond in real time to those infractions. Maybe that will be enough, maybe it won't. Some situations happen too fast to react to; you need to predict them.

geniferteal

If you take into consideration that it took nature millions of years to evolve the current motion, recognition and reaction algorithms, a few decades isn't really that long to implement them in silicon. Luckily in this case, progress is not just based on survival of the fittest. Great video and explanations!

HuygensOptics

9:10. And that’s why we all have to “identify the trucks in this picture” to prove we aren’t a robot on websites. We are collectively training the AI!

David-lrvi

"If you put 'the truth about' in a video title you will get twice the views than if you hadn't"
Mark Twain

free_spirit

The main thing that annoys me about this idea is that doing it with rail would be infinitely easier; but at least in America, the car industry probably would not allow it.

petrus

Car-centric suburbs are the modern dystopia of America. I first saw such infrastructure in the suburbs of Rome. There is only asphalt, cars, trash and desert. Walking to the shopping mall means 2 km alongside the highway with cars speeding past you. There are no sidewalks.
Infrastructure that forces people to use cars is awful. Driverless cars solve nothing. We need better public transport and bike infrastructure, to eliminate cars altogether.

ProjectPhysX

10:05 Awesome, can't wait for the new captchas to identify weird traffic cones as well as little animals crossing the road. And then the next generation will probably be to identify things in a snow scene versus daylight, or rain vs. fog, etc.

TheTonyMcD

I work in advanced driver assistance systems development for an OEM, and this video is spot on. You summarized the technologies involved, the limitations, and the future of autonomy extremely well.

Level 3 autonomous driving is much closer than many people may realize (although not there yet, i.e. Tesla's "FSD"), and I for one am looking forward to it becoming mainstream. Levels 4 and 5 are making strides too, but they will be limited to very specific environments for some time.

patrickkennedy

It's all vaporware and a grift. WAKE UP PEOPLE

chromebomb

I remember that Ted talk where a Google engineer gave an example of a difficult real-world situation for a computer to identify, which was a woman on her scooter turning circles in the street chasing down her pet ducks (which had escaped).

mkst

Autonomous driving has all of the trust I can possibly give it. I would ride with it every time, in all conditions, but on a train, not in a metal box in a world full of unimaginable, unique dangers that are undoubtedly impossible to document or report on.

poprawa

I think it's pretty bizarre that driving is now considered one of the hardest human tasks to automate. We're going to see artists and programmers automated long before truck drivers are replaced, which is ironic because just a couple of years ago the reverse was still the expectation.

joey

You know what would be a cheaper solution to this? More trains and buses.

MyawesumMe

When I was 16, I wanted a '66 GTO with tri-power, a 4 speed, & positraction.
A self driving car would be as much fun as riding in the back seat, with mom driving.
No thanks.

mickmccrory

It's insane how much money we put into this research. If we just invested in good public transit, most of the traffic issues would be solved. That being said, I think the immense research into self-driving cars will solve a lot of other computer vision problems as well.

kibitz

I recently became visually impaired, and among the many things I have lost is the ability to drive. I often fantasize about the possibility of owning an autonomously driven vehicle that would bring that part of my independence back into my life. Who knows, within 100 years such a dream will probably be a reality, a bit too late for me though, so I'll just continue to dream.

MrBendybruce

I see one of the biggest challenges as liability when there are accidents. The "driver is still responsible" idea isn't going to fly when you're literally creating an environment that is meant to let people be distracted. Otherwise, what's the point of having it?

christerry

What is the obsession with not driving your car? Is it really that difficult?

PaulRubino

10:40 You know that it is not like 1 or 0? The object detection system can include a probability for an object being the one it thinks it is. Also: whenever interventions are necessary, the collected imagery can be analysed by humans to figure out what went wrong and correct the situation. By doing that, positive behaviour is reinforced and negative behaviour is punished, meaning that the rate of improvement is a function of interventions analysed and corrected, which implies that the quality of the software is a function of fleet size (data collected) and time.

zeg