MIT Self-Driving Cars (2018)

This is lecture 2 of course 6.S094: Deep Learning for Self-Driving Cars (2018 version). This class is free and open to everyone. It is an introduction to the practice of deep learning through the applied theme of building a self-driving car.

OUTLINE:
0:00 - Intro
9:59 - Different approaches to autonomy
38:36 - Sensors
49:51 - Companies in the self-driving car space
58:18 - Opportunities for deep learning

CONNECT:
- If you enjoyed this video, please subscribe to this channel.

COMMENTS:

Thanks for sharing these lectures, Lex! Truly fascinating and inspiring. It got me into ML. Greetings from Europe

RobinTeuwens

After a long time, I'm reviewing the lessons, and now everything is even more straightforward. Thanks for sharing this.

carvalhoribeiro

Very interesting, high-quality course, as usual. MIT rocks.

GuillermoPussetto

Went on YouTube looking for funny animal vids and now I'm an hour in, learning about driverless cars. The YouTube algorithm looking out for my intellectual needs.

denjua

Can't wait for the remaining lectures.

cachem

Even your lectures are brilliant. What a human. 💖

ladym

It's great to explore the potential of autonomous vehicles through this course offered by MIT.

turhancan

Lex, excellent lectures and content. Thank you for making it publicly available. A quick point on the stats around 33:10 where you show human activity during Autopilot. Very illuminating. One thing to keep in mind is that these humans are likely drivers themselves, so they have habitual, instinctual in-car behavior that has been ingrained. Just like when your passengers lean over to check both sides of cross traffic and obstruct your point of view when you're the one who needs that information. They're acting out of instinct. What will happen as we get newer generations who grow up using Autopilot, with minimal engagement with the vehicle, and do not have these reflexes? I think we would see the game playing and distracted behavior shown earlier in the lecture. I actually do not believe we'll get there; I think we are already finding ways to properly engage with the Autopilot and set the desired behavioral framework for humans in order to help the human-machine interaction. I'm very optimistic but also agree with the timelines from Brooks that you quoted. Keep up the good work!

georgigospodinov

What data was used to make the radar charts? Very useful charts. Thank you.

MrShawnengineer

Cool course. Thanks.

One question about the radars: if there are two vehicles side by side, how can we know that the wave we received is the one we sent, and not one from the neighboring vehicle's radar?

HollyDollyRun

"Is partially automated driving a bad idea? Observations from an on-road study": April 2018 (@ 24:38)
"MIT 6.S094: Self-Driving Cars": Jan 20, 2018

Finally a lecture on time travel :')

ektokseythras

Another question, on the slide at 34:20 where you cite 8000 transfers of control from machine to human, you illustrated the case where the human took over control rather than the machine prompting the transfer. Do you differentiate this in the data? What was the difference between the two types of control transfer?

georgigospodinov

Hi, have you considered installing cameras in people's cars from all around the world and using the data (recordings, people's reactions, etc.) to teach cars how to drive with reinforcement learning?

wellingtonbengtson

@46:50 I'm puzzled as to why ultrasonic sensors cannot detect speed. Ultrasonic waves experience Doppler shift just like radar microwaves. I would have thought it would also be easy to measure that frequency change, probably easier than with microwaves. (A rough check follows below.)

listerdave
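A rough back-of-the-envelope check of the Doppler point above (the 40 kHz ultrasonic frequency, 77 GHz radar frequency, and 10 m/s closing speed used here are illustrative assumptions, not figures from the lecture). The two-way Doppler shift for a wave reflected off a target closing at speed v is approximately

\Delta f \approx \frac{2 v f_0}{c}

where f_0 is the carrier frequency and c is the propagation speed (about 343 m/s for sound in air, 3x10^8 m/s for microwaves). With v = 10 m/s:
- Ultrasonic (f_0 = 40 kHz): \Delta f ≈ 2 * 10 * 40000 / 343 ≈ 2.3 kHz, a relative shift of almost 6%.
- Radar (f_0 = 77 GHz): \Delta f ≈ 2 * 10 * 77x10^9 / (3x10^8) ≈ 5.1 kHz, a relative shift of roughly 7x10^-8.

So the relative frequency change is indeed far larger for ultrasound, which supports the intuition that the shift itself is easy to measure.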

Sashinka, I am really impressed with your research and proposals in AI transportation.

I was particularly interested in your 6-second glance study. I am a psychology graduate, and it interests me how one's emotional reactions would affect facial expression and whether that would affect the desired result at all. Perhaps the system could be pre-programmed with a variety of emotional facial expressions. It would be exceptionally interesting if AI could make the decision to call an ambulance, etc., based on the facial expressions and/or sounds the driver makes.

It's really fascinating how far we have come with technology. It's almost inevitable, as you mentioned, that any great invention or human advancement will be used with bad intent by the wrong person. That means there should be a regulating department that makes the rules and oversees that they are followed. Similar to the DOT, but maybe an AVRD? Autonomous Vehicle Regulation Department.

I completely forgot what I wanted to say... well, Sashenka, you are simply a delight! How smart you are... I fell in love long ago. If you want to, you'll find me.

InnaVitamina

Knowledge is the only thing that grows when you share it with others...

rameshmaddali

Several typos... "audio" should be "Audi", and the A8 was released at the end of 2017. My question is: which CPU was used in the Audi A8?

younggisong

Just saw the news about the Uber accident. I'm wondering, would a thermographic camera be helpful for pedestrian detection at night?

xuefeiwen

How much would the AI's learning of drivers' emotions change with drivers of different skin tones and/or different cultural backgrounds? In other words, what biases are or are not present?

woodywiest

I wonder, did the people involved in the driver behavior monitoring experiment know that they were being monitored?

agcouper