Active (Machine) Learning - Computerphile

Machine Learning where you put in a fraction of the effort? What's not to like? - Dr Michel Valstar explains Active & Cooperative Learning.

This video was filmed and edited by Sean Riley.

Comments

This is how you train people. Train them on the basics. Then get them to work closely supervised, then with someone they can ask if they get stuck, and then unsupervised.

gasdive

I hate it when the captcha asks me about edge cases; I never know if I should include the pole of the traffic light.

clem

Wait, so I'm doing work for Google to improve the AI when I select cars and street signs?

plasma

Is it possible to get a list of sources, such as academic papers, with each video for further reading? I feel like it would be pretty easy for the professors to suggest a few papers or resources for introductory purposes.

brenesrob

That's exactly what I did. Cooperative Learning is a kind of self-supervised learning, but there are potential issues with it when confidence is high on falsely labeled data. There is also a problem with overfitting that arises from selecting the high-confidence training data/labels. Great topic!!

fluidice
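The self-training loop described in the comment above — retrain on the model's own high-confidence predictions — can be sketched as follows. The toy 1-D data, the nearest-centroid classifier, and the 0.8 threshold are all illustrative assumptions, not anything from the video:

```python
import numpy as np

# Toy 1-D data: class 0 clusters around -2, class 1 around +2.
x_labeled = np.array([-2.1, -1.9, 2.0, 2.2])
y_labeled = np.array([0, 0, 1, 1])
x_unlabeled = np.array([-1.96, -2.04, 2.19, 2.03, -0.06])  # last point is ambiguous

def centroids(x, y):
    # Class centroid = mean of the labeled points in that class.
    return np.array([x[y == c].mean() for c in (0, 1)])

def predict_with_confidence(c, x):
    d = np.abs(x[:, None] - c[None, :])   # distance to each centroid
    pred = d.argmin(axis=1)
    conf = d.max(axis=1) / d.sum(axis=1)  # in (0.5, 1]; higher = more confident
    return pred, conf

# One round of self-training: adopt only high-confidence pseudo-labels.
c = centroids(x_labeled, y_labeled)
pred, conf = predict_with_confidence(c, x_unlabeled)
keep = conf > 0.8                         # this threshold is the knob that controls the risk
x_train = np.concatenate([x_labeled, x_unlabeled[keep]])
y_train = np.concatenate([y_labeled, pred[keep]])
```

The ambiguous point near 0 falls below the threshold and is excluded; a confidently wrong prediction, however, would sail through, which is exactly the failure mode the comment warns about.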

I used to take Michel's Security lectures back in 2013. A very nice guy.

lord_nn

I have this voice recognition software that is supposed to learn as you speak and become more efficient as you use it. It's called Dragon Natural Speaking. At first I could not tell any improvement, but now that it's been almost a year, it has really fine-tuned itself to my voice. When someone else uses it, it goes berserk until it learns the new voice. Very cool.

valuedhumanoid

Never learned this at the university. Thanks!

kebakent

Wish we could get sources for these videos.

sphereron

I can bet 5 frikandelbroodjes this guy is Dutch

spogs

Hey Sean,
Please consider doing a video about the intersection of Control Theory and its practical implementations in real-world computing.

Yesterday I started to research why PID controllers are not used in power supplies. I came across a verbose explanation of this, during which "Type 2" and "Type 3" controllers were mentioned in passing. This sent me down the rabbit hole of the control theory wiki, which was a dead end with too much maths for me to gain an abstract, overview-type understanding.
I just went through all 5 years of your videos looking for content on the subject (I stacked my watch later list in the process) but didn't find anything. I've seen a lot of info about PID controllers, but I'd really like to understand what other types of controllers are out there in practice in the computing world.
...anyways...
Thanks for the upload.
-Jake

UpcycleElectronics

That’s such an intuitive and amazing idea

brecoldyls

This would miss all the cases that are wrong but the machine is pretty confident about.

zzfkbcu

Have to say this is a very under-researched area of ML/AI. The problem is hard enough for classification tasks, and gets even harder with semantic segmentation, where every pixel has an associated probability. Hopefully we see some improvement here over the coming years.

prithviprakash

Terrific Video. Loved the explanation. Solved a lot of my doubts regarding active learning.

dewangsingh

My problem with active learning is:
what if the data the machine is confident about is wrong?
For example: you're trying to train a model to predict where faces are in a photo. You train it on 10 percent of the data you have. Then it starts predicting 5 out of 10 faces confidently, but out of those 10 faces, 2 are not actually faces. However, the machine is pretty sure they are faces. With the method suggested, you do not check this probability; you just check whether it's confident or not on new data. But what about the accuracy? What if it is confident about something that isn't correct?

AYabdall

But what happens when the machine is confident in a wrong answer? Is that just a trade-off that will happen rarely, in exchange for reducing human labeling?

Zauhd

This process basically uses the AI to pick out the cases that are least like its annotated training data thus far, which are what the AI would learn the most from having next.
This gives humans the best bang for the buck: achieving their desired accuracy with the least annotation required.

ASLUHLUHCE
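The selection step described in the comment above is usually implemented as uncertainty sampling. A minimal least-confidence version, with made-up probabilities and an assumed labeling budget, might look like this:

```python
import numpy as np

# Hypothetical model outputs: P(class = 1) for 8 unlabeled examples.
probs = np.array([0.97, 0.03, 0.55, 0.91, 0.48, 0.88, 0.12, 0.60])

# Least-confidence score: how far the winning class is from certainty.
# For a binary classifier this peaks at p = 0.5.
confidence = np.maximum(probs, 1 - probs)
uncertainty = 1 - confidence

budget = 3  # how many labels the human annotator will provide this round
query_idx = np.argsort(uncertainty)[::-1][:budget]  # most uncertain first
```

The indices in `query_idx` (here the examples with probabilities 0.48, 0.55, and 0.60) are the ones sent to the human; everything the model is already sure about is skipped.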

It seems unnecessarily confusing that when he says "labels", the video shows labels on the kinds of data ("audio", "images"); those aren't the kinds of labels he's talking about at all.

robmckennie

3:10 When you say the low-confidence data is labelled and goes back to retrain the model so that it achieves better accuracy than before:
do we label all such low-confidence data and feed it back to retrain the model?
Because if we do, we will not have a low-confidence test set left to really estimate the improvements.
Why not take 50% of the low-confidence data as the training set, so that we can measure the actual gains on the remaining low-confidence data?

Vivekagrawal
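The 50/50 split proposed in the last comment — label half of the low-confidence pool for retraining and hold the other half out to measure the gain — could be sketched as follows; the pool size and seed here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical indices of the low-confidence examples after one round.
low_conf_idx = np.arange(20)

# Shuffle, then label half for retraining and hold the rest out for evaluation.
shuffled = rng.permutation(low_conf_idx)
half = len(shuffled) // 2
to_label_for_training = shuffled[:half]
held_out_for_eval = shuffled[half:]
```

One caveat: the held-out half is not an unbiased test set for the whole data distribution, since it only contains examples the model already found hard.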