Few Shot Learning - EXPLAINED!


TIMESTAMPS
0:00 Case Study
1:37 Prior Knowledge
3:07 Disadvantage of Traditional Models
4:57 The new Model – explained
6:40 Math
9:04 Code

COMMENTS

Hey this channel is my fav, glad you're back

atifadib

Glad to see you're back. This channel deserves more than 41k subs! Keep it up!

PD-vtfe

8:55 - but if it's the same network used to process images i and j, how exactly do you tune its parameters? Tuning it to make A's embedding closer to B's embedding will also change B's embedding values at the same time.
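For readers puzzling over this: in a siamese setup both branches are literally the same weight matrix, and backpropagation sums the gradient contributions from the two branches. Both embeddings do move, but the shared update still shrinks (or grows) their distance. A minimal NumPy sketch with a hypothetical linear embedding (a toy, not the video's code):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))    # ONE shared weight matrix for both branches
xa = rng.normal(size=8)        # features of image i
xb = rng.normal(size=8)        # features of image j

def dist_sq(W):
    """Squared Euclidean distance between the two shared-weight embeddings."""
    d = W @ xa - W @ xb
    return float(d @ d)

# Gradient w.r.t. the shared W: the two branches' contributions simply add up.
d = W @ xa - W @ xb
grad = 2 * np.outer(d, xa) + 2 * np.outer(-d, xb)   # branch-i term + branch-j term

# One small step for a "similar" pair: both embeddings change, yet the
# distance between them still drops.
W_new = W - 0.001 * grad
print(dist_sq(W_new) < dist_sq(W))   # True
```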

KlimovArtem

So can someone help me: if the network can only tell whether two images are the same or not, what is the actual learning done here? Isn't it just an image-to-vec comparison? Also, how does this help with the original problem (is it Sam or not)? Thanks in advance!

McMurchie

How does zero-shot learning fit into this example?

rochaksaini

Your videos are really informative and entertaining.

IdiotDeveloper

Thank you very much for the explanation. But I still don't understand how I can pretrain the similarity function, how I should organize its inputs, etc. Can you explain a little bit more about it?
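One common answer to this question: the similarity function is pretrained on pairs built from an ordinary labeled dataset, with same-label pairs as positives (target 1) and different-label pairs as negatives (target 0). A sketch of pair construction (`make_pairs` is a hypothetical helper, not from the video):

```python
import itertools
import random

def make_pairs(samples, neg_per_pos=1, seed=0):
    """Build (x1, x2, target) training pairs from a labeled dataset.

    samples: list of (features, label) tuples.
    target = 1 for a same-label pair, 0 for a different-label pair.
    """
    rng = random.Random(seed)
    positives, negatives = [], []
    for (x1, y1), (x2, y2) in itertools.combinations(samples, 2):
        if y1 == y2:
            positives.append((x1, x2, 1))
        else:
            negatives.append((x1, x2, 0))
    rng.shuffle(negatives)
    # keep the pair set balanced: neg_per_pos negatives per positive
    pairs = positives + negatives[: neg_per_pos * len(positives)]
    rng.shuffle(pairs)
    return pairs

# toy dataset: two classes, two examples each
data = [("a1", "A"), ("a2", "A"), ("b1", "B"), ("b2", "B")]
pairs = make_pairs(data)
print(len(pairs), sum(t for _, _, t in pairs))   # 4 2  (4 pairs, 2 positive)
```

The twin network is then trained on these pairs; at test time a query image is compared against one stored example per category.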

luansouzasilva

So during training we still have plenty of data to train the model, including data from the same category, right?

I'm a first-time learner; the name makes it sound like even during training we only have very few data points, or one example per category.

Thanks for the video!

linzhu

Can you provide cosine similarity code using TensorFlow, please?
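For anyone landing here with the same question: cosine similarity is a one-liner. A NumPy version is below; in TensorFlow the closest built-in is `tf.keras.losses.cosine_similarity`, which, since it is meant to be minimized as a loss, returns the negated similarity.

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-8):
    """Cosine of the angle between vectors a and b, in [-1, 1]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

print(cosine_similarity([1, 2, 3], [2, 4, 6]))   # same direction: ~1.0
print(cosine_similarity([1, 0], [0, 1]))         # orthogonal: 0.0
print(cosine_similarity([1, 0], [-1, 0]))        # opposite: ~-1.0
```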

aliboudjema

Hey, can you please send me a link to the original GAN video?

quadhd

Thank you very much for the explanation.

TUNBOSUNADEREMI

Do the final embeddings of the trained network make any human-readable sense? Like hair color, face roundness, etc.?

KlimovArtem

Amazing. If possible cover the coding part as well. Good luck.

helloansuman

Wow, nice video!
I learned new things.
And your secondary voice makes it fun 😅😂🤣 Bye bye!

o__bean__o

Hey, I've really enjoyed all your videos! Very nicely done at an appropriate technical level. I'd say the name of your channel is a bit misleading. It could also be affecting the number of your subscribers... Keep up the good work. Much appreciated!

somerset

What about prior knowledge? You did not go into it.

kryogenica

I learned I either have half a brain or just face blindness.

kenonerboy

Several amateur problems here: 1. All so-called "prior" knowledge must be handled at the preprocessing stage, like face detection, for example. First "cook" the data, then "eat" it. 2. A huge misunderstanding across the entire AI/ML community: your professors didn't teach you that there is a huge difference between an array and a vector. Not every array is a vector! Performing "similarity" functions, or any vector function, on an array is useless, and you will always get an illusion of recognition. There will always be "weird" cases where you will not be able to explain the decision made by your model.

Estereos

Dude, this loss function wouldn't work at all in practice. Think about it before posting the video...

Let's discuss only the positive case, for an actually similar pair:
say distance = 0, so sigmoid(0) = 0.5 and loss = -log(0.5) ≈ 0.69.

And similarity = inverse(distance) = 1/(1 + distance).

That's why folks use contrastive loss.
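This commenter's arithmetic checks out: cross-entropy on a sigmoid of the distance floors at about 0.69 even for a perfect match. Contrastive loss (the Hadsell et al., 2006 form) is the common fix: it is exactly zero for a perfect positive pair and only penalizes negatives inside a margin. A NumPy check (margin = 1.0 is an arbitrary choice):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The criticized setup: even a PERFECT positive pair (distance 0)
# is stuck at loss -log(sigmoid(0)) = -log(0.5), never reaching 0.
print(round(float(-np.log(sigmoid(0.0))), 2))   # 0.69

def contrastive_loss(d, y, margin=1.0):
    """Contrastive loss: y = 1 for a similar pair, 0 for a dissimilar one."""
    return y * d ** 2 + (1 - y) * max(0.0, margin - d) ** 2

print(contrastive_loss(0.0, 1))            # 0.0: perfect positive pair costs nothing
print(contrastive_loss(2.0, 0))            # 0.0: well-separated negative costs nothing
print(round(contrastive_loss(0.2, 0), 2))  # 0.64: negative inside the margin gets pushed apart
```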

Sn-nwzb