Semi-supervised Learning explained

In this video, we explain the concept of semi-supervised learning. We also discuss how we can apply semi-supervised learning with a technique called pseudo-labeling.
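The pseudo-labeling workflow described above can be sketched in a few lines. This is a minimal illustration, not code from the video: the synthetic dataset, the scikit-learn `LogisticRegression` model, and the 0.9 confidence threshold are all assumptions chosen for the example.

```python
# Minimal pseudo-labeling sketch (illustrative; dataset, model,
# and the 0.9 confidence threshold are assumptions).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic two-class data; pretend only the first 100 samples are labeled.
X, y = make_classification(n_samples=1000, random_state=0)
X_lab, y_lab = X[:100], y[:100]
X_unlab = X[100:]

# Step 1: train on the small labeled set.
model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# Step 2: predict labels and confidences for the unlabeled set.
proba = model.predict_proba(X_unlab)
pseudo_labels = proba.argmax(axis=1)
confidence = proba.max(axis=1)

# Step 3: keep only confident pseudo-labels (threshold is a design choice).
keep = confidence >= 0.9
X_aug = np.vstack([X_lab, X_unlab[keep]])
y_aug = np.concatenate([y_lab, pseudo_labels[keep]])

# Step 4: retrain on labeled + pseudo-labeled data.
model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
print(f"kept {int(keep.sum())} of {len(X_unlab)} pseudo-labels")
```

Filtering by confidence (step 3) trades coverage for label quality: a higher threshold keeps fewer pseudo-labels but makes it less likely that wrong labels pollute the retraining set.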

🕒🦎 VIDEO SECTIONS 🦎🕒

00:30 Help deeplizard add video timestamps - See example in the description
03:16 Collective Intelligence and the DEEPLIZARD HIVEMIND

💥🦎 DEEPLIZARD COMMUNITY RESOURCES 🦎💥

👋 Hey, we're Chris and Mandy, the creators of deeplizard!

👉 Check out the website for more learning material:

💻 ENROLL TO GET DOWNLOAD ACCESS TO CODE FILES

🧠 Support collective intelligence, join the deeplizard hivemind:

🧠 Use code DEEPLIZARD at checkout to receive 15% off your first Neurohacker order
👉 Use your receipt from Neurohacker to get a discount on deeplizard courses

👀 CHECK OUT OUR VLOG:

❤️🦎 Special thanks to the following polymaths of the deeplizard hivemind:
Tammy
Mano Prime
Ling Li

🚀 Boost collective intelligence by sharing this video on social media!

👀 Follow deeplizard:

🎓 Deep Learning with deeplizard:

🎓 Other Courses:

🛒 Check out products deeplizard recommends on Amazon:

🎵 deeplizard uses music by Kevin MacLeod

❤️ Please use the knowledge gained from deeplizard content for good, not evil.
Comments

Thank you very much for this video! I learnt a lot from this, and find semi-supervised learning a great way to utilize unlabelled data! Great work!

tymothylim

After pseudo-labeling, do you validate the outcome? Or do you remove data whose prediction was under some threshold? E.g. run the unlabeled data through the model and then only use newly labeled data that exceeds 80% or 90% confidence (for example).

hunttingbuckley

Thanks, I've definitely got a clearer idea now.

hughculling

This video is really helpful, as the explanation is very clear.

VaibhavJawale

If the unlabeled portion vastly outnumbers the labeled portion, it seems like you're taking a risk pushing through the pseudo-labeled content, as it could very well contain a larger number of incorrectly labeled items than the original set. Isn't this going to be counterproductive? Is there a way to avoid this without manually evaluating a significant percentage of the giant data set?

sgartner

Thank you so much! It is the best explanation of machine learning I have ever seen. Please make more machine learning videos, and longer ones.

sourabhkumar

{
  "question": "The unlabeled data gets its labels from ______.",
  "choices": [
    "prediction from the trained model with labeled data",
    "pseudo-labeling",
    "unsupervised learning",
    "prediction from the trained model with unlabeled data"
  ],
  "answer": "prediction from the trained model with labeled data",
  "creator": "Faiveg",
  "creationDate": "2020-04-04T20:58:18.238Z"
}

gideonfaive

The cat in the middle (1:30) is the best xD
Nice series though, thanks a lot!

sergiu-danielkopcsa

Good explanation. Very crisp and clear, with a good example.

rohitjagannath

Nice and simple! Thanks a lot for your effort!

konm

Thank you for these videos! I found your channel today and have already watched a bunch of videos. By the way, you have one of the best explanations I've ever seen <3

isaquemelo

Thank you very much for this clear and helpful explanation.

qusayhamad

Thank you so much for the detailed & helpful explanation!! Besides, the background rocks :D

cemregokalp

Good explanation! Thanks for these videos. You should have a much bigger crowd.

rezaxxx

Thank you for your video; it helped me a lot. My question is: why do we need semi-supervised learning? What if the trained model is not good enough? Then the pseudo-labels for the unlabeled data may not be correct, so the performance of the model later retrained on pseudo-labeled data may not be good enough either.

alexanderyau

I wish I had known about your course 6 years ago. Please do a full course from scratch.

Rainbow-jkok

How can people say that this is a "very well done video"? It does not explain anything; it hardly even makes sense! Who guarantees that the labeled data are enough to correctly fit the NN? If I am able to fit the model, why should I care about labeling more data? What about overfitting? What if the NN mislabels the unlabeled data?

ruggieroseccia

Thanks for the clear explanation! I was wondering, if we were to provide that semi-supervised model with a completely different animal to test on, like a bird, what approaches are there to tell the user that the input is neither cat nor dog? I know you mentioned some models can provide probabilities of being assigned cat or dog, so is it possible that some model could say there's <1% chance of the bird being either a cat or dog?

kevinyang

Very interesting videos. I am just wondering: in pseudo-labeling, why do we retrain the model on the labeled dataset it has already been trained on?
Thanks for the interesting content.

aymanehar

{
  "question": "Semi-supervised learning employs ______ to create labels for the remaining unlabeled data.",
  "choices": [
    "pseudo-labeling",
    "autoencoders",
    "validation sets",
    "optimizers"
  ],
  "answer": "pseudo-labeling",
  "creator": "Chris",
  "creationDate": "2019-12-12T04:16:26.512Z"
}

thespam