Improve Supervised Models With Unlabeled Data

📈 You can leverage unlabeled data to improve supervised models with Self-Supervised Learning.

First, train your model to learn generic patterns from the unlabeled data.

For instance, with images, you can hide random parts of the image and ask the model to reconstruct the missing pieces.
The same idea works in NLP: mask random tokens in the text and have the model predict them.
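Here is a minimal PyTorch sketch of this pretraining step. It is illustrative only: the toy encoder/decoder, the random placeholder images, and the file name "pretrained_encoder.pt" are assumptions for the example, not a specific recipe from the video.

```python
import random
import torch
import torch.nn as nn

# Toy encoder/decoder for 32x32 RGB images; in practice this would be a larger CNN or a ViT.
class Encoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.net(z)

encoder, decoder = Encoder(), Decoder()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

# Placeholder for a large unlabeled image set (swap in your real data loader).
unlabeled_images = torch.rand(64, 3, 32, 32)

for epoch in range(5):
    # Hide a random 16x16 patch, then ask the model to reconstruct the full image.
    masked = unlabeled_images.clone()
    x0, y0 = random.randint(0, 16), random.randint(0, 16)
    masked[:, :, y0:y0 + 16, x0:x0 + 16] = 0.0

    reconstruction = decoder(encoder(masked))
    loss = nn.functional.mse_loss(reconstruction, unlabeled_images)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Keep the encoder weights for the fine-tuning step.
torch.save(encoder.state_dict(), "pretrained_encoder.pt")
```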

Then, reuse the representations learned in the first step and fine-tune the model on the labeled data.
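And a matching fine-tuning sketch, reusing the encoder weights saved by the pretraining sketch above. The 10-class head, the tiny placeholder labeled set, and the learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Same encoder architecture as in the pretraining sketch above.
class Encoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, dim),
        )

    def forward(self, x):
        return self.net(x)

encoder = Encoder()
# Load the weights learned during self-supervised pretraining.
encoder.load_state_dict(torch.load("pretrained_encoder.pt"))

# Small labeled set (placeholder tensors; swap in your real dataset).
labeled_images = torch.rand(16, 3, 32, 32)
labels = torch.randint(0, 10, (16,))

# Classification head on top of the pretrained representations.
model = nn.Sequential(encoder, nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    logits = model(labeled_images)
    loss = nn.functional.cross_entropy(logits, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A common design choice here is whether to freeze the pretrained encoder and train only the head, or fine-tune everything end to end; the sketch above fine-tunes the whole model.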

You can get better performance than using only the labeled data.

But it works best when you have a large amount of unlabeled data, so the model can learn meaningful patterns.

Hope you liked this video 💚
🔥 Subscribe to Bitswired
👍🏽 Leave us a like/comment to support us

💬HASHTAGS:
#machinelearning #deeplearning #selfsupervisedlearning #data #datascience #ai #optimization #training #pytorch #tensorflow #python #programming #coding

Would you consider self-supervised learning for your next project?

bitswired