Fine-Tuning BERT with HuggingFace and PyTorch Lightning for Multilabel Text Classification | Dataset


Learn how to use BERT to classify toxic comments from raw text. You'll learn how to prepare a custom dataset and tokenize the text using the Transformers library by HuggingFace. We'll also have a look at PyTorch Lightning and create a data module for our dataset.
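For a concrete starting point, here is a minimal sketch of the two steps described above: tokenizing with the HuggingFace tokenizer and wrapping the data in a PyTorch Lightning data module. The column names, label set, and hyperparameters are illustrative assumptions (the Jigsaw toxic-comment layout), not necessarily the exact values used in the video.

    # A minimal sketch, assuming a DataFrame with a "comment_text" column and
    # one 0/1 column per label (the Jigsaw toxic-comment layout).
    import pandas as pd
    import torch
    import pytorch_lightning as pl
    from torch.utils.data import Dataset, DataLoader
    from transformers import BertTokenizer

    LABEL_COLUMNS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

    class ToxicCommentsDataset(Dataset):
        def __init__(self, data: pd.DataFrame, tokenizer, max_token_len: int = 128):
            self.data = data
            self.tokenizer = tokenizer
            self.max_token_len = max_token_len

        def __len__(self):
            return len(self.data)

        def __getitem__(self, index):
            row = self.data.iloc[index]
            encoding = self.tokenizer(
                row["comment_text"],
                max_length=self.max_token_len,
                padding="max_length",
                truncation=True,
                return_tensors="pt",
            )
            return dict(
                # flatten() drops the batch dim of 1 the tokenizer adds;
                # the DataLoader will add its own batch dim when stacking.
                input_ids=encoding["input_ids"].flatten(),
                attention_mask=encoding["attention_mask"].flatten(),
                labels=torch.tensor(row[LABEL_COLUMNS].to_numpy(dtype="float32")),
            )

    class ToxicCommentDataModule(pl.LightningDataModule):
        def __init__(self, train_df, val_df, tokenizer, batch_size=8, max_token_len=128):
            super().__init__()
            self.train_df, self.val_df = train_df, val_df
            self.tokenizer = tokenizer
            self.batch_size = batch_size
            self.max_token_len = max_token_len

        def setup(self, stage=None):
            self.train_dataset = ToxicCommentsDataset(self.train_df, self.tokenizer, self.max_token_len)
            self.val_dataset = ToxicCommentsDataset(self.val_df, self.tokenizer, self.max_token_len)

        def train_dataloader(self):
            return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=True)

        def val_dataloader(self):
            return DataLoader(self.val_dataset, batch_size=self.batch_size)

    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    # data_module = ToxicCommentDataModule(train_df, val_df, tokenizer)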

#PyTorch #BERT #PyTorchLightning #NLP #Python #MachineLearning #DeepLearning
Comments

In the next part, we'll fine-tune BERT to classify toxic comments and show you a couple of fine-tuning tricks/hacks along the way.
Thanks for watching!

venelin_valkov

Thank you, man, you helped me get through my master's thesis.

tehJimmyy

After 5 months of being AWOL, you came back with a bang... Great video, thanks!

MMphego

I was having such a hard time plugging HuggingFace into Lightning; it's much clearer now how they fit together. Thanks!

ayoutubechannel

Do more videos, and more regularly, brother 🙌🏻 Hats off for your efforts 😁

VjayVenugopal

Thanks for the tutorial!! This is so helpful! Would you share a link to the code (the Colab page)? Thanks!!!

ellenzou

Thanks for the great tutorial. Where can I find the notebook for this video? It looks like it's missing from the GitHub repo. Thanks!

fedyaskitsko

My pretrained BERT model returns an output of shape (batch_size, max_token_length, num_classes), but the target is of size (batch_size,), so I'm not able to calculate the loss.

kachrooabhishek
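A likely cause, sketched below under the assumption that the classifier head was applied to every token's hidden state: pool the sequence to one vector per example (e.g. the [CLS] token) so the logits come out as (batch_size, num_classes), and make the targets multi-hot floats of the same shape for BCEWithLogitsLoss. The label count and model name here are illustrative.

    # Sketch of a fix, assuming the head was applied per token by mistake.
    import torch
    import torch.nn as nn
    from transformers import BertModel

    bert = BertModel.from_pretrained("bert-base-cased")
    classifier = nn.Linear(bert.config.hidden_size, 6)  # 6 labels, illustrative
    criterion = nn.BCEWithLogitsLoss()

    def compute_loss(input_ids, attention_mask, labels):
        output = bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_vector = output.last_hidden_state[:, 0, :]  # (batch_size, hidden_size)
        logits = classifier(cls_vector)                 # (batch_size, num_classes)
        # labels must be floats of shape (batch_size, num_classes) for multilabel BCE
        return criterion(logits, labels)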

Was this supposed to end so suddenly like that? When or where will we get the rest of it? Or is there a 'rest of it'?!

malikrumi

Nice explanation ❤️ I love the way you explain things and go hands-on. It would be very helpful if you uploaded a Named Entity Recognition with BERT tutorial.

dv

Great video! Looking forward to BERT4Rec fine-tuning.

Deepakkumar-sntr

Amazing content and tutorials, bro. Thank you so much. Could you please organise all your videos into proper playlists?

teetanrobotics

It's awesome. Where is the second part of this?

yashumahajan

Some more questions, if you have the time ^^ Is unsqueeze basically just the opposite of flatten? If so, why did we flatten the data in the first place?

mariere
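Not quite: unsqueeze is the inverse of squeeze (it adds a size-1 dimension at a given position), while flatten collapses dimensions. The flatten in the tutorial removes the batch dimension of 1 that return_tensors="pt" adds, because the DataLoader does its own batching when it stacks examples. A small shape demo:

    import torch

    encoding = torch.zeros(1, 128)      # what the tokenizer returns: (1, max_token_len)
    item = encoding.flatten()           # (128,) - one example, no batch dim
    batch = torch.stack([item, item])   # DataLoader-style stacking: (2, 128)
    single = item.unsqueeze(0)          # (1, 128) again, e.g. for one-off inference
    print(encoding.shape, item.shape, batch.shape, single.shape)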

Hi Venelin, thank you for the great information, I have a question. How did you prepare the classes to be in numerical shape? I have about 2K classes and they are text labels. Can you give me a hint?

saharyarmohamadi
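One common approach is to turn each example's text labels into a multi-hot vector. If each row carries a list of label strings, scikit-learn's MultiLabelBinarizer does this in two lines; the label values below are made up for illustration. With ~2K classes the vectors are long but sparse, and BCEWithLogitsLoss still works with one output unit per class.

    from sklearn.preprocessing import MultiLabelBinarizer

    label_lists = [["sports", "news"], ["politics"], ["sports"]]  # dummy data
    mlb = MultiLabelBinarizer()
    y = mlb.fit_transform(label_lists)  # (n_samples, n_classes) 0/1 matrix
    print(mlb.classes_)                 # the learned class order
    print(y)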

This video assumes deep familiarity with PyTorch. Otherwise you're just flying blind.

xv

How do I train a HuggingFace model on my own dataset? How can I start? I don't know what the structure of the dataset should be. Help, please.
Also, how do I store voice recordings and link each one with its text? How do I organize that?
I'm looking for anyone on this planet who can help me.
Should I look for the answer on Mars?

testingemailstestingemails
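For the text-classification half of this question, the usual minimal structure is a table with one text column plus one 0/1 column per label; a hypothetical example (file and column names are made up). The voice half is a speech task and needs a different pipeline than this text tutorial covers.

    import pandas as pd

    df = pd.DataFrame({
        "comment_text": ["you are great", "awful comment here"],
        "toxic":  [0, 1],
        "insult": [0, 1],
    })
    df.to_csv("my_dataset.csv", index=False)  # hypothetical path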

Your videos are really helpful! Would your example work just as well with BertForSequenceClassification, or is there a specific reason why you use the 'generic' BERT model?

mariere
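For what it's worth, BertForSequenceClassification does support this setup: with problem_type="multi_label_classification" it applies BCEWithLogitsLoss for you, so the generic BertModel in the video is mainly a way to make the custom head and loss explicit. A sketch with illustrative sizes:

    import torch
    from transformers import BertForSequenceClassification

    model = BertForSequenceClassification.from_pretrained(
        "bert-base-cased",
        num_labels=6,                                   # illustrative
        problem_type="multi_label_classification",
    )
    input_ids = torch.randint(0, 1000, (2, 16))         # dummy batch
    labels = torch.zeros(2, 6)                          # multi-hot float targets
    out = model(input_ids=input_ids, labels=labels)
    print(out.loss, out.logits.shape)                   # scalar loss, (2, 6) logits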

Hi Venelin!! Great work :)
May I know whether continuous retraining is possible with BERT?
I.e., I have a fine-tuned model. Can I further tune it on an additional dataset without merging the new dataset with the old one?

kvp
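Yes, in the sense that you can load an already fine-tuned checkpoint and keep training on the new data alone. The caveat is that the model may drift away from the old data (catastrophic forgetting), so keep an evaluation set from the original dataset to monitor it. A sketch with hypothetical paths:

    from transformers import BertForSequenceClassification

    # Load the weights you already fine-tuned (hypothetical path), then run
    # your usual training loop / trainer.fit(...) on the new dataset only.
    model = BertForSequenceClassification.from_pretrained("path/to/my-finetuned-bert")
    # ... train on the new data, then:
    model.save_pretrained("path/to/my-finetuned-bert-v2")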

Hahahaha, that toxic comment made my day lol

r_pydatascience