HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning

In this video I show you everything you need to get started with Huggingface and the Transformers library. We build a sentiment analysis pipeline, I show you the Model Hub, and you learn how you can fine-tune your own models.

📓 ML Notebooks available on Patreon:

If you enjoyed this video, please subscribe to the channel:

The Huggingface transformers library is probably the most popular NLP library in Python right now, and can be combined directly with PyTorch or TensorFlow. It provides state-of-the-art Natural Language Processing models and has a very clean API that makes it extremely simple to implement powerful NLP pipelines.
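As a taste of that API, here is a minimal sentiment analysis sketch along the lines of what the video builds (the example sentence is arbitrary; with no model argument, the pipeline downloads a default English sentiment checkpoint on first use):

    # Minimal sentiment analysis with the transformers pipeline API.
    from transformers import pipeline

    # No model specified: a default English sentiment checkpoint is downloaded.
    classifier = pipeline("sentiment-analysis")

    result = classifier("We are very happy to show you the Transformers library.")
    print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]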

Resources:


#Python

Timeline:
00:00 - Introduction
00:43 - Pipeline
09:19 - Model And Tokenizer
15:20 - PyTorch classification
21:55 - Saving And Loading
23:33 - Model Hub
31:30 - Fine Tuning

Comments

The last 6-8 minutes of this video are exactly what I have been trying to hunt down as a tutorial. Thank you!

JoshPeak

I feel like I've hit the jackpot! It took me forever to find such an easy-to-learn video. That was very good! Thank you!

netrahirani

There are so many videos out there that show how to use huggingface's models with a pipeline, making it all seem "easy" to do, which it is. But unlike those videos, this one really shows how we can use the models natively and train them with our own training loops. Instead of portraying things as "easy", you decided to show how to actually get things done, and I absolutely loved that!!
Thanks for the tutorial :D

just_ign
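For readers who want the "native" route this comment describes, a minimal sketch of using the tokenizer and model directly instead of the pipeline (the checkpoint name here is an example, not necessarily the one from the video):

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)

    # Tokenize a small batch and run a forward pass without gradients.
    batch = tokenizer(["I love this!", "This is terrible."],
                      padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**batch)
        predictions = torch.softmax(outputs.logits, dim=1)
        labels = torch.argmax(predictions, dim=1)
    print(labels)  # for this checkpoint: 1 = positive, 0 = negative

From here a custom training loop is ordinary PyTorch: pass labels to the model to get outputs.loss, call backward(), and step an optimizer.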

Almost 3 years since this video, and it is still so relevant today. Thank you, sir.

shubhamgattani

Maaan! I liked how you started the tutorial: well explained and sweet for beginners. Starting from the PyTorch classification part, you probably assumed "enough with the beginners, let's level up 100x lol". Many of the lines of code and arguments you wrote require some googling, so a quick high-level explanation of those could work magic. Nevertheless, thanks for making this video, mate.

aidarfaizrakhmanov

Thanks for the tutorial buddy, it was amazing!

HuevoFriteR

I've seen lots of tutorials... this is the best of all!

CppExpedition

Very nice explanation; many things I was confused about, e.g. tokenizers, got cleared up. Really liked the video and your way of teaching. Expecting more, like fine-tuning BERT on a custom dataset. Please make a video on it.

vijaypalmanit
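While waiting for such a video, here is a hypothetical minimal fine-tuning sketch with the Trainer API; train_dataset is a placeholder for your own tokenized, labeled dataset, and the hyperparameters are arbitrary:

    from transformers import (AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)

    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    training_args = TrainingArguments(output_dir="out",
                                      num_train_epochs=2,
                                      per_device_train_batch_size=16)

    # Placeholder: train_dataset must be your own tokenized, labeled dataset.
    trainer = Trainer(model=model, args=training_args,
                      train_dataset=train_dataset)
    trainer.train()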

OMG! Thanks for this video! I don't have to deal with a French accent anymore!

haralc

Loving this ❤! Please do a series on this 🥳

prettiestthing

Holy shit, this just saved me and my thesis from a week of pain. Thank you very much!

philipp

Thank you, Patrick... this was a much-awaited course... can you please create a full-length tutorial including deploying a "dashboard app" on Docker?

SanataniAryavrat

Thank you so much for this, subscribed :)

annarocha

Please make a whole series on this :) There is also a very nice framework on top of this called "simple transformers"

robosergTV

I was ready to subscribe to you for a second time :D

WalkAloneLive

This is really powerful and efficient for real-world usage.
I wonder if Kaggle has a rule to ban people from doing this in competitions.

We almost heard Patrick speak German. That was so close!
Thanks for the video!

kinwong

Hello, thank you for the extremely valuable video. I do have one question, however. During the fine-tuning process, in the first case where we use Trainer(): as far as I can tell, the model and the data are not on the GPU by default, and we also do not move them there (as we do in the custom PyTorch training loop). I tried it in a notebook, and when I run the command "next(model.parameters()).is_cuda", where model is the from_pretrained() model, it returns False.

Still, moving the model to the GPU would be the same even in this case (with the trainer), by doing model.to(device). However, when we only have a dataset and we don't create a dataloader, I am not sure how to move it to the GPU. Do you know, perhaps? I would appreciate it a lot!

canernm
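On the question above: the Trainer handles device placement on its own (it moves the model and each batch to the available device), so is_cuda can be False on the raw from_pretrained() model before the Trainer gets involved. In a custom loop the moves are done by hand; a sketch under that assumption (model_name is an arbitrary example checkpoint):

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    model_name = "distilbert-base-uncased"  # arbitrary example checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)  # the manual equivalent of what the Trainer does for you

    # Without a DataLoader, move the tensors of a tokenized batch one by one:
    batch = tokenizer(["some text"], return_tensors="pt")
    batch = {k: v.to(device) for k, v in batch.items()}
    print(next(model.parameters()).is_cuda)  # True on a CUDA machine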

Nice video. It seems that my work from 2015 at IBM Research, which was exactly the same thing presented in this video, has been widely accepted in the Machine Learning community. Cool. 🤗

mairadebayser

I am a simple man! I see Patrick, I like the video!

imdadood

Hi, would you please make a video on text generation and question answering, dissecting how the pipeline does it and then how to fine-tune?

haralc