The Secret to 90%+ Accuracy in Text Classification

In this video, we provide a beginner's guide to fine-tuning BERT, one of the most powerful natural language processing (NLP) models available. BERT, which stands for "Bidirectional Encoder Representations from Transformers," has been pre-trained on a massive amount of text data and can be fine-tuned for a variety of NLP tasks, such as text classification, machine translation, and named entity recognition. We'll walk through the process of fine-tuning BERT using the Hugging Face library and provide examples and code snippets to help you get started with your own fine-tuning projects. So, if you are a beginner in the field of NLP and want to learn more about BERT and how to use it in your own projects, this video is for you!
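
As a quick preview, here is a minimal sketch of the first step, loading BERT and its tokenizer from Hugging Face (a sketch assuming TensorFlow and the bert-base-uncased checkpoint; the exact setup in the video may differ):

from transformers import BertTokenizer, TFBertModel

# Download the pre-trained tokenizer and weights from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
bert = TFBertModel.from_pretrained('bert-base-uncased')

# Turn raw sentences into input_ids / attention_mask tensors
inputs = tokenizer(['Hello world', 'Hi how are you'],
                   padding=True, truncation=True, return_tensors='tf')
outputs = bert(inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, 768): one vector per token
print(outputs.pooler_output.shape)      # (batch, 768): one pooled vector per sentence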

Source Code:

🔗 Social Media 🔗

Timestamps:
00:00 Introduction
00:33 Loading BERT from HuggingFace 🤗
01:52 Loading Tokenizer from HuggingFace 🤗
03:42 Output of BERT 👀 (Understanding Encoder Representations)
05:50 Loading the Dataset
07:22 Building the Model (BERT for Classification)
09:18 Fine-Tuning/Training BERT
09:47 Evaluating BERT (92% Yeyy!!)
10:05 So what are you fine-tuning next? 👀
10:22 Outro. See you soon! 👋
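
For anyone following along with the timestamps, a rough sketch of the model-building, fine-tuning, and evaluation steps could look like this (a sketch assuming a binary classification task and the Keras functional API, not the video's exact code):

import tensorflow as tf
from transformers import TFBertModel

bert = TFBertModel.from_pretrained('bert-base-uncased')

# BERT's pooled [CLS] vector feeds a small classification head
input_ids = tf.keras.layers.Input(shape=(None,), dtype=tf.int32, name='input_ids')
attention_mask = tf.keras.layers.Input(shape=(None,), dtype=tf.int32, name='attention_mask')
pooled = bert(input_ids, attention_mask=attention_mask).pooler_output
probs = tf.keras.layers.Dense(1, activation='sigmoid')(pooled)
model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=probs)

# A small learning rate is typical when fine-tuning a pre-trained model
model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
              loss='binary_crossentropy', metrics=['accuracy'])
# model.fit([train_ids, train_mask], train_labels, epochs=3)   # fine-tuning
# model.evaluate([test_ids, test_mask], test_labels)           # ~92% in the video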

Tags:
#BERT #FineTuning #NLP #MachineLearning #BeginnersGuide #DeepLearning #NaturalLanguageProcessing #Tutorial #howto #BERTModel #TextClassification #LanguageModeling #TransferLearning #NeuralNetworks

Keywords:
BERT, Fine-Tuning, NLP, Machine Learning, Beginners Guide, Deep Learning, Natural Language Processing, BERT Model, Text Classification, Pre-trained Models, Language Modeling, Transfer Learning, Neural Networks, Fine-Tune BERT, Fine-Tuning BERT

Thank you,
Pritish Mishra
Comments

How to actually decode the output back into the class labels is something this video did not explain :\

silasdhayanand
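
In case it helps other readers, a minimal sketch of one way to decode predictions (assuming a softmax head over num_classes outputs and a hypothetical class_names list; for a single sigmoid output you would instead threshold at 0.5):

import tensorflow as tf

class_names = ['negative', 'positive']  # hypothetical: replace with your labels

probs = model.predict([test_ids, test_mask])      # (batch, num_classes)
pred_ids = tf.argmax(probs, axis=-1).numpy()      # index of the highest probability
pred_labels = [class_names[i] for i in pred_ids]
print(pred_labels)                                # e.g. ['positive', 'negative', ...]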

Really amazing, Pritish. This video is not like those boring lecture videos. The animations are amazing, and your explanation is clear, with good pronunciation. Keep it up. I hope you continue posting these types of videos. ❤❤❤❤

viswanathhemanth

For the last 3-4 hours I have been trying to find proper step-by-step material on how to fine-tune BERT on your own dataset; finally found it. Thanks for making this video.

ankitnsfw

Amazing work, Pritish. You definitely deserve more views. Hopefully you will get them soon ❤.

villurignanesh

Very nice explanation and nice animation 🔥🔥🔥🔥
Keep it up 👍🏻

dhiraj

I'm not clear on what the "pooling" in the video is.

wryltxw
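
For other readers wondering the same: "pooling" here means collapsing BERT's per-token vectors into one fixed-size sentence vector. The pooler_output is the [CLS] token's final hidden state passed through a dense layer with tanh; mean-pooling the last_hidden_state is a common alternative. A minimal sketch (assuming the inputs/bert objects from the tokenizer step):

import tensorflow as tf

outputs = bert(inputs)
# Option 1: BERT's built-in pooler ([CLS] hidden state -> dense -> tanh)
sentence_vec = outputs.pooler_output  # (batch, 768)
# Option 2: average all token vectors, ignoring padded positions
mask = tf.cast(inputs['attention_mask'][..., tf.newaxis], tf.float32)
mean_vec = tf.reduce_sum(outputs.last_hidden_state * mask, axis=1) / tf.reduce_sum(mask, axis=1)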

I'd like to ask: a paper I am trying to apply to another dataset said they had optimal performance at epochs=50, but at epochs=3 it's already getting decent performance. May I ask why this is? Also, do you run BERT in inference mode?

markinius
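
On the inference-mode part of this question: in Keras, the dropout layers inside BERT are active only during training, so prediction should pass training=False (model.predict does this automatically). A one-line sketch:

outputs = bert(inputs, training=False)  # disables dropout for deterministic inference

As for epochs: the original BERT paper fine-tunes for only 2-4 epochs, since the pre-trained weights already encode most of what the model needs, so quick convergence at epochs=3 is expected.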

So we don't need to freeze any layers of the pretrained model? I have a problem, though it's with ViT: my image shape is 24x24 but the pretrained model's input shape is 224x224. Is it possible to fix that so I can fine-tune it on my dataset?

diasposangare
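
On the image-size part of this question (a sketch only, and note the video covers BERT, not ViT): a common workaround is to upsample inputs to the resolution the pretrained model expects, and freezing the backbone is optional rather than required:

import tensorflow as tf

# Hypothetical: images has shape (batch, 24, 24, channels)
resized = tf.image.resize(images, (224, 224))  # upsample to ViT's expected input size

# Freezing is a choice, not a requirement; full fine-tuning often works well
backbone.trainable = False  # 'backbone' stands in for your pretrained model object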

How do I fine-tune the BERT model on a CSV dataset?

honestklee
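
A minimal sketch of one way to do that (assuming a hypothetical data.csv with 'text' and 'label' columns, plus the tokenizer and model from the video):

import pandas as pd
import tensorflow as tf

df = pd.read_csv('data.csv')  # hypothetical file and column names

encodings = tokenizer(df['text'].tolist(), padding=True,
                      truncation=True, return_tensors='tf')
labels = tf.convert_to_tensor(df['label'].values)

# model.fit([encodings['input_ids'], encodings['attention_mask']], labels, epochs=3)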

Your explanation is the best compared to any other... awesome work, Pritish... keep it up.

zeelthumar

I have some text files that contain elements and their values, but the pattern in which the text is laid out differs from file to file. Is it possible to train BERT on these files so that when I ask it to extract only the element names and their corresponding values, it will do so regardless of the text pattern?

AbhishekBade
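
That task is usually framed as token classification (NER-style): each token gets a tag such as ELEMENT, VALUE, or O, and you would need to annotate some files for training. A minimal sketch of the model side only (assuming TensorFlow; the tag set is hypothetical):

from transformers import TFBertForTokenClassification

# 3 hypothetical tags: O (other), ELEMENT, VALUE
ner_model = TFBertForTokenClassification.from_pretrained('bert-base-uncased', num_labels=3)
logits = ner_model(inputs).logits  # (batch, seq_len, 3): one tag prediction per token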

Thanks a lot for this video. Could you write the code for how to do inference through pooler_output?

soumyaranjan
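
A minimal sketch of what that inference could look like (assuming the tokenizer/bert objects from the video; 'classifier_head' stands in for your trained dense layer and is hypothetical):

new_inputs = tokenizer(['This movie was great!'],
                       padding=True, truncation=True, return_tensors='tf')
pooled = bert(new_inputs, training=False).pooler_output  # (1, 768)
prob = classifier_head(pooled)                           # hypothetical trained Dense layer
print(float(prob[0, 0]) > 0.5)                           # e.g. True -> positive class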

Pritish, will you create a video on simulating a robotic arm that is controlled by a GPT language model and can cook food in simulation?

DonaldTrump-od

Very nice video; I wonder if it is possible to save the classifier for future use.

wen
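
It is; a sketch of two common options (assuming 'model' is the Keras classifier from the video and 'bert'/'tokenizer' are the Hugging Face objects):

import tensorflow as tf

# Option 1: save the whole Keras classifier (architecture + weights)
model.save('bert_classifier')
reloaded = tf.keras.models.load_model('bert_classifier')

# Option 2: save the Hugging Face pieces separately for later reuse
bert.save_pretrained('bert_finetuned')
tokenizer.save_pretrained('bert_finetuned')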

Excellent video. I got one error while running the code.
inputs = tokenizer(['Hello world', 'Hi how are you'], padding=True, truncation=True,
return_tensors='tf')
inputs

For this line I got the following error:
TypeError Traceback (most recent call last)
Cell In[50], line 1
----> 1 inputs = tokenizer(['Hello world', 'Hi how are you'], padding=True, truncation=True,
2 return_tensors='tf')
3 inputs

TypeError: 'BertTokenizer' object is not callable

Can you please help?

saimanideeep
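
For anyone hitting the same error: calling a tokenizer directly, tokenizer(...), was added in transformers v3.0, so older versions raise exactly this TypeError. A sketch of two fixes:

# Fix 1: upgrade the library so tokenizers are callable
#   pip install --upgrade transformers

# Fix 2: on pre-v3 versions, use the older batch API and argument names
inputs = tokenizer.batch_encode_plus(['Hello world', 'Hi how are you'],
                                     max_length=128, pad_to_max_length=True,
                                     return_tensors='tf')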

Can you do a video on how to do natural language inference with BERT? Thanks!

patrickahrend
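
Until such a video exists, the NLI-specific part is mostly sentence-pair encoding; a minimal sketch (assuming the tokenizer from the video and the usual 3 NLI classes: entailment, neutral, contradiction):

premise = 'A man is playing a guitar on stage.'
hypothesis = 'A person is performing music.'

# The pair is encoded together; token_type_ids distinguish the two sentences
pair = tokenizer(premise, hypothesis, padding=True,
                 truncation=True, return_tensors='tf')
# Feed 'pair' to a classification head with 3 softmax outputs instead of 1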

I am very impressed with the way you teach.

maheshbhosale

I recreated this in PyCharm. When I want to use the model (I saved it first), I get this error: TypeError: No common supertype of TensorSpec(shape=(None, None), dtype=tf.int32, name=None) and TensorSpec(shape=(None, 78), dtype=tf.int64). Is there a way to fix this without retraining the model?

Ryan-Pot
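
That error suggests the model was saved with a fixed input signature (sequence length 78, dtype int64) but is now being called with variable-length int32 tensors. A sketch of a workaround that avoids retraining: pad new inputs to the saved length and cast the dtype (78 and int64 come from the error message above; adjust to your model):

import tensorflow as tf

inputs = tokenizer(texts, padding='max_length', truncation=True,
                   max_length=78, return_tensors='tf')
input_ids = tf.cast(inputs['input_ids'], tf.int64)  # match the saved int64 signature
preds = saved_model(input_ids)  # 'saved_model' stands in for your reloaded model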

For BERT text summarization, can we do it in this way?

h