2- Fine Tuning DistilBERT for NER Tagging using HuggingFace | NLP Hugging Face Project Tutorial

🔥Hugging Face Tutorials for NLP Projects Playlist | Watch All Videos Here 🔥

✍️🏆🏅🎁🎊🎉✌️👌⭐⭐⭐⭐⭐
ENROLL in My Highest Rated Udemy Courses
to 🔑 Crack Data Science Interviews and Jobs

🏅🎁 Python for Machine Learning: A Step-by-Step Guide | Udemy

🎁🎊 Deep Learning for Beginners with Python

📚 📗 Natural Language Processing ML Model Deployment at AWS

📊 📈 Data Visualization in Python Masterclass: Beginners to Pro

📘 📙 Natural Language Processing (NLP) in Python for Beginners

🎉✌️ Advanced Natural Language and Image Processing Projects | Udemy

📈 📘 2021 Python for Linear Regression in Machine Learning

📙📊 2021 R 4.0 Programming for Data Science || Beginners to Pro

✍️🏆 Introduction to Spacy 3 for Natural Language Processing

In this tutorial, we will learn how to fine-tune DistilBERT for named entity recognition (NER) tagging. NER tagging is the task of identifying named entities in a text, such as people, organizations, and locations. DistilBERT is a small, fast, and efficient Transformer model that can be fine-tuned for a variety of NLP tasks, including NER tagging.
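
To make the task concrete, here is a minimal inference sketch using the Transformers pipeline before any fine-tuning; the checkpoint name is an assumption (any public token-classification model would do), not the model trained in this video:

# Minimal sketch: off-the-shelf NER inference with the Transformers pipeline.
# "dslim/bert-base-NER" is an assumed public checkpoint, not this tutorial's model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",   # merge sub-word pieces into whole entity spans
)

print(ner("Hugging Face is based in New York City."))
# Expected: an ORG span for "Hugging Face" and a LOC span for "New York City"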

The tutorial will cover the following steps:
- Preparing the data for training
- Loading the DistilBERT model
- Fine-tuning the DistilBERT model
- Evaluating the fine-tuned model

The tutorial will use the CoNLLPP dataset for NER tagging. This dataset contains a set of sentences with their corresponding named entities. The tutorial will also use the HuggingFace Transformers library to fine-tune the DistilBERT model.
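
For orientation, here is a minimal sketch (not the exact notebook code) of loading CoNLLPP with the datasets library and aligning the word-level NER labels with DistilBERT's sub-word tokens; it uses the simple convention of labelling only each word's first sub-token and masking the rest with -100:

# Sketch: load CoNLLPP and align word-level NER labels with sub-word tokens.
# Assumes the datasets and transformers packages are installed.
from datasets import load_dataset
from transformers import AutoTokenizer

raw = load_dataset("conllpp")                     # train / validation / test splits
label_names = raw["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize_and_align(batch):
    # Tokenize pre-split words; copy each word's label to its first sub-token
    # and set the rest (and the special tokens) to -100 so the loss ignores them.
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, word_labels in enumerate(batch["ner_tags"]):
        previous = None
        label_ids = []
        for word_id in enc.word_ids(batch_index=i):
            if word_id is None or word_id == previous:
                label_ids.append(-100)
            else:
                label_ids.append(word_labels[word_id])
            previous = word_id
        all_labels.append(label_ids)
    enc["labels"] = all_labels
    return enc

tokenized = raw.map(tokenize_and_align, batched=True,
                    remove_columns=raw["train"].column_names)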

This tutorial is for beginners who want to learn how to fine-tune DistilBERT for NER tagging. No prior knowledge of NLP or Transformers is required.

Here are some of the things you will learn in this tutorial:
- How to prepare data for NER tagging
- How to load a pre-trained Transformer model
- How to fine-tune a Transformer model
- How to evaluate a fine-tuned Transformer model
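
Putting those steps together, here is a condensed sketch of the fine-tuning and evaluation loop with the Trainer API, continuing from the tokenized dataset above; the hyperparameters and the "distilbert-ner" output directory are illustrative assumptions, not the video's exact settings:

# Sketch: fine-tune DistilBERT for token classification and evaluate with seqeval.
# Continues from `tokenized`, `tokenizer`, and `label_names` in the earlier sketch;
# requires the seqeval package for the metric.
import numpy as np
import evaluate
from transformers import (AutoModelForTokenClassification,
                          DataCollatorForTokenClassification,
                          Trainer, TrainingArguments)

id2label = dict(enumerate(label_names))
label2id = {name: i for i, name in id2label.items()}

model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(label_names),
    id2label=id2label, label2id=label2id)

metric = evaluate.load("seqeval")

def compute_metrics(eval_preds):
    # logits are the raw per-token class scores; argmax picks the predicted label id.
    logits, labels = eval_preds
    preds = np.argmax(logits, axis=-1)
    true_labels = [[label_names[l] for l in row if l != -100] for row in labels]
    true_preds = [[label_names[p] for p, l in zip(p_row, l_row) if l != -100]
                  for p_row, l_row in zip(preds, labels)]
    scores = metric.compute(predictions=true_preds, references=true_labels)
    return {"precision": scores["overall_precision"],
            "recall": scores["overall_recall"],
            "f1": scores["overall_f1"]}

args = TrainingArguments(
    output_dir="distilbert-ner",            # assumed output directory
    learning_rate=2e-5,                     # illustrative hyperparameters
    num_train_epochs=3,
    per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["validation"],
                  data_collator=DataCollatorForTokenClassification(tokenizer),
                  compute_metrics=compute_metrics)

trainer.train()
print(trainer.evaluate())                   # precision / recall / F1 on the validation split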

I hope you find this tutorial helpful!

🔊 Watch till the end for a detailed description

💯 Read Full Blog with Code
💬 Leave your comments and doubts in the comment section
📌 Save this channel and video to watch later
👍 Like this video to show your support and love ❤️

~~~~~~~~
🆓 Watch My Top Free Data Science Videos
👉🏻 Python for Data Scientist
👉🏻 Machine Learning for Beginners
👉🏻 Feature Selection in Machine Learning
👉🏻 Text Preprocessing and Mining for NLP
👉🏻 Natural Language Processing (NLP)
👉🏻 Deep Learning with TensorFlow 2.0
👉🏻 COVID 19 Data Analysis and Visualization
👉🏻 Machine Learning Model Deployment Using
👉🏻 Make Your Own Automated Email Marketing

***********
🤝 BE MY FRIEND
Comments

It was a great and helpful video. Thank you for your help, bro, and keep going.
Once again, thank you bro ❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤❤

doungheanime

This video is the best one I have ever seen for NLP. I have never seen anything like it; you are the best. Everything is explained carefully and simply so it's easy to understand. I will rely on this for my PhD research. Really, thanks from the bottom of my heart ♥. But if I have some questions, how can I keep in touch with you? Thanks in advance.

KhaliDALKhafaji

@KGP Talkie, what is the purpose of checking whether the label is odd in the align_labels_with_tokens() function? If possible, could you give an example? Thanks.

sanjaydokula

How can I do this with CSV data that I annotated myself?

Abishek_B

What interests you in making all these videos? What makes you so motivated? When I study, there is no purpose.

akashchandra

Thank you so much for the tutorial!!! I hope you can do a similar tutorial using an open-source LLM like LLaMA 2.

limjuroy

Hello, I am new to the channel and to machine learning.
I've been watching some of your videos about NLP and reading the blog. Thank you so much; it is so complete and organized.

So, about this video:
I know that we can use the spaCy pipeline (ner) for a NER task, and we can also import the pipeline (ner) from transformers to do it; if we don't specify a model, it falls back to a default one in my case.
So why do we need to do all of this fine-tuning for NER tagging? Could you explain a little better what makes the difference?

Thank you.

Anick-xinw

Hi, I'm trying to annotate a dataset for BERT NER. Is there a specific tool I can use?

gamingisnotacrime

Thanks a lot. Is there a playlist? Where's the part 1 link?

amortalbeing

Thank you for the tutorial. It helped me a lot, but I still have a few questions after watching the video multiple times to get a better understanding.

1st question:
Can I add more entities, other than the predefined ones like "LOC", "MISC", "ORG", and "PER", in my specific use case? I would like the model to be able to recognize a certain entity in my text data.

2nd question:
I'm still unclear about the purpose and functionality of the "logits" variable in the compute_metrics(eval_preds) function. Could you please provide more clarification?

Lim-rztf

What is the difference between building a model using Hugging Face and using a spaCy transformer?

shankar

Please make an end-to-end NER project with a Streamlit API.

mdriad

Hey,
Great video btw.

I was just wondering if it is possible to train with custom data that is not IOB formatted. I have my own dataset I want to train on which unfortunately does not have IOB annotations; however, it does have basic annotations like brand, color, and gender. I'm trying to work on an NER project for product listings.

joejay

Thompson Ronald Hernandez Carol Brown John

GabrielJulia-uv

Hahahahaha😂 “if it’s not correct that means it’s wrong.” Um….yeah that’s literally the only way reality works bro. When something isn’t correct, that’s the definition of “incorrect” 😂

AtomicPixels