Text Summarization by Fine Tuning Transformer Model | NLP | Hugging Face🤗


Check out my other playlists:

This channel focuses on providing content on Data Science, Artificial Intelligence, Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, Python programming, etc., in Bangla and English.

My mission is to provide inspiration, motivation and good-quality education to students for learning and human development, and to help them become experts in Artificial Intelligence, Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, Python programming, and so on.

#dswithbappy aims to change the education system of Bangladesh.
I believe that high-quality education is not just for the privileged few. It is the right of everyone who seeks it. My aim is to bring quality education to every single student. All I need from you is intent, a ray of passion to learn.

Thanks!
#dswithbappy

Connect with me here:

🙏🙏🙏🙏🙏🙏🙏🙏
YOU JUST NEED TO DO
3 THINGS to support my channel
LIKE
SHARE
&
SUBSCRIBE
TO MY YOUTUBE CHANNEL
Comments

Hi bro, try to make videos on pre-trained Hugging Face transformer models for all kinds of tasks.

chandut
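
For a quick taste of pre-trained models across several tasks, a minimal sketch using the transformers pipeline API; the default checkpoints are picked by the library, so this is a demo rather than tuned output:

```python
from transformers import pipeline

# Each pipeline downloads a default pre-trained checkpoint for its task.
summarizer = pipeline("summarization")
classifier = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

text = ("Hugging Face provides thousands of pre-trained transformer models for tasks "
        "such as summarization, text classification and named entity recognition.")

print(summarizer(text, max_length=30, min_length=5)[0]["summary_text"])
print(classifier(text)[0])
print(ner(text))
```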

Sir, can you please help me with abstractive summarization of scientific papers (PubMed or arXiv)? Actually, I want to know how to build a research question for my MSCS thesis. Please help me; your material is so helpful.

iqranaveed
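
On the scientific-paper part of the question, a rough sketch of pulling a PubMed-style summarization corpus from the Hub; the dataset ID and column names are assumptions, so check the Hub listing for the corpus you actually want:

```python
from datasets import load_dataset

# The dataset ID and column names are assumptions -- check the Hub for the corpus you want;
# older script-based datasets may need an extra flag or an older `datasets` release.
papers = load_dataset("ccdv/pubmed-summarization", split="train[:100]")

sample = papers[0]
print(sample["article"][:300])   # paper body: the model input
print(sample["abstract"][:300])  # reference abstract: the target summary
```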

Is it only me, or is the last line of the newly generated summary at 01:00:55 incorrect?

SourovNSU

Sir, can you please tell me how to add this project to a resume in an interesting way so that it adds value?

navdeetsaini

How do I train the model on less data? The model becomes useless and throws error after error because of its large size, and sir, you don't use multiprocessing, so the load on the CPU becomes too high.

adityakumarsharma
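
On training with less data and spreading the CPU work: a minimal sketch, assuming the samsum dataset (dialogue/summary columns, needs the py7zr package) and a Pegasus checkpoint, that prepares a 1,000-example slice and tokenizes with several worker processes:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Checkpoint and dataset are assumptions; swap in the ones used in the video.
checkpoint = "google/pegasus-cnn_dailymail"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

dataset = load_dataset("samsum", split="train")  # needs the py7zr package

# Work on a small slice first so mistakes fail fast and cheaply.
small_train = dataset.shuffle(seed=42).select(range(1000))

def tokenize(batch):
    model_inputs = tokenizer(batch["dialogue"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["summary"], truncation=True, max_length=128)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# batched=True tokenizes many rows per call; num_proc spreads the work over CPU cores.
tokenized = small_train.map(tokenize, batched=True, num_proc=4,
                            remove_columns=small_train.column_names)
```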

Does the tokenization step tokenize all the data at once or in batches?

What if I have a huge dataset to fine-tune on?

SameerKhan-htmx
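
On the batching question: Dataset.map(..., batched=True) tokenizes in chunks (1,000 rows per call by default) rather than all at once, and very large corpora can be streamed so they never sit fully in memory. A sketch, with the checkpoint and dataset as assumptions:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-cnn_dailymail")  # assumed checkpoint

def tokenize(batch):
    return tokenizer(batch["article"], truncation=True, max_length=1024)

# Regular map(): rows are tokenized in chunks of `batch_size` (default 1,000), not all at
# once, and the result is cached on disk as Arrow files.
dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
tokenized = dataset.map(tokenize, batched=True, batch_size=1000)

# Streaming: examples are downloaded and tokenized lazily, so a corpus far larger than RAM
# can still be fed to the Trainer.
streamed = load_dataset("cnn_dailymail", "3.0.0", split="train", streaming=True)
streamed_tokenized = streamed.map(tokenize, batched=True)
```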

Sir, I want to fine-tune this model on my data, but I have no labelled data. I want to train it on unlabelled data.

soniasimran
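
The fine-tuning shown in the video is supervised, so it needs target summaries. One common workaround, not from the video, is pseudo-labelling: generate summaries with an off-the-shelf model and fine-tune on those pairs. A sketch under that assumption; the CSV path and the "text" column are hypothetical:

```python
from datasets import load_dataset
from transformers import pipeline

# Assumed: an off-the-shelf summarizer checkpoint and a CSV of raw texts with a "text"
# column; the file name is hypothetical.
summarizer = pipeline("summarization", model="google/pegasus-cnn_dailymail")
unlabelled = load_dataset("csv", data_files="my_unlabelled_texts.csv", split="train")

def add_pseudo_summary(batch):
    outputs = summarizer(batch["text"], truncation=True, max_length=128, min_length=20)
    batch["summary"] = [out["summary_text"] for out in outputs]
    return batch

# The resulting (text, summary) pairs can be fed to the same supervised recipe shown in
# the video; quality depends entirely on how good the pseudo-labels are.
pseudo_labelled = unlabelled.map(add_pseudo_summary, batched=True, batch_size=8)
```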

I want to fine-tune the mT5 model for cross-lingual summarization. What should I do?

vcfbhdd
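
A minimal sketch of preparing cross-lingual pairs for mT5; the google/mt5-small checkpoint is real, but the column names ("document" in the source language, "summary" in the target language) are placeholders:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "google/mt5-small"  # multilingual T5; larger variants exist
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def preprocess(batch):
    # "document" is in the source language, "summary" in the target language.
    model_inputs = tokenizer(batch["document"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["summary"], truncation=True, max_length=128)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# Quick check on one hand-written pair (German document, English summary).
example = preprocess({"document": ["Ein kurzer Artikel auf Deutsch über maschinelles Lernen."],
                      "summary": ["A short English summary about machine learning."]})
print(len(example["input_ids"][0]), len(example["labels"][0]))

# mT5 shares one vocabulary across its pretraining languages, so no extra language tokens
# are needed; after mapping preprocess() over the dataset, training proceeds as in the
# monolingual case with Seq2SeqTrainer and DataCollatorForSeq2Seq.
```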

@dswithbappy sir, please help me: while fine-tuning the Pegasus model, I get an out-of-memory error on the multi_news dataset. Please help me.

Theconqueror
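
Out-of-memory on multi_news usually comes from long inputs combined with the batch size. Common mitigations, not specific to this video's setup, are a per-device batch size of 1 with gradient accumulation, mixed precision, gradient checkpointing, and harder truncation. A sketch of the relevant TrainingArguments, assuming the rest of the Trainer setup is already in place:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pegasus-multinews",
    per_device_train_batch_size=1,    # smallest per-step memory footprint
    gradient_accumulation_steps=16,   # keeps the effective batch size at 16
    fp16=True,                        # mixed precision roughly halves activation memory (CUDA GPU)
    gradient_checkpointing=True,      # trades extra compute for much less activation memory
    per_device_eval_batch_size=1,
    num_train_epochs=1,
    logging_steps=50,
)
# Also truncate inputs harder during tokenization (e.g. max_length=512 instead of 1024);
# multi_news articles are long, and input length dominates Pegasus's memory use.
```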

How can I use a Pegasus model for Indonesian text summarization? I have a dataset of Indonesian texts and I want to use the Pegasus model from Hugging Face, but there is no Indonesian Pegasus model available. My question is whether I can use the Indonesian dataset on the Pegasus model and what steps I need to take to do so.

mohamadimansolihinsudrajat
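
The public Pegasus checkpoints were pretrained on English corpora, so nothing stops you from fine-tuning one on Indonesian data, but the tokenizer and pretraining give it little to work with; a multilingual encoder-decoder is usually a more practical starting point. A hedged sketch with mBART-50, assuming Indonesian ("id_ID") is in that checkpoint's language list (check the model card) and using placeholder text:

```python
from transformers import MBart50TokenizerFast, AutoModelForSeq2SeqLM

# facebook/mbart-large-50 is a multilingual seq2seq checkpoint; confirm on its model card
# that Indonesian ("id_ID") is in the supported language list before relying on it.
checkpoint = "facebook/mbart-large-50"
tokenizer = MBart50TokenizerFast.from_pretrained(checkpoint, src_lang="id_ID", tgt_lang="id_ID")
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Inputs and targets are both Indonesian; mBART marks the language with the id_ID token.
inputs = tokenizer("Contoh artikel berbahasa Indonesia tentang pembelajaran mesin.",
                   return_tensors="pt", truncation=True, max_length=512)
labels = tokenizer(text_target="Ringkasan singkat artikel tersebut.",
                   return_tensors="pt", truncation=True, max_length=128)

# From here the recipe matches the video: tokenize the whole Indonesian dataset the same
# way, then fine-tune with Seq2SeqTrainer and DataCollatorForSeq2Seq.
```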

If I have a dataset without reference summaries, how will I fine-tune the pretrained model?

yadlapalliakhilesh

Sir, I was running this code, and while declaring the TrainingArguments I encountered an error saying that the Trainer with PyTorch requires accelerate. I installed accelerate using pip, but then it shows an error that PartialState is not defined. Can you please resolve this?

sarveshsharma
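
That "PartialState is not defined" message usually means the freshly installed accelerate is too old, or the notebook runtime was not restarted after installing it. Upgrading both packages (pip install --upgrade transformers accelerate) and restarting the runtime normally clears it; a small sanity check to run afterwards:

```python
# Run after `pip install --upgrade transformers accelerate` and a runtime/kernel restart;
# a stale import of an old accelerate is what usually triggers "PartialState is not defined".
from importlib.metadata import version

print("transformers:", version("transformers"))
print("accelerate:", version("accelerate"))

from transformers import TrainingArguments  # should now construct without errors

args = TrainingArguments(output_dir="tmp_check", per_device_train_batch_size=1)
print("TrainingArguments created OK")
```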

Hello sir,
Can you please provide us the Google Colab link for this project?

The code at the GitHub link is a little bit different in some places and gives errors.

tejaswini