Text Summarization by Fine Tuning Transformer Model | NLP | Data Science | Machine Learning

🔥🐍 Check out the MASSIVELY UPGRADED 2nd Edition of my Book (with 1300+ pages of Dense Python Knowledge) covering 350+ core Python 🐍 concepts


Other playlists you might like 👇

#NLP #machinelearning #datascience #textprocessing #kaggle #tensorflow #pytorch #deeplearning #deeplearningai #100daysofmlcode #pythonprogramming #100DaysOfMLCode
Comments

Thank you so much for the video and the NLP-with-Hugging-Face playlist!
I ran into a few errors in my Colab notebook. Maybe somebody else will hit the same problems:
1. In 2023 you probably won't be able to run this in a standard Colab session due to lack of memory, so switch to your local GPU or upgrade to a paid Colab tier.
2. dialogue_token_len = [len(tokenizer.encode(s) for s in ] and summary_token_len = [len(tokenizer.encode(s) for s in ] gave me "TypeError: object of type 'generator' has no len()". The bug is a missing closing parenthesis after each of the two calls to tokenizer.encode(s), so len() is applied to the generator expression instead of the encoded token list (see the corrected sketch below).
3. trainer = Trainer(model=model_pegasus, args=training_args: change training_args to trainer_args. I think everyone noticed this minor mistake, but maybe I'll save someone a few minutes.

nickchern
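
A minimal sketch of the corrected lines from points 2 and 3, assuming the names used in the video (tokenizer, model_pegasus, trainer_args); the checkpoint and sample strings are only placeholders so the snippet runs on its own:

from transformers import AutoTokenizer

# hypothetical checkpoint and sample texts, only to make the snippet self-contained
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-cnn_dailymail")
dialogues = ["A: Are we still on for lunch? B: Yes, see you at noon."]
summaries = ["They confirm lunch at noon."]

# point 2: close the parenthesis after tokenizer.encode(s) so len() is applied
# to the list of token ids, not to a generator expression
dialogue_token_len = [len(tokenizer.encode(s)) for s in dialogues]
summary_token_len = [len(tokenizer.encode(s)) for s in summaries]
print(dialogue_token_len, summary_token_len)

# point 3: pass the TrainingArguments object that was actually defined, e.g.
# trainer = Trainer(model=model_pegasus, args=trainer_args, ...)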

Does this have multilingual support? How can I configure that? Please help.

wolf
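
Pegasus itself is pretrained on English only, so multilingual input generally means swapping in a multilingual seq2seq checkpoint. A minimal sketch with the summarization pipeline; the model name is an assumption (an mT5 variant fine-tuned on XL-Sum), so verify it on the Hugging Face Hub first:

from transformers import pipeline

# assumed multilingual checkpoint; check the exact name on the Hub
summarizer = pipeline("summarization", model="csebuetnlp/mT5_multilingual_XLSum")

text = "La industria de los videojuegos sigue creciendo cada año y ya supera al cine en ingresos."
print(summarizer(text, max_length=40, min_length=5)[0]["summary_text"])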

What changes do we have to make to use the ROUGE metric (the function that calculates it) as an argument to Trainer(), so that we can track it during training?

aseemlimbu
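
One possible setup (an assumption, not taken from the video) is to pass a compute_metrics function and use Seq2SeqTrainer so that evaluation generates summaries for ROUGE to score; tokenizer, model_pegasus, and trainer_args are assumed to exist as in the video:

import numpy as np
import evaluate

rouge = evaluate.load("rouge")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    # labels are padded with -100; restore the pad token before decoding
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    return rouge.compute(predictions=decoded_preds, references=decoded_labels)

# trainer = Seq2SeqTrainer(model=model_pegasus, args=trainer_args,
#                          compute_metrics=compute_metrics, ...)  # plus datasets and collator

The training arguments would also need predict_with_generate=True (a Seq2SeqTrainingArguments option) so that the predictions passed to compute_metrics are generated token ids rather than raw logits.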

Should we enable tokenizer padding in this task?

cstu
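
A common pattern (an assumption about the setup, not necessarily what the video does) is to tokenize with truncation only and let DataCollatorForSeq2Seq pad each batch dynamically to its longest sequence, which wastes less memory than padding everything to max_length. tokenizer and model_pegasus are assumed from the video, and the column names are hypothetical:

from transformers import DataCollatorForSeq2Seq

def tokenize_fn(batch):
    # truncation only; no fixed padding here
    model_inputs = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# pads inputs and labels per batch; label padding defaults to -100 so it is ignored by the loss
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model_pegasus)
# trainer = Trainer(model=model_pegasus, args=trainer_args, data_collator=data_collator, ...)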

Is it possible to fine-tune the Pegasus model with the PEFT LoRA technique?

AndyGonzalez-yp
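
A minimal sketch of what that could look like with the peft library (an assumption, not shown in the video); the target module names are a guess at Pegasus's attention projections, so confirm them against model.named_modules():

from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-cnn_dailymail")  # hypothetical checkpoint

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
# the wrapped model can then be passed to Trainer / Seq2SeqTrainer as usual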