Question Answering | NLP | QA | Transformer | Natural Language Processing | Python | Theory | Code

===== Likes: 38 👍: Dislikes: 0 👎: 100.0% : Updated on 01-21-2023 11:57:17 EST =====
Question & Answering! Looking to develop a model that can provide answers to any question you have? Well, in this video, I cover a high-level overview of the architecture of QA Models (based on BERT). I also go into depth on what QA Modeling is, how it can be applied, and how it is used in the real world. Lastly, I cover the pretraining and fine-tuning phases of the QA Modeling process.
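A minimal sketch of the extractive QA idea described above, using the Hugging Face pipeline API. The checkpoint name (deepset/roberta-base-squad2) and the toy context and question are illustrative assumptions, not the exact code from the video:

```python
# Minimal extractive QA sketch (assumes: pip install transformers torch).
from transformers import pipeline

# deepset/roberta-base-squad2 is a RoBERTa checkpoint fine-tuned on SQuAD 2.0;
# any extractive QA checkpoint can be dropped in the same way.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = (
    "BERT-style models handle extractive QA by predicting a start token and an "
    "end token inside the provided context; the answer is the span between them."
)
result = qa(question="How do BERT-style models find an answer?", context=context)

print(result["answer"], result["score"])  # predicted span and its confidence
```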

Feel free to support me! Do know that just viewing my content is plenty of support! 😍

Watch Next?

Resources

🔗 My Links 🔗

📓 Requirements 🧐
Understanding of Python
Google Account

⌛ Timeline ⌛
0:00 - Categories of Question & Answering
3:20 - Additional Resources for Question & Answering
4:05 - Architecture and Backend of RoBERTa QA
5:12 - Implementation of Extractive QA (RoBERTa)
6:00 - Transfer Learning (Out of the Box Predictions)
8:45 - RoBERTa Architecture & Fine-Tuning QA Model via CLI
10:00 - Fine-Tuning QA Model with Libraries
13:15 - Pre-Training QA Model

🏷️Tags🏷️:
Python,Natural Language Processing, BERT, Question and Answering, QA, Question, Answering, Tutorial, Machine Learning, Huggingface, Google, Colab, Google Colab, Chatbot, Encoder, Decoder, Neural, Network, Neural network, theory, explained, Implementation, code, how to, deep, learning, deep learning, tasks, QA, Q&A, Extractive, Abstractive, Extractive QA, Abstractive QA,

🔔Current Subs🔔:
3,220
Comments

Thanks for this video, it is very simple and great.

youssefsayed

Your videos are of high quality and cover quite a range of topics, but I wonder why the subscriber count is relatively low. My personal take is that you lay a very good foundation that is easy to understand, then dive right into coding, which is very practical. I feel there's something missing in between.

shipan

Hi, thanks for such an informative video. What about the scenario where we extract numeric features from our datasets, like sentiments, etc.? How can we then input them to a transformer, especially T5 or ALBERT, without doing masking?

ammarazamir

So we give a question as input from the prompt, then our model picks up a random context from our dataset and gives a random answer... (if we didn't fine-tune the model)?

VishwaTeja-wg

Stuck on this kind of project tbh I'm dying

boro-

So how can these answers be graded? Can you please tell me how we grade them out of 10?

AditiBhagat-pd

I'm confused, have you not just fine-tuned a SQuAD model with SQuAD data?

Kungfoobacon
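To make the question above concrete, here is a compact sketch of what fine-tuning a base RoBERTa checkpoint for extractive QA on a small SQuAD slice looks like with the Trainer API. The checkpoint, slice size, and hyperparameters are assumptions for illustration, not the video's exact notebook:

```python
# Sketch: fine-tune roberta-base for extractive QA on a small SQuAD slice.
# Assumes: pip install transformers datasets torch
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForQuestionAnswering,
                          TrainingArguments, Trainer, default_data_collator)

model_name = "roberta-base"  # assumption: any BERT-style encoder works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

squad = load_dataset("squad", split="train[:2000]")  # small slice keeps training short

def preprocess(examples):
    # Tokenize question + context and map the character-level answer span
    # onto token-level start/end positions.
    inputs = tokenizer(
        [q.strip() for q in examples["question"]],
        examples["context"],
        max_length=384,
        truncation="only_second",
        padding="max_length",
        return_offsets_mapping=True,
    )
    offsets_batch = inputs.pop("offset_mapping")
    starts, ends = [], []
    for i, offsets in enumerate(offsets_batch):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = inputs.sequence_ids(i)
        ctx_start = seq_ids.index(1)                          # first context token
        ctx_end = len(seq_ids) - 1 - seq_ids[::-1].index(1)   # last context token
        if offsets[ctx_start][0] > start_char or offsets[ctx_end][1] < end_char:
            starts.append(0)
            ends.append(0)                                    # answer truncated away
        else:
            idx = ctx_start
            while idx <= ctx_end and offsets[idx][0] <= start_char:
                idx += 1
            starts.append(idx - 1)
            idx = ctx_end
            while idx >= ctx_start and offsets[idx][1] >= end_char:
                idx -= 1
            ends.append(idx + 1)
    inputs["start_positions"] = starts
    inputs["end_positions"] = ends
    return inputs

train_ds = squad.map(preprocess, batched=True, remove_columns=squad.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qa-roberta-finetuned",
                           per_device_train_batch_size=8,
                           num_train_epochs=1,
                           learning_rate=3e-5),
    train_dataset=train_ds,
    data_collator=default_data_collator,
)
trainer.train()
```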

Thanks for making this video. Learnt a lot.

Follow-up question: Can the question answering take more of a chat format, where you can build questions and follow-ups?

Let's say I embed the text and create vectors from it. When a question is asked, it's converted to a vector, and the response is fetched using cosine similarity. Can it be done this way with any of the models? Could you please make a video or share feedback if possible? Thanks.

sriramkrishna
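One way to get the chat-style flow asked about above is to retrieve the most similar passage with embeddings and cosine similarity, then pass that passage as the context to an extractive QA model. A minimal retrieval sketch, assuming the sentence-transformers library; the model name and passages are illustrative:

```python
# Embed candidate passages, embed the question, pick the best match by cosine similarity.
# Assumes: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any sentence-embedding model works

passages = [
    "RoBERTa is a robustly optimized BERT pretraining approach.",
    "SQuAD is a reading-comprehension dataset built from Wikipedia articles.",
    "Fine-tuning adapts a pretrained model to a downstream task.",
]
passage_emb = model.encode(passages, convert_to_tensor=True)

question = "Which dataset is commonly used for extractive QA?"
question_emb = model.encode(question, convert_to_tensor=True)

scores = util.cos_sim(question_emb, passage_emb)[0]  # one similarity score per passage
best = int(scores.argmax())
print(passages[best], float(scores[best]))
```

The retrieved passage can then be handed to a question-answering pipeline as the context, and follow-up questions simply rerun the same retrieval step.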

Please, I want the link to this dataset on Kaggle.

Nour-alshareef

What if, for a custom dataset, the question for a context has answers coming from multiple sections of the paragraph? I believe for the dataset here you only have one answer per question from a context, but how do you handle multiple start indices for a question?

pragyankc
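On the multiple-spans question: the SQuAD-style answers field is a pair of parallel lists, so several valid spans can be stored per question. One simple strategy (an assumption, not something shown in the video) is to flatten them into one training example per span so the usual single start/end labels still apply:

```python
# Illustrative example (assumed data, not from the video's dataset).
example = {
    "context": "The capital of France is Paris. Paris is also France's largest city.",
    "question": "What is the capital of France?",
    "answers": {
        "text": ["Paris", "Paris"],   # one entry per valid span
        "answer_start": [25, 32],     # matching character offsets
    },
}

# Sanity-check the character offsets against the context.
for text, start in zip(example["answers"]["text"], example["answers"]["answer_start"]):
    assert example["context"][start:start + len(text)] == text

# Flatten: one training example per (question, span) pair.
flat = [
    {"context": example["context"], "question": example["question"],
     "answers": {"text": [text], "answer_start": [start]}}
    for text, start in zip(example["answers"]["text"], example["answers"]["answer_start"])
]
print(len(flat), "training examples")
```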

Hi! Thanks. About the data format: I read the link, and it mainly explains that the data has to be in the form of JSON, lists, or dictionaries. Does that mean that if I have a pandas DataFrame with columns question, answer, answer_start, and answer_end, it won't work?

mariussame
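A flat pandas DataFrame can usually be converted into the nested records most QA training code expects. A small sketch, assuming columns named question, context, answer and answer_start (answer_end can be derived from the answer length):

```python
# Convert a flat DataFrame into SQuAD-style records (assumed column names).
import pandas as pd
from datasets import Dataset

df = pd.DataFrame({
    "question": ["What is the capital of France?"],
    "context": ["The capital of France is Paris."],
    "answer": ["Paris"],
    "answer_start": [25],
})

# Rebuild the nested "answers" field from the flat columns.
records = [
    {
        "question": row.question,
        "context": row.context,
        "answers": {"text": [row.answer], "answer_start": [int(row.answer_start)]},
    }
    for row in df.itertuples(index=False)
]

dataset = Dataset.from_list(records)  # ready for the usual tokenize/label step
print(dataset[0])
```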

Hi, do you have any video on how to perform MCQ (one question with 4 answers)? Or could you please provide a good link for the MCQ task?

sandeepanand
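The video does not cover MCQ, but as a rough pointer, Hugging Face transformers ships multiple-choice heads (AutoModelForMultipleChoice). The sketch below is an assumption-level example: roberta-base has no trained multiple-choice head, so its scores are essentially random until it is fine-tuned on a dataset such as SWAG or RACE:

```python
# Multiple-choice QA sketch (assumes: pip install transformers torch).
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_name = "roberta-base"  # assumption: swap in a checkpoint fine-tuned for multiple choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

question = "What does BERT stand for?"
choices = [
    "Bidirectional Encoder Representations from Transformers",
    "Basic Entity Recognition Tool",
    "Binary Encoded Recurrent Transducer",
    "Bayesian Error Reduction Technique",
]

# Pair the question with every choice; the model scores each pair.
encoding = tokenizer([question] * len(choices), choices,
                     return_tensors="pt", padding=True, truncation=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # shape (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print("Predicted choice:", choices[logits.argmax(-1).item()])
```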

What's the use of the model in a question answering system if the dataset already contains an answer column? A simple search would also work for SQuAD, so there would be no need to fine-tune a model for that. Correct me if I'm wrong about the SQuAD dataset.

amruthak

Sir, I am getting an error while tuning. Please help. Should I change the runtime type to GPU in Colab?

loading
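Whether the GPU runtime fixes the error depends on what the error actually is, but the fine-tuning parts of the video are much faster on a GPU. A quick check (a generic Colab tip, not specific to this notebook):

```python
# Confirm that PyTorch can see the GPU after Runtime -> Change runtime type -> GPU.
import torch

print(torch.cuda.is_available())          # True when a GPU runtime is active
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "Tesla T4" on free Colab
```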

How can I reduce the dataset size to make the training time shorter?

mohamedyasser
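Two common ways to shrink a Hugging Face dataset so training finishes faster; the dataset name and sizes are placeholders, not values from the video:

```python
# Option 1: slice the split while loading. Option 2: subsample an already-loaded split.
from datasets import load_dataset

small = load_dataset("squad", split="train[:2000]")   # slice at load time

full = load_dataset("squad", split="train")
small2 = full.shuffle(seed=42).select(range(2000))    # random subset

print(len(small), len(small2))
```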

Can I get the Google Colab notebook link for this?

MohamedAhmed-kvhl

No module named 'keras.saving.hdf5_format'. How do I solve it? Help!

刘-xb
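That error usually comes from a version mismatch: Keras/TensorFlow 2.12+ removed the private keras.saving.hdf5_format module that older transformers releases import when loading TensorFlow weights. The snippet below is a hedged suggestion to help diagnose it, not a guaranteed fix:

```python
# Troubleshooting sketch for "No module named 'keras.saving.hdf5_format'".
# Typical fixes (assumptions, verify against your own environment):
#   pip install --upgrade transformers     # newer releases dropped that import
#   pip install "tensorflow<2.12"          # or pin TensorFlow/Keras to an older line
import tensorflow as tf
import transformers

# Print the versions so you can compare against a known-working combination.
print("transformers:", transformers.__version__)
print("tensorflow :", tf.__version__)
```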