Encoder and Decoder - Neural Machine Translation Tutorial with Keras - Deep Learning

Machine translation is a challenging task that traditionally involves large statistical models developed using highly sophisticated linguistic knowledge.

Neural machine translation is the use of deep neural networks for the problem of machine translation.

In this tutorial, you will discover how to develop a neural machine translation system for translating English to French.
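
For orientation, here is a minimal sketch of the encoder-decoder (seq2seq) model this kind of tutorial builds, following the standard Keras character-level setup; the token counts and latent dimension below are illustrative assumptions, not values taken from the video.

    from tensorflow import keras

    num_encoder_tokens = 71   # hypothetical: size of the English character set
    num_decoder_tokens = 93   # hypothetical: size of the French character set
    latent_dim = 256          # hypothetical: size of the LSTM hidden state

    # Encoder: read the one-hot-encoded English sequence and keep only its
    # final hidden and cell states as a fixed-size summary of the sentence.
    encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))
    _, state_h, state_c = keras.layers.LSTM(latent_dim, return_state=True)(encoder_inputs)
    encoder_states = [state_h, state_c]

    # Decoder: generate the French sequence, starting from the encoder's states.
    decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
    decoder_lstm = keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True)
    decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)

    # Dense softmax head: map each decoder hidden state to a probability
    # distribution over the target vocabulary.
    decoder_outputs = keras.layers.Dense(num_decoder_tokens, activation="softmax")(decoder_outputs)

    model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
    model.compile(optimizer="rmsprop", loss="categorical_crossentropy")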
Please join my channel as a member to get additional benefits such as Data Science materials, members-only live streams, and more.

Please subscribe to my other channel too.

Connect with me here:
Comments

Hey Krish, I am getting a cardinality error on the inference model's input with this code.
The model fits perfectly, but when predicting with the inference model I get this error.

ankurlimbashia

Hi @krish, can you tell us how you created this new format? I mean, which application do you use? It looks so cool. I also want to record videos but am not sure which software offers these features. Any insight?

mmgtechm

Hi Krish, thanks for the video. Waiting for more topics like the attention mechanism, Transformers, etc.

anjanas

Greetings from Austria, thanks for sharing your knowledge!

MASadat-lzyz

@krishnaik06 This is a super helpful video. I have been following the NLP playlist. Would you mind sharing this code in the Git repo? The seq2seq folder in your repo seems empty. Thanks :)

abhijittdhavlle

What is the function of the Dense layer after the decoder? Aren't we actually interested in the decoder's output? Why doesn't adding a Dense layer distort the decoder's actual output? I would be very thankful if someone answered these questions.

shashankpal
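
For readers with the same doubt: in the standard Keras seq2seq setup, the decoder LSTM emits a latent_dim-sized hidden vector per timestep, which is an internal representation rather than a prediction. The Dense softmax layer projects each hidden vector onto the target vocabulary, so it completes the output instead of distorting it. A minimal sketch, with illustrative sizes:

    from tensorflow import keras

    latent_dim = 256          # hypothetical decoder hidden size
    num_decoder_tokens = 93   # hypothetical target vocabulary size

    decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
    # One latent_dim-sized hidden vector per timestep: not yet a token prediction.
    hidden = keras.layers.LSTM(latent_dim, return_sequences=True)(decoder_inputs)
    # The Dense softmax turns each hidden vector into a probability distribution
    # over the target tokens, from which the predicted token is read off.
    probs = keras.layers.Dense(num_decoder_tokens, activation="softmax")(hidden)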

What is the encoding scheme used in this tutorial, e.g. one-hot, word2vec, GloVe, etc.?

nawazalilone

Can you please take a small sample text and briefly walk through the encoding and decoding, so that we can understand it? There are a few doubts regarding t = timestep.
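
For anyone puzzled by the timesteps: in the standard character-level setup, each sentence becomes a (timesteps, vocabulary) matrix of one-hot rows, one row per character position t. A tiny illustrative sketch (the character set here is made up):

    import numpy as np

    text = "hi"
    vocab = sorted(set("hi!"))               # hypothetical character set: ['!', 'h', 'i']
    char_index = {c: i for i, c in enumerate(vocab)}

    # One row per timestep t, one column per character in the vocabulary.
    encoded = np.zeros((len(text), len(vocab)), dtype="float32")
    for t, char in enumerate(text):
        encoded[t, char_index[char]] = 1.0

    print(encoded)
    # [[0. 1. 0.]   <- t=0: 'h'
    #  [0. 0. 1.]]  <- t=1: 'i'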


Does this apply to time series as well?

apicasharma

YOLO, BERT, Transformers!! Please bring explanations of these.

thetensordude

Hello, can anyone please help? This model is not accurate; I tuned various hyperparameters, but the accuracy is still not good. Can someone tell me exactly what to do to improve it?

mukulbhardwaj

Please provide the GitHub link for this code.

Kumar

Greetings!! Can you please upload more material on deep learning, like attention models, Transformers, and BERT, and cover unsupervised learning too if possible? It would be highly appreciated.

tyylermike

I don't understand why my final testing loop decodes every input to 'i want to go to room'.

I built a Hindi-to-English translation model and used the dataset from the blog, the one listed just below the English-French dataset.

DeependraSingh-jhxf

Are encoder_outputs and h_t not the same thing?

soumyagupta
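
For readers wondering the same: in the standard Keras setup they do coincide, because an LSTM with return_sequences=False (the default) returns its final hidden state as its output. A minimal sketch demonstrating this, with made-up shapes:

    import numpy as np
    from tensorflow import keras

    inputs = keras.Input(shape=(None, 8))     # hypothetical: 8 input features
    # return_state=True makes the LSTM also return its final hidden and cell states.
    outputs, state_h, state_c = keras.layers.LSTM(16, return_state=True)(inputs)
    model = keras.Model(inputs, [outputs, state_h, state_c])

    x = np.random.rand(2, 5, 8).astype("float32")   # batch of 2, 5 timesteps each
    out, h, c = model.predict(x)
    # With return_sequences=False, the output *is* the final hidden state h_t.
    assert np.allclose(out, h)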

Sir, thanks for this nice explanation. But I have one query: instead of text I have numeric indexes of the text, not vectors. How can I translate those indexes back into the corresponding text?

piyalikarmakar

Sir, why don't we define the inputs like this: encoder_inputs = Input(shape=(max_encoder_sequence_length, num_encoder_tokens))?

gurdeepsinghbhatia
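
For anyone comparing the two definitions: both are valid in Keras. Using None for the timestep dimension lets the model accept sequences of any length, which is convenient at inference time; a fixed length works too, but then every batch must be padded to exactly that many timesteps. A brief sketch with illustrative sizes:

    from tensorflow import keras

    num_encoder_tokens = 71            # hypothetical vocabulary size
    max_encoder_sequence_length = 16   # hypothetical padded length

    # Variable-length variant: accepts any number of timesteps.
    encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))

    # Fixed-length variant: every input must have exactly this many timesteps.
    encoder_inputs_fixed = keras.Input(shape=(max_encoder_sequence_length, num_encoder_tokens))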

Hey man, kindly share the link to the code. If you have not uploaded it yet, please do; it's a request.

MuhammadAli-ieps

Sir, I don't understand the input dimension, i.e. why the input dimension is the way it is.

gurdeepsinghbhatia

I have followed every step, but my encoder_input_data is still the same for each sentence. Please help.

DeependraSingh-jhxf