Python Word Embedding using Word2vec and Keras | How to use word embedding in Python

#WordEmbeddingInPython #UnfoldDataScience
Hello All,
My name is Aman and I am a data scientist.

About this video:
In this video, I explain word embedding in Python. I show the step-by-step process of doing word embedding in Python using word2vec and Keras, and I explain the advantages and disadvantages of the Keras and word2vec approaches to word embedding.
The following questions are specifically answered in this video (a runnable starting point is sketched after the list):
1. How to use word embedding in Python
2. Word2vec in Python
3. Keras word embedding in Python
4. Word embedding in Python using Keras and word2vec
5. Word embedding layer in Keras
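
A minimal sketch of the two approaches the video walks through, assuming gensim 4.x (where the dimensionality argument is named vector_size) and TensorFlow 2.x; the toy documents and labels are placeholders, not the video's dataset:

    import numpy as np
    from gensim.models import Word2Vec
    from tensorflow.keras.preprocessing.text import one_hot
    from tensorflow.keras.preprocessing.sequence import pad_sequences
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, Flatten, Dense

    docs = ["nice food", "amazing restaurant", "too good", "horrible service"]
    labels = np.array([1, 1, 1, 0])

    # Approach 1: train word2vec on the tokenized corpus (gensim).
    tokenized = [d.split() for d in docs]
    w2v = Word2Vec(tokenized, vector_size=100, window=5, min_count=1)
    print(w2v.wv["food"].shape)  # (100,) dense vector for one word

    # Approach 2: a Keras Embedding layer learned with the classifier.
    vocab_size, max_length = 30, 3
    encoded = [one_hot(d, vocab_size) for d in docs]  # hash words to ints
    padded_docs = pad_sequences(encoded, maxlen=max_length, padding="post")
    model = Sequential([
        Embedding(vocab_size, 8, input_length=max_length),  # 8-dim vectors
        Flatten(),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(padded_docs, labels, epochs=50, verbose=0)

The word2vec vectors capture co-occurrence semantics independently of any task, while the Embedding layer's vectors are tuned for the classification objective.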

About Unfold Data Science: This channel helps people understand the basics of data science through simple examples, explained in an easy way. Anybody without prior knowledge of computer programming, statistics, machine learning, or artificial intelligence can get a high-level understanding of data science through this channel. The videos are not very technical in nature, so they can be easily grasped by viewers from different backgrounds as well.

Join Facebook group :

Follow on twitter : @unfoldds

Follow on Instagram : unfolddatascience

Watch python for data science playlist here:

Watch statistics and mathematics playlist here :

Watch End to End Implementation of a simple machine learning model in Python here:

Learn Ensemble Model, Bagging and Boosting here:

Access all my codes here:

Comments

Really, really one of the best explanations I've ever seen, appreciated!

telmanmaghrebi

You are really unfolding Data Science, absolutely fantastic...

uwaisahamedimad

Very useful and easy-to-follow tutorial.
Thanks a ton sir 🙏

sowmiya_rocker

It's good to see such models on smaller datasets; it helps with understanding. Thanks for this video.

dorgeswati

Thank you Aman, please keep up the good work. Best wishes!

souravthakur

10/10 video. The gensim documentation sucks, so this helped a lot.

TheTakenKing

Regarding the approach at 8:30: after applying the word2vec model, we can do multiclass classification using any classification algorithm, like XGBoost with objective='multi:softprob'.
Correct me if I am wrong. The video is really very nice; I've started following ✨✨😊

nareshjadhav
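
That is a common pattern. A minimal sketch of the idea, assuming gensim 4.x and the xgboost package (the toy corpus and labels are placeholders):

    import numpy as np
    from gensim.models import Word2Vec
    from xgboost import XGBClassifier

    docs = [["great", "movie"], ["bad", "plot"], ["average", "acting"]]
    y = np.array([2, 0, 1])  # three classes, labelled 0..2

    w2v = Word2Vec(docs, vector_size=100, min_count=1)

    # Represent each document as the mean of its word vectors.
    X = np.array([np.mean([w2v.wv[w] for w in doc], axis=0) for doc in docs])

    clf = XGBClassifier(objective="multi:softprob")
    clf.fit(X, y)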

I have trained my own word2vec model but am having trouble plugging it into my Keras model. I would really appreciate some guidance, as I am quite new and would love to learn.

yasminzamrin
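
A common pattern for this, sketched under the assumption of gensim 4.x and TensorFlow 2.x: build an embedding matrix from the trained model and hand it to the Embedding layer as its initial weights (the toy corpus and word_index mapping stand in for your own):

    import numpy as np
    from gensim.models import Word2Vec
    from tensorflow.keras.layers import Embedding

    sentences = [["good", "food"], ["bad", "service"], ["good", "service"]]
    w2v = Word2Vec(sentences, vector_size=100, min_count=1)

    # word_index: your tokenizer's {word: integer} mapping (toy version here).
    word_index = {"good": 1, "bad": 2, "food": 3, "service": 4}

    vocab_size = len(word_index) + 1  # +1: index 0 is reserved for padding
    embedding_matrix = np.zeros((vocab_size, w2v.wv.vector_size))
    for word, i in word_index.items():
        if word in w2v.wv:  # skip out-of-vocabulary words
            embedding_matrix[i] = w2v.wv[word]

    # Freeze the pretrained vectors inside the Keras model.
    embedding_layer = Embedding(vocab_size, w2v.wv.vector_size,
                                weights=[embedding_matrix],
                                trainable=False)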

Great video, keep it up. Can you make a video on the design of experiments topic, or share any resources you have? Reaching out as you have lots of knowledge.

MrDEBONTUBE

Clear explanation and implementation, thanks Aman. Also, I have a few doubts regarding data science project structure. Please let me know how to connect with you.

santhoshgattoji

Thanks Aman for clearly explaining the concept. There has been no NLP video for a month; can you please post a few more in the series?

akd

Hi, is our model size going to increase if we use a 3.4 GB pre-trained word2vec model, and if it does, how do we deploy such a big model?

Rockleev
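
Yes, the full pretrained vectors stay in memory. One option, assuming gensim 4.x and the ~3.4 GB GoogleNews vectors, is to load only the most frequent words with the limit argument of load_word2vec_format, which shrinks the in-memory footprint for deployment:

    from gensim.models import KeyedVectors

    # Keep only the 500k most frequent words instead of the full vocabulary.
    kv = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True, limit=500_000)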

One of the best tutorials. Could you please share a video on how to classify documents based on word embeddings?

asheeshmathur

I have a question. I have my own corpus, and I have built multiple word embeddings, such as word2vec, GloVe, TF-IDF, and BERT, for the same corpus. This is for a document similarity task. How do I evaluate these models, and how am I going to choose the best one?

uwaisahamedimad
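
One hedged way to compare them: label a small set of document pairs as similar or dissimilar, then score each embedding by how well its cosine similarities separate the two groups. A minimal sketch with scikit-learn, using random placeholder vectors where your per-model document vectors would go:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # vecs_a/vecs_b: document vectors for the two sides of each labelled pair.
    vecs_a = np.random.rand(6, 100)
    vecs_b = np.random.rand(6, 100)
    y = np.array([1, 1, 1, 0, 0, 0])  # 1 = similar pair, 0 = dissimilar

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    sims = np.array([cosine(a, b) for a, b in zip(vecs_a, vecs_b)])
    print(roc_auc_score(y, sims))  # higher AUC = better separation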

Excuse me, is there any criterion for deciding the number of elements each word vector has? In other words, why does Word2Vec represent each word as a vector of 100 elements instead of 200? Thank you so much.

alvaroradajczyk
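
There is no fixed criterion: the dimensionality is a hyperparameter (gensim's Word2Vec defaults to 100) that is usually tuned empirically against a downstream task. A minimal sketch, assuming gensim 4.x and a toy corpus:

    from gensim.models import Word2Vec

    sentences = [["data", "science"], ["machine", "learning"]]  # toy corpus

    # Train two models differing only in dimensionality and compare them.
    m100 = Word2Vec(sentences, vector_size=100, min_count=1)
    m200 = Word2Vec(sentences, vector_size=200, min_count=1)
    print(m100.wv["data"].shape, m200.wv["data"].shape)  # (100,) (200,)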

Thank you so much for a clear explanation. But what do the padded_docs and labels parameters represent?

ashagireadinew
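
In the Keras approach, padded_docs holds the integer-encoded documents padded to a common length, and labels holds the target class for each document. A minimal sketch, assuming TensorFlow 2.x and toy data:

    import numpy as np
    from tensorflow.keras.preprocessing.text import one_hot
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    docs = ["well done", "nice work", "poor effort"]
    labels = np.array([1, 1, 0])  # one target per document

    encoded = [one_hot(d, 30) for d in docs]  # words -> integers, vocab 30
    padded_docs = pad_sequences(encoded, maxlen=4, padding="post")
    print(padded_docs)  # shape (3, 4): equal-length rows for model.fit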

Am I understanding correctly that the entire vectorization process and the "semantic" meaning are derived from how frequently one word appears in proximity to other words in the data it has been trained on?

sma

Do both the custom word2vec model and the Keras Embedding layer use the same methodology (context, target, window size, vector representation)?
If yes, is there any performance difference between them, and which one is best to use in most cases?

maYYidtS

What should we do if we have a word in our dataset that is not recognized by the word2vec model? In my case, it is giving me an error. What is the solution for that?

chandvachhani
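
A common fix, assuming gensim 4.x: guard the lookup so out-of-vocabulary words fall back to a zero vector (or retrain with min_count=1 so rare words are kept). A minimal sketch:

    import numpy as np
    from gensim.models import Word2Vec

    model = Word2Vec([["good", "food"]], vector_size=100, min_count=1)

    def get_vector(model, word):
        # Return the word's vector, or zeros for an out-of-vocabulary word.
        if word in model.wv:
            return model.wv[word]
        return np.zeros(model.wv.vector_size)

    print(get_vector(model, "unseen").shape)  # (100,), no KeyError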

Nice video, but can you rectify the mistake from 10:28 onwards, where you said the vocab size of 30 is similar to the size of 100 in the previous video? Those 100 dimensions were semantic numbers; in this case the vocab size is 30 and the embedding dimension is 8. Correct me if I am wrong.

sachinshelar