Introduction to NLP | Word Embeddings & Word2Vec Model

Learn everything about word embeddings and the word2vec model! I've explained the CBOW and skip-gram models. I've also shown how to visualize higher-dimensional word vectors in 2D.
#nlp #word2vec #machinelearning
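To accompany the 2D visualization part, here is a minimal sketch of one common approach: projecting pretrained vectors down to two dimensions with PCA. The word list, the smaller GloVe model, and the use of scikit-learn here are illustrative assumptions, not necessarily what the video uses:

    import gensim.downloader as api
    import matplotlib.pyplot as plt
    from sklearn.decomposition import PCA

    # A small pretrained model keeps the download light; any KeyedVectors works
    wv = api.load("glove-wiki-gigaword-50")
    words = ["king", "queen", "man", "woman", "paris", "france", "london", "england"]

    # Reduce the 50-dimensional vectors to 2D for plotting
    coords = PCA(n_components=2).fit_transform([wv[w] for w in words])

    plt.scatter(coords[:, 0], coords[:, 1])
    for (x, y), word in zip(coords, words):
        plt.annotate(word, (x, y))
    plt.show()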

For more videos please subscribe -

Support me if you can ❤️

NLP playlist -

Source Code -

References -

Facebook -
Instagram -
Twitter -
Comments

The most underrated channel for machine learning.

harshpal

The best... you elucidated this topic with charm!! Thanks Sujan

sreedharsree

Very clear description. I was struggling to understand it, but your video was very simple and provided the required information.

TechResearch

Thanks, that is one of the best explanations; I understood a lot.

ZohairAhmed

Commenting after the first third of the video. It is really very clear. Continue with this and you will get lots of subs. Keep it up!

magelauditore

Really good explanation; now I understand the concept.

debjyotibanerjee

Beautiful explanation, I love it!! 👍👍

manikant

Very nice explanation, sir. Thank you so much.

mastercomputersciencewitha

Please continue making NLP videos; we want more and more. If possible, cover all of AI; we would love to hear from you!

shivanineeli

Really awesome video... such an easy and clear explanation... Loved it!
Please make more videos. Thanks a lot!

swagatmishra

Thanks bruhhh 🤍... it's much clearer than regular classes. #nlp

pratibhagoudar

Hi,
This could sound a bit naive, but I just want to know how you figured out the parameter you are passing to "api.load()", which is "word2vec-google-news-300". I mean, there must be a list of models from which you got this, right? I googled it, but I found only links, and it's a bit confusing.
Thanks.

edwardrouth
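For anyone with the same question: gensim's downloader module can list every pretrained model it knows about, which is where names like "word2vec-google-news-300" come from. A minimal sketch, assuming gensim is installed:

    import gensim.downloader as api

    # api.info() returns a dict describing all downloadable corpora and models
    available = api.info()
    print(sorted(available["models"].keys()))  # includes "word2vec-google-news-300"

    # First call downloads the Google News vectors (a large file, roughly 1.6 GB)
    wv = api.load("word2vec-google-news-300")
    print(wv.most_similar("king", topn=3))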

Nice video. Does word2vec represent medical vocabularies? I have a medical text corpus that has about tokens. What do you think I should do?

r_pydatascience
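On the medical-corpus question: the Google News vectors cover general English, so clinical terms are sparse or missing there; the usual approach is to train word2vec on your own domain corpus with gensim. A minimal sketch (the tiny corpus and the parameters are illustrative only):

    from gensim.models import Word2Vec

    # Illustrative corpus: each document already tokenized into a list of words
    corpus = [
        ["patient", "presented", "with", "acute", "myocardial", "infarction"],
        ["administered", "aspirin", "and", "monitored", "troponin", "levels"],
    ]

    # sg=1 selects skip-gram; min_count=1 only because this toy corpus is tiny
    model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1, epochs=10)
    print(model.wv.most_similar("aspirin", topn=2))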

The intro video reminds me of "it's Wednesday, my dudes".

rexwan

I want to ask a question: are all the word vectors the same length?
Because I have an idea: if we use DNA sequences (which are of course not all the same length) instead of just words, can we train a model to get a better classification result?

ccuuttww
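On the length question: yes, every embedding word2vec produces has the same dimensionality (the vector_size parameter), no matter how long the token is. Variable-length DNA sequences are usually handled by splitting each sequence into overlapping k-mers and treating those as the "words"; a hedged sketch, with k and the sequences as illustrative choices:

    from gensim.models import Word2Vec

    def kmers(seq, k=3):
        # Split a sequence into overlapping k-mers, e.g. "ATGC" -> ["ATG", "TGC"]
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    # Sequences of different lengths all become sentences over a fixed k-mer vocabulary
    sequences = ["ATGCGTACGTT", "GGCATGCA", "TTACGATGCGT"]
    sentences = [kmers(s) for s in sequences]

    model = Word2Vec(sentences, vector_size=50, window=4, min_count=1, sg=1)
    print(model.wv["ATG"].shape)  # every k-mer vector has the same shape: (50,)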

Please make a video about how backpropagation works in skip-gram.

coxixx
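Until such a video exists, here is a rough numpy sketch of one gradient step for skip-gram with negative sampling; the dimensions, learning rate, and random vectors are illustrative, but the gradients follow from the standard loss -log sigmoid(v_c . v_t) - sum_n log sigmoid(-v_n . v_t):

    import numpy as np

    rng = np.random.default_rng(0)
    dim, lr = 50, 0.025

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Target (input) vector, one true context vector, five negative samples
    v_t = rng.normal(scale=0.1, size=dim)
    v_c = rng.normal(scale=0.1, size=dim)
    v_n = rng.normal(scale=0.1, size=(5, dim))

    # Using d(-log sigmoid(x))/dx = sigmoid(x) - 1 and d(-log sigmoid(-x))/dx = sigmoid(x)
    grad_t = (sigmoid(v_c @ v_t) - 1.0) * v_c + (sigmoid(v_n @ v_t)[:, None] * v_n).sum(axis=0)
    grad_c = (sigmoid(v_c @ v_t) - 1.0) * v_t
    grad_n = sigmoid(v_n @ v_t)[:, None] * v_t

    # One step of stochastic gradient descent on all three sets of parameters
    v_t -= lr * grad_t
    v_c -= lr * grad_c
    v_n -= lr * grad_n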

You said skip-gram predicts the context words from the target word, but then later you just compute the sigmoid (so not a softmax) to know whether one pair of a target word and a context word is correct. I don't really see how this is "predicting" the context words.

Is there something else going on? I'm very confused, since it seems like every explanation says something different...

tobiascornille
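A note that may clear up this common confusion: the skip-gram objective as originally stated is a softmax over the whole vocabulary, P(context | target) = exp(u_c . v_t) / sum_w exp(u_w . v_t), which genuinely "predicts" the context word. Computing that denominator for every update is expensive, so in practice word2vec trains with negative sampling, which swaps the multiclass prediction for independent binary sigmoid decisions: is this (target, context) pair a real co-occurrence or sampled noise? Both descriptions are consistent; explanations differ because some present the idealized softmax objective and others the cheaper approximation that is actually trained.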

Thank you so much for your video. Can you turn on subtitles for it? Because I'm not from England, I can't hear you clearly, and the video has no subtitles.

ThoTran-oixi

Hi,

Your videos on NLP are great.
For most_similar(positive=['boy', 'queen'], negative='girl', topn=1) I am getting:
[('teenage_girl', 0.35459333658218384)]. What could be happening here? Krish

krishcp
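Hard to diagnose without the full setup, but two things are worth checking. First, pass negative as a list rather than a bare string; some older gensim versions iterate a plain string character by character, which silently changes the query. Second, confirm the loaded model is the 300-dimensional Google News one; with it, boy + queen - girl should land near "king". A minimal sketch:

    import gensim.downloader as api

    wv = api.load("word2vec-google-news-300")

    # negative as a list, not a bare string
    print(wv.most_similar(positive=["boy", "queen"], negative=["girl"], topn=1))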

Bro, just tell me one thing: while creating vectors for the words, do we need to remove stopwords and lemmatize our text data? I believe that if we do those pre-processing steps, the word2vec model may not be able to understand the context, and the training will not happen properly. If you could say something, that would help me a lot with my project.

debjyotibanerjee
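A common recommendation on this point (general practice, not something stated in the video): word2vec learns from co-occurrence inside a context window, so aggressive stopword removal and lemmatization can indeed distort the contexts it sees. Many pipelines keep preprocessing light, lowercasing and tokenizing only, and let min_count and the sample parameter handle overly frequent words. A minimal sketch with illustrative text and parameters:

    import re
    from gensim.models import Word2Vec

    raw_docs = [
        "The quick brown fox jumps over the lazy dog.",
        "A quick brown dog jumps over the lazy fox!",
    ]

    # Light preprocessing only: lowercase and keep alphabetic tokens; stopwords stay in
    sentences = [re.findall(r"[a-z]+", doc.lower()) for doc in raw_docs]

    # sample downweights very frequent words during training instead of deleting them
    model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, sample=1e-3)
    print(model.wv.most_similar("fox", topn=2))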