Coding Word2Vec : Natural Language Processing

Code word2vec with me!!

Comments
Author

Wow, thank you, now I believe that small videos can be cool too!

zix
Author

You are a natural teacher, which is a gift.

YingleiZhang
Author

Another fantastic video! Thanks a lot!

nobodyelse
Author

Excellent way to understand W2V, thank you!
However, I will not give up :) and I will keep asking you to do the CRF video

muhammadal-qurishi
Author

Thanks for this, and yeah, like the other commenter, would you be able to do a CRF video?

salimibrahim
Author

Thanks for the video, I learned a lot! FYI - I think you may have a small bug in how you're iterating to create the training data. When you look at the unique word count for 'text' it is len 34. When you look at the unique word count for 'words' it is len 33. Converting both to a set and then doing a diff, you see that the word 'today' is not included in 'words'. That is the very first word in the text list, so I think the indexing in your enumerator is probably just off by one.
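The off-by-one described above can be reproduced with a short sketch (a hypothetical reconstruction, not the video's actual code; the names `text` and `words` just mirror the comment, and the window logic is an assumption):

```python
# Hypothetical reconstruction of the off-by-one described in the comment;
# the video's exact loop may differ.
text = "today we code word2vec together and learn embeddings".split()

window = 2
words = []          # center words actually visited
pairs = []          # (center, context) training pairs

# Buggy version: enumerate(text[1:], 1) shifts every index up by one,
# so text[0] ('today') never becomes a center word.
for i, _ in enumerate(text[1:], 1):
    words.append(text[i])

assert 'today' not in words           # first word silently dropped

# Fixed version: enumerate from 0 so every word becomes a center word.
words = []
for i, center in enumerate(text):
    words.append(center)
    for j in range(max(0, i - window), min(len(text), i + window + 1)):
        if j != i:
            pairs.append((center, text[j]))

assert set(words) == set(text)        # no word is missing now
```

Diffing the two sets, as the commenter suggests, is a quick way to confirm which word fell out of the training data.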

javidjamae
Author

Hi @ritvikmath,
Can you please explain why you chose to run the function 25 times?
Thank you

HamzaElkina
Author

Hello Ritvik, thank you so much for your super explanation!
There's just one part that isn't clear to me:
In this update, context_embeddings -= updates_df_context.loc[context_embeddings.index],
why do we subtract the update for the context embedding instead of adding it, as done for the main embedding?
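One likely explanation, sketched under assumptions (the names below are illustrative, not the video's exact code): plain gradient descent subtracts the gradient for *both* embeddings, and whether a line reads `+=` or `-=` depends only on whether the minus sign was already folded into the precomputed update table.

```python
import numpy as np

# Sketch of one skip-gram SGD step for a positive (center, context) pair.
# This is a generic illustration, not the video's implementation.
rng = np.random.default_rng(0)
dim, lr = 5, 0.1
main_vec = rng.normal(size=dim)      # embedding of the center word
context_vec = rng.normal(size=dim)   # embedding of a true context word

# Probability the pair is real: sigmoid of the dot product.
p = 1.0 / (1.0 + np.exp(-main_vec @ context_vec))
error = p - 1.0                      # d(-log p)/d(dot) for a positive pair

# Standard form: subtract lr * gradient for BOTH embeddings.
grad_main = error * context_vec
grad_context = error * main_vec
main_vec -= lr * grad_main
context_vec -= lr * grad_context

# Equivalent form: if an "update" is defined as -lr * gradient,
# then you ADD it instead:
#   update_main = -lr * grad_main
#   main_vec += update_main
```

So a mix of `+=` for one table and `-=` for the other usually means the two update DataFrames were computed with opposite sign conventions, not that the two embeddings follow different rules.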

lucasantagata
Author

If my data set is in the form of question and answer, then how should I proceed?

nishantsutar