TensorFlow for Deep Learning Research - Lecture 4

This is the fourth lecture in the series of tutorials on TensorFlow, based on the publicly available slides from the Stanford University class CS20SI, offered in the winter 2016-2017 session.
I have been using TensorFlow for computer vision and NLP for the past year or so, and thought it might be useful for folks to have these videos to go along with the public slides from Stanford.
Please note that I am a Stanford alum but have no current affiliation with the university. Thanks a lot to Chip Huyen for making the slides and the assignments available to the general public.
In this video, we cover word2vec, an important construct that allows us to capture the semantic meaning of words.
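For those following along in code, here is a minimal sketch of the skip-gram input side in TensorFlow 1.x; the vocabulary size, embedding dimension, and tensor names are my own illustrative choices, not taken from the lecture or the CS20SI assignments.

```python
import tensorflow as tf

VOCAB_SIZE = 50000  # assumed vocabulary size, for illustration only
EMBED_SIZE = 128    # assumed embedding dimension

# Center words arrive as integer ids, one per training example.
center_words = tf.placeholder(tf.int32, shape=[None], name='center_words')

# One trainable row per vocabulary word, initialized uniformly at random.
embed_matrix = tf.Variable(
    tf.random_uniform([VOCAB_SIZE, EMBED_SIZE], -1.0, 1.0),
    name='embed_matrix')

# embedding_lookup selects the rows of embed_matrix indexed by the ids,
# giving a [batch_size, EMBED_SIZE] tensor of center-word vectors.
embed = tf.nn.embedding_lookup(embed_matrix, center_words, name='embed')
```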
I also explain Noise Contrastive Estimation (NCE) in some detail.
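As a companion sketch, this is roughly how NCE is wired up with tf.nn.nce_loss in TensorFlow 1.x; the shapes, the initializer, and the number of negative samples below are assumptions for illustration. The idea is to replace the full softmax over the vocabulary with a binary task: distinguish the true context word from a few words drawn from a noise distribution.

```python
import tensorflow as tf

VOCAB_SIZE = 50000  # assumed, for illustration
EMBED_SIZE = 128
NUM_SAMPLED = 64    # assumed number of negative (noise) samples

# Center-word embeddings (e.g. the `embed` tensor from the previous
# sketch) and the ids of the true context words.
embed = tf.placeholder(tf.float32, shape=[None, EMBED_SIZE], name='embed')
target_words = tf.placeholder(tf.int32, shape=[None, 1], name='target_words')

# Output-side ("target") vectors: one weight row and one bias per word,
# initialized here before being handed to the loss.
nce_weight = tf.Variable(
    tf.truncated_normal([VOCAB_SIZE, EMBED_SIZE],
                        stddev=1.0 / EMBED_SIZE ** 0.5),
    name='nce_weight')
nce_bias = tf.Variable(tf.zeros([VOCAB_SIZE]), name='nce_bias')

# nce_loss draws NUM_SAMPLED noise words per batch and scores the true
# context word against them with logistic regression.
loss = tf.reduce_mean(tf.nn.nce_loss(weights=nce_weight,
                                     biases=nce_bias,
                                     labels=target_words,
                                     inputs=embed,
                                     num_sampled=NUM_SAMPLED,
                                     num_classes=VOCAB_SIZE))
```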
We also cover TensorFlow name scopes, t-SNE, and embedding visualization in this video.
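A quick illustrative sketch of name scopes: ops created inside tf.name_scope get the scope as a name prefix, so TensorBoard can collapse them into a single expandable node. The scope and tensor names here are illustrative.

```python
import tensorflow as tf

# Ops created inside a name scope carry the scope as a name prefix,
# which TensorBoard uses to group them in the graph view.
with tf.name_scope('data'):
    center_words = tf.placeholder(tf.int32, shape=[None],
                                  name='center_words')

with tf.name_scope('embedding'):
    embed_matrix = tf.Variable(
        tf.random_uniform([50000, 128], -1.0, 1.0), name='embed_matrix')
    embed = tf.nn.embedding_lookup(embed_matrix, center_words, name='embed')

print(embed.name)  # 'embedding/embed:0' -- the scope prefixes the op name

# Writing the graph definition lets TensorBoard render the grouped graph.
writer = tf.summary.FileWriter('./graphs', tf.get_default_graph())
writer.close()
```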
There is a one-minute glitch in the screen-casting app from 14:40 to 15:40, but the explanation should still be understandable.
Comments

Thank you for providing these lectures. Just one note: at 5:22, I think it is not the count of how often a word comes AFTER another word, but the count of how often it comes BESIDE another word.

hossein.amirkhani

3:32 Haha, greetings from France! Thank you soooo much for those awesome lectures (and also to Stanford for making these slides available to the public). Keep up the good work, can't wait for lecture 6!

alexandrecarlier

Thank you for your great tutorials. I've learned a lot from these.

toantruong

Thanks Labhesh for these amazing lectures,
looking forward to lecture 5

abdulmajidmurad

NCE and Negative Sampling start at 21:20

xindong

Thank you for making this great video! Can I ask when you plan to post the next lecture?

weilizhang

Thanks for these videos!!
Keep it up :)

MrAndri

Hi Labhesh, thanks for the great explanation. Any plans on doing CNNs in the near future?

sandeepbhaskaran

Where do the target words get assigned to vectors (initialization)? Is it inside the nce_loss function, since we feed the embed tensor into it?

shaz

Do you have any explanation of sampled softmax?

shaz