NLP Demystified 12: Capturing Word Meaning with Embeddings

We'll learn a method for vectorizing words so that words with similar meanings end up with similar vectors (a.k.a. "embeddings"). This was a breakthrough in NLP: it boosted performance on a variety of NLP problems while addressing the shortcomings of earlier approaches such as one-hot encoding. We'll look at how to create these word embeddings and how to use them in our models.
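As a minimal sketch of creating and querying word vectors with gensim's Word2Vec (the toy corpus and hyperparameters below are illustrative assumptions, not the setup used in the video's demo):

# Minimal sketch: training skip-gram word vectors with gensim.
# Toy corpus and hyperparameters are illustrative, not the video's.
from gensim.models import Word2Vec

# Each sentence is a list of pre-tokenized words.
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of each word embedding
    window=2,         # context words considered on each side
    min_count=1,      # keep every word in this tiny corpus
    sg=1,             # 1 = skip-gram (vs. 0 = CBOW)
    negative=5,       # negative samples per positive pair (SGNS)
)

vector = model.wv["cat"]                  # a 50-dim numpy array
print(model.wv.most_similar("cat", topn=3))  # nearest neighbors by cosine similarity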
Timestamps
00:00:00 Word Vectors
00:00:37 One-Hot Encoding and its shortcomings
00:02:07 What embeddings are and why they're useful
00:05:12 Similar words share similar contexts
00:06:15 Word2Vec, a way to automatically create word embeddings
00:08:08 Skip-Gram With Negative Sampling (SGNS)
00:17:11 Three ways to use word vectors in models (see the sketch after the timestamps)
00:18:48 DEMO: Training and using word vectors
00:41:29 The weaknesses of static word embeddings
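On the three ways to use word vectors in a model (00:17:11), a common breakdown is: learn embeddings from scratch, initialize from pretrained vectors and fine-tune, or keep pretrained vectors frozen. The following Keras sketch assumes that breakdown; the sizes and the stand-in embedding_matrix are illustrative, not the video's:

import numpy as np
import tensorflow as tf

# Illustrative sizes; in practice embedding_matrix would hold real
# pretrained vectors (e.g., exported from a gensim model).
vocab_size, dim = 10_000, 50
embedding_matrix = np.random.rand(vocab_size, dim).astype("float32")

# Option 1: learn embeddings from scratch, jointly with the model.
scratch = tf.keras.layers.Embedding(vocab_size, dim)

# Option 2: start from pretrained vectors and fine-tune them.
fine_tuned = tf.keras.layers.Embedding(vocab_size, dim, trainable=True)
fine_tuned.build((1,))
fine_tuned.set_weights([embedding_matrix])

# Option 3: keep pretrained vectors frozen as fixed features.
frozen = tf.keras.layers.Embedding(vocab_size, dim, trainable=False)
frozen.build((1,))
frozen.set_weights([embedding_matrix])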
This video is part of Natural Language Processing Demystified, a free, accessible course on NLP.