NLP Demystified 8: Text Classification With Naive Bayes (+ precision and recall)
In this module, we'll apply everything we've learned so far to a core task in NLP: text classification. We'll learn:
- how to derive Bayes' theorem (the key formulas are summarized after this list)
- how the Naive Bayes classifier works under the hood
- how to train a Naive Bayes classifier in scikit-learn and, along the way, deal with issues that come up
- how things can go wrong when using accuracy for evaluation
- precision, recall, and using a confusion matrix
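For quick reference, here are the standard formulas behind the topics above (textbook definitions, not transcribed from the video):

```latex
% Bayes' theorem for a class c and a document d:
P(c \mid d) = \frac{P(d \mid c)\,P(c)}{P(d)}

% The "naive" assumption treats the words w_1, \dots, w_n of d as
% conditionally independent given the class; P(d) is constant across
% classes, so classification reduces to:
\hat{c} = \arg\max_{c}\; P(c)\prod_{i=1}^{n} P(w_i \mid c)

% Products of many small probabilities underflow, hence working in
% log space:
\hat{c} = \arg\max_{c}\;\Big(\log P(c) + \sum_{i=1}^{n}\log P(w_i \mid c)\Big)

% Precision and recall, read off confusion-matrix counts
% (TP = true positives, FP = false positives, FN = false negatives):
\text{precision} = \frac{TP}{TP + FP}, \qquad
\text{recall} = \frac{TP}{TP + FN}
```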
In the demo, we'll apply everything from the slides to build a full text classifier with spaCy and scikit-learn: starting from raw text, we'll preprocess and vectorize it, then build multiple versions of our text classifier, improving it with each iteration.
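As a rough standalone sketch of that pipeline (this is not the course's exact notebook; it assumes scikit-learn, spaCy, and the en_core_web_sm model are installed, and the tiny corpus and labels are hypothetical stand-ins for the dataset used in the video):

```python
import spacy
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# spaCy handles preprocessing; the parser and NER aren't needed here.
nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])

def preprocess(text):
    """Lemmatize and drop stopwords/punctuation before vectorizing."""
    doc = nlp(text)
    return " ".join(tok.lemma_.lower() for tok in doc
                    if not (tok.is_stop or tok.is_punct))

# Hypothetical toy corpus (1 = positive, 0 = negative) standing in for
# the dataset used in the video.
texts = [
    "I loved this movie, the acting was great",
    "A wonderful, heartfelt film",
    "Brilliant direction and a moving story",
    "One of the best films I have seen",
    "I hated every minute of it",
    "Terrible plot and wooden acting",
    "A boring, predictable mess",
    "The worst film of the year",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

X_train, X_test, y_train, y_test = train_test_split(
    [preprocess(t) for t in texts], labels,
    test_size=0.25, stratify=labels, random_state=42)

# Bag-of-words counts feed a multinomial Naive Bayes model; alpha=1.0
# is add-one (Laplace) smoothing, which avoids zero probabilities for
# words unseen in a given class.
vectorizer = CountVectorizer()
clf = MultinomialNB(alpha=1.0)
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print(confusion_matrix(y_test, preds))       # where the errors land
print(classification_report(y_test, preds))  # per-class precision/recall
```

The final two lines are the point of the evaluation discussion: a confusion matrix and per-class precision/recall expose failure modes that a single accuracy number can hide, especially on imbalanced data.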
Timestamps:
00:00:00 Naive Bayes
00:00:25 Classification as a core task in NLP
00:01:11 Revisiting conditional probability
00:03:26 Deriving Bayes' Theorem
00:04:12 The parts of Bayes' Theorem
00:05:43 A spatial example using Bayes' Theorem
00:07:33 Bayes' Theorem applied to text classification
00:08:30 The "naive" in Naive Bayes
00:09:34 The need to work in log space
00:10:05 Naive Bayes training and usage
00:13:27 How the "accuracy" metric can go wrong
00:14:10 Precision, Recall, and Confusion Matrix
00:17:47 DEMO: Training and using a Naive Bayes classifier
00:36:28 Naive Bayes recap and other classification models
This video is part of Natural Language Processing Demystified, a free, accessible course on NLP.