13.4.4 Sequential Feature Selection (L13: Feature Selection)


This video explains how sequential feature selection works. Sequential feature selection is a wrapper method for feature selection that uses the performance (e.g., accuracy) of a classifier to select good feature subsets in an iterative fashion. You can think of sequential feature selection as an efficient approximation to an exhaustive feature-subset search.
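
As a concrete illustration, here is a minimal sketch of sequential forward selection using the mlxtend package (mentioned in the comments below); it assumes scikit-learn and mlxtend are installed, and the dataset and classifier are arbitrary choices, not ones taken from the video:

    from sklearn.datasets import load_wine
    from sklearn.neighbors import KNeighborsClassifier
    from mlxtend.feature_selection import SequentialFeatureSelector as SFS

    # Greedy forward search: start from the empty set and repeatedly add
    # the feature that improves cross-validated accuracy the most.
    X, y = load_wine(return_X_y=True)
    sfs = SFS(KNeighborsClassifier(n_neighbors=3),
              k_features=5,        # stop once 5 features are selected
              forward=True,        # forward selection (False = backward)
              floating=False,      # plain SFS, no conditional exclusion
              scoring='accuracy',
              cv=5)
    sfs = sfs.fit(X, y)
    print(sfs.k_feature_idx_)      # indices of the selected subset
    print(sfs.k_score_)            # its cross-validated accuracy

Each round fits one model per remaining candidate feature, so selecting k of n features costs on the order of n*k model fits rather than the 2^n fits an exhaustive search would need.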



-------

This video is part of my Introduction to Machine Learning course.

-------

Comments

rezajalali:
I'm using your package, mlxtend, in my research, and it's awesome to have your lecture along with it; can't ask for more! Please continue the series.
Big thanks from the water sciences community!

leoquentin:
At 9:20, wouldn't there be a total of 29 subsets (each containing 28 variables), and not 28 subsets? Each subset consists of the entire dataset minus one feature, so in total there would be one subset for each feature in the original dataset, right? That makes 29 subsets, each consisting of 28 features. Am I missing something, or is this an error?

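For the counting question above, the arithmetic can be checked directly: removing exactly one feature from a 29-feature set gives one candidate subset per feature, i.e. C(29, 28) = 29 subsets of 28 features each. A quick check in Python (the counts 29 and 28 come from the comment, not from rewatching the video):

    from itertools import combinations

    features = range(29)
    # All subsets obtained by dropping exactly one of the 29 features:
    subsets = list(combinations(features, 28))
    print(len(subsets))  # 29
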
nguyenhuuuc:
High-end video quality & thorough content 💙 Really enjoy your lecture 👨‍💻 Thanks for posting, Dr. Sebastian!

TrainingDay:
I appreciate the video, your explanation style, and your mlxtend package. Thank you very much for all the work you do!!

juliangermek:
During sequential forward selection (around 16:30): do you also add a new feature to the set if the performance is worse than it was without any new feature?

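For context on the question above: in the usual greedy formulation, each round adds the best remaining candidate even if the score drops, and the search simply remembers the best subset seen so far (whether a particular implementation stops early instead is a design choice). A hypothetical sketch, where score() stands in for cross-validated classifier performance:

    def forward_selection(all_features, k, score):
        """Greedy SFS sketch; `score` maps a feature list to a number."""
        selected, best_subset, best_score = [], [], float('-inf')
        while len(selected) < k:
            # Add the candidate with the highest score this round,
            # whether or not it beats the previous round's score.
            candidate = max((f for f in all_features if f not in selected),
                            key=lambda f: score(selected + [f]))
            selected.append(candidate)
            if score(selected) > best_score:
                best_subset, best_score = list(selected), score(selected)
        return best_subset, best_score
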
guliteshabaeva:
This video is really helpful. Highly recommended!

oshanaiddidissanayake:
In sequential backward selection (at 10:55), stage 2: although we remove 1 feature, making the feature count 28, we still get 29 feature subsets, right? (The slide says 28.) Can you help me clarify this?

sf-rl:
Hi, thank you for the excellent explanation.
One question, please: in sequential floating forward selection, when does the floating round happen? Does the algorithm do it every round after adding a new feature, or does it happen at random?

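On the floating question above: in the classic SFFS formulation, the conditional exclusion step runs after every inclusion step, not at random; after each new feature is added, the algorithm tries removing previously selected features and keeps a removal only if it improves the score. In mlxtend this behavior is toggled with floating=True; a minimal sketch under the same assumptions as the example near the top:

    from sklearn.datasets import load_wine
    from sklearn.neighbors import KNeighborsClassifier
    from mlxtend.feature_selection import SequentialFeatureSelector as SFS

    X, y = load_wine(return_X_y=True)
    sffs = SFS(KNeighborsClassifier(n_neighbors=3),
               k_features=5,
               forward=True,
               floating=True,      # conditional exclusion after each add
               scoring='accuracy',
               cv=5).fit(X, y)
    print(sffs.k_feature_idx_, sffs.k_score_)
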
nitishkumar:
Great explanation, very clear. Keep going.

russwedemeyer:
Thanks so much for the videos. Great presentation.
I believe in your Feature Permutation Importance video you stated that the process was model agnostic. Is SFS also model agnostic? I would like to use it with an LSTM model, but I am not sure whether that would be a correct application.

abhishek-shrm:
Hi Sebastian, thank you so much for the videos; I really loved watching them. Just a few questions on feature selection techniques:
1. How do I pick one of the wrapper methods? I mean, how do I choose a (wrapper) feature selection technique? 😅
2. Why do I even have to use wrapper methods? Can't I simply put all the features into a random forest model, use its feature importances to select features, and train a new model on the selected features? That seems a lot simpler and faster to me than training 10-20 different models in a wrapper method.

rolfjohansen:
What if you have a mix of several categorical and continuous variables?

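One hedged answer to the mixed-types question above: a simple option is to one-hot encode the categorical columns before running the selector, so it operates on an all-numeric matrix and each dummy column is selected or dropped individually. A sketch with a hypothetical toy DataFrame (column names and values are invented for illustration):

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from mlxtend.feature_selection import SequentialFeatureSelector as SFS

    # Hypothetical toy data: two continuous columns, one categorical.
    df = pd.DataFrame({
        'age':    [23, 45, 31, 52, 38, 27],
        'income': [40, 85, 60, 95, 70, 45],
        'city':   ['a', 'b', 'a', 'c', 'b', 'a'],
    })
    y = [0, 1, 0, 1, 1, 0]

    # Expand 'city' into dummy columns so every feature is numeric.
    X = pd.get_dummies(df, columns=['city'])

    sfs = SFS(LogisticRegression(max_iter=1000),
              k_features=2, forward=True, scoring='accuracy', cv=2)
    sfs = sfs.fit(X, y)
    print(sfs.k_feature_names_)    # names of the selected columns

An alternative is to keep the raw columns and move the encoding into a scikit-learn Pipeline, but then the selector must pass column subsets the pipeline can still handle.
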
chandrimadebnath:
This is a bit of an out-of-the-box question, but suppose we treat feature selection as a multiobjective optimization problem solved with a wrapper-based approach, where most people use sensitivity and specificity, or accuracy and the number of features, as the objectives. What other objectives might we look into? Thank you in advance.