Practical Hyperparameter Optimisation

Any machine learning algorithm needs hyperparameters to define how the algorithm learns from your data and what the resulting model looks like. These hyperparameters can have a large impact on how well the model learns from your data and how well it performs on data it has never seen before.

The optimization of these hyperparameters is often a time-intensive and computationally hard process. In the next Rootlabs @ Lunch, Hans Tierens explains several approaches to optimizing and automating this process, so we can build better machine learning models in no time.
He will give an overview of what we need, how and when we need it, and how we can start doing it now!
Comments

I think the lecturer misunderstands how pruning works. For pruning to be effective, there must be an opportunity to continue or stop evaluation mid-training; in the case of boosting or tree ensembles, that means training incrementally with partial_fit. If you don't use it, it doesn't matter that you receive a should-prune signal from the pruner: you can't act on it, because you have already finished training and scoring. That's why the author got the same runtime with "pruning" as without it. Btw, the Optuna docs suffer from initializing and partitioning the data inside the objective function, and I don't understand why everyone copies that without thinking.
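
For illustration, here is a minimal sketch of the pattern the comment describes, assuming Optuna and scikit-learn; the dataset, model, and hyperparameter range are placeholders, not the talk's code. The data is split once outside the objective, and each trial reports an intermediate score per partial_fit step so the pruner can actually stop bad trials early:

```python
import numpy as np
import optuna
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Load and split the data ONCE, outside the objective,
# so every trial reuses it instead of re-partitioning per trial.
X, y = load_digits(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)
classes = np.unique(y_train)

def objective(trial):
    alpha = trial.suggest_float("alpha", 1e-6, 1e-1, log=True)
    clf = SGDClassifier(alpha=alpha, random_state=0)
    # Incremental training: one partial_fit pass per step gives the
    # pruner a chance to kill a bad trial early, instead of receiving
    # the should-prune signal only after training is already done.
    for step in range(30):
        clf.partial_fit(X_train, y_train, classes=classes)
        score = clf.score(X_valid, y_valid)
        trial.report(score, step)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return score

study = optuna.create_study(direction="maximize",
                            pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
```

Without the intermediate report/should_prune loop, every trial runs to completion and "pruning" changes nothing, which is exactly the runtime observation above.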

anatolyalekseev