Python Tutorial: Hyperparameter Tuning in Python | Intro


---
Welcome to the first lecture of Hyperparameter Tuning in Python. I am Alex, a Data Scientist from Sydney, Australia.

So why study this course? Algorithms today are becoming more and more complex, and with that complexity the number of hyperparameters to choose from keeps growing.

It becomes increasingly important to learn how to efficiently find optimal combinations, as this search will likely take up a large portion of your time.

Often it is quite easy to simply run Scikit Learn functions on their default settings, or to copy code from a tutorial or book, without really digging under the hood. However, what lies underneath is of vital importance to good model building. You may be surprised what you find!

This course will use a dataset about credit card defaults.

It contains a number of variables related to the demographics and financial history of a group of people. The target column shows whether or not they defaulted on their next loan payment.

It has already been pre-processed and split, ready for modeling. Note that at times we will take smaller samples to ensure we can run the code.

You can find out more about it at the link in the slides.

To understand hyperparameters, let's first start with parameters. What are parameters?

Parameters are components of the final model that are learned through the modeling process.

Crucially, you do not set these. You cannot set these.

The algorithm discovers them through undertaking its steps.

To make this concrete, consider a simple logistic regression model.

We create the estimator and fit it to the data with default settings.

Since the logistic regression model is a linear model, we will get beta coefficients on our variables. These are found in the coef_ property of our logistic regression object.
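
As a rough sketch of these steps (a small synthetic dataset stands in here for the course's credit card data, and all column names except the PAY ones are illustrative):

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the pre-processed credit card default data;
# column names other than PAY_0 / PAY_4 are invented for illustration
X, y = make_classification(n_samples=500, n_features=5, random_state=42)
X_train = pd.DataFrame(
    X, columns=['LIMIT_BAL', 'AGE', 'PAY_0', 'PAY_4', 'BILL_AMT1'])
y_train = y

# Create the estimator and fit it to the data with default settings
log_reg_clf = LogisticRegression()
log_reg_clf.fit(X_train, y_train)

# The learned beta coefficients live in the coef_ attribute
print(log_reg_clf.coef_)
```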

However, if we print these out we can see it is a bit messy.

Let us clean this up by creating a list of the original variable names, zipping this up with the coefficients, and formatting into a neat DataFrame for easy viewing.

We can now sort the DataFrame and print the top 3 results for brevity.
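
Continuing from the fitted log_reg_clf above, that tidy-up might look roughly like this (the DataFrame column labels are just one choice):

```python
# Pair each original variable name with its learned coefficient
original_variables = list(X_train.columns)
zipped_together = list(zip(original_variables, log_reg_clf.coef_[0]))
coefs_df = pd.DataFrame(zipped_together, columns=['Variable', 'Coefficient'])

# Sort by coefficient size and print the top 3 results for brevity
coefs_df = coefs_df.sort_values(by='Coefficient', ascending=False)
print(coefs_df.head(3))
```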

Do you recall setting PAY_0 to have a coefficient of 0.000751? I don't. The coefficients are parameters because we did not set them ourselves; they were learned during the modeling process. In our data, the PAY variables relate to how many months people have previously delayed their payments. We can see that having a high number of months of delayed payments makes someone more likely to default next month.

To know what parameters an algorithm will produce, you need to know a bit about the algorithm itself and how it works, and to consult the Scikit Learn documentation to see where the parameters are stored in the returned object. For a given algorithm, the parameters are listed in its documentation under the 'Attributes' section, not the Parameters section.

So what are the parameters in tree-based models that do not have linear coefficients?

The parameters of this model are in the nodes of the trees used to build the model, such as which feature each node split on and at what value.

To demonstrate, let us first build a random forest estimator and fit it to our data, setting the max_depth to be quite low purely for visualization purposes.

Then we can pull out a single tree, found in the random forest estimator's 'estimators_' attribute, to visualize.
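
A sketch of those two steps; the exact max_depth the course uses is not shown in this transcript, so 2 here is an assumption:

```python
from sklearn.ensemble import RandomForestClassifier

# Keep max_depth low purely so the extracted tree is small enough to draw
rf_clf = RandomForestClassifier(max_depth=2, random_state=42)
rf_clf.fit(X_train, y_train)

# estimators_ holds the individual fitted trees; pull out the first one
chosen_tree = rf_clf.estimators_[0]
```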

For simplicity we will just show the image, but you can explore visualizing this yourself using the mentioned packages.
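
The packages from the slides are not named in this transcript, but one self-contained option is Scikit Learn's own plot_tree, sketched here:

```python
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree

# Draw the extracted tree, labelling each node with its feature name
plt.figure(figsize=(12, 6))
plot_tree(chosen_tree, feature_names=list(X_train.columns), filled=True)
plt.show()
```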

Here we see a graph of the nodes including the variables and values used in the splits.

We can see that the very first split was on the variable PAY_4, and it sent samples left or right depending on whether they had a value above or below 1 for this variable.

Do you remember setting this decision? I certainly don't!

So how do we pull out the splits we saw here visually, but in a programmatic way? Let's take, say, the left, second-from-top node.

The tree we pulled out is a Scikit Learn 'tree' object, so we can find the variable each node split on by indexing into its .feature attribute and matching that up with our X_train columns to get the name.

The level used to split is then found in the .threshold attribute.

And we can then print this out.
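
Putting that together as a sketch: in Scikit Learn's depth-first node ordering, node 1 is the root's left child, i.e. the left, second-from-top node.

```python
# The node arrays live on the underlying tree_ structure of the tree object
node = 1  # root's left child; assumed to be an internal node here
split_column = chosen_tree.tree_.feature[node]  # -2 would mean a leaf
split_column_name = X_train.columns[split_column]
split_value = chosen_tree.tree_.threshold[node]

print("This node split on feature {}, at a value of {}".format(
    split_column_name, split_value))
```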

Let's do some exercises to further explore the parameters of these models!
