189 - Hyperparameter tuning for dropout, # neurons, batch size, # epochs, and weight constraint

Code generated in the video can be downloaded from here:
Comments

Thanks for your lesson, it helps me a lot. I'm a mechanical engineer looking for new tools, like machine learning and deep learning, to solve problems, and your channel is clear and very educational. Please keep teaching us, and thank you once again.

MatheusGuitarRock

All your video lectures are very useful.

Thanks a lot, sir.

ebrahimahmedal-rahawe

This was so helpful; it's scary how I have to train over and over again with different parameters. Thanks 👍

bobbyorr

Great lesson on hyperparameter tuning, thank you! Do you have any tips or articles about hyperparameter tuning in neural networks for time series problems?

danja

Great tutorial! By the way, can I use this grid search for a time series problem? Do I need to change keras.wrappers.scikit_learn import KerasClassifier to a regressor or something? I can't quite find the documentation for this wrapper. Thanks.

azwraithlance
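
A minimal sketch of the regressor route, assuming the scikeras package (the old keras.wrappers.scikit_learn module is deprecated in recent TensorFlow versions). For time series, passing TimeSeriesSplit as the cv argument keeps every fold in chronological order; the data and layer sizes here are only illustrative.

import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from scikeras.wrappers import KerasRegressor
from tensorflow import keras

def build_model():
    # Small regression network with a single linear output unit
    model = keras.Sequential([
        keras.layers.Dense(32, activation="relu", input_shape=(10,)),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Random data standing in for lagged time series features
X = np.random.rand(100, 10).astype("float32")
y = np.random.rand(100).astype("float32")

reg = KerasRegressor(model=build_model, verbose=0)
param_grid = {"batch_size": [16, 32], "epochs": [10, 20]}

# TimeSeriesSplit never trains on the future and tests on the past
grid = GridSearchCV(reg, param_grid, cv=TimeSeriesSplit(n_splits=3),
                    scoring="neg_mean_squared_error")
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)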

Great video! How can I force Keras Tuner to use default hyperparameter values for the first optimization iteration?

programerahmed
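
As far as I know, Keras Tuner has no built-in switch that forces the first trial to use the defaults, but every hyperparameter accepts a default value, and you can build and evaluate that default configuration manually before starting the search. A sketch, assuming the keras_tuner package; the layer sizes are illustrative.

import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    model = keras.Sequential([
        keras.layers.Dense(hp.Int("units", 32, 256, step=32, default=64),
                           activation="relu", input_shape=(784,)),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# A fresh HyperParameters object resolves every hp.* call to its default,
# so this builds exactly the default configuration
default_model = build_model(kt.HyperParameters())
default_model.summary()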

Your videos are really helpful!
Please allow me to ask this question: how do I proceed when an unseen class comes in?
I'd say set up the base model and freeze all its layers except the last one, then compile and fit again.
But before all that, how does the model realize it is facing a new class?
Could you point me in the right direction?
Thanks!!

silviasanmartindeporres
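
On the freeze-and-refit part of the question above, a minimal sketch; the saved-model path and class count are hypothetical. Note that a classifier cannot recognize a new class on its own: one common heuristic is to flag inputs whose maximum softmax probability falls below a threshold, and those flagged samples are what you would label and refit on.

import numpy as np
from tensorflow import keras

base_model = keras.models.load_model("base_model.h5")  # hypothetical path
base_model.trainable = False  # freeze the whole base

# Reuse everything up to the penultimate layer as a frozen feature extractor
features = keras.Model(base_model.input, base_model.layers[-2].output)

num_classes = 11  # hypothetical: the original 10 classes plus the new one
outputs = keras.layers.Dense(num_classes, activation="softmax")(features.output)
new_model = keras.Model(features.input, outputs)

new_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
# new_model.fit(x_with_new_class, y_with_new_class, epochs=...)

# Simple heuristic: a low maximum softmax score suggests an unfamiliar input
def looks_unfamiliar(model, x, threshold=0.5):
    probs = model.predict(x, verbose=0)
    return np.max(probs, axis=1) < threshold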

Hi Sreeni... Do you know of any resources on how to apply Bayesian optimization via HyperOpt for MLP, CNN, and LSTM model optimization?

JJGhostHunters
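
A minimal Hyperopt sketch for an MLP along those lines, assuming the hyperopt package; the random data just keeps it self-contained. TPE is Hyperopt's Bayesian-style search algorithm, and the same objective-plus-space pattern extends to CNN and LSTM hyperparameters.

import numpy as np
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials
from tensorflow import keras

X = np.random.rand(200, 784).astype("float32")
y = np.random.randint(0, 10, 200)

def objective(params):
    model = keras.Sequential([
        keras.layers.Dense(int(params["neurons"]), activation="relu",
                           input_shape=(784,)),
        keras.layers.Dropout(params["dropout"]),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    hist = model.fit(X, y, epochs=3, batch_size=int(params["batch_size"]),
                     validation_split=0.2, verbose=0)
    # Hyperopt minimizes whatever "loss" the objective returns
    return {"loss": hist.history["val_loss"][-1], "status": STATUS_OK}

space = {
    "neurons": hp.quniform("neurons", 16, 128, 16),
    "dropout": hp.uniform("dropout", 0.0, 0.5),
    "batch_size": hp.quniform("batch_size", 16, 64, 16),
}

best = fmin(objective, space, algo=tpe.suggest, max_evals=10, trials=Trials())
print(best)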

How about optimizing fine-tuning through layer reduction?

XX-vujo

Have you not evaluated the model with the best parameters on the unseen data?

marknguyen
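
The point in the comment above is worth making concrete: hold out a test set before tuning, and score only the winning model on it. A self-contained sketch with scikit-learn's MLPClassifier standing in for the Keras model from the video.

from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

grid = GridSearchCV(MLPClassifier(max_iter=300),
                    {"hidden_layer_sizes": [(32,), (64,)]}, cv=3)
grid.fit(X_train, y_train)

# The CV score guided the search; the test score is the honest estimate
print("best CV score:", grid.best_score_)
print("test score:", grid.best_estimator_.score(X_test, y_test))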

Simply a waste of time. It would be better to provide some intuitive basis rather than just showing the results; if you share the code with someone, he can run it and observe it himself, so what's special here in your video?

lazy.researcher

The current version of scikeras changes the optimizer to RMSprop even if you declare optimizer='adam' in the build function. I tried to check the optimizer after converting the Keras model to a sklearn-compatible model, but the .get_params() method didn't work. I didn't have time to explore it deeply... so if you are using any of these methods, it is worth checking that the wrapper is not changing the optimizer you defined in the build function.

pedroramon
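
One way to check the claim above: fit the wrapper on a tiny batch and inspect the optimizer on the underlying Keras model, which SciKeras exposes as the model_ attribute after fitting. A sketch, assuming the scikeras package; the network itself is a throwaway.

import numpy as np
from scikeras.wrappers import KerasClassifier
from tensorflow import keras

def get_model():
    model = keras.Sequential([
        keras.layers.Dense(16, activation="relu", input_shape=(784,)),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

clf = KerasClassifier(model=get_model, epochs=1, verbose=0)
X = np.random.rand(32, 784).astype("float32")
y = np.random.randint(0, 10, 32)
clf.fit(X, y)

# model_ is the compiled Keras model SciKeras actually trains;
# this prints Adam if the wrapper kept the optimizer from get_model
print(type(clf.model_.optimizer).__name__)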

Thank you very much!
How can I use my own data from Google Drive instead of this part?

import matplotlib.pyplot as plt
from tensorflow import keras

mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Preview the first 20 training digits with their labels
plt.figure(figsize=(10, 10))
for i in range(20):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(x_train[i], cmap=plt.cm.binary)
    plt.xlabel(y_train[i])
plt.show()

# Normalize pixel values to [0, 1]
x_train, x_test = x_train / 255.0, x_test / 255.0

# Reshape the 28x28 images to 784-element vectors so we can
# work with dense layers (for this demo's purposes)
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)

mulugetashitie
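
On the Google Drive question above: in Colab you can mount the drive and load your own arrays in place of mnist.load_data(). A sketch with a hypothetical file path and key names.

from google.colab import drive
drive.mount('/content/drive')

import numpy as np
# Hypothetical file; replace the path and keys with your own
data = np.load('/content/drive/MyDrive/my_dataset.npz')
x_train, y_train = data['x_train'], data['y_train']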