Deep Learning Hyperparameter Tuning in Python, TensorFlow & Keras



Don't forget to subscribe if you enjoyed the video :D
Comments

That was excellent. Need more videos on DL.

iftekharanam

Thank you for this video! I have been learning about deep learning algorithms over the holiday break! Hope we see more videos from you! I love your channel and content! Keep up the awesome work, happy holidays and happy new year! :)

billybobandboshow

Thank you. I am learning deep learning, and this helped me a lot.

rudrathakkar

@GreggHogg Hi,
I got stuck with Keras Tuner. It seems that the code below only creates the 'model_builder' function once. If I change anything, like adding a Dropout layer, and rerun the function, it keeps displaying the message shown below the code, as if it were consistently reaching back to the first version of the function.
Any clues on how to fix that? I would like to experiment with the 'model_builder' function (add/remove layers, dropouts, etc.) and then observe what parameters the tuner generates.

import tensorflow as tf
import keras_tuner as kt
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

def model_builder(hp):
    model = Sequential()

    hp_activation = hp.Choice('activation', values=['relu', 'tanh'])
    hp_layer_1 = hp.Int('layer_1', min_value=2, max_value=32, step=2)
    hp_layer_2 = hp.Int('layer_2', min_value=2, max_value=32, step=2)
    hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])

    model.add(Dense(units=hp_layer_1, activation=hp_activation))
    model.add(Dense(units=hp_layer_2, activation=hp_activation))
    model.add(Dense(units=1, activation='sigmoid'))

    # The pasted compile line was garbled ("optimizer = ="); an Adam optimizer
    # with the tuned learning rate is the usual intent here.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=hp_learning_rate),
                  loss='binary_crossentropy',
                  metrics=[tf.keras.metrics.Recall()])

    return model

tuner = kt.Hyperband(model_builder,
                     objective=kt.Objective("val_recall", direction="max"),
                     max_epochs=50,
                     factor=3,
                     seed=42)

Comment : Reloading Tuner from

tomaszzielonka
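
[Editor's note] A likely cause of the behaviour above: Keras Tuner persists trial state under directory/project_name and reloads it whenever the tuner is re-instantiated in the same working directory, which is exactly what the "Reloading Tuner from ..." message reports. The overwrite flag is real Keras Tuner API; the rest of this sketch simply mirrors the code above:

# Re-creating the tuner with overwrite=True discards the cached trials,
# so edits to model_builder take effect; a fresh project_name works too.
tuner = kt.Hyperband(model_builder,
                     objective=kt.Objective("val_recall", direction="max"),
                     max_epochs=50,
                     factor=3,
                     seed=42,
                     overwrite=True)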

Side comment: we divide x by 255 because pixel intensities are stored as 8-bit values in the range 0-255, so dividing rescales them to [0, 1]. An RGB value for white is (255, 255, 255), which becomes (1, 1, 1) after dividing, while black stays (0, 0, 0). So, an important note when training on images: first convert the images to grayscale.

BB-
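
[Editor's note] A tiny illustration of the scaling that comment describes (the array values here are made up):

import numpy as np

# 8-bit pixel intensities live in [0, 255]; dividing by 255 rescales
# them to [0, 1], whether the image is grayscale or RGB.
x = np.array([0, 128, 255], dtype=np.uint8)
x_scaled = x.astype("float32") / 255.0  # -> [0.0, 0.502, 1.0]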

Thanks for an amazing video! Is there a way to tune hyperparameters like in sklearn, without using keras-tuner?

haneulkim
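
[Editor's note] One common approach is the third-party SciKeras wrapper, which exposes a Keras model as a scikit-learn estimator so GridSearchCV can drive the search. A minimal sketch; the model, parameter grid, and data names are illustrative assumptions, not from the video:

from scikeras.wrappers import KerasClassifier
from sklearn.model_selection import GridSearchCV
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

def get_model(hidden_units=16, activation="relu"):
    # Return an uncompiled model; SciKeras compiles it with the
    # loss/optimizer passed to KerasClassifier below.
    model = Sequential()
    model.add(Dense(hidden_units, activation=activation))
    model.add(Dense(1, activation="sigmoid"))
    return model

clf = KerasClassifier(model=get_model,
                      loss="binary_crossentropy",
                      optimizer="adam",
                      epochs=20,
                      verbose=0)

# The "model__" prefix routes a parameter to get_model's arguments.
param_grid = {"model__hidden_units": [8, 16, 32],
              "model__activation": ["relu", "tanh"]}
search = GridSearchCV(clf, param_grid, scoring="recall", cv=3)
# search.fit(X_train, y_train)  # X_train/y_train: your own data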

Great video, man, but tbh I was actually expecting some sort of automation of the hyperparameter tuning.

dakshbhatnagar

Can you suggest a data science course?
I have already studied NumPy, pandas, and Matplotlib.

prabinbasyal

Good afternoon. I have a task and I have not been able to create the Keras Tuner for it: 5000 rows with 4 columns, where each column holds random numbers from 0 to 9, and I need an output of only 4 numbers. Here is the code:

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dropout, Dense

# Initialising the RNN
model = Sequential()
# Adding the input layer and the first LSTM layer
model.add(Bidirectional(LSTM(neurons1, return_sequences=True),
                        input_shape=(window_length, number_of_features)))
# Adding a first Dropout layer
model.add(Dropout(0.2))
# Adding a second LSTM layer (input_shape is only needed on the first layer)
model.add(Bidirectional(LSTM(neurons2, return_sequences=True)))
# Adding a second Dropout layer
model.add(Dropout(0.2))
# Adding a third LSTM layer
model.add(Bidirectional(LSTM(neurons3, return_sequences=True)))
# Adding a fourth LSTM layer
model.add(Bidirectional(LSTM(neurons4, return_sequences=False)))
# Adding a fourth Dropout layer
model.add(Dropout(0.2))
# Adding the first output layer with ReLU activation function
model.add(Dense(output_neurons, activation='relu'))
# Adding the last output layer with softmax activation function
model.add(Dense(number_of_features, activation='softmax'))

Thank you very much

luisalbertoburbano
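
[Editor's note] A minimal sketch of how the model above could be wrapped for Keras Tuner. This is an illustration, not code from the thread: window_length is a placeholder, number_of_features comes from the comment's 4 columns, and all search ranges are assumptions.

import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dropout, Dense

window_length = 50        # assumption: length of each input window
number_of_features = 4    # from the comment: 4 columns

def model_builder(hp):
    # Let the tuner choose the LSTM widths and the dropout rate.
    units1 = hp.Int('units1', min_value=32, max_value=128, step=32)
    units2 = hp.Int('units2', min_value=32, max_value=128, step=32)
    dropout = hp.Float('dropout', min_value=0.1, max_value=0.5, step=0.1)

    model = Sequential()
    model.add(Bidirectional(LSTM(units1, return_sequences=True),
                            input_shape=(window_length, number_of_features)))
    model.add(Dropout(dropout))
    model.add(Bidirectional(LSTM(units2, return_sequences=False)))
    model.add(Dropout(dropout))
    model.add(Dense(number_of_features, activation='softmax'))

    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

tuner = kt.Hyperband(model_builder,
                     objective='val_accuracy',
                     max_epochs=30,
                     factor=3,
                     overwrite=True)  # rebuild rather than reload cached trials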