152 - How to visualize convolutional filter outputs in your deep learning model?

This tutorial explains the few lines of code needed to visualize the outputs of convolutional layers in any deep learning model.

Code generated in the video can be downloaded from here:
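
In outline, the approach is: pick a convolutional layer, build a truncated model that ends at that layer, predict on an image, and plot each output channel as a grayscale map. A minimal sketch of the idea, using an untrained VGG16 and a random image as stand-ins for a real model and input:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

model = tf.keras.applications.VGG16(weights=None)  #stand-in model

#Truncate the model at the conv layer of interest (index 1 = block1_conv1)
feature_model = tf.keras.Model(inputs=model.inputs, outputs=model.layers[1].output)

img = np.random.rand(1, 224, 224, 3).astype(np.float32)  #stand-in image
feature_maps = feature_model.predict(img)  #shape (1, 224, 224, 64)

#Plot the 64 feature maps in an 8x8 grid
plt.figure(figsize=(12, 12))
for i in range(64):
    ax = plt.subplot(8, 8, i + 1)
    ax.set_xticks([])
    ax.set_yticks([])
    plt.imshow(feature_maps[0, :, :, i], cmap='gray')
plt.show()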
Comments

Thank you. I urge anyone who wants to understand machine learning to watch all the videos, because Mr. Sreeni covers the process in a way that is essential for anyone who wants to understand ML. Just look at the topics and see what I mean.

fadilyassin

I have learned so much from you in the two days since I found your channel! Thank you :)

MaralSheikhzadeh

This is very valuable when model debugging is necessary!

PUBUDUCG

Thank you very much, this really helped me visualize what the CNN layers are doing.

rhezapaleva

Thank you, it's a very useful video for anyone who wants to dig in deeper. Can you make more in-depth videos on different semantic segmentation problems, with U-Net etc.?

Ahmetkumas

Thank you Sreeni for the great video, it came out at the right time :)

ausialfrai

every video is a gem <3 simple and easy

RizwanAli-jyub

Sir, there are 128 filters in the 3rd convolution layer, so why is it showing the output for only 64 filters?...

VivekKumar-zwwx
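
For what it's worth, the plotting loop in the video draws a fixed 8 x 8 grid (columns = 8, rows = 8), so only the first 64 of the 128 feature maps ever get a subplot. Sizing the grid from the actual filter count would show all of them, e.g.:

import math

n_filters = 128                        #filters in the layer being plotted
columns = 8
rows = math.ceil(n_filters / columns)  #16 rows instead of 8, so all 128 fit
print(rows * columns)                  #128 subplot slots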

You say that by setting conv_layer_index = [1, 3, 6] you will be working with roughly the first six layers, but the model that gets called is a totally different one, actually the three Conv2D layers counting from the end; that's why the shape is different from the previous examples. It is a confusing example.

juanodonnell
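
Whenever the layer indices are in doubt, enumerating the loaded model settles it. A quick check, using an untrained VGG16 as a stand-in:

import tensorflow as tf

model = tf.keras.applications.VGG16(weights=None)  #stand-in model
for i, layer in enumerate(model.layers):
    print(i, layer.name, layer.output.shape)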

Thank you Sir, it was very helpful and clear.

qusayhamad

I never thought about simply cutting the model and predicting on it; it seems obvious now.
Thanks

seaniam
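
The "cut" really is that small with the Keras functional API, and the truncated model can even return several layers' outputs at once, which is how one predict call yields several sets of feature maps. A sketch with VGG16 as a stand-in:

import tensorflow as tf

model = tf.keras.applications.VGG16(weights=None)  #stand-in model
conv_layer_index = [1, 3, 6]  #layers to tap
outputs = [model.layers[i].output for i in conv_layer_index]
model_short = tf.keras.Model(inputs=model.inputs, outputs=outputs)
#model_short.predict(img) now returns a list of three feature-map arrays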

Thank you very much for your amazing tutorial

ahmedalaa

Thanks. What happens if I’m just predicting something like age? How can I apply this?

iangleeson
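
The same trick should apply unchanged to a regression model: the truncation happens at the conv layers, so the output head (a single linear unit for age instead of a softmax) never enters into it. A toy sketch, with a hypothetical age-regression CNN:

import tensorflow as tf

#Hypothetical age-regression CNN
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1)  #single linear unit predicting age
])

#Tap the first conv layer exactly as you would for a classifier
taps = tf.keras.Model(inputs=model.inputs, outputs=model.layers[0].output)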

Amazing, amazing video. Thank you so much.

azaralizadeh

Amazing tutorial sir, I'm learning so much from you. Can you please make a tutorial on capsule networks for image analysis?

samarafroz

Thank you, it's a useful video for me!

armankarimipy

Sir, how do we do this with the U-Net from your earlier video? I used this code... but it isn't working; at the console it shouts: "
File "F:\UNet1.py", line 137, in <module>
filters, biases =

ValueError: not enough values to unpack (expected 2, got 0)"

I placed this code just after model.summary(). Let me paste the entire code here for your convenience... (leaving out the resizing part):

import tensorflow as tf  #imports assumed from the original tutorial
import numpy as np
import random
import matplotlib.pyplot as plt
from skimage.io import imshow

#Build the model
inputs = tf.keras.layers.Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
s = tf.keras.layers.Lambda(lambda x: x / 255)(inputs)

#Contraction path
c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(s)
c1 = tf.keras.layers.Dropout(0.1)(c1)  #dropout rates assumed from the original tutorial
c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c1)
p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)

c2 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p1)
c2 = tf.keras.layers.Dropout(0.1)(c2)
c2 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c2)
p2 = tf.keras.layers.MaxPooling2D((2, 2))(c2)

c3 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p2)
c3 = tf.keras.layers.Dropout(0.2)(c3)
c3 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c3)
p3 = tf.keras.layers.MaxPooling2D((2, 2))(c3)

c4 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p3)
c4 = tf.keras.layers.Dropout(0.2)(c4)
c4 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c4)
p4 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(c4)

c5 = tf.keras.layers.Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p4)
c5 = tf.keras.layers.Dropout(0.3)(c5)
c5 = tf.keras.layers.Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c5)

#Expansive path
u6 = tf.keras.layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(c5)
u6 = tf.keras.layers.concatenate([u6, c4])
c6 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u6)
c6 = tf.keras.layers.Dropout(0.2)(c6)
c6 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c6)

u7 = tf.keras.layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(c6)
u7 = tf.keras.layers.concatenate([u7, c3])
c7 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u7)
c7 = tf.keras.layers.Dropout(0.2)(c7)
c7 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c7)

u8 = tf.keras.layers.Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(c7)
u8 = tf.keras.layers.concatenate([u8, c2])
c8 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u8)
c8 = tf.keras.layers.Dropout(0.1)(c8)
c8 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c8)

u9 = tf.keras.layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(c8)
u9 = tf.keras.layers.concatenate([u9, c1], axis=3)
c9 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u9)
c9 = tf.keras.layers.Dropout(0.1)(c9)
c9 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c9)

outputs = tf.keras.layers.Conv2D(1, (1, 1), activation='sigmoid')(c9)

model = tf.keras.Model(inputs=[inputs], outputs=[outputs])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy', tf.keras.metrics.MeanIoU(num_classes=2)])
model.summary()


#Understand the filters in the model
#Let us pick the first hidden layer as the layer of interest.
layer = model.layers #Conv layers at 1, 3, 6, 8, 11, 13, 15
filters, biases = layer[1].get_weights()
print(layer[1].name, filters.shape)


# plot filters

fig1 = plt.figure(figsize=(8, 12))
columns = 8
rows = 8
n_filters = columns * rows
for i in range(1, n_filters + 1):
    f = filters[:, :, :, i-1]
    fig1 = plt.subplot(rows, columns, i)
    fig1.set_xticks([])  #Turn off axis
    fig1.set_yticks([])
    plt.imshow(f[:, :, 0], cmap='gray')  #Show only the filters from the 0th channel (R)
plt.show()

#### Now plot filter outputs

#Define a new truncated model to only include the conv layers of interest
#conv_layer_index = [1, 3, 6, 8, 11, 13, 15]
conv_layer_index = [1, 3, 6] #To define a shorter model
outputs = [model.layers[i].output for i in conv_layer_index]
model_short = tf.keras.Model(inputs=model.inputs, outputs=outputs)
model_short.summary()

#Input shape to the model is 224 x 224, so resize the input image to this shape.
from keras.preprocessing.image import load_img, img_to_array
img = load_img('monalisa.jpg', target_size=(224, 224)) #VGG uses 224 as input

# convert the image to an array
img = img_to_array(img)
# expand dimensions to match the shape of model input
img = np.expand_dims(img, axis=0)

# Generate feature output by predicting on the input image
feature_output = model_short.predict(img)


columns = 8
rows = 8
for ftr in feature_output:
    fig = plt.figure(figsize=(12, 12))
    for i in range(1, columns*rows + 1):
        fig = plt.subplot(rows, columns, i)
        fig.set_xticks([])  #Turn off axis
        fig.set_yticks([])
        plt.imshow(ftr[0, :, :, i-1], cmap='gray')
    plt.show()



#Modelcheckpoint
checkpointer = tf.keras.callbacks.ModelCheckpoint('model_for_nuclei.h5', verbose=1, save_best_only=True)

callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
    tf.keras.callbacks.TensorBoard(log_dir='logs', histogram_freq=1)]

results = model.fit(X_train, Y_train, validation_split=0.1, batch_size=16, epochs=25, callbacks=callbacks)



idx = random.randint(0, len(X_train))


preds_train = model.predict(X_train[:int(X_train.shape[0]*0.9)], verbose=1)
preds_val = model.predict(X_train[int(X_train.shape[0]*0.9):], verbose=1)
preds_test = model.predict(X_test, verbose=1)


preds_train_t = (preds_train > 0.5).astype(np.uint8)
preds_val_t = (preds_val > 0.5).astype(np.uint8)
preds_test_t = (preds_test > 0.5).astype(np.uint8)


# Perform a sanity check on some random training samples
ix = random.randint(0, len(preds_train_t))
imshow(X_train[ix])
plt.show()
imshow(np.squeeze(Y_train[ix]))  #ground-truth mask (assumed from the original tutorial)
plt.show()
imshow(np.squeeze(preds_train_t[ix]))  #thresholded prediction (assumed from the original tutorial)
plt.show()

# Perform a sanity check on some random validation samples
ix = random.randint(0, len(preds_val_t))
imshow(X_train[int(X_train.shape[0]*0.9):][ix])  #validation image (assumed from the original tutorial)
plt.show()
imshow(np.squeeze(Y_train[int(Y_train.shape[0]*0.9):][ix]))  #ground-truth mask (assumed)
plt.show()
imshow(np.squeeze(preds_val_t[ix]))  #thresholded prediction (assumed)
plt.show()


Sir, can you please tell me where the problem is...

omrahulpandey
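
The traceback above points at the likely cause: in this U-Net, model.layers[0] is the Input and model.layers[1] is the Lambda rescaling layer, which has no weights, so get_weights() returns an empty list and the two-name unpack fails. Pointing the index at an actual Conv2D layer should clear it; a sketch, assuming the model built in the code above:

#model.layers[1] is the Lambda(x / 255) layer, so get_weights() returns []
#The first Conv2D of the U-Net as pasted sits at index 2
filters, biases = model.layers[2].get_weights()
print(model.layers[2].name, filters.shape)  #(3, 3, IMG_CHANNELS, 16)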

Thank you, it's a very useful video.

kemmounramzy

I have a 5x5 filter but it prints a 3x3 filter.

nirajgautam
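
If a layer was defined with 5x5 kernels but the plot shows 3x3, the index being visualized most likely points at a different layer; printing every layer's weight shape makes the mismatch obvious. A quick check, with an untrained VGG16 as a stand-in:

import tensorflow as tf

model = tf.keras.applications.VGG16(weights=None)  #stand-in model
for i, layer in enumerate(model.layers):
    weights = layer.get_weights()
    if weights:  #only layers with weights (conv/dense) print a kernel shape
        print(i, layer.name, weights[0].shape)  #(kh, kw, in_ch, out_ch) for conv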

Sir, I copied this code from your GitHub page, but I get an error as follows.

Negative dimension size caused by subtracting 2 from 1 for '{{node max_pooling2d_2/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 2, 1], padding="VALID", strides=[1, 2, 2, 1]](Placeholder)' with input shapes: [?, 1, 1, 256].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:/Users/Bilen-Intel/PycharmProjects/TenSorFlow/ConvulationLayersVision.py", line 36, in <module>
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\training\tracking\base.py", line 517, in _method_wrapper
result = method(self, *args, **kwargs)
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\sequential.py", line 223, in add
output_tensor = layer(self.outputs[0])
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 951, in __call__
return self._functional_construction_call(inputs, args, kwargs,
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1090, in
outputs =
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 822, in _keras_tensor_symbolic_call
return self._infer_output_signature(inputs, args, kwargs, input_masks)
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 863, in _infer_output_signature
outputs = call_fn(inputs, *args, **kwargs)
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\layers\pooling.py", line 295, in call
outputs = self.pool_function(
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 4606, in max_pool
return gen_nn_ops.max_pool(
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 5326, in max_pool
_, _, _op, _outputs =
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 748, in _apply_op_helper
op = g._create_op_internal(op_type_name, inputs, dtypes=None,
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\func_graph.py", line 590, in _create_op_internal
return super(FuncGraph, self)._create_op_internal( # pylint: disable=protected-access
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\ops.py", line 3528, in _create_op_internal
ret = Operation(
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\ops.py", line 2015, in __init__
self._c_op = _create_c_op(self._graph, node_def, inputs,
File "C:\Users\Bilen-Intel\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\ops.py", line 1856, in _create_c_op
raise ValueError(str(e))
ValueError: Negative dimension size caused by subtracting 2 from 1 for '{{node max_pooling2d_2/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 2, 1], padding="VALID", strides=[1, 2, 2, 1]](Placeholder)' with input shapes: [?, 1, 1, 256].

Process finished with exit code 1

guardrepresenter
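
That "negative dimension size" error with input shape [?, 1, 1, 256] usually means the input image is too small for the number of pooling layers: each (2, 2) max-pool halves the height and width, and once a feature map reaches 1x1 there is nothing left for the next pool to reduce. Checking the declared input size against the pool count is the usual fix; the arithmetic in a sketch:

import tensorflow as tf

#A 224 x 224 input survives five 2x2 pools: 224->112->56->28->14->7.
#A 32 x 32 input is down to 1x1 after five, so a sixth pool fails.
x = tf.keras.Input((224, 224, 3))
y = tf.keras.layers.Conv2D(64, (3, 3), padding='same', activation='relu')(x)
y = tf.keras.layers.MaxPooling2D((2, 2))(y)
print(y.shape)  #(None, 112, 112, 64)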