235 - Pre-training U-net using autoencoders - Part 1 - Autoencoders and visualizing features

Code generated in the video can be downloaded from here:

The video summarizes the concept of autoencoders and walks you through the code for using an autoencoder to reconstruct a single image. It also walks you through the code for displaying the feature responses of various layers in a deep learning model.
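
For orientation, the reconstruction exercise can be sketched roughly as follows. This is a minimal, hypothetical example (not the exact code from the video), assuming Keras/TensorFlow, a 256x256 RGB input, and a random array standing in for the real image:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D

SIZE = 256  # assumed input size; the video may use a different resolution

# Encoder: downsample the image into a compact representation.
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(SIZE, SIZE, 3)),
    MaxPooling2D((2, 2), padding='same'),
    Conv2D(16, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2), padding='same'),
    # Decoder: upsample back to the original resolution.
    Conv2D(16, (3, 3), activation='relu', padding='same'),
    UpSampling2D((2, 2)),
    Conv2D(32, (3, 3), activation='relu', padding='same'),
    UpSampling2D((2, 2)),
    Conv2D(3, (3, 3), activation='sigmoid', padding='same'),  # reconstructed RGB output
])
model.compile(optimizer='adam', loss='mean_squared_error')

# Reconstructing a single image: the image is both input and target.
img = np.random.rand(1, SIZE, SIZE, 3).astype('float32')  # stand-in for a real, normalized image
model.fit(img, img, epochs=50, verbose=0)
reconstruction = model.predict(img)

Training the image against itself forces the network to compress and then rebuild it; the learned encoder weights are what later parts of the series reuse to initialize a U-Net encoder.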
Comments

This is exciting. I am having trouble improving the performance of my segmentation models. Looking forward to Part 2. Thanks for the great work that you are doing.

aggreym.muhebwa

I wanted to take a moment to reach out and thank you for creating such informative and helpful videos.

mdyounusahamed

Just watched this and without any hesitation clicked the subscribe and like buttons. YOU are absolutely great. Keep it up.

lion

It is a pretty good and clear video explaining how to train the network and display intermediate parameters, and you can learn skills for tuning it in a U-Net setting.

yuanchen

You are one of the best content creators for computer vision. Kindly do it for NLP as well.

akashravi

Thanks Sir. I couldn't wait until you uploaded the code to GitHub, so I typed the code from your screen. After a few typo fixes, I can see what you explained on my screen.

kyawnaingwin

Great concept, and this blew my mind. I never knew you could use pre-trained weights other than the ones trained on datasets like ImageNet.

KarthikArumugham

Please start a series on Mask R-CNN. Thanks for your contribution to the computer vision world.

umairsabir

Since the original images might be divided into 256x256 patches, when training the encoder do you recommend also including the patches that don't contain the region of interest? What about when training the decoder? Also, what is the effect of "smooth blending" of patches on encoder vs. decoder training?

maxmaximus

Is it possible to add blocks to U-Net's encoder and decoder to incorporate image restoration algorithms like grey world or Retinex?

aravindangovindharajou

Thank you for this amazing content. Do you have any idea about bushfire satellite datasets? I can only find satellite imagery of the fires in CSV files, where I can't see semantic segmentation results. Are fires suitable for semantic segmentation?

farahhaddad

line 77 : model_for_visualization = Model(inputs = my_model.input, outputs = outputs)
I am getting the error
name 'Model' is not defined
Please help me with this.

shubhamshevale
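
On the NameError above: it usually just means the Keras Model class was never imported. A minimal sketch of the fix, assuming the TensorFlow-bundled Keras and a small stand-in network in place of the trained model from the video:

import numpy as np
from tensorflow.keras.models import Model  # the missing import behind the NameError
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D

# Stand-in for the trained model (two conv blocks, for illustration only).
inp = Input(shape=(256, 256, 3))
x = Conv2D(32, (3, 3), activation='relu', padding='same', name='conv1')(inp)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same', name='conv2')(x)
my_model = Model(inputs=inp, outputs=x)

# Outputs of the layers to visualize (here: every conv layer).
outputs = [layer.output for layer in my_model.layers if 'conv' in layer.name]

# The quoted line works once Model has been imported.
model_for_visualization = Model(inputs=my_model.input, outputs=outputs)
feature_maps = model_for_visualization.predict(np.random.rand(1, 256, 256, 3))

With standalone Keras the import would be from keras.models import Model instead.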

Thanks dear sir, could you please upload the next video?

wahidullah

How much RAM does your device have? Mine crashed while trying to run this autoencoder.

neerajsaxena

Hi sir, I want to ask: could we use a CNN with HOG and a linear SVM classifier for object detection?

fraoney
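
On the HOG question above: HOG plus a linear SVM is the classic pre-CNN detection pipeline, usually applied with a sliding window, and a CNN can replace or complement it as the feature extractor. A rough, hypothetical sketch of the HOG + linear SVM part, assuming scikit-image and scikit-learn and random toy crops:

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# Toy data: 64x64 grayscale crops, label 1 = object, 0 = background.
rng = np.random.default_rng(0)
crops = rng.random((40, 64, 64))
labels = np.repeat([0, 1], 20)

# One HOG descriptor per crop (Dalal-Triggs style features).
features = np.array([
    hog(c, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for c in crops
])

# Linear SVM on the HOG features; for detection, the classifier is slid
# over the image at multiple scales and followed by non-max suppression.
clf = LinearSVC().fit(features, labels)
print(clf.predict(features[:5]))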

How can we apply U-Net segmentation to images which have an odd shape such as 450x1450? How do we use patchify in this case?

shriniketankulkarni
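
On the odd-shape question above: one common workaround (a sketch under assumptions, not code from the video) is to pad the image up to the next multiple of the patch size before calling patchify, then crop the prediction back afterwards. Assuming a single-channel image and 256x256 patches:

import numpy as np
from patchify import patchify

patch = 256
img = np.random.rand(450, 1450)  # stand-in for a real single-channel image

# Pad height and width up to the next multiple of the patch size
# (450x1450 -> 512x1536) so the image divides evenly into 256x256 tiles.
pad_h = (patch - img.shape[0] % patch) % patch
pad_w = (patch - img.shape[1] % patch) % patch
padded = np.pad(img, ((0, pad_h), (0, pad_w)), mode='reflect')

# Non-overlapping 256x256 patches: shape (2, 6, 256, 256) in this example.
patches = patchify(padded, (patch, patch), step=patch)

After prediction, the patched mask can be reassembled (e.g. with unpatchify) to 512x1536 and cropped back to the original 450x1450.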

Nice video Sreeni, please send me a link to the dataset.

padma

Would you please make a tutorial about Mask R-CNN?

SindyanCartoonJordanian

@DigitalSreeni one more question, please answer when you're free.
Case 1: I trained a YOLOv4 model with two classes. Now I have to train the same model with another two classes added, without losing the weights of the previous two classes. Is this possible?
My answer: reserve extra nodes in the output layer. Can I do this?
Your answer for Case 1:

Case 2:
Dataset description: 4k images with two balanced classes. Using this dataset I trained two models with tiny-YOLOv4.
Model 1: trained on all 4k images for 20k max_batches, getting 84% accuracy and an average loss of 0.12xxx.
Model 2:
Cycle 1: trained on 3k images for 20k max_batches, getting 94% accuracy.
Cycle 2: trained on the remaining 1k images for 20k max_batches, starting from the last weights of Cycle 1. After completion I am getting 94% accuracy and an average loss of 0.0xx.

My question is: both models were trained on the same set of images, so why are the results different? Is training on a smaller set of images better? Even though I increased training to 20k+20k max_batches, there is no improvement.
Note: the cfg files are the same for both models.
Thanks

kamaleshkarthi

Sir, can we use this concept for an image classification problem?

geethaneya
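
On the classification question above: in principle yes; the same pre-trained encoder can feed a classification head instead of a decoder. A minimal sketch (one possible way, not code from the video), with a freshly built stand-in encoder and a hypothetical 10-class problem:

from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, GlobalAveragePooling2D, Dense

# Stand-in for the autoencoder's trained encoder (in practice, load its weights).
inp = Input(shape=(256, 256, 3))
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inp)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2))(x)
encoder = Model(inputs=inp, outputs=x, name='pretrained_encoder')
encoder.trainable = False  # optionally freeze the pre-trained weights

# Classification head on top of the encoder features.
classifier = Sequential([
    encoder,
    GlobalAveragePooling2D(),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax'),  # hypothetical 10-class problem
])
classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])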