236 - Pre-training U-net using autoencoders - Part 2 - Generating encoder weights for U-net

Code generated in the video can be downloaded from here:

The video walks you through the process of training an autoencoder model and using the encoder weights for U-net.
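In outline, the transfer can look like the following (a minimal sketch, not the video's exact code; build_unet() and the file name are illustrative, and it assumes the autoencoder's encoder and the U-net's encoder were built with identical layer names, e.g. via a shared conv_block() helper):

from tensorflow.keras.models import load_model

# Hypothetical sketch: copy encoder weights from a trained autoencoder
# into a fresh U-net by matching layer names.
autoencoder = load_model('autoencoder.h5')  # trained on unlabeled images
unet = build_unet()  # untrained U-net whose encoder layer names match

for layer in unet.layers:
    try:
        # Transfer weights wherever a layer with the same name and
        # compatible shapes exists in the autoencoder.
        layer.set_weights(autoencoder.get_layer(layer.name).get_weights())
    except ValueError:
        pass  # decoder layers with no match keep their random init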
Comments

Your videos have that sense of unique dedication and motivation about this profound subject! Please consider transformers (I know it's not exactly in the microscopy domain).

switches_slips_turnouts

Hi Sir! This is so amazing! I watched another video of yours, "Tutorial 124 - using pretrained models as encoders in U-net", and I wonder what the difference is between using pretrained models and autoencoders to initialize the encoder weights in U-net? When might we prefer one over the other?
Also, I'm a bit confused about the autoencoder structure. I thought it needs a bottleneck layer that aggressively reduces the dimension of the encoder embeddings. However, in the video I think you transitioned directly from the Activation layer of the encoder (shape 16, 16, 1024) to the conv layer of the decoder. Can you help with my confusion? Thanks a lot!
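For context, a fully convolutional autoencoder often has no dense bottleneck at all: the smallest feature map itself acts as the bottleneck, and the decoder's conv layers follow it directly. A minimal sketch (layer sizes are illustrative, not the video's exact architecture):

from tensorflow.keras import layers, models

# Minimal fully convolutional autoencoder; no dense bottleneck layer.
inp = layers.Input((256, 256, 3))
x = layers.Conv2D(64, 3, activation='relu', padding='same')(inp)
x = layers.MaxPooling2D()(x)   # 128x128
x = layers.Conv2D(128, 3, activation='relu', padding='same')(x)
x = layers.MaxPooling2D()(x)   # 64x64: smallest feature map = bottleneck
x = layers.Conv2D(128, 3, activation='relu', padding='same')(x)
x = layers.UpSampling2D()(x)   # 128x128
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
x = layers.UpSampling2D()(x)   # 256x256
out = layers.Conv2D(3, 3, activation='sigmoid', padding='same')(x)
autoencoder = models.Model(inp, out)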

MyTran-dy

Hello Sir, I learn a lot from your profound knowledge and your way of presenting it. I want to segment smoke plumes from images. What type of segmentation will be suitable, and what refinements should be made to the existing models (considering that smoke has no general structure or shape)? I will be grateful if you give some advice regarding this. Thank you very much.

abhishekdey

Thank you, you are amazing Sir. Much love from Italy

David-pwfr

Amazing, sir, already subscribed; we're learning so much from you, thank you! Please make videos on the transfer learning approach in pathology.

Anonymous-ztjt

Thank you for the video. Wondering if using a backbone like ResNet or EfficientNet would give better or worse performance than training my own weights with an autoencoder?

minipc

Sir,
The program is crashing on Google Colab because img_array uses excessive memory. How can I do this without crashing? The image size is 512; the code works fine when SIZE=256.

# Stack the image list into a (num_images, SIZE, SIZE, 3) array
img_array = np.reshape(img_data, (len(img_data), SIZE, SIZE, 3))
# Scale pixels to [0, 1]; astype() allocates a second full-size copy
img_array = img_array.astype('float32') / 255.
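A possible workaround (a sketch, assuming img_data is the same list of images as above): pre-allocate the float32 array and fill it image by image, so a uint8 copy and a float32 copy of the whole dataset never coexist in memory.

import numpy as np

# Sketch: pre-allocate the final float32 array and fill it in place, so
# peak memory is roughly one copy of the data instead of two or three.
img_array = np.empty((len(img_data), SIZE, SIZE, 3), dtype=np.float32)
for i, img in enumerate(img_data):
    img_array[i] = np.asarray(img, dtype=np.float32) / 255.0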

shuvrodas

Super tutorial. Then how many images would you need to mask (annotate) to train the U-net with the autoencoder weights?

faceprofesor

Make videos on capsule networks and medical imaging.

Anonymous-ztjt

Super nice tutorials... very helpful. I have a question: is the ultimate goal of using an autoencoder to speed up the process, or do the weights help to find specific sections of the image?

faceprofesor

Thank you for this amazing content. Do you have any idea about bushfire satellite datasets? I can only find satellite imagery of the fires in CSV files, where I can't see semantic segmentation as a result! And are fires suitable for semantic segmentation?

farahhaddad

Thanks for explaining; very informative videos. I am just wondering if it is possible to train a model (CNN U-net, or autoencoders) with masked zones in the inputs and ground truth as the outputs, and then predict the continuous values by applying this model to new data?
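That inpainting-style setup is certainly possible. A sketch of the data preparation (mask_random_zone and the variable names are illustrative, not from the video): zero out random zones in the inputs, keep the untouched images as targets, and train any encoder-decoder with a regression loss such as MSE.

import numpy as np

rng = np.random.default_rng()

def mask_random_zone(img, zone=64):
    # Return a copy of img with one random zone x zone square zeroed out.
    h, w = img.shape[:2]
    y = rng.integers(0, h - zone)
    x = rng.integers(0, w - zone)
    masked = img.copy()
    masked[y:y + zone, x:x + zone] = 0.0
    return masked

# x_train = np.stack([mask_random_zone(img) for img in clean_images])
# y_train = np.stack(clean_images)  # model learns to fill the masked zones
# model.compile(optimizer='adam', loss='mse')  # regression on pixel values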

Nurassyl

For our model to be perfect or nearly perfect, we train it by varying hyperparameters and find the local or global minima for an optimal solution.
My point is: if we need the global minimum, why don't we just plot our whole dataset and find the global minimum?
That way we wouldn't need much time to find an optimal solution.

markadyash

Dataset description: 4k images with two balanced classes. Using this dataset I trained two models with tiny-YOLOv4.

Model 1: trained on all 4k images, 20k max_batches. Getting 84% accuracy, avg loss 0.12xxx.
Model 2:
Cycle 1: I trained on 3k images with 20k max_batches, getting 94% accuracy.
Cycle 2: I trained on 1k images with 20k max_batches, using the last weights of cycle 1. After completion I am getting 94% accuracy and avg loss 0.0xx.

Even though I increased model 1 to 20k+20k max_batches, there is no improvement.

My question is: I trained both models on the same dataset, so why are the results different?

Is training on a smaller dataset good?

Note: the cfg files are the same for both models.
The computer configuration and GPU resources are also the same for both models.

Can you justify it... please.

Thanks.

kamaleshkarthi

Very well explained, sir. Can you please explain the code in the PyTorch framework for U-Net?

rohitgupta

Hello Sir, I religiously follow your videos. Always bang-on content, and so well aligned with the current area of my project. I had a question regarding loading only the encoder weights. Forgive me for the long text that follows.
So, I am trying to apply transfer learning from one crop to another. I trained my U-net model for Crop A (binary segmentation). Then, to segment Crop B, I loaded all the pre-trained weights into both the encoder and decoder parts and gradually, from the bottom up, made 0/1/2/3 layers in the encoder trainable while freezing the others (the initial layers in the encoder). In each case, my model performs the same as training the model (for Crop B) from scratch (without pre-trained weights). My question is: why did you use pre-trained weights only in the encoder? Can I use them in the decoder as well? Is that causing my model to perform poorly? Is there anything else you can advise for model improvement for Crop B?


2) Also, while unfreezing layers in the encoder, which layers should I consider? Should it always be the output of the 2nd Conv2D layer from conv_block (as per your code)?

Would highly appreciate it if you could please advise on this.
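A sketch of the freezing scheme described in this comment, in Keras (the file name is hypothetical, and slicing the layer list in half is a stand-in for selecting the contracting-path layers by name):

from tensorflow.keras.models import load_model

model = load_model('crop_a_unet.h5')  # U-net pre-trained on Crop A

# Treat the first half of the layers as the encoder; in a real model,
# select the contracting-path layers by name instead.
encoder_layers = model.layers[:len(model.layers) // 2]
n_trainable = 2  # unfreeze the deepest 0/1/2/3 encoder layers, as above

for layer in encoder_layers:
    layer.trainable = False  # freeze the whole encoder first
for layer in encoder_layers[len(encoder_layers) - n_trainable:]:
    layer.trainable = True   # then unfreeze the deepest n_trainable layers

# Recompile after changing trainable flags so they take effect.
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])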

krishnakabi

ImportError: cannot import name 'img_to_array' from 'keras.preprocessing.image'
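This error usually indicates a newer TensorFlow/Keras version, where the function moved out of keras.preprocessing.image. A likely fix:

# In recent TensorFlow/Keras releases the helper lives in keras.utils:
from tensorflow.keras.utils import img_to_array

# Older releases used:
# from keras.preprocessing.image import img_to_array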

liutprandofeinstaub