216 - Semantic segmentation using a small dataset for training (& U-Net)

What can you expect when you perform semantic segmentation using a small dataset (fewer than 100 images) and the U-Net architecture? Does augmentation help, and what about transfer learning?

Code generated in the video can be downloaded from here:

To annotate images and generate labels, you can use APEER (for free):
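
For reference, a minimal U-Net-style segmentation model in Keras might look like the sketch below. It assumes 256x256 single-channel inputs and a binary (single-class) mask; the filter counts are illustrative and this is not necessarily the exact network built in the video.

# Minimal U-Net sketch in Keras (illustrative sizes, binary output)
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # two 3x3 convolutions, as in the standard U-Net contracting/expanding blocks
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1)):
    inputs = layers.Input(input_shape)

    # encoder: conv block then downsample, doubling filters at each level
    c1 = conv_block(inputs, 16)
    c2 = conv_block(layers.MaxPooling2D()(c1), 32)
    c3 = conv_block(layers.MaxPooling2D()(c2), 64)

    # bottleneck
    b = conv_block(layers.MaxPooling2D()(c3), 128)

    # decoder: upsample, concatenate the matching encoder features, conv block
    u3 = conv_block(layers.Concatenate()(
        [layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b), c3]), 64)
    u2 = conv_block(layers.Concatenate()(
        [layers.Conv2DTranspose(32, 2, strides=2, padding="same")(u3), c2]), 32)
    u1 = conv_block(layers.Concatenate()(
        [layers.Conv2DTranspose(16, 2, strides=2, padding="same")(u2), c1]), 16)

    # single sigmoid channel for a binary mask
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(u1)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_unet()
model.summary()
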
Comments

If I have gained any understanding about U-Nets and image processing in general, it is thanks to you :)

SoumyadipMal

Thank you for this very helpful video.

kavithashagadevan

Hi, thanks for this video. Can you please make a video on YOLO v3 and Mask R-CNN?

umairsabir

Hi. I want to thank you very much for your efforts; I have really learned a lot from your videos. I have a favor to ask: could you make content on object detection using Faster R-CNN and Mask R-CNN? I love your explanations very much and would like to hear the explanation and implementation from you. Thank you very much. God save you ❤❤❤❤

mohamedramzyibrahim

Sir, I have a question. In your video you show the data augmentation method for single-class segmentation with the additional code preprocessing_function = lambda x: np.where(x>0, 1, 0).astype(x.dtype), which re-binarizes the augmented mask so that every pixel value above 0 becomes 1. How do I change the code when dealing with multi-class labelled mask images, or should I just ignore the additional code? Thank you

James-urhx
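
For the multi-class case asked about above, one option (an assumption, not code from the video) is to replace the binarizing lambda with a preprocessing function that snaps every interpolated pixel back to the nearest valid class label, so the mask stays an integer label map after augmentation:

import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

valid_labels = np.array([0, 1, 2, 3])  # hypothetical class IDs present in your masks

def snap_to_valid_labels(mask):
    # for every pixel, pick the valid label closest to the (possibly interpolated) value
    nearest = np.argmin(np.abs(mask[..., np.newaxis] - valid_labels), axis=-1)
    return valid_labels[nearest].astype(mask.dtype)

mask_datagen = ImageDataGenerator(
    rotation_range=30,
    horizontal_flip=True,
    fill_mode="reflect",
    preprocessing_function=snap_to_valid_labels,  # instead of np.where(x>0, 1, 0)
)

Snapping can still mislabel pixels where two classes blend at a boundary, so if your Keras version exposes interpolation_order=0 in ImageDataGenerator, nearest-neighbour interpolation is a cleaner way to keep the labels intact.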

Hi Sreeni, I'm trying your approach on my own dataset, where my images are RGB JPEGs and the labels are PNGs (single class, as in your case). What changes to the method should I consider?

sumodnandanwar
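
A sketch of how loading might change for RGB JPEG images with single-class PNG masks (the paths and sizes below are placeholders, not from the video): read the images with three channels, keep the masks single-channel, and give the network a three-channel input shape.

import os
import cv2
import numpy as np

IMG_DIR, MASK_DIR = "data/images", "data/masks"   # hypothetical folders
SIZE = 256

images, masks = [], []
for name in sorted(os.listdir(IMG_DIR)):
    img = cv2.imread(os.path.join(IMG_DIR, name), cv2.IMREAD_COLOR)           # 3 channels
    msk = cv2.imread(os.path.join(MASK_DIR, os.path.splitext(name)[0] + ".png"),
                     cv2.IMREAD_GRAYSCALE)                                     # 1 channel
    images.append(cv2.resize(img, (SIZE, SIZE)))
    masks.append(cv2.resize(msk, (SIZE, SIZE), interpolation=cv2.INTER_NEAREST))

X = np.array(images, dtype=np.float32) / 255.0                       # (N, 256, 256, 3)
y = (np.array(masks)[..., np.newaxis] > 0).astype(np.float32)        # binary (N, 256, 256, 1)

# The U-Net itself then needs input_shape=(256, 256, 3) instead of (256, 256, 1).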

Thank you for the incredibly helpful session. It has been instrumental in aiding my dissertation, and I sincerely appreciate it. During my research, I came across several papers that utilize the LSTM model to train a CNN model, incorporating a large image patch stack. I would greatly appreciate it if you could explain to me how this approach enhances feature extraction and training based on localized features. Additionally, I'm curious to know whether this method improves the overall classification accuracy of the model.

dilendrasajini

Thank you so much. I wanted to ask you: in what instances should you augment your validation data? You mentioned (at 21:30) that for a classification problem we would not do that. Why? I would highly appreciate an answer from you.

OGIMxGaMeR
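
On the validation-augmentation question above, a common pattern (a sketch of the usual setup, not a quote from the video) is to apply random transforms only to the training generator and leave validation data untouched apart from rescaling, so the validation metric reflects unmodified images:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=45,
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode="reflect",
)

val_datagen = ImageDataGenerator(rescale=1.0 / 255)   # no random transforms for validation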

That's a great video! Very detailed explanations. Thanks a lot!

basicscientist

I have one question:
In your video 208, you say we can use class_weights to handle an unbalanced dataset. The problem is that it is not compatible with the output's shape, but you said in the video that it worked for you.
Can I ask how you made it work?
Thank you :)

finlyk
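
On the class_weights question above, Keras' class_weight argument generally does not accept per-pixel (height x width) targets, so one common workaround (a sketch assuming binary masks and made-up weight values, not a quote from video 208) is to fold the weights into the loss itself:

import tensorflow as tf

def weighted_binary_crossentropy(w_background=1.0, w_foreground=5.0):   # hypothetical weights
    def loss(y_true, y_pred):
        bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)       # per-pixel BCE, shape (B, H, W)
        weights = y_true[..., 0] * w_foreground + (1.0 - y_true[..., 0]) * w_background
        return tf.reduce_mean(bce * weights)
    return loss

# model.compile(optimizer="adam", loss=weighted_binary_crossentropy(1.0, 5.0))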

I have a question:
Why do you define the function "my_image_mask_generator" at 23:42?
Couldn't the following work just as well?
my_generator = zip(image_generator, mask_generator)
validation_dataset = zip(valid_img_generator, valid_mask_generator)

What is the difference from what you do?

P.S. One "thank you" is not enough for the help I take from you. I am a very fresh PhDc and your videos are a real treasure for me. You are amazing!

mager
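
For context, the wrapper being asked about is usually just an explicit generator around the zipped flows (a sketch assuming image_generator, mask_generator, valid_img_generator and valid_mask_generator are the .flow() iterators created earlier with matching seeds):

def my_image_mask_generator(image_generator, mask_generator):
    # explicitly yield (image_batch, mask_batch) tuples, the format model.fit() expects
    # from a generator, and provide a place to add extra per-batch processing later
    for img, mask in zip(image_generator, mask_generator):
        yield img, mask

my_generator = my_image_mask_generator(image_generator, mask_generator)
validation_dataset = my_image_mask_generator(valid_img_generator, valid_mask_generator)

In Python 3, zip() itself returns a lazy iterator of tuples, so zipping the two flows directly often behaves the same; the explicit wrapper mainly makes the yielded structure obvious and avoids the old Python 2 behaviour, where zip() would try to exhaust the infinite Keras iterators.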

Sir, if the image dimensions are not divisible by 256, how can we still make use of patchify? E.g., 1620x1444 is the size of the images I am dealing with, and I cannot crop the image width-wise, since I would lose some image information then...

rachelbj
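
One way to keep all pixels when the dimensions are not divisible by the patch size (a sketch assuming a single-channel image and the patchify library) is to pad the image up to the next multiple of 256, patchify the padded array, and crop the padding off again after stitching predictions back together:

import numpy as np
from patchify import patchify

patch = 256
img = np.random.randint(0, 255, (1620, 1444), dtype=np.uint8)   # stand-in for the real image

pad_h = (patch - img.shape[0] % patch) % patch                   # 1620 -> padded to 1792
pad_w = (patch - img.shape[1] % patch) % patch                   # 1444 -> padded to 1536
padded = np.pad(img, ((0, pad_h), (0, pad_w)), mode="reflect")

patches = patchify(padded, (patch, patch), step=patch)           # shape (7, 6, 256, 256)

# Apply the same padding to the masks, and after prediction/unpatchify simply
# crop the result back to the original 1620x1444 size.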

Hi, I want to use echocardiography frames for annotating the left ventricle; is it possible? If possible, how do I download the annotated images from APEER?

MadanKumarmadan

Thank you for the enriching videos. How can I visualize these model architectures in a plot?

manjunathhegde
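
Two standard ways to inspect a Keras architecture are model.summary() for a text listing and tf.keras.utils.plot_model() for a diagram (the latter needs the pydot and graphviz packages installed); the tiny model below is only a stand-in:

import tensorflow as tf
from tensorflow.keras import layers, models

# trivial stand-in model; replace with your own U-Net
inputs = layers.Input((256, 256, 1))
x = layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
model = models.Model(inputs, outputs)

model.summary()                                    # layers, output shapes, parameter counts
tf.keras.utils.plot_model(model, to_file="model_architecture.png",
                          show_shapes=True, show_layer_names=True)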

Thank you, Dr. I was waiting for a class like this.
I have a question: do you know of a model like U-Net to count fruits in a tree 🌴? Here we have occlusion. I tested Mask R-CNN with 100 images and I'm not getting good results. Any idea?

surflaweb

Thanks for your informative content; it is highly helpful.

venkatesanr

At 20:40 you present the preprocessing function. Will it only work for binary segmentation, or for multi-class too?

bielmonaco

Dear Sir,
I studied your code carefully, but when I use the KITTI dataset the class values in the masks are:
print("Unique values in the mask are: ", np.unique(mask_for_plot))
Unique values in the mask are: [ 4 7 8 11 13 14 17 21 22 23 24 26]
and these values range between 0 and 36.
How can I change your code from tutorial 121 to match this type of data?
Please help me.

زهراءطلالعبدالمختار-هندسةالحاس
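
For masks whose labels are sparse IDs like the ones printed above, one approach (a sketch with a synthetic stand-in array, not the tutorial 121 code) is to remap the IDs to consecutive class indices before one-hot encoding, so the softmax output size matches the number of classes actually present:

import numpy as np
from tensorflow.keras.utils import to_categorical

masks = np.random.choice([4, 7, 8, 11, 13, 14], size=(2, 256, 256))   # stand-in for real masks

unique_labels, remapped = np.unique(masks, return_inverse=True)
masks_remapped = remapped.reshape(masks.shape)        # values are now 0..n_classes-1
n_classes = len(unique_labels)

masks_onehot = to_categorical(masks_remapped, num_classes=n_classes)  # (2, 256, 256, n_classes)
# keep unique_labels so predictions can be mapped back to the original IDs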

Hello Sir. When I save the model to my Google Drive it is not stored as a directory. How can I convert it to TFLite?

radiator
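
A sketch of converting a trained Keras model to TFLite (assuming TensorFlow 2.x; the file paths are placeholders):

import tensorflow as tf

# compile=False avoids needing custom_objects if the model used a custom loss/metric
model = tf.keras.models.load_model("my_model.h5", compile=False)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("my_model.tflite", "wb") as f:
    f.write(tflite_model)

# If the model was saved in the SavedModel directory format instead, use:
# converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model_dir")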

As always you are awesome🤩!!!
Thank you!!!

rushikeshdarge