177 - Semantic segmentation made easy (using segmentation models library)

Code generated in the video can be downloaded from here:

Segmentation Models library info:
pip install segmentation-models

Recommended for Colab execution:
TensorFlow ==2.1.0
keras ==2.3.1

For this demo, it also works on a local workstation with:
Python 3.5
TensorFlow ==1.
keras ==2
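For reference, a Colab setup matching the pins above might look like the following (a sketch; the exact pins come from the version listing above, and segmentation-models needs a compatible Keras to import cleanly):

```shell
# Pin the versions listed above before installing the library
pip install tensorflow==2.1.0 keras==2.3.1
pip install segmentation-models
```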

Comments

You are definitely the best hands-on tutor online, Sreeni.

goodwilrv

I am also a biomedical engineer; your tutorials are the best.

pallavi_

Your lectures are amazing and so easy to follow. Thank you so much for all your work!

Dustyinc

Hi Sreeni, I have recently discovered your channel and found it extremely useful. It would be really helpful if you could create a video on how to create mask images for datasets with a larger number of classes (non-binary).

deepalisharma

Thank you very much for the tutorial. I learnt a lot from your videos. I hope you will do a tutorial on semantic segmentation using the HRNet model one day. God bless you, Sreeni...

windiasugiarto

Thanks for the tutorial. 3D UNet would be very interesting for an upcoming video, since I work with 3D localization microscopy data.

ownhaus

Thanks a lot for the really good content, I am learning a lot from your videos daily. I have one question regarding image size. I have high-resolution microscopic images (2048 x 2048) and I want to do cell segmentation:

- Do I need to crop these images into smaller patches to train this model? If yes, do I need to do this patching operation during inference as well?

- Or can I use the high-resolution 2048 x 2048 images and start training directly? If I can train the model with high-resolution images, how does the model deal with the change of dimension (the original model architecture is not suitable for high-resolution input images, or am I misunderstanding something)?

jaydip
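On the patching question above: a common approach is to tile the large image into fixed-size patches for both training and inference, then stitch the predictions back together. A minimal numpy sketch of the tiling round trip (patch size 256 is an illustrative choice, not from the video):

```python
import numpy as np

def to_patches(img, ps):
    """Split an (H, W) image into non-overlapping (ps, ps) patches.
    Assumes H and W are divisible by ps."""
    h, w = img.shape
    return (img.reshape(h // ps, ps, w // ps, ps)
               .swapaxes(1, 2)
               .reshape(-1, ps, ps))

def from_patches(patches, h, w):
    """Reassemble patches produced by to_patches back into an (h, w) image."""
    ps = patches.shape[1]
    return (patches.reshape(h // ps, w // ps, ps, ps)
                   .swapaxes(1, 2)
                   .reshape(h, w))

img = np.arange(2048 * 2048).reshape(2048, 2048)
patches = to_patches(img, 256)                    # shape (64, 256, 256)
assert np.array_equal(from_patches(patches, 2048, 2048), img)
```

In practice, overlapping patches with blended seams usually give cleaner results at patch borders (e.g. via the patchify library plus a smooth-tiling scheme), and yes, the same patching has to be applied at inference time as at training time.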

Hi, thanks for your vids, super helpful!
I am playing with the segmentation-models library and the dataset you used in your videos 73-78. At the beginning I was using only Unet with some heavy backbones, like resnets or vgg, and the results were fine. Now I have switched to playing with PSPNet (on the same dataset) and no matter which backbone I choose, I always get about 0.1614 accuracy, and I just wonder: is it because PSPNet is that awful for bio-datasets, or am I doing something wrong? I am aware that the results should actually be worse, but such a low and repeating accuracy is kind of worrying to me. Should it be this way?

montsegur

Another great video, thanks!

QUESTION: Do you prefer this method or pre-trained CNN with VGG16 & RF as in video 159b?

Thanks!

jharris

Hey everyone,
I trained my model, and it is showing good results when predicting segmentation on images. But during training it gives a negative loss and an IoU greater than 1. Can anyone please tell me what I am doing wrong?

Hmmm
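On the IoU-greater-than-1 question above: a very common cause is masks loaded as 0/255 (or unnormalized inputs) instead of 0/1, which inflates the intersection term of the metric. A quick numpy sketch of the effect (the IoU formula below is the usual soft-IoU definition; the arrays are toy values):

```python
import numpy as np

def iou(y_true, y_pred, eps=1e-7):
    """Soft IoU as typically computed for binary masks."""
    inter = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - inter
    return inter / (union + eps)

pred = np.array([[1.0, 0.0], [1.0, 1.0]])
mask_255 = np.array([[255.0, 0.0], [255.0, 255.0]])  # mask left at 0/255
mask_01 = mask_255 / 255.0                           # mask scaled to 0/1

print(iou(mask_01, pred))    # ~1.0, as expected
print(iou(mask_255, pred))   # ~255 -- nonsense caused by the unscaled mask
```

Dividing masks by 255 before training (and checking the loss is fed values in [0, 1]) usually fixes both the impossible IoU and the negative loss.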

Please add some videos on instance segmentation and how to make datasets for it.

danishMalik_

Hi Dr Sreeni, I must confess that following your teachings has made me see that I can continue in this field! Thank you for the effort, time, and resources you put into making these videos. Two years later this is still evergreen... While following your videos I ran into an issue that I've tried to resolve without success. It is with the segmentation-models library and the error I get when I try to import it: AttributeError: module 'keras.utils' has no attribute 'generic_utils'. I've gone on Stack Overflow and tried the suggested fix of downgrading Keras, but it still isn't working. Please kindly assist in resolving this issue, as I'd love to explore this library. Thank you so much.

successlucky

In many articles on segmentation in the field of remote sensing, it is mentioned that the input to the networks is patches, for example 24 by 24 or 50 by 50, etc. However, I do not understand: how can a network that is trained on 50 by 50 patches segment high-resolution satellite images, for example 8,000 by 8,000 pixels? Also, does a patch contain only one feature, such as a building or a road or ...?

alirezasoltani

I think the data split should happen before augmentation, to avoid data leakage.

talha_anwar
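The comment above raises a valid point: augmenting before splitting lets near-duplicates of the same image land in both the training and validation sets, which inflates validation scores. A minimal sketch of the safe ordering (toy data and a toy "augmentation" that just copies each sample):

```python
import numpy as np

rng = np.random.default_rng(42)
images = np.arange(10)            # stand-ins for 10 original images
idx = rng.permutation(len(images))
train_idx, val_idx = idx[:8], idx[8:]

def augment(batch):
    """Toy augmentation: each original yields 3 variants (here plain copies)."""
    return np.repeat(batch, 3)

# Split FIRST, then augment each split independently,
# so no variant of a validation image ever reaches the training set.
train = augment(images[train_idx])
val = augment(images[val_idx])
assert set(train) & set(val) == set()   # no leakage across splits
```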

Can I use this segmentation technique for crack and damage segmentation on walls or concrete?

KushalBansal-vd

Sir, please make us more familiar with 3D image processing, as you have done on the BraTS dataset. I am working in the neuro-imaging domain on brain aneurysm detection and classification.

tapansharma

Hi Sreeni,

Thanks a lot for the video! It is very clear and explains the thought process very well.
I was trying to re-implement it, and have two questions for you:
1) In your video at 20:16 you have a negative loss value; why is that?
I have a similar problem (regardless of whether I'm using jaccard or bce etc.)
Any suggestions on how to resolve this issue?
2) Could you please provide some detail on why you do not freeze the encoder weights? If I understand correctly, we would like to initialize the pretrained encoder and only train the decoder, but sm does not freeze the weights by default and you did not do it either. I tried both, but I think because of question (1) I still don't get proper results.

Thanks a lot!

carpelev

Awesome video, good job and thanks for sharing this with us, Sreeni. Can you tell me how I can do data augmentation on the fly in this case, without needing to create two new folders/paths of images and masks?

marcusbranch

Congratulations on your channel, it is really useful and very well organized.
Is the preprocessing step (preprocess_input(x_train)) only used at training time, while at inference it is not necessary?

diegostaubfelipe
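On the preprocessing question above: whatever transform the encoder expects at training time must also be applied at inference, otherwise new inputs are on a different scale than the model was trained on. A sketch with a stand-in preprocessing function (in the library, the real one comes from sm.get_preprocessing(backbone); the scaling here is illustrative):

```python
import numpy as np

def get_preprocessing_fn():
    """Stand-in for sm.get_preprocessing(backbone): here, scale to [0, 1]."""
    return lambda x: x / 255.0

preprocess = get_preprocessing_fn()

x_train = np.random.randint(0, 256, (2, 4, 4, 3)).astype('float32')
x_new = np.random.randint(0, 256, (1, 4, 4, 3)).astype('float32')

x_train_p = preprocess(x_train)  # used for model.fit(...)
x_new_p = preprocess(x_new)      # the SAME transform before model.predict(...)
assert x_train_p.max() <= 1.0 and x_new_p.max() <= 1.0
```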

Thanks for this wonderful video Mr. Sreeni.

umairsabir