224 - Recurrent and Residual U-net

Residual Networks:
Residual networks were proposed to overcome the problems of very deep CNNs (e.g., VGG). Simply stacking convolutional layers to make the model deeper eventually hurts the network's ability to train and generalize. To address this problem, the ResNet architecture was introduced, which adds the idea of “skip connections”.

In traditional neural networks, each layer feeds only into the next layer. In networks with residual blocks, each layer feeds into the next layer and also directly into layers 2–3 hops away. Inputs can forward-propagate faster through these residual connections (shortcuts) across layers, and gradients flow back through them just as easily during training.
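
For concreteness, here is a minimal sketch of such a residual block in Keras. The function name `res_block` and the exact layer arrangement are illustrative assumptions, not code taken from the video:

```python
from tensorflow.keras import layers

def res_block(x, filters):
    """Two 3x3 convolutions with a shortcut (skip) connection around them.

    Illustrative sketch only; layer counts and ordering vary by design.
    """
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    # Project the shortcut with a 1x1 convolution if channel depths differ.
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = layers.Add()([y, shortcut])  # the skip connection: F(x) + x
    return layers.Activation("relu")(y)
```

Because the block outputs F(x) + x, the stacked convolutions only need to learn the residual F(x), which is easier to optimize than the full mapping.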

Recurrent Convolutional Networks:
A recurrent network can use its feedback connections to store information over time. Recurrent networks exploit context information: as the number of time steps increases, the network leverages more and more neighborhood information. Recurrent and convolutional networks can be combined for image-based applications. With recurrent convolutional layers, the network's feature maps evolve over time even though the input is static; each unit is influenced by its neighboring units and so incorporates the contextual information of the image.
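
The following sketch shows one way to express a recurrent convolutional layer in Keras, unrolled for a fixed number of steps over a static input. The function name `recurrent_conv`, the 1x1 projection, and the step count are my own illustrative choices:

```python
from tensorflow.keras import layers

def recurrent_conv(x, filters, steps=2):
    """Apply one shared convolution repeatedly, feeding its output back in.

    Reusing the same Conv2D instance shares weights across steps, which is
    what makes the layer recurrent rather than simply deeper.
    """
    # Project the input to `filters` channels so it can be added at each step.
    x = layers.Conv2D(filters, 1, padding="same")(x)
    conv = layers.Conv2D(filters, 3, padding="same", activation="relu")
    h = conv(x)
    for _ in range(steps):
        # Each step sees the static input plus the evolving feature map,
        # so every unit accumulates context from a growing neighborhood.
        h = conv(layers.Add()([x, h]))
    return h
```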

A U-Net can be built using a recurrent block, a residual block, or a combined recurrent-residual block in place of the traditional double-convolution block.
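
Putting the two ideas together, a recurrent-residual block (the building block of R2U-Net-style models) can stand in for the double-convolution block. This sketch reuses `recurrent_conv` from above; the name `r2_block` is hypothetical:

```python
from tensorflow.keras import layers

def r2_block(x, filters, steps=2):
    """A recurrent-residual block: two recurrent convolution units wrapped
    in a residual shortcut, replacing U-Net's usual Conv-Conv pair."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)  # match depth
    y = recurrent_conv(x, filters, steps)
    y = recurrent_conv(y, filters, steps)
    return layers.Add()([y, shortcut])  # residual connection around the pair
```

In a full U-Net, this block would be called at each encoder and decoder stage, with max pooling (or transposed convolution) between stages exactly as in the standard architecture.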
Comments

Just yesterday I was discussing what advantages you would get by including residual blocks in a microscopy segmentation U-Net, and today you cover this exact topic in a video.
You're a gift from heaven.

filippocastelli

I'm preparing for my last exam (of my master's degree), which is based on these subjects. I find your videos very helpful, thank you!

aleandropresta

You taught it so easily. Very nice!

vivekyadav-ebic

Dear expert,
I was very comfortable with your explanation. My question: is it the same even for lung CT images?

nandeeshnandy

Wow, how did I miss this video? This is very good content. Thank you so much.

ajay

Wow, great video. Just wanted to ask if you are thinking about covering the topic of attention-based U-Net in your upcoming videos. It would be great to see that.

soumyadrip

Best explanation I have seen. Can't wait for the next video.

jacobusstrydom

Big fan of your work sir. Thank you for uploading.

rojanbasnet

Brilliant explanations. Thanks so much.

RohanPaul-AI

Sir, can you please make a video on the challenges in instance segmentation?

reemawangkheirakpam

Thanks for your great video; it is really helpful.
I have a quick question about U-Net.

Is there really a difference between replacing the encoder blocks with residual blocks when building the U-Net model and using ResNet as a backbone for U-Net?
As far as I understand, those two models are very similar in that both use residual blocks in the encoder. I know the layer structure is different, but I am still confused about how different they really are.
Thanks,

brianmoon

If we are computing R(x) = f(x) - x and after that we add x back, then how does it make a difference? Aren't we just passing f(x)?

MyStudents

Thank you, sir, for the best explanation on this topic. Subscribed :)

sabrinadhalla

In the residual U-Net architecture, the dropout function seems to misbehave.

When I tried loading the model's weights after training with dropout, it displayed a shape-mismatch error.

Can you kindly explain the reason behind this?

sanjeetpatil

Hello sir, do you have any idea how we can use a GRU instead of a simple RNN in the 3D U-Net architecture? That is, I want to use a GRU for 3D data without flattening it.

salmahayani

Thanks, sir, great content. Kindly explain attention modules in segmentation and classification.

Ajaysharma-yvzp

Still waiting for your attention-guided U-Net

XX-vujo