Lesson 7: Deep Learning 2019 - Resnets from scratch; U-net; Generative (adversarial) networks

In the final lesson of Practical Deep Learning for Coders we'll study one of the most important techniques in modern architectures: the *skip connection*. This is most famously used in the *resnet*, which is the architecture we've used throughout this course for image classification, and which appears in many cutting-edge results. We'll also look at the *U-net* architecture, which uses a different type of skip connection to greatly improve segmentation results (and other tasks where the output structure is similar to the input).
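As a rough illustration of the idea, here is a minimal residual block in plain PyTorch (a sketch, not the course's exact code; the class name and layer sizes are placeholders):

```python
import torch.nn as nn

class ResBlock(nn.Module):
    """A basic residual block: two convs, then add the input back in."""
    def __init__(self, nf):
        super().__init__()
        self.conv1 = nn.Conv2d(nf, nf, kernel_size=3, padding=1)
        self.bn1   = nn.BatchNorm2d(nf)
        self.conv2 = nn.Conv2d(nf, nf, kernel_size=3, padding=1)
        self.bn2   = nn.BatchNorm2d(nf)
        self.relu  = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the skip connection: identity added to the conv path
```

Because the block computes `x + conv(conv(x))`, it only has to learn a *residual* on top of the identity, which is what makes very deep networks trainable.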

We'll then use the U-net architecture to train a *super-resolution* model. This is a model which can increase the resolution of a low-quality image. Our model won't only increase resolution--it will also remove JPEG artifacts and unwanted text watermarks.
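In fastai v1 (the library version used in this course) the setup looks roughly like this; the paths, image size, and batch size below are placeholders rather than the notebook's exact values:

```python
from fastai.vision import *

# hypothetical directories: low-res inputs and matching high-res targets
path_lr, path_hr = Path('data/lowres'), Path('data/highres')

src = (ImageImageList.from_folder(path_lr)          # x: the low-quality image
       .split_by_rand_pct(0.1, seed=42)
       .label_from_func(lambda x: path_hr/x.name))  # y: the original image

data = (src.transform(get_transforms(), size=128, tfm_y=True)
        .databunch(bs=16)
        .normalize(imagenet_stats, do_y=True))

# a U-net on top of a pretrained resnet34 encoder; plain pixel loss to start
learn = unet_learner(data, models.resnet34, loss_func=F.l1_loss)
learn.fit_one_cycle(5)
```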

In order to make our model produce high quality results, we will need to create a custom loss function which incorporates *feature loss* (also known as *perceptual loss*), along with *gram loss*. These techniques can be used for many other image generation tasks, such as image colorization.
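Here is a minimal sketch of both losses, computed from the activations of a fixed pretrained VGG (the layer indices and the gram-loss weight are illustrative, not the notebook's exact values):

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

vgg = vgg16(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

FEATURE_LAYERS = [8, 15, 22]  # illustrative: activations grabbed mid-network

def get_features(x):
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in FEATURE_LAYERS:
            feats.append(x)
    return feats

def gram(f):
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)  # channel-to-channel correlations

def feature_and_gram_loss(pred, target, gram_wgt=5e3):
    loss = F.l1_loss(pred, target)  # plain pixel loss as a base
    for fp, ft in zip(get_features(pred), get_features(target)):
        loss = loss + F.l1_loss(fp, ft)                         # feature (perceptual) loss
        loss = loss + gram_wgt * F.l1_loss(gram(fp), gram(ft))  # gram loss on texture/style
    return loss
```

The feature term asks whether the two images contain the same kinds of things, while the gram term compares texture statistics rather than exact pixel positions.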

Finally, we'll learn about a recent loss function known as *generative adversarial* loss (used in generative adversarial networks, or *GANs*), which can improve the quality of generative models in some contexts, at the cost of speed.
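The core idea of adversarial loss, as a hand-rolled sketch (fastai wraps this pattern up for you in `GANLearner`; the helper names here are hypothetical):

```python
import torch
import torch.nn.functional as F

def critic_step(generator, critic, x_in, x_real, opt_c):
    """Train the critic to score real images high and generated ones low."""
    with torch.no_grad():
        fake = generator(x_in)
    real_pred, fake_pred = critic(x_real), critic(fake)
    loss = (F.binary_cross_entropy_with_logits(real_pred, torch.ones_like(real_pred)) +
            F.binary_cross_entropy_with_logits(fake_pred, torch.zeros_like(fake_pred)))
    opt_c.zero_grad(); loss.backward(); opt_c.step()
    return loss

def generator_step(generator, critic, x_in, opt_g):
    """Train the generator to produce images the critic scores as real."""
    fake_pred = critic(generator(x_in))
    loss = F.binary_cross_entropy_with_logits(fake_pred, torch.ones_like(fake_pred))
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss

# training alternates between the two steps, so each model
# is a moving target for the other -- hence the cost in speed
```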

The techniques we show in this lesson include some unpublished research that:
- Lets us train GANs more quickly and reliably than standard approaches, by leveraging transfer learning
- Combines architectural innovations and loss function approaches that haven't been used in this way before.

The results are stunning, and the model trains in just a couple of hours (compared to previous approaches that took a couple of days).
Comments

Gaaah! Don't they know that I've been raised on Netflix?! I need to binge-watch the entire series in a couple of weeks or else I'm left with this terrible feeling of emptiness. And it was such a cliffhanger too! Will the RNN get its memory back, will the GAN ever converge, and who's the father of the CNN's love child?!

Tokazamaable

"GANs hate momentum" 1:11:29

kevalan

Awesome "de-oldify" at 1:33:23

kevalan

How do I create a library like PyTorch? What do I need to know in order to implement my own deep learning library for recognition, and where can I learn it?

denismerigold

1:33:23
It's funny that the old photos on the wall also got colorized :D

jonatani

I think in the last part about RNNs (lesson7-human-numbers) there's an inconsistency between what Jeremy says in the lecture and the current version of the notebook (and probably the fastai library) regarding the dimensions of the bs and bptt parameters.

vladimirgetselevich

Any idea why the y_range is (-3, 3) at 55:25?

whateverhonestly

Hello, thanks. What do I need to know to create my own Python deep learning framework? Please recommend books and courses where I can learn this.

aidenstill

Anyone know when part 2 is being released? Thanks for the course!

MasayoMusic

Where is the code for lr_find? I can't find it in the source; I want to see how it works.

WhoForgotFlush