WGANs: A stable alternative to traditional GANs || Wasserstein GAN

In this video, we'll explore the Wasserstein GAN with Gradient Penalty (WGAN-GP), which addresses the instability issues of traditional GANs. Unlike traditional GANs, WGANs use the Wasserstein distance as their loss function to measure the difference between the real and generated data distributions. The gradient penalty enforces the Lipschitz constraint on the critic (discriminator) by keeping the norm of its gradients close to 1, so they neither explode nor vanish. We'll implement the WGAN with Gradient Penalty from scratch and train it on the anime faces dataset. Watch the video to learn how to build this type of GAN and improve its performance.
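For reference, the quantity the critic approximates is the Wasserstein-1 distance in its Kantorovich-Rubinstein dual form:

W(P_real, P_gen) = sup over 1-Lipschitz f of E_{x~P_real}[f(x)] - E_{x~P_gen}[f(x)]

and the gradient penalty is how WGAN-GP softly enforces that Lipschitz constraint. Below is a minimal sketch of the critic loss in PyTorch; the critic module, the image-shaped tensors, and the lambda_gp = 10 default (the value used in the WGAN-GP paper) are illustrative assumptions, not code taken from the video.

import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    # Interpolate between real and fake samples with per-sample weights.
    batch_size = real.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=device)  # assumes NCHW image batches
    interpolated = (eps * real + (1 - eps) * fake).detach().requires_grad_(True)

    # Critic scores at the interpolated points.
    scores = critic(interpolated)

    # Gradient of the scores with respect to the interpolated inputs;
    # create_graph=True so the penalty term itself can be backpropagated through.
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]

    # Penalize deviation of each sample's gradient norm from 1 (Lipschitz-1).
    grad_norm = grads.reshape(batch_size, -1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()

def critic_loss(critic, real, fake, lambda_gp=10.0, device="cpu"):
    # Minimizing this maximizes E[critic(real)] - E[critic(fake)], the
    # Wasserstein estimate, regularized by the weighted gradient penalty.
    gp = gradient_penalty(critic, real, fake, device)
    return critic(fake).mean() - critic(real).mean() + lambda_gp * gp

In a typical training loop this loss is minimized for several critic steps per generator step, and the generator is then trained to minimize -critic(fake).mean().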

And as always,
Thanks for watching ❤️

Chapters:
0:00 Intro
0:34 Wasserstein distance
1:15 Wasserstein as loss function
2:43 Gradient Penalty (Lipschitz continuity)
4:38 Code from scratch
11:45 Things to remember
Comments

I was watching a Coursera course on GANs and couldn't understand this loss function. Thanks for explaining it in such an illustrative manner.

chirag

What great visual graphics and an excellent explanation!!! Thank you for sharing!!!

amazing_performances

Great explanation! Just what I was looking for. Thanks

MTalha

Thanks for the video. Very helpful and clear tutorial as always. Keep it up!

hasihasi

Wow! You are great 🤘👍 Thanks for all the hard work 🙏

us.reza.abbaszadeh

Thanks for the video. Very helpful and clear tutorial.

clrs