Creating and Training a Generative Adversarial Network (GAN) in Keras (7.2)


Implement a Generative Adversarial Network (GAN) from scratch in Python using TensorFlow and Keras. Using two Kaggle datasets of human face images, a GAN is trained to generate human faces.

Code for This Video:

** Follow Me on Social Media!
Comments

This is the best possible explanation of a GAN in practice. I have been working with GANs and hadn't understood a few things. This video helped me a lot, thanks a TON Hea"TON"!

kalaiselvan

As always, a good amount of theory makes the application easier to understand. Thanks for the quality work, Jeff.

TshepoMoagi

Absolutely Amazing. I just visited the site and my mind is blown by the produced results. Bravo!

TheRealKitWalker

So cool. Very glad I found your channel. You deserve to have 10x the subscribers. You make some of the best and most informative videos out there. This is real educational content, not just entertainment. Keep up the great work! 👏👏👏

joliver

Fantastic video, best GANs tutorial on YouTube. Great work.

nickcarmont

This is the first video of yours that I'm going to have to watch a second (and perhaps third) time to understand the structure of the network and how it's trained. I'm just slow.

rchuso

Thank you for the materials. I loved your video!
I found an issue: when you change GENERATE_RES, keep in mind that UpSampling with default parameters doubles the output shape in both rows and columns (4x4 -> 8x8).
That's why GENERATE_RES should be checked with math.log(GENERATE_RES, 2).is_integer() (to avoid writing more code); otherwise there is a discrepancy between the generator's output shape and the discriminator's input shape.

iv_lucky
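The power-of-two check described in the comment above can be sketched as follows (a minimal sketch; `validate_generate_res` is my own helper name, and I use `math.log2` rather than `math.log(x, 2)` for numerical robustness):

```python
import math

def validate_generate_res(generate_res):
    """Each UpSampling2D layer doubles rows and columns (4x4 -> 8x8),
    so the resolution multiplier must be a power of two, or the
    generator's output shape will not match the discriminator's input."""
    if not math.log2(generate_res).is_integer():
        raise ValueError(
            f"GENERATE_RES={generate_res} is not a power of 2")
    return generate_res

validate_generate_res(4)    # OK: two upsampling steps, 4x4 -> 16x16
```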

oh how I have waited for this video. Thank you so much

debajyotisg

I'm trying to apply this to a different set of images with a single channel. Somehow the network stops training (the loss doesn't change) after 2-3 epochs. I tried reducing the learning rate to 1e-5 for both the generator and the discriminator, but to no avail. Any ideas?

Dustbinexpress

Hello Jeff,
I am having issues with loading the data (see 7:24) as I get the following error:

ValueError: could not broadcast input array from shape (128, 128, 3) into shape (128, 128)

The error occurs when trying to reshape the training_data list... Could it be that np.reshape() does not work well with lists?

Any idea how to solve this? Much appreciated!

badrskalli
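On the broadcast error above: it typically means the list mixes grayscale (128, 128) and RGB (128, 128, 3) images, so NumPy cannot stack them into one array. A minimal sketch of one common fix (forcing every image to three channels before stacking; `stack_images` is my own helper name, not from the video):

```python
import numpy as np

def stack_images(images):
    """Force every image to (H, W, 3) before stacking; mixing grayscale
    (H, W) and RGB (H, W, 3) arrays in one list is exactly what triggers
    the 'could not broadcast' error."""
    fixed = []
    for img in images:
        arr = np.asarray(img)
        if arr.ndim == 2:                       # grayscale -> 3 channels
            arr = np.repeat(arr[:, :, None], 3, axis=2)
        fixed.append(arr)
    return np.asarray(fixed)                    # shape (N, H, W, 3)

batch = stack_images([np.zeros((128, 128)), np.zeros((128, 128, 3))])
print(batch.shape)  # (2, 128, 128, 3)
```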

The Kaggle dataset was removed; does anyone have the data on Google Drive? I really want to get hold of it to follow the tutorial.

shivanshsuhane

This video helps a lot for a beginner like me. I would like to know what type of GAN is most suitable for creating a large dataset of car-damage images, Sir. Is it possible or not? Please advise me, Sir.

mayphyu

This is an amazing lesson, thank you!

kseniyaburaya

Hi Jeff, I have tried changing "GENERATE_RES = 3" and I get an error showing a weights mismatch. The same mismatch happens for GENERATE_RES = 4. Can you please help me with that?


I have tried the latest updated 2.0 version too, and it works fine for all values of GENERATE_RES, but I am not satisfied with the output I get after generation, so I wish to continue using the previous version. However, I am able to train only on 64x64 images due to the mismatch error. Thank you.


This is the error for GENERATE_RES = 4: "ValueError: Error when checking input: expected conv2d_19_input to have shape (128, 128, 3) but got array with shape (256, 256, 3)". Am I missing something?

crackheads

Hi, Jeff. Thank you so much for the video. Would you mind answering some questions? 1) Why do we still need the ZeroPadding2D layer when padding='same' has already been set in build_discriminator? 2) Is there any particular reason for using strides=2? Thanks

erciyoung

I'm not quite seeing how the generator and discriminator get combined

I don't see something obvious like, `combined_model = combine(generator, discriminator)`

naisanza
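On the question above: in the Keras GAN pattern the video follows, the two networks are typically chained in a `Sequential` model with the discriminator frozen, so there is no single `combine(...)` helper. A minimal sketch with tiny stand-in networks (the layer sizes here are illustrative, not the video's architecture):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

SEED_SIZE = 100

# Tiny stand-in models; the video's real networks are much deeper.
generator = keras.Sequential([
    keras.Input(shape=(SEED_SIZE,)),
    layers.Dense(8 * 8 * 3, activation="tanh"),
    layers.Reshape((8, 8, 3)),
])
discriminator = keras.Sequential([
    keras.Input(shape=(8, 8, 3)),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(loss="binary_crossentropy", optimizer="adam")

# The "combined" model: generator output feeds the frozen discriminator,
# so only the generator's weights update when the combined model trains.
discriminator.trainable = False
combined = keras.Sequential([generator, discriminator])
combined.compile(loss="binary_crossentropy", optimizer="adam")

noise = np.random.normal(0, 1, (4, SEED_SIZE))
print(combined.predict(noise, verbose=0).shape)  # (4, 1)
```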

At 15:48, should we have calculated the generator metric with x_fake instead?

debajyotisg

Hello Jeff! How can I apply this to a video sequence? Should I generate it frame by frame?

iana_go

Can we use a GAN for speaker recognition? The code for a GAN looks the same as for a DNN. What is the difference in code between a GAN and a DNN?

sreeharivr

Fantastic video! I was able to run it and save the .h5 file at the end. I have two questions. 1) How do you read the .h5 file? I tried with pandas but wasn't able to do it. 2) How did you create that little video that shows the training as it progresses (3:25)? I would love to do the same. Thank you!

ladahlberg
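On the `.h5` question above: the file is a saved Keras model, not a data table, which is why pandas cannot read it. A sketch using a tiny stand-in model (the stand-in layer sizes and the `face_generator.h5` filename are illustrative assumptions, not the video's exact code):

```python
import numpy as np
from tensorflow import keras

# Stand-in for the trained generator from the video (tiny, illustrative):
generator = keras.Sequential([
    keras.Input(shape=(100,)),
    keras.layers.Dense(12, activation="tanh"),
])
generator.save("face_generator.h5")     # this call produces the .h5 file

# Reading it back is done with Keras, not pandas:
model = keras.models.load_model("face_generator.h5")
model.summary()                         # inspect the architecture

noise = np.random.normal(0, 1, (1, 100))      # 100-dim seed vector
print(model.predict(noise, verbose=0).shape)  # (1, 12)
```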