WAY Better Than Stable Diffusion! - GAN AI Art Models

Nord’s 30-day money-back guarantee! ✌ Don’t forget to use my discount code: mattvidproai

▼ Link(s) From Today’s Video:

-------------------------------------------------

▼ Extra Links of Interest:

-------------------------------------------------

Thanks for watching Matt Video Productions! I make all sorts of videos here on YouTube: technology, tutorials, and reviews! Enjoy your stay, and subscribe!

All Suggestions, Thoughts And Comments Are Greatly Appreciated… Because I Actually Read Them.

-------------------------------------------------

Comments

Nord’s 30-day money-back guarantee! ✌ Don’t forget to use my discount code: mattvidproai

MattVidPro

I think having super fast generation at ok quality might be good for generating like 100 variations, taking the best one, and then improving it with something like stable diffusion.
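The best-of-N pipeline this comment sketches is easy to express in code. Below is a minimal, self-contained Python sketch; `fast_generate`, `score`, and `slow_refine` are hypothetical stand-ins for a GAN generator, an aesthetic scorer, and a diffusion img2img refiner, not real model calls.

```python
import random

# Minimal sketch of the best-of-N pipeline described above. fast_generate,
# score, and slow_refine are hypothetical stand-ins for a GAN generator, an
# aesthetic scorer, and a diffusion img2img refiner; no real models are called.

def fast_generate(prompt, seed):
    """Pretend GAN: instant but variable quality (here just a seeded number)."""
    rng = random.Random(seed)
    return {"prompt": prompt, "seed": seed, "quality": rng.random()}

def score(image):
    """Pretend aesthetic/CLIP-style scorer."""
    return image["quality"]

def slow_refine(image):
    """Pretend diffusion img2img pass that polishes only the winner."""
    return {**image, "quality": min(1.0, image["quality"] + 0.2), "refined": True}

def best_of_n(prompt, n=100):
    candidates = [fast_generate(prompt, seed) for seed in range(n)]
    winner = max(candidates, key=score)   # cheap generation, cheap ranking
    return slow_refine(winner)            # expensive pass runs exactly once

result = best_of_n("a lighthouse at dusk", n=100)
print(result["refined"], round(result["quality"], 3))
```

In practice the scorer could be a CLIP-similarity or aesthetic model, and the refiner a Stable Diffusion img2img pass at moderate denoising strength; the point is that the expensive step runs once instead of a hundred times.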

ДаниилРабинович-бп

Small correction: GAN stands for *generative* adversarial network

IPutFishInAWashingMachine

I’m literally learning everything I know about AI generation from this channel. Awesome work

kokotxa

Almost 6 months later, and we still can't even run GigaGAN locally because there is no support for it anywhere.

SytanOfficial


6:12 - It even managed to reconstruct chromatic aberration on the left, white edge of the painting in the background! That's incredible!

Looki

The upscaling could work for video compression, once computing power on the client side becomes sufficient.
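For rough intuition on the bandwidth side of that idea: downscaling each dimension by some factor cuts the raw pixel count by the square of that factor. A back-of-envelope Python sketch, with purely illustrative numbers rather than codec measurements:

```python
# Back-of-envelope arithmetic for the idea above: stream video at a lower
# resolution and let a client-side upscaler restore the detail. Downscaling
# each dimension by `factor` cuts the raw pixel count (and, very roughly,
# the bitrate) by factor ** 2. Illustrative numbers, not codec measurements.

def pixels(width, height):
    return width * height

def downscale_savings(width, height, factor):
    """Ratio of full-resolution pixels to downscaled pixels."""
    full = pixels(width, height)
    small = pixels(width // factor, height // factor)
    return full / small

# 4K (3840x2160) transmitted as 960x540, then upscaled 4x on the client:
print(downscale_savings(3840, 2160, 4))  # 16.0, i.e. ~16x fewer raw pixels sent
```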

Anders

Wow. This week has been a revolutionary week in AI.

ethancross

Great video .. the only thing with GAN models is that they may seem fast, but you have to re-iterate multiple times to get to the end results shown in the samples (up to 7-15 times, which is equivalent to doing a low-step-count txt2img in SD and then 5-10 higher-step-count img2img passes with a different base model).

kuromiLayfe

Without your videos I wouldn't know any of this information.

Steve_Fid

Since you've been talking about prompt modification, that really makes me want to see something that would use ChatGPT to generate these GAN images.
One thing I've noticed ChatGPT is really good at is modifying results, and that's great through text, but being able to just use plain language to modify a generated image would be wonderful. You could chat with the AI until you get the exact image you're looking for.
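The loop described here can be sketched in a few lines. `revise_prompt` below is a trivial stand-in for what an LLM such as ChatGPT would do far more intelligently, and the image-generation call is left hypothetical; this shows the control flow, not a real API.

```python
# Control-flow sketch of the chat-driven editing loop described above.
# revise_prompt is a trivial stand-in for what an LLM such as ChatGPT would
# do far more intelligently, and the image-generation call is hypothetical.

def revise_prompt(prompt, instruction):
    """Naive stand-in: a real LLM would merge the instruction semantically."""
    return f"{prompt}, {instruction}"

def chat_edit_session(base_prompt, instructions):
    prompt = base_prompt
    history = [prompt]
    for instruction in instructions:
        prompt = revise_prompt(prompt, instruction)
        history.append(prompt)
        # image = generate(prompt)  # hypothetical: re-render and show the user
    return history

history = chat_edit_session(
    "a castle on a hill",
    ["make it sunset", "add a dragon circling the tower"],
)
print(history[-1])
```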

Yipper

As a graphic designer, I'll tell you this is somewhat of a relief. I can't tell you how many clients have asked for similar things that are not fun (i.e. "take this product and render it in 3D and make it look shiny and interesting without physically changing it"--see? I can write my own prompts to myself). Robots will be outperforming humans in individual ways, though it will be somewhat longer for us to face extinction as batteries for the Matrix or SkyNet fodder. At that point, the robot that created the universe might step in and end things once and for all (kidding, I think).

AdamWestish

They were experimenting with training a neural network to just predict the last step of a diffusion model, which is a lot faster.
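That distillation idea (training a student to jump straight to a many-step teacher's final output) can be shown with a 1-D toy. Everything below is a stand-in, a linear "student" fit by gradient descent rather than an actual diffusion network:

```python
import random

# Toy illustration of the trick in the comment above: instead of running a
# teacher that takes many small denoising-style steps, train a student to
# jump straight to the teacher's final output in one shot. Everything here
# is a 1-D stand-in (a linear "student"), not an actual diffusion network.

def teacher_final(x, steps=200):
    """Many-step teacher: slowly relaxes from 0 toward 3*x + 1."""
    y = 0.0
    for _ in range(steps):
        y += 0.1 * ((3.0 * x + 1.0) - y)
    return y

def distill_student(n=200, epochs=1000, lr=0.1, seed=0):
    """Fit student(x) = a*x + b to the teacher's outputs by gradient descent."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        data.append((x, teacher_final(x)))
    a, b = 0.0, 0.0
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in data:
            err = (a * x + b) - y
            ga += 2.0 * err * x / n
            gb += 2.0 * err / n
        a -= lr * ga
        b -= lr * gb
    return a, b

a, b = distill_student()
print(round(a, 2), round(b, 2))  # recovers the teacher's slope 3 and offset 1
```

One student forward pass then replaces hundreds of teacher steps, which is where the speedup comes from.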

nathansuar

Combining GANs with Stable Diffusion to speed up image generation would be nice. I have used GANs for image generation for a couple of years now, mainly Facemorph and Artbreeder. They are great for generating new faces by mixing two or more faces. It's also possible to change the style and details of an image (remove/add hair, swap/remove color, shrink/enlarge, rotate, etc.), and yes, it's very fast.
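Under the hood, the face-mixing trick described here is latent-space interpolation: a GAN decodes a latent vector z into an image, so mixing two faces means interpolating between their latent vectors before decoding. A minimal sketch, with the generator itself left hypothetical:

```python
import random

# Sketch of the face-mixing trick described above (as on sites like
# Artbreeder): a GAN maps a latent vector z to an image, so "mixing two
# faces" is just interpolating between their latent vectors before decoding.
# The generator call itself is hypothetical; real use would decode with a
# trained model such as StyleGAN.

def blend_latents(z1, z2, alpha=0.5):
    """Linear interpolation: alpha=0 gives z1's face, alpha=1 gives z2's."""
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(z1, z2)]

rng = random.Random(0)
z_face_a = [rng.gauss(0, 1) for _ in range(512)]  # latent for face A
z_face_b = [rng.gauss(0, 1) for _ in range(512)]  # latent for face B

z_mix = blend_latents(z_face_a, z_face_b, alpha=0.3)  # mostly face A
# image = generator(z_mix)  # hypothetical decode step
print(len(z_mix))
```

The same mechanism covers the edits the comment mentions: moving along learned latent directions adds or removes attributes like hair or color.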

SkepticalCaveman

Dang, AI is advancing so fast. First GPT-4, then Midjourney v5, now this.

jj

7:37
Wake me when you can show GigaGAN handling how many fingers a hand is supposed to have and the multiple ways they fold.

_________

Can we pose characters, detect edge lines, and train the AI to generate custom characters with LoRA and ControlNet, as we can in AUTOMATIC1111? If not, it won't be worth trying. But thanks for sharing! It's always good to know what's going on in the AI world.

Amelia_PC

Amazing stuff! Thank you for reporting this!
Also, GigaGAN's upscaler is unreal!

RonMar

5:50 What I’ve noticed about upscalers is that they clearly only understand certain particular kinds of noise. The noise in that example looks particularly controlled, so I’m quite suspicious about their result. I’d want something trained on OLD COMPRESSION from, say, the iPhone 3GS, which I’ve never yet found an upscaler that can properly deal with. 6:33 See, again, the compression on that dog looks VERY controlled, almost pixel-art-like, so I can imagine that’s a lot easier than dealing with more chaotic noise. I’m reminded of those upscalers that do faces really well on many images while the rest can still be left blurry or low quality.

It doesn’t mean it’s not useful, but in terms of dealing with actually highly compressed and/or noisy footage/images it’s not the holy grail unless it can do that.
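The training-data implication of this point: to make an upscaler handle one specific old codec, you synthesize (degraded, clean) pairs with that exact degradation. The sketch below uses a crude quantization function as a stand-in for real JPEG-era artifacts, not an actual codec:

```python
import random

# Sketch of the commenter's point: an upscaler only handles noise it was
# trained on, so restoring old iPhone-era compression would mean building
# training pairs with that specific degradation. quantize() is a crude
# stand-in for real JPEG-style artifacts, not an actual codec.

def quantize(values, step=32):
    """Crude compression stand-in: snap each pixel value to a coarse grid."""
    return [round(v / step) * step for v in values]

def make_training_pair(width=8, seed=0):
    rng = random.Random(seed)
    clean = [rng.randint(0, 255) for _ in range(width)]
    degraded = quantize(clean)   # the "old compression" network input
    return degraded, clean       # model learns degraded -> clean

x, y = make_training_pair()
print(x[:4], y[:4])
```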

Seems to me a huge gap in the market is a service that provides every single tool you need to take an image from Midjourney or something (or any image) and process it so it can be printed on a large canvas. I’m scared about doing it myself because there’s a variety of things to be concerned about.

Edbrad