Train a LoRA with JUST 1 IMAGE!!! - Kohya_ss, A1111, Vlad Diffusion

I trained a LoRA with just one image. This is my guide on how to train a LoRA with just one image: I give advice on what to do and what to avoid, and I explain my LoRA training process. I show how to prompt in A1111 and Vlad Diffusion to get the best LoRA results, plus a workaround for avoiding unwanted results with prompt weights.

Members Reward Download:

#### Links from the Video ####
(check right side for "latest releases" and download that)

#### Join and Support me ####
Comments

One idea that comes to mind, after getting a decent-enough training, would be to collect a bunch of the images you like that were generated with the LoRA, with interesting variations and possibly edited as shown at the end, and then retrain on them to get more freedom for the next iteration.

ToniCorvera

Your videos never age out, even in this bleeding-edge topic. We can go back to a video that is a year old and it's still valuable information.

ronaldp

Right on! So nice to see content that is not just about portraits! ❤

audiogus

What I have done in the past is create an image in MJ, then load that image into A1111 img2img and run a CLIP interrogate on it. Once that is done, throw it over to the outpainting extension in A1111 and go to work. You will be super surprised how well some of the inpainting models can fill out your image; no training was really needed in my case. Once the outpainting is done, do a tile upscale if you need it larger.
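The interrogate-then-outpaint workflow above can be sketched against the A1111 web API (started with the --api flag). This is a minimal sketch, not a definitive implementation: only the documented /sdapi/v1/interrogate and /sdapi/v1/img2img endpoints are used, the payload fields are trimmed to a minimum, and outpainting itself runs as a script/extension on top of img2img, which is not shown here.

```python
import base64
from pathlib import Path

API = "http://127.0.0.1:7860"  # default A1111 address when run with --api


def encode_image(path):
    """Read an image file and return it as a base64 string, as the API expects."""
    return base64.b64encode(Path(path).read_bytes()).decode("utf-8")


def interrogate_payload(image_b64, model="clip"):
    """Payload for POST /sdapi/v1/interrogate (the CLIP interrogate button in the UI)."""
    return {"image": image_b64, "model": model}


def img2img_payload(image_b64, prompt, denoise=0.75):
    """Minimal payload for POST /sdapi/v1/img2img; the outpainting extension
    adds its own script arguments on top of this base call."""
    return {
        "init_images": [image_b64],
        "prompt": prompt,
        "denoising_strength": denoise,
        "steps": 30,
    }


# Usage with requests (needs a running webui):
# caption = requests.post(f"{API}/sdapi/v1/interrogate",
#                         json=interrogate_payload(encode_image("mj.png"))).json()["caption"]
# result = requests.post(f"{API}/sdapi/v1/img2img",
#                        json=img2img_payload(encode_image("mj.png"), caption)).json()
```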

Theexplorographer

Great idea. I believe something similar can be done with characters, because I've seen posts/guides detailing the workflow of someone creating an original character LoRA from just a couple of images. I think making a LoRA from one image would be almost impossible. The guide was in Japanese, so it was kind of hard to follow even with translation. That was before the release of ControlNet's reference mode, though; I think if you find a character you like and want to create a LoRA of them, you'd just use reference mode in ControlNet to make a bunch of different images to train with.

Dannyk

This is a really clever technique.
If you still use a rare keyword, you can use it to control the weight of the desired target; e.g. mshfz is the activation word, and you can weight it in the prompt to control the whole style influence. That way you can still choose to weight words like mushroom, forest, etc. on an individual basis. Simply put, the activation word gives you more control if you want it. It works better with a model, now that I think about it, but it should still offer more control without using the LoRA weight itself.
For the contrast, try training with noise offset; I have not used it, but it's what you want to fix that.
Excellent video... =]
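The activation-word trick above can be illustrated with A1111's (token:weight) attention syntax. A minimal sketch, assuming mshfz as the hypothetical activation word from the comment; the helper names are mine, only the parenthesized weight syntax is A1111's.

```python
def weight_token(token, weight):
    """Wrap a token in A1111's attention syntax, e.g. (mshfz:1.2).
    Weights above 1.0 strengthen the token, below 1.0 weaken it."""
    return f"({token}:{weight:.1f})"


def build_prompt(activation, style_weight, *subject_terms):
    """Weight the LoRA activation word separately from the subject terms,
    so the style influence is dialed in without touching the LoRA weight."""
    parts = [weight_token(activation, style_weight)] + list(subject_terms)
    return ", ".join(parts)


print(build_prompt("mshfz", 0.8, "mushroom", "forest"))
# -> (mshfz:0.8), mushroom, forest
```

From there you can still weight "mushroom" or "forest" individually with the same syntax, which is the extra control the comment describes.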

Thank you.

moonusaco

If you used dimensions other than 512x512, the buckets setting would still allow you to train, and the concept might even work better in other dimensions rather than favoring a 1:1 aspect ratio.
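The bucketing idea above can be sketched as follows. This is a simplified approximation of kohya-style aspect-ratio bucketing, not the actual sd-scripts implementation: candidate resolutions in multiples of 64 are enumerated under a pixel-area cap, and each image is assigned to the bucket with the closest aspect ratio.

```python
def make_buckets(max_area=512 * 512, step=64, min_dim=256, max_dim=1024):
    """Enumerate (w, h) pairs in multiples of `step` whose pixel area stays
    under max_area, approximating kohya-style aspect-ratio buckets."""
    buckets = []
    for w in range(min_dim, max_dim + 1, step):
        for h in range(min_dim, max_dim + 1, step):
            if w * h <= max_area:
                buckets.append((w, h))
    return buckets


def nearest_bucket(width, height, buckets):
    """Pick the bucket whose aspect ratio is closest to the image's."""
    target = width / height
    return min(buckets, key=lambda b: abs(b[0] / b[1] - target))


buckets = make_buckets()
# A 1920x1080 image trains in a wide bucket instead of being cropped to 1:1:
print(nearest_bucket(1920, 1080, buckets))
```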

doctor.dongle

3:40 Going on a bit of a tangent here, but I've found WD14, despite being anime-focused, works great for actual photos of people too. I usually combine it with BLIP, and most of the time it picks up many more details than BLIP alone.
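Combining the two interrogators as described above amounts to merging a BLIP caption with a WD14 tag list into one training caption. A minimal sketch with hypothetical interrogator outputs; the merge rule (skip tags whose word already appears in the caption) is my own simplification, not a standard tool.

```python
def merge_captions(blip_caption, wd14_tags):
    """Append WD14 tags to a BLIP caption, skipping single-word tags the
    caption already contains (case-insensitive), to get one training caption."""
    seen = set(blip_caption.lower().replace(",", " ").split())
    extra = [t for t in wd14_tags if t.lower() not in seen]
    return ", ".join([blip_caption] + extra)


# Hypothetical outputs from the two interrogators:
blip = "a photo of a woman in a red dress"
wd14 = ["1girl", "red dress", "outdoors", "smile", "dress"]
print(merge_captions(blip, wd14))
# -> a photo of a woman in a red dress, 1girl, red dress, outdoors, smile
```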

ToniCorvera

Amazing! Thank you for this. It is very helpful

nadavIOY

To get more dynamic range you could either train the LoRA with a 0.4 noise offset, or use a noise offset LoRA!

Really cool R&D, as always!
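Noise offset, mentioned in a couple of comments here, adds a per-image constant shift on top of the standard Gaussian training noise, which biases overall brightness and lets the model learn darker darks and brighter brights. A NumPy sketch of that idea, assuming the 0.4 strength suggested above; this mirrors the published noise-offset trick, not any one trainer's exact code.

```python
import numpy as np


def offset_noise(shape, offset=0.4, rng=None):
    """Standard Gaussian noise plus a per-image constant shift of strength
    `offset`. The shift is one scalar per image, broadcast over channels and
    pixels, so the model must also learn to denoise a global brightness bias."""
    rng = rng if rng is not None else np.random.default_rng(0)
    batch, ch, h, w = shape
    noise = rng.standard_normal(shape)
    shift = offset * rng.standard_normal((batch, 1, 1, 1))
    return noise + shift


noise = offset_noise((2, 4, 64, 64))
print(noise.shape)
# -> (2, 4, 64, 64)
```

With offset=0.0 this reduces to plain Gaussian noise, which is the ordinary training setup.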

Mandraw

I was the 420th like. Just thought you should know. 🌲

SamDutter

Very interesting video actually. Learned some stuff for sure.

Mocorn

Very clever. Someone should automate this technique.

Otis

@Olivio Sarikas I'd really love to see your (or anyone's) artistic process for generating multi-actor scenes. LoRAs of people will blend together, often when they're the same gender, because of how tokens work. I have experimented with textual inversion files of people and been able to get scenes with two different people of the same gender to work using control mapping/region control and things of that nature, but I've had no success training textual inversion faces/people compared to LoRAs. And again, LoRAs will blend people together even with mapping and region control; I just get two people that are hybrids of the two LoRAs. One of my goals with AI art is to create these multi-actor scenes without having to inpaint over incorrect faces. So I'm really curious about the best known way to do this with two or more people you want in one image. I know it can be done with textual inversion, but I'm also interested in the best/most efficient way to do it with LoRAs. A guide on how to use both to achieve that goal would be super useful.

ddrguy

Great idea ! Thanks for sharing ideas.

RdBubbleButNerfed

What are some ways we can use the knowledge and skills we've developed to make money? I think this would be one of your best videos, especially if you ran an experiment trying each method for a week and posted the results at the end, sharing how each one went. I mention this because I work in business services, setting up and maintaining people's businesses, and I've had long conversations with people about how they're utilizing AI in their industry. Every time I helped someone and gave them my information, they were over the moon, having had no idea how useful some of these tools could be in their business. I think an AI consultant would be making bank right now. People would gladly pay 500-1000 dollars for information and techniques that save them time and money in labor, as well as keep them leading their industry and staying competitive. For example, I had a really long conversation with a guy who was setting up a C-corp for investors to fund his project: using AI to decode the electrical impulses generated by thoughts coming down your spinal cord and form a pattern that lets you control things on your phone and computer, like a mouse peripheral. Just imagine this being used for prosthetics! He calls it mind control, because it literally is mind control. I ended up teaching him about Stable Diffusion and ControlNet, for use in marketing instead of hiring actors.

Smashachu

Thanks again Olivio for the great creative input. BTW how do you record your videos? What software do you use?

julle

Another thing you could do is use a noise offset embedding or lora to get better contrast.

Magnulus

Tell me, I need your help: the LoRA folder has disappeared, even though everything is installed as usual.
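For the missing-folder problem above: in a stock A1111 install, LoRA files live in models/Lora under the webui root, and the folder can simply be recreated if it has gone missing. A minimal sketch; the webui_root path is an assumption you should adjust to your own install.

```python
import tempfile
from pathlib import Path


def ensure_lora_folder(webui_root):
    """Recreate <webui_root>/models/Lora if it is missing. This is the stock
    A1111 layout; extensions or launch flags can relocate it."""
    lora_dir = Path(webui_root) / "models" / "Lora"
    lora_dir.mkdir(parents=True, exist_ok=True)
    return lora_dir


# Demo against a throwaway directory instead of a real install:
print(ensure_lora_folder(tempfile.mkdtemp()))
```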

K-A_Z_A-K_S_URALA