Kasucast #9 - Stable Diffusion style technique comparison: Hypernetwork vs. Textual Inversion
#stablediffusion #characterdesign #conceptart #digitalart #machinelearning #hypernetwork #textualinversion #digitalillustration
Previous Stable Diffusion videos:
This time, I documented my process of using hypernetworks in Stable Diffusion for stylized character art. In this video, I cover hypernetworks and analyze their performance through a series of experiments. I also use a hypernetwork in img2img with my own past artwork to produce stylized character art.
My process:
1. Generate style hypernetwork using training tab from AUTOMATIC1111's repository.
2. Sketch a base character (optional, but strongly advised, especially if you work in a studio that requires iterative character concepts). If you can't draw/paint/sculpt, generate a base character from txt2img.
3. Input base character to img2img and use hypernetwork and/or textual inversion embedding if you like.
4. Take img2img results into 2D software to paint over/refine. Use masks to paint in or out aspects from the images you like.
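In the video, step 3 is done through the web UI, but AUTOMATIC1111's repository also exposes an optional web API (when launched with `--api`). As a rough sketch only — the endpoint and field names below are assumptions based on that API and may differ by version, so check your local `/docs` page — the img2img request for a base sketch could be assembled like this:

```python
import base64
import json

# Hypothetical sketch: build an img2img request body for AUTOMATIC1111's
# web API (POST /sdapi/v1/img2img when the UI is launched with --api).
# Field names are assumptions; verify against your installed version.

def build_img2img_payload(image_bytes, prompt, negative_prompt="",
                          denoising_strength=0.6, steps=30):
    """Encode a base character sketch and assemble the JSON body."""
    return {
        "init_images": [base64.b64encode(image_bytes).decode("ascii")],
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        # Lower strength keeps more of your original sketch;
        # higher strength lets the hypernetwork-styled model restyle harder.
        "denoising_strength": denoising_strength,
        "steps": steps,
    }

# In practice, read your sketch from disk instead of these dummy bytes.
payload = build_img2img_payload(
    b"\x89PNG-dummy-bytes",
    "masterpiece, 1girl, portrait in the style of "
    "myfavoritefantasyartists_automatic_v7",
)
print(json.dumps(payload)[:60])  # POST this body to /sdapi/v1/img2img
```

The same payload shape (with the base image omitted) works for txt2img if you generate the base character instead of sketching it.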
I used my own base fanart as the initial image for Stable Diffusion's img2img in order to generate multiple variations. The final painting is a character design that I've been thinking about for a while.
*The final piece can be found here*:
NOTES:
*YouTube does not allow angled brackets*
Remember to put angled brackets around the name of your style embedding! You will see whether or not it loaded when generating images from prompts. The same goes for hypernetworks.
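Because the brackets get stripped from this description, here is what the note means in practice — the embedding name is the one from the example prompt below; swap in your own embedding's filename:

```
masterpiece, 1girl, portrait in the style of <myfavoritefantasyartists_automatic_v7>, ...
```

If the brackets or name are wrong, the prompt still runs, but the console/output info won't show the embedding being applied.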
The Photoshop tool I use most to correct irregularities in images is the Spot Healing Brush; Content-Aware Fill is an alternative. Use whatever fits you best.
Example prompts:
masterpiece, 1girl, painted exquisite detailed portrait of beautiful female combat maid in the style of (myfavoritefantasyartists_automatic_v7:1) in ((tactical streetwear)), detailed face, (pretty face:1.5), slim face, ((detailed pupils)), (looking at viewer:1.5), full color spectrum, golden hour, (beautiful face:1.5), beautiful anatomy, ((detailed pupils)), symmetrical eyes, Makoto Shinkai, studio Ghibli, James Gilleard, Atey Ghailan, rim light, exquisite lighting, (final fantasy xiv), by Serpieri, by chocofing R, vtuber
Negative prompt:
(((saturated))), (high contrast), (hair covering face:1.5), (obscured face:1.5), (obscured neck:1.5), (((multiple people))), (multiple heads), asymmetrical eyes, ((((ugly)))), (((duplicate))), (((mutation))), ((morbid)), ((mutilated)), (out of frame), medium breasts, (((large breasts))), extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck)))
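For reference, the parentheses in the prompts above are AUTOMATIC1111's attention-weight syntax (to my understanding; verify against your version's docs): each pair of parentheses multiplies a token's emphasis by about 1.1, a trailing `:number` sets the multiplier explicitly, and square brackets de-emphasize:

```
(word)       -> emphasis x1.1
((word))     -> emphasis x1.21   (1.1 per pair, compounding)
(word:1.5)   -> emphasis x1.5 exactly
[word]       -> emphasis /1.1
```

So `(((duplicate)))` is roughly equivalent to `(duplicate:1.33)`.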
🎉 Social Media:
Timestamps:
00:00 - Intro/Preview of character artwork created with the help of stable diffusion
01:09 - Some context about environment/models I'm using
02:17 - Short rundown on how to use BIRME for preprocessing
03:18 - How to train a hypernetwork
05:27 - Temporarily paused the training to change preview options
06:07 - Initial analysis of results from hypernetwork
06:44 - Comparing textual inversion and hypernetworks on TXT2IMG results
08:22 - How to enable hypernetworks
10:28 - Comparing textual inversion and hypernetworks on IMG2IMG
13:37 - Lighting in Photoshop
21:49 - Bringing lightly edited image back into img2img inpaint
22:18 - Short break to show the current state of the work in progress
24:34 - Turning imperfections into new features using img2img inpaint
27:00 - Updated composition
28:00 - How to light the final image
29:54 - Changing the face
30:54 - Changing the face (again!)
31:29 - Closing thoughts