AI Vid2Vid Workflow in ComfyUI Stable Diffusion

How I used Stable Diffusion and ComfyUI to render a six-minute animated video with the same character throughout.

#comfyui #stablediffusion
Comments
Author

Love the breakdown you've provided. That SD looks fun. Such cool times we're living in. <3

snowman-v
Author

That's pretty cool, James. Keep up the good work. Blessings

michaelfrisch
Author

Thanks, James! It's mind-blowing, transforming, metamorphosing, mind-fluid-blending, scarifying, and encouraging at the same time. HPL

eddytheman
Author

Recently started 'diffusing' myself. Do you find that the SD 1.5 base model is the most capable for your purposes? It seems like there are much slicker ones (I don't mean XL or Turbo yet, since I don't have the gear to test them). Also, do you preface your prompts with (masterpiece, detailed, intricate), or use textual inversions such as 'easynegative' and so on? Prompting is really its own art. Sometimes literal transposition won't work; you almost need ChatGPT to take the raw text and parse it into a more effective 'machine language', since natural language sometimes gets ignored. Maybe even an audio 'emphasis detector' would be cool, if you could somehow interpret vocal emphasis, or pitch and volume changes. Imagine if the audio RMS were monitored and certain word groups automatically got parentheses, or parentheses+, around them... If this isn't already being done by the avant-garde, I'd be surprised. Audio DSP is the next logical dimension in the matrix; call it a z-axis if you want.
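The commenter's idea (monitor the audio RMS and automatically wrap loud word groups in SD-style emphasis parentheses) could be sketched roughly as below. This is a purely hypothetical illustration, not part of ComfyUI or any real node; the word-to-audio alignment, the `emphasize` helper, and the thresholds are all invented for the example.

```python
# Hypothetical sketch: boost prompt emphasis for words spoken while the
# audio is loud. Assumes words and audio chunks are already aligned.

def rms(samples):
    """Root-mean-square level of a chunk of audio samples in [-1, 1]."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def emphasize(words, chunks, threshold=0.5):
    """Wrap each word in SD-style parentheses when its chunk is loud.

    One audio chunk (list of floats) per word; double parentheses for
    very loud chunks, single for moderately loud ones.
    """
    out = []
    for word, chunk in zip(words, chunks):
        level = rms(chunk)
        if level > threshold * 1.5:
            out.append(f"(({word}))")   # strong emphasis
        elif level > threshold:
            out.append(f"({word})")     # mild emphasis
        else:
            out.append(word)
    return " ".join(out)

quiet = [0.1] * 4
loud = [0.9] * 4
print(emphasize(["a", "stormy", "night"], [quiet, loud, quiet]))
# → a ((stormy)) night
```

A real version would need forced alignment of the transcript to the audio and per-window normalization, but the mapping from loudness to parenthesis weight is the whole trick.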

intelligenceservices
Author

eye d0nt want nuthin t0 d0 wit da a👁.

benderbender
Author

Gee, AI, eh? Always been chatgpt a mother chatgpt giving her individual sparks personal chatgpt 4 and eventually 3D clay

Bootsie
Author

Sorry to say this, but you need to investigate more; the results and the workflow aren't good.

blender_wiki