Relaxing ASMR video in VR 180 format - AI generated

A pretty woman gives you a relaxing head and hair massage.
The woman does not exist in real life: she is generated randomly in Flux with ComfyUI (AI), with Virt a Mate providing the driving video for the AI-generated girl.
The open-source AI tools LivePortrait, ReActor, IW3, and Stable Diffusion were used for this video.
The 3D background was made in Stable Diffusion (SD1.5).
The complete workflow that was used:
I use Virt a Mate to create a driving animation of a woman, then export the video as separate frames in mono 180 VR format with the BVH video plugin. The background is set to a greenscreen color.
Then make a picture of a woman in Flux (or Midjourney, or whatever). Use the ReActor module in ComfyUI to replace the face across all 5000 JPG frames (in batches of 200 frames).
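The fixed-size batching above (5000 frames, 200 per batch) can be sketched in a few lines of Python. The frame count comes from the text; the helper name and the idea of driving ReActor from a script are my own illustration, not the actual workflow files:

```python
# Sketch only: compute the (start, end) frame ranges for processing
# 5000 frames in batches of 200, as described in the workflow.
def batch_ranges(total_frames, batch_size=200):
    """Yield (start, end) frame ranges, with end exclusive."""
    for start in range(0, total_frames, batch_size):
        yield (start, min(start + batch_size, total_frames))

ranges = list(batch_ranges(5000, 200))  # 25 batches of 200 frames
```

Each range could then be fed to one ComfyUI queue run so a crash only costs you one batch, not the whole sequence.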
Then record a 3-minute expression driving video on my old phone, and use LivePortrait to project the expression-driving frames onto the original source frames. Process in batches of 200 frames, and make sure each new batch starts at a frame where the driving video and the source video have the exact same expression (eyes open, neutral mouth). This can be done in ComfyUI by fiddling with batch logic nodes, or by manually listing the frame numbers at which each batch should start (for example, I used XY Grid Nodes in ComfyUI).
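A minimal sketch of the "manually listed start frames" idea: given the frame numbers where the driving and source videos share a neutral expression, build the batch ranges to process. The frame numbers below are made up for illustration, and LivePortrait itself is not invoked here:

```python
# Sketch: turn a hand-picked list of "neutral expression" frames into
# consecutive batch ranges, so each batch starts at a matching frame.
def batches_from_starts(starts, total_frames):
    starts = sorted(starts)
    ends = starts[1:] + [total_frames]
    return list(zip(starts, ends))

neutral_frames = [0, 512, 1030, 1555]  # hypothetical frame numbers
batches = batches_from_starts(neutral_frames, 2000)
```

This mirrors what the batch logic nodes do: every batch boundary lands on a frame you chose by eye, so LivePortrait never has to blend across mismatched expressions.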
Then, after LivePortrait, batch-process all frames in IW3 (a local browser app) for the 3D projection (VR 180 format, left/right-eye stereoscopic).
For the background I also use Virt a Mate: load a 3D scene of a house or bedroom, align it with the woman, and export one frame of only the background in mono 180 VR format (BVH video plugin). Make a depth map of that frame (ComfyUI depth-map preprocessor), then use a depth-map ControlNet to generate a new photorealistic image of the background (SD1.5 or SDXL in ComfyUI).
Use IW3 again to generate the 3D projection of the background (left- and right-eye stereoscopic, with the same settings as for the woman, or you will get a headache in VR :)
Then combine the woman video with the (extended) background frame in a video editing program like DaVinci Resolve or Premiere Pro. Key out the greenscreen around the woman. Export in 2:1 format (I used 5120 by 2560 pixels). Use Pagoni VR SideKick to inject VR 180 metadata into the video if you want to publish it on YouTube. Enjoy the relaxation.
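What the editor's greenscreen key does per frame can be approximated in NumPy. This is only a rough sketch: a real keyer in Resolve or Premiere also handles spill suppression and soft edges, and the threshold here is an arbitrary assumption:

```python
import numpy as np

# Sketch of per-frame chroma keying: mark pixels where green clearly
# dominates red and blue, and replace them with the background.
def key_over(fg, bg, green_margin=40):
    r = fg[..., 0].astype(int)
    g = fg[..., 1].astype(int)
    b = fg[..., 2].astype(int)
    is_green = (g > r + green_margin) & (g > b + green_margin)
    out = fg.copy()
    out[is_green] = bg[is_green]  # composite background through the key
    return out

# Tiny synthetic frame: all greenscreen except one "skin" pixel.
fg = np.zeros((4, 4, 3), dtype=np.uint8)
fg[...] = (0, 255, 0)
fg[0, 0] = (200, 150, 120)
bg = np.full((4, 4, 3), 30, dtype=np.uint8)
out = key_over(fg, bg)
```

In practice you would run this kind of mask inside the editor rather than in Python, but the logic is the same: the greenscreen color selects where the background shows through.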
One thing I realised after a week of work: YouTube messes up the audio (compression). Noted for next time.
GPU: second-hand Nvidia 3090 (next year I hope to use a 5090).