Robust Video Matting (RVM) Windows Installation Tutorial

My main channel where I introduce the latest fascinating AI tools

Related video for info/explanation

RVM - Robust Video Matting

I didn't write a GitHub page for this because live video matting will likely overshadow it soon.
Commands:
conda create -n RVMatting python=3.6
conda activate RVMatting

RTX 20 series or earlier (CUDA 10.2):
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=10.2 -c pytorch

RTX 30 series (CUDA 11.1):
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
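
After the environment is set up, the remaining steps are roughly: download the RobustVideoMatting repo, install its inference requirements, download a checkpoint, and run inference.py. A sketch (the rvm_mobilenetv3.pth checkpoint name assumes the file from the RVM GitHub releases; paths are examples to replace with your own):
cd path\to\RobustVideoMatting
pip install -r requirements_inference.txt
python inference.py --variant mobilenetv3 --checkpoint rvm_mobilenetv3.pth --device cuda --input-source input\video.mp4 --output-type video --output-composition output\com.mp4 --output-alpha output\alp.mp4 --output-foreground output\fgr.mp4 --output-video-mbps 10 --seq-chunk 1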

Comments

What about the 40 series like rtx 4070 super gpu

suganesan
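
For 40-series cards like the RTX 4070: the PyTorch 1.8.0 / CUDA 11.1 builds above predate those GPUs, so they won't run on them. A likely workaround (untested here) is a fresh environment with a newer Python and a current CUDA-enabled PyTorch wheel, for example:
conda create -n RVMatting python=3.10
conda activate RVMatting
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements_inference.txt
Newer PyTorch versions may need small adjustments to the RVM code.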

I get AssertionError: Torch not compiled with CUDA enabled...

GOAT.
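
The "Torch not compiled with CUDA enabled" error usually means a CPU-only PyTorch build ended up in the environment. A quick check (plain PyTorch calls, nothing RVM-specific):
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
If that prints None / False, reinstall PyTorch with the cudatoolkit command from the description that matches your GPU.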

(RVMatting) R:\Useful Software\Work again\RWM>python inference.py --variant mobilenetv3 --checkpoint --device cuda --input-source input/da.mp4 --output-type video --output-composition output/a1.mp4 --output-alpha output/alp.mp4 --output-foreground output/for.mp4 --output-video-mbps 10 --seq-chunk 1
Traceback (most recent call last):
  File "inference.py", line 18, in <module>
    from torchvision import transforms
  File "R:\Useful Software\Work again\Zadnyi\envs\RVMatting\lib\site-packages\torchvision\__init__.py", line 7, in <module>
    from torchvision import datasets
  File "R:\Useful Software\Work again\Zadnyi\envs\RVMatting\lib\site-packages\torchvision\datasets\__init__.py", line 1, in <module>
    from .lsun import LSUN, LSUNClass
  File "R:\Useful Software\Work again\Zadnyi\envs\RVMatting\lib\site-packages\torchvision\datasets\lsun.py", line 2, in <module>
    from PIL import Image
  File "R:\Useful Software\Work again\Zadnyi\envs\RVMatting\lib\site-packages\PIL\Image.py", line 114, in <module>
    from . import _imaging as core
ImportError: DLL load failed: The specified module could not be found.

nomberfax
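
The DLL error in that traceback comes from Pillow, not from RVM itself; on Windows it is usually a broken or very old Pillow build (or a missing Visual C++ runtime). A hedged fix is to reinstall Pillow inside the environment:
pip install --upgrade --force-reinstall pillow
Separately, the command shown passes --checkpoint without a path; once the import error is fixed, inference.py will still need the downloaded checkpoint file there.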

Every time I use it
conda create -n RVMatting python=3.6
conda activate RVMatting
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
pip install -r requirements_inference.txt
Is this something you should always do?

wkfjvlzv
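
To the question above: no, the environment creation and the installs are one-time setup. After that, each session only needs the environment activated before running inference (names follow the commands in the description):
conda activate RVMatting
cd path\to\RobustVideoMatting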

ImportError: DLL load failed: The specified module could not be found.

ronlinetutorialssurvive

Is it possible to batch run video files with this? Thanks

jmk
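
There is no single batch flag among the inference.py arguments shown here, but a plain Windows cmd loop over a folder works as a sketch (folder names, the checkpoint file and the %~nf name expansion are things to adapt):
for %f in (input\*.mp4) do python inference.py --variant mobilenetv3 --checkpoint rvm_mobilenetv3.pth --device cuda --input-source "%f" --output-type video --output-composition "output\%~nf_com.mp4" --output-alpha "output\%~nf_alp.mp4" --seq-chunk 1
Inside a .bat file, double the percent signs (%%f and %%~nf).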

Great tutorial! But my CPU goes to 100% while my 3060 is barely used. I used the correct cudatoolkit installer.

jobennebur

Help me understand: why would someone make something like this? Who sits at their computer and makes a program that will mostly be used by "artists", and assumes they will have the knowledge, the skills and the time to do half the job: download extra software, write command lines and compile a program? Why not finish the job, create an intuitive user interface and sell it?

jsserch

Thanks for this in-depth tutorial. I get an error message saying "ModuleNotFoundError: No module named 'torch'" - any advice on what I should do?

SitinprettyProductions
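
"No module named torch" usually means the command is being run outside the conda environment that has PyTorch in it, or the install step was skipped. A quick check, assuming the environment name from the description:
conda activate RVMatting
python -c "import torch; print(torch.__version__)"
If the import still fails inside the environment, rerun the conda install line for your GPU from the description.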

Is there any way to use this while streaming, or would that be too demanding on the GPU? It doesn't have to be 4K.

jakobtehwookiee

Why is there only the Anaconda Distribution version? No Individual Edition on Google?

cg.man_aka_kevin

Please make a tutorial for livestreaming / virtual webcam.

minibobber

error: inference.py: error: the following arguments are required: --output-type

(RVMatting) C:\ANACONDA Codes\RVM> --output-type video --output-composition output/com.mp4 --output-alpha output/alp.mp4 --output-foreground output/for.mp4 --output-video-mbps 10 --seq-chunk 1
'--output-type' is not recognized as an internal or external command,
operable program or batch file.

What is happening?

niu
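
The second error above happens because the flags were entered as a command of their own, so cmd treats --output-type as a program name. Everything has to be one python inference.py command, either on a single line or split with cmd's ^ continuation character, for example (checkpoint and file names are placeholders):
python inference.py --variant mobilenetv3 --checkpoint rvm_mobilenetv3.pth --device cuda ^
  --input-source input\video.mp4 --output-type video --output-composition output\com.mp4 ^
  --output-alpha output\alp.mp4 --output-foreground output\for.mp4 --output-video-mbps 10 --seq-chunk 1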

Neat. Saved me the Colab hassle and wait.
A question, though: that python inference command - the file contains all these commands, but they're commented out. Is there a way to uncomment them and just run the inference file? I tried removing the comments but got a syntax error.
What does the seq-chunk number do? The GitHub page isn't exactly clear to me about what it does. Is 1 better, or the higher the better?

Is this how I add downsample_ratio to the command line? --downsample-ratio 0.6

vodkaru
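
On the seq-chunk question above: as I read the RVM docs, --seq-chunk is how many frames are fed through the model in parallel per step, so higher values can be faster but use more GPU memory, and 1 is simply the safe default. --downsample-ratio is added as its own flag exactly as written in the comment, for example:
python inference.py --variant mobilenetv3 --checkpoint rvm_mobilenetv3.pth --device cuda --input-source input\video.mp4 --output-type video --output-composition output\com.mp4 --seq-chunk 4 --downsample-ratio 0.25
(0.25 is roughly what the RVM authors suggest for HD footage; tune it to your resolution.)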

I've done it all and it says I don't have NVIDIA... I had hopes because it works live in the browser!

bloomp
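
Without an NVIDIA card the CUDA builds above won't help, but the --device flag appears to be a plain PyTorch device string, so CPU-only use should be possible, just much slower (untested here):
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cpuonly -c pytorch
python inference.py --variant mobilenetv3 --checkpoint rvm_mobilenetv3.pth --device cpu --input-source input\video.mp4 --output-type video --output-composition output\com.mp4 --seq-chunk 1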

Is there any way to generate a transparent-background video with video matting?

kunshail
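
On transparent output: not directly as a transparent video file, but the alpha pass is exactly the matte needed - either bring --output-alpha into an editor as a luma matte, or (if I read the script's options right) use --output-type png_sequence to get per-frame PNGs with transparency:
python inference.py --variant mobilenetv3 --checkpoint rvm_mobilenetv3.pth --device cuda --input-source input\video.mp4 --output-type png_sequence --output-composition output\com --seq-chunk 1
With png_sequence the output paths are treated as folders of numbered PNG frames.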

Will it run on an NVIDIA Quadro P5000 (driver version 445.87)?

multivrsum

This is insanely good! Life saver. Thank you so much

RioWebFest

I'm stuck on 'Solving environment'. There are a few lines of it failing to solve, and then it just seems to get stuck. Anyone got any ideas?

Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: -

PeopleVersusTV
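
When conda's solver hangs like that, one workaround is to create and activate the environment as above but install the PyTorch packages with pip wheels instead of conda, e.g. the CUDA 11.1 build that PyTorch published for 1.8.0:
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
For the CUDA 10.2 build, the matching command is listed on the PyTorch "previous versions" page.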

Is there a way to automatically do this to every video in a folder? Renaming the files is boring.

Zanroff