Vegas 15 NVENC GPU Video Rendering PART 2

A follow-up comparing the NVIDIA GTX 1050 vs the GTX 750 Ti using the NVENC video encoder in Vegas 15.
Pascal vs Maxwell video chipsets.
Comments

Someone on Twitter pointed out that I might have the graphics card in the CPU2 PCI slot, so the QPI link is the bottleneck. Indeed I did! Will fix this and try again...

EEVblog

I've been doing this professionally for more than 20 years and have hardly seen any noticeable improvement from GPU encoding/rendering. In my experience, the only things that improved were previews, 3D rendering, and general navigation in the software. I meant to leave a comment on the previous videos saying don't fall for it, but I feared people would get mad at me, as always happens when you advise against new hardware or an upgrade...
Recently I asked for an upgrade at a workplace and told the IT department to get the highest-spec previous-generation CPU with the best available motherboard for a limited budget, and everybody laughed at me!!! They got the newest CPU at the time with a moderate motherboard, and now they have problems with the system and still can't find the culprit. They meticulously tested the parts on multiple occasions, spent more than 40 hours testing, and still cannot isolate the issue.

NoLandMandi

This is the software's fault. Back in the Vegas 13 days I tried a 9600 GT vs a 750 Ti and it didn't make any difference. Trying another encoder with GPU acceleration, the difference was massive. So essentially the encoder is at fault: it does not leverage the card properly. It usually never does anyway, because it's not full GPU rendering; it uses the CPU as well, and with that come other complications, bottlenecks, etc.

Encoding acceleration is an oddball with GPUs. I've never seen a GPU being used 100% by an encoder.*

*This goes, of course, for video editing software; standalone video encoders usually use the cards to their full potential.

arwlyx

4:46 maybe it took longer because you have like 100 fucking pages open!? like wtf bro

JonelKingas

It is inconceivable that two different chips from different generations would give the exact same rendering time, to the second. That is, if the GPU were doing the heavy lifting. So obviously the bottleneck here is not the GPU; it's the CPU. Check the CPU utilization while rendering.

tangerinq

Have you noticed that the latest version of Windows 10 includes both CPU and GPU usage in the Performance tab of Task Manager? That might be useful to check whether the software uses the CPU or GPU while encoding, and to find the bottleneck in the pipeline.

connexionetablie

It's using too small a buffer size for frames and doesn't look at the actual GPU's capabilities. The NvEncoder API gives you full control over buffers, sync/async operations, and so on, so technically you could load the GPU fully and use the full power of any card with just NvEncoder, but that requires coding it manually, tuned to the actual GPU chip. It looks like they just took the NvEncoder example from the SDK and changed some parameters like buffer size, without actually testing on GPUs to find the best dynamic buffer-size-vs-speed settings for different chips. Technically it should be possible to write a custom encoder for Vegas 15 based on NvEncoder to speed things up.
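The sync-vs-async buffering point can be sketched with a toy timing model (all numbers are made up for illustration; this is not the actual NvEncoder API): with a single synchronous buffer the host waits on every encode, while with enough in-flight buffers the host-side prep overlaps the hardware encode and only the slower stage sets the pace.

```python
# Toy timing model of sync vs. async frame submission to a hardware
# encoder. `prep` = host-side seconds per frame, `enc` = encoder
# seconds per frame. Hypothetical figures, for illustration only.

def sync_time(n_frames, prep, enc):
    # One buffer: the host submits a frame, then idles until it's encoded.
    return n_frames * (prep + enc)

def async_time(n_frames, prep, enc):
    # Enough in-flight buffers: prep of frame k overlaps the encode of
    # frame k-1, so once the pipeline fills, the slower stage dominates.
    return prep + enc + (n_frames - 1) * max(prep, enc)

if __name__ == "__main__":
    frames, prep, enc = 1000, 0.004, 0.006
    print(f"sync:  {sync_time(frames, prep, enc):.2f} s")   # ~10 s
    print(f"async: {async_time(frames, prep, enc):.2f} s")  # ~6 s
```

With overlap the encoder never starves, which is why a tuned buffer count matters more than raw encoder speed here.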

multiarc

For the CPU to utilize Turbo with a high-performance power plan, you need to set the minimum processor state to 99% instead of 100%, because 100% disables Intel SpeedStep.
Perhaps Sony didn't update the SDK? Also, the NVENC features exposed to third-party programs may not be as good as what GeForce Experience uses. Unless someone really tackles GPU encoding, NVENC isn't that worthwhile for standard rendering; its usefulness is more in avoiding an fps hit during recording.
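On Windows, that minimum-processor-state tweak can be applied from an elevated prompt with powercfg. SCHEME_CURRENT, SUB_PROCESSOR, and PROCTHROTTLEMIN are the documented GUID aliases, but double-check them on your build before relying on this:

```bat
:: Keep SpeedStep enabled so Turbo can still engage: min processor state 99%
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 99
powercfg /setactive SCHEME_CURRENT
```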

yyu

CPU is the bottleneck. The E5-2680s are clocked a bit low, around 3.3 GHz turbo, and they won't hold turbo for long in dual-Xeon boards.

I tried an i7-3970X and a dual E5-2670 config (a Dell T7600 I got cheap on eBay), both using the same GPU config. Clock for clock, the E5 was noticeably faster, because it has more cores at the same speed. However, the i7-3970X turbos up to 4.0 GHz by default, and it was massively faster than even the dual E5 Xeons. The GPU I used was dual GTX 1050 Tis. The i7 just clocks way higher and holds a much better turbo...

You can try both the 750 Ti and the 1050 in the same rig and see if that improves anything, but I'm guessing the CPU is the limiting factor.

Honeypot-xs

Usually for content creation/editing, CPU cores matter more than the GPU. See how Blender tests compare on GamersNexus or similar channels.

ADR

My renders are ~3 times faster with GPU encoding. It seems to take my CPU down to 80% usage from 100% and uses ~15% of the GPU. It's not using CUDA, only NVENC, but it still increases the speed significantly for me. I don't have a dual Xeon system though, just a 6-core/12-thread CPU, so that's probably why. I have a 1080 Ti, and it's only getting 15% utilization like I said, so you don't need a powerful GPU. Maybe try a larger clip and you'll see the difference. I'd imagine your dual Xeon is just shitting all over a measly 1:30 render.

limitisillusion

Rendering is different from encoding. Rendering uses the CPU or CUDA cores (if supported), while encoding can use the NVENC encoder. Adobe After Effects renders 3D elements in software only (CPU), or maybe CUDA if supported, while Vegas and Premiere use NVENC and CUDA cores with plugins.

deraid

It literally makes no difference which GPU you use; they will all perform the same in rendering. The software developers will tell you "oh, there must be a bug", but the truth is they are intentionally gimping the consumer/prosumer software from rendering past a certain speed limit. Call it collusion, call it bait and switch... I have no idea what it is. But it generally doesn't matter what method or what software you use; all GPU encoding generally turns out exactly the same, irrespective of the encoder or the brand of GPU used. In some of the open-source encoders specifically designed for AMD's OpenCL, you MIGHT get some improved performance (if you have the right AMD card for it), but it is probably more headache than it's worth in time saved.

bigbuckoramma

I like this guy! And I am disappointed too. I bought the 1050 card only to find out that it's not compatible with my Vegas Pro 13, and the old 570 card, which broke down, was faster. I need Vegas Pro 15 at least now.

MrMuppetbaby

All this is because of your raw-material decoding system. Try with a short RED or other high-end cinema format; then you will see a big difference, because there is no unknown garbage decoder between your raw video and the final encoding stage, just pure and mostly uncompressed frames. Long GOP is your enemy :) This is the reason I used MJPEG for more than 10 years in linear editing; I had dedicated hardware to encode and decode, some Zoran architecture. I always encoded in half the play time. When MPEG came, I used a Matrox hardware encoder/decoder. Results were the same: half time. When the new age of video formats arrived, it just got slower and slower, whatever hardware I built for the work. Very frustrating, and you aren't alone.

BMRStudio

I do not render my videos: no re-encoding, just cut and paste. You could do the same with most of your videos.

proyectosledar

A pipeline like the {video decoding + compositing (even if there's no fancy compositing going on) + encoding} one you're running here can only run as fast as the slowest part. So, for example, if the bottleneck is the video decoding process, and the GTX-750Ti wasn't even running at 100% before, then indeed you would expect precisely no improvement in processing time from a GTX-1050. And no further improvement from multiple cards. The key thing to remember: don't be fooled by the fact that you're running an "export" or "encode" command. Sony Vegas has to generate the torrent of pixels first, and then the NVENC is only helping consume and encode those pixels -- if Sony Vegas can't generate the pixels as fast as the NVENC can encode them, then your limitation is (presumably) the CPU, and only CPU improvements (or GPU-based *decoders*, if that's something you can do) can help.
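That "slowest stage wins" argument fits in one line (hypothetical frame rates, purely to illustrate): if decode caps the pipeline, swapping in a faster NVENC changes nothing.

```python
# Pipeline throughput = rate of the slowest stage (illustrative numbers).

def export_fps(decode_fps, composite_fps, encode_fps):
    return min(decode_fps, composite_fps, encode_fps)

# Decode is CPU-bound at 60 fps in both cases (made-up figures):
old_card = export_fps(60, 120, 200)  # 750 Ti-class encoder
new_card = export_fps(60, 120, 600)  # 1050-class encoder
print(old_card, new_card)  # both 60 -> identical render times
```

This matches what the video observed: two encoder generations, same render time to the second.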

TheHuesSciTech

Hi, I've been using Vegas 15 for months now with NVENC working, but after the latest Nvidia driver it seems to have stopped working (render time increased drastically). Anyone with the same problem?

TenkaichiMeister

NVENC does use a dedicated encoder; the area where the CUDA cores come into play is when you use GPU-accelerated effects and adjustments. Many GPU-accelerated video editors will use either CUDA or OpenCL for effects and for decoding the source footage, and then use NVENC, if selected, for the encode. The NVENC in the 1050 should be far quicker than the one in the 750, roughly 6 times as fast. Beyond that, newer generations of NVENC support a wider range of formats with full acceleration; e.g., the one in the 750 will not offer full acceleration on 4K video and does not support H.265.

Razor

Not sure why you would use NVENC for offline rendering, it's far more suited to streaming or at a pinch, machines with anemic CPUs. Does Vegas not support proper GPGPU (CUDA/OpenCL)?

hellcoreproductions