RTX 3090 Ti vs RTX 3060 Ultimate Showdown for Stable Diffusion, ML, AI & Video Rendering Performance

Summary And Conclusions PDF ⤵️
Playlist of Stable Diffusion Tutorials, Automatic1111 and Google Colab Guides, DreamBooth, Textual Inversion / Embedding, LoRA, AI Upscaling, Pix2Pix, Img2Img ⤵️
Technology & Science: News, Tips, Tutorials, Tricks, Best Applications, Guides, Reviews ⤵️
The GitHub gist file shown in the video ⤵️
Whisper tutorial ⤵️
How to install Python and Automatic1111 Web UI tutorial ⤵️
Whisper GitHub ⤵️
Davinci Resolve tutorial ⤵️
Best DreamBooth training settings ⤵️
How to install Torch 2 for the Stable Diffusion Automatic1111 Web UI ⤵️
0:00 Unboxing of the Gainward #RTX3090 Ti and Cougar #GEX1050
0:51 Installation of the RTX 3090 Ti GPU and Cougar GEX1050 PSU into the computer case
5:03 Final view of the finished build
5:23 Overview of the test PC's CPU, RAM and other hardware
6:34 Explanation of the GitHub gist file used in this video
7:04 How to install the latest Nvidia GeForce driver
7:32 What is the difference between Nvidia Game Ready drivers and Studio drivers?
8:43 OpenAI Whisper speech-to-text transcription benchmarks
9:52 How to verify the installed and active PyTorch, CUDA and cuDNN versions via my custom script (a minimal version is sketched below the chapter list)
10:30 How to update Whisper to the latest version
10:53 Testing command used for Whisper (see the timing sketch below the chapter list)
11:20 Demo of Whisper transcription benchmarks
12:32 How to install Torch version 2 into the main Python installation
13:13 How to install the latest cuDNN DLL files
14:24 Benchmark results of all Whisper tests
17:00 When the RTX 3090 Ti and #RTX3060 transcribe speech at the same time
18:01 4K Video rendering tests in Davinci Resolve
19:10 How to change the rendering GPU in Davinci Resolve
19:35 Rendering results of Davinci Resolve benchmarks
20:22 Bug in Davinci Resolve: the RTX 3060 is not used
23:00 Where to download FFmpeg with hardware acceleration - CUDA and GPU support
24:00 How to set the default FFmpeg via the PATH environment variable
25:27 Testing setup of the FFmpeg 8K video rendering (an example NVENC command is sketched below the chapter list)
27:19 Demo of the FFmpeg benchmark
27:58 Final results of FFmpeg benchmarks on both the RTX 3060 and RTX 3090 Ti
29:45 Starting to benchmark Stable Diffusion via Automatic1111 Web UI
30:06 How to see which Torch, CUDA and cuDNN DLL versions your Web UI uses
30:38 How to update the Web UI's xFormers version
31:55 it/s (iterations per second) testing (a standalone measurement sketch is below the chapter list)
32:20 Demo of testing methodologies that will be used for Stable Diffusion benchmarks
36:57 Starting result analysis of the Stable Diffusion benchmarks with Torch 1.13
42:48 DreamBooth training settings used for benchmarking
44:00 Stable Diffusion benchmarks with Torch 2.0
46:48 How to make sure that the Web UI uses the second GPU in all cases
48:26 opt-sdp-attention benchmark results with Stable Diffusion
51:19 The discovery I made about the optimizers used in the Stable Diffusion Web UI
53:32 Solution for the Stable Diffusion NansException: A tensor with all NaNs was produced in Unet
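
For reference, the version check shown at 9:52 boils down to a few PyTorch calls. This is a minimal sketch, not the exact gist from the video:

import torch

print("PyTorch:", torch.__version__)             # e.g. 2.0.0+cu118
print("CUDA runtime:", torch.version.cuda)       # CUDA build PyTorch was compiled against
print("cuDNN:", torch.backends.cudnn.version())  # e.g. 8700 for cuDNN 8.7
print("GPU:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
# Torch 2 itself (12:32) is typically installed with:
#   pip install torch==2.0.0 --index-url https://download.pytorch.org/whl/cu118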
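
The Whisper test from 10:53 can be reproduced with Whisper's Python API; the model size and audio file below are placeholders, not the exact setup from the video:

import time
import whisper  # pip install openai-whisper

model = whisper.load_model("large", device="cuda")  # assumed model size
start = time.perf_counter()
result = model.transcribe("speech.mp3")             # placeholder audio file
print(f"elapsed: {time.perf_counter() - start:.1f} s")
print(result["text"][:200])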
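
For the FFmpeg benchmark around 25:27, a CUDA-accelerated encode looks roughly like this; the file names and preset are assumptions, not the video's exact command:

import subprocess

# -hwaccel cuda decodes on the GPU; h264_nvenc encodes on the card's NVENC block.
subprocess.run([
    "ffmpeg", "-y",
    "-hwaccel", "cuda",
    "-i", "input_8k.mp4",
    "-c:v", "h264_nvenc",  # hevc_nvenc also works on RTX cards
    "-preset", "p5",
    "output.mp4",
], check=True)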
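
Finally, the it/s figure from 31:55 can be approximated outside the Web UI with the diffusers library; the checkpoint, prompt and step count here are assumptions:

import time
import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers transformers accelerate

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

steps = 20
start = time.perf_counter()
pipe("an astronaut riding a horse", num_inference_steps=steps)
elapsed = time.perf_counter() - start
print(f"{steps / elapsed:.2f} it/s (includes text-encoder and VAE overhead)")

Note that the Web UI's it/s counter times only the denoising loop, so this end-to-end number will read slightly lower.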
The world of artificial intelligence and machine learning is growing rapidly, and as it expands, the demand for powerful, efficient hardware is skyrocketing. Graphics cards are among the most critical pieces of that hardware, playing a pivotal role in the performance and capability of machine learning applications. The Nvidia RTX 3090 and RTX 3060 are two notable examples of this new generation of graphics cards, designed with machine learning workloads in mind. This article explores the features of these two cards and discusses why graphics cards matter so much in machine learning.
The Nvidia RTX 3090 and RTX 3060
Nvidia's GeForce RTX 3090 and RTX 3060 are built on the company's Ampere architecture, which delivers a significant leap in performance and efficiency over previous generations. The RTX 3090, known as the "BFGPU" (Big Ferocious GPU), is the flagship model, boasting 24 GB of GDDR6X memory, 10,496 CUDA cores, and a memory bandwidth of 936 GB/s. This card sits at the top of Nvidia's consumer lineup, making it ideal for high-end machine learning applications, rendering, and gaming.
The RTX 3060, on the other hand, is a more budget-friendly option, but still packs a punch in terms of performance. With 12 GB of GDDR6 memory, 3,584 CUDA cores, and a memory bandwidth of 360 GB/s, the RTX 3060 provides excellent value for money, while still offering enough power to handle many machine learning tasks.
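
If you want to confirm the memory figures above on your own machine, PyTorch reports them directly (CUDA core counts are not exposed; the streaming-multiprocessor count is the closest proxy):

import torch

for i in range(torch.cuda.device_count()):
    p = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {p.name}, {p.total_memory / 1024**3:.1f} GB, {p.multi_processor_count} SMs")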