How to choose a GPU for Computer Vision? | Deep Learning, CUDA, RAM and more

You can build computer vision software to DETECT and TRACK any object.

In this video we will see how to properly choose a GPU for computer vision, to train object detection and segmentation models locally.
We'll talk about brand (Nvidia vs AMD), memory size, CUDA cores and more.

This video will help you make the right choice before buying a GPU, so you get the best performance for your budget.

➤ Courses:

➤ Follow me on:

➤ For business inquiries:

#nvidia #computervision #deeplearning
Comments

Don't forget that more memory in many cases means more speed, because you can use bigger batch sizes.

dimitrisspiridonidis
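
To make the batch-size point concrete, here is a minimal sketch, assuming PyTorch and an Nvidia GPU; the 640x640 input is just a YOLO-style placeholder shape:

```python
import torch

# Query the GPU's free and total memory (in bytes).
free, total = torch.cuda.mem_get_info()
print(f"free {free / 1e9:.1f} GB / total {total / 1e9:.1f} GB")

# More VRAM lets you raise the batch size, which usually raises
# throughput because each training step keeps the GPU busier.
batch_size = 32 if total >= 12e9 else 8   # rough placeholder heuristic
images = torch.randn(batch_size, 3, 640, 640, device="cuda")
print(images.shape)
```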

Is there a big difference in performance and speed in AI tasks like Stable Diffusion and video rendering between the RTX 4080 Super and the RTX 4090? Which one should I buy, given that I seldom play games, or should I wait for the 5090 at the end of the year? I am not a video editor and don't hold any job related to designing or editing, just a casual home user.

FederickTan

What about laptops? The laptop RTX 3060 has 6GB, not 12GB. So are you saying a laptop with an RTX 3060 is of no use?

shankn

Can you use a laptop with Nvidia Quadro T1000 graphics for computer vision?

Mr_K

CUDA cores are one thing, but these days we also consider Tensor cores, and I have a suggestion for those who may want to buy a GTX 1080 Ti: go for an RTX 2060 instead, as it has 240 Tensor cores compared to none in the GTX 1080 Ti. In my opinion, the RTX 2060 is the right option for the right amount of money!

zainbaloch
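
Some context on why Tensor cores matter: they are engaged when you train in mixed precision. A minimal PyTorch sketch, where the one-layer model and random data are placeholders:

```python
import torch

model = torch.nn.Conv2d(3, 16, 3).cuda()   # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()        # rescales losses for fp16

for _ in range(10):
    x = torch.randn(8, 3, 224, 224, device="cuda")
    optimizer.zero_grad()
    # autocast runs eligible ops in fp16, which maps onto Tensor cores
    # on RTX-class GPUs; a GTX 1080 Ti falls back to ordinary fp32 math.
    with torch.cuda.amp.autocast():
        loss = model(x).square().mean()
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```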

Do you need a GPU? Yes.
Nvidia or AMD? Nvidia, unless you're strictly going to develop using PyTorch; then you can use AMD with ROCm.
How much RAM? As much as you can afford.

cyberhard
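
Whichever brand you choose, the first thing to verify is that PyTorch actually sees the card. A quick check; note that ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda namespace:

```python
import torch

# Works on both CUDA (Nvidia) and ROCm (AMD) builds of PyTorch.
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("Device count:", torch.cuda.device_count())
else:
    print("No supported GPU found; training will fall back to CPU.")
```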

Hello sir. Honestly, people who live in Turkey can rarely afford these components, the reason being currency issues.
Still, thank you. I'll wait for your Keras, DNN, machine learning and model training videos. And one more thing: recently I found that I could not use .pt files in my code. Maybe you could make a video about YOLOv5 and PyTorch, deploying the pretrained model in our code in PyCharm. Thank you again.

ilkay

Is it the dedicated VRAM that should be more than 4GB?

Mr_K
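
It is the dedicated VRAM that the guideline refers to; shared system memory doesn't count. A quick way to see what your card reports, assuming PyTorch:

```python
import torch

# Dedicated VRAM as reported for GPU 0 (bytes -> GB).
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1e9:.1f} GB dedicated VRAM")
```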

So will we need two GPUs, one for the monitor and the other for deep learning?

gplgomes
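
If you do end up with two GPUs, you can hide the display card from the training process so it only sees the other one. A sketch assuming PyTorch and that the second card is enumerated as device 1:

```python
import os

# Hide the display GPU from this process; only device 1 (the second
# card) will be visible to PyTorch, appearing as cuda:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"   # must be set before importing torch

import torch

device = torch.device("cuda:0")
print(torch.cuda.get_device_name(device))
```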

Is a GPU for mining the same as one for deep learning?

gplgomes

Hi, how can we size the requirements if we want to run object detection on multiple IP cameras? Thank you.

liftup
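
One rough way to size this is back-of-the-envelope throughput arithmetic. In the sketch below, the camera count, stream rate and detector speed are illustrative assumptions, not benchmarks:

```python
# Back-of-the-envelope GPU sizing for multi-camera object detection.
num_cameras = 8       # assumed number of IP camera streams
stream_fps = 15       # frames per second to process per camera
detector_fps = 120    # measured single-GPU inference speed of your model

required_fps = num_cameras * stream_fps      # 120 frames/s total here
utilization = required_fps / detector_fps    # 1.0 == GPU saturated

print(f"Required throughput: {required_fps} FPS")
print(f"Estimated GPU utilization: {utilization:.0%}")
if utilization > 0.8:
    print("Little headroom: batch frames, skip frames, or add a GPU.")
```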

3070 Ti or 2080: if you're not creating huge-sized pics, these cards work just fine. Obviously more for hobbyists, not suitable for the giant projects the pros create. The 3060 and 2060 are ultra slow and have inferior VRAM, as tests have shown. You never mentioned CUDA cores or 256-bit bandwidth, not to mention the much faster GDDR6X VRAM. The 3080 Ti is the minimum; the 4080 is the sweet spot: 16GB VRAM, large bandwidth, fast VRAM and tons of CUDA cores.

kdzvocalcovers

Thank you so much.
How about CPU and memory?
i9 or Ryzen? 64GB or 128GB?

maloukemallouke

Sir, which graphics card should I buy in 2023? I am doing a project on live face detection in a shopping mall, using deep learning with PyTorch, OpenCV and YOLO. Thank you.

pankajjoshi

Does an LHR graphics card affect deep learning?

muhammadsabrimas

If I could only afford $150 I would get a 3050 8GB. I personally use two 3060 12GB cards and a 3090 24GB, but I would go for the 4060 Ti 16GB at $500 if I couldn't upgrade my power supply. The 3090 24GB is a good deal, but the 4090 is way more power efficient, so with the 3090 you'll end up paying the difference in your power bill.

CaleMcCollough
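
The power-bill point can be made concrete with simple arithmetic. The wattages, daily hours and electricity price below are illustrative assumptions only:

```python
# Rough yearly electricity cost difference between two cards.
watts_a = 350         # assumed average training draw, card A
watts_b = 300         # assumed draw for the same workload, card B
hours_per_day = 8     # assumed daily training time
price_per_kwh = 0.30  # assumed electricity price, $/kWh

def yearly_cost(watts: float) -> float:
    """Energy cost of one year of daily training at the given draw."""
    kwh = watts / 1000 * hours_per_day * 365
    return kwh * price_per_kwh

diff = yearly_cost(watts_a) - yearly_cost(watts_b)
print(f"Approximate yearly difference: ${diff:.0f}")
```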

I have two RTX A2000 6GB cards. Will that count as 12GB of memory and double the cores?

RossTang
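
For what it's worth: two 6GB cards do not merge into one 12GB device. In data-parallel training each GPU holds its own copy of the model, so the 6GB per-card limit still applies, though you do get roughly double the compute. A minimal sketch assuming PyTorch:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, 3).cuda()   # placeholder model

# DataParallel replicates the model onto both cards and splits each
# batch between them: ~2x compute, but each replica must still fit
# in a single card's 6GB, so memory does not add up to 12GB.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

out = model(torch.randn(16, 3, 224, 224).cuda())
print(out.shape)
```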