What are Tensor Cores?


MUSIC:
'Orion' by Sundriver
Provided by Silk Music

Comments

Seems you made an error around 2:33... You completely confused FLOPS and FP32/FP16.
FP16 and FP32 stand for the floating-point precision, basically how many bits the value is stored in, not FLOPS, the theoretical max floating-point operations per second.

Arloh
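
The commenter's distinction in a minimal, compilable sketch (host-side CUDA C++; it assumes only the CUDA toolkit's cuda_fp16.h header): FP16 and FP32 name storage widths, while FLOP/s is a throughput rate of the processor, not a property of the number format.

    #include <cstdio>
    #include <cuda_fp16.h>  // defines the 16-bit __half type

    int main() {
        // FP16 and FP32 are storage formats: they say how wide a value is.
        printf("FP16 (__half): %zu bytes (16 bits)\n", sizeof(__half));
        printf("FP32 (float):  %zu bytes (32 bits)\n", sizeof(float));
        // FLOP/s, by contrast, is a rate: floating-point operations per
        // second, a property of the hardware rather than of the format.
        return 0;
    }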

"Should we expect to see tensor cores in consumer grade graphics cards? Dont count on it."
Who else is watching this after RTX reveal? xd

KnifeChampion

4:38 oh, that didn't age well; we have Tensor Cores in all GeForce RTX cards now lol

TechLevelUpOfficial

Greg: Tensor Cores are not likely to be in consumer-grade GPUs any time soon
Nvidia: Hold my drink

josephbaker

Google is really going too far. Usually when I start googling something new to learn about it, I would have dozens of ads or suggestions thrown my way for the next couple of days. But this time, Google actually commissioned a guy on YouTube I've been watching to make a VIDEO explaining the concept I've been trying to understand. Wow, that's just freaky.

Myvoetisseer

So it's mostly a different way to solve problems, using better-suited resources for a specific kind of operation. I watched the video two times to see if I got it right. Great one, Greg.
Video liked, as always

Chemy.

They need to start making relaxor cores. They might chill out at Nvidia and drop some new GPUs

zeke

Not gonna lie, this video was looking spot on as hell; the camera is just making everything look so clean and crisp

TheRespawnRebel

Tensor cores sound like this saying from Bruce Lee: "I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times."

joechevy

A few corrections/clarifications: the matrices, as the text says, are 4x4x4, i.e. three-dimensional (not 4x4, i.e. two-dimensional).
And without knowing for sure, I will bet that FP16 and FP32 refer to 16-bit (two-byte) and 32-bit (four-byte) precision rather than anything to do with speed.

SALSN
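
For reference, a minimal sketch of how that multiply-accumulate is exposed to programmers, assuming CUDA's nvcuda::wmma API and a GPU of compute capability 7.0 or newer (the API tiles the hardware's small FP16 fused multiply-adds into 16x16x16 fragments; this is an illustration, not the video's own code):

    #include <mma.h>
    #include <cuda_fp16.h>
    using namespace nvcuda;

    // One warp computes D = A * B + C on a 16x16x16 tile.
    // A and B are FP16; the accumulator is FP32, matching the
    // mixed-precision scheme discussed in the comments above.
    __global__ void tensor_core_mma(const half *a, const half *b, float *d) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

        wmma::fill_fragment(acc, 0.0f);         // start with C = 0
        wmma::load_matrix_sync(a_frag, a, 16);  // leading dimension 16
        wmma::load_matrix_sync(b_frag, b, 16);
        wmma::mma_sync(acc, a_frag, b_frag, acc);  // executes on tensor cores
        wmma::store_matrix_sync(d, acc, 16, wmma::mem_row_major);
    }

Compiled with nvcc -arch=sm_70 or newer and launched with one warp (32 threads), this multiplies two FP16 16x16 matrices and accumulates in FP32.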

The only YouTuber that takes his time to actually explain the engineering behind this stuff. Thanks Greg :)

faezlimpbizkit

I really appreciate the way you're able to reduce these ridiculously complex descriptions into moderately complex examples. I still have no idea what a Tensor Core is or does, because I was just staring at the RGB in the PC behind you...

writtenradio

This didn't age well. Welcome to the world of Nvidia RTX. 😳

Cuplex

I'm an electrical engineer, and "tensor" is a term we never use; it belongs to civil and mechanical engineering. But it seems similar to a state-space representation matrix used in dynamical systems and control.

diegomireles

FLOP is derived from FP, not the other way round.
FP16 is a 16-bit number where the decimal point can be at any position in the number, hence "floating point".
FLOP/s is just the measurement of how many operations with this kind of number can be done per second, but the single-precision FLOP count refers to FP32 numbers: 32-bit floating-point numbers.

Vizeroy
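
To make the FLOP/s side concrete, a worked example using Nvidia's published Tesla V100 figures as an illustration (640 Tensor Cores, 64 fused multiply-adds per core per clock, ~1.53 GHz boost clock):

    #include <cstdio>

    int main() {
        const double cores        = 640;     // tensor cores on a V100
        const double fma_per_clk  = 64;      // fused multiply-adds per core per clock
        const double flop_per_fma = 2;       // one multiply + one add
        const double clock_hz     = 1.53e9;  // ~1530 MHz boost clock
        double flops = cores * fma_per_clk * flop_per_fma * clock_hz;
        printf("Theoretical peak: ~%.0f TFLOP/s\n", flops / 1e12);  // ~125
        return 0;
    }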

At 2:38: FP16 and FP32 are not FLOP-related. They refer to the floating-point bit representation: half precision = 16 bits and single precision = 32 bits.

Pastor_virtual_NsbR

FP is not short for FLOPS; it's short for "floating point". FP16 means a half-precision floating-point number, FP32 a single-precision floating-point number, and FP64 a double-precision floating-point number. The numbers 16, 32, and 64 refer to the number of bits it takes in memory to store floating-point numbers at each precision!

mirxzh
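
Those bit counts break down further into sign, exponent, and fraction fields under IEEE 754: FP16 is 1+5+10, FP32 is 1+8+23, FP64 is 1+11+52. A small sketch that pulls an FP32 value apart (plain C++, illustrative only):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        float f = -1.5f;
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);  // reinterpret FP32 as raw bits
        // FP32 layout: 1 sign bit, 8 exponent bits, 23 fraction bits.
        printf("sign=%u exponent=%u fraction=0x%06x\n",
               (unsigned)(bits >> 31),
               (unsigned)((bits >> 23) & 0xFFu),
               (unsigned)(bits & 0x7FFFFFu));
        return 0;  // prints: sign=1 exponent=127 fraction=0x400000
    }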

One correction: at 2:35 you refer to "FP16 or FP32" as being an abbreviation for FLOPS; this is incorrect. In the document you are referencing, the FP in FP16 and FP32 stands for "floating point", and the 16 and 32 correspond to bits of precision.

Alfer

4:39 The RTX 2000 and RTX 3000 series have them, used for DLSS and other AI.

jehare

You must be reading my mind because I've been looking for videos on tensor cores lately

dorryoku