Nvidia GeForce RTX 2080Ti & RTX 2080 | Specs, Rumors & Leaks

Over the past several hours, quite a number of details have come to light regarding the form Nvidia's Turing will take in the GeForce lineup, and the answer, it would seem, is: very similar to the Quadro RTX 5000, but with a few snips and tucks here and there.

Several leaks (from Baidu and TechPowerUp) seem to indicate that the RTX 2080 will indeed be snipped from the Quadro RTX 5000's 16GB of 14Gbps GDDR6 to just 8GB. It's clearly a cost-saving measure from Nvidia, given 8GB of DRAM is so much cheaper than 16GB.

It'll be interesting to see how ray tracing and other techniques affect VRAM and frame buffer requirements; that's certainly something for us to investigate in the future. The TDP is listed as 210W for the RTX 2080 as well, confirming our theory from yesterday's video that we'll indeed see the part come in well under 250W.

Comments

Based on the TechPowerUp specs, that RTX 2080 would have a die size of around 500mm², slightly bigger than a 1080 Ti, with 16% fewer CUDA cores for rendering but a stock boost clock that's only 10% higher, and no suggestion that overclocking will be any better. With the extra memory bandwidth that could still work out to around 8% faster overall, but it's not really a 104-class chip; it's a 102 by historical size comparisons.
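
A quick sanity check of that core/clock trade-off, as a minimal sketch in Python (the 2080's core count and boost clock here are the leaked/assumed figures, not confirmed specs):

```python
# Rough shader-throughput check under the leaked specs assumed above
# (the 2080's core count and clock are leaked/assumed, not confirmed).

gtx_1080_ti_cores = 3584        # GP102, known spec
gtx_1080_ti_boost = 1582        # MHz, reference boost clock

rtx_2080_cores = 2944           # leaked TechPowerUp figure (~18% fewer by this count)
rtx_2080_boost = 1740           # MHz, assumed ~10% higher than the 1080 Ti

ratio = (rtx_2080_cores * rtx_2080_boost) / (gtx_1080_ti_cores * gtx_1080_ti_boost)
print(f"Raw FP32 throughput vs 1080 Ti: {ratio:.2%}")
# ~90% -- any overall lead would have to come from memory bandwidth
# and architectural (ILP) improvements, as the comment argues.
```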

While it's feasible for the 2080 to shift up in price to cover the die size, the 2080 Ti would need to be close to the ~750mm² of the RTX 8000, meaning it's going to cost a hell of a lot, as it'll be the largest consumer-class die ever offered; close enough in size to the $3000 Titan V to suggest a retail price between $1500 and $2000. But if the 2080 only ships with 8GB, why does the RTX 8000 come with 48GB and the 6000, which is the same chip, with only 24GB? Would the lack of memory stifle the claimed 384 tensor cores and associated RT cores on an RTX card?
Also, why the pathetic FP16 performance when the card's got 384 tensor cores and a brand-new CUDA architecture derived from Volta? Shouldn't these cards be 2:1 FP16 as well?
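
As a rough illustration of why a die that size costs so much, here's a first-order dies-per-wafer and yield estimate (the formulas are standard approximations; the defect density is a made-up illustrative value, not a known foundry figure):

```python
import math

# First-order dies-per-wafer estimate plus a simple Poisson yield model,
# to show why a ~754mm^2 consumer die would be so expensive.

WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_CM2 = 0.1      # illustrative assumption only

def dies_per_wafer(die_area_mm2: float) -> float:
    """Classic dies-per-wafer approximation, accounting for edge loss."""
    r = WAFER_DIAMETER_MM / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2: float) -> float:
    """Poisson yield model: exp(-defects_per_cm2 * area_cm2)."""
    return math.exp(-DEFECT_DENSITY_PER_CM2 * die_area_mm2 / 100)

for name, area in [("GP102 (471mm^2)", 471), ("754mm^2 Turing", 754)]:
    candidates = dies_per_wafer(area)
    good = candidates * yield_fraction(area)
    print(f"{name}: ~{candidates:.0f} candidates, ~{good:.0f} good dies")
# The big die yields roughly half as many good dies per wafer.
```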

But here's the thing: if an RTX 2080 costs as much as a Pascal-based 1080 Ti but only offers 8% faster standard rendering performance with less memory, after two years of waiting, that's going to be a huge downgrade in price/performance expectations, and there's every chance that a 7nm Vega will stomp all over it.
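
Put as arithmetic, that scenario implies essentially no generational gain in performance per dollar (the prices and the 8% figure below are this comment's assumptions, not confirmed):

```python
# Perf-per-dollar check on the scenario above. Prices and the 8%
# uplift are the comment's assumptions, not confirmed figures.

pascal_price, pascal_perf = 699.0, 1.00   # 1080 Ti launch MSRP, baseline perf
turing_price, turing_perf = 699.0, 1.08   # assumed 2080 price and +8% perf

gain = (turing_perf / turing_price) / (pascal_perf / pascal_price) - 1
print(f"Generational perf/$ improvement: {gain:.1%}")   # 8.0%, after two years
```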

Whichever way you look at it, ray tracing in games is all about eye candy, and we all know that eye candy takes precedence over wait...

It's all bullshit clickbait or Nvidia have lost their minds.

magottyk

I was hoping the RTX 2080 would have at least 12GB of VRAM

Terry

Based on the info we have, the 2080's probably going to be a ~370 - 400mm^2 die or ~480mm^2 with 2944 CUDA cores and 1 SM disabled, and the 2080 Ti's probably going to be a ~590 - 620mm^2 die or 754mm^2 with 4608 Ccs and 2 SMs disabled. I'm expecting the former, since it lines up with the test PCB we saw for a Ti-class card that seemed to support a GPU with a die size of anywhere between 600 - 650mm^2.

With these die sizes though, and Nvidia's margins, the 2080 ends up being a card aimed at the $500 or $600 segment (which it may or may not hit at launch), and the 2080 Ti in the $800 or $900 area, maybe $700.


I know this because the RTX 8000, with a 754mm^2 die and 4608 Ccs, is probably using the GT/TU100 die with 2 SMs disabled: historically, for both Pascal and Volta, they've had 4 SMs disabled in their 100-class GPUs, which would translate to 2 SMs for Turing. What this means is that the GT/TU100 GPU has 4864 Ccs at its disposal, which we'll probably see in the Titan T/Xt or whatever it's called.
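
Reconstructing the arithmetic behind that 4864 figure: it only works out if you assume 128 CUDA cores per SM, as on consumer Pascal (an assumption here, since Turing's actual SM layout isn't confirmed):

```python
# The 4864 figure follows only if you assume a consumer-Pascal-style
# SM of 128 CUDA cores; Turing's real SM layout isn't confirmed here.

CCS_PER_SM = 128                 # assumption (GP104-style SM)

rtx_8000_ccs = 4608              # leaked Quadro RTX 8000 core count
disabled_sms = 2                 # half of Pascal/Volta's 4, as argued above

full_die_ccs = rtx_8000_ccs + disabled_sms * CCS_PER_SM
print(f"Implied full GT/TU100 core count: {full_die_ccs}")   # 4864
```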

It makes sense for them to do this too, because what Nvidia have essentially done is take the ~3% die shrink left over after the Tcs in going from 16nm to 12nm and use it up on RT cores. That also means there's almost linear scaling between Pascal and Turing in die size based on Cc count (i.e. roughly the same die area per Cc), provided Tensor cores and RT cores are present in the Turing die.

What's also true is that Nvidia remove FP64 units from their 100-class GPUs when deriving the lower-class ones, because they're not needed for gaming and such. Taking Pascal's GP100 at 610mm^2 and the ~23% smaller GP102 at 471mm^2 as examples, that's roughly the difference we should expect between the GT/TU100 and the GT/TU102. Unless they don't remove any, which is where the "or" die sizes at the start came in (e.g. ~480mm^2).
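
Applying that GP100-to-GP102 shrink to the leaked 754mm^2 die lands in the ballpark of the range quoted above (illustrative arithmetic only):

```python
# Scale the leaked 754mm^2 die by the Pascal GP100 -> GP102 reduction
# (FP64 units removed) to estimate a gaming-class GT/TU102.

gp100_mm2, gp102_mm2 = 610, 471
shrink = 1 - gp102_mm2 / gp100_mm2           # ~23% area removed

tu100_mm2 = 754                              # leaked Quadro RTX 8000 die size
tu102_estimate = tu100_mm2 * (1 - shrink)
print(f"Estimated GT/TU102 die size: ~{tu102_estimate:.0f}mm^2")   # ~582mm^2
```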

As for the performance we can expect: since Turing probably doesn't clock higher than Pascal, and since the 2080 doesn't have more shaders than the 1080 Ti, there's maybe a 10 - 20% improvement in ILP (same effect as IPC), which we have rumours of too.
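
Back-solving for the ILP gain the 2080 would need just to match a 1080 Ti, using the leaked core counts and an assumed ~10% higher boost clock:

```python
# How much extra per-core throughput (ILP/IPC) would the 2080 need
# to match a 1080 Ti? Clocks here are assumed, not confirmed.

required_gain = (3584 * 1582) / (2944 * 1740) - 1
print(f"ILP gain needed for 1080 Ti parity: {required_gain:.1%}")
# ~10.7% -- right at the bottom of the rumoured 10 - 20% range.
```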

With all of this taken into account, it means that without the Tcs and RTcs in use, the 2080 is going to have roughly the same performance as the 1080 Ti whilst pulling ~100 - 150W less, at ~200W. With Tcs and RTcs in use, you're looking at 250 - 270W under load, similar to the 1080 Ti but a little bit more.

7nm Vega's going to roughly converge with the 2080 in performance in that 250 - 270W range too, so it'll be interesting to see if we get one in the mainstream.

Najvalsa

ATI (AMD) introduced the unified shader architecture, not Nvidia. The Xbox 360's Xenos was the first graphics core to unify vertex and pixel shaders.

DonnieB

Rather unimpressive, to be honest. As a 1080 Ti owner (and honestly, it looks like the 2080 won't really beat it at this point), I'll just stick with it until AMD comes out with a really strong high-end 7 or 5nm GPU.

quicogil

Man you're the man with these rumors! I just hope you're getting enough rest in between. 💙

ashenone

If Nvidia says this is the best architecture they've had in over a decade, that means you'll see the next 3-4 generations based on it, and that they probably already have an idea of how they'll increase performance for the 21XX series :p It certainly feels like we're being screwed here by not getting the best of what they have, but I can understand if no one complains after seeing a 50% performance increase in the 20XX series :)

TheVerrm

Why would anyone buy 12nm right now? 7nm Vega, and I'm sure whatever Nvidia is putting out next, is just around the corner.

PyroManiacbwl

I wondered if you could please enunciate the beginning of your videos, specifically "RedGamingTech video", because it slurs together and almost comes across as if you've had a few too many, which then dilutes your authority. It's picky, I know, but it's something I've noticed more than 20 times and, damn it, I couldn't take it any longer. Just slow it down a bit or speak more clearly. There, I've said it.

throughsoul

1:35 Congratulations Paul, after a gazillion tech videos you finally managed to pick one nomenclature ^^

SrchangwaytogoC

210 watts non-overclocked? Unholy Batman, Robin!!!

ZZstaff

Titan Five... is this some kind of insider joke?

TheJohdu

Now you are just messing with us, "Red Gaming Ted" video! Great video though, keep it up :)

ElectroFriedBees

4:13 GDDR5 on a 2070 but GDDR6 on a 2060?

Crashbanksbuysilver

I wonder how AMD will respond to this RTX technology leap. Will their next lineup of GPUs also have their own version of AI/Tensor/RT cores to better support the new technology?

sgomez

I'm thinking the 16GB cut to 8GB of VRAM is really about keeping the consumer cards from being used in "pro" environments such as GPU rendering, which really wants TONS of memory... I'm thinking Nvidia is just doing the market-segmentation thing again and wants the "pro" users to pay 3X the $$ for the same performance but with a bit more RAM...

CalgaryCalamari

I really miss the old style of your videos; I prefer seeing gaming footage or related content playing while you speak rather than face-camera videos. Your channel was different, and that's why I subscribed; now you're trying to mimic other news channels. Maybe that's why you've been sitting at just over 50k subs for quite a while.

chavostyle

... it's Nvidia's preemptive strike, not just against AMD, but against all of Intel's GPU efforts...


Prediction: the RTX Titan has NVLink, letting 2 cards act as one. This gets around the issue of temporal effects killing SLI support.

abram

Well! That Acer/Asus 4K 144Hz HDR monitor will have to wait till Nvidia decides to really engage with it.

blackmambalim