GeForce GTX 2080 teased, Threadripper 2 review round up, Q&A | The Full Nerd Ep. 63

Join The Full Nerd gang as they talk about the latest PC hardware topics. In today's show they cover the breaking NVIDIA news about the Turing GPU architecture, Quadro RTX, and GeForce RTX 2080. Gordon will also talk about his review of AMD's 2nd-generation Threadripper 2990WX. As always, we will be answering your live questions, so speak up in the chat.

Check out the audio version of the podcast on iTunes and Google Play so you can listen on the go, and be sure to subscribe so you don't miss the latest live episode!

Follow PCWorld for all things PC!
----------------------------------
Comments

I think the skepticism is justified, and I'm glad there were some tentative sentiments from Brad. I totally have a feeling Nvidia will try to pull a fast one. Ray tracing isn't wizardry; it's straightforward, massively parallel work, the epitome of general-purpose compute. Current hardware, and even these upcoming Nvidia GTX/RTX 20XX Turing cards, will NOT have the computational throughput to pull off true full-sample ray tracing at acceptable resolutions.

We already know from the GDC demonstration that the sample count is very low and noisy. The secret sauce, and arguably the only huge breakthrough, is the real-time denoising. (Even that's arguable, as OTOY have a software near-real-time AI denoiser for their lauded ray/path-tracing Octane render engine, and they even teased an implementation for Unity that looked like it was taking the same approach of hybridizing with forward rendering. So it seemed inevitable. Nvidia are just jumping in now that the technology is mature enough to be completely viable going forward. Unarguably they are trying to steal the limelight and attention from anyone else's efforts.)
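A toy illustration of the sample-count point above (a hand-rolled sketch, not anything from Nvidia's demo; the 50/50 bright/dark scene and the sample counts are made up): Monte Carlo noise falls off roughly as one over the square root of the per-pixel sample count, which is why a real-time ray budget leaves a grainy image that has to be denoised.

```python
import random
import statistics

def pixel_estimate(samples, rng):
    """Monte Carlo estimate of one pixel whose true brightness is 0.5:
    each random ray hits either a bright surface (1.0) or a dark one (0.0)."""
    hits = sum(1.0 if rng.random() < 0.5 else 0.0 for _ in range(samples))
    return hits / samples

rng = random.Random(42)
for n in (1, 4, 16, 64, 256):
    estimates = [pixel_estimate(n, rng) for _ in range(2000)]
    print(f"{n:3d} samples/pixel -> pixel noise (std dev) ~ {statistics.stdev(estimates):.3f}")
# Noise shrinks roughly as 1/sqrt(samples), so a real-time budget of a few
# samples per pixel is inherently grainy -- hence the heavy denoising pass.
```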

Hybrid ray tracing is nothing new; it's been tech-demoed many times in recent years, and it's something many of us have been looking forward to seeing in games. But if Nvidia heavily patent their denoiser and make it difficult for competitors to implement similar methods, they'll be stifling innovation and competition more than anything else.

Even PowerVR have been creating hybrid RTRT (real-time ray tracing) capable GPU architectures for years; they demoed one as early as GDC 2014, and there have been many real-time hybrid RTRT tech demos showcased at previous SIGGRAPH conferences.

At GDC Nvidia showcased the extent of their implementation, and it's a few ray-tracing passes (most notably real-time reflections and physically accurate global illumination). What Nvidia don't want you to be aware of is that RTRT can be done on any parallel compute architecture. They may have some really impressive hardware implementations of their denoiser and so on, but I'm concerned about the possibility that they wield it as a weapon just like they did with PhysX/GameWorks/HairWorks, which is bad for gaming and competition as a whole. That said, they just open-sourced their MDL physically based material implementation, so if they don't get anti-competitive with ray tracing, and it's limited to aggressive marketing, it could be the impetus needed to push RTRT into the mainstream (well, hybrid RTRT, for now...).



Fun bit of history, Larrabee: it was a wide-pipeline, highly parallelized GPGPU (general-purpose compute) architecture, and even after it was discontinued it was salvaged and used as a compute card. The main reason it failed is that they were attempting full-sample real-time ray tracing, which was super naive. (Keep in mind 1024x768/720p was the HD standard at the time; 1600x1200 was the high end, analogous to 4K now.) Worth mentioning they faked their demos and got caught running their Quake: ET demo on a server rack. :P

mitthjarta

Come on Gordon, dual-socket Threadripper = Epyc :P

SciePhi

Well then... this all just adds up to a big ol' yay for me. It's going to take at least six months for me to figure out the optimal configuration-to-economy balance, and it just so happens my health is requiring I put the brakes on my build. Everything is coming up something... not roses, but something. You guys rock, and I'll tell you the same thing I told the GN crew. Please take this seriously: if you EVER wonder whether what you are doing is making any real difference in the world, just go look at the timestamps on my views. When I'm really not doing so hot, or just feeling low, you guys, all y'all, help me get through.

robertkeeney

Oh, and I wonder about the combination of the 32-core TR2 and the 2080, and how all those cores and ray tracing play together.

keptinkaos

The actual issue, from what I can tell (mainly from reviewers and AMD), is not just memory bandwidth but total memory per core. Total system memory didn't go up, so you are now dividing the same amount of memory across twice the cores. Let's say each core really wants 3 GB of memory to stay fed at full load. That would mean a 2990WX needs 96 GB of system memory to saturate the CPU. The bytes/sec of bandwidth doesn't really matter if there isn't enough system memory to keep up. However, most tests have been performed with 64 GB of RAM at various speeds, when TR v2 really needs about 96 GB to be at an optimal memory config. That is when I believe the memory issue will really show its head. Just my opinion; I don't have tested data to back it up, but I would love to see the results of that testing!
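As a quick sanity check of that arithmetic (the 3 GB-per-core figure is the comment's own assumption, not a measured number), a minimal sketch:

```python
cores = 32          # Threadripper 2990WX core count
gb_per_core = 3     # assumed per-core working set, per the comment's hypothetical

required_gb = cores * gb_per_core
print(f"{cores} cores x {gb_per_core} GB = {required_gb} GB of system RAM")  # 96 GB
print(f"With 64 GB installed, that's only {64 / cores:.1f} GB per core")     # 2.0 GB
```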

bigsportsman

2-0-8-0 also happens to be the same digits as the date of the possible announcement, 08-20.

jimhall

Meh, I think I'll wait a couple generations before I invest in an RTX card. I just got a GTX 1080 recently, and we won't be seeing ray tracing too commonly in games for a while anyway. Better to wait for the process to be refined enough that it can fully handle it.

SpinDlsc

"Technically we can't make it... the light will not stop"... lol... Gordon, you made my day! :-)

heisenberg

*PC gamers, particularly Maxwell and Pascal owners:* you are about to be fucked royally, because Nvidia, courtesy of their new hardware, is going to force _async compute_ and 16-bit precision on the gaming community, which AMD already has the hardware for. And let's call a spade a spade here: Maxwell/Pascal cards can't do async compute well, e.g. Gears of War 4, Doom and Wolfenstein 2. Why do you think Nvidia was stalling production of DX12 games, e.g. Watch Dogs 2?

But now, with these new Tensor Cores, Nvidia can do async compute, and they will push this feature heavily onto the gaming community. So if you just bought a 1080 Ti recently, you got analed, once again, by Nvidia and their vintage planned-obsolescence M.O.

But in the end it doesn't matter, because AMD will come out of all of this as the victor: AMD has always been stronger than Nvidia in raw compute power, and Nvidia just had better geometry, hence why they doused _The Witcher 3_ with so much tessellation in GameWorks that time. AMD is trying to rectify that shortcoming with the advent of *Primitive Shaders*.

However, with PC gaming becoming more compute-based than geometry-based, AMD will win.

moorishbrutha

I will try my best NOT to give Nvidia any more of my money. I'll hold onto my 1080 Ti SLI setup for as long as I can... come on AMD, show up with something competitive.

sacamentobob

My prediction: the Titan RTX has NVLink, with two cards able to act as one, so temporal effects are not an issue like they are with SLI. Bet they charge $3k if I'm right.

abram

It had better be leaps and bounds better than the 1080 Ti, with all that money they made from the crypto shortage.

drunkredninja

Guys, a reality check on ray tracing: it takes at least several seconds to render just one frame with ray tracing, and you can bet that's with relatively light ray tracing, i.e. a fairly low number of light rays and reflections.
We need a rendered frame more than 60 times a second to truly have ray tracing in games. It will take many years before an ordinary card can even handle the simple form Nvidia is doing now: deciding with 'AI' which calculations can be neglected without too much impact on the quality of the lighting.
Four Volta cards together have more graphics horsepower than eight 1070 cards. Assume a 25-30% improvement per generation and two years per generation, and it would take at least 10 years before a $400 card handles ray tracing well, and that is a kind estimate. It is just a little napkin calculation, but it should make the point clear.
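A minimal sketch of that napkin math, assuming a purely hypothetical 8x gap between today's $400 card and what acceptable real-time ray tracing would need, plus the comment's 25-30% gain per generation and two years per generation:

```python
import math

def years_until(target_factor, per_gen_gain, years_per_gen=2):
    """Years of compounded per-generation gains needed to reach target_factor."""
    generations = math.log(target_factor) / math.log(1.0 + per_gen_gain)
    return math.ceil(generations) * years_per_gen

# Hypothetical: a $400 card needs ~8x more throughput for acceptable RTRT.
for gain in (0.25, 0.30):
    print(f"{gain:.0%} per generation -> roughly {years_until(8, gain)} years")
# ~20 years at 25% per generation, ~16 years at 30% -- in line with "at least 10 years".
```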

peterjansen

Do you assume the GTX 1080 will be better than the GTX 2060 and 2070? Also, will the 20-series cards be more expensive than the GTX 1080 was when it first launched?

nicktan

I kinda like the idea of RTX and GTX as a quick read on how powerful a card is. It might be cool to use GTX for the low-end cards, switch to VTX once they're powerful enough for VR, and then RVTX for ray tracing. In my VR groups I see a lot of newbie confusion about whether their GPU is powerful enough for VR; it would be good if you could just glance at the model and know, especially for those of us who try to help the new people.

drewpickard

Brad Chacos, I like that you looked at Phoronix too. I have also been curious about the extent to which Windows is not sufficiently optimized for that many cores; I suspect that is part of the problem. It is well known that Linux is better optimized for many cores (servers).

peterjansen

I wonder about undervolting. In my application, I may be using a generator and UPS out in the field and want to limit the power used.

leoyoung

Metro Exodus is supposed to use ray tracing; I think I heard they actually delayed it just to be able to use ray tracing. I wouldn't be surprised if the finished BF V adds some ray tracing as a last-minute surprise. Async compute is supposed to be helping AMD cards in BF V right now, and it's an Nvidia-sponsored title this time around. And there is some speculation that the next gen of Nvidia cards will support async better, so there's that.

7nm is going to use quad patterning, I think; that's what's been giving Intel fits with 10nm. Nvidia are telling TSMC to make the process as small as possible without using quad patterning, so they know they will get good yields with a decently dense chip. Nvidia don't sell GPUs at cost like AMD do. AMD will put a small GPU on 7nm first so they don't get hit too hard on yields, and then later, when yields improve, they will move to a larger one. Nvidia will have GPUs in volume quickly and with good yields, so by the time AMD gets their cards out everyone will have already bought Nvidia cards. But the later AMD card might compete decently for those who did wait. We'll have to see.

bobhumplick

Guys, never say never. Someone said no when I asked about a 32-core TR... looking at you, Gordon.

keptinkaos

Here we go: Nvidia wrapping up real-time ray tracing in a black box with a pretty bow on it :(

MrHarney