AMD says these are the same... We DISAGREE. - Testing 12 of the same CPUs for Variance

Do you really know if your CPU is performing the same as the ones we review? We don’t know. But we know that if we want to increase our testing capacity, we need to PARALLELIZE. But that means we need nearly identical test benches. And trying to make that happen sent us down a far deeper rabbit hole than we could have anticipated.
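To give a rough feel for the kind of variance analysis this implies, here is a minimal sketch with simulated numbers — not LTT Labs' actual data or pipeline:

```python
# Hypothetical sketch: quantifying unit-to-unit variance across "identical" CPUs.
# All numbers are simulated; the real Labs pipeline is not public.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate 5 benchmark runs for each of 12 same-model CPUs,
# each chip with its own slightly different "true" performance.
true_means = rng.normal(loc=300.0, scale=4.0, size=12)   # per-chip mean FPS
runs = np.array([rng.normal(m, 2.0, size=5) for m in true_means])

# Per-chip averages and the spread between chips.
chip_means = runs.mean(axis=1)
cv_percent = 100 * chip_means.std(ddof=1) / chip_means.mean()
print(f"unit-to-unit CV: {cv_percent:.2f}%")

# One-way ANOVA: is between-chip variance larger than run-to-run noise?
f_stat, p_value = stats.f_oneway(*runs)
print(f"ANOVA F={f_stat:.1f}, p={p_value:.3g}")
```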

Purchases made through some store links may provide some compensation to Linus Media Group.

FOLLOW US
---------------------------------------------------

MUSIC CREDIT
---------------------------------------------------
Intro: Laszlo - Supernova

Outro: Approaching Nirvana - Sugar High

CHAPTERS
---------------------------------------------------
0:00 Intro
2:25 Why Same Model ISN'T Same Performance
5:32 Sources of Variance
8:21 Gaming Results
10:43 CS:GO is Wonky
11:41 Gaming Results Cont.
12:39 Productivity Results
13:59 Our Selections
15:15 Testing Mobos and RAM
17:16 Final Discussion
21:26 Outro
COMMENTS
---------------------------------------------------

This is LTT using their higher budget, compared to most tech channels, for something genuinely useful. It's really good to see! Well done LTT, this deserves genuine praise.

XenFPV

glad to see factorio used as a gauge for cpus

Joshfarmpig

As a data scientist, I absolutely love this video and greatly appreciate this perspective. The silicon lottery is absolutely real, and a sample size of one is far from sufficient. Glad to see someone doing more in-depth analysis. I don't expect the methodology of LTT Labs to dive into frequentist vs. Bayesian stats, but it would be interesting to use public benchmarks (for an incredibly nerdy perspective/audience) as prior distributions and see how the results differ. Regardless of the depth of it, just seeing more stats in hardware/software benchmarks is a fantastic breath of fresh air. Keep up the good work!

spacenrd
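For the curious, here is a minimal sketch of what that prior-plus-measurement update could look like; all scores below are hypothetical, not real benchmark data:

```python
# Sketch of the commenter's idea: treat public benchmark results as a prior
# and update it with the lab's own runs. All numbers are hypothetical.
import numpy as np

# Prior from (hypothetical) public benchmark submissions for this CPU model.
prior_mean, prior_sd = 1850.0, 40.0          # e.g. a Cinebench-style score

# The lab's own measurements on one retail sample.
samples = np.array([1822.0, 1831.0, 1819.0, 1827.0])
like_mean = samples.mean()
like_sd = samples.std(ddof=1) / np.sqrt(len(samples))  # std error of the mean

# Conjugate normal-normal update (known-variance approximation).
post_var = 1 / (1 / prior_sd**2 + 1 / like_sd**2)
post_mean = post_var * (prior_mean / prior_sd**2 + like_mean / like_sd**2)
print(f"posterior: {post_mean:.1f} ± {np.sqrt(post_var):.1f}")
```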

Looks like the transparency we wanted. You clearly stated the challenges, how you worked around them, and the variables you can't really control. It was very interesting, at least to me. Good job guys!

TechDaddyFr

Incredible: leveraging YouTube videos to finance an independent lab to perform tests on all kinds of tech hardware, and then making vids showing and explaining all of the data collected, which in turn can finance the next round of tests. Bangin' job LTT, I love it.

brendan

I never realized how much work goes into ensuring consistency in benchmark testing. It's worrying to think about the lack of oversight in computer hardware compared to other industries and the issues that could ensue.

RILDIGITAL

Well done. Possibly the most thorough and well-explained video on the effects* of the Silicon Lottery I've ever seen.
*Not covering much on the causes, or it would've been a two hour video!
In fact, can we have an updated video on that please?

jamesmatthews

Genuine praise to Linus for taking the time to address this, and for how confusing and weird these companies' CPU and GPU launches are becoming in today's market.

smayaan

I'm glad to see Linus and his team working towards doing better, especially after that whole thing where they admitted a lack of accuracy and that they had failed their community. Seeing them be so passionate about this really brings a smile to my face.

kaizen_unknown

Always excited for Labs stuff! That was exceptionally well presented and explained. Really great how you addressed the inconsistencies and the multitude of possible causes. Probably one of the best ways I've seen to explain testing methodology, the reasoning behind it, and the resulting discrepancies when compared to real-world applications. Loved it!

hardyhousinger

As a professional test engineer, I am truly impressed with the level of detail, thought and effort that has gone into the LTT Labs. 
👏👏👏👏

Now if you could just come and explain this level of dedication to some of my colleagues in other engineering disciplines, that would be great. They do seem to think that everything will just work, and don't appreciate that we test professionals have to think around corners sometimes…

thedorsetflyer

I love seeing this kind of testing across multiple channels.
And another shining example of why you should always check across multiple review sources.

(Loved the Pokemon referencing of the processors)

Grandwigg

As a PhD researcher, this is my favorite video that you guys have published in a while. Love the data and the scientific process used here, guys. Keep it up, go Labs go!!

Just_another_tom

i love the fact that you come out and say "yes, our tests are inconsistent, and here's why" in such a fascinating, entertaining fashion. there's a reason i always watch every LTT video that pops up in my feed.

squatzandoatz

1:00 Yes, silicon lottery. BTW, the same happened with the 5000 series on AM4; they simply overclock themselves until 90°C.

srit

You can maybe work around the inconsistencies in Red Dead 2 by using Cheat Engine with the unrandomizer: it forces the random number generator to always return the same values, so you could end up with a deterministic benchmark run :)

dahahaka
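A toy illustration of the general idea the commenter describes — pinning an RNG's output so a benchmark's workload stops varying between runs. The benchmark function and workload here are made up for illustration:

```python
# Sketch of why "unrandomizing" helps: a toy benchmark whose workload depends
# on an RNG. Pinning the seed makes consecutive runs directly comparable.
import random
import time

def toy_benchmark(rng: random.Random) -> float:
    start = time.perf_counter()
    # Workload size varies with the RNG, like NPC counts in an open-world game.
    n = 200_000 + rng.randint(0, 100_000)
    sum(i * i for i in range(n))
    return time.perf_counter() - start

print(toy_benchmark(random.Random()))    # different workload every run
print(toy_benchmark(random.Random(42)))  # identical workload every run...
print(toy_benchmark(random.Random(42)))  # ...so timings are comparable
```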

The silicon lottery strikes again. It's crazy how much variation there can be between CPUs, especially within the same model.

ツッ

I only ever got lucky in the silicon lottery once: a Northwood P4 that could be pushed from 1.8 to 2.8 GHz. For RAM, I'm usually happy if I can get the values printed on the sticks to run stable.

XTRLFX

Loved this video as a statistician, and I really like the approach you're taking. It may be a bit much, but if you did an equivalence check pre/post for any GPU lineup you're reviewing, that would be pretty solid evidence that any differences you found in the tests across the GPUs are due to differences in the GPUs themselves.

The other option would be to model the specific test rig in the regression, but then you'd need to put each GPU in each test rig, which would defeat the purpose of parallelizing the tests in the first place.

claytonstanley
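For readers unfamiliar with equivalence checks, here is a minimal sketch of one common form, two one-sided tests (TOST); the FPS data and the ±1% equivalence margin are hypothetical:

```python
# Sketch of the equivalence check the commenter suggests (TOST: two one-sided
# t-tests). Both samples and the margin are simulated, not real rig data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rig_a = rng.normal(144.0, 1.5, size=20)   # FPS on test rig A
rig_b = rng.normal(144.3, 1.5, size=20)   # FPS on test rig B

margin = 0.01 * rig_a.mean()  # call rigs "equivalent" if they differ by < 1%

# TOST: reject both "mean difference <= -margin" and ">= +margin".
t_lower = stats.ttest_ind(rig_a, rig_b - margin, alternative='greater')
t_upper = stats.ttest_ind(rig_a, rig_b + margin, alternative='less')
p_tost = max(t_lower.pvalue, t_upper.pvalue)
print(f"TOST p={p_tost:.3g}  (p < 0.05 -> rigs equivalent within ±1%)")
```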

HUGE respect to all members of the team. It must have been really exhausting finishing all these tests.

MrThevirus