It Takes Two To Tango: A New Era of Performance

I heard you like performance, so here's some FP64 performance! Traditional HPC isn't getting the same love it used to because of the recent rise of machine learning. NextSilicon is a company going after HPC with their new Maverick2 chip - a dynamic reconfigurable architecture that optimizes itself around your code. I sat down with NextSilicon CEO Elad Raz at the Supercomputing conference to talk about the latest noise being made around these chips.

[00:00] Giving some love to HPC
[02:58] Why go after scientific computing?
[07:24] How much of the HPC market today is CPU-only?
[09:08] How would you describe the Maverick architecture?
[17:21] What chips have you announced?
[19:01] What about machine learning?
-----------------------
Need POTATO merch? There's a chip for that!

If you're in the market for something from Amazon, please use the following links. TTP may receive a commission if you purchase anything through these links.

-----------------------
Welcome to the TechTechPotato (c) Dr. Ian Cutress
Ramblings about things related to Technology from an analyst for More Than Moore

#nextsilicon #eladraz #hpc
------------
More Than Moore, as with other research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, which may include advertising on the More Than Moore newsletter or TechTechPotato YouTube channel and related social media. The companies that fall under this banner include AMD, Applied Materials, Armari, ASM, Ayar Labs, Baidu, Dialectica, Facebook, GLG, Guidepoint, IBM, Impala, Infineon, Intel, Kuehne+Nagel, Lattice Semi, Linode, MediaTek, NordPass, NVIDIA, ProteanTecs, Qualcomm, SiFive, SIG, SiTime, Supermicro, Synopsys, Tenstorrent, Third Bridge, TSMC, Untether AI, Ventana Micro.
Comments

FP co-processors are back on the menu, boys

SwordQuake

The CEO seems like a lovely guy who knows his stuff. We need more engineers as CEOs

artemis

I would love to see a revolution in optimizing software.

SaccoBelmonte

The idea that I could run dataflow directly is exciting enough that I'd happily drop my existing languages tbh

capability-snob

Even back in the day many users of old 36-bit scientific computers weren’t at all happy about having to step down to 32-bit (or up all the way to 64-bit) FP, IIUC.

leocomerford

I'm (slowly) chugging through "Computer Architecture: A Quantitative Approach 6th Edition" thanks to George, and I do have to say that I'm starting to appreciate the complexity of those gigantic "Super Computers". There is a lot that goes into such a system, and so many bottlenecks.
So, while I wait for the 7th Edition next year, I am happy that you make interviews like this. Especially because everyone talks about AI, and that sort of leaves less space for learning about scientific computing.

jannegrey

Love this guy's enthusiasm! Thx for finding these great interviews.

tomschmo

A processor like this sounds like a very fun thing to play around with: seeing how it responds to different ways of performing the same data manipulations, in "performance over time" or however you'd want to quantify that. Watching how quickly it responds, what the performance ceiling is, how quickly it reaches it, all those silly details. Yummy chip, bigbyte/10

TheDoomerBlox

You can tell he loves what he does; I bet that'll transfer into the product

lesserlogic

IT TAKES TWO TO MAKE A THING GO RIGHT
IT TAKES TWO TO MAKE IT OUTTA SIGHT

interrobangings

I no longer code. I no longer run scientific models. YET THIS IS EXCITING!!!

I may need to dig out my 287 as a reminder of this new era!

jaymacpherson

6:43 this may be a dumb question, but what is Wolf?

dogonthego

My computer crashed attempting to compile this interview into meaningful code

likbezpapuasov

0:15 Why don’t high-performance computing systems ever make good DJs? 💕💕
Because no matter how many cores they spin, they can’t drop the cache! ☮☮

phantomtec

Sounds like they have a clear long-term direction. It would be fantastic if the optimization cycle they have were applied to software more broadly.

davidgunther

So... something I'm not understanding here is... what kind of a chip is this? The architecture description felt very diffuse and opaque to me. But my impression is that this is something similar to those almost-FPGA things Ian has mentioned in the past? The ones that have higher reconfigurability than GPUs, but more optimized blocks than something like a traditional FPGA? (I mean when compared to a LUT or some other reconfigurable block; I'm aware that most high-performance FPGAs are eFPGAs nowadays, with several hard IPs integrated.)
EDIT: OK, he actually mentions this towards the very end of his architecture description; for most of the talk he didn't really give a definition.

What were the names of those types of architectures again? I recall there actually being a suggested standardized nomenclature.

predabot__

YouTube randomly recommended this to me, tried to watch it for curiosity's sake... talk about the deep end of the pool & this being a topic way over my head. 😅😅

ARKY-vx

What's old is new again. I love it haha

kelownatechkid

Exciting! Maybe useful in accelerating finite element solutions?

properpropeller

The TechPowerUp listing for the Battlemage B580 says it has 1.7 TFLOPS FP64 at a 1:8 ratio. A few days ago FP64 was missing entirely from the listing. Can I get confirmation that the B580 does have 1.7 TFLOPS of FP64? Because that would make it the king of FP64 per dollar, especially for a consumer card. Thanks.

tappy
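
For anyone wanting to sanity-check the arithmetic in the comment above, here is a minimal Python sketch. It only uses the figures quoted in the comment (1.7 TFLOPS FP64 at a 1:8 ratio); the price below is a hypothetical placeholder, not a confirmed number.

# Sanity check of the figures quoted in the comment above.
fp64_tflops = 1.7            # FP64 peak from the TechPowerUp listing, per the comment
fp64_to_fp32_ratio = 8       # a 1:8 ratio means FP32 throughput is 8x FP64

implied_fp32_tflops = fp64_tflops * fp64_to_fp32_ratio
print(f"Implied FP32 peak: {implied_fp32_tflops:.1f} TFLOPS")      # 13.6 TFLOPS

hypothetical_price_usd = 250.0   # placeholder price, purely illustrative
fp64_gflops_per_dollar = fp64_tflops * 1000 / hypothetical_price_usd
print(f"FP64 per dollar: {fp64_gflops_per_dollar:.1f} GFLOPS/$")   # 6.8 GFLOPS/$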