A Deep Dive into IBM's New Machine Learning Chip

There's one big player that has kept relatively quiet when it comes to AI hardware, and it's IBM: they've been working on several generations of machine learning IP, and they've now announced the first standalone hardware built on it. This Artificial Intelligence Unit (AIU) is a 5nm chip supporting precisions from FP16 down to INT2. The card can do both training and inference, and here are some of the details.
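For context on what "FP16 down to INT2" means in practice, here is a minimal sketch of symmetric 2-bit weight quantization. This is a generic illustration of the technique, not IBM's actual AIU quantization scheme; the function names are made up for this example.

```python
import numpy as np

def quantize_int2(x: np.ndarray):
    """Symmetric 2-bit quantization: only four integer levels, -2..1."""
    scale = np.abs(x).max() / 2.0  # map the largest magnitude near the range edge
    q = np.clip(np.round(x / scale), -2, 1).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)  # stand-in for a weight tensor
q, s = quantize_int2(w)
w_hat = dequantize(q, s)
print("levels used:", np.unique(q))
print("mean abs error:", float(np.abs(w - w_hat).mean()))
```

With only four representable levels per value, the rounding error per weight can be a large fraction of the scale factor, which is why INT2 networks typically need quantization-aware training rather than naive post-training rounding.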

[0:00] Hands-On
[1:30] Back in the office
[1:56] Slides
[3:42] Reduced Precision
[4:36] The SoC
[5:36] IBM's Dr. Khare
[7:14] Images of the AIU
[7:38] Chip Layout
[8:35] 1 core benchmarks on 7nm
[9:53] Software Stack
[11:12] Why?

-----------------------

If you're in the market for something from Amazon, please use the following links. TTP may receive a commission if you purchase anything through these links.

-----------------------
Welcome to the TechTechPotato (c) Dr. Ian Cutress
Ramblings about things related to Technology from an analyst for More Than Moore

#techtechpotato #ibm #sponsored
------------
More Than Moore, as with other research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, which may include advertising on TTP. The companies that fall under this banner include AMD, Armari, Facebook, IBM, Intel, Linode, MediaTek, NordPass, Qualcomm.
Comments

It's always fascinating to see what IBM has been working on, because it's usually something on the leading edge, and it's always something interesting, at the very least.

nunyobiznez

Don't know why I never found your channel till now. YT sucks and apparently needs these very cards, LOL. Thanks, you cover a LOT of tech no one else will attempt to explain. Much appreciated. Is it me, or was that a substantial piece of copper on the card?

RATTLR

Yes, I've been poking you on Twitter to talk more about these AI accelerators!

monstercameron

Audio is very weird from 1:30 to the end.
...only the first section is nicely mixed.

anyway, thanks and keep it up Ian ;)

shinokami

0:40 looking forward to when FP2 floating-point comes out (-NaN, -0, +0, +NaN)

ProjectPhysX

Awesome.

Does IBM have a separate product that they (or their customers) use to train these models for Int2?

benjaminlynch

The board looks good, but I am extremely skeptical of what a quantized neural network with INT2 can deliver, especially after using FP16 quantized networks on NVIDIA cards and noticing greater-than-expected degradation during inference.

fpgamachine

AI hardware wars are only continuing to heat up! Lots to look forward to in the future!

billykotsos

Please fix the audio! There's some noise in the left channel. Thanks in advance.

ibps

It would be nice if you could have shown an all-round view, in a calmer way, instead of flopping the cards around.
The view of how it's cooled is interesting too.

FlaxTheSeedOne

IBM is so interesting. How does this chip relate to the NorthPole chip they announced last year? I know that chip is an inference-only ASIC; is that research related to this chip, or are they completely separate? Will they have a training-only chip next 😅

ConsistentlyAwkward

What if there was a realistic physics chip, like for those billion-particle simulations?

walter

12:15 when you mention analog, do you mean as opposed to digital? It would be very interesting to have neural networks built with something like op-amps.

esra_erimez

Unrelated, but you triggered a wish to rewatch Ex Machina. The better-looking and smarter AI :)
This card looks good too, but still light years behind (which is a distance, just in case).

vensroofcat

10:28 "any computer in the world has a pcie attachment" — nope. A lot of embedded, mobile & SoC computers don't. PCIe is mostly used in PCs & servers.

FrankHarwald

By chance, did you ask IBM about in-memory computing?

monstercameron

"So, I just happen to be here at IBM"

sniffulsquack

INT2? Wow, you can't reduce precision much more than that. 🤣

nellyx

I am currently looking at building a new workstation PC at home, and for it I am entertaining the possibility of adding a dedicated "AI" accelerator. I want something to run models around 6B or maybe even 16B parameters, for my own research and a lot of fun. And it should be an mATX build with a dedicated GPU for gaming/video editing.

But it's difficult to find actual products that get sold to consumers.

Veptis

Lol, thank you for the video. What's up with the coloring, though?

anarekist