All about AI Accelerators: GPU, TPU, Dataflow, Near-Memory, Optical, Neuromorphic & more (w/ Author)

#ai #gpu #tpu

This video is an interview with Adi Fuchs, author of a series called "AI Accelerators", and an expert in modern AI acceleration technology.
Accelerators like GPUs and TPUs are an integral part of today's AI landscape. Deep Neural Network training can be sped up by orders of magnitude by making good use of these specialized pieces of hardware. However, GPUs and TPUs are only the beginning of a vast landscape of emerging technologies and companies that build accelerators for the next generation of AI models. In this interview, we go over many aspects of building hardware for AI, including why GPUs have been so successful, what the most promising approaches look like, how they work, and what the main challenges are.
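To make "making good use of the hardware" concrete, here is a minimal sketch of my own (it assumes PyTorch and a CUDA-capable GPU; the time_matmul helper is illustrative, not from the video) that times the same large matrix multiplication on CPU and GPU. On typical hardware the GPU run finishes orders of magnitude faster, which is the effect described above:

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time a single n x n matrix multiplication on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish setup before starting the clock
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels launch asynchronously; wait for the result
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```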

OUTLINE:
0:00 - Intro
5:10 - What does it mean to make hardware for AI?
8:20 - Why were GPUs so successful?
16:25 - What is "dark silicon"?
20:00 - Beyond GPUs: How can we get even faster AI compute?
28:00 - A look at today's accelerator landscape
30:00 - Systolic Arrays and VLIW
35:30 - Reconfigurable dataflow hardware
40:50 - The failure of Wave Computing
42:30 - What is near-memory compute?
46:50 - Optical and Neuromorphic Computing
49:50 - Hardware as enabler and limiter
55:20 - Everything old is new again
1:00:00 - Where to go to dive deeper?

Read the full blog series here:

Links:

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Comments

Banger video fellas. One time I told my mom via text that I purchased a GPU, and when I called her later she kept trying to pronounce "GPU", but not as an acronym. Her best attempt was "guppy-ooh".

Stwinky

In truth, my profession has nothing to do with computers, but I learnt everything I know about ML from the sheer number of videos I watched on this channel, to the point that I understand most of the videos that come out now.

Started from "Attention Is All You Need". I like whenever you draw annotations on flow charts, because it makes it so much easier to follow what a paper is trying to do.

With your interviews with paper authors, I think it would be more insightful if you explained the paper first, and the interviewee got to see your explainer before being interviewed. Almost like the peer review process. They would then be able to say whether they agree with your interpretation, or expand on things they felt had more potential.

This video was really nice; I got to understand the bigger picture of how the whole system works.

vzxvzvcxasd

Yannic, thanks for this guest. Please continue identifying the core and leading-edge components of technology and finding guests to explain them. Much better than channels that focus on the surface-level things everyone else is talking about.

johnshaff

Oh come on!

I've got so much stuff on my plate!!

Oh dear!

But I will watch it! For sure!

billykotsos

Thank you for putting the time and energy into this interview. It was exactly what I needed.

javiergonzalezsanchez

11:22 I am very glad you guys got the history right, well done. I really appreciate hearing from someone who, like me, lived through those phases of technology. He is right!

khmf

Great video! I will read the blog for sure, this guy is a good and clear communicator ❤️

TheEbbemonster

Yannic! You are nailing it! I love it 😍

karolkornik

Such a great talk! I think it's an amazing and helpful introduction to AI acceleration for anyone interested in the topic (as it was for me). Thanks for sharing!

parsabsh

He's such a cool guy, and he works at Speedata!

BlackHermit

My experience with any ML accelerator other than GPUs is that my code won't run because my model isn't 6 years old and the hardware doesn't support the new functions.

asnaeb

38:20: very good.

You can compare this very nicely to what Stephen Wolfram is doing with his whole Mathematica project: he is taking the focus away from the traditional teaching of mathematics through individual computational tasks, toward a functional description of the mathematics of the problem at hand.

silberlinie

perceptron, recurrence, and memory cells trained on temporal information is all you need

alanhere

I love your content, but to this day I am still confused: why do you have a green screen when you generally don't key it out?

nicohambauer

Thanks for the video, love the content. I would appreciate more future videos explaining content like this.

jasdeepsingh

Reciprocal sqrt is useful for normalizing vectors, because e.g. three multiplies (x, y, z) are much faster than three divides. :)
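As a quick illustration of that trick, here is a minimal Python sketch of my own (the normalize3 helper is hypothetical, not from the video or the comment): compute the reciprocal square root of the squared length once, then multiply each component by it instead of dividing each component by the norm.

```python
import math

def normalize3(x: float, y: float, z: float) -> tuple[float, float, float]:
    """Normalize a 3-vector with one reciprocal square root.

    Computing inv once and multiplying each component (three multiplies)
    replaces three per-component divides; hardware rsqrt instructions
    make the same trade even cheaper on GPUs.
    """
    inv = 1.0 / math.sqrt(x * x + y * y + z * z)  # one rsqrt's worth of work
    return (x * inv, y * inv, z * inv)            # three cheap multiplies

print(normalize3(3.0, 4.0, 0.0))  # ~(0.6, 0.8, 0.0)
```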

EmilMikulic

51:57 Check out Graphcore. They've made the bet that graph neural networks are the future and are developing hardware to support them.

spaghettihair

11:39 Pentium 4 wasn't out until the 2000s. You are right. I still have one.

khmf

Nice, he was already talking about Groq 2 years ago!

sucim