Kolmogorov-Arnold Networks (KAN) Paper Explained - An exciting new paradigm for Deep Learning?

This is a breakdown of the paper "Kolmogorov-Arnold Networks," which proposes a compelling alternative to standard multilayer perceptrons (MLPs). The video covers the paper's main contributions and core ideas, visually explaining the math, the concepts, and the challenges ahead.

#deeplearning #machinelearning #neuralnetworks

To access the animations, narration scripts, slides, notes, etc. for the video, consider joining us on Patreon.

Timestamps:
0:00 - Intro
1:03 - Kolmogorov Arnold Representation Theorem
5:05 - KAN Layers
8:00 - Comparisons
9:00 - B-splines
11:08 - Grid Extension, Sparsification, Continual Learning
14:00 - KANs get the best of MLPs and Splines
15:00 - Advantages and Challenges for KANs

Check out the paper:

Check out code:
Comments

At 7:10 there is a correction: the notation isn't consistent with the matrix shown at 5:44.
x_1 should pass through phi_{11}, phi_{21}, ..., phi_{51}, and x_2 through phi_{12}, phi_{22}, ..., phi_{52}.

So the activation functions should be labeled in this order: phi_{11}, phi_{21}, phi_{31}, phi_{41}, phi_{51}, phi_{12}, phi_{22}, phi_{32}, phi_{42}, phi_{52}.

Credit to @bat.chev.hug.0r for pointing it out!

avb_fj
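The corrected indexing can be illustrated with a minimal sketch (toy functions and shapes assumed here, not the paper's actual code): a KAN layer with 2 inputs and 5 outputs puts one univariate function phi_{qp} on each edge, so input x_p only ever meets functions whose second index is p.

```python
import numpy as np

# Toy KAN layer: 2 inputs, 5 outputs. phis[q][p] is the univariate
# function phi_{(q+1)(p+1)} on the edge from input p to output q.
# The "learned" functions are fixed sines purely for illustration.
def kan_layer(x, phis):
    # Output q sums its per-edge activations: sum_p phi_{qp}(x_p)
    return np.array([sum(phi_qp(x[p]) for p, phi_qp in enumerate(phi_row))
                     for phi_row in phis])

phis = [[(lambda a, q=q, p=p: np.sin((q + 1) * a + p)) for p in range(2)]
        for q in range(5)]

x = np.array([0.3, -0.7])
out = kan_layer(x, phis)
# x_1 passed through phi_11, phi_21, ..., phi_51;
# x_2 passed through phi_12, phi_22, ..., phi_52.
```
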

Even though I'm not great at math, your explanation was clear and helped me a lot in understanding this quite exciting paper! Thank you :)

jayd

2:54 The example really helps me understand... this is an amazing and simple-to-understand take on KANs. Kudos to you!

foramjoshi

This is an excellent explanation of the paper (now I can ease into reading it). Learnable activations are new and exciting, and most researchers are probably kicking themselves, saying, "Why didn't I think of that?" The next step (for the authors of the paper) may be to work with "attention", because as far as we know, that's "all you need".

AurobindoTripathy

This is what I call the democratization of math. A true scientist can explain the hardest things in math in simple terms.

Great work bud. I also appreciate your high quality sound and gentle voice.

darkhydrastar

Simple and to-the-point explanation. You cleverly avoided the mathematical jargon.

soumilyade

This reminds me of harmonics in sound, where the function is one-dimensional (the strength of the sound depends on time), but a sound wave is also a complex function composed of simpler functions, namely the different frequencies or harmonics of the wave. That's the analogy I have in my head.

alexeypankov
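The commenter's analogy can be made concrete in a few lines of NumPy (the frequencies and amplitudes below are arbitrary choices for illustration): a composite waveform is just a sum of simple one-dimensional sinusoids, much as a KAN builds multivariate functions out of sums of univariate ones.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)  # one second of "time"

# Three harmonics of an (arbitrary) 220 Hz fundamental.
fundamental = 1.00 * np.sin(2 * np.pi * 220 * t)
overtone_1  = 0.50 * np.sin(2 * np.pi * 440 * t)
overtone_2  = 0.25 * np.sin(2 * np.pi * 660 * t)

# The "complex" sound wave is a plain sum of the simple parts.
wave = fundamental + overtone_1 + overtone_2
```
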

Excellent explanation and great examples. Thanks for sharing your knowledge!

johnandersontorresmosquera

Awesome explanation. The approach taken to understand a paper is really good. Solid job, mate.

saichaithanya

That was a great explanation. You make the concepts very easy to understand. Thank you!

NasrinAkbari-gepm

Amazing video! Great explanation & visuals. I tried to read the paper, but couldn't fully grasp it. Your video really helped my understanding.

ajk

This is the best explanation of the theorem I've found so far. I think I understood most of it when going through the paper, but this has really solidified and clarified what the proof is about.

pladselsker

Wow, I am sold, bro. This explanation was really good.

braineaterzombie

I get an itch in the back of my brain that KANs should be able to use some support-vector tricks. In particular, there should be a subset of training examples that supports the learned splines, with the others being hit "well enough" by interpolation. It's like learning the support vectors and the kernel at the same time. It should perhaps be possible to train an independent KAN per minibatch with a tightly restricted number of free parameters, and use this to (a) drop the non-supporting training examples and (b) concatenate/combine the learned parameters recursively.

mrpocock
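As a rough, hypothetical sketch of step (a) of the idea above (a low-degree polynomial fit stands in for a tiny restricted KAN; the data, degree, and threshold are all made up), one round of dropping well-interpolated examples might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 256)          # one "minibatch" of scalar inputs
y = np.sin(3.0 * x) + 0.05 * rng.normal(size=x.size)

# Stand-in for a tiny, heavily restricted KAN: a cubic least-squares fit.
coef = np.polyfit(x, y, deg=3)
resid = np.abs(np.polyval(coef, x) - y)

# Examples the cheap fit already hits "well enough" are candidates to drop;
# the high-residual ones play the role of "support" examples kept for the
# next, larger fit. Step (b) in the comment would merge the per-batch coefs.
support = resid > np.quantile(resid, 0.5)
x_kept, y_kept = x[support], y[support]
```
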

Finally!!
A clear explanation.
Thanks bro 🇮🇶

AliKamel

I cannot believe I actually understood this! Thank you very much ❤️👏👏👏👏🇧🇷🇧🇷🇧🇷🇧🇷

fatau_sertaneja

I loved your mathematical explanations! Thanks for this. Will sub to your Patreon :)

AdmMusicc

Great and simple explanation. Worthy of A. Karpathy
😀

jeankunz

Such an amazing work. Thank you for the video!

federicocolombo