Universal Approximation Theorem

Can a neural network approximate ANY function?
00:00 Theorem
04:38 Proof
15:55 Joke Break
16:53 Python Demo
20:35 Limits of the Theorem
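A minimal sketch of the kind of construction the theorem (and the Python demo) describes, approximating a function with a one-hidden-layer network of Heaviside-step units. All names and numbers here are my own illustration, not the video's code:

```python
import numpy as np

def heaviside(z):
    # step activation: 1 where z >= 0, else 0
    return (z >= 0).astype(float)

def step_net(x, f, n_pieces):
    """Piecewise-constant approximation of f on [0, 1):
    each pair of opposing steps carves out one constant bump."""
    edges = np.linspace(0.0, 1.0, n_pieces + 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(n_pieces):
        # indicator of [edges[i], edges[i+1]) built from two steps
        bump = heaviside(x - edges[i]) - heaviside(x - edges[i + 1])
        out += f((edges[i] + edges[i + 1]) / 2) * bump
    return out

xs = np.linspace(0.0, 0.999, 1000)
for n in (4, 16, 64):
    err = np.max(np.abs(step_net(xs, np.sin, n) - np.sin(xs)))
    print(n, err)
```

The maximum error shrinks as the number of hidden units grows, which is the qualitative content of the theorem: more neurons buy a finer partition, hence a better uniform approximation on the compact interval.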
Comments

Very clear presentation, thank you. Just curious: is this proof of universal approximation the one given in Cybenko's original paper? I never read it, but I remember that the argument had something to do with Fourier transforms...

StratosFair

Very clear and nice lecture! Thanks a lot!

t-gee

Analysis nerd here. I think you should be muttering things about compact spaces having finite subcovers to show that you have a finite number of pieces.

tomwright

Thank you for such a lucid explanation!!

ranjanmukherjee

Accuracy can also be improved by increasing the number of neurons in the single hidden layer.

zofe

Your proof is very easy to understand. How can I get the source code used in this video? Thanks.

lehoangtuan

Couldn't the error between your approximation and the true value be arbitrarily big?

nohofoos

In higher dimensions, when we construct the piecewise-constant functions, do we need to multiply multiple Heaviside step functions together, one per dimension?
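One way to realize what this comment suggests (my own sketch, not the video's code): in d dimensions, the indicator of an axis-aligned box is a product of d one-dimensional "double step" factors H(x_k - a_k) - H(x_k - b_k). Note the caveat that a product of steps is no longer a single hidden-layer unit, so a proof along these lines needs a further step to rewrite the products:

```python
import numpy as np

def heaviside(z):
    return (z >= 0).astype(float)

def box_indicator(x, lo, hi):
    """x: (n, d) points; lo, hi: (d,) box corners.
    Returns (n,) array that is 1 inside the box [lo, hi), else 0."""
    # per-dimension double step, shape (n, d)
    factors = heaviside(x - lo) - heaviside(x - hi)
    # multiply across dimensions: all factors must be 1
    return np.prod(factors, axis=1)

pts = np.array([[0.5, 0.5], [1.5, 0.5], [0.2, 0.9]])
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
print(box_indicator(pts, lo, hi))  # -> [1. 0. 1.]
```

Summing such indicators scaled by sample values of f gives the higher-dimensional analogue of the piecewise-constant approximation.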

ibrahim-work

Are the alphas in the summation of G's expression different for different x?
If not, how can I select the right piece function for that x?
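My reading of this question, as a small sketch with made-up numbers: the alpha_i in G(x) = sum_i alpha_i sigma(w_i x + b_i) are fixed constants that do not depend on x; the step activations themselves decide which terms are "active" at a given x, so each alpha_i acts as the increment between neighbouring pieces:

```python
def heaviside(z):
    return 1.0 if z >= 0 else 0.0

# Fixed parameters, the same for every x (illustrative values only):
# steps switch on at these edges, and each alpha is the jump added there.
edges  = [0.0, 0.25, 0.5, 0.75]
alphas = [2.0, -1.0, 3.0, 0.5]

def G(x):
    # alphas never change with x; only the steps turn on and off
    return sum(a * heaviside(x - e) for a, e in zip(alphas, edges))

print(G(0.1), G(0.3), G(0.6), G(0.9))  # -> 2.0 1.0 4.0 4.5
```

So no selection logic is needed: evaluated at any x, the sum automatically produces the value of the piece containing x.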

sanketgandhi

Assume N in Theorem 2 is a fixed number. Can the functions G(x) still be dense in C(I_n)?

yonac.-kenv