Visualization of the universal approximation theorem

Illustration of how a neural net with one hidden layer can approximate a function.
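For readers who want to poke at the idea themselves, here is a minimal sketch (not the video's actual code) of a one-hidden-layer ReLU network fit to sin(x) with plain numpy gradient descent. The hidden-layer size, learning rate, and step count are arbitrary choices for illustration:

```python
# One hidden layer of ReLU units, trained to approximate sin(x).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)   # inputs, shape (200, 1)
y = np.sin(x)                                        # target function

H = 20                                   # number of hidden units (arbitrary)
W1 = rng.normal(0, 1, (1, H))            # input -> hidden weights
b1 = rng.normal(0, 1, (1, H))            # hidden biases
W2 = rng.normal(0, 0.1, (H, 1))          # hidden -> output weights
b2 = np.zeros((1, 1))

lr = 0.01
for step in range(20000):
    z = x @ W1 + b1                      # linear part of each hidden unit
    a = np.maximum(z, 0.0)               # ReLU activation
    y_hat = a @ W2 + b2                  # weighted sum of the pieces
    err = y_hat - y

    # Backpropagation of the mean squared error.
    n = len(x)
    gW2 = a.T @ err / n
    gb2 = err.mean(axis=0, keepdims=True)
    dz = (err @ W2.T) * (z > 0)          # ReLU gradient gates the error
    gW1 = x.T @ dz / n
    gb1 = dz.mean(axis=0, keepdims=True)

    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

print("final MSE:", float((err ** 2).mean()))
```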

Comments

Hmm, makes sense... so it finds the best possible linear function, applies the activation, and then finally adds them all up together to join them.

ikartikthakur

Hello, thank you for your video, it helps with understanding, but I have a question: how does the NN choose the "steps" where it cuts? Because from what I understood, for the next layer (i.e. f hat here) we simply take the sum, as you did in blue below. But if we take the actual sum of the functions, won't we just get a nonlinear function that looks like a ReLU?

y.
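On the breakpoint question above: each hidden unit v * relu(w*x + b) bends at x = -b/w, so the "steps" are not chosen in advance; gradient descent moves w and b, and the kinks move with them. And the sum of several such units is a general piecewise-linear zig-zag, not a single ReLU shape. A small sketch with hand-picked weights (illustrative only, not the video's values):

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

# Three hand-picked units (v, w, b), each with a kink at x = -b/w.
units = [( 1.0, 1.0,  0.0),    # kink at x = 0
         (-2.0, 1.0, -1.0),    # kink at x = 1
         ( 2.0, 1.0, -2.0)]    # kink at x = 2

def f_hat(x):
    return sum(v * relu(w * x + b) for v, w, b in units)

x = np.linspace(-1, 3, 9)
print(np.round(f_hat(x), 2))
# Piecewise linear with slope changes at 0, 1, and 2:
# already a zig-zag, not a single ReLU shape.
```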

Thanks for the clear visualization! In this case the activation function is ReLU, right? Sigmoid would look different.

jingli
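On the activation question: with ReLU each hidden unit contributes a hinge, so the sum is piecewise linear; with a sigmoid each unit contributes a smooth step, so the sum is a smooth curve. A tiny comparison with made-up weights:

```python
import numpy as np

x = np.linspace(-4, 4, 9)
w, b, v = 2.0, 0.0, 1.0                              # illustrative values

relu_unit    = v * np.maximum(w * x + b, 0.0)        # a hinge: flat, then linear
sigmoid_unit = v / (1.0 + np.exp(-(w * x + b)))      # a smooth step from 0 to v

print(np.round(relu_unit, 2))
print(np.round(sigmoid_unit, 2))
# Summing hinges gives a piecewise-linear fit; summing smooth steps
# gives a smooth fit. Both families suffice for the theorem.
```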

Can you do this with more layers? I want to know how adding more of them increases the complexity of the function.

Sammy

Could you visualize the same thing but with multiple hidden layers?

cffex
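On the multiple-hidden-layers questions above: a standard way to see why extra layers add complexity is to compose a ReLU-built "hat" (triangle) function with itself; each composition doubles the number of linear pieces, which a single hidden layer could only match with many more units. A sketch (not from the video):

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def hat(x):
    # Triangle on [0, 1] built from ReLUs: 2x - 4*relu(x - 0.5).
    return 2 * x - 4 * relu(x - 0.5)

x = np.linspace(0, 1, 9)
print(np.round(hat(x), 2))            # 1 layer:  2 linear pieces
print(np.round(hat(hat(x)), 2))       # 2 layers: 4 pieces
print(np.round(hat(hat(hat(x))), 2))  # 3 layers: 8 pieces
```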

How do you do these visualizations? Please help.

UCPlay
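The video doesn't say what tooling it uses; one plausible way to recreate the picture is matplotlib, plotting each hidden unit's contribution and their sum against the target. The weights below are hand-tuned for illustration only:

```python
import numpy as np
import matplotlib.pyplot as plt

relu = lambda t: np.maximum(t, 0.0)
x = np.linspace(-np.pi, np.pi, 400)
target = np.sin(x)

# Hand-tuned units (v, w, b) whose sum roughly traces sin(x).
units = [(-2 / np.pi, 1.0, np.pi),       # kink at x = -pi
         ( 4 / np.pi, 1.0, np.pi / 2),   # kink at x = -pi/2
         (-4 / np.pi, 1.0, -np.pi / 2)]  # kink at x = +pi/2

pieces = [v * relu(w * x + b) for v, w, b in units]

fig, ax = plt.subplots()
ax.plot(x, target, "k--", label="target sin(x)")
for i, p in enumerate(pieces):
    ax.plot(x, p, alpha=0.5, label=f"unit {i}")
ax.plot(x, np.sum(pieces, axis=0), "b", lw=2, label="sum f_hat")
ax.legend()
plt.show()
```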