Lecture 2 | The Universal Approximation Theorem

Carnegie Mellon University
Course: 11-785, Intro to Deep Learning
Offering: Fall 2019

Contents:
• Neural Networks as Universal Approximators
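The core idea of the lecture's topic can be sketched numerically. This is a hedged illustration, not the lecture's own construction: a single hidden layer of threshold units can approximate a 1-D function by building "bumps", where each bump is the difference of two step units and is scaled by the function's value at its centre.

```python
import numpy as np

# One hidden layer of threshold units can approximate a 1-D function
# by summing "bumps"; each bump is the difference of two step units.
def step(x):
    return (x >= 0).astype(float)

def bump(x, left, right):
    # step(x - left) - step(x - right) is 1 on [left, right), 0 elsewhere
    return step(x - left) - step(x - right)

def approximate(f, x, n_bumps=200, lo=0.0, hi=2 * np.pi):
    # Piecewise-constant approximation: one bump per sub-interval,
    # scaled by f evaluated at the interval's midpoint.
    edges = np.linspace(lo, hi, n_bumps + 1)
    out = np.zeros_like(x)
    for left, right in zip(edges[:-1], edges[1:]):
        out += f((left + right) / 2) * bump(x, left, right)
    return out

x = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
err = np.max(np.abs(approximate(np.sin, x) - np.sin(x)))
# err shrinks as n_bumps grows: more hidden units, better approximation
```

With 200 bumps over [0, 2π), each sub-interval has width ≈ 0.031, so the error is bounded by roughly half that times the slope of sin, i.e. well under 0.05.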
Comments

A great explanation... Thank you so much.

cedricmanouan

Couldn't help but think of 3B1B's videos on Hamming codes while watching this.

ian-haggerty

Very nice lecture. I feel I understand better why neural networks work.

adhoc

Such a cool explanation. Can anyone (in particular, any student from this course) provide a link to a mathematical explanation of the content from 35:00 to 45:00? Lecturers usually provide references to such material. Please do not share the reference papers already listed in this video.

Learner_

A great and very clear lecture. Thank you.

samanrazavi

At 15:25, isn't the total input N − L if the first L inputs are 0 and the last N − L inputs are 1?

bhargavram
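The sum in the comment above can be checked directly. This is a minimal sketch under the setup the comment describes (N inputs, the first L equal to 0, the remaining N − L equal to 1) with all unit weights assumed to be 1: the total input is then the count of active inputs, N − L.

```python
# Check the weighted sum discussed in the comment above:
# N inputs, first L are 0, last N - L are 1; all weights assumed 1.
N, L = 10, 3
inputs = [0] * L + [1] * (N - L)
weights = [1] * N
total = sum(w * x for w, x in zip(weights, inputs))
# total is the number of active inputs, i.e. N - L
```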

Can anyone explain the inequality at 41:30 and the equation at 42:00?

Thanks.

husamalsayed

This guy is confusing. No good explanations. I have doubts about the two-circle, one-hidden-layer solution. He needs an OR operation as a third layer; otherwise regions outside the two circles will also be above the threshold.

smftrsddvjiou
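The construction the comment above questions can be sketched numerically. This is a hedged illustration, not the lecture's exact network: each circle detector is idealized as an indicator unit, and the output unit thresholds their sum. Thresholding the sum at ≥ 1 acts as the OR in question, firing only for points inside at least one circle.

```python
import numpy as np

# Idealized sketch: each hidden unit indicates membership in one circle;
# the output unit thresholds the sum of hidden activations. A threshold
# of >= 1 on the sum implements OR over the two regions.
def circle_unit(x, y, cx, cy, r):
    # 1.0 inside the circle of radius r centred at (cx, cy), else 0.0
    return ((x - cx) ** 2 + (y - cy) ** 2 <= r ** 2).astype(float)

def two_circle_net(x, y):
    h1 = circle_unit(x, y, -2.0, 0.0, 1.0)   # hidden unit: left circle
    h2 = circle_unit(x, y, 2.0, 0.0, 1.0)    # hidden unit: right circle
    return (h1 + h2 >= 1.0).astype(float)    # output threshold acts as OR

# Points inside a circle fire; a point between the circles does not.
inside_left = two_circle_net(np.array(-2.0), np.array(0.0))
between = two_circle_net(np.array(0.0), np.array(0.0))
```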

Is this an undergrad-level course or a grad-level course?

ITPCD