The Epistemology of Deep Learning - Yann LeCun

Deep Learning: Alchemy or Science?

Topic: The Epistemology of Deep Learning
Speaker: Yann LeCun
Affiliation: Facebook AI Research/New York University
Date: February 22, 2019

Comments

Absolutely fantastic overview. It only missed some of the early material (most people don't actually read Rosenblatt and don't know he was inspired by Hayek's first publication on neurons, which has lots of great insights), but it was great to see the historical footage as well as anecdotes from the source.

ArtOfTheProblem

This point about rejecting papers based on taboo happens even today with the theories of the Ether vs. Space-Time vs. the Zero-Energy Field (to me all three are the same concept, but the first is taboo among scientists).

redradist

"Theory prunes our empirical search space". This is circular argument because empiricism itself is a theoretical concept that elevates or uniquely identifies sensory experience as source of knowledge.

alihammadshah

This talk should be titled "Lessons from the History of Neural Networks" instead of "The Epistemology of Deep Learning". The epistemological paradigm of deep learning was not explored in depth, and the talk was more about the history of neural networks.

liuhh

Deep learning is to computer science what quantum mechanics was (and still is) to physics. By the same analogy, that does not mean that we lack a theory.

InaCentaur

It is interesting and thought-provoking, but the epistemological part presented at the end of the talk deserves closer examination.

After a long review of the history of neural nets and statistical machine learning, he essentially explains the second neural-net winter by various historical factors [43:00], and then states an opinion (his "controversial proposition" [52:00]) on the detrimental role of rigorous mathematics, which presumably slowed down the development of neural-network research at the time.

I think his "proposition" should be addressed more thoroughly. The main points I see are:

1) The 'proposition' is not at all needed to explain the second NN winter (1996-2006), since a historical explanation (mainly based on technical limitations) was given just before.

2) Most of the mathematical approaches he cites were practical tools that worked (for the time), not just rigorous maths. However, they all rest on rigorous mathematical foundations that allow us to understand how they work.
Deep learning, on the other hand, was only beginning at the time and became successful enough to justify a paradigm switch only after 2010. So yes, we know NOW that DL is far better (for some tasks) than many of these approaches, but that was not necessarily clear at the time; SVMs were even regarded as the state of the art.
And even now, DL has no proper theory to explain why it works.

3) On the theoretical side, there is an equivalence between deep learning models, Gaussian processes, and kernel methods (see the sketch after this list). So it is very possible that the rigorous theories behind the approaches we no longer use could still turn out to be relevant for eventually explaining deep learning in some way.

If we had a DL theory, we could also generalize the approach, which would surely be fruitful. It might also give us a better idea about deep spiking neural networks (for neuromorphic chips), for instance.

4) The relations between empiricism and theory run in both directions and can be very complicated. In any case, there should be people working on both sides.
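
To make point 3 concrete: at infinite width, a one-hidden-layer ReLU network with Gaussian weights defines a Gaussian process whose covariance has a closed form, the arc-cosine kernel of Cho & Saul (2009). Below is a minimal NumPy sketch of my own (an illustration, not from the talk; the function names are mine) showing the empirical kernel of a wide random network converging to that closed form.

# Minimal sketch (my own illustration, not from the talk): the empirical
# kernel of a wide, randomly initialized one-hidden-layer ReLU network
# approaches the analytic arc-cosine (NNGP) kernel as the width grows.
import numpy as np

rng = np.random.default_rng(0)

def arccos_kernel(X):
    # Closed form of E[relu(w.x) relu(w.y)] for w ~ N(0, I) (Cho & Saul, 2009):
    # K(x, y) = ||x|| ||y|| (sin t + (pi - t) cos t) / (2 pi), t = angle(x, y).
    norms = np.linalg.norm(X, axis=1)
    cos_t = np.clip((X @ X.T) / np.outer(norms, norms), -1.0, 1.0)
    t = np.arccos(cos_t)
    return np.outer(norms, norms) * (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)

def empirical_kernel(X, width):
    # Monte Carlo estimate of the same expectation using `width` hidden units.
    W = rng.standard_normal((width, X.shape[1]))
    H = np.maximum(W @ X.T, 0.0)   # ReLU activations, shape (width, n)
    return H.T @ H / width         # average over hidden units

X = rng.standard_normal((5, 3))    # 5 random inputs in R^3
for width in (100, 10_000, 1_000_000):
    err = np.max(np.abs(empirical_kernel(X, width) - arccos_kernel(X)))
    print(f"width={width:>9}: max |K_emp - K_analytic| = {err:.4f}")

The gap should shrink at the usual Monte Carlo rate of 1/sqrt(width); in the infinite-width limit the network prior is exactly a Gaussian process with this kernel, which is the correspondence point 3 alludes to.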

和平和平-ci

I can never understand why we make heroes and then accept anything they say (insecurity?). There are so many false claims and statements in that talk that it baffles me that people listened and agreed throughout like baby robots. Sad, to say the least.

sabawalid

Thanks for sharing, Institute for Advanced Study.

yank

Nature has demonstrated that emergent novelty is NP-hard, so while deep AI may out-smart us at condensing knowledge, it will never out-invent us or expand the Hilbert space.

michaelmilbocker

1:03:00 I'm not sure an infant _doesn't_ need a million training samples to tell a dog from a cat.

IsaacGerg