Debate: 'Does AI Need More Innate Machinery?' (Yann LeCun, Gary Marcus)

Debate between Yann LeCun and Gary Marcus at NYU, October 5, 2017. Moderated by David Chalmers. Sponsored by the NYU Center for Mind, Brain and Consciousness.
Comments

(1) Beyond his critique paper, in the talk above Gary reports that roughly 90% of NIPS papers do not seriously consider causal priors or innate machinery. In other words, he correctly points out that some papers do engage deeply with the innate structure he talks about, although very few.

(2) Gary is very helpful in publishing his critique paper, because it is a reminder that it is reasonable to consider innate structure (especially if his report is correct that 90% of NIPS papers do not consider causal priors or innate machinery).

(3.a) Yann brings up a crucial point when he mentions that there was no real motivation to heavily explore innate structure beyond the convolutional architecture, because:

(note that Yann didn’t exactly say the following, but I am paraphrasing and injecting information based on my own knowledge)

(3.b) Researchers largely knew a lot about the convolutional formulation (i.e. W*x + b, where * denotes convolution, instead of the fully connected W·x + b) as an example of an innate biological constraint, and so that is mostly what researchers worked on up until now (see the sketch at the end of this comment).

(3.c) Getting convolution to work was already challenging enough, and successful convolutional detectors only emerged in the last four years.


(4) So the big takeaway may be that it is time for many more researchers to start considering additional biological constraints, especially now that the visual constraint (the convolution equations) has already been so successful.

However, as Yann points out, it is important to minimize the amount of priors or innate structure manually integrated into learning models, so as not to harm overall learning performance.

This is challenging, but following cognitive science should help researchers find the right priors or innate structure, ones that support strong performance rather than stifle the learning model!
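A minimal numpy sketch of the contrast in (3.b), written as my own illustration rather than anything from the talk; the 28x28 input size and 3x3 kernel are arbitrary assumptions. It shows how the convolutional prior (local connectivity plus weight sharing) collapses the parameter count compared to a fully connected map.

import numpy as np

H, W_img = 28, 28                      # toy image size (assumption)
x = np.random.rand(H, W_img)           # toy input "image"

# Fully connected map W.x + b: one weight per (output, input) pair,
# with no built-in structure.
W_dense = np.random.randn(H * W_img, H * W_img)
b_dense = np.zeros(H * W_img)
y_dense = W_dense.dot(x.ravel()) + b_dense

# Convolutional map W*x + b: a single small kernel slid over the image,
# i.e. locality and translation-invariant weight sharing as the innate constraint.
# (Implemented as cross-correlation, which is what ML "convolution" usually means.)
k = np.random.randn(3, 3)
b_conv = 0.0
y_conv = np.zeros((H - 2, W_img - 2))
for i in range(H - 2):
    for j in range(W_img - 2):
        y_conv[i, j] = np.sum(k * x[i:i + 3, j:j + 3]) + b_conv

print("dense parameters:", W_dense.size + b_dense.size)  # 615440
print("conv parameters: ", k.size + 1)                   # 10

The point of the sketch is only that the convolutional version hard-codes a structural assumption about images; everything else about the two layers is the same.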

godbennett

In his talk (34:20), Yann is certainly correct that we have been _very bad_ at rationalizing what the built-in structures should be. When we put into a system the constraints that we imagine "make sense", they more often than not do not work in reality. This is why deep NNs got so much better when allowed to learn features on their own.

What we need is a principled way to find the clever constraints that would make learning the "structure of reality" easier, not harder. I think only by paying attention to the evolution of Cognitive Architecture in nature can we make progress in this direction.

cogoid

This is a very nice and insightful talk. Thank you for sharing!

OttoFazzl

This was a great discussion; it explained a lot about the actual state of the art. Fortunately, they didn't touch on lofty things like "consciousness". It means they're quite sober, both of them. Great.

reinerwilhelms-tricarico

The reason we only need a few crashes when learning to drive (or to walk, etc.) is not that we understand it's a bad idea (we know that immediately from the pain, which could be built into a robot as well, e.g. a force sensor with a certain threshold), nor that we have a good predictive model of the world.

The reason is that we know about other working options and how to pick them. We break the problem down into a few concepts/known situations, and can then reduce the search space very efficiently, because we only consider what the important differences are.

As LeCun himself says, there is no perfect prediction of the world: a spoon will fall differently every time, and we never have enough, complete enough, or precise enough data to predict exactly what will happen. Intelligence is about knowing enough tools to describe the problem at hand efficiently, thereby allowing better pattern matching (finding the right known approach), and then specializing it to the concrete situation.

A machine learning system just minimizes a cost function, so it ideally knows that it is costly to bump into a tree, but it still has to discover a better option. However, instead of changing its approach conceptually, it just tries minor modifications to improve, so it will still keep bumping into the tree many times, only at slightly different angles (see the toy sketch further down in this comment).

This shows that there is no high-level understanding of the scene and no concept of the tree as an obstacle. Predictive models alone are not enough; they need to predict useful units/categories. We see the tree as a unit with a clearly defined occupation of space, while the naive machine learning system sees the whole space as one grid of opposing forces (or color pixels, etc.).
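A toy sketch of the "minor modifications" behavior described above, entirely my own construction and not anything from the discussion; the cost shape, starting angle, and learning rate are all invented for illustration. A steering angle is adjusted only by small gradient steps on a collision cost, so the agent drifts away from the "tree" gradually, over many near-identical attempts, without ever representing the tree as a discrete obstacle.

import numpy as np

def cost(angle):
    # Hypothetical cost: a collision penalty that peaks when heading straight
    # at the tree (angle 0), plus a mild penalty for steering away.
    collision = np.exp(-(angle ** 2) / 0.1)
    detour = 0.1 * angle ** 2
    return collision + detour

angle = 0.05      # start heading almost straight at the tree (assumption)
lr = 0.05         # small learning rate: only "minor modifications" per step
eps = 1e-4        # step size for a numerical gradient

for step in range(20):
    grad = (cost(angle + eps) - cost(angle - eps)) / (2 * eps)
    angle -= lr * grad
    print(f"step {step:2d}: angle={angle:+.3f}  cost={cost(angle):.3f}")

# The angle changes only slightly per step: many near-collisions at slightly
# different angles, and never a discrete decision that "the tree is an obstacle".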

The real issue is that, so far, machine learning is not good at reusing learned knowledge, or at sharing and formulating abstract concepts that get instantiated in a specific situation.

You can try to reformulate this as an optimization problem, but that really misses the point: you first have to define the basic building blocks of generating novel ideas, recognizing similarity to already-known concepts (even if they are stored in an NN), combining/chaining them in useful ways, and then finally optimizing, at least to fine-tune some parameters.

Extracting concepts, sharing them across differently trained systems (in other words, real communication, not just linguistic translation), adapting them to new situations, and so on: these are the reasons why humans are much more efficient at learning and adapting to new situations.

maeltill

Generally, I agree with Gary's talk (10:20).

The Cognitive Architecture of animals contains an enormous amount of built-in machinery, which enables horses, for example, to run around within an hour of birth. As Gary says, in humans slow maturation makes it *seem* that "everything is learned", whereas in fact nothing would be learnable if the brain were merely a blob of "neuronium" with just a few learning principles built into it.

I only wish he would devote more attention to how we animals got to be this way, but the question of the evolution of Cognitive Architecture is barely mentioned (1:33:18).

cogoid

I wonder what they would think of GPT-4's current reasoning ability.

yuluqin

Marcus just destroyed the guy before LeCun even began; he 8-miled him.

haluk

We had to wait two whole weeks for DeepMind to publish a paper that nullifies all of Gary's arguments against AlphaGo.

anticlementous

Innate behavior is just hard-wired structure in the brain, such as a cat "knowing" how to give birth to and care for its kittens. Structure that arises later in life, over generations, gets passed along at the DNA level (a child born today would have no problem with abstract usage of a computer, but a child born hundreds of years ago wouldn't have the structure to build on, though it could still learn). What is missing in NNs, DNNs, ML, RL, etc., is a structure that changes. Dynamic and asymmetric interconnects would be a better model of the brain, but then you have the problem of large, discrete structures that don't allow for abstract thinking (still a local-maximum problem).

sdmarlow

LeCun not knowing that babies have blurry vision, that before 5 months they care mostly about their toys and their mothers, that they have tons of other strong senses like touch (somatosensors) and proprioception (even though they have little control over them), and that they have two EYES and a FREAKING BODY: "Babies watch videos to learn about joint tracking, object permanence, solidity, categories, guys!" Cochlea, attention, and explicit representations, did you ever hear about them? You sometimes use your ears to see and your eyes to hear. It is cool that those are so automatic that you can't figure this out, but figure it out, man.

haluk