Do Statistical Models Understand the World? - Ian Goodfellow, Research Scientist, Google

Machine learning algorithms have reached human-level performance on a variety of benchmark tasks. This raises the question of whether these algorithms have also reached human-level 'understanding' of these tasks. By designing inputs specifically to confuse machine learning algorithms, we show that statistical models ranging from logistic regression to deep convolutional networks fail in predictable ways when presented with statistically unusual inputs. Our results suggest that deep networks have the potential to overcome this problem, but modern deep networks behave too much like shallow, linear models.
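One concrete attack in this vein is the fast gradient sign method from Goodfellow et al.'s paper 'Explaining and Harnessing Adversarial Examples', which formalizes the linearity argument above. Below is a minimal, illustrative PyTorch sketch, not the talk's exact procedure; the names model, x, y, and epsilon are assumptions standing for a differentiable classifier, an input batch scaled to [0, 1], integer class labels, and the perturbation budget.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    # One-step fast gradient sign attack (illustrative sketch).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # To first order, the loss increases fastest along sign(gradient);
    # this linear behavior is exactly what the talk argues modern deep
    # networks exhibit, making such perturbations effective.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed input in the valid pixel range.
    return x_adv.detach().clamp(0.0, 1.0)
```

The perturbation is small enough to leave the input visually unchanged, yet the single gradient-sign step is often enough to flip the model's prediction, which is the sense in which the abstract says these models fail in predictable ways on statistically unusual inputs.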

At the time of this presentation, Ian Goodfellow was a research scientist at Google. He earned a PhD in machine learning from Université de Montréal in 2014. His PhD advisors were Yoshua Bengio and Aaron Courville. His studies were funded by the Google PhD Fellowship in Deep Learning. During his PhD studies, he wrote Pylearn2, the open-source deep learning research library, and introduced a variety of new deep learning algorithms. Previously, he obtained a BSc and MSc in computer science from Stanford University, where he was one of the earliest members of Andrew Ng's deep learning research group.
Comments

The term ‘adversarial examples’ is a misnomer: there are no adversarial examples; all examples are the same. The fault lies in the model itself, or, if you like, in the purely statistical learning paradigm that looks only at ‘data’. There will never be any true generalization without moving from ‘data’ to ‘information’ and, indeed, ‘knowledge’. Objects are not “truly” equal just because their ‘data’ values are equal. That is extensional equality, the equality that works in most cases, where the numeric data value is all that matters. But true equality is ‘intensional’ equality (yes, with an ‘s’, not a ‘t’; that is not a typo). Here is an example:
(1) SQRT(256) = 16
(2) Sandy taught her little brother that 7 + 9 = 16
(1) is true because that is what our grade school teachers taught us. (2) is true because it happened; we saw it!
Now, if we replace ‘16’ in (2) with a (data) value equal to it, namely SQRT(256), we obtain something false from two true statements:
(3) Sandy taught her little brother that 7 + 9 = SQRT(256)
What happened???
What happened is that 16 and SQRT(256) are equal, but in a very simplistic way: they are equal in ONE ATTRIBUTE ONLY, namely their data value. As objects they are not equal at all; they differ in many other attributes.
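A minimal Python sketch of this distinction (using the standard ast module to compare parse trees is an illustrative stand-in for intensional equality, not the commenter's own proposal):

```python
import ast
import math

# Extensional equality: both expressions denote the same numeric value.
print(math.sqrt(256) == 16)  # True: equal in the one attribute 'value'

# Intensional equality: compare the expressions themselves as objects.
# Their parse trees differ, so as objects they are not equal.
expr_a = ast.dump(ast.parse("sqrt(256)", mode="eval"))
expr_b = ast.dump(ast.parse("16", mode="eval"))
print(expr_a == expr_b)  # False: different structure, different attributes
```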
That is what is behind the so-called ‘adversarial examples’ in DNNs.
A little knowledge is a dangerous thing. It is time to respect 300 years of work in formal logic, intensions, semantics, and metaphysics.
Or keep wasting time…

sabawalid