NeurIPS 2019 Outstanding New Directions Paper Award w/ slides
Join the channel membership:
Subscribe to the channel:
Support and Donation:
BTC ⇢ bc1q2r7eymlf20576alvcmryn28tgrvxqw5r30cmpu
ETH ⇢ 0x58c4bD4244686F3b4e636EfeBD159258A5513744
Doge ⇢ DSGNbzuS1s6x81ZSbSHHV5uGDxJXePeyKy
Want to own BTC, ETH, or even Dogecoin? Kickstart your crypto portfolio on Binance, the largest crypto exchange, with my affiliate link:
Check the channel for more videos with slides.
NeurIPS 2019 Outstanding New Directions Paper Award: Uniform convergence may be unable to explain generalization in deep learning
Vaishnavh Nagarajan, J. Zico Kolter
Abstract:
We cast doubt on the power of uniform convergence-based generalization bounds to provide a complete picture of why overparameterized deep networks generalize well. While it is well-known that many existing bounds are numerically large, through a variety of experiments, we first bring to light another crucial and more concerning aspect of these bounds: in practice, these bounds can increase with the dataset size. Guided by our observations, we then present examples of overparameterized linear classifiers and neural networks trained by stochastic gradient descent (SGD) where uniform convergence provably cannot 'explain generalization', even if we take into account implicit regularization to the fullest extent possible. More precisely, even if we consider only the set of classifiers output by SGD that have test errors less than some small ϵ, applying (two-sided) uniform convergence on this set of classifiers yields a generalization guarantee that is larger than 1−ϵ and is therefore nearly vacuous.
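For viewers unfamiliar with the terminology, here is a rough sketch of the kind of bound the abstract refers to, written in my own notation (the symbols below are illustrative and not taken from the paper). A (two-sided) uniform convergence bound over a hypothesis class H controls the worst-case gap between population error and empirical error:

\[
\sup_{h \in \mathcal{H}} \bigl| L_{\mathcal{D}}(h) - \hat{L}_{S}(h) \bigr| \le \epsilon_{\mathrm{unif}},
\qquad \text{hence } L_{\mathcal{D}}(h) \le \hat{L}_{S}(h) + \epsilon_{\mathrm{unif}} \text{ for every } h \in \mathcal{H}.
\]

The paper's claim is that even if \(\mathcal{H}\) is restricted to exactly the classifiers SGD outputs, each with test error below some small \(\epsilon\), the tightest \(\epsilon_{\mathrm{unif}}\) that uniform convergence can certify is still larger than \(1-\epsilon\), so the resulting guarantee is nearly vacuous.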