From simplicity to complexity: the double descent effect in modern machine learning

Yizhe Zhu (University of Southern California, Los Angeles, USA)
Comments

Is W ~ N(0, I) randomly sampled at every training iteration, or is it sampled just once at the beginning of training and then kept fixed at those values?
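For context, here is a minimal sketch of the distinction the question raises, under my own assumptions rather than the talk's actual setup: in the random-features models commonly analyzed in the double descent literature, W ~ N(0, I) is drawn once before training and held fixed while only the output weights are fit; resampling W at every iteration would define a different model.

```python
# Minimal random-features regression sketch (hypothetical setup, not taken from the talk).
# The random weight matrix W ~ N(0, I) is drawn ONCE before training and never updated;
# only the output-layer weights are fit on the resulting fixed features.
import numpy as np

rng = np.random.default_rng(0)

n, d, p = 200, 20, 400                            # samples, input dim, number of random features
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)    # toy noisy target

# W is sampled once, before any fitting, and stays fixed throughout.
W = rng.normal(size=(d, p))                        # entries i.i.d. N(0, 1)

def features(X, W):
    """Fixed random features: phi(x) = ReLU(x @ W)."""
    return np.maximum(X @ W, 0.0)

Phi = features(X, W)

# Train only the output layer: ridge regression on the fixed random features.
lam = 1e-3
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)

train_mse = np.mean((Phi @ a - y) ** 2)
print(f"train MSE with fixed W: {train_mse:.4f}")
```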
