5.3 Prox Gradient -- Rates of Convergence

Comments

At 5:07 it's actually the gradient of g(x).

rodasyt

Sir, why are we calling the convergence rate for the smooth and strongly convex case "linear"?

sandippaul
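
A note on the terminology, sketched under the standard definitions (the constants m, \beta, \rho and the iterates x_k below are the usual ones for an m-strongly convex, \beta-smooth objective; they are assumptions, not taken from the video): a rate is called linear (or geometric) when the distance to the optimum contracts by a fixed factor \rho \in (0, 1) at every iteration,

    \|x_{k+1} - x^*\|^2 \le \rho \, \|x_k - x^*\|^2, \quad \text{e.g. } \rho = 1 - m/\beta,

so that \|x_k - x^*\|^2 \le \rho^k \|x_0 - x^*\|^2. The name comes from the fact that the log of the error then decreases linearly in k, i.e. the error traces a straight line on a semi-log plot.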

Thank you so much for these lectures. A quick question: when upper bounding term (a) using \beta-smoothness (timestamp 4:40), we assume \beta <= \eta, right? But in the previous video, when you introduced this key lemma, you mentioned that \eta <= 1/\beta; I think the proof goes through by assuming \beta <= \eta instead. Am I missing something?

lylekim
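
A possible clarification, assuming the key lemma referred to is the standard descent lemma for a \beta-smooth function g (an assumption; the video's exact statement may differ). The lemma gives

    g(y) \le g(x) + \langle \nabla g(x), y - x \rangle + \frac{\beta}{2} \|y - x\|^2,

and replacing \beta/2 by 1/(2\eta) in this upper bound is valid precisely when \beta \le 1/\eta, which for positive \beta and \eta is the same condition as \eta \le 1/\beta. The condition \beta \le \eta is a different inequality and is not what the bound requires.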