Modified Newton method | Backtracking Armijo | Theory and Python Code | Optimization Techniques #5

In this one, I will show you what the modified Newton algorithm is and how to use it with a backtracking line search based on the Armijo rule. We will approach both methods from intuitive and animated perspectives. The difference between the damped Newton method and the modified Newton method is that the Hessian may become singular at some iterations, so the modified method applies diagonal loading, also known as Tikhonov regularization, at each iteration. As a reminder, damped Newton, just like Newton's method, builds a local quadratic approximation of the function from information at the current point and then jumps to the minimum of that approximation. Just imagine fitting a little quadratic surface to your surface at the current point in higher dimensions, and then moving to the minimum of that approximation to find the next point; finding the direction toward the minimum of the quadratic approximation is what you are doing. As a matter of fact, this animation shows you why, in certain cases, Newton's method can converge to a saddle point or a maximum: if the eigenvalues of the Hessian are non-positive, the local quadratic approximation is an upside-down paraboloid. Next, we talk about the line search we are going to use in this tutorial, which is Armijo backtracking. It keeps shrinking the step size until the Armijo condition holds, that is, until f(x + αd) ≤ f(x) + cα∇f(x)ᵀd for a small constant c in (0, 1), which guarantees a sufficient decrease of the function. Of course, looking at the Armijo condition equation as is might not reveal any insights, but geometrically it looks beautiful; let me show you how.
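For anyone who wants to see the two ideas in code right away, here is a minimal NumPy sketch (my own illustrative version, not the exact implementation from the video; the names armijo_backtracking and modified_newton and the Rosenbrock test problem are my choices). Diagonal loading adds μI to the Hessian until a Cholesky factorization succeeds, i.e., until it is positive definite, and Armijo backtracking halves the step size until the sufficient-decrease condition holds.

import numpy as np

def armijo_backtracking(f, grad, x, d, alpha=1.0, beta=0.5, c=1e-4):
    # Shrink alpha until the Armijo sufficient-decrease condition holds:
    # f(x + alpha*d) <= f(x) + c * alpha * grad(x)^T d
    fx, slope = f(x), grad(x) @ d
    while f(x + alpha * d) > fx + c * alpha * slope and alpha > 1e-16:
        alpha *= beta
    return alpha

def modified_newton(f, grad, hess, x0, tol=1e-8, max_iter=100, mu0=1e-4):
    # Newton's method with diagonal loading (Tikhonov regularization):
    # increase mu until H + mu*I is positive definite, then take a damped step.
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H, mu = hess(x), mu0
        while True:
            try:  # Cholesky succeeds only for positive definite matrices
                L = np.linalg.cholesky(H + mu * np.eye(n))
                break
            except np.linalg.LinAlgError:
                mu *= 10.0
        # Solve (H + mu*I) d = -g using the Cholesky factor
        d = np.linalg.solve(L.T, np.linalg.solve(L, -g))
        x = x + armijo_backtracking(f, grad, x, d) * d
    return x

# Example: minimize the Rosenbrock function, minimum at (1, 1)
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*x[1] + 1200*x[0]**2, -400*x[0]],
                           [-400*x[0], 200.0]])
print(modified_newton(f, grad, hess, [-1.2, 1.0]))  # -> approx. [1. 1.]

Because the regularized Hessian is positive definite, the computed direction is always a descent direction, so the backtracking loop terminates for a small enough step size.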

⏲Outline⏲
00:00 Introduction
00:57 Modified Newton Method
03:44 Backtracking by Armijo
06:41 Python Implementation
24:41 Animation Module
40:12 Animating Iterations
43:32 Outro

📚Related Courses:



🔴 Subscribe for more videos on CUDA programming
👍 Smash that like button if you find this tutorial useful.
👁‍🗨 Speak up and comment, I am all ears.

💰 If you are able to, donate to help the channel
BTC wallet - 3KnwXkMZB4v5iMWjhf1c9B9LMTKeUQ5viP
ETH wallet - 0x44F561fE3830321833dFC93FC1B29916005bC23f
DOGE wallet - DEvDM7Pgxg6PaStTtueuzNSfpw556vXSEW
API3 wallet - 0xe447602C3073b77550C65D2372386809ff19515b
DOT wallet - 15tz1fgucf8t1hAdKpUEVy8oSR8QorAkTkDhojhACD3A4ECr
ARPA wallet - 0xf54bEe325b3653Bd5931cEc13b23D58d1dee8Dfd
QNT wallet - 0xDbfe00E5cddb72158069DFaDE8Efe2A4d737BBAC
AAVE wallet - 0xD9Db74ac7feFA7c83479E585d999E356487667c1
AGLD wallet - 0xF203e39cB3EadDfaF3d11fba6dD8597B4B3972Be
AERGO wallet - 0xd847D9a2EE4a25Ff7836eDCd77E5005cc2E76060
AST wallet - 0x296321FB0FE1A4dE9F33c5e4734a13fe437E55Cd
DASH wallet - XtzYFYDPCNfGzJ1z3kG3eudCwdP9fj3fyE

This lecture covers several optimization techniques.

#python #optimization #algorithm
Comments

It just needs more videos to get rocket growth!! Very good quality stuff.

IQRAAHMED

Thank you for your amazing comment :-)

diyonex-sosyalicerikplatfo

GWAKAMOLE!!!! I really wish I had come across your video before I took the painful way to learn all this… definitely a big recommendation for all the people I know who have just started optimization courses. Great work!

rvan

Amazing explanation! This is very helpful for understanding. Thanks a lot, sir.

angryprashnepal

This course has literally changed my life. Two years ago I started learning optimization from this course and convex optimization from Ahmad's famous course, and now I am a research scientist at a great research institute. Thanks Ahmad ♥️

kartalgaming

Thank you, Ahmad, for this amazing tutorial video. I am a researcher and work on large datasets. Basically, I am a wet-lab scientist, but I really want to learn Python to write code that will help me analyze huge datasets. When I started working with large datasets, I did all the calculations and sorted the data manually, which is very labor intensive. Python will certainly help me minimize my effort and give me more robust data-analysis power.

twobros

Hi Ahmad! A big thank you from India. I have been a materials science engineer for the last 6 years and, to be frank, never learnt any programming language. Today I am starting my Python journey in a quest to enter the field of machine learning and artificial intelligence for the simulation of advanced materials. You have made this course super easy for absolute beginners like me. Hopefully I will complete this course and then proceed to the Python mastery course. And happy Teacher's Day!

namansharma

This is such high-quality stuff! Thanks for the graphical explanation at the beginning!

djhiploza

Excellent video. I especially liked how you linked it back to the root finding version we learned in school. My one beef with this video is that that's an unfair depiction of Tuco.

lyrex

Wow! This is amazing work, man, thank you.

samarendradash

This is brilliant, thank you. I hope you give us more visual insight into calculus-related things.

mustafasamet

Another problem is that under negative curvature, the method climbs uphill. For example, ML loss functions tend to have many saddle points, which attract the method, so gradient descent is used instead, because it can find a direction down from the saddle (see the sketch below).

PubgMobile-veij
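To illustrate this point, a toy example I am adding here (not from the video): on f(x, y) = x² − y², whose Hessian has eigenvalues +2 and −2, a pure Newton step lands exactly on the saddle at the origin, while a gradient-descent step moves away from it along y.

import numpy as np

# Saddle point demo: f(x, y) = x^2 - y^2, saddle at the origin
grad = lambda p: np.array([2.0 * p[0], -2.0 * p[1]])
hess = np.array([[2.0, 0.0], [0.0, -2.0]])  # indefinite: eigenvalues +2, -2

p = np.array([1.0, 0.5])
newton = p - np.linalg.solve(hess, grad(p))  # lands exactly on the saddle (0, 0)
gd = p - 0.1 * grad(p)                       # y grows: steps away from the saddle
print(newton, gd)                            # [0. 0.] [0.8 0.6]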

Brilliant explanation, thank you so much.

youssef

Gorgeous tutorial! I had never even seen the Python interface in my life before, but with the help of your videos I feel like I understand a lot. Also, when you let us try the 'homework', that's where the experience shows: my code is always longer and less intuitive. I also seem to opt for the input command a lot more. Something about interacting with the code always gets me!

prdgy

Crystal clear explanation, thank you!

beyazhacker

Thank you for the words of encouragement, I appreciate it!

sysaa

I love this video. I feel so privileged to be growing up in an era where knowledge is so easily available. Ahmad is really helping to improve my and many others' opportunities.

GLiveStreamVoice

Sure. Consider the quadratic approximation f(x) ≈ f(xk) + f'(xk)(x − xk) + (1/2) f''(xk)(x − xk)² at the bottom of the screen at 7:06. To minimize the right-hand side, take the derivative with respect to x and set it to zero, i.e., f'(xk) + f''(xk)(x − xk) = 0. Solving for x gives x = xk − f'(xk) / f''(xk) (a quick numeric check is sketched below).

relaxandsleep
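A quick numeric check of that update (f(x) = x⁴ is my own toy example, not from the video):

# Newton update x_{k+1} = x_k - f'(x_k) / f''(x_k) for f(x) = x^4
f1 = lambda x: 4 * x**3    # f'(x)
f2 = lambda x: 12 * x**2   # f''(x)
xk = 1.0
print(xk - f1(xk) / f2(xk))  # 2/3, the minimizer of the quadratic model at xk = 1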

Hello, great video. I am currently following a course on non-linear optimization and I would like to make videos like this for my own problems. I'm a visual learner and this video is exactly what I'm looking for! Great content!

iredikstr

This was such an awesome explanation. So grateful, thank you.

twitchclass