Constrained Optimization: Intuition behind the Lagrangian

This video introduces a really intuitive way to solve a constrained optimization problem using Lagrange multipliers. We can use them to find the minimum or maximum of a function, J(x), subject to the constraint C(x) = 0.
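
For reference, the construction the video builds intuition for can be written out in the description's notation (a standard sketch, not a transcription of the video's derivation):

    % Lagrangian for extremizing J(x) subject to C(x) = 0
    \mathcal{L}(x, \lambda) = J(x) + \lambda \, C(x)

    % A stationary point of L recovers both conditions at once:
    \nabla_x \mathcal{L} = \nabla J(x) + \lambda \nabla C(x) = 0
        \implies \nabla J(x) = -\lambda \nabla C(x)      % the two gradients are parallel
    \partial \mathcal{L} / \partial \lambda = C(x) = 0   % the constraint is satisfied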

--------------------------------------------------------------------------------------------------------

© 2023 The MathWorks, Inc. MATLAB and Simulink are registered trademarks of The MathWorks, Inc.
Comments

My god, two weeks of lectures explained in one video. You are great, man.

d_chip

You are one of a kind, bro. The way you explain intuitions gets me excited every time.

vnagamohankrishnap

Most inspiring video I've ever seen. I got two takeaways: transforming an unsolvable problem into an equivalent solvable one, and the gradient is a good tool for getting there.

ryanfeng

Wish this was the way it was explained in university. Liked and subbed.

Joshjson

“You’re not going to be solving it by hand.”

*laughs then cries in graduate student*

KHMakerD
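
Picking up on the quoted line above: in practice a numerical solver handles the Lagrange conditions for you. A minimal sketch in Python, assuming a made-up quadratic objective and linear constraint (neither is from the video; SciPy's SLSQP is one solver choice, and MATLAB's fmincon plays the same role):

    # Minimize J(x) = x1^2 + x2^2 subject to C(x) = x1 + x2 - 1 = 0.
    # Both functions are illustrative assumptions, not the video's example.
    import numpy as np
    from scipy.optimize import minimize

    def J(x):
        return x[0] ** 2 + x[1] ** 2        # objective to minimize

    C = {"type": "eq",                      # equality constraint C(x) = 0
         "fun": lambda x: x[0] + x[1] - 1}

    result = minimize(J, x0=np.zeros(2), method="SLSQP", constraints=[C])
    print(result.x)                         # approximately [0.5, 0.5]

The solver enforces C(x) = 0 internally while driving the objective down, so no multiplier appears in the call at all.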

Thanks Brian, I always look forward to new Tech Talks! Could you do a video on MPC? That would be awesome!

faraway

Brian, can you do a summer school course for control engineers? I'll be the first one to attend if it's you talking about the intuition behind control!

SarahImeneKhelil

I really have to learn to try ideas and equations with simple examples. I was so afraid of Lagrange multipliers and the Lagrange equation and what they mean that I just dropped the topic. How lucky that I caught, out of the corner of my eye, that thumbnail on my recommendation list with the characteristic Brianish drawing style and the word "Lagrangian" in the title. I knew before watching that you would help, as always. Gosh, you are a great educator, man.

MrPepto

I had an undergrad professor so determined to stop cheaters that he only allowed scientific calculators, which didn't bother me until he expected us to do regression.

duydangdroid

Nice video! Looking forward to the nonlinear constrained optimization part!

harrytsai

Great video. In the interest of being precise and thinking about what might trip up new learners, someone who's paying really close attention will find 2:45 confusing, since you can't have "*thee* partial derivative with respect to both x_1 and x_2". Instead, the gradient is a vector of all of the partial derivativeS, plural, of f(*x*), where the ith element of the gradient is the partial derivative of f with respect to the ith element of *x*.

Sorry for the pedantry, but from my own experience, the problem is that we often ask math students to pay close attention to exactly that kind of fine distinction in other contexts, so a description of the gradient that, taken literally, can't exist is likely to cause minor confusion for talented students.

That said, phenomenal video. This would be very useful for efficiently teaching one of the most important ideas in multivariable calculus to someone who only has a knack for scalar calculus.

griffinbur
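
For concreteness, the definition the comment above is describing (standard notation, nothing beyond the comment's own point):

    % The gradient collects all n partial derivatives into a single vector;
    % its ith element is the partial derivative with respect to x_i.
    \nabla f(\mathbf{x}) =
    \begin{bmatrix}
        \partial f / \partial x_1 \\
        \vdots \\
        \partial f / \partial x_n
    \end{bmatrix},
    \qquad
    \big[ \nabla f(\mathbf{x}) \big]_i = \frac{\partial f}{\partial x_i}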

I am confused about the slopes obtained by differentiation. They are the slopes dz/dx_i along the surface, not projections onto the x_1-x_2 plane, so I can't understand how they can be parallel.
They would be parallel if the "projected" slope, i.e. dx_2/dx_1, were calculated and used; however, that slope is just 0 and was not used in the calculation.

blower
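
A note that may resolve the confusion above: the gradient of a scalar function of (x_1, x_2) is a vector lying in the x_1-x_2 plane (the domain), not a slope measured along the surface z = f(x). The parallelism in the video is a statement about these domain-plane vectors, consistent with the Lagrangian conditions sketched earlier:

    % Both gradients are vectors in the (x_1, x_2) plane, so "parallel"
    % means one is a scalar multiple of the other:
    \nabla J = \begin{bmatrix} \partial J / \partial x_1 \\ \partial J / \partial x_2 \end{bmatrix}
    \parallel
    \nabla C = \begin{bmatrix} \partial C / \partial x_1 \\ \partial C / \partial x_2 \end{bmatrix}
    \iff
    \nabla J = -\lambda \, \nabla C \ \text{ for some scalar } \lambda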

5:45 the visual illusion makes the dark line look curved... XD

dpynzfl