Proof for the meaning of Lagrange multipliers | Multivariable Calculus | Khan Academy


Here, you can see a proof of the fact shown in the last video: that the Lagrange multiplier gives information about how altering a constraint alters the solution to a constrained maximization problem. Note: this is somewhat technical.
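The fact being proved can be checked numerically. The sketch below uses a hypothetical example not taken from the video: maximize f(x, y) = x + y subject to x² + y² = b, whose closed-form solution and multiplier are worked out in the comments. The multiplier λ* should match the derivative of the maximized value M*(b) with respect to the constraint level b.

```python
import math

# Hypothetical example: maximize f(x, y) = x + y subject to x^2 + y^2 = b.
# By symmetry the maximizer is x* = y* = sqrt(b/2), so the maximized
# value is M*(b) = 2*sqrt(b/2) = sqrt(2b). From grad f = lambda grad g,
# the multiplier is lambda* = 1 / (2 x*).

def M_star(b):
    # maximized value of f as a function of the constraint level b
    return math.sqrt(2 * b)

def lam_star(b):
    # Lagrange multiplier at the optimum, from 1 = lambda * 2x
    x = math.sqrt(b / 2)
    return 1 / (2 * x)

b, h = 4.0, 1e-6
# central finite difference approximates dM*/db
dM_db = (M_star(b + h) - M_star(b - h)) / (2 * h)
print(lam_star(b), dM_db)  # the two values agree closely
```

The agreement illustrates exactly the claim the proof establishes: λ* = dM*/db.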

Missed the previous lesson?

Multivariable Calculus on Khan Academy: Think calculus. Then think algebra II and working with two variables in a single equation. Now generalize and combine these two mathematical concepts, and you begin to see some of what multivariable calculus entails, only now including multi-dimensional thinking. Typical concepts or operations include: limits and continuity, partial differentiation, multiple integration, scalar functions, and the fundamental theorem of calculus in multiple dimensions.

About Khan Academy: Khan Academy offers practice exercises, instructional videos, and a personalized learning dashboard that empower learners to study at their own pace in and outside of the classroom. We tackle math, science, computer programming, history, art history, economics, and more. Our math missions guide learners from kindergarten to calculus using state-of-the-art, adaptive technology that identifies strengths and learning gaps. We've also partnered with institutions like NASA, The Museum of Modern Art, The California Academy of Sciences, and MIT to offer specialized content.

For free. For everyone. Forever. #YouCanLearnAnything

Subscribe to KhanAcademy’s Multivariable Calculus channel:
Comments

Can you also cover inequality-constrained optimization, please?

chrislam

You're back! Can we expect more multivariable calculus vids? Love your animations!

cyancoyote

Lagrangians are fairly advanced maths, degree-level stuff; I seem to remember doing this on my physics degree. It was really boring then, so it's nice to see other applications. This is great!

davidsweeney

@15:20, why do we not need to change all the L on the right-hand side to L* here if we change what we are differentiating?

xiaoweilin

Great video and series. One important thing that wasn't covered is whether L* is a differentiable function with respect to h*, s*, and lambda*. I'm not convinced that it is (though I have no reason to believe that it isn't, either). If it's not differentiable, then it doesn't seem valid to consider the derivative of L*.

richardshandross

Is this the last video on Lagrange multipliers? I wanted to learn the Karush–Kuhn–Tucker conditions for inequality constraints... :( Where can I find a course that is easy to follow? Please help!

MauPP

It would be really helpful if you covered the difference between maximizing vs minimizing using Lagrange multipliers / Lagrangians.

DrKvo

Oh I absolutely love this proof! I mean my heart's totally pumping!

abdullaalmosalami

Suppose I have three interdependent functions: A, B, and C.
Then I will write a Lagrangian L1 = A - l1(B - b) - l2(C - c).
Extremizing the Lagrangian will give me the multipliers l1 and l2.
But then, could I write two more Lagrangians,
L2 = C - l3(A - a) - l4(B - b)
L3 = B - l5(C - c) - l6(A - a)
Extremizing the latter two Lagrangians should give me the multipliers l3, l4, l5, and l6, right?
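A quick sanity check on the first Lagrangian above, using hypothetical concrete choices for A, B, C (these functions and the levels b, c are made up for illustration, not from the video): with A = x + y + z, B = x² + y², C = z², b = 2, c = 1, the stationarity conditions of L1 have the solution x = y = z = 1 with l1 = l2 = 1/2, so extremizing L1 does yield the multipliers.

```python
# Hypothetical instance of the commenter's L1 = A - l1*(B - b) - l2*(C - c):
# A = x + y + z,  B = x^2 + y^2,  C = z^2,  with levels b = 2, c = 1.
# Solving the stationarity system by hand gives x = y = z = 1, l1 = l2 = 1/2.
# Verify that the gradient of L1 vanishes at that point:

def grad_L1(x, y, z, l1, l2, b=2.0, c=1.0):
    return (
        1 - 2 * l1 * x,          # dL1/dx
        1 - 2 * l1 * y,          # dL1/dy
        1 - 2 * l2 * z,          # dL1/dz
        -(x**2 + y**2 - b),      # dL1/dl1 (recovers the constraint B = b)
        -(z**2 - c),             # dL1/dl2 (recovers the constraint C = c)
    )

print(grad_L1(1.0, 1.0, 1.0, 0.5, 0.5))  # every component is zero
```

Note that differentiating with respect to the multipliers simply recovers the constraints, which is why the one system delivers both the optimum and the multipliers.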

petripaavonpoika

I believe you messed this part up. I appreciate your effort, though.

Dusadof

At 8:30-8:50, when Grant considers the Lagrangian as a function of functions of b and b itself, why isn't it the case (as it was earlier, when b was fixed) that B(h*(b), s*(b))=b? I mean, aren't h*(b) and s*(b) defined to be precisely such that they maximize R for every b while also making the budget B(h*(b), s*(b))=b? Hence, why doesn't that term cancel?

uimasterskill

@9:30, why is the second term here, -lambda*(B(h*, s*) - b), not evaluated to 0?

xiaoweilin

Couldn't we consider the Lagrangian as a three-variable function, with each variable being a function of b, and use implicit differentiation of the Lagrangian with respect to b?

guilhermegondin

Is it not sufficient to say that ∇f(x, y) = λ∇g(x, y) ⇒ λ = ∇f/∇g = df/dg, which is equivalent to saying that λ is the constant of proportionality between the gradients over the (x, y) domain, and therefore the rate of change of f with respect to g? Since the gradients are parallel, they are equivalent to a directional derivative in some direction u⃗, so df/du⃗ = λdg/du⃗ ⇒ (df/du⃗)•(du⃗/dg) = λ ?

Also, I'm not sure if I follow why dL/db = dM/db at x⃗* just because L(x⃗*) = M(x⃗*)? I imagine two planes with different slopes passing through the same point somewhere, with different gradients at that point.

Love your videos btw.

dionsilverman

I know you posted this a long time ago, but:

If the gradient of the Lagrangian is 0,

And the gradient is the (vectorized) direction of maximum ascent,

Does this mean that, by definition, the Lagrangian is maximized, since its gradient is 0?

In other words, because its gradient is 0, can it be maximized no more?

mjackstewart

I think there was a rare but critical mistake in Grant's notation around 8:46, where he didn't close the green parenthesis around another b in L*. Fixing it makes the derivative he comes to at the end make sense.

cleverclover

The moment you enter b, all the other variables become constants.

foxyelen

Hasn't this topic already been covered a year ago? Also, why is this vid not added to the multivariable calc playlist (as far as I can tell)? Does it just take time?

farissaadat

This took some time, but when it clicked, I suddenly realised this *actually was* mathematically rigorous...

ojussinghal

For me, this series ends here. I'll head to look for the next Grant video.

atriagotler