5. Positive Definite and Semidefinite Matrices

MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning, Spring 2018
Instructor: Gilbert Strang

In this lecture, Professor Strang continues reviewing key matrices, such as positive definite and semidefinite matrices. This lecture concludes his review of the highlights of linear algebra.

License: Creative Commons BY-NC-SA
Comments

Dr. Strang, thank you for another classic lecture and selection of examples on positive definite and semidefinite matrices.

georgesadler

For everyone asking about the bowl and eigenvalues analogy:
Let X = (x, y) be the input vector (so that I can write X as a vector) and consider the energy functional f(X) = X^T S X. What happens if we evaluate it along the eigenvectors?
First, why would I think to do this? The eigenvectors of the matrix give the "natural coordinates" for expressing the action of the matrix as a linear transformation, which is what gives rise to all the "completing the square" problems with quadratic forms in the usual linear algebra classes. The natural coordinates rotate the quadratic so that it has no off-diagonal terms. This means the function changes from something like f(x, y) = 3x^2 + 6y^2 + 4xy to a diagonal form like f(x', y') = lambda_1 x'^2 + lambda_2 y'^2, with no cross term. So the functional looks like a very nice quadratic in these coordinates, like the ones you learn to draw in a multivariable calculus course.
Going back to the current calculation with f(X) = X^T S X: if we evaluate in the eigen-directions, our function becomes f(X_1) = X_1^T S X_1 = X_1^T (lambda_1 X_1) = lambda_1 ||X_1||^2 (a nice quadratic, with ||X_1||^2 denoting the squared norm) and
f(X_2) = X_2^T S X_2 = X_2^T (lambda_2 X_2) = lambda_2 ||X_2||^2 (another nice quadratic). The eigenvalues lambda_1, lambda_2 become scaling coefficients in the eigen-directions: a large coefficient means a steep quadratic, and a small coefficient means a quadratic that is stretched out horizontally.
If an eigenvalue is close to zero, the quadratic functional looks almost like a horizontal plane in that direction (really, the tangent plane is nearly horizontal), so S is nearly singular and any solver will have difficulty finding a solution because there are infinitely many approximate solutions. Since the solver sees a whole set of feasible directions, it will bounce around the argmin vector without being able to confidently declare success. Poor solver. And these are purely mathematical problems; rounding error will probably hamper the search even further. (A small numerical sketch follows the edit note below.)

Edit: changed "engenvalue" to "eigenvector" in 2nd paragraph.
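
A quick numerical sketch of the eigen-direction claim above (the matrix S here is my own illustrative choice, not from the lecture): along a unit eigenvector, the energy X^T S X collapses to the eigenvalue itself.

```python
import numpy as np

# Illustrative symmetric positive definite matrix (eigenvalues 2 and 7)
S = np.array([[3.0, 2.0],
              [2.0, 6.0]])

# eigh is the symmetric eigensolver; columns of V are unit eigenvectors
lams, V = np.linalg.eigh(S)

def energy(X):
    """The energy functional f(X) = X^T S X."""
    return X @ S @ X

for lam, v in zip(lams, V.T):
    # f(v) = v^T S v = v^T (lam * v) = lam * ||v||^2 = lam for a unit v
    print(f"lambda = {lam:.3f}, energy along its eigenvector = {energy(v):.3f}")
```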

spoopedoop

Positive semidefinite matrices: 38:01

marekdude

Listening to Strang is like getting a brain massage.

mariomariovitiviti

Starting at 22:00, shouldn't we follow the direction opposite to the gradient to reach the minimum? The gradient gives the steepest ascent direction, as far as I know.
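
Yes, to minimize we step against the gradient; the gradient itself points uphill. A minimal steepest-descent sketch (S, b, and the step size are illustrative values of my own, not from the lecture):

```python
import numpy as np

# Minimize f(x) = 1/2 x^T S x - b^T x by steepest descent
S = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([1.0, 0.0])

x = np.zeros(2)
step = 0.1
for _ in range(200):
    grad = S @ x - b        # gradient of f at x (steepest ascent direction)
    x = x - step * grad     # so we move the opposite way

print(x)                      # approaches the minimizer...
print(np.linalg.solve(S, b))  # ...which is x* = S^{-1} b
```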

justsomerandomguy

This professor is the Platonic ideal of a professor.

alexandersanchez

Came here from the 18.06 Fall 2011 lecture on singular value decomposition taught by Professor Strang.

quirkyquester

I am doing a project on this topic and it has really helped me a lot. Thank you!

samirroy

At 41:20, why does the rank-1 matrix have 2 zero eigenvalues? Because 3 - 1 = 2? Does the professor mean that the number of zero eigenvalues always equals the nullity of the matrix?
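
For a symmetric matrix the answer is yes: S is diagonalizable, so the number of zero eigenvalues equals the nullity, and rank + nullity = n (here 1 + 2 = 3). A small check with an illustrative rank-1 matrix of my own:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
S = np.outer(u, u)               # S = u u^T, a rank-1 symmetric 3x3 matrix

print(np.linalg.eigvalsh(S))     # two zero eigenvalues and one positive one
print(np.linalg.matrix_rank(S))  # rank 1 matches the single nonzero eigenvalue
```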

hangli

At 14:18, the energy can also be EQUAL to 0 (not just bigger than 0)! Then doesn't that mean the matrix is positive SEMIdefinite as opposed to positive definite?

MLDawn

These are great lectures! Are the autograder and programming assignments available somewhere?

imranq

Who's that eager student answering every question for everyone else in every class?

lazywarrior

At 32:00, the professor mentions that "if the eigenvalues are far apart, that's when we have problems." What does he mean by that?
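
He is referring to ill-conditioning: when lambda_max/lambda_min is large, the bowl is a long narrow valley, and iterative methods such as steepest descent zig-zag across it and converge slowly. A rough sketch of my own (diagonal test matrices and a fixed step size, not from the lecture):

```python
import numpy as np

def descent_steps(eigs, tol=1e-8, max_steps=100_000):
    """Fixed-step gradient descent on f(x) = 1/2 x^T S x with S = diag(eigs),
    using the best fixed step 2/(lambda_min + lambda_max); the minimizer is x = 0."""
    S = np.diag(eigs)
    x = np.ones(len(eigs))
    step = 2.0 / (min(eigs) + max(eigs))
    for k in range(max_steps):
        x = x - step * (S @ x)
        if np.linalg.norm(x) < tol:
            return k + 1
    return max_steps

print(descent_steps([1.0, 2.0]))    # eigenvalues close together: a couple dozen steps
print(descent_steps([1.0, 100.0]))  # eigenvalues far apart: hundreds of steps
```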

sriharsha

At 28:00, what is the intuition behind the shape of the bowl and large/small eigenvalues? He made it sound like quite an obvious statement.

Also, at 36:50, S and Q^-1 S Q being similar implies they have the same eigenvalues. But how do you show that S and Q^-1 S Q are similar?
OK, I figured out the 36:50 part. It is the spectral theorem, which he covered in the previous class: S = Q Lambda Q^-1, so Lambda = Q^-1 S Q. Since Lambda is defined as the diagonal matrix of eigenvalues of S, this shows that S and Q^-1 S Q are similar (numerical check below).

Please explain the part at 28:00. Thanks!
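
A quick numerical confirmation of the 36:50 part (random illustrative matrices of my own): Q^-1 S Q has the same eigenvalues as S for any invertible Q.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
S = A + A.T                      # a symmetric test matrix
Q = rng.standard_normal((3, 3))  # invertible with probability 1

similar = np.linalg.inv(Q) @ S @ Q
print(np.sort(np.linalg.eigvalsh(S)))
print(np.sort(np.linalg.eigvals(similar).real))  # same spectrum, up to round-off
```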

anubhav

Where was the energy equation mentioned in previous lectures?

heretoinfinity

Hopefully I can still love science at this age

rayvinlai

I think the shape of the bowl will change when we add x^T b at 17:00. Am I right?
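
For what it's worth, completing the square (a standard identity, assuming S is invertible; my own note, not something shown at that point in the lecture) suggests the linear term only shifts the bowl rather than reshaping it, since the quadratic part, i.e. the Hessian S, is untouched:

```latex
f(x) = \tfrac{1}{2}x^{T}Sx - x^{T}b
     = \tfrac{1}{2}(x - x^{*})^{T}S(x - x^{*}) - \tfrac{1}{2}b^{T}S^{-1}b,
\qquad x^{*} = S^{-1}b
```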

jeeveshjuneja

Where can I find the online homework? I can't find it on OCW.

csl

What is meant by "energy" when the X^T S X multiplication is carried out?

CM-Gram

Around the 41-minute mark, why is the number of nonzero eigenvalues the same as rank(A)?

quanyingliu