SVD: Eigenfaces 3 [Matlab]

This video describes how the singular value decomposition (SVD) can be used to efficiently represent human faces, in the so-called "eigenfaces" (Matlab code, part 3).

These lectures follow Chapter 1 from: "Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control" by Brunton and Kutz

This video was produced at the University of Washington
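
For reference, a minimal Matlab sketch of the eigenfaces construction the video describes; the data matrix X (one vectorized face per column), the truncation rank, and the variable names are assumptions for illustration rather than the exact code from the video:

    % X: n-by-m matrix whose columns are vectorized training faces
    avgFace = mean(X, 2);                          % average face
    A = X - avgFace*ones(1, size(X, 2));           % mean-subtracted faces
    [U, S, V] = svd(A, 'econ');                    % economy-size SVD
    r = 100;                                       % truncation rank (assumed)
    Ur = U(:, 1:r);                                % first r eigenfaces

    % Approximate a (possibly out-of-sample) test face xtest in the eigenface basis
    alpha = Ur' * (xtest - avgFace);               % coordinates in the eigenface basis
    reconFace = avgFace + Ur * alpha;              % rank-r reconstruction of xtest
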
Comments

The dog and coffee examples blew me away

abcddd

This is a great example to illustrate that UU* only approximates the identity matrix unless full rank is kept. It was a lightbulb moment for me!

erockromulan
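
A quick numeric illustration of the point above, that Ur*Ur' only reproduces a vector exactly when the full rank is kept; the dimensions are arbitrary assumptions:

    n = 100;                              % assumed ambient dimension
    [U, ~] = qr(randn(n));                % a random n-by-n orthogonal matrix
    x = randn(n, 1);                      % a test vector
    for r = [5 50 100]
        Ur = U(:, 1:r);
        relErr = norm(Ur*(Ur'*x) - x) / norm(x);
        fprintf('r = %3d: relative error = %.3f\n', r, relErr);
    end
    % Only r = n recovers x exactly; for r < n, Ur*Ur' is an orthogonal projector.
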

Steve Brunton, you utter hero. Chapeau.

jonathanmarshall

Great explanation from minute 2:00 to minute 2:38 of the projection of the new face x onto the big X: you get the coordinates of the new face with respect to X, i.e. the coefficients of the columns of X whose linear combination reproduces x. Awesome. Thanks Professor Brunton.

xondiego

It took me a while to understand that by computing UrT * x you are getting the coordinates of the image in the Ur basis (by measuring the similarity, i.e. the dot product, of the image with each eigenface), which are then used to project the image again using the same Ur basis. I was confused because it seemed like we should include Sigma and VT in there somewhere, since their product gives you the exact linear combination needed to express any image in the training set. However, once it clicked for me that the eigenfaces are used purely as a basis, and that test data can be projected onto them too, it all made sense. I was blown away when even Mort could be displayed using your orthogonal matrix. Great video Professor!

evanparshall
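
A small Matlab sketch of the relationship discussed above: for the mean-subtracted training data A = U*S*V', the coordinates Ur'*A are exactly the first r rows of S*V', whereas for an out-of-sample image the same operation still gives its coordinates in the eigenface basis. Variable names (X, xnew) are assumptions:

    avgFace = mean(X, 2);
    A = X - avgFace*ones(1, size(X, 2));           % mean-subtracted training faces
    [U, S, V] = svd(A, 'econ');
    r = 50;
    Ur = U(:, 1:r);

    % For training data, the coordinates equal the first r rows of Sigma*V'
    coordsTrain = Ur' * A;
    SVt = S(1:r, 1:r) * V(:, 1:r)';
    fprintf('max difference: %g\n', max(abs(coordsTrain(:) - SVt(:))));

    % For a new image (e.g. Mort), the eigenfaces are used purely as a basis
    alpha = Ur' * (xnew - avgFace);                % coordinates of the out-of-sample face
    xhat = avgFace + Ur * alpha;                   % its projection onto the eigenface space
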

Thanks for making this valuable content openly available!!

egidioln

Thanks for your interesting lectures about the SVD. Could you please give us some lectures about K-SVD?

tech_science_tutos

Great teacher! Btw, what is the system/technology you are using for the transparent blackboard?

LeCranky

Sir, I love your videos. Please make videos on random projections and UMAP as well. Thanks!!!

nehamanpreet

Professor Dr. Brunton -- Great lectures! I have tried the above in my Spyder IDE using Python code, with each face as a row (instead of a column). Ur.dot(Ur.T.dot(X[0, :])) shows me a fuzzy image. If I instead use the right singular matrix, I am closer to being able to identify which nearest neighbors a new out-of-sample face relates to. I guess my question is: what use is the left singular matrix? I realize that U will have the same dimensions as the features matrix X. I also understand that U captures aspects of the rows of the features matrix X, whereas VT captures aspects of the columns of X. Thanks for clarifying what the left singular matrix is useful for, and what I might be missing here.

SK-wwzf
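
A hedged Matlab sketch of the rows-versus-columns point raised above: if each face is stored as a row of the data matrix, the roles of the left and right singular vectors swap, and the pixel-space (eigenface) basis lives in the right singular vectors. Mean subtraction is omitted for brevity, and Xc, Xr, xcol, xrow, r are assumed variables:

    % Case 1: faces as columns of Xc (pixels-by-faces): eigenfaces = columns of U
    [U1, ~, ~] = svd(Xc, 'econ');
    Ur = U1(:, 1:r);
    xhatCol = Ur * (Ur' * xcol);        % projection of a column-vector face

    % Case 2: faces as rows of Xr = Xc' (faces-by-pixels): since Xr = V*S*U',
    % the pixel-space basis now sits in the RIGHT singular vectors
    [~, ~, V2] = svd(Xr, 'econ');
    Vr = V2(:, 1:r);
    xhatRow = (xrow * Vr) * Vr';        % the same projection, for a row-vector face
    % Up to sign conventions, Vr equals Ur above, and xhatRow' equals xhatCol.
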

Professor Steve, as we increase the rank, shouldn't U * U transpose gravitate towards the identity matrix? In that case, what we are actually doing is taking the test person's face and subtracting the average face, and then in the next line of the code adding the average face back to the test face multiplied by U * U transpose (which at high rank should become the identity matrix and therefore make no difference). So for very high rank, aren't we just subtracting and then adding back the test face in the next step, sir? Could you demystify this concept, sir? Thank you so much professor for all the cool videos.

audacityofimagineering
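
A small numeric check related to the question above, using random stand-in data: when the number of training faces is much smaller than the number of pixels, Ur*Ur' from the economy SVD never reaches the full identity on pixel space, so the subtract-mean / project / add-mean pipeline does not simply return the test face, even at the largest possible r:

    n = 120;  m = 80;                         % assumed pixel count and number of faces
    X = randn(n, m);                          % stand-in training matrix (faces as columns)
    avgFace = mean(X, 2);
    [U, ~, ~] = svd(X - avgFace*ones(1, m), 'econ');
    xtest = randn(n, 1);                      % stand-in out-of-sample face
    for r = [10 40 80]
        Ur = U(:, 1:r);
        recon = avgFace + Ur*(Ur'*(xtest - avgFace));
        fprintf('r = %2d: reconstruction error = %.4f\n', r, norm(recon - xtest));
    end
    % Ur*Ur' would only become the identity if r reached n, which requires at
    % least n linearly independent training faces.
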

"We're going to use the eigenfaces to construct new faces not in the sample data." -- Skynet, as it goes sentient

Phi

Interesting! Could you find a set of optimal faces to use as eigenfaces, to minimize the number of eigenfaces needed?

kennettallgren

The dog and the coffee examples are very visual ways of understanding that as r goes to full rank n the projection matrix U_r•(U_r)* becomes closer and closer to the identity matrix Id. In fact the Frobenius norm ||Id-U_r•(U_r)*||=√(n-r).

individuoenigmatico
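
A one-off Matlab check of the Frobenius-norm identity stated above; the dimension and rank are arbitrary assumptions:

    n = 64;
    [U, ~] = qr(randn(n));                    % random orthogonal matrix
    r = 20;
    Ur = U(:, 1:r);
    lhs = norm(eye(n) - Ur*Ur', 'fro');
    rhs = sqrt(n - r);
    fprintf('||I - Ur*Ur''||_F = %.6f, sqrt(n-r) = %.6f\n', lhs, rhs);
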

I don't quite understand why, in the dog and coffee case, the approximated image initially looks like a human face and then gradually turns into the dog/coffee image. My guess is that we are approximating the dog/coffee image using eigenvectors trained on human faces, but with different coefficients? Is my understanding correct?

bingxiong

Will there be a lecture on how the SVD connects to DeepDream?

abdjahdoiahdoai

Is it equivalent to say that the projection is A*(AT*A)^-1*AT*x, and then, since AT*A is the identity (because A has orthonormal columns), this reduces to A*AT*x?

wojtekskaba
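
A quick Matlab check of the identity asked about above: for A with orthonormal columns, A'*A is the identity, so the general least-squares projection A*inv(A'*A)*A'*x reduces to A*A'*x. Sizes are arbitrary assumptions:

    n = 50;  r = 8;
    [A, ~] = qr(randn(n, r), 0);              % A with orthonormal columns (economy QR)
    x = randn(n, 1);
    pGeneral = A * ((A'*A) \ (A'*x));         % general projection onto the column space of A
    pOrtho   = A * (A'*x);                    % simplified form when A'*A = I
    fprintf('difference: %g\n', norm(pGeneral - pOrtho));
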

How do I import the image of Mort in Python? I’m very confused.

jeffreyhaile
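
Regarding the question above, one hedged sketch of loading an external test image so it matches the training data, written in Matlab since that is the language of the video; 'mort.jpg' and the image dimensions are assumptions, not the actual filename or sizes used. (A Python analogue would read the image with imageio or matplotlib and reshape it into a vector.)

    img = imread('mort.jpg');                 % assumed filename
    if size(img, 3) == 3
        img = rgb2gray(img);                  % convert to grayscale (Image Processing Toolbox)
    end
    img = imresize(img, [192 168]);           % resize to the training image size (assumed)
    xtest = double(reshape(img, [], 1));      % vectorize into a column, like the training faces
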