Why we take Eigenvectors of the Similarity Matrix

A video breaking down the intuition behind a large family of "spectral" dimensionality reduction algorithms, e.g. KPCA, LLE, Laplacian eigenmaps, and many others (a minimal code sketch of the shared recipe appears after the credits below).

By Michael Lin

Music: "F*ck That''
-Death Grips
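
All of the methods named above share one recipe: build a similarity (kernel) matrix over the data points and use its leading eigenvectors as the new low-dimensional coordinates. As a rough illustration of that recipe only, here is a minimal NumPy sketch of kernel PCA with an RBF similarity matrix; the function name, kernel choice, and toy circle data are illustrative assumptions, not taken from the video.

```python
import numpy as np

def kernel_pca_embedding(X, n_components=2, gamma=1.0):
    """Minimal kernel PCA: leading eigenvectors of a centered RBF similarity matrix."""
    # Pairwise squared distances -> RBF (Gaussian) similarity matrix K
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * sq_dists)

    # Double-center K so the implicit feature vectors have zero mean
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    K_centered = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Eigendecomposition; np.linalg.eigh returns eigenvalues in ascending order
    eigvals, eigvecs = np.linalg.eigh(K_centered)
    idx = np.argsort(eigvals)[::-1][:n_components]

    # Embedding: leading eigenvectors scaled by the square roots of their eigenvalues
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))

# Toy usage: embed noisy points on a circle into 2 components
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))
Y = kernel_pca_embedding(X, n_components=2, gamma=2.0)
print(Y.shape)  # (200, 2)
```

Laplacian eigenmaps and LLE follow the same pattern but construct and normalize the similarity matrix differently (e.g., through a graph Laplacian) and keep the smallest nontrivial eigenvectors instead of the largest.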
Comments

Love the intro, sounds like the Akira soundtrack. Nice how you explain the math with a focus on understanding the 'intuition' behind dimensionality reduction as opposed to focusing on the formulas. I wish more people would approach it like you do; math would appeal to many more people.

nielszondervan

Thanks, a very clear and at the same time laconic explanation!

nataliiadeshko

Thank you for the videos. How does one assign the values/colors in the plots at the end, the plots of the eigenvectors of the kernel? I often see various different units assigned to them, and I wonder if you could comment on general strategies for assigning such colors. E.g., what do the red specks mean compared to the blue ones here? Sometimes I see a clear rainbow pattern across the plots; other times I see a more scattered case like you have.

nickelandcopper

Could you please explain why PCA doesn't use eigendecomposition?
So far I have always understood that PCA is completely based on eigenvectors and eigenvalues.
Maybe I didn't understand what the "decomposition" part is.

torrecuso

A PCA can be conducted via eigendecomposition of either a Gram, covariance, or correlation matrix, FYI (see the sketch below).

christophgonzalez
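
To make the exchange above concrete, here is a minimal NumPy sketch of PCA done via eigendecomposition of the covariance matrix; the function name and the toy data are illustrative assumptions rather than anything from the video or the thread. Eigendecomposition of the centered Gram matrix X_centered @ X_centered.T yields the same projections up to scaling, which is the usual bridge from ordinary PCA to the kernel view discussed in the video.

```python
import numpy as np

def pca_via_eigendecomposition(X, n_components=2):
    """PCA by eigendecomposition of the covariance matrix of centered data."""
    X_centered = X - X.mean(axis=0)

    # Covariance matrix (features x features) and its eigendecomposition
    cov = np.cov(X_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

    # Keep the directions with the largest variance
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]

    # Project the centered data onto the principal components
    return X_centered @ components

# Toy usage with random correlated data
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))
scores = pca_via_eigendecomposition(X, n_components=2)
print(scores.shape)  # (100, 2)
```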

Just wondering which packages you use for the visualizations. R's ggplot?

nielszondervan

I just want to build CFD simulations and somehow got stuck in this rabbit hole.

mickolesmana

t-SNE does not involve taking eigenvectors... or does it? Now you have confused me. I don't think it does!

fazlfazl

This video is meant for people who have a PhD in dimensionality reduction.

bhupensinha

A layman would find it impossible to understand anything from your explanation.

ankur