Locally Linear Embedding (LLE) (optional)

Instructor of course: Prof. Mark Crowley
Teaching assistant and presenter of slides: Benyamin Ghojogh
Data and Knowledge Modeling and Analysis (ECE 657A) course
ECE Department, University of Waterloo, ON, Canada

This lecture includes:
1- Introduction
2- k-Nearest Neighbors (kNN) graph
3- Linear reconstruction by neighbors
4- Linear embedding
5- Examples

Note: In this lecture, we assume that the embedded points are stacked row-wise in the matrix Y, while the input data points are stacked column-wise in X. In other words:
X = [x1, x2, ..., xn] \in R^{d x n} and
Y = [y1, y2, ..., yn]^T \in R^{n x p},
where n, d, and p are the sample size, dimensionality of data, and dimensionality of embedding space, respectively.
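The three steps listed in the outline (kNN graph, linear reconstruction by neighbors, linear embedding) can be sketched in NumPy, using the column-wise X and row-wise Y conventions above. This is a minimal illustration, not the lecture's code; the function name, variable names, and the regularization constant are my own:

```python
import numpy as np

def lle(X, k=10, p=2, reg=1e-3):
    """Minimal LLE sketch. X: (d, n) column-wise data. Returns Y: (n, p) row-wise."""
    d, n = X.shape
    # 1) kNN graph: pairwise distances, excluding each point itself
    D = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)  # (n, n)
    np.fill_diagonal(D, np.inf)
    nbrs = np.argsort(D, axis=1)[:, :k]                        # (n, k)
    # 2) Linear reconstruction by neighbors: solve G w = 1, normalize sum(w) = 1
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[:, nbrs[i]] - X[:, [i]]          # neighbors centered at x_i, (d, k)
        G = Z.T @ Z                            # local Gram matrix, (k, k)
        G += reg * np.trace(G) * np.eye(k)     # regularize (needed when k > d)
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()
    # 3) Linear embedding: bottom eigenvectors of M = (I - W)^T (I - W),
    #    skipping the constant eigenvector with eigenvalue 0
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    vals, vecs = np.linalg.eigh(M)             # eigenvalues in ascending order
    return vecs[:, 1:p + 1]                    # (n, p)
```

The rows of W sum to one, so (I - W) annihilates the constant vector; that is why the very first eigenvector is discarded.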

Useful related resources:

1- Tutorial paper: Benyamin Ghojogh, Ali Ghodsi, Fakhri Karray, Mark Crowley. "Locally Linear Embedding and its Variants: Tutorial and Survey." arXiv preprint arXiv:2011.10925 (2020).

2- Tutorial paper: Benyamin Ghojogh, Fakhri Karray, Mark Crowley. "Eigenvalue and generalized eigenvalue problems: Tutorial." arXiv preprint arXiv:1903.11240 (2019).

3- Tutorial YouTube videos by Prof. Ali Ghodsi at the University of Waterloo:
Comments

Excellent for the development of technical knowledge. I'm proud that you are my child. I am also very grateful to the university professors.

yousef.ghojogh

I have a question.

Let's assume k = 4.

So let's say there is a point X1 for which I am learning the weights w11, w12, w13, w14 to reconstruct X1.

For X2, I will find w21, w22, w23, w24.

Is the objective of the optimization to make w11 == w21 == w31 == ... so that the cost is lower?

If not, then I am finding weights (wi1, ..., wik) for a particular Xi,
and as soon as I change i, my weights change.

How will I use these weights for unseen data???
___

Ideally, I should be finding the weight w1 for the 1st closest neighbour, w2 for the 2nd closest neighbour, ..., wk for the kth closest neighbour.

This way, irrespective of my value of i, the weight would be the same for the 1st closest neighbour, and that weight matrix could be used on unseen data. Am I missing some logic?

samriddhlakhmani

Nicely explained, thanks! Can you please also post a link to the slides?

RA-oolj