Lecture 8: Norms of Vectors and Matrices

MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning, Spring 2018
Instructor: Gilbert Strang

A norm is a way to measure the size of a vector, a matrix, a tensor, or a function. Professor Strang reviews a variety of norms that are important to understand including S-norms, the nuclear norm, and the Frobenius norm.
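
For readers following along, here is a minimal numpy sketch of the norms the lecture covers (the example vector and matrix are made up for illustration):

```python
import numpy as np

v = np.array([3.0, -4.0])
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# Vector norms: l1 (sum of |v_i|), l2 (Euclidean), l-infinity (max |v_i|).
print(np.linalg.norm(v, 1), np.linalg.norm(v, 2), np.linalg.norm(v, np.inf))
# 7.0 5.0 4.0

# Matrix norms: Frobenius (square root of the sum of squared entries),
# nuclear (sum of singular values), spectral 2-norm (largest singular value).
print(np.linalg.norm(A, 'fro'), np.linalg.norm(A, 'nuc'), np.linalg.norm(A, 2))

# The S-norm, sqrt(x^T S x) for a positive definite S, has no numpy
# built-in; this diagonal S is an assumed example.
S = np.array([[2.0, 0.0],
              [0.0, 3.0]])
print(np.sqrt(v @ S @ v))
```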

License: Creative Commons BY-NC-SA
Comments

In times of Covid, I hope this makes young people realize why older people are so important. Long live Prof Strang.

veeecos

I'd say without any doubt that Professor Strang is the best algebra professor in the entire world. I'm sure he has helped tons of students all around the world understand the beauty of algebra.

EduardoGarcia-tvfc

After reading so many texts, finally some actual geometric interpretation of L1 and L2... he explains it so beautifully. I came here only to understand the definition, but his charisma made me watch the whole 50 minutes.

deeptendusantra

Best linear algebra course ever! Best wishes for Prof. Strang's health during this horrible pandemic

JulieIsMe

What a smart and humble person! Long live Prof. Strang!

andrewmeowmeow

Teaching norms with their R2 pictures is just brilliant. So much insight, even emerging while teaching (sparsity of L1 optimum: it's on the axis!!). An absolute joy to watch & learn from

rogiervdw

This man does not stop giving, many thanks.

abdulghanialmasri

This lecture needs to reach more people asap.
Total respect for the Professor!

KirtiDhruv

I'm currently reading Calculus by Dr. Strang. One of the best books on the subject I have ever come across.

atulsrmcem

Dr. Strang, thank you for explaining and analyzing norms. I understood this lecture from start to finish.

georgesadler

After passing the linear algebra course, I was kind of disappointed that there was no need to see your lectures again. But for data analysis you came back, in HD resolution. So glad to see you, Professor.

asifahmed

"You start from the origin and you blow up the norm until you get a point on the line that satisfies your constraint, and because you are blowing up the norm, when it hit first, that's the smallest blow up possible, that's min, that's the guy that minimize" (31:23-31:42) that's 2-D optimization in a nutshell...clear and simple, thanks very much Professor Strang..

abdowaraiet
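
A small numpy sketch of that "blow up the norm" picture; the constraint line 3*x1 + 4*x2 = 1 is an assumed example, not necessarily the one from the lecture:

```python
import numpy as np

# Assumed example constraint: the line 3*x1 + 4*x2 = 1
# (any line that misses the origin shows the same effect).
c, b = np.array([3.0, 4.0]), 1.0

# Sample points on the line by sweeping x1 and solving for x2.
x1 = np.linspace(-1.0, 1.0, 20001)
x2 = (b - c[0] * x1) / c[1]
points = np.stack([x1, x2], axis=1)

# "Blow up" each norm ball until it first touches the line: the touching
# point is the point on the line with the smallest norm.
for p in (1, 2):
    norms = np.linalg.norm(points, ord=p, axis=1)
    best = points[np.argmin(norms)]
    print(f"l{p} minimizer on the line: {np.round(best, 4)}")
# l1 minimizer: [0.    0.25]  -- on an axis, i.e. sparse
# l2 minimizer: [0.12  0.16]  -- the perpendicular foot from the origin
```

The l1 winner lands on an axis (one coordinate exactly zero), which is the sparsity observation other comments mention; the l2 winner is the perpendicular projection of the origin onto the line.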

Love this man, thanks MIT for looking out for us!

diysumit

This lecture just brought my understanding of norms to a whole new level! Thank you so much Professor Strang!

supersnowva

Probably this has been said before, so forgive me if I repeat someone else's words.

I acknowledge here that Professor Strang is a good pedagogue. I have learnt some math over the years, and I completely support the use of geometric visualization of properties, as it meets a real learning need. I can say that for me it is easy to see how to derive properties like the one he gave for the assignment on the Frobenius norm. I say this because I may not be the only one thinking it, and I wanted to tell those people that there is more here than the math.

Only recently did I understand the huge degree of humility and teaching wit it takes to pass one's knowledge along. It requires you to pretend, or to honestly feel, that you are no better than any of your students. For instance, as I could witness here, Prof. Strang shared the latest cool research topics with his students as if they were his colleagues, and he thanked them for contributing to the course by giving out some answers. That is what allows him to successfully challenge them to solve assignments like the Frobenius norm / SVD problem. All of it is summarized by Gilbert himself at the very end, at 48:12, when he explains his view of his relationship with the students ("We have work to do!", an honest use of the pronoun "we" by the lecturer).

This 48-minute lecture honestly impressed me in this regard. Today I had the privilege of a double lecture: one in math (which could have been compressed to 15 minutes, since most proofs were skipped) and one in being a better passer of knowledge (which could be extended to 10+ years). Hats off!

arnaud

I highly recommend doing the Frobenius norm proof he mentions. It is elegant and uses some nice properties of linear algebra. If you took 18.06 (or watched the lectures), using the column & row picture of matrix multiplication really helps. I'll finalize my proof and post a link - hopefully I didn't make a mistake ;)

naterojas

Frobenius norm squared = trace of (A transpose times A) = sum of eigenvalues of (A transpose times A) = sum of squares of singular values

wangxiang
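
A quick numpy check of that chain of equalities, on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

fro_sq = np.linalg.norm(A, 'fro') ** 2                   # sum of squared entries
tr = np.trace(A.T @ A)                                   # trace of A^T A
eig_sum = np.linalg.eigvalsh(A.T @ A).sum()              # sum of eigenvalues of A^T A
sv_sq = (np.linalg.svd(A, compute_uv=False) ** 2).sum()  # sum of sigma_i^2

print(fro_sq, tr, eig_sum, sv_sq)  # all four agree up to roundoff
```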

He is such a sweet man and a genius teacher at the same time

hieuphamngoc

Feeling so emotional watching him teach at the age of 84 😢

ashutoshpatidar

Great point on comparing the matrix nuclear norm with the vector L1 norm, which tends to find the sparsest winning vector. Since the nuclear norm is the L1 norm of the singular values, I guess minimizing it tends to find the lowest-rank solution during the optimization.

xingjieli
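
A tiny numpy illustration of that analogy (a made-up numerical example, not from the lecture): among matrices with the same Frobenius norm, the lower-rank one has the smaller nuclear norm, which is why nuclear-norm minimization pulls toward low rank the way l1 minimization pulls toward sparse vectors.

```python
import numpy as np

# Two 2x2 matrices with the same Frobenius norm (same "energy"):
A_low = np.array([[1.0, 1.0],
                  [1.0, 1.0]])      # rank 1, singular values (2, 0)
A_full = np.sqrt(2.0) * np.eye(2)   # rank 2, singular values (sqrt(2), sqrt(2))

for M in (A_low, A_full):
    print(np.linalg.norm(M, 'fro'),   # 2.0 for both
          np.linalg.norm(M, 'nuc'),   # 2.0 (rank 1) vs 2*sqrt(2) (rank 2)
          np.linalg.matrix_rank(M))
```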