A classic problem from the most important math class.

Comments

I heard a math professor once say: "You can never get enough linear algebra."

wagsman

“Or as I like to call it, Linear Algebra” 😂😂😂

agbenfante

This problem is also a great example of the power of using invariant subspaces. Here is an alternate proof that uses invariant subspaces:

Let E be the eigenspace of A for some eigenvalue lambda. You showed that v \in E => Bv \in E, i.e. E is B-invariant. But then B can be thought of as an operator acting on the vector space E (i.e. B restricted to E is a totally valid operator). Since we are working over the complex numbers, this operator must have eigenvectors, which lie in E. Since everything in E is an eigenvector of A, these must be eigenvectors of both! (The polynomial approach used in the video is essentially the proof that every linear operator has eigenvectors.) I believe with a bit more thought, this could tell you something about how the dimensions of the eigenspaces of A and B must be compatible. (For example, if E is k-dimensional, then the eigenvalues of B restricted to E must have algebraic multiplicities that sum to k.)

MihaiNicaMath
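A quick numerical sketch of this invariant-subspace argument (numpy; the matrices A and B are illustrative choices, not from the video):

```python
import numpy as np

# Illustrative commuting pair: B is a polynomial in A, so A B = B A.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
B = A @ A + 3 * np.eye(2)          # B = A^2 + 3I
assert np.allclose(A @ B, B @ A)

# v spans the eigenspace E of A for lambda = 2.
v = np.array([1.0, 0.0])
assert np.allclose(A @ v, 2 * v)

# B maps E into itself: B v is again a 2-eigenvector of A
# (this is the "E is B-invariant" step).
assert np.allclose(A @ (B @ v), 2 * (B @ v))

# E is one-dimensional here, so v itself is the common eigenvector:
assert np.allclose(B @ v, 7 * v)   # B v = (2^2 + 3) v
```

Here the restriction of B to E is the 1x1 matrix (7), whose eigenvector trivially lifts back to v.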

A good linear algebra video is always appreciated. Thank you and I look forward to your next video.

abrahammekonnen

Another great video. Today, for some reason, I finally thought, "gotta be a climber." Looked you up, and sure enough, a really good one!

bentoomey

I love your videos of olympiad and similar problems! Great work, all the way to explaining basic mathematical facts. Here I think you made more work for yourself than necessary: the vectors p(B)v, for arbitrary polynomials p, form a linear subspace V of C^n, which is spanned by v, Bv, B^2v, etc., and thus is nonzero. At the same time it consists entirely of lambda-eigenvectors of A, and is stable under B, i.e. B maps V to V. The restriction of B to this subspace, say of dimension k = dim V, is equivalent to a k x k matrix; denote the restriction of B to V by B'. Thus B' has at least one eigenvector w in V (something you took for granted for A, already using the fact that we're over an algebraically closed field): Bw = B'w = mu.w, and as noted before, Aw = lambda.w.

andreaswinter

If E is an eigenspace of A, then BE is contained in E, so B restricts to an operator on E; call this restriction B*. By the algebraic closedness of C, B* has an eigenvector, which is a common eigenvector of B and A. Am I missing something?

l.a.s

NOT THE RIGHT WAY TO DO IT!!!

A has at least one eigenvalue (we're in C); call it m, with x as one of its eigenvectors.

ABx = BAx = mBx, so S, the eigenspace of A for m, is stable under B, so B is an endomorphism of S
=> because we're in C, just like at the start, there exist r and y in S (the "in S" is important) such that By = ry; but by definition y is in S, so:
Ay = my

y is what you want

cyrillechevallier
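The recipe in the last few comments can be turned into a short numerical sketch (numpy; the `common_eigenvector` helper and the example matrices are hypothetical illustrations, not from the video):

```python
import numpy as np

def common_eigenvector(A, B, tol=1e-9):
    """Find a common eigenvector of commuting A and B by restricting
    B to an eigenspace of A (numerical sketch, not robust code)."""
    # Pick any eigenvalue m of A (guaranteed to exist over C).
    m = np.linalg.eigvals(A)[0]
    # Columns of S span the eigenspace ker(A - m I), read off
    # from the null space in the SVD.
    _, s, Vh = np.linalg.svd(A - m * np.eye(A.shape[0]))
    S = Vh[s < tol].conj().T
    # ABx = BAx = mBx shows B maps this eigenspace into itself,
    # so B restricts to a k x k matrix on it.
    B_res = S.conj().T @ B @ S
    # An eigenvector of the restriction, lifted back to C^n.
    _, W = np.linalg.eig(B_res)
    return S @ W[:, 0]

# Hypothetical commuting pair (B is a polynomial in A: B = 2.5A + 0.5I).
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[3.0, 5.0], [0.0, 3.0]])
assert np.allclose(A @ B, B @ A)

y = common_eigenvector(A, B)
assert np.allclose(A @ y, 1 * y)   # eigenvalue 1 for A
assert np.allclose(B @ y, 3 * y)   # eigenvalue 3 for B
```

Note that A here is defective (only one eigenvector up to scale), so the common eigenvector is essentially unique.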

This is a nice solution. Still, a bit of discussion was omitted for the case where Bv is actually 0. Of course, one could pick a different v.

In fact, if there exists some eigenvector v of A such that Bv=0, the problem is much easier. Namely, in this case, due to commutativity, BAv=0.

If Av=0 as well (namely the eigenvalue of A associated to v was 0), then we have that Av=Bv=0 so v is the common eigenvector corresponding to the 0 eigenvalue for both matrices.

If Av \neq 0, then, since v was chosen as an eigenvector of A, let the associated eigenvalue be a \neq 0, i.e., Av=av. But then BAv=0 implies aBv=0, so Bv=0 and thus v is an eigenvector of B as well (with eigenvalue 0).

AnCar
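A minimal sketch of the Bv = 0 edge case discussed above (numpy; the diagonal matrices are an illustrative choice):

```python
import numpy as np

# Illustrative commuting pair in which B kills an eigenvector of A.
A = np.diag([2.0, 5.0])
B = np.diag([0.0, 7.0])        # diagonal matrices always commute
assert np.allclose(A @ B, B @ A)

v = np.array([1.0, 0.0])       # eigenvector of A, eigenvalue a = 2 != 0
assert np.allclose(A @ v, 2 * v)

# B v = 0 just says v is an eigenvector of B with eigenvalue 0,
# so v is already a common eigenvector of A and B.
assert np.allclose(B @ v, 0 * v)
```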

Not first but still earlier than Good Place to Stop

dpscriberz

A lot of people in the comments seem to be confusing this result with the (somewhat famous in quantum mechanics) result that *hermitian* linear transformations are simultaneously *unitarily* diagonalizable iff they commute. By contrast, this video gives a different result which says *any* pair of commuting linear transformations shares at least one eigenvector (but, importantly, may still fail to be simultaneously diagonalizable. Indeed, neither of the matrices need be diagonalizable in the first place). No assumption on the linear transformations is required other than that the field is algebraically closed and the vector space is finite dimensional.

chuckaway
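A concrete instance of the distinction drawn above (numpy; illustrative matrices): two commuting matrices that share an eigenvector yet are not diagonalizable, let alone simultaneously diagonalizable:

```python
import numpy as np

# Illustrative pair: commuting, sharing the eigenvector e1, yet neither
# is diagonalizable (each is a defective matrix with eigenvalue 1).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = A @ A                          # B = [[1, 2], [0, 1]] commutes with A
assert np.allclose(A @ B, B @ A)

e1 = np.array([1.0, 0.0])
assert np.allclose(A @ e1, e1)
assert np.allclose(B @ e1, e1)

# Defective: eigenvalue 1 has algebraic multiplicity 2, but the
# eigenspace ker(A - I) is only one-dimensional.
assert np.linalg.matrix_rank(A - np.eye(2)) == 1
```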

The topic has applications in quantum mechanics.
For example, the operators of the square of the orbital angular momentum and of its projection onto any axis have identical eigenfunctions (up to a factor depending on the other variables). This means there exists a quantum state in which both quantities simultaneously have definite values, determined by the orbital and magnetic quantum numbers.

Vladimir_Pavlov

Nice result! And nice costume change between Act 2 and Act 3. 😂

davidblauyoutube

13:10 You can prove that each factor of the factorization commutes with the others, and put the factor that gives zero (the mu_m one) as the first to multiply v, so v would be an eigenvector of B. This would prove that each eigenvector of A is an eigenvector of B and vice versa.

taraszablotskyi

7:16 shirt transformation. But is it linear?

synaestheziac

Thumbnail equation is very easy to solve. It’s a pop group.

RickyisSwan

If A is the identity matrix, its eigenvectors include the standard basis vectors (1, 0, 0, 0, ...), (0, 1, 0, 0, ...), ...
Does that mean any matrix will have at least one standard basis vector as an eigenvector?

amidhmi

At 12:30, why must one of the matrices be 0? There can exist nonzero matrices that multiply to zero.

washieman
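The point that matrices can be zero divisors is easy to verify numerically (numpy; illustrative matrices). Presumably the resolution is that the factored polynomial annihilates a fixed nonzero vector, so some factor must kill that vector, rather than any factor being the zero matrix:

```python
import numpy as np

# Two nonzero matrices whose product is the zero matrix.
M = np.array([[0.0, 1.0],
              [0.0, 0.0]])
N = np.array([[1.0, 0.0],
              [0.0, 0.0]])
assert M.any() and N.any()     # neither matrix is zero...
assert not (M @ N).any()       # ...but M N = 0
```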

This is a very important result in quantum mechanics.

replicaacliper

And thus the center of GL(n, C) is the subgroup of scalar matrices.

tracyh
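A spot check of that last remark (numpy; illustrative matrices): scalar matrices commute with everything, while even a non-scalar diagonal matrix already fails to commute with some matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar matrices commute with every matrix...
C = 3.0 * np.eye(3)
for _ in range(5):
    X = rng.standard_normal((3, 3))
    assert np.allclose(C @ X, X @ C)

# ...while a non-scalar diagonal matrix misses the elementary
# matrix E12 (commuting with all such E-type matrices is what
# forces a central element to be scalar).
D = np.diag([1.0, 2.0, 3.0])
E12 = np.zeros((3, 3))
E12[0, 1] = 1.0
assert not np.allclose(D @ E12, E12 @ D)
```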