What do Matrices Represent? - Learning Linear Algebra

This video is about why we use matrices and how every matrix is related to a function that takes vectors as inputs. Understanding what a matrix represents is important for learning the more advanced ideas in linear algebra!
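
Here is a minimal Python/NumPy sketch of that idea (my own example, not from the video): multiplying by a matrix is a function on vectors, and the columns of the matrix are where the unit vectors land.

```python
# A matrix viewed as a function on vectors: v -> A @ v.
# The matrix A below is an arbitrary example.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def f(v):
    """The function that the matrix A represents."""
    return A @ v

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

print(f(u))  # first column of A:  [2. 0.]
print(f(v))  # second column of A: [1. 3.]

# Linearity: f(a*u + b*v) == a*f(u) + b*f(v)
a, b = 3.0, -2.0
assert np.allclose(f(a*u + b*v), a*f(u) + b*f(v))
```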

Subscribe to see more new math videos!

Music: OcularNebula - The Lopez
Comments

Bravo! Your channel and 3Blue1Brown are just a perfect combination.

sanelprtenjaca

I could have used this in late 1976. I took linear algebra without even knowing what a vector was! Yes, Grant Sanderson has an excellent channel, but now I'm thinking that I wish my professor had been left-handed. David Shirokoff and this fellow seem better at explaining the whole picture of mathematics rather than getting lost in the details. Supposedly our right brain sees the whole picture and dominates the left side. So now I finally understand what a matrix is.

BuddyNovinski

You are a very, very good explainer.
You are one of the best mathematics teachers on YouTube, along with 3Blue1Brown and Mathologer.

mrwclasseshazaribagh

BOOM! Very well explained! Thank you for clearing up the confusion that has been in my mind for years!

impc

Crap! 😂 I just realized that I don't know anything about linear algebra!!! Thanks for sharing your knowledge; you have an acute sense of reasoning, and I'm really dazzled by your insight.

adolfocarrillo

When I first learned about matrices, it was like learning a new language: reading the text without knowing what the words mean. But now I understand. Maybe not exactly, but I'm much closer to the truth of why I'm learning about matrices. A lot of thanks for your great work!

flamewings

Holy shit, I'm studying matrices and your video is such a huge help! Thank you for uploading this video!!!! Love your channel!

bjornchan

Your videos are incredible; they give such a good foundational understanding.

ohno

It is always amazing to get back to linear algebra and study matrix-related stuff!
I would love to see you explain some tensor algebra someday!
Amazing video!!!!

larzcaetano

What a great lesson! Congratulations! 👏👏👏

estudematematica

Can you please make a video on minors, cofactors, and adjoints, and why we need them?
Why do we use them to calculate determinants, and how are they related to determinants?

programmer

Some tutor, yar! Out and out awesome!

rktiwa

thank you, really helpful and concise :)

kingplunger

Can you make a video about the method for finding the square root of a number, and why that method works?

mrwclasseshazaribagh

How can I design my studio like yours...

AjayKumar-jbqe

Now you see what unit vectors are. They are the column vectors that are 2 by 1.
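
In symbols, a small worked example (the generic 2-by-2 matrix below is an illustration, not from the comment): applying a matrix to the unit vectors picks out its columns.

```latex
e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad
e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad
\begin{pmatrix} a & b \\ c & d \end{pmatrix} e_1 = \begin{pmatrix} a \\ c \end{pmatrix}, \quad
\begin{pmatrix} a & b \\ c & d \end{pmatrix} e_2 = \begin{pmatrix} b \\ d \end{pmatrix}.
```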

joetursi

Feels like a 3Blue1Brown video, but drier. Hmm...

argamnex

In actuality, it is conceptually more fundamental than this. Your explanation of a vector space described vectors as being tuples of numbers. That is not what a vector is. A vector space is just composed of arbitrary objects you can add, and you can scale them by some numerical factor, an element of a field. As long as the operations are well-defined and satisfy the vector space axioms, it does not matter what the vectors are. They could be tuples of numbers, but they could also be sequences, tuples of tuples, functions of real numbers, and funnily enough, they could even be matrices. In fact, you can have a vector space whose elements are vector spaces themselves. If I have a collection of apples, and I give this collection of apples some form of addition, and a way to scalar multiply, then I have a vector space whose vectors are my apples. That is, fundamentally, what a vector space is. In color science, we deal with infinite-dimensional vector spaces, but the vectors are not tuples of numbers or arrows in space. The vectors are the colors themselves. Color vision is a phenomenon that can be completely formalized as a projection between vector spaces.
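
For instance, real-valued functions form a vector space under pointwise operations (a standard textbook example, added here for concreteness):

```latex
(f + g)(x) = f(x) + g(x), \qquad (c \cdot f)(x) = c\,f(x),
\qquad f, g : \mathbb{R} \to \mathbb{R}, \quad c \in \mathbb{R}.
```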

A matrix, meanwhile, is a function that takes tuples of indices and maps them to a single number from a field. The indices do not have to be 2-tuples either. They could be 3-tuples. With 3-tuples, you can have things like 2-by-3-by-4 matrices, and the entries are arranged not in a rectangular 2-dimensional array, but in a 3-dimensional array that looks like a rectangular prism. That is just what matrices are. A different question is to explain why matrices are important, and how they are related to vector spaces. If you have an n-dimensional vector space, then the space of n-by-1 column matrices just so happens to be isomorphic to that n-dimensional vector space. If you have a fixed basis, then you can arrange the coefficients of any linear combination of that basis into the entries of an n-by-1 column matrix. That is what allows us to represent multilinear operators between finite-dimensional vector spaces as matrices. The vector space could be R^n, but it could be something completely different. Also, the basis vectors do not have to be (1, 0) and (0, 1). I could just as easily say the basis vectors are (1/2, 1/2) and (1/2, –1/2). It does not matter. As long as they are linearly independent, and they span the space, they can be arbitrarily chosen as basis vectors. The idea is simply this: with multilinear operators, all the information you need to uniquely identify them is to know what they do to the basis vectors, regardless of what you choose those basis vectors to be. How you represent an operator as a matrix depends on the basis you choose, just as how you represent a vector as a matrix depends on the basis you choose, but the magic about matrices is that regardless of the choice, the structure of the equations will always look the same, and the operations between the matrices will always be the same.
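
As a quick check of that basis claim (my own worked numbers), the vector (3, 1) has coordinates 4 and 2 in the basis {(1/2, 1/2), (1/2, –1/2)}:

```latex
4 \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix}
+ 2 \begin{pmatrix} 1/2 \\ -1/2 \end{pmatrix}
= \begin{pmatrix} 2 + 1 \\ 2 - 1 \end{pmatrix}
= \begin{pmatrix} 3 \\ 1 \end{pmatrix}.
```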

Finally, the importance of linear operators in higher-dimensional vector spaces is the same as that of linear functions in 1-dimensional calculus. Any sufficiently smooth function between two vector spaces, regardless of whether that function is linear or not, can be expanded into a Taylor series, and the linearization process that gives rise to this expansion uses linear operators. This gives matrices additional importance as functions between vector spaces. This is why the definition of the derivative of f at p that you find in many textbooks for functions f : R^n → R^n is that there exists a linear operator A(p) on R^n such that lim_{x → p} ||f(x) − f(p) − A(p)(x − p)|| / ||x − p|| = 0. This definition actually works if f is a function between any two normed vector spaces, but if the vector spaces are finite-dimensional, then A(p) is always representable as a matrix.
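
A minimal numerical sketch of that limit definition (the function f, the point p, and the NumPy check below are my own example, not from the comment):

```python
# Check that the Jacobian A(p) of f(x, y) = (x^2, xy) satisfies
# ||f(x) - f(p) - A(p)(x - p)|| / ||x - p|| -> 0 as x -> p.
import numpy as np

def f(v):
    x, y = v
    return np.array([x**2, x*y])      # a nonlinear map R^2 -> R^2

def jacobian(v):
    x, y = v
    return np.array([[2*x, 0.0],      # partial derivatives of x^2
                     [y,   x]])       # partial derivatives of xy

p = np.array([1.0, 2.0])
A = jacobian(p)

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    x = p + h * np.array([1.0, 1.0])  # approach p along one direction
    ratio = np.linalg.norm(f(x) - f(p) - A @ (x - p)) / np.linalg.norm(x - p)
    print(h, ratio)                   # the ratio shrinks roughly like h
```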

angelmendez-rivera

Pfft, tensors are better.

And multilinear maps are even better.

Where are your blades and orbifolds?

codatheseus