Why {1,x,x²} Is a Terrible Basis

Comments

Never thought of the approximation perspective! Thanks

hogun

This was so insightful! If I ever construct basis functions (Legendre or something else), this will be an additional reason to perform a Gram-Schmidt process 👏🏽 Funny how I only have a high school degree, but I feel like I'm learning sooo much because of educators like you on YouTube 😭🙏🏽🎊

ozzyfromspace

I have 3 questions.
1. I can see why the monomials are a terrible basis in this inner product space, but is there an inner product space where the monomials do form an orthogonal basis? It would probably be a useful inner product for studying Taylor series and analytic functions.

2. What is the span of the set of all Legendre polynomials? Is it the set of all analytic functions just like the monomials which build Taylor series?

3. I wouldn't think the Gram-Schmidt process could change the span of the basis vectors, but is it possible when you have an infinite-dimensional vector space?

ryanlangman
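
On question 3: at every finite stage, Gram-Schmidt only subtracts multiples of earlier vectors, so the span of the first n vectors is preserved; the subtleties in infinite dimensions concern completeness and convergence, not span. As a quick illustration (a minimal NumPy sketch of my own, not from the video), running Gram-Schmidt on {1, x, x², x³} under ⟨p, q⟩ = ∫₋₁⁺¹ p(x)q(x) dx recovers the Legendre polynomials up to scaling:

```python
from numpy.polynomial import Polynomial

def inner(p, q):
    # <p, q> = integral of p(x) * q(x) over [-1, 1]
    r = (p * q).integ()
    return r(1.0) - r(-1.0)

# Monomial basis: 1, x, x^2, x^3
monomials = [Polynomial.basis(k) for k in range(4)]

orthogonal = []
for m in monomials:
    p = m
    for e in orthogonal:
        p = p - (inner(m, e) / inner(e, e)) * e  # subtract projections onto earlier vectors
    orthogonal.append(p)

for p in orthogonal:
    # Rescale so that p(1) = 1, the usual Legendre normalization
    print((p / p(1.0)).coef.round(6))
# Prints the coefficients of P0..P3, e.g. P2 = -0.5 + 1.5 x^2 and P3 = -1.5 x + 2.5 x^3
```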

Around the 3:30 mark, when you give the "magnitude of error" argument to justify why one basis is better than the other, I could not help but think about the concept of continuity.
It seems that for the first basis, where a small error sends [0, 2] to [2.5, 0], the "function" that maps a measurement error to the vector's representation in the basis would be "less continuous" than that of the second, orthogonal basis, since for the first basis a slight change in the input causes vast changes in the output.

I'm also tempted to say that the first "function" would not be "continuous" at all, since small changes can swap two zeroes in the output basis representation.
To my intuition, continuity in this context should keep the zero components of the vectors where they are, not swap them as in the example.

I'm not sure if "continuity" is the correct terminology to communicate what I'm trying to say, but this notion just struck me while you were explaining it!

Thanks for your videos, they are awesome!

ThemJazzyBeats
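
The standard name for this intuition is conditioning (or numerical stability): with an orthogonal basis, the map from function to coefficients is well-conditioned, so small input errors stay small in the output. A hedged numerical sketch of my own (the target function and the perturbation are made up for illustration), comparing least-squares fits in the monomial and Legendre bases:

```python
import numpy as np

x = np.linspace(-1, 1, 200)
f = np.exp(x)                        # target function
f_pert = f + 1e-6 * np.sin(40 * x)   # tiny perturbation of the input

deg = 9
V_mono = np.vander(x, deg + 1, increasing=True)   # columns 1, x, ..., x^9
V_leg = np.polynomial.legendre.legvander(x, deg)  # columns P0, ..., P9

for name, V in (("monomial", V_mono), ("Legendre", V_leg)):
    c, *_ = np.linalg.lstsq(V, f, rcond=None)
    c_pert, *_ = np.linalg.lstsq(V, f_pert, rcond=None)
    print(f"{name:8s} cond(V) = {np.linalg.cond(V):9.2e}   "
          f"coefficient shift = {np.linalg.norm(c - c_pert):.2e}")
```

The monomial matrix's far larger condition number is exactly the error magnification you describe: the map is still linear (hence continuous), but its magnification factor, the condition number, can be enormous.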

Dear Prof Grinfeld, after this lecture I felt that you were "pulling a fast one on us" when it came to the decomposition of x²+x. When dealing with vectors, out of all the possible inner products we chose the dot product based on geometric arguments (i.e. the cos(α) did what we needed it to do). On the other hand, when it came to polynomials you just presented one possible inner product, and hence we obtained "orthogonal" polynomials, but these were explicitly linked to the inner product you chose, ∫₋₁⁺¹ p(x)q(x) dx. Here is where I felt a little short-changed: could you comment on other sets of orthogonal polynomials that one could get using a different inner product, and then explain how mathematicians choose among the different orthogonal bases? Thank you, George

georgeorourke
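
For what it's worth, each choice of weight w(x) in ⟨p, q⟩ = ∫ p(x)q(x)w(x) dx gives its own classical orthogonal family: w = 1 on [-1, 1] gives Legendre, w = 1/√(1 - x²) on [-1, 1] gives Chebyshev, w = e^(-x²) on (-∞, ∞) gives Hermite, and w = e^(-x) on [0, ∞) gives Laguerre; which family one picks usually depends on which error measure (or which domain) the application cares about. A small sketch of my own checking Chebyshev orthogonality under its weight, via Gauss-Chebyshev quadrature:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_inner(m, n, N=64):
    # Gauss-Chebyshev quadrature: the integral of f(x)/sqrt(1 - x^2) over [-1, 1]
    # equals (pi/N) * sum of f at the nodes cos((2k - 1) pi / (2N)),
    # exactly for polynomials of degree < 2N.
    k = np.arange(1, N + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * N))
    return np.pi / N * np.sum(C.Chebyshev.basis(m)(x) * C.Chebyshev.basis(n)(x))

print(cheb_inner(1, 3))  # ~0:    T1 and T3 are orthogonal under the Chebyshev weight
print(cheb_inner(2, 2))  # ~pi/2: the squared norm of T2

# Without the weight (the lecture's inner product) they are not orthogonal:
r = (C.Chebyshev.basis(1) * C.Chebyshev.basis(3)).integ()
print(r(1.0) - r(-1.0))  # -0.4
```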

Amazing work, and I really got the intuition behind the concepts of Numerical Analysis thanks to your lectures, but I have one question: why do we limit the concept of orthogonality to the interval [-1, 1]?

rookiecookie
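
Orthogonality is always relative to the inner product, and the interval is part of the inner product's definition; [-1, 1] is just the conventional reference interval. On a different interval you get a different (but closely related) orthogonal family, e.g. the shifted Legendre polynomials P_n(2t - 1) on [0, 1]. A small sketch of my own showing the interval dependence:

```python
from numpy.polynomial import Polynomial

def inner(p, q, a, b):
    # <p, q> on [a, b]
    r = (p * q).integ()
    return r(b) - r(a)

P1 = Polynomial([0, 1])          # Legendre P1 = x
P2 = Polynomial([-0.5, 0, 1.5])  # Legendre P2 = (3x^2 - 1)/2

print(inner(P1, P2, -1, 1))  # 0.0:   orthogonal on [-1, 1]
print(inner(P1, P2, 0, 1))   # 0.125: not orthogonal on [0, 1]

# Shifted Legendre: substitute x -> 2t - 1, e.g. P2(2t - 1) = 6t^2 - 6t + 1
S1 = Polynomial([-1, 2])
S2 = Polynomial([1, -6, 6])
print(inner(S1, S2, 0, 1))   # 0.0: the shifted family is orthogonal on [0, 1]
```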

Dear Prof Grinfeld, this was an amazing insight for me into why orthogonal matrices are well-conditioned! Thanks.
One quick question, if you can help: why did you say that you should have said "x^7 and x^9" instead of "x^7 and x^8"? Is it just because x^7 and x^9 are also very similar on the interval (-1, 0), or for some deeper reason?

ferdinandoinsalata
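
There is a deeper reason, and the parity of the exponents is the key: ⟨x^m, x^n⟩ = ∫₋₁⁺¹ x^(m+n) dx, which is 0 whenever m + n is odd and 2/(m + n + 1) when it is even. So x^7 and x^8 are in fact exactly orthogonal under this inner product, while x^7 and x^9 are nearly parallel, which is why the corrected pair makes the point. A quick check of my own:

```python
from math import sqrt

def inner(m, n):
    # <x^m, x^n> = integral of x^(m + n) over [-1, 1]
    return 2.0 / (m + n + 1) if (m + n) % 2 == 0 else 0.0

def cos_angle(m, n):
    return inner(m, n) / sqrt(inner(m, m) * inner(n, n))

print(cos_angle(7, 8))  # 0.0:    x^7 and x^8 are exactly orthogonal here
print(cos_angle(7, 9))  # ~0.993: x^7 and x^9 are nearly parallel
```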

With an orthogonal basis it is easier to find the coefficients of linear combinations.

duckymomo
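
Right, and it is worth spelling out why (a sketch of the standard identities, not a quote from the video): with an orthogonal basis every coefficient is a single independent inner product, while a non-orthogonal basis couples them all through the Gram matrix.

```latex
% Orthogonal basis \{P_k\}: each coefficient stands alone
c_k = \frac{\langle f, P_k \rangle}{\langle P_k, P_k \rangle},
\qquad f \approx \sum_k c_k P_k .

% Non-orthogonal basis \{b_j\}: solve the coupled Gram system
\sum_j \langle b_i, b_j \rangle \, c_j = \langle f, b_i \rangle
\quad \text{for every } i .
```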

What troubles me is that even though {1, x, x^2, ...} is a terrible basis, we still have to specify Legendre polynomials in terms of that basis. So, how do we know we can avoid some computer precision problems with Legendre polynomials if we're going to run into precision problems defining them in the first place?

alexcwagner
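
One standard answer, offered as a hedged aside: in numerical work the Legendre polynomials are usually never expanded into monomial coefficients at all; they are evaluated pointwise with Bonnet's three-term recurrence, which is numerically well-behaved. A minimal sketch (mine, using the usual P_n(1) = 1 normalization):

```python
def legendre(n, x):
    """Evaluate P_n(x) via Bonnet's recurrence:
    (k + 1) P_{k+1}(x) = (2k + 1) x P_k(x) - k P_{k-1}(x)."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x  # P0 and P1
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

print(legendre(3, 0.5))  # -0.4375, matching P3(x) = (5x^3 - 3x)/2
```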

Stable and linear: under small perturbations, errors turn into linear functions of those small errors. "First-order analysis", in physics speak.

styx

Are two functions said to be orthogonal only if their points of intersection lie within the interval over which the orthogonality condition holds?

duckymomo
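
As far as I understand the definition, intersections are beside the point: orthogonality of functions only says that the integral of their product over the chosen interval is zero. A tiny check of my own under the lecture's inner product on [-1, 1]:

```python
from numpy.polynomial import Polynomial

def inner(p, q):
    r = (p * q).integ()
    return r(1.0) - r(-1.0)

x = Polynomial([0, 1])
print(inner(x, x**2))  # 0.0: orthogonal, yet the graphs cross at x = 0 and x = 1
print(inner(Polynomial([1]), Polynomial([2])))  # 4.0: never intersect, yet not orthogonal
```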

Do you have a PNG of the image in your final note available somewhere for download? And does the scaling you used imply that the polynomials in this chart are orthonormal?

ekandrot
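
On the scaling question, a hedged note: with the usual normalization P_n(1) = 1 the Legendre polynomials are orthogonal but not orthonormal, because ⟨P_n, P_n⟩ = 2/(2n + 1); an orthonormal version needs an extra factor of √((2n + 1)/2). A quick check of my own:

```python
from numpy.polynomial import legendre as L

for n in range(4):
    Pn = L.Legendre.basis(n)
    r = (Pn * Pn).integ()
    print(n, r(1.0) - r(-1.0), 2 / (2 * n + 1))  # the last two columns agree
```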

Ahhh yes, Scientific Computing...where it all comes together.

isaackay

The graphical representation of the polynomial functions has nothing to do with the vector space that they create, so IMHO your argument at 4:45 is moot. Actually, the set {1, x, x^2} is the standard basis of the three-dimensional polynomial vector space and, as such, also orthonormal. I guess that makes it the perfect basis?

dimitriosmenounos
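
Both statements can be true at once because they use different inner products: {1, x, x²} is indeed orthonormal under the coordinate dot product on coefficient triples, but the lecture's inner product is ⟨p, q⟩ = ∫₋₁⁺¹ p(x)q(x) dx, and under that one 1 and x² are not orthogonal. A one-line check of my own:

```python
from numpy.polynomial import Polynomial

r = (Polynomial([1]) * Polynomial([0, 0, 1])).integ()  # antiderivative of 1 * x^2
print(r(1.0) - r(-1.0))  # 2/3, not 0: <1, x^2> != 0 in the lecture's inner product
```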

Professor! This is really important.
What is that joke?

nhanNguyen-wofy