Derivative diagonalizable?

Can you turn the derivative into a diagonal matrix? Watch this video and find out!


Comments

I think we should introduce a norm on our vector space (L^2, for example) to consider the action of the operator, so that we can distinguish between eigenfunctions and almost-eigenfunctions.

Indeed, I think that the Sobolev space W^(1, 2)[a, b] is what you are looking for, namely the space of functions in L^2[a, b] whose first derivative is also in L^2[a, b].
This way we know that the derivative maps W^(1, 2) functions into L^2[a, b], which is separable; in particular L^2[a, b] has a suitable complete orthonormal Fourier system depending on the interval [a, b].

In this sense the differential operator T(f) = f' is diagonalizable, since the functions e^{ikx}/sqrt(b - a) with k = 2πn/(b - a), n ∈ Z, form a complete orthonormal system for the Hilbert space L^2[a, b].

danieleferretti
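
A quick numerical illustration of this (just a sketch, assuming a periodic grid on [0, 2π) so that the discrete Fourier modes e^{ikx} really are eigenfunctions): in the Fourier basis the derivative acts diagonally, multiplying mode k by ik.

import numpy as np

# Differentiate by passing to the Fourier basis, scaling each mode by i*k,
# and transforming back: the derivative is a diagonal operator in this basis.
N = 128
x = np.linspace(0, 2*np.pi, N, endpoint=False)
f = np.exp(np.sin(x))                         # a smooth periodic test function
k = np.fft.fftfreq(N, d=1.0/N)                # integer wavenumbers
df = np.fft.ifft(1j * k * np.fft.fft(f)).real
print(np.max(np.abs(df - np.cos(x)*np.exp(np.sin(x)))))  # ~1e-13, spectrally accurate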

I think they are a basis. Like you said, imaginary eigenvalues lead to the Fourier transform, and more generally complex (or real) eigenvalues lead to the Laplace transform.

fNktn

Thinking it over, there's a generalization you can make at the end about the eigenfunctions.
You end up with y' = λy, equivalently y' - λy = 0.
You can express this with the operator [D - λ](y) = 0.
Applying the operator p times, [D - λ]^p (y) = 0, gives a root of multiplicity p,
which generalizes the solutions to y(t) = C t^q e^{λt}, q < p.

This is important, because it proves that we have a true basis for polynomials and finite exponential polynomials. If we generalize the notion of basis to any convergent linear combination of eigenfunctions (as is suggested in other comments), things get funky.

Besides getting any function *with* a Taylor series, for λ ∈ ℂ you also get things like the Weierstrass function, which isn't differentiable, so it can't even be a vector in our original vector space.
I think that's pretty cool.

PeterBarnes
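
A small sympy check of the multiplicity claim above (with the hypothetical value p = 3 and a symbolic λ): applying (D - λ) p times annihilates t^q e^{λt} exactly when q < p.

import sympy as sp

t, lam = sp.symbols('t lambda')
p = 3
for q in range(p + 1):
    y = t**q * sp.exp(lam*t)
    for _ in range(p):                 # apply (D - lambda) a total of p times
        y = sp.diff(y, t) - lam*y
    print(q, sp.simplify(y))           # 0 for q = 0, 1, 2; 6*exp(lambda*t) for q = 3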

I think to make sin(x) fit the diagonalization, you'd have to split it into e^{ix}/(2i) - e^{-ix}/(2i).
However, for sin(x) + cos(x) you could probably diagonalize it directly.

ThAlEdison
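
A sympy sketch of that split: rewriting sin in the exponential eigenbasis, each piece is an eigenfunction that d/dx simply scales by its own eigenvalue ±i.

import sympy as sp

x = sp.symbols('x')
print(sp.sin(x).rewrite(sp.exp))       # -I*(exp(I*x) - exp(-I*x))/2
# Each exponential component is an eigenfunction of d/dx:
for lam in (sp.I, -sp.I):
    print(sp.simplify(sp.diff(sp.exp(lam*x), x) / sp.exp(lam*x)))  # prints I, then -I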

Btw, a student asked me about linear algebra problems today. Oh man, I should pick up my linear algebra again...

blackpenredpen

The integral you got at the end resembled the Laplace transform (assuming we stick to real numbers only). I haven't thought about this question much, but I think you'd have to think about how to get the inverse of Laplace transforms (or similar integrals). Sadly, I don't know much about them.


I really appreciate the question posed. Makes me want to revisit so much of what I learned!

maxxie

Using the infinite-dimensional polynomial basis { 1, x, x^2, x^3, ... } works, of course. The derivative acts on coefficient vectors by

T(v0, v1, v2, ...) = (v1, 2 v2, 3 v3, ...),

so the eigenvalue equation T(v) = t v gives

v1 = t v0
v2 = (1/2) t v1 = (1/2) t^2 v0
v3 = (1/3) t v2 = (1/3!) t^3 v0
etc.

These are exactly the Taylor coefficients of v0 e^{tx}.

burpleson
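
A sketch of what this looks like when truncated (hypothetical size N = 8): the derivative matrix in the monomial basis is nilpotent, so every eigenvalue is 0 and the truncation is not diagonalizable; genuine eigenvectors only emerge from the full infinite recursion above, whose coefficients t^n/n! build the series of e^{tx}.

import numpy as np

# Derivative matrix on the truncated monomial basis {1, x, ..., x^(N-1)}:
# D[i, i+1] = i+1 sends the coefficient of x^(i+1) to (i+1) times that of x^i.
N = 8
D = np.diag(np.arange(1, N), k=1)
print(np.linalg.matrix_power(D, N))    # the zero matrix: D is nilpotent
print(np.linalg.eigvals(D))            # all eigenvalues are 0, one Jordan block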

Dr P, u r the coolest mathematician ever, great work!

AditYa-svnz

With a purely algebraic definition of "basis" (i.e. every vector is a finite linear combination of basis elements), the putative basis of exponentials is not one. If you allow infinite linear combinations, then you get into topology and the answer depends on which one you choose.

scottgoodson

It makes sense that you cannot diagonalize the derivative matrix: if you could, then you could easily compute its matrix powers, and hence powers of the derivative operator, i.e. repeated differentiation. But it would mean even more: since powers of a diagonal matrix are obtained simply by powering the diagonal entries, you could use this to compute fractional derivatives, like the half-derivative or the i-th derivative. Yet the half-derivative of a polynomial is not a polynomial, and these matrices must map polynomials into polynomials, a contradiction.

mikety
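
To make the half-derivative step concrete, here is a sympy sketch using the Riemann-Liouville definition D^(1/2) f(x) = (1/Γ(1/2)) d/dx ∫_0^x f(s)/sqrt(x - s) ds (one convention among several): applied to f(x) = x it gives 2 sqrt(x/π), which is indeed not a polynomial.

import sympy as sp

x, s = sp.symbols('x s', positive=True)
inner = sp.integrate(s / sp.sqrt(x - s), (s, 0, x))          # (4/3) x^(3/2)
half = sp.simplify(sp.diff(inner, x) / sp.gamma(sp.Rational(1, 2)))
print(half)                                                  # 2*sqrt(x)/sqrt(pi)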

It really depends on the vector space you are talking about. I suppose there are vector spaces where it is possible to do this. Namely, since λ can be any real number, the vector space would be the space of all functions which can be uniquely written as f(x) = ∫_R C(λ) e^{λx} dλ with some unique C(λ).

leonardromano
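
A concrete instance of such a representation, with the hypothetical choice C(λ) = e^{-λ²} (sympy sketch): the superposition of eigenfunctions comes out in closed form.

import sympy as sp

x = sp.symbols('x', real=True)
lam = sp.symbols('lambda', real=True)
# f(x) = Integral over R of C(lambda) * exp(lambda*x) d lambda
f = sp.integrate(sp.exp(-lam**2) * sp.exp(lam*x), (lam, -sp.oo, sp.oo))
print(sp.simplify(f))                  # sqrt(pi)*exp(x**2/4)

So f(x) = sqrt(π) e^{x²/4} is a genuine continuum superposition of the eigenfunctions e^{λx}.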

Keep in mind that the span of a set is the space of finite linear combinations over the set, even if the set is infinite.
You cannot get, for example, the function x by a FINITE linear combination of exponentials: any finite combination Σ c_i e^{λ_i x} with distinct λ_i solves a constant-coefficient linear ODE whose characteristic roots are all simple, while x requires the double root 0.

You can discretize and approximate, but you cannot truly achieve the function x.

orangeguy
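
A numerical sketch of that failure (hypothetical choices of λ's and fit points): interpolating f(x) = x with a finite combination of exponentials succeeds at the fit points, but the combination drifts far from x everywhere else.

import numpy as np

lambdas = np.array([-1.0, -0.5, 0.5, 1.0])
xs = np.array([0.0, 1.0, 2.0, 3.0])            # fit points
A = np.exp(np.outer(xs, lambdas))              # A[i, j] = exp(xs[i] * lambdas[j])
c = np.linalg.solve(A, xs)                     # exact interpolation at the fit points
x_test = 10.0
# Away from the fit points the fastest-growing exponential dominates:
print(c @ np.exp(lambdas * x_test), "vs", x_test)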

Doc, seriously, you gotta make a playlist for undergrad math, because honestly I don't understand this, and your way of telling and teaching makes me wanna learn.

yashovardhandubey

If you restrict the domain not to all real numbers but to the positive semi-axis, you will get the basis for the Laplace transform, as was said in a comment below.
Also, if you restrict the domain from both sides you will get an overdetermined function system. I am not sure whether a discrete basis can be extracted from it, or whether closure of this function system in C^inf is possible. It feels like yes. 😅

If you use formal series you can get exp(μ(t - a))/(1 + exp(λ(t - b)))^N. Then, using 1/cosh(λ(t - x))^N, one can have nicer kernels for Laplace-like transformations, but on the whole real domain... Just first thoughts on this...
It is hard to mess with infinity.
Functional analysis is not easy. 😅

danielmilyutin

I tried to write a function as a real exponential series for a while, but the constants that I ended up with didn't go to zero over time, so I don't know if it's actually possible to make a convergent series with only real values for general functions.

MuPrimeMath

Let's say the base space is C_0(R) ∩ L_2(R). Integrating by parts, we have <Df, g> = ∫ (Df) g dx = -∫ f (Dg) dx = <f, -Dg>, so as an operator D* = -D. Then D D* = D* D = -D^2, so D is normal, hence diagonalizable.

looming
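
A finite-dimensional sketch of D* = -D (using a standard centered-difference discretization with periodic boundary conditions, an assumption not taken from the video): the derivative matrix is skew-symmetric, hence normal, with purely imaginary spectrum.

import numpy as np

N = 64
h = 2*np.pi / N
I = np.eye(N)
# Centered difference with wraparound: (Df)_i = (f_{i+1} - f_{i-1}) / (2h)
D = (np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)) / (2*h)
print(np.allclose(D.T, -D))                        # True: skew-symmetric
print(np.allclose(D @ D.T, D.T @ D))               # True: normal
print(np.max(np.abs(np.linalg.eigvals(D).real)))   # ~0: purely imaginary eigenvalues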

On the basis e^x: is e^{kx} diagonalizable? f(x) = e^{kx}, so... the derivative is k e^{kx}. It's perfectly invertible, and this can be extended to trigonometric functions (by the complex definitions of sin & cos). I wonder, however, whether the space of derivatives of all polynomials of degree k-1 divided by x^k is diagonalizable? The derivative of a/x + b/x^2 + ... doesn't ever delete information, and is therefore perfectly invertible.

MrRyanroberson

I know I'm late to the party, but wouldn't the answer to the diagonalizability of the derivative as a map C^inf(R) -> C^inf(R) be no? I remember from my real analysis class in undergrad, we studied a function defined like so: f(x) = 0 if x = 0, and otherwise f(x) = e^(-1/x^2). We proved that this function is in C^inf(R), and every derivative of f at 0 is equal to 0 (basically, it was an example of an infinitely differentiable function that nonetheless is not equal to its Taylor series at 0). If we assume that f is a sum of functions C_k * e^(L_k * x), then that would mean that the sum of the C_k is 0, and so are the sums of C_k L_k, C_k L_k^2, C_k L_k^3, etc. I didn't cook up a formal proof, but my spidey-senses are tingling, and it feels like there is a contradiction hiding somewhere here.

tomaszgruszka
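
A sympy check of the flat-function claim (first few derivatives only): every derivative of e^(-1/x^2) vanishes at 0, so its Taylor series at 0 is identically zero even though f is not.

import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1/x**2)
for n in range(4):
    print(n, sp.limit(sp.diff(f, x, n), x, 0))   # 0, 0, 0, 0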

It's not a basis. In order for a set to be a basis, every vector must be written as a _finite_ linear combination of elements of the set. So under this purely algebraic definition of a (Hamel) basis, the set outlined is not one.

If you introduce a norm or an inner product, different definitions of basis become available. But you need to provide a norm/inner product for that.

thephysicistcuber

These do not form a basis.
Proof: I claim sin is not in the span. If it were, we could write sin(x) = Σ c_t e^{tx}, where t ranges over R (with only finitely many c_t nonzero).
Now differentiate both sides 4 times. We get sin(x) = Σ t^4 c_t e^{tx}. Since writing a vector in a basis gives unique coefficients, we must have t^4 c_t = c_t for all t in R. The only way this can happen is if c_t = 0 or t = 1 or t = -1.
So we can now write sin(x) = a e^x + b e^{-x} for some real a, b. But this clearly cannot happen: as x → ∞ the right side is unbounded unless a = 0, and then it tends to 0 while sin keeps oscillating.

willnewman
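
A quick sympy confirmation of that last step: matching the first four derivatives of sin(x) and a e^x + b e^{-x} at 0 forces a + b = 0, a - b = 1, a + b = 0, a - b = -1, which is inconsistent.

import sympy as sp

x, a, b = sp.symbols('x a b')
g = a*sp.exp(x) + b*sp.exp(-x) - sp.sin(x)
eqs = [sp.Eq(sp.diff(g, x, n).subs(x, 0), 0) for n in range(4)]
print(sp.solve(eqs, [a, b]))   # []: no choice of a, b works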