Laplace transform of t^n, using series

Laplace transform of t^n, using series,
properties of the Laplace transform,
Laplace transform examples,
differential equations with the Laplace transform,

blackpenredpen
Comments

I'm sorry to bump in again. I didn't have time yesterday to respond properly (different time zones). L(exp(at))(s) is well defined when s > a, but we want s > 0 for L(t^n). However, to expand the Taylor series we need |a/s| < 1, so since s > 0 that means |a| < s; in particular, a can be negative.
This is important at 8:50. In general, linearity works in the finite case; to have it in the infinite case we need to work a little more. Pretty much always this means using one of Lebesgue's convergence theorems (the dominated or the monotone one). Here we can try the dominated one with help of the triangle inequality. So we want to show that
L(\sum_n (a*t)^n/n!) = \sum_n L((a*t)^n/n!), which, written as integrals (\Int means integrating over t > 0), is
\Int exp(-st) \sum_n (a*t)^n/n! = \sum_n \Int exp(-st)(a*t)^n/n!.
So now we need to find a function g_{a, s}(t) > 0 (for different a and s it's a different function of t; we can think of a and s as fixed, but |a| < s has to be satisfied, which will be shown) such that |\sum_{n=1}^N exp(-st)(a*t)^n/n!| < g_{a, s}(t) for every N. We find a candidate using the triangle inequality:
we take g_{a, s}(t) = \sum_{n=1}^{\infty} exp(-st)(|a|*t)^n/n! (since t > 0, we can skip writing |t|).
Now we need this function to be integrable for Lebesgue's theorem to work. That means L(exp(|a|t))(s) has to be well defined, which means |a| < s. So we prove infinite linearity for |a| < s. For other a's and s's it doesn't hold, because then the Taylor expansion doesn't converge.

So when we fix s > 0, we have an equality of two functions of a, each given by a Taylor series expanded around 0, defined for a's such that |a| < s. So all derivatives at 0 have to be equal, and so the coefficients are equal. If you take only 0 < a < s, you end up with two functions given by Taylor series expanded around 0 which are equal for 0 < a < s, and equality of coefficients is not as easily seen. It still holds, but to show it we need to extend these functions so the domain contains a neighbourhood of 0, and then use equality of derivatives. We want to extend them in a way that keeps them analytic, and we need to know that this extension is unique; it is, so if we take these series for any |a| < s they still have to be equal. That's why we sometimes say analytic functions are stiff, kinda like polynomials (in fact they are like polynomials, but of infinite degree). Just as polynomials need only a finite number of points to determine them (given the degree), analytic functions need a countably infinite set of points. Obviously these can't be arbitrary points: sin(x) = 0 for x = k*\pi, and sin is analytic, but sin isn't 0 everywhere. But if this set of points has a limit point, then it pretty much determines the analytic function. So if we have two Taylor series expanded around 0 that are equal at points arbitrarily close to 0 (that is, at some points converging to 0), then the functions are equal on the whole radius of convergence and the coefficients are equal. The only time we can lose uniqueness of the extension is when we extend outside the radius of convergence (by re-expanding the function around some point close to the edge).

Then we can get a different radius and go outside our previous disk. The only reason the radius is not infinite is that there are singularities on its boundary, and we can try to extend the function by avoiding them. When we go around a singularity, we can end up with completely different values for the same arguments. An example is the logarithm on the complex plane: we cannot define the logarithm around 0; we always need to make a cut, and the way of cutting defines a different logarithm function (called a branch of the logarithm). In your case, 0 < a < s is sufficient to get equality of coefficients, because we have a's as close to 0 as we want, but we need some knowledge about analytic functions. It's easier to have equality of functions in a neighbourhood of 0 and then just say the coefficients are equal because all the derivatives are equal. But we still need to show that infinite linearity holds, and I don't think you can just skip this part, because it's also crucial. I know it's tempting to just manipulate formulas and count on the hidden beauty and harmony of mathematics, but honestly, every time we want to pull a limit out from under an integral or change the order of limits, we have to prove it, because there are many cases in which it's not valid. Linearity works in general in the finite case; in the infinite case a proof is always needed (even just a sketch in one's head). And pretty much the only 'automatic' tools we have for proving it (that I can think of, at least) are Lebesgue's theorems.
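A quick numerical sanity check of the term-by-term step described above (a rough sketch; the sample values a = 0.5, s = 2, the truncation T, and the step count are my own ad-hoc choices, not from the discussion): for |a| < s, integrating a truncated exponential series against exp(-st) should match the sum of the individual transforms L(t^n) = n!/s^(n+1), and both should approach 1/(s-a).

```python
import math

def laplace_of_partial_sum(a, s, N, T=20.0, steps=20000):
    """Midpoint-rule approximation of Int_0^T exp(-s t) * sum_{n=0}^N (a t)^n / n! dt."""
    dt = T / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        partial = sum((a * t) ** n / math.factorial(n) for n in range(N + 1))
        total += math.exp(-s * t) * partial * dt
    return total

def sum_of_transforms(a, s, N):
    """sum_{n=0}^N a^n / s^(n+1), using L(t^n)(s) = n!/s^(n+1) term by term."""
    return sum(a ** n / s ** (n + 1) for n in range(N + 1))

a, s = 0.5, 2.0          # |a| < s, the region where the swap is justified
lhs = laplace_of_partial_sum(a, s, N=20)
rhs = sum_of_transforms(a, s, N=20)
exact = 1.0 / (s - a)    # L(exp(at))(s) = 1/(s - a)
print(lhs, rhs, exact)   # all three agree closely
```

This only checks one point (a, s) numerically, of course; the dominated-convergence argument above is what makes the swap valid for every |a| < s.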

PS. Sorry for wall of text.

kokainum

I find it amazing that because the Laplace transform is linear you can do this. I'm learning a lot recently with your videos! Thanks!

LuisSanchezDev

Great! Thank you so much, my dear teacher ❤️

wuyqrbt

Blackpenredpen, could you add a "Latest Videos" or "Recently Uploaded" playlist to your channel? It's a small quality-of-life change for the viewers that should be simple to implement.
P.S. I love your videos

amitbentsur

Excellent explanation... it really helps a lot.

uesugikenshin

I might be missing something here, but did you just assume that if the sums are equal then the components of the sums are equal?

Like, I can take the sum of all (2/3)*(1/3)^n,
and then the sum of all (1/2)*(1/2)^n, which both converge to 1 by the best friend theorem, and then just assume that (2/3)*(1/3)^n = (1/2)*(1/2)^n
for all n?? How is this different?
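The objection above is easy to illustrate numerically (a minimal sketch; the cutoff of 60 terms is an arbitrary choice of mine): both geometric series sum to 1, yet their terms differ, so equality of two sums at a single point says nothing about the terms. What rescues the video's step is that the two transforms agree for every s in a whole interval, which (by the identity-theorem argument in the long comment above) does force equal coefficients.

```python
# Two different geometric series with the same sum:
s1 = sum((2/3) * (1/3) ** n for n in range(60))  # (2/3)/(1 - 1/3) = 1
s2 = sum((1/2) * (1/2) ** n for n in range(60))  # (1/2)/(1 - 1/2) = 1
print(s1, s2)            # both very close to 1.0

# ...but the individual terms clearly differ:
t1 = (2/3) * (1/3) ** 1  # 2/9
t2 = (1/2) * (1/2) ** 1  # 1/4
print(t1, t2)
```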

teodorlamort

Or use: Lap{t^n} = -d/ds Lap{t^(n-1)}, where Lap{1} = 1/s
so Lap{t} = -d/ds (1/s) = 1/s^2
Lap{t^2} = -d/ds (1/s^2) = 2/s^3
Lap{t^3} = -d/ds (2/s^3) = 2*3/s^4
...
Lap{t^n} = n!/s^(n+1)
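The recursion above is easy to mechanize; a minimal sketch (the pair encoding (c, k) for c/s^k is my own choice, not from the comment):

```python
def lap_tn(n):
    """Build Lap{t^n} via Lap{t^k} = -d/ds Lap{t^(k-1)}, starting from Lap{1} = 1/s.
    Returns (c, k) meaning c / s^k."""
    c, k = 1, 1                # Lap{1} = 1/s
    for _ in range(n):
        c, k = c * k, k + 1    # -d/ds (c / s^k) = c*k / s^(k+1)
    return c, k

print(lap_tn(3))  # (6, 4), i.e. 3!/s^4
```

Each step multiplies the coefficient by the current power of s and bumps that power by one, which is exactly why n! appears in the closed form n!/s^(n+1).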

rob

8:50: Where did you find this property? Can you prove it?

david-ytoo

Can you put up a video on probability? Please, can you?!

hanwadou