Constant Coefficient ODEs: Real & Distinct vs Real & Repeated vs Complex Pair


While we saw a specific example of Constant Coefficient Homogeneous ODEs in my previous video in the playlist, in this video we're going to solve them in generality and really elaborate on the three main cases. The characteristic equation, aka auxiliary equation, can have either real and distinct roots, real and repeated roots, or a complex conjugate pair. Each of these three cases has a specific form of the solution to the differential equation.

0:00 Characteristic Equation
1:50 Three Cases
4:00 Real Distinct Roots
4:24 Real Repeated Roots
6:19 Complex Roots

OTHER COURSE PLAYLISTS:

OTHER PLAYLISTS:
► Learning Math Series
►Cool Math Series:

BECOME A MEMBER:

MATH BOOKS & MERCH I LOVE:

SOCIALS:
Comments

Excellent lecture. I learnt all this stuff in engineering long ago, and even went through the material after seeing your video, which is not so easy to understand! You made this look like pure magic! You just need to carry a wand and wear a hat in your next video, and whoa, there you are: Math Magic! Thanks so much Mr. Bazett (I am from India, and 60+). You make math fun. God bless you

utuberaj

Hi Trefor, I was in your calculus class last summer. Just wanted to say a big thanks for putting out content like this. Thanks to you and other math/physics channels on YouTube I went from "let's just get through the class and get the grade" to starting to see the beauty in math. Your work is very much appreciated, thank you!

niteslaya

Nice to see some people are passionate about what they teach. Cheers.

kiddcamel

You may be wondering why the case with repeated roots behaves so differently from the case when every root is distinct. This video presents having two real roots and having two complex roots as being different cases, but it acknowledges that in both cases, you can simply write the general solution as A·exp(r·t) + B·exp(s·t) if you really want to, where r and s are the roots of the characteristic polynomial. This is definitely not so in the case of repeated roots, where one of the solutions has a factor of t, and this happens only in this specific case. Why?

In the previous video, in the comments section, I explained that, in fact, you can solve second-order linear equations with constant coefficients without having to "guess" the solutions, and the method I presented for solving the example equation relies on the fact that differential equations can always be rewritten so that they look like linear algebra equations, and that is because the nth order derivative, for every natural n, is always a linear operator, and so it behaves like a matrix. Specifically, the example equation was y''(t) – y'(t) – 6·y(t) = 0, which I said can be written as (D^2 – D – 6·I)[y(t)] = 0, where D is the derivative operator and I the identity "matrix." Since D^2 – D – 6·I is a polynomial in D, this can be factored as (D + 2)·(D – 3), and this was the key to solving the equation. The same concept actually does apply to an arbitrary second-order linear differential equation. In particular, a·y''(t) + b·y'(t) + c·y(t) = 0 can always be written as (a·D^2 + b·D + c·I)[y(t)] = 0, and since a·D^2 + b·D + c·I is a polynomial of degree 2, it can always be factored as a·(D – s·I)·(D – r·I), where s and r are the roots of the polynomial. I already gave the details in that one comment I wrote in the previous video in this channel, so for the rest of this explanation, I am going to directly work with this factorization instead.
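As a quick numerical illustration of that factorization (an editor's sketch, not part of the original comment; the coefficients a, b, c = 1, –1, –6 are the video's example):

```python
import cmath

# Illustration: the operator polynomial a*D^2 + b*D + c factors as
# a*(D - s*I)*(D - r*I), where r and s are the roots of a*x^2 + b*x + c.
# We check this for the example y'' - y' - 6y = 0, i.e. a, b, c = 1, -1, -6.
a, b, c = 1, -1, -6

disc = cmath.sqrt(b * b - 4 * a * c)
r = (-b + disc) / (2 * a)
s = (-b - disc) / (2 * a)

# The roots are 3 and -2, matching the factorization (D + 2)(D - 3) above.
print(sorted([r.real, s.real]))  # -> [-2.0, 3.0]

# Verify a*(x - s)*(x - r) == a*x^2 + b*x + c at a few sample points.
for x in [-1.5, 0.0, 2.0, 7.25]:
    assert abs(a * (x - s) * (x - r) - (a * x * x + b * x + c)) < 1e-9
```

Working with `cmath` means the same code covers the complex-conjugate case as well, since `cmath.sqrt` of a negative discriminant returns an imaginary number.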

So when solving the equation [a·(D – s·I)·(D – r·I)][y(t)] = 0, what to do? Let (D – r·I)[y(t)] = z(t), so the equation to solve is simply a·(D – s·I)[z(t)] = 0. This is just a first-order linear equation, which after dividing by a, is simply (D – s·I)[z(t)] = 0, which can be rewritten as z'(t) – s·z(t) = 0. This has solutions z(t) = A·exp(s·t), where A is just a constant of integration. So (D – r·I)[y(t)] = A·exp(s·t), which can just be written as y'(t) – r·y(t) = A·exp(s·t). The integrating factor is exp(–r·t), so multiplying by it results in exp(–r·t)·y'(t) – r·exp(–r·t)·y(t) = A·exp[(s – r)·t].
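A small numeric sanity check of the two first-order steps above (an editor's sketch; the values of r, s, A and the test function y are made up):

```python
import math

# Illustrative values, not from the comment above.
r, s, A = 3.0, -2.0, 1.7

def z(t):        # candidate solution of z' - s*z = 0
    return A * math.exp(s * t)

def z_prime(t):  # exact derivative of z
    return A * s * math.exp(s * t)

for t in [0.0, 0.5, 1.3]:
    assert abs(z_prime(t) - s * z(t)) < 1e-9   # z' - s*z = 0 holds

# Integrating-factor identity: e^(-r*t)*(y' - r*y) = d/dt[e^(-r*t)*y],
# checked with a central finite difference for a generic smooth y.
def y(t):
    return math.sin(t) + t * t

def y_prime(t):
    return math.cos(t) + 2 * t

h = 1e-5
for t in [0.2, 1.0]:
    g = lambda u: math.exp(-r * u) * y(u)
    lhs = math.exp(-r * t) * (y_prime(t) - r * y(t))
    rhs = (g(t + h) - g(t - h)) / (2 * h)     # numerical d/dt[e^(-r*t)*y]
    assert abs(lhs - rhs) < 1e-5
```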

This is the key moment. This is the moment where having repeated roots, as opposed to distinct roots, makes an important difference. Why? Because if s = r, which is the case with repeated roots, then s – r = 0, so A·exp[(s – r)·t] = A. Therefore, when you antidifferentiate both sides of exp(–r·t)·y'(t) – r·exp(–r·t)·y(t) = A, you simply get exp(–r·t)·y(t) = A·t + B, so y(t) = A·t·exp(r·t) + B·exp(r·t), and this gives you the same result as in the video. However, if s and r are distinct, then s – r is nonzero, so A·exp[(s – r)·t] is simply an exponential function. Therefore, when antidifferentiating exp(–r·t)·y'(t) – r·exp(–r·t)·y(t) = A·exp[(s – r)·t], you get exp(–r·t)·y(t) = A/(s – r)·exp[(s – r)·t] + B, hence y(t) = A/(s – r)·exp(s·t) + B·exp(r·t), which matches the result in the video if you simply acknowledge that, since s – r is nonzero, A/(s – r) is just another arbitrary constant.
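Both closed forms can be checked by direct substitution (an editor's sketch with illustrative constants, not from the video):

```python
import math

# Illustrative constants.
r, A, B = 1.5, 0.8, -2.0

# Repeated root r: y = (A*t + B)*e^(r*t) should solve y'' - 2r*y' + r^2*y = 0.
def y(t):
    return (A * t + B) * math.exp(r * t)

def y1(t):  # y' via the product rule
    return (A + r * (A * t + B)) * math.exp(r * t)

def y2(t):  # y''
    return (2 * r * A + r * r * (A * t + B)) * math.exp(r * t)

for t in [0.0, 0.7, 2.0]:
    assert abs(y2(t) - 2 * r * y1(t) + r * r * y(t)) < 1e-9

# Distinct roots r != s: w = C*e^(s*t) + D*e^(r*t) should solve
# w'' - (r + s)*w' + r*s*w = 0.
s, C, D = -0.5, 1.1, 0.4

def w(t):
    return C * math.exp(s * t) + D * math.exp(r * t)

def w1(t):
    return C * s * math.exp(s * t) + D * r * math.exp(r * t)

def w2(t):
    return C * s * s * math.exp(s * t) + D * r * r * math.exp(r * t)

for t in [0.0, 0.7, 2.0]:
    assert abs(w2(t) - (r + s) * w1(t) + r * s * w(t)) < 1e-9
```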

The differences and similarities between the two cases are clearer when you leave A/(s – r)·exp(s·t) + B·exp(r·t) written as {A/(s – r)·exp[(s – r)·t] + B}·exp(r·t). Both solution forms {A/(s – r)·exp[(s – r)·t] + B}·exp(r·t) and (A·t + B)·exp(r·t) have the factor exp(r·t) in them, and the repeated-roots case replaces exp[(s – r)·t]/(s – r) with t. This makes sense if seen as a way of avoiding division by 0, but ultimately, it results from the fact that, in the repeated-roots case, a constant was being antidifferentiated, instead of an exponential. Another way to interpret this is via a strange limit argument. If we expand exp[(s – r)·t] as its Maclaurin series definition, then exp[(s – r)·t]/(s – r) = 1/(s – r) + t + (s – r)·t^2·f(t), where f(t) is the power series with terms (s – r)^n·t^n/(n + 2)! for every natural n, so f(0) = 1/2. The term 1/(s – r) is undefined if s – r = 0 and diverges as s – r → 0, but if one could somehow "regularize" this summand, so that it actually becomes 0 when s – r = 0, then the result would just be t, as expected, since then (s – r)·t^2·f(t) = 0. In fact, the integral of exp[(s – r)·t'] on the interval [0, t] with respect to t' is (exp[(s – r)·t] – 1)/(s – r), which does have limit t as s → r.
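That final limit claim is easy to see numerically (an editor's sketch with illustrative values):

```python
import math

# As u = s - r -> 0, the quotient (e^(u*t) - 1)/u approaches t,
# which is how t*e^(r*t) emerges as the limit of the distinct-roots form.
t = 1.25
for u in [1e-1, 1e-3, 1e-5]:
    q = (math.exp(u * t) - 1.0) / u
    # The leading error term is u*t^2/2, so it shrinks linearly in u.
    assert abs(q - t) < u * t * t
```

(For very small u, `math.expm1(u * t) / u` would avoid the cancellation in `exp(u*t) - 1`, though it is not needed at these magnitudes.)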

angelmendez-rivera

Greetings from Brazil. Your explanations are awesome. Thank you very much for your kindness in helping so many people around the world learn and review such important stuff.

olivioarmandocordeirojunio

Thank you so much for going further to prove exactly why the form of the particular integral is what it is. Really appreciate what you do, and you deserve so much more recognition. Thank you so much!!!!

warunilokuge

Your content is so unbelievably useful. Thanks for consistently providing us with free education. It's really helped me at university.

santi

Professor Bazett, thank you for explaining the different cases that are involved in Constant Coefficient Ordinary Differential Equations. The three cases are Real, Repeated and Complex Roots, which come from solving the characteristic equation.

georgesadler

Sir you are a GEM! You really helped me get rid of many many confusions. Thank you Sir. ❤

ferdowsalom

Amazing explanation sir! There's this idea I thought of to solve homogeneous equations with constant coefficients (for second order, that is; this can be extended to higher order for sure). It is a bit inefficient, but I've only learnt first-order ODEs, so this is really my first exposure to higher-order ODEs.

Say we have the differential equation
y'' + ay' + by = 0
what I did was substitute h(x) = (y' + Ay)
h'(x) = y'' + Ay'
Say our equation is h' + Bh = 0, i.e.
y'' + Ay' + B(y' + Ay) = y'' + (A+B)y' + ABy = 0

A+B = a, AB = b, which will take us to the same complex roots, real and distinct etc. cases.

h = c1 e^(-Bx)
y' + Ay = c1 e^(-Bx)
IF = e^(Ax)
d/dx (e^(Ax) y) = c1 e^((A-B)x)

e^(Ax) y = [c1/(A-B)] e^((A-B)x) + c2

y = [c1/(A-B)] e^(-Bx) + c2 e^(-Ax)

Since c1/(A-B) is just another arbitrary constant (taking A ≠ B here), relabel it:

y = c1 e^(-Bx) + c2 e^(-Ax), which gives the same output as assuming y = e^(rx) and solving for r. (Adding the constant of integration only once at each step keeps a stray additive constant from showing up at the end.)

The method of assuming e^(rx) as a solution and using a linear combination of solutions is a much quicker method for sure; this is just something I had to work around cuz we needed to show working and we only knew first-order ODEs at that point.
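The end result of this substitution method can be sanity-checked numerically (an editor's sketch with made-up coefficients, not part of the original comment):

```python
import math

# With A + B = a and A*B = b, the result above says
# y = c1*e^(-B*x) + c2*e^(-A*x) solves y'' + a*y' + b*y = 0.
A, B = 2.0, 5.0            # illustrative; gives a = 7, b = 10
a, b = A + B, A * B
c1, c2 = 1.3, -0.6

def y(x):
    return c1 * math.exp(-B * x) + c2 * math.exp(-A * x)

def y1(x):  # exact first derivative
    return -B * c1 * math.exp(-B * x) - A * c2 * math.exp(-A * x)

def y2(x):  # exact second derivative
    return B * B * c1 * math.exp(-B * x) + A * A * c2 * math.exp(-A * x)

for x in [0.0, 0.4, 1.0]:
    # Residual vanishes because B^2 - a*B + b = 0 and A^2 - a*A + b = 0.
    assert abs(y2(x) + a * y1(x) + b * y(x)) < 1e-9
```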

saath

For the second time, thank you for saving my upcoming engineering math exam next week!

branndn_

Excellent explanation! Superb work done.

JimKnoxAimbie

Some of the best content on mathematics 💯

imranbinazadsiyam

Love from India
Thank you for your efforts

kundan.rajput

Sir, please upload lectures on topology!
Your teaching method is awesome!

laibazahid

bro looked so hyped explaining the quadratic formula

Conceptual_Space

Teachers like him instill love for maths

saadhassan

Studying my first year of engineering, and the math is way too hard. But this helps so much, you don't even know.

isakbrevik

ty sir for all of your efforts, plz add PDE course too

helllv

Hmmm, I'm wondering: what's the motivation for arbitrarily having t·e^(rt) as the second solution in the case of a repeated root?
Is there perhaps a more direct approach?
Edit: Nevermind, just saw the brilliant comment explaining this all via a direct approach with the derivative as a linear operator, and the te^rt guess beautifully turns out to be very much analogous to the first order case when your inhomogeneous part solves the homogeneous equation.
Loving this series by the way - it really makes the course seem so much more approachable than it might look from afar.

fahrenheit