Real Analysis 33 | Some Continuous Functions



Please consider supporting me if this video was helpful, so that I can continue to produce more of them :)

🙏 Thanks to all supporters! They are mentioned in the credits of the video :)

This is my video series about Real Analysis. We talk about sequences, series, continuous functions, differentiable functions, and integrals. I hope that it will help everyone who wants to learn about these topics.


00:00 Intro
00:25 Exponential Function
03:27 Logarithm Function
05:00 Polynomials
05:33 Power Series
08:49 Credits

#RealAnalysis
#Mathematics
#Calculus
#LearnMath
#Integrals
#Derivatives
#Studying

I hope that this helps students, pupils and others. Have fun!

(This explanation fits lectures for students in their first and second year of study: Mathematics for physicists, Mathematics for the natural sciences, Mathematics for engineers, and so on.)

Comments

2:29 *_exp()_* is strictly monotonically increasing
“monotonically increasing” or “monotonically decreasing” is fair enough.
“strictly monotonic” is used when we either don’t know or don’t care whether the function is _increasing_ or _decreasing_ .
“strictly monotonically increasing” seems a bit mixed up to me.
I am used to “ *_strictly increasing_* ” or “ *_strictly decreasing_* ” functions (as a naming convention).
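
For reference, the two notions being contrasted can be written out as follows (standard definitions, not a quote from the video):

```latex
% Standard conventions, for f : D \to \mathbb{R} with D \subseteq \mathbb{R}:
\text{monotonically increasing (non-decreasing):}\quad
  \forall x, y \in D:\; x \le y \implies f(x) \le f(y)
\text{strictly increasing:}\quad
  \forall x, y \in D:\; x < y \implies f(x) < f(y)
```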

3:23 exp: *R* → (0, ♾) is bijective
Using the codomain to limit the domain seems a bit ‘ugly’ to me.
I might have preferred:
exp: *R⁺* → *R⁺* is bijective (where the *R* is supposed to be double struck)

😃

Leslie.Green_CEng_MIEE

The video talks about the equality exp(x) = e^x, where e = exp(1), but in this video series it has never been explained what b^x should be taken to be for an arbitrary real x and a given base b, which bases b result in b^x being well-defined for every x, or how this reconciles with the more basic definition of exponentiation. In a real analysis course this is important to talk about, because people tend to take the expression a^x for granted, and I highly doubt that most people actually know how a^x is defined in various contexts.

We know that for natural numbers n, a^n denotes the product of n copies of a, and when extending the definition to a^m for integers m, we simply say a^0 = 1 and a^(m + 1) = a·a^m, or a^(m + n) = a^m·a^n in general, for integers m, n. These definitions work for every real number a, except maybe 0 when the exponent is negative. But when q is a rational number, the definition of a^q becomes much less clear, and in many circumstances it is better to leave a^q defined only for real a > 0, or not to work with fractional exponents at all. Even more difficulty emerges when q is replaced by a real number x. In light of this, I think a video on this subject could be very fruitful for the series.

In many common constructions, a^x is defined as exp[ln(a)·x] with a > 0, while other constructions use a piggy-back approach: defining natural exponents, then integer exponents in terms of natural exponents, then rational exponents in terms of integer exponents, then real exponents in terms of rational exponents. These constructions come with problems, though, as they are unable to properly account for 0^x, or for things like (–1)^3, since (–1)^3 is not equal to exp[3·ln(–1)] (ln(–1) is undefined), and there is also no way of accounting for it by using suprema of sets. So the approach ends up being a very inelegant piecewise one that depends on whether a number is rational or not, or negative or not, which makes it rather inconvenient. I find that, in general, if you work with real or complex analysis, you want to avoid using expressions such as a^x, as they are not necessarily well-defined, and just stick to expressing things, whenever necessary and possible, in terms of exp.
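
To make the construction a^x := exp[ln(a)·x] for a > 0 concrete, here is a minimal Python sketch (the function names are mine, not from the video); it also shows where the exp/ln definition breaks down for negative bases:

```python
import math

def pow_int(a: float, n: int) -> float:
    """a^n for an integer n, via repeated multiplication (the 'basic' definition)."""
    if n < 0:
        return 1.0 / pow_int(a, -n)
    result = 1.0
    for _ in range(n):
        result *= a
    return result

def pow_real(a: float, x: float) -> float:
    """a^x for a real exponent x, defined as exp(x * ln a); only valid for a > 0."""
    if a <= 0:
        raise ValueError("exp(x * ln a) requires a > 0")
    return math.exp(x * math.log(a))

# For a > 0 the two definitions agree on integer exponents (up to rounding):
print(pow_int(2.0, 3), pow_real(2.0, 3.0))   # 8.0  ~8.0
# A negative base still works with the basic definition ...
print(pow_int(-1.0, 3))                      # -1.0
# ... but not with exp/ln, since ln(-1) is undefined over the reals:
# pow_real(-1.0, 3.0)  -> raises ValueError
```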

In this series, limits with x —> ♾ for functions R —> R have not been properly defined, although this is not much of an issue, since the definition can be made a straightforward generalization of the limit of a sequence. But for the purposes of this series in particular, I propose the definitions lim f(x) (x —> +♾) := lim f(1/x) (x > 0, x —> 0) and lim f(x) (x —> –♾) := lim f(1/x) (x < 0, x —> 0). In fact, for the purposes of notational elegance, I would also note that lim f(x) (x —> x0) = +♾ iff lim 1/f(x) (x —> x0) = 0 and f > 0, while lim f(x) (x —> x0) = –♾ iff lim 1/f(x) (x —> x0) = 0 and f < 0. This just makes limits with ♾ for functions R —> R special cases of limits of functions on open intervals, as previously defined in this series. I prefer this approach over the generalization, because the definition of limits of sequences does not mesh well with how limits of functions on open intervals are defined; even though one definition is related to the other, strictly speaking they are different analytic concepts, so I think it would be less confusing to keep them separate rather than introduce a blurred line between the two.
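
Written out as display formulas, the proposed definitions read (my transcription of the comment above):

```latex
\lim_{x \to +\infty} f(x) \;:=\; \lim_{\substack{x \to 0 \\ x > 0}} f\!\left(\tfrac{1}{x}\right),
\qquad
\lim_{x \to -\infty} f(x) \;:=\; \lim_{\substack{x \to 0 \\ x < 0}} f\!\left(\tfrac{1}{x}\right),

\lim_{x \to x_0} f(x) = +\infty \;:\Longleftrightarrow\; \lim_{x \to x_0} \frac{1}{f(x)} = 0 \ \text{and}\ f > 0,
\qquad
\lim_{x \to x_0} f(x) = -\infty \;:\Longleftrightarrow\; \lim_{x \to x_0} \frac{1}{f(x)} = 0 \ \text{and}\ f < 0.
```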

angelmendez-rivera

Thanks for the great video!
Is the convergence of a power series uniform convergence or pointwise convergence?

ahmedamr

Hello. Is there a way to prove that two functions f and g from [0, 1] -> [0, infinity) with sup f(x) = sup g(x) intersect at some point?

edztyMKWII

To prove that exp: R --> (0, oo) is bijective: injectivity holds because exp is strictly monotonically increasing, so if x > y then f(x) > f(y), hence f(x) != f(y). But what about surjectivity? Can I use the intermediate value theorem? I can take any a, b with b > a, then f: [a, b] --> (0, oo) is continuous, and by the theorem there is an x in [a, b] with f(x) = y; since a and b are arbitrary, the function is surjective?? Or do I just need to use that exp is continuous and monotonically increasing and try another explanation?
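
One way the intermediate value theorem argument can be made precise (a sketch, assuming the limits of exp at ±∞ stated in the series, not an answer taken from the video):

```latex
% Sketch of surjectivity of exp : \mathbb{R} \to (0, \infty), assuming
% \lim_{x \to -\infty} \exp(x) = 0 and \lim_{x \to +\infty} \exp(x) = +\infty.
\text{Let } y \in (0, \infty). \text{ By the two limits, choose } a < b
\text{ with } \exp(a) < y < \exp(b).
\text{Since } \exp \text{ is continuous on } [a, b], \text{ the intermediate value theorem}
\text{gives some } x \in [a, b] \text{ with } \exp(x) = y,
\text{ so } \exp \text{ is surjective onto } (0, \infty).
```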

MrWater