The Missing Operation | Epic Math Time

Why isn't exponentiation commutative? Or associative? Is exponentiation really the next operation after multiplication? Join me in this exploration of binary operations and ring theory.

#mathematics #math #ringtheory #abstractalgebra #epicmathtime

Special thanks to my Patreon supporters: Dru Vitale, RYAN KUEMPER, AlkanKondo89, John Patterson, Johann, Speedy, Zach Ager, Joseph Wofford, Rainey Lyons, Holden Higgins, Jaeger, and Mark Araujo-Levinson.

For your commenting convenience:
[Credit to r/math for compiling all of these in an organized format.]

Basic Math Symbols
≠ ± ∓ ÷ × ∙ – √ ‰ ⊗ ⊕ ⊖ ⊘ ⊙ ≤ ≥ ≦ ≧ ≨ ≩ ≺ ≻ ≼ ≽ ⊏ ⊐ ⊑ ⊒ ² ³ °

Geometry Symbols
∠ ∟ ° ≅ ~ ‖ ⟂ ⫛

Algebra Symbols
≡ ≜ ≈ ∝ ∞ ≪ ≫ ⌊⌋ ⌈⌉ ∘∏ ∐ ∑ ⋀ ⋁ ⋂ ⋃ ⨀ ⨁ ⨂ 𝖕 𝖖 𝖗 ⊲ ⊳

Set Theory Symbols
∅ ∖ ∁ ↦ ↣ ∩ ∪ ⊆ ⊂ ⊄ ⊊ ⊇ ⊃ ⊅ ⊋ ⊖ ∈ ∉ ∋ ∌ ℕ ℤ ℚ ℝ ℂ ℵ ℶ ℷ ℸ 𝓟

Logic Symbols
¬ ∨ ∧ ⊕ → ← ⇒ ⇐ ↔ ⇔ ∀ ∃ ∄ ∴ ∵ ⊤ ⊥ ⊢ ⊨ ⫤ ⊣

Calculus and Analysis Symbols
∫ ∬ ∭ ∮ ∯ ∰ ∇ ∆ δ ∂ ℱ ℒ ℓ

Greek Letters
𝛢𝛼 𝛣𝛽 𝛤𝛾 𝛥𝛿 𝛦𝜀𝜖 𝛧𝜁 𝛨𝜂 𝛩𝜃𝜗 𝛪𝜄 𝛫𝜅 𝛬𝜆 𝛭𝜇 𝛮𝜈 𝛯𝜉 𝛰𝜊 𝛱𝜋 𝛲𝜌 𝛴𝜎 𝛵𝜏 𝛶𝜐 𝛷𝜙𝜑 𝛸𝜒 𝛹𝜓 𝛺𝜔

Comments

Based on several of the comments, I was wondering about the following:

Let φ : (R, +) → (R₊, ×) be a group isomorphism. We can define a sort of φ-commutative exponentiation (↑_φ) by:
a(↑_φ)b = φ(φ⁻¹(a) × φ⁻¹(b)).
And this (↑_φ) will have the same relation to multiplication as multiplication does to addition.

Similarly, we can do a "φ-commutative-logarithm" (↓_φ) where we define
a(↓_φ)b = φ⁻¹(φ(a) + φ(b)).
And this "φ-commutative-logarithm" (↓_φ) will have the same relation to addition as addition does to multiplication.
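To make the definitions concrete, here is a small Python sketch (my own illustration, not from the comment), taking φ(x) = e^x so that φ⁻¹ = ln:

```python
import math

# One continuous choice of group isomorphism φ : (R, +) → (R₊, ×) is
# φ(x) = e^x; its inverse is the natural logarithm.
phi = math.exp
phi_inv = math.log

def up(a, b):
    """a(↑_φ)b = φ(φ⁻¹(a) × φ⁻¹(b)): relates to × as × relates to +."""
    return phi(phi_inv(a) * phi_inv(b))

def down(a, b):
    """a(↓_φ)b = φ⁻¹(φ(a) + φ(b)): relates to + as + relates to ×."""
    return phi_inv(phi(a) + phi(b))

# (↑_φ) distributes over ×, just as × distributes over +:
print(abs(up(2.0, 3.0 * 5.0) - up(2.0, 3.0) * up(2.0, 5.0)) < 1e-9)  # True
# The identity of (↑_φ) is φ(1) = e, just as φ(0) = 1 is the identity of ×:
print(abs(up(7.0, math.e) - 7.0) < 1e-9)  # True
```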

An interesting question to consider now is whether we can get an alternate multiplication operation. For example, what happens if we combine (↑_φ) and (↓_ψ) for two different isomorphisms φ and ψ?

Like, define (↓_ψ, ↑_φ)-multiplication as follows:
a(↓_ψ, ↑_φ)b = ψ⁻¹(ψ(a)(↑_φ)ψ(b)) = ψ⁻¹(φ(φ⁻¹ψ(a) × φ⁻¹ψ(b)))

For example, let's take φ(x) = 2^x and ψ(x) = e^x.

Then we get
a(↓_ψ, ↑_φ)b = ln(2^[lb(e^a) × lb(e^b)]) = [lb(e^a) × lb(e^b)] × ln(2) = a lb(e) × b lb(e) × ln(2) = a × b × ln(2) × lb(e)^2
(where lb is the "binary logarithm" - base two). We can then use the change of base formula to get:
a(↓_ψ, ↑_φ)b = (a × b)/ln(2)
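As a numerical sanity check of this computation (my own sketch, not from the comment):

```python
import math

def combined(a, b):
    """a(↓_ψ,↑_φ)b with φ(x) = 2^x and ψ(x) = e^x, i.e.
    ψ⁻¹(φ(φ⁻¹(ψ(a)) × φ⁻¹(ψ(b))))."""
    lb = math.log2  # binary logarithm, which is φ⁻¹ here
    return math.log(2.0 ** (lb(math.exp(a)) * lb(math.exp(b))))

a, b = 1.7, 4.2
# The derivation says this should equal (a × b)/ln(2):
print(abs(combined(a, b) - a * b / math.log(2)) < 1e-9)  # True
```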

This operation is clearly commutative. It is also associative!
[a(↓_ψ, ↑_φ)b](↓_ψ, ↑_φ)c
= ([a(↓_ψ, ↑_φ)b] × c)/ln(2)
= ((a × b)/ln(2) × c)/ln(2)
= (a × (b × c)/ln(2))/ln(2)
= (a × [b(↓_ψ, ↑_φ)c])/ln(2)
= a(↓_ψ, ↑_φ)[b(↓_ψ, ↑_φ)c]

Moreover, this operation has an identity element, ln(2):
a(↓_ψ, ↑_φ)ln(2) = (a × ln(2))/ln(2) = a

Moreover, every positive real number a has an inverse, (ln(2))^2/a:
a(↓_ψ, ↑_φ)[(ln(2))^2/a] = (a × (ln(2))^2/a)/ln(2) = (ln(2))^2/ln(2) = ln(2)

Now, an interesting question... is there a group isomorphism (R₊, ×) → (R₊, (↓_ψ, ↑_φ))?
Indeed, f(a) = a × ln(2) is one such map! (Is it unique?)
f(a × b) = a × b × ln(2) = (a × b × (ln(2))^2)/ln(2) = ([a × ln(2)] × [b × ln(2)])/ln(2) = (f(a) × f(b))/ln(2) = f(a) (↓_ψ, ↑_φ) f(b).
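The identity, the inverses, and the isomorphism f can all be checked numerically (my own sketch):

```python
import math

LN2 = math.log(2)

def star(a, b):
    """The (↓_ψ,↑_φ)-multiplication derived above: a ∗ b = (a × b)/ln(2)."""
    return a * b / LN2

a = 3.5
print(abs(star(a, LN2) - a) < 1e-12)           # ln(2) is the identity
print(abs(star(a, LN2**2 / a) - LN2) < 1e-12)  # ln(2)²/a inverts a
f = lambda x: x * LN2                          # candidate isomorphism (R₊,×) → (R₊,∗)
print(abs(f(a * 2.0) - star(f(a), f(2.0))) < 1e-12)  # f(ab) = f(a) ∗ f(b)
```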

This awkward multiplication doesn't just work on all positive real numbers: it makes sense for _all_ real numbers, and every nonzero real number has a multiplicative inverse. Moreover, it does distribute over addition! So we get an interesting new multiplication which relates to addition in the same way normal multiplication does.

Now what about commutative hyperoperations over some awkward multiplication like this? :P Do all of the (-∞)-order ones end up being the same operation? It's pretty clear that if we take the continuous exponential maps r^x for a positive real number r other than 1, we get something similar (multiplication of the standard product by an appropriate constant), but what about if we use the discontinuous ones guaranteed by the axiom of choice? Do these produce a legitimate "multiplication"?

So much to consider.

MuffinsAPlenty

Friendship ended with exponentiation, now powerlog is my best friend.

This is the kind of content I want to make. Is there an audience out there for it? I think there is. Help me find more of our fellow travelers of the abstract void by sharing this video wherever it may find its audience... Facebook groups, real life, your secretive group chat with your closest friends... we're building an army.


I've since learned that the operation(s) talked about in this video are called the commutative hyperoperations, first proposed by a mathematician(?) named Albert Bennett in 1915. Unfortunately, there is not very much information out there. However, due to the very "natural" feel of this progression, perhaps this operation is making unnamed appearances throughout mathematics?

EpicMathTime

Where did you go, Epic Math Time? You were the sexiest math channel on YouTube and you are sorely missed

se

So sad this man hasn't posted any videos in so long. His channel is a gold mine

quantumgaming

I miss this channel so much!


Where are you, sir? :(

aadhar

I thought the video was going to be about Knuth's up-arrow notation.

rentristandelacruz

You forgot to mention that the annihilator (1) of powerlog is the multiplicative identity in the same way that the annihilator of multiplication (0) is the additive identity.

davidpement

I believe the official base should be √2, so that:
2+2 = 4
2×2 = 4
2^log_√2(2) = 4
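A quick check of this suggestion (my own sketch; powerlog with base r means r^(log_r(a) · log_r(b)), which equals a^(log_r(b))):

```python
import math

def powerlog(a, b, base=math.sqrt(2)):
    """Powerlog with the given base: base^(log_base(a) * log_base(b))."""
    return base ** (math.log(a, base) * math.log(b, base))

print(2 + 2)                     # 4
print(2 * 2)                     # 4
print(round(powerlog(2, 2), 9))  # 4.0 — the pattern continues with base √2
```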

assiddiq

This was the first time I revisited this channel in a while and as someone who is trying to dabble in abstract algebra I was blown away.

sety

I remember you made this announcement on instagram. I left a comment saying something like "math is about heart, go where your heart is." Jon, if this is where your heart is please stay the fuck there because this is so good. More of this, please!

lilithgaither

Without checking possible invertibility issues, at first glance my thinking would be to write a^log(b) a bit more symmetrically as exp(log(a) · log(b)). The commutativity in a and b of this is inherited from the commutativity of multiplication. Since exp and log are inverses of each other, the associativity is also evident, as exp(log(a) · log(exp(log(b) · log(c)))) reduces to exp(log(a) · log(b) · log(c)). In fact this all holds true even if we consider f(m(f^{-1}(a), f^{-1}(b))) for generic invertible f and commutative/associative m. It also reminds me of log-semirings and tropical stuff. The appearance of the exp-map, mediating between addition and multiplication, can in statistical physics be explained by the fact that when considering subsystems, energy is additive (by the differential relations in the axioms), while probabilities of joint events are multiplicative (by Kolmogorov's laws, if you will.)

NikolajKuntner

Astonishing production values on this channel. Every video a masterpiece.

cycklist

My solution to your exercises, and some thoughts:

The tier-3 operation is given by: exp exp (ln ln (a) * ln ln (b)), where repeated functions are understood as nested. I also prefer to write tier-2 as exp (ln(a) * ln(b)), to reflect the symmetry of the operation better, as Nikolaj-K mentioned. The pattern then appears clear: just keep nesting exp's and ln's in the appropriate places.

The (-1)-tier operation is given by ln(exp(a) + exp(b)). The (-2)-tier operation is given by ln ln (exp exp(a) + exp exp(b)).

In general, the (-N)-tier operation is given by ln_N (exp_N(a) + exp_N(b)), where _N denotes an N-fold nested function. Note: this also applies to negative values of N, and we can use this to rederive the 2-tier, 3-tier, and other positive order operations. Here, we define exp_(-N) = ln_N.
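This general formula is easy to sketch in code (my own illustration; the indexing follows the comment, with addition as tier 0 and multiplication as tier 1):

```python
import math

def nest(f, n, x):
    """Apply f to x, n times (nest(f, 0, x) = x)."""
    for _ in range(n):
        x = f(x)
    return x

def tier(n, a, b):
    """n-tier commutative hyperoperation:
    tier 0 is +, tier 1 is ×, tier 2 is powerlog, tier -1 is ln(e^a + e^b).
    Positive tiers: exp_(n-1)(ln_(n-1)(a) * ln_(n-1)(b));
    tiers <= 0:     ln_(-n)(exp_(-n)(a) + exp_(-n)(b))."""
    if n >= 1:
        k = n - 1
        return nest(math.exp, k, nest(math.log, k, a) * nest(math.log, k, b))
    k = -n
    return nest(math.log, k, nest(math.exp, k, a) + nest(math.exp, k, b))

print(tier(1, 3.0, 5.0))             # 15.0 — multiplication
print(tier(0, 3.0, 5.0))             # 8.0  — addition
print(round(tier(-1, 0.0, 0.0), 6))  # 0.693147, i.e. ln(2)
```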

Exp and ln are raising and lowering operators between different orders of operations in the following sense: Exp takes N-order operations to (N+1)-order operations. ln takes (N+1)-order operations to N-order operations.

As noted in another comment, the negative order operations appear to tend towards max(a, b) as N --> infinity. I'm not really sure if the positive orders tend towards anything interesting.
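The max(a, b) limit can be seen numerically with a short sketch (my own; note that nesting exp more than twice overflows ordinary floats for these inputs):

```python
import math

def neg_tier(n, a, b):
    """The (-n)-tier operation ln_n(exp_n(a) + exp_n(b)), for n >= 1."""
    x, y = a, b
    for _ in range(n):
        x, y = math.exp(x), math.exp(y)
    s = x + y
    for _ in range(n):
        s = math.log(s)
    return s

a, b = 1.0, 2.0
print(neg_tier(1, a, b))  # ≈ 2.3133
print(neg_tier(2, a, b))  # ≈ 2.0013 — already close to max(a, b) = 2
```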

Fantastic video: I too wonder if this has any significance.

scottdow

I can't believe I only came across your channel today after years of following other math channels. Truly a hidden gem, I love the effects and keep up the great work!

MultiKB

Black Pen Red Pen shared this, it is very cool! Thanks for the video!

ShoeboxInAShoebox

Subscribed!

This is the kind of content I *need* to feed my mathematical curiosity! Just when I thought my love for maths was fizzling out, you came along and reminded me of why I love group theory so much! Thanks for the awesome video.

Keep uploading content like this and you will build a strong following. I'm definitely going to be with you on this journey!

Ghost____Rider

Oh, I found this when I was in high school! It was a lot of fun. Also, if you use log base sqrt(2) instead of the natural logarithm, you can continue the pattern of 2 * 2 = 4.

Wafflical

Since there are comments covering the limit of negative-tier operations, I wanted to look at the positive limit.
Let's say we are looking at the 3-tier operation: a⇡⇡b = exp(exp(ln(ln(a)) * ln(ln(b)))).
If a and b are both greater than 1, all is fine, but if, say, a<1, then A = ln(a) < 0, so ln(A) is undefined. Someone had the idea to take the complex logarithm, so let's take a look at log(A).
We get log(A) = ln|A| + i Arg(A) = ln|ln(a)| + i pi.
Seems alright, so let's continue: what is log(log(A))?
log(log(A)) = ln|log(A)| + i Arg(log(A)) = 1/2 ln(ln^2|A| + pi^2) + i atan2(pi, ln|A|).
This is getting unwieldy, so let's think about where on the complex plane we are. Since log(A) is on the horizontal line where Im(z) = pi, the argument of log(A) can be anything between 0 and pi, depending on the value of ln|A|. This means that the image covers the horizontal slice of the complex plane between the real axis and the line Im(z) = pi (excluding the origin). But this is fine, because taking another logarithm just lands us here again! (Note that I'm using a non-standard branch of log here, where I've cut out the negative imaginary half line instead of the negative real one.)
Alternatively, we could make sure that we don't have to take the ln of a negative number by restricting the real line further. For powerlog, we'd have to use the positive real line, for 3-tier we'd have to use (1, ∞), for 4-tier it would be (e, ∞), for 5-tier it would be (e^e, ∞), and so on, where the lower limit becomes a power stack of e's.

The problem with this alternative is that we get in trouble trying to determine the behaviour of the ∞-tier operation, since we have no real line left for the operation to act on. So I tried to work out the limit in the complex approach I explained above. However, I couldn't find any sort of limit; to me, it seems that repeatedly applying logarithms just jumps the number around, either randomly or in some sort of pattern without a limit, like an alternating series. Perhaps someone else can help out?

Next, I read that there would be some identity problem for the negative-tier operations. The identity of addition is 0, the identity of -1-tier is ln(0) = -∞, but what would be the identity of -2-tier? It should be ln(-∞), but that is undefined. So what if we use the complex logarithm again? Then log(-∞) = ∞ + i pi, which does indeed work. Repeating, we get the identity of -3-tier, namely log(∞ + i pi) = ln(∞^2 + pi^2) + i atan2(pi, ∞) = ∞ + i*0 = ∞. But this is where it breaks, because ∞ obviously doesn't work. Any ideas?

Finally, I read a comment where someone suggested that given a binary operation & and an invertible function f, we can transport the operation using f^-1(f(a) & f(b)).
In the case of converting addition to multiplication or the other way around, we are forced to use exp and ln, and so we arrive at powerlog and the other tiers. So I thought, what if we didn't necessarily want to convert between addition and multiplication, but instead wanted to avoid the messiness that comes with using exp and ln? What if, instead, we took any _bijective_ function f on R (or C), and looked at what happened?

Well, the identity function is a little boring, so let's look at polynomials. Really only x^n with n>0 odd comes to mind (which coincidentally includes the identity). Let's take n=3 as an example.
Transporting addition upwards becomes a⇡b = cbrt(a^3 + b^3), which is fine, but not much can be done with it. Multiplication becomes a⇡b = cbrt(a^3 * b^3) = a*b, so it stays the same. The same is true for transportation downwards, so polynomials are only really interesting for addition.
An interesting one may be 1/x, which is its own inverse on R*; but on second thought, we are just looking at x^n with n<0 odd, and the same applies: addition gets modified, multiplication does not.
Since polynomials and radicals work (for odd numbers), what about fractions as powers? Well, if we reduce the fraction fully, we are simply looking at the power of a radical, or the radical of a power. It seems logical then that fractions should only work if both numerator and denominator are odd, but I didn't work this out.
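A small sketch (my own) of this transport construction with f(x) = x^3, confirming that addition gets modified while multiplication stays put:

```python
import math

def transport(f, f_inv, op):
    """Transport a binary operation along a bijection: a ∘ b = f⁻¹(op(f(a), f(b)))."""
    return lambda a, b: f_inv(op(f(a), f(b)))

cube = lambda x: x ** 3
cbrt = lambda x: math.copysign(abs(x) ** (1 / 3), x)  # real cube root

add3 = transport(cube, cbrt, lambda x, y: x + y)
mul3 = transport(cube, cbrt, lambda x, y: x * y)

print(round(add3(1.0, 2.0), 4))           # 2.0801 — cbrt(1³ + 2³): addition changes
print(abs(mul3(3.0, 4.0) - 12.0) < 1e-9)  # True — multiplication stays a*b
```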

So what's a bijective function that modifies multiplication? Well, we've kinda run out of elementary functions at this point, so if anyone has an idea, do tell!

Edit: I just realised we can make sine invertible on [-1, 1] by setting f(x) = sin(pi/2 x), so that the inverse is f^-1(x) = 2/pi arcsin(x). This would at least yield a function that modifies both addition and multiplication on [-1, 1].
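Following up on the edit, here is a sketch (my own) of the sine transport on [-1, 1]:

```python
import math

f = lambda x: math.sin(math.pi / 2 * x)  # bijection [-1, 1] → [-1, 1]
f_inv = lambda x: 2 / math.pi * math.asin(x)

def t_add(a, b):
    """Transported addition; only defined when f(a) + f(b) lands in [-1, 1]."""
    return f_inv(f(a) + f(b))

def t_mul(a, b):
    """Transported multiplication; f(a) * f(b) always stays in [-1, 1]."""
    return f_inv(f(a) * f(b))

print(round(t_mul(0.5, 0.5), 6))  # 0.333333, not 0.25: multiplication is modified
print(round(t_add(0.3, 0.2), 2))  # 0.55, not 0.5: addition is modified too
```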

mranonymous

Man, I'm gonna take abstract algebra next semester but this is so interesting, thank you

michaelherrera

Now, is it possible to find an operation "below" addition? I was thinking ln(e^a + e^b), but identity elements aren't possible. Anyone have any ideas?

sirmixalot