A Finite Game of Infinite Rounds #SoME2

A short video about a random variable with no expected value. Made for the Summer of Maths Exposition 2.

0:00 Let's play a game
2:33 A better-behaved example
4:49 Working through the maths
7:08 Does the game always finish?
9:34 Discussion, and another example
11:04 A challenge problem

Correction: the sum at 4:32 should be P(1)+2P(2)+3P(3)+…

Some extra details I worked out after some discussion in the comments:

The Cauchy distribution has a cool property that intuitively explains why it behaves so weirdly: the probability distribution of the average of any number of independent Cauchy-distributed variables is the exact same Cauchy distribution! No matter how many runs you average, the distribution doesn't narrow; the Cauchy distribution falls outside the central limit theorem's hypotheses, since it has no finite mean or variance.
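This stability is easy to check empirically. Below is a minimal sketch (my own, not from the video) that draws standard Cauchy samples via the inverse-CDF formula tan(π(U − 1/2)) and compares the interquartile range of single samples against averages of 100 samples; for a distribution obeying the central limit theorem, averaging 100 samples would shrink the spread by a factor of 10, but for Cauchy it stays the same:

```python
import math
import random
import statistics

def cauchy_sample(rng):
    # Inverse-CDF sampling: if U ~ Uniform(0, 1), then tan(pi * (U - 1/2))
    # follows the standard Cauchy distribution.
    return math.tan(math.pi * (rng.random() - 0.5))

def iqr(data):
    # Interquartile range; for a standard Cauchy the true value is 2
    # (quartiles at -1 and +1). Quartiles are robust, so they converge
    # even though the mean does not.
    q1, _, q3 = statistics.quantiles(data, n=4)
    return q3 - q1

rng = random.Random(0)
singles = [cauchy_sample(rng) for _ in range(20_000)]
averages = [statistics.fmean(cauchy_sample(rng) for _ in range(100))
            for _ in range(20_000)]

print(f"IQR of single samples:      {iqr(singles):.2f}")  # close to 2
print(f"IQR of 100-sample averages: {iqr(averages):.2f}")  # still close to 2
```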

Why do the simulations at the start of the video seem to go up logarithmically? Say you run N games and average the results. The mean is infinite, but we can shoddily get around this by assuming no game runs for K rounds or longer; the probability that all N games finish within K rounds is (1-1/K)^N. Intuitively K should grow with N, so fix this probability to some chosen threshold p (e.g. p=0.05) and rearrange to get K=1/(1-p^(1/N)). We can now calculate the expected value summing only up to the K-th term, and after approximating the harmonic numbers with a logarithm and using a small Taylor expansion I got an expected value of ln(N) - ln(-ln(p)) + γ - 1. So yes, by this admittedly weird metric, the average is logarithmic in the number of games! However, I should mention that I actually rerecorded the programs a couple of times before I got results that fit the flow of the video; in reality the average tends to jump around a lot more.
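The slow growth is easy to see in a simulation. Here is a minimal sketch (a reconstruction, not the video's exact code), assuming the game works like this: on round n the bag holds n+1 balls, exactly one of which ends the game, so P(the game lasts exactly n rounds) = 1/(n(n+1)):

```python
import random

def play_game(rng):
    # One game: on round n there are n + 1 balls in the bag, one of them
    # special; drawing it (probability 1/(n+1)) ends the game. This gives
    # P(game lasts exactly n rounds) = 1/(n(n+1)), whose expected value
    # is the divergent sum 1/2 + 1/3 + 1/4 + ...
    n = 1
    while rng.randrange(n + 1) != 0:
        n += 1
    return n

rng = random.Random(42)
total = 0
for games in range(1, 10_001):
    total += play_game(rng)
    if games in (10, 100, 1_000, 10_000):
        print(f"average after {games:>6} games: {total / games:.2f}")
```

The running average grows roughly like ln(games), but individual runs vary wildly, because a single very long game can dominate the whole sum.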
Comments

You take the blue cotton ball, the story ends... You wake up in your bed and believe... whatever you want to believe. You take the red cotton ball, you stay in wonderland, and burn your processor.

cocoscacao

In Magic: the Gathering, it's fairly common that combining a number of card effects allows you to repeat a process an unlimited number of times. Because of this, the Magic Comprehensive Rules - a 200+ page document, rigorously defining every piece of the game and how they interact - define a process for shortcutting loops. Essentially, the player demonstrates the loop once, then says something like "I repeat that a billion times." Their opponent, then, has the opportunity to accept the shortcut as-is, or interrupt the loop at some point with their own effect.
The loop shortcut rules also provide for naming a desired end state instead. For example, if one loop gives you one surplus mana, and another loop lets you spend five mana to create three 1/1 tokens, you could just say, "I repeat these two loops until I have a billion tokens, " without having to do any further calculation. However, to be able to do this, it must be clear that your desired end state will definitely happen in some finite number of attempts. In particular, no part of the loop can be random. This restriction isn't without controversy, and there are known card combinations that would be playable in decks if you were allowed to shortcut random processes with a tail probability of 0.
Seeing examples like this really helps cement why the Magic rules team made that decision. Infinities are weird; play with them at your own peril.

curtmack

I was pretty sure from very early in the video that you were headed toward the infinite sum 1 + 1/2 + 1/3 + 1/4 + 1/5 + . . . (or something very similar), which diverges to infinity, but which goes to infinity very, very slowly. But even though I knew pretty much where you were going, the journey was very entertaining and informative. (And the journey went on a bit further too, which was cool.)

PaulH

Everything about this video is great, but the sound design is amazing! I don't think I've seen this done in math/STEM videos before. The sound really emphasized the weirdness. It gave me a similar vibe to the game Braid.

joseville

Makes me appreciate Minimize and Double Team not allowing for infinite evasion boosts.

SmallerRidley

1 round of this game, 9 rounds of thumb twiddling, 1 round of this game, 89 rounds of thumb twiddling, 1 round of this game, 899 rounds of thumb twiddling. So the rounds where you do anything are rounds 1, 10, 100, ..., and on those rounds you draw the red ball or blue ball. Infinite expected value of digits.

donaldhobson

An excellent explanation, and a deep problem. It is somewhat similar to the St. Petersburg paradox of Bernoulli, where you play a game that also has an infinite expected payout. Yet, due to the ever-decreasing probability of a large payout, no one would be willing to pay a large sum of money to play the game. In economics, this was traditionally used as an argument against simply computing expected values and in favour of expected utility theory, which has problems of its own.

mathscharts

This is my favourite of the SoME2 videos I've seen so far. When the Python program showed the expected number of rounds increasing, I immediately thought: oh, it's got to be 1/2+1/3+1/4...! Even though I didn't know exactly how the maths would work out. It's the same enjoyment as when you're reading an Agatha Christie and get an idea of who the killer is, even though you can't say precisely how they did it. Great job!!

erint

Here’s a way to do the challenge:

The probability of the ball being found in n tries or fewer is n/(n+1), and you can subtract out the probabilities from before. Digit-wise you're interested in the cases n = 9, 99, 999, etc. So you get:
9/10 + 2*(99/100 - 9/10) + 3*(999/1000 - 99/100) + …
This simplifies to 9/10 + 2*9/100 + 3*9/1000 + …, i.e. the sum from n=1 to infinity of 9n/10^n. Factor out the 9 to get 9 Σ n/10^n, and rewrite it as 9*(1/10)*Σ[n*(1/10)^(n-1)]. Inside the summation is d[x^n]/dx with x=1/10, so you can rearrange the expression (summing n from 0 to infinity):
(9/10)*d[Σ(x^n)]/dx evaluated at x=1/10. The summation is a geometric series that converges (since x<1) to 1/(1-x); taking the derivative gives 1/(1-x)^2. Substituting back in, you get 9/10 * [1/(1-1/10)^2] = 9/10 * 100/81 = 10/9, or 1.1111…
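The closed form is easy to verify numerically; a minimal sketch:

```python
# Partial sum of the series 9 * sum(n / 10**n). The terms decay
# geometrically, so a few hundred terms reach machine precision.
expected_digits = sum(9 * n / 10**n for n in range(1, 300))
print(expected_digits)  # ≈ 10/9 = 1.111...
```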

timolson

Challenge question solution:
The number of (base-10) digits in the number n is 1 + floor(log10(n)); the probability stays the same, obviously. First let's show that the sum converges:
1. The terms in the sum are positive, so it's enough to show that a sum of larger terms converges.
2. [1 + floor(log(n))] / [n(n+1)] <= [1 + log(n)] / [n²+n] = 1/[n(n+1)] + log(n)/[n(n+1)] < 1/n² + log(n)/n² < 1/n² + 1/n^(3/2)
3. Both parts are convergent p-series, so this converges.
I skipped over the log(n) < sqrt(n) proof and the p-series convergence proof, but I think that should be fine?

Now I don't think the sum from n=1 to inf of [1 + floor(log(n))] / [n(n+1)] can be solved analytically — floor is just really annoying for stuff like that. But summing 1e9 terms I get ~1.1111

EDIT:
Found an exact analytical solution. The trick was to turn it into the double sum Sum_{N=1}^{Infinity} Sum_{n=10^(N-1)}^{10^N - 1} N * 1/(n(n+1)), which eliminates the ugly floor(log(n)).

1. Let N denote the number of digits and n a specific number with n digits. Sum bounds are 1 to infinity for N and 10^(N-1) to 10^N -1 for n.
E = Sum_N N * Sum_n 1/(n(n+1))
2. use the telescoping property
E = Sum_N N * ( 1/10^(N-1) - 1/(10^N) )
3. split into two sums
E1 = Sum_N N / 10^(N-1) and E2 = Sum_N N / 10^N
4. These are just decimal expansions! and even neater, E1 = 10*E2. The decimal expansion for E1 reads 1.23456790123... which is just 100/81.
5. 100/81 - 10/81 = 90/81 = 10/9 = 1.1111... (alternatively without going to fractions, it's just 1.2345... - 0.1234... = 1.1111...)
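The analytical answer also agrees with a brute-force partial sum; a minimal sketch:

```python
# Brute-force partial sum of E = sum over n of digits(n) / (n(n+1)).
# len(str(n)) counts base-10 digits exactly, sidestepping the
# floating-point pitfalls of floor(log10(n)).
partial = sum(len(str(n)) / (n * (n + 1)) for n in range(1, 1_000_001))
print(f"{partial:.4f}")  # ≈ 1.1111, approaching 10/9 from below
```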

chalkchalkson

In Python, printing the output every round is very, very slow. Simply removing the print call for each round makes the runtime nearly instant.

SoaringMoon

I just love how the music is matched to the video

norbertvalterkalocsai

Sup dude! Never mind the video, which was excellent — the audio (quality, voiceover, and sounds/music) was one of the best I've personally heard. Great all around! Congrats.

Iamg

I get a surprising amount of joy from the harmonic series and the weird, unintuitive results that derive from it

JacksonBockus

Amazing video! I watched a ton of SoME2 videos during the exposition, but now I'm going back and watching the ones I didn't get to see, and I really love all of them!

smorcrux

I love this! You kept the video interesting throughout. Would love to see more from you!

danielfernandes

The funny and unique thing with Cauchy is that the mean of N samples has *the same distribution* as a single sample!

landsgevaer

I thought back to this video because I've been playing a game where dice have different effects on their sides, like attacking enemies, blocking damage, and so on. One of the rare effects is "gain N rerolls", and I was trying to calculate the probability of stuff when that's involved, which is tricky! If one of the sides is +1 reroll then it's basically like there are just 5 sides left, but when N gets higher it's hard to figure out the exact stats... but then again, that usually means you're clearly winning anyway. 😄 When the total N across all sides is exactly 6, I don't think there's an expected value, but it's not infinite (unless it's exactly 1 per side) — though I'm not quite sure; it's close to asking how long a random walk takes to go negative. Total N > 6 is easy though: there's a good chance of going infinite!

AzureLazuline

I had a teacher explain to me once that if "math is the language of the universe", then, like spoken language, there are things for which we have no words, as well as things whose words do not convey the full meaning. When facing a problem that doesn't make any reasonable sense, like a problem that's simultaneously finite and infinite, you've hit that limitation of mathematical "language". It's the kind of problem that led to the development of mathematical concepts we take for granted today, from irrational numbers to calculus to the very concept of zero. Or, more relevant to this video, infinity.

rhettorical

I don't have time to work it out right now, but regarding the question at the end: I get the feeling that the expected number of digits converges because the digit count is the logarithm of the number of rounds, and the series 1/2 + 1/3 + 1/4 + 1/5 + ... diverges at a logarithmic rate, so applying the logarithm cancels the growth, making it a constant.

kikones