'A Random Variable is NOT Random and NOT a Variable'

What is a random variable? Why do some people say "it's not random and it's not a variable"?
What is "expected value"? What is the difference between a random variable and a probability distribution? We work an example that can be done two ways: either with probability distributions or with pure random variables.

Chapters:
0:00 Are random variables random?
0:55 Example sum of two dice
2:40 A random variable is a collection of events
5:52 A random variable is a FUNCTION
8:49 Level sets of the function are events
10:12 How to use it as a variable
11:50 Definition of Expected Value
13:30 Linearity of Expectation
14:30 Probability Distribution vs A Random Variable
18:10 Two different formulas for the expected value
21:44 Expected value of binomial random variable example with two solutions
23:00 Solution 1 Probability Distribution Solution
25:34 Solution 2 Random Variables Only Solution
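
As a companion to the chapters above, here is a minimal sketch in Python (the two-dice example and both expected-value formulas are from the video; the code itself is an illustration, not the video's method). It computes E[X] once by summing X(w)P(w) over the whole sample space and once by summing z * P(X = z) over the distribution of X:

    from fractions import Fraction
    from itertools import product
    from collections import defaultdict

    # Sample space: all 36 ordered outcomes of rolling two fair dice.
    omega = list(product(range(1, 7), repeat=2))
    P = Fraction(1, 36)  # every outcome is equally likely

    # The random variable X is just a function on the sample space.
    def X(w):
        return w[0] + w[1]

    # Formula 1: E[X] = sum over outcomes w of X(w) * P(w).
    e1 = sum(X(w) * P for w in omega)

    # Formula 2: E[X] = sum over values z of z * P(X = z),
    # where P(X = z) is read off the probability distribution of X.
    dist = defaultdict(Fraction)
    for w in omega:
        dist[X(w)] += P
    e2 = sum(z * p for z, p in dist.items())

    print(e1, e2)  # both print 7

Both formulas agree, which is exactly the point of the 18:10 chapter.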
Comments

Student: What's a Dr. Pepper?

Me: Well for starters it is not a doctor and it's not a pepper.

coreyyanofsky

In physics we have: "What is particle spin?"

"Well imagine a ball is spinning, except it's not a ball and it's not spinning"

Bodge

I study maths but have always avoided statistics; this video actually got me interested in learning more.

Chris-ywis

Incredible explanation of random variables. I just got a job as a data scientist and I am trying to deepen my knowledge in probability and statistics, and this really throws some light into the darkness.
Super useful, cheers!

AlejandroMeloMaths

This was such a lovely explanation for a topic that so often confuses students!

DrTrefor

This was the clearest explanation of random variables and their relation to probability distributions I’ve had in 35 years of being around probability and statistics. Very nice job. I hope to see more explanations like this, I’m subscribing!

MarkMadsenSJI

What a wonderfully clear and precise presentation! It cleared up several doubts, especially the distinction between a probability mass function and a random variable.
Would be interested to see an example of a more complex problem that’s more easily solved through random variables than by probability mass function.
(Incidentally - is the converse ever true, i.e. is there a problem that’s more easily solved using probability mass function rather than the random variable? Assuming that the random variable is known.)

ilikehandsprings

This was one of the first things in stats that needed clarification. A random variable is a FUNCTION. It's a function from the sample space to the set of real numbers. Its job is to assign "numerical values" to the elements of the sample space (say, 0 to a tail and 1 to a head in a single coin toss).

tiyassahu
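
A minimal sketch of the comment above in Python, using the same coin-toss encoding (0 for tails, 1 for heads); the names here are illustrative, not from the video:

    # Sample space for a single coin toss.
    omega = ["H", "T"]

    # The random variable: an ordinary, deterministic function
    # assigning a numerical value to each outcome.
    def X(outcome):
        return 1 if outcome == "H" else 0

    # Nothing random has happened yet: X is the same function every time.
    assert X("H") == 1 and X("T") == 0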

“Is a function a variable? Not really.”

The lambda calculus: bet.

Studio_salesmen

The best explanation I’ve come across so far

Interstellar

1:05 Isn't a probability space the triple (omega, sigma, P), with sigma being the sigma-field and P being the measure? I've always heard omega called the fundamental set.

adam_jri

Thank you. This was truly the one problem that bugged me when I was studying probability.

quantumgaming

Thank you for the title of the video. Finally someone spoke THE TRUTH.

PoolWaterPiano

Yes, that's one of the most insane things in most statistics textbooks (they also confuse models with fitted models, samples with groups, etc.).

IMO, the only way one can understand statistics is through programming, because you can see how these abstract concepts map to operations on real data (and there are a few textbooks that do this).

Daniel_Zhu_af

This is a great video! I love how your explanation also captures the measure theory underlying probability. I would love to hear how you conceptualise conditional expectations on measure spaces :D (I'm currently struggling with the intuition there 😆)

cowsome

A "random variable" X conceptually means to extract partial)information or translate information of an event space that you are interessted in modelling to another hopefully more simple/useful event space (often IR). The informations distribution in the simplified event space is determined by the distribution on the original event space as well as X. That's also why measureability as a requirement makes sense. This seemlessly extends to Operators we call "expection": They coarsen the "resolution" of the random variable depending on how much information you assume to be known (i.e. your a priori knowledge about which events will happen), e.g. the usual expectation value we know is a constant function but with full information it would be the random variable X itself. The expectation (given sub information modelled by a sub sigma algebra) operator is optimal in the sense as it gives the best prognosis of how X behaves on the random event space as possible - it is an optimal approximation of X based on the a priori given knowledge of events and in fact its a measureable function too! Another useful thing is thinking about distributions as analogues to laws of motion in physics and this is even less strange while thinking about QM.

I will now watch the video and see how you explain things.

IsomerSoma
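
A minimal sketch of the "coarsening" idea in the comment above, assuming a finite sample space with the uniform measure (the two-dice setup is borrowed from the video; conditioning on a function Y here stands in for conditioning on the sub sigma-algebra Y generates):

    from fractions import Fraction

    # Finite sample space: two fair dice, uniform measure.
    omega = [(a, b) for a in range(1, 7) for b in range(1, 7)]
    X = lambda w: w[0] + w[1]   # the random variable of interest
    Y = lambda w: w[0]          # the a priori known information: the first roll

    # E[X | Y] averages X over each level set of Y, so it is constant on
    # each block of the partition Y generates -- a coarsened copy of X,
    # and itself a function on the sample space.
    def cond_exp_X_given_Y(w):
        block = [v for v in omega if Y(v) == Y(w)]
        return Fraction(sum(X(v) for v in block), len(block))

    # With no information the coarsening is total: the constant E[X] = 7.
    # Knowing the first roll is 4, the best prognosis is 4 + 3.5.
    print(cond_exp_X_given_Y((4, 1)))  # prints 15/2

With full information (conditioning on the partition into single outcomes) the same recipe returns X itself, matching the comment.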

To solve the exercise you mentioned at 21:00, first note that the nonzero contribution to the sum is over the image of Z, which is discrete by assumption (otherwise you would need an integral). Then (in the language of LaTeX) you have $\sum_{\omega \in \Omega} Z(\omega) \mathbb{P}(\omega) = \sum_{ z \in Im(Z) } \left( \sum_{\omega \in Z^{-1}(z)} Z(\omega) \mathbb{P}(\omega) \right) = \sum_{ z \in Im(Z) } z \left( \sum_{\omega \in Z^{-1}(z)} \mathbb{P}(\omega) \right)$ as $Z(\omega) = z$ is constant over the inner sum, and then using the additivity axiom of a probability measure we have $= \sum_{ z \in Im(Z) } z \, \mathbb{P}\left( \bigcup_{\omega \in Z^{-1}(z)} \{\omega\} \right) = \sum_{ z \in Im(Z) } z \, \mathbb{P}(Z = z)$.

Also, to show the equality mentioned at 25:19, proceed as follows: first, the $k=0$ term of the sum is zero. Then we have $\sum_{k=1}^n k {n\choose k}\frac{1}{2^n} = \sum_{k=1}^n \frac{n!}{(k-1)!\,(n-k)!}\frac{1}{2^n}$ after unpacking the n choose k and then canceling the factors of $k$ in the numerator and denominator. Pull out an $n$ from the $n!$ to get $= \sum_{k=1}^n n {n-1 \choose k-1} \frac{1}{2^n} = \frac{n}{2^n} \sum_{k=1}^n {n-1 \choose k-1}$, where terms which don't depend on $k$ have been pulled out as they are constant with respect to the summation index $k$. Finally, the binomial formula $\sum_{k=1}^n {n-1 \choose k-1} = 2^{n-1}$ yields the desired result $n/2$. Some mention of these details would have been nice to see in the video.

Note: If anyone reading this comment is not familiar with LaTeX, a good place to start is Overleaf.com. You will need to import some math packages: copy and paste \usepackage{amsmath, amssymb, amsfonts} at the top of your document, and the LaTeX code above should compile.

oceansofmath
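
For readers who would rather check the algebra above numerically, here is a small Python sketch (using math.comb; an illustration, not part of the original comment) verifying $\sum_{k=0}^n k {n\choose k} \frac{1}{2^n} = \frac{n}{2}$ for small n:

    from fractions import Fraction
    from math import comb

    for n in range(1, 11):
        # Expected value of Binomial(n, 1/2) via its probability distribution.
        e = sum(k * Fraction(comb(n, k), 2**n) for k in range(n + 1))
        assert e == Fraction(n, 2)

    print("E = n/2 confirmed for n = 1..10")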

Student: What's a X Y?
Me: Well for starters it's not an X and it's not a Y.

Substitute X Y for: Random Variable, Big Bang, Halting Problem, Neural Network, Elliptic Curve, Fundamental (theorem of) Algebra.

Naming things is hard...

ruroruro

Holy hell, you made me realize why some probability classes touch on measure theory: a random variable is a measurable function, and its distribution is a measure.

dogmaticka

The way I think of it is: it's just a function that maps outcomes (elements of the sample space) to the real number line.
The occurrence of those outcomes can itself be random, and thus the output of the function as well. Hence the "random" in random variable.

Arkvis
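
A minimal sketch of that separation in Python (illustrative names only): the function X below never changes; the only randomness is in which outcome gets drawn.

    import random

    # Sample space and the (deterministic) random variable.
    omega = [(a, b) for a in range(1, 7) for b in range(1, 7)]
    X = lambda w: w[0] + w[1]  # not random, not a variable: a fixed function

    # All the randomness lives in drawing the outcome, not in X.
    w = random.choice(omega)
    print(w, "->", X(w))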