=== Distribution functions ===
If a random variable ''X'':Ω->'''R''' defined on the probability space (Ω, ''P'') is given, we can ask questions like "How likely is it that the value of ''X'' is bigger than 2?". This is the same as the probability of the event {''s'' in Ω : ''X''(''s'') > 2}, which is often written as
:<math>\operatorname{P}(X > 2).</math>
Recording all these probabilities of output ranges of a real-valued random variable ''X'' yields the [[probability distribution]] of ''X''. The probability distribution "forgets" about the particular probability space used to define ''X'' and only records the probabilities of various values of ''X''. Such a probability distribution can always be captured by its [[cumulative distribution function]]
:<math>F_X(x) = \operatorname{P}(X < x)</math>
and sometimes also using a [[probability density function]]. In [[measure theory|measure-theoretic]] terms, we use the random variable ''X'' to "push-forward" the measure ''P'' on Ω to a measure d''F'' on '''R'''.
The underlying probability space Ω is a technical device used to guarantee the existence of random variables, and sometimes to construct them. In practice, one often disposes of the space Ω altogether and just puts a measure on '''R''' that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables.
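For instance, if ''X'' is the number of heads obtained in a single toss of a fair coin, so that P(''X'' = 0) = P(''X'' = 1) = 1/2, then the cumulative distribution function defined above is
:<math>F_X(x) = \operatorname{P}(X < x) = \begin{cases} 0 & \mbox{if } x \le 0, \\ 1/2 & \mbox{if } 0 < x \le 1, \\ 1 & \mbox{if } x > 1, \end{cases}</math>
and the corresponding measure d''F'' on '''R''' assigns mass 1/2 to each of the points 0 and 1.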
If we have a random variable ''X'' on Ω and a [[measurable function]] ''f'':'''R'''->'''R''', then ''Y''=''f''(''X'') will also be a random variable on Ω, since the composition of measurable functions is measurable. The same procedure that allowed one to go from a probability space (Ω,P) to ('''R''',dF<sub>''X''</sub>) can be used to obtain the probability distribution of ''Y''.
The cumulative distribution function of ''Y'' is
:<math>F_Y(y) = \operatorname{P}(f(X) < y).</math>
==== Example ====
Let ''X'' be a real-valued random variable and let ''Y'' = ''X''<sup>2</sup>. Then,
:<math>F_Y(y) = \operatorname{P}(X^2 < y).</math>
If ''y'' < 0, then the event ''X''<sup>2</sup> < ''y'' is impossible, so
:<math>F_Y(y) = 0\qquad\hbox{if}\quad y < 0.</math>
If ''y'' ≥ 0, then
:<math>\operatorname{P}(X^2 < y) = \operatorname{P}(|X| < \sqrt{y}) = \operatorname{P}(-\sqrt{y} < X < \sqrt{y}),</math>
so
:<math>F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y})\qquad\hbox{if}\quad y \ge 0.</math>
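If, in addition, ''X'' has a [[probability density function]] ''f<sub>X</sub>'', then differentiating this expression gives the density of ''Y'',
:<math>f_Y(y) = \frac{f_X(\sqrt{y}) + f_X(-\sqrt{y})}{2\sqrt{y}}\qquad\hbox{if}\quad y > 0,</math>
and ''f<sub>Y</sub>''(''y'') = 0 for ''y'' < 0.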
=== Moments ===
The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of the [[expected value]] of a random variable, denoted E[''X'']. Note that in general, E[''f''(''X'')] is '''not''' the same as ''f''(E[''X'']). Once the "average value" is known, one could then ask how far from this average value the values of ''X'' typically are, a question that is answered by the [[variance]] and [[standard deviation]] of a random variable.
Mathematically, this is known as the (generalised) [[problem of moments]]: for a given class of random variables ''X'', find a collection {''f<sub>i</sub>''} of functions such that the expectation values E[''f<sub>i</sub>''(''X'')] fully characterize the distribution of the random variable ''X''.
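For a real-valued random variable, a natural choice is ''f<sub>i</sub>''(''x'') = ''x<sup>i</sup>'', which gives the ordinary moments E[''X<sup>i</sup>'']; the expected value is then the first moment, and the variance is determined by the first two moments,
:<math>\operatorname{Var}(X) = \operatorname{E}\left[(X - \operatorname{E}[X])^2\right] = \operatorname{E}[X^2] - \operatorname{E}[X]^2.</math>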
=== Equivalence of random variables ===
There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, equal in mean, or equal in distribution.
In increasing order of strength, the precise definitions of these notions of equivalence are given below.
==== Equality in distribution ====
Two random variables ''X'' and ''Y'' are ''equal in distribution'' if
:<math>\operatorname{P}(X \le x) = \operatorname{P}(Y \le x)\quad\hbox{for all}\quad x.</math>
To be equal in distribution, random variables need not be defined on the same probability space, but without loss of generality they can be made into independent random variables on the same probability space. The notion of equivalence in distribution is associated to the following notion of distance between probability distributions,
:<math>d(X,Y)=\sup_x|\operatorname{P}(X \le x) - \operatorname{P}(Y \le x)|,</math>
which is the basis of the [[Kolmogorov-Smirnov test]].
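For example, if a fair coin is tossed once and ''X'' equals 1 for heads and 0 for tails, while ''Y'' = 1 − ''X'', then ''X'' and ''Y'' have the same distribution even though they never take the same value:
:<math>\operatorname{P}(X \le x) = \operatorname{P}(Y \le x)\quad\hbox{for all}\quad x,\qquad\hbox{but}\qquad\operatorname{P}(X = Y) = 0.</math>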
==== Equality in mean ====
Two random variables ''X'' and ''Y'' are ''equal in p-th mean'' if the ''p''th moment of |''X'' − ''Y''| is zero, that is,
:<math>\operatorname{E}(|X - Y|^p) = 0.</math>
Equality in ''p''th mean implies equality in ''q''th mean for all ''q''<''p''. As in the previous case, there is a related distance between the random variables, namely
:<math>d_p(X, Y) = \operatorname{E}(|X - Y|^p)^{1/p}.</math>
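The implication between the different means can be seen, for instance, from [[Jensen's inequality]] applied to the convex function ''t'' → ''t''<sup>''p''/''q''</sup>, which gives
:<math>\operatorname{E}(|X - Y|^q)^{1/q} \le \operatorname{E}(|X - Y|^p)^{1/p}\qquad\hbox{for}\quad 0 < q < p,</math>
so that ''d<sub>p</sub>''(''X'', ''Y'') = 0 forces ''d<sub>q</sub>''(''X'', ''Y'') = 0.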
==== Almost sure equality ====
Two random variables ''X'' and ''Y'' are ''equal almost surely'' if, and only if, the probability that they are different is zero:
:<math>\operatorname{P}(X \neq Y) = 0.</math>
For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance:
:<math>d_\infty(X,Y)=\sup_\omega|X(\omega)-Y(\omega)|,</math>
where 'sup' in this case represents the [[essential supremum]] in the sense of [[measure theory]].
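For example, if Ω is the unit interval [0, 1] with Lebesgue measure, ''X''(ω) = ω, and ''Y'' agrees with ''X'' everywhere except that ''Y''(1/2) = 0, then
:<math>\operatorname{P}(X \neq Y) = \operatorname{P}(\{1/2\}) = 0,</math>
so ''X'' and ''Y'' are equal almost surely, although they are not equal as functions on Ω.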
Finally, two random variables ''X'' and ''Y'' are ''equal'' if they are equal as functions on their probability space, that is,
:<math>X(\omega)=Y(\omega)\qquad\hbox{for all}\quad\omega.</math>
=== Examples ===
The following are examples of random integers ''i'', 1 ≤ ''i'' ≤ 100:
17 12 17 89 64