Random variable

We can think of a '''random variable''' as the numeric result of operating a non-deterministic mechanism or performing a non-deterministic experiment to generate a [[random]] result. For example, rolling a die and recording the outcome yields a random variable with range {1,2,3,4,5,6}. Picking a random person and measuring their height yields another random variable.
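
As a purely illustrative sketch (assuming a fair die; the name <code>roll_die</code> is just a placeholder chosen here, not a standard function), the die-rolling example can be simulated in Python:

 import random
 
 def roll_die():
     # One realisation of the random variable "outcome of a die roll";
     # its range is {1, 2, 3, 4, 5, 6}.
     return random.randint(1, 6)
 
 # Each call produces a new realisation of the same random variable.
 samples = [roll_die() for _ in range(10)]
 print(samples)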
 
Mathematically, a random variable is defined as a [[measurable function]] from a [[probability space]] to some [[measurable space]]. This measurable space is the space of possible values of the variable, and it is usually taken to be the [[real number|real numbers]] with the [[Borel algebra|Borel σ-algebra]], and we will always assume this in this encyclopedia, unless otherwise specified.
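
In symbols (and assuming, as the formal definition requires, a σ-algebra <math>\mathcal{A}</math> of events on &Omega;, which the shorthand (Ω, ''P'') used below leaves implicit), measurability of ''X'': Ω → '''R''' means that the preimage of every Borel set is an event:

:<math>X^{-1}(B) = \{\omega \in \Omega : X(\omega) \in B\} \in \mathcal{A} \quad \text{for every Borel set } B \subseteq \mathbb{R}.</math>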
 
If a random variable ''X'': Ω → '''R''', defined on the probability space (Ω, ''P''), is given, we can ask questions like "How likely is it that the value of ''X'' is bigger than 2?". This is the same as asking for the probability of the event {''s'' in Ω : ''X''(''s'') > 2}, which is often written as ''P''(''X'' > 2) for short.
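
For the die example above, this probability can be read off directly from the sample space; a minimal Python sketch, assuming a fair die (all names below are illustrative, not standard):

 from fractions import Fraction
 
 # Sample space of the fair-die example: Omega = {1, ..., 6},
 # each outcome carrying probability 1/6.
 omega = [1, 2, 3, 4, 5, 6]
 P = {s: Fraction(1, 6) for s in omega}
 
 # The random variable simply records the outcome: X(s) = s.
 X = lambda s: s
 
 # P(X > 2) is the probability of the event {s in Omega : X(s) > 2}.
 event = [s for s in omega if X(s) > 2]
 print(sum(P[s] for s in event))   # 2/3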
 
Recording all these probabilities of output ranges of a real-valued random variable ''X'' yields the [[probability distribution]] of ''X''. The probability distribution "forgets" about the particular probability space used to define ''X'' and only records the probabilities of various values of ''X''. Such a probability distribution can always be captured by its [[cumulative distribution function]]

:''F''<sub>''X''</sub>(''x'') = P(''X'' ≤ ''x'')

and sometimes also by a [[probability density function]]. In [[measure theory|measure-theoretic]] terms, we use the random variable ''X'' to "push forward" the measure ''P'' on &Omega; to a measure d''F'' on '''R'''. This is a technical device used to guarantee the existence of random variables, and sometimes to construct them. In practice, one disposes of the space &Omega; altogether and just puts a measure on '''R''' that assigns measure 1 to the whole real line.
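
Continuing the same illustrative die example, the push-forward distribution and the cumulative distribution function ''F''<sub>''X''</sub> can be tabulated explicitly; the following Python sketch assumes a fair die and uses ad-hoc names:

 from fractions import Fraction
 
 omega = [1, 2, 3, 4, 5, 6]
 P = {s: Fraction(1, 6) for s in omega}   # measure on Omega
 X = lambda s: s                          # the random variable
 
 # Push P forward along X: the distribution assigns to each value v
 # the total P-mass of the outcomes s with X(s) = v.
 dist = {}
 for s in omega:
     dist[X(s)] = dist.get(X(s), Fraction(0)) + P[s]
 
 # Cumulative distribution function F_X(x) = P(X <= x).
 def F(x):
     return sum(p for v, p in dist.items() if v <= x)
 
 print(F(2), F(6))   # 1/3 1  (the whole real line gets measure 1)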
 
A random variable can often be characterised by a small number of quantities, which also have a practical interpretation. For example, it is often enough to know its "average value": what is the average of the results you get when you roll a die repeatedly, or of the heights you measure across many people? This is captured by the mathematical concept of the [[expected value]] of a random variable, denoted E[''X''].
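
For the die, the expected value can be computed exactly from the distribution and approximated by averaging repeated rolls; a brief illustrative Python sketch (fair die assumed):

 import random
 
 # Exact expected value of a fair die: sum of each value times its probability.
 exact = sum(v * (1 / 6) for v in range(1, 7))
 print(exact)        # 3.5
 
 # Approximation in the spirit of averaging many repeated rolls.
 n = 100_000
 estimate = sum(random.randint(1, 6) for _ in range(n)) / n
 print(estimate)     # close to 3.5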
 
Mathematically, this is known as the (generalised) [[problem of moments]]: to find a collection {''f<sub>i</sub>''} of functions of ''X'' such that their expectation values E[''f<sub>i</sub>''(''X'')] fully characterise the distribution of the random variable ''X''.
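
For example, taking ''f<sub>i</sub>''(''X'') = ''X''<sup>''i''</sup> gives the ordinary moments E[''X''<sup>''i''</sup>]; the sketch below (illustrative only, fair die assumed) tabulates the first few moments of the die-roll distribution:

 from fractions import Fraction
 
 dist = {v: Fraction(1, 6) for v in range(1, 7)}   # distribution of a fair die
 
 # The k-th moment E[X^k] is the expectation of f(X) = X^k.
 def moment(k):
     return sum(v ** k * p for v, p in dist.items())
 
 for k in (1, 2, 3):
     print(k, moment(k))   # 7/2, 91/6, 147/2
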
See also: [[discrete random variable]], [[continuous random variable]], [[probability distribution]]