{{Probability fundamentals}}
A '''random variable''' (also called '''random quantity''', '''aleatory variable''', or '''stochastic variable''') is a [[Mathematics|mathematical]] formalization of a quantity or object which depends on [[randomness|random]] events.<ref name=":2">{{cite book|last1=Blitzstein|first1=Joe|title=Introduction to Probability|last2=Hwang|first2=Jessica|date=2014|publisher=CRC Press|isbn=9781466575592}}</ref> The term 'random variable' in its mathematical definition refers to neither randomness nor variability but instead is a mathematical [[Function (mathematics)|function]] in which
* the [[Domain of a function|___domain]] is the set of possible [[outcome (probability)|outcomes]] in a [[sample space]] (e.g. the set <math>\{H,T\}</math> of the possible outcomes of a coin toss), and
* the [[Range of a function|range]] is a [[measurable space]] (e.g. corresponding to the ___domain above, the range might be the set <math>\{-1, 1\}</math> if, say, heads <math>H</math> is mapped to −1 and tails <math>T</math> is mapped to 1). Typically, the range of a random variable is a subset of the [[Real number|real numbers]].
[[File:Random Variable as a Function-en.svg|thumb|This graph shows how a random variable is a function from all possible outcomes to real values. It also shows how a random variable is used for defining probability mass functions.]]
Informally, randomness typically represents some fundamental element of chance, such as in the roll of a [[dice|die]]; it may also represent uncertainty, such as [[measurement error]].<ref name=":2" /> However, the [[interpretation of probability]] is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous [[Axiom|axiomatic]] setup.
In the formal mathematical language of [[measure theory]], a random variable is defined as a [[measurable function]] from a [[probability measure space]] (called the ''sample space'') to a [[measurable space]]. This allows consideration of the [[pushforward measure]], which is called the ''distribution'' of the random variable; the distribution is thus a [[probability measure]] on the set of all possible values of the random variable. It is possible for two random variables to have identical distributions but to differ in significant ways; for instance, they may be [[independence (probability theory)|independent]].
It is common to consider the special cases of [[discrete random variable]]s and [[Probability_distribution#Absolutely_continuous_probability_distribution|absolutely continuous random variable]]s, corresponding to whether a random variable is valued in a countable set or in an interval of [[real number]]s.
According to [[George Mackey]], [[Pafnuty Chebyshev]] was the first person "to think systematically in terms of random variables".<ref name=":3">{{cite journal|journal=Bulletin of the American Mathematical Society |series=New Series|volume=3|number=1|date=July 1980|title=Harmonic analysis as the exploitation of symmetry – a historical survey|author=George Mackey}}</ref>
==Definition==
A '''random variable''' <math>X</math> is a [[measurable function]] <math>X \colon \Omega \to E</math> from a sample space <math> \Omega </math> as a set of possible [[outcome (probability)|outcome]]s to a [[measurable space]] <math> E</math>. The technical axiomatic definition requires the sample space <math>\Omega</math> to belong to a [[probability space|probability triple]] <math>(\Omega, \mathcal{F}, \operatorname{P})</math>.
The probability that <math>X</math> takes on a value in a measurable set <math>S\subseteq E</math> is written as
: <math>\operatorname{P}(X \in S) = \operatorname{P}(\{ \omega \in \Omega \mid X(\omega) \in S \})</math>.
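The definition above can be made concrete with a minimal sketch (an illustrative assumption, not part of the article): a fair coin toss with sample space <math>\{H, T\}</math> and the map to <math>\{-1, 1\}</math> mentioned earlier, where <math>\operatorname{P}(X \in S)</math> is computed by summing the probabilities of the outcomes that <math>X</math> sends into <math>S</math>.

```python
from fractions import Fraction

# Hypothetical finite probability space for a fair coin toss:
# outcomes H and T, each with probability 1/2.
P = {"H": Fraction(1, 2), "T": Fraction(1, 2)}

# The random variable X maps H to -1 and T to 1, matching the
# example range {-1, 1} given above.
X = {"H": -1, "T": 1}

def prob(S):
    """P(X in S) = P({omega in Omega : X(omega) in S})."""
    return sum(p for omega, p in P.items() if X[omega] in S)

print(prob({1}))      # P(X = 1) = 1/2
print(prob({-1, 1}))  # P(X in {-1, 1}) = 1
```

For a finite sample space this pushforward computation is just a filtered sum; the measure-theoretic definition generalizes the same idea to arbitrary measurable sets.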
===Standard case===
In many cases, <math>X</math> is [[Real number|real-valued]], i.e. <math>E = \mathbb{R}</math>. In some contexts, the term [[random element]] (see [[#Extensions|extensions]]) is used to denote a random variable not of this form.
{{Anchor|Discrete random variable}}When the [[Image (mathematics)|image]] (or range) of <math>X</math> is finite or [[countable set|countably infinite]], the random variable is called a '''discrete random variable''' and its distribution is a [[discrete probability distribution]], i.e. it can be described by a [[probability mass function]] that assigns a probability to each value in the image of <math>X</math>.
Any random variable can be described by its [[cumulative distribution function]], which describes the probability that the random variable will be less than or equal to a certain value.
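For a discrete random variable the cumulative distribution function is obtained by summing the probability mass function over all values up to the argument. A minimal sketch, assuming a fair six-sided die as the example distribution (this choice is illustrative, not from the article):

```python
from fractions import Fraction

# Illustrative PMF of a fair six-sided die: each face has mass 1/6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

def cdf(x):
    """F(x) = P(X <= x), computed by summing the PMF over values <= x."""
    return sum(p for value, p in pmf.items() if value <= x)

print(cdf(3))  # P(X <= 3) = 1/2
print(cdf(0))  # below the support: 0
print(cdf(6))  # whole support: 1
```

The resulting function is a step function that jumps by <math>\operatorname{PMF}(k)</math> at each value <math>k</math> in the image.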
The term "random variable" in statistics is traditionally limited to the [[real number|real-valued]] case (<math>E=\mathbb{R}</math>). In this case, the structure of the real numbers makes it possible to define quantities such as the [[expected value]] and [[variance]] of a random variable, its [[cumulative distribution function]], and the [[moment (mathematics)|moment]]s of its distribution.
However, the definition above is valid for any [[measurable space]] <math>E</math> of values. Thus one can consider random elements of other sets <math>E</math>, such as random [[Boolean-valued function|Boolean value]]s, [[categorical variable|categorical value]]s, [[complex number]]s, [[Euclidean vector|vector]]s, [[Matrix (mathematics)|matrices]], [[sequence]]s, [[Tree (graph theory)|tree]]s, [[Set (mathematics)|set]]s, [[shape]]s, [[manifold]]s, and [[Function (mathematics)|function]]s.
This more general concept of a [[random element]] is particularly useful in disciplines such as [[graph theory]], [[machine learning]], [[natural language processing]], and other fields in [[discrete mathematics]] and [[computer science]], where one is often interested in modeling the random variation of non-numerical [[data structure]]s. In some cases, it is nonetheless convenient to represent each element of <math>E</math>, using one or more real numbers. In this case, a random element may optionally be represented as a [[random vector|vector of real-valued random variables]] (all defined on the same underlying probability space <math>\Omega</math>, which allows the different random variables to [[mutual information|covary]]). For example:
*A random sentence of given length <math>N</math> may be represented as a vector of <math>N</math> random words.
*A [[random graph]] on <math>N</math> given vertices may be represented as an <math>N \times N</math> matrix of random variables, whose values specify the [[adjacency matrix]] of the random graph.
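The adjacency-matrix representation of a random graph can be sketched directly. The edge model below (each edge present independently with probability <math>p</math>, in the style of an [[Erdős–Rényi model]]) is an illustrative assumption; the article's claim is only that the matrix entries are random variables.

```python
import random

def random_graph_adjacency(n, p, seed=0):
    """Sample an undirected random graph on n vertices: each possible
    edge {i, j} is included independently with probability p
    (Erdos-Renyi-style model, used here purely as an illustration).
    Returns the n-by-n adjacency matrix described above: a symmetric
    0/1 matrix with zero diagonal."""
    rng = random.Random(seed)
    a = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                a[i][j] = a[j][i] = 1
    return a

adj = random_graph_adjacency(5, 0.5)
for row in adj:
    print(row)
```

Each entry <math>a_{ij}</math> is itself a real-valued (in fact 0/1-valued) random variable, and the symmetry constraint shows how the entries covary on the shared probability space.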
*A [[random function]] <math>F</math> may be represented as a collection of random variables <math>F(x)</math>, giving the function's values at the various points <math>x</math> in the function's ___domain. The <math>F(x)</math> are ordinary real-valued random variables provided that the function is real-valued. For example, a [[stochastic process]] is a random function of time, a [[random vector]] is a random function of some [[index set]] such as <math>1,2,\ldots, n</math>, and a [[random field]] is a random function on any set (typically time, space, or a discrete set).
==Distribution functions==
===Discrete random variable===
Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum <math>\operatorname{PMF}(0) + \operatorname{PMF}(2) + \operatorname{PMF}(4) + \cdots</math>.
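The even-number-of-children sum can be checked numerically. The Poisson model used below is an illustrative assumption (the article fixes no distribution); for a Poisson variable the infinite sum <math>\operatorname{PMF}(0) + \operatorname{PMF}(2) + \cdots</math> happens to have the closed form <math>(1 + e^{-2\lambda})/2</math>, which the truncated series should match.

```python
import math

# Illustrative assumption: the number of children follows a
# Poisson(lam) distribution; the article does not fix a model.
lam = 1.7

def pmf(k):
    """Poisson probability mass function P(X = k)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# P(even number of children) = PMF(0) + PMF(2) + PMF(4) + ...
# The series converges rapidly, so a modest truncation suffices.
p_even = sum(pmf(k) for k in range(0, 80, 2))

# Closed form for the Poisson case, used only as a cross-check.
closed_form = (1 + math.exp(-2 * lam)) / 2
print(p_even, closed_form)
```

The same add-up-the-PMF recipe works for any event that is a set of values of a discrete random variable, finite or countably infinite.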
In examples such as these, the [[sample space]] is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, so that, for example, one can ask whether such random variables are correlated.
If <math display="inline">\{a_n\}, \{b_n\}</math> are countable sets of real numbers, <math display="inline">b_n >0</math> and <math display="inline">\sum_n b_n=1</math>, then <math display="inline"> F=\sum_n b_n \delta_{a_n}(x)</math> is a discrete distribution function. Here <math> \delta_t(x) = 0</math> for <math> x < t</math>, <math> \delta_t(x) = 1</math> for <math> x \ge t</math>. Taking for instance an enumeration of all rational numbers as <math>\{a_n\}</math>, one gets a discrete distribution function that is not a [[step function]] ([[piecewise]] constant).
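The construction <math display="inline">F=\sum_n b_n \delta_{a_n}(x)</math> can be sketched for a finite set of atoms (the three atoms chosen below are an illustrative assumption):

```python
def make_discrete_cdf(points):
    """Given pairs (a_n, b_n) with b_n > 0 summing to 1, return the
    distribution function F(x) = sum_n b_n * delta_{a_n}(x), i.e. the
    sum of the weights b_n over all atoms a_n <= x, as defined above."""
    def F(x):
        return sum(b for a, b in points if a <= x)
    return F

# Example weights on three atoms (illustrative choice).
F = make_discrete_cdf([(0, 0.2), (1, 0.5), (2.5, 0.3)])
print(F(-1), F(0), F(1), F(3))  # 0 below, 1 above the atoms
```

With finitely many atoms the result is a step function; the rational-enumeration example in the text shows that with countably many dense atoms it need not be.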
====Coin toss====
This notion is typically the least useful in probability theory because in practice and in theory, the underlying [[measure space]] of the [[Experiment (probability theory)|experiment]] is rarely explicitly characterized or even characterizable.
===Practical difference between notions of equivalence===
Since we rarely explicitly construct the probability space underlying a random variable, the difference between these notions of equivalence is somewhat subtle. Essentially, two random variables considered ''in isolation'' are "practically equivalent" if they are equal in distribution; but once we relate them to ''other'' random variables defined on the same probability space, they remain "practically equivalent" only if they are equal almost surely.
For example, consider the real random variables ''A'', ''B'', ''C'', and ''D'' all defined on the same probability space. Suppose that ''A'' and ''B'' are equal almost surely (<math>A \; \stackrel{\text{a.s.}}{=} \; B</math>), but ''A'' and ''C'' are only equal in distribution (<math>A \stackrel{d}{=} C</math>). Then <math> A + D \; \stackrel{\text{a.s.}}{=} \; B + D</math>, but in general <math> A + D \; \neq \; C + D</math> (not even in distribution). Similarly, we have that the expectation values <math> \mathbb{E}(AD) = \mathbb{E}(BD)</math>, but in general <math> \mathbb{E}(AD) \neq \mathbb{E}(CD)</math>. Therefore, two random variables that are equal in distribution (but not equal almost surely) can have different [[covariance|covariances]] with a third random variable.
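The claim about covariances can be verified exactly on a tiny probability space. The two-outcome space and the choices <math>B = A</math>, <math>C = -A</math>, <math>D = A</math> below are illustrative assumptions: <math>C</math> has the same distribution as <math>A</math> but is never equal to it, and the expectations <math>\mathbb{E}(AD)</math> and <math>\mathbb{E}(CD)</math> come out different.

```python
from fractions import Fraction

# Illustrative two-outcome probability space, each outcome with mass 1/2.
omegas = ["w1", "w2"]
P = {w: Fraction(1, 2) for w in omegas}

# A and B agree on every outcome (equal almost surely); C = -A takes
# the same two values with the same probabilities, so it is equal to A
# in distribution only; D = A is a third variable on the same space.
A = {"w1": 1, "w2": -1}
B = dict(A)
C = {w: -A[w] for w in omegas}
D = dict(A)

def E_prod(X, Y):
    """E(XY) over the finite probability space."""
    return sum(P[w] * X[w] * Y[w] for w in omegas)

# A and C have identical distributions ...
assert sorted(A.values()) == sorted(C.values())
# ... yet E(AD) = 1 while E(CD) = -1, as claimed in the text.
print(E_prod(A, D), E_prod(C, D))
```

Because <math>\mathbb{E}(AD) = \mathbb{E}(BD)</math> holds while <math>\mathbb{E}(AD) \neq \mathbb{E}(CD)</math>, only almost-sure equality is preserved under such joint computations.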
==Convergence==