{{Probability fundamentals}}
 
A '''random variable''' (also called '''random quantity''', '''aleatory variable''', or '''stochastic variable''') is a [[Mathematics|mathematical]] formalization of a quantity or object which depends on [[randomness|random]] events.<ref name=":2">{{cite book|last1=Blitzstein|first1=Joe|title=Introduction to Probability|last2=Hwang|first2=Jessica|date=2014|publisher=CRC Press|isbn=9781466575592}}</ref> The term 'random variable' in its mathematical definition refers to neither randomness nor variability<ref>{{Cite book |last=Deisenroth |first=Marc Peter |url=https://www.worldcat.org/oclc/1104219401 |title=Mathematics for machine learning |date=2020 |others=A. Aldo Faisal, Cheng Soon Ong |isbn=978-1-108-47004-9 |___location=Cambridge, United Kingdom |oclc=1104219401 |publisher=Cambridge University Press}}</ref> but instead is a mathematical [[function (mathematics)|function]] in which

* the [[Domain of a function|___domain]] is the set of possible [[Outcome (probability)|outcomes]] in a [[sample space]] (e.g., the set <math>\{H,T\}</math>, the possible upper sides of a flipped coin, heads <math>H</math> or tails <math>T</math>, resulting from tossing a coin); and
* the [[Range of a function|range]] is a [[measurable space]] (e.g., corresponding to the ___domain above, the range might be the set <math>\{-1, 1\}</math>, where, say, heads <math>H</math> maps to −1 and tails <math>T</math> maps to 1). Typically, the range of a random variable is a subset of the [[Real number|real numbers]].
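As a minimal illustrative sketch (not part of the formal definition), the coin example above can be written as an explicit function from outcomes to numbers. The Python code below assumes the mapping heads <math>H \mapsto -1</math> and tails <math>T \mapsto 1</math> used above.

<syntaxhighlight lang="python">
import random

sample_space = ["H", "T"]  # possible outcomes of the coin flip

def X(outcome):
    """Random variable X: maps each outcome to a real number (H -> -1, T -> 1)."""
    return -1 if outcome == "H" else 1

outcome = random.choice(sample_space)  # one outcome, chosen at random
print(outcome, X(outcome))             # e.g. "H -1" or "T 1"
</syntaxhighlight>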
 
[[File:Random Variable as a Function-en.svg|thumb|This graph shows how a random variable is a function from all possible outcomes to real values. It also shows how a random variable is used for defining probability mass functions.]]
 
Informally, randomness typically represents some fundamental element of chance, such as in the roll of a [[dice|die]]; it may also represent uncertainty, such as [[measurement error]].<ref name=":2" /> However, the [[interpretation of probability]] is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous [[Axiom|axiomatic]] setup.
 
In the formal mathematical language of [[measure theory]], a random variable is defined as a [[measurable function]] from a [[probability measure space]] (called the ''sample space'') to a [[measurable space]]. This allows consideration of the [[pushforward measure]], which is called the ''distribution'' of the random variable; the distribution is thus a [[probability measure]] on the set of all possible values of the random variable. It is possible for two random variables to have identical distributions but to differ in significant ways; for instance, they may be [[independence (probability theory)|independent]].
 
It is common to consider the special cases of [[discrete random variable]]s and [[Probability_distribution#Absolutely_continuous_probability_distribution|absolutely continuous random variable]]s, corresponding to whether a random variable is valued in a countable subset or in an interval of [[real number]]s. There are other important possibilities, especially in the theory of [[stochastic process]]es, wherein it is natural to consider [[random sequence]]s or [[random function]]s. Sometimes a ''random variable'' is taken to be automatically valued in the real numbers, with more general random quantities instead being called ''[[random element]]s''.
 
According to [[George Mackey]], [[Pafnuty Chebyshev]] was the first person "to think systematically in terms of random variables".<ref name=":3">{{cite journal|journal=Bulletin of the American Mathematical Society |series=New Series|volume=3|number=1|date=July 1980|title=Harmonic analysis as the exploitation of symmetry – a historical survey|author=George Mackey}}</ref>
==Definition==
 
A '''random variable''' <math>X</math> is a [[measurable function]] <math>X \colon \Omega \to E</math> from a sample space <math>\Omega</math> as a set of possible [[outcome (probability)|outcome]]s to a [[measurable space]] <math>E</math>. The technical axiomatic definition requires the sample space <math>\Omega</math> to belong to a [[probability space|probability triple]] <math>(\Omega, \mathcal{F}, \operatorname{P})</math> (see the [[#Measure-theoretic definition|measure-theoretic definition]]). A random variable is often denoted by capital [[Latin script|Roman letters]] such as <math>X, Y, Z, T</math>.<ref>{{Cite web|title=Random Variables|url=https://www.mathsisfun.com/data/random-variables.html|access-date=2020-08-21|website=www.mathsisfun.com}}</ref>
 
The probability that <math>X</math> takes on a value in a measurable set <math>S\subseteq E</math> is written as
 
: <math>\operatorname{P}(X \in S) = \operatorname{P}(\{ \omega \in \Omega \mid X(\omega) \in S \})</math>.
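The following sketch (in Python, an assumed choice of language) illustrates this definition on a finite probability space: the probability that <math>X</math> takes a value in <math>S</math> is obtained by summing the measure of all outcomes <math>\omega</math> with <math>X(\omega) \in S</math>. The fair die and the map <math>X(\omega)=\omega^2</math> are arbitrary illustrative choices.

<syntaxhighlight lang="python">
# Sketch of P(X in S) = P({omega : X(omega) in S}) for a fair six-sided die,
# assuming the uniform probability measure on Omega = {1, ..., 6}.
from fractions import Fraction

Omega = [1, 2, 3, 4, 5, 6]
P = {omega: Fraction(1, 6) for omega in Omega}   # probability measure on Omega

def X(omega):
    """Random variable: the square of the die roll (an arbitrary example)."""
    return omega ** 2

S = {1, 4, 9}                                    # a measurable set of values
prob = sum(P[omega] for omega in Omega if X(omega) in S)
print(prob)   # 1/2, since X(omega) is in S exactly for omega in {1, 2, 3}
</syntaxhighlight>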
 
===Standard case===
In many cases, <math>X</math> is [[Real number|real-valued]], i.e. <math>E = \mathbb{R}</math>. In some contexts, the term [[random element]] (see [[#Extensions|extensions]]) is used to denote a random variable not of this form.
 
{{Anchor|Discrete random variable}}When the [[Image (mathematics)|image]] (or range) of <math>X</math> is finite or [[countable set|countably infinite]], the random variable is called a '''discrete random variable'''<ref name="Yates">{{cite book | last1 = Yates | first1 = Daniel S. | last2 = Moore | first2 = David S | last3 = Starnes | first3 = Daren S. | year = 2003 | title = The Practice of Statistics | edition = 2nd | publisher = [[W. H. Freeman and Company|Freeman]] | ___location = New York | url = http://bcs.whfreeman.com/yates2e/ | isbn = 978-0-7167-4773-4 | url-status = dead | archive-url = https://web.archive.org/web/20050209001108/http://bcs.whfreeman.com/yates2e/ | archive-date = 2005-02-09 }}</ref>{{rp|399}} and its distribution is a [[discrete probability distribution]], i.e. it can be described by a [[probability mass function]] that assigns a probability to each value in the image of <math>X</math>. If the image is uncountably infinite (usually an [[Interval (mathematics)|interval]]) then <math>X</math> is called a '''continuous random variable'''.<ref>{{Cite web|title=Random Variables|url=http://www.stat.yale.edu/Courses/1997-98/101/ranvar.htm|access-date=2020-08-21|website=www.stat.yale.edu}}</ref><ref>{{Cite journal|last1=Dekking|first1=Frederik Michel|last2=Kraaikamp|first2=Cornelis|last3=Lopuhaä|first3=Hendrik Paul|last4=Meester|first4=Ludolf Erwin|date=2005|title=A Modern Introduction to Probability and Statistics|url=https://doi.org/10.1007/1-84628-168-7|journal=Springer Texts in Statistics|language=en-gb|doi=10.1007/1-84628-168-7|isbn=978-1-85233-896-1|issn=1431-875X|url-access=subscription}}</ref> In the special case that it is [[absolutely continuous]], its distribution can be described by a [[probability density function]], which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous.<ref>{{cite book|author1=L. Castañeda |author2=V. Arunachalam |author3=S. Dharmaraja |name-list-style=amp |title = Introduction to Probability and Stochastic Processes with Applications | year = 2012 | publisher= Wiley | page = 67 | url=https://books.google.com/books?id=zxXRn-Qmtk8C&pg=PA67 |isbn=9781118344941 }}</ref>
 
Any random variable can be described by its [[cumulative distribution function]], which describes the probability that the random variable will be less than or equal to a certain value.
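As an illustration of these notions, the sketch below (using the SciPy library, an assumed choice) contrasts a discrete random variable, described by its probability mass function, with an absolutely continuous one, described by its probability density function; both are summarized by their cumulative distribution functions.

<syntaxhighlight lang="python">
# Sketch contrasting a discrete and an absolutely continuous random variable.
from scipy.stats import binom, norm

# Discrete: number of heads in 10 fair coin tosses; described by a PMF.
print(binom.pmf(3, 10, 0.5))           # P(X = 3) > 0
print(binom.cdf(3, 10, 0.5))           # P(X <= 3), the cumulative distribution function

# Absolutely continuous: a standard normal variable; described by a PDF.
print(norm.pdf(0.0))                   # density at 0, not a probability
print(norm.cdf(1.0) - norm.cdf(-1.0))  # P(-1 <= Y <= 1); each single point has probability 0
</syntaxhighlight>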
The term "random variable" in statistics is traditionally limited to the [[real number|real-valued]] case (<math>E=\mathbb{R}</math>). In this case, the structure of the real numbers makes it possible to define quantities such as the [[expected value]] and [[variance]] of a random variable, its [[cumulative distribution function]], and the [[moment (mathematics)|moment]]s of its distribution.
 
However, the definition above is valid for any [[measurable space]] <math>E</math> of values. Thus one can consider random elements of other sets <math>E</math>, such as random [[Boolean-valued function|Boolean value]]s, [[categorical variable|categorical value]]s, [[Covariance matrix#Complex random vectors|complex numbers]], [[random vector|vector]]s, [[random matrix|matrices]], [[random sequence|sequence]]s, [[Tree (graph theory)|tree]]s, [[random compact set|set]]s, [[shape]]s, [[manifold]]s, and [[random function|function]]s. One may then specifically refer to a ''random variable of [[data type|type]] <math>E</math>'', or an ''<math>E</math>-valued random variable''.
 
This more general concept of a [[random element]] is particularly useful in disciplines such as [[graph theory]], [[machine learning]], [[natural language processing]], and other fields in [[discrete mathematics]] and [[computer science]], where one is often interested in modeling the random variation of non-numerical [[data structure]]s. In some cases, it is nonetheless convenient to represent each element of <math>E</math>, using one or more real numbers. In this case, a random element may optionally be represented as a [[random vector|vector of real-valued random variables]] (all defined on the same underlying probability space <math>\Omega</math>, which allows the different random variables to [[mutual information|covary]]). For example:
*A random sentence of given length <math>N</math> may be represented as a vector of <math>N</math> random words.
*A [[random graph]] on <math>N</math> given vertices may be represented as an <math>N \times N</math> matrix of random variables, whose values specify the [[adjacency matrix]] of the random graph.
*A [[random function]] <math>F</math> may be represented as a collection of random variables <math>F(x)</math>, giving the function's values at the various points <math>x</math> in the function's ___domain. The <math>F(x)</math> are ordinary real-valued random variables provided that the function is real-valued. For example, a [[stochastic process]] is a random function of time, a [[random vector]] is a random function of some [[index set]] such as <math>1,2,\ldots, n</math>, and a [[random field]] is a random function on any set (typically time, space, or a discrete set).
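A minimal sketch of the random-graph representation above, assuming independent Bernoulli(<math>p</math>) entries (an Erdős–Rényi-style construction, used purely as an illustration) and using NumPy:

<syntaxhighlight lang="python">
# A random graph on N vertices as an N x N matrix of Bernoulli random variables.
import numpy as np

N, p = 5, 0.3
rng = np.random.default_rng(0)

# Each strictly upper-triangular entry is an independent Bernoulli(p) variable;
# symmetrizing gives the adjacency matrix of an undirected random graph.
upper = np.triu(rng.random((N, N)) < p, k=1).astype(int)
adjacency = upper + upper.T
print(adjacency)
</syntaxhighlight>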
 
==Distribution functions==
===Discrete random variable===
 
Consider an experiment where a person is chosen at random. An example of a random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to their height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190&nbsp;cm, or the probability that the height is either less than 150 or more than 200&nbsp;cm.
 
Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum <math>\operatorname{PMF}(0) + \operatorname{PMF}(2) + \operatorname{PMF}(4) + \cdots</math>.
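For illustration only, the sketch below assumes that the number of children follows a Poisson distribution with mean 2 (an arbitrary modelling choice) and approximates the infinite sum over even values by truncation.

<syntaxhighlight lang="python">
# P(even number of children) as the sum PMF(0) + PMF(2) + PMF(4) + ...
from scipy.stats import poisson

lam = 2.0
# Truncate the infinite sum; the tail beyond k = 100 is negligible here.
p_even = sum(poisson.pmf(k, lam) for k in range(0, 101, 2))
print(p_even)   # close to the exact value (1 + exp(-2*lam)) / 2
</syntaxhighlight>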
In examples such as these, the [[sample space]] is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed.
 
If <math display = "inline">\{a_n\}, \{b_n\}</math> are countable sets of real numbers, <math display="inline">b_n >0</math> and <math display="inline">\sum_n b_n=1</math>, then <math display="inline"> F=\sum_n b_n \delta_{a_n}(x)</math> is a discrete distribution function. Here <math> \delta_t(x) = 0</math> for <math> x < t</math>, <math> \delta_t(x) = 1</math> for <math> x \ge t</math>. Taking for instance an enumeration of all rational numbers as <math>\{a_n\}</math> , one gets a discrete function that is not necessarily a [[step function]] ([[piecewise]] constant).
====Coin toss====
 
 
This notion is typically the least useful in probability theory because in practice and in theory, the underlying [[measure space]] of the [[Experiment (probability theory)|experiment]] is rarely explicitly characterized or even characterizable.
 
===Practical difference between notions of equivalence===
 
Since we rarely explicitly construct the probability space underlying a random variable, the difference between these notions of equivalence is somewhat subtle. Essentially, two random variables considered ''in isolation'' are "practically equivalent" if they are equal in distribution; but once we relate them to ''other'' random variables defined on the same probability space, they remain "practically equivalent" only if they are equal almost surely.
 
For example, consider the real random variables ''A'', ''B'', ''C'', and ''D'' all defined on the same probability space. Suppose that ''A'' and ''B'' are equal almost surely (<math>A \; \stackrel{\text{a.s.}}{=} \; B</math>), but ''A'' and ''C'' are only equal in distribution (<math>A \stackrel{d}{=} C</math>). Then <math> A + D \; \stackrel{\text{a.s.}}{=} \; B + D</math>, but in general <math> A + D \; \neq \; C + D</math> (not even in distribution). Similarly, we have that the expectation values <math> \mathbb{E}(AD) = \mathbb{E}(BD)</math>, but in general <math> \mathbb{E}(AD) \neq \mathbb{E}(CD)</math>. Therefore, two random variables that are equal in distribution (but not equal almost surely) can have different [[covariance|covariances]] with a third random variable.
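The sketch below constructs a concrete (illustrative, not canonical) instance of this situation on a two-point sample space, with <math>C = -A</math>, so that <math>C</math> has the same distribution as <math>A</math> without being equal to it almost surely.

<syntaxhighlight lang="python">
# A and B are equal almost surely; C merely has the same distribution as A.
from fractions import Fraction

Omega = [0, 1]
P = {0: Fraction(1, 2), 1: Fraction(1, 2)}

A = {0: -1, 1: 1}
B = dict(A)                    # B = A everywhere, so A = B almost surely
C = {w: -A[w] for w in Omega}  # C = -A: same distribution as A, but not equal a.s.
D = dict(A)                    # a third variable on the same probability space

def E(X, Y):
    """Expectation of the product X*Y over the finite sample space."""
    return sum(X[w] * Y[w] * P[w] for w in Omega)

print(E(A, D), E(B, D), E(C, D))   # 1, 1, -1: equality in distribution is not enough
</syntaxhighlight>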
 
==Convergence==