Random variable
{{Probability fundamentals}}
 
A '''random variable''' (also called '''random quantity''', '''aleatory variable''', or '''stochastic variable''') is a [[Mathematics| mathematical]] formalization of a quantity or object which depends on [[randomness|random]] events.<ref name=":2">{{cite book|last1=Blitzstein|first1=Joe|title=Introduction to Probability|last2=Hwang|first2=Jessica|date=2014|publisher=CRC Press|isbn=9781466575592}}</ref> The term 'random variable' in its mathematical definition refers to neither randomness nor variability<ref>{{Cite book |last=Deisenroth |first=Marc Peter |url=https://www.worldcat.org/oclc/1104219401 |title=Mathematics for machine learning |date=2020 |others=A. Aldo Faisal, Cheng Soon Ong |isbn=978-1-108-47004-9 |___location=Cambridge, United Kingdom |oclc=1104219401 |publisher=Cambridge University Press}}</ref> but instead is a mathematical [[function (mathematics)|function]] in which
 
* the [[Domain of a function|___domain]] is the set of possible [[Outcome (probability)|outcomes]] in a [[sample space]] (e.g. the set <math>\{H,T\}</math> which are the possible upper sides of a flipped coin heads <math>H</math> or tails <math>T</math> as the result from tossing a coin); and
* the [[Range of a function|range]] is a [[measurable space]] (e.g. corresponding to the ___domain above, the range might be the set <math>\{-1, 1\}</math> if, say, heads <math>H</math> mapped to <math>-1</math> and tails <math>T</math> mapped to <math>1</math>). Typically, the range of a random variable is a subset of the [[Real number|real numbers]].
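The coin-toss mapping above can be sketched as an ordinary function; a minimal illustration in Python (the names here are our own, chosen for the example):

```python
import random

# Domain: the set of possible outcomes of a coin toss.
sample_space = ["H", "T"]

def X(outcome):
    """The random variable: maps heads to -1 and tails to 1."""
    return -1 if outcome == "H" else 1

outcome = random.choice(sample_space)  # the randomness lives in the outcome...
value = X(outcome)                     # ...the random variable is just a function of it
```

The function <code>X</code> itself is deterministic; all the randomness comes from which outcome in the ___domain occurs.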
 
[[File:Random Variable as a Function-en.svg|thumb|This graph shows how a random variable is a function from all possible outcomes to real values. It also shows how a random variable is used for defining a probability mass function.]]
 
Informally, randomness typically represents some fundamental element of chance, such as in the roll of a [[dice|die]]; it may also represent uncertainty, such as [[measurement error]].<ref name=":2" /> However, the [[interpretation of probability]] is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous [[Axiom|axiomatic]] setup.
 
In the formal mathematical language of [[measure theory]], a random variable is defined as a [[measurable function]] from a [[probability measure space]] (called the ''sample space'') to a [[measurable space]]. This allows consideration of the [[pushforward measure]], which is called the ''distribution'' of the random variable; the distribution is thus a [[probability measure]] on the set of all possible values of the random variable. It is possible for two random variables to have identical distributions but to differ in significant ways; for instance, they may be [[independence (probability theory)|independent]].
==Definition==
 
A '''random variable''' <math>X</math> is a [[measurable function]] <math>X \colon \Omega \to E</math> from a sample space <math> \Omega </math> as a set of possible [[outcome (probability)|outcome]]s to a [[measurable space]] <math> E</math>. The technical axiomatic definition requires the sample space <math>\Omega</math> to belong to a [[probability space|probability triple]] <math>(\Omega, \mathcal{F}, \operatorname{P})</math> (see the [[#Measure-theoretic definition|measure-theoretic definition]]). A random variable is often denoted by capital [[Latin script|Roman letters]] such as <math>X, Y, Z, T</math>.<ref>{{Cite web|title=Random Variables|url=https://www.mathsisfun.com/data/random-variables.html|access-date=2020-08-21|website=www.mathsisfun.com}}</ref>
 
The probability that <math>X</math> takes on a value in a measurable set <math>S\subseteq E</math> is written as
In many cases, <math>X</math> is [[Real number|real-valued]], i.e. <math>E = \mathbb{R}</math>. In some contexts, the term [[random element]] (see [[#Extensions|extensions]]) is used to denote a random variable not of this form.
 
{{Anchor|Discrete random variable}}When the [[Image (mathematics)|image]] (or range) of <math>X</math> is finite or [[countable set|countably infinite]], the random variable is called a '''discrete random variable'''<ref name="Yates">{{cite book | last1 = Yates | first1 = Daniel S. | last2 = Moore | first2 = David S | last3 = Starnes | first3 = Daren S. | year = 2003 | title = The Practice of Statistics | edition = 2nd | publisher = [[W. H. Freeman and Company|Freeman]] | ___location = New York | url = http://bcs.whfreeman.com/yates2e/ | isbn = 978-0-7167-4773-4 | url-status = dead | archive-url = https://web.archive.org/web/20050209001108/http://bcs.whfreeman.com/yates2e/ | archive-date = 2005-02-09 }}</ref>{{rp|399}} and its distribution is a [[discrete probability distribution]], i.e. can be described by a [[probability mass function]] that assigns a probability to each value in the image of <math>X</math>. If the image is uncountably infinite (usually an [[Interval (mathematics)|interval]]) then <math>X</math> is called a '''continuous random variable'''.<ref>{{Cite web|title=Random Variables|url=http://www.stat.yale.edu/Courses/1997-98/101/ranvar.htm|access-date=2020-08-21|website=www.stat.yale.edu}}</ref><ref>{{Cite journal|last1=Dekking|first1=Frederik Michel|last2=Kraaikamp|first2=Cornelis|last3=Lopuhaä|first3=Hendrik Paul|last4=Meester|first4=Ludolf Erwin|date=2005|title=A Modern Introduction to Probability and Statistics|url=https://doi.org/10.1007/1-84628-168-7|journal=Springer Texts in Statistics|language=en-gb|doi=10.1007/1-84628-168-7|isbn=978-1-85233-896-1|issn=1431-875X|url-access=subscription}}</ref> In the special case that it is [[absolutely continuous]], its distribution can be described by a [[probability density function]], which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable.
Not all continuous random variables are absolutely continuous.<ref>{{cite book|author1=L. Castañeda |author2=V. Arunachalam |author3=S. Dharmaraja |name-list-style=amp |title = Introduction to Probability and Stochastic Processes with Applications | year = 2012 | publisher= Wiley | page = 67 | url=https://books.google.com/books?id=zxXRn-Qmtk8C&pg=PA67 |isbn=9781118344941 }}</ref>
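As a concrete sketch of the discrete case, consider the upper face of a fair six-sided die (an assumed example): its image is finite, so its distribution is fully described by a probability mass function.

```python
from fractions import Fraction

# PMF of a hypothetical fair die: each face in the image {1, ..., 6} gets mass 1/6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

# The PMF assigns a probability to any set of values by summation,
# e.g. the event "the roll is even":
p_even = sum(p for k, p in pmf.items() if k % 2 == 0)
```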
 
Any random variable can be described by its [[cumulative distribution function]], which describes the probability that the random variable will be less than or equal to a certain value.
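For a discrete random variable, the cumulative distribution function can be computed by accumulating the probability mass function; a sketch using a hypothetical fair die:

```python
from fractions import Fraction

# Assumed example: PMF of a fair six-sided die.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

def cdf(x):
    """P(X <= x): sum the mass of every value in the image not exceeding x."""
    return sum(p for k, p in pmf.items() if k <= x)
```

The resulting CDF rises from 0 to 1 in jumps of 1/6 at the integers 1 through 6.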
*A random sentence of given length <math>N</math> may be represented as a vector of <math>N</math> random words.
*A [[random graph]] on <math>N</math> given vertices may be represented as an <math>N \times N</math> matrix of random variables, whose values specify the [[adjacency matrix]] of the random graph.
*A [[random function]] <math>F</math> may be represented as a collection of random variables <math>F(x)</math>, giving the function's values at the various points <math>x</math> in the function's ___domain. The <math>F(x)</math> are ordinary real-valued random variables provided that the function is real-valued. For example, a [[stochastic process]] is a random function of time, a [[random vector]] is a random function of some [[index set]] such as <math>1,2,\ldots, n</math>, and a [[random field]] is a random function on any set (typically time, space, or a discrete set).
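The adjacency-matrix representation of a random graph can be sketched by drawing each potential edge independently; the vertex count and edge probability below are assumed purely for illustration.

```python
import random

N, p = 5, 0.5  # assumed: 5 vertices, edge probability 1/2

# Build a symmetric 0/1 adjacency matrix; each entry above the diagonal
# is an independent Bernoulli(p) random variable.
adj = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        adj[i][j] = adj[j][i] = 1 if random.random() < p else 0
```

The graph itself is the random object; each matrix entry is an ordinary {0, 1}-valued random variable.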
 
==Distribution functions==
===Discrete random variable===
 
Consider an experiment where a person is chosen at random. An example of a random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to their height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190&nbsp;cm, or the probability that the height is either less than 150 or more than 200&nbsp;cm.
 
Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum <math>\operatorname{PMF}(0) + \operatorname{PMF}(2) + \operatorname{PMF}(4) + \cdots</math>.
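Such an infinite sum can be evaluated concretely once a PMF is fixed; the geometric mass function used below, pmf(k) = (1/2)^(k+1), is an assumption made for the sketch, not a model of real family sizes.

```python
# Assumed PMF on the non-negative integers: pmf(k) = (1/2)**(k+1), which sums to 1.
def pmf(k):
    return 0.5 ** (k + 1)

# P(even number of children) = pmf(0) + pmf(2) + pmf(4) + ...
# Truncating at k = 200 makes the neglected tail smaller than 2**-201.
p_even = sum(pmf(k) for k in range(0, 201, 2))
```

Analytically this is a geometric series: (1/2) &middot; 1/(1 &minus; 1/4) = 2/3.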
In examples such as these, the [[sample space]] is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person; this makes it possible, for example, to ask whether such random variables are correlated.
 
If <math display = "inline">\{a_n\}, \{b_n\}</math> are countable sets of real numbers, <math display="inline">b_n >0</math> and <math display="inline">\sum_n b_n=1</math>, then <math display="inline"> F=\sum_n b_n \delta_{a_n}(x)</math> is a discrete distribution function. Here <math> \delta_t(x) = 0</math> for <math> x < t</math>, <math> \delta_t(x) = 1</math> for <math> x \ge t</math>. Taking for instance an enumeration of all rational numbers as <math>\{a_n\}</math>, one gets a discrete distribution function that is not a [[step function]] ([[piecewise]] constant).
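This construction can be sketched with a small finite family of atoms; the particular values of <math>a_n</math> and <math>b_n</math> below are assumed purely for illustration.

```python
from fractions import Fraction

# Assumed atoms a_n and weights b_n (each b_n > 0, summing to 1).
atoms   = [Fraction(0), Fraction(1, 2), Fraction(1)]
weights = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]

def delta(t, x):
    """delta_t(x): the unit step at t (0 for x < t, 1 for x >= t)."""
    return 1 if x >= t else 0

def F(x):
    """F(x) = sum_n b_n * delta_{a_n}(x), a discrete distribution function."""
    return sum(b * delta(a, x) for a, b in zip(atoms, weights))
```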
 
This notion is typically the least useful in probability theory because in practice and in theory, the underlying [[measure space]] of the [[Experiment (probability theory)|experiment]] is rarely explicitly characterized or even characterizable.
 
===Practical difference between notions of equivalence===
 
Since we rarely explicitly construct the probability space underlying a random variable, the difference between these notions of equivalence is somewhat subtle. Essentially, two random variables considered ''in isolation'' are "practically equivalent" if they are equal in distribution; but once we relate them to ''other'' random variables defined on the same probability space, they only remain "practically equivalent" if they are equal almost surely.
 
For example, consider the real random variables ''A'', ''B'', ''C'', and ''D'' all defined on the same probability space. Suppose that ''A'' and ''B'' are equal almost surely (<math>A \; \stackrel{\text{a.s.}}{=} \; B</math>), but ''A'' and ''C'' are only equal in distribution (<math>A \stackrel{d}{=} C</math>). Then <math> A + D \; \stackrel{\text{a.s.}}{=} \; B + D</math>, but in general <math> A + D \; \neq \; C + D</math> (not even in distribution). Similarly, we have that the expectation values <math> \mathbb{E}(AD) = \mathbb{E}(BD)</math>, but in general <math> \mathbb{E}(AD) \neq \mathbb{E}(CD)</math>. Therefore, two random variables that are equal in distribution (but not equal almost surely) can have different [[covariance|covariances]] with a third random variable.
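This can be checked exactly on a toy probability space; the four-outcome space and the particular variables below are our own illustration, with <math>C = -A</math> serving as a variable that equals <math>A</math> in distribution only.

```python
from fractions import Fraction

# Four equally likely outcomes; list entry i is a variable's value on outcome i.
A = [-1, -1, 1, 1]
B = list(A)          # B equals A on every outcome: equal almost surely
C = [-a for a in A]  # C = -A: same distribution as A (symmetric), but not a.s. equal
D = list(A)

def E_prod(X, Y):
    """Expectation of the product XY under the uniform measure."""
    return Fraction(sum(x * y for x, y in zip(X, Y)), len(X))
```

Here E(AD) = E(BD) = 1 while E(CD) = &minus;1, so equality in distribution alone does not pin down covariances with a third variable.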
 
==Convergence==