This page allows you to examine the variables generated by the Edit Filter for an individual change.

Variables generated for this change

VariableValue
Whether or not the edit is marked as minor (no longer in use) (minor_edit)
false
Edit count of the user (user_editcount)
0
Name of the user account (user_name)
'MasterEzzy'
Age of the user account (user_age)
208
Groups (including implicit) the user is in (user_groups)
[ 0 => '*', 1 => 'user' ]
Rights that the user has (user_rights)
[ 0 => 'createaccount', 1 => 'read', 2 => 'edit', 3 => 'createtalk', 4 => 'writeapi', 5 => 'editmyusercss', 6 => 'editmyuserjs', 7 => 'viewmywatchlist', 8 => 'editmywatchlist', 9 => 'viewmyprivateinfo', 10 => 'editmyprivateinfo', 11 => 'editmyoptions', 12 => 'abusefilter-view', 13 => 'abusefilter-log', 14 => 'abusefilter-log-detail', 15 => 'centralauth-merge', 16 => 'vipsscaler-test', 17 => 'ep-bereviewer', 18 => 'collectionsaveasuserpage', 19 => 'reupload-own', 20 => 'move-rootuserpages', 21 => 'move-categorypages', 22 => 'createpage', 23 => 'minoredit', 24 => 'purge', 25 => 'sendemail', 26 => 'applychangetags', 27 => 'ep-enroll', 28 => 'mwoauthmanagemygrants' ]
Global groups that the user is in (global_user_groups)
[]
Whether or not a user is editing through the mobile interface (user_mobile)
true
Page ID (page_id)
25685
Page namespace (page_namespace)
0
Page title without namespace (page_title)
'Random variable'
Full page title (page_prefixedtitle)
'Random variable'
Last ten users to contribute to the page (page_recent_contributors)
[ 0 => 'Golopotw', 1 => '72.182.55.186', 2 => 'Tsirel', 3 => 'AnomieBOT', 4 => 'Nbro', 5 => 'Dicklyon', 6 => '115.64.140.60', 7 => 'Eassin', 8 => '119.93.155.192', 9 => '2A00:23C4:6266:2400:C4A0:CCFC:9B1:DA69' ]
Action (action)
'edit'
Edit summary/reason (summary)
'Added content'
Old content model (old_content_model)
'wikitext'
New content model (new_content_model)
'wikitext'
Old page wikitext, before the edit (old_wikitext)
'{{More footnotes|date=February 2012}} {{Probability fundamentals}} In [[probability and statistics]], a '''random variable''', '''random quantity''', '''aleatory variable''', or '''stochastic variable''' is a variable whose possible values are numerical [[Outcome (probability)|outcomes]] of a [[Randomness|random]] phenomenon.<ref>{{cite book|last1=Blitzstein|first1=Joe|last2=Hwang|first2=Jessica|title=Introduction to Probability|date=2014|publisher=CRC Press|isbn=9781466575592}}</ref> As a function, a random variable is required to be [[Measurable function|measurable]], which rules out certain [[Pathological_(mathematics)|pathological]] cases where the quantity which the random variable returns is infinitely sensitive to small changes in the outcome. It is common that these outcomes depend on some physical variables that are not well understood. For example, when you toss a coin, the final outcome of heads or tails depends on the uncertain physics. Which outcome will be observed is not certain. Of course the coin could get caught in a crack in the floor, but such a possibility is excluded from consideration. The [[___domain of a function|___domain]] of a random variable is the set of possible outcomes. In the case of the coin, there are only two possible outcomes, namely heads or tails. Since one of these outcomes must occur, either the event that the coin lands heads or the event that the coin lands tails must have non-zero probability. A random variable is defined as a [[function (mathematics)|function]] that maps outcomes to numerical quantities (labels), typically [[real numbers]]. In this sense, it is a procedure for assigning a numerical quantity to each physical outcome, and, contrary to its name, this procedure itself is neither random nor variable. What is random is the unstable physics that describes how the coin lands, and the uncertainty of which outcome will actually be observed. A random variable's possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the possible outcomes of a past experiment whose already-existing value is uncertain (for example, due to imprecise measurements or [[quantum uncertainty]]). They may also conceptually represent either the results of an "objectively" random process (such as rolling a die) or the "subjective" randomness that results from incomplete knowledge of a quantity. The meaning of the probabilities assigned to the potential values of a random variable is not part of [[probability theory]] itself but is instead related to philosophical arguments over the [[interpretation of probability]]. The mathematics works the same regardless of the particular interpretation in use. A random variable has a [[probability distribution]], which specifies the probability that its value falls in any given interval. Random variables can be [[Discrete random variable|discrete]], that is, taking any of a specified finite or countable list of values, endowed with a [[probability mass function]] characteristic of the random variable's probability distribution; or [[Continuous random variable|continuous]], taking any numerical value in an interval or collection of intervals, via a [[probability density function]] that is characteristic of the random variable's probability distribution; or a mixture of both types. Two random variables with the same probability distribution can still differ in terms of their associations with, or [[independence (probability theory)|independence]] from, other random variables. The realizations of a random variable, that is, the results of randomly choosing values according to the variable's probability distribution function, are called [[random variate]]s. The formal mathematical treatment of random variables is a topic in [[probability theory]]. In that context, a random variable is understood as a [[Measurable function|function]] defined on a [[sample space]] whose outputs are numerical values.<ref name="UCSB">{{cite web | title = Economics 245A – Introduction to Measure Theory | url = http://econ.ucsb.edu/~doug/245a/Lectures/Measure%20Theory.pdf | last = Steigerwald | first = Douglas G. | publisher = University of California, Santa Barbara | accessdate = April 26, 2013}}</ref> ==Definition== A ''random variable'' <math>X\colon \Omega \to E</math> is a [[measurable function]] from a set of possible [[outcome (probability)|outcome]]s <math> \Omega </math> to a [[measurable space]] <math> E</math>. The technical axiomatic definition requires <math>\Omega</math> to be a [[probability space]] (see [[#Measure-theoretic definition|Measure-theoretic definition]]). Usually <math>X</math> is real-valued (i.e. <math>E=\mathbb{R}</math>). The probability that <math>X</math> takes value in a measurable set <math>S\subset E</math> is written as: : <math>\operatorname {Pr} (X \in S) = P(\{\omega\in \Omega|X(\omega) \in S\})</math>, where <math>P</math> is the probability measure equipped with <math>\Omega</math>. ===Standard case=== In many cases, <math>E =</math> [[Real number|<math>\mathbb{R}</math>]]. In some contexts, the term ''[[random element]]'' (see [[#Extensions|Extensions]]) is used to denote a random variable not of this form. When the [[Image (mathematics)|image]] (or range) of <math>X</math> is finite or [[countably infinite]], the random variable is called a '''discrete random variable'''<ref name="Yates">{{cite book | last = Yates | first = Daniel S. | last2 = Moore | first2 = David S | last3 = Starnes | first3 = Daren S. | year = 2003 | title = The Practice of Statistics | edition = 2nd | publisher = [[W. H. Freeman and Company|Freeman]] | ___location = New York | url = http://bcs.whfreeman.com/yates2e/ | isbn = 978-0-7167-4773-4}}</ref>{{rp|399}} and its distribution can be described by a [[probability mass function]] which assigns a probability to each value in the image of <math>X</math>. If the image is uncountably infinite then <math>X</math> is called a '''continuous random variable'''. In the special case that it is [[absolutely continuous]], its distribution can be described by a [[probability density function]], which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous,<ref>{{cite book|author1=L. Castañeda |author2=V. Arunachalam |author3=S. Dharmaraja |last-author-amp=yes |title = Introduction to Probability and Stochastic Processes with Applications | year = 2012 | publisher= Wiley | page = 67 | url=https://books.google.com/books?id=zxXRn-Qmtk8C&pg=PA67 }}</ref> for example a [[mixture distribution]]. Such random variables cannot be described by a probability density or a probability mass function. Any random variable can be described by its [[cumulative distribution function]], which describes the probability that the random variable will be less than or equal to a certain value. ===Extensions=== The term "random variable" in statistics is traditionally limited to the [[real number|real-valued]] case (<math>E=\mathbb{R}</math>). In this case, the structure of the real numbers makes it possible to define quantities such as the [[expected value]] and [[variance]] of a random variable, its [[cumulative distribution function]], and the [[moment (mathematics)|moment]]s of its distribution. However, the definition above is valid for any [[measurable space]] <math>E</math> of values. Thus one can consider random elements of other sets <math>E</math>, such as random [[Boolean-valued function|boolean value]]s, [[categorical variable|categorical value]]s, [[Covariance matrix#Complex random vectors|complex numbers]], [[random vector|vector]]s, [[random matrix|matrices]], [[random sequence|sequence]]s, [[Tree (graph theory)|tree]]s, [[random compact set|set]]s, [[shape]]s, [[manifold]]s, and [[random function|function]]s. One may then specifically refer to a ''random variable of [[data type|type]] <math>E</math>'', or an ''<math>E</math>-valued random variable''. This more general concept of a [[random element]] is particularly useful in disciplines such as [[graph theory]], [[machine learning]], [[natural language processing]], and other fields in [[discrete mathematics]] and [[computer science]], where one is often interested in modeling the random variation of non-numerical [[data structure]]s. In some cases, it is nonetheless convenient to represent each element of <math>E</math> using one or more real numbers. In this case, a random element may optionally be represented as a [[random vector|vector of real-valued random variables]] (all defined on the same underlying probability space <math>\Omega</math>, which allows the different random variables to [[mutual information|covary]]). For example: *A random word may be represented as a random integer that serves as an index into the vocabulary of possible words. Alternatively, it can be represented as a random indicator vector whose length equals the size of the vocabulary, where the only values of positive probability are <math>(1 \ 0 \ 0 \ 0 \ \cdots)</math>, <math>(0 \ 1 \ 0 \ 0 \ \cdots)</math>, <math>(0 \ 0 \ 1 \ 0 \ \cdots)</math> and the position of the 1 indicates the word. *A random sentence of given length <math>N</math> may be represented as a vector of <math>N</math> random words. *A [[random graph]] on <math>N</math> given vertices may be represented as a <math>N \times N</math> matrix of random variables, whose values specify the [[adjacency matrix]] of the random graph. *A [[random function]] <math>F</math> may be represented as a collection of random variables <math>F(x)</math>, giving the function's values at the various points <math>x</math> in the function's ___domain. The <math>F(x)</math> are ordinary real-valued random variables provided that the function is real-valued. For example, a [[stochastic process]] is a random function of time, a [[random vector]] is a random function of some index set such as <math>1,2,\ldots, n</math>, and [[random field]] is a random function on any set (typically time, space, or a discrete set). ==Distribution functions== If a random variable <math>X\colon \Omega \to \mathbb{R}</math> defined on the probability space <math>(\Omega, \mathcal{F}, P)</math> is given, we can ask questions like "How likely is it that the value of <math>X</math> is equal to 2?". This is the same as the probability of the event <math>\{ \omega : X(\omega) = 2 \}\,\! </math> which is often written as <math>P(X = 2)\,\!</math> or <math>p_X(2)</math> for short. Recording all these probabilities of output ranges of a real-valued random variable <math>X</math> yields the [[probability distribution]] of <math>X</math>. The probability distribution "forgets" about the particular probability space used to define <math>X</math> and only records the probabilities of various values of <math>X</math>. Such a probability distribution can always be captured by its [[cumulative distribution function]] :<math>F_X(x) = \operatorname{P}(X \le x)</math> and sometimes also using a [[probability density function]], <math>p_X</math>. In [[measure theory|measure-theoretic]] terms, we use the random variable <math>X</math> to "push-forward" the measure <math>P</math> on <math>\Omega</math> to a measure <math>p_X</math> on <math>\mathbb{R}</math>. The underlying probability space <math>\Omega</math> is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as [[correlation and dependence]] or [[Independence (probability theory)|independence]] based on a [[joint distribution]] of two or more random variables on the same probability space. In practice, one often disposes of the space <math>\Omega</math> altogether and just puts a measure on <math>\mathbb{R}</math> that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article on [[Quantile function|quantile functions]] for fuller development. ==Examples== ===Discrete random variable=== In an experiment a person may be chosen at random, and one random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to the person's height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190&nbsp;cm, or the probability that the height is either less than 150 or more than 200&nbsp;cm. Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum <math>\operatorname{PMF}(0) + \operatorname{PMF}(2) + \operatorname{PMF}(4) + \cdots</math>. In examples such as these, the sample space (the set of all possible persons) is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed. ====Coin toss==== The possible outcomes for one coin toss can be described by the sample space <math>\Omega = \{\text{heads}, \text{tails}\}</math>. We can introduce a real-valued random variable <math>Y</math> that models a $1 payoff for a successful bet on heads as follows: :<math> Y(\omega) = \begin{cases} 1, & \text{if } \omega = \text{heads}, \\[6pt] 0, & \text{if } \omega = \text{tails}. \end{cases} </math> If the coin is a [[fair coin]], ''Y'' has a [[probability mass function]] <math>f_Y</math> given by: :<math> f_Y(y) = \begin{cases} \tfrac 12,& \text{if }y=1,\\[6pt] \tfrac 12,& \text{if }y=0, \end{cases} </math> ====Dice roll==== [[File:Dice Distribution (bar).svg| right | thumb | If the sample space is the set of possible numbers rolled on two dice, and the random variable of interest is the sum ''S'' of the numbers on the two dice, then ''S'' is a discrete random variable whose distribution is described by the [[probability mass function]] plotted as the height of picture columns here.]] A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbers ''n''<sub>1</sub> and ''n''<sub>2</sub> from {1, 2, 3, 4, 5, 6} (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variable ''X'' given by the function that maps the pair to the sum: :<math>X((n_1, n_2)) = n_1 + n_2</math> and (if the dice are [[fair die|fair]]) has a probability mass function ''ƒ''<sub>''X''</sub> given by: :<math>f_X(S) = \frac{\min(S-1, 13-S)}{36}, \text{ for } S \in \{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12\}</math> ===Continuous random variable=== An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case, '''''X''''' = the angle spun. Any real number has probability zero of being selected, but a positive probability can be assigned to any ''range'' of values. For example, the probability of choosing a number in [0, 180] is {{frac|1|2}}. Instead of speaking of a probability mass function, we say that the probability ''density'' of '''''X''''' is 1/360. The probability of a subset of [0,&nbsp;360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set. ===Mixed type=== An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, '''''X''''' = −1; otherwise '''''X''''' = the value of the spinner as in the preceding example. There is a probability of {{frac|1|2}} that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example. ==Measure-theoretic definition== The most formal, [[axiomatic]] definition of a random variable involves [[measure theory]]. Continuous random variables are defined in terms of [[set (mathematics)|set]]s of numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. the [[Banach–Tarski paradox]]) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed a [[sigma-algebra]] to constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, the [[Borel σ-algebra]], which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite or [[countably infinite]] number of [[union (set theory)|union]]s and/or [[intersection (set theory)|intersection]]s of such intervals.<ref name="UCSB" /> The measure-theoretic definition is as follows. Let <math>(\Omega, \mathcal{F}, P)</math> be a [[probability space]] and <math>(E, \mathcal{E})</math> a [[measurable space]]. Then an '''<math>(E, \mathcal{E})</math>-valued random variable''' is a measurable function <math>X\colon \Omega \to E</math>, which means that, for every subset <math>B\in\mathcal{E}</math>, its [[preimage]] <math>X^{-1}(B)\in \mathcal{F}</math> where <math>X^{-1}(B) = \{\omega : X(\omega)\in B\}</math>.<ref>{{harvtxt|Fristedt|Gray|1996|loc=page 11}}</ref> This definition enables us to measure any subset <math>B\in \mathcal{E}</math> in the target space by looking at its preimage, which by assumption is measurable. In more intuitive terms, a member of <math>\Omega</math> is a possible outcome, a member of <math>\mathcal{F}</math> is a measurable subset of possible outcomes, the function <math>P</math> gives the probability of each such measurable subset, <math>E</math> represents the set of values that the random variable can take (such as the set of real numbers), and a member of <math>\mathcal{E}</math> is a "well-behaved" (measurable) subset of <math>E</math> (those for which you might want to find the probability). The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability. When <math>E</math> is a [[topological space]], then the most common choice for the [[σ-algebra]] <math>\mathcal{E}</math> is the [[Borel σ-algebra]] <math>\mathcal{B}(E)</math>, which is the σ-algebra generated by the collection of all open sets in <math>E</math>. In such case the <math>(E, \mathcal{E})</math>-valued random variable is called the '''<math>E</math>-valued random variable'''. Moreover, when space <math>E</math> is the real line <math>\mathbb{R}</math>, then such a real-valued random variable is called simply the '''random variable'''. ===Real-valued random variables=== In this case the observation space is the set of real numbers. Recall, <math>(\Omega, \mathcal{F}, P)</math> is the probability space. For real observation space, the function <math>X\colon \Omega \rightarrow \mathbb{R}</math> is a real-valued random variable if :<math>\{ \omega : X(\omega) \le r \} \in \mathcal{F} \qquad \forall r \in \mathbb{R}.</math> This definition is a special case of the above because the set <math>\{(-\infty, r]: r \in \R\}</math> generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that <math>\{ \omega : X(\omega) \le r \} = X^{-1}((-\infty, r])</math>. ==Moments== The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of [[expected value]] of a random variable, denoted <math>\operatorname{E}[X]</math>, and also called the '''first [[Moment (mathematics)|moment]].''' In general, <math>\operatorname{E}[f(X)]</math> is not equal to <math>f(\operatorname{E}[X])</math>. Once the "average value" is known, one could then ask how far from this average value the values of <math>X</math> typically are, a question that is answered by the [[variance]] and [[standard deviation]] of a random variable. <math>\operatorname{E}[X]</math> can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations of <math>X</math>. Mathematically, this is known as the (generalised) [[problem of moments]]: for a given class of random variables <math>X</math>, find a collection <math>\{f_i\}</math> of functions such that the expectation values <math>\operatorname{E}[f_i(X)]</math> fully characterise the distribution of the random variable <math>X</math>. Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.). If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity function <math>f(X)=X</math> of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for a [[categorical variable|categorical]] random variable ''X'' that can take on the [[nominal data|nominal]] values "red", "blue" or "green", the real-valued function <math>[X = \text{green}]</math> can be constructed; this uses the [[Iverson bracket]], and has the value 1 if <math>X</math> has the value "green", 0 otherwise. Then, the [[expected value]] and other moments of this function can be determined. ==Functions of random variables== A new random variable ''Y'' can be defined by applying a real [[Measurable function|Borel measurable function]] <math>g\colon \mathbb{R} \rightarrow \mathbb{R}</math> to the outcomes of a [[real-valued]] random variable <math>X</math>. That is, <math>Y=g(x)</math>. The [[cumulative distribution function]] of <math>Y</math> is then :<math>F_Y(y) = \operatorname{P}(g(X) \le y).</math> If function <math>g</math> is invertible (i.e., <math>g^{-1}</math> exists) and is either increasing or decreasing, then the previous relation can be extended to obtain :<math>F_Y(y) = \operatorname{P}(g(X) \le y) = \begin{cases} \operatorname{P}(X \le g^{-1}(y)) = F_X(g^{-1}(y)), & \text{if } g^{-1} \text{ increasing} ,\\ \\ \operatorname{P}(X \ge g^{-1}(y)) = 1 - F_X(g^{-1}(y)), & \text{if } g^{-1} \text{ decreasing} . \end{cases}</math> With the same hypotheses of [[Inverse function|invertibility]] of <math>g</math>, assuming also differentiability, the relation between the [[probability density function]]s can be found by differentiating both sides of the above expression with respect to <math>y</math>, in order to obtain :<math>f_Y(y) = f_X(g^{-1}(y)) \left| \frac{d g^{-1}(y)}{d y} \right|.</math> If there is no invertibility of <math>g</math> but each <math>y</math> admits at most a countable number of roots (i.e., a finite, or countably infinite, number of <math>x_i</math> such that <math>y = g(x_i)</math>) then the previous relation between the [[probability density function]]s can be generalized with :<math>f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{d g_{i}^{-1}(y)}{d y} \right| </math> where <math>x_i = g_i^{-1}(y)</math>. The formulas for densities do not demand <math>g</math> to be increasing. In the measure-theoretic, axiomatic approach to probability, if a random variable <math>X</math> on <math>\Omega</math> and a [[measurable function|Borel measurable function]] <math>g\colon \mathbb{R} \rightarrow \mathbb{R}</math>, then <math>Y = g(X)</math> will also be a random variable on <math>\Omega</math>, since the composition of measurable functions is also measurable. (However, this is not true if <math>g</math> is [[Lebesgue measurable]].) The same procedure that allowed one to go from a probability space <math>(\Omega, P) </math> to <math>(\mathbb{R}, dF_{X})</math> can be used to obtain the distribution of <math>Y</math>. ===Example 1=== Let <math>X</math> be a real-valued, [[continuous random variable]] and let <math>Y = X^2</math>. :<math>F_Y(y) = \operatorname{P}(X^2 \le y).</math> If <math>y < 0</math>, then <math>P(X^2 \leq y) = 0</math>, so :<math>F_Y(y) = 0\qquad\hbox{if}\quad y < 0.</math> If <math>y \geq 0</math>, then :<math>\operatorname{P}(X^2 \le y) = \operatorname{P}(|X| \le \sqrt{y}) = \operatorname{P}(-\sqrt{y} \le X \le \sqrt{y}),</math> so :<math>F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y})\qquad\hbox{if}\quad y \ge 0.</math> ===Example 2=== Suppose <math>X</math> is a random variable with a cumulative distribution :<math> F_{X}(x) = P(X \leq x) = \frac{1}{(1 + e^{-x})^{\theta}}</math> where <math>\theta > 0</math> is a fixed parameter. Consider the random variable <math> Y = \mathrm{log}(1 + e^{-X}).</math> Then, :<math> F_{Y}(y) = P(Y \leq y) = P(\mathrm{log}(1 + e^{-X}) \leq y) = P(X \geq -\mathrm{log}(e^{y} - 1)).\,</math> The last expression can be calculated in terms of the cumulative distribution of <math>X,</math> so :<math> F_{Y}(y) = 1 - F_{X}(-\mathrm{log}(e^{y} - 1)) \, </math> :::<math> = 1 - \frac{1}{(1 + e^{\mathrm{log}(e^{y} - 1)})^{\theta}} </math> :::<math> = 1 - \frac{1}{(1 + e^{y} - 1)^{\theta}} </math> :::<math> = 1 - e^{-y \theta}.\, </math> which is the [[cumulative distribution function]] (cdf) of an [[exponential distribution]]. ===Example 3=== Suppose <math>X</math> is a random variable with a [[standard normal distribution]], whose density is :<math> f_X(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}.</math> Consider the random variable <math> Y = X^2.</math> We can find the density using the above formula for a change of variables: :<math>f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{d g_{i}^{-1}(y)}{d y} \right|. </math> In this case the change is not [[monotonic]], because every value of <math>Y</math> has two corresponding values of <math>X</math> (one positive and negative). However, because of symmetry, both halves will transform identically, i.e., :<math>f_Y(y) = 2f_X(g^{-1}(y)) \left| \frac{d g^{-1}(y)}{d y} \right|.</math> The inverse transformation is :<math>x = g^{-1}(y) = \sqrt{y}</math> and its derivative is :<math>\frac{d g^{-1}(y)}{d y} = \frac{1}{2\sqrt{y}} .</math> Then, :<math> f_Y(y) = 2\frac{1}{\sqrt{2\pi}}e^{-y/2} \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi y}}e^{-y/2}. </math> This is a [[chi-squared distribution]] with one [[degree of freedom (statistics)|degree of freedom]]. ==Equivalence of random variables== There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, or equal in distribution. In increasing order of strength, the precise definition of these notions of equivalence is given below. ===Equality in distribution=== If the sample space is a subset of the real line, random variables ''X'' and ''Y'' are ''equal in distribution'' (denoted <math>X \stackrel{d}{=} Y</math>) if they have the same distribution functions: :<math>\operatorname{P}(X \le x) = \operatorname{P}(Y \le x)\quad\hbox{for all}\quad x.</math> To be equal in distribution, random variables need not be defined on the same probability space. Two random variables having equal [[moment generating function]]s have the same distribution. This provides, for example, a useful method of checking equality of certain functions of [[iid|i.i.d. random variables]]. However, the moment generating function exists only for distributions that have a defined [[Laplace transform]]. ===Almost sure equality=== Two random variables ''X'' and ''Y'' are ''equal almost surely'' (denoted <math>X \stackrel{a.s.}{=} Y</math>) if, and only if, the probability that they are different is zero: :<math>\operatorname{P}(X \neq Y) = 0.</math> For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance: :<math>d_\infty(X,Y)=\mathrm{ess } \sup_\omega|X(\omega)-Y(\omega)|,</math> where "ess sup" represents the [[essential supremum]] in the sense of [[measure theory]]. ===Equality=== Finally, the two random variables ''X'' and ''Y'' are ''equal'' if they are equal as functions on their measurable space: :<math>X(\omega)=Y(\omega)\qquad\hbox{for all }\omega.</math> ==Convergence== A significant theme in mathematical statistics consists of obtaining convergence results for certain [[sequence]]s of random variables; for instance the [[law of large numbers]] and the [[central limit theorem]]. There are various senses in which a sequence (''X''<sub>''n''</sub>) of random variables can converge to a random variable ''X''. These are explained in the article on [[convergence of random variables]]. ==See also== {{Portal|Statistics}} {{colbegin||22em}} *[[Aleatoricism]] *[[Algebra of random variables]] *[[Event (probability theory)]] *[[Multivariate random variable]] *[[Observable variable]] *[[Probability distribution]] *[[Random element]] *[[Random function]] *[[Random measure]] *[[Random number generator]] produces a random value *[[Random vector]] *[[Randomness]] *[[Stochastic process]] {{colend}} ==References== {{reflist}} ===Literature=== {{refbegin}} * {{cite book | last1 = Fristedt | first1 = Bert | last2 = Gray | first2 = Lawrence | title = A modern approach to probability theory | year = 1996 | publisher = Birkhäuser | ___location = Boston | url = https://books.google.com/books/about/A_Modern_Approach_to_Probability_Theory.html?id=5D5O8xyM-kMC | isbn = 3-7643-3807-5 | ref = harv}} * {{cite book | last = Kallenberg | first = Olav | authorlink = Olav Kallenberg | year = 1986 | title = Random Measures | edition = 4th | publisher = [[Akademie Verlag]] | ___location = Berlin | mr = 0854102 | ISBN = 0-12-394960-2 | url = https://books.google.com/books/about/Random_measures.html?id=bBnvAAAAMAAJ}} * {{cite book | last = Kallenberg | first = Olav | year = 2001 | title = Foundations of Modern Probability | edition = 2nd | publisher = [[Springer Verlag]] | ___location = Berlin | ISBN = 0-387-95313-2 | url = https://books.google.com/books/about/Foundations_of_Modern_Probability.html?hl=de&id=L6fhXh13OyMC}} * {{cite book | authorlink = Athanasios Papoulis | last = Papoulis | first = Athanasios | year = 1965 | title = Probability, Random Variables, and Stochastic Processes | publisher = [[McGraw–Hill]] | ___location = Tokyo | edition = 9th | ISBN = 0-07-119981-0 | url = http://www.mhhe.com/engcs/electrical/papoulis/}} {{refend}} ==External links== *{{springer|title=Random variable|id=p/r077360}} * {{citation | last = Zukerman| first = Moshe| year = 2014 | title = Introduction to Queueing Theory and Stochastic Teletraffic Models | url=http://www.ee.cityu.edu.hk/~zukerman/classnotes.pdf}} * {{citation | last = Zukerman| first = Moshe| year = 2014| title = Basic Probability Topics | url=http://www.ee.cityu.edu.hk/~zukerman/probability.pdf}} {{Statistics|state = collapsed}} {{DEFAULTSORT:Random Variable}} [[Category:Statistical randomness]]'
New page wikitext, after the edit (new_wikitext)
'{{More footnotes|date=February 2012}} {{Probability fundamentals}} In [[probability and statistics]], a '''random variable''', '''random quantity''', '''aleatory variable''', or '''stochastic variable''' is a variable whose possible values are numerical [[Outcome (probability)|outcomes]] of a [[Randomness|random]] phenomenon.<ref>{{cite book|last1=Blitzstein|first1=Joe|last2=Hwang|first2=Jessica|title=Introduction to Probability|date=2014|publisher=CRC Press|isbn=9781466575592}}</ref> As a function, a random variable is required to be [[Measurable function|measurable]], which rules out certain [[Pathological_(mathematics)|pathological]] cases where the quantity which the random variable returns is infinitely sensitive to small changes in the outcome. If your lick BOOBS and pussy and ass everyday then you become good. Stick your dick in women. It is common that these outcomes depend on some physical variables that are not well understood. For example, when you toss a coin, the final outcome of heads or tails depends on the uncertain physics. Which outcome will be observed is not certain. Of course the coin could get caught in a crack in the floor, but such a possibility is excluded from consideration. The [[___domain of a function|___domain]] of a random variable is the set of possible outcomes. In the case of the coin, there are only two possible outcomes, namely heads or tails. Since one of these outcomes must occur, either the event that the coin lands heads or the event that the coin lands tails must have non-zero probability. A random variable is defined as a [[function (mathematics)|function]] that maps outcomes to numerical quantities (labels), typically [[real numbers]]. In this sense, it is a procedure for assigning a numerical quantity to each physical outcome, and, contrary to its name, this procedure itself is neither random nor variable. What is random is the unstable physics that describes how the coin lands, and the uncertainty of which outcome will actually be observed. A random variable's possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the possible outcomes of a past experiment whose already-existing value is uncertain (for example, due to imprecise measurements or [[quantum uncertainty]]). They may also conceptually represent either the results of an "objectively" random process (such as rolling a die) or the "subjective" randomness that results from incomplete knowledge of a quantity. The meaning of the probabilities assigned to the potential values of a random variable is not part of [[probability theory]] itself but is instead related to philosophical arguments over the [[interpretation of probability]]. The mathematics works the same regardless of the particular interpretation in use. A random variable has a [[probability distribution]], which specifies the probability that its value falls in any given interval. Random variables can be [[Discrete random variable|discrete]], that is, taking any of a specified finite or countable list of values, endowed with a [[probability mass function]] characteristic of the random variable's probability distribution; or [[Continuous random variable|continuous]], taking any numerical value in an interval or collection of intervals, via a [[probability density function]] that is characteristic of the random variable's probability distribution; or a mixture of both types. Two random variables with the same probability distribution can still differ in terms of their associations with, or [[independence (probability theory)|independence]] from, other random variables. The realizations of a random variable, that is, the results of randomly choosing values according to the variable's probability distribution function, are called [[random variate]]s. The formal mathematical treatment of random variables is a topic in [[probability theory]]. In that context, a random variable is understood as a [[Measurable function|function]] defined on a [[sample space]] whose outputs are numerical values.<ref name="UCSB">{{cite web | title = Economics 245A – Introduction to Measure Theory | url = http://econ.ucsb.edu/~doug/245a/Lectures/Measure%20Theory.pdf | last = Steigerwald | first = Douglas G. | publisher = University of California, Santa Barbara | accessdate = April 26, 2013}}</ref> ==Definition== A ''random variable'' <math>X\colon \Omega \to E</math> is a [[measurable function]] from a set of possible [[outcome (probability)|outcome]]s <math> \Omega </math> to a [[measurable space]] <math> E</math>. The technical axiomatic definition requires <math>\Omega</math> to be a [[probability space]] (see [[#Measure-theoretic definition|Measure-theoretic definition]]). Usually <math>X</math> is real-valued (i.e. <math>E=\mathbb{R}</math>). The probability that <math>X</math> takes value in a measurable set <math>S\subset E</math> is written as: : <math>\operatorname {Pr} (X \in S) = P(\{\omega\in \Omega|X(\omega) \in S\})</math>, where <math>P</math> is the probability measure equipped with <math>\Omega</math>. ===Standard case=== In many cases, <math>E =</math> [[Real number|<math>\mathbb{R}</math>]]. In some contexts, the term ''[[random element]]'' (see [[#Extensions|Extensions]]) is used to denote a random variable not of this form. When the [[Image (mathematics)|image]] (or range) of <math>X</math> is finite or [[countably infinite]], the random variable is called a '''discrete random variable'''<ref name="Yates">{{cite book | last = Yates | first = Daniel S. | last2 = Moore | first2 = David S | last3 = Starnes | first3 = Daren S. | year = 2003 | title = The Practice of Statistics | edition = 2nd | publisher = [[W. H. Freeman and Company|Freeman]] | ___location = New York | url = http://bcs.whfreeman.com/yates2e/ | isbn = 978-0-7167-4773-4}}</ref>{{rp|399}} and its distribution can be described by a [[probability mass function]] which assigns a probability to each value in the image of <math>X</math>. If the image is uncountably infinite then <math>X</math> is called a '''continuous random variable'''. In the special case that it is [[absolutely continuous]], its distribution can be described by a [[probability density function]], which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous,<ref>{{cite book|author1=L. Castañeda |author2=V. Arunachalam |author3=S. Dharmaraja |last-author-amp=yes |title = Introduction to Probability and Stochastic Processes with Applications | year = 2012 | publisher= Wiley | page = 67 | url=https://books.google.com/books?id=zxXRn-Qmtk8C&pg=PA67 }}</ref> for example a [[mixture distribution]]. Such random variables cannot be described by a probability density or a probability mass function. Any random variable can be described by its [[cumulative distribution function]], which describes the probability that the random variable will be less than or equal to a certain value. ===Extensions=== The term "random variable" in statistics is traditionally limited to the [[real number|real-valued]] case (<math>E=\mathbb{R}</math>). In this case, the structure of the real numbers makes it possible to define quantities such as the [[expected value]] and [[variance]] of a random variable, its [[cumulative distribution function]], and the [[moment (mathematics)|moment]]s of its distribution. However, the definition above is valid for any [[measurable space]] <math>E</math> of values. Thus one can consider random elements of other sets <math>E</math>, such as random [[Boolean-valued function|boolean value]]s, [[categorical variable|categorical value]]s, [[Covariance matrix#Complex random vectors|complex numbers]], [[random vector|vector]]s, [[random matrix|matrices]], [[random sequence|sequence]]s, [[Tree (graph theory)|tree]]s, [[random compact set|set]]s, [[shape]]s, [[manifold]]s, and [[random function|function]]s. One may then specifically refer to a ''random variable of [[data type|type]] <math>E</math>'', or an ''<math>E</math>-valued random variable''. This more general concept of a [[random element]] is particularly useful in disciplines such as [[graph theory]], [[machine learning]], [[natural language processing]], and other fields in [[discrete mathematics]] and [[computer science]], where one is often interested in modeling the random variation of non-numerical [[data structure]]s. In some cases, it is nonetheless convenient to represent each element of <math>E</math> using one or more real numbers. In this case, a random element may optionally be represented as a [[random vector|vector of real-valued random variables]] (all defined on the same underlying probability space <math>\Omega</math>, which allows the different random variables to [[mutual information|covary]]). For example: *A random word may be represented as a random integer that serves as an index into the vocabulary of possible words. Alternatively, it can be represented as a random indicator vector whose length equals the size of the vocabulary, where the only values of positive probability are <math>(1 \ 0 \ 0 \ 0 \ \cdots)</math>, <math>(0 \ 1 \ 0 \ 0 \ \cdots)</math>, <math>(0 \ 0 \ 1 \ 0 \ \cdots)</math> and the position of the 1 indicates the word. *A random sentence of given length <math>N</math> may be represented as a vector of <math>N</math> random words. *A [[random graph]] on <math>N</math> given vertices may be represented as a <math>N \times N</math> matrix of random variables, whose values specify the [[adjacency matrix]] of the random graph. *A [[random function]] <math>F</math> may be represented as a collection of random variables <math>F(x)</math>, giving the function's values at the various points <math>x</math> in the function's ___domain. The <math>F(x)</math> are ordinary real-valued random variables provided that the function is real-valued. For example, a [[stochastic process]] is a random function of time, a [[random vector]] is a random function of some index set such as <math>1,2,\ldots, n</math>, and [[random field]] is a random function on any set (typically time, space, or a discrete set). ==Distribution functions== If a random variable <math>X\colon \Omega \to \mathbb{R}</math> defined on the probability space <math>(\Omega, \mathcal{F}, P)</math> is given, we can ask questions like "How likely is it that the value of <math>X</math> is equal to 2?". This is the same as the probability of the event <math>\{ \omega : X(\omega) = 2 \}\,\! </math> which is often written as <math>P(X = 2)\,\!</math> or <math>p_X(2)</math> for short. Recording all these probabilities of output ranges of a real-valued random variable <math>X</math> yields the [[probability distribution]] of <math>X</math>. The probability distribution "forgets" about the particular probability space used to define <math>X</math> and only records the probabilities of various values of <math>X</math>. Such a probability distribution can always be captured by its [[cumulative distribution function]] :<math>F_X(x) = \operatorname{P}(X \le x)</math> and sometimes also using a [[probability density function]], <math>p_X</math>. In [[measure theory|measure-theoretic]] terms, we use the random variable <math>X</math> to "push-forward" the measure <math>P</math> on <math>\Omega</math> to a measure <math>p_X</math> on <math>\mathbb{R}</math>. The underlying probability space <math>\Omega</math> is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as [[correlation and dependence]] or [[Independence (probability theory)|independence]] based on a [[joint distribution]] of two or more random variables on the same probability space. In practice, one often disposes of the space <math>\Omega</math> altogether and just puts a measure on <math>\mathbb{R}</math> that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article on [[Quantile function|quantile functions]] for fuller development. ==Examples== ===Discrete random variable=== In an experiment a person may be chosen at random, and one random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to the person's height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190&nbsp;cm, or the probability that the height is either less than 150 or more than 200&nbsp;cm. Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum <math>\operatorname{PMF}(0) + \operatorname{PMF}(2) + \operatorname{PMF}(4) + \cdots</math>. In examples such as these, the sample space (the set of all possible persons) is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed. ====Coin toss==== The possible outcomes for one coin toss can be described by the sample space <math>\Omega = \{\text{heads}, \text{tails}\}</math>. We can introduce a real-valued random variable <math>Y</math> that models a $1 payoff for a successful bet on heads as follows: :<math> Y(\omega) = \begin{cases} 1, & \text{if } \omega = \text{heads}, \\[6pt] 0, & \text{if } \omega = \text{tails}. \end{cases} </math> If the coin is a [[fair coin]], ''Y'' has a [[probability mass function]] <math>f_Y</math> given by: :<math> f_Y(y) = \begin{cases} \tfrac 12,& \text{if }y=1,\\[6pt] \tfrac 12,& \text{if }y=0, \end{cases} </math> ====Dice roll==== [[File:Dice Distribution (bar).svg| right | thumb | If the sample space is the set of possible numbers rolled on two dice, and the random variable of interest is the sum ''S'' of the numbers on the two dice, then ''S'' is a discrete random variable whose distribution is described by the [[probability mass function]] plotted as the height of picture columns here.]] A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbers ''n''<sub>1</sub> and ''n''<sub>2</sub> from {1, 2, 3, 4, 5, 6} (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variable ''X'' given by the function that maps the pair to the sum: :<math>X((n_1, n_2)) = n_1 + n_2</math> and (if the dice are [[fair die|fair]]) has a probability mass function ''ƒ''<sub>''X''</sub> given by: :<math>f_X(S) = \frac{\min(S-1, 13-S)}{36}, \text{ for } S \in \{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12\}</math> ===Continuous random variable=== An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case, '''''X''''' = the angle spun. Any real number has probability zero of being selected, but a positive probability can be assigned to any ''range'' of values. For example, the probability of choosing a number in [0, 180] is {{frac|1|2}}. Instead of speaking of a probability mass function, we say that the probability ''density'' of '''''X''''' is 1/360. The probability of a subset of [0,&nbsp;360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set. ===Mixed type=== An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, '''''X''''' = −1; otherwise '''''X''''' = the value of the spinner as in the preceding example. There is a probability of {{frac|1|2}} that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example. ==Measure-theoretic definition== The most formal, [[axiomatic]] definition of a random variable involves [[measure theory]]. Continuous random variables are defined in terms of [[set (mathematics)|set]]s of numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. the [[Banach–Tarski paradox]]) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed a [[sigma-algebra]] to constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, the [[Borel σ-algebra]], which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite or [[countably infinite]] number of [[union (set theory)|union]]s and/or [[intersection (set theory)|intersection]]s of such intervals.<ref name="UCSB" /> The measure-theoretic definition is as follows. Let <math>(\Omega, \mathcal{F}, P)</math> be a [[probability space]] and <math>(E, \mathcal{E})</math> a [[measurable space]]. Then an '''<math>(E, \mathcal{E})</math>-valued random variable''' is a measurable function <math>X\colon \Omega \to E</math>, which means that, for every subset <math>B\in\mathcal{E}</math>, its [[preimage]] <math>X^{-1}(B)\in \mathcal{F}</math> where <math>X^{-1}(B) = \{\omega : X(\omega)\in B\}</math>.<ref>{{harvtxt|Fristedt|Gray|1996|loc=page 11}}</ref> This definition enables us to measure any subset <math>B\in \mathcal{E}</math> in the target space by looking at its preimage, which by assumption is measurable. In more intuitive terms, a member of <math>\Omega</math> is a possible outcome, a member of <math>\mathcal{F}</math> is a measurable subset of possible outcomes, the function <math>P</math> gives the probability of each such measurable subset, <math>E</math> represents the set of values that the random variable can take (such as the set of real numbers), and a member of <math>\mathcal{E}</math> is a "well-behaved" (measurable) subset of <math>E</math> (those for which you might want to find the probability). The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability. When <math>E</math> is a [[topological space]], then the most common choice for the [[σ-algebra]] <math>\mathcal{E}</math> is the [[Borel σ-algebra]] <math>\mathcal{B}(E)</math>, which is the σ-algebra generated by the collection of all open sets in <math>E</math>. In such case the <math>(E, \mathcal{E})</math>-valued random variable is called the '''<math>E</math>-valued random variable'''. Moreover, when space <math>E</math> is the real line <math>\mathbb{R}</math>, then such a real-valued random variable is called simply the '''random variable'''. ===Real-valued random variables=== In this case the observation space is the set of real numbers. Recall, <math>(\Omega, \mathcal{F}, P)</math> is the probability space. For real observation space, the function <math>X\colon \Omega \rightarrow \mathbb{R}</math> is a real-valued random variable if :<math>\{ \omega : X(\omega) \le r \} \in \mathcal{F} \qquad \forall r \in \mathbb{R}.</math> This definition is a special case of the above because the set <math>\{(-\infty, r]: r \in \R\}</math> generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that <math>\{ \omega : X(\omega) \le r \} = X^{-1}((-\infty, r])</math>. ==Moments== The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of [[expected value]] of a random variable, denoted <math>\operatorname{E}[X]</math>, and also called the '''first [[Moment (mathematics)|moment]].''' In general, <math>\operatorname{E}[f(X)]</math> is not equal to <math>f(\operatorname{E}[X])</math>. Once the "average value" is known, one could then ask how far from this average value the values of <math>X</math> typically are, a question that is answered by the [[variance]] and [[standard deviation]] of a random variable. <math>\operatorname{E}[X]</math> can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations of <math>X</math>. Mathematically, this is known as the (generalised) [[problem of moments]]: for a given class of random variables <math>X</math>, find a collection <math>\{f_i\}</math> of functions such that the expectation values <math>\operatorname{E}[f_i(X)]</math> fully characterise the distribution of the random variable <math>X</math>. Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.). If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity function <math>f(X)=X</math> of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for a [[categorical variable|categorical]] random variable ''X'' that can take on the [[nominal data|nominal]] values "red", "blue" or "green", the real-valued function <math>[X = \text{green}]</math> can be constructed; this uses the [[Iverson bracket]], and has the value 1 if <math>X</math> has the value "green", 0 otherwise. Then, the [[expected value]] and other moments of this function can be determined. ==Functions of random variables== A new random variable ''Y'' can be defined by applying a real [[Measurable function|Borel measurable function]] <math>g\colon \mathbb{R} \rightarrow \mathbb{R}</math> to the outcomes of a [[real-valued]] random variable <math>X</math>. That is, <math>Y=g(x)</math>. The [[cumulative distribution function]] of <math>Y</math> is then :<math>F_Y(y) = \operatorname{P}(g(X) \le y).</math> If function <math>g</math> is invertible (i.e., <math>g^{-1}</math> exists) and is either increasing or decreasing, then the previous relation can be extended to obtain :<math>F_Y(y) = \operatorname{P}(g(X) \le y) = \begin{cases} \operatorname{P}(X \le g^{-1}(y)) = F_X(g^{-1}(y)), & \text{if } g^{-1} \text{ increasing} ,\\ \\ \operatorname{P}(X \ge g^{-1}(y)) = 1 - F_X(g^{-1}(y)), & \text{if } g^{-1} \text{ decreasing} . \end{cases}</math> With the same hypotheses of [[Inverse function|invertibility]] of <math>g</math>, assuming also differentiability, the relation between the [[probability density function]]s can be found by differentiating both sides of the above expression with respect to <math>y</math>, in order to obtain :<math>f_Y(y) = f_X(g^{-1}(y)) \left| \frac{d g^{-1}(y)}{d y} \right|.</math> If there is no invertibility of <math>g</math> but each <math>y</math> admits at most a countable number of roots (i.e., a finite, or countably infinite, number of <math>x_i</math> such that <math>y = g(x_i)</math>) then the previous relation between the [[probability density function]]s can be generalized with :<math>f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{d g_{i}^{-1}(y)}{d y} \right| </math> where <math>x_i = g_i^{-1}(y)</math>. The formulas for densities do not demand <math>g</math> to be increasing. In the measure-theoretic, axiomatic approach to probability, if a random variable <math>X</math> on <math>\Omega</math> and a [[measurable function|Borel measurable function]] <math>g\colon \mathbb{R} \rightarrow \mathbb{R}</math>, then <math>Y = g(X)</math> will also be a random variable on <math>\Omega</math>, since the composition of measurable functions is also measurable. (However, this is not true if <math>g</math> is [[Lebesgue measurable]].) The same procedure that allowed one to go from a probability space <math>(\Omega, P) </math> to <math>(\mathbb{R}, dF_{X})</math> can be used to obtain the distribution of <math>Y</math>. ===Example 1=== Let <math>X</math> be a real-valued, [[continuous random variable]] and let <math>Y = X^2</math>. :<math>F_Y(y) = \operatorname{P}(X^2 \le y).</math> If <math>y < 0</math>, then <math>P(X^2 \leq y) = 0</math>, so :<math>F_Y(y) = 0\qquad\hbox{if}\quad y < 0.</math> If <math>y \geq 0</math>, then :<math>\operatorname{P}(X^2 \le y) = \operatorname{P}(|X| \le \sqrt{y}) = \operatorname{P}(-\sqrt{y} \le X \le \sqrt{y}),</math> so :<math>F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y})\qquad\hbox{if}\quad y \ge 0.</math> ===Example 2=== Suppose <math>X</math> is a random variable with a cumulative distribution :<math> F_{X}(x) = P(X \leq x) = \frac{1}{(1 + e^{-x})^{\theta}}</math> where <math>\theta > 0</math> is a fixed parameter. Consider the random variable <math> Y = \mathrm{log}(1 + e^{-X}).</math> Then, :<math> F_{Y}(y) = P(Y \leq y) = P(\mathrm{log}(1 + e^{-X}) \leq y) = P(X \geq -\mathrm{log}(e^{y} - 1)).\,</math> The last expression can be calculated in terms of the cumulative distribution of <math>X,</math> so :<math> F_{Y}(y) = 1 - F_{X}(-\mathrm{log}(e^{y} - 1)) \, </math> :::<math> = 1 - \frac{1}{(1 + e^{\mathrm{log}(e^{y} - 1)})^{\theta}} </math> :::<math> = 1 - \frac{1}{(1 + e^{y} - 1)^{\theta}} </math> :::<math> = 1 - e^{-y \theta}.\, </math> which is the [[cumulative distribution function]] (cdf) of an [[exponential distribution]]. ===Example 3=== Suppose <math>X</math> is a random variable with a [[standard normal distribution]], whose density is :<math> f_X(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}.</math> Consider the random variable <math> Y = X^2.</math> We can find the density using the above formula for a change of variables: :<math>f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{d g_{i}^{-1}(y)}{d y} \right|. </math> In this case the change is not [[monotonic]], because every value of <math>Y</math> has two corresponding values of <math>X</math> (one positive and negative). However, because of symmetry, both halves will transform identically, i.e., :<math>f_Y(y) = 2f_X(g^{-1}(y)) \left| \frac{d g^{-1}(y)}{d y} \right|.</math> The inverse transformation is :<math>x = g^{-1}(y) = \sqrt{y}</math> and its derivative is :<math>\frac{d g^{-1}(y)}{d y} = \frac{1}{2\sqrt{y}} .</math> Then, :<math> f_Y(y) = 2\frac{1}{\sqrt{2\pi}}e^{-y/2} \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi y}}e^{-y/2}. </math> This is a [[chi-squared distribution]] with one [[degree of freedom (statistics)|degree of freedom]]. ==Equivalence of random variables== There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, or equal in distribution. In increasing order of strength, the precise definition of these notions of equivalence is given below. ===Equality in distribution=== If the sample space is a subset of the real line, random variables ''X'' and ''Y'' are ''equal in distribution'' (denoted <math>X \stackrel{d}{=} Y</math>) if they have the same distribution functions: :<math>\operatorname{P}(X \le x) = \operatorname{P}(Y \le x)\quad\hbox{for all}\quad x.</math> To be equal in distribution, random variables need not be defined on the same probability space. Two random variables having equal [[moment generating function]]s have the same distribution. This provides, for example, a useful method of checking equality of certain functions of [[iid|i.i.d. random variables]]. However, the moment generating function exists only for distributions that have a defined [[Laplace transform]]. ===Almost sure equality=== Two random variables ''X'' and ''Y'' are ''equal almost surely'' (denoted <math>X \stackrel{a.s.}{=} Y</math>) if, and only if, the probability that they are different is zero: :<math>\operatorname{P}(X \neq Y) = 0.</math> For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance: :<math>d_\infty(X,Y)=\mathrm{ess } \sup_\omega|X(\omega)-Y(\omega)|,</math> where "ess sup" represents the [[essential supremum]] in the sense of [[measure theory]]. ===Equality=== Finally, the two random variables ''X'' and ''Y'' are ''equal'' if they are equal as functions on their measurable space: :<math>X(\omega)=Y(\omega)\qquad\hbox{for all }\omega.</math> ==Convergence== A significant theme in mathematical statistics consists of obtaining convergence results for certain [[sequence]]s of random variables; for instance the [[law of large numbers]] and the [[central limit theorem]]. There are various senses in which a sequence (''X''<sub>''n''</sub>) of random variables can converge to a random variable ''X''. These are explained in the article on [[convergence of random variables]]. ==See also== {{Portal|Statistics}} {{colbegin||22em}} *[[Aleatoricism]] *[[Algebra of random variables]] *[[Event (probability theory)]] *[[Multivariate random variable]] *[[Observable variable]] *[[Probability distribution]] *[[Random element]] *[[Random function]] *[[Random measure]] *[[Random number generator]] produces a random value *[[Random vector]] *[[Randomness]] *[[Stochastic process]] {{colend}} ==References== {{reflist}} ===Literature=== {{refbegin}} * {{cite book | last1 = Fristedt | first1 = Bert | last2 = Gray | first2 = Lawrence | title = A modern approach to probability theory | year = 1996 | publisher = Birkhäuser | ___location = Boston | url = https://books.google.com/books/about/A_Modern_Approach_to_Probability_Theory.html?id=5D5O8xyM-kMC | isbn = 3-7643-3807-5 | ref = harv}} * {{cite book | last = Kallenberg | first = Olav | authorlink = Olav Kallenberg | year = 1986 | title = Random Measures | edition = 4th | publisher = [[Akademie Verlag]] | ___location = Berlin | mr = 0854102 | ISBN = 0-12-394960-2 | url = https://books.google.com/books/about/Random_measures.html?id=bBnvAAAAMAAJ}} * {{cite book | last = Kallenberg | first = Olav | year = 2001 | title = Foundations of Modern Probability | edition = 2nd | publisher = [[Springer Verlag]] | ___location = Berlin | ISBN = 0-387-95313-2 | url = https://books.google.com/books/about/Foundations_of_Modern_Probability.html?hl=de&id=L6fhXh13OyMC}} * {{cite book | authorlink = Athanasios Papoulis | last = Papoulis | first = Athanasios | year = 1965 | title = Probability, Random Variables, and Stochastic Processes | publisher = [[McGraw–Hill]] | ___location = Tokyo | edition = 9th | ISBN = 0-07-119981-0 | url = http://www.mhhe.com/engcs/electrical/papoulis/}} {{refend}} ==External links== *{{springer|title=Random variable|id=p/r077360}} * {{citation | last = Zukerman| first = Moshe| year = 2014 | title = Introduction to Queueing Theory and Stochastic Teletraffic Models | url=http://www.ee.cityu.edu.hk/~zukerman/classnotes.pdf}} * {{citation | last = Zukerman| first = Moshe| year = 2014| title = Basic Probability Topics | url=http://www.ee.cityu.edu.hk/~zukerman/probability.pdf}} {{Statistics|state = collapsed}} {{DEFAULTSORT:Random Variable}} [[Category:Statistical randomness]]'
Unified diff of changes made by edit (edit_diff)
'@@ -2,5 +2,5 @@ {{Probability fundamentals}} -In [[probability and statistics]], a '''random variable''', '''random quantity''', '''aleatory variable''', or '''stochastic variable''' is a variable whose possible values are numerical [[Outcome (probability)|outcomes]] of a [[Randomness|random]] phenomenon.<ref>{{cite book|last1=Blitzstein|first1=Joe|last2=Hwang|first2=Jessica|title=Introduction to Probability|date=2014|publisher=CRC Press|isbn=9781466575592}}</ref> As a function, a random variable is required to be [[Measurable function|measurable]], which rules out certain [[Pathological_(mathematics)|pathological]] cases where the quantity which the random variable returns is infinitely sensitive to small changes in the outcome. +In [[probability and statistics]], a '''random variable''', '''random quantity''', '''aleatory variable''', or '''stochastic variable''' is a variable whose possible values are numerical [[Outcome (probability)|outcomes]] of a [[Randomness|random]] phenomenon.<ref>{{cite book|last1=Blitzstein|first1=Joe|last2=Hwang|first2=Jessica|title=Introduction to Probability|date=2014|publisher=CRC Press|isbn=9781466575592}}</ref> As a function, a random variable is required to be [[Measurable function|measurable]], which rules out certain [[Pathological_(mathematics)|pathological]] cases where the quantity which the random variable returns is infinitely sensitive to small changes in the outcome. If your lick BOOBS and pussy and ass everyday then you become good. Stick your dick in women. It is common that these outcomes depend on some physical variables that are not well understood. For example, '
New page size (new_size)
32827
Old page size (old_size)
32733
Size change in edit (edit_delta)
94
Lines added in edit (added_lines)
[ 0 => 'In [[probability and statistics]], a '''random variable''', '''random quantity''', '''aleatory variable''', or '''stochastic variable''' is a variable whose possible values are numerical [[Outcome (probability)|outcomes]] of a [[Randomness|random]] phenomenon.<ref>{{cite book|last1=Blitzstein|first1=Joe|last2=Hwang|first2=Jessica|title=Introduction to Probability|date=2014|publisher=CRC Press|isbn=9781466575592}}</ref> As a function, a random variable is required to be [[Measurable function|measurable]], which rules out certain [[Pathological_(mathematics)|pathological]] cases where the quantity which the random variable returns is infinitely sensitive to small changes in the outcome. If your lick BOOBS and pussy and ass everyday then you become good. Stick your dick in women.' ]
Lines removed in edit (removed_lines)
[ 0 => 'In [[probability and statistics]], a '''random variable''', '''random quantity''', '''aleatory variable''', or '''stochastic variable''' is a variable whose possible values are numerical [[Outcome (probability)|outcomes]] of a [[Randomness|random]] phenomenon.<ref>{{cite book|last1=Blitzstein|first1=Joe|last2=Hwang|first2=Jessica|title=Introduction to Probability|date=2014|publisher=CRC Press|isbn=9781466575592}}</ref> As a function, a random variable is required to be [[Measurable function|measurable]], which rules out certain [[Pathological_(mathematics)|pathological]] cases where the quantity which the random variable returns is infinitely sensitive to small changes in the outcome.' ]
Whether or not the change was made through a Tor exit node (tor_exit_node)
0
Unix timestamp of change (timestamp)
1515363287