In probability theory and statistics, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of n independent yes/no experiments, each of which yields success with probability p. Such a success/failure experiment is also called a Bernoulli experiment or Bernoulli trial; when n = 1, the binomial distribution is simply the Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.
| Binomial | |
|---|---|
| Parameters | $n \ge 0$, number of trials (integer); $0 \le p \le 1$, success probability (real) |
| Support | $k \in \{0, 1, \dots, n\}$ |
| PMF | $\binom{n}{k} p^k (1-p)^{n-k}$ |
| CDF | $I_{1-p}(n-k,\, k+1)$ |
| Mean | $np$ |
| Median | one of $\{\lfloor np \rfloor, \lceil np \rceil\}$ |
| Mode | $\lfloor (n+1)p \rfloor$ |
| Variance | $np(1-p)$ |
| Skewness | $\dfrac{1-2p}{\sqrt{np(1-p)}}$ |
| Excess kurtosis | $\dfrac{1-6p(1-p)}{np(1-p)}$ |
| Entropy | $\tfrac{1}{2}\ln\!\big(2\pi e\, np(1-p)\big) + O(1/n)$ |
| MGF | $(1-p+pe^{t})^{n}$ |
| CF | $(1-p+pe^{it})^{n}$ |
Occurrence
A typical example is the following: assume 5% of the population is green-eyed. You pick 500 people randomly. How likely is it that you get 30 or more green-eyed people? The number of green-eyed people you pick is a random variable X which follows a binomial distribution with n = 500 and p = 0.05 (when picking the people with replacement). We are interested in the probability Pr[X ≥ 30].
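As a rough numeric check, here is a minimal sketch using only Python's standard library (math.comb needs Python 3.8+; the figures are just this example's values), summing the probability mass function described in the next section:

```python
from math import comb

n, p = 500, 0.05  # 500 people sampled; 5% of the population is green-eyed

# Pr[X >= 30] = 1 - Pr[X <= 29]: one minus the summed PMF over k = 0..29
tail = 1.0 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(30))
print(f"Pr[X >= 30] = {tail:.4f}")  # roughly 0.18
```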
Specification
Probability mass function
In general, if the random variable X follows the binomial distribution with parameters n and p, we write X ~ B(n, p). The probability of getting exactly k successes is given by the probability mass function:
$$f(k; n, p) = \Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$$

for $k = 0, 1, 2, \dots, n$ and where

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$
is the binomial coefficient "n choose k" (also denoted C(n, k) or nCk), hence the name of the distribution. The formula can be understood as follows: we want k successes ($p^k$) and n − k failures ($(1-p)^{n-k}$). However, the k successes can occur anywhere among the n trials, and there are C(n, k) different ways of distributing k successes in a sequence of n trials.
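The formula translates directly into code. Below is a minimal sketch of the mass function using Python's standard library (the function name and test values are illustrative, not part of any particular API):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """Pr[X = k] for X ~ B(n, p): C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Sanity check: the probabilities over k = 0..n must sum to 1.
assert abs(sum(binomial_pmf(k, 10, 0.3) for k in range(11)) - 1.0) < 1e-12
print(binomial_pmf(3, 10, 0.3))  # chance of exactly 3 successes in 10 trials
```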
In creating reference tables for binomial distribution probability, usually the table is filled in only up to n/2 values, because for k > n/2 the probability can be calculated via its complement:

$$f(k; n, p) = f(n - k; n, 1 - p).$$
So one looks up a different k and a different p (the binomial is not symmetric in general).
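The symmetry identity is easy to spot-check numerically; a small sketch with arbitrary example parameters:

```python
from math import comb, isclose

def binomial_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# f(k; n, p) == f(n - k; n, 1 - p), so tabulating k <= n/2 suffices.
n, p = 20, 0.3
for k in range(n + 1):
    assert isclose(binomial_pmf(k, n, p), binomial_pmf(n - k, n, 1 - p))
```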
Cumulative distribution function
The cumulative distribution function can be expressed in terms of the regularized incomplete beta function, as follows:

$$F(k; n, p) = \Pr(X \le k) = I_{1-p}(n - k,\, k + 1),$$
provided k is an integer and 0 ≤ k ≤ n. If x is not necessarily an integer or not necessarily positive, one can express the cumulative distribution function thus:

$$F(x; n, p) = \Pr(X \le x) = \sum_{j=0}^{\lfloor x \rfloor} \binom{n}{j} p^j (1-p)^{n-j},$$
where $\lfloor x \rfloor$ is the greatest integer less than or equal to x (the floor of x).
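If SciPy happens to be available (an assumption; it is a third-party package), its scipy.special.betainc evaluates the regularized incomplete beta function $I_x(a, b)$, so the beta-function form can be checked against direct summation of the PMF:

```python
from math import comb, isclose
from scipy.special import betainc  # betainc(a, b, x) is the regularized I_x(a, b)

def cdf_by_sum(k, n, p):
    """Pr[X <= k] via direct summation of the PMF."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Arbitrary example parameters; the beta form needs n - k > 0, i.e. k < n.
n, p = 15, 0.4
for k in range(n):
    assert isclose(cdf_by_sum(k, n, p), betainc(n - k, k + 1, 1 - p))
```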
For $k \le np$, upper bounds for the lower tail of the distribution function can be derived. In particular, Hoeffding's inequality yields the bound

$$F(k; n, p) \le \exp\left(-2\,\frac{(np - k)^2}{n}\right),$$
and Chernoff's inequality can be used to derive the bound

$$F(k; n, p) \le \exp\left(-\frac{1}{2p}\,\frac{(np - k)^2}{n}\right).$$
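Both bounds are straightforward to compare against the exact tail; a minimal sketch with arbitrary parameters satisfying k ≤ np:

```python
from math import comb, exp

def lower_tail(k, n, p):
    """Exact Pr[X <= k] for X ~ B(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, p, k = 100, 0.5, 40                           # k <= np = 50
exact = lower_tail(k, n, p)                      # ~0.028
hoeffding = exp(-2 * (n * p - k) ** 2 / n)       # ~0.135
chernoff = exp(-(n * p - k) ** 2 / (2 * p * n))  # ~0.368
assert exact <= hoeffding and exact <= chernoff
print(exact, hoeffding, chernoff)
```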
Mean, standard deviation, and mode
If X ~ B(n, p) (that is, X is a binomially distributed random variable), then the expected value of X is

$$\operatorname{E}[X] = np,$$
and the variance is

$$\operatorname{Var}(X) = np(1 - p).$$
This fact is easily proven as follows. Suppose first that we have exactly one Bernoulli trial. We have two possible outcomes, 1 and 0, with the first having probability p and the second having probability 1 − p; the mean for this trial is given by μ = p. Using the definition of variance, we have

$$\sigma^2 = (1 - p)^2 p + (0 - p)^2 (1 - p) = p(1 - p).$$
Now suppose that we want the variance for n such trials (i.e. for the general binomial distribution). Since the trials are independent, we may add the variances for each trial, giving

$$\sigma^2 = \sum_{k=1}^{n} p(1 - p) = np(1 - p).$$
The most likely value or mode of X is given by the largest integer less than or equal to (n + 1)p; if m = (n + 1)p is itself an integer, then m − 1 and m are both modes.
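These moments, and the mode, can be checked by simulation. A minimal sketch using Python's standard library (seed and parameters chosen arbitrarily):

```python
import random
from collections import Counter
from statistics import mean, variance

n, p, trials = 10, 0.3, 100_000
random.seed(1)

# Simulate X ~ B(n, p) as a sum of n independent 0/1 indicator variables.
samples = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]

print(mean(samples))                          # close to n*p = 3.0
print(variance(samples))                      # close to n*p*(1-p) = 2.1
print(Counter(samples).most_common(1)[0][0])  # mode: floor((n+1)*p) = 3
```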
Is it a binomial distribution? A mnemonic
- Bi = Are there TWO possible outcomes? (i.e., yes or no, win or lose)
- Nom = Is there a fixed NUMBER of observations or items of interest?
- I = Is each observation INDEPENDENT?
- Al = Is the probability of success the same for ALL observations?
(However, the letters nom are actually derived from the Greek word 'nomos' meaning "portion, usage, custom, law, division, district", not "number".)
Relations to other distributions
- If X ~ B(n, p) and Y ~ B(m, p) are independent binomial variables, then X + Y is again a binomial variable; its distribution is $X + Y \sim B(n + m, p)$.
- Two other important distributions arise as approximations of binomial distributions:
- If n is large enough, the skew of the distribution is not too great, and a suitable continuity correction is used, then an excellent approximation to B(n, p) is given by the normal distribution $\mathcal{N}(np,\, np(1 - p))$.
- Various rules of thumb may be used to decide whether n is large enough. One rule is that both np and n(1 − p) must be greater than 5. However, the specific number varies from source to source, and depends on how good an approximation one wants; some sources give 10. Another commonly used rule holds that the above normal approximation is appropriate only if everything within 3 standard deviations of the mean lies within the range of possible values, that is, only if $np \pm 3\sqrt{np(1-p)} \in [0, n]$.
- The following is an example of applying a continuity correction: suppose one wishes to calculate Pr(X ≤ 8) for a binomial random variable X. If Y has the distribution given by the normal approximation, then Pr(X ≤ 8) is approximated by Pr(Y ≤ 8.5). The addition of 0.5 is the continuity correction; without it, the normal approximation gives noticeably inaccurate results (a numeric sketch follows this list).
- This approximation is a huge time-saver (exact calculations with large n are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1733. Nowadays, it can be seen as a consequence of the central limit theorem since B(n, p) is a sum of n independent, identically distributed 0-1 indicator variables.
- For example, suppose you randomly sample n people out of a large population and ask them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If you sampled groups of n people repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation $\sigma = \sqrt{p(1 - p)/n}$. Large sample sizes n are good because the standard deviation gets smaller, which allows a more precise estimate of the unknown parameter p.
- If n is large and p is small, so that np is of moderate size, then the Poisson distribution with parameter λ = np is a good approximation to B(n, p).
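To illustrate the continuity-corrected normal approximation from the list above, here is a minimal sketch using only the standard library (the parameters n = 50, p = 0.25 are arbitrary illustrative choices):

```python
from math import comb, erf, sqrt

n, p = 50, 0.25
mu, sigma = n * p, sqrt(n * p * (1 - p))

def binom_cdf(k):
    """Exact Pr[X <= k] by summing the PMF."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(x):
    """CDF of N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

print(binom_cdf(8))         # exact Pr(X <= 8)
print(normal_cdf(8 + 0.5))  # with the 0.5 continuity correction: close
print(normal_cdf(8))        # without it: noticeably further off
```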
The formula for Bézier curves was inspired by the binomial distribution.
Limits of binomial distributions
- As n approaches ∞ and p approaches 0 with np fixed at λ > 0 (or at least with np approaching λ > 0), the Binomial(n, p) distribution approaches the Poisson distribution with expected value λ (a numeric sketch follows this list).
- As n approaches ∞ while p remains fixed, the distribution of $\dfrac{X - np}{\sqrt{np(1 - p)}}$ approaches the normal distribution with expected value 0 and variance 1.
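The Poisson limit in the first bullet can also be watched numerically: holding np = λ fixed while n grows, the binomial PMF approaches the Poisson PMF. A minimal sketch (λ and the evaluation point k = 2 are arbitrary):

```python
from math import comb, exp, factorial

lam = 3.0  # the fixed value of n*p

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

# B(n, lam/n) at k = 2 converges to Poisson(lam) at k = 2 as n grows.
for n in (10, 100, 1000):
    print(n, binom_pmf(2, n, lam / n), poisson_pmf(2, lam))
```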