Statistics refers to the branch of mathematics that deals with the collection, analysis, interpretation and presentation of masses of numerical data. It is applicable to a wide variety of academic disciplines from the physical and social sciences to the humanities, as well as to business, government, and industry.

Key concepts and terms of statistics presuppose probability theory; among them are population, sample, sampling, sampling unit, and probability.
Once data have been collected, whether through a formal sampling procedure, by recording responses to treatments in an experimental setting (see experimental design), or by repeatedly observing a process over time (a time series), graphical and numerical summaries may be obtained using descriptive statistics.
Patterns in the data are modeled to draw inferences about the larger population, using inferential statistics to account for randomness and uncertainty in the observations. These inferences may take the form of answers to essentially yes/no questions (hypothesis testing), estimates of numerical characteristics (estimation), prediction of future observations, descriptions of association (correlation), or modeling of relationships (regression).
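The two roles just described can be illustrated with a short sketch: descriptive summaries of a sample, followed by one simple inferential step (a normal-approximation confidence interval). The data are hypothetical, and the normal approximation is used purely for illustration:

```python
import statistics
from statistics import NormalDist

# Hypothetical sample: daily output of a small production line (units).
sample = [23, 19, 25, 22, 24, 20, 21, 26, 23, 22]

# Descriptive statistics: summarize the sample itself.
mean = statistics.mean(sample)    # central tendency
sd = statistics.stdev(sample)     # spread (sample standard deviation)
median = statistics.median(sample)

# Inferential statistics: a 95% confidence interval for the
# population mean, using the normal approximation.
n = len(sample)
z = NormalDist().inv_cdf(0.975)   # two-sided 95% critical value, about 1.96
half_width = z * sd / n ** 0.5
ci = (mean - half_width, mean + half_width)

print(f"mean={mean:.2f}, sd={sd:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```

The descriptive step only restates the sample; the interval is the inferential step, since it makes a claim about the unobserved population mean.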
The framework described above is sometimes referred to as applied statistics. In contrast, mathematical statistics (or simply statistical theory) is the subdiscipline of applied mathematics which uses probability theory and analysis to place statistical practice on a firm theoretical basis.
The word statistics is also the plural of statistic (singular), which refers to the result of applying a statistical algorithm to a set of data.
Origin
The word statistics ultimately derives from the modern Latin term statisticum collegium ("council of state") and the Italian word statista ("statesman" or "politician"). The German Statistik, first introduced by Gottfried Achenwall (1749), originally designated the analysis of data about the state, signifying the "science of state". It acquired the meaning of the collection and classification of data generally in the early nineteenth century. It was introduced into English by Sir John Sinclair.
Thus, the original principal purpose of statistics was the collection of data for use by governmental and (often centralized) administrative bodies. The collection of data about states and localities continues, largely through national and international statistical services; in particular, censuses provide regular information about the population. Later, the field merged with the more mathematically oriented field of inverse probability, meaning the estimation of a parameter from experimental data in the experimental sciences (most notably astronomy).
Today, however, the use of statistics has broadened far beyond the service of a state or government, to include such areas as business, the natural and social sciences, and medicine, among others. Because of this history, not all statisticians regard statistics as a subfield of mathematics; it is often considered a separate, albeit allied, field. As a practical matter, although statistics is taught in university departments as diverse as psychology, education, and public health, it is most strongly associated with mathematics departments, since typically only the largest schools can afford statisticians a separate department.
Statistical methods
Experimental and observational studies
A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion about the effect of changes in the values of predictors, or independent variables, on a response, or dependent variable. There are two major types of causal statistical studies: experimental studies and observational studies. In both types, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable is observed. The difference between the two types lies in how the study is actually conducted.
An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation may have modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead data are gathered and correlations between predictors and the response are investigated.
An example of an experimental study is the famous series of Hawthorne studies, which attempted to test changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in whether increased illumination would increase the productivity of the assembly-line workers. They first measured productivity in the plant, then modified the illumination in an area of the plant to see whether changes in illumination would affect productivity. Due to flaws in the experimental procedure, in particular the lack of a control group, the researchers were unable to establish what they had planned to, but the study gave its name to the Hawthorne effect.
An example of an observational study is one that explores the correlation between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers and then compare the number of cases of lung cancer in each group.
The basic steps for an experiment are to:
- plan the research including determining information sources, research subject selection, and ethical considerations for the proposed research and method,
- design the experiment concentrating on the system model and the interaction of independent and dependent variables,
- summarize a collection of observations to feature their commonality by suppressing details (descriptive statistics),
- reach consensus about what the observations tell us about the world we observe (statistical inference),
- document and present the results of the study.
Levels of measurement
There are four types of measurements, or measurement scales, used in statistics. The four levels of measurement (nominal, ordinal, interval, and ratio) have different degrees of usefulness in statistical research. Ratio measurements, in which both a true zero value and the distances between different measurements are defined, provide the greatest flexibility in the statistical methods that can be used for analysing the data. Interval measurements have meaningful distances between measurements but no meaningful zero value (examples include IQ scores and temperature in degrees Celsius). Ordinal measurements have imprecise differences between consecutive values but a meaningful order to those values. Nominal measurements have no meaningful rank order among values.
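As an informal illustration of why the levels matter, each level admits a progressively richer set of summary statistics. The mapping below is a convention from measurement theory, not a library API:

```python
# Which summary statistics are conventionally meaningful at each
# level of measurement (an informal convention, sketched as data).
ADMISSIBLE = {
    "nominal":  ["mode"],
    "ordinal":  ["mode", "median"],
    "interval": ["mode", "median", "mean"],
    "ratio":    ["mode", "median", "mean", "geometric mean"],
}

def admissible_summaries(level: str) -> list[str]:
    """Return the summaries conventionally valid for a measurement level."""
    return ADMISSIBLE[level]

print(admissible_summaries("ordinal"))  # ['mode', 'median']
```

For example, taking the mean of ordinal data (such as survey rankings) is not admissible under this convention, because the distances between consecutive ranks are not defined.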
Statistical techniques
Some well known statistical tests and procedures for research observations are:
- Student's t-test
- chi-square test
- analysis of variance (ANOVA)
- Mann-Whitney U test
- regression analysis
- correlation
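As an illustration, one widely used procedure of this kind, the two-sample t-test under an equal-variance assumption, can be sketched using only the standard library. The data below are hypothetical:

```python
import statistics

def two_sample_t(a, b):
    """Student's two-sample t statistic with pooled variance
    (assumes equal population variances)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    # Pooled variance estimate across both samples.
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = (sp2 * (1 / na + 1 / nb)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical data: reaction times (ms) under two conditions.
control = [310, 295, 320, 305, 300]
treated = [285, 290, 275, 280, 295]
t = two_sample_t(control, treated)
print(f"t = {t:.3f}")  # compared against a t distribution with na+nb-2 df
```

A large absolute value of t is evidence against the hypothesis that the two groups share the same population mean; the threshold comes from the t distribution with the stated degrees of freedom.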
Probability
Statistics makes extensive use of the concept of probability. The probability of an event is conventionally expressed as a number between zero and one. In reality, however, virtually nothing has a probability of exactly 1 or 0. One could say that the sun will certainly rise in the morning, but what if an extremely unlikely event destroys the sun? What if there is a nuclear war and the sky is covered in ash and smoke?
We often round the probabilities of such events up or down because the events are so likely or unlikely that it is easier to treat them as having probability one or zero.
However, this rounding can often lead to misunderstandings and dangerous behaviour, because people are unable to distinguish between, e.g., a probability of 10⁻⁴ and a probability of 10⁻⁹, despite the very practical difference between them. If you expect to cross the road about 10⁵ or 10⁶ times in your life, then reducing your risk of being run over per road crossing to 10⁻⁹ will make it unlikely that you will ever be run over while crossing the road, while a risk per crossing of 10⁻⁴ will make an accident very likely, despite the intuitive feeling that 0.01% is a very small risk.
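The arithmetic behind this example can be checked directly. Assuming, as a simplification, that the crossings are independent, the probability of at least one accident over n crossings with per-crossing risk p is 1 - (1 - p)^n:

```python
# Probability of at least one accident over n independent crossings,
# each with per-crossing risk p (independence is a simplification).
def lifetime_risk(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

crossings = 10 ** 5
print(lifetime_risk(1e-9, crossings))  # about 0.0001: very unlikely
print(lifetime_risk(1e-4, crossings))  # about 0.99995: near certain
```

The two per-crossing risks differ by five orders of magnitude, and over a lifetime of crossings that difference separates near safety from near certainty of an accident.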
Use of prior probabilities of 0 (or 1) causes problems in Bayesian statistics, since the posterior probability is then forced to be 0 (or 1) as well. In other words, the data are not taken into account at all! As Dennis Lindley puts it, if a coherent Bayesian attaches a prior probability of zero to the hypothesis that the Moon is made of green cheese, then even whole armies of astronauts coming back bearing green cheese cannot convince him. Lindley advocates never using prior probabilities of 0 or 1. He calls it Cromwell's rule, from a letter Oliver Cromwell wrote to the synod of the Church of Scotland on August 5th, 1650 in which he said "I beseech you, in the bowels of Christ, consider it possible that you are mistaken."
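Cromwell's rule is easy to verify with a single application of Bayes' theorem. In the sketch below the likelihood values are illustrative, chosen so the evidence favours the hypothesis a thousand to one:

```python
# One Bayes update for a binary hypothesis H.
def posterior(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """P(H | data) = P(H) P(data|H) / [P(H) P(data|H) + P(not H) P(data|not H)]."""
    num = prior * likelihood_h
    denom = num + (1 - prior) * likelihood_not_h
    return num / denom if denom else 0.0

# Evidence strongly favouring H (likelihood ratio 1000:1):
print(posterior(0.0, 0.999, 0.001))   # 0.0: a zero prior stays zero
print(posterior(0.01, 0.999, 0.001))  # about 0.91: a small nonzero prior can move
```

The prior of exactly zero annihilates the likelihood term, so no evidence, however strong, can change the conclusion; any nonzero prior, however small, leaves the data able to speak.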
Important contributors to statistics
- Carl Gauss
- Blaise Pascal
- Sir Francis Galton
- William Sealy Gosset (known as "Student")
- Karl Pearson
- Sir Ronald Fisher
- Gertrude Cox
- Charles Spearman
- Pafnuty Chebyshev
- Aleksandr Lyapunov
- Isaac Newton
- Abraham de Moivre
- Adolphe Quetelet
- Florence Nightingale
- John Tukey
- George Dantzig
- Thomas Bayes
See also list of statisticians.
Specialized disciplines
Some sciences use applied statistics so extensively that they have specialized terminology. These disciplines include:
- Actuarial science
- Biostatistics
- Business statistics
- Data mining (applying statistics and pattern recognition to discover knowledge from data)
- Economic statistics (Econometrics)
- Engineering statistics
- Statistical physics
- Demography
- Psychological statistics
- Social statistics (for all the social sciences)
- Statistical literacy
- Process analysis and chemometrics (for analysis of data from analytical chemistry and chemical engineering)
- Reliability engineering
- Statistics in various sports, particularly baseball and cricket
Statistics is a key tool in business and manufacturing as well. It is used to understand the variability of measurement systems, to control processes (as in statistical process control, or SPC), to summarize data, and to make data-driven decisions. In these roles it is a key, and perhaps the only reliable, tool.
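The SPC idea can be sketched minimally with Shewhart-style three-sigma control limits. The measurements below are hypothetical; real control charts add further rules and use rational subgroups:

```python
import statistics

# Shewhart-style control limits: flag any observation outside
# mean +/- 3 standard deviations of the in-control baseline data.
def control_limits(baseline):
    m = statistics.mean(baseline)
    s = statistics.stdev(baseline)
    return m - 3 * s, m + 3 * s

# Hypothetical in-control measurements of a part dimension (mm).
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lo, hi = control_limits(baseline)

new_points = [10.0, 10.1, 11.5]
out_of_control = [x for x in new_points if not lo <= x <= hi]
print(out_of_control)  # [11.5]
```

A point outside the limits signals that the process has probably shifted and should be investigated, rather than being attributed to ordinary random variation.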
Software
Modern statistics relies on computers to perform the very large and complex calculations it often requires.
Whole branches of statistics have been made possible by computing, for example neural networks.
The computer revolution has implications for the future of statistics, with a new emphasis on 'experimental' and 'empirical' statistics.
Statistical packages in common use include open-source or freeware systems such as R, and proprietary systems such as SAS, SPSS, and Stata.
See also
- Analysis of variance (ANOVA)
- Confidence interval
- Extreme value theory
- Instrumental variables estimation
- List of academic statistical associations
- List of national and international statistical services
- List of publications in statistics
- List of statistical topics
- List of statisticians
- Machine learning
- Misuse of statistics
- Multivariate statistics
- Prediction
- Prediction interval
- Regression analysis
- Resampling (statistics)
- Statistical package
- Statistical phenomena
- Trend estimation
External links
- A clear explanation of the three statistical distributions studied in secondary school, well suited to younger students
General sites and organizations
- Statlib: Data, Software and News from the Statistics Community (Carnegie Mellon)
- International Statistical Institute
- The Probability Web
Link collections
- Free Statistical Tools on the WEB (at ISI)
- Materials for the History of Statistics (Univ. of York)
- Statistics resources and calculators (Xycoon)
- StatPages.net (statistical calculations, free software, etc.)
- Bioethics Resources on the Web from the U.S. National Institute of Health (links to tutorials, case studies, and on-line courses)
Online courses and textbooks
- Electronic Statistics Textbook (StatSoft, Inc.)
- Teach/Me Data Analysis (a Springer-Verlag book)
- Statistics: Lecture Notes (from a professor at Richland Community College)
- CyberStats: Electronic Statistics Textbook (CyberGnostics, Inc.)
- A variety of class notes and educational materials on probability and statistics
Statistical software
- R Project for Statistical Computing (free software)
- Statistics Online Computational Resource (UCLA)
- Root Analysis Framework (CERN)
- Multidimensional Scaling Software
- Software for interactive graphical analyses
- Website Analytics and Monitoring
- Software Reports by Statistical Software Newsletter
- Tanagra (free software), including machine learning and data mining techniques
Other resources
- ANOVA
- Virtual Laboratories in Probability and Statistics (Univ. of Alabama) (requires MathML and Java 2 Runtime Environment)
- Resources for Teaching and Learning about Probability and Statistics (ERIC Digests)
- Resampling: A Marriage of Computers and Statistics (ERIC Digests)
- Statistical Resources on the Web
- Statistics Glossary at statistics.com
- Statistics Glossary - and other teaching and learning resources
- Statistician Job Outlook - Analysis of wages and working environment for the occupation
- Statistics in Sports (Section of the ASA)
- Statistics - Meta, statistics of Wikimedia projects
- OmniStat: The FactLab
Additional references
- Lindley, D. (1985). Making Decisions, 2nd edition. London, New York: John Wiley. ISBN 0471908088 (paperback edition).
- Tijms, H. (2004). Understanding Probability: Chance Rules in Everyday Life. Cambridge, New York: Cambridge University Press. ISBN 0521833299.
- Desrosières, Alain (2000). La politique des grands nombres: Histoire de la raison statistique ("The Politics of Large Numbers: A History of Statistical Reasoning"; a very complete account of the historical formation of statistics and its epistemological problems). La Découverte. ISBN 2707133531.