Local case-control sampling
 
In [[machine learning]], '''local case-control sampling''' <ref name="LCC">{{cite journal|last1=Fithian|first1=William|last2=Hastie|first2=Trevor|title=Local case-control sampling: Efficient subsampling in imbalanced data sets|journal=The Annals of Statistics|date=2014|volume=42|issue=5|pages=1693–1724|url=http://arxiv.org/abs/1306.3706}}</ref> is an [[algorithm]] used to reduce the complexity of training a [[logistic regression]] classifier. The algorithm reduces the training complexity by selecting a small subsample of the original dataset for training. It assumes the availability of a (possibly unreliable) pilot estimate of the parameters, and performs a single pass over the entire dataset using this pilot estimate to identify the most "surprising" samples. In practice, the pilot may come from prior knowledge or from a model trained on a subsample of the dataset. The algorithm is most effective when the underlying dataset is imbalanced: it exploits the structure of conditionally imbalanced datasets more efficiently than alternative methods such as [[Logistic_regression#Case-control_sampling|case-control sampling]] and weighted case-control sampling.
 
== Imbalanced datasets ==
In [[Statistical classification|classification]], a dataset is a set of ''N'' data points <math> (x_i, y_i)_{i=1}^N </math>, where <math> x_i \in\mathbb R^d </math> is a feature vector and <math> y_i \in \{0,1\} </math> is its label. Intuitively, a dataset is imbalanced when certain important statistical patterns are rare. The lack of observations of certain patterns does not always imply their irrelevance. For example, in medical studies of rare diseases, the small number of infected patients (cases) conveys the most valuable information for diagnosis and treatment.
 
Formally, an imbalanced dataset exhibits one or more of the following properties:
* ''Marginal Imbalance''. A dataset is marginally imbalanced if one class is rare compared to the other class. In other words, <math> \mathbb{P}(Y=1) \approx 0 </math>.
* ''Conditional Imbalance''. A dataset is conditionally imbalanced when it is easy to predict the correct labels in most cases. For example, if <math> X \in \{0,1\} </math>, the dataset is conditionally imbalanced if <math> \mathbb{P}(Y=1 \mid X=0) \approx 0 </math> and <math> \mathbb{P}(Y=1 \mid X=1) \approx 1 </math>.
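As a minimal numerical illustration of the two definitions above (synthetic data, not taken from the source), the following sketch generates a dataset that is conditionally but not marginally imbalanced:

```python
import random

random.seed(0)

# Synthetic dataset: a binary feature X that almost perfectly determines
# the label Y, so P(Y=1 | X=0) ≈ 0 and P(Y=1 | X=1) ≈ 1, while the
# marginal P(Y=1) stays close to 0.5.
N = 100_000
data = []
for _ in range(N):
    x = random.random() < 0.5          # X ~ Bernoulli(0.5)
    p = 0.99 if x else 0.01            # P(Y=1 | X=x)
    y = random.random() < p
    data.append((x, y))

p_y1 = sum(y for _, y in data) / N
n_x0 = sum(1 for x, _ in data if not x)
n_x1 = N - n_x0
p_y1_given_x0 = sum(y for x, y in data if not x) / n_x0
p_y1_given_x1 = sum(y for x, y in data if x) / n_x1

print(round(p_y1, 2))           # close to 0.5: not marginally imbalanced
print(round(p_y1_given_x0, 2))  # close to 0: conditionally imbalanced
print(round(p_y1_given_x1, 2))  # close to 1
```

Here most labels are trivially predictable from the feature, so the rare "surprising" points (e.g. <math>Y=1</math> when <math>X=0</math>) carry most of the information.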
 
== Algorithm outline ==
In logistic regression, given the model <math> \theta = (\alpha, \beta) </math>, the prediction is made according to <math> \mathbb{P}(Y=1 \mid X = x; \theta) = \tilde{p}_{\theta}(x) = \frac{e^{\alpha+\beta^T x}}{1+e^{\alpha+\beta^T x}} </math>. The local case-control sampling algorithm assumes the availability of a pilot model <math>\tilde{\theta} = (\tilde{\alpha}, \tilde{\beta}) </math>. Given the pilot model, the algorithm performs a single pass over the entire dataset to select the subsample to include in training the logistic regression model. For a sample <math> (x,y) </math>, define the acceptance probability as <math> a(x,y) = |y-\tilde{p}_{\tilde{\theta}}(x)| </math>. The algorithm proceeds as follows:
 
# Generate independent <math> z_i \sim \text{Bernoulli}(a(x_i,y_i)) </math> for <math> i \in \{1, \ldots, N\} </math>.
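The subsampling step above can be sketched as follows (a minimal Python illustration; the function and variable names, and the toy data, are hypothetical and not from the source):

```python
import math
import random

random.seed(0)

def pilot_prob(x, alpha, beta):
    """Pilot model's predicted probability P(Y=1 | x): the logistic sigmoid."""
    z = alpha + sum(b * xj for b, xj in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

def local_case_control_subsample(data, alpha, beta):
    """Keep sample (x, y) with acceptance probability a(x, y) = |y - p_pilot(x)|."""
    subsample = []
    for x, y in data:
        a = abs(y - pilot_prob(x, alpha, beta))
        if random.random() < a:        # z_i ~ Bernoulli(a(x_i, y_i))
            subsample.append((x, y))
    return subsample

# Toy data: one constant feature; 90% of labels are 1, 10% are 0.
data = [([1.0], 1)] * 900 + [([1.0], 0)] * 100
alpha, beta = 2.0, [0.0]               # pilot predicts P(Y=1) ≈ 0.88 everywhere
kept = local_case_control_subsample(data, alpha, beta)
```

With this pilot, the "unsurprising" points (<math>y=1</math>) are accepted with probability about 0.12, while the "surprising" ones (<math>y=0</math>) are accepted with probability about 0.88, so the rare class is heavily over-represented in the subsample.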