== Scenarios ==
*'''Pool-based sampling''': In this approach, which is the most well-known scenario,<ref>{{cite web |last1=DataRobot |title=Active learning machine learning: What it is and how it works |url=https://www.datarobot.com/blog/active-learning-machine-learning |website=DataRobot Blog |publisher=DataRobot Inc. |access-date=30 January 2024}}</ref> the learning algorithm evaluates ''the entire dataset'' before selecting data points (instances) for labeling. It is often initially trained on a fully labeled subset of the data using a machine-learning method such as logistic regression or SVM that yields class-membership probabilities for individual data instances. The candidate instances are those for which the prediction is most ambiguous. Instances are drawn from the entire data pool and assigned a confidence score, a measure of how well the learner "understands" the data. The system then selects the instances for which it is least confident and queries the teacher for their labels (a minimal code sketch follows this list). <br />The theoretical drawback of pool-based sampling is that it is memory-intensive and is therefore limited in its capacity to handle enormous datasets, but in practice the rate-limiting factor is that the teacher is typically a (fatiguable) human expert who must be paid for their effort, rather than computer memory.
*'''Stream-based selective sampling''': Here, each consecutive unlabeled instance is examined ''one at a time'', with the machine evaluating the informativeness of each item against its query parameters. The learner decides for itself whether to assign a label or query the teacher for each datapoint (see the second sketch after this list). In contrast to pool-based sampling, the obvious drawback of stream-based methods is that the learning algorithm does not have sufficient information, early in the process, to make a sound decision between assigning a label itself and asking the teacher, and it does not capitalize as efficiently on the presence of already labeled data. The teacher is therefore likely to spend more effort supplying labels than with the pool-based approach.
*'''Membership query synthesis''': This is where the learner generates synthetic data from an underlying natural distribution. For example, if the dataset consists of pictures of humans and animals, the learner could send a clipped image of a leg to the teacher and ask whether this appendage belongs to an animal or a human. This is particularly useful if the dataset is small.<ref>{{Cite journal|last1=Wang|first1=Liantao|last2=Hu|first2=Xuelei|last3=Yuan|first3=Bo|last4=Lu|first4=Jianfeng|date=2015-01-05|title=Active learning via query synthesis and nearest neighbour search|url=http://espace.library.uq.edu.au/view/UQ:344582/UQ344582_OA.pdf|journal=Neurocomputing|volume=147|pages=426–434|doi=10.1016/j.neucom.2014.06.042|s2cid=3027214 }}</ref> <br />The challenge here, as with all synthetic-data-generation efforts, is ensuring that the synthetic data meet the constraints that hold for real data. As the number of variables/features in the input data increases, and strong dependencies between variables exist, it becomes increasingly difficult to generate synthetic data with sufficient fidelity. <br />For example, to create a synthetic data set of human laboratory-test values, the sum of the various [[white blood cell]] (WBC) components in a [[White_blood_cell_differential|white blood cell differential]] must equal 100, since the component numbers are really percentages. Similarly, the enzymes [[Alanine_transaminase|alanine transaminase]] (ALT) and [[Aspartate_transaminase|aspartate transaminase]] (AST) measure liver function (though AST is also produced by other tissues, e.g., lung and pancreas). A synthetic data point with AST at the lower limit of the normal range (8–33 units/L) and an ALT several times above the normal range (4–35 units/L) in a simulated chronically ill patient would be physiologically impossible; a sketch of such a validity filter follows this list.
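The following is a minimal sketch of pool-based uncertainty sampling with a least-confidence criterion, assuming a scikit-learn probabilistic classifier; the function name, batch size, and data variables are illustrative, not part of any standard API.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confident_query(X_labeled, y_labeled, X_pool, batch_size=10):
    """Return indices of the pool instances the model is least confident about."""
    model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    probabilities = model.predict_proba(X_pool)    # class-membership probabilities
    confidence = probabilities.max(axis=1)         # confidence = top predicted probability
    return np.argsort(confidence)[:batch_size]     # least confident first: ask the teacher
</syntaxhighlight>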
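Stream-based selective sampling can likewise be sketched as a thresholded loop over incoming instances. Here <code>ask_teacher</code> is a hypothetical callback to the human annotator, and the confidence threshold is an assumed tuning parameter; instances are numpy row vectors.

<syntaxhighlight lang="python">
def stream_selective_sampling(stream, model, ask_teacher, threshold=0.7):
    """Process unlabeled instances one at a time, querying the teacher only when unsure."""
    for x in stream:
        p = model.predict_proba(x.reshape(1, -1)).max()
        if p >= threshold:
            label = model.predict(x.reshape(1, -1))[0]  # confident enough: self-assign
        else:
            label = ask_teacher(x)  # hypothetical human-annotator callback (costly)
        yield x, label
</syntaxhighlight>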
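For membership query synthesis, a rejection filter like the following can enforce the constraints described above; the component names, reference ranges, and rejection rule are illustrative assumptions drawn from the example, not clinical guidance.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)

def synthesize_labs():
    """Propose one synthetic record: a WBC differential plus ALT and AST."""
    wbc = rng.dirichlet(np.ones(5)) * 100  # five components, sum to 100 by construction
    alt = rng.uniform(4, 200)              # units/L (illustrative sampling range)
    ast = rng.uniform(8, 200)              # units/L (illustrative sampling range)
    return wbc, alt, ast

def is_physiologic(wbc, alt, ast):
    """Reject records that violate the stated constraints."""
    if not np.isclose(wbc.sum(), 100):     # differential must sum to 100%
        return False
    if ast <= 33 and alt > 3 * 35:         # normal-range AST with ALT several times
        return False                       # above range: implausible, per the example
    return True

valid = [r for r in (synthesize_labs() for _ in range(1000)) if is_physiologic(*r)]
</syntaxhighlight>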
==Query strategies==
*'''[[Conformal prediction]]''': predicts that a new data point will have a label similar to those of old data points in some specified way, and the degree of similarity within the old examples is used to estimate the confidence in the prediction (see the first sketch after this list).<ref>{{Cite journal|last1=Makili|first1=Lázaro Emílio|last2=Sánchez|first2=Jesús A. Vega|last3=Dormido-Canto|first3=Sebastián|date=2012-10-01|title=Active Learning Using Conformal Predictors: Application to Image Classification|journal=Fusion Science and Technology|volume=62|issue=2|pages=347–355|doi=10.13182/FST12-A14626|bibcode=2012FuST...62..347M |s2cid=115384000|issn=1536-1055}}</ref>
*'''Mismatch-first farthest-traversal''': The primary selection criterion is the prediction mismatch between the current model and the nearest-neighbour prediction; it targets wrongly predicted data points. The secondary selection criterion is the distance to previously selected data, farthest first; it aims to maximize the diversity of the selected data (a sketch follows this list).<ref name='zhaos' />
*'''User-centered labeling strategies:''' Learning is accomplished by applying dimensionality reduction to graphs and figures such as scatter plots. The user is then asked to label the compiled data (categorical or numerical labels, relevance scores, or relations between two instances).<ref name=":3">{{Cite journal |last1=Bernard |first1=Jürgen |last2=Zeppelzauer |first2=Matthias |last3=Lehmann |first3=Markus |last4=Müller |first4=Martin |last5=Sedlmair |first5=Michael |date=June 2018 |title=Towards User-Centered Active Learning Algorithms |url= |journal=Computer Graphics Forum |volume=37 |issue=3 |pages=121–132 |doi=10.1111/cgf.13406 |s2cid=51875861 |issn=0167-7055}}</ref>
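A transductive conformal predictor can be sketched with a 1-nearest-neighbour nonconformity score (distance to the nearest same-class example divided by distance to the nearest other-class example). This is an illustrative reading, not the exact procedure of the cited paper, and it assumes every class has at least two labeled examples.

<syntaxhighlight lang="python">
import numpy as np

def nonconformity(x, y, X, Y):
    """Large score = x conforms poorly to class y among labeled data (X, Y)."""
    d = np.linalg.norm(X - x, axis=1)
    return d[Y == y].min() / d[Y != y].min()

def label_p_values(x_new, X, Y):
    """p-value of each candidate label for x_new; query where the top two are close."""
    pvals = {}
    for y in np.unique(Y):
        a_new = nonconformity(x_new, y, X, Y)
        a_cal = [nonconformity(X[i], y, np.delete(X, i, 0), np.delete(Y, i))
                 for i in np.flatnonzero(Y == y)]   # leave-one-out calibration scores
        pvals[y] = float(np.mean([a >= a_new for a in a_cal + [a_new]]))
    return pvals
</syntaxhighlight>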
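Mismatch-first farthest-traversal can be sketched as follows, assuming a fitted probabilistic model and numpy arrays; this is an illustrative reconstruction of the description above, not the reference implementation from the cited work.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mismatch_first_farthest(model, X_pool, X_labeled, y_labeled, batch_size=5):
    """Select mismatched pool points first, then spread picks by farthest-first traversal."""
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_labeled, y_labeled)
    mismatch = model.predict(X_pool) != knn.predict(X_pool)   # primary criterion
    candidates = list(np.flatnonzero(mismatch)) or list(range(len(X_pool)))
    selected, refs = [], X_labeled
    for _ in range(min(batch_size, len(candidates))):
        # secondary criterion: pick the candidate farthest from everything chosen so far
        dists = [np.linalg.norm(refs - X_pool[i], axis=1).min() for i in candidates]
        pick = candidates[int(np.argmax(dists))]
        selected.append(pick)
        refs = np.vstack([refs, X_pool[pick]])
        candidates.remove(pick)
    return selected
</syntaxhighlight>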
A wide variety of algorithms have been studied that fall into these categories.<ref name="settles" /><ref name="olsson" /> While the traditional AL strategies can achieve remarkable performance, it is often difficult to predict in advance which strategy is the most suitable in a particular situation. In recent years, meta-learning algorithms have been gaining popularity, and some have been proposed to learn AL strategies instead of relying on manually designed ones. A benchmark comparing meta-learning approaches to active learning with traditional heuristic-based active learning may give an indication of whether 'learning active learning' is at a crossroads.<ref>{{cite conference|last1=Desreumaux |first1=Louis |last2=Lemaire|first2=Vincent|title=Learning Active Learning at the Crossroads? Evaluation and Discussion |date=2020 |conference=Proceedings of the Workshop on Interactive Adaptive Learning co-located with European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases ({ECML} {PKDD} 2020), Ghent, Belgium, 2020 |s2cid=221794570 }}</ref>
<ref name="Bouneffouf(2016)">{{cite journal |last1=Bouneffouf |first1=Djallel |title=Exponentiated Gradient Exploration for Active Learning |journal=Computers |date=8 January 2016 |volume=5 |issue=1 |pages=1 |doi=10.3390/computers5010001|arxiv=1408.2196 |s2cid=14313852 |doi-access=free }}</ref>
<ref name="shubhomoydas_github">{{Cite web|url=https://github.com/shubhomoydas/ad_examples#query-diversity-with-compact-descriptions|title=shubhomoydas/ad_examples|website=GitHub|language=en|access-date=2018-12-04}}</ref>
<ref name="zhaos">{{Cite journal|arxiv=2002.05033|title=Active learning for sound event detection|language=en|journal=IEEE/ACM Transactions on Audio, Speech, and Language Processing|last1=Zhao|first1=Shuyang|last2=Heittola|first2=Toni|last3=Virtanen|first3=Tuomas|year=2020|doi=10.1109/TASLP.2020.3029652}}</ref>