Multiple-instance learning

'''Multiple-instance learning''' (MIL) is a variation on [[supervised learning]]. Instead of receiving a set of instances that are individually labeled positive or negative, the learner receives a set of labeled ''bags'', each containing many instances. The most common assumption is that a bag is labeled negative if all the instances in it are negative, and labeled positive if at least one instance in it is positive. From a collection of labeled bags, the learner tries either (i) to induce a concept that will label individual instances correctly or (ii) to learn how to label bags without inducing the concept.
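The standard assumption above can be sketched in a few lines of Python: the bag label is simply the logical OR of the instance labels. The instance-level concept here (a threshold on the first feature) is a hypothetical stand-in for whatever concept a learner might induce, not part of any particular MIL algorithm.

```python
def instance_classifier(x):
    # Hypothetical instance-level concept for illustration only:
    # an instance is positive when its first feature exceeds 0.5.
    return x[0] > 0.5

def bag_label(bag):
    # Standard MI assumption: a bag is positive iff at least one
    # of its instances is positive (logical OR over the bag).
    return any(instance_classifier(x) for x in bag)

negative_bag = [(0.1, 0.9), (0.3, 0.2)]   # every instance negative
positive_bag = [(0.2, 0.4), (0.7, 0.1)]   # one positive instance

print(bag_label(negative_bag))  # False
print(bag_label(positive_bag))  # True
```

Note that under this assumption a single positive instance suffices to make a bag positive, which is why negative bags carry much stronger information about individual instances than positive bags do.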
 
Multiple-instance learning was originally proposed under this name by {{harvtxt|Dietterich|Lathrop|Lozano-Pérez|1997}}, but earlier examples of similar research exist, for instance in the work on [[handwriting|handwritten]] [[Numerical digit|digit]] [[optical character recognition|recognition]] by {{harvtxt|Keeler|Rumelhart|Leow|1990}}.
 
Examples of where MIL is applied are:
* Molecule (e.g. drug) activity prediction
* Image classification {{harvtxt|Maron|Ratan|1998}}
* Text or document categorization
 
 
Numerous researchers have worked on adapting classical classification techniques, such as [[support vector machines]] or [[Boosting (meta-algorithm)|boosting]], to work within the context of multiple-instance learning.
| year = 1990 | pages = 557–563
| unused_data = Proceedings of the 1990 Conference on Advances in Neural Information Processing Systems (NIPS 3)}}.
 
 
*{{citation
| first1 = O. | last1 = Maron
| first2 = A. L. | last2 = Ratan
| contribution = Multiple-instance learning for natural scene classification
| title = Proceedings of the Fifteenth International Conference on Machine Learning
| year = 1998 | pages = 341–349}}.
 
[[Category:Machine learning]]