Multiple-instance learning


Multiple-instance learning is a variation on supervised learning in which the task is to learn a concept from labeled examples, each described not by a single feature vector but by a set (or "bag") of vectors. A bag is labeled positive if at least one of the vectors in it lies within the intended concept, and negative if none of them does; the task is to learn an accurate description of the concept from this information.
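The bag-labeling rule can be made concrete with a small sketch. Here the hidden concept is an axis-parallel rectangle (the concept class studied by Dietterich, Lathrop & Lozano-Pérez 1997); the rectangle bounds and the example bags are hypothetical, chosen only for illustration.

```python
# Minimal sketch of the multiple-instance setting: each bag is a set of
# 2-D feature vectors, and the hidden concept is an axis-parallel
# rectangle. A bag is positive iff at least one of its vectors falls
# inside the rectangle.

# Hypothetical concept: the axis-parallel rectangle [2, 4] x [1, 3].
LOW = (2.0, 1.0)
HIGH = (4.0, 3.0)

def in_concept(vector):
    """True if the vector lies inside the rectangle on every axis."""
    return all(lo <= x <= hi for x, lo, hi in zip(vector, LOW, HIGH))

def bag_label(bag):
    """A bag is positive if any of its instances lies within the concept."""
    return any(in_concept(v) for v in bag)

positive_bag = [(0.5, 0.5), (3.0, 2.0)]   # second vector is inside
negative_bag = [(0.0, 0.0), (5.0, 5.0)]   # no vector is inside

print(bag_label(positive_bag))  # True
print(bag_label(negative_bag))  # False
```

The learner never sees which vector triggered a positive label, only the bag-level label; that ambiguity is what distinguishes the setting from ordinary supervised learning.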

Multiple-instance learning was originally proposed under this name by Dietterich, Lathrop & Lozano-Pérez (1997), but earlier examples of similar research exist, for instance in the work on handwritten digit recognition by Keeler, Rumelhart & Leow (1990).

Numerous researchers have worked on adapting classical classification techniques, such as support vector machines or boosting, to work within the context of multiple-instance learning.
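One common adaptation strategy, sketched here in simplified form rather than as any particular published algorithm, is to score every instance with an ordinary single-instance classifier and pool the scores with a maximum, mirroring the rule that a bag is positive if and only if it contains at least one positive instance. The toy scorer below (inverse distance to a hand-picked prototype) merely stands in for a trained SVM or boosted classifier.

```python
# Sketch of adapting a single-instance classifier to bags: score every
# instance, then take the maximum score per bag. The max-pooling step
# encodes the multiple-instance assumption; the scorer itself is a toy.

import math

PROTOTYPE = (3.0, 2.0)  # hypothetical centre of the positive region

def instance_score(v):
    """Higher score = more likely positive (toy: inverse distance)."""
    return 1.0 / (1.0 + math.dist(v, PROTOTYPE))

def bag_score(bag):
    """Max over instance scores: one strong instance makes the bag positive."""
    return max(instance_score(v) for v in bag)

def classify_bag(bag, threshold=0.5):
    return bag_score(bag) >= threshold

print(classify_bag([(0.5, 0.5), (3.1, 2.0)]))  # True: one instance near prototype
print(classify_bag([(0.0, 0.0), (6.0, 6.0)]))  # False: all instances far away
```

Published methods such as MIL variants of SVMs replace the toy scorer with a learned decision function, but the bag-level pooling idea is the same.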

References

  • Dietterich, Thomas G.; Lathrop, Richard H.; Lozano-Pérez, Tomás (1997), "Solving the multiple instance problem with axis-parallel rectangles", Artificial Intelligence, 89 (1–2): 31–71, doi:10.1016/S0004-3702(96)00034-3.
  • Keeler, James D.; Rumelhart, David E.; Leow, Wee-Kheng (1990), "Integrated segmentation and recognition of hand-printed numerals", Proceedings of the 1990 Conference on Advances in Neural Information Processing Systems (NIPS 3), pp. 557–563.