'''Preference learning''' is a subfield of [[machine learning]] whose goal is to learn a predictive [[Preference (economics)|preference]] model from observed preference information. Viewed as a [[supervised learning]] problem, preference learning trains on a set of items for which preferences over labels or other items are known, and predicts the preferences for new items.
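As an illustration of this supervised view, the following minimal sketch (in Python) shows one way such preference information might be represented as training data; the feature values, label names and observed preferences are purely hypothetical.

<syntaxhighlight lang="python">
# Toy instances described by two numeric features (made-up values).
train_X = [(0.2, 1.5), (0.9, 0.3), (0.4, 0.8)]

# Observed preference information: for each instance, pairs (a, b) meaning
# "label a is preferred to label b".  The information may be incomplete.
train_prefs = [
    [("sports", "politics"), ("sports", "music")],
    [("music", "sports")],
    [("politics", "music"), ("sports", "music")],
]

# A preference learner turns such data into a model that, given a new
# feature vector, outputs a predicted preference over all labels, e.g.
#   predict((0.5, 0.5))  ->  ["sports", "politics", "music"]
</syntaxhighlight>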
 
While the concept of preference learning has been present for some time in many fields such as [[economics]],<ref name="SHOG00" /> it is a relatively new topic in [[artificial intelligence]] research. Several workshops have discussed preference learning and related topics over the past decade.<ref name="WEB:WORKSHOP" />
 
==Tasks==
===Preference relations===
 
The binary representation of preference information is called a preference relation. For each pair of alternatives (instances or labels), a binary predicate can be learned by a conventional supervised learning approach. Fürnkranz and Hüllermeier proposed this approach for the label ranking problem.<ref name="FURN03" /> For object ranking, an early approach was given by Cohen et al.<ref name="COHE98" />
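The pairwise scheme described above can be sketched as follows. The labels, features and observed preferences are hypothetical, and [[scikit-learn]]'s logistic regression is used only as a stand-in for an arbitrary binary learner.

<syntaxhighlight lang="python">
# Hypothetical sketch: learn one binary classifier per label pair, each
# predicting whether the first label is preferred to the second.
from itertools import combinations
from sklearn.linear_model import LogisticRegression

labels = ["sports", "politics", "music"]

# Training instances (feature vectors) and, for each, the observed pairwise
# preferences (a, b), meaning "label a is preferred to label b".
X = [[0.2, 1.5], [0.9, 0.3], [0.4, 0.8], [0.7, 1.1]]
prefs = [
    {("sports", "politics"), ("sports", "music"), ("politics", "music")},
    {("music", "sports"), ("politics", "sports"), ("music", "politics")},
    {("politics", "sports"), ("politics", "music")},
    {("sports", "music"), ("music", "politics")},
]

pairwise_models = {}
for a, b in combinations(labels, 2):
    Xab, yab = [], []
    for x, p in zip(X, prefs):
        if (a, b) in p:
            Xab.append(x)
            yab.append(1)          # a preferred to b
        elif (b, a) in p:
            Xab.append(x)
            yab.append(0)          # b preferred to a
    if len(set(yab)) == 2:         # need examples of both outcomes to fit
        pairwise_models[(a, b)] = LogisticRegression().fit(Xab, yab)

# Predicted probability that "sports" is preferred to "music" for a new instance.
p = pairwise_models[("sports", "music")].predict_proba([[0.5, 0.5]])[0][1]
</syntaxhighlight>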
 
Using preference relations to predict a ranking is not straightforward. Because the learned preference relation is not necessarily transitive, a ranking that satisfies all of the relations may not exist, or there may be more than one such ranking. A more common approach is therefore to find a ranking that is maximally consistent with the preference relations. This approach is a natural extension of pairwise classification.<ref name="FURN03" />
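A voting-style aggregation is one simple way to obtain such a ranking. In the sketch below, the pairwise preference probabilities are made-up inputs (in practice they would come from pairwise classifiers such as those sketched earlier), and each label is ranked by the total probability mass of being preferred.

<syntaxhighlight lang="python">
# Hypothetical pairwise preference probabilities P(a preferred to b).
pairwise_prob = {
    ("sports", "politics"): 0.8,
    ("sports", "music"): 0.6,
    ("politics", "music"): 0.3,
}

labels = ["sports", "politics", "music"]
votes = {label: 0.0 for label in labels}
for (a, b), p in pairwise_prob.items():
    votes[a] += p          # share of the vote for a over b
    votes[b] += 1.0 - p    # remaining share goes to b

# Labels ordered by total votes, i.e. a ranking maximally consistent
# with the (possibly conflicting) pairwise preferences.
ranking = sorted(labels, key=votes.get, reverse=True)
print(ranking)   # ['sports', 'music', 'politics'] for the values above
</syntaxhighlight>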
 
==Uses==
 
Preference learning can be used to rank search results according to feedback on user preferences. Given a query and a set of documents, a learned model is used to rank the documents by their relevance to the query. Further discussion of research in this field can be found in Tie-Yan Liu's survey paper.<ref name="LIU09" />
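A minimal, hypothetical sketch of this idea: each (query, document) pair is described by a small feature vector, a model is fitted to past relevance judgements, and new documents are ranked by predicted relevance. Real learning-to-rank systems use far richer features and specialised losses.

<syntaxhighlight lang="python">
from sklearn.linear_model import LinearRegression

# Features per (query, document) pair, e.g. [term overlap, click rate],
# with graded relevance judgements as targets (all values made up).
X_train = [[0.9, 0.7], [0.4, 0.2], [0.8, 0.1], [0.1, 0.05]]
relevance = [3, 1, 2, 0]

model = LinearRegression().fit(X_train, relevance)

# Rank the candidate documents of a new query by predicted relevance.
candidates = {"doc_a": [0.7, 0.6], "doc_b": [0.2, 0.3], "doc_c": [0.5, 0.1]}
scores = {d: model.predict([f])[0] for d, f in candidates.items()}
ranking = sorted(candidates, key=scores.get, reverse=True)
print(ranking)
</syntaxhighlight>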
 
Another application of preference learning is [[recommender systems]].<ref name="GEMM09" /> An online store may analyze customers' purchase records to learn a preference model and then recommend similar products to them. Internet content providers can use users' ratings to provide content that better matches their preferences.
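As an illustration only, the following sketch infers a recommendation from a made-up user–item rating matrix using a simple item-based nearest-neighbour scheme; real recommender systems use considerably more sophisticated preference models.

<syntaxhighlight lang="python">
import numpy as np

items = ["book_a", "book_b", "book_c", "book_d"]
# Rows = users, columns = items, 0 = not yet rated (hypothetical data).
ratings = np.array([
    [5, 0, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

# Recommend an unrated item for user 0: pick the item whose rating column
# is most similar to that of the user's highest-rated item.
user = ratings[0]
favourite = int(np.argmax(user))
unrated = [j for j in range(len(items)) if user[j] == 0]
best = max(unrated, key=lambda j: cosine(ratings[:, favourite], ratings[:, j]))
print(items[best])   # "book_b": rated similarly to book_a by the other users
</syntaxhighlight>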
 
==See also==
|first = Jason F.
|coauthors = List, John A.; Hayes, Dermot J.
|year = 2000
|title = Preference Learning in Consecutive Experimental Auctions
|url = http://ideas.repec.org/a/bla/ajagec/v82y2000i4p1016-21.html
|journal = American Journal of Agricultural Economics
|volume = 82
|pages = 1016–1021
}}</ref>
 
|chapterurl = http://books.google.com/books?id=nc3XcH9XSgYC&pg=PA4
|publisher = Springer-Verlag New York, Inc.
|pages = 3–8
|isbn = 978-3-642-14124-9
}}</ref>
|first = Sariel
|coauthors = Roth, Dan; Zimak, Dav
|year = 2003
|title = Constraint classification for multiclass classification and ranking
|journal = Proceedings of the 16th Annual Conference on Neural Information Processing Systems, NIPS-02
|pages = 785–792
}}</ref>
 
|first = Johannes
|coauthors = Hüllermeier, Eyke
|year = 2003
|title = Pairwise Preference Learning and Ranking
|journal = Proceedings of the 14th European Conference on Machine Learning
|pages = 145–156
}}</ref>
 
|first = William W.
|coauthors = Schapire, Robert E.; Singer, Yoram
|year = 1998
|title = Learning to order things
|url = http://dl.acm.org/citation.cfm?id=302528.302736
|journal = Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems
|pages = 451–457
}}</ref>
 
|last = Liu
|first = Tie-Yan
|year = 2009
|title = Learning to Rank for Information Retrieval
|url = http://dl.acm.org/citation.cfm?id=1618303.1618304
|volume = 3
|issue = 3
|pages = 225–331
|doi = 10.1561/1500000016
}}</ref>
|first = Marco De
|coauthors = Iaquinta, Leo; Lops, Pasquale; Musto, Cataldo; Narducci, Fedelucio; Semeraro, Giovanni
|year = 2009
|title = Preference Learning in Recommender Systems
|url = http://www.ecmlpkdd2009.net/wp-content/uploads/2008/09/preference-learning.pdf#page=45
|journal = Preference Learning
|volume = 41
|pages = 387–407
}}</ref>
 
}}
 
 
==External links==
*[http://www.preference-learning.org/ Preference Learning site]
 
{{Uncategorized|date=December 2011}}