==Tasks==
The main task in preference learning concerns problems of "[[learning to rank]]". According to the type of preference information observed, the tasks are categorized into three main problems in the book ''Preference Learning'':<ref>{{Cite book |url=https://books.google.se/books?id=nc3XcH9XSgYC&pg=PA4&redir_esc=y#v=onepage&q&f=false |title=Preference Learning |date=2010 |publisher=Springer |isbn=978-3-642-14124-9 |editor-last=Fürnkranz |editor-first=Johannes |editor-last2=Hüllermeier |editor-first2=Eyke |pages=3–8}}</ref>
===Label ranking===
In label ranking, the model has an instance space <math>X=\{x_i\}\,\!</math> and a finite set of labels <math>Y=\{y_i \mid i=1,2,\cdots,k\}\,\!</math>. The preference information is given in the form <math>y_i \succ_{x} y_j\,\!</math>, indicating that for instance <math>x\,\!</math>, label <math>y_i\,\!</math> is preferred to <math>y_j\,\!</math>. A set of such preference observations is used as training data, and the task is to predict a ranking of all labels for any new instance.
It has been observed that some conventional [[Classification in machine learning|classification]] problems can be generalized within the framework of the label ranking problem:<ref>{{Cite journal |last=Har-Peled |first=Sariel |last2=Roth |first2=Dan |last3=Zimak |first3=Dav |date=2002 |title=Constraint classification for multiclass classification and ranking |url=https://proceedings.neurips.cc/paper_files/paper/2002/file/16026d60ff9b54410b3435b403afd226-Paper.pdf |journal=Advances in Neural Information Processing Systems}}</ref> if a training instance <math>x\,\!</math> is labeled as class <math>y_i\,\!</math>, this implies that <math>\forall j \neq i, y_i \succ_{x} y_j\,\!</math>. In the [[Multi-label classification|multi-label]] case, <math>x\,\!</math> is associated with a set of labels <math>L \subseteq Y\,\!</math>, so the model can extract the preference information <math>\{y_i \succ_{x} y_j \mid y_i \in L, y_j \in Y\backslash L\}\,\!</math>. A preference model is then trained on this preference information, and the classification of an instance is simply its top-ranked label.
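The extraction of pairwise preferences from multi-label data described above can be sketched as follows (the label set and instance labels here are illustrative placeholders, not taken from any cited work):

```python
# Extract pairwise preference information from one multi-label training
# instance: each label associated with the instance (L) is preferred to
# each label that is not (Y \ L).

def preferences_from_multilabel(relevant, all_labels):
    """Return the set {(y_i, y_j) : y_i in L, y_j in Y \\ L}."""
    return {(yi, yj) for yi in relevant for yj in all_labels - relevant}

Y = {"a", "b", "c", "d"}   # finite label set (illustrative)
L = {"a", "c"}             # labels associated with instance x (illustrative)

prefs = preferences_from_multilabel(L, Y)
# 2 relevant labels x 2 irrelevant labels = 4 preference pairs
print(sorted(prefs))
```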
===Instance ranking===
Instance ranking resembles [[ordinal regression|ordinal classification]]: each instance belongs to one of a finite set of classes with a natural order. Training data consist of instances together with their class labels, and the task is to rank a new set of instances so that instances from higher classes precede those from lower ones.
===Preference relations===
The binary representation of preference information is called a preference relation. For each pair of alternatives (instances or labels), a binary predicate can be learned using conventional [[supervised learning]] approaches. Fürnkranz and Hüllermeier proposed this approach for the label ranking problem.<ref name="FURN03:0">{{Cite journal |last=Fürnkranz |first=Johannes |last2=Hüllermeier |first2=Eyke |date=2003 |editor-last=Lavrač |editor-first=Nada |editor2-last=Gamberger |editor2-first=Dragan |editor3-last=Blockeel |editor3-first=Hendrik |editor4-last=Todorovski |editor4-first=Ljupčo |title=Pairwise Preference Learning and Ranking |url=https://link.springer.com/chapter/10.1007/978-3-540-39857-8_15 |journal=Machine Learning: ECML 2003 |language=en |___location=Berlin, Heidelberg |publisher=Springer |pages=145–156 |doi=10.1007/978-3-540-39857-8_15 |isbn=978-3-540-39857-8}}</ref> For object ranking, an early approach was given by Cohen et al.<ref>{{Cite journal |last=Cohen |first=William W. |last2=Schapire |first2=Robert E. |last3=Singer |first3=Yoram |date=1998-07-31 |title=Learning to order things |url=https://dl.acm.org/doi/10.5555/302528.302736 |journal=Advances in Neural Information Processing Systems |___location=Cambridge, MA, USA |publisher=MIT Press |pages=451–457 |isbn=978-0-262-10076-2}}</ref>
Predicting a ranking from preference relations is less straightforward. Because the observed preference relations may not be transitive due to inconsistencies in the data, a ranking that satisfies all of them may not exist, or multiple rankings may satisfy them. A more common approach is therefore to find a ranking that is maximally consistent with the preference relations. This approach is a natural extension of pairwise classification.<ref name="FURN03:0" />
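One simple way to turn pairwise predictions into a ranking, in the spirit of the pairwise approach above, is voting: each alternative receives a vote for every other alternative it is predicted to beat, and alternatives are sorted by vote count. The predicate and the intransitive example data below are hypothetical; this is a heuristic sketch, not an exact maximally-consistent-ranking optimizer.

```python
# Rank alternatives by pairwise voting: count, for each alternative,
# how many others it is predicted to be preferred over, then sort by
# that count. Works even when predictions are intransitive.
from itertools import permutations

def rank_by_voting(alternatives, prefers):
    """prefers(a, b) -> True if a is predicted to be preferred over b."""
    votes = {a: 0 for a in alternatives}
    for a, b in permutations(alternatives, 2):
        if prefers(a, b):
            votes[a] += 1
    return sorted(alternatives, key=lambda a: votes[a], reverse=True)

# Illustrative intransitive predictions: a>b, b>c, c>a, and all beat d.
wins = {("a", "b"), ("b", "c"), ("c", "a"),
        ("a", "d"), ("b", "d"), ("c", "d")}
print(rank_by_voting(["a", "b", "c", "d"], lambda x, y: (x, y) in wins))
# d ranks last; a, b, c each have two votes, so their tie is kept in input order
```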
==Uses==
Preference learning can be used to rank search results according to feedback on user preferences. Given a query and a set of documents, a learned model finds the ranking of the documents according to their [[relevance (information retrieval)|relevance]] to the query. Further discussion of research in this field can be found in [[Tie-Yan Liu]]'s survey.<ref>{{Cite journal |last=Liu |first=Tie-Yan |date=2009 |title=Learning to Rank for Information Retrieval |url=http://www.nowpublishers.com/article/Details/INR-016 |journal=Foundations and Trends in Information Retrieval |language=en |volume=3 |issue=3 |pages=225–331 |doi=10.1561/1500000016 |issn=1554-0669}}</ref>
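The document-ranking setup can be illustrated minimally as follows. The query, documents, and the naive term-overlap score standing in for a learned relevance model are all hypothetical placeholders:

```python
# Minimal illustration of ranking documents for a query: a scoring
# function (here a toy term-overlap count, standing in for a learned
# relevance model) assigns each document a score, and documents are
# returned sorted by descending score.

def score(query, doc):
    """Toy relevance score: number of query terms appearing in the document."""
    return sum(term in doc.lower().split() for term in query.lower().split())

def rank_documents(query, docs):
    return sorted(docs, key=lambda d: score(query, d), reverse=True)

docs = ["preference learning for ranking",
        "cooking pasta at home",
        "learning to rank search results"]
print(rank_documents("learning to rank", docs))
```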
Another application of preference learning is in [[recommender systems]].<ref>{{Citation |last=Gemmis |first=Marco de |title=Learning Preference Models in Recommender Systems |date=2010 |work=Preference Learning |pages=387–407 |editor-last=Fürnkranz |editor-first=Johannes |url=http://link.springer.com/10.1007/978-3-642-14125-6_18 |access-date=2024-11-05 |publisher=Springer |language=en |doi=10.1007/978-3-642-14125-6_18 |isbn=978-3-642-14124-9 |last2=Iaquinta |first2=Leo |last3=Lops |first3=Pasquale |last4=Musto |first4=Cataldo |last5=Narducci |first5=Fedelucio |last6=Semeraro |first6=Giovanni |editor2-last=Hüllermeier |editor2-first=Eyke}}</ref> An online store may analyze customers' purchase records to learn a preference model and then recommend similar products. Internet content providers can use users' ratings to provide content better matched to their preferences.
==References==
{{Reflist}}
==External links==