{{Short description|Subfield of machine learning}}
'''Preference learning''' is a subfield of [[machine learning]] that focuses on modeling and predicting preferences based on observed preference information.<ref>{{Cite book |last1=Mohri |first1=Mehryar |last2=Rostamizadeh |first2=Afshin |last3=Talwalkar |first3=Ameet |title=Foundations of Machine Learning |date=2012 |publisher=MIT Press}}</ref> Preference learning typically involves [[supervised learning]] using datasets of pairwise preference comparisons, rankings, or other preference information.
 
While the concept of preference learning has existed for some time in fields such as [[economics]],<ref name="SHOG00" /> it is a relatively new topic in [[artificial intelligence]] research. Several workshops have discussed preference learning and related topics in the past decade.<ref name="WEB:WORKSHOP" />
 
==Tasks==
 
The main task in preference learning concerns problems in "[[learning to rank]]". Depending on the type of preference information observed, the tasks are categorized into three main problems in the book ''Preference Learning'':<ref>{{Cite book |url=https://books.google.com/books?id=nc3XcH9XSgYC&pg=PA4 |title=Preference Learning |date=2010 |publisher=Springer |isbn=978-3-642-14124-9 |editor-last=Fürnkranz |editor-first=Johannes |pages=3–8 |editor-last2=Hüllermeier |editor-first2=Eyke}}</ref>
 
===Label ranking===
In label ranking, the model has an instance space <math>X=\{x_i\}\,\!</math> and a finite set of labels <math>Y=\{y_i|i=1,2,\cdots,k\}\,\!</math>. The preference information is given in the form <math>y_i \succ_{x} y_j\,\!</math>, indicating that for instance <math>x\,\!</math>, label <math>y_i\,\!</math> is preferred to <math>y_j\,\!</math>. A set of such preferences is used as training data, and the task of the model is to predict a ranking of the labels for any instance.
 
It has been observed that some conventional [[Classification in machine learning|classification]] problems can be generalized within the label ranking framework:<ref>{{Cite journal |last1=Har-Peled |first1=Sariel |last2=Roth |first2=Dan |last3=Zimak |first3=Dav |date=2002 |title=Constraint classification for multiclass classification and ranking |url=https://proceedings.neurips.cc/paper_files/paper/2002/file/16026d60ff9b54410b3435b403afd226-Paper.pdf |journal=NeurIPS}}</ref> if a training instance <math>x\,\!</math> is labeled as class <math>y_i\,\!</math>, it implies that <math>\forall j \neq i, y_i \succ_{x} y_j\,\!</math>. In the [[Multi-label classification|multi-label]] case, <math>x\,\!</math> is associated with a set of labels <math>L \subseteq Y\,\!</math>, from which the model can extract the preference information <math>\{y_i \succ_{x} y_j | y_i \in L, y_j \in Y\backslash L\}\,\!</math>. A preference model can then be trained on this information, and the classification result for an instance is simply the top-ranked label.
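This reduction can be sketched in a few lines of Python; the function names are illustrative, not from the cited literature:

```python
# Extracting pairwise label preferences from conventional classification
# data, as described above. Names are illustrative, not standard.

def prefs_from_multiclass(label, all_labels):
    """A multiclass label y_i induces y_i > y_j for every other label."""
    return {(label, y) for y in all_labels if y != label}

def prefs_from_multilabel(relevant, all_labels):
    """A relevant-label set L induces y_i > y_j for y_i in L, y_j outside L."""
    irrelevant = set(all_labels) - set(relevant)
    return {(yi, yj) for yi in relevant for yj in irrelevant}

labels = ["a", "b", "c", "d"]
# an instance labeled "a" prefers "a" over each of "b", "c", "d"
multiclass_prefs = prefs_from_multiclass("a", labels)
# an instance with relevant labels {"a", "b"} prefers each over "c" and "d"
multilabel_prefs = prefs_from_multilabel({"a", "b"}, labels)
```

Training on the extracted pairs then proceeds as in ordinary label ranking; the predicted ranking's top label recovers the classification.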
 
===Instance ranking===
==Techniques==

===Utility function===
If we can find a mapping from the data to real numbers, ranking the data reduces to ranking real numbers. This mapping is called a [[utility function]]. For label ranking, the mapping is a function <math>f: X \times Y \rightarrow \mathbb{R}\,\!</math> such that <math>y_i \succ_x y_j \Rightarrow f(x,y_i) > f(x,y_j)\,\!</math>. For instance ranking and object ranking, the mapping is a function <math>f: X \rightarrow \mathbb{R}\,\!</math>.
 
Finding the utility function is a [[Regression analysis|regression]] learning problem,{{citation needed|date=March 2025}} a setting that is well developed in machine learning.
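As a minimal sketch, a linear utility function can be fitted from pairwise comparisons with a perceptron-style update; the linear model and its hyperparameters are illustrative assumptions, not a method prescribed by the sources above:

```python
# Learn a linear utility f(x) = w·x from object preferences x_i > x_j,
# so that f(x_i) > f(x_j). A perceptron-style update on violated pairs;
# all names and hyperparameters are illustrative.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_utility(pairs, dim, epochs=100, lr=0.1):
    """pairs: list of (preferred, dispreferred) feature vectors."""
    w = [0.0] * dim
    for _ in range(epochs):
        for xi, xj in pairs:
            if dot(w, xi) <= dot(w, xj):  # preference violated
                # shift w toward the difference vector xi - xj
                w = [wk + lr * (a - b) for wk, a, b in zip(w, xi, xj)]
    return w

# toy data: objects with a larger first feature are preferred
pairs = [((2.0, 0.0), (1.0, 1.0)), ((3.0, 1.0), (1.0, 2.0))]
w = train_utility(pairs, dim=2)

# rank objects by their learned utility, highest first
objects = [(1.0, 1.0), (3.0, 1.0), (2.0, 0.0)]
ranking = sorted(objects, key=lambda x: dot(w, x), reverse=True)
```

Once <code>w</code> is learned, ranking any set of objects is just sorting by the scalar utility, which is the point of the utility-function technique.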
 
===Preference relations===
 
The binary representation of preference information is called a preference relation. For each pair of alternatives (instances or labels), a binary predicate can be learned by a conventional supervised learning approach. Fürnkranz and Hüllermeier proposed this approach for the label ranking problem.<ref name=":0">{{Cite book |last1=Fürnkranz |first1=Johannes |last2=Hüllermeier |first2=Eyke |chapter=Pairwise Preference Learning and Ranking |series=Lecture Notes in Computer Science |date=2003 |volume=2837 |editor-last=Lavrač |editor-first=Nada |editor2-last=Gamberger |editor2-first=Dragan |editor3-last=Blockeel |editor3-first=Hendrik |editor4-last=Todorovski |editor4-first=Ljupčo |title=Machine Learning: ECML 2003 |chapter-url=https://link.springer.com/chapter/10.1007/978-3-540-39857-8_15 |language=en |___location=Berlin, Heidelberg |publisher=Springer |pages=145–156 |doi=10.1007/978-3-540-39857-8_15 |isbn=978-3-540-39857-8}}</ref> For object ranking, there is an early approach by Cohen et al.<ref>{{Cite journal |last1=Cohen |first1=William W. |last2=Schapire |first2=Robert E. |last3=Singer |first3=Yoram |date=1998-07-31 |title=Learning to order things |url=https://dl.acm.org/doi/10.5555/302528.302736 |journal=NeurIPS |___location=Cambridge, MA, USA |publisher=MIT Press |pages=451–457 |isbn=978-0-262-10076-2}}</ref>
 
Using preference relations to predict a ranking is not straightforward. Since observed preference relations may not be transitive due to inconsistencies in the data, a ranking that satisfies all of them may not exist, or more than one may exist. A more common approach is to find a ranking that is maximally consistent with the preference relations. This approach is a natural extension of pairwise classification.<ref name=":0" />
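The pairwise approach can be sketched as follows; the win-counting (voting) scheme is one simple way to obtain a ranking largely consistent with the relation, and the hard-coded predicate stands in for a classifier learned from data:

```python
# Ranking from a pairwise preference relation by counting "wins": each
# alternative is scored by how many others it is preferred to, a simple
# voting scheme that yields a ranking even when the relation is
# intransitive. The predicate below stands in for a learned classifier.

from itertools import permutations

def rank_by_votes(alternatives, prefers):
    wins = {a: 0 for a in alternatives}
    for a, b in permutations(alternatives, 2):
        if prefers(a, b):
            wins[a] += 1
    return sorted(alternatives, key=lambda a: wins[a], reverse=True)

# an intransitive relation: a > b > c > a (a cycle), and all beat d;
# no ranking satisfies every relation, but voting still produces one
relation = {("a", "b"), ("b", "c"), ("c", "a"),
            ("a", "d"), ("b", "d"), ("c", "d")}
order = rank_by_votes(["a", "b", "c", "d"], lambda x, y: (x, y) in relation)
# "d" loses every comparison and is ranked last
```

The cycle among <code>a</code>, <code>b</code>, and <code>c</code> illustrates why an exactly consistent ranking can be unreachable, while the voting scheme still places the universally dispreferred <code>d</code> last.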
 
==Uses==
 
Preference learning can be used to rank search results according to user preference feedback. Given a query and a set of documents, a learned model produces a ranking of the documents by their [[relevance (information retrieval)|relevance]] to the query. More discussion of research in this field can be found in [[Tie-Yan Liu]]'s survey.<ref>{{Cite journal |last=Liu |first=Tie-Yan |date=2007 |title=Learning to Rank for Information Retrieval |url=http://www.nowpublishers.com/article/Details/INR-016 |journal=Foundations and Trends in Information Retrieval |language=en |volume=3 |issue=3 |pages=225–331 |doi=10.1561/1500000016 |issn=1554-0669 |url-access=subscription}}</ref>
 
Another application of preference learning is [[recommender systems]].<ref>{{Citation |last1=Gemmis |first1=Marco de |title=Learning Preference Models in Recommender Systems |date=2010 |work=Preference Learning |pages=387–407 |editor-last=Fürnkranz |editor-first=Johannes |url=http://link.springer.com/10.1007/978-3-642-14125-6_18 |access-date=2024-11-05 |publisher=Springer |language=en |doi=10.1007/978-3-642-14125-6_18 |isbn=978-3-642-14124-9 |last2=Iaquinta |first2=Leo |last3=Lops |first3=Pasquale |last4=Musto |first4=Cataldo |last5=Narducci |first5=Fedelucio |last6=Semeraro |first6=Giovanni |editor2-last=Hüllermeier |editor2-first=Eyke |url-access=subscription}}</ref> An online store may analyze its customers' purchase records to learn a preference model and then recommend similar products. Internet content providers can use users' ratings to provide content that better matches their preferences.
 
==See also==
*[[Learning to rank]]
 
==References==
 
{{Reflist|refs=
 
<ref name="SHOG00">{{
cite journal
|last1 = Shogren
|first1 = Jason F.
|last2 = List
|first2 = John A.
|last3 = Hayes
|first3 = Dermot J.
|year = 2000
|title = Preference Learning in Consecutive Experimental Auctions
|url = http://ideas.repec.org/a/bla/ajagec/v82y2000i4p1016-21.html
|journal = American Journal of Agricultural Economics
|volume = 82
|issue = 4
|pages = 1016–1021
}}</ref>
 
<ref name="WEB:WORKSHOP">{{
cite web
|title = Preference learning workshops
|url = http://www.preference-learning.org/#Workshops
}}</ref>
 
 
 
 
}}
 
==External links==
*[http://www.preference-learning.org/ Preference Learning site]
 
[[Category:Information retrieval techniques]]
[[Category:Machine learning]]