{{Short description|Subfield of machine learning}}
'''Preference learning''' is a subfield of [[machine learning]] that focuses on modeling and predicting preferences based on observed preference information.<ref>{{Cite Mehryar Afshin Ameet 2012}}</ref> Preference learning typically involves [[supervised learning]] using datasets of pairwise preference comparisons, rankings, or other preference information.
==Tasks==
The main task in preference learning concerns problems in "[[learning to rank]]". Depending on the type of preference information observed, the tasks are categorized as three main problems in the book ''Preference Learning'':<ref>{{Cite book |url=https://books.google.}}</ref>
===Label ranking===
If we can find a mapping from the data to real numbers, ranking the data reduces to ranking real numbers. This mapping is called a [[utility function]]. For label ranking, the mapping is a function <math>f: X \times Y \rightarrow \mathbb{R}\,\!</math> such that <math>y_i \succ_x y_j \Rightarrow f(x,y_i) > f(x,y_j)\,\!</math>. For instance ranking and object ranking, the mapping is a function <math>f: X \rightarrow \mathbb{R}\,\!</math>.
Finding the utility function is a [[Regression analysis|regression]] learning problem{{citation needed|date=March 2025}}, which is well developed in machine learning.
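As an illustration, a linear utility function <math>f(x) = w \cdot x</math> can be fitted to pairwise comparisons with a logistic (Bradley–Terry style) model of the utility gap; the data, learning rate, and epoch count below are illustrative assumptions, not a specific published method:

```python
# Minimal sketch: learn a linear utility f(x) = w . x from pairwise
# comparisons (i, j) meaning "item i is preferred to item j", by gradient
# ascent on a logistic model of the utility gap. All data are invented.
import math

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fit_utility(items, prefs, lr=0.1, epochs=200):
    """items: list of feature vectors; prefs: list of (i, j), i preferred to j."""
    w = [0.0] * len(items[0])
    for _ in range(epochs):
        for i, j in prefs:
            diff = [a - b for a, b in zip(items[i], items[j])]
            p = 1.0 / (1.0 + math.exp(-dot(w, diff)))  # modeled P(i > j)
            # gradient of the log-likelihood pushes w toward the preferred item
            w = [wk + lr * (1.0 - p) * dk for wk, dk in zip(w, diff)]
    return w

items = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
prefs = [(0, 1), (0, 2), (2, 1)]  # observed: item 0 > item 2 > item 1
w = fit_utility(items, prefs)
scores = [dot(w, x) for x in items]
ranking = sorted(range(len(items)), key=lambda k: -scores[k])
```

Once the utility is learned, ranking any set of items is simply sorting by their scores.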
===Preference relations===
The binary representation of preference information is called a preference relation. For each pair of alternatives (instances or labels), a binary predicate can be learned with a conventional supervised learning approach. Fürnkranz and Hüllermeier proposed this approach for the label ranking problem.<ref name=":0">{{Cite journal}}</ref>
Predicting a ranking from preference relations is less straightforward. Because observed preference relations may be intransitive due to inconsistencies in the data, a ranking satisfying all of them may not exist, or several may exist. A more common approach is therefore to find a ranking that is maximally consistent with the preference relations. This approach is a natural extension of pairwise classification.<ref name=":0" />
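One simple way to aggregate possibly intransitive pairwise relations into a ranking is to score each alternative by the number of comparisons it wins and sort by that score (a Borda-count-like heuristic, shown here as an illustration rather than the cited authors' algorithm):

```python
# Sketch: derive a ranking from possibly intransitive pairwise preference
# relations by counting wins per alternative. Data below are invented.
from collections import Counter

def rank_from_relations(alternatives, relations):
    """relations: list of (a, b) pairs meaning 'a is preferred to b'."""
    wins = Counter()
    for a, b in relations:
        wins[a] += 1
    # sort is stable, so alternatives with equal scores keep their input order
    return sorted(alternatives, key=lambda x: -wins[x])

alts = ["A", "B", "C"]
# an intransitive cycle A>B, B>C, C>A, plus one extra observation A>C
rels = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")]
ranking = rank_from_relations(alts, rels)
```

Even though no ranking satisfies all four relations, the win-count heuristic returns one that violates as few as possible here.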
==Uses==
Preference learning can be used to rank search results according to user-preference feedback. Given a query and a set of documents, a learned model ranks the documents by their [[relevance (information retrieval)|relevance]] to the query. Further discussion of research in this field can be found in [[Tie-Yan Liu]]'s survey paper.<ref>{{Cite journal |last=Liu |first=Tie-Yan |date=2007 |title=Learning to Rank for Information Retrieval |url=http://www.nowpublishers.com/article/Details/INR-016 |journal=Foundations and Trends in Information Retrieval}}</ref>
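At query time, this amounts to mapping each (query, document) pair to features, scoring it with the learned model, and sorting documents by score. The feature function and weights below are invented placeholders standing in for a trained ranker, not a real retrieval model:

```python
# Hedged sketch of query-time ranking: score each (query, document) pair
# with a (pretend-)learned linear model and sort by score. All features,
# weights, and documents are illustrative assumptions.
def features(query, doc):
    terms = set(query.split())
    overlap = sum(1 for t in doc.split() if t in terms)
    return [overlap, len(doc.split())]  # term overlap, document length

def score(w, query, doc):
    return sum(wi * fi for wi, fi in zip(w, features(query, doc)))

w = [1.0, -0.01]  # assumed weights produced by some preference-learning step
docs = [
    "preference learning survey",
    "cooking recipes",
    "learning to rank for retrieval",
]
query = "preference learning"
ranked = sorted(docs, key=lambda d: -score(w, query, d))
```

The same pattern applies whatever model produces the scores; only the `score` function changes.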
Another application of preference learning is in [[recommender systems]].<ref>{{Citation}}</ref>
==References==
{{Reflist}}