Preference learning

==Tasks==
 
The main task in preference learning concerns problems of "[[learning to rank]]". According to the type of preference information observed, the tasks are categorized into three main problems in the book ''Preference Learning'':<ref name="FURN11" />
 
===Label ranking===
In label ranking, the model has an instance space <math>X=\{x_i\}\,\!</math> and a finite set of labels <math>Y=\{y_i|i=1,2,\cdots,k\}\,\!</math>. The preference information is given in the form <math>y_i \succ_{x} y_j\,\!</math>, indicating that for instance <math>x\,\!</math> the label <math>y_i\,\!</math> is preferred over <math>y_j\,\!</math>. A set of such preference statements is used as the training data of the model. The task of the model is to produce a preference ranking of the labels for any instance.
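A minimal sketch of this setting, not taken from the cited book, is given below: each training example pairs an instance with observed pairwise preferences <math>y_i \succ_{x} y_j\,\!</math>, one weight vector per label is learned with perceptron-style updates on violated preferences, and the predicted ranking sorts the labels by score. The label names and feature vectors are purely illustrative.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative label set Y and training data: (instance features,
# list of (preferred, less_preferred) label pairs observed for that instance).
labels = ["cat", "dog", "bird"]
data = [
    (np.array([1.0, 0.0]), [("cat", "dog"), ("cat", "bird")]),
    (np.array([0.0, 1.0]), [("dog", "cat"), ("dog", "bird")]),
]

# One scoring weight vector per label, trained with perceptron-style updates
# whenever an observed pairwise preference is violated by the current scores.
w = {y: np.zeros(2) for y in labels}
for _ in range(10):                            # a few passes over the data
    for x, prefs in data:
        for better, worse in prefs:
            if w[better] @ x <= w[worse] @ x:  # preference y_i > y_j violated
                w[better] += x                 # raise the preferred label's score
                w[worse] -= x                  # lower the other label's score

def rank(x):
    """Predicted label ranking for instance x, best label first."""
    return sorted(labels, key=lambda y: w[y] @ x, reverse=True)

print(rank(np.array([1.0, 0.0])))  # e.g. ['cat', 'bird', 'dog']
</syntaxhighlight>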
 
It was observed that some conventional [[Classification in machine learning|classification]] problems can be generalized within the framework of label ranking:<ref name="HARP03" /> if a training instance <math>x\,\!</math> is labeled as class <math>y_i\,\!</math>, it implies that <math>\forall j \neq i, y_i \succ_{x} y_j\,\!</math>. In the [[Multi-label classification|multi-label]] situation, <math>x\,\!</math> is associated with a set of labels <math>L \subseteq Y\,\!</math>, and thus the model can extract the set of preference information <math>\{y_i \succ_{x} y_j | y_i \in L, y_j \in Y\backslash L\}\,\!</math>. After training a preference model on this preference information, the classification result for an instance is simply the corresponding top-ranked label.
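The reduction described above can be sketched in a few lines: a multi-label example <math>(x, L)\,\!</math> yields all pairs that prefer a relevant label over an irrelevant one, and a trained label ranker classifies by returning its top-ranked label. The label set used here is hypothetical.

<syntaxhighlight lang="python">
# Illustrative label set Y (not from the cited references).
Y = {"cat", "dog", "bird", "fish"}

def preferences_from_multilabel(relevant):
    """Pairs (preferred, less_preferred) implied by a relevant-label set L ⊆ Y."""
    return {(yi, yj) for yi in relevant for yj in Y - set(relevant)}

print(sorted(preferences_from_multilabel({"cat", "dog"})))
# [('cat', 'bird'), ('cat', 'fish'), ('dog', 'bird'), ('dog', 'fish')]

def classify(ranking):
    """Single-label classification from a predicted label ranking: take the top label."""
    return ranking[0]
</syntaxhighlight>

In the ordinary single-label case, <math>L=\{y_i\}\,\!</math>, so the extracted preferences are exactly <math>y_i \succ_{x} y_j\,\!</math> for all <math>j \neq i\,\!</math>, and the top-ranked label coincides with the usual classification output.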
 
===Instance ranking===