Ranking

Some notes on ranking techniques

Basics

Pointwise, Pairwise and Listwise Learning to Rank: https://medium.com/@mayurbhangale/pointwise-pairwise-and-listwise-learning-to-rank-baf0ad76203e

Point-wise ranking

In point-wise ranking, you have a relevance score \(\displaystyle y_i\) for each document, so you can train your model \(\displaystyle f\) to predict these scores directly in a supervised manner (e.g. as a regression problem).
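A minimal sketch of this (not from the original notes), treating point-wise ranking as plain regression with a made-up linear scoring model in PyTorch:

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Hypothetical linear scoring model mapping a (query, document) feature vector to a score.
model = nn.Linear(16, 1)

def pointwise_loss(features, labels):
    # Point-wise LTR as plain regression: predict the relevance label y_i directly.
    preds = model(features).squeeze(-1)          # f(x_i) for each document
    return nn.functional.mse_loss(preds, labels)

# Toy data: 8 documents with 16 features each and graded relevance labels (made up).
features = torch.randn(8, 16)
labels = torch.tensor([3.0, 2.0, 0.0, 1.0, 0.0, 2.0, 1.0, 0.0])
loss = pointwise_loss(features, labels)
loss.backward()   # gradients for an ordinary supervised training step
</syntaxhighlight>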

Pair-wise ranking

If your data is of the form \(\displaystyle y(x_a) \gt y(x_b)\), then you can train your model to maximize the margin \(\displaystyle f(x_a) - f(x_b)\) using a hinge loss: \(\displaystyle \begin{equation} L(x_a, x_b) = \max(0, 1-(f(x_a) - f(x_b))) \end{equation} \)
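A small sketch of this hinge loss, assuming the scores \(\displaystyle f(x_a)\) and \(\displaystyle f(x_b)\) have already been computed by some model (the numbers are made up):

<syntaxhighlight lang="python">
import torch

def pairwise_hinge_loss(f_a, f_b):
    # L(x_a, x_b) = max(0, 1 - (f(x_a) - f(x_b))), averaged over a batch of pairs.
    return torch.clamp(1.0 - (f_a - f_b), min=0.0).mean()

# Made-up scores where each x_a should be ranked above its paired x_b.
f_a = torch.tensor([2.1, 0.3, 1.5])
f_b = torch.tensor([1.9, 0.8, -0.2])
loss = pairwise_hinge_loss(f_a, f_b)
</syntaxhighlight>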

Listwise ranking

Use something like ListMLE (https://auai.org/uai2014/proceedings/individuals/164.pdf).
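ListMLE minimizes the negative log-likelihood of the ground-truth ordering under the Plackett-Luce model induced by the predicted scores. A rough sketch of that loss (the scores and true ordering below are made up, and this is the standard ListMLE formulation rather than code from the linked paper):

<syntaxhighlight lang="python">
import torch

def listmle_loss(scores, true_order):
    # Negative log-likelihood of the ground-truth permutation under Plackett-Luce:
    # loss = sum_i [ logsumexp(s_i, ..., s_n) - s_i ], with s the predicted scores
    # arranged in the true order (most relevant first).
    s = scores[true_order]
    suffix_lse = torch.logcumsumexp(s.flip([0]), dim=0).flip([0])
    return (suffix_lse - s).sum()

# Made-up predicted scores and the document indices sorted by true relevance.
scores = torch.tensor([0.2, 1.4, -0.5, 0.9])
true_order = torch.tensor([1, 3, 0, 2])
loss = listmle_loss(scores, true_order)
</syntaxhighlight>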

Metrics

See https://medium.com/swlh/rank-aware-recsys-evaluation-metrics-5191bba16832

Cumulative Gain

Suppose you have a list of results \(\displaystyle x_1,..., x_n\) with relevance scores \(\displaystyle r_1,...,r_n\).
Then the cumulative gain at position \(\displaystyle p\) is the sum of the relevance of the first \(\displaystyle p\) results: \(\displaystyle \begin{equation} CG_p = \sum_{i=1}^{p} r_i \end{equation} \)

The discounted cumulative gain (DCG) takes the position into account, discounting lower-ranked results: \(\displaystyle \begin{equation} DCG_p = \sum_{i=1}^{p} \frac{r_i}{\log_2 (i+1)} \end{equation} \)

The normalized discounted cumulative gain (NDCG) rescales this to \(\displaystyle [0, 1]\) by dividing by the DCG of the best possible ranking: \(\displaystyle \begin{equation} NDCG_p = \frac{DCG_p(\mathbf{r})}{\max_{\mathbf{r}'} DCG_p(\mathbf{r}')} \end{equation} \)
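A small sketch computing CG, DCG, and NDCG at position \(\displaystyle p\) from a list of relevance scores (the relevance values below are made up):

<syntaxhighlight lang="python">
import numpy as np

def cg(rels, p):
    # Cumulative gain: sum of the relevance of the first p results.
    return float(np.sum(rels[:p]))

def dcg(rels, p):
    # Discounted cumulative gain: divide r_i by log2(i + 1) for i = 1..p.
    r = np.asarray(rels, dtype=float)[:p]
    return float(np.sum(r / np.log2(np.arange(2, r.size + 2))))

def ndcg(rels, p):
    # Normalize by the DCG of the best possible (relevance-sorted) ordering.
    best = dcg(sorted(rels, reverse=True), p)
    return dcg(rels, p) / best if best > 0 else 0.0

rels = [3, 2, 3, 0, 1, 2]   # relevance of the returned results, in ranked order
print(cg(rels, 4), dcg(rels, 4), ndcg(rels, 4))
</syntaxhighlight>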

Mean Reciprocal Rank

Main article: Wikipedia: Mean reciprocal rank
If you only have one correct answer per query and it is placed at rank \(\displaystyle i\), then the reciprocal rank is \(\displaystyle 1/i\).
For multiple queries, the mean reciprocal rank is simply the average \(\displaystyle \operatorname{mean}(1/rank)\) over the queries.
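A one-line sketch, assuming ranks holds the 1-based rank of the first correct answer for each query:

<syntaxhighlight lang="python">
def mean_reciprocal_rank(ranks):
    # ranks[q] is the 1-based rank of the first correct answer for query q.
    return sum(1.0 / r for r in ranks) / len(ranks)

# e.g. the correct answer appeared at ranks 1, 3 and 2 for three queries:
print(mean_reciprocal_rank([1, 3, 2]))   # (1 + 1/3 + 1/2) / 3 ≈ 0.611
</syntaxhighlight>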