Ranking using pairwise preferences


Transcript of Ranking using pairwise preferences

Page 1: Ranking using pairwise preferences

Ranking Using Pairwise Preferences

Himanshi Sinha (12202), Lakshmi Bansal (13014), Sweta Sharma (12392)

Page 2

Problem Definition

• There is a set of n items, and we are given pairwise comparisons among them. The problem is to compare the algorithms on the basis of the number of comparisons required to converge to an optimal ranking.

Page 3

Algorithms Implemented

• Rank Centrality:
– Each pair (i, j) of items is compared K times.
– The transition matrix P is defined using the outcomes of these comparisons.
– The limiting (stationary) distribution π of the matrix P defines the numerical scores of the n items.

Sahand Negahban, Sewoong Oh, and Devavrat Shah. Rank Centrality: Ranking from pair-wise comparisons. arXiv preprint arXiv:1209.1688, 2012.
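The steps on this slide can be sketched as follows. This is a minimal illustration, assuming `wins[i, j]` counts how often item i beat item j and using the standard Rank Centrality walk (move from i to j with probability proportional to the fraction of comparisons j won):

```python
import numpy as np

def rank_centrality(wins, iters=1000):
    """Scores from a matrix of pairwise win counts.

    wins[i, j] = number of times item i beat item j.
    Builds the Rank Centrality random walk and returns its
    stationary distribution pi as the item scores.
    """
    n = wins.shape[0]
    total = wins + wins.T            # comparisons per pair
    d_max = n - 1                    # upper bound on the comparison degree
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and total[i, j] > 0:
                P[i, j] = wins[j, i] / total[i, j] / d_max
        P[i, i] = 1.0 - P[i].sum()   # make each row stochastic
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):           # power iteration for the
        pi = pi @ P                  # limiting distribution
    return pi

# Toy example: 3 items, each pair compared K = 10 times.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]])
scores = rank_centrality(wins)
print(np.argsort(-scores))   # best-to-worst order: [0 1 2]
```

The diagonal padding keeps P row-stochastic, so the power iteration converges to π whenever the comparison graph is connected.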

Page 4

Rank Aggregation via NNM

• Each pair (i, j) of items is compared K times, and a skew-symmetric matrix Y is defined as follows:

• Then singular value projection is used to find a rank-2 approximation of the matrix Y.

• This approximation is then used to calculate the final scores of the items.

David F. Gleich and Lek-Heng Lim. Rank aggregation via nuclear norm minimization. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2011.
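A minimal sketch of this slide. The exact definition of Y is not shown in the transcript, so this assumes the empirical win-margin Y[i, j] = (wins[i, j] − wins[j, i]) / K, and it uses a plain truncated SVD in place of singular value projection (SVP is needed when many pairs are unobserved; with full data the two coincide):

```python
import numpy as np

def nnm_rank_scores(wins):
    """Rank-2 approximation of a skew-symmetric comparison matrix.

    Assumes Y[i, j] is the empirical margin by which i beats j
    (an illustrative choice; the slide's formula image is missing).
    For an ideal Y = s*1^T - 1*s^T, the row means of the rank-2
    approximation recover the scores s up to a constant shift.
    """
    total = wins + wins.T
    Y = np.zeros(wins.shape)
    mask = total > 0
    Y[mask] = (wins - wins.T)[mask] / total[mask]
    U, S, Vt = np.linalg.svd(Y)
    Y2 = U[:, :2] @ np.diag(S[:2]) @ Vt[:2, :]   # best rank-2 approximation
    return Y2.mean(axis=1)

wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]])
print(np.argsort(-nnm_rank_scores(wins)))   # best-to-worst order: [0 1 2]
```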

Page 5

SVM-Rank Aggregation

• Inducing the dataset: S_P = {(v_ij, z_ij)}_{i<j}, consisting of vectors v_ij = P_i − P_j, where P_i denotes the i-th column of P, together with binary labels z_ij = sign(P_ji − P_ij).

• We find a hyperplane separating the above dataset.

• Scores for each item are then calculated using the weight vector w.

Arun Rajkumar and Shivani Agarwal. A statistical convergence perspective of algorithms for rank aggregation from pairwise data. In Proceedings of the 31st International Conference on Machine Learning, 2014.
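A minimal sketch of this pipeline, under two stated assumptions: P[i, j] is taken to be the empirical probability that i beats j, and a simple hinge-loss subgradient loop stands in for an off-the-shelf SVM solver. As on the slide, the entries of the learned w are used as item scores:

```python
import numpy as np

def svm_rank_scores(P, epochs=200, lr=0.1, lam=0.01):
    """SVM-style rank aggregation sketch.

    Builds the dataset v_ij = P_i - P_j (columns of P) with labels
    z_ij = sign(P[j, i] - P[i, j]) and fits a linear separator by
    regularized hinge-loss subgradient descent (a stand-in for a
    proper SVM solver). The weight vector w gives the item scores.
    """
    n = P.shape[0]
    X, z = [], []
    for i in range(n):
        for j in range(i + 1, n):
            if P[j, i] != P[i, j]:           # skip exactly tied pairs
                X.append(P[:, i] - P[:, j])   # v_ij = P_i - P_j
                z.append(np.sign(P[j, i] - P[i, j]))
    X, z = np.array(X), np.array(z)
    w = np.zeros(n)
    for _ in range(epochs):
        margins = z * (X @ w)
        grad = lam * w
        viol = margins < 1                    # hinge-loss violators
        if viol.any():
            grad = grad - (z[viol, None] * X[viol]).mean(axis=0)
        w = w - lr * grad
    return w

# Toy preference matrix for 3 items (item 0 strongest).
P = np.array([[0.0, 0.8, 0.9],
              [0.2, 0.0, 0.7],
              [0.1, 0.3, 0.0]])
w = svm_rank_scores(P)
print(np.argsort(-w))   # ranking induced by w
```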

Page 6

DATASETS

• A dataset generated synthetically using the Bradley-Terry-Luce (BTL) model, which also gives us the true ranking for evaluating the correctness of our implementations.

• From a real SOC (strict order, complete list) dataset we generated pairwise data, which also gives us the ground truth.
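The synthetic generation in the first bullet can be sketched as below. Under the BTL model, item i beats item j with probability w_i / (w_i + w_j), so the chosen weights directly give the ground-truth ranking (the particular weights here are illustrative):

```python
import numpy as np

def btl_comparisons(weights, K, seed=None):
    """Simulate K comparisons per pair under the Bradley-Terry-Luce model.

    P(i beats j) = weights[i] / (weights[i] + weights[j]).
    Returns a win-count matrix; the weights give the true ranking.
    """
    rng = np.random.default_rng(seed)
    n = len(weights)
    wins = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            p = weights[i] / (weights[i] + weights[j])
            w_ij = rng.binomial(K, p)   # how often i beats j in K trials
            wins[i, j] = w_ij
            wins[j, i] = K - w_ij
    return wins

weights = np.array([5.0, 3.0, 2.0, 1.0])    # true BTL scores (illustrative)
wins = btl_comparisons(weights, K=64, seed=0)
true_ranking = np.argsort(-weights)          # ground truth: [0 1 2 3]
```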

Page 7

Pairwise Disagreement Error (BTL)

Rank Centrality:

Page 8

• Rank Aggregation via NNM

• SVM Rank Aggregation
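For reference, one common definition of the pairwise disagreement metric used on these slides (the transcript does not spell out the exact formula, so this is an assumption): the fraction of item pairs whose relative order under the estimated scores disagrees with the true ranking.

```python
def pairwise_disagreement_error(scores, true_scores):
    """Fraction of item pairs ordered differently by the two score
    vectors (one standard form of the PDE metric; assumed here)."""
    n = len(scores)
    disagree, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            pairs += 1
            # A pair disagrees when the two score differences have opposite signs.
            if (scores[i] - scores[j]) * (true_scores[i] - true_scores[j]) < 0:
                disagree += 1
    return disagree / pairs

print(pairwise_disagreement_error([3, 2, 1], [3, 1, 2]))  # 1 of 3 pairs flipped -> 0.3333333333333333
```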

Page 9

DL1 Error (BTL)

Rank Centrality:

Page 10

• Rank-Aggregation via NNM

• SVM Rank Aggregation

Page 11

DL1 Error (SOC)

Rank Centrality:

• n = 24, K = 64

Page 12

• Rank-Aggregation via NNM

• SVM Rank Aggregation

Page 13

Empirical Time Estimation

Page 14

Conclusion and Future Work

• The results show that Rank Centrality performs better than NNM and SVM Rank Aggregation on the PDE metric for the BTL dataset; NNM performs slightly better than SVM.

• The DL1 error on the BTL and SOC datasets shows that Rank Centrality is more robust to missing comparisons than NNM and SVM Rank Aggregation.

• The empirical time taken by each algorithm increases with the number of items n; Rank Centrality and NNM are more time-efficient than SVM Aggregation.

Page 15

THANK YOU