Ranking Using Pairwise Preferences
Himanshi Sinha (12202), Lakshmi Bansal (13014), Sweta Sharma (12392)
Problem Definition
• We are given a set of n items and pairwise comparisons among them; the task is to recover a ranking of the items. We compare the algorithms on the number of comparisons each requires to converge to an optimal ranking.
Algorithms Implemented
• Rank Centrality:
– Each pair (i, j) of items is compared K times.
– The transition matrix P is defined using the outcomes of these comparisons.
– The limiting distribution π of P defines the numerical scores of the n items.
Sahand Negahban, Sewoong Oh, and Devavrat Shah. Rank Centrality: Ranking from pair-wise comparisons. arXiv preprint arXiv:1209.1688, 2012.
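The steps above can be sketched in code. This is a minimal illustration, not the slides' implementation: the `wins` convention (wins[i, j] counts how often item j beat item i, so the random walk drifts toward stronger items) and the power-iteration details are our own assumptions.

```python
import numpy as np

def rank_centrality(wins, d_max=None):
    """Sketch of Rank Centrality: score items via a random walk's
    stationary distribution. wins[i, j] = times item j beat item i
    (assumed convention)."""
    n = wins.shape[0]
    total = wins + wins.T                      # comparisons per pair
    # fraction of comparisons in which j beat i (0 where never compared)
    frac = np.divide(wins, total, out=np.zeros((n, n)), where=total > 0)
    if d_max is None:
        d_max = max(1, int((total > 0).sum(axis=1).max()))  # max comparison degree
    P = frac / d_max                           # off-diagonal transition rates
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))   # self-loops make each row sum to 1
    pi = np.full(n, 1.0 / n)                   # power iteration for pi = pi @ P
    for _ in range(1000):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < 1e-12:
            break
        pi = nxt
    return pi / pi.sum()
```

With a clear winner (e.g. one item beating both others 9 times out of 10), the stationary distribution concentrates on that item.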
Rank Aggregation via NNM
• Each pair (i, j) of items is compared K times, and a skew-symmetric matrix Y is defined from the outcomes of these comparisons.
• Singular value projection is then used to find a rank-2 approximation of the matrix Y.
• This approximation is used to calculate the final scores of the items.
David F. Gleich and Lek-Heng Lim. Rank aggregation via nuclear norm minimization. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2011.
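A rough sketch of this pipeline. The slide's exact definition of Y is not reproduced in the transcript, so the smoothed log-odds construction below is our own assumption (one of the constructions discussed by Gleich & Lim), a truncated SVD stands in for full nuclear norm minimization, and row means are one simple way to read scores off the rank-2 fit:

```python
import numpy as np

def nnm_rank2_scores(wins):
    """Sketch: rank aggregation via a rank-2 fit of a skew-symmetric matrix.
    wins[i, j] = number of times item i beat item j (assumed convention)."""
    n = wins.shape[0]
    total = wins + wins.T
    # smoothed win probability; 0.5 where a pair was never compared
    p = np.divide(wins + 0.5, total + 1.0,
                  out=np.full((n, n), 0.5), where=total > 0)
    Y = np.log(p) - np.log(1.0 - p)            # skew-symmetric log-odds matrix
    U, s, Vt = np.linalg.svd(Y)
    Y2 = (U[:, :2] * s[:2]) @ Vt[:2, :]        # best rank-2 approximation of Y
    return Y2.mean(axis=1)                     # average margin over opponents
```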
SVM-Rank Aggregation
• Inducing the dataset: SP = {(vij, zij)}i<j, consisting of vectors vij = Pi − Pj, where Pi denotes the i-th column of P, together with binary labels zij = sign(Pji − Pij).
• We find the hyperplane separating the above dataset.
• Scores for each item are then calculated from the learned weight vector w.
Arun Rajkumar and Shivani Agarwal. A statistical convergence perspective of algorithms for rank aggregation from pairwise data. In Proceedings of the 31st International Conference on Machine Learning, 2014.
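The construction above can be sketched as follows, under the assumption that P[i, j] is the empirical probability that item i beats item j. The Pegasos-style subgradient training and the Borda-style tally that turns w into item scores are our own stand-ins for the paper's exact procedure:

```python
import numpy as np

def svm_rank_scores(P, lam=0.01, epochs=200, seed=0):
    """Sketch of SVM rank aggregation: build the pairwise dataset
    {(v_ij, z_ij)}, fit a linear separator w, then score each item by
    how many pairwise contests the classifier predicts it wins."""
    n = P.shape[0]
    X, z = [], []
    for i in range(n):
        for j in range(i + 1, n):
            if P[j, i] == P[i, j]:
                continue                        # skip exact ties
            X.append(P[:, i] - P[:, j])         # v_ij = P_i - P_j (columns)
            z.append(np.sign(P[j, i] - P[i, j]))  # z_ij = sign(P_ji - P_ij)
    X, z = np.array(X), np.array(z)
    rng = np.random.default_rng(seed)
    w, t = np.zeros(n), 0
    for _ in range(epochs):                     # Pegasos-style SVM training
        for k in rng.permutation(len(z)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = z[k] * (w @ X[k])
            w = (1.0 - eta * lam) * w
            if margin < 1.0:                    # hinge-loss subgradient step
                w = w + eta * z[k] * X[k]
    # Borda-style tally: w @ v_ij < 0 predicts label -1, i.e. "i beats j"
    return np.array([
        sum(1 for j in range(n) if j != i and w @ (P[:, i] - P[:, j]) < 0)
        for i in range(n)
    ])
```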
Datasets
• A dataset generated synthetically using the Bradley-Terry-Luce (BTL) model, which also gives us the true ranking for evaluating the correctness of our implementations.
• Pairwise data generated from a real SOC (strict order, complete list) dataset, which also gives us the ground truth.
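The BTL generation step can be sketched as follows; the function name and the returned wins-matrix convention are our own choices for illustration:

```python
import numpy as np

def btl_comparisons(strengths, k, seed=0):
    """Generate synthetic pairwise data from the Bradley-Terry-Luce model.
    Each pair (i, j) is compared k times; i beats j with probability
    strengths[i] / (strengths[i] + strengths[j]). Returns wins[i, j] =
    number of times i beat j, so the true ranking (descending strengths)
    is available as ground truth."""
    rng = np.random.default_rng(seed)
    n = len(strengths)
    wins = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            p = strengths[i] / (strengths[i] + strengths[j])
            w = rng.binomial(k, p)              # i's wins in k contests
            wins[i, j], wins[j, i] = w, k - w
    return wins
```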
Pairwise Disagreement Error (BTL)
[Plots comparing Rank Centrality, Rank Aggregation via NNM, and SVM Rank Aggregation]
DL1 Error (BTL)
[Plots comparing Rank Centrality, Rank Aggregation via NNM, and SVM Rank Aggregation]
DL1 Error (SOC)
[Plots comparing Rank Centrality (N = 24, k = 64), Rank Aggregation via NNM, and SVM Rank Aggregation]
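The pairwise disagreement error reported above is the fraction of item pairs whose relative order under the estimated scores differs from the ground truth. A minimal sketch (the function name and the tie convention are our own; the DL1 metric is not sketched here):

```python
import numpy as np

def pairwise_disagreement(est, truth):
    """Fraction of item pairs ordered differently by the estimated scores
    than by the ground-truth scores. Convention: any sign mismatch,
    including a tie on one side only, counts as a disagreement."""
    n = len(est)
    bad = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if np.sign(est[i] - est[j]) != np.sign(truth[i] - truth[j])
    )
    return bad / (n * (n - 1) / 2)
```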
Empirical Time Estimation
Conclusion and Future Work
• The results show that Rank Centrality outperforms NNM and SVM Rank Aggregation on the pairwise disagreement metric for the BTL dataset; NNM performs slightly better than SVM.
• The DL1 error on the BTL and SOC datasets shows that Rank Centrality is more robust to missing comparisons than NNM and SVM Rank Aggregation.
• The empirical running time of each algorithm grows with the number of items n; Rank Centrality and NNM are more time-efficient than SVM Rank Aggregation.
THANK YOU