
Lync

PageRank   LocalRank   Hilltop   HITS   AT(k)   NORM(p)   more ⟩⟩

Searching for a better search…

Radhika Gupta   Nalin Moniz   Sudipto Guha

CSE 401 Senior Design. April 11th, 2005.

LYNC Search I’m Feeling Luckier


Lync

T"for" is a very common word and was not included in your search. [HTTUdetailsUTTH]

Table of Contents Pages 1 – 31 for Searching for a better search (2 Semesters)

Sponsored Links

Abstract: A summary of the topics covered in our paper. Pages 1 – 2 - Cached - Similar pages

PROBLEM Solved Beta Power! www.PROBLEM.com

Introduction and Definitions: An introduction to web searching algorithms and the Link Analysis Rank Algorithms space, as well as a detailed list of key definitions used in the paper. Pages 3 – 7 - Cached - Similar pages

Free CANDDE Come get it. Don't be left dangling! www.CANDDE.gov

Survey of the Literature: A detailed survey of the different classes of Link Analysis Rank algorithms, including PageRank based algorithms, local interconnectivity algorithms, and HITS and the affiliated family of algorithms. This section also includes a detailed discussion of the theoretical drawbacks and benefits of each algorithm. Pages 8 – 31 - Cached - Similar pages

A PAT on the Back The Best Authorities on Every Subject www.PATK.edu

Page Ranking Algorithms: An examination of the idea of a simple page rank algorithm and some of the theoretical difficulties with page ranking, as well as a discussion and analysis of Google's PageRank algorithm. Pages 8 – 15 - Cached - Similar pages

Paging PAGE The shortest path to perfect search. www.PAGE.net

Introducing Local Inter-Connectivity: A discussion of the motivation for using local connectivity in page rank algorithms, as well as a detailed discussion and analysis of both the Hilltop and the LocalRank algorithms. Pages 15 – 23 - Cached - Similar pages

Hub and Authority Based Ranking Algorithms: A discussion of the HITS algorithm and the ideas of hubs and authorities, as well as an examination of variations of the HITS algorithm including PHITS, Hub Averaging HITS, SALSA, BFS, and the non-linear dynamic algorithms AT(k) and NORM(p). Pages 23 – 31 - Cached - Similar pages

Lynnnnnnnnc Result Page: 1 2 3 Next

Coming Soon! LYNC Goes Public: LYNC Gets Bigger and Better!!!

Searching for a better search…

LYNC Home - Business Solutions - About LYNC

©2005 LYNC Nalin Moniz and Radhika Gupta

PageRank   LocalRank   Hilltop   HITS   AT(k)   NORM(p)   more ⟩⟩

Searching for a better search … LYNC Search



Lync

"for" is a very common word and was not included in your search. [HTUdetailsUTH]

Table of Contents Pages 32 – 81 for Searching for a better search (2 Semesters)

Sponsored Links

Survey of Rank Merging and Aggregation: A summary of our research on rank merging and aggregation, including a description of key distance measures as well as rank aggregation algorithms that we use in our implementation. Pages 32 – 39 - Cached - Similar pages

Simple Borda Bored of Being Alone? Let's Merge ☺ www.sborda.com

Hybrid Algorithm Proposals: A discussion of the technical details and motivations behind six of our own hybrid algorithms, developed by analyzing the flaws and strengths of the different classes of Link Analysis Rank algorithms. Pages 40 – 50 - Cached - Similar pages

Geometric Borda Really Bored? Like Really Bored? www.gborda.gov

Technical Approach: A discussion of our approach to this project, including the research done in algorithms, the implementation of the algorithms, and the analysis of the results. Includes system architecture diagrams. Pages 51 – 65 - Cached - Similar pages

Markov Merge Mark My Words… This is really good ☺ www.markov.uk

Technical Approach Appendix: Samples of the XML schema of our software, sample output, and sample surveys and results. Pages 66 – 72 - Cached - Similar pages

Analysis of Survey Results: A detailed statistical analysis of the results of our survey, including a discussion of the performance of different hybrid algorithms as well as possible explanations. Pages 73 – 79 - Cached - Similar pages

Future Improvements: A discussion of possible future improvements that could be made to the hybrid algorithms. Pages 80 – 81 - Cached - Similar pages

Lynnnnnnnnc Result Page: Previous 1 2 3 Next

Coming Soon! LYNC Goes Public: LYNC Gets Bigger and Better!!!

Searching for a better search…

LYNC Home - Business Solutions - About LYNC

©2005 LYNC Nalin Moniz and Radhika Gupta

PageRank   LocalRank   Hilltop   HITS   AT(k)   NORM(p)   more ⟩⟩

Searching for a better search … LYNC Search



Lync

"for" is a very common word and was not included in your search. [HTUdetailsUTH]

Table of Contents Pages 82 – 91 for Searching for a better search (2 Semesters)

Sponsored Links

Milestones and Timeline: The milestones and timeline for this project. Pages 82 – 85 - Cached - Similar pages

CSE 400 004 Sleepless Nights www.hell.com

Conclusion and Reflections: Our thoughts, reflections, and observations on this year-long project. Pages 86 – 88 - Cached - Similar pages

CSE 401 104 Sleepless Nights www.hellreloaded.com

References: A list of all our references. Pages 89 – 91 - Cached - Similar pages

CSE 401 is Over! No More Sleepless Nights www.sleep.com

Lynnnnnnnnc Result Page: Previous 1 2 3

Coming Soon! LYNC Goes Public: LYNC Gets Bigger and Better!!!

Searching for a better search…

LYNC Home - Business Solutions - About LYNC

©2005 LYNC Nalin Moniz and Radhika Gupta

PageRank   LocalRank   Hilltop   HITS   AT(k)   NORM(p)   more ⟩⟩

Searching for a better search … LYNC Search



ABSTRACT

Efficient and accurate ranking of web pages in response to a query is at the core of information retrieval on the Internet. The problem of web search has been most commonly approached through Link Analysis Rank algorithms. Google's PageRank is one of the better known algorithms in this class, and was reasonably successful until the growth of blogs and link exchanges allowed for the manipulation of the system. These developments gave rise to the idea that rankings should focus on links coming from the relatively more important sources – the notion at the heart of the LocalRank and Hilltop algorithms. At the same time, researchers began to investigate algorithms that used a dual ranking system instead of a single ranking system. Kleinberg popularized the idea of ranking pages separately for outgoing and incoming hyperlinks in his seminal paper on Hyperlink Induced Topic Distillation (HITS). HITS and related algorithms have met with some success, yet they have fundamental symmetry flaws that mandate a shift away from the linear to the non-linear system paradigm if they are to be solved. The early classes of non-linear algorithms, NORM(p) and AT(k), have performed well on small samples of queries and moderately sized systems despite their computational limitations, but have yet to gain widespread popularity. Our project looks at the work done in non-linear dynamic systems for ranking web pages, and develops hybrid algorithms that bring together the simplicity and local focus of linear algorithms like LocalRank, while exploiting the benefits of the non-linear and other interesting paradigms. The models we propose draw from Hilltop, LocalRank, HITS, and the AT(k) classes of algorithms. Our first algorithm, PROBLEM, uses the concept of beta distributed user web surfing instead of the random surfer model of PageRank, while our second algorithm, PAGE, is based on a percentage of shortest path ideas. Our third algorithm, PAT(k), is a modification of the non-linear dynamic algorithm AT(k), while our fourth algorithm, CANDDE, takes a new approach to dangling links.


We test our algorithms in practice by implementing a demonstrative system that crawls the web on the upenn.edu domain and retrieves the top ranked pages for a particular query on a particular algorithm. We then measure our results by asking users on the Penn campus to participate in a survey, which asks them to rate these different rankings. The experimental approach we adopt is drawn from Tsaparas (Tsaparas 67) and Kleinberg, and involves comparing the performance of our algorithms with PageRank, LocalRank, Hilltop, HITS, and AT(k). To explore the newer field of rank aggregation, we also include as benchmarks three algorithms that essentially merge PageRank and HITS using different merging schemes (Borda and Markov Chain ideas). We perform a detailed statistical analysis of our results, from which we gather that, for these particular queries, the algorithms tested fall into four different performance buckets. CANDDE outperformed all the other algorithms, largely because of the influence of dangling links on a small graph. PROBLEM and PAT(k) fell into the second performance bucket, which included the HITS algorithm and a Borda merge. PROBLEM performed relatively well because the Beta distribution model was effectively able to capture the fact that upenn.edu has a few central pages that contain most of the information and to which users are most likely to jump. PAT(k), on the other hand, performed well because it captured the essence of the non-linear dynamical system filtering. In the third bucket, we saw PageRank, AT(k), and the Markov algorithms, and in the fourth, and worst performing bucket, we saw PAGE and LocalRank. Our suspicion is that the poor performance of LocalRank is best attributed to an incorrect model of user behavior.



INTRODUCTION

Efficient and accurate ranking of query results is central to any information retrieval problem, particularly Web search. Given the massive size of the Internet, a ranking system must be efficient, capture the information in a query accurately, and return the most relevant web sites to the user within the first few results. Ranking algorithms that solve this problem can be roughly divided into two categories: Text Analysis Algorithms and Link Analysis Rank algorithms. In this study, we focus on Link Analysis Rank (LAR) algorithms, which rank pages based on the intrinsic structure of the world wide web. The structure of the web can be represented as a graph where the nodes represent web pages and the edges represent hyperlinks between these web pages. For a given web query, the goal of a ranking system is to rank the particular pages or nodes in terms of their relevance to the query. The rank or importance of a given web page is determined by looking at the Web graph of hyperlinks and ranking a particular page or node depending on the pages it links to and the pages that link to it. Intuitively, a hyperlink from page a to page b can be thought of as a vote for page b from page a. LAR algorithms like PageRank, LocalRank, Hilltop and HITS build upon this idea to determine the relevance of web pages related to a certain query. A LAR algorithm takes a hyperlink graph as an input and returns a measure of importance for each node as an output. In this respect, LAR algorithms are distinct from Text Analysis Rank (TAR) algorithms, which rank pages based on the frequency of the query words in a particular page (Tsaparas 59). Systems based on LAR algorithms, such as Google's search engine, have proven to be more robust than TAR algorithms because LAR algorithms are innately harder to manipulate by flooding pages with popular search keywords or by maliciously manipulating the placement of text on a page.


Most LAR algorithms fall into the category of dynamic systems. A dynamic system is defined as a system that begins with an initial state for each node or page, and performs a series of repeated calculations to determine a final set of weights for the nodes. Dynamic system based algorithms run through an iterative process that updates the node weights until they converge to a steady state (Tsaparas 59). The steady state is a vector of weights indicating the relative importance of the pages. Dynamical systems can be broken down into linear and non-linear systems, which differ in the function g(x) that is repeatedly applied to the original vector of weights. In the special case where g(x) = Mx, with M a time-invariant n \times n matrix, we have a linear dynamical system, and the steady state solution corresponds to the principal eigenvector of the matrix M (Tsaparas 62). PageRank, LocalRank, Hilltop and HITS are all linear dynamical systems. A non-linear dynamical system is one where g(x) is anything except a time-invariant matrix.

Terminology _________________________________________________________________________________________

Below we define some of the terms we use in the context of different algorithms in the paper. The terms defined below are highlighted when they occur in the paper. Definitions are cited for sources where they occur in the document.

Affiliated pages: Pages that are considered similar under certain criteria. The specific definition of affiliated pages is discussed in detail in the context of the Hilltop algorithm.

Authority node: A page that has at least one other page pointing to it. Discussed in the context of HITS and affiliated algorithms.

Back link: A link coming into a page.

Borda's Algorithm: A heuristic for rank aggregation that assigns a rank to each element and sorts the candidates by cumulative total score.


Condorcet Criteria: The notion that if there is an element that defeats every other element in majority voting, it should be ranked the highest.

Dangling link: A link that points to a page that is not in the set of pages in the hyperlink graph. Discussed in the context of PageRank.

Expert page: A page that has many links to numerous non-affiliated pages on a particular topic. Discussed in the context of LocalRank and Hilltop.

Full List: Referring to lists in a rank aggregation problem, a full list is a list that is a permutation of the universe of ranked items.

Geodesic: The shortest path between two vertices in a graph. Discussed in the context of the hybrid PAGE algorithm.

Hub node: A page that points to at least one other page. Discussed in the context of HITS and affiliated algorithms.

Linear Dynamical System: Any algorithm where the iterative operator is a linear or matrix function.

Link Analysis Rank (LAR): The name for the class of algorithms which rank pages based on the intrinsic structure of the web.

Local Kemenization: Similar to Kemeny optimal aggregation, a solution that satisfies the Condorcet criteria but is computationally tractable.



Kemeny Optimal Aggregation: A rank aggregation solution that minimizes the Kendall Tau distance between two lists.

Kendall's Tau: A distance measure in rank aggregation that counts the number of pairwise inversions between two ranking lists.

Non-Linear Dynamical System: Any algorithm where the iterative operator is not a linear or matrix function.

Page Ranking Algorithms: We will use this term for the general class of algorithms that take advantage of the link structure of the web to produce a global ranking of web pages. Most of these algorithms are based on Google's PageRank algorithm.

Partial List: Referring to lists in a rank aggregation problem, a partial list is a list that is a subset of the universe of ranked items.

Rank Aggregation: The process of merging two sets of ranking lists into a single list.

Rank sink: A web page or a set of web pages that have no outgoing links. Discussed in the context of PageRank.

Scaled Footrule: A distance measure in rank aggregation that weights the contribution of an element in the list based on the length of the list.

Similar pages: Pages that share the same domain name.



Spearman Footrule: A distance measure in rank aggregation that is based on the absolute difference between the ranks of an element in the lists.

Text Analysis Rank (TAR): The name for the class of algorithms which rank pages based on the frequency of the query words in a particular page (Tsaparas 59).
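Since Borda's algorithm and the distance measures above recur throughout the paper, a small sketch may make the Borda definition concrete. This is our own toy illustration in Python, not the implementation used in the project; the list contents are hypothetical.

# Borda's algorithm: each list awards each candidate a score based on its
# position (length - position), and candidates are sorted by cumulative score.
def borda_merge(ranked_lists):
    scores = {}
    for lst in ranked_lists:
        for pos, item in enumerate(lst):
            scores[item] = scores.get(item, 0) + (len(lst) - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Two full lists over the same universe of pages:
print(borda_merge([['a', 'b', 'c', 'd'], ['b', 'a', 'd', 'c']]))
# ['a', 'b', 'c', 'd'] -- a and b tie at 7 points, c and d at 3; Python's
# stable sort breaks the ties by first appearance.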



SURVEY OF LITERATURE

In this section, we survey some of the major classes of Link Analysis Rank Algorithms. We begin by examining the simplest category of page ranking algorithms, including Google's PageRank, and analyzing the theoretical benefits and drawbacks of its approach. We then look at two other classes of algorithms that build upon Google's PageRank – LocalRank and Hilltop – as well as the HITS algorithm and its variations. Again we provide an analysis of the classes of algorithms, discussing some of the theoretical benefits and drawbacks of the variations.

Page Ranking Algorithms __________________________________________________________________________

In general, we will refer to page ranking algorithms as algorithms that take advantage of the link structure of the web to produce a global ranking of a set of web pages given a specific query. The algorithms rest on the assumption that a web page with a number of other web pages pointing to it, i.e. a web page with a large number of incoming links or back links, is an important page. Of course, in addition to the number of back links, the quality of those back links is also important. For instance, a page with back links from a major site such as Yahoo or MSN will clearly be relatively more important than a page with the same number of back links from an individual's blog (Craven).

Simple Algorithms for Ranking Pages _____________________________________________________________

Based on this intuition, we can define a rudimentary equation for calculating the page rank of a web page, v. Define F(v) as the set of pages pointed to by v (the forward links of the page) and B(v) as the set of pages that point to v (the back links of the page). The page rank PR of v can then be defined as:

PR(v) = d \sum_{i \in B(v)} \frac{PR(i)}{|F(i)|} \qquad (1)


Intuitively, this definition can be thought of as a summation over all the incoming links of a web page v, normalized by the number of outgoing links of each linking page (Brin, Motwani, Page, and Winograd 3). A web page with a high page rank is thus a web page not only with a large number of incoming links, but with a large number of high quality incoming links. In the equation above, the factor d is a normalization factor between 0 and 1 to account for pages with no forward links. We can restate the above equation in terms of a square matrix C whose rows and columns represent web pages: C(i, j) is 1 if there is a link from node i to j, and 0 otherwise. The simple page rank equation (1) can then be written in matrix form as:

PR = d \, C \, PR \qquad (2)

The page rank vector PR is an eigenvector of the matrix C with corresponding eigenvalue d. We can compute the value of the page rank vector by simply repeatedly applying the matrix C to PR until the values converge and we reach a steady state solution (Brin, Motwani, Page, and Winograd 4).
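To make the iteration concrete, the following short Python sketch computes the steady state of equation (2) by repeatedly applying the link structure and renormalizing; the four-page graph is a hypothetical example of ours, not one from the paper.

# Power iteration for the simple page rank of equations (1) and (2).
# links[i] lists the forward links F(i) of page i.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # a tiny hypothetical web graph
n = len(links)
pr = {i: 1.0 / n for i in links}              # uniform initial weights

for _ in range(50):                           # iterate to (approximate) convergence
    new_pr = {i: 0.0 for i in links}
    for i, outs in links.items():
        for j in outs:
            new_pr[j] += pr[i] / len(outs)    # i's rank, split over its forward links
    total = sum(new_pr.values())
    pr = {i: w / total for i, w in new_pr.items()}   # renormalize at each step

print(pr)   # steady-state weights: the relative importance of each page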

Theoretical Difficulties with Page Rank Algorithms ____________________________________________

The simple page rank equation, equation 1, already gives us some insight into some of the theoretical challenges of using the link structure of the web to arrive at a measure of the importance of a web page. In this section, we discuss some of the theoretical flaws inherent in the very simple page rank equation, and how they point towards Google’s solution to page rank, PageRank.



The Non-Ergodic Structure of the Web _______________________________________________________

The first interesting challenge posed by page ranking stems from the representation of the web as a hyperlinked graph. Innately, hyperlinked graphs, even undirected ones, are not ergodic. The universe of web pages can be split into multiple smaller connected components, which can be ranked separately. Computationally, this is efficient because it is cheaper than trying to calculate the principal eigenvector of the Markov matrix for the entire universe of pages; each iteration involves a matrix multiplication, which is O(n^{2.376}) (Coppersmith and Winograd 2).

The problem is that a page rank for connected components of the undirected graph is sensitive to the initial distribution of weights across the nodes. Even if the weights are uniformly distributed, there is no way to compare the page ranks of two nodes in separate connected components. This appears to be a theoretical flaw in the notion of page rank based on link analysis ranking that has no immediate or obvious solution (Rogers).

Rank Sinks ________________________________________________________________________________________

Another interesting issue, and one that is a little easier to address, is the issue of rank sinks that arise within each connected component of an undirected graph. A rank sink is defined as a web page or a set of web pages that have no outgoing links. Mathematically, one can write this definition as:

S = \{ v : \forall v \in S, \; w \in F(v) \Rightarrow w \in S \} \qquad (3)

Looking back at the simple equation we developed for page rank calculations (1), we can see how the issue of rank sinks creeps in. If we run the algorithm on a simple example of three web pages, we can see how rank sink pages accumulate rank but never distribute it. Consider a web structure where these three web pages point only to each other and to no other page, and consider an additional fourth page that points to one of these three pages. The loop of web pages forms a rank sink (Brin, Motwani, Page, and Winograd 4).



When we developed a simple equation for page ranks, we assumed that once a user enters a loop like this, they never leave it. In reality, this is not a sensible assumption, because a user is unlikely to circle around a loop more than a certain number of times. Rather, they will soon leave the loop by entering a new, unrelated URL into their browser. As a solution to the issue of rank sinks, Brin, Motwani, Page, and Winograd proposed the idea of a vector over all the nodes in the hyperlinked web graph, where the vector corresponds to some source of rank. A source of rank is any page that has outgoing links. Based on this additional information, the page rank equation can now be written in modified form as:

PR(v) = c \left( \sum_{i \in B(v)} \frac{PR(i)}{|F(i)|} + E(v) \right) \qquad (4)

The factor c plays the role of the normalization factor d we saw in (1) (Brin, Motwani, Page, and Winograd 4). In matrix notation, equation (2) now becomes:

PR = d (C \, PR + E) \qquad (5)

Since PR is in normalized form, this can also be written as

PR = d (C + E \times \mathbf{1}) PR \qquad (6)

where \mathbf{1} is a vector of all ones. PR is an eigenvector of (C + E \times \mathbf{1}).


Dangling Links ___________________________________________________________________________________

Dangling links pose another problem for page ranking algorithms. A dangling link is a link that points only to pages with no outgoing links. Since the web has so many of these dangling links, and it is uncertain how they actually affect a page ranking algorithm, they cannot be ignored altogether (Brin, Motwani, Page, and Winograd 5). Later in this section, we analyze the benefits and drawbacks of the manner in which Google's PageRank algorithm deals with dangling links.

Google's PageRank Algorithm ___________________________________________________________________

Brin and Page developed a formalized algorithm for page ranking, now known as Google's PageRank, by drawing from the simple idea of a page rank developed in (1) and addressing some of the issues of dangling links and rank sinks raised in the previous section. The intuition behind the algorithm is best understood by thinking of the user as a random web surfer who follows links on the web according to the probability distribution of a Markov chain. The probability distribution here refers to the probability distribution of a random walk on the hyperlinked graph of the web. In this paradigm, the rank of each page is represented by the steady state or stationary distribution of the Markov chain, and is based on the percentage of time the user would spend in a state, or on a particular page, over a long period of time (Brin and Page 7). One can incorporate the rank sink question by thinking about the web surfer occasionally getting into a small loop of web pages. Clearly, it is unlikely that the surfer will stay in the loop forever. Instead, it is likely they will jump to another page. The factor E in (4) models the user getting bored of going around in the same loop and typing a new URL into the browser.


Given those assumptions, PageRank is defined in terms of the simple page rank equation (1), discussed at the beginning of the section. The normalization factor d is used again and the value 1 – d is used to model the probability of the user jumping to a random web page (thus taking into account rank sinks). Brin and Page define PageRank mathematically as:



PR(A) = (1 - d) + d \sum_{i=1}^{n} \frac{PR(T_i)}{C(T_i)} \qquad (7)

We use the original notation used by Brin and Page to define the terms below:

A: the actual page to rank. This is v in our earlier discussion.

T_i: a page pointing to A. There are n pages pointing to A. This is i in our earlier discussion.

PR: the PageRank of the page.

C: the number of outgoing links from a page T. This is F in our earlier discussion.

d: the dampening factor, arbitrarily determined to be 0.85 by Brin and Page in their original paper. This is the normalization factor d used in our earlier discussion (Brin and Page 7).
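A direct transcription of equation (7) into Python might look like the following sketch; the four-page graph is our hypothetical example, and d = 0.85 as in Brin and Page.

# Sketch of the PageRank update of equation (7):
# PR(A) = (1 - d) + d * sum over pages T_i pointing to A of PR(T_i) / C(T_i)
links = {'A': ['B', 'C'], 'B': ['C'], 'C': ['A'], 'D': ['C']}
d = 0.85                                # dampening factor from Brin and Page
pages = list(links)
pr = {p: 1.0 for p in pages}            # initial ranks

for _ in range(50):
    pr = {p: (1 - d) + d * sum(pr[t] / len(links[t])
                               for t in pages if p in links[t])
          for p in pages}

print(sorted(pr.items(), key=lambda kv: -kv[1]))   # pages by descending PageRank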

Analysis of PageRank ______________________________________________________________________________

In this section, we analyze the PageRank algorithm and discuss both some of its significant strengths, as well as its theoretical weaknesses and computational drawbacks. The analysis from this section is the foundation of the hybrid models developed in a later section.

THEORETICAL STRENGTHS

Computational Ease: PageRank is easy to compute. The PageRank of the i-th web page is the i-th element of the principal eigenvector of the Markov chain representing a random walk on the hyperlinked web graph. PageRank is thus computationally feasible for a universe with a large number of web pages.

Scalable Given the Structure of the Web: PageRank improves as the universe of web pages grows larger. The number of pages on the web is increasing exponentially, and every page added to the web is just another node on the hyperlinked graph. PageRank is thus scalable given the changing structure of the web.

Reliable Foundations: PageRank relies on links, which are one of the quickest and most accurate feedback loops on the web.

THEORETICAL WEAKNESSES

Uniform User Preferences (E): Google's PageRank assumes the vector E in (4) to be uniform over all web pages. This effectively assumes that the user is equally likely to jump to any particular page on the web. Although the assumption makes PageRank computationally more tractable, it is not necessarily an accurate model of a user's behavior. In general, users are more likely to jump to a particular page or set of pages that they prefer. For example, a user is much more likely to jump to a page in his or her Favorites than to a randomly chosen page.

Ignoring Dangling Links: PageRank removes dangling links from the calculation until the end, thus effectively ignoring them. Since in a computation a node distributes its weight across a number of links, including dangling links, this is a problem. Ignoring dangling links in the computation means that real links from pages with dangling links receive a disproportionate weight in the normalization process.



PRACTICAL DRAWBACKS

Flawed Assumption about Link Validity: PageRank assumes that each link is a valid vote for a website, when often links are not really valid links at all. This is particularly problematic since guest books are being used to spam PageRank (Search Marketing Information). PageRank is also vulnerable to blog optimization.

Vulnerable to Manipulation: Despite its theoretical soundness, PageRank has been vulnerable to manipulation. One common attack against PageRank is to leave a bogus comment in a blog with a highly optimized link. Blogs contain a large number of links interacting back and forth and are linked to a number of times. Even though blogs by themselves are harmless, their tightly linked structure leaves room for users to exploit PageRank (Zawodny).

Introducing Local Interconnectivity ______________________________________________________________


Simple Link Analysis Ranking algorithms like PageRank that analyze the hyperlinks between pages assume that pages on a particular topic link to each other, and that important or authoritative pages point to other authoritative pages. Although the assumption in itself is not unreasonable, PageRank cannot distinguish between pages that are authoritative in general and pages that are authoritative on a particular query topic. In addition to the theoretical drawbacks highlighted in the previous section, PageRank is thus also query independent. Query independence may not be a correct assumption in many cases – one can fathom numerous instances where a website is important in general and may contain a single page on a query topic, but may not be authoritative on the topic as a whole. The query "Indian food" on the website <www.cnn.com> is one example. PageRank would not take the notion of query relevance into account and would rank CNN at the top of the search results, even though the community of users who are experts on Indian food may not consider CNN a valuable resource for Indian food (SEO Rank). In this section, we examine two algorithms, Hilltop and LocalRank, that draw on this idea and try to take into account query relevance and local inter-connectivity, while trying to retain the broad appeal of PageRank. Hilltop is an algorithm developed by Krishna Bharat at Google and George Mihaila, and explicitly incorporates the query string into the ranking process. LocalRank was also developed by Krishna Bharat, more recently. However, unlike Hilltop, LocalRank does not explicitly work with the query string but tries to use this notion of local interconnectivity in a two step filtering process. In this section, we analyze both algorithms, and discuss some of their more interesting theoretical strengths and weaknesses. This analysis will filter into our development of hybrid models in later sections.

The Hilltop Algorithm ______________________________________________________________________________

Hilltop is a query specific algorithm, unlike PageRank and LocalRank. At a broad level, the algorithm starts from a particular search query, and then searches for a set of expert pages that are relevant to the query (Bharat and Mihaila). An expert page can be defined as one that has many links to numerous non-affiliated pages on a particular topic. Two pages are considered affiliated if they satisfy one or more of the following criteria:

1) They originate from the same domain. The pages <www.upenn.edu/careerservices>, <www.upenn.edu/directories>, and <www.seas.upenn.edu> would be considered part of the same domain according to the Hilltop criteria.


2) They originate from the same domain but have different top level and second level suffixes. The pages <www.ibm.com> and <www.ibm.co.uk> are examples of two pages that would be considered similar under this criterion.


3) They originate from the same IP neighborhood because they have the same first three octets in their IP addresses. 123.45.67.123 and 123.45.67.231, along with any page whose IP address begins with 123.45.67, would be considered similar under this criterion.

4) They originate from affiliates of affiliates. If <www.abc.com> is hosted on the same IP octet as <www.ibm.com>, then <www.abc.com> is an affiliate of <www.ibm.co.uk>, even if they are on different IP octets. This criterion ensures that affiliation is a transitive relation.

The threshold number for non-affiliated pages is arbitrarily set to five (Bharat and Mihaila). Therefore, a page has to point to at least five non-affiliated pages on a particular topic to be considered an expert on that topic. Given a set of expert pages, each of these expert pages has links that match the query string exactly and other links that do not. Hilltop discards the latter and only considers pages that are pointed to by links that match the search query string exactly and that are also linked to from at least two expert pages that are not affiliates. This filtered subset of pages is then ranked in a manner similar to PageRank to obtain a final ranking (SEO Rank).
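As a hedged sketch of how criteria 1 and 3 above might be checked in code: the helper names below are hypothetical, and the real Hilltop implementation is described by Bharat and Mihaila only at the level of detail given in this section.

from urllib.parse import urlparse

def same_domain(url_a, url_b):
    # Criterion 1: compare the last two labels of the host name
    # (a simplification), so *.upenn.edu pages are all affiliated.
    host = lambda u: urlparse(u).hostname or ''
    key = lambda h: '.'.join(h.split('.')[-2:])
    return key(host(url_a)) == key(host(url_b))

def same_ip_neighborhood(ip_a, ip_b):
    # Criterion 3: same first three octets of the IP address.
    return ip_a.split('.')[:3] == ip_b.split('.')[:3]

print(same_domain('http://www.upenn.edu/careerservices',
                  'http://www.seas.upenn.edu'))                 # True
print(same_ip_neighborhood('123.45.67.123', '123.45.67.231'))   # True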

A Closer Look at Hilltop ________________________________________________________________________

An expert page must have at least one URL which contains all the query keywords in the key phrases that qualify it. As an approximation, we can require expert pages to contain all the query keywords. Each expert page is assigned a score that reflects "the number and importance of the key phrases that contain the query keywords, as well as the degree to which these phrases match the query" (Bharat and Mihaila). Let the search query have k terms. Let S_0 be the score computed from phrases containing all the query terms, S_1 the score computed from phrases containing k – 1 query terms, and S_2 the score computed from phrases containing k – 2 query terms.




Let P_i be the set of phrases that match (k – i) of the query terms. Define the Level Score (LS) to be the score that characterizes the type of phrase: LS(p) is 16 if p is a title phrase, 6 if p is a heading, and 1 if p is anchor text. The Level Score uses the assumption that title text is more valuable than heading text, which is more valuable than anchor text, to determine the relative importance of phrases (Bharat and Mihaila). The Fullness Factor (FF) is a measure of the number of terms in p covered by the terms in the query, q. Let the length of p be l and the number of terms in p that are not in the query be m. Then the Fullness Factor is defined as:

FF(p) = 1 \text{ if } m \le 2; \quad FF(p) = 1 - \frac{m - 2}{l} \text{ if } m > 2 \qquad (8)

Then a page’s score is defined by:

S_i = \sum_{p \in P_i} LS(p) \times FF(p) \qquad (9)

The goal of this weighting scheme is to prefer expert pages that match a greater proportion of the query keywords. Hence, experts are ranked by S_0. Ties in S_0 are broken by comparing S_1 scores, and ties in S_1 are broken by comparing S_2 scores. The score of each expert is converted to a scalar by taking the weighted sum of the three scores (Bharat and Mihaila).

ExpertScore = 2^{32} S_0 + 2^{16} S_1 + S_2 \qquad (10)
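Putting equations (8) through (10) together, a minimal sketch of the expert scoring might read as follows. The phrase representation is our own simplification of the description in Bharat and Mihaila.

# Sketch of Hilltop expert scoring, equations (8)-(10).
# A phrase is (kind, terms); kind is 'title', 'heading', or 'anchor'.
LS = {'title': 16, 'heading': 6, 'anchor': 1}        # Level Score per phrase type

def fullness(terms, query):
    # Fullness Factor FF(p): penalize phrases with many non-query terms.
    l, m = len(terms), len([t for t in terms if t not in query])
    return 1.0 if m <= 2 else 1.0 - (m - 2) / l

def expert_score(phrases, query):
    k = len(query)
    s = [0.0, 0.0, 0.0]                              # S_0, S_1, S_2
    for kind, terms in phrases:
        i = k - len(query & set(terms))              # phrase matches k - i query terms
        if 0 <= i <= 2:
            s[i] += LS[kind] * fullness(terms, query)    # equation (9)
    # Equation (10): combine the three scores into one scalar.
    return (2 ** 32) * s[0] + (2 ** 16) * s[1] + s[2]

q = {'indian', 'food'}
print(expert_score([('title', ['indian', 'food', 'guide']),
                    ('anchor', ['indian', 'recipes'])], q))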


Analysis of Hilltop ________________________________________________________________________________

In this section, we analyze the Hilltop algorithm and discuss both some of its significant strengths, as well as its theoretical weaknesses. The analysis from this section is the foundation of the hybrid models developed in a later section.

THEORETICAL STRENGTHS

Query Specific Rankings: As compared to PageRank, Hilltop's strength is that its rankings are query specific. It considers the hyperlink graph that is specific to the user's query and can hence evaluate the relevance of content within a specific setting.

Looks at All the Experts: Hilltop enumerates and considers all good experts on a given subject, and hence all good target pages on the subject. If a page does not appear in the output, it definitely lacks the support from other pages to justify its inclusion.

Incorporates Placement of Text on Page: Hilltop does not ignore the text of a page and builds the relative position of a phrase on a page into its final page rank. Titles are considered more important than section headings, which themselves are considered more important than hyperlink anchor text. This combination of graph and content analysis is more robust than PageRank and LocalRank.

THEORETICAL WEAKNESSES

Over-Filtering and Poor Performance on Narrow Queries: Hilltop's biggest strength is also its biggest weakness. The stringent filtering criteria can result in no pages being returned if very few expert documents match the search query. While PageRank and LocalRank's performance improves for more specific search queries, Hilltop's performance can actually deteriorate for the same queries.



Text-Based Manipulation: Hilltop's use of relative text placement in its ranking scheme opens it up to the same text-based manipulation that TAR algorithms suffered from.

Local Rank __________________________________________________________________________________________

While Hilltop is a powerful algorithm, it suffers from an over-filtering problem. The filtering algorithm requires the existence of a set of expert pages, and while these expert pages are easily located for broader queries, for narrow queries it is often hard to find a set of expert pages to begin the process with. As a result, Hilltop frequently returns no results. LocalRank, an algorithm developed by Krishna Bharat, is a more recent attempt at trying to leverage the idea of unaffiliated sources used by Hilltop, while maintaining a simpler two step filtering system (Bharat). At a broad level, LocalRank is a ranking, re-ranking, and weighting procedure that runs by:

1) Filtering pages according to the standard PageRank algorithm developed by Google and taking the top 1000 pages. Each of these pages is assigned an OldRank, which is its standard Google PageRank.

2) Running every one of the 1000 pages in this set through a new ranking procedure, and obtaining a LocalRank for each page.

3) Obtaining a NewRank for each page, which is assigned by normalizing the OldRank and LocalRank and then weighting them to get a single new rank for every page.

4) Returning pages to the user based on the single NewRank.


Thus instead of a single level of ranking, pages are passed through two independent tiers of ranking before being shown to the users (Bharat). The first tier is a filter on general usefulness and authority, and the second a filter on the Hilltop notion of "local connectivity".



This notion of "local connectivity" or LocalRank is developed by taking every page v in the selected set A of 1000 filtered web pages, and finding every page in A that has a link to v. Call this subset of pages with links to v the set B, where B ⊆ A. Within B, LocalRank looks for affiliated pages. The notion of affiliated pages is defined in the same way as it is in the Hilltop algorithm – pages with the same three octets in the IP address and pages that contain similar or identical documents. The set B is thus partitioned into sets of affiliated pages, or neighborhoods. Within each neighborhood C, again where C ⊆ B, LocalRank will discard every page except the one with the largest PageRank. Thus, of all the links coming into v from the same site/domain, only the single most relevant page is taken into account (Search Guild).

This process of filtering out the affiliate pages of a page v is repeated for all the remaining v pages in the set B until there is only a single page from every distinct neighborhood C in B. The resulting pages are thus a collection of unaffiliated links, and from within this set of pages, the top k pages, sorted by PageRank, are filtered for further consideration. The number k is an arbitrary predetermined integer. Thus, while a page v may have a large number of inbound links into it, only the top k unaffiliated links will count towards its new LocalRank (Bharat).

The top k pages filtered out by this tier are referred to as the BackSet of pages. At this stage, the LocalRank computation is performed on each page, where the LocalRank of a page is:

LocalRank = \sum_{i=1}^{k} PR(BackSet(i))^m \qquad (11)

The exponent m is set depending on the nature of the existing PageRanks (OldRank); typical values of m are between 1 and 3.

Based on the above computation of LocalRank, the NewRank, which is finally presented to the user, can be computed as:



NewRank = a \frac{LocalRank}{MaxLocalRank} + b \frac{OldRank}{MaxOldRank} \qquad (12)

where we can define the following:

a, b: arbitrary constants determined by Google.

MaxLocalRank: the maximum of the LocalRank values among the top N pages, or some threshold value if this is too small.

MaxOldRank: the maximum of all the PageRank values (Krishna).
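Equations (11) and (12) combine into a short re-ranking sketch. Since Bharat leaves a, b, m, and k implementation-defined, the values below are stand-ins of ours.

# Sketch of the LocalRank re-ranking of equations (11) and (12).
def local_rank(backset_pageranks, m=1):
    # Equation (11): sum the PageRanks of the top k unaffiliated
    # back links, each raised to the power m.
    return sum(pr ** m for pr in backset_pageranks)

def new_rank(old_rank, lr, max_old, max_lr, a=0.5, b=0.5):
    # Equation (12): weighted combination of the two normalized ranks.
    return a * (lr / max_lr) + b * (old_rank / max_old)

# Hypothetical page with OldRank 0.4 whose top unaffiliated back links
# have PageRanks 0.2, 0.1, and 0.05:
lr = local_rank([0.2, 0.1, 0.05])
print(new_rank(0.4, lr, max_old=1.0, max_lr=2.0))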

Analysis of Local Rank ____________________________________________________________________________

In this section, we analyze Bharat’s LocalRank algorithm and discuss some of its significant strengths, as well as its theoretical weaknesses. The analysis from this section is the foundation of the hybrid models developed in a later section. THEORETICAL STRENGTHS Limited Set of Pages Under Consideration LocalRank is more robust in comparison to PageRank because it considers a limited set of 1000 pages in its search. Since users typically consider the top 10-20 pages from a search result, this restriction is relevant. Focus on Unaffiliated Pages LocalRank like Hilltop places a greater focus on unrelated links referencing a user’s page. This is relevant for two reasons. The first is that users place value of a number of unrelated results. Providing <timesofindia.com> and <cnn.com> as two query results in response to a query on the Indian election is more valuable to a user than provide two related articles on <cnn.com> because: 1) The user can easily trace one article from another, and 2) The articles are likely to provide the same perspective.


The second reason why unrelated links are relevant is that LocalRank is harder to manipulate. This was one of the fundamental flaws seen in the practical use of Google's original algorithm. With LocalRank, site administrators can no longer boost PageRank by adding a large number of internal links between sites.

Less Rigid Filtering than Hilltop: Despite using the notion of interconnectivity, LocalRank provides a less rigid level of filtering than the Hilltop algorithm. LocalRank starts with the 1000 pages that are returned on a query result, as opposed to demanding that there be a set of expert pages on a particular query topic. Thus LocalRank is more likely to work for both broad and narrow queries, whereas Hilltop will produce no output on narrow and uncommon queries.

THEORETICAL WEAKNESSES

Query Independence: Even though LocalRank uses this notion of local interconnectivity, it is query independent. The algorithm ignores the fact that page relevance is query dependent. Even if page A is more relevant than page B for query q1, page B might be more relevant than page A for query q2.

Hub and Authority Algorithms ____________________________________________________________________

In this section, we survey a class of algorithms based on modifications of the Hyperlink Induced Topic Distillation (HITS) algorithm developed by Jon Kleinberg. We first look at the HITS algorithm and the concepts of hubs and authorities. We then look at some of the interesting variations of the algorithm, including PHITS, SALSA, Hub Averaging HITS, and PSALSA. Finally, we conclude with some of the work done by Tsaparas in the non-linear dynamical systems space, again building on Kleinberg's original HITS algorithm.




The HITS Algorithm ________________________________________________________________________________

The task of ranking pages has different challenges for broad and specific queries. Narrowly defined searches face a scarcity problem, since very few pages contain the information required by the query; the fundamental problem is to find the right pages that match the specific query. On the other hand, broad queries face an abundance problem. The principal challenge with broad queries is to rank the thousands of web pages that match the query's keywords and filter out the most authoritative pages. Kleinberg conceptualized HITS to effectively filter out these authorities. An authority node is defined as a page that has at least one other page pointing to it. On the other hand, a hub node is a page that points to at least one other page. Authorities and hubs can be thought of as mutually reinforcing each other in the hyperlink graph. A good authority is one that is linked to from many good hubs, while a good hub is one that links to many good authorities (Kleinberg 5).

Unlike PageRank, HITS constructs query specific subgraphs. Kleinberg starts with a root set R_σ consisting of the top 200 pages returned by a TAR algorithm for the search query. For each page p ∈ R_σ, R_σ is then augmented with the set of all the pages that p points to and at most d of the pages that point to p. If there are more than d pages that point to p, the d pages are chosen at random. Kleinberg set d = 50 when benchmarking the performance of HITS (Kleinberg 5). The augmented R_σ is then filtered for similar pages. Two pages are defined to be similar if they share the same domain name. Finally, as Kleinberg points out, a large number of pages from a single domain often all point to a single page. This could represent a mass endorsement, an advertisement, or a plain attempt to increase the page's ranking. This phenomenon can be eliminated by allowing only up to m pages (m ≈ 4 – 8) from a single domain to point to any given page p (Kleinberg 7). Mathematically, for a hyperlinked graph, define an adjacency matrix A such that:

A_{ij} = 1 \iff (i, j) \in E, \text{ and } 0 \text{ otherwise} \qquad (13)



Let B(i) be the set of nodes that point to node i and let F(i) be the set of nodes that i points to. Then,

B(i) = \{ j \in [n] : A[j, i] = 1 \}, \quad F(i) = \{ j \in [n] : A[i, j] = 1 \} \qquad (14)

Let a_i and h_i be the authority and hub weights of node i. The mutual reinforcement between hubs and authorities is defined so that the authority and hub weights are:

a_i = \sum_{j \in B(i)} h_j \quad \forall i \in [n]; \qquad h_i = \sum_{j \in F(i)} a_j \quad \forall i \in [n] \qquad (15)

The HITS algorithm starts with uniform vectors for both the hubs and authorities, normalized in the L_2 norm, and then applies (15) iteratively. At each stage, the authority and hub vectors are normalized in the L_2 norm. The algorithm usually converges to a stable state in about 20 iterations (Kleinberg 9). The steady authority and hub vectors, a* and h*, converge to the principal eigenvectors of A^T A and A A^T respectively. The magnitudes of the elements of a* can be thought of as relevance scores. When the elements of a* are sorted, they represent rankings of pages. Ties are broken by ranking the elements of the eigenvector with the second largest eigenvalue (Kleinberg 11). In comparison to LocalRank, HITS calculates hub and authority weights in a complementary manner and, furthermore, considers equally all the authority weights of pages that are pointed to by a page (Tsaparas 60). However, HITS has a conceptual problem in that hubs and authorities are not the same. A node with a large in-degree is likely to be an important authority, but a hub node with a large out-degree is not necessarily a good hub. If this were the case, then adding links to random pages would increase the hub ranking of a page. In any ranking system, quantity should never dominate quality.


The PHITS Algorithm _______________________________________________________________________________

PHITS is an algorithm developed by Cohn and Chang that makes a small statistical improvement to HITS. The algorithm assumes that the incoming links i of a page p are driven by a latent factor z. Cohn and Chang propose that there are conditional distributions P(i|z) of an incoming link i given z, and conditional distributions P(z|p) of the factor z given page p. P(i|z) and P(z|p) can be combined to produce a likelihood function that can be maximized to solve for z (Borodin, Roberts, Rosenthal, and Tsaparas 2). The problem with the PHITS algorithm is that it requires one to specify in advance the number of z factors that need to be considered. Cohn and Chang's approach is not computationally feasible in practice, and there is little intuition behind the number of z factors or their estimated values.

Hub Averaging HITS _______________________________________________________________________________

One of the flaws of HITS is that it considers hubs that point to a lot of authorities to be good hubs. However, it makes more intuitive sense to average the authority weights to eliminate the effect of a hub pointing to many poor authorities. Under Hub Averaging HITS, another variation of Kleinberg’s algorithm, a hub is superior only if it links to good authorities on average. The algorithm rightly prefers quality over quantity. As an example of a case where Hub Averaging HITS is superior to HITS, consider a graph with two hubs and a very large number of authorities. The first hub points to only the top authority, while the second hub points to all the authorities except the top one. Under HITS, it is possible that the second hub will be considered stronger because the weight of the best authority can be less than the sum of the weights of the other authorities. However, the first hub is intuitively superior. Hub Averaging HITS takes this into account by averaging authority weights and clearly indicating that the first hub is superior.
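In code, the only change from the HITS sketch above is the hub update: dividing each hub's total by its out-degree, so that a hub is scored by the average authority it points to. This one-liner is our paraphrase of the idea, reusing the names F, a, and n from the HITS sketch.

# Hub Averaging HITS: pointing at many poor authorities no longer inflates a hub.
h = [sum(a[j] for j in F[i]) / len(F[i]) if F[i] else 0.0 for i in range(n)]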




SALSA and PSALSA _______________________________________________________________________________

The Stochastic Approach to Link Structure Analysis, or SALSA, approaches the ranking problem through Kleinberg's authorities and hubs framework. However, SALSA uses a two-step Markov chain model instead of the one-step Markov chain models of PageRank and HITS. The set of authorities and hubs can be thought of as the vertices of a directed bipartite graph whose edges represent links from hubs to authorities (Borodin, Roberts, Rosenthal, and Tsaparas 2). SALSA performs a random walk over this hub-authority graph, where a step consists of following one link forward and one link backward. The algorithm constructs two Markov chains, A and H, for authorities and hubs. The authority Markov chain models a random walk that follows a link backward and then one forward; two authorities are connected if and only if there is a hub that points to both of them. The hub Markov chain, on the other hand, models a random walk that follows a link forward and then one backward; two hubs are connected if and only if they both point to a common authority (Lempel and Moran). Define the set of all nodes pointing to a page i (all the incoming links into page i) as:

$$B(i) = \{\, k : k \to i \,\} \qquad (16)$$

and all the outgoing links of page i (all the nodes we can follow out from node i) as:

$$F(i) = \{\, k : i \to k \,\} \qquad (17)$$

The transition probabilities of the authority Markov chain are defined as:



$$P_a(i,j) = \sum_{k \,:\, k \in B(i) \cap B(j)} \frac{1}{|B(i)|} \cdot \frac{1}{|F(k)|} \qquad (18)$$

The Markov chain for the hubs similarly has probability distribution:

$$P_h(i,j) = \sum_{k \,:\, k \in F(i) \cap F(j)} \frac{1}{|F(i)|} \cdot \frac{1}{|B(k)|} \qquad (19)$$
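As a concrete illustration, the transition probabilities of (18) can be assembled directly from the in-link and out-link sets. The sketch below (method and variable names are illustrative, not taken from our Lync implementation) builds the authority-chain matrix from a 0/1 adjacency matrix:

// Build the SALSA authority-chain transition matrix P_a of equation (18).
// adj[k][j] == 1 means page k links to page j, so B(i) = {k : adj[k][i] == 1}
// and F(k) = {j : adj[k][j] == 1}.
public static double[][] salsaAuthorityChain(int[][] adj) {
    int n = adj.length;
    int[] inDeg = new int[n];
    int[] outDeg = new int[n];
    for (int k = 0; k < n; k++)
        for (int j = 0; j < n; j++)
            if (adj[k][j] == 1) { outDeg[k]++; inDeg[j]++; }
    double[][] pa = new double[n][n];
    for (int i = 0; i < n; i++) {
        if (inDeg[i] == 0) continue; // page i is not an authority
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                // Hub k points to both i and j: step back from i to k with
                // probability 1/|B(i)|, then forward to j with 1/|F(k)|.
                if (adj[k][i] == 1 && adj[k][j] == 1)
                    pa[i][j] += (1.0 / inDeg[i]) * (1.0 / outDeg[k]);
    }
    return pa;
}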

In the special case of a single connected component, the stationary distributions of the Markov chains for authorities and hubs satisfy:

$$a_i = \frac{|B(i)|}{|B|} \quad \text{for } a = (a_1, a_2, \ldots, a_n) \qquad (20)$$

and

$$h_i = \frac{|F(i)|}{|F|} \quad \text{for } h = (h_1, h_2, \ldots, h_n) \qquad (21)$$

where $B = \bigcup_i B(i)$ and $F = \bigcup_i F(i)$ are the sets of backward and forward links

respectively (Lempel and Moran). SALSA is less vulnerable to the tightly-knit community effect than HITS. A tightly-knit community is a set of vertices that form a clique or closely approximate a clique. The tightly-knit community effect occurs when a LAR algorithm ranks the vertices in a clique or approximate clique highly even though the pages are not authoritative on the query topic. Lempel and Moran show that the authority-hub mutual reinforcement effect of HITS is vulnerable to the tightly-knit community effect, while SALSA eliminates this effect by considering the two step Markov chain (Lempel and Moran). PSALSA is an incremental improvement on SALSA that takes into account the popularity of a page within a neighborhood when computing the authority weights.



The Breadth First Search Algorithm ______________________________________________________________

Before we survey the Breadth First Search, or BFS, algorithm, we define some terminology. Let a B path be a path in which we follow a link backward, and an F path one in which we follow a link forward; a BF path is one in which we follow a link backward and then forward (Borodin, Roberts, Rosenthal, and Tsaparas 4). The BFS algorithm is a hybrid of the PSALSA and HITS algorithms: PSALSA takes into account the popularity of a page within a neighborhood when computing the authority weights, while HITS takes the structure of the graph into account rather than doing a detailed link analysis. BFS takes the idea of local popularity in PSALSA and extends it from a single link to an n-link neighborhood. Instead of looking at the number of $(BF)^n$ paths that leave $i$, it considers the number of $(BF)^n$ neighbors of node $i$ (Borodin, Roberts, Rosenthal, and Tsaparas 5). We define $(BF)^n(i)$ as the set of nodes that can be reached from $i$ by following a $(BF)^n$ path. The contribution of a node $j$ to the weight of a node $i$ depends on the distance of $j$ from $i$; the weight of $i$ can be written as:

$$a_i = 2^{-1}|B(i)| + 2^{-2}|BF(i)| + 2^{-3}|BFB(i)| + \cdots \qquad (21)$$

The BFS algorithm then starts at node i and does a breadth-first search on its neighbors. It takes a backward or forward step on each iteration and includes the new nodes it comes across. The weight factors are updated accordingly to compute the final weights (Borodin, Roberts, Rosenthal, and Tsaparas 5).

Non-Linear Dynamic Systems _____________________________________________________________________

Many of the algorithms that we examined, including PageRank, LocalRank, and HITS, fall into the category of linear dynamic systems because the iterative function we apply is a linear (matrix) operator. In this section, we study



variations on HITS where the iterative operator is not a linear function, specifically the AT(k) and NORM(p) algorithms. Both fall into the category of non-linear dynamic systems because the iterative function is not a matrix operator. Non-linear dynamic systems are a less researched area within Link Analysis Rank algorithms simply because the algorithms lack a closed form solution; the weights have to be computed through an iterative process that is computationally and memory expensive for large matrices. In addition, there is no guarantee that the algorithms converge to a steady state of weights or page relevance measures. Depending on the choice of parameters, the algorithms might converge to a cycle, or the system may become chaotic and converge to a strange attractor. Since commercial systems cannot compromise on the quality or speed of search results, non-linear systems have not caught on, though small tests on hyperlinked graphs have shown promising results (Tsaparas 62). That said, we look at AT(k) and NORM(p) for potential improvements on the HITS algorithm that could be applied to a hybrid model.

The AT(k) Algorithm ____________________________________________________________________________

A variation on HITS, the Authority Threshold algorithm AT(k) considers only the k most important authorities when calculating hub weights. Let $F_k(i)$ be the subset of $F(i)$ containing only the k most important authorities. Then the hubs and authorities can be calculated as:

$$a_i = \sum_{j \in B(i)} h_j \quad \forall i \in [n], \qquad h_i = \sum_{j \in F_k(i)} a_j \quad \forall i \in [n] \qquad (22)$$

If k is greater than the maximum out-degree of the hyperlink graph, AT(k) reduces to standard HITS (Tsaparas 61).
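As a sketch, the AT(k) hub update replaces the plain sum over F(i) in the HITS sketch above with a sum over the k largest authority weights (illustrative names, assuming the same adjacency matrix convention):

// AT(k) hub update: h_i sums only the k largest authority weights
// among the pages that i points to (the set F_k(i)).
public static double[] atkHubUpdate(int[][] adj, double[] a, int k) {
    int n = adj.length;
    double[] h = new double[n];
    for (int i = 0; i < n; i++) {
        java.util.List<Double> out = new java.util.ArrayList<>();
        for (int j = 0; j < n; j++)
            if (adj[i][j] == 1) out.add(a[j]);
        // Keep the k largest authority weights.
        out.sort(java.util.Collections.reverseOrder());
        for (int j = 0; j < Math.min(k, out.size()); j++)
            h[i] += out.get(j);
    }
    return h;
}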

The NORM(p) Algorithm _______________________________________________________________________

Another non-linear dynamic variation of HITS, the NORM(p) family of algorithms scales the authority weights instead of considering only a subset of authority nodes.



Instead of choosing arbitrary scaling parameters, a NORM(p) algorithm uses the norm of the authority weight vector. The hubs and authorities can be calculated as:

$$a_i = \sum_{j \in B(i)} h_j \quad \forall i \in [n], \qquad h_i = \left( \sum_{j \in F(i)} a_j^{\,p} \right)^{1/p} \quad \forall i \in [n] \qquad (23)$$

As p increases, the larger authority weights dominate increasingly. If p is 1, NORM(p) reduces to HITS (Tsaparas 61).
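A corresponding sketch of the NORM(p) hub update, under the same illustrative conventions:

// NORM(p) hub update: replace the plain sum over F(i) with a p-norm,
// matching equation (23).
public static double[] normPHubUpdate(int[][] adj, double[] a, double p) {
    int n = adj.length;
    double[] h = new double[n];
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int j = 0; j < n; j++)
            if (adj[i][j] == 1) sum += Math.pow(a[j], p);
        h[i] = Math.pow(sum, 1.0 / p); // reduces to HITS when p = 1
    }
    return h;
}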



. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

SURVEY OF RANK MERGING AND AGGREGATION

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

An interesting dimension to web page ranking is rank aggregation, the problem of merging two different rankings of a data set in an optimal manner. One could think, for instance, of running a search query using two distinct ranking algorithms such as PageRank and HITS, and merging the results of those rankings into a final user ranking. Rank aggregation of results from different web search algorithms is particularly interesting given the number of different search algorithms, the fact that no single algorithm can be optimal for all users, and the fact that no search engine is fully comprehensive in its web search. While our research is still primarily focused on the development of hybrid algorithms, we chose to explore the rank merging area, at least briefly, this semester. In this section, we present a survey of the literature on rank merging, as well as the additional algorithm ideas that stem from this survey.

Definitions and Terms ______________________________________________________________________________

We define a list τ with respect to a universe U as an ordering of a subset S of U, where:

$$\tau = [x_1 > x_2 > \cdots > x_d], \quad x_i \in S \qquad (24)$$

and > is an ordering relation on S. We can also denote τ(i) as the rank of i. If τ

contains all the elements in U, then we call it a full list, i.e., a permutation of U. If τ orders only a subset of U, on the other hand, then we call it a partial list (Dwork, Kumar, Naor, and Sivakumar 2).

Distance Measures ________________________________________________________________________________

In this section, we discuss various criteria to measure the distance between two particular list rankings. Both the Spearman footrule and Kendall tau distance



measures discussed below are common metrics for judging the performance of rank aggregation schemes.

Spearman's Footrule _______________________________________________________________________________

The Spearman footrule distance is defined as the sum, over all the elements i in S, of the absolute difference between the rank of element i in the two lists. Given two lists, υ and τ, we define the Spearman footrule as:

$$SF(\upsilon, \tau) = \sum_i \left| \upsilon(i) - \tau(i) \right| \qquad (25)$$

We can normalize this value by $\frac{1}{2}|S|^2$ to obtain the standardized footrule. Spearman's footrule is a linear time computation (Dwork, Kumar, Naor, and Sivakumar 2).

Kendall Tau _________________________________________________________________________________________

The Kendall Tau distance is a metric that counts the number of pairwise disagreements between two lists. Formally, we can define the Kendall tau distance

between two lists, υ and τ, as:

$$KT(\upsilon, \tau) = \left| \{\, (i,j) : i < j,\; \upsilon(i) < \upsilon(j) \wedge \tau(i) > \tau(j) \,\} \right| \qquad (26)$$

We can normalize this value by $\frac{1}{2}|S|(|S| - 1)$ to obtain the normalized Kendall tau distance. Intuitively, we can think of Kendall tau as a bubble sort distance; hence the Kendall tau distance between two lists with n elements can be computed in n log n time.
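Both distances are straightforward to compute for full lists. A sketch, assuming each list is represented as an array giving the rank of every element (illustrative names):

// Spearman footrule (25) and Kendall tau (26) between two full lists.
// rankU[i] and rankT[i] hold the rank of element i in each list.
public static int spearmanFootrule(int[] rankU, int[] rankT) {
    int d = 0;
    for (int i = 0; i < rankU.length; i++)
        d += Math.abs(rankU[i] - rankT[i]);
    return d;
}

public static int kendallTau(int[] rankU, int[] rankT) {
    // Naive O(n^2) pairwise count of disagreements; a merge-sort based
    // count achieves the n log n bound mentioned above.
    int d = 0;
    int n = rankU.length;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if ((rankU[i] < rankU[j]) != (rankT[i] < rankT[j]))
                d++;
    return d;
}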



We can define the Kendall tau and Spearman footrule metrics above for partial lists as well as for multiple lists (Fagin, Kumar, and Sivakumar 2).

Scaled Footrule ____________________________________________________________________________________

The scaled footrule is a metric to measure distances between a full and a partial list. Scaled footrule weights the contribution of an element i in a list L by the length of the list L (Fagin, Kumar, and Sivakumar 2). If υ is a full list and τ a partial list, then the scaled Spearman footrule is:

$$SF_{scaled}(\upsilon, \tau) = \sum_{i \in \tau} \left| \frac{\upsilon(i)}{|\upsilon|} - \frac{\tau(i)}{|\tau|} \right| \qquad (27)$$

Optimal Rank Aggregation ______________________________________________________________________

The "best" rank aggregation is a subjective term because it depends on the distance that is optimized. For instance, if we minimize the Kendall tau distance, we have a Kemeny optimal aggregation, which theoretically corresponds to the geometric median of the inputs. Kemeny optimizations have a maximum likelihood interpretation and eliminate the "noise" from different rankings. This notion of eliminating noise is linked to the notion of eliminating spam in the web page world and is particularly relevant to our research. Computationally, however, Kemeny optimal aggregation is an NP-hard problem. In their paper, "Rank Aggregation Methods for the Web," Kumar and Sivakumar show that the Kemeny optimization is well approximated by the Spearman footrule

optimization, which is a polynomial time computation. Mathematically, if σ is the

Kemeny optimization of lists $\tau_1, \ldots, \tau_k$ and $\varpi$ is the footrule optimization, then Dwork, Kumar, Naor, and Sivakumar (4) show that:

$$K(\varpi, \tau_1, \ldots, \tau_k) \le 2\, K(\sigma, \tau_1, \ldots, \tau_k) \qquad (28)$$



Rank Aggregation Methods ______________________________________________________________________

In this section, we discuss different algorithms for rank aggregation, some of which we use in our hybrid implementations.

Borda's Algorithm __________________________________________________________________________________

Borda's method is a simple method of rank aggregation that assigns a score to each rank and then sorts candidates by the cumulative total score. It is computationally simple (linear time), but it does not satisfy the Condorcet criterion. Formally, given full lists $\tau_1, \ldots, \tau_k$, Borda's aggregation assigns to every element c in the universe S a score B(i, c) for every full list $\tau_i$, where B(i, c) is the number of candidates ranked below c in that list, and the Borda score B(c) is defined as $\sum_i B(i, c)$. Candidates are ranked by decreasing Borda score. Borda's method has numerous variations, such as sorting on different L(p) norms, sorting by median values, sorting by geometric means, etc. (Fagin, Kumar, and Sivakumar 4).
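A sketch of Borda aggregation over full lists, under the convention that rank 1 is best (illustrative names):

// Borda aggregation of k full lists over n candidates.
// ranks[l][c] is candidate c's rank in list l, with rank 1 the best,
// so n - ranks[l][c] counts the candidates ranked below c in that list.
public static Integer[] bordaAggregate(int[][] ranks, int n) {
    final double[] score = new double[n];
    for (int[] list : ranks)
        for (int c = 0; c < n; c++)
            score[c] += n - list[c]; // B(l, c)
    Integer[] order = new Integer[n];
    for (int c = 0; c < n; c++) order[c] = c;
    // Sort candidates by decreasing cumulative Borda score B(c).
    java.util.Arrays.sort(order, (x, y) -> Double.compare(score[y], score[x]));
    return order;
}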

Footrule and Scaled Footrule Optimization ____________________________________________

In the previous section, we noted that Kemeny optimal aggregation is well approximated by footrule aggregation. In the case of full lists, footrule optimal aggregation is akin to taking the median of values in a position vector: given a set of full lists, if the median positions of the candidates i in the lists form a permutation, then this permutation is a footrule optimal aggregation. Computationally, the footrule optimal aggregation of full lists can be computed in polynomial time because it is equivalent to finding a minimum cost perfect matching in a bipartite graph; Kumar and Sivakumar prove this in their paper. In the case of partial lists, the problem is NP-hard. For partial lists, Sivakumar and Kumar define a bipartite graph where the weight W(c, p) is the scaled footrule distance (from the partial lists) of a ranking that places c at position p:



$$W(c, p) = \sum_i \left| \frac{\tau_i(c)}{|\tau_i|} - \frac{p}{n} \right| \qquad (29)$$

The minimum cost matching solution gives us the scaled footrule aggregation (Fagin, Kumar, and Sivakumar 6).

Markov Chain Methods ___________________________________________________________________________

In this section, we discuss different proposed methods for aggregating lists using Markov chains. Broadly speaking, the n candidates correspond to the states of the chain, and the transition probabilities depend on the lists; the stationary ranking of the Markov chain is the aggregated final ranking. Markov chain methods have gained attention because:

1) They handle partial lists by exploiting the connectivity of the chain to infer comparisons between pairs that were not explicitly ranked.

2) They are more meaningful than ad hoc majority based preferences.

3) They handle uneven comparisons; for instance, if a page appears in the bottom half of a majority of the lists and near the top of the remaining minority, the Markov chain methods weigh the quality of the lists.

4) They enhance heuristics from the HITS and PageRank algorithms.

5) They are computationally efficient, i.e., polynomial time algorithms.

In this brief overview, we examine four of the Markov chain constructions proposed by Kumar and Sivakumar (Dwork, Kumar, Naor, and Sivakumar 5).


Markov Chain I ___________________________________________________________________________________

Transition Matrix: If the current state is P, then the next state is chosen from the pages that were ranked higher than P by some list that ranked P.
Intuition: We move from the current page to a better page, with roughly a 1/k probability of staying on the same page, where k is the average rank of the current page (Dwork, Kumar, Naor, and Sivakumar 7).

Markov Chain II __________________________________________________________________________________

Transition Matrix: If the current state is P, then the next state is chosen by picking a ranking x uniformly from all the lists that rank P, and then picking a page Q uniformly from the set of pages with x(Q) at most x(P).
Intuition: This chain accounts for several lists of rankings, rather than a pairwise comparison. This scheme protects minority viewpoints and generalizes the geometric mean analogue of Borda's method (Dwork, Kumar, Naor, and Sivakumar 7).

Markov Chain III _________________________________________________________________________________

Transition Matrix: If the current state is P, then the next state is chosen by picking a ranking x uniformly from the partial lists containing P, and then picking a page Q ranked by x uniformly. If x(Q) < x(P), we move to Q; else we stay at P.
Intuition: This idea generalizes the Borda scheme; in particular, if the initial state is chosen uniformly, then after one step the chain produces a ranking in which P is ranked higher than Q if the Borda score of P is higher than that of Q (Dwork, Kumar, Naor, and Sivakumar 7).

Markov Chain IV _________________________________________________________________________________

Transition Matrix: If the current state is P, then the next state is chosen by picking a page Q uniformly from the union of all pages ranked by the search engines. If x(Q) < x(P) for a majority of the lists x, we move to Q; else we stay at P.


Intuition: This idea generalizes the idea of sorting by the number of pairwise contests an element has won (Dwork, Kumar, Naor, and Sivakumar 7).
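As an illustration, the transition matrix of Markov Chain IV can be assembled directly from the pairwise majority contests. A sketch for full lists (illustrative names; the stationary distribution of the resulting chain, computed for instance by power iteration, gives the aggregate ranking):

// Transition matrix for Markov Chain IV over n pages and full lists.
// ranks[l][p] is page p's rank in list l (lower is better).
public static double[][] markovChainIV(int[][] ranks, int n) {
    double[][] P = new double[n][n];
    for (int p = 0; p < n; p++) {
        double stay = 1.0;
        for (int q = 0; q < n; q++) {
            if (p == q) continue;
            int wins = 0;
            for (int[] list : ranks)
                if (list[q] < list[p]) wins++;
            // Q is picked with probability 1/n; we move to Q only if a
            // majority of the lists rank Q above P.
            if (2 * wins > ranks.length) {
                P[p][q] = 1.0 / n;
                stay -= 1.0 / n;
            }
        }
        P[p][p] = stay; // remaining mass: stay at P
    }
    return P;
}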


Spam Resistance and the Condorcet Criterion _________________________________________________

The Condorcet criterion refers to the simple notion that if there is an element that defeats every other in majority voting, it should be ranked the highest. The extension of this is that if there is a partition (C, D) of a universe S where, for any x in C and y in D, the majority prefers x to y, then x should be ranked above y. This principle, known as the extended Condorcet criterion, has a number of properties that can be classified as "spam fighting" and have received much interest of late. As Kumar and Sivakumar show, spam pages are likened to Condorcet losers and occupy the bottom partition of any aggregated ranking that satisfies the Condorcet criterion. Moreover, if good pages are preferred by the majority to bad ones, they will be Condorcet winners and ranked highly in an aggregation. The aggregation methods above do not guarantee that a Condorcet winner is ranked first. Kumar and Sivakumar suggest a process of modifying the initial aggregation of input lists so that the Condorcet losers are pushed to the bottom of the ranking. This method of local Kemenization has shown significant metric improvements when tested on the Borda and Markov chain aggregation methods in practice (Fagin, Kumar, and Sivakumar 8).

Local Kemenization ________________________________________________________________________________

A locally Kemeny optimal aggregation satisfies the extended Condorcet principle and is similar to a Kemeny optimal aggregation. We define a list M as a locally Kemeny optimal aggregation of partial lists if it is impossible to reduce the total distance to the partial lists by flipping any adjacent pair. Unlike full Kemeny optimization, a locally Kemeny optimal aggregation can be calculated in n log n time. A local Kemenization of a full list with respect to a set of partial lists computes a locally Kemeny optimal aggregation of the partial lists consistent with the full list (a sketch of the adjacent-flip procedure follows the list below). In particular, in such an aggregation,

1) The Condorcet losers receive the lower ranks and the winners receive the higher ranks.


2) The result disagrees with the full list on a pair (i, j) only if a majority of the partial lists disagree with the full list on (i, j).


3) For all values x between 1 and the length of the aggregation, the length-x prefix of the output is a local Kemenization of the top x elements of the full list (Fagin, Kumar, and Sivakumar 9).
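A minimal sketch of the adjacent-flip procedure referenced above, for full input lists (illustrative names; this simple O(n^2 k) loop terminates because each flip strictly reduces the total Kendall distance to the inputs, but it is not the n log n construction of Dwork, Kumar, Naor, and Sivakumar):

// Flip adjacent neighbors while a strict majority of the input lists
// disagrees with the current order. full holds element ids in ranked
// order; ranks[l][e] is element e's rank in input list l.
public static void localKemenize(int[] full, int[][] ranks) {
    boolean changed = true;
    while (changed) {
        changed = false;
        for (int i = 0; i + 1 < full.length; i++) {
            int a = full[i];
            int b = full[i + 1];
            int preferB = 0;
            for (int[] r : ranks)
                if (r[b] < r[a]) preferB++;
            if (2 * preferB > ranks.length) { // majority wants b above a
                full[i] = b;
                full[i + 1] = a;
                changed = true;
            }
        }
    }
}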

Implementation of the Rank Aggregation Algorithms _________________________________________


We incorporate some of the above research on rank aggregation into our hybrid algorithms analysis by testing, in addition to our own hybrids and existing algorithms, rank aggregation on two existing algorithms: PageRank and HITS. We take the results of ranking web pages with PageRank and HITS and merge them using different schemes: an arithmetic sum Borda, Borda with a geometric mean, and Markov Chain I. We compare these results to the results from our other ranking tests to see whether merging the results of existing algorithms buys us anything. The individual schemes are discussed in the section on hybrid algorithms.



. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

HYBRID ALGORITHM PROPOSALS

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

In this section, we discuss the motivations and technical details behind six hybrid algorithms that represent improvements to existing LAR algorithms. Each hybrid algorithm takes the strengths of one or more of the existing PageRank, LocalRank, Hilltop, HITS, and SALSA algorithms and combines them to form a more robust algorithm.

The PROBLEM Algorithm __________________________________________________________________________

PageRank assumes that the user is equally likely to jump to any page on the Internet not linked to the current page. However, this is not an accurate description of users' behavior: in practice, users jump to a few selected pages, such as their Favorites, with high probability and rarely visit other pages. In the PROBLEM (Page Rank on Beta-Distributed Link E Matrix) algorithm, we model our intuition that users have different affinities for different pages by assuming that each user has a probability p of jumping to any given page. The probabilities p are drawn from a beta distribution. The beta distribution gives us tremendous flexibility in

modeling users' preferences using just two non-negative parameters, α and β. The probability density function for p is

$$\Pr\{p = x\} = \frac{1}{\beta(\alpha, \beta)}\, x^{\alpha - 1} (1 - x)^{\beta - 1}, \qquad \text{where } \beta(\alpha, \beta) = \frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha + \beta)} \text{ and } \Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t}\, dt \qquad (30)$$



The beta distribution is a generalization of PageRank's uniform distribution because it reduces to the uniform distribution on [0, 1] when α = β = 1. The elements of E can be drawn from this beta distribution and then normalized in the $L_1$ norm. Let:

$$\mu = \frac{\alpha}{\alpha + \beta} \quad \text{and} \quad \phi = \frac{1}{\alpha + \beta + 1} \qquad (31)$$

The parameter µ is the mean of the beta distribution, while φ can be thought of as a dispersion index. As α and β approach 0, φ approaches 1 and the values of p become concentrated near 0 and 1. This models the fact that web pages can be segmented into broad categories: pages that the user jumps to very frequently have p ≈ 1, while pages that the user rarely jumps to have p ≈ 0. As α and β approach ∞, φ approaches 0, the values of p become concentrated around µ, and the normalized vector E begins to approximate Brin, Page, Motwani, and Winograd's uniform vector. Hence, φ measures the polarization of a user's preference for web pages. Figures 1 and 2 illustrate the probability density function of the beta distribution for different combinations of α and β.

Figure 1: Beta Distributions - I. Probability density functions p(x) on x ∈ [0, 1] for (α, β) = (5, 5), (1, 1), and (0.5, 0.5).



Given an intuitive model for a user’s preferences, we can set values of µ and φ and then use the fact that

$$\alpha = \frac{\mu (1 - \phi)}{\phi}, \qquad \beta = \frac{(1 - \mu)(1 - \phi)}{\phi} \qquad (32)$$

to obtain the α and β from which to draw the values of E. The distribution that seems most plausible is the blue function from Figure 2, with α = 0.5 and β = 1.5. If we use this function to model the elements of E, we will have a few pages with a high probability of being jumped to and a large number of pages with a low probability of being jumped to. The beta distribution needs to have the "reverse J-shape" of the blue function in Figure 2 for our model to make intuitive sense.

Figure 2: Beta Distributions - II. Probability density functions p(x) on x ∈ [0, 1] for (α, β) = (1.5, 0.5), (0.5, 1.5), and (2, 4).
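A sketch of how the vector E might be drawn under this model, using the Apache Commons Math BetaDistribution as the sampler; the library choice and method names are ours for illustration (our own implementation uses the numerical approximation in StatFunctions instead):

import org.apache.commons.math3.distribution.BetaDistribution;

// Draw the jump vector E from Beta(alpha, beta) and normalize it in
// the L1 norm, as the PROBLEM model prescribes.
public static double[] sampleJumpVector(int n, double alpha, double beta) {
    BetaDistribution dist = new BetaDistribution(alpha, beta);
    double[] e = new double[n];
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        e[i] = dist.sample(); // the user's affinity for page i
        sum += e[i];
    }
    for (int i = 0; i < n; i++) e[i] /= sum; // L1 normalization
    return e;
}

With the reverse J-shaped parameters above, sampleJumpVector(n, 0.5, 1.5) yields a few large affinities and many values near zero, as the model intends.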



Figure 3: Shape of the beta distribution across the (α, β) plane: U-shaped, reverse J-shaped, and J-shaped regions, with the mode at (α − 1)/(α + β − 2) when α, β > 1.

Figure 3 illustrates the different shapes of the beta distribution for different

combinations of α and β. From the graph, we can see that we need α < 1 and β > 1 in order to obtain a reverse J-shaped probability density function.

THEORETICAL STRENGTHS

Flexibility of the Beta Distribution
The beta distribution gives us tremendous flexibility in modeling the heterogeneity of users' preferences with just two extra parameters. Figures 1 and 2 illustrate this flexibility with six possible shapes that the beta distribution can take on.

Stronger "Jump" Model
The PROBLEM algorithm gives us a much more intuitive view of how people behave while actually surfing the web.

THEORETICAL WEAKNESSES

Difficult to estimate model parameters

Even though we established that the beta distribution needs α < 1 and β > 1, we do not have any heuristic to determine the optimal values of α and β.



Sensitivity to model parameters
The PROBLEM algorithm's rankings are sensitive to the beta distribution's parameters; therefore, the choice of α and β is very important.

PageRank's inherited flaws
Since the PROBLEM algorithm is built on top of the PageRank algorithm, it inherits PageRank's shortcomings, such as dangling links, query independence, and link farming, described in the PageRank section.

The CANDDE Algorithm ___________________________________________________________________________

Given an incomplete graph of the web, dangling links are too important to ignore as PageRank does: ignoring them artificially boosts the importance of legitimate outgoing links from pages with dangling links and can have dramatic effects on the final ranking of web pages. The CANDDE (Centralized And Neutrally Dispersed Dangling Edges) algorithm corrects the omission of dangling links by augmenting the web graph with a central dummy vertex to which all the dangling links are assumed to point. The central dummy vertex disperses page rank uniformly across all the vertices by linking to each of the original nodes once. The central dummy node thus eliminates the effect of dangling links on legitimate links from pages with dangling links. The only concern with the addition of this dummy vertex is its impact on the ranking of pages. The principal eigenvector of the PageRank Markov matrix corresponds to a steady state vector; intuitively, we can think of the i-th element of the principal eigenvector as the proportion of time that a random walk would spend at page i over a very long period. With the addition of the central dummy vertex, a random walk spends a smaller proportion of time at each of the real vertices of the web graph. Mathematically, this means that the elements of the principal eigenvector of the augmented web graph are smaller than the corresponding elements of the principal eigenvector of the PageRank web graph. The effect of the dummy vertex is most pronounced on vertices which have a large proportion of dangling links. The ranking of these pages is diminished because


their links to other real pages in the web graph become less important. Similarly, pages which have a large number of incoming links from pages with dangling links will have their ranking reduced.

An Example of the CANDDE Algorithm _________________________________________________________

Figure 4 illustrates a graph with three pages, A, B and C. Pages A and B have dangling links that are eliminated by PageRank.

Figure 4: Original Graph (pages A, B, and C, with dangling links as drawn).

Figure 5: PageRank Solution.

The PageRank graph is shown in Figure 5, while Figure 6 illustrates the graph created by the CANDDE algorithm. The principal eigenvectors are




$$V_{PageRank} = \begin{bmatrix} 0 \\ 0.5 \\ 0.5 \end{bmatrix} \quad \text{and} \quad V_{CANDDE} = \begin{bmatrix} 3/37 \\ 14/37 \\ 11/37 \\ 9/37 \end{bmatrix}$$

where the last element of $V_{CANDDE}$ is the dummy vertex element. After we remove the dummy vertex element and renormalize in the $L_1$ norm, we get the final ranking vector

$$V_{CANDDE} = \begin{bmatrix} 3/28 \\ 14/28 \\ 11/28 \end{bmatrix}$$

As mentioned earlier, B's rank gets boosted, and A has a positive rank because it has an incoming link from the dummy vertex.

Figure 6: CANDDE Solution.
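A sketch of the augmentation step itself, with the dummy vertex appended as index n (illustrative names; the entries count multi-edges):

// CANDDE augmentation: append a central dummy vertex (index n).
// danglingCount[i] is the number of dangling links on page i; each is
// redirected to the dummy vertex, which in turn links once to every
// original page.
public static int[][] canddeAugment(int[][] adj, int[] danglingCount) {
    int n = adj.length;
    int[][] aug = new int[n + 1][n + 1];
    for (int i = 0; i < n; i++) {
        System.arraycopy(adj[i], 0, aug[i], 0, n);
        aug[i][n] = danglingCount[i]; // dangling edges point to the dummy
        aug[n][i] = 1;                // dummy disperses rank uniformly
    }
    return aug;
}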

THEORETICAL STRENGTHS

Democratic Treatment of Dangling Links
The CANDDE algorithm presents perhaps the most democratic and fair manner of dealing with dangling links. Every sampled hyperlink graph can be expected to have a significant number of dangling links, and hence it is important that we do not sweep them under the carpet.



THEORETICAL WEAKNESSES

PageRank's inherited flaws
Since the CANDDE algorithm is built on top of the PageRank algorithm, it inherits PageRank's shortcomings with regard to query independence and link farming, described in the PageRank section.

Rankings are graph dependent
As we increase the size of the hyperlink graph, the percentage of dangling links is expected to go down. Since the CANDDE algorithm is sensitive to dangling links, the rankings depend on the size of the web graph.

The Percentage Authority Threshold or PAT(k) Algorithm _______________________________

The AT(k) algorithm considers only the k most important authorities when calculating hub weights. However, if a hub links to fewer than k authorities, this filtration is rendered useless. The PAT(k) algorithm corrects this problem by considering the top k% of authorities. This captures the intuition behind the AT(k) algorithm and ensures that the filtration of weak authorities when calculating hub weights always works. Let $F_k(i)$ be the subset of $F(i)$ containing only the top k% most important authorities. Then the hubs and authorities can be calculated as:

$$a_i = \sum_{j \in B(i)} h_j \quad \forall i \in [n], \qquad h_i = \sum_{j \in F_k(i)} a_j \quad \forall i \in [n] \qquad (33)$$
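Reusing the illustrative names from the AT(k) sketch earlier, with k now a percentage, the only change is the size of the kept set:

// PAT(k): keep the top k percent of i's authorities instead of k of them.
int keep = (int) Math.ceil((k / 100.0) * out.size());
for (int j = 0; j < Math.min(keep, out.size()); j++)
    h[i] += out.get(j);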

If k = 100%, PAT(k) reduces to standard HITS.

THEORETICAL STRENGTHS

More Intuitive Filtering of Authorities
Percentage based filtering is a far more intuitive and robust way of retaining the top authority nodes than absolute number based filtering.



THEORETICAL WEAKNESSES

Hubs and Authorities Framework's inherited flaws
Since the PAT(k) algorithm is built on top of the hubs and authorities framework, it has the same weaknesses as HITS and other related algorithms.

The PAGE Algorithm ______________________________________________________________________________

As an Internet user surfs the world wide web, he/she follows hyperlinks with the aim of getting from one page to another. A plausible model of a user's behavior is to assume the user follows the shortest path from one page to another. The PAGE (Percentage of Aggregate Geodesic Edges) algorithm exploits this idea to rank web pages: the rank of a page is defined to be the percentage of shortest paths that pass through the page. The PAGE algorithm first reduces the unweighted hyperlink multi-graph to a weighted simple directed graph in which the weight of an edge is the number of edges between the two vertices in the original multi-graph. Given a hyperlink graph $G = (V, E)$, there are $O(n^2)$ shortest paths between the vertices. These shortest paths, or geodesics, can be computed in $O(n^3)$ time using the dynamic programming approach of the Floyd-Warshall algorithm. Let $S_{ij}$ be the shortest path between $v_i$ and $v_j$, and let $\sigma(i, j, k)$ be a binary filtration on a vertex $v_k$. Then, we can define:

$$\sigma(i, j, k) = \begin{cases} 1 & \text{if } v_k \in S_{ij} \\ 0 & \text{otherwise} \end{cases} \qquad (34)$$

Also, let the total number of directed shortest paths between all the vertices be T. The ranking of a vertex is then defined as:

$$R(v_k) = \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\sigma(i, j, k)}{T} \qquad (35)$$



There are $O(n^2)$ shortest paths, each of which has length $O(n)$. Therefore, the calculation of $R(v_k)$ for all $k \in [n]$ takes $O(n^3)$ time.
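A sketch of the whole computation, using Floyd-Warshall distances and the standard test that $v_k$ lies on a shortest i-j path exactly when d(i,k) + d(k,j) = d(i,j); this illustrative version treats the edge weights as distances and counts one geodesic per connected ordered pair:

// PAGE sketch: score pages by the fraction of shortest paths through
// them, per equations (34) and (35). w[i][j] is the edge weight, or
// Double.POSITIVE_INFINITY if there is no edge.
public static double[] pageScores(double[][] w) {
    int n = w.length;
    double[][] d = new double[n][n];
    for (int i = 0; i < n; i++) d[i] = w[i].clone();
    for (int i = 0; i < n; i++) d[i][i] = 0.0;
    // Floyd-Warshall all-pairs shortest paths, O(n^3).
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
    double[] r = new double[n];
    long total = 0; // T: the number of connected ordered pairs
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (i != j && d[i][j] < Double.POSITIVE_INFINITY) total++;
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                // sigma(i, j, k) = 1 when v_k is interior to a shortest
                // i-j path: d(i, k) + d(k, j) == d(i, j).
                if (i != j && k != i && k != j
                        && d[i][j] < Double.POSITIVE_INFINITY
                        && d[i][k] + d[k][j] == d[i][j])
                    r[k] += 1.0;
    if (total > 0)
        for (int k = 0; k < n; k++) r[k] /= total;
    return r;
}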

THEORETICAL STRENGTHS

Computationally efficient
The PAGE algorithm is more computationally efficient than HITS. The algorithm runs in $O(n^3)$ time, which is efficient for large hyperlink graphs.

THEORETICAL WEAKNESSES

In-Degree Dependence
The PAGE algorithm's biggest weakness is that it favors pages with high in-degree. This is a direct consequence of the shortest path computation.

The Percentage Local Rank Algorithm __________________________________________________________

PLocalRank(k) and PAT(k) share a similar philosophy. LocalRank considers only the top 1000 pages, while PLocalRank(k) considers the minimum of 1000 and the top k% of pages. This makes more intuitive sense than considering a fixed number of pages for every query: if the query initially returns fewer than 1000 pages, LocalRank considers all of them and its filter becomes essentially useless, whereas PLocalRank(k)'s filter always works by considering the top k% of pages. PLocalRank(k) reduces to LocalRank when k is 100% and approximates the behavior of LocalRank for broad queries. Even so, the rankings of the two algorithms can differ significantly because of subtle changes to the underlying link structure of the reduced subgraph.

The Relaxed Hilltop Algorithm ___________________________________________________________________

Hilltop has the most stringent filtration criteria among all the algorithms that we surveyed in this paper. As mentioned earlier, this can lead to no results being


returned if the query is too specific. To correct this flaw, Relaxed Hilltop makes the filter for affiliated pages less stringent. Instead of requiring that two pages fulfill only one of the four Hilltop criteria in order to be classified as affiliated, the algorithm can require web pages to satisfy two, three, or all four of the criteria. This ensures that the algorithm ranks a larger subset of the universe of web pages and has a lower probability of returning no results.

The Hilltop HITS Algorithm _______________________________________________________________________

Kleinberg attempts to filter affiliated pages before running his HITS algorithm for ranking pages. However, Kleinberg's definition of affiliated pages lacks the real world rigor of Hilltop's definition. Hilltop-HITS combines Hilltop's filtration of affiliated pages with Kleinberg's powerful HITS algorithm. The analysis of Hilltop-HITS is the same as that of HITS.





. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

TECHNICAL APPROACH

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

In this section, we summarize the technical approach we took in this research, as well as the technical implementations of our hybrid algorithms. We discuss the process of algorithms research, developing the hybrid models, implementing the algorithms, running the algorithms on the test graphs, and carrying out user surveys to analyze our data. Finally, we describe the challenges, achievements, and lessons learned over the last year.

Our Hybrid System ________________________________________________________________________________

We spent the last year researching and developing various algorithms for web page ranking. What we have at the end of our project is a set of hybrid algorithms, some built on existing algorithms and ideas we have researched, and some more radical in that they represent a change in the way of thinking about this problem. We also have some early algorithms that essentially use rank aggregation to merge the results of searching with two different algorithms, notably HITS and PageRank. The following hybrid algorithms, which we developed, are described in detail in the section Hybrid Algorithms:

• PROBLEM

• PAT(k)

• CANDDE

• PAGE

In addition, the implementation side of our work consists of simple Java implementations of our hybrid algorithms, as well as of the existing benchmark algorithms, including PageRank, LocalRank, AT(k), and HITS. In order to run benchmarking tests on our algorithms, we looked at different seed pages and finally chose the upenn.edu domain. We implemented Java web parsing code to crawl the web and output


the results for different pages into XML. We then parse this XML in Java to create web graphs, on which we run our ranking algorithms. We ran our different algorithms on different queries within this domain and asked users how they viewed the different rankings. We created a web survey asking users to rate the different rankings and received 103 user responses. We analyzed these responses and drew conclusions about our hybrid algorithms, described in the section Statistical Analysis and Conclusions.

System Breakdown: Technical Approach to the Process ______________________________________

In this subsection, we look at the different components of our system and describe the technical process behind each.

Algorithms Research ____________________________________________________________________________

Since our hybrid models stem from existing work in Link Analysis Rank algorithms, we had planned to spend a lot of time researching the literature in this space. We had initially planned to look at PageRank, LocalRank, and the non-linear algorithms developed by Tsaparas, as mentioned in Revision 1. For each algorithm, we planned to understand the mathematics in detail and get a sense of the theoretical benefits and drawbacks of the solution.


We began our research after submitting Revision 1, starting with a detailed study of PageRank and its precursors. Based on this, we developed a sense of where PageRank was strong and weak; the issue of query independence was probably the key one. We then looked at algorithms that addressed this issue. We did a detailed study of LocalRank as planned, and in addition we looked at the Hilltop algorithm because we wanted to examine how Hilltop incorporated the query string into its ranking. Again, we analyzed both algorithms. During our research we came across a lot of literature on the HITS algorithm and did a deeper study of it. Although in terms of modifications to HITS we had planned to look only at AT(k) and NORM(p), we found some interesting literature on hybrids of the HITS algorithm, including the SALSA algorithm, Hub Averaging HITS, and BFS, which we also examined in detail.


Later in the first semester, we developed our hybrid models based on this algorithms research. Our survey of the literature is written into this paper under the section Survey of the Literature; it includes most of the algorithms we surveyed and, in many cases, our analysis of the benefits and drawbacks of each. In the second semester, as we were developing our hybrid algorithms, our advisor encouraged us to look at another part of this space: rank merging, i.e., merging the results of two different ranked lists into a single list. In the spirit of research, we spent quite some time looking at rank aggregation and summarized our work in a separate section on Rank Aggregation. Our research included an understanding of different distance measures, theoretically optimal algorithms, and computationally feasible algorithms. We have also incorporated our work in this area into our comparison of different algorithms by implementing hybrid algorithms that are aggregations of HITS and PageRank.

Hybrid Model Development ___________________________________________________________________

After analyzing the space, we started thinking about hybrid algorithms. We got started on this process a little later than expected because we surveyed more literature on HITS than we had initially planned, but we believe this survey was valuable in developing our hybrid algorithms. We also continued the survey in our second semester when we researched rank aggregation. In terms of algorithms, it seemed that many hybrid algorithms would draw from a combination of HITS, LocalRank, and PageRank. There are two kinds of hybrid models we have worked on, excluding the models developed from rank aggregation. The first set are models that draw from some existing literature but represent a fundamental change in the assumptions or structure of the underlying algorithm. We worked towards developing four such algorithms:




• PROBLEM (Page Rank on Beta-Distributed Link E Matrix): This algorithm changes the assumptions behind the parameter E in the PageRank algorithm. Instead of having uniform entries, E is now drawn from a beta distribution, which we discuss in detail in the Hybrids section.

• CANDDE (Centralized And Neutrally Dispersed Dangling Edges): In CANDDE we deviate from the elimination of dangling edges in PageRank and create a new framework for dangling link analysis.

• PAT(k): In this algorithm we provide a non-linear dynamical solution based on Tsaparas’ original algorithm, changing the assumption of the parameter k.

• PAGE (Percentage Aggregate Geodesic Edges): In PAGE, we approach the problem of page ranking from a completely new paradigm of user behavior.

All four of these algorithms are described in detail in the section on Hybrid Algorithms. Since they represented dramatic changes in the space, we implemented them to benchmark against existing algorithms. We had initially planned to finalize a single hybrid algorithm, but we chose instead to implement and test all four algorithms to gain a better understanding of how each performs on different test cases and what the merits and flaws of each are. The second set of models we developed were incremental improvements to existing algorithms. While these models did not comprise full new algorithms, they represented interesting tweaks that we could possibly incorporate into our bigger algorithms. We worked on three such incremental models (Hilltop-HITS, Relaxed Hilltop, Percentage Local Rank), described in detail in the Hybrids section. When it came to implementation, we chose not to implement these, firstly because they represented only incremental changes, and secondly because, in the interest of time, we wanted to focus on new paradigm-shifting ideas.



Parsing Software Development _______________________________________________________________

In Revision 1, we had planned on completing the code architecture of the parsing software; we put time into this in the fall semester and completed it then. The software scans a given URL and creates an XML file with certain information included in the schema, crawling web pages up to 4 levels deep. The schema for the data collected and a sample instance of the schema are included at the end of this section. We chose to let our parsing software run four levels deep because this struck the right balance between gathering enough information to generate a large web graph and keeping the resulting XML small enough that reading the data back was computationally feasible. To summarize the specifications, the parsing software can:

• Accept multiple search keywords separated by spaces, since we do all the string tokenization internally.

• Work with .html, .shtml, .htm, .jsp, .asp, Cold Fusion, CGI, Perl, and PHP web pages. All other links, e.g., to Word documents and PDF files, are filtered out because we cannot parse them.

• Continue searching if a page times out. The software is fault tolerant with respect to web page loading; we use a timeout of 15 seconds per page and retry each page 10 times before moving on to the next page.

• Make sure a single page is ranked only once. All redirection links to the same document (denoted with the "#" symbol in HTML) are eliminated.

• Parse relative and absolute URLs. Every URL in the final XML output is absolute.



• Crawl the web down to four levels of links.

Seed Page Collection ____________________________________________________________________________

We had planned in Revision 1 to have a list of 100 seed pages on which we would run our algorithms, and we had a tentative list of seed pages. The approach we took to developing these seed pages is what we had planned in Revision 1, based on the methodology of Kleinberg. To recap, we had stated that the nodes in a sample graph consist of:

• A seed node S.

• The set of nodes that S points to. Call this set A.

• The set of nodes that points to S. Call this set B.

• The set of nodes that point to all the nodes in A. Call this set C.

• The set of nodes that all the nodes in B point to. Call this set D.

• The vertex set of the graph is $V = S \cup A \cup B \cup C \cup D$.

Some of the links between nodes in V exist purely for navigational purposes; such intrinsic links are links between pages with the same domain name. Transverse links are links between pages not on the same domain. These links convey much more information than intrinsic links and are the essence of the problem at hand, so we eliminate all intrinsic links from the vertex set. When we actually developed the algorithms and parsed the data, we decided it would be more instructive to run the algorithms on a single seed page with multiple queries. We decided to focus our attention on the single domain upenn.edu, and we now run all our queries using http://www.upenn.edu as the seed page. We also chose this domain because the resulting pages would be familiar to the participants of the survey, who would be able to give more instructive feedback on our results. As far as the actual query words are concerned, we tested the software on multiple different queries. We chose query words that were broad enough to be comprehensible to a wide range of users and narrow enough to have an



interesting meaning. Our Statistical Analysis section discusses the queries we ran our algorithms on.

Algorithms Implementation ___________________________________________________________________

We chose to implement the following existing algorithms in Java for benchmarking purposes:

• PageRank

• LocalRank

• AT(k)

• HITS

We felt this set represented a good range of work in this area, from the simple page rank algorithms, to the hubs and authorities model, to the non-linear dynamical model. We chose to implement the algorithms in Java, rather than our original plan of C++, because of our comparative familiarity with Java. We also implemented the following hybrid solutions in Java:

• PAGE

• CANDDE

• PROBLEM

• PAT(k)

• Rank Merging Algorithms (HITS and PageRank)
  o Simple Borda
  o Geometric Mean Borda
  o Markov Chain I

System Architecture _____________________________________________________________________________

Our system can be split into the algorithms engine, which contains implementations of the various algorithms we compare, the parsing software, and various utilities. The Java files for each part are:


1) Algorithm Engine

Filename Implements

ATK.java Tsaparas’ AT(k) and our hybrid PAT(k) algorithms

Borda.java Borda’s simple and geometric merging schemes

Hits.java Kleinberg’s HITS algorithm

LocalRank.java Google’s LocalRank

MarkovMerge.java Kumar & Sivakumar’s Markov Chain Rank Merging algorithm

Page.java Our PAGE algorithm

PageRank.java Google’s PageRank algorithm

Problem.java Our PROBLEM algorithm

Spearman.java Spearman's Footrule measure

2) Parsing and graph creation

Filename Implements

LyncXML.java The conversion of parsed HTML to a storable XML format and the conversion of the XML representation of the webgraph into an adjacency matrix

Parse.java The recursive parsing of HTML, HTM, ASP, JSP, PHP, Cold Fusion, CGI and Perl web pages

WebPage.java Java representation of a webpage and its outgoing links used during parsing and reading in from file

3) Utilities


Filename Implements

Functions.java Basic mathematical functions

Lync.java The main file which controls the overall execution of our project

Pairs.java Helper class used to pair an element with its rank.

RankHelper.java Various utilities, including converting an adjacency matrix to a Markov chain, searching, sorting, and filtering

StatFunctions.java A numerical approximation to the beta function, used by PROBLEM

UserInput.java Class to collect parameters from the user

The general order of execution is:

1) Prompt the user for input:

a) Prompt for the name of the file where the XML web graph will be printed.
b) Prompt for the name of the file where the final rankings will be printed.
c) Prompt for the search query.
d) Prompt for the seed URL.

2) Recursively parse the HTML of web pages, beginning with the seed URL and going down an arbitrarily specified number of levels. We ran our parsing 4 levels deep for all our test queries.

3) Print the parsed HTML to the XML output file.

4) Read in the XML output file.

5) Create the adjacency matrix of the web graph.

6) Run the following algorithms on the adjacency matrix:

a) PageRank
b) LocalRank
c) HITS
d) AT(k)
e) PAT(k)
f) PROBLEM
g) CANDDE
h) PAGE
i) Borda Simple Rank Merging using HITS and PageRank

59

) Borda Geometric Rank Merging using HITS and PageRank

Page 64: What is Pairs Trading - Penn Engineeringcse400/CSE400_2004_2005/16writeup.pdfthe most relevant web sites to the user within the first few results. Ranking algorithms that solve this

) Markov Chain Rank Merging 7) Print the rankings to the output file.


A diagram describing the implementation of our system is below.

[Diagram: LYNC System Architecture]

Analysis and Surveys

__________________________________________________________________________

We ran the above algorithms on the upenn.edu domain for predetermined queries. In order to measure the performance of the different algorithms, we turned the question over to the users and performed a user survey of our results. For every query, we put forth the rankings suggested by our different algorithms and asked the user to indicate how good each ranking was. We judged the results of our survey based on high relevance ratios, as Kleinberg and Tsaparas had done. The survey was created using Microsoft Word and Excel, and a sample is included in the Technical Approach Appendix. We performed a detailed statistical analysis of our findings, which is presented in the Statistical Analysis section of our paper.

Technical Challenges

_______________________________________________________________________________

Over the course of the two semesters, we faced numerous technical challenges as we researched different algorithms, developed our hybrid model, and ultimately implemented our algorithms.

Algorithms Challenges and Key Design Decisions

______________________________________________

• One of the biggest challenges we faced while doing our research was obtaining relevant research. PageRank is a very popular algorithm, and there is some information about it, but beyond very simple layman’s descriptions there is very little rigorous mathematical information about it and other algorithms.

• Another major technical challenge was finding information on non-linear dynamical systems. While linear dynamical systems and the hubs and authorities model have been studied for a while, non-linear dynamical systems are a new area of research and there is little information about them to rely on.

• A third major challenge we faced was that most of the existing algorithms for page ranking start from the same model of user behavior, the random surfer model. When one looks to build on this foundation, most of the algorithms one comes up with are naturally incremental, since they derive from the same inherent model. In the process of our research, we wanted to look at new models of user behavior, many of which had not been applied to page ranking.

• A fourth major challenge we faced was in understanding ranking ideas not originally written for ranking web pages. Rank aggregation was one area where this came up multiple times. Borda’s algorithm and the Condorcet criteria, for instance, were ideas developed in the 1700s and 1800s to be applied to voting. Translating these ideas to the web ranking space was a mental shift.

• A fifth challenge was that a lot of the research, because of the commercial nature of web page ranking, is proprietary. For example, there is very little information about patented algorithms such as LocalRank. Similarly, there are algorithms proprietary to Google, Alta Vista, MSN, and other commercial search engines that we could not tap into as foundations for our hybrids.

• A sixth challenge on the algorithms side was that it is often easy to come up with an idea, but hard to refine it and prove its correctness. Arguing the correctness and rationale behind our hybrids was one of the biggest problems we faced.

• A more specific challenge we faced was characterizing different input parameters. The non-linear dynamical systems described by PROBLEM and PAT(k) are sensitive to their input parameters. For instance, the output of PROBLEM depends on the values of α and β that determine the shape of the beta distribution. As we discussed earlier, we require α < 1 and β > 1. However, the model needs to be calibrated for different combinations of α and β. The PAT(k) algorithm depends on the value of k. In order to determine the correct values of parameters, we tried to look for a user behavior rationale rather than an optimal data mining rationale. For instance, on PAT(k) we identified a range of optimal percentages through small sample tests and then narrowed in on the correct value of k through research and testing. With the beta distribution algorithm, PROBLEM, we again looked at different models of user preferences to determine what the correct values of the two parameters would be. (A sketch of the beta-function computation appears after this list.)

• Another problem, which we imagine occurs with algorithms in practice, is finding the correct balance between accuracy and speed. Often, algorithms that sacrifice some accuracy for speed while still approximating the correct results, such as the scaled footrule optimization, are what we are looking for.

• Convergence is a big question with Markov Chain algorithms. PageRank, PROBLEM, and a couple of other algorithms converge at some point, and determining what this convergence value is and how many iterations to run an algorithm for is a challenge. (A power-iteration sketch with an explicit convergence test also follows this list.)

• One of the key decisions we made in this process was to run the tests on 4 of the original 6 algorithms. Our plan at the outset was to select one hybrid model and run the tests on that hybrid alone. In the end, however, we chose to discard only the two incremental improvement algorithms and run the tests on the 4 major shift algorithms. We retained all four of our major algorithms because we believe no one algorithm can predict a model of user behavior. The results of the four different algorithms are constructive in that they help us understand why certain algorithms perform well on certain test cases; looking forward, there is perhaps a middle ground where we can incorporate elements of certain algorithms into yet another hybrid. We may also be able to constructively combine some of the hybrids themselves using rank merging.

Implementation Challenges and Key Design Decisions

________________________________________

• On the implementation side, one of the biggest questions was how many levels deep our web crawling should go. On one hand, a large number of levels gives us a larger web graph to run our results on. On the other hand, the number of entries in our XML file grows exponentially with an increase in levels, making the reading and writing of information extremely cumbersome. Our experience showed that the program tended to crash beyond 5 levels deep because of buffer limitations, and the graph was too sparse at 2 levels. We chose to run the algorithms 3-4 levels deep to make this tradeoff.


• A second big challenge was whether we needed to store HTML during our web parsing. We decided storing HTML would have led to very large files and mandated a database, and hence we decided to do everything with XML intermediaries. Since the ranking algorithms run just once on each hyperlink graph and the focus of our project is the hybrid algorithms, we felt there was no need to build a database for storing the graphs. The Java program outputs the hyperlink graphs as instances of the schema defined in the section on Technical Approaches.

• A third question was what seeds to use. Initially we had planned on using different pages identified in Revision 2 as our seeds. In the end we decided on running all our algorithms on the upenn.edu domain. There were two reasons for this. First, we wanted to stick to a dense, tight-knit web graph. Second, we felt pages in this domain would be particularly relevant to our survey participants.

• Two other questions that came up were what kinds of files we wanted to parse and how to perform file IO. In the end, in the interest of reading large files efficiently, we decided to use StringBuffers in Java. Also, we chose a subset of file types that we would parse (see our implementation details).

• Another question, since the page ranks were often very close, was numerical precision. We tackled this problem by using arbitrary precision arithmetic (BigInteger and BigDecimal) in Java. (A short sketch follows this list.)

• Filtering pages that match a query was also a challenge. The queries are specific to a given graph and consist of commonly occurring words in the given set of web pages. For example, a graph with a medical website as its seed, such as <www.webmd.com>, might be associated with the query word “doctor”. The presence or absence of the query word was finally stored as a Boolean variable along with the adjacency lists.
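A small illustration of the precision point above; the 50-digit context and the 1e-40 perturbation are arbitrary, chosen only to show a difference that double arithmetic rounds away.

    import java.math.BigDecimal;
    import java.math.MathContext;

    // Two rank values that differ only far past double precision are still
    // distinguishable under arbitrary-precision arithmetic.
    public class PrecisionSketch {
        public static void main(String[] args) {
            MathContext mc = new MathContext(50);
            BigDecimal a = BigDecimal.ONE.divide(new BigDecimal(7), mc);
            BigDecimal b = a.add(new BigDecimal("1e-40"), mc);
            System.out.println(a.compareTo(b));                      // -1: distinct
            System.out.println(a.doubleValue() == b.doubleValue());  // true: doubles collapse
        }
    }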


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

TECHNICAL APPROACH APPENDIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

This section details the schema and the list of seeds described in the section on Technical Approach. It also includes samples of our result sets and our survey.

XSD Schema and XML Instance of the Schema

_________________________________________________

Below is the schema of the data to be collected by the parsing software.


<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <xsd:annotation>
    <xsd:documentation xml:lang="en">
      Schema for parsing output. Copyright 2004 Nalin Moniz and
      Radhika Gupta. All rights reserved. Also copyrighted by
      Scholes the Cat.
    </xsd:documentation>
  </xsd:annotation>

  <xsd:element name="document" type="DocumentType"/>

  <xsd:complexType name="DocumentType">
    <xsd:sequence>
      <xsd:element name="query" type="QueryType"/>
      <xsd:element name="page" type="PageType" minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>

  <xsd:complexType name="QueryType">
    <xsd:sequence>
      <xsd:element name="queryword" type="xsd:string" minOccurs="1" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>

  <xsd:complexType name="PageType">
    <xsd:sequence>
      <xsd:element name="URL" type="xsd:anyURI"/>
      <xsd:element name="IPAddress" type="IPAddressType"/>
      <xsd:element name="matchword" type="xsd:string" minOccurs="0" maxOccurs="unbounded"/>
      <xsd:element name="link" type="xsd:anyURI" minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>

  <xsd:simpleType name="IPAddressType">
    <xsd:restriction base="xsd:string">
      <xsd:pattern value="(([1-9]?[0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.){3}([1-9]?[0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])"/>
    </xsd:restriction>
  </xsd:simpleType>

</xsd:schema>


Below is an instance of the schema described above, a sample set of data collected by the parsing software.

<document>
  <query>
    <queryword>web</queryword>
    <queryword>search</queryword>
    <queryword>algorithms</queryword>
  </query>
  <page>
    <URL>www.google.com</URL>
    <IPAddress>123.45.67.89</IPAddress>
    <matchword>web</matchword>
    <matchword>search</matchword>
    <link>www.google.com/jobs</link>
    <link>www.google.com/aboutus</link>
  </page>
  <page>
    <URL>www.radsblog.com</URL>
    <IPAddress>23.45.91.234</IPAddress>
    <link>www.nalinsblog.com</link>
    <link>www.google.com</link>
  </page>
</document>
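As a rough sketch of how an instance like the one above can be read back in (step 4 of our execution order), the JDK's DOM parser suffices. Our LyncXML class may organize this differently; the file name here is illustrative.

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.*;

    // Reads a web-graph XML instance and prints each page with its out-links.
    public class GraphReader {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse("webgraph.xml");  // illustrative name
            NodeList pages = doc.getElementsByTagName("page");
            for (int i = 0; i < pages.getLength(); i++) {
                Element page = (Element) pages.item(i);
                String url = page.getElementsByTagName("URL").item(0).getTextContent();
                StringBuilder out = new StringBuilder(url + " ->");
                NodeList links = page.getElementsByTagName("link");
                for (int j = 0; j < links.getLength(); j++)
                    out.append(' ').append(links.item(j).getTextContent());
                System.out.println(out);
            }
        }
    }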


Sample Output from Parsing Software

_________________________________________________________

Below is a sample of the output generated by our software on the word “bank” with four levels of depth. The entire output is not shown in the interest of space. The Excel file generated shows the list of URLs crawled and then, for each of the eleven different algorithms, the associated rank scores. The pages with the highest normalized scores are the ones ranked first.
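For reference, the normalization mentioned above can be as simple as scaling each algorithm's score vector to sum to one; a minimal sketch:

    // L1-normalizes a score vector so scores are comparable across algorithms;
    // the page with the largest normalized score is ranked first.
    static double[] normalize(double[] score) {
        double sum = 0;
        for (double s : score) sum += s;
        double[] out = new double[score.length];
        for (int i = 0; i < score.length; i++) out[i] = score[i] / sum;
        return out;
    }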

Query: Bank

Sample Survey and Results

_______________________________________________________________________

Below is a sample of the survey we provided to users and a filled-in response. The survey took the above output and extracted the top 5 pages generated by the 11 different algorithms. We then replaced the algorithm names with anonymous letters (whose mapping only we knew) and presented the survey for 10 different query words. For each ranking set generated by each algorithm on a particular query, we asked the participant to rate the set on a scale of 1-10. The survey was administered to 103 students, faculty, and staff on our campus. In the interest of space, we show only one of the ten queries we presented to users.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

ANALYSIS OF SURVEY RESULTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

We ran a survey with the top five results of 11 algorithms on 10 different queries. The eleven algorithms compared were:

1) PageRank
2) LocalRank
3) HITS
4) AT(k)
5) PAT(k)
6) PROBLEM
7) CANDDE
8) PAGE
9) Borda’s Simple Rank Merging using HITS and PageRank
10) Borda’s Geometric Rank Merging using HITS and PageRank
11) Markov Chain Rank Merging

The ten queries run on the upenn.edu domain were:

1) Statistics
2) Technology
3) Weather
4) Safety
5) Housing
6) Health
7) Food
8) Fees
9) Bank
10) Calendar


We asked users to rate each of the search results on a scale from 1 to 10, with 1 being the lowest and 10 being the highest. The users were asked to rate the search results for relevancy, accuracy and heterogeneity. The queries we tested the algorithms on were broad enough to represent multiple spheres of interest. For example, the query Safety returned links to Building Maps, Penn’s Emergency site, the office of the Vice Provost for University Life, the Penn Almanac, Facilities, the Daily Pennsylvanian and the admissions site for prospective students, among others. We wanted users to consider how well a particular algorithm captured this heterogeneity. We used a blind randomized survey to prevent users from being biased for or against a particular algorithm. We used placeholders for the algorithm names and randomly assigned placeholders for each query. We received 103 responses to our survey. The results are summarized in the tables below.

Table 1 – Median Response

Table 2 – Mean Response


Table 3 – Standard Deviation

We ran a two-sample t-test to test the differences in the mean performance of the algorithms. Our null hypothesis was that the mean performance of two algorithms is the same, H₀: x̄₁ = x̄₂. The t-statistic is calculated for every pair of algorithms in Table 4, with the mean of the algorithm in the row being x̄₁ and the mean of the algorithm in the column being x̄₂. The p-values for the test are calculated in Table 5 below. The p-values highlighted in red indicate that we reject the null hypothesis for a given pair of algorithms. The Hasse diagram below illustrates the partial ordering of the algorithms’ performance based on the two-sample t-test: if algorithm A lies above algorithm B, then algorithm A is statistically better than B.

From Tables 4 and 5 we can see that CANDDE outperforms all the other 10 algorithms at the 5% significance level. This is probably because it is based on the robust PageRank technology and eliminates the effect of the numerous dangling links that exist on our relatively small web graphs. As we recursively parse pages more levels down, we can expect CANDDE’s advantage to diminish.

The second best algorithm was PROBLEM. PROBLEM’s good performance suggests that the beta distribution on the E matrix accurately captures a user’s behavior over a small domain such as upenn.edu, because most critical information is found on a few central web pages. PageRank’s assumption that the user is equally likely to jump to any random page instead of following a hyperlink is a little far-fetched. Instead, the beta distribution captures a user’s affinity for the small set of central pages when surfing the upenn.edu domain.
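For concreteness, the pairwise statistic can be sketched as below. This version uses the unpooled (Welch) form of the two-sample t statistic; our own computation may have used the pooled variant.

    // Two-sample t statistic comparing the mean survey ratings of two
    // algorithms; a and b hold the individual user ratings.
    final class TTestSketch {
        static double tStatistic(double[] a, double[] b) {
            double ma = mean(a), mb = mean(b);
            return (ma - mb) / Math.sqrt(var(a, ma) / a.length + var(b, mb) / b.length);
        }
        static double mean(double[] x) {
            double s = 0;
            for (double v : x) s += v;
            return s / x.length;
        }
        static double var(double[] x, double m) {  // sample variance, n - 1 denominator
            double s = 0;
            for (double v : x) s += (v - m) * (v - m);
            return s / (x.length - 1);
        }
    }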


In contrast to CANDDE and PROBLEM, which performed incredibly well, PAGE performed very poorly among users. This suggests that PAGE’s model of user behavior was inadequate. PAGE returned the “Course Roster” as its top response for the query Statistics, “Penn Safety” as its top page for the query Weather, and “Penn Publications” as its top choice for the query Food, among its other failures. In spite of its poor performance, PAGE was able to return a few results that made survey takers take notice. For example, PAGE was the only algorithm that returned “Undergraduate Admissions” in response to the query Health, let alone as its top ranking. People taking our survey found this interesting because student health is a big consideration for parents considering sending their children to Penn, and the undergraduate admissions page addresses this sentiment with a lot of information about health.


Borda’s Geometric merging algorithm outperforms Borda’s Simple merging algorithm, as expected, because the geometric mean is a more robust statistic than the arithmetic mean. PAT(k) and Kleinberg’s HITS performed as well as PROBLEM and Borda’s Geometric merging algorithm as a result of the strength of the hubs and authorities framework. PAT(k) outperformed AT(k) because, as we mentioned in the section on algorithm design, filtering a percentage of the best authorities is a much better heuristic than filtering a fixed number of authorities: AT(k) keeps all of a hub’s authorities when the number of authorities is less than the threshold k, whereas PAT(k) continues to filter only the top percentage as intended.

The Markov merging algorithm outperformed Borda’s Simple merging algorithm, but there was no statistical difference between its performance and that of Borda’s Geometric merging algorithm. Both of Borda’s rank merging heuristics used Google’s PageRank and Kleinberg’s HITS as a base. Since rank merging prevents spurious results, we expected Borda’s algorithms to do no worse than either PageRank or HITS. However, HITS did just as well as Borda’s Geometric merging algorithm, PageRank did just as well as Borda’s Simple merging algorithm, and Borda’s Simple merging algorithm actually underperformed HITS! This suggests that merging rankings might not be a good heuristic. Users tend to consider only the top couple of web pages, and if merging changes the ordering of the top pages, it can produce dramatic shifts in the quality of the search results.

Surprisingly, LocalRank did quite poorly on the upenn.edu domain. This could be because of excessively stringent filtering of pages on a small web graph; in a larger web graph, we could expect LocalRank to do much better. There was no statistical difference among the performance of AT(k), Borda’s Simple merging algorithm, Kumar and Sivakumar’s Markov merging algorithm, and PageRank.
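As a sketch of the two Borda variants compared above: each page’s merged score is the arithmetic or geometric mean of its positions in the two input rankings, and lower merged scores rank first. The two-ranking restriction and the names are illustrative simplifications of our Borda.java.

    import java.util.Arrays;

    // Borda-style merge of two rankings (e.g., the HITS and PageRank orders).
    // r1 and r2 list page ids 0..n-1 in rank order; lower mean position wins.
    final class BordaSketch {
        static Integer[] merge(int[] r1, int[] r2, boolean geometric) {
            int n = r1.length;
            final double[] score = new double[n];
            int[] p1 = new int[n], p2 = new int[n];
            for (int i = 0; i < n; i++) { p1[r1[i]] = i + 1; p2[r2[i]] = i + 1; }
            Integer[] pages = new Integer[n];
            for (int p = 0; p < n; p++) {
                pages[p] = p;
                score[p] = geometric
                        ? Math.sqrt((double) p1[p] * p2[p])  // geometric mean of positions
                        : (p1[p] + p2[p]) / 2.0;             // arithmetic mean of positions
            }
            Arrays.sort(pages, (x, y) -> Double.compare(score[x], score[y]));
            return pages;  // merged order, best first
        }
    }

A geometric mean of positions is less sensitive to a single bad position than the arithmetic mean, which matches the robustness argument above.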


For the most part, users found that the algorithms performed at approximately the same level for most of the queries. The two outliers were the queries Housing and Bank. The algorithms performed poorly for the query Housing because most of the algorithms returned “Penn Portal” as their top choice, along with many spurious but interesting results such as “Graduate Admissions”, “Campus Emergency”, “The School of Social Work”, “Philadelphia”, “Philadelphia Information” and many more. In contrast, in response to the query Bank, most of the algorithms agreed that the top websites were “Penn Portal”, “The Student Federal Credit Union” and “Wharton”, among others. The relative stability of the performance of the algorithms over different queries suggests that there is a good chance the performance of the algorithms might be scalable.

Table 4 – Differences In Algorithm Mean Performance

Table 5 – P-values for Algorithm Mean Performance


Table 6 – Differences in Query Performance

Table 7 – P-values for Query Performance


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

FURTHER IMPROVEMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Our research focused on the efficiency, accuracy, scalability and robustness of new ranking algorithms, and even though we addressed each of these issues throughout our paper, we still have many outstanding questions.

1) As we run LYNC on larger and larger web graphs, will the performance of CANDDE converge to PageRank’s performance? This depends on the percentage and influence of dangling links in the web graph. For small subgraphs of the Internet, many links point to pages outside the set of pages under consideration, but as the size of the subgraph increases, outdated links form a larger portion of the dangling links. Therefore, CANDDE and PageRank may or may not converge in their performance.

2) How will PROBLEM perform on larger subgraphs? As we mentioned in the section on Algorithm Performance, PROBLEM performed really well on the upenn.edu domain because the domain has a few central pages which contain a considerable amount of information. As we expand the size of our web graph beyond the upenn.edu domain, a few central pages might not have a monopoly on information. We believe that PROBLEM will continue to perform well out of sample; however, we believe that its performance will decrease because of a diffusion in information.

3) We believe that PAGE’s model of user behavior is fundamentally flawed and that it does not warrant further investigation. In addition, PAGE is O(n⁴) while the other algorithms are O(n³).

4) We tested all our algorithms on one-word queries to isolate the effect of specific words. Multiple-word queries could produce very different results.


5) When crawling the upenn.edu domain, we ignored all web pages outside the domain to reduce the influence of dangling links. In spite of this, CANDDE was the top algorithm. If dangling links from outside upenn.edu were included, we might have seen a dramatic improvement in CANDDE’s performance.

6) If we had included the relative placement of text on a page, our algorithms might have performed differently.

7) We can expect LocalRank to perform much better out of sample, as its filtration will be much more effective on larger web graphs.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

MILESTONES AND FINAL TIMELINE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Our key milestones and deadlines throughout the year are below. The blue and green comments indicate comments made for Revision 2, and the red comments are the comments made since then referring to new work.


Date | Milestone | Deliverable | Status
9/27/2004 | Revision 1 | Submit Revision 1 to Professor Taylor and review Project Proposal. Done preliminary research on dynamic systems. | Complete
10/31/2004 | Research Complete | Complete research on 1) non-linear systems and 2) existing ranking algorithms. Come up with some alternatives for hybrid algorithms and discuss with Professor Guha. | Complete
11/15/2004 | Hybrid Model Draft 1 | Finalize a draft of the hybrid algorithm and discuss open issues with Professor Guha. | Hybrid algorithm still under discussion. Will finalize by the end of the semester.
11/29/2004 (changed to 12/2/2004) | Revision 2 | Submit Revision 2 to Professor Taylor. Summarize key findings and present the hybrid algorithm. Initial list of 100 seed pages. Code architecture of parsing software. | Complete. Parsing software architecture and code is also complete.
12/20/2004 | Algorithm Finalized | Hybrid algorithm finalized with Professor Guha before end of semester. | NEW DEADLINE
1/31/2005 | Graph Creation Complete | Complete parsing software. Complete testing of 100 seed pages and identify 10-15 pages to test. Code architecture of all algorithms completed, 50% of code complete for algorithms. | DONE
2/28/2005 | Algorithm Code Complete | Working Java (not C++ anymore) code for all mentioned algorithms including hybrid model. Code architecture of survey page. Research of rank aggregation and development of rank aggregation ideas. | Algorithms code complete. Change in plans: seed pages now give way to the upenn.edu domain.
3/31/2005 | Survey Complete | Survey web interface complete. Survey run for 15 days. | Survey created and run for 20 days. 103 responses collected.
4/15/2005 (moved up to 4/07/2005) | Analysis Complete | Statistical analysis of survey complete. Draft of final paper. | Change in anticipated deadline; survey analysis and write-up finished earlier than planned in Rev II.
4/30/2005 (moved up to 4/11/2005) | Final Paper Complete | Final paper and presentation complete. | Paper completed!
4/15/2005 | Poster Complete | Aim to have poster completed! ☺ | —
4/21/2005 | Demo Day | Senior Design Demo Day!!! ☺ | —

Senior Design Demo Day

__________________________________________________________________________

At the Demo Day, we will present our Hybrid Model and the summary of our Final Paper. We will also present the survey results from the different algorithms we have written and a statistical analysis of these results. We are also happy to demonstrate how our algorithms run on sample queries.

Updated Project Timeline

_________________________________________________________________________

Project Timeline from Rev 1:

Project Timeline from Rev 2:


Final Project Timeline:


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

CONCLUSIONS AND REFLECTIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

It has been, in conclusion, a long, difficult, challenging, and yet extremely rewarding year. For our first independent algorithms research project, we feel as if we have accomplished quite a bit, and yet could do quite a bit more. Our results suggest that CANDDE, a twist on dangling links, performed extremely well for a small domain because of the relative importance of dangling links on a small graph. PROBLEM and PAT(k), while not as powerful as CANDDE, performed comparably to HITS, the former because it describes a model of user behavior that is reasonably correct for the upenn.edu domain, and the latter because of how it exploits the non-linear dynamical system idea. PAGE was disappointing, but we realize it is perhaps based on a model of user behavior that is incorrect for this domain.

The biggest achievement for us is being able to absorb difficult academic material in a space unfamiliar to us, understand what the key problems and directions of thought are, and then do some independent thinking of our own. We have gone from knowing very little about non-linear dynamical systems or page ranking, beyond Google’s PageRank, to designing four hybrid algorithms of our own. While we have only done small tests on these algorithms for performance, we are excited to have suggested new models of user behavior that might be accurate, at least in a subset of cases. We are also pleased that we have been able to incorporate some implementation into a primarily research project.


The process of research is new to us, and we have learned through our Senior Design that research is complex and dynamic. We started out with a specific plan for exactly what we were going to do and when we were going to do it. Days into the project, we discovered that things do not go as planned. When you are researching, some information is not found, and at the same time new information is found. Some questions are not answered, and new questions keep coming up. For instance, researching linear dynamical systems proved to be more difficult than we anticipated. At the same time, rank merging and aggregation was not an area we had considered exploring, but we did look into it, and as our Statistical Analysis suggested, we got some value out of it. That things keep changing in research is exciting for the learning, and frustrating because of unanticipated challenges.

One big thing we learned was that there is a major difference between researching existing ideas and coming up with new ideas. It is easy to critique other people’s work, but when the burden of new ideas is on you, things become more challenging. For instance, we realized that all the existing rank algorithms derive from the random surfer model, and that it is very difficult to make that paradigm shift from a consistently used user model. Making incremental improvements is often easy, but it is harder to make radical changes; sometimes you plan to come up with 4 algorithms in three days, but you cannot even come up with one in a week.

Another thing our Senior Design project highlighted was the difference between theory and implementation. We take algorithms classes and we take programming languages classes, but for once we put algorithms and implementation together. Merging the two is not as simple as we thought it would be. We assumed coding our algorithms would not take much time, because they were simple ideas and we just had to iron out the details. That ironing took a lot of time, and combined with scalability issues, implementation proved to be more time consuming than we could estimate. Had we to go back and do this again, we would probably have spent more time thinking about scalability and development time. We felt we spent a lot of time on good coding standards and quality architecture, but could have benefited from one more week of coding time. We also realized that implementation forces us to make choices, and this holds true for software development as a whole. There is always one more thing to code and one more bug to fix; we wanted to code our incremental improvements as well, but did not do so in the interest of time. We are sure all our classmates feel there is something more they could have done.

Another thing we realized is that often keeping your options open and not discarding things is helpful. Our original plan was to work with one hybrid, but our analysis has shown us that each hybrid has merits that we can capitalize on. Had we held back and not implemented each of these hybrids, we would not have had the benefit of this knowledge.


Finally, the best part of a theory project is that you never stop thinking about it. The research may be over for the purposes of Senior Design, but page ranking is a question that will still be at the back of our minds, and unintentionally or intentionally, we may find ourselves visiting CANDDE, PAGE, PROBLEM or PAT(k) sometime again. Who knows what’s in store for LYNC?


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Bharat, Krishna. “Ranking search results by re-ranking the results based on local inter-connectivity.” Patent 6,725,259. Primary Examiner: Vu, Viet. USPTO Patent Full-Text and Image Database. 27 Jan. 2003. 24 Sep. 2004 (A). <http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=/netahtml/search-bool.html&r=1&f=G&l=50&co1=AND&d=ptxt&s1=6,526,440&OS=6,526,440&RS=6,526,440>.

Bharat, Krishna and Mihaila, George. “Hilltop: A Search Engine Based on Expert Documents.” University of Toronto. Home Page. 2004. 10 Oct. 2004. <http://www.cs.toronto.edu/~georgem/hilltop/>.

Borodin, Allan, Roberts, Gareth, Rosenthal, Jeffrey, and Tsaparas, Panayiotis. “Finding Hubs and Authorities from Link Structures on the World Wide Web.” Michigan State University. 2004. 19 Oct. 2004. <web.cse.msu.edu/~cse960/Papers/LinkAnalysis/hubs-journal.pdf>.

Brin, Sergey and Page, Larry. “The anatomy of a large-scale hypertextual Web search engine.” Proceedings of the International World Wide Web Conference. Brisbane, Australia: 1998.

Brin, Sergey, Motwani, Rajeev, Page, Larry, and Winograd, Terry. “The PageRank Citation Ranking: Bringing Order to the Web.” Stanford University. 2004. 8 Oct. 2004. <http://www.stanford.edu/~backrub/pageranksub.ps>.

Coppersmith, Don and Winograd, Shmuel. “Matrix Multiplication via Arithmetic Progressions.” Annual Symposium on the Theory of Computing. 2004. 16 Oct. 2004. <http://portal.acm.org/citation.cfm?id=28396>.


Craven, Phil. “Google’s PageRank Explained.” Web Workshop. 2004. 19 Sep. 2004. <http://www.webworkshop.net/pagerank.html>.


De Witt, David and Wang, Yuan. “Computing PageRank in a Distributed Internet Search System.” Very Large Data Bases (VLDB). Ed. Mario A. Nascimento, M. Tamer Ozsu, et al. Toronto, Canada: August 31-September 3, 2004. 420-431.

Dwork, Cynthia, Kumar, Ravi, Naor, Moni, and Sivakumar, D. “Rank Aggregation Methods for the Web.” 20 February 2005. <http://portal.acm.org>.

Fagin, Ronald, Kumar, Ravi, and Sivakumar, D. “Efficient Similarity Search and Classification via Rank Aggregation.” 22 February 2005. <http://portal.acm.org>.

Google Corporation. Home Page. 19 Sep. 2004. <http://www.google.com/technology>.

Kleinberg, Jon. “Authoritative Sources in a Hyperlinked Environment.” Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms. 1998. 668-677.

Lempel, Ronny and Moran, Shlomo. “The Stochastic Approach for Link Structure Analysis (SALSA).” 9th International World Wide Web Conference. 2000. 29 Oct. 2004. <http://www9.org/w9cdrom/175/175.html>.

Rogers, Ian. “Google’s PageRank and how to make the most of it.” IPR Computing. Home Page. 2003. 20 Sep. 2004. <http://www.iprcom.com/papers/pagerank/>.

Schmidt, Claus. “Google’s 2 Rankings and You.” CLCS. Home Page. 20 Sep. 2004. <http://clsc.net/research/googles-2-rankings.htm>.

Search Engines Organization. Home Page. 9 Oct. 2004. <http://www.seorank.com/analysis-of-hilltop-algorithm.htm>.


Search Guild. Home Page. 12 Oct. 2004. <http://www.searchguild.com/tpage176-0.html>.

Search Marketing Information. Home Page. 21 Sep. 2004. <http://www.search-marketing.info/search-engines/major-search-engines/google-pagerank.htm>.

Search Marketing Information. Home Page. 21 Sep. 2004. <http://www.search-marketing.info/newsletter/articles/commodification-pagerank.htm>.

Tsaparas, Panayiotis. “Using Non-Linear Dynamical Systems for Web Searching and Ranking.” Principles of Database Systems. Paris, France: June 14-16, 2004. 59-69.

Zawodny, Jeremy. Home Page. 22 Sep. 2004. <http://jeremy.zawodny.com/blog/archives/000751.html>.
