Recommendation Systems and Web Search


Slide 1

Recommendation Systems and Web Search

Eytan Adar HP Labs

Baruch Awerbuch Johns Hopkins University

Boaz Patt-Shamir Tel Aviv University

David Peleg Weizmann Institute

Mark Tuttle HP Labs


Slide 2

Recommendation systems

• eBay
  – Buyers and sellers rate each transaction
  – Users check the ratings before each transaction

• Amazon
  – System monitors user purchases
  – System recommends products based on this history

• SpamNet (a collaborative filter)
  – SpamNet filters mail, users identify mistakes
  – Users gain reputations
  – Reputations affect the weight of recommendations


Slide 3

The problem

• There are many known attacks
  – Many villains boost each other’s reputation
  – One villain acts honestly for a while

• How can honest users collaborate effectively … in the presence of dishonest, malicious users?

• What is a good model for recommendation systems?
• What algorithms can we prove something about?
• We give fast, robust, practical algorithms.


Slide 4

A recommendation model

• n players, both honest and dishonest (a fraction of them, say αn, honest)
  – dishonest players can be arbitrarily malicious and collude

• m objects, both good and bad (a fraction of them, say βm, good)
  – players probe objects to learn whether they are good or bad
  – there is a cost to probing a bad object

• a public billboard
  – players post the results of their probes (the honest ones do…)
  – the billboard is free: no cost to read or write

• We think this is a direct abstraction of eBay


Slide 5

A simple game

• At each step of the game, one player takes a turn:
  – consult the billboard
  – probe an object
  – post the result on the billboard

• Honest players follow the protocol
• Dishonest players can collude in arbitrary ways

Goal: help honest users find good objects at minimal cost.


Slide 6

Bad ways to play the game

• Always try the object with the highest number of positive recommendations
  – dishonest players recommend bad objects
  – honest players try all bad objects first

• Always try the object with the least number of negative recommendations
  – dishonest players slander good objects
  – honest players try all bad objects first
  – (a popular strategy on eBay, but quite vulnerable)

• Simple combinations also fail


Slide 7

What about other approaches?

• Collaborative filtering [Goldberg, Nichols, Oki, Terry 1998], [Azar, Fiat, Karlin, McSherry, Saia 2001], [Drineas, Kerenidis, Raghavan 2002], [Kleinberg, Sandler 2003]
  – all players are honest, solutions are centralized

• Web-searching algorithms [Brin, Page 1998], [Kleinberg 1999]
  – compute a “transitive popular vote” of the participants
  – easily spammed by a clique of colluding crooks

• Trust [Kamvar, Schlosser, Garcia-Molina 2003]
  – use trusted authorities to assign trust to players
  – do we really care about trust? we care about cost!

Simple randomized algorithms can minimize cost.


Slide 8

Our results

• A simple, efficient algorithm for many contexts
  – Objects come and go over time
  – Players access different subsets of objects
  – Players have different tastes for objects (the most interesting model; here our algorithm can be substantially better than others)

• Players probe few objects
  – Only a constant number when most players are honest

• Corporate web search is the perfect application
  – Implemented on HP’s internal search engine


Slide 9

Good ways to play the game

• Exploration rule:
  – “choose a random object, and probe it”
  – okay if most objects are good

• Exploitation rule:
  – “choose a random player, and probe an object it liked”
  – okay if most players are honest

• But are there many good objects/honest players?!?


Slide 10

The balanced rule

• The balanced rule: flip a coin
  – if heads, follow the exploration rule
  – if tails, follow the exploitation rule
  (a code sketch follows below)

• How well does it work?

• We compute the expected cost of the probe sequence σ:
    cost(σ) = number of bad objects probed by honest players
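As a concrete reading of the balanced rule above, here is a minimal Python sketch of one honest player's turn. The billboard layout, the probe callback, and the fallback when a chosen player has posted nothing are illustrative assumptions, not the authors' implementation.

```python
import random

def balanced_probe(players, billboard, objects, probe, rng=random):
    """One turn of the balanced rule for an honest player (a sketch).

    players:   list of all player ids.
    billboard: dict player id -> set of objects that player has posted as good.
    objects:   list of all object ids.
    probe:     callable(obj) -> bool, True iff the object is good
               (probing a bad object is where the cost is incurred).
    Returns the object probed and whether it turned out to be good.
    """
    if rng.random() < 0.5:
        # Exploration: probe a uniformly random object.
        obj = rng.choice(objects)
    else:
        # Exploitation: pick a uniformly random player and probe an object
        # it recommended; if it recommended nothing, fall back to exploring.
        liked = billboard.get(rng.choice(players), set())
        obj = rng.choice(sorted(liked)) if liked else rng.choice(objects)
    good = probe(obj)
    return obj, good
```

An honest player would then post the object to its billboard entry whenever the probe came back good; dishonest players, of course, may post anything at all.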


Slide 11

• Split the probe sequence σ into phases D0, D1, D2, … — a new phase starts each time an honest player finds a good object:

    σ = … a b c d e d e d e e …
    (phase boundaries D0, D1, D2, … are marked on the slide)

• Expected cost of D0:
  – each probe hits a good object with probability … (exploration)

• Expected cost of Di:
  – each probe hits a good object with probability … (exploitation)

• Total expected cost is the sum over the phases (reconstructed below)
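The probability values lost from the slide can be filled in with the standard argument, under assumptions consistent with the model slides: βm of the m objects are good, honest players follow the balanced rule, a satisfied player keeps probing its good object at no further cost, and a new phase starts each time an honest player first finds a good object. This is a hedged reconstruction, not the slide's exact formulas.

```latex
% Phase D_0: no honest player knows a good object yet. An honest probe
% explores with probability 1/2 and then hits a good object with
% probability \beta, so
%     E[\mathrm{cost}(D_0)] \le 2/\beta .
%
% Phase D_i (i >= 1): at least i honest players have posted a good object.
% An honest probe exploits with probability 1/2, picks one of those i
% players with probability at least i/n, and any object they posted is
% good, so
%     E[\mathrm{cost}(D_i)] \le 2n/i .
%
% With at most n phases (one per honest player), the total is
\[
  E[\mathrm{cost}(\sigma)]
    \;\le\; \frac{2}{\beta} \;+\; \sum_{i=1}^{n} \frac{2n}{i}
    \;=\; O\!\Bigl(\frac{1}{\beta} + n \log n\Bigr)
    \;=\; O(m + n \log n),
\]
% using \beta \ge 1/m whenever at least one good object exists.
```

These are exactly the per-switch bounds quoted later on the competitive-analysis slide.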


Slide 12

Understanding that number

• Finding a good object: the 1/β exploration term

• Spreading the good news: the n log n exploitation term

• Probes by a player to learn the good news: roughly log n each

This is just a log factor away from optimal (lower bounds later)


Slide 13

Different models

• Dynamic object model
  – objects come and go over time

• Partial access model
  – players access overlapping subsets of the objects

• Taste model
  – players have different notions of good and bad

• In each model, the analysis has a similar feel…


Slide 14

Dynamic object model

• Objects come and go with each step:
  – first the objects change (changes are announced to all)
  – then some player takes a step

• Algorithm for player p (sketched in code below):

    if p has found a good object o
      then p probes o
      else p follows the balanced rule
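A minimal Python sketch of one step of this dynamic-model rule; the state dictionary, the probe interface, and the balanced_rule callable (any implementation of the balanced rule from Slide 10) are assumptions made for illustration.

```python
import random

def dynamic_step(player_state, live_objects, probe, balanced_rule, rng=random):
    """One step for player p in the dynamic object model (a sketch).

    player_state:  dict with key 'good' holding the good object p already
                   found, or None; since object changes are announced, the
                   caller clears 'good' when that object disappears.
    live_objects:  the current collection of objects (after this step's changes).
    probe:         callable(obj) -> bool, True iff obj is good.
    balanced_rule: callable(live_objects, probe, rng) -> (obj, is_good),
                   implementing the exploration/exploitation coin flip.
    """
    known_good = player_state.get('good')
    if known_good is not None and known_good in live_objects:
        # p already knows a good, still-live object: keep probing it.
        return known_good, probe(known_good)
    obj, is_good = balanced_rule(live_objects, probe, rng)
    if is_good:
        player_state['good'] = obj   # remember it for future steps
    return obj, is_good
```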


Slide 15

Competitive analysis

• The optimal probe sequence opt has a simple form:

    opt = a a a x x x c c c c d d d e e e …
    (segments are labeled “good object” and “no good objects” on the slide)

• Optimal cost depends on the environment:
  – player and object schedules, honest players, good objects
  – adversary → powerful, adaptive, and Byzantine!

    switches(opt) = number of distinct objects in opt

• Our probe sequence σ, partitioned by the switches of opt:

    σ = o o o o o o o o o o o o o o o o …


Slide 16

Competitive analysis

• Compare the cost of σ and opt switch-by-switch

    cost(σ) ≤ cost(opt) + switches(opt) · (cost to satisfy)
            ≤ cost(opt) + switches(opt) · (1/β + n log n)
            ≤ cost(opt) + switches(opt) · (m + n log n)

• Lower bound (Yao’s Lemma):

    cost(σ) ≥ cost(opt) + switches(opt) · Ω(m)


Slide 17

Partial access model

• Each player can access only some objects

  [Figure: players p, q, r are each connected to a subset of the objects w, x, a, y, b, z; the legend marks good and bad objects.]

• Common interest is essential for help from neighbors
  – q gets help from p, but not from r
  – r gets no useful help at all

• Hard to measure help from neighbors accurately
  – we bound the work of a set of players P with a common interest O


Slide 18

Partial access algorithm

• Similar algorithm:
  – choose randomly from neighbors, not from all players

• Similar analysis:

    total work = (work until one player in P finds an object in O)   [exploration]
               + (work until all players in P learn of it)           [exploitation]
               ≪ m + n log n

• Good analysis in this model is very difficult:
  – we know common interest is essential for others to help
  – sometimes others don’t help (players might as well sample), even if each player has common interest with many players


Slide 19

Simulation

  [Plot: maximum, average, and minimum number of steps (0–120) as a function of the exploration probability. Setup: 10,000 players, 50% honest; 10,000 objects, 1% good; 1,000 objects per player.]

• The balanced rule tolerates coin bias quite well
• Can we optimize the rule by changing the bias?
  – many good objects → emphasize exploration
  – many honest players → emphasize exploitation
  (a simulation sketch in code follows)
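A compact simulation in the spirit of the experiment above. The player/object counts mirror the slide, but the behavior of dishonest players (each posts one arbitrary, likely bad, object), the full-access assumption (the slide's 1,000 objects per player is omitted), and the stopping rule are simplifying assumptions.

```python
import random

def simulate(n=10_000, m=10_000, honest_frac=0.5, good_frac=0.01,
             explore_prob=0.5, seed=0):
    """Rounds until every honest player has found a good object (a sketch)."""
    rng = random.Random(seed)
    good = set(rng.sample(range(m), int(good_frac * m)))
    honest = set(range(int(honest_frac * n)))        # players 0..h-1 are honest
    billboard = {p: set() for p in range(n)}         # objects posted as "good"
    for p in range(n):                               # dishonest players spam the board
        if p not in honest:
            billboard[p].add(rng.randrange(m))       # an arbitrary (likely bad) object
    satisfied = {}                                   # honest player -> good object found
    rounds = 0
    while len(satisfied) < len(honest):
        rounds += 1
        for p in honest:
            if p in satisfied:
                continue
            if rng.random() < explore_prob:
                obj = rng.randrange(m)               # exploration
            else:
                liked = billboard[rng.randrange(n)]  # exploitation
                obj = rng.choice(sorted(liked)) if liked else rng.randrange(m)
            if obj in good:
                satisfied[p] = obj
                billboard[p].add(obj)
    return rounds

if __name__ == "__main__":
    for bias in (0.1, 0.5, 0.9):
        print(bias, simulate(explore_prob=bias))
```

Sweeping `explore_prob` this way is one simple way to reproduce the qualitative shape of the curve sketched on the slide.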


Slide 20

Differing tastes model

• Player tastes differ: good(player) ⊆ objects
• Player tastes overlap: good(p1) ∩ good(p2) ≠ ∅

• Special interest group S = (P, O):
  “anything in O could satisfy anyone in P”
    O ⊆ good(p) ∀ p ∈ P

• Players don’t know which SIG they are in.
• Players follow the balanced rule.


Slide 21

Complexity

• Theorem: Given a SIG (P, O), the expected total work to satisfy everyone in P is …

• Pretty good: within a log factor of optimal when |P| = O(n)


Slide 22

A very popular model

• Most algorithms are off-line: prepare in advance
  – Assume an underlying object-set structure: the web
    • PageRank [Google], HITS [Kleinberg]
  – Assume an underlying stochastic model: analyze past choices
    • [KRRT’98], [KS’03], [AFKMS’01]
  – Heuristically identify similar users/objects
    • GroupLens Project

• Drineas, Kerenidis, Raghavan: first on-line algorithm
  – Algorithm recommends, observes user response


Slide 23

DKR’02

• Assumes:
  – User preferences are given by types (up to noise)
  – O(1) dominant types cover most users
    • Must know the number of dominant types
  – Type orthogonality:
    • No two types like the same object
  – Type gap:
    • The dominant types are by far the most popular

• The algorithm:
  – Uses SVD to compute the types and learn each user’s type
  – Runs in time O(mn)


Slide 24

The Balanced Rule

• No types: SIGs are a much looser characterization
• No type orthogonality: SIGs can overlap; even “subtyping” is allowed
• No type gap: SIG popularity is irrelevant
• A simple distributed algorithm
• Tolerates malicious players
• Shares work evenly if players run at similar rates
• Fast: runs in time O(m + n log n)


Slide 25

A look at individual work

• Consider a synchronous, round-based model
  – Each player probes an object each round until satisfied
  – One model of players running at similar speeds
  – A good model for studying individual work

• Theorem: lower bounds …

• Theorem: the balanced rule halts in … rounds


Slide 26

How is a constant number of probes per player even possible? (honest players, 1 good object)

• √n + 1 objects, n honest probes → the good object gets about √n votes
• n objects, n honest probes → the good object gets about one vote

[Diagram: distill a shrinking candidate set that always contains the good object — all objects, then all objects with at least one vote, then all objects with at least … votes — probing a little at each stage; once only a couple of bad objects are expected to remain, probe them all.]


Slide 27

Candidate sets work well

• Theorem: “distilling candidate sets” is faster:
  – a constant (1/…) number of rounds if … players are honest
  – even with many dishonest players, the expected number of rounds is only …

• Remember: the balanced rule was …


Slide 28

Conclusions

• Contributions
  – a new model for the study of reputation
  – simple algorithms for collaboration in spite of
    • asynchronous behavior
    • changing objects, differing access, differing tastes
    • arbitrarily malicious behavior from players

• Our work generalizes to multiple values
  – players want a “good enough” object
    • reduces to our binary values, even for multiple thresholds
  – players want “the best” or “in the top 10%”
    • early stopping is no longer possible


Slide 29

Future work

• Better lower bounds, better analysis…
• We never used
  – the identities of the malicious players
  – the negative votes on the billboard
  – the number of good objects or honest players
• Can we get rid of the log factors if we do?

• What if the supply of each object is finite?
• What if objects have prices? And the prices affect whether a player wants the object?


Slide 30

Many interesting problems remain…

But now, an application to Intranet search…


Slide 31

xSearch: Improving Intranet search

• Internet search engines are wonderful
• Intranet search is disappointing

• Some simple ideas to improve intranet search
  – Heuristic for deducing user recommendations
  – Algorithms for reordering search results

• Implemented on the @HP search engine
  – Getting 10% of the queries
  – Collecting data on performance
  – Results are preliminary, but look promising…


Slide 32

Who is “lampman”?

@hp says:


Slide 33

Who is “lampman”?

We say:


Slide 34

What is going on?

• Searching corporate webs should be so easy

• Google is phenomenal on the external web
• Why not just run Google on the internal web?

• Because it doesn’t work, and not just at HP…


Slide 35

IBM vs Google et al (2003)

• Identified the top 200 and the median 150 queries
• Found the “right” answer to each query by hand
• Ran Google over the IBM internal web
• Now how does Google do on these queries?
  – Popular queries: only 57% succeed
  – Median queries: only 46% succeed
  – Weak notion of success: “right answer in the top 20 results”

• Compare that to your normal Google experience!
  – “Right answer in the top 5 results 85% of the time”


Slide 36

Link structure is crucial

• Internet search algorithms depend on link structure:
  – Good pages point to great pages, etc.
  – Important pages are at the “center of the web”

• Link structure has a strongly connected component:

  [Bow-tie diagram: a strongly connected component with incoming pages, outgoing pages, and other reachable pages. The strongly connected component is about 30% of the Internet but only 10% of IBM’s intranet.]

  The meaty part of the intranet is just a third that of the Internet!


Slide 37

Corporate webs are different

• More structure
  – Small groups build large parts of the web (silos)
  – Periodically reviewed and branded

• No outgoing links from many pages
  – Documents intended to be informative, not “interesting”
  – Pages generated automatically from databases

• No reward for creating/updating/advertising pages
  – Do you advertise your project page?
  – Do you advertise your track club page?


Slide 38

Corporate queries are different

• Queries are focused and have few “right” answers

• What is “vacation” looking for?
  – Inside HP:
    • what the holidays are and how to report vacation time
    • one or two pages on the HR web
  – Outside HP:
    • interesting vacation spots or vacation deals
    • any number of answer pages would be satisfactory


Slide 39

What does @hp do now?

• Based on the Verity Ultraseek search engine
  – Classical information retrieval on steroids
  – Many tunable parameters and knobs

• Manually constructs “best bets” for popular queries
• Works as well as any other intranet search engine


Slide 40

Our approach: user collaboration

  [Diagram: our reordering heuristics sit between the @hp user interface and the @hp search engine; the query and the hits pass through them (query → query′, hits → hits′), so the heuristics can reorder the engine’s results.]

• How to collect user feedback?
• How to use user feedback?


Slide 41

Collecting feedback: last clicks

• The “last click” heuristic:
  – Assume the last link the user clicks is the right link
  – Assume users stop hunting when they find the right page

• Easy, unobtrusive implementation (sketched below):
  – Tag each query by a user with a session id
  – Log each link clicked on with the session id
  – Periodically scan the log to compute the “last clicks”

• Effective heuristic:
  – Agreement: 44% of last clicks go to the same page
  – Quality: 72% of last-clicked pages are good pages
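A minimal sketch of the log-scanning step just described; the log record format (session id, timestamp, query, clicked URL) is an assumption made for illustration, not the @hp logging format.

```python
from collections import defaultdict

def last_clicks(log_records):
    """Compute last-click counts per query from a click log (a sketch).

    log_records: iterable of (session_id, timestamp, query, clicked_url)
                 tuples, one per click, in any order.
    Returns: dict query -> dict url -> number of sessions whose final
             click for that query landed on that url.
    """
    # Keep only the latest click in each (session, query) pair.
    latest = {}
    for session_id, ts, query, url in log_records:
        key = (session_id, query)
        if key not in latest or ts > latest[key][0]:
            latest[key] = (ts, url)
    # Tally the last-clicked URL per query.
    counts = defaultdict(lambda: defaultdict(int))
    for (session_id, query), (ts, url) in latest.items():
        counts[query][url] += 1
    return {q: dict(urls) for q, urls in counts.items()}
```

Sorting the counts for a single query by value would reproduce a table like the “mis” example on the next slide.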


Slide 42

Last clicks example: “mis”

  455  http://persweb.corp.hp.com/comp/employee/program/tr/sop/mis_guide.htm
   78  http://hrexchange.corp.hp.com/HR_News/newslink040204-sopenroll.htm
   13  http://hrexchange.corp.hp.com/HR_News/newslink100303-SOPenroll.htm
   10  http://dimpweb.dublin.hp.com/factsyseng/mis/
    5  http://hrexchange.corp.hp.com/HR_News/newslink121203-stock_holidayclosure.htm
    2  http://canews-stage.canada.hp.com/archives/april04/SOP/index.asp
    2  http://canews.canada.hp.com/archives/april04/SOP/index.asp
    1  http://hpcc886.corp.hp.com/comp/employee/program/tr/sop/stockcert.htm
    1  http://hpcc886.corp.hp.com/comp/employee/program/tr/sop/purhpshares.htm

“Mellon Investor Services” or “Manufacturing Information Systems”

Surprisingly, the @hp top result was never the last click.


Slide 43

Using feedback: ranking algorithms (sketched in code below)

• Statistical ordering
  – Interpret last clicks as recommendations
  – Rank by popularity (most recommended first, etc.)
  – Robust: highly stable in the presence of spam

• Move-to-front ordering
  – Each time a page is recommended (last-clicked), move it to the top of the list
  – Fast: once the good page is found, it moves to the top (tsunami)

• Probabilistic ordering
  – Choose pages one after another
  – Probability of selection = frequency of endorsement
  – Best of both worlds, and many theoretical results
  – Plus: all new pages have some chance of being seen!
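Minimal sketches of the three reorderings, assuming we already have last-click counts for the query (e.g. as produced by the hypothetical last_clicks sketch earlier); the tie-breaking and the +1 smoothing for unendorsed pages are illustrative choices, not the @hp implementation.

```python
import random

def statistical_order(hits, counts):
    """Most-recommended first; the engine order breaks ties (stable sort)."""
    return sorted(hits, key=lambda url: counts.get(url, 0), reverse=True)

def move_to_front(hits, recommended_url):
    """Move the newly recommended (last-clicked) page to the top of the list."""
    if recommended_url not in hits:
        return list(hits)
    return [recommended_url] + [url for url in hits if url != recommended_url]

def probabilistic_order(hits, counts, rng=random):
    """Draw pages one after another with probability proportional to
    endorsement frequency; unendorsed pages get weight 1, so every new
    page keeps some chance of being seen."""
    remaining = list(hits)
    ordered = []
    while remaining:
        weights = [counts.get(url, 0) + 1 for url in remaining]
        pick = rng.choices(remaining, weights=weights, k=1)[0]
        ordered.append(pick)
        remaining.remove(pick)
    return ordered
```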


Slide 44

An experimental implementation

• Working with @hp search (Anne Murray Allen):
  – Ricky Ralston
  – Chris Jackson
  – Richard Boucher

• Now part of the @hp search engine:
  – Getting 10% of the queries sent to @hp search
  – A few months of experience shows promising results…


Slide 45

Example: “paycheck” at @hp

[Screenshot: the top results include a movie!!, a notice from 1999, and three copies of slides on redeeming eawards.]


Slide 46

Example: “paycheck” with move2front

[Screenshot: “how to access” (was #6), “how to set up” (was #7), the May 2004 upgrade notice (was #41), and 401(k) catchup (almost no votes).]


Slide 47

Example: “active gold” at @hp

[Screenshot: the top results include a hydra migration page, a sports award, two pages that can’t be read, and a dead link.]


Slide 48

Example: “active gold” with statistical

[Screenshot: the home page is now at the top (was #8), with 10 of 12 votes.]


Slide 49

Example: “payroll” at @hp


Slide 50

Example: “payroll” with statistical

[Screenshot: US main payroll (was #6), payroll forms (was #17), the best bet (was #16), payroll phone numbers (was #33).]


Slide 51

Example: “payroll” with move2front

lots of churn among top hits


Slide 52

Moving in the right direction

• Many compelling instances of progress

• How effective are we in general?
  – Let’s track the position of the last-clicked link
  – We win when we move it up the page
  – We lose when we move it down
  (a code sketch of this metric follows)
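A sketch of the win/lose bookkeeping just described; the inputs (the engine's original ranking, our reordered ranking, and the last-clicked URL) are assumptions about what the logs provide.

```python
def win_or_lose(original_hits, reordered_hits, last_clicked_url):
    """Compare the last-clicked link's position before and after reordering.

    Returns ('win', k) if we moved it up k positions, ('lose', k) if we
    moved it down k positions, and ('tie', 0) otherwise (including when
    the link does not appear in one of the lists).
    """
    if last_clicked_url not in original_hits or last_clicked_url not in reordered_hits:
        return ('tie', 0)
    before = original_hits.index(last_clicked_url)
    after = reordered_hits.index(last_clicked_url)
    if after < before:
        return ('win', before - after)
    if after > before:
        return ('lose', after - before)
    return ('tie', 0)
```

Aggregating these results per query is what the two charts on the following slides summarize.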


Slide 53

Number of times we win

[Chart: percentage of queries (0%–35%) on which we win or lose, for the statistical (“stat”) and move-to-front (“mtf”) orderings; series: stat win, stat lose, mtf win, mtf lose; x-axis 1–61.]


Slide 54

Number of positions we move

[Chart: number of positions (0–10) the last-clicked link moves when we win or lose, for the statistical and move-to-front orderings; series: stat win, stat lose, mtf win, mtf lose; x-axis 1–61.]


Slide 55

Some improvements hidden

• Some effects of our improvements are masked
  – Manually constructed best-bet lists
  – Extensive fine-tuning of the Ultraseek engine

• Our hope, or claim:
  – We should be able to replace best bets
  – We are cheaper/easier than tuning Ultraseek

• Even now we improve by 5 places 20% of the time

• Hope for a real user study…


Slide 56

Conclusion

• Intranet search is notoriously hard
• User collaboration and feedback can help

• Intranet search is a small part of a larger vision:
  – How to use user feedback and collaboration for
    • searching unstructured data, e.g., books scanned at a9.com
    • building general-purpose recommendation systems
