The Vector Space Model
LBSC 796/CMSC828o
Session 3, February 9, 2004
Douglas W. Oard
Agenda
• Thinking about search
  – Design strategies
  – Decomposing the search component
• Boolean “free text” retrieval
  – The “bag of terms” representation
  – Proximity operators
• Ranked retrieval
  – Vector space model
  – Passage retrieval
Supporting the Search Process
[Process diagram: the search process flows from source selection through query formulation, search, selection from a ranked list, examination of documents, and document delivery; the IR system supports search through acquisition of a collection and indexing to build an index.]
Design Strategies
• Foster human-machine synergy
  – Exploit complementary strengths
  – Accommodate shared weaknesses
• Divide-and-conquer
  – Divide the task into stages with well-defined interfaces
  – Continue dividing until problems are easily solved
• Co-design related components
  – Iterative process of joint optimization
Human-Machine Synergy
• Machines are good at:
  – Doing simple things accurately and quickly
  – Scaling to larger collections in sublinear time
• People are better at:
  – Accurately recognizing what they are looking for
  – Evaluating intangibles such as “quality”
• Both are pretty bad at:
  – Mapping consistently between words and concepts
Divide and Conquer
• Strategy: use encapsulation to limit complexity
• Approach:
  – Define interfaces (input and output) for each component
    • Query interface: input terms, output representation
  – Define the functions performed by each component
    • Remove common words, weight rare terms higher, …
  – Repeat the process within components as needed
• Result: a hierarchical decomposition
Search Goal
• Choose the same documents a human would
  – Without human intervention (less work)
  – Faster than a human could (less time)
  – As accurately as possible (but with less accuracy)
• Humans start with an information need
  – Machines start with a query
• Humans match documents to information needs
  – Machines match document and query representations
Search Component Model
[Component diagram: on the query-processing side, an information need passes through query formulation (guided by human judgment) to produce a query, which a representation function turns into a query representation; on the document-processing side, a representation function turns each document into a document representation. A comparison function maps the two representations to a retrieval status value, the system’s stand-in for utility.]
Relevance
• Relevance relates a topic and a document
  – Duplicates are equally relevant, by definition
  – Constant over time and across users
• Pertinence relates a task and a document
  – Accounts for quality, complexity, language, …
• Utility relates a user and a document
  – Accounts for prior knowledge
• We seek utility, but relevance is what we get!
“Bag of Terms” Representation
• Bag = a “set” that can contain duplicates
  “The quick brown fox jumped over the lazy dog’s back”
  → {back, brown, dog, fox, jump, lazy, over, quick, the, the}
• Vector = bag values recorded in any consistent order
  {back, brown, dog, fox, jump, lazy, over, quick, the, the}
  → [1 1 1 1 1 1 1 1 2]
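As a concrete illustration, here is a minimal Python sketch of building a bag of terms and a count vector; the whitespace tokenizer and the crude suffix-stripping stand-in for stemming are illustrative choices, not the course’s actual preprocessing.

    from collections import Counter

    def bag_of_terms(text):
        """Tokenize naively and count each term (a multiset of terms)."""
        tokens = [t.lower().strip(".,").removesuffix("'s") for t in text.split()]
        # Crude stand-in for stemming: reduce "jumped" to "jump".
        tokens = [t[:-2] if t.endswith("ed") else t for t in tokens]
        return Counter(tokens)

    bag = bag_of_terms("The quick brown fox jumped over the lazy dog's back")
    vocabulary = sorted(bag)                 # any consistent order works
    print(vocabulary)
    print([bag[t] for t in vocabulary])      # "the" gets a count of 2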
Bag of Terms Example
Document 1: The quick brown fox jumped over the lazy dog’s back.
Document 2: Now is the time for all good men to come to the aid of their party.

Term       Doc 1   Doc 2
the*         1       1
quick        1       0
brown        1       0
fox          1       0
over         1       0
lazy         1       0
dog          1       0
back         1       0
now          0       1
is*          0       1
time         0       1
for*         0       1
all          0       1
good         0       1
men          0       1
to*          0       1
come         0       1
jump         1       0
aid          0       1
of*          0       1
their        0       1
party        0       1

* Stopword list: the, is, for, to, of (these drop out of the vocabulary used in later examples)
Boolean “Free Text” Retrieval
• Limit the bag of words to “absent” and “present”
  – “Boolean” values, represented as 0 and 1
• Represent terms as a “bag of documents”
  – Same representation, but rows rather than columns
• Combine the rows using “Boolean operators”
  – AND, OR, NOT
• Result set: every document with a 1 remaining
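A minimal sketch of Boolean retrieval over an inverted index in Python: each term maps to the set of documents that contain it, and AND, OR, and NOT become set intersection, union, and difference. The postings below are abridged to match the worked example that follows.

    # Inverted index: term -> set of documents containing it (abridged).
    index = {
        "dog":   {3, 5},
        "fox":   {3, 5, 7},
        "good":  {6, 8},
        "party": {6, 8},
        "over":  {8},
    }

    def docs(term):
        return index.get(term, set())

    print(docs("dog") & docs("fox"))                      # dog AND fox -> {3, 5}
    print(docs("dog") | docs("fox"))                      # dog OR fox  -> {3, 5, 7}
    print(docs("fox") - docs("dog"))                      # fox NOT dog -> {7}
    print((docs("good") & docs("party")) - docs("over"))  # good AND party NOT over -> {6}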
Boolean Operators
A OR B:
        B=0  B=1
  A=0    0    1
  A=1    1    1

A AND B:
        B=0  B=1
  A=0    0    0
  A=1    0    1

A NOT B (= A AND NOT B):
        B=0  B=1
  A=0    0    0
  A=1    1    0

NOT B:
  B=0 → 1
  B=1 → 0
Boolean Free Text Example
Eight documents indexed over the 17-term vocabulary from the earlier example (quick, brown, fox, over, lazy, dog, back, now, time, all, good, men, come, jump, aid, their, party). [The 0/1 entries of the term-document matrix did not survive extraction; the query results below reflect them.]

• dog AND fox – Doc 3, Doc 5
• dog NOT fox – empty
• fox NOT dog – Doc 7
• dog OR fox – Doc 3, Doc 5, Doc 7
• good AND party – Doc 6, Doc 8
• good AND party NOT over – Doc 6
Why Boolean Retrieval Works
• Boolean operators approximate natural language
  – Find documents about a good party that is not over
• AND can discover relationships between concepts
  – good party
• OR can discover alternate terminology
  – excellent party
• NOT can discover alternate meanings
  – Democratic party
The Perfect Query Paradox
• Every information need has a perfect document set
  – If not, there would be no point in doing retrieval
• Almost every document set has a perfect query
  – AND every word in a document to get a query for that document
  – Repeat for each document in the set
  – OR those document queries together to get the set query
• But users find Boolean query formulation hard
  – They get too much, too little, useless stuff, …
Why Boolean Retrieval Fails
• Natural language is way more complex
  – She saw the man on the hill with a telescope
• AND “discovers” nonexistent relationships
  – Terms in different paragraphs, chapters, …
• Guessing terminology for OR is hard
  – good, nice, excellent, outstanding, awesome, …
• Guessing terms to exclude is even harder!
  – Democratic party, party to a lawsuit, …
Proximity Operators
• More precise versions of AND
  – “NEAR n” allows at most n-1 intervening terms
  – “WITH” requires terms to be adjacent and in order
• Easy to implement, but less efficient
  – Store a list of positions for each word in each doc
  – Perform normal Boolean computations
    • Treat WITH and NEAR like AND with an extra constraint
• Stopwords become very important!
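A minimal Python sketch of NEAR and WITH over a positional index, assuming each term maps to its list of word positions in a document (positions count every word, stopwords included):

    def near(pos_a, pos_b, n):
        """NEAR n: occurrences within n positions of each other
        (at most n-1 intervening terms), in either order."""
        return any(abs(i - j) <= n for i in pos_a for j in pos_b)

    def adjacent(pos_a, pos_b):
        """WITH: terms adjacent and in order (b immediately follows a)."""
        return any(j == i + 1 for i in pos_a for j in pos_b)

    # Positions in Document 1: "The quick brown fox jumped over the lazy dog's back"
    doc1 = {"quick": [2], "brown": [3], "fox": [4], "jump": [5],
            "over": [6], "lazy": [8], "dog": [9], "back": [10]}

    print(near(doc1["quick"], doc1["fox"], 2))    # True: one intervening term
    print(adjacent(doc1["quick"], doc1["fox"]))   # False: "brown" intervenes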
Proximity Operator Example
• time AND come – Doc 2
• time (NEAR 2) come – empty
• quick (NEAR 2) fox – Doc 1
• quick WITH fox – empty

Term       Doc 1    Doc 2
quick      1 (2)    0
brown      1 (3)    0
fox        1 (4)    0
over       1 (6)    0
lazy       1 (8)    0
dog        1 (9)    0
back       1 (10)   0
now        0        1 (1)
time       0        1 (4)
all        0        1 (6)
good       0        1 (7)
men        0        1 (8)
come       0        1 (10)
jump       1 (5)    0
aid        0        1 (13)
their      0        1 (15)
party      0        1 (16)

(Parenthesized numbers are word positions within the document, counting stopwords.)
Strengths and Weaknesses
• Strong points
  – Accurate, if you know the right strategies
  – Efficient for the computer
• Weaknesses
  – Often results in too many documents, or none
  – Users must learn Boolean logic
  – Sometimes finds relationships that don’t exist
  – Words can have many meanings
  – Choosing the right words is sometimes hard
Ranked Retrieval Paradigm
• Exact match retrieval often gives useless sets
  – No documents at all, or way too many documents
• Query reformulation is one “solution”
  – Manually add or delete query terms
• “Best-first” ranking can be superior
  – Select every document within reason
  – Put them in order, with the “best” ones first
  – Display them one screen at a time
Advantages of Ranked Retrieval
• Closer to the way people think
  – Some documents are better than others
• Enriches browsing behavior
  – Decide how far down the list to go as you read it
• Allows more flexible queries
  – Long and short queries can produce useful results
Ranked Retrieval Challenges
• “Best first” is easy to say but hard to do!
  – The best we can hope for is to approximate it
• Will the user understand the process?
  – It is hard to use a tool that you don’t understand
• Efficiency becomes a concern
  – Only a problem for long queries, though
Partial-Match Ranking
• Form several result sets from one long query
  – Query for the first set is the AND of all the terms
  – Then all but the 1st term, all but the 2nd, …
  – Then all but the first two terms, …
  – And so on until each single-term query is tried
• Remove duplicates from subsequent sets
• Display the sets in the order they were made
  – Document rank within a set is arbitrary (a code sketch follows the example below)
Partial Match Example

information AND retrieval:
  Readings in Information Retrieval
  Information Storage and Retrieval
  Speech-Based Information Retrieval for Digital Libraries
  Word Sense Disambiguation and Information Retrieval

information NOT retrieval:
  The State of the Art in Information Filtering

retrieval NOT information:
  Inference Networks for Document Retrieval
  Content-Based Image Retrieval Systems
  Video Parsing, Retrieval and Browsing
  An Approach to Conceptual Text Retrieval Using the EuroWordNet …
  Cross-Language Retrieval: English/Russian/French
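A minimal Python sketch of the partial-match scheme described above: form the AND of all query terms, then of every smaller subset, and concatenate the result sets with duplicates removed (rank within a set is arbitrary; the toy index is mine):

    from itertools import combinations

    def partial_match_rank(query_terms, index):
        """Rank documents by the largest query-term subset they fully match."""
        ranked, seen = [], set()
        # All terms first, then every subset one term smaller, and so on.
        for size in range(len(query_terms), 0, -1):
            for subset in combinations(query_terms, size):
                result = set.intersection(*(index.get(t, set()) for t in subset))
                for doc in sorted(result - seen):   # drop duplicates
                    ranked.append(doc)
                    seen.add(doc)
        return ranked

    index = {"information": {1, 2, 5}, "retrieval": {1, 3, 4}}
    print(partial_match_rank(["information", "retrieval"], index))
    # [1, 2, 5, 3, 4]: doc 1 matches both terms, the others match one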
Similarity-Based Queries
• Treat the query as if it were a document
  – Create a query bag-of-words
• Compute the similarity of each document to the query
  – Using the coordination measure, for example
• Rank order the documents by similarity
  – Most similar to the query first
• Surprisingly, this works pretty well!
  – Especially for very short queries
Document Similarity
• How similar are two documents?
  – In particular, how similar are their bags of words?

Document 1: Nuclear fallout contaminated Montana.
Document 2: Information retrieval is interesting.
Document 3: Information retrieval is complicated.

Term            1   2   3
nuclear         1   0   0
fallout         1   0   0
siberia         0   0   0
contaminated    1   0   0
interesting     0   1   0
complicated     0   0   1
information     0   1   1
retrieval       0   1   1
The Coordination Measure
• Count the number of terms in common
  – Based on the Boolean bag-of-words
• Documents 2 and 3 share two common terms
  – But documents 1 and 2 share no terms at all
• Useful for “more like this” queries
  – “more like doc 2” would rank doc 3 ahead of doc 1
• Where have you seen this before?
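A minimal Python sketch of the coordination measure, scoring each document by how many query terms it contains (document term sets taken from the example above):

    docs = {
        1: {"nuclear", "fallout", "contaminated", "montana"},
        2: {"information", "retrieval", "is", "interesting"},
        3: {"information", "retrieval", "is", "complicated"},
    }

    def coordination(query_terms, doc_terms):
        """Count the terms the query and the document share."""
        return len(set(query_terms) & doc_terms)

    def rank(query_terms):
        return sorted(docs, key=lambda d: coordination(query_terms, docs[d]),
                      reverse=True)

    print(rank(["complicated", "retrieval"]))           # doc 3 ahead of doc 2
    print(rank(["interesting", "nuclear", "fallout"]))  # doc 1 ahead of doc 2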
Coordination Measure Example
(Term-document matrix as in the Document Similarity slide above.)

Query: complicated retrieval → Result: 3, 2
Query: information retrieval → Result: 2, 3
Query: interesting nuclear fallout → Result: 1, 2
Counting Terms
• Terms tell us about documents
  – If “rabbit” appears a lot, the document may be about rabbits
• Documents tell us about terms
  – “the” is in every document, so it is not discriminating
• Documents are most likely described well by rare terms that occur in them frequently
  – Higher “term frequency” is stronger evidence
  – Low “collection frequency” makes it stronger still
The Document Length Effect
• Humans look for documents with useful parts
  – But probabilities are computed for the whole document
• Document lengths vary in many collections
  – So probability calculations could be inconsistent
• Two strategies
  – Adjust probability estimates for document length
  – Divide the documents into equal-length “passages”
Incorporating Term Frequency
• High term frequency is evidence of meaning
  – And high IDF is evidence of term importance
• Recompute the bag-of-words
  – Compute TF × IDF for every element

Let $N$ be the total number of documents, let $n_i$ be the number of documents that contain term $i$, and let $\mathrm{tf}_{i,j}$ be the number of times term $i$ appears in document $j$. Then

$$w_{i,j} = \mathrm{tf}_{i,j} \cdot \log\frac{N}{n_i}$$
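A minimal Python sketch of this weight (base-10 logarithm, which matches the IDF values in the example that follows):

    import math

    def tf_idf(tf, n_i, n):
        """w(i,j) = tf(i,j) * log10(N / n_i)."""
        return tf * math.log10(n / n_i)

    # A term appearing 5 times in a document and found in 2 of 4 documents:
    print(tf_idf(5, 2, 4))    # 5 * log10(2) = 5 * 0.301... ~ 1.51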
Weighted Matching Schemes
• Unweighted queries
  – Add up the document weights for every matching term
• User-specified query term weights
  – For each term, multiply the query and document weights
  – Then add up those products
• Automatically computed query term weights
  – Most queries lack useful TF, but IDF may be useful
  – Used just like user-specified query term weights
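A minimal Python sketch of these matching schemes as an inner product between query and document weight vectors; an unweighted query is simply a query whose weights are all 1 (the document weights here are illustrative):

    def score(query_weights, doc_weights):
        """Inner product: multiply matching weights, then add."""
        return sum(qw * doc_weights.get(term, 0.0)
                   for term, qw in query_weights.items())

    doc = {"contaminated": 0.13, "retrieval": 1.20}

    print(score({"contaminated": 1, "retrieval": 1}, doc))   # unweighted query
    print(score({"contaminated": 3, "retrieval": 1}, doc))   # contaminated(3) retrieval(1)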
TF*IDF Example
[Worked example over four documents: raw term frequencies $\mathrm{tf}_{i,j}$ for the eight-term vocabulary (nuclear, fallout, siberia, contaminated, interesting, complicated, information, retrieval), per-term IDF values $\mathrm{idf}_i$ of 0.301, 0.125, 0.602, or 0.000 for terms appearing in 2, 3, 1, or all 4 documents, and the resulting TF×IDF weights $w_{i,j}$. The individual cell values did not survive extraction.]

• Unweighted query: contaminated retrieval → Result: 2, 3, 1, 4
• Weighted query: contaminated(3) retrieval(1) → Result: 1, 3, 2, 4
• IDF-weighted query: contaminated retrieval → Result: 2, 3, 1, 4
Document Length Normalization
• Long documents have an unfair advantage
  – They use a lot of terms
    • So they get more matches than short documents
  – And they use the same words repeatedly
    • So they have much higher term frequencies
• Normalization seeks to remove these effects
  – Related somehow to maximum term frequency
  – But also sensitive to the number of terms
“Cosine” Normalization
• Compute the length of each document vector
  – Multiply each weight by itself
  – Add all the resulting values
  – Take the square root of that sum
• Divide each weight by that length

Let $w_{i,j}$ be the unnormalized weight of term $i$ in document $j$, and let $w'_{i,j}$ be the normalized weight. Then

$$w'_{i,j} = \frac{w_{i,j}}{\sqrt{\sum_i w_{i,j}^2}}$$
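A minimal Python sketch of cosine (length) normalization for one document’s weight vector (the weights are illustrative):

    import math

    def cosine_normalize(weights):
        """Divide each weight by the Euclidean length of the vector."""
        length = math.sqrt(sum(w * w for w in weights.values()))
        return {term: w / length for term, w in weights.items()}

    doc = {"contaminated": 0.13, "interesting": 0.60, "retrieval": 0.50}
    normalized = cosine_normalize(doc)
    print(normalized)
    print(sum(w * w for w in normalized.values()))   # 1.0: unit length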
Cosine Normalization Example
[Worked example: the TF×IDF weight table from the previous slide, each document’s vector length (1.70, 0.97, 2.67, and 0.87 for documents 1-4), and the resulting length-normalized weights. The individual cell values did not survive extraction.]

• Unweighted query: contaminated retrieval → Result: 2, 4, 1, 3 (compare to 2, 3, 1, 4 without normalization)
Why Call It “Cosine”?
[Figure: documents d1 and d2 drawn as two-dimensional vectors with coordinates $(w_{1,1}, w_{1,2})$ and $(w_{2,1}, w_{2,2})$, separated by an angle $\theta$.]

Let document 1 have unit length with coordinates $w_{1,1}$ and $w_{1,2}$, and let document 2 have unit length with coordinates $w_{2,1}$ and $w_{2,2}$. Then the inner product equals the cosine of the angle between the two vectors:

$$\cos\theta = w_{1,1}\,w_{2,1} + w_{1,2}\,w_{2,2}$$
Interpreting the Cosine Measure
• Think of a document as a vector from the origin
• Similarity is the angle between two vectors
  – Small angle = very similar
  – Large angle = little similarity
• Passes some key sanity checks
  – Depends on the pattern of word use but not on document length
  – Every document is most similar to itself
“Okapi” Term Weights
$$w_{i,j} = \frac{\mathrm{tf}_{i,j}}{0.5 + 1.5\,(L_j/\bar{L}) + \mathrm{tf}_{i,j}} \cdot \log\frac{N - \mathrm{df}_i + 0.5}{\mathrm{df}_i + 0.5}$$

where $L_j$ is the length of document $j$, $\bar{L}$ is the average document length, $N$ is the number of documents, and $\mathrm{df}_i$ is the number of documents containing term $i$.

[Plots: the Okapi TF component (0.0 to 1.0) saturates as raw TF grows from 0 to 25, shown for $L_j/\bar{L}$ values of 0.5, 1.0, and 2.0; the Okapi IDF component is compared with the classic IDF over raw DF from 0 to 25.]
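A minimal Python sketch of this Okapi weight (whether the logarithm is natural or base-10 is my assumption; the slide’s plots do not say):

    import math

    def okapi_weight(tf, doc_len, avg_len, df, n_docs):
        """Saturating TF component times a smoothed IDF component."""
        tf_part = tf / (0.5 + 1.5 * (doc_len / avg_len) + tf)
        idf_part = math.log((n_docs - df + 0.5) / (df + 0.5))
        return tf_part * idf_part

    # The TF component saturates: each extra occurrence adds less.
    for tf in (1, 2, 5, 10, 25):
        print(tf, round(okapi_weight(tf, 100, 100, df=10, n_docs=1000), 3))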
Passage Retrieval
• Another approach to the long-document problem
  – Break each document into coherent units
• Recognizing topic boundaries is hard
  – But overlapping 300-word passages work fine
• Document rank is the best passage rank
  – And passage information can help guide browsing
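A minimal Python sketch of overlapping fixed-width passages and best-passage document scoring; the slide specifies 300-word passages, while the 50% overlap and the toy scoring function are my choices:

    def passages(words, size=300, stride=150):
        """Overlapping fixed-width word windows over a document."""
        return [words[i:i + size] for i in range(0, len(words), stride)]

    def best_passage_score(words, score_fn):
        """Rank a document by its best-scoring passage."""
        return max(score_fn(p) for p in passages(words))

    query = {"nuclear", "fallout"}
    doc = ("nuclear fallout " * 3 + "filler " * 400).split()
    # Toy scoring: count query terms present in the passage.
    print(best_passage_score(doc, lambda p: len(query & set(p))))   # 2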
Summary
• Goal: find the documents most similar to the query
• Compute normalized document term weights
  – Some combination of TF, DF, and document length
• Optionally, get query term weights from the user
  – An estimate of term importance
• Compute the inner product of the query and document vectors
  – Multiply corresponding elements, then add
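Putting the pieces together, a minimal end-to-end Python sketch of vector-space ranking (TF×IDF weights, cosine normalization, inner-product scoring); the naive tokenizer and all names are mine:

    import math
    from collections import Counter

    def rank(query, texts):
        """Score documents against a query in the vector space model."""
        bags = {d: Counter(t.lower().split()) for d, t in texts.items()}
        n = len(bags)
        df = Counter(term for bag in bags.values() for term in bag)
        weights = {}
        for d, bag in bags.items():
            w = {t: tf * math.log10(n / df[t]) for t, tf in bag.items()}
            length = math.sqrt(sum(v * v for v in w.values())) or 1.0
            weights[d] = {t: v / length for t, v in w.items()}   # cosine-normalize
        scores = {d: sum(weights[d].get(t, 0.0) for t in query.lower().split())
                  for d in bags}
        return sorted(bags, key=scores.get, reverse=True)

    texts = {1: "nuclear fallout contaminated montana",
             2: "information retrieval is interesting",
             3: "information retrieval is complicated"}
    print(rank("nuclear fallout", texts))   # document 1 ranks first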
Before You Go!
On a sheet of paper, please briefly answer the following question (no names):
What was the muddiest point in today’s lecture?