Information Retrieval
CSE 8337 (Part E), Spring 2009
Some material for these slides obtained from:
Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto, http://www.sims.berkeley.edu/~hearst/irbook/
Data Mining Introductory and Advanced Topics by Margaret H. Dunham, http://www.engr.smu.edu/~mhd/book
Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schutze, http://informationretrieval.org
CSE 8337 Outline
• Introduction
• Simple Text Processing
• Boolean Queries
• Web Searching/Crawling
• Indexes
• Vector Space Model
• Matching
• Evaluation
• Feedback/Expansion
Query Operations Introduction
IR queries as stated by the user may not be precise or effective.
There are many techniques to improve a stated query and then process that query instead.
How can results be improved?
Options for improving results:
Local methods: personalization, relevance feedback, pseudo relevance feedback
Query expansion: local analysis, thesauri, automatic thesaurus generation
Query assist
Relevance Feedback
Relevance feedback: user feedback on the relevance of docs in the initial set of results.
The user issues a (short, simple) query.
The user marks some results as relevant or non-relevant.
The system computes a better representation of the information need based on the feedback.
Relevance feedback can go through one or more iterations.
Idea: it may be difficult to formulate a good query when you don't know the collection well, so iterate.
Relevance Feedback
After initial retrieval results are presented, allow the user to provide feedback on the relevance of one or more of the retrieved documents.
Use this feedback information to reformulate the query.
Produce new results based on the reformulated query.
Allows a more interactive, multi-pass process.
Relevance feedback
We will use ad hoc retrieval to refer to regular retrieval without relevance feedback.
We now look at four examples of relevance feedback that highlight different aspects.
Similar pages
Relevance Feedback Example: Image search engine
Results for Initial Query
Relevance Feedback
Results after Relevance Feedback
Initial query/results
Initial query: new space satellite applications
1. 0.539, 08/13/91, NASA Hasn't Scrapped Imaging Spectrometer
2. 0.533, 07/09/91, NASA Scratches Environment Gear From Satellite Plan
3. 0.528, 04/04/90, Science Panel Backs NASA Satellite Plan, But Urges Launches of Smaller Probes
4. 0.526, 09/09/91, A NASA Satellite Project Accomplishes Incredible Feat: Staying Within Budget
5. 0.525, 07/24/90, Scientist Who Exposed Global Warming Proposes Satellites for Climate Research
6. 0.524, 08/22/90, Report Provides Support for the Critics Of Using Big Satellites to Study Climate
7. 0.516, 04/13/87, Arianespace Receives Satellite Launch Pact From Telesat Canada
8. 0.509, 12/02/87, Telecommunications Tale of Two Companies
The user then marks relevant documents with "+".
Expanded query after relevance feedback
2.074 new, 15.106 space, 30.816 satellite, 5.660 application, 5.991 nasa, 5.196 eos, 4.196 launch, 3.972 aster, 3.516 instrument, 3.446 arianespace, 3.004 bundespost, 2.806 ss, 2.790 rocket, 2.053 scientist, 2.003 broadcast, 1.172 earth, 0.836 oil, 0.646 measure
Results for expanded query
1. 0.513, 07/09/91, NASA Scratches Environment Gear From Satellite Plan
2. 0.500, 08/13/91, NASA Hasn't Scrapped Imaging Spectrometer
3. 0.493, 08/07/89, When the Pentagon Launches a Secret Satellite, Space Sleuths Do Some Spy Work of Their Own
4. 0.493, 07/31/89, NASA Uses 'Warm' Superconductors For Fast Circuit
5. 0.492, 12/02/87, Telecommunications Tale of Two Companies
6. 0.491, 07/09/91, Soviets May Adapt Parts of SS-20 Missile For Commercial Use
7. 0.490, 07/12/88, Gaping Gap: Pentagon Lags in Race To Match the Soviets In Rocket Launchers
8. 0.490, 06/14/90, Rescue of Satellite By Space Agency To Cost $90 Million
Relevance Feedback
Use assessments by users as to the relevance of previously returned documents to create new (or modify old) queries.
Technique:
1. Increase the weights of terms from relevant documents.
2. Decrease the weights of terms from non-relevant documents.
Relevance Feedback Architecture
[Diagram] The query string goes through query reformulation to the IR system, which searches the document corpus and returns ranked documents (1. Doc1, 2. Doc2, 3. Doc3, ...). The user gives feedback on these documents; query reformulation uses the feedback to produce a revised query, and the IR system returns re-ranked documents (1. Doc2, 2. Doc4, 3. Doc5, ...).
Query Reformulation
Revise the query to account for feedback:
Query Expansion: add new terms to the query from relevant documents.
Term Reweighting: increase the weight of terms in relevant documents and decrease the weight of terms in irrelevant documents.
There are several algorithms for query reformulation.
Relevance Feedback in vector spaces
We can modify the query based on relevance feedback and apply the standard vector space model.
Use only the docs that were marked.
Relevance feedback can improve recall and precision.
Relevance feedback is most useful for increasing recall in situations where recall is important: users can be expected to review results and to take time to iterate.
The Theoretically Best Query
[Figure: a document space with relevant documents (o) and non-relevant documents (x); the optimal query points toward the cluster of relevant documents and away from the non-relevant ones.]
Query Reformulation for Vectors
Change the query vector using vector algebra.
Add the vectors for the relevant documents to the query vector.
Subtract the vectors for the irrelevant docs from the query vector.
This adds both positively and negatively weighted terms to the query, as well as reweighting the initial terms.
Optimal Query
Assume that the set of relevant documents C_r is known. Then the best query, which ranks all and only the relevant documents at the top, is:

\vec{q}_{opt} = \frac{1}{|C_r|} \sum_{\vec{d}_j \in C_r} \vec{d}_j - \frac{1}{N - |C_r|} \sum_{\vec{d}_j \notin C_r} \vec{d}_j

where N is the total number of documents.
Standard Rocchio Method
Since all relevant documents are unknown, just use the known relevant (D_r) and irrelevant (D_n) sets of documents and include the initial query q:

\vec{q}_m = \alpha \vec{q} + \frac{\beta}{|D_r|} \sum_{\vec{d}_j \in D_r} \vec{d}_j - \frac{\gamma}{|D_n|} \sum_{\vec{d}_j \in D_n} \vec{d}_j

α: tunable weight for the initial query.
β: tunable weight for relevant documents.
γ: tunable weight for irrelevant documents.
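As a concrete illustration, here is a minimal sketch of the Rocchio update in Python, assuming queries and documents are dense NumPy term-weight vectors over a shared vocabulary (the function name and default weights are illustrative, not from the slides):

```python
import numpy as np

def rocchio(q, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Return the reformulated query vector q_m."""
    q_m = alpha * q
    if len(relevant) > 0:
        q_m = q_m + beta * np.mean(relevant, axis=0)      # centroid of D_r
    if len(nonrelevant) > 0:
        q_m = q_m - gamma * np.mean(nonrelevant, axis=0)  # centroid of D_n
    return np.maximum(q_m, 0.0)  # negative weights are commonly clipped to 0
```

Clipping negative term weights to zero is a common practical choice rather than something the formula itself requires.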
Relevance feedback on initial query
[Figure: a document space with known relevant documents (o) and known non-relevant documents (x); the revised query moves away from the initial query, toward the known relevant documents.]
Positive vs. Negative Feedback
Positive feedback is more valuable than negative feedback (so set γ < β; e.g., γ = 0.25, β = 0.75).
Many systems only allow positive feedback (γ = 0).
Why?
Ide Regular Method
Since more feedback should perhaps increase the degree of reformulation, do not normalize for the amount of feedback:

\vec{q}_m = \alpha \vec{q} + \beta \sum_{\vec{d}_j \in D_r} \vec{d}_j - \gamma \sum_{\vec{d}_j \in D_n} \vec{d}_j

α: tunable weight for the initial query.
β: tunable weight for relevant documents.
γ: tunable weight for irrelevant documents.
Ide "Dec Hi" Method
Bias towards rejecting just the highest-ranked of the irrelevant documents:

\vec{q}_m = \alpha \vec{q} + \beta \sum_{\vec{d}_j \in D_r} \vec{d}_j - \gamma \max_{non\text{-}relevant}(\vec{d}_j)

α: tunable weight for the initial query.
β: tunable weight for relevant documents.
γ: tunable weight for the irrelevant document.
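A sketch of the "Dec Hi" variant under the same NumPy-vector assumptions as the Rocchio sketch above; note that it sums (rather than averages) the relevant vectors and subtracts only the single highest-ranked non-relevant document:

```python
import numpy as np

def ide_dec_hi(q, relevant, top_nonrelevant, alpha=1.0, beta=1.0, gamma=1.0):
    """top_nonrelevant: vector of the highest-ranked non-relevant document."""
    q_m = alpha * q + beta * np.sum(relevant, axis=0) - gamma * top_nonrelevant
    return np.maximum(q_m, 0.0)
```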
Comparison of Methods
Overall, experimental results indicate no clear preference for any one of the specific methods.
All methods generally improve retrieval performance (recall and precision) with feedback.
Generally, just let the tunable constants equal 1.
Relevance Feedback: Assumptions
A1: The user has sufficient knowledge for the initial query.
A2: Relevance prototypes are "well-behaved":
Term distribution in relevant documents will be similar.
Term distribution in non-relevant documents will be different from that in relevant documents.
Either all relevant documents are tightly clustered around a single prototype, or there are different prototypes but they have significant vocabulary overlap.
Similarities between relevant and irrelevant documents are small.
Violation of A1
The user does not have sufficient initial knowledge.
Examples:
Misspellings (Brittany Speers).
Cross-language information retrieval.
Mismatch of searcher's vocabulary vs. collection vocabulary (cosmonaut/astronaut).
Violation of A2
There are several relevance prototypes.
Examples:
Burma/Myanmar
Contradictory government policies
Pop stars that worked at Burger King
Often: instances of a general concept.
Good editorial content can address the problem, e.g., a report on contradictory government policies.
Relevance Feedback: Problems
Long queries are inefficient for a typical IR engine: long response times for the user, high cost for the retrieval system.
Partial solution: only reweight certain prominent terms, perhaps the top 20 by term frequency.
Users are often reluctant to provide explicit feedback.
It's often harder to understand why a particular document was retrieved after applying relevance feedback.
Why?
Evaluation of relevance feedback strategies
Use q_0 and compute a precision-recall graph; use q_m and compute a precision-recall graph.
Assess on all documents in the collection: spectacular improvements, but it's cheating! The gains are partly due to known relevant documents being ranked higher.
Must evaluate with respect to documents not seen by the user.
Use documents in the residual collection (the set of documents minus those assessed relevant): measures are usually lower than for the original query, but this is a more realistic evaluation, and relative performance can be validly compared.
Empirically, one round of relevance feedback is often very useful. Two rounds is sometimes marginally useful.
Evaluation of relevance feedback
Second method: assess only the docs not rated by the user in the first round.
Could make relevance feedback look worse than it really is.
Can still assess the relative performance of algorithms.
Most satisfactory: use two collections, each with its own relevance assessments; q_0 and the user feedback come from the first collection, and q_m is run on the second collection and measured.
Why is Feedback Not Widely Used?
Users are sometimes reluctant to provide explicit feedback.
It results in long queries that require more computation to retrieve, and search engines process lots of queries and allow little time for each one.
It makes it harder to understand why a particular document was retrieved.
Evaluation: Caveat
True evaluation of usefulness must compare to other methods taking the same amount of time.
Alternative to relevance feedback: the user revises and resubmits the query.
Users may prefer revision/resubmission to having to judge the relevance of documents.
There is no clear evidence that relevance feedback is the "best use" of the user's time.
Pseudo relevance feedback
Pseudo-relevance feedback automates the "manual" part of true relevance feedback.
Pseudo-relevance algorithm:
1. Retrieve a ranked list of hits for the user's query.
2. Assume that the top k documents are relevant.
3. Do relevance feedback (e.g., Rocchio).
Works very well on average, but can go horribly wrong for some queries.
Several iterations can cause query drift. Why?
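A minimal sketch of one round of pseudo-relevance feedback, reusing the rocchio function sketched earlier; search and doc_vector are assumed helpers (not from any particular library) that return the top-n document ids for a query vector and the term vector of a document:

```python
def pseudo_feedback(q, search, doc_vector, k=10):
    top_docs = search(q, k)                       # 1. initial retrieval
    relevant = [doc_vector(d) for d in top_docs]  # 2. assume top k are relevant
    return rocchio(q, relevant, [])               # 3. Rocchio, no negative set
```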
PseudoFeedback Results
Found to improve performance on the TREC ad hoc retrieval task.
Works even better if the top documents must also satisfy additional Boolean constraints in order to be used in feedback.
Relevance Feedback on the Web
Some search engines offer a similar/related pages feature (this is a trivial form of relevance feedback): Google, Altavista.
But some don't, because it's hard to explain to the average user: Yahoo.
Excite initially had true relevance feedback, but abandoned it due to lack of use.
α/β/γ ??
Excite Relevance Feedback (Spink et al. 2000)
Only about 4% of query sessions from a user used the relevance feedback option, expressed as a "More like this" link next to each result.
But about 70% of users only looked at the first page of results and didn't pursue things further, so 4% is about 1/8 of the people extending their search.
Relevance feedback improved results about 2/3 of the time.
Query Expansion
In relevance feedback, users give additional input (relevant/non-relevant) on documents, which is used to reweight terms in the documents.
In query expansion, users give additional input (good/bad search term) on words or phrases.
How do we augment the user query?
Manual thesaurus. E.g. MedLine: physician, syn: doc, doctor, MD, medico. Can be query rather than just synonyms.
Global Analysis (static; over all documents in the collection): automatically derived thesaurus (co-occurrence statistics); refinements based on query log mining (common on the web).
Local Analysis (dynamic): analysis of documents in the result set.
Local vs. Global Automatic Analysis
Local: the documents retrieved are examined to automatically determine query expansion. No relevance feedback needed.
Global: a thesaurus is used to help select terms for expansion.
Automatic Local Analysis
At query time, dynamically determine similar terms based on analysis of the top-ranked retrieved documents.
Base correlation analysis on only the "local" set of documents retrieved for a specific query.
Avoids ambiguity by determining similar (correlated) terms only within relevant documents.
"Apple computer" → "Apple computer Powerbook laptop"
Automatic Local Analysis
Expand the query with terms found in local clusters.
D_l: set of documents retrieved for query q.
V_l: set of words used in D_l.
S_l: set of distinct stems in V_l.
f_{s_i,j}: frequency of stem s_i in document d_j found in D_l.
Construct a stem-stem association matrix.
Association Matrix
The association matrix is an |S_l| × |S_l| matrix whose entry c_{ij} is the correlation factor between stem s_i and stem s_j:

c_{ij} = \sum_{d_k \in D_l} f_{ik} \, f_{jk}

where f_{ik} is the frequency of term i in document k.
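A small NumPy sketch of this construction, assuming freqs is an |S_l| × |D_l| array with freqs[i, k] = frequency of stem s_i in local document d_k (the names are illustrative):

```python
import numpy as np

def association_matrix(freqs):
    """c[i, j] = sum over local docs of f_ik * f_jk."""
    return freqs @ freqs.T
```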
Normalized Association Matrix
The frequency-based correlation factor favors more frequent terms, so normalize the association scores:

s_{ij} = \frac{c_{ij}}{c_{ii} + c_{jj} - c_{ij}}

The normalized score is 1 if two stems have the same frequency in all documents.
Metric Correlation Matrix
Association correlation does not account for the proximity of terms in documents, just co-occurrence frequencies within documents.
Metric correlations account for term proximity:

c_{ij} = \sum_{k_u \in V_i} \sum_{k_v \in V_j} \frac{1}{r(k_u, k_v)}

V_i: set of all occurrences of term i in any document.
r(k_u, k_v): distance in words between word occurrences k_u and k_v (∞ if k_u and k_v are occurrences in different documents).
Normalized Metric Correlation Matrix
Normalize the scores to account for term frequencies:

s_{ij} = \frac{c_{ij}}{|V_i| \times |V_j|}
Query Expansion with Correlation Matrix
For each term i in the query, expand the query with the n terms j that have the highest values of c_{ij} (or s_{ij}); a sketch follows below.
This adds semantically related terms from the "neighborhood" of the query terms.
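A sketch of the normalization and expansion steps, continuing from the association_matrix sketch above (the helper names and the default n are illustrative):

```python
import numpy as np

def normalize(c):
    """s_ij = c_ij / (c_ii + c_jj - c_ij)."""
    diag = np.diag(c).astype(float)
    return c / (diag[:, None] + diag[None, :] - c)

def expand_local(query_idxs, c, n=2):
    """Add, for each query term, the indexes of its n most correlated stems."""
    s = normalize(c)
    expanded = set(query_idxs)
    for i in query_idxs:
        scores = s[i].copy()
        scores[i] = -np.inf                    # skip the term itself
        expanded.update(int(j) for j in np.argsort(scores)[-n:])
    return sorted(expanded)
```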
Problems with Local Analysis
Term ambiguity may introduce irrelevant statistically correlated terms: "Apple computer" → "Apple red fruit computer".
Since the terms in the result set are highly correlated anyway, expansion may not retrieve many additional documents.
Automatic Global Analysis
Determine term similarity through a pre-computed statistical analysis of the complete corpus.
Compute association matrices that quantify term correlations in terms of how frequently terms co-occur.
Expand queries with the statistically most similar terms.
Automatic Global Analysis
There are two modern variants, based on a thesaurus-like structure built using all documents in the collection:
Query expansion based on a similarity thesaurus
Query expansion based on a statistical thesaurus
Thesaurus
A thesaurus provides information on synonyms and semantically related words and phrases.
Example:
physician
syn: ||croaker, doc, doctor, MD, medical, mediciner, medico, ||sawbones
rel: medic, general practitioner, surgeon
Thesaurus-based Query Expansion
For each term t in a query, expand the query with synonyms and related words of t from the thesaurus.
May weight added terms less than the original query terms.
Generally increases recall.
May significantly decrease precision, particularly with ambiguous terms: "interest rate" → "interest rate fascinate evaluate".
Similarity Thesaurus
The similarity thesaurus is based on term-to-term relationships rather than on a matrix of co-occurrence.
These relationships are not derived directly from the co-occurrence of terms inside documents.
They are obtained by considering that the terms are concepts in a concept space.
In this concept space, each term is indexed by the documents in which it appears.
Terms assume the original role of documents, while documents are interpreted as indexing elements.
Similarity Thesaurus
The following definitions establish the proper framework:
t: number of terms in the collection
N: number of documents in the collection
f_{i,j}: frequency of occurrence of term k_i in document d_j
t_j: vocabulary of document d_j (number of distinct terms in d_j)
itf_j: inverse term frequency of document d_j
Similarity Thesaurus
Inverse term frequency for document d_j:

itf_j = \log \frac{t}{t_j}

With each term k_i is associated a vector

\vec{k_i} = (w_{i,1}, w_{i,2}, \ldots, w_{i,N})
Similarity Thesaurus
where w_{i,j} is a weight associated with the index-document pair [k_i, d_j]. These weights are computed as follows:

w_{i,j} = \frac{\left(0.5 + 0.5 \, \frac{f_{i,j}}{\max_j(f_{i,j})}\right) itf_j}{\sqrt{\sum_{l=1}^{N} \left(0.5 + 0.5 \, \frac{f_{i,l}}{\max_l(f_{i,l})}\right)^2 itf_l^2}}
Similarity Thesaurus
The relationship between two terms k_u and k_v is computed as a correlation factor c_{u,v} given by

c_{u,v} = \vec{k_u} \cdot \vec{k_v} = \sum_{d_j} w_{u,j} \, w_{v,j}

The global similarity thesaurus is built through the computation of the correlation factor c_{u,v} for each pair of indexing terms [k_u, k_v] in the collection.
Similarity Thesaurus
This computation is expensive, but the global similarity thesaurus has to be computed only once and can be updated incrementally.
Query Expansion based on a Similarity Thesaurus
Query expansion is done in three steps, as follows:
1. Represent the query in the concept space used for the representation of the index terms.
2. Based on the global similarity thesaurus, compute a similarity sim(q, k_v) between each term k_v correlated to the query terms and the whole query q.
3. Expand the query with the top r ranked terms according to sim(q, k_v).
Query Expansion - step one
With the query q is associated a vector \vec{q} in the term-concept space, given by

\vec{q} = \sum_{k_i \in q} w_{i,q} \vec{k_i}

where w_{i,q} is a weight associated with the index-query pair [k_i, q].
Query Expansion - step two
Compute a similarity sim(q, k_v) between each term k_v and the user query q:

sim(q, k_v) = \vec{q} \cdot \vec{k_v} = \sum_{k_u \in q} w_{u,q} \, c_{u,v}

where c_{u,v} is the correlation factor.
Query Expansion - step three
Add the top r ranked terms according to sim(q, k_v) to the original query q to form the expanded query q'.
To each expansion term k_v in the query q' is assigned a weight w_{v,q'} given by

w_{v,q'} = \frac{sim(q, k_v)}{\sum_{k_u \in q} w_{u,q}}

The expanded query q' is then used to retrieve new documents for the user.
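Putting the three steps together, a hedged NumPy sketch, assuming the term-document weight matrix w (one row per term, computed as above) is already available; all names and the default r are illustrative:

```python
import numpy as np

def expand_query(w, w_q, r=2):
    """w: (terms x docs) weight matrix; w_q: {term index: query weight w_iq}.
    Returns the expanded query as {term index: weight}."""
    c = w @ w.T                                     # c_uv = k_u . k_v
    sim = sum(wq * c[u] for u, wq in w_q.items())   # sim(q, k_v) for every v
    denom = sum(w_q.values())
    expanded = dict(w_q)
    for v in np.argsort(sim)[-r:]:                  # top r ranked terms
        # original query terms keep their weight; new terms get w_{v,q'}
        expanded.setdefault(int(v), float(sim[v]) / denom)
    return expanded
```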
Query Expansion Sample
Doc1 = D, D, A, B, C, A, B, C
Doc2 = E, C, E, A, A, D
Doc3 = D, C, B, B, D, A, B, C, A
Doc4 = A
c(A,A) = 10.991, c(A,C) = 10.781, c(A,D) = 10.781, ..., c(D,E) = 10.398, c(B,E) = 10.396, c(E,E) = 10.224
Query: q = A E E
sim(q,A) = 24.298, sim(q,C) = 23.833, sim(q,D) = 23.833, sim(q,B) = 23.830, sim(q,E) = 23.435
New query: q' = A C D E E
w(A,q') = 6.88, w(C,q') = 6.75, w(D,q') = 6.75, w(E,q') = 6.64
WordNet
A more detailed database of semantic relationships between English words.
Developed by the famous cognitive psychologist George Miller and a team at Princeton University.
About 144,000 English words.
Nouns, adjectives, verbs, and adverbs grouped into about 109,000 synonym sets called synsets.
WordNet Synset Relationships
Antonym: front → back
Attribute: benevolence → good (noun to adjective)
Pertainym: alphabetical → alphabet (adjective to noun)
Similar: unquestioning → absolute
Cause: kill → die
Entailment: breathe → inhale
Holonym: chapter → text (part-of)
Meronym: computer → cpu (whole-of)
Hyponym: tree → plant (specialization)
Hypernym: fruit → apple (generalization)
WordNet Query Expansion
Add synonyms in the same synset.
Add hyponyms to add specialized terms.
Add hypernyms to generalize a query.
Add other related terms to expand the query.
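A sketch using NLTK's WordNet interface (assumes nltk is installed and the WordNet corpus has been downloaded, e.g. via nltk.download('wordnet'); the function name is illustrative):

```python
from nltk.corpus import wordnet as wn

def wordnet_expand(term, specialize=True, generalize=True):
    expanded = {term}
    for syn in wn.synsets(term):
        expanded.update(l.name() for l in syn.lemmas())       # synonyms
        if specialize:
            for hypo in syn.hyponyms():                       # specialized terms
                expanded.update(l.name() for l in hypo.lemmas())
        if generalize:
            for hyper in syn.hypernyms():                     # generalizations
                expanded.update(l.name() for l in hyper.lemmas())
    return expanded
```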
Statistical Thesaurus
Existing human-developed thesauri are not easily available in all languages.
Human thesauri are limited in the type and range of synonymy and semantic relations they represent.
Semantically related terms can be discovered from statistical analysis of corpora.
Query Expansion Based on a Statistical Thesaurus
The global thesaurus is composed of classes that group correlated terms in the context of the whole collection.
Such correlated terms can then be used to expand the original user query.
These terms must be low-frequency terms.
However, it is difficult to cluster low-frequency terms.
To circumvent this problem, we cluster documents into classes instead and use the low-frequency terms in these documents to define our thesaurus classes.
This algorithm must produce small and tight clusters.
Query Expansion based on a Statistical Thesaurus
Use the thesaurus classes for query expansion.
Compute an average term weight wtc for each thesaurus class C:

wtc = \frac{\sum_{i=1}^{|C|} w_{i,C}}{|C|}
Query Expansion based on a Statistical Thesaurus
wtc can be used to compute a thesaurus class weight w_C as

w_C = \frac{wtc}{|C|^{0.5}}
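A tiny sketch of these two quantities in Python, assuming term_weights is the list of weights w_{i,C} of the terms in class C:

```python
def class_weight(term_weights):
    wtc = sum(term_weights) / len(term_weights)   # average term weight
    return wtc / (len(term_weights) ** 0.5)       # w_C = wtc / |C|^0.5
```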
Query Expansion Sample
Parameters: TC = 0.90, NDC = 2.00, MIDF = 0.2
Doc1 = D, D, A, B, C, A, B, C
Doc2 = E, C, E, A, A, D
Doc3 = D, C, B, B, D, A, B, C, A
Doc4 = A
Document similarities: sim(1,3) = 0.99, sim(1,2) = 0.40, sim(2,3) = 0.29, sim(4,1) = sim(4,2) = sim(4,3) = 0.00
Cluster hierarchy: {D1, D3} merge at 0.99; {D1, D3, D2} at 0.29; {D1, D3, D2, D4} at 0.00
idf A = 0.0, idf B = 0.3, idf C = 0.12, idf D = 0.12, idf E = 0.60
q = A E E → q' = A B E E
Query Expansion based on a Statistical Thesaurus
Problems with this approach:
Initialization of the parameters TC, NDC, and MIDF.
TC depends on the collection.
Inspection of the cluster hierarchy is almost always necessary for assisting with the setting of TC.
A high value of TC might yield classes with too few terms.
Complete link algorithm
This is a document clustering algorithm that produces small and tight clusters:
1. Place each document in a distinct cluster.
2. Compute the similarity between all pairs of clusters.
3. Determine the pair of clusters [C_u, C_v] with the highest inter-cluster similarity.
4. Merge the clusters C_u and C_v.
5. Verify a stop criterion. If this criterion is not met, go back to step 2.
6. Return a hierarchy of clusters.
The similarity between two clusters is defined as the minimum of the similarities between all pairs of inter-cluster documents. A sketch follows below.
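A minimal sketch using SciPy's hierarchical clustering, where 'complete' linkage corresponds to the minimum-similarity (maximum-distance) rule above; the matrix X, the similarity threshold, and the use of cosine distance are assumptions for illustration:

```python
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def complete_link_clusters(X, sim_threshold=0.9):
    """X: (docs x terms) weight matrix. Returns a cluster label per document."""
    dists = pdist(X, metric='cosine')         # 1 - cosine similarity
    tree = linkage(dists, method='complete')  # complete-link hierarchy
    # stop criterion: cut where inter-cluster similarity drops below threshold
    return fcluster(tree, t=1.0 - sim_threshold, criterion='distance')
```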
Selecting the terms that compose each class
Given the document cluster hierarchy for the whole collection, the terms that compose each class of the global thesaurus are selected as follows.
Obtain from the user three parameters:
TC: threshold class
NDC: number of documents in a class
MIDF: minimum inverse document frequency
Selecting the terms that compose each class
Use the parameter TC as a threshold value for determining the document clusters that will be used to generate thesaurus classes.
This threshold has to be surpassed by sim(C_u, C_v) if the documents in the clusters C_u and C_v are to be selected as sources of terms for a thesaurus class.
Selecting the terms that compose each class
Use the parameter NDC as a limit on the size of clusters (number of documents) to be considered.
A low value of NDC might restrict the selection to the smaller cluster C_{u+v}.
Selecting the terms that compose each class
Consider the set of documents in each document cluster pre-selected above.
Only the lower-frequency terms are used as sources of terms for the thesaurus classes.
The parameter MIDF defines the minimum value of inverse document frequency for any term that is selected to participate in a thesaurus class. A sketch of the selection follows below.
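A hedged sketch of the selection with the three parameters; clusters, doc_terms, and idf are assumed inputs (a list of (similarity, doc ids) pairs from the hierarchy, a doc-to-terms map, and a term-to-idf map), and the defaults are illustrative:

```python
def thesaurus_classes(clusters, doc_terms, idf, TC=0.9, NDC=3, MIDF=0.2):
    classes = []
    for sim, docs in clusters:
        if sim < TC or len(docs) > NDC:          # TC and NDC filters
            continue
        # keep only low-frequency (high-idf) terms from the cluster's docs
        terms = {t for d in docs for t in doc_terms[d] if idf[t] >= MIDF}
        if terms:
            classes.append(terms)
    return classes
```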
Global vs. Local Analysis
Global analysis requires intensive term correlation computation, but only once, at system development time.
Local analysis requires intensive term correlation computation for every query at run time (although the number of terms and documents is smaller than in global analysis).
But local analysis gives better results.
Example of manual thesaurus
Thesaurus-based query expansion
For each term t in a query, expand the query with synonyms and related words of t from the thesaurus: feline → feline cat.
May weight added terms less than the original query terms.
Generally increases recall.
Widely used in many science/engineering fields.
May significantly decrease precision, particularly with ambiguous terms: "interest rate" → "interest rate fascinate evaluate".
There is a high cost of manually producing a thesaurus, and of updating it for scientific changes.
Automatic Thesaurus Generation
Attempt to generate a thesaurus automatically by analyzing the collection of documents.
Fundamental notion: similarity between two words.
Definition 1: Two words are similar if they co-occur with similar words.
Definition 2: Two words are similar if they occur in a given grammatical relation with the same words. (You can harvest, peel, eat, prepare, etc. apples and pears, so apples and pears must be similar.)
Co-occurrence-based similarity is more robust; grammatical relations are more accurate.
Co-occurrence Thesaurus
The simplest way to compute one is based on the term-term similarities in C = A Aᵀ, where A is the M × N term-document matrix and w_{i,j} is the (normalized) weight for (t_i, d_j).
For each t_i, pick the terms with high values in C.
What does C contain if A is a term-document incidence (0/1) matrix?
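A minimal NumPy sketch of this construction; the terms list and neighbor count n are illustrative assumptions:

```python
import numpy as np

def cooccurrence_neighbors(A, terms, n=3):
    """A: (terms x docs) weight matrix; returns the top-n neighbors per term."""
    C = (A @ A.T).astype(float)       # term-term similarity matrix C = A A^T
    np.fill_diagonal(C, -np.inf)      # ignore self-similarity
    return {terms[i]: [terms[j] for j in np.argsort(C[i])[::-1][:n]]
            for i in range(len(terms))}
```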
Automatic Thesaurus Generation Example
Automatic Thesaurus Generation Discussion
Quality of associations is usually a problem.
Term ambiguity may introduce irrelevant statistically correlated terms: "Apple computer" → "Apple red fruit computer".
Problems:
False positives: words deemed similar that are not.
False negatives: words deemed dissimilar that are similar.
Since terms are highly correlated anyway, expansion may not retrieve many additional documents.
Query Expansion Conclusions
Expansion of queries with related terms can improve performance, particularly recall.
However, similar terms must be selected very carefully to avoid problems such as loss of precision.
Conclusion
A thesaurus is an efficient method to expand queries.
The computation is expensive, but it is executed only once.
Query expansion based on a similarity thesaurus may use high-frequency terms to expand the query.
Query expansion based on a statistical thesaurus needs well-defined parameters.
Query assist
Would you expect such a feature to increase the query volume at a search engine?
Query assist
Generally done by query log mining.
Recommend frequent recent queries that contain the partial string typed by the user.
A ranking problem! View each prior query as a doc and rank-order those matching the partial string; a sketch follows below.
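A toy sketch of log-based query assist: treat each prior query as a "document" and rank those matching the partial string by frequency (recency weighting omitted; all names are illustrative):

```python
from collections import Counter

def suggest(prefix, query_log, n=5):
    """query_log: iterable of past query strings."""
    matches = Counter(q for q in query_log if q.startswith(prefix))
    return [q for q, _ in matches.most_common(n)]
```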