Latent Semantic Indexing
![Page 1: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/1.jpg)
Latent Semantic Indexing
Adapted from Lectures by
Prabhaker Raghavan, Christopher Manning and Thomas Hoffmann
![Page 2: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/2.jpg)
Today’s topic
Term-document matrices are very large.
But the number of topics that people talk about is small (in some sense): clothes, movies, politics, …
Can we represent the term-document space by a lower-dimensional latent space?
![Page 3: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/3.jpg)
Linear Algebra Background
![Page 4: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/4.jpg)
Eigenvalues & Eigenvectors
Eigenvectors (for a square $m \times m$ matrix $S$):
$$ S v = \lambda v \qquad (\lambda:\ \text{eigenvalue};\ v:\ \text{(right) eigenvector},\ v \neq 0) $$
How many eigenvalues are there at most?
The equation $S v = \lambda v$ has a non-zero solution only if $\det(S - \lambda I) = 0$.
This is an $m$-th order equation in $\lambda$ which can have at most $m$ distinct solutions (roots of the characteristic polynomial); they can be complex even though $S$ is real.
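To make the definition concrete, here is a minimal added sketch (not part of the original slides) that checks the eigenpair condition numerically with NumPy; the 2×2 matrix is an arbitrary illustration.

```python
import numpy as np

# Hypothetical 2x2 example (not from the slides): eigenpairs satisfy S v = lambda v.
S = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(S)    # at most m = 2 eigenvalues
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(S @ v, lam * v)           # definition of an eigenpair
    # det(S - lambda I) = 0 is the characteristic equation the slide refers to
    assert abs(np.linalg.det(S - lam * np.eye(2))) < 1e-9
print(eigenvalues)   # [2. 3.]
```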
![Page 5: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/5.jpg)
Matrix-vector multiplication
$$ S = \begin{pmatrix} 30 & 0 & 0 \\ 0 & 20 & 0 \\ 0 & 0 & 1 \end{pmatrix} $$
has eigenvalues 30, 20, 1 with corresponding eigenvectors
$$ v_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}. $$
On each eigenvector, $S$ acts as a multiple of the identity matrix: but as a different multiple on each.
Any vector, say $x = \begin{pmatrix} 2 \\ 4 \\ 6 \end{pmatrix}$, can be viewed as a combination of the eigenvectors: $x = 2v_1 + 4v_2 + 6v_3$.
![Page 6: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/6.jpg)
Matrix vector multiplication
Thus a matrix-vector multiplication such as Sx (S, x as in the previous slide) can be rewritten in terms of the eigenvalues/eigenvectors:
Even though x is an arbitrary vector, the action of S on x is determined by the eigenvalues/eigenvectors.
$$ Sx = S(2v_1 + 4v_2 + 6v_3) $$
$$ Sx = 2Sv_1 + 4Sv_2 + 6Sv_3 = 2\lambda_1 v_1 + 4\lambda_2 v_2 + 6\lambda_3 v_3 $$
$$ Sx = 60v_1 + 80v_2 + 6v_3 $$
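The following short sketch (an addition, not from the slides) reproduces these numbers with NumPy and previews the next slide's point about dropping the smallest eigenvalue.

```python
import numpy as np

S = np.diag([30.0, 20.0, 1.0])
v1, v2, v3 = np.eye(3)          # eigenvectors of a diagonal matrix
x = 2 * v1 + 4 * v2 + 6 * v3    # the vector (2, 4, 6) from the slide

# Action of S on x expressed through the eigenvalues:
Sx = 2 * 30 * v1 + 4 * 20 * v2 + 6 * 1 * v3
print(S @ x)        # [60. 80.  6.]
print(Sx)           # identical: [60. 80.  6.]

# Dropping the smallest eigenvalue changes the result only slightly:
Sx_approx = 2 * 30 * v1 + 4 * 20 * v2
cos = Sx @ Sx_approx / (np.linalg.norm(Sx) * np.linalg.norm(Sx_approx))
print(cos)          # ~0.998, i.e. the two vectors are nearly parallel
```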
![Page 7: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/7.jpg)
Matrix-vector multiplication
Key observation: the effect of "small" eigenvalues is small. If we ignored the smallest eigenvalue (1), then instead of
$$ \begin{pmatrix} 60 \\ 80 \\ 6 \end{pmatrix} \quad \text{we would get} \quad \begin{pmatrix} 60 \\ 80 \\ 0 \end{pmatrix}. $$
These vectors are similar (in terms of cosine similarity), or close (in terms of Euclidean distance).
![Page 8: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/8.jpg)
Eigenvalues & Eigenvectors
For symmetric matrices, eigenvectors for distinct eigenvalues are orthogonal:
$$ S v_{\{1,2\}} = \lambda_{\{1,2\}} v_{\{1,2\}}, \quad \lambda_1 \neq \lambda_2 \ \Rightarrow\ v_1 \cdot v_2 = 0 $$
All eigenvalues of a real symmetric matrix are real:
$$ \text{for complex } \lambda:\ |S - \lambda I| = 0 \text{ and } S = S^T \ \Rightarrow\ \lambda \in \mathbb{R} $$
All eigenvalues of a positive semidefinite matrix are non-negative:
$$ \forall w \in \mathbb{R}^n:\ w^T S w \geq 0, \text{ then } S v = \lambda v \ \Rightarrow\ \lambda \geq 0 $$
![Page 9: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/9.jpg)
Example
Let $S = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ (real, symmetric).
Then
$$ |S - \lambda I| = \begin{vmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{vmatrix} = (2-\lambda)^2 - 1 = 0. $$
The eigenvalues are 1 and 3 (nonnegative, real). Plug in these values and solve for the eigenvectors.
The eigenvectors $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ are orthogonal (and real).
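A quick numerical check of this example, added here as a sketch (NumPy, not part of the original deck; note that `np.linalg.eig` may return the eigenpairs in a different order):

```python
import numpy as np

# Check the slide's example: S is real and symmetric, so the eigenvalues
# are real and the eigenvectors for distinct eigenvalues are orthogonal.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(S)
print(sorted(eigenvalues))                 # [1.0, 3.0]

v1, v2 = eigenvectors.T                    # proportional to (1, -1) and (1, 1)
print(np.isclose(v1 @ v2, 0.0))            # True: the eigenvectors are orthogonal
```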
![Page 10: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/10.jpg)
Eigen/diagonal Decomposition
Let $S$ be a square matrix with $m$ linearly independent eigenvectors (a "non-defective" matrix).
Theorem: There exists an eigen decomposition $S = U \Lambda U^{-1}$ (cf. matrix diagonalization theorem).
Columns of $U$ are the eigenvectors of $S$.
Diagonal elements of $\Lambda$ are the eigenvalues of $S$; $\Lambda$ is diagonal, and the decomposition is unique for distinct eigenvalues.
![Page 11: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/11.jpg)
Diagonal decomposition: why/how
Let $U$ have the eigenvectors as columns: $U = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix}$.
Then $SU$ can be written
$$ SU = S \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix} = \begin{pmatrix} \lambda_1 v_1 & \cdots & \lambda_n v_n \end{pmatrix} = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix} \mathrm{diag}(\lambda_1, \ldots, \lambda_n) = U \Lambda $$
Thus $SU = U\Lambda$, or $U^{-1}SU = \Lambda$.
And $S = U \Lambda U^{-1}$.
![Page 12: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/12.jpg)
Diagonal decomposition - example
Recall $S = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ with $\lambda_1 = 1$, $\lambda_2 = 3$.
The eigenvectors $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ form $U = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}$.
Inverting, we have $U^{-1} = \begin{pmatrix} 1/2 & -1/2 \\ 1/2 & 1/2 \end{pmatrix}$.
Then
$$ S = U \Lambda U^{-1} = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} 1/2 & -1/2 \\ 1/2 & 1/2 \end{pmatrix} $$
Recall $U U^{-1} = I$.
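As an added sketch (assuming NumPy; not in the original slides), the decomposition above can be verified numerically, including the orthogonal version previewed on the next slide:

```python
import numpy as np

# Numerical check of the worked example: S = U Lambda U^{-1}.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
U = np.array([[1.0, 1.0],
              [-1.0, 1.0]])      # eigenvectors (1,-1) and (1,1) as columns
Lam = np.diag([1.0, 3.0])        # matching eigenvalues

print(np.allclose(U @ Lam @ np.linalg.inv(U), S))    # True

# Normalizing the columns of U (dividing by sqrt(2)) gives an orthogonal Q,
# so S = Q Lambda Q^T, as on the following slide.
Q = U / np.sqrt(2)
print(np.allclose(Q @ Lam @ Q.T, S))                  # True
```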
![Page 13: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/13.jpg)
Example continued
Let's divide $U$ (and multiply $U^{-1}$) by $\sqrt{2}$.
Then
$$ S = \begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ -1/\sqrt{2} & 1/\sqrt{2} \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} 1/\sqrt{2} & -1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{pmatrix} = Q \Lambda Q^T \qquad (Q^{-1} = Q^T) $$
Why? Stay tuned …
![Page 14: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/14.jpg)
Symmetric Eigen Decomposition
If $S$ is a symmetric matrix:
Theorem: There exists a (unique) eigen decomposition $S = Q \Lambda Q^T$, where $Q$ is orthogonal: $Q^{-1} = Q^T$.
Columns of $Q$ are normalized eigenvectors; the columns are orthogonal.
(everything is real)
![Page 15: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/15.jpg)
Exercise
Examine the symmetric eigen decomposition, if any, for each of the following matrices:
$$ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \qquad \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix} \qquad \begin{pmatrix} 2 & 2 \\ 2 & 4 \end{pmatrix} $$
![Page 16: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/16.jpg)
Time out!
I came to this class to learn about text retrieval and mining, not have my linear algebra past dredged up again …
But if you want to dredge, Strang's Applied Mathematics is a good place to start.
What do these matrices have to do with text?
Recall $M \times N$ term-document matrices …
But everything so far needs square matrices – so …
![Page 17: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/17.jpg)
Singular Value Decomposition
For an $M \times N$ matrix $A$ of rank $r$ there exists a factorization (Singular Value Decomposition = SVD) as follows:
$$ A = U \Sigma V^T $$
where $U$ is $M \times M$, $\Sigma$ is $M \times N$, and $V$ is $N \times N$.
The columns of $U$ are orthogonal eigenvectors of $AA^T$.
The columns of $V$ are orthogonal eigenvectors of $A^TA$.
The eigenvalues $\lambda_1 \ldots \lambda_r$ of $AA^T$ are also the eigenvalues of $A^TA$, and
$$ \sigma_i = \sqrt{\lambda_i}, \qquad \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r) \quad \text{(the singular values).} $$
![Page 18: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/18.jpg)
Singular Value Decomposition
Illustration of SVD dimensions and sparseness
![Page 19: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/19.jpg)
SVD example
Let
$$ A = \begin{pmatrix} 1 & -1 \\ 0 & 1 \\ 1 & 0 \end{pmatrix} $$
Thus $M = 3$, $N = 2$. Its SVD is
$$ A = \begin{pmatrix} 0 & 2/\sqrt{6} & 1/\sqrt{3} \\ 1/\sqrt{2} & -1/\sqrt{6} & 1/\sqrt{3} \\ 1/\sqrt{2} & 1/\sqrt{6} & -1/\sqrt{3} \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{3} \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{pmatrix} $$
Typically, the singular values are arranged in decreasing order.
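An added sketch checking this example with NumPy (note that `numpy.linalg.svd` returns the singular values in decreasing order, so its factors differ from the slide's by a permutation and possible sign flips):

```python
import numpy as np

# Numerically decompose the slide's 3x2 matrix.
A = np.array([[1.0, -1.0],
              [0.0,  1.0],
              [1.0,  0.0]])

U, s, Vt = np.linalg.svd(A)          # full SVD: U is 3x3, Vt is 2x2
print(s)                             # [1.732..., 1.0], i.e. sqrt(3) and 1

# Rebuild A from the factors to confirm A = U Sigma V^T.
Sigma = np.zeros_like(A)
Sigma[:2, :2] = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A))                                     # True

# The squared singular values are the eigenvalues of A^T A (and of A A^T).
print(np.allclose(np.sort(np.linalg.eigvalsh(A.T @ A)), np.sort(s**2)))  # True
```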
![Page 20: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/20.jpg)
Low-rank Approximation
SVD can be used to compute optimal low-rank approximations.
Approximation problem: Find $A_k$ of rank $k$ such that
$$ A_k = \min_{X:\, \mathrm{rank}(X) = k} \| A - X \|_F $$
($\|\cdot\|_F$ is the Frobenius norm). $A_k$ and $X$ are both $M \times N$ matrices.
Typically, we want $k \ll r$.
![Page 21: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/21.jpg)
Low-rank Approximation: Solution via SVD
Set the smallest $r - k$ singular values to zero:
$$ A_k = U\, \mathrm{diag}(\sigma_1, \ldots, \sigma_k, 0, \ldots, 0)\, V^T $$
Column notation: sum of rank-1 matrices
$$ A_k = \sum_{i=1}^{k} \sigma_i\, u_i v_i^T $$
![Page 22: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/22.jpg)
Reduced SVD
If we retain only k singular values, and set the rest to 0, then we don't need the matrix parts shown in red in the figure.
Then Σ is k×k, U is M×k, V^T is k×N, and A_k is M×N. This is referred to as the reduced SVD.
It is the convenient (space-saving) and usual form for computational applications.
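Here is a brief illustrative sketch (NumPy, not from the slides) of computing $A_k$ from the reduced SVD; the random matrix is just placeholder data and `low_rank_approx` is a name introduced here for illustration.

```python
import numpy as np

def low_rank_approx(A: np.ndarray, k: int) -> np.ndarray:
    """Best rank-k approximation of A in the Frobenius norm (Eckart-Young),
    built from the reduced SVD: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # reduced factors
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Usage on a random stand-in for a term-document matrix (illustrative data only).
rng = np.random.default_rng(0)
A = rng.random((100, 30))
A2 = low_rank_approx(A, k=2)
print(np.linalg.matrix_rank(A2))     # 2
print(np.linalg.norm(A - A2))        # Frobenius error of the best rank-2 approximation
```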
![Page 23: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/23.jpg)
Approximation error
How good (bad) is this approximation? It's the best possible, measured by the Frobenius norm of the error:
$$ \min_{X:\, \mathrm{rank}(X) = k} \| A - X \|_F = \| A - A_k \|_F = \sigma_{k+1} $$
where the $\sigma_i$ are ordered such that $\sigma_i \geq \sigma_{i+1}$. This suggests why the Frobenius error drops as $k$ increases.
![Page 24: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/24.jpg)
SVD Low-rank approximation
Whereas the term-doc matrix A may have M = 50000, N = 10 million (and rank close to 50000),
we can construct an approximation A100 with rank 100.
Of all rank-100 matrices, it would have the lowest Frobenius error.
Great … but why would we?? Answer: Latent Semantic Indexing.
C. Eckart, G. Young, The approximation of a matrix by another of lower rank. Psychometrika, 1, 211-218, 1936.
![Page 25: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/25.jpg)
Latent Semantic Indexing via the SVD
![Page 26: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/26.jpg)
What it is
From the term-doc matrix A, we compute the approximation Ak.
There is a row for each term and a column for each doc in Ak.
Thus docs live in a space of k << r dimensions.
These dimensions are not the original axes.
But why?
![Page 27: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/27.jpg)
Vector Space Model: Pros
Automatic selection of index terms
Partial matching of queries and documents (dealing with the case where no document contains all search terms)
Ranking according to similarity score (dealing with large result sets)
Term weighting schemes (improves retrieval performance)
Various extensions: document clustering, relevance feedback (modifying the query vector)
Geometric foundation
![Page 28: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/28.jpg)
Problems with Lexical Semantics
Ambiguity and association in natural language
Polysemy: Words often have a multitude of meanings and different types of usage (more severe in very heterogeneous collections).
The vector space model is unable to discriminate between different meanings of the same word.
![Page 29: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/29.jpg)
Problems with Lexical Semantics
Synonymy: Different terms may have identical or similar meanings (weaker: words indicating the same topic).
No associations between words are made in the vector space representation.
![Page 30: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/30.jpg)
Polysemy and Context
Document similarity on single word level: polysemy and context
[Figure: the word "saturn" appears in two contexts – meaning 1 (ring, jupiter, space, voyager, planet, …) and meaning 2 (car, company, dodge, ford, …). A shared word contributes to similarity if used in the 1st meaning, but not if used in the 2nd.]
![Page 31: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/31.jpg)
Latent Semantic Indexing (LSI)
Perform a low-rank approximation of the document-term matrix (typical rank 100-300).
General idea:
Map documents (and terms) to a low-dimensional representation.
Design the mapping such that the low-dimensional space reflects semantic associations (latent semantic space).
Compute document similarity based on the inner product in this latent semantic space.
![Page 32: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/32.jpg)
Goals of LSI
Similar terms map to similar location in low dimensional space
Noise reduction by dimension reduction
![Page 33: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/33.jpg)
Latent Semantic Analysis
Latent semantic space: illustrating example
courtesy of Susan Dumais
![Page 34: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/34.jpg)
Performing the maps
Each row and column of A gets mapped into the k-dimensional LSI space, by the SVD.
Claim: this is not only the mapping with the best (Frobenius error) approximation to A, but it in fact improves retrieval.
A query q is also mapped into this space, by
$$ q_k = \Sigma_k^{-1} U_k^T q $$
The mapped query is NOT a sparse vector.
![Page 35: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/35.jpg)
Performing the maps
$A^TA$ is the matrix of dot products of pairs of documents, and $A^TA \approx A_k^T A_k$:
$$ A_k^T A_k = (U_k \Sigma_k V_k^T)^T (U_k \Sigma_k V_k^T) = V_k \Sigma_k U_k^T U_k \Sigma_k V_k^T = (V_k \Sigma_k)(V_k \Sigma_k)^T $$
Since $V_k = A_k^T U_k \Sigma_k^{-1}$, we should transform a query $q$ to $q_k$ as follows:
$$ q_k = q^T U_k \Sigma_k^{-1} $$
(Sec. 18.4)
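A small sketch of this query mapping (an illustration with made-up dimensions, assuming NumPy and using the column-vector form $q_k = \Sigma_k^{-1} U_k^T q$ from the previous slide; `lsi_fold_in` is a name introduced here, not from the lecture):

```python
import numpy as np

def lsi_fold_in(q: np.ndarray, U_k: np.ndarray, sigma_k: np.ndarray) -> np.ndarray:
    """Map a (sparse) term-space query vector q into the k-dimensional LSI
    space via q_k = Sigma_k^{-1} U_k^T q."""
    return np.diag(1.0 / sigma_k) @ (U_k.T @ q)

# Illustrative usage with made-up dimensions (M terms, k latent dimensions).
M, k = 6, 2
rng = np.random.default_rng(1)
A = rng.random((M, 4))                        # toy term-document matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, sigma_k = U[:, :k], s[:k]

q = np.zeros(M)
q[0] = 1.0                                    # query containing a single term
q_k = lsi_fold_in(q, U_k, sigma_k)
print(q_k)                                    # dense k-dimensional query vector
```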
![Page 36: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/36.jpg)
Empirical evidence
Experiments on TREC 1/2/3 – Dumais
Lanczos SVD code (available on netlib) due to Berry used in these experiments
Running times of ~ one day on tens of thousands of docs [still an obstacle to use]
Dimensions: various values 250-350 reported. Reducing k improves recall. (Under 200 reported unsatisfactory.)
Generally expect recall to improve – what about precision?
![Page 37: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/37.jpg)
Empirical evidence
Precision at or above median TREC precision
Top scorer on almost 20% of TREC topics
Slightly better on average than straight vector spaces
Effect of dimensionality:

| Dimensions | Precision |
|---|---|
| 250 | 0.367 |
| 300 | 0.371 |
| 346 | 0.374 |
![Page 38: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/38.jpg)
But why is this clustering?
We’ve talked about docs, queries, retrieval and precision here.
What does this have to do with clustering?
Intuition: Dimension reduction through LSI brings together “related” axes in the vector space.
![Page 39: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/39.jpg)
Intuition from block matrices
[Figure: an M (terms) × N (documents) matrix whose diagonal consists of homogeneous non-zero blocks (Block 1, Block 2, …, Block k), with 0's elsewhere.]
What's the rank of this matrix?
![Page 40: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/40.jpg)
Intuition from block matrices
[Figure: the same M (terms) × N (documents) block-diagonal matrix.]
Vocabulary partitioned into k topics (clusters); each doc discusses only one topic.
![Page 41: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/41.jpg)
Intuition from block matrices
[Figure: the block matrix again, now with a few nonzero entries outside the blocks; an inset shows the terms wiper, tire, V6, car, automobile shared across two documents.]
Likely there's a good rank-k approximation to this matrix.
![Page 42: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/42.jpg)
Simplistic picture
[Figure: three document clusters labeled Topic 1, Topic 2, and Topic 3.]
![Page 43: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/43.jpg)
Some wild extrapolation
The "dimensionality" of a corpus is the number of distinct topics represented in it.
More mathematical wild extrapolation: if A has a rank-k approximation of low Frobenius error, then there are no more than k distinct topics in the corpus.
![Page 44: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/44.jpg)
LSI has many other applications
In many settings in pattern recognition and retrieval, we have a feature-object matrix.
For text, the terms are features and the docs are objects. Could be opinions and users …
This matrix may be redundant in dimensionality. Can work with a low-rank approximation.
If entries are missing (e.g., users' opinions), can recover them if the dimensionality is low.
Powerful general analytical technique
Close, principled analog to clustering methods.
![Page 45: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/45.jpg)
Hinrich Schütze and Christina Lioma: Latent Semantic Indexing
![Page 46: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/46.jpg)
Overview
❶ Latent semantic indexing
❷ Dimensionality reduction
❸ LSI in information retrieval
![Page 47: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/47.jpg)
Outline
❶ Latent semantic indexing
❷ Dimensionality reduction
❸ LSI in information retrieval
![Page 48: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/48.jpg)
Recall: Term-document matrix
This matrix is the basis for computing the similarity between documents and queries.
Today: Can we transform this matrix so that we get a better measure of similarity between documents and queries?
| | Anthony and Cleopatra | Julius Caesar | The Tempest | Hamlet | Othello | Macbeth |
|---|---|---|---|---|---|---|
| anthony | 5.25 | 3.18 | 0.0 | 0.0 | 0.0 | 0.35 |
| brutus | 1.21 | 6.10 | 0.0 | 1.0 | 0.0 | 0.0 |
| caesar | 8.59 | 2.54 | 0.0 | 1.51 | 0.25 | 0.0 |
| calpurnia | 0.0 | 1.54 | 0.0 | 0.0 | 0.0 | 0.0 |
| cleopatra | 2.85 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| mercy | 1.51 | 0.0 | 1.90 | 0.12 | 5.25 | 0.88 |
| worser | 1.37 | 0.0 | 0.11 | 4.15 | 0.25 | 1.95 |
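As an added sketch (NumPy; not part of the original slides), the table above can be loaded directly as a matrix, which is the object the rest of the lecture decomposes:

```python
import numpy as np

# The weighted term-document matrix from the table above
# (rows: anthony, brutus, caesar, calpurnia, cleopatra, mercy, worser).
C = np.array([
    [5.25, 3.18, 0.00, 0.00, 0.00, 0.35],
    [1.21, 6.10, 0.00, 1.00, 0.00, 0.00],
    [8.59, 2.54, 0.00, 1.51, 0.25, 0.00],
    [0.00, 1.54, 0.00, 0.00, 0.00, 0.00],
    [2.85, 0.00, 0.00, 0.00, 0.00, 0.00],
    [1.51, 0.00, 1.90, 0.12, 5.25, 0.88],
    [1.37, 0.00, 0.11, 4.15, 0.25, 1.95],
])

# Cosine similarity of two document columns, e.g. "Anthony and Cleopatra" vs "Julius Caesar".
d0, d1 = C[:, 0], C[:, 1]
print(d0 @ d1 / (np.linalg.norm(d0) * np.linalg.norm(d1)))
```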
![Page 49: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/49.jpg)
Latent semantic indexing: Overview
We decompose the term-document matrix into a product of matrices.
The particular decomposition we'll use is the singular value decomposition (SVD).
SVD: C = UΣV^T (where C = term-document matrix)
We will then use the SVD to compute a new, improved term-document matrix C′.
We'll get better similarity values out of C′ (compared to C).
Using SVD for this purpose is called latent semantic indexing or LSI.
![Page 50: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/50.jpg)
Example of C = UΣVT : The matrix C
This is a standard term-document matrix. Actually, we use a non-weighted matrix here to simplify the example.
![Page 51: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/51.jpg)
Example of C = UΣVT : The matrix U
One row per term, one column per min(M,N), where M is the number of terms and N is the number of documents. This is an orthonormal matrix: (i) Row vectors have unit length. (ii) Any two distinct row vectors are orthogonal to each other. Think of the dimensions (columns) as "semantic" dimensions that capture distinct topics like politics, sports, economics. Each number u_ij in the matrix indicates how strongly related term i is to the topic represented by semantic dimension j.
![Page 52: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/52.jpg)
Example of C = UΣVT : The matrix Σ
This is a square, diagonal matrix of dimensionality min(M,N) × min(M,N). The diagonal consists of the singular values of C. The magnitude of the singular value measures the importance of the corresponding semantic dimension. We’ll make use of this by omitting unimportant dimensions.
![Page 53: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/53.jpg)
Example of C = UΣVT : The matrix VT
One column per document, one row per min(M,N) where M is the number of terms and N is the number of documents. Again: This is an orthonormal matrix: (i) Column vectors have unit length. (ii) Any two distinct column vectors are orthogonal to each other. These are again the semantic dimensions from the term matrix U that capture distinct topics like politics, sports, economics. Each number vij in the matrix indicates how strongly related document i is to the topic represented by semantic dimension j .
![Page 54: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/54.jpg)
Example of C = UΣVT : All four matrices
![Page 55: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/55.jpg)
LSI: Summary
We've decomposed the term-document matrix C into a product of three matrices:
the term matrix U – one (row) vector for each term;
the document matrix V^T – one (column) vector for each document;
the singular value matrix Σ – a diagonal matrix of singular values, reflecting the importance of each dimension.
Next: Why are we doing this?
![Page 56: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/56.jpg)
Outline
❶ Latent semantic indexing
❷ Dimensionality reduction
❸ LSI in information retrieval
![Page 57: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/57.jpg)
How we use the SVD in LSI
Key property: Each singular value tells us how important its dimension is.
By setting less important dimensions to zero, we keep the important information, but get rid of the "details". These details may
be noise – in that case, reduced LSI is a better representation because it is less noisy;
make things dissimilar that should be similar – again, reduced LSI is a better representation because it represents similarity better.
![Page 58: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/58.jpg)
How we use the SVD in LSI
Analogy for “fewer details is better”
Image of a bright red flower
Image of a black and white flower
Omitting color makes it easier to see similarity
![Page 59: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/59.jpg)
Recall unreduced decomposition C=UΣVT
![Page 60: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/60.jpg)
Reducing the dimensionality to 2
![Page 61: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/61.jpg)
Reducing the dimensionality to 2
Actually, we only zero out singular values in Σ. This has the effect of setting the corresponding dimensions in U and V^T to zero when computing the product C = UΣV^T.
![Page 62: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/62.jpg)
Original matrix C vs. reduced C2 = UΣ2V^T
We can view C2 as a two-dimensional representation of the matrix.
We have performed a dimensionality reduction to two dimensions.
![Page 63: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/63.jpg)
Why is the reduced matrix "better"
Similarity of d2 and d3 in the original space: 0.
Similarity of d2 and d3 in the reduced space:
0.52 · 0.28 + 0.36 · 0.16 + 0.72 · 0.36 + 0.12 · 0.20 + (−0.39) · (−0.08) ≈ 0.52
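For completeness, a tiny added check of the arithmetic above (the five coordinates are the reduced-space weights read off the slide for d2 and d3):

```python
# Recomputing the slide's reduced-space dot product for documents d2 and d3.
d2 = [0.52, 0.36, 0.72, 0.12, -0.39]
d3 = [0.28, 0.16, 0.36, 0.20, -0.08]
print(sum(a * b for a, b in zip(d2, d3)))   # 0.5176, i.e. ~0.52
```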
![Page 64: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/64.jpg)
Why the reduced matrix is “better”
“boat” and “ship” are semantically similar.
The “reduced” similarity measure reflects this.
What property of the SVD reduction is responsible for improved similarity?
![Page 65: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/65.jpg)
Another Contrived Example Illustrating SVD
(1) Two clean clusters
(2) Perturbed matrix
(3) After dimension reduction
![Page 66: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/66.jpg)
Term Document Matrix : Two Clusters
![Page 67: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/67.jpg)
Term Document Matrix : Disjoint vocabulary clusters
$$ D = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} $$
![Page 68: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/68.jpg)
Term Document Matrix : Disjoint vocabulary clusters
![Page 69: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/69.jpg)
Term Document Matrix : Perturbed (Doc 3)
![Page 70: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/70.jpg)
Term Document Matrix : Recreated Approximation
![Page 71: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/71.jpg)
Outline
❶ Latent semantic indexing
❷ Dimensionality reduction
❸ LSI in information retrieval
![Page 72: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/72.jpg)
Why we use LSI in information retrieval
LSI takes documents that are semantically similar (= talk about the same topics), but are not similar in the vector space (because they use different words), and re-represents them in a reduced vector space in which they have higher similarity.
Thus, LSI addresses the problems of synonymy and semantic relatedness.
Standard vector space: Synonyms contribute nothing to document similarity.
Desired effect of LSI: Synonyms contribute strongly to document similarity.
![Page 73: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/73.jpg)
How LSI addresses synonymy and semantic relatedness
The dimensionality reduction forces us to omit "details".
We have to map different words (= different dimensions of the full space) to the same dimension in the reduced space.
The "cost" of mapping synonyms to the same dimension is much less than the cost of collapsing unrelated words.
SVD selects the "least costly" mapping (see below). Thus, it will map synonyms to the same dimension, but it will avoid doing that for unrelated words.
![Page 74: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/74.jpg)
LSI: Comparison to other approaches
Recap: Relevance feedback and query expansion are used to increase recall in IR – if query and documents have (in the extreme case) no terms in common.
LSI increases recall and can hurt precision.
Thus, it addresses the same problems as (pseudo) relevance feedback and query expansion …
… and it has the same problems.
![Page 75: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/75.jpg)
Implementation
Compute the SVD of the term-document matrix.
Reduce the space and compute reduced document representations.
Map the query into the reduced space: $q_2 = \Sigma_2^{-1} U_2^T q$. This follows from $C_2 = U_2 \Sigma_2 V_2^T \ \Rightarrow\ \Sigma_2^{-1} U_2^T C_2 = V_2^T$.
Compute similarity of q2 with all reduced documents in V2.
Output ranked list of documents as usual.
Exercise: What is the fundamental problem with this approach?
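A compact end-to-end sketch of this recipe (an illustration in NumPy with toy data; the name `lsi_retrieval` and the random matrix are introduced here, not from the lecture):

```python
import numpy as np

def lsi_retrieval(C: np.ndarray, q: np.ndarray, k: int = 2) -> np.ndarray:
    """Sketch of the recipe above: SVD, reduce to k dimensions, fold the
    query in, then rank documents by cosine similarity in the LSI space."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    U_k, s_k, V_k = U[:, :k], s[:k], Vt[:k, :].T        # reduced factors
    q_k = (1.0 / s_k) * (U_k.T @ q)                     # q_k = Sigma_k^-1 U_k^T q
    docs = V_k                                          # one k-dim row per document
    sims = docs @ q_k / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q_k) + 1e-12)
    return np.argsort(-sims)                            # document indices, best first

# Toy usage: 7 terms x 6 documents, query containing terms 0 and 2.
rng = np.random.default_rng(42)
C = rng.random((7, 6))
q = np.zeros(7)
q[[0, 2]] = 1.0
print(lsi_retrieval(C, q, k=2))
```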
![Page 76: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/76.jpg)
Optimality
SVD is optimal in the following sense: keeping the k largest singular values and setting all others to zero gives you the optimal approximation of the original matrix C (Eckart-Young theorem).
Optimal: no other matrix of the same rank (= with the same underlying dimensionality) approximates C better.
The measure of approximation is the Frobenius norm: $\|C\|_F = \sqrt{\sum_i \sum_j c_{ij}^2}$.
So LSI uses the "best possible" matrix.
Caveat: There is only a tenuous relationship between the Frobenius norm and cosine similarity between documents.
![Page 77: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/77.jpg)
Example from Dumais et al
![Page 78: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/78.jpg)
Latent Semantic Indexing (LSI)
![Page 79: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/79.jpg)
![Page 80: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/80.jpg)
![Page 81: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/81.jpg)
Reduced Model (K = 2)
![Page 82: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/82.jpg)
![Page 83: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/83.jpg)
LSI, SVD, & Eigenvectors
SVD decomposes the term × document matrix X as
$$ X = U \Sigma V^T $$
where U and V are the left and right singular vector matrices, and Σ is a diagonal matrix of singular values.
This corresponds to the eigenvector-eigenvalue decompositions $Z_1 = U L U^T$ and $Z_2 = V L V^T$,
where U and V are orthonormal and L is diagonal:
U: matrix of eigenvectors of $Z_1 = XX^T$
V: matrix of eigenvectors of $Z_2 = X^TX$
L: diagonal matrix of eigenvalues (the squared singular values, $L = \Sigma^2$)
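An added sketch (NumPy, random placeholder data) checking the correspondence stated above between the SVD of X and the eigen decompositions of XXᵀ and XᵀX:

```python
import numpy as np

# The left singular vectors of X are eigenvectors of X X^T, the right ones of
# X^T X, and the squared singular values are the shared non-zero eigenvalues.
rng = np.random.default_rng(7)
X = rng.random((5, 4))                        # toy term x document matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)
eigvals_Z1 = np.linalg.eigvalsh(X @ X.T)      # eigenvalues of Z1 = X X^T
eigvals_Z2 = np.linalg.eigvalsh(X.T @ X)      # eigenvalues of Z2 = X^T X

print(np.allclose(np.sort(s**2), np.sort(eigvals_Z2)))             # True
print(np.allclose(np.sort(eigvals_Z1)[-4:], np.sort(eigvals_Z2)))  # non-zero parts match
```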
![Page 84: Latent Semantic Indexing](https://reader035.fdocuments.us/reader035/viewer/2022081501/568135b5550346895d9d1b75/html5/thumbnails/84.jpg)
Computing Similarity in LSI