Evaluation of N-grams Conflation Approach in Text-based Information Retrieval


Page 1: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

Serge Kosinov, University of Alberta, Computing Science Department, Edmonton, AB, Canada

Page 2: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

N-gram Conflation Method

Goal: study a conflation method based on the n-gram approach, with some enhancements, and evaluate its performance in textual information retrieval.

What is conflation good for? Matching non-identical words that refer to the same principal concept.

Why is it important? It avoids a strong dependence of retrieval results on the exact wording of a user's query, and it accounts for the richness and redundancy of natural language.

Page 3: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

N-grams Method: Basic Idea

* Subdivide words into N-grams, i.e., the set of overlapping substrings of length N

Example: N=2: radio → (ra - ad - di - io)
         N=3: radio → (rad - adi - dio)

* Treat words with identical or highly similar N-gram structure as related and group them together (see the sketch below)
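A minimal sketch of the subdivision step in Python (the function name ngrams is illustrative, not part of the original method):

    def ngrams(word, n):
        """Return the overlapping substrings of length n, left to right."""
        return [word[i:i + n] for i in range(len(word) - n + 1)]

    print(ngrams("radio", 2))  # ['ra', 'ad', 'di', 'io']
    print(ngrams("radio", 3))  # ['rad', 'adi', 'dio']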

Page 4: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

N-grams Method: Basic Idea (Continued)

Word            Bigram structure                    Count (unique)
1 Photography   Ph ho ot to og gr ra ap ph hy       10 (9)
2 Photographic  Ph ho ot to og gr ra ap ph hi ic    11 (10)
3 Phonetic      Ph ho on ne et ti ic                7 (7)

Similarity (Dice coefficient: 2 x common unique bigrams / sum of unique counts):
Words 1 and 2: 8 common                 2*8/(9+10) = 0.8421
Words 1 and 3: 2 common (Ph ho)         2*2/(9+7)  = 0.2500
Words 2 and 3: 3 common (Ph ho ic)      2*3/(10+7) = 0.3529

Common unique bigrams of words 1 and 2: Ph ho ot to og gr ra ap
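A minimal sketch of this similarity computation in Python, assuming case-insensitive comparison of unique bigram sets:

    def dice_similarity(a, b, n=2):
        """Dice coefficient: 2 * |common unique n-grams| / (|A| + |B|)."""
        ga = {a[i:i + n].lower() for i in range(len(a) - n + 1)}
        gb = {b[i:i + n].lower() for i in range(len(b) - n + 1)}
        return 2 * len(ga & gb) / (len(ga) + len(gb))

    print(round(dice_similarity("Photography", "Photographic"), 4))  # 0.8421
    print(round(dice_similarity("Photography", "Phonetic"), 4))      # 0.25
    print(round(dice_similarity("Photographic", "Phonetic"), 4))     # 0.3529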

Page 5: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

Experiment Implementation

* Pre-process text collections

(remove stop words, punctuation, special characters, etc.)

* Find the set of unique terms and compute their similarity matrix

* Cluster this data and compute IDF-like correction multipliers for each N-gram

* Process queries by replacing the terms that fall into the obtained clusters with their cluster IDs, both in the document collection and in the queries, then pick the best match via the standard vector space model representation

Page 6: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

Computing Similarity Matrix

              Photography   Photographic   Phonetic
Photography   1.0000        0.8421         0.2500
Photographic                1.0000         0.3529
Phonetic                                   1.0000
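A sketch of building this matrix in Python, reusing the dice_similarity function from the sketch above (terms and resulting values match the slide):

    import numpy as np

    terms = ["photography", "photographic", "phonetic"]
    m = len(terms)
    sim = np.eye(m)  # diagonal is 1.0: every term fully matches itself
    for i in range(m):
        for j in range(i + 1, m):
            sim[i, j] = sim[j, i] = dice_similarity(terms[i], terms[j])
    print(np.round(sim, 4))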

Page 7: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

Clustering Data and Adjusting IDF

* Clustering technique: complete link agglomerative clustering (aka HCA)
  Example: C325 = {computer, computing, computer-based}

* IDF-like adjustments of weights:

      w_ij = bf_ij * log(N / n_j)

  where
      w_ij  - weight of bigram B_j in term cluster C_i
      bf_ij - frequency of bigram B_j in term cluster C_i
      N     - number of term clusters
      n_j   - number of term clusters where bigram B_j occurs at least once
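A sketch of both steps in Python, assuming the sim matrix and terms list from the previous example and SciPy's standard hierarchical-clustering routines; the cutoff value of 0.6 is illustrative, and the cluster numbering produced here will not match the slide's IDs such as C325:

    from collections import Counter
    from math import log

    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    # Complete-link agglomerative clustering on distance = 1 - similarity.
    condensed = squareform(1.0 - sim, checks=False)
    labels = fcluster(linkage(condensed, method="complete"),
                      t=0.6, criterion="distance")

    clusters = {}
    for term, lab in zip(terms, labels):
        clusters.setdefault(lab, []).append(term)

    # IDF-like weight of bigram B_j in term cluster C_i:
    #   w_ij = bf_ij * log(N / n_j)
    def bigrams(w):
        return [w[i:i + 2] for i in range(len(w) - 1)]

    cluster_bf = {c: Counter(b for t in ts for b in bigrams(t))
                  for c, ts in clusters.items()}
    N = len(clusters)                                  # number of term clusters
    n = Counter(b for bf in cluster_bf.values() for b in bf)  # clusters per bigram
    weights = {c: {b: f * log(N / n[b]) for b, f in bf.items()}
               for c, bf in cluster_bf.items()}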

Page 8: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

Processing Queries: Example

Document Collection            Query
Computer  → C325               Computed  → C325
Computing → C325               by        → (dropped)
Stable    → C487               Torvalds  → Torvalds
Torvalds  → Torvalds

Best match: cosine similarity coefficient between
document vector ( ..., C325, C487, Torvalds, ... )
and query vector ( ..., C325, Torvalds, ... )
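A sketch of this matching step in Python, with the term-to-cluster mapping taken from the example above; the vector representation is simplified to raw term counts:

    from collections import Counter
    from math import sqrt

    # Term -> cluster ID mapping from the example; stop words such as
    # "by" are assumed to have been dropped during pre-processing.
    conflate = {"computer": "C325", "computing": "C325", "computed": "C325",
                "stable": "C487", "torvalds": "Torvalds"}

    def to_vector(tokens):
        """Replace each term with its cluster ID (if clustered) and count."""
        return Counter(conflate.get(t.lower(), t) for t in tokens)

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u if k in v)
        norm = (sqrt(sum(x * x for x in u.values()))
                * sqrt(sum(x * x for x in v.values())))
        return dot / norm

    doc = to_vector(["Computer", "Computing", "Stable", "Torvalds"])
    query = to_vector(["Computed", "Torvalds"])
    print(round(cosine(doc, query), 4))  # the best-matching document scores highest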

Page 9: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

Experimental Results

Text collections used:

Text collection   Documents   Unique terms   Queries
ADI               82          1368           35
CISI              1460        12085          112
MED               1033        14488          30

Results: 3-point precision average (at 20, 50, 80% recall). Each column after the collection name corresponds to a conflation method; the percentage underneath each value shows the improvement achieved by N-gram conflation over that method.

Text collection   N-grams   Porter    First peak   Last peak   Max peak
ADI               0.53418   0.49068   0.37626      0.46144     0.38500
                            (8.9%)    (42.0%)      (15.8%)     (38.7%)
CISI              0.14715   0.13534   0.14257      0.13805     0.14163
                            (8.7%)    (3.2%)       (6.6%)      (3.9%)
MED               0.56925   0.54906   0.51062      0.54237     0.50095
                            (3.7%)    (11.5%)      (5.0%)      (13.6%)

Page 10: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

Inverse Frequency Weights Effect

[Figure: ADI dataset, 3-pt average precision vs. AHC cutoff value; curves: 3-pt AVG with IDF, 3-pt AVG no IDF, Porter 3-pt AVG.]

Association of unseen query terms with clusters:

* With IDF-like correction:    consolidation → {console};      editing → {editor, edition}

* Without IDF-like correction: consolidation → {condensation}; editing → {accrediting}

Page 11: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

Individual Query Analysis Example

* Other examples in which N-gram conflation outperforms other methods: criteria-criterion, exchange-interchange, system-subsystem, etc.

Query #14    Query #14 (translated)
future       C77
automatic    C301
medical      C32
diagnosis    diagnosis

Document #20 (transl.)    Document #20
C139                      photographic
C23                       computer
C307                      systems
C32                       biomedical
C273                      information
C214                      handling
C139                      photographic
C23                       computer-based
C307                      systems
C309                      proposed
C100                      attaining
comprehensive             comprehensive
C199                      understanding
C295                      specialized
C32                       biomedical
C312                      terms
C280                      concepts

Page 12: Evaluation of N-grams Conflation Approach in Text-based Information Retrieval

Conclusions and Directions for Further Study

Advantages of N-gram conflation:
* it is a language-independent approach
* it handles misprints and orthographic errors well
* the best gains are for special-form and compound words
* enhanced with IDF-like correction, it performs better than traditional stemming (Porter, etc.)

Disadvantages:
* clusters contain homophone noise
* straightforward HCA is impractical on large-scale datasets

Prospects:
* apply the method to more highly inflected languages
* combine N-grams and Porter stemming
* enhance the clustering routines