Index Construction: sorting. Paolo Ferragina, Dipartimento di Informatica, Università di Pisa. Reading: Chap. 4.


Page 1

Index Construction: sorting

Paolo Ferragina
Dipartimento di Informatica
Università di Pisa

Reading: Chap. 4

Page 2

Indexer steps

Dictionary & postings: how do we construct them?
• Scan the texts
• Find the proper tokens-list
• Append the docID to the proper tokens-list

How do we:
• Find? …time issue…
• Append? …space issues…
• Postings' size?
• Dictionary size? …in-memory issues…

Page 3

Indexer steps: create token

Sequence of pairs: < Modified token, Document ID >

What about var-length strings?

I did enact Julius Caesar I was killed

i' the Capitol; Brutus killed me.

Doc 1

So let it be with Caesar. The noble

Brutus hath told you Caesar was ambitious

Doc 2
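The pair-generation step above can be sketched in a few lines of Python. This is a minimal illustration, not the course's indexer: treating the "modified token" as just the lowercased word, and the tokenizer regex, are assumptions.

```python
import re

def token_pairs(docs):
    """Yield <modified token, docID> pairs by scanning each text.

    Sketch: the 'modified token' here is just the lowercased word;
    a real indexer would also apply stemming / stop-wording.
    """
    for doc_id, text in docs:
        for token in re.findall(r"[a-z']+", text.lower()):
            yield (token, doc_id)

docs = [(1, "I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me."),
        (2, "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious")]
pairs = list(token_pairs(docs))
print(pairs[:3])  # [('i', 1), ('did', 1), ('enact', 1)]
```

The resulting pair stream is exactly what the Sort step on the next slide consumes.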

Page 4

Indexer steps: Sort

Sort by terms, and then by docIDs

Core indexing step

Page 5

Indexer steps: Dictionary & Postings

Multiple term entries in a single document are merged.

Split into Dictionary and Postings

Doc. frequency information is added.

Sec. 1.2

Page 6

Some key issues…

[Figure: the dictionary holds terms and counts, with pointers to the lists of docIDs.]

Now:
• How do we sort?
• How much storage is needed?

Sec. 1.2

Page 7

Pay attention to the disk...

If sorting needs to manage strings:

Key observations:
• Array A is an "array of pointers to objects"
• For each object-to-object comparison A[i] vs A[j]: 2 random accesses to the 2 memory locations pointed to by A[i] and A[j]
• Hence Θ(n log n) random memory accesses (I/Os??)

You sort A, but this is an indirect sort: the memory containing the strings is accessed in random order.

Again, caching helps, but it may be less effective than before.
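A tiny sketch of the indirect sort just described (illustrative only): `A` holds indices standing in for pointers, and every comparison dereferences into the separate string area, which is exactly where the random accesses come from.

```python
# Indirect sort sketch: A holds indices ("pointers") into a separate
# string area; sorting A compares the pointed-to strings, so each
# comparison touches two arbitrary memory locations.
strings = ["pisa", "index", "sort", "disk", "cache"]
A = list(range(len(strings)))        # array of "pointers"
A.sort(key=lambda i: strings[i])     # each comparison dereferences into `strings`
print([strings[i] for i in A])       # ['cache', 'disk', 'index', 'pisa', 'sort']
```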

Page 8

Binary Merge-Sort

Merge-Sort(A,i,j)
01 if (i < j) then
02   m = (i+j)/2
03   Merge-Sort(A,i,m)
04   Merge-Sort(A,m+1,j)
05   Merge(A,i,m,j)

Divide, Conquer, Combine

[Figure: merging the runs 1 2 8 10 and 7 9 13 19; the output starts 1 2 7 …]

Merge is linear in the #items to be merged.

Page 9

Few key observations

Items = (short) strings = atomic...

On English Wikipedia, about 10^9 tokens to sort ⇒ Θ(n log n) memory accesses (I/Os??)

[5 ms] × n log2 n ≈ 3 years

In practice it is faster. Why?

Page 10

Use the same algorithm for disk?

Multi-way merge-sort, aka BSBI: Blocked Sort-Based Indexing

• The mapping term → termID must be kept in memory for constructing the pairs
• Consumes memory during the pairs' creation
• Needs two passes, unless you use hashing and thus accept some probability of collision.

Page 11

Implicit Caching…

[Figure: bottom-up binary merge-sort of 16 keys. Adjacent pairs are first sorted "implicitly in cache": (10 2)→(2 10), (5 1)→(1 5), (13 19), (9 7)→(7 9), (15 4)→(4 15), (8 3)→(3 8), (12 17), (6 11). The next merge levels produce:
1 2 5 10 | 7 9 13 19 | 3 4 8 15 | 6 11 12 17
1 2 5 7 9 10 13 19 | 3 4 6 8 11 12 15 17
1 2 3 4 5 6 7 8 9 10 11 12 13 15 17 19]

• N/M runs, each sorted in internal memory (no I/Os)
• Each merge level makes 2 passes (one Read / one Write) = 2 (N/B) I/Os
• There are log2 (N/M) merge levels
⇒ I/O-cost for binary merge-sort is ≈ 2 (N/B) log2 (N/M)

Page 12

A key inefficiency

After a few steps, every run is longer than B!!!

[Figure: merging the runs 1 2 4 7 9 10 13 19 and 3 5 6 8 11 12 15 17 through one page of B items per run plus one output buffer; output pages 1, 2, 3 are flushed to disk as the output run grows: 1, 2, 3, 4, …]

We are using only 3 pages, but memory contains M/B pages ≈ 2^30 / 2^15 = 2^15.

Page 13

Multi-way Merge-Sort

Sort N items with main memory M and disk pages of B items:
• Pass 1: produce N/M sorted runs.
• Pass i: merge X = M/B − 1 runs at a time.
⇒ log_X (N/M) merge passes.

[Figure: main-memory buffers of B items, one page per run (run 1 … run X) plus one output page, sitting between the input disk and the output disk.]
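A minimal in-memory sketch of one X-way merge pass, using the standard-library `heapq.merge` (which maintains exactly the kind of min-heap over the runs' front elements that the buffer figure implies; per-page buffering of B items is elided here).

```python
import heapq

# Multi-way merge sketch: merge X sorted runs in one pass with a
# min-heap over the runs' current front elements. On disk, each run
# would be read one page of B items at a time into its buffer, and
# the output buffer flushed to disk when full.
runs = [[1, 2, 5, 10], [7, 9, 13, 19], [3, 4, 8, 15], [6, 11, 12, 17]]
merged = list(heapq.merge(*runs))
print(merged)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 17, 19]
```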

Page 14

How it works

[Figure: X-way merging, e.g. the runs 1 2 5 10 | 7 9 13 19 and 3 4 6 8 11 … merge directly into 1 2 3 4 5 6 7 8 9 10 11 12 13 15 17 19.]

• N/M runs, each sorted in internal memory = 2 (N/B) I/Os
• Each level makes 2 passes (one Read / one Write) = 2 (N/B) I/Os
⇒ I/O-cost for X-way merge is ≈ 2 (N/B) I/Os per level, over log_X (N/M) levels.

Page 15

Cost of Multi-way Merge-Sort

Number of passes = log_X (N/M) ≈ log_{M/B} (N/M)

Total I/O-cost is Θ( (N/B) log_{M/B} (N/M) ) I/Os

A large fan-out (M/B) decreases the number of passes.

In practice: M/B ≈ 10^5 ⇒ #passes = 1 ⇒ a few minutes.
Tuning depends on disk features.

Compression would decrease the cost of a pass!
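Plugging illustrative numbers into the pass-count formula; the sizes N, M, B below are assumptions chosen only to match the slide's orders of magnitude (M = 2^30, B = 2^15, so a fan-out around 10^5).

```python
import math

def num_passes(N, M, B):
    """Merge passes after run formation: ceil(log_{M/B - 1}(N/M))."""
    fanout = M // B - 1            # X = M/B - 1 runs merged at a time
    runs = math.ceil(N / M)        # initial sorted runs
    if runs <= 1:
        return 0
    return math.ceil(math.log(runs, fanout))

# Illustrative (assumed) sizes: N = 10^12 items, M = 2^30, B = 2^15.
print(num_passes(10**12, 2**30, 2**15))  # 1
```

With a fan-out of roughly 2^15, even about a thousand initial runs collapse in a single merge pass, which is the "#passes = 1" claim above.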

Page 16

SPIMI: Single-pass in-memory indexing

Key idea #1: generate a separate dictionary for each block (no need for the term → termID map).

Key idea #2: don't sort; accumulate postings in postings lists as they occur (in internal memory).

Generate an inverted index for each block:
• more space available for postings
• compression is possible

Merge the block indexes into one big index: easy append if there is 1 file per postings list (docIDs are increasing within a block); otherwise a multi-way merge, as before.

Page 17

SPIMI-Invert
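The SPIMI-Invert procedure shown on this slide can be sketched in Python roughly as follows. This is a toy version under stated assumptions: the memory-exhaustion check and the write-block-to-disk step are elided, and only one block is processed.

```python
def spimi_invert(token_stream):
    """One SPIMI block: accumulate postings per term as pairs arrive.
    No global term->termID map and no sorting of the pairs; the
    dictionary is sorted only once, when the block is written out.
    (Sketch: a real version stops when memory is exhausted and
    writes the block to disk.)
    """
    dictionary = {}
    for term, doc_id in token_stream:
        postings = dictionary.setdefault(term, [])
        if not postings or postings[-1] != doc_id:
            postings.append(doc_id)       # docIDs arrive in increasing order
    return sorted(dictionary.items())     # sort terms once, at write time

block = [("caesar", 1), ("brutus", 1), ("caesar", 2), ("brutus", 2), ("noble", 2)]
print(spimi_invert(block))
# [('brutus', [1, 2]), ('caesar', [1, 2]), ('noble', [2])]
```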

Page 18

Distributed indexing

For web-scale indexing: must use a distributed computing cluster of PCs

Individual machines are fault-prone

Can unpredictably slow down or fail

How do we exploit such a pool of machines?

Page 19

Google data centers mainly contain commodity machines, and are distributed around the world.

Estimate: Google installs 100,000 servers each quarter, based on expenditures of 200–250 million dollars per year.

Page 20

Distributed indexing

Maintain a master machine directing the indexing job – considered “safe”.

Break up indexing into sets of (parallel) tasks.

Master machine assigns tasks to idle machines

Other machines can play many roles during the computation

Page 21

Parallel tasks

We will use two sets of parallel tasks: Parsers and Inverters.

Break the document collection in two possible ways:
• Term-based partition: one machine handles a subrange of terms
• Doc-based partition: one machine handles a subrange of documents

Page 22

Data flow: doc-based partitioning

[Figure: the Master assigns the document splits (Set1 … Setk) to Parsers; Inverters build the postings index lists IL1 … ILk, one per document subrange.]

Each query-term goes to many machines.

Page 23

Data flow: term-based partitioning

[Figure: the Master assigns the document splits (Set1 … Setk) to Parsers; each Parser partitions its output into the term-ranges a-f, g-p, q-z; one Inverter per term-range builds the postings for a-f, g-p, q-z.]

Each query-term goes to one machine.

Page 24

MapReduce

This is a robust and conceptually simple framework for distributed computing, without having to write code for the distribution part.

The Google indexing system (ca. 2002) consisted of a number of phases, each implemented in MapReduce.

Page 25

Data flow: term-based partitioning

[Figure: the same term-based partitioning pipeline seen as MapReduce: the Parsers are the Map phase, writing segment files (16–64 MB) to local disks; the Inverters are the Reduce phase, one per term-range a-f, g-p, q-z. Open question on the slide: can we guarantee that each term-range fits in one machine?]

Page 26

Dynamic indexing

Up to now, we have assumed static collections. It now occurs more frequently that:
• documents come in over time,
• documents are deleted and modified.

This induces:
• postings updates for terms already in the dictionary,
• new terms added to / deleted from the dictionary.

Page 27

Simplest approach

• Maintain a "big" main index; new docs go into a "small" auxiliary index.
• Search across both, and merge the results.

Deletions:
• Invalidation bit-vector for deleted docs.
• Filter search results (i.e. docs) by the invalidation bit-vector.

Periodically, re-index into one main index.

Page 28

Issues with 2 indexes

Poor performance: merging the auxiliary index into the main index is efficient only if we keep a separate file for each postings list. The merge is then the same as a simple append [new docIDs are greater]. But this needs a lot of files, which is inefficient for the O/S.

Assumption so far: the index is one big file. In reality: use a scheme somewhere in between (e.g., split very large postings lists, collect postings lists of length 1 in one file, etc.).

Page 29

Logarithmic merge

Maintain a series of indexes, each twice as large as the previous one: 2^0 M, 2^1 M, 2^2 M, 2^3 M, …

• Keep the smallest (Z0) in memory (of size M)
• Keep the larger ones (I0, I1, …) on disk (of sizes 2^0 M, 2^1 M, …)
• If Z0 gets too big (> M), write it to disk as I0, or merge it with I0 (if I0 already exists) into Z1
• Either write Z1 to disk as I1 (if there is no I1), or merge it with I1 to form Z2
• etc.

#indexes = log2 (T/M)
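A toy sketch of this cascading-merge logic. Plain sorted lists stand in for indexes; the capacity M = 2 and the re-sort-based merge (instead of a linear merge) are simplifications for illustration.

```python
def log_merge_insert(memory, disk, doc, M=2):
    """Logarithmic merge sketch: `memory` is Z0 (capacity M postings),
    disk[k] holds index I_k (or None if that slot is empty).
    Toy version: indexes are plain sorted lists, M = 2.
    """
    memory.append(doc)
    if len(memory) <= M:
        return memory, disk
    Z = sorted(memory)                   # Z0 spills
    k = 0
    while k < len(disk) and disk[k] is not None:
        Z = sorted(Z + disk[k])          # merge with I_k to form Z_{k+1}
        disk[k] = None
        k += 1
    if k == len(disk):
        disk.append(None)
    disk[k] = Z                          # write Z as I_k
    return [], disk

memory, disk = [], []
for d in range(1, 10):                   # insert docIDs 1..9
    memory, disk = log_merge_insert(memory, disk, d)
print([len(i) if i else 0 for i in disk], memory)  # [3, 6] []
```

Note how each posting is re-merged only when its index cascades up one level, which is where the O(log (T/M)) bound on the next slide comes from.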

Page 30

Some analysis (T = #postings = #tokens)

Auxiliary and main index: index construction time is O(T^2), as each posting is touched in each merge.

Logarithmic merge: each posting is merged O(log (T/M)) times, so the complexity is O(T log (T/M)).

Logarithmic merge is more efficient for index construction, but its query processing requires merging O(log (T/M)) lists of results.

Page 31

Web search engines

Most search engines now support dynamic indexing (news items, blogs, new topical web pages).

But they also (sometimes/typically) periodically reconstruct the index from scratch: query processing is then switched to the new index, and the old index is deleted.

Page 32

Query processing: optimizations

Paolo Ferragina
Dipartimento di Informatica
Università di Pisa

Reading: Sec. 2.3

Page 33

Augment postings with skip pointers (at indexing time)

How do we deploy them? Where do we place them?

[Figure: two postings lists with skip pointers; upper list: 2 4 8 41 48 64 128, with skips pointing to 41 and 128; lower list: 1 2 3 8 11 17 21 31, with skips pointing to 11 and 31.]

Sec. 2.3

Page 34

Using skips

[Figure: the same two postings lists with skip pointers; upper list: 2 4 8 41 48 64 128 (skips to 41 and 128); lower list: 1 2 3 8 11 17 21 31 (skips to 11 and 31).]

Suppose we've stepped through the lists until we process 8 on each list. We match it and advance.

We then have 41 on the upper list and 11 on the lower. 11 is smaller.

But the skip successor of 11 on the lower list is 31, so we can skip ahead past the intervening postings.

Sec. 2.3
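The skip-aware intersection can be sketched as follows. This is a simplified version where skip spans are implicit (fixed stride over the array) rather than stored pointers, and the default √L span is an assumption taken from the placement heuristic discussed later.

```python
def intersect_with_skips(p1, p2, skip=None):
    """Intersect two sorted postings lists, following a 'skip pointer'
    (here: a fixed stride) whenever it lands on a docID that is still
    <= the other list's current docID."""
    def span(p):
        return skip or max(1, int(len(p) ** 0.5))  # ~sqrt(L) heuristic
    s1, s2 = span(p1), span(p2)
    i = j = 0
    out = []
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            out.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            if i + s1 < len(p1) and p1[i + s1] <= p2[j]:
                i += s1                      # successful skip on p1
            else:
                i += 1
        else:
            if j + s2 < len(p2) and p2[j + s2] <= p1[i]:
                j += s2                      # successful skip on p2
            else:
                j += 1
    return out

upper = [2, 4, 8, 41, 48, 64, 128]
lower = [1, 2, 3, 8, 11, 17, 21, 31]
print(intersect_with_skips(upper, lower))  # [2, 8]
```

On the example lists, the jump from 11 toward 31 on the lower list is exactly the skip described on the slide.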

Page 35

Placing skips

Tradeoff:
• More skips ⇒ shorter skip spans ⇒ more likely to skip. But lots of comparisons against skip pointers.
• Fewer skips ⇒ few pointer comparisons, but then long skip spans ⇒ few successful skips.

Sec. 2.3

Page 36

Placing skips

Simple heuristic: for postings of length L, use √L evenly-spaced skip pointers.

This ignores the distribution of query terms. It is easy if the index is relatively static; harder if L keeps changing because of updates.

This definitely used to help; with modern hardware it may not, unless you're memory-based: the I/O cost of loading a bigger postings list can outweigh the gains from quicker in-memory merging!

Sec. 2.3

Page 37

Faster = caching?

Two opposite approaches:

I. Cache the query results (exploits query locality)
II. Cache pages of postings lists (exploits term locality)

Page 38

Query processing: phrase queries and positional indexes

Paolo Ferragina
Dipartimento di Informatica
Università di Pisa

Reading: Sec. 2.4

Page 39

Phrase queries

Want to be able to answer queries such as “stanford university” – as a phrase

Thus the sentence “I went at Stanford my university” is not a match.

Sec. 2.4

Page 40

Solution #1: Biword indexes

For example, the text "Friends, Romans, Countrymen" would generate the biwords "friends romans" and "romans countrymen".

Each of these biwords is now a dictionary term

Two-word phrase query-processing is immediate.

Sec. 2.4.1

Page 41

Longer phrase queries

Longer phrases are processed by reducing them to biword queries in AND:

"stanford university palo alto" can be broken into the Boolean query on biwords:

"stanford university" AND "university palo" AND "palo alto"

Can have false positives! We need the docs to verify. The index blows up. Biwords are combined with other solutions.

Sec. 2.4.1

Page 42

Solution #2: Positional indexes

In the postings, store for each term and document the position(s) in which that term occurs:

<term, number of docs containing term;
 doc1: position1, position2, … ;
 doc2: position1, position2, … ;
 …>

Sec. 2.4.2

Page 43

Processing a phrase query

Query: "to be or not to be".

to: <2: 1,17,74,222,551; 4: 8,16,190,429,433; 7: 13,23,191; …>
be: <1: 17,19; 4: 17,191,291,430,434; 5: 14,19,101; …>

Same general method for proximity searches

Sec. 2.4.2
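For two adjacent query terms, the core positional check can be sketched as follows (the doc-level intersection is elided; the position lists below are Doc 4's from the example above).

```python
def phrase_match(pos1, pos2, k=1):
    """Return the positions where term2 occurs exactly k positions
    after term1 (k=1 means adjacent, as in a 2-word phrase).
    pos1/pos2 are the position lists of the two terms in the SAME doc;
    proximity search generalizes this to |p2 - p1| <= k.
    """
    s = set(pos2)
    return [p for p in pos1 if p + k in s]

# Doc 4 from the slide: to -> 8,16,190,429,433 ; be -> 17,191,291,430,434
to_pos = [8, 16, 190, 429, 433]
be_pos = [17, 191, 291, 430, 434]
print(phrase_match(to_pos, be_pos))  # [16, 190, 429, 433]
```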

Page 44

Query term proximity

Free text queries: just a set of terms typed into the query box – common on the web

Users prefer docs in which query terms occur within close proximity of each other

Would like scoring function to take this into account – how?

Sec. 7.2.2

Page 45

Positional index size

You can compress position values/offsets. Nevertheless, a positional index expands postings storage by a factor of 2–4 in English.

Still, a positional index is now commonly used because of the power and usefulness of phrase and proximity queries, whether used explicitly or implicitly in a ranking retrieval system.

Sec. 2.4.2

Page 46

Combination schemes

Biword + positional index is a profitable combination. For particular phrases ("Michael Jackson", "Britney Spears") it is inefficient to keep on merging positional postings lists ⇒ better to store them as biwords.

More complicated mixing strategies do exist!

Sec. 2.4.3

Page 47

Soft-AND

E.g. query: rising interest rates

• Run the query as a phrase query.
• If fewer than K docs contain the phrase "rising interest rates", run the two phrase queries "rising interest" and "interest rates".
• If we still have fewer than K docs, run the "vector space query" rising interest rates.
• Rank the matching docs.

Sec. 7.2.3

Page 48

Query processing: other sophisticated queries

Paolo Ferragina
Dipartimento di Informatica
Università di Pisa

Reading: Sec. 3.2 and 3.3

Page 49

Wild-card queries: *

mon*: find all docs containing words beginning with "mon". Use a prefix-search data structure.

*mon: find words ending in "mon". Maintain a prefix-search data structure for the reversed terms.

How can we solve the wild-card query pro*cent?

Sec. 3.2

Page 50

What about * in the middle?

co*tion: we could look up co* AND *tion and intersect the two lists. Expensive!

se*ate AND fil*er: this may result in many Boolean AND queries.

The solution: transform wild-card queries so that the *'s occur at the end. This gives rise to the Permuterm Index.

Sec. 3.2

Page 51

Permuterm index

For the term hello, index it under: hello$, ello$h, llo$he, lo$hel, o$hell, $hello, where $ is a special symbol.

Queries:
• X → lookup on X$
• X* → lookup on $X*
• *X → lookup on X$*
• *X* → lookup on X*
• X*Y → lookup on Y$X*
• X*Y*Z → ??? Exercise!

Sec. 3.2.1

Page 52

Permuterm query processing

Rotate the query wild-card to the right: P*Q → Q$P*.

Now use the prefix-search data structure.

Permuterm problem: it ≈ quadruples the lexicon size (an empirical observation for English).

Sec. 3.2.1
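A minimal sketch of the permuterm construction and the query rotation (the single-`*` case; names are illustrative):

```python
def permuterm_rotations(term):
    """All rotations of term+'$'; each one becomes a dictionary key
    that points back to the original term."""
    s = term + "$"
    return [s[i:] + s[:i] for i in range(len(s))]

def permuterm_query(q):
    """Rotate a single-* wildcard so the * ends up at the end:
    P*Q -> Q$P*, i.e. a prefix search on Q$P."""
    head, tail = q.split("*")
    return tail + "$" + head + "*"

print(permuterm_rotations("hello"))
# ['hello$', 'ello$h', 'llo$he', 'lo$hel', 'o$hell', '$hello']
print(permuterm_query("m*n"))  # 'n$m*'
```

The ~4x lexicon blow-up is visible directly: a term of length L contributes L+1 rotated keys.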

Page 53

K-gram indexes

The k-gram index finds terms based on a query consisting of k-grams (here k = 2).

[Figure: 2-gram index; $m points to the terms mace, madden, …; mo points to among, amortize, …; on points to among, arond, ….]

Sec. 3.2.2

Page 54

Processing wild-cards

The query mon* can now be run as: $m AND mo AND on

This gets the terms that match the AND version of our wildcard query. But we must post-filter these terms against the query (e.g., moon contains $m, mo and on, yet does not match mon*).

Sec. 3.2.2
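A small end-to-end sketch of the 2-gram lookup plus the post-filter step. The toy lexicon is an assumption; note how moon survives the AND but is then removed by the filter.

```python
from collections import defaultdict

def kgrams(term, k=2):
    """k-grams of a term with $ as boundary marker: mon -> $m, mo, on, n$."""
    padded = "$" + term + "$"
    return {padded[i:i + k] for i in range(len(padded) - k + 1)}

lexicon = ["moon", "month", "among", "demon", "mongoose"]   # toy lexicon
index = defaultdict(set)
for t in lexicon:
    for g in kgrams(t):
        index[g].add(t)

# mon*  ->  $m AND mo AND on, then post-filter against the pattern
candidates = index["$m"] & index["mo"] & index["on"]
answers = sorted(t for t in candidates if t.startswith("mon"))
print(sorted(candidates), answers)
# ['mongoose', 'month', 'moon'] ['mongoose', 'month']
```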

Page 55

Isolated word correction

Given a lexicon and a character sequence Q, return the words in the lexicon closest to Q.

What's "closest"?
• Edit distance (Levenshtein distance)
• Weighted edit distance
• n-gram overlap

Useful for query misspellings.

Sec. 3.3.2

Page 56

Edit distance

Given two strings S1 and S2, it is the minimum number of operations needed to convert one into the other.

Operations are typically character-level: Insert, Delete, Replace, (Transposition).

E.g., the edit distance from dof to dog is 1; from cat to act it is 2 (just 1 with transpose); from cat to dog it is 3.

Generally found by dynamic programming.

Sec. 3.3.3

Page 57

Approximate String Matching

Solved by dynamic programming. Let E(i,j) = edit distance between P[1..i] and T[1..j].

E(i,0) = E(0,i) = i
E(i,j) = E(i-1, j-1)                                  if P_i = T_j
E(i,j) = min{ E(i, j-1), E(i-1, j), E(i-1, j-1) } + 1  if P_i ≠ T_j
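The recurrence translates directly into a DP table (a standard Levenshtein implementation; the first call below reproduces the final value 4 of the worked example on the next page).

```python
def edit_distance(P, T):
    """Levenshtein distance via the DP recurrence:
    match -> diagonal; otherwise 1 + min(insert, delete, substitute)."""
    m, n = len(P), len(T)
    E = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        E[i][0] = i                      # delete all of P[1..i]
    for j in range(n + 1):
        E[0][j] = j                      # insert all of T[1..j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if P[i - 1] == T[j - 1]:
                E[i][j] = E[i - 1][j - 1]
            else:
                E[i][j] = 1 + min(E[i][j - 1], E[i - 1][j], E[i - 1][j - 1])
    return E[m][n]

print(edit_distance("patt", "pttapa"))  # 4
print(edit_distance("cat", "dog"))      # 3
```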

Page 58

Example

DP table for P = patt, T = pttapa (T indexed 0..6):

P \ T    -   p   t   t   a   p   a
  -      0   1   2   3   4   5   6
  p      1   0   1   2   3   4   5
  a      2   1   1   2   2   3   4
  t      3   2   1   1   2   3   4
  t      4   3   2   1   2   3   4

Page 59

Weighted edit distance

As above, but the weight of an operation depends on the character(s) involved. This is meant to capture OCR or keyboard errors, e.g., m is more likely to be mis-typed as n than as q; therefore, replacing m by n is a smaller edit distance than replacing it by q.

Requires a weight matrix as input. Modify the dynamic programming to handle weights.

Sec. 3.3.3

Page 60

k-gram overlap

• Enumerate all the k-grams in the query string as well as in the lexicon.
• Use the k-gram index (recall wild-card search) to retrieve all lexicon terms matching any of the query k-grams.
• Threshold by the number of matching k-grams: if the term is L chars long and E is the number of allowed errors (each error can kill up to k k-grams), then at least (L-k+1) - E*k of the query k-grams must match a dictionary term for it to be a candidate answer.

Sec. 3.3.4
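The candidate threshold is simple arithmetic; a one-line helper (illustrative) makes the bound concrete.

```python
def min_matching_kgrams(L, E, k=2):
    """Filtering lower bound: a term of length L has L-k+1 k-grams,
    and each of the E allowed errors can destroy up to k of them, so
    a candidate must share at least (L-k+1) - E*k query k-grams."""
    return (L - k + 1) - E * k

# e.g. a query term of length 10, one allowed error, 2-grams:
print(min_matching_kgrams(10, 1))  # 7
```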

Page 61

Context-sensitive spell correction

Text: I flew from Heathrow to Narita.

Consider the phrase query "flew form Heathrow". We'd like to respond:

Did you mean "flew from Heathrow"?

because no docs matched the query phrase.

Sec. 3.3.5