Introduction to Information Retrieval
Acknowledgements to Pandu Nayak and Prabhakar Raghavan of Stanford, Hinrich Schütze and Christina Lioma of Stuttgart, and Lee Giles (and his sources) of Penn State
Slide 2
So far, you have learned to crawl the web: being a good citizen and not hurting the servers, extracting the kind of information you want, and storing the retrieved material locally. Now, what are you going to do with those materials? Develop a way to retrieve what you want from that collection, as you need it.
Slide 3
Finding what you need in a collection of documents. Information Retrieval: given a collection of documents, retrieve one or more documents that contain the specific information you want; obtain the answer to an information need by querying the document collection.
Slide 4
Tonight's class: introduce the concepts of Information Retrieval. Assuming you have a collection, how do you look into the documents for your information need, and how do you do it efficiently?
Slide 5
Later: tools to build indices, such as Apache Lucene and Apache Solr.
Slide 6
A brief introduction to Information Retrieval. Recall our primary resource: Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. The entire book is available online, free, at http://nlp.stanford.edu/IR-book/information-retrieval-book.html. I will use some of the slides that the authors provide to go with the book. I will also use slides from other sources and make new ones. The primary other source is Dr. Lee Giles, Penn State (Information Retrieval, IST441).
Slide 7
The authors' definition: Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers). Note the use of the word "usually": we will see examples where the material is not documents, and not text.
Slide 8
Examples and scaling. IR is about finding a needle in a haystack: finding some particular thing in a very large collection of similar things. Our examples are necessarily small so that we can comprehend them, but remember that everything we say must scale to very large quantities.
Slide 9
Searching Shakespeare. Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia? See http://www.rhymezone.com/shakespeare/. One could grep all of Shakespeare's plays for Brutus and Caesar, then strip out lines containing Calpurnia. Why is that not the answer? It is slow (for large corpora); NOT Calpurnia is non-trivial; other operations (e.g., find the word Romans near countrymen) are not feasible; and there is no ranked retrieval (returning the best documents first).
Slide 10
Document match. Go to http://www.rhymezone.com/shakespeare/ and enter, one at a time: Caesar, Brutus, Calpurnia. For each, note the collection of plays that contain the term. We need to associate one or more plays with each term: term incidence. For our example, we will use a subset of the plays.
Slide 11
Term-document incidence. First approach: make a matrix with terms on one axis and plays on the other, covering all the plays and all the terms. An entry is 1 if the play contains the word, 0 otherwise. Query: Brutus AND Caesar BUT NOT Calpurnia.
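To make the matrix concrete, here is a minimal Python sketch that builds it from a toy collection; the play "texts" below are reduced stand-ins containing only the seven terms of the book's example, not real Shakespeare:

```python
# A toy term-document incidence matrix. The play "texts" are reduced
# stand-ins containing only the example's seven terms.
docs = {
    "Antony and Cleopatra": "antony brutus caesar cleopatra mercy worser",
    "Julius Caesar":        "antony brutus caesar calpurnia",
    "The Tempest":          "mercy worser",
    "Hamlet":               "brutus caesar mercy worser",
    "Othello":              "caesar mercy worser",
    "Macbeth":              "antony caesar mercy",
}

terms = sorted({t for text in docs.values() for t in text.split()})
# incidence[term] is a 0/1 vector, one entry per play, in docs order.
incidence = {t: [1 if t in text.split() else 0 for text in docs.values()]
             for t in terms}

print(incidence["brutus"])   # [1, 1, 0, 1, 0, 0] -- i.e. 110100
```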
Slide 12
Incidence vectors. So we have a 0/1 vector for each term. To answer the query, take the vectors for Brutus, Caesar, and Calpurnia (complemented) and bitwise AND them: 110100 AND 110111 AND 101111 = 100100. Reminder: 0 AND 0 = 0, 0 AND 1 = 0, 1 AND 0 = 0, 1 AND 1 = 1. The effect of the AND operation is 1 if both items are 1, and 0 otherwise.
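A minimal sketch of this query in Python, using integers as bit vectors with the toy values from the slide:

```python
# Incidence-vector query with Python integers as bit vectors.
# The six bits correspond to six plays (leftmost = first play).
brutus    = 0b110100
caesar    = 0b110111
calpurnia = 0b010000

# Brutus AND Caesar AND NOT Calpurnia: complement Calpurnia, then AND.
mask = 0b111111                      # keep only the six play positions
result = brutus & caesar & (~calpurnia & mask)

print(f"{result:06b}")               # -> 100100: plays 1 and 4 match
```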
Slide 13
Answer to query. Antony and Cleopatra, Act III, Scene ii. Agrippa [Aside to Domitius Enobarbus]: "Why, Enobarbus, / When Antony found Julius Caesar dead, / He cried almost to roaring; and he wept / When at Philippi he found Brutus slain." Hamlet, Act III, Scene ii. Lord Polonius: "I did enact Julius Caesar: / I was killed i' the Capitol; Brutus killed me."
Slide 14
Spot check. Try another one: what is the vector for the query Antony AND mercy? What would we do to find Antony OR mercy?
Slide 15
Basic assumptions about information retrieval. Collection: a fixed set of documents. Goal: retrieve documents with information that is relevant to the user's information need and helps the user complete a task.
Slide 16
The classic search model. (Diagram: Task → Info Need → Verbal form → Query → Search Engine, run against the Corpus → Results, with a Query Refinement loop back to the query.) Ultimately, there is some task to perform. Some information is required in order to perform the task. The information need must be expressed in words (usually), and then in the form of a query that can be processed. It may be necessary to rephrase the query and try again.
Slide 17
The classic search model, continued: the potential pitfalls between task and query results. Task: get rid of mice in a politically correct way. Info need: information about removing mice without killing them. Verbal form: "How do I trap mice alive?" Query: mouse trap. At each step, something can go wrong: misconception, mistranslation, misformulation.
Slide 18
How good are the results? Precision: what fraction of the retrieved results are actually relevant to the information need? Recall: what fraction of the relevant documents in the collection were retrieved? These are the basic concepts of information retrieval evaluation.
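As a small illustration, with made-up document IDs, precision and recall can be computed from the sets of retrieved and relevant documents:

```python
# Precision and recall over sets of document IDs (IDs invented here).
retrieved = {1, 2, 5, 8}          # what the system returned
relevant  = {2, 5, 7, 9, 11}      # what actually satisfies the need

hits = retrieved & relevant        # relevant documents that were retrieved

precision = len(hits) / len(retrieved)   # fraction of results that are relevant
recall    = len(hits) / len(relevant)    # fraction of relevant docs found

print(precision, recall)           # 0.5 0.4
```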
Slide 19
Size considerations. Consider N = 1 million documents, each with about 1,000 words. At an average of 6 bytes per word (including spaces and punctuation), that is 6 GB of data in the documents. Say there are M = 500K distinct terms among these.
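The arithmetic behind the 6 GB figure, as a one-line check:

```python
N = 1_000_000          # documents in the collection
words_per_doc = 1_000  # average words per document
bytes_per_word = 6     # average bytes per word, spaces/punctuation included

print(N * words_per_doc * bytes_per_word)   # 6000000000 bytes, i.e. 6 GB
```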
Slide 20
The matrix does not work. A 500K x 1M matrix has half a trillion 0s and 1s, but it contains no more than one billion 1s: the matrix is extremely sparse. What's a better representation? We record only the 1 positions; i.e., we don't need to know which documents lack a term, only those that contain it. Why?
Slide 21
Inverted index. For each term t, we must store a list of all documents that contain t, identifying each by a docID, a document serial number. Can we use fixed-size arrays for this?
Brutus → 1 2 4 11 31 45 173 174
Caesar → 1 2 4 5 6 16 57 132
Calpurnia → 2 31 54 101
What happens if the word Caesar is added to document 14? More likely, document 14, which contains Caesar, is added to the collection.
Slide 22
Inverted index. We need variable-size postings lists. On disk, a continuous run of postings is normal and best; in memory, we can use linked lists or variable-length arrays, with tradeoffs in size and ease of insertion. Each docID in a list is a posting, and each postings list is sorted by docID (more later on why):
Brutus → 1 2 4 11 31 45 173 174
Caesar → 1 2 4 5 6 16 57 132
Calpurnia → 2 31 54 101
Slide 23
Inverted index construction (Sec. 1.2). Documents to be indexed ("Friends, Romans, countrymen.") feed a Tokenizer, producing the token stream: Friends Romans Countrymen. Linguistic modules (stop words, stemming, capitalization, cases, etc.) produce the modified tokens: friend roman countryman. The Indexer then builds the inverted index:
friend → 2 4
roman → 1 2
countryman → 13 16
Slide 24
Indexer steps: token sequence. Produce the sequence of (modified token, document ID) pairs. Doc 1: "I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me." Doc 2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious." Initially, all the tokens from document 1, then all the tokens from document 2, etc., without regard for duplication.
Slide 25
Indexer steps: sort. Sort by terms, and by docID within terms. This is the core indexing step.
Slide 26
Indexer steps: dictionary & postings. Multiple term entries within a single document are merged. The result is split into a dictionary and postings, and document frequency information is added: for each term, the number of documents in which it appears, alongside the IDs of those documents.
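A compact Python sketch of these three steps, run on the two documents above. The tokenization is deliberately crude (lowercase, split on letters and apostrophes) and the linguistic modules are omitted, so the output is only an approximation of what a full indexer would produce:

```python
from collections import defaultdict
import re

docs = {
    1: "I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious",
}

# Step 1: sequence of (token, docID) pairs, duplicates and all.
pairs = [(tok, doc_id)
         for doc_id, text in docs.items()
         for tok in re.findall(r"[a-z']+", text.lower())]

# Step 2: sort by term, then by docID within term (the core indexing step).
pairs.sort()

# Step 3: merge duplicates into a dictionary of postings lists,
# keeping document frequency alongside each term.
postings = defaultdict(list)
for term, doc_id in pairs:
    if not postings[term] or postings[term][-1] != doc_id:
        postings[term].append(doc_id)

for term in sorted(postings):
    print(term, len(postings[term]), postings[term])  # term, doc freq, postings
```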
Slide 27
Where do we pay in storage? In the terms and counts of the dictionary, in the lists of docIDs that make up the postings, and in the pointers that connect them.
Slide 28
Storage: a small diversion. Computer storage comes in layers: processor caches, main memory, and external storage (hard disk, other devices), with very substantial differences in access speeds. Processor caches are used for rapid access to data that will be needed soon. Main memory: limited quantities, high-speed access. Hard disk: much larger quantities, restricted speed, and access in fixed units (blocks).
Slide 29
Some size examples, from the iMac. Memory: 4GB (two 2GB SO-DIMMs) of 1333MHz DDR3 SDRAM; four SO-DIMM slots support up to 16GB. Hard drive: 500GB or 1TB 7200-rpm Serial ATA, with an optional 2TB 7200-rpm Serial ATA drive. These are big numbers, but the potential size of a significant collection is larger still. The steps taken to optimize the use of storage are critical to satisfactory response time.
Slide 30
Implications of size limits. (Figure: real memory (RAM) divided into slots 1 through 12, and virtual memory on disk holding pages 1 through 72. When code in virtual page 2 references virtual page 71, which is not in real memory, a page fault occurs and page 71 is brought from disk into a free slot of real memory.)
Slide 31
How do we process a query? Using the index we just built,
examine the terms in some order, looking for the terms in the
query.
Slide 32
Query processing: AND. Consider processing the query Brutus AND Caesar. Locate Brutus in the dictionary and retrieve its postings; locate Caesar in the dictionary and retrieve its postings; then merge the two postings lists:
Brutus → 2 4 8 16 32 64 128
Caesar → 1 2 3 5 8 13 21 34
Slide 33
The merge. Walk through the two postings lists simultaneously, in time linear in the total number of postings entries:
Brutus → 2 4 8 16 32 64 128
Caesar → 1 2 3 5 8 13 21 34
Result → 2 8
If the list lengths are x and y, the merge takes O(x+y) operations. What does that mean? Crucial: the postings must be sorted by docID. (Sec. 1.3)
Slide 34
Intersecting two postings lists (a merge algorithm).
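The book gives this merge as the INTERSECT pseudocode; here is a plain Python rendering, using the Brutus and Caesar postings from the previous slides:

```python
def intersect(p1, p2):
    """Merge two postings lists sorted by docID; O(x + y) for lists
    of length x and y, since each step advances at least one pointer."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])      # docID in both lists: a match
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1                    # advance the pointer at the smaller docID
        else:
            j += 1
    return answer

brutus = [2, 4, 8, 16, 32, 64, 128]
caesar = [1, 2, 3, 5, 8, 13, 21, 34]
print(intersect(brutus, caesar))      # [2, 8]
```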
Slide 35
Spot check. Let's assume that the term mercy appears in documents 1, 2, 13, 18, 24, 35, 54 and the term noble appears in documents 1, 5, 7, 13, 22, 24, 56. Show the document lists, then step through the merge algorithm to obtain the search results.
Slide 36
Boolean queries: exact match. The Boolean retrieval model lets you ask any query that is a Boolean expression: Boolean queries use AND, OR, and NOT to join query terms. The model views each document as a set of words and is precise: a document either matches the condition or it does not. It is perhaps the simplest model to build, and it was the primary commercial retrieval tool for three decades. Many search systems you still use are Boolean: email, library catalogs, Mac OS X Spotlight. (Sec. 1.3)
Slide 37
Query optimization. Consider a query that is an AND of n terms, n > 2. For each of the terms, get its postings list, then AND them together. Example query: BRUTUS AND CALPURNIA AND CAESAR. What is the best order for processing this query?
Slide 38
Query optimization. Example query: BRUTUS AND CALPURNIA AND CAESAR. A simple and effective optimization: process the terms in order of increasing document frequency. Start with the shortest postings list, then keep cutting the intermediate result further. In this example: first CAESAR, then CALPURNIA, then BRUTUS.
Slide 39
Optimized intersection algorithm for conjunctive queries.
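Here is a Python sketch of the optimized intersection, with made-up postings chosen so that CAESAR has the shortest list, matching the processing order claimed on the previous slide:

```python
def intersect(p1, p2):
    """Linear merge of two docID-sorted postings lists (as on slide 34)."""
    out, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            out.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return out

def intersect_query(terms, index):
    """AND query over several terms: intersect the postings lists in
    order of increasing length, so intermediate results stay small."""
    lists = sorted((index[t] for t in terms), key=len)
    result = lists[0]
    for plist in lists[1:]:
        if not result:          # empty intermediate result: stop early
            break
        result = intersect(result, plist)
    return result

# Made-up postings: CAESAR shortest, then CALPURNIA, then BRUTUS.
index = {
    "brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
    "calpurnia": [2, 31, 54, 101],
    "caesar":    [2, 31],
}
print(intersect_query(["brutus", "calpurnia", "caesar"], index))  # [2, 31]
```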
Slide 40
More general optimization. Example query: (MADDING OR CROWD) AND (IGNOBLE OR STRIFE). Get the frequencies for all terms, estimate the size of each OR by the sum of its terms' frequencies (a conservative upper bound), and process the ORs in increasing order of estimated size.
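A minimal sketch of that estimate, again with invented postings. The sum of the list lengths over-counts documents containing both terms, which is why it is a conservative (upper-bound) estimate of the union's size:

```python
def or_size_estimate(group, index):
    """Upper bound on the size of (t1 OR t2 OR ...): the sum of the
    terms' postings-list lengths."""
    return sum(len(index[t]) for t in group)

# Query: (MADDING OR CROWD) AND (IGNOBLE OR STRIFE), made-up frequencies.
index = {"madding": [3, 9], "crowd": [3, 7, 9, 12],
         "ignoble": [9], "strife": [5, 9, 21]}
groups = [["madding", "crowd"], ["ignoble", "strife"]]

# Process the ORs in increasing order of their estimated sizes.
for group in sorted(groups, key=lambda g: or_size_estimate(g, index)):
    print(group, or_size_estimate(group, index))
```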
Slide 41
Scaling. These basic techniques are pretty simple, but there are challenges. Scaling: as everything becomes digitized, how well do the processes scale? Intelligent information extraction: I want the information itself, not just a link to a place that might have it.
Slide 42
How much information is there? Soon almost everything will be recorded and indexed, and most bytes will never be seen by humans. Data summarization, trend detection, and anomaly detection are key technologies. See Mike Lesk, "How much information is there?": http://www.lesk.com/mlesk/ksg97/ksg.html. See Lyman & Varian, "How much information?": http://www.sims.berkeley.edu/research/projects/how-much-info/
(Figure: the scale of storage, from Kilo through Mega, Giga, Tera, Peta, Exa, Zetta, and Yotta, with examples along the way: a book, a photo, a movie, all books (words), all multimedia, everything recorded. The small prefixes run the other way: 10^-3 milli, 10^-6 micro, 10^-9 nano, 10^-12 pico, 10^-15 femto, 10^-18 atto, 10^-21 zepto, 10^-24 yocto. Source: Gray, Microsoft; slide source: Lee Giles, modified.)
Slide 43
Getting the required information depends on: acquiring information, storing information, indexing, interaction with the information source, and evaluation.
Slide 44
Getting the required information. We have learned about: acquiring information, storing information, indexing (the basics), interaction with the information source, evaluation.
Slide 45
Getting the required information. Still to come: acquiring information, storing information, indexing (more), interaction with the information source, evaluation.
Slide 46
The web and its challenges. Unusual and diverse documents; unusual and diverse users, queries, and information needs. Beyond terms, we can exploit ideas from social networks: link analysis, clickstreams. How do search engines work? And how can we make them better?
Slide 47
References
Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. Book available online at http://nlp.stanford.edu/IR-book/information-retrieval-book.html. Many of these slides are taken directly from the authors' slides for the first chapter of the book.
C. Lee Giles, Penn State. http://clgiles.ist.psu.edu/
Paging figure from Vittore Casarosa, University of Parma, Italy.