
1

Text Processing

Rong Jin

2

Indexing Process

3

Processing Text

Converting documents to index terms. Why?

Matching the exact string of characters typed by the user is too restrictive, i.e., it doesn’t work very well in terms of effectiveness

Not all words are of equal value in a search

Sometimes not clear where words begin and end

Not even clear what a word is in some languages (e.g., Chinese, Korean)

4

Tokenizing

Forming words from sequences of characters

Surprisingly complex in English, can be harder in other languages

Early IR systems:

any sequence of alphanumeric characters of length 3 or more, terminated by a space or other special character

upper-case changed to lower-case

5

Tokenizing

Example: “Bigcorp's 2007 bi-annual report showed profits rose 10%.” becomes

1. bigcorp s 2007 bi annual report showed profits rose 10

2. bigcorp 2007 annual report showed profits rose
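
A minimal Python sketch of the two tokenizing variants above; the regular expression and function names are illustrative assumptions, not from the original slides:

```python
import re

def tokenize_all(text):
    # Variant 1: every alphanumeric run, lower-cased.
    return [t.lower() for t in re.findall(r"[A-Za-z0-9]+", text)]

def tokenize_min3(text):
    # Variant 2: the early-IR rule -- keep only tokens of length 3 or more.
    return [t for t in tokenize_all(text) if len(t) >= 3]

s = "Bigcorp's 2007 bi-annual report showed profits rose 10%."
print(" ".join(tokenize_all(s)))   # bigcorp s 2007 bi annual report showed profits rose 10
print(" ".join(tokenize_min3(s)))  # bigcorp 2007 annual report showed profits rose
```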

6

Tokenizing

Example: “Bigcorp's 2007 bi-annual report showed profits rose 10%.” becomes “bigcorp 2007 annual report showed profits rose”

Too simple for search applications or even large-scale experiments

Why? Too much information lost

Small decisions in tokenizing can have a major impact on effectiveness of some queries

7

Tokenizing Problems

Small words can be important in some queries, usually in combinations

xp, ma, pm, ben e king, el paso, master p, gm, j lo, world war II

Both hyphenated and non-hyphenated forms of many words are common

Sometimes the hyphen is not needed: e-bay, wal-mart, active-x, cd-rom, t-shirts

At other times, hyphens should be considered either as part of the word or a word separator: winston-salem, mazda rx-7, e-cards, pre-diabetes, spanish-speaking

8

Tokenizing Problems

Special characters are an important part of tags, URLs, and code in documents

Capitalized words can have different meanings from lower-case words: Bush, Apple

Apostrophes can be a part of a word, a part of a possessive, or just a mistake: rosie o'donnell, can't, don't, 80's, 1890's, men's straw hats, master's degree, england's ten largest cities, shriner's

9

Tokenizing Problems

Numbers can be important, including decimals: nokia 3250, top 10 courses, united 93, quicktime 6.5 pro, 92.3 the beat, 288358

Periods can occur in numbers, abbreviations, URLs, ends of sentences, and other situations: I.B.M., Ph.D., cs.umass.edu, F.E.A.R.

Note: tokenizing steps for queries must be identical to steps for documents

10

Tokenizing: SMART System

text_stop_file /amd/beyla/d/smart.11.0/lib/common_words
database "."
temp_dir ""

trace 0
global_trace_start -1
global_trace_end 2147483647
global_trace_file ""
global_start -1
global_end 2147483647
global_accounting 0

# Indexing Locations
doc_loc -
query_loc -
qrels_text_file ""
query_skel ""

# DEFAULTS FOR INDEXING PROCEDURES AND FLAGS
index_pp index.index_pp.index_pp
index_vec index.vec_index.doc
query.index_vec index.vec_index.query
preparse index.preparse.generic
next_vecid index.next_vecid.next_vecid
query.next_vecid index.next_vecid.next_vecid_1
addtextloc index.addtextloc.add_textloc
query.addtextloc index.addtextloc.empty
token_sect index.token_sect.token_sect
interior_char "'.@!_"
interior_hyphenation false
endline_hyphenation true
parse.single_case false
makevec index.makevec.makevec
expand index.expand.none
weight convert.weight.weight
store index.store.store_vec
doc.store index.store.store_aux

# DEFAULTS FOR DATABASE FILES AND MODES
dict_file dict
doc_file ""
textloc_file textloc
query.textloc_file ""
inv_file inv
query_file ""
qrels_file ""
collstat_file ""
rwmode SRDWR
rmode SRDONLY
rwcmode SRDWR|SCREATE

## DEFAULTS FOR DOCUMENT DESCRIPTIONS

#### GENERIC PREPARSING
num_pp_sections 0
pp_section.string ""
pp_section.section_name -
pp_section.action copy
pp_section.oneline_flag false
pp_section.newdoc_flag false
pp.default_section_name -
pp.default_section_action discard
pp_filter ""

#### PARSING INPUT
index.section.name none
index.section.ctype -1
method index.parse_sect.none
token_to_con index.token_to_con.dict
stem_wanted 1
stemmer index.stem.triestem
stopword_wanted 1
stopword index.stop.stop_dict
stop_file common_words
thesaurus_wanted 0
text_thesaurus_file ""

#### PARSING OUTPUT
index.ctype.name none
con_to_token index.con_to_token.contok_dict
store_aux convert.tup.vec_inv
weight_ctype convert.wt_ctype.weight_tri
doc_weight nnn
query_weight nnn
sect_weight nnn

num_sections 0
num_ctypes 0

## DEFAULTS FOR COLLECTION CREATION
stop_file_size 1501
dict_file_size 30001

## DEFAULTS FOR CONVERSION
vec_inv.mem_usage 4194000
vec_inv.virt_mem_usage 50000000
in ""
out ""
deleted_doc_file ""

#########################################
## DEFAULTS FOR RETRIEVAL
## Procedures
get_query retrieve.get_query.get_q_user
user_qdisp retrieve.user_query.edit_skel
qdisp_query retrieve.query_index.std_vec
get_seen_docs retrieve.get_seen_docs.empty
coll_sim retrieve.coll_sim.inverted
inv_sim retrieve.ctype_coll.ctype_inv
seq_sim retrieve.ctype_vec.inner
vecs_vecs retrieve.vecs_vecs.vecs_vecs
vec_vec retrieve.vec_vec.vec_vec
retrieve.output retrieve.output.ret_tr
rank_tr retrieve.rank_tr.rank_did
sim_ctype_weight 1.0
eval.num_queries 0

num_wanted 10

## Files
tr_file ""
rr_file ""
seen_docs_file ""
run_name ""

#########################################
## DEFAULTS FOR FEEDBACK
## Procedures
feedback feedback.feedback.feedback
feedback.expand feedback.expand.exp_const
feedback.occ_info feedback.occ_info.ide
feedback.weight feedback.weight.ide
feedback.form feedback.form.form_query

feedback.num_expand 15

feedback.num_iterations 0
feedback.alpha 1.0
feedback.beta 1.0
feedback.gamma 0.5

#########################################
## DEFAULTS FOR TOP-LEVEL INTERACTIVE DISPLAY AND PRINTING DOCS
indivtext print.indivtext.text_filter
filter ""
print.format ""
print.rawdocflag 0
print.doctext print.print.doctext
spec_list ""
verbose 1
max_title_len 68
get_query.editor_only 0

## SUBSET OF TOP-LEVEL ROUTINES (these are ones that are often called
## elsewhere, and substitution of routines may be desired.)
index.doc index.top.doc_coll
index.query index.top.query_coll
top.exp_coll index.top.exp_coll
top.inter inter.inter
exp_coll index.top.exp_coll
inter inter.inter
convert convert.convert.convert
print print.top.print
retrieve retrieve.top.retrieve

## OBSOLETE (only included for backward compatibility)
text_loc_prefix ""
spec_file ./spec
token index.token.std_token
parse index.parse.std_parse

# Others
onlyspace_sep 0
interior_slash 0
interior_semicolon 0

An example of a configuration file for SMART

## GENERIC PREPARSER
num_pp_sections 12
pp_section.0.string "<DOC>"
pp_section.0.oneline_flag true
pp_section.0.newdoc_flag true
pp_section.0.action discard
…
pp_section.8.string "<TEXT>"
pp_section.8.section_name w
pp_section.8.action copy

## PARSE INPUT
index.num_sections 1
index.section.0.name w
index.section.0.method index.parse_sect.full
index.section.0.word.ctype 0
index.section.0.proper.ctype 0
index.section.0.name.ctype 1

11

Tokenizing Process

First step is to use a parser to identify appropriate parts of the document to tokenize

Defer complex decisions to other components

word is any sequence of alphanumeric characters, terminated by a space or special character, with everything converted to lower-case

everything indexed

example: Wal-Mart → Wal Mart, but search finds documents with Wal and Mart adjacent

incorporate some rules to reduce dependence on query transformation components
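
A small sketch of this step under the stated rule, recording word positions so that a proximity operator can later require tokens such as wal and mart to be adjacent; the function name and regular expression are assumptions:

```python
import re

def tokenize_with_positions(text):
    # Record each token's word position so a proximity operator can
    # later require tokens (e.g., wal and mart) to be adjacent.
    return [(tok.lower(), i)
            for i, tok in enumerate(re.findall(r"[A-Za-z0-9]+", text))]

print(tokenize_with_positions("Wal-Mart annual report"))
# [('wal', 0), ('mart', 1), ('annual', 2), ('report', 3)]
```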

12

Stopping

Function words (determiners, prepositions) have little meaning on their own

High occurrence frequencies

Treated as stopwords (i.e., removed): reduce index space, improve response time, improve effectiveness

Can be important in combinations, e.g., “to be or not to be”
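
A minimal illustration of stopping, and of the combination problem it creates; the stopword list here is a made-up example, not a standard list:

```python
# Made-up stopword list for illustration; real systems use larger,
# application-specific lists.
STOPWORDS = {"to", "be", "or", "not", "the", "of", "a", "in"}

def remove_stopwords(tokens):
    return [t for t in tokens if t not in STOPWORDS]

print(remove_stopwords("to be or not to be".split()))
# [] -- the entire query vanishes, showing why combinations matter
```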

13

Stopping

Stopword list can be created from high-frequency words or based on a standard list

Lists are customized for applications, domains, and even parts of documents

e.g., “click” is a good stopword for anchor text

Best policy is to index all words in documents, make decisions about which words to use at query time

14

Stopwords in Web Search Engine

15

Stemming

Many morphological variations of words

inflectional (plurals, tenses)

derivational (making verbs nouns, etc.)

In most cases, these have the same or very similar meanings

Stemmers attempt to reduce morphological variations of words to a common stem

usually involves removing suffixes

Can be done at indexing time or as part of query processing (like stopwords)

16

Stemming

Generally a small but significant effectiveness improvement

can be crucial for some languages

e.g., 5-10% improvement for English, up to 50% in Arabic

Words with the Arabic root ktb

17

Stemming

Two basic types

Dictionary-based: uses lists of related words

Algorithmic: uses a program to determine related words

Algorithmic stemmers

E.g., remove ‘s’ endings assuming plural: cats → cat, lakes → lake, wiis → wii

Many mistakes: UPS → up
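
A sketch of the naive ‘s’-removal stemmer described above, reproducing both the intended cases and the UPS → up mistake (after lower-casing); the guard against double-s endings is an illustrative assumption:

```python
def suffix_s_stem(word):
    # A deliberately naive algorithmic stemmer: strip a trailing 's',
    # assuming it marks a plural (illustrative only).
    if word.endswith("s") and not word.endswith("ss"):
        return word[:-1]
    return word

for w in ["cats", "lakes", "wiis", "ups"]:
    print(w, "->", suffix_s_stem(w))
# cats -> cat, lakes -> lake, wiis -> wii, ups -> up  (the last is a mistake)
```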

18

Porter Stemmer

Algorithmic stemmer used in IR experiments since the 70s

Most common algorithm for stemming English

Produces stems, not words

Makes a number of errors and is difficult to modify
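
As one concrete way to try Porter stemming, the sketch below uses NLTK’s implementation; this is an assumption (any Porter implementation would do), and exact stems can vary slightly between Porter variants:

```python
# Requires the nltk package (pip install nltk).
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for w in ["marketing", "predictions", "agricultural", "chemicals"]:
    print(w, "->", stemmer.stem(w))
# e.g., marketing -> market, predictions -> predict,
#       agricultural -> agricultur, chemicals -> chemic  (stems, not words)
```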

19

Problem with Rule-based Stemming

Sometimes too aggressive in conflation (false positives)

E.g., policy/police, execute/executive, university/universe

Sometimes misses good conflations (false negatives)

E.g., European/Europe, matrices/matrix, machine/machinery

Produces stems that are not words

E.g., iteration/iter, general/gener

Corpus analysis can improve or replace a stemmer

20

Krovetz Stemmer

Hybrid algorithmic-dictionary stemmer

Word is checked in a dictionary

If present, either left alone or replaced with an “exception” entry

If not present, word is checked for suffixes that could be removed

After removal, dictionary is checked again

Produces words, not stems

Comparable effectiveness

Lower false positive rate, somewhat higher false negative rate
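
A minimal sketch of this Krovetz-style dictionary check; the tiny dictionary, exception table, and suffix list are illustrative assumptions, not the actual Krovetz lexicon or rule set:

```python
# Toy data standing in for a real lexicon (assumption).
DICTIONARY = {"market", "strategy", "chemical", "company"}
EXCEPTIONS = {"strategies": "strategy", "companies": "company"}
SUFFIXES = ["ies", "es", "s", "ing", "ed"]

def kstem_like(word):
    if word in EXCEPTIONS:           # dictionary hit with an exception entry
        return EXCEPTIONS[word]
    if word in DICTIONARY:           # dictionary hit: leave the word alone
        return word
    for suf in SUFFIXES:             # otherwise try removing suffixes...
        if word.endswith(suf):
            stem = word[:-len(suf)]
            if stem in DICTIONARY:   # ...and check the dictionary again
                return stem
    return word

print(kstem_like("markets"))     # market (a word, not a truncated stem)
print(kstem_like("strategies"))  # strategy (via the exception entry)
```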

21

Stemming Examples

Original Text:

marketing strategies carried out by U.S. companies for their agricultural chemicals, report predictions for market share of such chemicals, or report market statistics for agrochemicals.

Porter Stemmer (stopwords removed):

market strateg carr compan agricultur chemic report predict market share chemic report market statist agrochem

KSTEM (stopwords removed):

marketing strategy carry company agriculture chemical report prediction market share chemical report market statistic

22

Corpus-Based Stemming

Hypothesis: word variants that should be conflated co-occur in documents (text windows) in the corpus

Initial equivalence classes generated by another source

E.g., Porter stemmer

E.g., common initial prefixes or n-grams

New equivalence classes are clusters formed using co-occurrence statistics between pairs of words in initial classes

Can be used for other languages

23

Corpus-Based Stemming

(Xu & Croft, 1998)

24

Phrases

Many queries are 2-3 word phrases

Phrases are

More precise than single words, e.g., documents containing “black sea” vs. the two words “black” and “sea”

Less ambiguous, e.g., “big apple” vs. “apple”

In IR, phrases are restricted to noun phrases (a sequence of nouns, or adjectives followed by nouns)

25

Phrases

Text processing issue: how are phrases recognized?

Three possible approaches:

Identify syntactic phrases using a part-of-speech (POS) tagger

Use word n-grams

Store word positions in indexes and use proximity operators in queries

26

POS Tagging

POS taggers use statistical models of text to predict syntactic tags of words

Example tags: NN (singular noun), NNS (plural noun), VB (verb), VBD (verb, past tense), VBN (verb, past participle), IN (preposition), JJ (adjective), CC (conjunction, e.g., “and”, “or”), PRP (pronoun), and MD (modal auxiliary, e.g., “can”, “will”)

Phrases can then be defined as simple noun groups, for example
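
A short tagging sketch using NLTK’s off-the-shelf tagger; this is an assumption (any POS tagger producing Penn Treebank tags would do), and the exact tags may vary:

```python
# Requires: pip install nltk, plus the tokenizer and tagger data, e.g.
#   nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
import nltk

tokens = nltk.word_tokenize("Marketing strategies carried out by U.S. companies")
print(nltk.pos_tag(tokens))
# e.g., [('Marketing', 'NN'), ('strategies', 'NNS'), ('carried', 'VBD'), ...]
# Adjacent NN/NNS/JJ tokens can then be grouped into simple noun phrases.
```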

27

POS Tagging Example

28

POS Tagging Example

29

Example Noun Phrases

30

Word N-Grams

POS tagging too slow for large collections

Simpler definition: a phrase is any sequence of n words, known as n-grams

bigram: 2 word sequence, trigram: 3 word sequence, unigram: single words

N-grams also used at the character level for applications such as OCR

N-grams typically formed from overlapping sequences of words

i.e., move an n-word “window” one word at a time

e.g., “Document will describe…” yields “document will” and “will describe”
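
A minimal sketch of forming overlapping word n-grams with a sliding window (the function name is illustrative):

```python
def word_ngrams(tokens, n):
    # Overlapping n-grams: slide an n-word window one word at a time.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(word_ngrams("document will describe marketing".split(), 2))
# [('document', 'will'), ('will', 'describe'), ('describe', 'marketing')]
```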

31

N-Grams

Frequent n-grams are more likely to be meaningful phrases

N-grams form a Zipf distribution

Better fit than words alone

Could index all n-grams up to a specified length

Much faster than POS tagging

Uses a lot of storage

e.g., a document containing 1,000 words would contain 3,990 instances of word n-grams of length 2 ≤ n ≤ 5 (999 bigrams + 998 trigrams + 997 4-grams + 996 5-grams)

32

Google N-Grams

Web search engines index n-grams

Google sample:

Most frequent trigram in English is “all rights reserved”

In Chinese, “limited liability corporation”

33

Document Structure and Markup

Some parts of documents are more important than others

Document parser recognizes structure using markup, such as HTML tags

Headers, anchor text, bolded text all likely to be important

Metadata can also be important

Links used for link analysis

34

Example Web Page

35

Example Web Page

36

Link Analysis

Links are a key component of the Web

Important for navigation, but also for search

e.g., <a href="http://example.com">Example website</a>

“Example website” is the anchor text

“http://example.com” is the destination link

both are used by search engines
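
A small sketch of extracting anchor text and destination links with Python’s standard-library HTML parser; the class name is illustrative:

```python
from html.parser import HTMLParser

class AnchorExtractor(HTMLParser):
    """Collect (destination link, anchor text) pairs from HTML."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text)))
            self._href = None

p = AnchorExtractor()
p.feed('<a href="http://example.com">Example website</a>')
print(p.links)  # [('http://example.com', 'Example website')]
```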

37

Anchor Text

Used as a description of the content of the destination page

i.e., the collection of anchor text in all links pointing to a page is used as an additional text field

Anchor text tends to be short, descriptive, and similar to query text

Retrieval experiments have shown that anchor text has significant impact on effectiveness for some types of queries

i.e., more than link analysis

38

Link Quality

Link quality is affected by spam and other factors

e.g., link farms to increase PageRank

trackback links in blogs can create loops

39

Information Extraction

Automatically extract structure from text

annotate document using tags to identify extracted structure

Named entity recognition

identify words that refer to something of interest in a particular application

e.g., people, companies, locations, dates, product names, prices, etc.

40

Named Entity Recognition

Example showing semantic annotation of text using XML tags

Information extraction also includes document structure and more complex features such as relationships and events

41

Named Entity Recognition

Rule-based

Uses lexicons (lists of words and phrases) that categorize names

e.g., locations, people’s names, organizations, etc.

Rules also used to verify or find new entity names, as in the sketch below

e.g., “<number> <word> street” for addresses

“<street address>, <city>” or “in <city>” to verify city names

“<street address>, <city>, <state>” to find new cities

“<title> <name>” to find new names
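
A toy sketch in the spirit of these rules; the regular expressions and the small city lexicon are illustrative assumptions, not a real NER system:

```python
import re

# Made-up lexicon for verification (assumption).
CITY_LEXICON = {"boston", "el paso"}

ADDRESS = re.compile(r"\b\d+\s+\w+\s+street\b", re.IGNORECASE)      # "<number> <word> street"
TITLED_NAME = re.compile(r"\b(?:Mr|Ms|Dr|President)\.?\s+[A-Z]\w+")  # "<title> <name>"
IN_CITY = re.compile(r"\bin\s+([A-Z]\w+)")                           # "in <city>" (verification)

text = "Dr Smith lives at 120 Main street in Boston."
print(ADDRESS.findall(text))      # ['120 Main street'] -- candidate address
print(TITLED_NAME.findall(text))  # ['Dr Smith']        -- candidate new name
print([c for c in IN_CITY.findall(text) if c.lower() in CITY_LEXICON])
# ['Boston'] -- city verified against the lexicon
```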

42

Named Entity Recognition

Rules either developed manually by trial and error or using machine learning techniques

Statistical

uses a probabilistic model of the words in and around an entity

probabilities estimated using training data (manually annotated text)

Hidden Markov Model (HMM) is one approach

43

HMM Sentence Model

• Probabilistic finite state automata

• Each state is associated with a probability distribution over words (the output)
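
A minimal Viterbi decoding sketch for such an HMM; the two states and all probabilities below are made-up illustrative numbers, not trained values:

```python
# Toy HMM: a "person" state and a catch-all "background" state (assumption).
states = ["background", "person"]
start = {"background": 0.7, "person": 0.3}
trans = {"background": {"background": 0.8, "person": 0.2},
         "person":     {"background": 0.6, "person": 0.4}}
emit = {"background": {"said": 0.4, "hello": 0.4, "smith": 0.2},
        "person":     {"said": 0.1, "hello": 0.1, "smith": 0.8}}

def viterbi(words):
    # v[s] = probability of the best state sequence ending in state s
    v = {s: start[s] * emit[s].get(words[0], 1e-6) for s in states}
    backptrs = []
    for w in words[1:]:
        nv, bp = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: v[p] * trans[p][s])
            nv[s] = v[best_prev] * trans[best_prev][s] * emit[s].get(w, 1e-6)
            bp[s] = best_prev
        v = nv
        backptrs.append(bp)
    # Trace back the most likely state sequence.
    path = [max(states, key=lambda s: v[s])]
    for bp in reversed(backptrs):
        path.append(bp[path[-1]])
    return list(reversed(path))

print(viterbi(["smith", "said", "hello"]))
# ['person', 'background', 'background']
```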

44

Internationalization

2/3 of the Web is in English

About 50% of Web users do not use English as their primary language

Many (maybe most) search applications have to deal with multiple languages

monolingual search: search in one language, but with many possible languages

cross-language search: search in multiple languages at the same time

45

Internationalization

Many aspects of search engines are language-neutral

Major differences:

Text encoding (converting to Unicode)

Tokenizing (many languages have no word separators)

Stemming

Cultural differences may also impact interface design and features provided