47th Annual Meeting of the Association for Computational Linguistics and 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 Aug 2009

Transcript

Page 1

47th Annual Meeting of the Association for Computational Linguistics
and
4th International Joint Conference on Natural Language Processing of the AFNLP

2-7 Aug 2009

Page 2

Do Automatic Annotation Techniques Have Any Impact on Supervised Complex Question Answering?

Shafiq R. Joty, Dept. of Computer Science, University of British Columbia, Vancouver, BC, Canada

Yllias Chali and Sadid A. Hasan, Dept. of Computer Science, University of Lethbridge, Lethbridge, AB, Canada

Page 3

“Given a complex question (topic description) and a collection of relevant documents, the task is to synthesize a fluent, well-organized 250-word summary of the documents that answers the question(s) in the topic”.

Example: Describe steps taken and worldwide reaction prior to the introduction of the Euro on January 1, 1999. Include predictions and expectations reported in the press.

- Use supervised learning methods.
- Use automatic annotation techniques.

Research Problem and Our Approach

Page 4

Motivation

- A huge amount of annotated (labeled) data is a prerequisite for supervised training.
- When humans are employed, the whole process becomes time-consuming and expensive.
- To produce a large set of labeled data, we prefer the automatic annotation strategy.

Page 5

Automatic Annotation Techniques

- ROUGE similarity measures
- Basic Element (BE) overlap measure
- Syntactic similarity measure
- Semantic similarity measure
- Extended String Subsequence Kernel (ESSK)
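The ROUGE-based variant of these annotation techniques can be sketched as follows: sentences whose unigram overlap with a human reference summary clears a threshold are labeled as summary-worthy. The function names, the threshold value, and the whitespace tokenization are illustrative assumptions, not the paper's implementation.

```python
# Sketch: automatic annotation via a ROUGE-1-style unigram recall.
# Threshold and tokenization are illustrative, not from the paper.

def rouge1_recall(sentence, reference):
    """Fraction of the reference's words covered by the sentence."""
    sent_words = set(sentence.lower().split())
    ref_words = reference.lower().split()
    if not ref_words:
        return 0.0
    matched = sum(1 for w in ref_words if w in sent_words)
    return matched / len(ref_words)

def annotate(sentences, reference, threshold=0.2):
    """Label each document sentence 1/0 by similarity to the reference."""
    return [(s, 1 if rouge1_recall(s, reference) >= threshold else 0)
            for s in sentences]
```

Each such measure (BE overlap, syntactic, semantic, ESSK) would slot in as a different scoring function in place of `rouge1_recall`, producing a different labeled training set for the supervised learners.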

Page 6

Supervised Systems

- Support Vector Machines (SVM)
- Conditional Random Fields (CRF)
- Hidden Markov Models (HMM)
- Maximum Entropy (MaxEnt)
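All four learners consume the same kind of input: (feature vector, label) pairs produced by the automatic annotation step, with sentence extraction framed as binary classification. A plain perceptron stands in below for the SVM/CRF/HMM/MaxEnt systems; the data and feature vectors are illustrative.

```python
# Sketch: sentence extraction as binary classification.
# A mistake-driven perceptron stands in for the SVM/CRF/HMM/MaxEnt
# learners named on the slide.

def train_perceptron(examples, epochs=10):
    """examples: list of (feature_vector, label in {0, 1})."""
    dim = len(examples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:          # update weights only on mistakes
                delta = y - pred   # +1 or -1
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b += delta
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

At summary time, sentences classified positive would be ranked and concatenated up to the 250-word limit set by the task definition.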

Page 7

Feature Space

Query-related features: n-gram overlap, Longest Common Subsequence (LCS), Weighted LCS (WLCS), skip-bigram, exact word overlap, synonym overlap, hypernym/hyponym overlap, gloss overlap, Basic Element (BE) overlap, and syntactic tree similarity measure.

Important features: position of sentences, length of sentences, Named Entity (NE), cue word match, and title match.
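Two of the query-related features above can be sketched directly: n-gram overlap and LCS between the query and a candidate sentence. The naive whitespace tokenization and the recall-style normalization are assumptions for illustration.

```python
# Sketch of two query-related features: bigram overlap and Longest
# Common Subsequence (LCS) over words. Tokenization is naive.

def ngram_overlap(query, sentence, n=2):
    """Fraction of the query's n-grams that also occur in the sentence."""
    q = query.lower().split()
    s = sentence.lower().split()
    q_ngrams = {tuple(q[i:i + n]) for i in range(len(q) - n + 1)}
    s_ngrams = {tuple(s[i:i + n]) for i in range(len(s) - n + 1)}
    return len(q_ngrams & s_ngrams) / len(q_ngrams) if q_ngrams else 0.0

def lcs_length(query, sentence):
    """Length of the longest common subsequence of words (dynamic programming)."""
    a, b = query.lower().split(), sentence.lower().split()
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, wa in enumerate(a):
        for j, wb in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if wa == wb
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(a)][len(b)]
```

Each feature contributes one dimension of the sentence's feature vector; the remaining measures (synonym/hypernym overlap, BE overlap, tree similarity) would require WordNet and a parser rather than plain string matching.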

Page 8

Experimental Results

Page 9

Conclusion

Semantic similarity annotation is the best for SVM. ESSK works well for the HMM, CRF, and MaxEnt systems.