Novel representations and methods in text classification


Transcript of Novel representations and methods in text classification

Page 1: Novel representations and methods in text classification

Novel representations and methods in text classification

Manuel Montes, Hugo Jair Escalante
Instituto Nacional de Astrofísica, Óptica y Electrónica, México.

http://ccc.inaoep.mx/~mmontesg/

http://ccc.inaoep.mx/~hugojair/
{mmontesg, hugojair}@inaoep.mx

7th Russian Summer School in Information Retrieval
Kazan, Russia, September 2013

Page 2: Novel representations and methods in text classification

Overview of the course

• Day 1: Introduction to text classification

• Day 2: Bag of concepts representations

• Day 3: Representations that incorporate sequential and syntactic information

• Day 4: Methods for non-conventional classification

• Day 5: Automated construction of classification models

Page 3: Novel representations and methods in text classification


AUTOMATED CONSTRUCTION OF CLASSIFICATION MODELS: FULL MODEL SELECTION

Novel representations and methods in text classification

Page 4: Novel representations and methods in text classification


Outline

• Pattern classification

• Model selection in a broad sense: FMS

• Related works

• PSO for full model selection

• Experimental results and applications

• Conclusions and future work directions

Page 5: Novel representations and methods in text classification

Pattern classification

[Diagram: a learning machine is trained on training data; the trained machine then answers queries, e.g., 1 (is a car) vs. -1 (is not a car).]

Page 6: Novel representations and methods in text classification

Applications

• Natural language processing
• Computer vision
• Robotics
• Information technology
• Medicine
• Science
• Entertainment
• …

Page 7: Novel representations and methods in text classification

Example: galaxy classification from images

[Diagram: the design cycle: a data matrix (instances × features) passes through preprocessing, feature selection and a classification method; candidate models $f(\mathbf{x};\theta_f)$, $g(\mathbf{x};\theta_g)$, $h(\mathbf{x};\theta_h)$ are trained and their error is evaluated.]

Page 8: Novel representations and methods in text classification

Some issues with the cycle of design

• How to preprocess the data?
• What method to use?

Page 9: Novel representations and methods in text classification

Some issues with the cycle of design

• What feature selection method to use?
• How many features are enough?
• How to set the hyperparameters of the feature selection method?

Page 10: Novel representations and methods in text classification

Some issues with the cycle of design

• What learning machine to use?
• How to set the hyperparameters of the chosen method?

Page 11: Novel representations and methods in text classification

Some issues with the cycle of design

• What combination of methods works best?
• How to evaluate the model's performance?
• How to avoid overfitting?

Page 12: Novel representations and methods in text classification

Some issues with the cycle of design

• The above issues are usually settled manually, relying on:
– Domain experts' knowledge
– Machine learning specialists' knowledge
– Trial and error

• The design and development of a pattern classification system thus relies on the knowledge and biases of humans, which can be risky, expensive and time consuming

• Automated solutions are available, but only for particular processes (e.g., either feature selection or classifier selection, but not both). Is it possible to automate the whole process?

Page 13: Novel representations and methods in text classification

Full model selection

[Diagram: data enter a full model selection procedure, which outputs a complete classification model $f(\mathbf{x};\theta)$.]

H. J. Escalante. Towards a Particle Swarm Model Selection algorithm. Multi-level inference workshop and model selection game, NIPS, Whistler, BC, Canada, 2006.

H. J. Escalante, E. Sucar, M. Montes. Particle Swarm Model Selection. Journal of Machine Learning Research, 10(Feb):405-440, 2009.

Page 14: Novel representations and methods in text classification

Full model selection

• Given a set of methods for data preprocessing, feature selection and classification, select the combination of methods (together with their hyperparameters) that minimizes an estimate of the classification error

H. J. Escalante, E. Sucar, M. Montes. Particle Swarm Model Selection. Journal of Machine Learning Research, 10(Feb):405-440, 2009.

Page 15: Novel representations and methods in text classification

Full model selection

• Full model: a model composed of methods for data preprocessing, feature selection and classification

• Example:

chain { 1:normalize center=0 2:relief f_max=Inf w_min=-Inf k_num=4 3:neural units=10 shrinkage=1e-014 balance=0 maxiter=100 }

chain { 1:standardize center=1 2:svcrfe svc kernel linear coef0=0 degree=1 gamma=0 shrinkage=0.001 f_max=Inf 3:pc_extract f_max=2000 4:svm kernel linear C=Inf ridge=1e-013 balanced_ridge=0 nu=0 alpha_cutoff=-1 b0=0 nob=0 }
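The two model descriptions above are written in the syntax of the CLOP toolbox used by PSMS. For illustration only, a roughly analogous full model can be sketched in Python with scikit-learn; the components below (StandardScaler, SelectKBest, a linear SVC) are stand-ins chosen for the sketch, not exact equivalents of the CLOP modules:

```python
# Illustrative sketch of a "full model": preprocessing + feature
# selection + classifier with fixed hyperparameters, as one pipeline.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

full_model = Pipeline([
    ("preprocess", StandardScaler()),              # ~ "standardize center=1"
    ("feat_sel", SelectKBest(f_classif, k=2000)),  # ~ keep 2000 features ("f_max=2000")
    ("classify", SVC(kernel="linear", C=1.0)),     # ~ "svm kernel linear"
])
# Usage: full_model.fit(X_train, y_train); full_model.predict(X_test)
```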

Page 16: Novel representations and methods in text classification

Full model selection

• Pros:
– The job of the data analyst is considerably reduced
– Neither knowledge of the application domain nor of machine learning is required
– Different methods for preprocessing, feature selection and classification are considered
– It can be used in any classification problem

• Cons:
– It is a combined real-valued and combinatorial optimization problem
– It is computationally expensive
– There is a risk of overfitting

Page 17: Novel representations and methods in text classification

OVERVIEW OF RELATED WORKS

Novel representations and methods in text classification

Page 18: Novel representations and methods in text classification

Model selection via heuristic optimization

• A single model is considered and its hyperparameters are optimized via heuristic optimization:
– Swarm optimization
– Genetic algorithms
– Pattern search
– Genetic programming
– …

Page 19: Novel representations and methods in text classification

GEMS

• GEMS (Gene Expression Model Selection) is a system for the automated development and evaluation of high-quality cancer diagnostic models and for biomarker discovery from microarray gene expression data

A. Statnikov, I. Tsamardinos, Y. Dosbayev, C.F. Aliferis. GEMS: A System for Automated Cancer Diagnosis and Biomarker Discovery from Microarray Gene Expression Data. International Journal of Medical Informatics, 74(7-8):491-503, 2005.

N. Fananapazir, A. Statnikov, C.F. Aliferis. The Fast-AIMS Clinical Mass Spectrometry Analysis System. Advances in Bioinformatics, 2009, Article ID 598241.

Page 20: Novel representations and methods in text classification

GEMS

• The user specifies the models and methods to be considered
• GEMS explores all combinations of methods, using grid search to optimize hyperparameters


Page 22: Novel representations and methods in text classification

Model type selection for regression

• Genetic algorithms are used to select the model type (learning method, feature selection, preprocessing) and to optimize its parameters for regression problems

D. Gorissen, T. Dhaene, F. de Turck. Evolutionary Model Type Selection for Global Surrogate Modeling. Journal of Machine Learning Research, 10(Jul):2039-2078, 2009.

http://www.sumo.intec.ugent.be/

Page 23: Novel representations and methods in text classification

Meta-learning: learning to learn

• Evaluates and compares the application of learning algorithms on (many) previous tasks/domains in order to suggest learning algorithms (combinations, rankings) for new tasks

• Focuses on the relation between tasks/domains and learning algorithms

• Accumulates experience on the performance of multiple applications of learning methods

Brazdil P., Giraud-Carrier C., Soares C., Vilalta R. Metalearning: Applications to Data Mining. Springer Verlag. ISBN: 978-3-540-73262-4, 2008.

Brazdil P., Vilalta R., Giraud-Carrier C., Soares C. Metalearning. Encyclopedia of Machine Learning. Springer, 2010.

Page 24: Novel representations and methods in text classification


Meta-learning: learning to learn

Brazdil P., Giraud-Carrier C., Soares C., Vilalta R. Metalearning: Applications to Data Mining. Springer Verlag. ISBN: 978-3-540-73262-4, 2008.

Brazdil P., Vilalta R., Giraud-Carrier C., Soares C. Metalearning. Encyclopedia of Machine Learning. Springer, 2010.

Page 25: Novel representations and methods in text classification

Google prediction API

• "Machine learning as a service in the cloud"
• Upload your data, train a model and perform queries
• Nothing is for free!

https://developers.google.com/prediction/

Page 26: Novel representations and methods in text classification


Google prediction API

Your data become property of Google!

Page 27: Novel representations and methods in text classification


IBM’s SPSS modeler

Page 28: Novel representations and methods in text classification

$$$$$

Page 29: Novel representations and methods in text classification


Who cares?

Page 30: Novel representations and methods in text classification

PSMS: OUR APPROACH TO FULL MODEL SELECTION

Novel representations and methods in text classification

Page 31: Novel representations and methods in text classification

Full model selection

[Diagram (repeated): data enter a full model selection procedure, which outputs a complete classification model $f(\mathbf{x};\theta)$.]

Page 32: Novel representations and methods in text classification

PSMS: Our approach to full model selection

• Particle swarm model selection: use particle swarm optimization to explore the search space of full models in a particular ML toolbox

H. J. Escalante, E. Sucar, M. Montes. Particle Swarm Model Selection. Journal of Machine Learning Research, 10(Feb):405-440, 2009.

H. J. Escalante. Towards a Particle Swarm Model Selection algorithm. Multi-level inference workshop and model selection game, NIPS, Whistler, BC, Canada, 2006.

[Diagram: a swarm of candidate full models, e.g.:
• Normalize + RBF-SVM (γ = 0.01)
• PCA + Neural-Net (10 hidden units)
• Relief (feat. sel.) + Naïve Bayes
• Normalize + Poly-SVM (d = 2)
• Neural Net (3 units)
• s2n (feat. sel.) + K-ridge]

Page 33: Novel representations and methods in text classification

Particle swarm optimization

• Population-based search heuristic

• Inspired by the behavior of biological communities that exhibit both local and social behaviors

A. P. Engelbrecht. Fundamentals of Computational Swarm Intelligence. Wiley, 2006.

Page 34: Novel representations and methods in text classification

Particle swarm optimization

• Each individual (particle) i has:
– A position in the search space ($\mathbf{x}_i^t$), which represents a solution to the problem at hand
– A velocity vector ($\mathbf{v}_i^t$), which determines how the particle explores the search space

• After random initialization, particles update their positions according to:

$\mathbf{v}_i^{t+1} = W \mathbf{v}_i^{t} + c_1(\mathbf{p}_i - \mathbf{x}_i^{t}) + c_2(\mathbf{p}_g - \mathbf{x}_i^{t})$

$\mathbf{x}_i^{t+1} = \mathbf{x}_i^{t} + \mathbf{v}_i^{t+1}$

where $\mathbf{p}_i$ is particle i's personal best position, $\mathbf{p}_g$ the global best, $W$ the inertia weight, and $c_1$, $c_2$ the individual and social acceleration weights.

A. P. Engelbrecht. Fundamentals of Computational Swarm Intelligence. Wiley, 2006.

Page 35: Novel representations and methods in text classification

Particle swarm optimization

1. Randomly initialize a population of particles (i.e., the swarm)
2. Repeat the following iterative process until a stop criterion is met:
a) Evaluate the fitness of each particle
b) Find the personal best ($\mathbf{p}_i$) of each particle and the global best ($\mathbf{p}_g$)
c) Update the particles
d) Update the best solution found (if needed)
3. Return the best particle (solution)

[Figure: fitness values of the swarm across iterations, from initialization (t=1) to t=max_it.]
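The loop above translates almost directly into code. The following is a minimal PSO sketch in Python for minimizing a real-valued function; it assumes the inertia-weight update with fixed W, c1 and c2, while PSMS adds further machinery (mixed discrete/continuous dimensions, among others) on top of this basic scheme:

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch: minimize `fitness` over R^dim."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))      # positions (solutions)
    v = np.zeros((n_particles, dim))                # velocities
    p = x.copy()                                    # personal bests
    p_fit = np.array([fitness(xi) for xi in x])
    g = p[p_fit.argmin()].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))  # random factors in [0, 1)
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([fitness(xi) for xi in x])
        improved = f < p_fit                        # update personal bests
        p[improved], p_fit[improved] = x[improved], f[improved]
        g = p[p_fit.argmin()].copy()                # update global best
    return g

# e.g., pso(lambda z: np.sum(z**2), dim=5) approaches the zero vector
```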

Page 36: Novel representations and methods in text classification

PSMS: PSO for full model selection

• Set of methods (not restricted to this set):

[Table: the CLOP methods available for classification, feature selection and preprocessing.]

http://clopinet.com/CLOP

Page 37: Novel representations and methods in text classification

PSMS: PSO for full model selection

• Codification of solutions as real-valued vectors:

$\mathbf{x}_i = \langle x_{i,\mathrm{pre}},\, \mathbf{y}_{i,1 \ldots N_{\mathrm{pre}}},\, x_{i,\mathrm{fs}},\, \mathbf{y}_{i,1 \ldots N_{\mathrm{fs}}},\, x_{i,\mathrm{sel}},\, x_{i,\mathrm{class}},\, \mathbf{y}_{i,1 \ldots N_{\mathrm{class}}} \rangle$

where the $x$ entries encode the choice of methods for preprocessing, feature selection and classification, the $\mathbf{y}$ entries hold the hyperparameters of the selected methods, and $x_{i,\mathrm{sel}}$ indicates whether preprocessing is applied before feature selection.
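To make the encoding concrete, here is a hypothetical decoding of such a vector in Python; the slot layout and the method catalogues are illustrative assumptions for this sketch, not CLOP's actual encoding:

```python
# Hypothetical particle decoding: discrete slots select methods, one slot
# orders the pipeline, and the remaining entries are hyperparameters.
PREPROCESSORS = ["none", "normalize", "standardize", "shift-scale"]
FS_METHODS    = ["none", "relief", "zfilter", "s2n"]
CLASSIFIERS   = ["zarbi", "naive", "neural", "svc", "kridge", "lssvm"]

def decode(x):
    pre = PREPROCESSORS[int(abs(x[0])) % len(PREPROCESSORS)]
    fs  = FS_METHODS[int(abs(x[1])) % len(FS_METHODS)]
    clf = CLASSIFIERS[int(abs(x[2])) % len(CLASSIFIERS)]
    pre_first = x[3] > 0      # preprocessing before feature selection?
    hyperparams = x[4:]       # passed on to the selected methods
    return pre, fs, clf, pre_first, hyperparams
```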

Page 38: Novel representations and methods in text classification

PSMS: PSO for full model selection

• Fitness function:
– K-fold cross-validation balanced error rate (BER)
– K-fold cross-validation area under the ROC curve

[Diagram: the training data are split into K folds; each fold is held out once for testing (errors of folds 1..K), and the per-fold errors are averaged into the CV estimate.]
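As a sketch, this fitness can be computed as below; scikit-learn is used purely for illustration, with `model` standing for any candidate full model exposing fit/predict, and X, y being numpy arrays:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import balanced_accuracy_score

def cv_ber(model, X, y, k=5):
    """K-fold cross-validated balanced error rate, used as fitness."""
    errors = []
    for train, test in StratifiedKFold(n_splits=k).split(X, y):
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        errors.append(1.0 - balanced_accuracy_score(y[test], pred))  # fold BER
    return float(np.mean(errors))  # the CV estimate
```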

Page 39: Novel representations and methods in text classification

SOME EXPERIMENTAL RESULTS OF PSMS

Novel representations and methods in text classification

Page 40: Novel representations and methods in text classification

PSMS in the ALvsPK challenge

• Five data sets for binary classification
• Goal: to obtain the best classification model for each data set
• Two tracks:
– Prior knowledge
– Agnostic learning

http://www.agnostic.inf.ethz.ch/

Page 41: Novel representations and methods in text classification

PSMS in the ALvsPK challenge

• Best configuration of PSMS:

Entry | Description | Ada | Gina | Hiva | Nova | Sylva | Overall | Rank
Interim-all-prior | Best PK | 17.0 | 2.33 | 27.1 | 4.71 | 0.59 | 10.35 | 1st
psmsx_jmlr_run_I | PSMS | 16.86 | 2.41 | 28.01 | 5.27 | 0.62 | 10.63 | 2nd
Logitboost-trees | Best AL | 16.6 | 3.53 | 30.1 | 4.69 | 0.78 | 11.15 | 8th

Comparison of the performance of models selected with PSMS with that obtained by other techniques in the ALvsPK challenge.

[Table: models selected with PSMS for the different data sets, not transcribed.]

http://www.agnostic.inf.ethz.ch/

Page 42: Novel representations and methods in text classification

PSMS in the ALvsPK challenge

• Official ranking:

[Figure: official challenge ranking, not transcribed.]

http://www.agnostic.inf.ethz.ch/

Page 43: Novel representations and methods in text classification

Some results in benchmark data

• Comparison of PSMS and pattern search

[Figures: comparison plots on several benchmark data sets, not transcribed.]


Page 46: Novel representations and methods in text classification

PSMS: Interactive demo

http://clopinet.com/CLOP

Isabelle Guyon, Amir Saffari, Hugo Jair Escalante, Gökhan Bakir, and Gavin Cawley. CLOP: a Matlab Learning Object Package. NIPS 2007 Demonstrations, Vancouver, British Columbia, Canada, 2007.

Page 47: Novel representations and methods in text classification

PSMS: APPLICATIONS AND EXTENSIONS

Novel representations and methods in text classification

Page 48: Novel representations and methods in text classification

Authorship verification

Page 49: Novel representations and methods in text classification


Authorship verification

• The task of deciding whether given text documents were or were not written by a certain author (i.e., a binary classification problem)

• Applications include: fraud detection, spam filtering, computer forensics and plagiarism detection

Page 50: Novel representations and methods in text classification

Authorship verification

[Diagram: sample documents written by author "X" (letters, papers, e-mails) and sample documents from other authors go through feature extraction and model construction/testing; the resulting model for author "X" is applied to an unseen document and yields a prediction such as "This document was not written by X".]

Page 51: Novel representations and methods in text classification

PSMS for authorship verification

• Documents are represented by their (binary) bag-of-words

• We apply straight PSMS to select the classification model for each author

• We report the average performance over several authors

[Example: the phrase "Iberoamerican congress on pattern recognition" becomes a binary vector over the vocabulary (Accommodation, ..., Iberoamerican, ..., Zorro), with 1s at the positions of the words it contains and 0s elsewhere.]

Hugo Jair Escalante, Manuel Montes, and Luis Villaseñor. Particle Swarm Model Selection for Authorship Verification. LNCS 5856, pp. 563--570, Springer, 2009.
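A minimal sketch of this representation, using scikit-learn's CountVectorizer in binary mode (the two example documents are illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["Iberoamerican congress on pattern recognition",
        "Dear Mary, I have been busy in the last few weeks"]
vectorizer = CountVectorizer(binary=True)  # 0/1 presence, not counts
X = vectorizer.fit_transform(docs)         # sparse (n_docs x vocabulary size)
print(vectorizer.get_feature_names_out())  # the vocabulary
print(X.toarray())                         # the binary bag-of-words vectors
```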

Page 52: Novel representations and methods in text classification

Experimental results

• Two collections were considered for the experiments:

Collection | Training | Testing | Features | Authors | Language | Domain
MX-PO | 281 | 72 | 8,970 | 5 | Spanish | Poetry
CCAT | 2,500 | 2,500 | 3,400 | 50 | English | News

• Evaluation measures:
– Balanced error rate: $BER = \frac{E_{+} + E_{-}}{2}$, the average of the error rates on the two classes
– F1-measure: $F_1 = \frac{2RP}{R + P}$, with precision $P$ and recall $R$

• We compare PSMS to results obtained with individual classification models
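Both measures are easy to compute from the binary confusion matrix; a small sketch with the usual tp/fp/tn/fn counts (zero divisions are not guarded):

```python
def ber(tp, fp, tn, fn):
    """Balanced error rate: mean of the per-class error rates."""
    return 0.5 * (fn / (tp + fn) + fp / (tn + fp))

def f1(tp, fp, fn):
    """F1-measure: harmonic mean of precision and recall."""
    p = tp / (tp + fp)   # precision
    r = tp / (tp + fn)   # recall
    return 2 * r * p / (r + p)
```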

Page 53: Novel representations and methods in text classification

Experimental results

Col. / Class. | zarbi | naïve | lboost | neural | svc | kridge | rf | lssvm | FMS/1 | FMS/0 | FMS/-1
MX-PO | 34.64 | 30.24 | 29.08 | 28.59 | 30.81 | 31.90 | 48.01 | 33.52 | 26.18 | 23.68 | 26.88
CCAT | 14.24 | 26.21 | 15.12 | 41.50 | 29.18 | 27.69 | 47.01 | 36.64 | 35.34 | 13.54 | 16.39

Average (over the authors) balanced error rate obtained by each technique on the test sets.

Page 54: Novel representations and methods in text classification

Experimental results

• Full models selected for the MX-PO data set:

Poet | Preprocessing | Feature selection | Classifier | BER
Efrain Huerta | standardize(1) | - | zarbi | 10.28%
Jaime Sabines | - | - | zarbi | 26.79%
Octavio Paz | normalize(0) | Zfilter(3070) | zarbi | 25.09%
Rosario Castellanos | normalize(0) | Zfilter(3400) | kridge (γ=0.44; s=0.407) | 33.04%
Ruben Bonifaz | shift-scale(log) | - | neural (u=3; s=1.81; i=15; b=1) | 35.71%

Page 55: Novel representations and methods in text classification

Experimental results

Statistics on the selection of methods when using PSMS (FMS/1) for the CCAT collection:

Classifier | zarbi | naive | neural | svc | kridge | lssvm
Frequency of selection | 68% | 10% | 10% | 2% | 2% | 8%
Balanced error rate (BER) | 14.22 | 9.88 | 23.42 | 3.82 | 44.01 | 33.51

Feature selection | with | without
Frequency of selection | 76% | 24%
Balanced error rate (BER) | 14.14 | 22.28

Preprocessing | with | without
Frequency of selection | 88% | 12%
Balanced error rate (BER) | 15.64 | 16.50

Page 56: Novel representations and methods in text classification

Experimental results

• Per-author performance of the models selected with FMS/1 for the CCAT data set

• One author is classified at 96.91% with [normalize+standardize+shift-scale] + [Ftest (104)] + [naïve]; the most relevant features (words) for this author are: china, chinese, beijing, official, diplomat

Page 57: Novel representations and methods in text classification

Experimental results

• Per-author performance of the models selected with FMS/1 for the CCAT data set

• For another author FMS/1 selected [normalize] + [] + [lssvm], reaching only 14.81%; with FMS/0, which selected [normalize+standardize+shift-scale] + [Zfilter (234)] + [zarbi], we got F1 = 45.71%

Page 58: Novel representations and methods in text classification

Lessons learned

• PSMS selected very effective models for each author

• Allowing the user to provide or fix a classifier proved beneficial

• We were not able to outperform baseline methods in the multiclass classification task of authorship attribution

Page 59: Novel representations and methods in text classification

ENSEMBLE PSMS

Novel representations and methods in text classification

Page 60: Novel representations and methods in text classification

Ensemble PSMS

• Many models are evaluated during the search process of PSMS, although only a single model is selected

process of PSMS; although a single model is selected

Page 61: Novel representations and methods in text classification

Ensemble PSMS

• Idea: take advantage of the large number of models evaluated during the search to build ensemble classifiers

• Problem: how to select partial solutions from PSMS so that they are accurate and diverse with respect to each other

• Motivation: the success of ensemble classifiers depends mainly on two key properties of the individual models: accuracy and diversity

Page 62: Novel representations and methods in text classification

Ensemble PSMS

How to select potential models for building ensembles?
• BS: store the global-best model of each iteration
• BI: store the best model of each iteration
• SE: combine the outputs of the final swarm

How to fuse the outputs of the selected models? Simple (un-weighted) voting:

$g(\mathbf{x}) = \frac{1}{L} \sum_{l=1}^{L} f_l(\mathbf{x})$

[Figure: fitness values of the swarm across iterations, indicating which models each strategy collects.]
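For label predictions in {-1, +1}, this un-weighted vote can be sketched as follows; `models` is assumed to be the list of selected full models, each exposing a predict method:

```python
import numpy as np

def vote(models, X):
    """g(x) = sign((1/L) * sum_l f_l(x)): simple un-weighted voting
    over the selected models' outputs (0 indicates a tie)."""
    preds = np.array([m.predict(X) for m in models])  # shape (L, n_samples)
    return np.sign(preds.mean(axis=0))
```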


Page 65: Novel representations and methods in text classification

Experimental results

• Data:
– 9 benchmark machine learning data sets (binary classification)
– 1 object recognition data set (multiclass, 10 classes)

ID | Data set | Training | Testing | Features
1 | Breast-cancer | 200 | 77 | 9
2 | Diabetes | 468 | 300 | 8
3 | Flare solar | 666 | 400 | 9
4 | German | 700 | 300 | 20
5 | Heart | 170 | 100 | 13
6 | Image | 1300 | 1010 | 20
7 | Splice | 1000 | 2175 | 60
8 | Thyroid | 140 | 75 | 5
9 | Titanic | 150 | 2051 | 3
OR | SCEF | 2378 | 3300 | 50

Hugo Jair Escalante, Manuel Montes and Enrique Sucar. Ensemble Particle Swarm Model Selection. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2010 - WCCI 2010), pp. 1814-1821, IEEE, 2010.

Page 66: Novel representations and methods in text classification

Experimental results

• Evaluation:
– Average area under the ROC curve (performance)
– Coincident failure diversity (ensemble diversity):

$\mathrm{CFD} = \begin{cases} 0 & \text{if } p_0 = 1 \\ \frac{1}{1-p_0} \sum_{r=1}^{L} \frac{L-r}{L-1} p_r & \text{if } p_0 < 1 \end{cases}$

where $p_r$ is the probability that exactly $r$ of the $L$ ensemble members fail on a randomly drawn example.

W. Wang. Some fundamental issues in ensemble methods. Proc. of IJCNN 2007, pp. 2244-2251, IEEE, July 2007.
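A sketch of how CFD can be computed from the ensemble members' failure patterns; the `failures` matrix and the function name are illustrative (L >= 2 is assumed):

```python
import numpy as np

def cfd(failures):
    """Coincident failure diversity (Wang, 2007). `failures` is a boolean
    (L x n) matrix: failures[l, j] is True iff member l errs on sample j."""
    L, n = failures.shape
    wrong = failures.sum(axis=0)                  # members failing per sample
    p = np.bincount(wrong, minlength=L + 1) / n   # p_0 ... p_L
    if p[0] == 1.0:
        return 0.0
    r = np.arange(1, L + 1)
    return float(np.sum((L - r) / (L - 1) * p[1:]) / (1.0 - p[0]))
```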

Page 67: Novel representations and methods in text classification

Experimental results: performance

• Benchmark data sets: better performance is obtained by the ensemble methods

ID | PSMS | EPSMS-BS | EPSMS-SE | EPSMS-BI
1 | 72.03±2.24 | 73.40±0.78 | 74.05±0.91 | 74.35±0.49
2 | 82.11±1.29 | 82.60±1.52 | 74.07±13.7 | 83.42±0.46
3 | 68.81±4.31 | 69.38±4.53 | 70.13±7.48 | 72.16±1.42
4 | 73.92±1.23 | 73.84±1.53 | 74.70±0.72 | 74.77±0.69
5 | 85.55±5.48 | 87.40±2.01 | 87.07±0.75 | 88.36±0.88
6 | 97.21±3.15 | 98.85±1.45 | 95.27±3.04 | 99.58±0.33
7 | 97.26±0.55 | 98.02±0.64 | 96.99±1.21 | 98.84±0.26
8 | 96.00±4.75 | 98.18±0.94 | 97.29±1.54 | 99.22±0.45
9 | 73.24±1.16 | 73.50±0.95 | 75.37±1.05 | 74.40±0.91
Avg. | 82.90±2.68 | 83.91±1.59 | 82.77±3.38 | 85.01±0.65

Average accuracy over 10 trials of PSMS and EPSMS on the benchmark data.

Hugo Jair Escalante, Manuel Montes and Enrique Sucar. Ensemble Particle Swarm Model Selection. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2010 - WCCI 2010), pp. 1814-1821, IEEE, 2010.

Page 68: Novel representations and methods in text classification

Experimental results: diversity of the ensemble

• Diversity results (coincident failure diversity):

ID | EPSMS-BS | EPSMS-SE | EPSMS-BI
1 | 0.2055±0.1498 | 0.5422±0.0550 | 0.5017±0.1149
2 | 0.3547±0.1711 | 0.6241±0.0619 | 0.5081±0.0728
3 | 0.1295±0.1704 | 0.4208±0.1357 | 0.4012±0.1071
4 | 0.3019±0.1732 | 0.5159±0.0596 | 0.4296±0.0490
5 | 0.2733±0.1714 | 0.5993±0.0925 | 0.5647±0.0655
6 | 0.7801±0.0818 | 0.7555±0.0524 | 0.8427±0.0408
7 | 0.5427±0.3230 | 0.7807±0.0585 | 0.8050±0.0294
8 | 0.6933±0.1558 | 0.8173±0.0626 | 0.8514±0.0403
9 | 0.7473±0.0089 | 0.7473±0.0089 | 0.7473±0.0089
Avg. | 0.4476±0.1562 | 0.6448±0.0603 | 0.6280±0.0588

EPSMS-SE models are the most diverse.

Hugo Jair Escalante, Manuel Montes and Enrique Sucar. Ensemble Particle Swarm Model Selection. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2010 - WCCI 2010), pp. 1814-1821, IEEE, 2010.

Page 69: Novel representations and methods in text classification

Experimental results: region labeling

Per-concept improvement of EPSMS variants over straight PSMS [figure not transcribed]; overall results:

Measure | PSMS | EPSMS-BS | EPSMS-SE | EPSMS-BI
AUC | 91.53±6.8 | 93.27±5.6 | 92.79±7.4 | 94.05±5.3
MCC | 69.58% | 76.59% | 79.13% | 81.49%

Page 70: Novel representations and methods in text classification

Lessons learned

• Ensembles generated with EPSMS outperformed individual classifiers, including those selected with PSMS

• Models evaluated by PSMS are both accurate and diverse with respect to each other

• The extension of PSMS for generating ensembles is promising

Page 71: Novel representations and methods in text classification

Summary

• Full model selection is a broad view of the model selection problem in supervised learning

• There is increasing interest in this type of method

• The search space is huge and multiple minima may exist

• There is no guarantee of avoiding overfitting

• Yet, we showed evidence that it is possible to attempt to automate the cycle of design of pattern classification systems

Page 72: Novel representations and methods in text classification

Summary

• PSMS / EPSMS is an automatic tool for selecting classification models for any classification task

• PSMS / EPSMS has been successfully applied in several domains

• Disclaimer: we do not expect PSMS / EPSMS to perform well in every application it is tested on, although we recommend it as a first option when building a classifier

Page 73: Novel representations and methods in text classification

Summary

• For multiclass classification, straight OVA strategies did not work

• Alternative methods that do not use the outputs of classifiers are a good option to explore in future work

Page 74: Novel representations and methods in text classification

Research opportunities

• Multi-swarm PSMS for building ensembles

• Multiclass PSMS/EPSMS:
– Bi-level optimization (i.e., individual and multiclass performance)
– Learning to fuse the outputs (e.g., using genetic programming)

• Meta-ensembles

Page 75: Novel representations and methods in text classification


Research opportunities

• Other search strategies for full model selection (e.g., Genetic programming, GRASP, Tabu search)

• Other toolboxes (e.g., Weka)

• Meta-learning + PSMS

• Combination of different full model selection techniques

Page 76: Novel representations and methods in text classification


Applications

• Any classification problem where specific classifiers are required for each class

Page 77: Novel representations and methods in text classification

References

• Isabelle Guyon. A Practical Guide to Model Selection. In Jeremie Marie, editor, Machine Learning Summer School 2008, Springer Texts in Statistics, 2011. (slides courtesy of I. Guyon)

• A. Statnikov, I. Tsamardinos, Y. Dosbayev, C.F. Aliferis. GEMS: A System for Automated Cancer Diagnosis and Biomarker Discovery from Microarray Gene Expression Data. International Journal of Medical Informatics, 2005 Aug;74(7-8):491-503.

• D. Gorissen, T. Dhaene, F. de Turck. Evolutionary Model Type Selection for Global Surrogate Modeling. In Journal of Machine Learning Research, 10(Jul):2039-2078, 2009

• Brazdil P., Giraud-Carrier C., Soares C., Vilalta R. Metalearning: Applications to Data Mining. Springer Verlag. ISBN: 978-3-540-73262-4, 2008.

• H. J. Escalante. Towards a Particle Swarm Model Selection algorithm. Multi-level inference workshop and model selection game, NIPS, Whistler, BC, Canada, 2006.

• Isabelle Guyon, Amir Saffari, Hugo Jair Escalante, Gökhan Bakir, and Gavin Cawley. CLOP: a Matlab Learning Object Package. NIPS 2007 Demonstrations, Vancouver, British Columbia, Canada, 2007.

• H. J. Escalante, E. Sucar, M. Montes. Particle Swarm Model Selection, In Journal of Machine Learning Research, 10(Feb):405--440, 2009.

• Hugo Jair Escalante, Jesus A. Gonzalez, Manuel Montes-y-Gomez, Pilar Gómez, Carolina Reta, Carlos A. Reyes, Alejandro Rosales. Acute Leukemia Classification with Ensemble Particle Swarm Model Selection. Artificial Intelligence in Medicine, 55(3):163-175, 2012.

• Hugo Jair Escalante, Manuel Montes, and Luis Villaseñor. Particle Swarm Model Selection for Authorship Verification. LNCS 5856, pp. 563--570, Springer, 2009.

Page 78: Novel representations and methods in text classification

7th Russian Summer School in Information Retrieval Kazan, Russia, September 2013


Questions?


Page 80: Novel representations and methods in text classification


EPSMS for acute leukemia classification

• Acute leukemia: a malignant disease that affects a considerable portion of the world population

• There are different types and subtypes of acute leukemia, requiring different treatments.

Page 81: Novel representations and methods in text classification

EPSMS for acute leukemia classification

• Different types/subtypes:
– Acute Lymphocytic Leukemia (ALL): L1, L2, L3
– Acute Myelogenous Leukemia (AML): M0, M1, M2, M3, M4, M5, M6, M7

• Considered tasks:
– Binary:
• ALL vs AML
• L1 vs L2
• M2 vs {M3, M5}; M3 vs {M2, M5}; M5 vs {M2, M3}
– Multiclass:
• M2 vs M3 vs M5
• L1 vs L2 vs M2 vs M3 vs M5

Page 82: Novel representations and methods in text classification

EPSMS for acute leukemia classification

• Although advanced and precise methods exist to identify leukemia types, they are very expensive and unavailable in most hospitals of third-world countries

• A cheaper alternative: morphological acute leukemia classification from bone-marrow cell images

Page 83: Novel representations and methods in text classification

EPSMS for acute leukemia classification

• Morphological classification:
– Image registration
– Image segmentation
– Feature extraction (morphological, statistical, texture)
– Classifier construction

[Images: a bone-marrow cell, its nucleus, and its cytoplasm.]

J. A. Gonzalez et al. Leukemia Identification from Bone Marrow Cells Images using a Machine Vision and Data Mining Strategy. Intelligent Data Analysis, 15(3):443-462, 2011.

Page 84: Novel representations and methods in text classification

EPSMS for acute leukemia classification

• In previous work, either:
– the same classification model was used for the different problems, or
– models were manually selected by trial and error

• Proposal: use EPSMS to automatically select specific classifiers for the different tasks

Hugo Jair Escalante, et al. Acute Leukemia Classification with Ensemble Particle Swarm Model Selection. Artificial Intelligence in Medicine, 55(3):163-175, 2012.

Page 85: Novel representations and methods in text classification


EPSMS for acute leukemia classification

• Experiments were performed with real data collected by a Mexican health institution (IMSS), using 10-fold CV

Page 86: Novel representations and methods in text classification

EPSMS for acute leukemia classification

• In general, ensembles generated with EPSMS outperformed previous work

• Models selected with EPSMS were more effective than those selected with straight PSMS

Hugo Jair Escalante, et al. Acute Leukemia Classification with Ensemble Particle Swarm Model Selection. Artificial Intelligence in Medicine, 55(3):163-175, 2012.

Page 87: Novel representations and methods in text classification

EPSMS for ALC - binary tasks

[Results table not transcribed.]

Page 88: Novel representations and methods in text classification

EPSMS for ALC - multiclass

Measure | Region | Ref. | PSMS | E1 | E2 | E3

M2 vs M3 vs M5:
Avg. AUC | C | 78.66 | 92.37 | 93.94 | 93.94 | 93.82
Accuracy | C | 66.13 | 83.37 | 81.32 | 79.76 | 78.79
Avg. AUC | N&C | 92.80 | 92.36 | 93.94 | 93.92 | 93.28
Accuracy | N&C | 84.87 | 81.84 | 81.87 | 82.34 | 79.34

L1 vs L2 vs M2 vs M3 vs M5:
Avg. AUC | C | 84.03 | 91.13 | 93.78 | 93.76 | 83.40
Accuracy | C | 55.86 | 72.86 | 75.83 | 76.06 | 73.92
Avg. AUC | N&C | 92.33 | 90.62 | 94.21 | 94.09 | 86.09
Accuracy | N&C | 77.48 | 71.72 | 74.50 | 75.65 | 74.03

Hugo Jair Escalante, et al. Acute Leukemia Classification with Ensemble Particle Swarm Model Selection. Artificial Intelligence in Medicine, 55(3):163-175, 2012.

Page 89: Novel representations and methods in text classification

Models selected with EPSMS

[Tables: the full models selected by EPSMS for each task, not transcribed.]

Hugo Jair Escalante, et al. Acute Leukemia Classification with Ensemble Particle Swarm Model Selection. Artificial Intelligence in Medicine, 55(3):163-175, 2012.

Page 91: Novel representations and methods in text classification

Lessons learned

• Models selected with PSMS / EPSMS outperformed those manually selected after a careful study

• Different settings of EPSMS outperformed both PSMS and the baseline, although we did not find a single strategy that was best across all of the tasks

• The multiclass classification performance was acceptable, although there is still much work to do