
Equal Clustering Makes Min-Max Modular Support Vector Machine More Efficient

Yi-Min Wen^{1,2}, Bao-Liang Lu^{1}, and Hai Zhao^{1}

^{1} Department of Computer Science and Engineering, Shanghai Jiao Tong University
1954 Hua Shan Rd., Shanghai 200030, China
{ymwen, blu, zhaohai}@cs.sjtu.edu.cn

^{2} Hunan Industry Polytechnic, Changsha 410007, China

Abstract—Support vector machines have training time at least quadratic in the number of examples, so it is hopeless to apply them directly to large-scale problems. In this paper, a novel clustering algorithm that can equally partition training data sets is proposed to improve the performance of the min-max modular support vector machine (M3-SVM). The simulation results indicate that the proposed clustering method can not only improve the generalization accuracy of the M3-SVM, but also speed up training and reduce the number of support vectors in comparison with the existing random task decomposition method.

I. INTRODUCTION

Today there are many very large-scale data sets, such as public-health data, gene expression data, national economics data, and geographic information data. By using these very large data sets, researchers can obtain higher generalization accuracy, discover infrequent special cases, and avoid over-fitting. However, most existing machine learning methods, such as the support vector machine, are hard to apply to these very large data sets because both their learning time and their space complexity are at least quadratic in the number of examples. Therefore, one of the most challenging problems in the machine learning community is to develop new learning models that can efficiently handle such large data sets. Many efforts have been made to scale up SVMs, such as using clustering methods to preprocess large training data [1] [2], and using cascade and parallel methods to filter non-support vectors [3] [4] [5].

According to Provost [6], parallelism is a good strategy for dealing with large-scale data sets. In the training phase of a parallel learning method, a large training data set is first decomposed and processed in parallel, producing many local learners. In the recognition phase, an unknown sample is presented to all the learners, and the outputs of all the learners are integrated to give a final solution to the original problem [7] [8]. Obviously, the performance of a parallel method depends heavily on the integration principle of the learners and the decomposition strategy for the training data set. In our previous work, a modular support vector machine, called the min-max modular support vector machine (M3-SVM) [9], was proposed to overcome this drawback of traditional SVMs. With two module combination principles, namely the minimization principle and the maximization principle, and a random task partition strategy, the M3-SVM has been successfully applied to various fields such as text classification [9], face recognition [10], and gender recognition [11].

However, how should a task decomposition strategy be chosen according to the integration principle so that the parallel learning method works efficiently? In essence, the integration principle of the M3-SVM is to dynamically choose an appropriate local learner to classify an input sample. From this point of view, the training data should be partitioned according to their distribution in feature space. This is especially important when the samples in the training data set are not identically distributed. Random data partition ignores this and does not work very well [12]. In this paper, a novel clustering method is proposed to decompose a large training data set into many smaller subsets that are roughly the same in size. All the experiments show that the proposed clustering method can not only improve the M3-SVM's generalization accuracy, but also speed up training and reduce the number of support vectors (SVs), in comparison with the traditional random task decomposition method.

II. MIN-MAX MODULAR SUPPORT VECTOR MACHINE

The min-max modular (M3) neural network [8] is a realization of the "divide-and-conquer" principle in machine learning. The M3 network is a general framework for machine learning and can easily address multiclass problems. Here, for simplicity of discussion, only two-class classification is considered. Let the positive and negative training sets be $X^+ = \{(x_i^+, +1)\}_{i=1}^{N^+}$ and $X^- = \{(x_i^-, -1)\}_{i=1}^{N^-}$, where $x_i^+ \in R^n$ and $x_i^- \in R^n$ denote the $i$-th positive and negative training samples, respectively, and $N^+$ and $N^-$ denote the numbers of positive and negative training samples, respectively. The entire training data set can be defined as $S = X^+ \cup X^-$.

In the training phase, the min-max modular method decomposes $X^+$ and $X^-$ into $K^+$ and $K^-$ roughly equal subsets, respectively, by using a data partition strategy:

$$X^+ = \bigcup_{i=1}^{K^+} X_i^+, \quad X_i^+ = \{(x_l^{+i}, +1)\}_{l=1}^{N_i^+}, \quad i = 1, 2, \ldots, K^+$$
$$X^- = \bigcup_{j=1}^{K^-} X_j^-, \quad X_j^- = \{(x_l^{-j}, -1)\}_{l=1}^{N_j^-}, \quad j = 1, 2, \ldots, K^- \quad (1)$$

where $X_i^+ \cap X_{i'}^+ = \emptyset$ for $i \neq i'$ and $X_j^- \cap X_{j'}^- = \emptyset$ for $j \neq j'$; $\emptyset$ denotes the empty set; $N_i^+ = \lfloor N^+/K^+ \rfloor$ for $i = 1, 2, \ldots, K^+ - 1$, where $\lfloor y \rfloor$ denotes the largest integer less than or equal to $y$; $N_{K^+}^+ = N^+ - \sum_{i=1}^{K^+ - 1} N_i^+$; $N_j^- = \lfloor N^-/K^- \rfloor$ for $j = 1, 2, \ldots, K^- - 1$; and $N_{K^-}^- = N^- - \sum_{j=1}^{K^- - 1} N_j^-$. According to (1), every two subsets from $X^+$ and $X^-$ respectively are chosen to construct one two-class classification subproblem, so the original classification problem can be divided into $K^+ \times K^-$ smaller classification subproblems as follows:

$$S_{i,j} = X_i^+ \cup X_j^- \quad (2)$$

Because these $K^+ \times K^-$ smaller subproblems need not communicate with each other in the training phase, they can be handled in parallel or sequentially by the standard SVM method. Therefore, $K^+ \times K^-$ individual classifiers $SVM_{i,j}$, $i = 1, 2, \ldots, K^+$, $j = 1, 2, \ldots, K^-$, will be obtained. Making all subsets in each class roughly equal in size balances the load among the subproblems $S_{i,j}$, although it should be pointed out that an SVM's training time does not depend on problem size alone.

In the recognition phase, a sample $X$ is presented to all classifiers $SVM_{i,j}$, $i = 1, 2, \ldots, K^+$, $j = 1, 2, \ldots, K^-$, and each classifier outputs a decision value denoted by $SVM_{i,j}(X)$. Then the min-max modular method uses two module combination principles to integrate them. By using the "minimization" principle, the $K^-$ individual classifiers in each row are integrated as follows:

$$G_i(X) = \min_{j=1}^{K^-} SVM_{i,j}(X), \quad i = 1, 2, \ldots, K^+ \quad (3)$$

where the "min" operation chooses the minimum value among the $K^-$ decision values $SVM_{i,j}(X)$, $j = 1, 2, \ldots, K^-$. After the "minimization" principle, the "maximization" principle is used to give the final decision value for $X$:

$$C(X) = \max_{i=1}^{K^+} G_i(X) \quad (4)$$

where the "max" operation selects the maximum value among the $K^+$ decision values $G_i(X)$, $i = 1, 2, \ldots, K^+$. Finally, $X$ can be classified according to the final decision value $C(X)$. An M3-SVM for a two-class classification problem is shown in Fig. 1.

Fig. 1. An M3-SVM network consisting of $K^+ \times K^-$ individual SVM classifiers, $K^+$ MIN units, and one MAX unit.
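For concreteness, the decomposition of (1)-(2) and the min-max combination of (3)-(4) can be sketched in Python as follows. This is a minimal illustration rather than the authors' implementation: it uses scikit-learn's SVC (a wrapper around LIBSVM) as the base classifier, and the names M3SVM and equal_split are our own.

import numpy as np
from sklearn.svm import SVC

def equal_split(X, k):
    # Split the rows of X into k roughly equal parts, as in Eq. (1):
    # the first k-1 parts get floor(N/k) samples, the last part the rest.
    base = len(X) // k
    cuts = [i * base for i in range(k)] + [len(X)]
    return [X[cuts[i]:cuts[i + 1]] for i in range(k)]

class M3SVM:
    def __init__(self, k_pos, k_neg, C=1.0, gamma=1.0):
        self.k_pos, self.k_neg, self.C, self.gamma = k_pos, k_neg, C, gamma

    def fit(self, X_pos, X_neg):
        # Eq. (2): one subproblem S_ij = X_i^+ U X_j^- per pair of subsets;
        # each subproblem is trained independently by a standard SVM.
        pos = equal_split(X_pos, self.k_pos)
        neg = equal_split(X_neg, self.k_neg)
        self.models = []
        for P in pos:
            row = []
            for Q in neg:
                X = np.vstack([P, Q])
                y = np.hstack([np.ones(len(P)), -np.ones(len(Q))])
                row.append(SVC(C=self.C, kernel="rbf", gamma=self.gamma).fit(X, y))
            self.models.append(row)
        return self

    def decision_function(self, X):
        # Eq. (3): MIN over each row of subclassifier outputs,
        # Eq. (4): then MAX over the K+ row-wise minima.
        g = np.stack([np.stack([m.decision_function(X) for m in row]).min(axis=0)
                      for row in self.models])
        return g.max(axis=0)

    def predict(self, X):
        return np.where(self.decision_function(X) >= 0.0, 1, -1)

With $K^+ = K^- = 1$ this reduces to a single standard SVM; larger $K$ trades one large quadratic-programming problem for many small, independent ones.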

III. A NEW CLUSTERING METHOD FOR DATA DECOMPOSITION

The data decomposition problem can be described as follows: given a data set $X \subset R^n$, find $m$ subsets $C_i \subset R^n$ ($i = 1, 2, \ldots, m$) that satisfy $\bigcup_{i=1}^{m} C_i = X$ and $\bigcap_{i=1}^{m} C_i = \emptyset$. When training data are not identically distributed, the effectiveness of the random data partition method is unstable and sometimes becomes very bad [12]. To handle this problem, one needs to generate spatially localized clusters that contain (nearly) equal numbers of samples so as to keep the load balanced. Based on the algorithm "GeoClust" [13], a modified clustering method is proposed in this paper. In our proposed method, generating balanced, spatially localized clusters requires solving the following unconstrained nonlinear programming problem:

$$\min f_1 = \max_{i=1}^{m} \left| N_i - \bar{N} \right| \quad (5)$$

where $c_i$ and $N_i$ denote the center vector and the number of samples of the $i$-th cluster, respectively, and $m$ denotes the number of partitions. $\bar{N}$ is the mean number of samples per cluster, i.e., $\bar{N} = \lfloor N/m \rfloor$. The objective function $f_1$ evaluates the degree of balance among all clusters: the smaller the value of $f_1$, the more balanced the clusters are. The algorithm is shown in Fig. 2.

In our proposed equal clustering method, if two clusters contain different numbers of samples, the center $c_i$ of the smaller cluster $C_i$ will move toward the center $c_j$ of the larger cluster $C_j$, while $c_j$ will move along the direction of the vector $c_j - c_i$. Two parameters, $\alpha$ and $\delta$, control the step size of this movement. Apparently, the time complexity of this clustering method is at most $O(m \cdot N \cdot maxiter)$, and there is no memory thrashing in our method even when the data set is very large.

Compared with the algorithm GeoClust, two modifications are made to improve it. The first is that a break condition, "If $f_1 < \varepsilon$, then break", is added to the algorithm. This modification ensures that the algorithm stops at once when the prescribed balance condition is satisfied. If this break condition were removed, the partition might become imbalanced again by the time $maxiter$ is reached, because $f_1$ does not decrease monotonically during the iteration. The second is that the movement distance in GeoClust can, with high probability, become very large and make the two related centers $c_i$ and $c_j$ move too far, so that the new data distribution between these two clusters is still very imbalanced. By limiting the movement distance, the proposed method avoids this problem.


Algorithm: Equal clustering algorithm
Input:
    Data set: $X = \{X_j\}_{j=1}^{N}$, $X_j \in R^n$
    Prescribed number of partitions: $m$
    Maximum number of iterations: $maxiter$
    Learning parameters: $\alpha$ and $\delta$
    Threshold of the value of the objective function: $\varepsilon$

Randomly choose $m$ vectors in $X$, $c_i^0$, $i = 1, 2, \ldots, m$, as the center vectors of the $m$ clusters $C_i$, $i = 1, 2, \ldots, m$
For $t = 1 : maxiter$
    1. Assign each sample $X_j \in X$ to the nearest cluster according to the distances between this sample and all center vectors. At the same time, record the label index of the cluster that each sample belongs to:
       $subclusterlabel[j] = \arg\min_{i=1}^{m} \| X_j - c_i^{t-1} \|$, $j = 1, 2, \ldots, N$
    2. Compute the numbers of samples in each cluster, $N_1, N_2, \ldots, N_m$
    3. Compute the value of the objective function $f_1$
    4. If $f_1 < \varepsilon$, then break
    5. For each cluster $C_i$ ($i = 1, 2, \ldots, m$), update its center as follows:
       5.1 Compute $d_i = \sum_{l=1, l \neq i}^{m} w_{il} (N_l - N_i)(c_l^{t-1} - c_i^{t-1})$, where $w_{il}$ is a distance-decay weight controlled by $\delta$
       5.2 Update the center: $c_i^t = c_i^{t-1} + \alpha \, d_i$
    6. End
End

Fig. 2. Equal clustering algorithm.
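The algorithm of Fig. 2 translates almost line for line into Python. The sketch below is illustrative rather than the authors' code; in particular, the exact weight in step 5.1 is illegible in the scan, so an exponential decay in the inter-center distance, controlled by $\delta$, is assumed here, and the function name equal_clustering and the default parameter values are our own.

import numpy as np

def equal_clustering(X, m, maxiter=100, alpha=1e-4, delta=1.0, eps=10.0):
    # Equal clustering (Fig. 2): partition the N x n data set X into m
    # spatially localized clusters of (nearly) equal size.
    rng = np.random.default_rng(0)
    N = len(X)
    centers = X[rng.choice(N, size=m, replace=False)].astype(float)
    n_bar = N // m                                   # mean cluster size
    labels = np.zeros(N, dtype=int)
    for t in range(maxiter):
        # Step 1: assign each sample to the nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 2: cluster sizes N_1, ..., N_m.
        sizes = np.bincount(labels, minlength=m)
        # Steps 3-4: objective f1 = max_i |N_i - n_bar|; break when balanced.
        if np.abs(sizes - n_bar).max() < eps:
            break
        # Step 5: move each center toward larger neighbors and away from
        # smaller ones; the l = i term vanishes since c_i - c_i = 0.
        new_centers = centers.copy()
        for i in range(m):
            diff = centers - centers[i]              # c_l - c_i
            w = np.exp(-np.linalg.norm(diff, axis=1) / delta)  # assumed weight
            d_i = ((w * (sizes - sizes[i]))[:, None] * diff).sum(axis=0)
            new_centers[i] = centers[i] + alpha * d_i          # step 5.2
        centers = new_centers
    return labels, centers

The early break in steps 3-4 is the first of the two modifications described above; without it the partition could drift back out of balance before $maxiter$ is reached.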

TABLE I
THE PROBLEM STATISTICS AND THE PARAMETERS USED IN SVMS.
(Only the Letter recognition row is legible in the scan.)
Letter recognition | 16 features | 2 classes | 15000 training samples | 5000 test samples | 16

To compare the performance of these two clustering methods, the quantity $f_1 \times iternum$ is introduced, where the variable $iternum$ ($iternum \leq maxiter$) is the number of iterations used in each partitioning. Obviously, the smaller the quantity $f_1 \times iternum$, the better the algorithm performs. The performance comparison between GeoClust and the proposed equal clustering method is shown in Fig. 3. It can be seen that the proposed clustering method performs better than GeoClust.

IV. EXPERIMENTS

A. Experimental Setup

In order to evaluate the performance of the proposed clustering algorithm, simulation experiments are performed to compare the performance of M3-SVMs using random partition and equal clustering partition. The first data set is the Banana data set, which comes from [14], and the second is the Letter recognition data set, which comes from UCI [15]. For the Banana data, all 100 of its realizations are united to construct a large training data set, and one realization of its test data is randomly chosen as the test data set. The Letter recognition problem is transformed into a two-class classification problem by the method used in [16]: the classes "E", "H", "I", "M", "P", "Q", "R", "X", "Y", and "Z" are randomly set as the positive class, while the remaining classes are set as the negative class. The training and test data are normalized to the range [0, 1]. From our experience, the data in these two experiments are not identically distributed. The problem statistics and the selected SVM parameters are shown in Table I.

In this paper, LIBSVM [17], whose cache is set to 100 MB, is selected as the training tool. All the experiments are performed on a PC with a 3.0 GHz CPU and 1 GB of RAM. The kernel used is the radial-basis function, $\exp(-\gamma \| X - X_i \|^2)$. For simplicity, we let $K^+ = K^- = K$ in the M3-SVM method. In order to systematically evaluate the proposed method, the value of $K$ is set to $2, 3, \ldots, 20$ in these experiments; $K = 1$ means that the classifier is trained on the entire training data set. The training time of min-max modular support vector machines is counted in two ways. One is the sequential training time, which is the sum of the training times of all subclassifiers; the other is the parallel training time, which is the longest training time among all the subclassifiers.
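The two time accountings can be computed directly from the per-subproblem wall-clock times. The helper below is a sketch of this bookkeeping under the same illustrative scikit-learn setup as before, not the authors' measurement harness; the name timed_training is our own.

import time
from sklearn.svm import SVC

def timed_training(subproblems, C=1.0, gamma=1.0):
    # Train one SVM per subproblem (X, y) and record its wall-clock time.
    # Sequential time = sum of all times; parallel time = the slowest
    # subproblem, i.e. the makespan with one processor per subproblem.
    models, times = [], []
    for X, y in subproblems:
        t0 = time.perf_counter()
        models.append(SVC(C=C, kernel="rbf", gamma=gamma).fit(X, y))
        times.append(time.perf_counter() - t0)
    return models, sum(times), max(times)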



TABLE II
THE TIME COST OF TWO DATA PARTITIONING METHODS.
Data partition method | Average time per partitioning (s)
RP (random partition) | 1.5
CE (equal clustering) | 10.6

Fig. 3. Performance comparison between the modified GeoClust and the original GeoClust.

In order to ensure the credibility of the conclusions, all experiments are repeated three times and the averages are taken.

B. Experimental Results and Discussion

In all the figures below, "RP" means the random partition method and "CE" means the equal clustering method. For convenience, M3-SVM-RP denotes the min-max modular support vector machine based on the random partition method, while M3-SVM-CE denotes the min-max modular support vector machine based on the equal clustering method.

From Fig. 4, it can be seen that the generalization ability of the M3-SVM is better than that of the standard SVM. The most interesting phenomenon is that M3-SVM-CE achieves higher generalization accuracy than M3-SVM-RP most of the time. Fig. 5 illustrates that M3-SVM-CE generates fewer support vectors than M3-SVM-RP does. In Fig. 6, even when the data partitioning time shown in Table II is taken into account, M3-SVM-CE takes less time than M3-SVM-RP in sequential mode. From Table II, it can be seen that if the data partitioning time is ignored, M3-SVM-CE also runs faster than M3-SVM-RP in parallel mode. The reason is that CE makes the classification subproblems more separable than RP does. On the other hand, even though the data partitioning costs some time, the fewer support vectors and the higher generalization accuracy compensate for it. The smaller the number of support vectors, the lower the test time and the realization cost of the M3-SVM. In sum, the equal clustering method makes the min-max modular support vector machine more efficient than random partition does.

The reason why the equal clustering method performs better than the random partition method can be explained from Figs. 7 and 8. When the training data are randomly partitioned, all the classification subproblems look like the original classification problem, as shown in Fig. 7. Therefore, all the subclassifiers will look alike and fail to complement each other. In comparison, all the classification subproblems focus on local regions of the feature space when the training data are partitioned by equal clustering.



Fig. 4. The upper and lower figures show the accuracy of M3-SVMs for the Banana and Letter recognition problems, respectively.

The subclassifiers are so diverse that the "min" and "max" integration principles work effectively, so the generalization accuracy of M3-SVM-CE is higher than that of M3-SVM-RP most of the time. When the training data are not identically distributed in feature space, random partition mixes the two classes of samples in each subproblem much more, while the equal clustering method decreases the degree of mixture of the two classes in each subproblem and enhances their separability. Therefore, the number of support vectors can be reduced and the training time is shortened. According to Vapnik [18], a decrease in the number of support vectors will often improve the accuracy of support vector machines.

V. CONCLUSIONS

In this paper, a new clustering method is proposed to decompose a large training data set into a number of smaller subsets. The feature of the proposed equal clustering method


is that it can partition a data set more evenly than GeoClust does. It has been observed that the equal clustering method can capture local probability distribution information in the training data, which is ignored by the random task partition method. The simulation results show that when the training data are not identically distributed, the proposed clustering method makes min-max modular SVMs more efficient than the random task partition method does.

ACKNOWLEDGMENTS

This work was supported in part by the National Natural Science Foundation of China via grants NSFC 60375022 and 60473040. The authors thank Yang Yang for useful advice.

REFERENCES

[1] D. Boley and D. W. Cao, "Training support vector machine using adaptive clustering," in Proceedings of SDM'04, Lake Buena Vista, USA, 2004.
[2] T. Evgeniou and M. Pontil, "Support vector machines with clustering for training with very large datasets," Lecture Notes in Artificial Intelligence, vol. 2308, pp. 346-354, 2002.
[3] H. P. Graf, E. Cosatto, L. Bottou, I. Durdanovic, and V. Vapnik, "Parallel support vector machines: the cascade SVM," in Neural Information Processing Systems, vol. 17, MIT Press, 2005.


Fig. 7. Distribution of the Banana training data.

[4] ..., "... method for reducing training time," Lecture Notes in Computer Science.
[5] Y. M. Wen and B. L. Lu, "A hierarchical and parallel method for training support vector machines," Lecture Notes in Computer Science, vol. 3496, pp. 881-886, 2005.
[6] F. Provost and J. M. Aronis, "Scaling up inductive learning with massive parallelism," Machine Learning, vol. 23, pp. 33-46, 1996.
[7] R. Collobert, Y. Bengio, and S. Bengio, "A parallel mixture of SVMs for very large scale problems," in Neural Information Processing Systems, vol. 17, MIT Press, 2004.


Fig. 5. The left and right figures show the number of SVs produced in the Banana and Letter recognition problems, respectively.


Fig. 6. The left and right figures show both sequential and parallel training time in the Banana and Letter recognition problems, respectively.


[8] B. L. Lu and M. Ito, "Task decomposition and module combination based on class relations: a modular neural network for pattern classification," IEEE Trans. Neural Networks, vol. 10, pp. 1244-1256, 1999.
[9] B. L. Lu, K. A. Wang, M. Utiyama, and H. Isahara, "A part-versus-part method for massively parallel training of support vector machines," in Proceedings of IJCNN'04, pp. 735-740, Budapest, Hungary, 2004.
[10] Z. G. Fan and B. L. Lu, "Multi-view face recognition with min-max modular SVMs," Lecture Notes in Computer Science, vol. 3611, pp. 396-399, 2005.
[11] F. C. Lian and B. L. Lu, "Gender recognition using a min-max modular support vector machine," Lecture Notes in Computer Science, vol. 3611, pp. 438-441, 2005.
[12] K. A. Wang, H. Zhao, and B. L. Lu, "Task decomposition using geometric relation for min-max modular SVMs," Lecture Notes in Computer Science, vol. 3496, pp. 887-892, 2005.


[13] A. Choudhury, C. P. Nair, and A. J. Keane, "A data parallel approach for large-scale Gaussian process modeling," in Proceedings of the Second SIAM International Conference on Data Mining, Arlington, USA, 2002.
[14] G. Rätsch, Benchmark data sets. Available: http://ida.first.gmd.de/~raetsch/data/benchmarks.htm
[15] C. L. Blake and C. J. Merz, UCI Repository of Machine Learning Databases, Univ. California, Dept. Inform. Comput. Sci., Irvine, CA, 1998. [Online]. Available: ftp://ftp.ics.uci.edu/pub/machine-learning-databases
[16] G. Rätsch, T. Onoda, and K. R. Müller, "Soft margins for AdaBoost," Machine Learning, vol. 42, pp. 287-320, 2001.
[17] C. W. Hsu and C. J. Lin, "A comparison of methods for multiclass support vector machines," IEEE Trans. Neural Networks, vol. 13, pp. 415-425, 2002.
[18] C. J. C. Burges, "A tutorial on support vector machines for pattern recognition," Data Mining and Knowledge Discovery, vol. 2, pp. 1-47, 1998.

Fig. 8. The decomposition of the Banana data when K = 2. In all figures, the points labelled by '+' and 'o' denote positive and negative samples, respectively. (a), (b), (c), and (d) denote the four classification subproblems obtained using the random task decomposition strategy, and (e), (f), (g), and (h) denote the four classification subproblems obtained using the equal clustering method. It can be seen that equal clustering makes the classification subproblems more separable than random partition does.
