1. Experimental Results: ELM, Weighted ELM, Locally Weighted ELM  2. Problem


Page 1: 1. Experimental Results (ELM, Weighted ELM, Locally Weighted ELM)  2. Problem


Page 2: 1. Experimental Results (ELM, Weighted ELM, Locally Weighted ELM)  2. Problem


Page 3:

All training data are randomly chosen. Targets are normalized to [-1, 1] and features are normalized to [0, 1]. RMSE is used as the evaluation criterion:

$\mathrm{RMSE} = \sqrt{\dfrac{1}{K}\sum_{i=1}^{K}\big(\hat{y}_i - y_i\big)^{2}}$

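As a concrete reading of the criterion above, here is a minimal numpy sketch of the normalization and RMSE computation (the function names rmse and scale are illustrative, not from the slides):

import numpy as np

def rmse(y_hat, y):
    # RMSE over K samples, matching the formula above
    return np.sqrt(np.mean((np.asarray(y_hat) - np.asarray(y)) ** 2))

def scale(v, lo, hi):
    # linearly rescale a vector to the interval [lo, hi]
    v = np.asarray(v, dtype=float)
    return lo + (hi - lo) * (v - v.min()) / (v.max() - v.min())

# targets to [-1, 1], features to [0, 1], as stated on this page:
# targets = scale(targets, -1.0, 1.0); features = scale(features, 0.0, 1.0)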

Page 4:

Sinc function: $y = \sin(x)/x$, X = -10:0.05:10, Train: 351, Test: 50. Parameters listed as (hidden neurons, h, k).

RMSE:
Original ELM (10): 1.95E-1
Weighted ELM (10, 0.01): 9.41E-5
Locally Weighted ELM (10, 1, 20): 1.53E-4

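A small sketch of how this sinc dataset could be generated and split (numpy; the random split is an assumption consistent with page 3, since the slides do not say exactly how the 351/50 split was made):

import numpy as np

x = np.linspace(-10, 10, 401)                      # X = -10:0.05:10 gives 401 points
y = np.where(np.abs(x) < 1e-12, 1.0, np.sin(x) / np.where(np.abs(x) < 1e-12, 1.0, x))

rng = np.random.default_rng(0)
idx = rng.permutation(len(x))                      # random split: 351 train, 50 test
x_train, y_train = x[idx[:351]], y[idx[:351]]
x_test,  y_test  = x[idx[351:]], y[idx[351:]]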

Page 5:


Page 6:

Function: $y = 1.1\,(1 - x + 2x^{2})\,e^{-x^{2}/2}$, X = -5:0.05:5, Train: 151, Test: 50. Parameters listed as (hidden neurons, h, k).

RMSE:
Original ELM (10): 2.81E-1
Weighted ELM (10, 0.01): 1.39E-4
Locally Weighted ELM (10, 1, 20): 8.15E-4
T2FNN: 1.3E-3

Page 7:


Page 8:

Function: $y = f(x_1, x_2, x_3)$ with $x_1, x_2, x_3$ = -1:0.005:1, Train: 351, Test: 50. Parameters listed as (hidden neurons, h, k).

RMSE:
Original ELM (10): 1.41E-4
Weighted ELM (10, 0.01): 3.09E-6
Locally Weighted ELM (10, 1, 20): 2.61E-5

Page 9:

Machine CPU dataset: 6 features, Train: 100, Test: 109. Parameters listed as (hidden neurons, h, k).

RMSE:
Original ELM (10): 0.111342
Weighted ELM (10, 0.9): 0.103473
Locally Weighted ELM (10, 1, 40): 0.105663


Page 10:

Auto Price dataset: 15 features (1 nominal, 14 continuous), Train: 80, Test: 79. Parameters listed as (hidden neurons, h, k).

RMSE:
Original ELM (15): 0.201255
Weighted ELM (10, 0.9): 0.189584
Locally Weighted ELM (10, 0.9, 50): 0.193568


Page 11:

Cancer dataset: 32 features, Train: 100, Test: 94. Parameters listed as (hidden neurons, h, k).

RMSE:
Original ELM (10): 0.533656
Weighted ELM (3, 0.9): 0.528415
Locally Weighted ELM (3, 1, 40): 0.532317


Page 12:

The ELM network has an input layer, a hidden layer, and an output layer.

$T$: the target matrix ($N \times m$)
$\beta$: the output weight matrix ($j \times m$)
$H$: the hidden layer output matrix ($N \times j$),

$H = \begin{bmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_j \cdot x_1 + b_j) \\ \vdots & \ddots & \vdots \\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_j \cdot x_N + b_j) \end{bmatrix}_{N \times j}$

Training solves $\min_{\beta} \| H\beta - T \|$ (i.e. $H\beta = T$ in the least-squares sense), giving

$\beta = (H^{T}H)^{-1}H^{T}T$

The weights between the input layer and the hidden layer, and the biases of the neurons in the hidden layer, are randomly chosen:


weight $\in [-1, 1]$, bias $\in [0, 1]$
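A minimal numpy sketch of this training step (the sigmoid activation and the pseudo-inverse solve are assumptions consistent with the formulas above; the names elm_train and elm_predict are illustrative):

import numpy as np

def elm_train(X, T, n_hidden, rng=None):
    rng = rng or np.random.default_rng(0)
    # random input weights in [-1, 1] and biases in [0, 1], as on this page
    W_in = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(0.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))     # hidden layer output matrix (N x j)
    beta = np.linalg.pinv(H) @ T                  # beta = (H^T H)^(-1) H^T T
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return H @ beta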

Page 13:

$W$: an $N \times N$ diagonal weight matrix, $W = \mathrm{diag}(w_1, \ldots, w_N)$, with

$w_n = \exp(-0.5\, d_n / h)$, $\quad d_n = \sum_{i=1}^{p} (x_{a,i} - x_{n,i})^2$, $\quad n = 1 \sim N$

where $x_a$ is the $a$-th testing data, $x_n$ is the $n$-th training data, and $p$ is the number of features.

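A short numpy sketch of this weighting for one testing sample (assuming the distance and weight definitions exactly as written above; weight_matrix is an illustrative name):

import numpy as np

def weight_matrix(X_train, x_test, h):
    # d_n = sum_i (x_{a,i} - x_{n,i})^2, one squared distance per training sample
    d = np.sum((X_train - x_test) ** 2, axis=1)
    w = np.exp(-0.5 * d / h)                 # w_n = exp(-0.5 * d_n / h)
    return np.diag(w)                        # N x N diagonal matrix W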

Page 14:

Weighted ELM solves $\min_{\beta} \| WH\beta - WT \|$, giving

$\beta = \big((WH)^{T}(WH)\big)^{-1} (WH)^{T} W T$

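The corresponding weighted least-squares solve, as a numpy sketch (W comes from the weighting on the previous page; weighted_beta is an illustrative name):

import numpy as np

def weighted_beta(H, T, W):
    # beta = ((WH)^T (WH))^(-1) (WH)^T W T
    WH = W @ H
    return np.linalg.solve(WH.T @ WH, WH.T @ (W @ T))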

Page 15:

Example: one hidden neuron, three training samples.

$X = [0.1;\ 0.2;\ 0.4]$, $\quad T = [0.5;\ 0.7;\ 0.9]$

Hidden layer output: $H = [0.4750;\ 0.4502;\ 0.4013]$

$\beta = (H^{T}H)^{-1}H^{T}T = 1.5505$

$T - H\beta = [-0.2365;\ 0.0020;\ 0.2778]$, $\quad \| T - H\beta \| = 0.3648$

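A quick numpy check of this worked example, using the values as reconstructed above (the H column matches a sigmoid hidden neuron with input weight -1 and bias 0, which is one choice consistent with these numbers):

import numpy as np

H = np.array([[0.4750], [0.4502], [0.4013]])    # hidden outputs for X = [0.1, 0.2, 0.4]
T = np.array([[0.5], [0.7], [0.9]])

beta = np.linalg.pinv(H) @ T
print(beta)                                     # about 1.5505
print(T - H @ beta)                             # about [-0.2365, 0.0020, 0.2778]
print(np.linalg.norm(T - H @ beta))             # about 0.3648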

Page 16:

Assume the testing data is 0.3 and the testing target is 1.

Distances to the training inputs: $|x_a - x_n| = [0.2;\ 0.1;\ 0.1]$, so (with $h = 2$)

$W = \mathrm{diag}(0.99,\ 0.9975,\ 0.9975)$

$\beta = \big((WH)^{T}(WH)\big)^{-1}(WH)^{T}WT = 1.5534$

$W(T - H\beta) = [-0.2355;\ 0.0007;\ 0.2759]$, $\quad \| W(T - H\beta) \| = 0.3628$

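And the weighted version of the same check (here h = 2 is inferred from the weight values on the slide rather than stated explicitly):

import numpy as np

X = np.array([0.1, 0.2, 0.4])
H = np.array([[0.4750], [0.4502], [0.4013]])
T = np.array([[0.5], [0.7], [0.9]])
x_test, h = 0.3, 2.0

d = (X - x_test) ** 2                            # squared distances to the testing data
W = np.diag(np.exp(-0.5 * d / h))                # diag(0.99, 0.9975, 0.9975)

WH = W @ H
beta_w = np.linalg.solve(WH.T @ WH, WH.T @ (W @ T))
print(beta_w)                                    # about 1.5534
print(np.linalg.norm(W @ (T - H @ beta_w)))      # about 0.3628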

Page 17:

Locally Weighted ELM: find the $k$ training samples nearest to the testing data, then solve the same weighted problem using only those $k$ samples:

$W = \mathrm{diag}(w_1, \ldots, w_k)$ (a $k \times k$ diagonal matrix)

$\min_{\beta} \| WH\beta - WT \|$, $\qquad \beta = \big((WH)^{T}(WH)\big)^{-1}(WH)^{T}WT$

where $H$ and $T$ are built from the $k$ nearest training samples only.

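Putting the pieces together, a numpy sketch of Locally Weighted ELM prediction for one testing sample (it reuses the random hidden layer from the ELM sketch on page 12; all names are illustrative):

import numpy as np

def lw_elm_predict(X_train, T_train, x_test, W_in, b, h, k):
    def hidden(X):
        # same random-feature hidden layer as in the ELM sketch
        return 1.0 / (1.0 + np.exp(-(X @ W_in + b)))

    d = np.sum((X_train - x_test) ** 2, axis=1)          # squared distances
    nearest = np.argsort(d)[:k]                          # k nearest training samples
    W = np.diag(np.exp(-0.5 * d[nearest] / h))           # local weights, as on page 13

    H = hidden(X_train[nearest])
    WH = W @ H
    beta = np.linalg.solve(WH.T @ WH, WH.T @ (W @ T_train[nearest]))
    return hidden(x_test[None, :]) @ beta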

Page 18:

Problems: the paper data; the randomly chosen weights and biases; the output of the nearest data (feature selection…?)

