Confidence-Based Fusion of Multiple Feature Cues for Facial Expression Recognition


Spiros Ioannou 1, Manolis Wallace 2, Kostas Karpouzis 1, Amaryllis Raouzaiou 1 and Stefanos Kollias 1

1 National Technical University of Athens, 9, Iroon Polytechniou Str., 157 80 Zographou, Athens, Greece
2 University of Indianapolis, Athens Campus, 9, Ipitou Str., 105 57 Syntagma, Athens, Greece

Abstract — Since facial expressions are a key modality in human communication, the automated analysis of facial images for the estimation of the displayed expression is essential in the design of intuitive and accessible human computer interaction systems. In most existing rule-based expression recognition approaches, analysis is semi-automatic or requires high quality video. In this paper we propose a feature extraction system which combines analysis from multiple channels, based on their confidence, to result in better facial feature boundary detection. The facial features are then used for expression estimation. The proposed approach has been implemented as an extension to an existing expression analysis system in the framework of the IST ERMIS project.

Index Terms — Facial feature extraction, confidence, multiple cue fusion, human computer interaction

I. INTRODUCTION

In recent years there has been a growing interest in improving all aspects of the interaction between humans and computers, providing a realization of the term affective computing [15]. Humans interact with each other in a multimodal manner to convey general messages; emphasis on certain parts of a message is given via speech and display of emotions by visual, vocal, and other physiological means, even instinctively (e.g. sweating) [16].

Interpersonal communication is for the most part completed via the face. Despite common belief, social psychology research has shown that conversations are usually dominated by facial expressions, and not spoken words, indicating the speaker's predisposition towards the listener. Mehrabian indicated that the linguistic part of a message, that is the actual wording, contributes only seven percent to the effect of the message as a whole; the paralinguistic part, that is how the specific passage is vocalized, contributes thirty eight percent, while the facial expression of the speaker contributes fifty five percent to the effect of the spoken message [2]. This implies that facial expressions form the major modality in human communication, and need to be considered by HCI/MMI systems.

In most real-life applications nearly all video media have reduced vertical and horizontal color resolution; moreover, the face occupies only a small percentage of the whole frame and illumination is far from perfect. When dealing with such input we have to accept that color quality and video resolution will be very poor. While it is feasible to detect the face and all facial features, it is very difficult to find the exact boundary of each one (eye, eyebrow, mouth) in order to estimate its deformation from the neutral-expression frame. Moreover, it is very difficult to fit a precise model to each feature or to employ tracking, since high-frequency information is missing in such situations. A way to overcome this limitation is to combine the results of multiple feature extractors into a final result, based on the evaluation of their performance on each frame; the fusion method rests on the observation that having multiple masks for each feature lowers the probability that all of them are invalid, since each of them produces different error patterns.

II. EXPRESSION REPRESENTATION

An automated emotion recognition through facial expression analysis system must deal mainly with two major research areas: automatic facial feature extraction and facial expression recognition. Thus, it needs to combine low-level image processing with the results of psychological studies about facial expression and emotion perception.

Most of the existing expression recognition systems can be classified in two major categories: the former includes techniques which examine the face in its entirety (holistic approaches) and take into account properties such as intensity [9] or optical flow distributions, and the latter includes methods which operate locally, either by analyzing the motion of local features, or by separately recognizing, measuring, and combining the various facial element properties (analytic approaches). A good overview of the current state of the art is presented in [4][10].

In this work we estimate facial expression through the estimation of the MPEG-4 FAPs. FAPs are measured through detection of movement and deformation of local intransient facial features such as the mouth, eyes and eyebrows in single frames. Feature deformations are estimated by comparing their states to some frame in which the person's expression is known to be neutral. Although FAPs [1] provide all the necessary elements for MPEG-4 compatible animation, we cannot use them directly for the analysis of expressions from video scenes, due to the absence of a clear quantitative definition framework. In order to measure FAPs in real image sequences, we have to define a mapping between them and the movement of specific FDP feature points (FPs), which correspond to salient points on the human face.


… boundaries or mask dislocation due to noise. If $y_{comb}$ is the combined machine output and $t$ the desired output, it has been proven in committee machine (CM) theory that the combination error $y_{comb} - t$ from the different machines $f_i$ is guaranteed to be lower than the average error:

$$(y_{comb} - t)^2 = \frac{1}{M}\sum_i (y_i - t)^2 - \frac{1}{M}\sum_i (y_i - y_{comb})^2$$
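As a quick illustration, the following minimal sketch (NumPy, with arbitrary toy machine outputs rather than the paper's mask extractors) verifies this decomposition numerically for the simple unweighted combination:

```python
import numpy as np

# Numerical check of the committee-machine error decomposition quoted above,
# for the unweighted combination y_comb = mean(y_i). The machine outputs are
# arbitrary toy values, not outputs of the paper's feature extractors.
rng = np.random.default_rng(0)

t = 1.0                                   # desired output
y = t + rng.normal(0.0, 0.3, size=5)      # outputs y_i of M = 5 machines f_i
y_comb = y.mean()                         # committee output

avg_error = np.mean((y - t) ** 2)         # (1/M) sum_i (y_i - t)^2
ambiguity = np.mean((y - y_comb) ** 2)    # (1/M) sum_i (y_i - y_comb)^2

# (y_comb - t)^2 = average error - ambiguity, so the committee can never be
# worse than the average member.
assert np.isclose((y_comb - t) ** 2, avg_error - ambiguity)
print(f"{(y_comb - t) ** 2:.4f} = {avg_error:.4f} - {ambiguity:.4f}")
```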

In a static CM, the voting weight for a component is proportional to its error on a validation set. In DCMs (Figure 2), the input is directly involved in the combining mechanism through a Gating Network (GN), which is used to modify those weights dynamically.

Figure 2: Dynamic Committee Machine architecture

In our case, the final masks for the left eye, right eye and mouth, $M_f^{e_l}$, $M_f^{e_r}$, $M_f^{m}$, are considered as the machine output, and the final confidence value of each mask for feature $x$, $M_f^{c,x}$, is considered as the confidence of each machine. Therefore, for feature $x$, each element $m_f^x$ of the final mask $M_f^x$ is calculated from the $n$ masks as:

$$m_f^x = \frac{1}{n}\sum_{i=1}^{n} m_i^x \, M_f^{c_i,x} \, h_i \, g_i \,, \qquad h_k = \begin{cases} 1, & M_f^{c_k,x} \geq t_{vd} \times \max_q M_f^{c_q,x} \\ 0, & M_f^{c_k,x} < t_{vd} \times \max_q M_f^{c_q,x} \end{cases}$$

where $m_i^x$ is the element of mask $M_i^x$, $M_f^{c_i,x}$ the final validation value of mask $i$, and $h_i$ is used to prevent the masks with $M_f^{c_k,x} < t_{vd} \times \max_q M_f^{c_q,x}$ from contributing to the final mask. A sufficient value for $t_{vd}$ is 0.8. The role of the gating variable $g_i$ is to favor the color-based feature extraction methods ($M_1^e$, $M_1^m$) in images of high color quality and resolution. In this stage, two variables are taken into account: image resolution and color quality; since non-synthetic training data for the latter is difficult to acquire, in our first implementation the gating output of variable $g_i$ is not trained but is defined manually as follows:

$$g_i = \begin{cases} 1, & i = 1,\ D_{bp} > 128,\ \sigma_{cr} < t_\sigma,\ \sigma_{cb} < t_\sigma \\ 1/n, & i \neq 1,\ D_{bp} > 128,\ \sigma_{cr} < t_\sigma,\ \sigma_{cb} < t_\sigma \\ 1, & \text{otherwise} \end{cases}$$

where $D_{bp}$ is the bipupil width in pixels and $\sigma_{cr}$, $\sigma_{cb}$ the standard deviations of the $C_r$, $C_b$ channels respectively inside the facial area. It has been found that $\sigma_{cr}$, $\sigma_{cb}$ in the same image are less than $5\times 10^{-3}$ for good color quality and much larger for poor quality images.
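A minimal sketch of this fusion step is given below, assuming each extractor has already produced a mask and a scalar confidence for one facial feature; the array shapes, function names and toy values are illustrative, with $t_{vd} = 0.8$ and the manual gating rule taken from the text:

```python
import numpy as np

def fuse_masks(masks, conf, gates, t_vd=0.8):
    """Combine n candidate masks of one facial feature into a final mask.

    masks : (n, H, W) array of candidate masks
    conf  : (n,) final confidence value of each mask (M_f^{c_i,x})
    gates : (n,) gating weights g_i
    """
    masks = np.asarray(masks, dtype=float)
    conf = np.asarray(conf, dtype=float)
    # h_i: drop masks whose confidence is below t_vd times the best confidence
    h = (conf >= t_vd * conf.max()).astype(float)
    weights = conf * h * np.asarray(gates, dtype=float)
    # m_f^x = (1/n) * sum_i m_i^x * conf_i * h_i * g_i
    return np.einsum("i,ihw->hw", weights, masks) / len(masks)

def gate_values(n, i_color, d_bp, sigma_cr, sigma_cb, t_sigma=5e-3):
    """Manual gating rule: favor the color-based extractor i_color when the
    bipupil width exceeds 128 px and the chroma deviations indicate good
    color quality; otherwise all extractors vote equally."""
    g = np.ones(n)
    if d_bp > 128 and sigma_cr < t_sigma and sigma_cb < t_sigma:
        g[:] = 1.0 / n
        g[i_color] = 1.0
    return g

# Toy usage: three 4x4 candidate eye masks, one of them unreliable.
masks = np.stack([np.eye(4), np.eye(4), np.zeros((4, 4))])
conf = np.array([0.9, 0.8, 0.2])
g = gate_values(3, i_color=0, d_bp=200, sigma_cr=1e-3, sigma_cb=2e-3)
print(fuse_masks(masks, conf, g).round(2))
```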

Figure 3: Original frame (a) and the four detected masks for the eyes (b)-(e) in frame 3528 of the Alyssa sequence [7]
Figure 4: Final mask for the eyes
Figure 5: All detected feature points from the final masks

IV. EXPRESSION ANALYSIS

The feature masks are used to extract the Feature Points (FPs) considered in the definition of the FAPs used in this work. Each FP inherits the confidence level of the final mask from which it derives; for example, the four FPs (top, bottom, left and right) of the left eye share the same confidence as the left eye final mask. FAPs can then be estimated via the comparison of the FPs of the examined frame to the FPs of a frame that is known to be neutral, i.e. a frame which is accepted by default as one displaying no facial deformations. For example, FAP 37, $F_{37}$ (squeeze_l_eyebrow), is estimated as:

$$F_{37} = \left\| FP_{4.5}^{\,n} - FP_{3.11}^{\,n} \right\| - \left\| FP_{4.5} - FP_{3.11} \right\|$$

where $FP_i^{\,n}$ and $FP_i$ are the locations of feature point $i$ on the neutral and the observed face, respectively, and $\| FP_i - FP_j \|$ is the measured distance between feature points $i$ and $j$.
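To make the measurement concrete, here is a small sketch of this computation, assuming feature points are stored as name-to-coordinate dictionaries; the coordinates below are invented for illustration:

```python
import numpy as np

def fp_distance(fps, i, j):
    """Euclidean distance ||FP_i - FP_j|| between two feature points."""
    return float(np.linalg.norm(np.subtract(fps[i], fps[j])))

def fap_37(fps_neutral, fps_observed):
    """F_37 = ||FP_4.5^n - FP_3.11^n|| - ||FP_4.5 - FP_3.11||: positive when
    the two points lie closer together than in the neutral frame."""
    return (fp_distance(fps_neutral, "4.5", "3.11")
            - fp_distance(fps_observed, "4.5", "3.11"))

# Invented pixel coordinates: point 4.5 has moved toward point 3.11.
neutral = {"4.5": (120.0, 80.0), "3.11": (118.0, 95.0)}
observed = {"4.5": (120.0, 84.0), "3.11": (118.0, 95.0)}
print(f"F37 = {fap_37(neutral, observed):.2f}")   # positive -> squeezed
```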

Figure 6: MPEG-4 Feature Points (FPs)

Obviously, the uncertainty in the detection of the feature points propagates to the estimated value of the FAP as well. Thus, the confidence in the value of the FAP in the above example is estimated as:

$$F_{37}^c = \min\left( FP_{4.5}^c, FP_{3.11}^c \right)$$

On the other hand, some FAPs may be estimated in different ways. For example, FAP 31, $F_{31}$, is estimated as:

$$F_{31}^1 = \left\| FP_{3.1}^{\,n} - FP_{3.3}^{\,n} \right\| - \left\| FP_{3.1} - FP_{3.3} \right\|$$

or as:

$$F_{31}^2 = \left\| FP_{3.1}^{\,n} - FP_{9.1}^{\,n} \right\| - \left\| FP_{3.1} - FP_{9.1} \right\|$$

As argued above, considering both sources of information for the estimation of the value of the FAP alleviates some of the initial uncertainty in the output. Thus, for cases in which two distinct definitions exist for a FAP, the final value of the FAP is:

$$F_i = \frac{F_i^1 + F_i^2}{2}$$

The amount of uncertainty contained in each one of the distinct initial FAP calculations can be estimated by

$$u_i^1 = 1 - F_i^{c_1}$$

for the first FAP estimate, and similarly for the other. The uncertainty present after combining the two can be given by some t-norm operation on the two:

$$u_i = t\left( u_i^1, u_i^2 \right)$$

The Yager t-norm (here with parameter $w = 1$) gives reasonable results for this operation:

$$u_i = 1 - \min\left( 1, (1 - u_i^1) + (1 - u_i^2) \right)$$

The overall confidence value for the final estimation of the FAP is then acquired as:

$$F_i^c = 1 - u_i$$

While evaluating the expression profiles, FAPs with greater uncertainty must influence the profile evaluation outcome less; thus each FAP must include a confidence value. This confidence value is computed from the corresponding FPs which participate in the estimation of each FAP.
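The following sketch (plain Python) combines two hypothetical estimates of the same FAP exactly as above, averaging the values and applying the $w = 1$ Yager t-norm to the uncertainties:

```python
def fuse_fap_estimates(f1, c1, f2, c2):
    """Fuse two (value, confidence) estimates of the same FAP.

    The value is the plain average; the uncertainties u = 1 - c are combined
    with the Yager t-norm at w = 1, as in the text.
    """
    value = (f1 + f2) / 2.0
    u1, u2 = 1.0 - c1, 1.0 - c2
    u = 1.0 - min(1.0, (1.0 - u1) + (1.0 - u2))   # Yager t-norm, w = 1
    return value, 1.0 - u                          # (F_i, F_i^c)

# Two weak estimates of FAP 31 (hypothetical numbers) reinforce each other:
print(fuse_fap_estimates(2.1, 0.4, 1.9, 0.3))      # -> approximately (2.0, 0.7)
```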

Finally, FAP measurements are transformed to antecedent values $x_j$ for the fuzzy rules using the fuzzy numbers defined for each FAP, and the confidence degrees $x_j^c$ are inherited from the FAPs:

$$x_j^c = F_i^c$$

where $F_i$ is the FAP on which antecedent $x_j$ is based. More information about the expression profiles used can be found in [3][8].

V. EXPERIMENTAL RESULTS

Facial feature extraction can be seen as a subcategory of image segmentation, i.e. image segmentation into facial features. Zhang [20] reviewed a number of simple discrepancy measures of which, if we consider image segmentation as a pixel classification process, only one is applicable here: the number of misclassified pixels on each facial feature. While manual feature extraction does not necessarily require expert annotation, it is clear that, especially in low-resolution images, manual labeling introduces an error. It is therefore desirable to obtain a number of manual interpretations in order to evaluate the inter-observer variability. A way to compensate for the latter is Williams' Index (WI) [6], which compares the agreement of an observer with the joint agreement of the other observers. An extended version of WI which deals with multivariate data can be found in [19]. The modified Williams' Index divides


the average number of agreements (inverse disagreements, $D_{j,j'}^{-1}$) between the computer (observer 0) and the $n$ human observers ($j$) by the average number of agreements between the human observers:

$$WI = \frac{\frac{1}{n}\sum_{j=1}^{n} \frac{1}{D_{0,j}}}{\frac{2}{n(n-1)} \sum_{j} \sum_{j' : j' > j} \frac{1}{D_{j,j'}}}$$

and in our case we define the average disagreement between two observers $j$, $j'$ as:

$$D_{j,j'} = \frac{1}{D_{bp}} \left| M_j^x \oplus M_{j'}^x \right|$$

where $\oplus$ denotes the pixel-wise XOR operator, $\left| M_j^x \right|$ denotes the cardinality of the mask of feature $x$ constructed by observer $j$, and $D_{bp}$ (bipupil width) is used as a normalization factor to compensate for camera zoom on video sequences.
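A compact sketch of this index is shown below, assuming binary NumPy masks and the XOR-based disagreement just defined; the masks and bipupil width are toy values:

```python
import numpy as np

def disagreement(mask_a, mask_b, d_bp):
    """D_{j,j'} = |mask_j XOR mask_j'| / D_bp for two boolean masks."""
    return np.logical_xor(mask_a, mask_b).sum() / d_bp

def williams_index(computer_mask, human_masks, d_bp):
    """Modified Williams' Index: computer-vs-human agreement (1/D) divided by
    the average pairwise human-vs-human agreement."""
    n = len(human_masks)
    num = np.mean([1.0 / disagreement(computer_mask, h, d_bp)
                   for h in human_masks])
    pair_terms = [1.0 / disagreement(human_masks[j], human_masks[k], d_bp)
                  for j in range(n) for k in range(j + 1, n)]
    den = 2.0 / (n * (n - 1)) * np.sum(pair_terms)
    return num / den

# Toy usage with two human observers on a 6x6 mask (d_bp is illustrative):
base = np.zeros((6, 6), dtype=bool); base[2:4, 2:5] = True
h1 = base.copy()
h2 = base.copy(); h2[1, 2] = True          # humans disagree on one pixel
comp = base.copy(); comp[3, 5] = True      # computer-generated mask
print(f"WI = {williams_index(comp, [h1, h2], d_bp=40.0):.2f}")
```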

From a dataset of about 50000 frames, 250 frames were selected at random and manually labeled by two observers. The distribution of WI is shown in Figure 7. At a value of 0, the computer mask is infinitely far from the observer mask; when the index is larger than 1, the computer generated mask disagrees less with the observers than the observers disagree with each other. TABLE 1 summarizes the results. For the eyes and mouth, WI has been calculated both for the final mask and for each of the intermediate masks: $WI_x$ denotes the WI for single mask $x$, $WI_f$ the WI for the final mask of each facial feature, and $\overline{WI_x}$ the average WI for mask $x$ calculated over all test frames. Figure 7 illustrates the WI distribution on the test frames, calculated on each frame as the average WI of all the final feature masks.

Figure 7: Williams' Index distribution (average on eyes and mouth)
Figure 8: Williams' Index distribution (average on left and right eyebrows)

VI. CONCLUSIONS

Automatic recognition of FAPs is a difficult problem, and relatively little work has been reported [21]. Within the ERMIS [5] framework the majority of the collected data have had the aforementioned quality problems; sometimes one has to compromise between quality and the use of intrusive equipment, and in both the study of emotional cues and HCI, video quality has to be sacrificed. The procedure we have described can exploit anthropometric knowledge [7] to evaluate a set of features extracted by different techniques, in order to improve overall performance. Early tests on both low and high quality video from the ERMIS database have been very promising: the algorithm performs fully unattended FAP extraction and self-recovers in cases of false detections. The system currently runs in MATLAB and its performance is in the order of a few seconds per frame.


TABLE 1: RESULT SUMMARY

| Mask* | mean WI_x | mean WI_f | mean WI_f/WI_x | σ_WI | % of frames where WI_f > WI_x | mean WI in frames where WI_f < WI_x | mean WI in frames where WI_f > WI_x |
|---|---|---|---|---|---|---|---|
| Left eye, NN¹ | 0.6771 | 0.8388 | 1.287 | 0.103 | 74.2 | 0.697 | 0.885 |
| Left eye, 1 | 0.7016 | | 1.216 | 0.056 | 78.8 | 0.731 | 0.868 |
| Left eye, 2 | 0.8219 | | 1.029 | 0.027 | 82.4 | 0.770 | 0.887 |
| Left eye, 3 | 0.8708 | | 0.979 | 0.026 | 44.3 | 0.812 | 0.867 |
| Left eye, 4 | 0.7416 | | 1.131 | 0.057 | 76.2 | 0.811 | 0.847 |
| Right eye, NN¹ | 0.8008 | 0.8756 | 1.093 | 0.020 | 75.2 | 0.672 | 0.946 |
| Right eye, 1 | 0.7185 | | 1.243 | 0.084 | 81.4 | 0.674 | 0.929 |
| Right eye, 2 | 0.7740 | | 1.140 | 0.021 | 58.2 | 0.836 | 0.883 |
| Right eye, 3 | 0.6504 | | 1.346 | 0.028 | 84.5 | 0.632 | 0.920 |
| Right eye, 4 | 0.8939 | | 0.982 | 0.02 | 48.4 | 0.778 | 0.996 |
| Mouth, 1 | 0.7632 | 0.7803 | 1.051 | 0.046 | 59.2 | 0.752 | 0.772 |
| Mouth, 2 | 0.8231 | | 0.963 | 0.038 | 44.8 | 0.721 | 0.852 |
| Mouth, 3 | 0.5703 | | 1.446 | 0.204 | 96.9 | 0.510 | 0.793 |
| Eyebrow, left | | 1.0340 | | | | | |
| Eyebrow, right | | 1.0139 | | | | | |

* WI_x denotes the WI for single mask x and WI_f the WI for the final mask of each facial feature; means are taken over all test frames.
¹ NN denotes the eye mask derived from the eye detection neural network output.

REFERENCES

[1] A. M. Tekalp, J. Ostermann, "Face and 2-D Mesh Animation in MPEG-4", Signal Processing: Image Communication, Vol. 15, pp. 387-421, 2000.
[2] A. Mehrabian, "Communication without Words", Psychology Today, Vol. 2, No. 4, pp. 53-56, 1968.
[3] A. Raouzaiou, N. Tsapatsoulis, K. Karpouzis and S. Kollias, "Parameterized facial expression synthesis based on MPEG-4", EURASIP Journal on Applied Signal Processing, Vol. 2002, No. 10, pp. 1021-1038, Hindawi Publishing Corporation, October 2002.
[4] B. Fasel et al., "Automatic Facial Expression Analysis: A Survey", Pattern Recognition, Vol. 36, pp. 259-275, 2003.
[5] ERMIS, Emotionally Rich Man-machine Intelligent System, IST-2000-29319 (http://www.image.ntua.gr/ermis).
[6] G. W. Williams, "Comparing the joint agreement of several raters with another rater", Biometrics, Vol. 32, pp. 619-627, 1976.
[7] J. W. Young, "Head and face anthropometry of adult U.S. civilians", FAA Civil Aeromedical Institute, 1993.
[8] K. Karpouzis, A. Raouzaiou, A. Drosopoulos, S. Ioannou, T. Balomenos, N. Tsapatsoulis and S. Kollias, "Facial expression and gesture analysis for emotionally-rich man-machine interaction", N. Sarris,
[9] M. A. Turk and A. P. Pentland, "Face recognition using eigenfaces", in Proc. Computer Vision and Pattern Recognition, pp. 586-591, IEEE, June 1991.
[10] M. H. Yang, D. Kriegman, N. Ahuja, "Detecting Faces in Images: A Survey", IEEE Trans. PAMI, Vol. 24, No. 1, pp. 34-58, 2002.
[11] P. Ekman, "Facial expression and emotion", American Psychologist, Vol. 48, 1993.
[12] R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias, W. Fellenz, J. Taylor, "Emotion Recognition in Human-Computer Interaction", IEEE Signal Processing Magazine, pp. 32-80, 2001.
[13] R. Fransens, J. De Prins, "SVM-based Nonparametric Discriminant Analysis, An Application to Face Detection", Ninth IEEE International Conference on Computer Vision, Volume 2, October 13-16, 2003.
[14] R. Plutchik, Emotion: A psychoevolutionary synthesis, Harper and Row, NY, USA, 1980.
[15] R. W. Picard, Affective Computing, MIT Press, Cambridge, MA.
[16] R. W. Picard, E. Vyzas, "Offline and Online Recognition of Emotion Expression from Physiological Data", Emotion-Based Agent Architectures Workshop Notes, Int'l Conf. on Autonomous Agents, pp. 135-142, 1999.
[17] S. Ioannou, A. Raouzaiou, K. Karpouzis, M. Pertselakis, N. Tsapatsoulis, S. Kollias, "Adaptive Rule-Based Facial Expression Recognition", in G. Vouros, T. Panayiotopoulos (eds.), Lecture Notes in Artificial Intelligence, Vol. 3025, Springer-Verlag, pp. 466-475, 2004.
[18] T. G. Dietterich, "Ensemble methods in machine learning", Proceedings of the First International Conference on Multiple Classifier Systems, 2000.
[19] V. Chalana and Y. Kim, "A Methodology for Evaluation of Boundary Detection Algorithms on Medical Images", IEEE Transactions on Medical Imaging, Vol. 16, No. 5, October 1997.
[20] Y. J. Zhang, "A Survey on Evaluation Methods for Image Segmentation", Pattern Recognition, Vol. 29, No. 8, pp. 1334-1346, 1996.
[21] Y. Tian, T. Kanade and J. F. Cohn, "Recognizing Action Units for Facial Expression Analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 2, February 2001.