Database Construction for Speech to Lip-readable Animation Conversion


Transcript of Database Construction for Speech to Lip-readable Animation Conversion

Page 1: Database Construction for Speech to Lip-readable Animation Conversion

Database Construction for Speech to Lip-readable Animation Conversion

Gyorgy Takacs, Attila Tihanyi, Tamas Bardi, Gergo Feldhoffer, Balint Srancsik

Peter Pazmany Catholic University, Faculty of Information Technology 1083 Budapest, Práter u. 50/a. Hungary

Page 2: Database Construction for Speech to Lip-readable Animation Conversion

Specialities of the Faculty of Information Technology at Peter Pazmany Catholic University

Page 3: Database Construction for Speech to Lip-readable Animation Conversion

Main concept of the project: a communication aid for deaf people

• Deaf people have a fantastic ability to understand speech based purely on lip reading.

• The gap between hearing and deaf persons can be eliminated by such everyday equipment as a high-end second- or third-generation mobile phone.

• In our system, we provide part of an animated human face on a display as output to the deaf user.

• It computes directly from the speech…
• Sign-language interpreter
• A rather promising feature of our system is its potentially language-independent operation.

Page 4: Database Construction for Speech to Lip-readable Animation Conversion
Page 5: Database Construction for Speech to Lip-readable Animation Conversion

The system

[System block diagram: speech input → acoustic feature extraction (MFCC) → NN → w1 … w6 weights of the principal components → PCA⁻¹ (inverse PCA) → FP (feature point) coordinates → MPEG-4 based facial animation model]

Page 6: Database Construction for Speech to Lip-readable Animation Conversion

System components

• The input speech sound is sampled at 16 bits / 48 kHz, and then acoustic feature vectors based on Mel-Frequency Cepstral Coefficients (MFCC) are extracted from the signal.

• The feature vectors are sent to the neural network (NN), which computes a special weighting vector [w1, …, w6] that is a compressed representation of the target frame of the animation.

• The coordinates of our selected feature point set, which drive the animation, are obtained as a linear combination of the principal component vectors with the weights coming from the neural net.

• The FP positions are computed in this way at 25 frames per second.

• The final component in our system is a modified LUCIA talking head model.
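
Below is a minimal Python sketch of this pipeline. It is not the authors' implementation: the MFCC extraction is heavily simplified, and the network and PCA parameters are random placeholders standing in for the trained values. Only the data flow and the dimensions given on the slides (16 MFCCs per frame, 6 weights, 15 FPs, 25 fps, 48 kHz input) are taken from the source.

```python
# Sketch of the speech -> MFCC -> NN -> inverse PCA -> FP-coordinate pipeline.
import numpy as np

SAMPLE_RATE = 48_000            # 16-bit / 48 kHz input speech (per the slides)
FRAME_LEN = SAMPLE_RATE // 25   # 25 fps -> 40 ms of audio per video frame
N_MFCC = 16                     # 16 MFCCs per frame (per the NN slide)
N_WEIGHTS = 6                   # w1..w6 PCA weights
N_FPS = 15                      # 15 feature points -> 30 coordinates

def mfcc_stub(frame: np.ndarray, n_coeff: int = N_MFCC) -> np.ndarray:
    """Placeholder acoustic feature extraction (a real system would use a
    proper mel filter bank); returns n_coeff cepstrum-like values."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return np.fft.irfft(np.log(spectrum + 1e-8))[:n_coeff]

rng = np.random.default_rng(0)
# Random stand-ins for the trained neural network (here: one linear layer)
# and for the PCA basis / neutral face learned from the audiovisual database.
nn_matrix = rng.normal(size=(N_WEIGHTS, N_MFCC))
pca_basis = rng.normal(size=(2 * N_FPS, N_WEIGHTS))   # columns p1..p6
neutral_face = rng.normal(size=2 * N_FPS)             # neutral FP coordinates

def speech_to_fp(frame: np.ndarray) -> np.ndarray:
    """One audio frame -> MFCC -> NN -> w1..w6 -> 30 FP coordinates."""
    features = mfcc_stub(frame)
    weights = nn_matrix @ features             # the trained NN would go here
    return neutral_face + pca_basis @ weights  # linear combination of PCs

if __name__ == "__main__":
    audio_frame = rng.normal(size=FRAME_LEN)   # 40 ms of fake speech
    coords = speech_to_fp(audio_frame).reshape(N_FPS, 2)
    print(coords.shape)  # (15, 2): x-y positions of the mouth feature points
```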

Page 7: Database Construction for Speech to Lip-readable Animation Conversion

Database

Synchronised audiovisual Database

Video preprocessing

Audio preprocessing

Page 8: Database Construction for Speech to Lip-readable Animation Conversion

• We decided to employ interpreters to record our audiovisual database.

• So the database contains two-digit numbers, names of months, and days.

• The MPEG-4 standard describes the face with 86 feature points (FPs). We selected 15 FPs around the mouth according to our preliminary results.

• We used commercially available video cameras with 720×576 resolution, 25 fps PAL-format video, which means 40 ms per audio and video frame.

• Automatic marker tracking.
• Frames were binarized according to a statistically determined threshold value. To get a fused patch, a dilation process was performed. Step-by-step erosion then reduces each marker patch to a single pixel, which gives the centres of the FPs (see the sketch below).
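
A minimal sketch of this marker-tracking step, assuming OpenCV and dark markers on a lighter face; the threshold value and the use of connected-component centroids (instead of literal step-by-step erosion) are illustrative assumptions, not the authors' code.

```python
# Binarize -> dilate to fuse each marker into one patch -> take patch centres.
import cv2
import numpy as np

def track_markers(gray_frame: np.ndarray, thresh: int = 60) -> list[tuple[int, int]]:
    """Return the (x, y) centre of every marker patch in one video frame."""
    # 1) Binarize with a fixed (statistically chosen) threshold; markers dark.
    _, binary = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY_INV)

    # 2) Dilate so each marker becomes a single fused patch.
    kernel = np.ones((3, 3), np.uint8)
    fused = cv2.dilate(binary, kernel, iterations=2)

    # 3) The slides erode step by step down to one pixel per patch; connected
    #    component centroids give the same FP centres more directly.
    n, _, _, centroids = cv2.connectedComponentsWithStats(fused)
    return [(int(x), int(y)) for x, y in centroids[1:n]]  # skip background

if __name__ == "__main__":
    frame = np.full((576, 720), 200, np.uint8)   # fake PAL-sized frame
    frame[100:104, 300:304] = 0                  # one fake dark marker
    print(track_markers(frame))                  # roughly [(301, 101)]
```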

Page 9: Database Construction for Speech to Lip-readable Animation Conversion
Page 10: Database Construction for Speech to Lip-readable Animation Conversion
Page 11: Database Construction for Speech to Lip-readable Animation Conversion

• Since each video frame has 15 FPs, one frame is a vector in a 30-dimensional space. Training of the system is much more efficient when the 30 coordinate parameters are compressed into 6 weight parameters.

• The first 6 PCA vectors were selected as a basis, and the database was encoded in 6 dimensions.

Page 12: Database Construction for Speech to Lip-readable Animation Conversion

• The weights are computed as

w_{1..6} = [p_1 … p_6]^T B − c

where P = [p_1 … p_30] is the 30×30 matrix of PCA vectors ordered by eigenvalue, B is the set of 30-dimensional FP vectors, and c is the chosen origin, i.e. the PCA weights of the neutral face. This data reduction causes a 1-3% error (a 1-2 pixel variance in the FP x-y coordinates), which is an acceptable approximation.
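
A minimal numpy sketch of this encoding/decoding is given below, with random stand-in data in place of the tracked FP database and an arbitrary frame used as the neutral face; only the 30-to-6 compression scheme itself follows the slides.

```python
# PCA compression of 30 FP coordinates per frame into 6 centered weights.
import numpy as np

rng = np.random.default_rng(1)
frames = rng.normal(size=(5450, 30))        # stand-in for the FP database

# PCA basis: eigenvectors of the covariance matrix, ordered by eigenvalue.
mean = frames.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(frames - mean, rowvar=False))
P = eigvecs[:, np.argsort(eigvals)[::-1]]   # 30x30, columns p1..p30

k = 6                                       # keep the first 6 components
neutral_face = frames[0]                    # placeholder neutral face
c = P[:, :k].T @ (neutral_face - mean)      # its PCA weights = origin c

def encode(b: np.ndarray) -> np.ndarray:
    """30 FP coordinates -> 6 centered PCA weights: w = [p1..p6]^T (b-mean) - c."""
    return P[:, :k].T @ (b - mean) - c

def decode(w: np.ndarray) -> np.ndarray:
    """6 weights -> approximate 30 FP coordinates (linear combination of PCs)."""
    return mean + P[:, :k] @ (w + c)

b = frames[123]
print("max reconstruction error:", np.abs(decode(encode(b)) - b).max())
```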

Page 13: Database Construction for Speech to Lip-readable Animation Conversion

[Figure: the FP positions expressed by the 1st, 2nd, 3rd and 4th principal components]

Page 14: Database Construction for Speech to Lip-readable Animation Conversion

NN

• A Matrix Back-propagation algorithm (elaborated by David Anguita) was used to train the network [8]. The NN has 3 layers: 80 elements in the input layer to receive 16 MFCCs from 5 frames, a hidden layer of 40 nodes, and an output layer of 6 nodes providing the 6 centered PCA weights that reproduce the FPs.

• The training file contained 5450 frames. The network was trained for 100 000 epochs. This NN model uses the interval [-1, 1] for both inputs and outputs. The MFCC and PCA vectors are scaled linearly by the same value to fit the appropriate range, except for the energy coefficient of the MFCC.
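
The sketch below reproduces only the 80-40-6 layer sizes and the [-1, 1] input/output range given above. It uses PyTorch rather than Anguita's Matrix Back-propagation code, and the tanh activations, optimizer, and random training data are assumptions made to keep the example runnable.

```python
# 80-40-6 network mapping 5 frames x 16 MFCCs to 6 centered PCA weights.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(80, 40),   # input: 16 MFCCs x 5 consecutive frames
    nn.Tanh(),
    nn.Linear(40, 6),    # output: 6 centered PCA weights
    nn.Tanh(),           # keeps outputs in [-1, 1], as on the slide
)

# Random stand-ins for the 5450-frame training file, already scaled to [-1, 1].
mfcc_in = torch.rand(5450, 80) * 2 - 1
pca_out = torch.rand(5450, 6) * 2 - 1

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(100):          # the slides report 100 000 epochs
    optimizer.zero_grad()
    loss = loss_fn(model(mfcc_in), pca_out)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```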

Page 15: Database Construction for Speech to Lip-readable Animation Conversion
Page 16: Database Construction for Speech to Lip-readable Animation Conversion

LUCIA

Page 17: Database Construction for Speech to Lip-readable Animation Conversion

Conclusion

• Visual word recognition, even in the case of a natural and professional speaker, has an error rate of about 3%.

• The animated face model controlled by 15 FP parameters that accurately follow the FP parameters of the interpreter/lip-speaker's face resulted in about 42% errors. Test discussions clarified that the visible parts of the tongue, and the movement of parts of the face other than the mouth, convey additional information that helps correct recognition. The face model itself probably needs further improvement.

• The fundamental result of our system seems to be that completely changing the face-model control from natural parameters to calculated parameters decreases correct recognition by only about 7%.