
3D Face Reconstruction from 2D Pictures: First Results of a Web-Based Computer Aided System for Aesthetic Procedures

THIAGO OLIVEIRA-SANTOS,1 CHRISTIAN BAUMBERGER,1 MIHAI CONSTANTINESCU,2 RADU OLARIU,2 LUTZ-PETER NOLTE,1 SALMAN ALARAIBI,1 and MAURICIO REYES1

1Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstr. 78, 3014 Bern, Switzerland; and 2Department of Plastic, Reconstructive, and Aesthetic Surgery, Inselspital, Bern, Switzerland

(Received 2 July 2012; accepted 5 January 2013)

Associate Editor Ioannis A. Kakadiaris oversaw the review of this article.

Address correspondence to Thiago Oliveira-Santos, Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstr. 78, 3014 Bern, Switzerland. Electronic mail: thiago.oliveira@istb.unibe.ch

Abstract—The human face is a vital component of our identity, and many people undergo medical aesthetic procedures in order to achieve an ideal or desired look. However, communication between physician and patient is fundamental to understand the patient's wishes and to achieve the desired results. To date, most plastic surgeons rely on either "free hand" 2D drawings on picture printouts or computerized picture morphing. Alternatively, hardware-dependent solutions allow facial shapes to be created and planned in 3D, but they are usually expensive or complex to handle. To offer a simple and hardware-independent solution, we propose a web-based application that uses 3 standard 2D pictures to create a 3D representation of the patient's face, on which facial aesthetic procedures such as filling, skin clearing or rejuvenation, and rhinoplasty are planned in 3D. The proposed application couples a set of well-established methods together in a novel manner to optimize 3D reconstructions for clinical use. Face reconstructions performed with the application were evaluated by two plastic surgeons and also compared to ground truth data. Results showed the application can provide accurate 3D face representations to be used in clinics (within an average of 2 mm error) in less than 5 min.

Keywords—Statistical shape modelling, Plastic surgery, Communication tool.

INTRODUCTION

Along with its functional aspects, the human face is a vital component of our identity, and it provides us with many intricate and complex communication channels to society. Many people with a desire for change, suffering from low self-esteem, or seeking an ideal look according to society, undergo medical aesthetic procedures.6 However, communication between physician and patient is fundamental in order to understand the very often subjective wishes of the patient.

Most plastic surgeons rely on either "free hand" two-dimensional (2D) drawings on picture printouts or computerized picture morphing19,20 in order to establish the goals of facial aesthetic procedures, i.e., to discuss the feasible procedures according to the wishes of the patient. However, 2D visualization is limited to one point of view and therefore hinders the overview of the procedure outcome. Three-dimensional (3D) models associated with 3D planning tools can overcome such limitations, but the acquisition of these representations of the patient's face is not trivial. Computed Tomography (CT) scans allow creation of 3D facial shapes,14,15 but their radiation exposure and costs are not acceptable for aesthetic procedures. Other hardware-dependent solutions such as laser or stereophotogrammetric scanners12,16,22 also offer the possibility of creating 3D facial shapes, but such devices are usually expensive or complex to handle. As a result, their use is limited to physicians having the necessary financial or technical resources. Alternatively, hardware-independent methods for creating 3D faces from pictures and video have been proposed, including shape from shading,24 structure from motion,13,23 shape from silhouette,18,21 and statistical facial models.3 While the former three depend on the available lighting conditions, on continuous multiple-frame acquisitions (e.g., video), or on a high number of frames, respectively, the latter has shown very robust results in reconstructing 3D faces by morphing a statistical facial model to a subject-specific face. The use of these methods has been mainly limited to a research environment, and therefore they are not optimized for clinics. Limitations on speed and accuracy and the lack of planning capabilities hinder the direct use of these techniques as a physician-patient communication tool.



In order to overcome these drawbacks, we propose in this paper a web-based and hardware-independent application which creates a 3D representation of the patient's face from 2D digital pictures and enables planning of aesthetic procedures in 3D. The proposed application requires three standard 2D pictures of the patient (one frontal and two profiles) and a few landmarks in each image as input. The web-based software computes a patient-specific virtual 3D face on which physicians can directly show the intended procedural changes to the patient with the 3D planning tools from different points of view. Clinical usability was the main focus of this application; therefore, the methods for facial feature detection, 3D reconstruction, and texture mapping were carefully chosen from the literature and optimized to enable their use within standard consultation time. Emphasis was given to the link between the different steps of the pipeline in order to allow for a time-effective application with sufficient accuracy for clinical use. Accordingly, methods for facial feature contour detection based on training from synthetic data and for semi-supervised feature contour correction are proposed. To evaluate this application, a set of face reconstructions was performed and compared to different ground truth data. Variables include 3D reconstruction time and distance to the ground truth, as well as a qualitative evaluation performed by two plastic surgeons (for reconstructions and planning tools).

MATERIALS AND METHODS

Application

Similarly to a previously developed application for 3D visualization of breast augmentation procedures,7 this application is accessible entirely over the internet. Therefore, it requires no additional hardware apart from a standard digital camera. The application runs in a normal web browser, which allows physicians, dermatologists, and aesthetic professionals from all over the world to plan aesthetic facial procedures in 3D following a simple workflow, without dealing with technical challenges (see Fig. 1). First, the user measures the approximate distance between the eyes of the patient with a normal ruler and takes three pictures (one frontal, one from the left profile, and one from the right profile). The measurement and images are uploaded to the application running in a standard web browser. Subsequently, the physician manually places a set of landmarks in the images (28 in total; see the image & landmarks box in Fig. 1) and uploads them to the web server. These landmarks, representing key facial features (pupils, mouth corners, etc.), increase the robustness of the automatic detection of further points (around 120 per image) representing contours of facial features (eyes, mouth, face, etc.) that are necessary for the 3D reconstruction. The physician then has the opportunity to check the detected contour points and correct them if necessary; splines facilitate interaction with the contours (see the feature contours correction box in Fig. 1). Once the correction is finished, the splines are converted back to facial contour points, which are sent to the web server to reconstruct a 3D textured shape of the patient's face based on a 3D statistical shape model. Finally, the 3D representation of the patient's face is displayed in the web browser powered by Unity 3 (Unity Technologies, San Francisco, United States), and the physician may discuss the intended aesthetic procedures with the patient using the included planning tools. The set of tools allows manipulation of the patient-specific virtual face in 3D for different aesthetic procedures such as rhinoplasty, skin fillers, and dermabrasion. The following sections explain the different steps of the pipeline.
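The workflow above can be summarized as a short data-flow sketch. This is purely illustrative: every function below is a hypothetical stub standing in for a step described in the text, not the application's actual API.

```python
# Hypothetical stubs sketching the consultation workflow; they only
# illustrate the data flow between the steps described in the text.

def place_landmarks(images):
    # Physician places 28 landmarks: 6 frontal, 11 per profile image.
    return {"frontal": [(0, 0)] * 6, "left": [(0, 0)] * 11, "right": [(0, 0)] * 11}

def detect_contours(images, landmarks):
    # Server-side 2D ASM detects ~120 feature contour points per image.
    return {view: [(0, 0)] * 120 for view in images}

def correct_with_splines(contours):
    # Physician optionally adjusts the contours via spline handles.
    return contours

def reconstruct_and_texture(contours, images):
    # Server fits the 3D statistical shape model and maps the photo texture.
    return {"vertices": [], "texture": None}

images = {"frontal": None, "left": None, "right": None}
landmarks = place_landmarks(images)
contours = correct_with_splines(detect_contours(images, landmarks))
mesh = reconstruct_and_texture(contours, images)  # displayed in Unity 3
```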

Input Data Acquisition

The instructions for acquiring the input data are simple and easy to replicate in every clinic. Color images should be taken in portrait orientation with the face occupying most of the image (around 60%). Faces on the order of 500 pixels in width have been found to be a good compromise between reconstruction and image quality. The patient should have a neutral expression, with opened eyes and without accessories (e.g., earrings or glasses). In case of long hair, a hair band can be used to keep hair off the facial region. The frontal image should be acquired with the patient facing the camera and with the nose in the center of the image. The profile images should be acquired with the patient's ear and shoulder facing the camera and with the cheek in the center of the image. The camera-to-patient distance should be approximately 2 m; this distance was found to be a good compromise between distortions caused by perspective projection and the optical zoom power of standard digital cameras. The eye distance can be measured by placing a ruler on the nose of the patient and reading the values lying at the center of each eye. This distance is later used to rescale the final reconstructed shape and to perform virtual measurements. Additionally, the physician must place 6 landmarks on the frontal image and 11 in each profile image, following the instructions presented by an interactive tool that highlights the proper location of the currently selected landmark on a sketch of a face. See Fig. 1 for the location of the landmarks.


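As a small illustration of the rescaling step, the sketch below (not the application's code; the pupil indices and the 63 mm value are made up) scales a reconstructed mesh so that its pupil-to-pupil distance matches the ruler measurement.

```python
import numpy as np

def rescale_mesh(vertices, left_pupil_idx, right_pupil_idx, measured_eye_dist_mm):
    """Scale mesh vertices so the pupil distance matches the ruler value."""
    model_dist = np.linalg.norm(vertices[left_pupil_idx] - vertices[right_pupil_idx])
    return vertices * (measured_eye_dist_mm / model_dist)

# Toy 3-vertex "mesh" whose pupils are 1.0 model unit apart.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, -0.8, 0.3]])
scaled = rescale_mesh(verts, 0, 1, measured_eye_dist_mm=63.0)
print(np.linalg.norm(scaled[0] - scaled[1]))  # 63.0 (mm)
```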

Statistical Shape Models

Statistical shape modeling is a technique used to represent the shape of objects that possess prior general geometric information but that may vary among a population. For example, the shape of the human face is different for every person, but it follows a general pattern defining the overall location of the eyes, nose, and mouth. This method has been explored for 2D and 3D shapes, and its main applications include object detection in images and regression for estimating object shapes.3,5,17 The first is typically known as the active shape model (ASM), while the second is known as shape or surface reconstruction. We explore statistical shape models in two different ways (2D and 3D): to detect contour points of facial features in the images and to reconstruct the final representation of the patient's face, respectively. The 3D statistical shape model used here was created using 200 faces of young adults (100 males and 100 females).3 In order to create a statistical model out of shapes represented by vertices, a one-to-one correspondence between the vertices of the shapes had to be established (for further details on establishing vertex correspondence the reader is referred to Blanz and Vetter3 and Cootes et al.5). Assuming vertex correspondence and a common coordinate system, statistics about the variation among the shapes can be collected and used to reconstruct new shape instances. The model used in this work comprises a mean shape mesh and a mean texture along with two matrices of eigenvectors and their respective eigenvalues (for further details on statistical shape model generation the reader is referred to Blanz and Vetter3).
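To make this concrete, the following minimal sketch builds a PCA shape model from a toy stack of corresponding shapes and synthesizes a new instance as the mean plus weighted eigenvectors (cf. Eq. (1) in the Appendix). The sizes and random data are placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 200, 50                        # toy sizes; rows are flattened (x, y, z)
shapes = rng.normal(size=(m, 3 * p))  # assumes vertex correspondence already

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD of the centered data matrix.
_, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
eigvecs = vt.T                             # columns: eigenvectors (P_v)
sigmas = singular_values / np.sqrt(m - 1)  # per-mode standard deviations

# New shape instance: mean plus weighted sum of the variation modes.
b = rng.normal(size=eigvecs.shape[1])      # weights (b_v)
instance = mean_shape + eigvecs @ (sigmas * b)
print(instance.shape)                      # (150,)
```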

2D Feature Contour Points Detection

In this work, a 2D ASM is used to identify a set of points representing feature contours in the images.17 The search for the best contour point is performed according to the smallest Mahalanobis distance between a profile surrounding the current vertex position and the base profile of the corresponding vertex. The base profile is generated from a training set. The update of the new shape instance is performed by applying a set of weights. The weights are estimated from the difference between the current shape points and their new best positions, as proposed by Cootes et al.5 Convergence is achieved when there are no more changes between the new points and the new estimated shape.
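The candidate-selection step can be sketched as follows: among profiles sampled at candidate positions, pick the one with the smallest Mahalanobis distance to the trained mean profile. The profile length, covariance, and data below are illustrative placeholders, not the trained model.

```python
import numpy as np

def best_candidate(candidate_profiles, mean_profile, cov_inv):
    """Index of the candidate profile with minimal Mahalanobis distance."""
    diffs = candidate_profiles - mean_profile        # (n_candidates, length)
    dists = np.einsum('ij,jk,ik->i', diffs, cov_inv, diffs)
    return int(np.argmin(dists))

rng = np.random.default_rng(1)
mean_profile = rng.normal(size=7)                    # stand-in trained profile
cov_inv = np.linalg.inv(np.eye(7) * 0.5)             # stand-in covariance
candidates = mean_profile + rng.normal(scale=0.3, size=(11, 7))
print(best_candidate(candidates, mean_profile, cov_inv))
```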

FIGURE 1. Overview of the data flow in the different steps of the application, which is divided into two layers separated by the internet cloud: the web browser powered with "Unity 3" at the physician's computer, and the server computer providing the web service. The image & landmarks box highlights the landmarks to be manually defined (yellow crosses): 6 frontal (right eyebrow, eye centers, nose tip, left mouth corner, and chin) and 11 in each profile image (top of the forehead, inflection of the nose with the forehead, end of the eyebrow, eye corner, tip of the nose, corner of the mouth, connection of the chin with the neck, back part of the jaw, bottom and top of the ear, and neck inflection). The feature contours correction box highlights the splines for correction of the feature contour points. The 3D planning box highlights the final 3D shape for planning.



One of the key challenges with ASM is gathering the data to create and train the statistical shape model. Several databases with annotated facial images are available, but the number and position of the annotated points are typically fixed and not suitable for the 3D reconstruction method used here. In order to obtain accurate 3D reconstructions, the approach presented here demands an ASM generated from a database of frontal and profile images annotated with several facial feature points (e.g., eye contours, mouth contours, silhouette contour, etc.). Since such a flexible database is not publicly available, we used the 3D statistical shape model to generate artificial data to train our 2D ASM. The artificial data not only eliminates the variability of the annotation process, but also allows for flexible optimization of the set of points used for 3D reconstruction, since new images and annotations can easily be re-generated. A set of 6000 shapes was artificially generated by randomly varying the shape and texture weights of the 3D statistical shape model. The 3D shapes were subsequently projected from two different points of view: frontal, to simulate the frontal image; and profile, to simulate the lateral view. The backgrounds of these artificial images were replaced by one of 12 randomly selected images of different uniform walls in order to simulate a real scenario. In addition, a subset of vertices representing manually chosen feature contours of the 3D mean shape was defined. The selected vertices were carefully chosen to match their respective facial feature locations in the images. Finally, the projected 3D vertices representing feature contour points, together with the images, were used to train two 2D ASMs: one for frontal images and one for right profile images.
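A stripped-down sketch of this data generation, under the assumption of a toy linear shape model and simple orthographic projections standing in for the paper's camera model:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 50
mean_shape = rng.normal(size=(p, 3))          # toy mean shape
modes = rng.normal(size=(5, p, 3))            # toy variation modes
sigmas = np.array([3.0, 2.0, 1.5, 1.0, 0.5])  # toy mode standard deviations

def random_instance():
    # Sample model weights from a normal distribution, as described above.
    b = rng.normal(size=5)
    return mean_shape + np.tensordot(sigmas * b, modes, axes=1)

def project(vertices, view):
    # Orthographic stand-in: frontal keeps (x, y); profile keeps (z, y).
    return vertices[:, [0, 1]] if view == 'frontal' else vertices[:, [2, 1]]

shape3d = random_instance()
frontal_pts = project(shape3d, 'frontal')     # training points, frontal ASM
profile_pts = project(shape3d, 'profile')     # training points, profile ASM
print(frontal_pts.shape, profile_pts.shape)   # (50, 2) (50, 2)
```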

With the trained 2D ASMs, the search for the 2D feature contour points in a new image can be performed. Firstly, the mean shape, comprising both the feature contour points and the manually annotated landmarks, is aligned to the face according to the landmarks defined on the images by the physician. The algorithm then iteratively searches for the optimal 2D shape. To ensure a stable search, the manually annotated landmarks are considered as ground truth; therefore, points in the 2D shape representing an initial landmark are reset to their respective manually annotated locations at each iteration. The left profile image feature contour points are found by mirroring the left image along the vertical axis, applying the ASM for the right profile, and mirroring the contour points back. (See Appendix for additional feature search information.)
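The mirroring trick is simple enough to show directly; here `run_right_profile_asm` is a hypothetical stand-in for the trained right-profile search:

```python
import numpy as np

def detect_left_profile(image, run_right_profile_asm):
    flipped = image[:, ::-1]                 # mirror along the vertical axis
    points = run_right_profile_asm(flipped)  # (n, 2) array of (x, y) points
    points[:, 0] = image.shape[1] - 1 - points[:, 0]  # mirror x back
    return points

# Toy usage with a dummy "ASM" returning fixed points in a 6-pixel-wide image.
img = np.zeros((4, 6))
dummy_asm = lambda im: np.array([[1.0, 2.0], [4.0, 0.0]])
print(detect_left_profile(img, dummy_asm))   # x becomes 5 - x
```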

2D to 3D Face Reconstruction

One of the challenges with dense 3D shape reconstruction is the computational time required to estimate the set of weights that best represents the desired face. For this reason, iterative approaches,3,5 such as the one used in the previous section, do not provide a clinically acceptable processing time. To overcome such time constraints, Blanz et al.2 proposed a method for reconstructing dense 3D shapes from sparse data. The method presents a time-efficient closed-form solution for reconstructing 3D faces out of a set of points defined on a 2D image, but it is limited to one image. To cope with the time requirements of the dynamic clinical environment while also increasing the information available for reconstruction, we adopt a similar approach based on multiple views, presented by Faggian et al.8 The multiple-view reconstruction method allows for fast reconstruction of a 3D face out of a set of points representing facial features defined in images acquired from different points of view. Basically, an energy function averaging the contribution of the points in each image, together with prior knowledge of the shape of a face, helps to find an optimal set of weights for the reconstruction in one step. (See Appendix for additional information on the energy function.) The set of weights is finally used to obtain the desired shape of the patient's face.

Texture Mapping

Original statistical shape model approaches3 estimate the texture of the shape from a statistical texture model. However, faster and more realistic shape texture can be achieved by mapping real images of patients. With one frontal and two profile images, a good corresponding texture value can be found for each vertex of the shape representing the patient's face. Therefore, the shape texture was mapped from the images acquired during the consultation.

Firstly, a surface parameterization algorithm (Floater mean value coordinates9) was applied to the mean shape to define an offline transformation establishing correspondence between the shape vertices and the texture image. This transformation is used afterwards to generate two intermediate texture images for each patient, one derived from the frontal image and one derived from the profile images. Finally, the two textures are blended into one texture image using a multiband filter.4 (See Appendix for the formulation of the texture mapping.)
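The idea behind the multiband blend4 can be illustrated with a minimal two-band version: low frequencies are mixed with a feathered mask and high frequencies with a hard mask, which is what suppresses visible seams. The filter sizes here are arbitrary; the application's exact filter bank is not specified in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_band_blend(tex_a, tex_b, mask):
    low_a, low_b = gaussian_filter(tex_a, 4), gaussian_filter(tex_b, 4)
    high_a, high_b = tex_a - low_a, tex_b - low_b
    soft = gaussian_filter(mask.astype(float), 8)  # feathered mask for lows
    low = soft * low_a + (1.0 - soft) * low_b
    high = np.where(mask, high_a, high_b)          # hard mask for highs
    return low + high

a = np.full((64, 64), 0.2)
b = np.full((64, 64), 0.8)
m = np.zeros((64, 64), dtype=bool)
m[:, :32] = True                                   # left half from texture a
print(two_band_blend(a, b, m).shape)               # (64, 64)
```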

3D Visualization and Planning Tools

The 3D visualization and planning step is very important because it is where the physician continuously interacts with the system to discuss with the patient. Therefore, a clear and responsive tool is essential to maintain clinical usability. In order to cope with these requirements in a web-based application, the visualization and planning tools are implemented with Unity 3 (an environment for high-end 3D web-based game development). As a plug-in, Unity 3 enables 3D rendering features such as lighting and mesh manipulation in standard web browsers with performance comparable to state-of-the-art platform applications.



Four main planning tools were developed: one for rhinoplasty, one for skin fillers, one for dermabrasion (skin cleaning), and one for comparing before and after planning. The tool for rhinoplasty allows the nose to be manipulated using pre-defined points that are typically changed during plastic interventions. The pre-defined points are used as control points for local interpolation of the 3D mesh and deformation of the nose. The tool for skin filling allows regions on the skin to be delineated and filled with a certain volume that is evenly distributed along the selected region. The injected volume is not intended to represent the volume to be injected in reality, because of difficulties related to absorption and other factors, but rather to illustrate the difference between pre- and post-procedure. The dermabrasion tool allows wrinkles and undesired marks to be removed, as well as rejuvenation of the skin. Basically, a 3D brush selects a circular region on the skin to be smoothed. The region is then mapped to the texture image, where a Gaussian filter is applied. With the comparison tool, physicians and patients can visualize the intended effect of the intervention with pre- and post-planning situations displayed side by side. Additional tools enable measurements of distances between two points along straight lines or along the surface (geodesic paths).
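On the texture image, the dermabrasion operation described above reduces to smoothing only a selected region. A minimal sketch with made-up brush parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_region(texture, center, radius, sigma=3.0):
    """Apply a Gaussian filter only inside a circular brush region."""
    yy, xx = np.mgrid[:texture.shape[0], :texture.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    blurred = gaussian_filter(texture, sigma)
    return np.where(mask, blurred, texture)  # untouched outside the brush

tex = np.random.default_rng(3).random((128, 128))
out = smooth_region(tex, center=(64, 64), radius=20)
print(float(np.abs(out - tex).max()) > 0)    # True: the region was smoothed
```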

Experiments

In order to evaluate our application, three types of data have been used as ground truth (see Table 1): in-model data (IMD), out-model registered data (OMRD), and out-model non-registered data (OMNRD). For IMD, the initial landmarks were automatically generated from their ground truth locations with Gaussian noise (sigma equal to 2, cropped at 4 pixels), and reconstruction was performed automatically without the feature contour correction step (illustrated in Fig. 1). For OMRD and OMNRD, reconstruction was performed by an expert for each case. The time required to perform each step of the pipeline was measured. Finally, the reconstructed faces were compared to the ground truth for each case as follows. Firstly, ground truth and reconstructed shapes were aligned considering the eye, nose, and mouth segments of the 3D statistical shape model.3 Secondly, distances for all three datasets were measured from the vertices of the reconstructed face to their corresponding points on the ground truth surface. For IMD and OMRD (with one-to-one vertex correspondence between ground truth and reconstruction), the shapes were aligned with Procrustes analysis.11 For OMNRD (without vertex correspondence), the shapes were aligned with the iterative closest point (ICP) algorithm.1 The vertex correspondence of IMD and OMRD was not directly used for distance measurement because the correspondence cannot be ensured in flat areas such as the cheeks and forehead after reconstruction. Therefore, two different methods were used to find vertex correspondence in all three datasets14: closest point matching (CPM), which considers the closest point on the ground truth surface as the corresponding point; and thin plate spline plus closest point matching (TPS + CPM), which first warps the reconstructed face with a TPS transformation and a set of landmarks, and subsequently finds the closest point on the ground truth surface. The former is a direct method that is not influenced by human error; nevertheless, it does not ensure correct anatomical correspondence. The latter relies on the manual definition of landmarks, but presents a better anatomical correspondence. Since the distance measured from corresponding points found by TPS + CPM is not necessarily to the closest point between the two surfaces, but is a more anatomically relevant distance, it should result in higher values than the ones found by CPM. A total of 15 validation landmarks were defined in the reconstructed and ground truth shapes (see Electronic Supplementary Material). In addition to the distance measurements, a visual analysis was performed by two plastic surgeons on each of the OMNRD cases to provide qualitative results. The surgeons rated each reconstruction according to the values presented in Table 2 while comparing it to the ground truth and to the pictures. In a last step, the 3D planning tools were evaluated qualitatively on the reconstructed cases.
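For reference, the CPM measure can be approximated with a nearest-neighbor query; the sketch below matches each reconstructed vertex to the nearest ground-truth point (a simplification of the point-to-surface distance used in the paper), with synthetic data standing in for real meshes.

```python
import numpy as np
from scipy.spatial import cKDTree

def cpm_distances(reconstructed, ground_truth):
    """Distance from each reconstructed vertex to its closest ground-truth point."""
    tree = cKDTree(ground_truth)
    dists, _ = tree.query(reconstructed)
    return dists

rng = np.random.default_rng(4)
gt = rng.random((1000, 3)) * 100.0               # toy surface points, in mm
rec = gt + rng.normal(scale=2.0, size=gt.shape)  # ~2 mm perturbation
print(cpm_distances(rec, gt).mean())             # average error in mm
```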

RESULTS

The average time necessary to obtain the 3D face once the 2D images were uploaded to the application was 297.79 ± 90.49 s. This time was divided among the different individual steps of the application: manual definition of the facial landmarks (94.32 ± 36.45 s), 2D feature contour point detection (8.50 ± 3.99 s), manual correction of the feature contours (191.52 ± 70.59 s), 3D face reconstruction (0.71 ± 0.39 s), and texture mapping (0.83 ± 0.23 s).


The average reconstruction error over all cases measured with CPM was below 2 mm for all dataset types. Peaks of up to 2.1 mm per region were noticed for the individual case errors (except for case 299, the worst case, which presented a 2.88 mm error for the mouth region before manual correction of the feature contours, but below 2 mm after manual correction). The average reconstruction error over all cases measured with TPS + CPM was below 2 mm for subjects of IMD and OMRD, and below 3 mm for OMNRD. The distances calculated with CPM and TPS + CPM vertex correspondence, representing the reconstruction error, are presented in different graphs according to dataset type; see Figs. 2 and 3, respectively. Figures 4 and 5 show good and bad examples of reconstructions of each dataset type, respectively. The distance maps present small errors around the eye and chin regions. Larger errors exist in the face region around the cheek and forehead, since the current method does not use information from those regions for reconstruction. The neck and ear regions also showed large errors, but they are not considered in the analysis, since such regions have only been used to complete the appearance of the face. It is worth mentioning that the errors of the IMD cases could still be improved, since no manual corrections were performed for the feature contours. Visual inspection of the 2D feature contour point detection showed that automatic detection of the feature contours failed considerably in 9% of the IMD cases. The reconstruction of such cases could be significantly improved after the manual corrections; see Fig. 6 for two examples.

According to the visual analysis performed by the two surgeons, all reconstructed cases from the OMNRD could be used for communicating with the patient, although some of them presented sub-optimal reconstruction. Out of the 28 real cases reconstructed, 1 and 2 cases were evaluated as a "Bad" reconstruction by surgeons 1 and 2, respectively. No cases were evaluated as "Very Bad" or "Excellent". The average evaluations were in between "Good" and "Very Good", with values of 3.54 and 3.32 for surgeons 1 and 2, respectively. Examples of reconstructions paired with the respective grades attributed by the surgeons are presented in Fig. 7. According to the surgeons, the reconstruction of some cases gave different impressions when analyzed from different points of view (e.g., frontal and profile). For example, case 1 from Fig. 7 (graded as "Very good" by surgeon 2) gives a better frontal impression than profile, while case 3 (graded as "Very good" by surgeon 2) gives a better profile impression. Among the reconstructions, only case 5 diverged significantly between the surgeons ("Very good" by surgeon 1 and "Bad" by surgeon 2). While for surgeon 1 the overall appearance of the face was very well captured, for surgeon 2 it did not replicate the nose very well from the profile and did not capture the face appearance from the frontal view. Another example of a "Bad" reconstruction for surgeon 2 can be seen in case 6. The same case was considered "Good" by surgeon 1, with a better profile view than frontal (the face appeared thinner than the subject). Case 3 was considered only "Good" by both surgeons because of differences in the facial curves around the cheek region.

TABLE 1. Description of data sets used for evaluation.

Data set: IMD
Data description: A set of 300 face shapes artificially generated by randomly varying the parameters of the 3D statistical shape model using a normal distribution. For further details the reader is referred to Blanz and Vetter.3
2D input images: Artificially created frontal, right profile, and left profile images (972 × 1280 pixels and eye distance of around 240 pixels).
Aim: Allow verification of the capability of the application to reproduce a well-known face.

Data set: OMRD
Data description: A set of 10 face shapes provided with the 3D statistical shape model3 and therefore with one-to-one vertex correspondence to the mean shape, but out of the faces used for creating the model.
2D input images: Artificially created frontal, right profile, and left profile images (972 × 1280 pixels and eye distance of around 240 pixels).
Aim: Allow verification of the capability of the application to reproduce an unknown face.

Data set: OMNRD
Data description: A set of 28 newly scanned faces (five males and five females scanned with an Artec MHT 3D scanner, Artec Group, Moscow, Russia), therefore out of the faces used for creating the model and without vertex correspondence.
2D input images: Acquired with a standard digital camera (Nikon Coolpix L23 with 5× optical zoom, 2736 × 3648 pixels).
Aim: Allow verification of the capability of the application to reproduce an unknown face.

TABLE 2. Rating system used for the qualitative evaluation of the reconstructed 3D faces.

Value / Meaning / Description
5, Excellent: No apparent differences from the patient can be noticed.
4, Very Good: It is very similar to the patient apart from minor details.
3, Good: It is similar to the patient and good for communication with the patient.
2, Bad: Large differences to the patient can be noticed, but it can still be used for communication of certain procedures.
1, Very Bad: It has no resemblance with the patient and cannot be used for any kind of communication.



FIGURE 2. Three graphs (one per dataset type) illustrate the average CPM distance and standard deviation for the three best cases (blue bars, one bar per case), over all cases (green bar, average considering all cases), and for the three worst cases (red bars, one bar per case), grouped by segments. Classification of the best and worst cases is according to the Nose + Mouth + Eyes region. Nose + Mouth + Eyes represents the error considering vertices from the nose, mouth, and eyes segments. Nose, Mouth, and Eyes represent the error considering vertices from each of the segments individually. Different colors represent different subjects.



The planning tools enabled emulation of various aesthetic procedures. The rhinoplasty arrows, located at crucial points typically considered for intervention, allowed for easy manipulation of the nose in 3D. Simple emulation of filling procedures could be achieved by delineating the region to be filled and varying the amount of filling to be injected. Wrinkles could be quickly removed from the patient's skin in selected regions of the face. The planned procedure could be directly visualized on the 3D face from different angles. Additionally, pre- and post-procedure emulations could be compared side by side in order to emphasize the modifications achieved. An illustration of the results of the planning tools validation on a random case is displayed in Fig. 8.

DISCUSSION

This paper presents the first results of a web-based computer-assisted system for aesthetic procedure consultations that enables physicians to emulate different procedures on a virtual 3D representation of the patient's face. The application aims to facilitate communication between physicians and patients. The simple workflow, requiring no additional expensive or complicated hardware, is a great advantage for plastic surgeons, dermatologists, and aesthetic professionals who do not have the resources to acquire currently available approaches, or who are reluctant to adopt such complex technologies. Current hand-held scanners are still optimized for accuracy and lack usability in clinics. A web-based application not only provides worldwide access, allowing for online discussions between physicians, but also simplifies upgrades and maintenance (considered a highlight in the clinical community), since doctors only need to log in and use the application. The pipeline is based on standard 2D images that are already part of the standard clinical workflow, therefore requiring no additional steps. The automatic steps of the application are performed within a few seconds and are of no concern in this scenario. Among the automatic methods, this work proposes a feature contour detection that takes advantage of synthetic data generated by the 3D statistical model, to facilitate the engineering of applications similar to the one presented here. Our results showed that physicians are able to reconstruct faces of patients in less than five minutes on average, which allows the application to be used within standard consultation time. Since the application runs on a server and the input images are scaled to a standard size (e.g., interpupillary distance) before processing, the processing time of those steps is expected to be similar for different cases.

Currently, the most time-consuming parts of the procedure are the manual definition of the facial landmarks and the correction of the facial feature contours (averaging around 2 and 3 min, respectively). No difficulties with those steps were reported by the volunteers who tested the application, since the steps are facilitated by semi-supervised methods (e.g., spline contours). The experiments with IMD cases showed that the manual correction of the facial feature contours can improve the reconstruction results (see Fig. 6), but it was not necessary for most of the cases. Therefore, faster but less accurate reconstructions can be obtained without manual correction, depending on the needs of each user. Furthermore, automatic detection of the manually defined landmarks is part of future improvements of the system.

FIGURE 3. Three graphs (one per dataset type) illustrate the average TPS + CPM distance and standard deviation for the three best cases (blue bars, one bar per case), over all cases (green bar, average considering all cases), and for the three worst cases (red bars, one bar per case), grouped by segments. Classification of the best and worst cases is according to the Nose + Mouth + Eyes region. Nose + Mouth + Eyes represents the error considering vertices from the nose, mouth, and eyes segments. Nose, Mouth, and Eyes represent the error considering vertices from each of the segments individually. Different colors represent different subjects.



FIGURE 4. Examples of good reconstruction results showing the input 2D pictures, the reconstructed 3D face with and without texture, and the distance map.



FIGURE 5. Examples of bad reconstruction results showing the input 2D pictures, the reconstructed 3D face with and without texture, and the distance map.



In this study, two distance measures were used (considering CPM and TPS + CPM point correspondence) to compare reconstructions and ground truth. The comparison with the ground truth allowed identification of the distribution of the error over the face. The graphs showed that, of the three regions, the mouth usually has a higher error. Distance maps showed that the cheek and forehead regions concentrate most of the errors. However, as can be seen in Figs. 4, 5, and 7, the texture mapping plays an important role in the overall perception of the face. The texture seems to minimize the perception of small errors in the shape reconstruction. From our experiments, it was noticed that imperfections are less perceived when casually examining the reconstructed face than when thoroughly analyzing it; the former is usually the case when communicating a certain procedure to the patient in a dynamic clinical scenario. The reconstructions in our evaluation presented very stable texture mapping. None of the cases showed major texture problems, such as background appearing as part of the face or stitching effects, even for the IMD cases without manual correction of the feature contours (example in Fig. 6). The qualitative analysis performed by the two surgeons on the reconstructions showed that most of the cases were evaluated as "Good" or "Very Good", which supports the use of the application in clinics. Although there were sub-optimal reconstructions, the surgeons would still use them to discuss with the patient. According to the surgeons, "Bad" reconstructions would reduce the visual impact of the application, but would not hinder its use as a communication tool. From the clinical point of view, some reconstructions gave different impressions when analyzed from different viewpoints, which could make them less suitable for discussing certain procedures than others. For example, a case with a better frontal than profile impression could be used for communicating skin clearing or rejuvenation better than for other procedures. Although texture can reduce some of the perceived reconstruction errors, large errors in the shape can still affect the overall appearance of the face. For example, case 6 of Fig. 7 is also illustrated as a sub-optimal reconstruction in Fig. 5, with large errors in the cheek bone region. Such errors in shape made the subject look thinner than he actually is and reduced the score given by surgeon 2. According to the surgeons, rhinoplasty and other profile-altering surgical procedures rely more on the profile view and therefore on shape reconstruction accuracy. Hence, accurate shape reconstructions (illustrated in Figs. 2 and 4) can facilitate discussions on rhinoplasty, increasing the power of the application as well as giving a better overall impression to patients and physicians. None of the cases was evaluated as "Excellent", showing that there are limitations in the actual face appearance reproduction accuracy when compared to 3D scanner devices, which require a more complex setup and post-processing. In our results, errors seemed to be higher for subjects with features not belonging to the population used to create the 3D statistical shape model used in this work (200 young Caucasians). Therefore, future work includes extending the range of face representations of our application by expanding the current 3D statistical shape model and by creating similar models for different races. Additionally, wrinkles are typically not reconstructed in the shape in the current version, since the model was mostly created with young subjects; therefore, wrinkles are represented as texture only.


FIGURE 6. Examples of reconstruction improvement with manual feature contour correction of two IMD cases. Case one (first row) shows most improvement in the eyes and lips region, while case two (second row) shows most improvement in the mouth region.



The planning tools were created using feedback from plastic surgeons and optimized for fast and intuitive 3D operations. The proposed operations can be performed in real time in the web browser of common personal computers. The application offers possibilities for emulating filling, skin clearing or rejuvenation, and rhinoplasty procedures. Additionally, visualization tools allow the user to compare pre- and post-intervention scenarios in a synchronized way, which enriches the decision making of the physician and the communication with the patient.

FIGURE 7. Examples of reconstructed cases. From left to right, the figure shows the original images (part of the input), the respective reconstructed 3D face from 4 points of view, and the evaluations of surgeons 1 and 2 according to Table 2.



In summary, we have presented the first results of a web-based 2D-to-3D facial reconstruction tool which provides sufficiently high precision for communication between physician and patient when visualizing facial treatment options. Patient understanding of the aesthetic procedure, and consequently satisfaction with the consultation, is expected to increase with the use of 3D virtual face representations and procedure planning. The current results warrant further evaluation of the application in a clinical setting, with large-scale evaluation of this novel method by physicians, aesthetic professionals, and patients.

APPENDIX

Let $s = (s_1, \ldots, s_m)$ be a set of $m$ shapes represented by $p$ corresponding vertices, $s_i = v = (v_1, \ldots, v_p)^T$, where $v_i \in \mathbb{R}^3$ represents the $x$, $y$, and $z$ coordinates. New shape instances $v$, where $v \notin s$, can be created with a linear combination of weights:

$$v = \bar{v} + P_v \, \mathrm{diag}(\sigma_v) \, b_v, \qquad (1)$$

where $\bar{v} = \frac{1}{m} \sum_{i=1}^{m} s_i$ is the mean shape, $P_v = (P_1^v, \ldots, P_m^v)$ is a matrix of eigenvectors and $\mathrm{diag}(\sigma_v)$ is a diagonal matrix with the respective eigenvalues, both obtained by applying principal component analysis (PCA)10 to the $m$ shapes, and $b_v = (b_1^v, \ldots, b_m^v)^T$ is a vector of weights with $b_i^v \in \mathbb{R}$.

With an analogous approach, the texture values of the vertices of these shapes, $s_i^t = t = (t_1, \ldots, t_p)^T$, where $t_i \in \mathbb{R}^3$ represents the $r$, $g$, and $b$ values, can be modeled as a linear combination of weights $b_t = (b_1^t, \ldots, b_n^t)^T$. New texture instances can be estimated as

$$t = \bar{t} + P_t \, \mathrm{diag}(\sigma_t) \, b_t, \qquad (2)$$

where $\bar{t} = \frac{1}{m} \sum_{i=1}^{m} s_i^t$ is the mean texture, and $P_t = (P_1^t, \ldots, P_n^t)$ is a matrix of eigenvectors and $\mathrm{diag}(\sigma_t)$ is a diagonal matrix with the respective eigenvalues, both obtained by applying PCA to the $m$ shape textures.

FIGURE 8. Illustration of the 3D planning tools. (a) Tools for nose correction that can be used for rhinoplasty. (b) Tools for emulating filling procedures, in which physicians define a region and an amount to be filled. (c) A tool for cleaning the skin. (d) A comparison of pre- and post-procedure planning.


A 2D statistical shape model can be created with a similar approach, but considering a set of $h$ feature contour points $f^a = (f_1, \ldots, f_h)^T$ to be automatically detected in the images, where $f_i \in \mathbb{R}^2$ represents the $x$ and $y$ coordinates and $a \in \{\text{fi}, \text{ri}, \text{li}\}$ indicates the frontal, right profile, and left profile images, respectively. Let $l^a = (l_1, \ldots, l_k)^T$ be a set of $k$ manually defined landmarks, where $l_i \in \mathbb{R}^2$ represents $x$ and $y$ coordinates and $l^a \subset f^a$. The 2D ground truth locations of the facial feature contour points $f^a$ used for training the 2D ASM are calculated as

$$f^{fi} = T_p^{fi} T_s^{fi} v, \qquad f^{ri} = T_p^{ri} T_s^{ri} v, \qquad (3)$$

where $T_p^{fi}$ and $T_p^{ri}$ represent the frontal and right profile 3D-to-2D projections, respectively, and $T_s^{fi}$ and $T_s^{ri}$ are transformations (manually defined offline) that select a subset of vertices representing facial features on the 3D mean shape (i.e., the landmarks $l^{fi}$, $l^{ri}$, and $l^{li}$, and additional features such as eye contours, mouth contours, etc.). Random shapes and textures used for training the 2D ASM were generated by varying the weights $b_v$ and $b_t$ in Eqs. (1) and (2) according to a normal distribution.

The 2D facial feature contour point search is performed in two steps. Firstly, an initial alignment of the mean shape, $\bar{f}^{fi}$ or $\bar{f}^{ri}$, with the manually defined landmarks, $l^{fi}$ or $l^{ri}$, is performed following

$$f^{fi*} = \mathrm{TPS}(\mathrm{PROC}(\bar{f}^{fi}, l^{fi}), l^{fi}), \qquad f^{ri*} = \mathrm{TPS}(\mathrm{PROC}(\bar{f}^{ri}, l^{ri}), l^{ri}), \qquad (4)$$

where $f^{fi*}$ and $f^{ri*}$ are the initial positions of the shape search for the frontal and right profile, respectively, $\mathrm{TPS}(m, n)$ is a thin plate spline transformation of the point set $m$ to a subset of control points $n$, and $\mathrm{PROC}(m, n)$ is a Procrustes transformation of the point set $m$ to a subset of points $n$. Secondly, an iterative process5 searches for the optimal 2D shapes, i.e., $f^{fi}$, $f^{ri}$, and $f^{li}$ for the frontal, right profile, and left profile, respectively.

With the sets of points representing the 2D facial feature contours from different points of view, the optimal weights $b_v$ can be found by minimizing the energy function8:

$$
\begin{aligned}
E ={} & \tfrac{1}{3} \left\| T_p^{fi} T_s^{fi} P_v \, \mathrm{diag}(\sigma_v) \, b_v - \left( f^{fi} - T_p^{fi} T_s^{fi} \bar{v} \right) \right\|^2 \\
 &+ \tfrac{1}{3} \left\| T_p^{ri} T_s^{ri} P_v \, \mathrm{diag}(\sigma_v) \, b_v - \left( f^{ri} - T_p^{ri} T_s^{ri} \bar{v} \right) \right\|^2 \\
 &+ \tfrac{1}{3} \left\| T_p^{li} T_s^{li} P_v \, \mathrm{diag}(\sigma_v) \, b_v - \left( f^{li} - T_p^{li} T_s^{li} \bar{v} \right) \right\|^2 \\
 &+ \eta \left\| b_v \right\|^2, \qquad (5)
\end{aligned}
$$

where the first three terms represent the contribution of the contour points in each image (frontal, right profile, and left profile, respectively) and the last term represents a prior that keeps the desired shape close to the shape of a human face, in this case the mean shape. The last term is necessary because the energy function can converge to different minima, and, given errors in locating the facial feature contour points, the optimal minimum might correspond to a non-face-like shape. A closed-form solution that solves the equation above in one step can be found in Blanz et al.2 and Faggian et al.8 Finally, $b_v$ is substituted into Eq. (1) to reconstruct the 3D shape of the patient's face.
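Stacking the three view terms turns Eq. (5) into a ridge-regularized linear least-squares problem, solvable in one step from the normal equations. The sketch below uses random matrices as stand-ins for the projected model bases $T_p T_s P_v \mathrm{diag}(\sigma_v)$ and the stacked offsets $f - T_p T_s \bar{v}$; the per-view 1/3 factors are folded into the stacking for simplicity.

```python
import numpy as np

rng = np.random.default_rng(5)
n_modes, n_pts = 10, 120
A = rng.normal(size=(3 * 2 * n_pts, n_modes))  # stacked projected modes, 3 views
y = rng.normal(size=3 * 2 * n_pts)             # stacked contour-point offsets
eta = 0.1                                      # prior (regularization) weight

# Closed-form minimizer of ||A b - y||^2 + eta ||b||^2.
b_v = np.linalg.solve(A.T @ A + eta * np.eye(n_modes), A.T @ y)
print(b_v.shape)                               # (10,) optimal weights b_v
```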

After the final shape of the patient's face is reconstructed, the texture mapping is performed as follows. Firstly, two intermediate textures are generated, $t^{fi}$ (frontal) and $t^{pi}$ (profile); see Eqs. (6) and (7):

$$t^{fi} = \mathrm{RGB}(\mathrm{PROC}(T_p^{fi} v, f^{fi}), I^{fi}), \qquad (6)$$

where $I^{fi}$ is the frontal image of the patient, and $\mathrm{RGB}(m, n)$ is a function that gets the list of $r$, $g$, and $b$ values out of the image $n$ at the locations $m$;

$$t^{pi} = \mathrm{RGB}_{lt}(\mathrm{PROC}(T_p^{ri} v, f^{ri}), \mathrm{PROC}(T_p^{li} v, f^{li}), I^{ri}, I^{li}), \qquad (7)$$

where $I^{ri}$ and $I^{li}$ are the right and left profile images of the patient, respectively, and $\mathrm{RGB}_{lt}(m_r, m_l, n_r, n_l)$ is a function that gets the list of $r$, $g$, and $b$ values out of the image $n_r$ or $n_l$ at the locations $m_r$ or $m_l$, depending on whether $v_i$ is located on the left or right side of the shape. Finally, the two intermediate textures are blended into one texture image $I^{tx}$:

$$I^{tx} = \mathrm{MULTIBAND}(T_{\bar{v}}^{tx} t^{fi}, \; T_{\bar{v}}^{tx} t^{pi}, \; I^{msk}), \qquad (8)$$

where $I^{msk}$ is a mask (generated offline) separating the frontal and profile portions of a face in the $I^{tx}$ space, $T_{\bar{v}}^{tx}$ is a transformation that maps shape vertices to a texture image via the mean shape (surface parameterization9 obtained offline using the mean shape), and $\mathrm{MULTIBAND}(n_f, n_l, n_m)$ is a multiband filter4 that blends the images $n_f$ and $n_l$ according to the mask image $n_m$.

ELECTRONIC SUPPLEMENTARY MATERIAL

The online version of this article (doi:10.1007/s10439-013-0744-3) contains supplementary material, which is available to authorized users.

ACKNOWLEDGMENTS

We acknowledge the support of the Swiss KTI Promotion Agency (grant: 12892.1 PFLS-LS).


REFERENCES

1. Besl, P. J., and H. D. McKay. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14:239–256, 1992.
2. Blanz, V., A. Mehl, T. Vetter, and H. Seidel. A statistical method for robust 3D surface reconstruction from sparse data. In: International Symposium on 3D Data Processing, Visualization and Transmission, pp. 293–300, 2004.
3. Blanz, V., and T. Vetter. A morphable model for the synthesis of 3D faces. In: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, pp. 187–194, 1999.
4. Burt, P. J., and E. H. Adelson. A multiresolution spline with application to image mosaics. ACM Trans. Graph. 2:217–236, 1983.
5. Cootes, T. F., C. J. Taylor, D. H. Cooper, and J. Graham. Active shape models—their training and application. Comput. Vis. Image Underst. 61:38–59, 1995.
6. Darisi, T., S. Thorne, and C. Iacobelli. Influences on decision-making for undergoing plastic surgery: a mental models and quantitative assessment. Plast. Reconstr. Surg. 116:907–916, 2005.
7. De Heras Ciechomski, P., M. Constantinescu, J. Garcia, R. Olariu, I. Dindoyal, S. Le Huu, and M. Reyes. Development and implementation of a web-enabled 3D consultation tool for breast augmentation surgery based on 3D-image reconstruction of 2D pictures. J. Med. Internet Res. 14:e21, 2012.
8. Faggian, N., A. Paplinski, and J. Sherrah. 3D morphable model fitting from multiple views. In: 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG '08), Amsterdam, The Netherlands, pp. 1–6, 2008.
9. Floater, M. S., and K. Hormann. Surface parameterization: a tutorial and survey. In: Advances in Multiresolution for Geometric Modelling, edited by N. A. Dodgson, M. S. Floater, and M. A. Sabin. Springer Berlin Heidelberg, 2005, pp. 157–186.

10. Flury, B., and H. Riedwyl. Multivariate Statistics: A Practical Approach. London: Chapman and Hall, 296 pp., 1988.
11. Gower, J. Generalized Procrustes analysis. Psychometrika 40:33–51, 1975.
12. Heike, C. L., K. Upson, E. Stuhaug, and S. M. Weinberg. 3D digital stereophotogrammetry: a practical guide to facial image acquisition. Head Face Med. 6:18, 2010.
13. Jebara, T., A. Azarbayejani, and A. Pentland. 3D structure from 2D motion. IEEE Signal Process. Mag. 16:66–84, 1999.
14. Kim, H., P. Jurgens, S. Weber, L.-P. Nolte, and M. Reyes. A new soft-tissue simulation strategy for cranio-maxillofacial surgery using facial muscle template model. Prog. Biophys. Mol. Biol. 103:284–291, 2010.
15. Koch, R. M., M. H. Gross, F. R. Carls, D. F. von Buren, G. Fankhauser, and Y. I. H. Parish. Simulating facial surgery using finite element models. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, pp. 421–428, 1996.
16. Lin, S. J., N. Patel, K. O'Shaughnessy, and N. Fine. A new three-dimensional imaging device in facial aesthetic and reconstructive surgery. Otolaryngol. Head Neck Surg. 139:313–315, 2008.
17. Milborrow, S., and F. Nicolls. Locating facial features with an extended active shape model. In: Proceedings of the 10th European Conference on Computer Vision: Part IV, Berlin, Heidelberg, pp. 504–513, 2008.
18. Moghaddam, B., J. Lee, H. Pfister, and R. Machiraju. Model-based 3D face capture with shape-from-silhouettes. In: IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003), Nice, France, pp. 20–27, 2003.
19. Ozkul, T., and M. H. Ozkul. Computer simulation tool for rhinoplasty planning. Comput. Biol. Med. 34:697–718, 2004.
20. Rabi, S. A., and P. Aarabi. Face fusion: an automatic method for virtual plastic surgery. In: 9th International Conference on Information Fusion, Florence, Italy, pp. 1–7, 2006.
21. Salzmann, M., J. Pilet, S. Ilic, and P. Fua. Surface deformation models for nonrigid 3D shape recovery. IEEE Trans. Pattern Anal. Mach. Intell. 29:1481–1487, 2007.
22. See, M., M. Foxton, N. Miedzianowski-Sinclair, C. Roberts, and C. Nduka. Stereophotogrammetric measurement of the nasolabial fold in repose: a study of age and posture-related changes. Eur. J. Plast. Surg. 29:387–393, 2007.
23. Torresani, L., A. Hertzmann, and C. Bregler. Nonrigid structure-from-motion: estimating shape and motion with hierarchical priors. IEEE Trans. Pattern Anal. Mach. Intell. 30:878–892, 2008.
24. Zhang, R., P.-S. Tsai, J. E. Cryer, and M. Shah. Shape from shading: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 21:690–706, 1999.
