Smart Mirror: Intelligent Makeup Recommendation and Synthesis


Tam V. Nguyen
Department of Computer Science, University of Dayton
[email protected]

Luoqi Liu
Department of ECE, National University of Singapore
[email protected]

ABSTRACT
Beautification of female facial images usually requires professional editing software, which is relatively difficult for common users. In this demo, we introduce a practical system for automatic and personalized facial makeup recommendation and synthesis. First, a model describing the relations among facial features, facial attributes and makeup attributes is learned as the makeup recommendation model for suggesting the most suitable makeup attributes. Then the recommended makeup attributes are seamlessly synthesized onto the input facial image.

Categories and Subject Descriptors
I.4.9 [Image Processing and Computer Vision]: Applications

Keywords
Makeup Recommendation, Makeup Synthesis

1. INTRODUCTION
Nowadays, facial makeup has become indispensable for many modern women. As stated in the L'Oreal annual report [5], the worldwide cosmetics market reached €205 billion in 2016, with a 4% annual increase. Unlike other commercial products, makeup products are person-dependent. The lack of personalized recommendation and effect-visualization functionality hinders the growth of cosmetic products in the online shopping business.

How to imitate the physical makeup process and synthesize the effects of these products has attracted the interest of the research community [6, 7, 2, 8, 9] in the past years. Most of the studies utilize image pairs with before-and-after makeup effects. Tong et al. [9] extracted makeup from before-and-after training image pairs and transferred the makeup effect, defined as ratios, to a new testing image. Meanwhile, Scherbaum et al. [8] used 3D image pairs of before-and-after makeup and modelled makeup as the ratio of appearance.


Figure 1: The user interface of our system: (a) the test facial image, (b) facial feature extraction, i.e., landmark point extraction, (c) makeup recommendation and synthesis, (d) comparison of the test facial image before and after makeup synthesis.

Guo and Sim [2] considered the makeup effect as residing in two layers of a three-layer facial decomposition, and transferred the makeup effect of a reference image to the target image. Recently, Li et al. [4] presented a deep learning model to capture the underlying relations between facial shape and attractiveness. These works focus on facial geometric features while ignoring skin texture information.

The aforementioned works are, however, still limited. First, they do not provide a personalized makeup effect/product recommendation function, so the synthesized makeup is not necessarily suitable. Second, most of these methods adopt a global transformation approach; however, a good facial makeup effect can generally be decomposed into several individual effects such as eye shadow, skin color and lip color. Therefore, in this work, we develop a practical system which personalizes makeup product recommendation along with a function for visualizing the effect on an input facial image without makeup. The two functionalities, namely recommendation and synthesis, supplement each other and can significantly facilitate both in-store and online shopping experiences. Our user-friendly interface is shown in Figure 1. Additionally, a short video about our system is available at https://www.youtube.com/watch?v=tpY-FZtb-Cs.


[Figure 2 diagram: in the training phase, makeup extraction (eye, lipstick, foundation), makeup-invariant features and facial attributes (narrow eyes, facial shape, pupil, hair color, black hair, white skin, ...) are obtained from the BMD database to learn a latent SVM; in the testing phase, the model recommends makeup for a non-makeup face and synthesizes the face with makeup.]

Figure 2: The flowchart of our proposed makeup recommendation and synthesis system.

2. PROPOSED SYSTEM

2.1 Beauty Makeup Dataset Collection
Makeup products bring great commercial benefit among female customers; however, there exists no public dataset for academic research. Most previous works [9, 8, 2] are example-based or only work on a few selected samples. Chen and Zhang [1] released a benchmark for facial beauty study, but their focus is geometric facial beauty, not facial makeup.

Thus, we first collected an image dataset, the Beauty Makeup Dataset (BMD), from both professional makeup websites and popular image sharing websites (e.g., taaz.com and flickr.com) using keywords such as make up, cosmetics, and celebrity makeup. The initially downloaded ∼500,000 images are very noisy and contain many non-face images. Thus, we use a commercial face detector¹ to detect faces and locate 87 facial key points in the images. Only images with faces in frontal pose, high resolution, and high face and landmark detection confidences are retained. The remaining ∼10,000 images are carefully checked to remove faces without obvious makeup effects or with non-uniform illumination. The final 500 images are of good quality and show obvious makeup effects. Then all faces are aligned and decomposed into different regions, and makeup effect analysis is performed for each region. More specifically, we focus on 1) the analysis of eye shadow, exploiting spectral-matting-based methods to extract eye shadow templates; and 2) the modeling of the colors of foundation, eye shadow and lip via a color clustering method, as sketched below.
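To make the color modeling step concrete, the snippet below is a minimal, hypothetical sketch (not the exact BMD implementation) of extracting dominant colors from one facial region, assuming a binary region mask (e.g., the lip area) derived from the 87 landmark points; the function name `dominant_region_colors` and its parameters are illustrative only.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def dominant_region_colors(image_bgr, region_mask, n_colors=3):
    """Cluster the pixels inside a facial region (e.g., lip or eye-shadow
    area) and return the dominant colors, sorted by cluster size.

    image_bgr   : H x W x 3 uint8 image (OpenCV BGR order).
    region_mask : H x W mask, non-zero inside the region.
    """
    # Work in Lab space so the clustering distance is closer to
    # perceived color difference than raw BGR values.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2Lab)
    pixels = lab[region_mask > 0].reshape(-1, 3).astype(np.float32)

    kmeans = KMeans(n_clusters=n_colors, n_init=10, random_state=0)
    labels = kmeans.fit_predict(pixels)

    # Order cluster centers by how many region pixels they cover.
    counts = np.bincount(labels, minlength=n_colors)
    order = np.argsort(counts)[::-1]
    centers_lab = kmeans.cluster_centers_[order].astype(np.uint8)

    # Convert the centers back to BGR for display or template storage.
    centers_bgr = cv2.cvtColor(centers_lab.reshape(1, -1, 3),
                               cv2.COLOR_Lab2BGR).reshape(-1, 3)
    return centers_bgr, counts[order]
```

In a pipeline of this kind, such cluster centers for the foundation, eye shadow and lip regions could serve as the per-image makeup color attributes attached to each BMD face.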

2.2 Makeup Recommendation and Synthesis
In this work, we propose a novel system for automatic facial makeup recommendation and synthesis. The flowchart of our system is illustrated in Figure 2. For facial makeup recommendation, makeup-invariant facial features and attributes are designed to describe the intrinsic traits of faces, and a latent SVM model [10] is learned from our BMD dataset to explore the relations among makeup-invariant facial features, attributes and makeup effects for personalized makeup recommendation.
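As a rough illustration of the recommendation step (a minimal sketch, not the latent SVM of [10]): assuming a joint feature map over a makeup-invariant face descriptor and a candidate makeup attribute vector, recommendation can be viewed as scoring every candidate with a learned weight vector and returning the top-scoring one. The names `joint_feature`, `recommend` and `candidate_makeups` below are hypothetical.

```python
import numpy as np

def joint_feature(face_feat, makeup_attr):
    """Hypothetical joint feature map phi(x, m): here simply the outer
    product of the face descriptor x and the candidate makeup attribute
    vector m, flattened into one vector."""
    return np.outer(face_feat, makeup_attr).ravel()

def recommend(face_feat, candidate_makeups, w):
    """Return the candidate makeup attribute vector with the highest
    linear compatibility score w . phi(x, m), together with its score."""
    scores = [float(w @ joint_feature(face_feat, m)) for m in candidate_makeups]
    best = int(np.argmax(scores))
    return candidate_makeups[best], scores[best]
```

The latent SVM in [10] goes further by treating some annotations as latent variables and optimizing over them during training; the sketch above only shows the inference-time ranking idea.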

¹OMRON, OKAO Vision. http://www.omron.com/r_d/coretech/vision/okao.html

Figure 3: The results of our makeup synthesis: before-makeup images are listed in the first row, and after-makeup ones in the second row. Best viewed at ×3 size in the original color PDF file.

In the synthesis mode, the recommended makeup products are seamlessly applied onto the testing face for visualizing the effects. In particular, we process different facial parts (skin, eyes and lips) separately. Edge-preserving filtering [3] is applied to imitate the physical effect of foundation, and then the different colors are merged into their respective regions; a simplified sketch of this blending step is given below. The makeup synthesis results are shown in Figure 3. A short video about our system is available at https://www.youtube.com/watch?v=tpY-FZtb-Cs.
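The per-region synthesis can likewise be approximated in a few lines. The snippet below is only an illustrative sketch, assuming per-region masks are already available: it smooths the face with an edge-preserving filter (the paper uses guided image filtering [3]; a bilateral filter is substituted here for brevity) and then alpha-blends a recommended color into each region. The function `apply_makeup` and its parameters are assumptions for illustration.

```python
import numpy as np
import cv2

def apply_makeup(image_bgr, region_masks, region_colors, alpha=0.35):
    """Illustrative synthesis: smooth the face, then blend recommended
    colors into each facial region.

    region_masks  : dict name -> H x W float mask in [0, 1]
                    (e.g., {'skin': ..., 'lip': ..., 'eye': ...}).
    region_colors : dict name -> (B, G, R) recommended color.
    """
    # Edge-preserving smoothing to imitate the foundation effect
    # (stand-in for the guided filter used in the paper).
    smoothed = cv2.bilateralFilter(image_bgr, 9, 50, 50)

    out = smoothed.astype(np.float32)
    for name, mask in region_masks.items():
        color = np.array(region_colors[name], dtype=np.float32)
        m = (alpha * mask)[..., None]        # H x W x 1 blending weight
        out = (1.0 - m) * out + m * color    # per-pixel alpha blend
    return np.clip(out, 0, 255).astype(np.uint8)
```

A more faithful version would restrict the smoothing to the skin mask and blend the eye shadow templates extracted in Section 2.1 rather than flat colors.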

3. CONCLUSION
In this work, we showcase a practical system for automatic and personalized makeup recommendation and synthesis. It performs the beautification task in a learning-based manner and generates natural makeup results. We hope our work will attract further research in this area. In future work, we plan to focus on more detailed facial regions, e.g., eyebrows and eyelashes. In addition, we will extend the proposed framework to handle makeup recommendation and synthesis in videos.

4. REFERENCES
[1] F. Chen and D. Zhang. A benchmark for geometric facial beauty study. Medical Biometrics, pages 21–32, 2010.
[2] D. Guo and T. Sim. Digital face makeup by example. In IEEE Conference on Computer Vision and Pattern Recognition, pages 73–79, 2009.
[3] K. He, J. Sun, and X. Tang. Guided image filtering. In European Conference on Computer Vision, pages 1–14, 2010.
[4] J. Li, C. Xiong, L. Liu, X. Shu, and S. Yan. Deep face beautification. In ACM Multimedia, pages 793–794, 2015.
[5] L'Oreal. Annual report 2016. 2016.
[6] T. V. Nguyen, S. Liu, B. Ni, J. Tan, Y. Rui, and S. Yan. Sense beauty via face, dressing, and/or voice. In ACM Multimedia, pages 239–248, 2012.
[7] T. V. Nguyen, S. Liu, B. Ni, J. Tan, Y. Rui, and S. Yan. Towards decrypting attractiveness via multi-modality cues. TOMCCAP, 9(4):28:1–28:20, 2013.
[8] K. Scherbaum, T. Ritschel, M. Hullin, T. Thormahlen, V. Blanz, and H. Seidel. Computer-suggested facial makeup. In CGF, volume 30, pages 485–492, 2011.
[9] W. Tong, C. Tang, M. Brown, and Y. Xu. Example-based cosmetic transfer. In Pacific Conference on Computer Graphics and Applications, pages 211–218, 2007.
[10] Y. Wang and G. Mori. A discriminative latent model of object classes and attributes. In European Conference on Computer Vision, pages 155–168. Springer, 2010.