
Pacific Graphics 2019
C. Theobalt, J. Lee, and G. Wetzstein (Guest Editors)

Volume 38 (2019), Number 7

Specular Highlight Removal for Real-world Images

Gang Fu¹, Qing Zhang², Chengfang Song¹, Qifeng Lin¹, Chunxia Xiao¹

¹ School of Computer Science, Wuhan University, Wuhan 430072, China
² School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China

Abstract

Removing specular highlight in an image is a fundamental research problem in computer vision and computer graphics. While various methods have been proposed, they typically do not work well for real-world images due to the presence of rich textures, complex materials, hard shadows, occlusions, colored illumination, etc. In this paper, we present a novel specular highlight removal method for real-world images. Our approach is based on two observations of real-world images: (i) the specular highlight is often small in size and sparse in distribution; (ii) the remaining diffuse image can be represented by a linear combination of a small number of basis colors with sparse encoding coefficients. Based on these two observations, we design an optimization framework for simultaneously estimating the diffuse and specular highlight images from a single image. Specifically, we recover the diffuse components of regions with specular highlight by encouraging sparseness of the encoding coefficients using the L0 norm. Moreover, the encoding coefficients and the specular highlight are also subject to non-negativity, according to the additive color mixing theory and the illumination definition, respectively. Extensive experiments have been performed on a variety of images to validate the effectiveness of the proposed method and its superiority over previous methods.

CCS Concepts

• Computing methodologies → Computational photography;

1. Introduction

Specular highlight† is common in real-world images and challenges many vision and graphics tasks, such as intrinsic image decomposition [GF19, BBS14, LZL14], color constancy [TNI03], image segmentation [AMFM11], illumination estimation [ZYL∗17], and low-light image enhancement [ZYX∗18, ZNXZ19]. Hence, an effective specular highlight removal technique is widely required. Although several methods [KJHK13, RTT17] have been proposed for removing specular highlight, they typically do not work well for real-world images and may produce visually unpleasing results, which limits their applicability.

Recently, some intrinsic image decomposition methods [BBS14, ZTD∗12, MZRT16] have been proposed to handle real-world images, which are also applicable to specular highlight removal. However, since these methods are mostly built upon priors that may not be suitable for images with specular highlight, they typically do not work well for specular highlight removal. Unlike them, we

† Shiny surfaces tend to reflect light strongly in the direction mirroring the angle of incidence; we call this specular reflection. A perfect specular reflector would be a mirror. However, many surfaces in the real world are a mixture of specular and diffuse; e.g., a book with a smooth surface generates both kinds of reflection. So we use the term specular highlight loosely throughout our paper, unless stated otherwise.

build our specular highlight removal approach on two observations of real-world images: (i) the specular highlight is often small in size and sparse in distribution; (ii) the diffuse image can be represented by a linear combination of a small number of basis colors with sparse encoding coefficients. The first observation is intuitive, while the second one is inspired by existing palette-based recoloring methods [ZXST17, COL∗15], which assume that the colors in an image can be linearly represented by a few basis colors. From the two observations, we summarize two sparsity priors: the specularity sparseness and the encoding coefficients sparseness. Note that the encoding coefficient denotes the numerical contribution of the basis colors in the reconstruction of the diffuse component of each pixel.

Based on the above two observations, i.e., the encoding coefficients sparseness and the specularity sparseness, we propose in this paper a novel specular highlight removal method for real-world images. By taking full advantage of the sparse nature of the diffuse and specular highlight components, we formulate specular highlight removal as an energy minimization problem. In particular, we recover the diffuse components of specular highlight regions using a color palette by encouraging the encoding coefficients sparseness. We also include a total variation regularizer on the specular highlight to enforce the specular sparseness. Moreover, we constrain the encoding coefficients and specular highlight to be non-negative

© 2019 The Author(s). Computer Graphics Forum © 2019 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.


according to the additive color mixing theory and the illumination definition, respectively. Extensive experiments and comparisons with state-of-the-art methods demonstrate the effectiveness of the proposed algorithm.

In particular, the main contributions of this work are as follows:

• We propose a novel model for simultaneously estimating diffuse and specular highlight images from a single image.

• We derive an ADMM procedure for optimizing the proposed specular highlight removal model.

• We conduct extensive experiments on a number of images, including human facial images, laboratory images, and real-world images, and validate the superiority of our model over other state-of-the-art methods.

2. Related work

We review previous specular highlight removal algorithms based on the type of input, and then briefly summarize related non-Lambertian intrinsic image decomposition techniques.

2.1. Specular highlight removal

Single-image-based methods. Early methods [KSK88, BLL96] often perform a color segmentation. But such a color segmentation, as we know, is not robust to complex textures. To cope with this issue, Tan et al. [TI05] proposed a chromaticity-based model without explicitly leveraging color segmentation. Yang et al. [YWA10] proposed a method similar to [TI05] which allows parallel implementation, leading to real-time performance. Shen et al. [SZSX08] separated specular highlight based on the error analysis of chromaticity and an appropriate choice of body color for each pixel. Subsequently, several models based on the dichromatic reflection model [Sha85] have been proposed. Kim et al. [KJHK13] introduced an optimization framework with several effective constraints, based on the observation that for real-world images the dark channel can provide an approximate specular-free image. In addition, Akashi et al. [AO15] presented a method based on a modified version of sparse non-negative matrix factorization (NMF), while Guo et al. [GZW18] proposed a sparse and low-rank reflection model. In their framework, the diffuse and specular highlight images are simultaneously estimated via optimization. However, dark pixels are sometimes induced in the highlight region of the recovered diffuse image. In this paper, we focus on tackling real-world images and avoiding the visual artifacts commonly encountered by previous methods. Please refer to the survey [ABC11] for more methods. Note that our method belongs to this category.

Multiple-image-based methods. To reduce the ambiguity of layer separation and obtain high-quality results, multiple-image-based methods have been proposed. Based on the dichromatic reflection model [Sha85], Sato et al. [YK94] performed specular highlight removal by analyzing color signatures in multiple images captured under a moving light source, while Lin et al. [LS01] achieved the same goal with two photometric images. However, these methods require capturing images with a dynamic light source. To avoid this issue, some researchers proposed to simplify the image capturing by only changing the viewpoint. Lee et al. [LB92] presented a model for specular highlight region detection. Lin et al. [LLK∗02] performed specular highlight removal by treating pixels in the specular highlight region as outliers and matching the remaining diffuse parts in other views. Guo et al. [GCM14] proposed a method focusing on handling a video/image sequence, which has achieved impressive results but is computationally expensive. Although multiple-image-based methods can better remove specular highlight, their practicability is low since such source images are often unavailable.

Depth-based methods. Chen et al. [LLZI17] presented a method aiming at removing specular highlight in facial images. Their impressive results benefit from the use of physical and statistical properties of human skin and faces. However, this method only works well for facial images and requires an extra depth map corresponding to the facial image. Please also refer to [LLZI17, LZL15] for more specular highlight removal methods for facial images. Different from these methods, we aim at handling images of arbitrary real-world scenes, not a specific type of scene.

2.2. Non-Lambertian intrinsic image decomposition

Recently, several non-Lambertian intrinsic image decomposition methods have been proposed to handle specular highlight. Based on the dichromatic model, Alperovich et al. [AG16] constructed a framework for intrinsic light field decomposition that decomposes an input image into a reflectance image, a diffuse shading image, and a specular highlight image. Moreover, Shi et al. [SYHS17] proposed a learning-based model to achieve the same goal as [AG16], but focused on handling images with a single object. Note that these two methods have strict requirements on the input. In comparison, our method aims at removing specular highlight from real-world images.

3. Our approach

This section presents the proposed specular highlight removal method. We first introduce the dichromatic reflection model in Section 3.1. Then we formulate specular highlight removal as an energy minimization problem in Section 3.2, and derive an ADMM-based procedure for solving the involved non-convex optimization problem in Section 3.3. Finally, implementation details and parameter settings are provided in Section 3.4.

3.1. Background

The dichromatic reflection model has been widely used for specular highlight removal [LLZI17, AO15, KJHK13] and non-Lambertian intrinsic image decomposition [SYHS17, AG16]. Our approach is also based on this model, which we briefly summarize here for completeness. Formally, the model assumes that an input image I can be decomposed into a diffuse image D and a specular highlight image H, mathematically expressed as

$$I_p = D_p + H_p \,, \qquad (1)$$

where p indexes 2D pixels. In some previous works, for the sake of simplicity, researchers often assume that the image is white balanced and the illumination is single-channel [ZTD∗12]. For real-world scenes, however, the illumination is often colorful. Therefore, we further model the illumination color. Similar to previous intrinsic image decomposition methods, e.g., [BST∗14], we assume that the illumination is monochromatic with a constant color Γ. In this way, Eqn. (1) can be rewritten as

$$I_p = D_p + \Gamma H_p \,, \qquad (2)$$

where I and D are RGB color vectors and H is a scalar. The illumination color Γ can be estimated by existing color constancy algorithms [Bar15, TNI03].
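As a quick sanity check of the model, Eqn. (2) can be composed directly with NumPy broadcasting. This is our own illustrative sketch with made-up pixel values, not the authors' code:

```python
import numpy as np

# Made-up example values for a tiny 2x2 image.
D = np.array([[[0.2, 0.1, 0.05], [0.6, 0.4, 0.3]],
              [[0.1, 0.1, 0.1],  [0.5, 0.2, 0.2]]])   # diffuse image, RGB per pixel
H = np.array([[0.0, 0.5],
              [0.0, 0.1]])                            # scalar highlight per pixel
Gamma = np.array([1.0, 0.95, 0.9])                    # constant illumination color

# Eqn. (2): I_p = D_p + Gamma * H_p, via broadcasting over the color axis.
I = D + H[..., None] * Gamma
```

Pixels with zero highlight reproduce the diffuse color exactly, while highlighted pixels shift toward the illumination color.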

3.2. Specular highlight removal model

Problem formulation. Given an input image I with specular highlight, our algorithm aims to decompose I into a specular-free diffuse image D, a specular highlight image H, and the illumination color Γ. Note that the decomposed diffuse image is treated as the specular-highlight-removal result in this paper. Obviously, this is a severely ill-posed problem, since we have more unknowns than equations. To reduce the decomposition ambiguities and obtain a solution that reasonably explains the formation of the image, we formulate the decomposition as minimizing the following objective function:

$$\arg\min_{D,H} \; F = f_r + \lambda_d f_d + \lambda_s f_s \,, \qquad (3)$$

where $f_r$, $f_d$ and $f_s$ are energy terms, and $\lambda_d$ and $\lambda_s$ are parameters balancing the contribution of these terms. In the following, we describe these energy terms in detail.

Data term. A meaningful and valid layer separation requires that the diffuse image D, the specular highlight image H, and the illumination color Γ reconstruct the input image I. We adopt a common L2-norm metric to measure the reconstruction error as follows:

$$f_r = \sum_p \| I_p - D_p - \Gamma H_p \|_2^2 \,, \qquad (4)$$

where $\|\cdot\|_2^2$ denotes the squared L2 norm.

Regularizer on diffuse image D. Inspired by palette-based recoloring techniques [ZXST17, COL∗15], we assume that the colors in the diffuse image D can be linearly represented by several basis colors with sparse encoding coefficients, expressed as

$$D_p = \sum_{i=1}^{K} W_p^i R_i \,, \qquad (5)$$

where $R_i$ stands for the i-th basis color, the set of which forms the palette, and $W_p^i$ is the coefficient that denotes the numerical contribution of the basis color $R_i$ in the reconstruction of the pixel color $D_p$. Note that the coefficients $W$ are subject to $W \ge 0$ according to the additive color mixing theory. Previous methods often adopt L1- or L2-norm constraints to encourage the encoding coefficients sparseness, which makes them not effective enough to produce satisfactory results. In contrast, our model with the L0 norm is more robust to noise and can produce better results.

We note that the different forms of sparsity mentioned above have been adopted in intrinsic image decomposition [BM15, ZTD∗12], recoloring [COL∗15, ZXST17], specular highlight removal [KJHK13], and image smoothing [XLXJ11]. Unlike these techniques, we use this idea to encourage the sparseness of the encoding coefficients W using the L0 norm. Specifically, we design the following constraint on the encoding coefficients:

$$f_d = C(W) \,, \qquad (6)$$

where the L0-norm measure of the diffuse component is defined as

$$C(W) = \#\{(i, p) \mid W_p^i \neq 0\} \,, \qquad (7)$$

which counts the number of nonzero encoding coefficients. Moreover, akin to [ZXST17, COL∗15], the diffuse palette can be constructed by K-means clustering with an automatically determined or user-specified number of palette colors K. It is worth noting that when constructing the diffuse palette, we sort all the color bins in descending order according to their densities and select the top K (typically 60) bins. This operation effectively avoids choosing specular colors, since their densities are usually small.
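The density-sorted palette construction can be sketched as follows. This is our own simplified NumPy re-implementation under assumptions (uniform quantization into color bins and bin means as basis colors), not the authors' exact procedure:

```python
import numpy as np

def build_palette(img, K=60, bins_per_channel=8):
    """img: (H, W, 3) floats in [0, 1]. Returns up to K basis colors."""
    pixels = img.reshape(-1, 3)
    # Quantize each channel into bins_per_channel bins.
    q = np.minimum((pixels * bins_per_channel).astype(int), bins_per_channel - 1)
    flat = q[:, 0] * bins_per_channel**2 + q[:, 1] * bins_per_channel + q[:, 2]
    bin_ids, counts = np.unique(flat, return_counts=True)
    # Sort bins by density (descending) and keep the top K.
    top = bin_ids[np.argsort(-counts)][:K]
    # Use each selected bin's mean color as a basis color.
    return np.stack([pixels[flat == b].mean(axis=0) for b in top])

# Toy usage: a mostly-red image with a few white highlight pixels; the
# dense red bin dominates, so red comes first in the palette.
img = np.full((16, 16, 3), [0.8, 0.1, 0.1])
img[0, :3] = [1.0, 1.0, 1.0]
P = build_palette(img, K=2)
```

With a larger K and real images, sparse highlight colors land in low-density bins and naturally fall below the top-K cutoff, which is the intent of the density sorting described above.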

Regularizer on specular highlight image H. Our regularization on the specular highlight image H is based on the observation that specular highlight in real-world scenes is often small in size and sparse in distribution, as shown in Figures 3 and 5. A similar idea has also been explored in previous works: depth refinement for specular objects [EHW∗16], non-Lambertian intrinsic image decomposition [AG16, AJSG17], specular highlight removal [KJHK13, GZW18], and recoloring [BvdW11]. Different from these works, we combine it into our novel image decomposition framework for simultaneous diffuse and specular highlight image estimation. Here we adopt the L0 norm to encourage the intensity sparsity of the specular highlight image, written as

$$f_s = C(H) \,, \qquad (8)$$

where the L0 norm is defined as

$$C(H) = \#\{p \mid |H_p| \neq 0\} \,, \qquad (9)$$

which counts the number of pixels p whose intensity magnitude $H_p$ is nonzero.

3.3. Model solver

Due to the involved L0 norm, our objective function is non-linear and difficult to solve. Therefore, we derive an approximate solution based on an Alternating Direction Method of Multipliers (ADMM) scheme [BPC∗11] and L0 minimization [XLXJ11]. Specifically, we first rewrite our objective function (3) in matrix form as

$$\arg\min_{W,H} \; \frac{1}{2}\| I - RW - \Gamma H \|_2^2 + \lambda_d C(W) + \lambda_s C(H) \quad \text{s.t.} \; W \ge 0, \; H \ge 0 \,, \qquad (10)$$

where W, H, I, D are the matrix forms of W, H, I, D, respectively. We introduce two auxiliary variables S and G, corresponding to W and H respectively. The objective function (10) can then be further rewritten as

$$\arg\min_{W,H} \; \frac{1}{2}\| I - RW - \Gamma H \|_2^2 + \lambda_d C(S) + \lambda_s C(G) + (S-W)^T Y_1 + (G-H)^T Y_2 + \frac{\rho}{2}\big( \| S-W \|_2^2 + \| G-H \|_2^2 \big) \quad \text{s.t.} \; W \ge 0, \; H \ge 0 \,, \qquad (11)$$

where $Y_1$ and $Y_2$ are the Lagrangian dual variables. We solve the objective function (11) by alternately minimizing several subproblems and maximizing the dual problems. In the following, we describe these subproblems and dual problems.
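When implementing the solver, it is convenient to track the value of the original objective (10) across iterations. A minimal sketch (our own illustration; shapes and values are made-up):

```python
import numpy as np

def objective(I, R, Gamma, W, H, lam_d, lam_s):
    """Eqn. (10): 0.5 * ||I - R W - Gamma H||_2^2 plus L0 penalties.

    Shapes: I (3, N), R (3, K), Gamma (3, 1), W (K, N), H (1, N)."""
    residual = I - R @ W - Gamma @ H
    return (0.5 * np.sum(residual**2)
            + lam_d * np.count_nonzero(W)
            + lam_s * np.count_nonzero(H))

# Tiny made-up example: one pixel reconstructed exactly by one basis color,
# so only the single nonzero coefficient is penalized.
R = np.array([[1.0], [0.5], [0.25]])
Gamma = np.ones((3, 1))
W = np.array([[1.0]])
H = np.array([[0.0]])
I = R @ W
val = objective(I, R, Gamma, W, H, lam_d=0.1, lam_s=0.1)
```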

Subproblem 1: computing W. The W estimation subproblem is

$$\arg\min_{W} \; \| I - RW - \Gamma H \|_2^2 + \rho \left\| S - W + \frac{Y_1}{\rho} \right\|_2^2 . \qquad (12)$$

This is a linear problem with a closed-form solution. Thus we update W as

$$W = (R^T R + \rho U)^{-1} (R^T I - R^T \Gamma H + \rho S + Y_1) \,, \qquad (13)$$

where U is an identity matrix. Note that, to satisfy W ≥ 0, after W is updated we perform a simple correction: W = max(W, 0).
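In NumPy, the closed-form update of Eqn. (13) and the subsequent non-negativity correction look like this (a sketch with made-up shapes and random data, not the authors' code):

```python
import numpy as np

# Made-up shapes/data for illustration.
rng = np.random.default_rng(0)
K, N = 3, 5
R = rng.random((3, K))        # palette of K basis colors
I = rng.random((3, N))        # input image, one column per pixel
Gamma = rng.random((3, 1))    # illumination color
H = rng.random((1, N))        # current specular highlight estimate
S = np.zeros((K, N))
Y1 = np.zeros((K, N))
rho = 1.0

# Eqn. (13): ridge-like normal equations; np.eye(K) plays the role of U.
W = np.linalg.solve(R.T @ R + rho * np.eye(K),
                    R.T @ I - R.T @ Gamma @ H + rho * S + Y1)
# Correction step enforcing the non-negativity constraint W >= 0.
W = np.maximum(W, 0.0)
```

The added ρU term keeps the system well conditioned even when the palette matrix R is rank-deficient.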

Subproblem 2: computing S. The S estimation subproblem is

$$\arg\min_{S} \; \left\| S - W + \frac{Y_1}{\rho} \right\|_2^2 + \frac{2\lambda_d}{\rho} C(S) \,. \qquad (14)$$

According to [XLXJ11], the energy function (14) can be solved in an element-wise manner as

$$\arg\min_{S} \; \sum_j \left\| S_j - W_j + \frac{Y_{1,j}}{\rho} \right\|_2^2 + \frac{2\lambda_d}{\rho} C(S_j) \,, \qquad (15)$$

where j indexes the entries of the matrix. The solution of S is then updated by

$$S_j = \begin{cases} 0, & \text{if } \left( W_j - \frac{Y_{1,j}}{\rho} \right)^2 \le \frac{2\lambda_d}{\rho} \\ W_j - \frac{Y_{1,j}}{\rho}, & \text{otherwise.} \end{cases} \qquad (16)$$

For the proof of Eqn. (16), please refer to our supplementary material.
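The thresholding rule of Eqn. (16) is a simple element-wise operation. A NumPy sketch (illustration only), where V plays the role of W − Y1/ρ:

```python
import numpy as np

def hard_threshold(V, lam, rho):
    # Solve argmin_S ||S - V||_2^2 + (2*lam/rho) * C(S) element-wise:
    # keeping an entry costs 2*lam/rho in the L0 penalty, while zeroing
    # it costs V^2 in the quadratic term, so small entries are zeroed.
    return np.where(V**2 <= 2.0 * lam / rho, 0.0, V)

V = np.array([-0.05, 0.3, 0.01, -0.8])
S = hard_threshold(V, lam=0.01, rho=2.0)   # threshold: V**2 <= 0.01
```

Entries whose squared magnitude falls below 2λ/ρ are set exactly to zero, which is what makes the resulting coefficients genuinely sparse rather than merely small.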

Subproblem 3: computing H. The H estimation subproblem is

$$\arg\min_{H} \; \| I - RW - \Gamma H \|_2^2 + \rho \left\| G - H + \frac{Y_2}{\rho} \right\|_2^2 . \qquad (17)$$

Similar to Subproblem 1, the solution of (17) is

$$H = (\Gamma^T \Gamma + \rho U)^{-1} (\Gamma^T I - \Gamma^T R W + \rho G + Y_2) \,. \qquad (18)$$

The same correction is adopted to ensure H ≥ 0: H = max(H, 0).

Subproblem 4: computing G. The G estimation subproblem is

$$\arg\min_{G} \; \left\| G - H + \frac{Y_2}{\rho} \right\|_2^2 + \frac{2\lambda_s}{\rho} C(G) \,. \qquad (19)$$

Algorithm 1: Specular highlight removal framework

Input: input image I; parameters λs, λd, K; maximum iterations M; error tolerance ε.
Output: coefficients W and specular highlight image H.
1  Initialize: W = S = 0, H = G = 0, Y1 = 0, Y2 = 0, K = 60, ε = 0.001.
2  Construct the diffuse palette R.
3  for k = 1 to M do
4      Update W using Eqn. (13);
5      Update S using Eqn. (16);
6      Update H using Eqn. (18);
7      Update G using Eqn. (21);
8      Update Y1, Y2 using Eqn. (22);
9      Update ρ using Eqn. (23);
10     if ||W − S||/||I|| ≤ ε and ||H − G||/||I|| ≤ ε then break;
11 return W, H;

Similar to the update in Subproblem 2, the solution of the energy function (19) can also be estimated in an element-wise manner as

$$\arg\min_{G} \; \sum_j \left\| G_j - H_j + \frac{Y_{2,j}}{\rho} \right\|_2^2 + \frac{2\lambda_s}{\rho} C(G_j) \,. \qquad (20)$$

The solution is then updated by

$$G_j = \begin{cases} 0, & \text{if } \left( H_j - \frac{Y_{2,j}}{\rho} \right)^2 \le \frac{2\lambda_s}{\rho} \\ H_j - \frac{Y_{2,j}}{\rho}, & \text{otherwise.} \end{cases} \qquad (21)$$

Dual ascent updates. The variables $Y_1$ and $Y_2$ are updated by

$$Y_1 \leftarrow Y_1 + \rho (S - W) \,, \qquad Y_2 \leftarrow Y_2 + \rho (G - H) \,. \qquad (22)$$

Update ρ. The weight parameter ρ is updated by

$$\rho \leftarrow 2\rho \,. \qquad (23)$$
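Putting the updates together, the loop of Algorithm 1 can be sketched compactly in NumPy. This is an assumption-laden illustration, not the authors' MATLAB code; in particular, in the H update we use + Y2, symmetric to the W update of Eqn. (13), and all sizes in the toy usage are made-up:

```python
import numpy as np

def hard_threshold(V, lam, rho):
    # Element-wise L0 shrinkage of Eqns. (16)/(21).
    return np.where(V**2 <= 2.0 * lam / rho, 0.0, V)

def remove_highlight(I, R, Gamma, lam_d, lam_s, max_iter=200, eps=0.001, rho=1.0):
    """I: (3, N) image, R: (3, K) palette, Gamma: (3, 1) illumination color."""
    K, N = R.shape[1], I.shape[1]
    W = np.zeros((K, N)); S = np.zeros((K, N))
    H = np.zeros((1, N)); G = np.zeros((1, N))
    Y1 = np.zeros((K, N)); Y2 = np.zeros((1, N))
    norm_I = np.linalg.norm(I)
    for _ in range(max_iter):
        # Subproblem 1, Eqn. (13), then project onto W >= 0.
        W = np.linalg.solve(R.T @ R + rho * np.eye(K),
                            R.T @ I - R.T @ Gamma @ H + rho * S + Y1)
        W = np.maximum(W, 0.0)
        # Subproblem 2, Eqn. (16).
        S = hard_threshold(W - Y1 / rho, lam_d, rho)
        # Subproblem 3, Eqn. (18); H is (1, N), so the inverse is a scalar.
        H = (Gamma.T @ I - Gamma.T @ (R @ W) + rho * G + Y2) / (Gamma.T @ Gamma + rho)
        H = np.maximum(H, 0.0)
        # Subproblem 4, Eqn. (21).
        G = hard_threshold(H - Y2 / rho, lam_s, rho)
        # Dual ascent, Eqn. (22), and penalty update, Eqn. (23).
        Y1 = Y1 + rho * (S - W)
        Y2 = Y2 + rho * (G - H)
        rho *= 2.0
        # Stopping criterion of Algorithm 1.
        if (np.linalg.norm(W - S) / norm_I <= eps
                and np.linalg.norm(H - G) / norm_I <= eps):
            break
    return W, H

# Toy usage on made-up data: two pixels, one basis color, one highlighted pixel.
R = np.array([[0.8], [0.2], [0.1]])
Gamma = np.ones((3, 1))
I = R @ np.array([[1.0, 1.0]]) + Gamma @ np.array([[0.0, 0.5]])
W, H = remove_highlight(I, R, Gamma, lam_d=1e-6, lam_s=1e-6)
```

Doubling ρ each iteration drives S toward W and G toward H, so the auxiliary variables and the primal variables agree at convergence.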

3.4. Implementation and Parameter Setting

We have implemented our algorithm in MATLAB on a PC running Manjaro Linux with an Intel i7-6700 3.40 GHz CPU and 8 GB of RAM. We found that the parameters $\lambda_d = 2/\sqrt{N}$ and $\lambda_s = 0.01/\sqrt{N}$ work well for most of our test images, where N is the number of pixels in the input image. Our algorithm stops iterating when (i) the maximum number of iterations M (typically 200) is reached, or (ii) $\epsilon_1 = \|W - S\|/\|I\|$ and $\epsilon_2 = \|H - G\|/\|I\|$ are simultaneously less than a given threshold ε (typically 0.01). Figure 1 compares our results with the ground truths. Note that our current MATLAB implementation is relatively inefficient, since it is neither optimized nor accelerated. However, we believe that by adopting more efficient solvers for the subproblem in Eqn. (15) and GPU parallelization, our method can be significantly accelerated to provide instant feedback even for high-resolution images.


Figure 1: Our specular highlight removal results on an example image (a). (b) and (c) are the diffuse image and specular highlight image generated by our method. (d) and (e) are the corresponding ground-truth diffuse image and specular highlight image. The input image (a) and the ground-truth images (d) and (e) are from the MIT Intrinsic Images dataset [GJAF09].


Figure 2: An example from the MIT Intrinsic Images dataset [GJAF09]. (a) Input. (b) Diffuse image. (c) Specular highlight image.

4. Experiments

In this section, we conduct extensive experiments to validate the effectiveness of our specular highlight removal method. More analysis of our model is also provided.

4.1. Comparison with the State-of-the-art Methods

We conduct experiments on a number of images to validate the effectiveness of our algorithm. For visual and quantitative evaluation, we compare with five recent methods: Yamamoto et al. [YKK17], Akashi et al. [AO15], Shen et al. [SC09], Yoon et al. [YCK06], and Tan et al. [TI05]. Note that, since Yamamoto et al. [YKK17] is a post-processing method that refines other methods, we choose the method of Yang et al. [YWA10] to produce their initial reflection separation results. The implementations of these methods used in the experiments are from the authors' websites. For a fair comparison, the optimal parameter setting for each compared method is obtained via leave-one-out cross-validation. We do not compare with Chen et al. [LLZI17], since this method focuses on handling human facial images, and Guo et al. [GZW18] is not included in our comparison because their implementation code is not available. Our test images are diverse, including: (1) the MIT Intrinsic Images dataset [GJAF09]; (2) facial images from [LLZI17]; (3) laboratory images from [SZ13]; (4) real-world images collected from the Internet.

Table 1: Quantitative comparison on the MIT Intrinsic Images dataset.

Methods                     MSE     LMSE    DSSIM
Ours                        0.027   0.021   0.412
Yamamoto et al. [YKK17]     0.048   0.039   0.490
Akashi et al. [AO15]        0.031   0.028   0.032
Shen et al. [SC09]          0.093   0.052   0.590
Yoon et al. [YCK06]         0.098   0.032   0.420
Tan et al. [TI05]           0.110   0.099   0.624

4.1.1. Comparisons on Synthetic and Laboratory Images

MIT Intrinsic Images dataset. We first conduct an evaluation on images from the MIT Intrinsic Images dataset [GJAF09], which offers ground-truth albedo, diffuse shading, and specular highlight images for 20 objects (see Figure 2 for an example). The visual comparisons are shown in Figures 1 and 3. As can be seen, our method produces visually natural-looking results similar to the ground truths, while the compared methods [YCK06, TI05] induce visual artifacts, including enhanced textures/structures and color distortion. In addition, we note that the structures of the diffuse image are slightly boosted in the results of [YKK17].

For quantitative evaluation, akin to previous methods [GJAF09, CK13, NMY15], we adopt three commonly used metrics: mean-squared error (MSE), local mean-squared error (LMSE), and the dissimilarity version of the structural similarity index (DSSIM). We compute the average MSE, LMSE, and DSSIM values of the specular-free images and specular highlight images of all 20 objects in the dataset. Note that for all three metrics, lower values indicate better results. The quantitative results are shown in Table 1. As seen, our method achieves the lowest MSE, LMSE, and DSSIM values, which means that our method can produce better results than other state-of-the-art methods on the MIT Intrinsic Images dataset.
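For reference, per-image MSE, and a windowed local error in the spirit of LMSE, can be sketched as follows. This is our simplified illustration; the benchmark's official LMSE additionally fits a per-window scale factor, which we omit here:

```python
import numpy as np

def mse(pred, gt):
    # Mean-squared error over all pixels.
    return float(np.mean((pred - gt) ** 2))

def local_mse(pred, gt, win=4):
    # Average MSE over non-overlapping win x win windows (simplified;
    # the official LMSE also fits a per-window scale factor).
    h, w = gt.shape[:2]
    errs = [mse(pred[i:i + win, j:j + win], gt[i:i + win, j:j + win])
            for i in range(0, h - win + 1, win)
            for j in range(0, w - win + 1, win)]
    return float(np.mean(errs))

gt = np.zeros((8, 8))
pred = gt + 0.1          # a uniform error of 0.1 everywhere
err = mse(pred, gt)      # (0.1)^2 = 0.01
```

The windowed variant penalizes errors that are locally concentrated (e.g., residual highlight blobs) more informatively than the global average alone.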

Facial images. We then conduct a comparison on facial images selected from [LLZI17]. An example is shown in Figure 4. Yamamoto et al. [YKK17] may enhance the facial details, e.g., the beard on the face and the background, and so does Yoon et al. [YCK06]. Shen et al. [SZSX08] degrades the texture details, leading to an unnatural-looking result. Tan et al. [TI05] wrongly alters the illumination color. In contrast, our method avoids the above visual artifacts and produces natural-looking results.

Figure 3: Visual comparison on an image from the MIT Intrinsic Images dataset [GJAF09]. 1st row: specular-free images; 2nd row: close-ups of the specular-free images; 3rd row: specular highlight images. (a) Input. (b) Ground truth. (c) Ours. (d)-(f) Results of Yamamoto et al. [YKK17], Yoon et al. [YCK06], and Tan et al. [TI05], respectively. Please zoom in on the results and their corresponding close-ups for more details.

Figure 4: Visual comparison on a facial image. The input facial image is selected from [LLZI17]. (a) Input. (b) Ours. (c)-(f) Results of Yamamoto et al. [YKK17], Shen et al. [SC09], Yoon et al. [YCK06], and Tan et al. [TI05], respectively.

Laboratory images. Next, we evaluate our method on four laboratory images (with ground truths) selected from [SZ13], both qualitatively and quantitatively, as shown in Figure 8 and Table 2. MSE, LMSE, and DSSIM are again used as the error metrics in the quantitative evaluation. As can be seen, our method produces high-quality results close to the ground truths, and achieves lower errors than state-of-the-art methods for most of the test images.

4.1.2. Comparisons on Real-world Images

We compare our method against previous methods on real-world images in Figure 5. Note that the ground truths of these images are unavailable, so we perform only visual comparison and analysis here. On the whole, our method handles these images better than other state-of-the-art methods. Yamamoto et al. [YKK17] induces color distortion around edges/boundaries (see 4th and 9th rows), while Akashi et al. [AO15] produces an unnatural-looking black color in white regions (see 1st and 9th rows). There are specular highlight residuals in the diffuse images estimated by Shen et al. [SC09], and Yoon et al. [YCK06] may induce visual artifacts or unnatural-looking transitions around the boundary (see 3rd and 9th rows). Furthermore, we note that most previous methods often treat white materials as specular highlight to be removed, resulting in visually unpleasing results. In comparison, our method effectively handles white regions and produces natural-looking results.


(a) (b) (c) (d) (e) (f)

Figure 5: Visual comparison on real-world images. From top to bottom, the scenes are Dwarves, Beans, Toys, Flower, and Inkpad, respectively. Odd rows: specular-free images; Even rows: specular highlight images. (a) Input. (b) Ours. (c)-(f) Results of Yamamoto et al. [YKK17], Akashi et al. [AO15], Shen et al. [SC09], and Yoon et al. [YCK06], respectively.


(a) Input (b) K = 10 (c) K = 20

(d) K = 60 (e) K = 80 (f) K = 100

Figure 6: Effect of varying K. (a) Input. (b)-(f) Specular-free images with different K.

Figure 7: Effect of varying λd and λs (columns: λd = 1, 10, 1000; rows: λs = 0.01 and λs = 0.0001). Odd rows: the specular-free images; Even rows: the specular highlight images. See Figure 6(a) for the input image.

4.2. More analysis

In this section, we analyze the effects of several key parameters in our model, including λd, λs, K, and Γ. We then evaluate the robustness of our method to noisy input and analyze the convergence of our algorithm. Finally, we discuss limitations and future work.

4.2.1. Effect of varying parameters

Effect of varying λd and λs We analyze the effect of the two weights λd and λs. As shown in Figure 7, when λs = 0.01, the specular highlight residual in the specular-free image is gradually reduced as λd increases, e.g., at the root of the watermelon. However, we note that a large λd may also wash out some textures due to over-smoothing of the specular-free image. On the other hand, when λd = 1000, increasing λs causes only a subtle change in the specular-free image. We found that the default parameter setting produces good results for most of our test images.

Effect of varying K Figure 6 presents the effect of varying K on the diffuse image. As can be seen, when K is small, e.g., 10 or 20, the specular-free image may exhibit color distortion, e.g., on the red pumpkin. On the other hand, when K is large, e.g., 60, 80, or 100, K has limited effect on the diffuse image; however, the layer separation often takes more time for large K. Hence, we typically set K to 60 and found that this works well for most of our test images. Moreover, as shown in Figure 9, K has limited effect on the convergence rate, but it does affect εmax (i.e., max(ε1, ε2)): a too-small K (i.e., limited basis colors) may fail to accurately reconstruct the diffuse image.
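As a rough intuition for why a small K causes color distortion, one can quantize an image to K basis colors and reconstruct each pixel from its nearest basis color. The sketch below uses plain k-means as a simplified stand-in; the paper instead jointly optimizes non-negative sparse coefficients with an L0 penalty, and `basis_color_reconstruction` is a hypothetical helper name.

```python
import numpy as np

def basis_color_reconstruction(image, K, iters=10, seed=0):
    """Illustrative sketch: quantize an RGB image to K basis colors
    via plain k-means, then rebuild each pixel from its nearest basis
    color. With small K the reconstruction visibly shifts colors,
    mirroring the distortion discussed for Figure 6."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3).astype(np.float64)
    # initialize basis colors from actual pixel colors
    centers = pixels[rng.choice(len(pixels), K, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest basis color
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # update each basis color as the mean of its cluster
        for k in range(K):
            m = labels == k
            if m.any():
                centers[k] = pixels[m].mean(0)
    recon = centers[labels].reshape(image.shape)
    return recon, centers
```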

4.2.2. Robustness to inaccurate illumination color

We conduct an experiment to validate the robustness of our method to inaccurate illumination color. The test images are Animals, Cups, Fruit, and Masks from [SZ13], for which we compare results of our method with accurate and with inaccurate illumination color. The visual comparisons are shown in Figure 8 and the corresponding quantitative evaluation is reported in Table 2. We can see that, with accurate illumination color, our method produces results closer to the ground truths. Encouragingly, our method also achieves reasonably good results with inaccurate illumination color, which demonstrates its robustness to inaccurate illumination color.

4.2.3. Robustness to noise

We also conduct an experiment to evaluate the robustness of our method to noise. An example is shown in Figure 10, where we add zero-mean Gaussian noise with σ = 0.1 and σ = 1 to the input images. As shown, Akashi et al. [AO15] fails to remove the noise, since the NMF model they employ is sensitive to outliers. Yamamoto et al. [YKK17] can effectively remove noise but leaves specular highlight residual in the resulting diffuse image. Other clustering-based methods [SC09, YCK06] are sensitive to noise and thus produce visually unpleasing results. In comparison, our method produces more appealing results than the compared methods, demonstrating its superiority and robustness in handling noisy images.
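The degradation used in this test can be reproduced in a few lines; a minimal sketch, assuming images normalized to [0, 1] (the function name is ours):

```python
import numpy as np

def add_gaussian_noise(image, sigma, seed=0):
    """Add zero-mean Gaussian noise (sigma = 0.1 and 1 in the
    robustness test) and clip back to the valid [0, 1] range."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)
```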

4.2.4. Convergence analysis

We plot the convergence curve of our algorithm for an input image (Fish) in Figure 11. As seen, the specular highlight residual in the diffuse image gradually decreases as the iterations proceed. There is no noticeable change in the diffuse image between two adjacent iterations once the iteration count reaches 100. Empirically, 200 iterations are sufficient to obtain reasonably good results. Note that we provide the convergence curves for the five images in Figure 5 (Dwarves, Beans, Toys, Flower, and Inkpad) in our supplementary material. We also evaluate how different K affects the convergence, as shown in Figure 9.
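The observation that changes become negligible well before 200 iterations suggests a standard relative-change stopping rule. A generic sketch under that assumption follows; `step` stands in for one iteration of the actual solver, which is not reproduced here.

```python
import numpy as np

def run_until_converged(step, x0, tol=1e-4, max_iters=200):
    """Iterate x <- step(x), stopping when the relative change between
    adjacent iterates drops below tol, or after max_iters (200 is
    reported as sufficient in the paper)."""
    x = x0
    it = 0
    for it in range(1, max_iters + 1):
        x_new = step(x)
        change = np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-12)
        x = x_new
        if change < tol:
            break
    return x, it
```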


Figure 8: Visual comparison on laboratory images. (a) Input. (b) Ground truth. (c)-(d) Ours using accurate and estimated illumination color

respectively. (e)-(h) Results of Yamamoto et al. [YKK17], Akashi et al. [AO15], Shen et al. [SC09], and Yoon et al. [YCK06], respectively.

Figure 9: Effect of varying K on the convergence (εmax versus iteration number, 0-250, for K = 10, 20, 60, 80, and 100). Please see Figure 6(a) for the source image.

4.2.5. Limitations and future work

Our method has limitations. First, our method, like existing methods, may fail to recover the diffuse image and subtle textures from large and mostly white specular highlight regions. To address this issue, we plan to adopt learning-based image inpainting techniques to help synthesize the missing details and textures for those challenging regions. Second, our current unoptimized implementation is relatively inefficient. Hence, in the future, we would like to explore a GPU-accelerated version to enable real-time performance.

5. Conclusion

In this paper, we have presented a novel approach for removing specular highlight from a single real-world image. Our approach is built upon two observations of specular highlight in real-world

Table 2: Quantitative comparison on the laboratory images.

Methods                    Scene    MSE    LMSE   DSSIM
Ours (accurate Γ)          Animals  0.080  0.072  0.340
                           Cups     0.071  0.062  0.316
                           Fruit    0.082  0.070  0.380
                           Masks    0.069  0.010  0.338
Ours (estimated Γ)         Animals  0.082  0.076  0.341
                           Cups     0.072  0.066  0.320
                           Fruit    0.089  0.072  0.401
                           Masks    0.070  0.012  0.342
Yamamoto et al. [YKK17]    Animals  0.092  0.082  0.401
                           Cups     0.079  0.068  0.342
                           Fruit    0.092  0.088  0.421
                           Masks    0.081  0.021  0.381
Akashi et al. [AO15]       Animals  0.091  0.080  0.387
                           Cups     0.076  0.069  0.311
                           Fruit    0.090  0.083  0.402
                           Masks    0.070  0.012  0.301
Shen et al. [SC09]         Animals  0.080  0.070  0.360
                           Cups     0.073  0.068  0.310
                           Fruit    0.081  0.073  0.392
                           Masks    0.072  0.013  0.321
Yoon et al. [YCK06]        Animals  0.098  0.081  0.410
                           Cups     0.081  0.069  0.380
                           Fruit    0.094  0.083  0.423
                           Masks    0.072  0.025  0.401

images: (i) the specular highlight is often small in size and sparse in distribution; (ii) the diffuse image can be represented by a linear combination of a small number of basis colors with sparse encoding coefficients. Based on the two observations, we formulate the specular highlight removal problem as an energy minimization, where


(a) (b) (c) (d) (e) (f)

Figure 10: Evaluation of our method’s robustness to noise. Note, zero-mean Gaussian noise with σ = 0.1 and 1 are added to the two input

images in (a). (b) Ours. (c)-(f) Results of Yamamoto et al. [YKK17], Akashi et al. [AO15], Shen et al. [SC09], and Yoon et al. [YCK06],

respectively.

Figure 11: Convergence curve of our algorithm (residual versus iteration number, 0-250; markers at iterations 0, 10, 100, and 250).

we simultaneously estimate the diffuse and specular highlight images. Our method can effectively handle real-world images with occlusions, varying materials, and color illumination. Extensive experiments have been performed on both real-world and synthetic images to validate the superiority of the proposed algorithm over state-of-the-art methods, both qualitatively and quantitatively.

Acknowledgement

This work was partially supported by the National Key Research and Development Program of China (2017YFB1002600), the NSFC (No. 61672390), the Wuhan Science and Technology Plan Project (No. 2017010201010109), and the Key Technological Innovation Projects of Hubei Province (2018AAA062). Chunxia Xiao is the corresponding author of this paper.

References

[ABC11] ARTUSI A., BANTERLE F., CHETVERIKOV D.: A survey of specularity removal methods. Computer Graphics Forum (CGF) 30, 8 (2011), 2208–2230.

[AG16] ALPEROVICH A., GOLDLUECKE B.: A variational model for intrinsic light field decomposition. In Asian Conference on Computer Vision (ACCV) (2016).

[AJSG17] ALPEROVICH A., JOHANNSEN O., STRECKE M., GOLDLUECKE B.: Shadow and specularity priors for intrinsic light field decomposition. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition (2017).

[AMFM11] ARBELAEZ P., MAIRE M., FOWLKES C., MALIK J.: Contour detection and hierarchical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 33, 5 (2011), 898–916.

[AO15] AKASHI Y., OKATANI T.: Separation of reflection components by sparse non-negative matrix factorization. Computer Vision and Image Understanding (CVIU) 146 (2015), 77–85.

[Bar15] BARRON J. T.: Convolutional color constancy. In the IEEE International Conference on Computer Vision (ICCV) (2015).

[BBS14] BELL S., BALA K., SNAVELY N.: Intrinsic images in the wild. ACM Transactions on Graphics (TOG) 33, 4 (2014), 159.

[BLL96] BAJCSY R., LEE S., LEONARDIS A.: Detection of diffuse and specular interface reflections and inter-reflections by color image segmentation. International Journal of Computer Vision (IJCV) 17, 3 (1996), 241–272.

[BM15] BARRON J., MALIK J.: Shape, illumination, and reflectance from shading. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 37, 8 (2015), 1670–1687.

[BPC∗11] BOYD S., PARIKH N., CHU E., PELEATO B., ECKSTEIN J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3, 1 (2011), 1–122.

[BST∗14] BONNEEL N., SUNKAVALLI K., TOMPKIN J., SUN D., PARIS S., PFISTER H.: Interactive intrinsic video editing. ACM Transactions on Graphics (TOG) 33, 6 (2014), 197.

[BvdW11] BEIGPOUR S., VAN DE WEIJER J.: Object recoloring based on intrinsic image estimation. In the IEEE International Conference on Computer Vision (ICCV) (2011).

[CK13] CHEN Q., KOLTUN V.: A simple model for intrinsic image decomposition with depth cues. In the IEEE International Conference on Computer Vision (ICCV) (2013).

[COL∗15] CHANG H., FRIED O., LIU Y., DIVERDI S., FINKELSTEIN A.: Palette-based photo recoloring. ACM Transactions on Graphics (TOG) 34, 4 (2015).

[EHW∗16] OR-EL R., HERSHKOVITZ R., WETZLER A., ROSMAN G., BRUCKSTEIN A., KIMMEL R.: Real-time depth refinement for specular objects. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016).

[GCM14] GUO X., CAO X., MA Y.: Robust separation of reflection from multiple images. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014).

[GF19] FU G., ZHANG Q., XIAO C.: Towards high-quality intrinsic images in the wild. In the IEEE International Conference on Multimedia and Expo (ICME) (2019).

[GJAF09] GROSSE R., JOHNSON M. K., ADELSON E. H., FREEMAN W. T.: Ground truth dataset and baseline evaluations for intrinsic image algorithms. In the IEEE International Conference on Computer Vision (ICCV) (2009).

[GZW18] GUO J., ZHOU Z., WANG L.: Single image highlight removal with a sparse and low-rank reflection model. In European Conference on Computer Vision (ECCV) (2018).

[KJHK13] KIM H., JIN H., HADAP S., KWEON I.: Specular reflection separation using dark channel prior. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2013).

[KSK88] KLINKER G. J., SHAFER S. A., KANADE T.: The measurement of highlights in color images. International Journal of Computer Vision (IJCV) 2, 1 (1988), 7–32.

[LB92] LEE S. W., BAJCSY R.: Detection of specularity using color and multiple views. In European Conference on Computer Vision (ECCV) (1992).

[LLK∗02] LIN S., LI Y., KANG S., TONG X., SHUM H.: Diffuse-specular separation and depth recovery from image sequences. In European Conference on Computer Vision (ECCV) (2002).

[LLZI17] LI C., LIN S., ZHOU K., IKEUCHI K.: Specular highlight removal in facial images. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017).

[LS01] LIN S., SHUM H.: Separation of diffuse and specular reflection in color images. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2001).

[LZL14] LI C., ZHOU K., LIN S.: Intrinsic face image decomposition with human face priors. In European Conference on Computer Vision (ECCV) (2014).

[LZL15] LI C., ZHOU K., LIN S.: Simulating makeup through physics-based manipulation of intrinsic image layers. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015).

[MZRT16] MEKA A., ZOLLHÖFER M., RICHARDT C., THEOBALT C.: Live intrinsic video. ACM Transactions on Graphics (TOG) 35, 4 (2016), 109.

[NMY15] NARIHIRA T., MAIRE M., YU S. X.: Direct intrinsics: Learning albedo-shading decomposition by convolutional regression. In the IEEE International Conference on Computer Vision (ICCV) (2015).

[RTT17] REN W., TIAN J., TANG Y.: Specular reflection separation with color-lines constraint. IEEE Transactions on Image Processing (TIP) 26, 5 (2017), 2327–2337.

[SC09] SHEN H., CAI Q.: Simple and efficient method for specularity removal in an image. Applied Optics 48, 14 (2009), 2711–2719.

[Sha85] SHAFER S. A.: Using color to separate reflection components. Color Research & Application (1985), 210–218.

[SYHS17] SHI J., DONG Y., SU H., YU S. X.: Learning non-Lambertian object intrinsics across ShapeNet categories. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017).

[SZ13] SHEN H., ZHENG Z.: Real-time highlight removal using intensity ratio. Applied Optics 52, 14 (2013), 4483–4493.

[SZSX08] SHEN H., ZHANG H., SHAO S., XIN J.: Chromaticity-based separation of reflection components in a single image. Pattern Recognition (PR) 41, 8 (2008), 2461–2469.

[TI05] TAN R. T., IKEUCHI K.: Separating reflection components of textured surfaces using a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 27, 2 (2005), 178–193.

[TNI03] TAN R. T., NISHINO K., IKEUCHI K.: Illumination chromaticity estimation using inverse-intensity chromaticity space. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2003).

[XLXJ11] XU L., LU C., XU Y., JIA J.: Image smoothing via L0 gradient minimization. ACM Transactions on Graphics (TOG) 30, 6 (2011), 174.

[YCK06] YOON K., CHOI Y., KWEON I.: Fast separation of reflection components using a specularity-invariant image representation. In the IEEE International Conference on Image Processing (ICIP) (2006).

[YK94] SATO Y., IKEUCHI K.: Temporal-color space analysis of reflection. Journal of the Optical Society of America A 11, 11 (1994), 2990–3002.

[YKK17] YAMAMOTO T., KITAJIMA T., KAWAUCHI R.: Efficient improvement method for separation of reflection components based on an energy function. In the IEEE International Conference on Image Processing (ICIP) (2017).

[YWA10] YANG Q., WANG S., AHUJA N.: Real-time specular highlight removal using bilateral filtering. In European Conference on Computer Vision (ECCV) (2010).

[ZNXZ19] ZHANG Q., NIE Y., XIAO C., ZHENG W.: Enhancing underexposed photos using perceptually bidirectional similarity. CoRR abs/1907.10992 (2019).

[ZTD∗12] ZHAO Q., TAN P., DAI Q., SHEN L., WU E., LIN S.: A closed-form solution to Retinex with nonlocal texture constraints. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 34, 7 (2012), 1437–1444.

[ZXST17] ZHANG Q., XIAO C., SUN H., TANG F.: Palette-based image recoloring using color decomposition optimization. IEEE Transactions on Image Processing (TIP) 26, 4 (2017), 1952–1964.

[ZYL∗17] ZHANG L., YAN Q., LIU Z., ZOU H., XIAO C.: Illumination decomposition for photograph with multiple light sources. IEEE Transactions on Image Processing (TIP) 26, 9 (2017), 4114–4127.

[ZYX∗18] ZHANG Q., YUAN G., XIAO C., ZHU L., ZHENG W.-S.: High-quality exposure correction of underexposed photos. In ACM Multimedia Conference (MM) (2018).
