
  • IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 11, NO. 11, NOVEMBER 2014 1871

    Effective Contrast-Based Dehazing for Robust Image Matching

    Cosmin Ancuti, Member, IEEE, and Codruta O. Ancuti, Member, IEEE

    Abstract: In this letter we present a novel strategy to enhance images degraded by the atmospheric phenomenon of haze. Our single-image technique does not require any geometrical information or user interaction, enhancing such images by restoring the contrast of the degraded images. The degradation of the finest details and gradients is constrained to a minimum level. Using a simple formulation derived from the lightness predictor, our contrast enhancement technique restores lost discontinuities only in regions that insufficiently represent the original chromatic contrast of the scene. The parameters of our simple formulation are optimized to preserve the original color spatial distribution and the local contrast. We demonstrate that our dehazing technique is suitable for the challenging problem of image matching based on local feature points. Moreover, we are the first to present an image matching evaluation performed for hazy images. Extensive experiments demonstrate the utility of the novel technique.

    Index Terms: Dehazing, image matching, Scale Invariant Feature Transform (SIFT).

    I. INTRODUCTION

    GIVEN two or more images of the same scene, the process of image matching requires finding valid corresponding feature points in the images. These matches represent projections of the same scene location in the corresponding images. Since images are in general taken at different times, from different sensors/cameras and viewpoints, this task may be very challenging. Image matching plays a crucial role in many remote sensing applications such as change detection, cartography using imagery with reduced overlapping, and fusion of images taken with different sensors. In the early remote sensing systems, this task required substantial human involvement by manually selecting some feature points of significant landmarks. Nowadays, due to the significant progress of local feature point detectors and descriptors, the tasks of matching and registration can in most cases be done automatically.

    Many local feature point operators have been introduced in the last decade. By extracting regions that are covariant to a class of transformations [1], recent local feature operators are robust to occlusions, being invariant to image transformations both geometric (scale, rotation, affine) and photometric. A comprehensive survey of such local operators is included in the study of [2].

    Manuscript received December 19, 2013; revised January 27, 2014; accepted March 11, 2014.

    The authors are with the Department of Measurements and Optical Engineering, Politehnica University of Timisoara, 300006 Timisoara, Romania (e-mail: [email protected]; [email protected]).

    Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

    Digital Object Identifier 10.1109/LGRS.2014.2312314

    However, besides the geometric and photometric variations, outdoor and aerial images that need to be matched are often degraded by haze, a common atmospheric phenomenon. Obviously, remote sensing applications deal with such images since in many cases the distance between the sensors and the surface of the Earth is significant. Haze is the atmospheric phenomenon that dims the clarity of an observed scene due to particles such as smoke, fog, and dust. A hazy scene is characterized by an important attenuation of the color that is proportional to the distance to the scene objects. As a result, the original contrast is degraded and the scene features gradually fade the farther they are from the camera sensor. Moreover, due to the scattering effects the color information is shifted.
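    The degradation just described is commonly summarized by the standard optical haze model used by the physically based methods discussed below. The following minimal simulation is our own illustration (the function name and parameter choices are assumptions, and this model is not part of the letter's method, which deliberately avoids estimating the transmission):

```python
import numpy as np

def add_haze(J, depth, beta=1.0, A=1.0):
    """Simulate haze with the standard optical model I = J*t + A*(1 - t),
    where t = exp(-beta * depth) is the transmission: attenuation grows
    with scene distance and the airlight A shifts colors toward the
    atmospheric light. Illustrative only."""
    t = np.exp(-beta * np.asarray(depth, dtype=float))[..., None]
    return np.asarray(J, dtype=float) * t + A * (1.0 - t)

# Nearby pixels (depth 0) keep their color; distant pixels fade toward A.
J = np.full((1, 2, 3), 0.2)          # a tiny 1x2 RGB "scene"
I = add_haze(J, depth=np.array([[0.0, 50.0]]))
```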

    Restoring such hazy images is a challenging task. The first dehazing approaches employ multiple images [3] or additional information such as a depth map [4] or specialized hardware [5]. Since in general such additional information is not available to the users, these strategies are of limited use as a reliable solution to the dehazing problem. More recently, several single-image techniques [6]–[11] have been introduced in the literature. Roughly, these techniques can be divided into two major classes: physically based and contrast-based techniques.

    Physically based techniques [6], [9], [10] restore the hazy images based on the estimated transmission (depth) map. The strategy of Fattal [6] restores the airlight color by assuming that the image shading and scene transmission are locally uncorrelated. He et al. [9] estimate a rough transmission map based on the dark channel [12] that is refined in a final step by a computationally expensive alpha-matting strategy. The technique of Nishino et al. [10] employs a Bayesian probabilistic model that jointly estimates the scene albedo and depth from a single degraded image by fully leveraging their latent statistical structures.

    On the other hand, contrast-based techniques [7], [8], [11] aim to enhance the hazy images without estimating the depth information. Tan's [7] technique maximizes the local contrast while constraining the image intensity to be less than the global atmospheric light value. The method of Tarel and Hautière [8] enhances the global contrast of hazy images assuming that the depth map must be smooth except along edges with large depth jumps. Ancuti and Ancuti [11] enhance the appearance of hazy images by a multiscale fusion-based technique that is guided by several measurements.

    In this letter we introduce a novel technique that removes the haze effects from such degraded images. Our technique is a single-image method that aims to enhance such images by restoring the contrast of the degraded images. Different from

    1545-598X © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


    most of the existing techniques, our strategy does not require any geometrical information or user interaction. Our method is built on the basic observation that haze-free images are characterized by a better contrast than hazy images. The presented strategy takes advantage of the original color information by maximizing the contrast of degraded regions. The degradation of the finest details and gradients is constrained to a minimum level. Using a simple formulation that is derived from the lightness predictor defined in [13], our contrast enhancement technique restores lost discontinuities only in regions that insufficiently represent the original chromatic contrast of the scene. The parameters of our simple formulation are optimized to preserve the original color spatial distribution and the local contrast. Extensive experiments demonstrate the utility of the novel technique.

    As a second contribution of this letter, we demonstrate that our dehazing technique is suitable for the challenging problem of image matching based on local feature points. To the best of our knowledge this represents the first work that presents an image matching evaluation performed for hazy images.

    II. CONTRAST-BASED ENHANCING OF HAZY SCENES

    The process of contrast enhancement aims to increase the

    perceptibility of the objects in the image. Typically, the contrast of an image can be enhanced by general operators that can be found in commercial tools (e.g., Adobe Photoshop) such as auto-contrast, gamma correction, linear mapping, and histogram stretching. As discussed previously also by Fattal [6], these operators ignore the spatial relations between pixels. Since the haze effect is not constant across the scene, various locations in the image are spoilt differently. As a result, such classical contrast enhancing operators, since they perform the same operation for each image pixel, do not represent reliable solutions for the dehazing problem.

    To solve this problem, we define a contrast technique with several parameters that are optimized for a given image. Our strategy is derived from the global mapping operator used previously also for the decolorization problem [15]. We start from the global operator that enhances the contrast of the luminance L based on the hue H and saturation S (the mapping is processed in the HSL color space)

    LE = L (1 + Sf(H)) (1)

    where LE is the enhanced luminance and f can be modeled as a trigonometric polynomial function. This global operator can also be related to the lightness predictor of Nayatani [13], used to model the influence of chromatic components on the perceptual lightness of an isolated color. In Nayatani [13] the function f is expressed as

    f(x) = Σi [ai cos(ix) + bi sin(ix)]

    but all the unknown parameters are fixed constants determined by fitting the function to an experimental data set of perceptual lightness of colors.

    In contrast, we express the variation of the hue by the following equation, whose parameters are adaptively optimized for a given image

    f(H) = λ cos(φH + θ) (2)

    where the parameter λ aims to temper the impact of the saturation, acting like a modulator that controls the amount of color contrast. In our extensive experiments we observed that the parameter λ needs to be set to smaller values for desaturated images, while for highly saturated images λ is assigned higher values. The parameter φ represents the period, while the parameter θ represents the offset angle on the color wheel (0°–360°), both being related to the color distribution and palette of the input image. Finally, our nonlinear contrast operator is obtained by replacing the previous expression in (1)

    LE = L (1 + λ S cos(φH + θ)). (3)

    As a result, the three parameters of (3) (λ, φ, and θ) of the nonlinear mapping depend on the input image, being optimized in our framework to enhance the local features of the degraded hazy image.
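    As a concrete sketch, the nonlinear mapping of (3) can be applied per pixel once an image has been converted to HSL. The code below is a minimal illustration under stated assumptions: the parameter names (lam, phi, theta) are ours, the inputs are assumed already in HSL with H in radians, and the parameter values shown are arbitrary, not the optimized ones:

```python
import numpy as np

def enhance_luminance(L, S, H, lam, phi, theta):
    """Sketch of Eq. (3): L_E = L * (1 + lam * S * cos(phi*H + theta)).

    L, S in [0, 1]; H in radians. lam tempers the saturation's impact,
    phi is the period and theta the hue offset; all three are
    image-dependent parameters to be optimized."""
    return L * (1.0 + lam * S * np.cos(phi * H + theta))

# With lam = 0 the operator leaves the luminance untouched.
L = np.array([0.2, 0.5, 0.8])
S = np.array([0.9, 0.1, 0.5])
H = np.array([0.0, np.pi / 2, np.pi])
assert np.allclose(enhance_luminance(L, S, H, 0.0, 1.0, 0.0), L)
```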

    A. Optimization

    As previously mentioned, in this letter we do not intend to

    fully recover the original colors of the scene or albedo but to restore the visibility of a hazy image by properly enhancing the contrast of an input degraded image. Moreover, besides visibility improvement, different from existing dehazing methods we aim for a method that increases the matching performance of the local operators. Therefore, our strategy is designed to preserve most of the local features and details in the enhanced version. As emphasized by Lowe [14], local contrast preservation is crucial in the process of matching by feature points. In its final stage, the well-known Scale Invariant Feature Transform (SIFT) operator [14] filters out all the features with low local contrast. This operation improves considerably the process of matching by decreasing the ambiguity when comparing the values of the Euclidean distance between descriptor vectors.

    We optimize the parameters of (3) (λ, φ, and θ) by constraining the local contrast of the enhanced version by the chromatic contrast of the original image. As a result, to enhance the local features of our nonlinear contrast mapping, we minimize the following energy function E that represents the difference of image gradients between the original hazy color image and the enhanced version of the luminance LE [(3)]

    E(x, y) = ||∇LE(x, y) − DLab(x, y)||² (4)

    where (x, y) is a pixel location, ∇LE represents the gradient of the enhanced luminance LE (∇f = (∂f/∂x, ∂f/∂y)), and DLab is the difference between color pixels of the original hazy image computed in the perceptual CIELab [16] color space with the expression

    DLab = √(ΔL² + γ(Δa² + Δb²)) (5)

    where ΔL, Δa, and Δb are the differences in the CIELab coordinates L (color lightness), a (lightness position between red/magenta and green), and b (lightness position between yellow and blue), and γ weights the impact of color information in the contrast expression.
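    Expression (5) can be sketched directly, assuming the per-pixel color differences are taken as forward differences between horizontally neighboring pixels (one plausible reading of "difference between color pixels"; the difference scheme and the name of the chromatic weight are our assumptions):

```python
import numpy as np

def d_lab(lab, gamma=1.0):
    """Sketch of Eq. (5): D_Lab = sqrt(dL^2 + gamma*(da^2 + db^2)).

    `lab` is an (h, w, 3) CIELab image; differences are horizontal
    forward differences between neighboring pixels. `gamma` weights
    the chromatic (a, b) channels against lightness."""
    d = np.diff(lab, axis=1)                 # per-pixel forward difference
    dL, da, db = d[..., 0], d[..., 1], d[..., 2]
    return np.sqrt(dL**2 + gamma * (da**2 + db**2))
```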

    Our problem is now reduced to finding the unknown parameters (λ, φ, and θ) that best minimize the cost function E. We addressed it as a least squares problem with non-negativity


    constraints. Since we aim for a low-complexity recovery solution of the unknown vector, we solve the problem as an L1-regularized least squares. To prevent over-fitting we introduce a regularization term weighted by a parameter set to 0.1. Our L1-regularized least squares problem always converges to a solution and proves to be relatively fast, converging in approximately 20–30 iterations. The final dehazed version of the image is the result of substituting the original degraded luminance L with the enhanced version LE and the new saturation SE = S(1 + L/LE) in the HSL color space.
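    An L1-regularized least squares fit of this kind can be sketched with a standard proximal-gradient (ISTA) iteration. The toy solver below is our own stand-in, since the letter does not specify the solver; only the 0.1 regularization weight and the 20-30 iteration budget come from the text:

```python
import numpy as np

def lasso_ista(A, b, mu=0.1, iters=30):
    """Minimize ||A p - b||_2^2 + mu * ||p||_1 by ISTA (proximal gradient).

    mu = 0.1 mirrors the regularization weight in the text; the solver
    choice and variable names are illustrative assumptions."""
    p = np.zeros(A.shape[1])
    t = 0.5 / np.linalg.norm(A, 2) ** 2        # step <= 1 / Lipschitz constant
    for _ in range(iters):
        z = p - 2.0 * t * (A.T @ (A @ p - b))  # gradient step on the quadratic
        p = np.sign(z) * np.maximum(np.abs(z) - mu * t, 0.0)  # soft-threshold
    return p
```

The soft-thresholding step is what drives small coefficients of the parameter vector exactly to zero, which is the usual motivation for the L1 penalty.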

    III. MATCHING OF HAZY IMAGES

    Local feature points (keypoints) are used for matching images due to their impressive robustness and invariance to different transformations. Matching methods based on keypoints have been shown to be more effective [2] than matching techniques based on extracting edges and contours. Typically, the framework of matching images based on local keypoints consists of three main steps. First, the local feature points (keypoints or interest points) are extracted from an image based on their neighborhood information. In general the keypoints are those locations of the image with important variation in their immediate neighborhoods. The second step is to compute descriptors (signatures) based on the neighboring regions of the keypoints. The different techniques that describe the regions around feature points consider in general color, structure, and texture. Their main goal is to increase the distinctiveness of the extracted feature points, to improve the efficiency and to simplify the matching process. Finally, the signature vectors of extracted keypoints are compared using some metric (e.g., Euclidean distance, earth mover's distance) or derived strategies that are based on such distances.

    Recent remote sensing applications employ local feature points to solve problems such as automatic registration [17], urban-area and building detection [18], and registration of hyperspectral imagery [19]. In general these applications are built on the well-known SIFT [14] operator. Given its impressive results reported in the comprehensive studies [2], [20], the choice of SIFT is not casual. Basically, most of the recent local feature point operators represent improvements derived from SIFT, designed for specific cases and applications.

    In this letter, we also use the well-known SIFT [14] operator. However, different from previous work, in this letter we analyze the problem of matching hazy images based on local feature points. For the sake of completeness the SIFT operator is briefly discussed in the following subsection.

    A. Matching Based on SIFT Operator

    The feature points (keypoints) of the SIFT operator [14] are searched in the Difference of Gaussian (DoG) scale space. The DoG is built by subtracting images that have previously been convolved (blurred) with a Gaussian function whose standard deviation increases monotonically, and represents a good approximation of the Laplacian. The keypoint candidates are filtered as local extrema in the DoG scale space. Such a location is selected only if its value is greater or smaller than those of all its 26 neighbors [Fig. 1(a)]. As discussed previously, related to

    Fig. 1. SIFT operator. (a) Keypoint detection in DoG scale space. (b) Signature of the keypoint.

    our contrast-based dehazing technique, the feature points with strong response to edges and with low contrast are rejected to reduce the ambiguity in the process of matching.
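    The 26-neighbor extremum test described above can be sketched directly. The array layout and function name below are illustrative, not taken from the letter:

```python
import numpy as np

def is_dog_extremum(dog, s, y, x):
    """SIFT candidate rule: the sample at (x, y) and scale s is kept only
    if its DoG value is strictly greater or strictly smaller than all 26
    neighbors in the 3x3x3 scale-space cube around it.

    `dog` is a (scales, h, w) stack of difference-of-Gaussian images."""
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    v = dog[s, y, x]
    others = np.delete(cube.ravel(), 13)   # drop the center of the 27-cube
    return bool(np.all(v > others) or np.all(v < others))
```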

    The SIFT descriptor, or signature, is calculated based on the image gradient information. A signature is assigned to each extracted keypoint and consists of a 128-dimension vector. It is computed from the gradient magnitudes and orientations in the circular neighbor regions of the keypoint. The image pyramid level is selected according to the computed characteristic scale of the respective feature point. For every keypoint a 4 × 4 array of orientation histograms is built, each over a 4 × 4 subregion of a 16 × 16 region centered on the keypoint location. Each histogram has 8 bins corresponding to every 45° [Fig. 1(b)].
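    The 128-dimension figure follows from this layout: 4 × 4 subregions times 8 orientation bins. A toy sketch of one subregion histogram (our simplification; Lowe additionally applies Gaussian weighting and trilinear interpolation, omitted here):

```python
import numpy as np

def orientation_histogram(angles_deg, magnitudes):
    """Toy sketch of one SIFT subregion histogram: gradient orientations
    are quantized into 8 bins of 45 degrees each, weighted by gradient
    magnitude. A full signature stacks 4 x 4 such histograms."""
    bins = (np.asarray(angles_deg) % 360) // 45
    hist = np.zeros(8)
    np.add.at(hist, bins.astype(int), magnitudes)  # magnitude-weighted vote
    return hist

# 4 x 4 subregions x 8 bins gives the 128-dimension descriptor.
assert 4 * 4 * orientation_histogram([0], [1.0]).size == 128
```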

    Finally, the matching procedure is based on computing the Euclidean distance among the descriptor vectors that correspond to the extracted keypoints. Since extensive experiments revealed that the minimum distance criterion alone is not enough to identify good matches, Lowe [14] adopted a matching technique where the ratio between the distance to the first-best matched and the second-best matched feature points is evaluated. Only the keypoints with a ratio value lower than a threshold are considered valid matches.
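    Lowe's ratio test can be sketched with a brute-force nearest-neighbor search; the function below is an illustrative sketch (the 0.8 default threshold follows Lowe's paper, and the descriptor dimensionality is left generic):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Accept a match only when the nearest neighbor is sufficiently
    closer than the second nearest: d1 < ratio * d2.

    desc_a, desc_b are (n, d) descriptor arrays (d = 128 for SIFT);
    desc_b must contain at least two descriptors."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)  # distances to all of b
        j1, j2 = np.argsort(dist)[:2]              # best and second best
        if dist[j1] < ratio * dist[j2]:
            matches.append((i, j1))
    return matches
```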

    IV. RESULTS AND DISCUSSION

    Our technique has been tested extensively on real hazy images. In Fig. 2 we compare with several state-of-the-art single image dehazing techniques. As can be observed our technique is able to produce visually pleasing results that are comparable with the results yielded by the more complex techniques [9], [10] that require estimation of the depth map. Moreover, in comparison with the technique of Tarel and Hautière [8], which is also a contrast-based technique that processes the latent image without computing the transmission map, our technique is less prone to artifacts. Our unoptimized Matlab implementation is able to process an 800 × 600 image in less than 4 seconds.

    Fig. 3 displays one of the images used in the feature-based image matching shown in Fig. 4. As can be seen, the Adobe Photoshop Auto Contrast operation performs poorly. Compared with the technique of Tarel and Hautière [8], our contrast-based strategy is able to better preserve the fine transitions in the hazy regions without introducing unpleasing artifacts. These observations are emphasized also by the matching results shown in Fig. 4. Applying the standard SIFT operator on the original hazy images yields only 10 valid matches, while using images enhanced by Adobe Photoshop Auto Contrast yields 52


    Fig. 2. Comparative results.

    Fig. 3. Hazy image and the enhanced results of the Adobe Photoshop Auto Contrast operation, Tarel and Hautière [8], and our contrast-based strategy.

    good correspondences. The same feature-based operator yields 71 valid matches for the images enhanced by the technique of Tarel and Hautière [8]. On the other hand, the same matching procedure based on SIFT but applied on the images enhanced with our technique generates 236 valid matches.

    To measure the performance of the different enhancing techniques for the matching procedure, we evaluate nine pairs of images that represent aerial images and hazy images captured during daylight. In our evaluation we use the standard SIFT operator applied on the original hazy images and on images enhanced using Photoshop Auto Contrast, the technique of Tarel and Hautière [8], and our contrast-based method. Besides having a complexity similar to ours, the technique of Tarel and Hautière [8] is the only recent state-of-the-art dehazing technique whose code has been made available to the public.

    In the evaluation we investigate the influence of the overlap error on the performance of the SIFT operator applied on the

    Fig. 4. Matching based on SIFT: original hazy images (10 good matches), Adobe Photoshop Auto Contrast (52 good matches), Tarel and Hautière [8] (71 good matches), our method (236 good matches).

    original images but also on the images enhanced using the different enhancing techniques. This evaluation, used previously in the comprehensive study of Mikolajczyk and Schmid [1], compares the recall for various overlap errors. We slightly redefine recall as the number of correct matches divided by the maximum number of correct matches obtained by the most efficient method. The overlap error represents how well the neighbor regions of the matched features correspond under a transformation. In our case the tested pairs of images are related by a homography that is computed as presented in [1]. The overlap error is defined as one minus the ratio of the intersection and union of the feature neighbor regions.
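    Under the simplifying assumption of axis-aligned rectangular regions (the protocol of [1] uses elliptical regions projected through the ground-truth homography), the overlap error can be sketched as:

```python
def overlap_error(r1, r2):
    """Overlap error of two matched feature regions: one minus the ratio
    of the intersection to the union of their areas. Regions are modeled
    here as axis-aligned boxes (x0, y0, x1, y1) for simplicity."""
    ax0, ay0, ax1, ay1 = r1
    bx0, by0, bx1, by1 = r2
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))   # intersection width
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))   # intersection height
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return 1.0 - inter / union

# A match counts as valid when the overlap error is below 50%.
assert overlap_error((0, 0, 2, 2), (0, 0, 2, 2)) == 0.0
```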

    Fig. 5 plots the relation between the recall value and the value of the overlap error for the matching procedure that uses the SIFT operator on the results of the different enhancing techniques. In Figs. 4 and 6 the valid matches are those that have an overlap error of less than 50%. As can be seen, the matching procedure applied on our enhanced results yields significant improvements in terms of correct matches compared with Adobe Photoshop Auto Contrast but also with the recent technique of Tarel and Hautière [8].


    Fig. 5. Matching evaluation based on SIFT. Relation between overlap error and recall (number of correct matches divided by the maximum number of correct matches obtained by the most efficient method).

    Fig. 6. Matching the original images using SIFT [14] yields 37 valid matches, while the same matching procedure applied on the images enhanced using our technique yields 179 valid matches.

    V. CONCLUSION

    In this letter we presented a new single-image dehazing technique. Our strategy to enhance hazy images optimizes the contrast by constraining the degradation of the finest details and gradients to a minimum. The degraded discontinuities are enhanced mostly in the regions that lost the original contrast of the scene. Our technique is simple and does not require estimation of the transmission map. We show that our technique is suitable for the task of matching images using local feature points, outperforming the compared techniques. In future work, we will extend our approach to the problem of video dehazing.

    REFERENCES

    [1] K. Mikolajczyk and C. Schmid, "A performance evaluation of local descriptors," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1615–1630, Oct. 2005.
    [2] T. Tuytelaars and K. Mikolajczyk, "Local invariant feature detectors: A survey," Found. Trends Comput. Graph. Vis., vol. 3, no. 3, pp. 177–280, Jan. 2008.
    [3] S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 713–724, Jun. 2003.
    [4] J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, "Deep photo: Model-based photograph enhancement and viewing," ACM Trans. Graph., vol. 27, no. 5, p. 116, Dec. 2008.
    [5] T. Treibitz and Y. Y. Schechner, "Polarization: Beneficial for visibility enhancement?" in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2009, pp. 525–532.
    [6] R. Fattal, "Single image dehazing," ACM Trans. Graph., vol. 27, no. 3, p. 72, Aug. 2008.
    [7] R. T. Tan, "Visibility in bad weather from a single image," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2008, pp. 1–8.
    [8] J.-P. Tarel and N. Hautière, "Fast visibility restoration from a single color or gray level image," in Proc. IEEE Int. Conf. Comput. Vis., 2009, pp. 2201–2208.
    [9] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341–2353, Dec. 2011.
    [10] K. Nishino, L. Kratz, and S. Lombardi, "Bayesian defogging," Int. J. Comput. Vis., vol. 98, no. 3, pp. 263–270, Jul. 2012.
    [11] C. O. Ancuti and C. Ancuti, "Single image dehazing by multiscale fusion," IEEE Trans. Image Process., vol. 22, no. 8, pp. 3271–3282, Aug. 2013.
    [12] P. S. Chavez, "An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data," Remote Sens. Environ., vol. 24, no. 3, pp. 459–479, Apr. 1988.
    [13] Y. Nayatani, "Simple estimation methods for the Helmholtz–Kohlrausch effect," Color Res. Appl., vol. 22, no. 6, pp. 385–401, Dec. 1997.
    [14] D. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, Nov. 2004.
    [15] C. Ancuti, C. O. Ancuti, and P. Bekaert, "Enhancing by saliency-guided decolorization," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2011, pp. 257–264.
    [16] G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed. Hoboken, NJ, USA: Wiley, 2000.
    [17] A. Wong and D. A. Clausi, "ARRSI: Automatic registration of remote-sensing images," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 5, pp. 1483–1493, May 2007.
    [18] B. Sirmacek and C. Unsalan, "Urban-area and building detection using SIFT keypoints and graph theory," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 4, pp. 1156–1166, Apr. 2009.
    [19] L. P. Dorado-Muñoz, M. Vélez-Reyes, A. Mukherjee, and B. Roysam, "A vector SIFT detector for interest point detection in hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 50, no. 11, pp. 4521–4533, Nov. 2012.
    [20] P. Moreels and P. Perona, "Evaluation of features detectors and descriptors based on 3D objects," Int. J. Comput. Vis., vol. 73, no. 3, pp. 263–284, Jul. 2007.
