
IEICE TRANS. INF. & SYST., VOL.E82–D, NO.2, FEBRUARY 1999, p. 453

PAPER

Automated Detection and Removal of Clouds and Their Shadows from Landsat TM Images

Bin WANG†, Atsuo ONO††, Members, Kanako MURAMATSU†††, Nonmember, and Noboru FUJIWARA†††, Member

SUMMARY In this paper, a scheme to remove clouds and their shadows from remotely sensed Landsat TM images over land is proposed. The scheme uses an image fusion technique to automatically recognize and remove contamination by clouds and their shadows, and to integrate complementary information into the composite image from multitemporal images. The cloud regions can be detected on the basis of their reflectance differences from the other regions. Based on the fact that shadows smooth the brightness changes of the ground, the shadow regions can be detected successfully by means of the wavelet transform. Further, an area-based detection rule is developed in this paper, and the multispectral characteristics of Landsat TM images are used to alleviate the computational load. Because the wavelet transform is adopted for the image fusion, artifacts are invisible in the fused images. Finally, the performance of the proposed scheme is demonstrated experimentally.

key words: remote sensing, image fusion, wavelet transform, automated detection and removal, Landsat TM images

1. Introduction

Sensors on satellite platforms can provide a great deal of valuable information for various global applications, such as environmental monitoring and natural resource management, on account of their global and repetitive measurement capability. Unfortunately, about 2/3 of the Earth's surface is covered by clouds at any time [1], causing serious problems in optical-wavelength remote sensing [1]–[8]. Since clouds not only hide the ground but also cast their shadows on it, they impede many applications, such as the detection of vegetation. This means that, for many applications, it is necessary to remove these clouds and their shadows from the acquired satellite images.

To address this limitation, some compositing techniques have been developed [5]–[8]. However, these techniques were initially designed for data from low-spatial-resolution systems, such as the Advanced Very High Resolution Radiometer (AVHRR) on board the NOAA satellite. They are not suitable for data from high-spatial-resolution systems, such as the Thematic Mapper (TM) on board the Landsat satellite. For high-spatial-resolution systems, clouds over small areas can still be observed clearly in the acquired satellite images, and in these cases the clouds and their shadows may be separated from each other by a considerable distance. Even when the locations of clouds can be detected, it is still difficult to estimate the locations of their corresponding shadows. This problem cannot be solved by the previous techniques [5]–[8]. Besides, blocking artifacts in the composite images were almost inevitable when the previous compositing techniques were used. Although it is important to remove clouds and their shadows from satellite images obtained from high-spatial-resolution systems, little work in this area has appeared in the literature.

Manuscript received February 6, 1998.
Manuscript revised July 28, 1998.

†The author is with the Graduate School of Science and Technology, Kobe University, Kobe-shi, 657–8501 Japan.

††The author is with the Graduate School of Science and Technology and the Faculty of Cross-Cultural Studies, Kobe University, Kobe-shi, 657–8501 Japan.

†††The authors are with the Faculty of Science, Nara Women's University, Nara-shi, 630–8506 Japan.

In this paper, we propose an image fusion scheme to solve the above problems. Our basic strategy is to automatically detect the regions containing clouds and their shadows and then incorporate complementary information into the composite from multitemporal images. This scheme permits the production of cloud-free images from individual, partly cloudy input scenes by selecting the cloud-free information for the same geographic location from images taken over a limited period of time. The block diagram of the fusion scheme is illustrated in Fig. 1.

After taking the weather conditions into consideration, a satellite image containing few clouds is selected as the main image. Another multitemporal satellite image is selected as the reference image and is coregistered with the main image. Before the detection of clouds and their shadows, the brightness of the reference image is corrected relative to that of the main image. It may be easy to identify the cloud regions using reflectance thresholds, but it is difficult to detect the shadow regions because their brightness values are very close to those of their neighbors or of some other regions. However, because shadows smooth the brightness changes of the ground and thus reduce the local contrast of the image, the shadow regions can be detected successfully by means of the wavelet transform. An important property of the wavelet transform is the easy characterization of the local regularity of an image [12], [13]. The wavelet coefficients give a measure of the local contrast of the image at different scales. In this paper, we propose an area-based detection rule for shadow detection. Further, we integrate complementary information into the composite image by fusing wavelet transform values in the image fusion stage. In addition, we suggest the use of the multispectral characteristics of remotely sensed images to ease the computational load.

Fig. 1 The block diagram of the fusion scheme.

We point out that we mainly concentrate on Landsat TM data in this work. The Landsat TM is a multispectral imaging system of the scanning type [9]. The TM instrument collects radiometric data (256 radiance levels) in seven spectral bands, and offers a ground resolution of 30 m in the six reflective bands and of 120 m in the emissive thermal infrared channel. In practice, the Landsat TM sensor provides a very powerful tool for monitoring environmental changes. Other sensors, including the SPOT HRV, may be considered later. We believe that our work will also be useful for those cases.

The rest of the paper is organized as follows. Section 2 introduces the discrete wavelet frame. Section 3 proposes a fusion scheme to detect and remove clouds and their shadows. Section 4 deals with the improvement of the proposed scheme. Finally, some experimental results for Landsat TM data and discussions are given in Sect. 5, and the conclusion is given in Sect. 6.

2. Discrete Wavelet Frame

In this section, we briefly review the wavelet transform, and then focus on the discrete wavelet frame.

Traditionally, the Fourier transform has been utilized for signal analysis and representation. Although it is straightforward to reconstruct a signal from its Fourier transform, the Fourier representation includes no local description of the signal. To alleviate this problem, the windowed Fourier transform was introduced. Because the signal is filtered by a fixed window function, this transform has a localization property not present in the traditional Fourier transform. Unfortunately, because the envelope of the window is the same for all frequencies, it does not provide sufficient spatial detail at high frequencies. The wavelet transform, in contrast, has better localization properties in both the frequency and spatial domains than the traditional Fourier transform, because the original signal is filtered by translations and dilations of a basic function, called the "mother wavelet." There are continuous and discrete wavelet transforms, and in each case conditions on the mother wavelet can be derived for reconstruction of the signal. See [11]–[13] for further details on the wavelet transform.

Fig. 2 Illustration of DWF: (a) image decomposition, (b) image reconstruction.

An overcomplete wavelet transform called the discrete wavelet frame (DWF) was described in [14]. This DWF is similar to the discrete wavelet transform (DWT) presented in [12] and [13], except that no downsampling occurs between the levels of the hierarchy. This wavelet frame decomposition of an image can be obtained by successive 1-D processing along the rows and columns of the image. The DWF is described by the block diagram in Fig. 2 (a). After one-level wavelet decomposition, an image f(i, j) is transformed into four subimages: an image s(i, j) at a coarser resolution level, a horizontal orientation image d1(i, j), a vertical orientation image d2(i, j) and a diagonal orientation image d3(i, j). According to filter theory, these subimages correspond to the outputs of the low-low (LL), low-high (LH), high-low (HL), and high-high (HH) frequency bands. The process of image reconstruction is illustrated in Fig. 2 (b).
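As an illustrative sketch (our own, not the authors' code), a one-level DWF decomposition can be written as separable row and column filtering with no downsampling. Haar filters and circular boundary handling are used here purely for brevity; the paper itself uses the 4th-order Battle–Lemarié filter.

```python
import numpy as np

def haar_pair(x, axis):
    """Filter along one axis with the Haar lowpass/highpass pair,
    with circular (wrap-around) boundaries and no downsampling."""
    nbr = np.roll(x, -1, axis=axis)
    return 0.5 * (x + nbr), 0.5 * (x - nbr)

def dwf_level1(img):
    """One-level discrete wavelet frame of an image: rows first, then
    columns, yielding LL, LH, HL and HH subimages, each the same size
    as the input (no downsampling between levels)."""
    lo, hi = haar_pair(img, axis=1)   # along rows
    s,  d1 = haar_pair(lo, axis=0)    # LL (coarse), LH (horizontal)
    d2, d3 = haar_pair(hi, axis=0)    # HL (vertical), HH (diagonal)
    return s, d1, d2, d3
```

For this particular filter pair the frame is tight, so the energies of the four subimages sum exactly to the energy of the input, mirroring the energy conservation property of the DWF mentioned in the text; in this special case, simply adding the four subimages even reconstructs the input exactly.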

The method for generating the filters H_I(ω) and G_I(ω) at level I, where H_I(ω) = H(2^(I−1)ω) and G_I(ω) = G(2^(I−1)ω), was described in [15]. Each step involves a convolution with the basic filters h (lowpass) and g (highpass), which are expanded by the insertion of an appropriate number of zeros between taps. Here, H and G denote the Fourier transforms of h and g, and H̃ and G̃ the conjugates of H and G, respectively.

It is proven that the frame of this DWF is tight and that its decomposition has the energy conservation property. The fundamental difference from an orthogonal system is that the representation may be redundant.


However, the DWF has the advantage of translation invariance, and there is no aliasing between multichannel signals. These two properties are desirable for signal representation and analysis.

In this scheme, the DWF is particularly advantageous for processing images which have not been perfectly coregistered, because it does not perform the downsampling operation. We have found that the DWF, unlike the DWT, is insensitive to misregistration of images. This should be very valuable for practical purposes.

We will use the 4th-order Battle–Lemarié filter [12] for the DWF in this paper. The Battle–Lemarié family is indexed by the order n of the spline. Its basis functions are symmetric, which means that there is no phase distortion and the spatial localization of the wavelet coefficients is well preserved. This property is extremely relevant to our purpose.

3. The Fusion Scheme

The basic procedure for automated removal of clouds and their shadows is shown in Fig. 1. An important preprocessing step in image fusion is image registration, which ensures that the information from each of the images refers to the same physical structure in the environment. In this paper, we assume that the images to be combined have already been coregistered.

3.1 Correction of Brightness

The main image and the reference image defined in this paper are two images observed at different times but covering the same region of the ground. Due to differing solar irradiance and atmospheric effects, it is necessary to correct the brightness of the two images before the image fusion. For each spectral band of the Landsat TM data, we can relatively eliminate the differences in solar irradiance and atmospheric effects by assuming a linear relation [10] between the corresponding brightness values of the two images. According to [10], the correction of brightness for Landsat TM data can be performed using

f′_ref(i, j) = (σ_main/σ_ref)[f_ref(i, j) − m_ref] + m_main,   (1)

where f_ref(i, j) is the old brightness value of a pixel of the reference image and f′_ref(i, j) is its new value; m_ref and σ_ref are the mean and the standard deviation of the reference image, and m_main and σ_main are those of the main image, respectively. The main image is adopted as a standard, and the reference image is adjusted so that its mean and standard deviation match those of the main image. It is worth noting that the brightness values of feature regions rather than feature points are used for the brightness correction, so that the correction error is reduced significantly.
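A minimal sketch of Eq. (1) in code (function and variable names are ours; note that the paper computes the statistics over feature regions, whereas whole-image statistics are used here for simplicity):

```python
import numpy as np

def correct_brightness(f_ref, f_main):
    """Eq. (1): linearly rescale the reference image so that its mean
    and standard deviation match those of the main image. The paper
    restricts the statistics to feature regions; whole-image statistics
    are an assumption made here for brevity."""
    m_ref, s_ref = f_ref.mean(), f_ref.std()
    m_main, s_main = f_main.mean(), f_main.std()
    return (s_main / s_ref) * (f_ref - m_ref) + m_main
```

After correction, the reference image has exactly the mean and standard deviation of the main image, which is what makes the later per-pixel comparisons meaningful.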

Fig. 3 Cloud detection.

3.2 Detection of Clouds

In general, clouds reflect solar radiation in the visible and infrared spectra to a much higher degree than the ground. By setting a threshold C1, we can distinguish the cloud regions from the ground regions. If

f_main(i, j) > C1   (2)

or

f′_ref(i, j) > C1   (3)

where f_main(i, j) is the brightness value of a pixel of the main image, it can be surmised that there is a cloud in the main image or the reference image at the location (i, j). The threshold C1 can be determined quite easily by investigating the histogram of the image.

To ensure reliable detection, we compute the absolute brightness difference between the same locations of the main image and the brightness-corrected reference image, and introduce a difference threshold C2 to ensure the reliability of the conclusion that the detected region is a cloud region. This can be written as follows:

|f_main(i, j) − f′_ref(i, j)| > C2.   (4)

The reason for introducing this difference threshold is that some objects on the ground may have reflectance fairly close to that of clouds, but they cannot move as clouds do. The block diagram for the cloud detection is shown in Fig. 3.

Finally, a binary decision map of the same size as the input images is created to record the detected locations of cloud regions. In the image fusion stage, we will use this binary decision map to decide how to combine the wavelet transform values of complementary information from the input images into the fused image.
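Eqs. (2)–(4) combine into a simple binary decision map. The following sketch (names are ours) flags a pixel as cloud when either image exceeds C1 there and the two images differ by more than C2:

```python
import numpy as np

def cloud_decision_map(f_main, f_ref_corr, c1, c2):
    """Binary cloud map per Eqs. (2)-(4): a pixel is flagged when either
    image is brighter than C1 there AND the two images differ by more
    than C2; static bright ground objects fail the second test because
    they do not move between acquisitions."""
    bright = (f_main > c1) | (f_ref_corr > c1)
    changed = np.abs(f_main - f_ref_corr) > c2
    return bright & changed
```

With C1 = 110 and C2 = 25 (the Band 1 values used in Sect. 5), a pixel that is bright in only one image is flagged, while a pixel that is equally bright in both (e.g. a bright rooftop) is not.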

3.3 Detection of Shadows

When clouds are present, their corresponding shadows occur in the observed image. The locations of shadows in the image depend on the distances of the corresponding clouds from the ground and the incidence angle of the sunlight at that time. It is difficult to use brightness thresholds directly to determine the locations of shadow regions, because the brightness values of shadow regions are quite close to those of other regions. However, because parts of the ground are obscured by the shadows, the brightness changes corresponding to the shadow regions appear much smoother than those of the other regions. As a result, the absolute wavelet coefficients corresponding to the shadow regions decrease by a far greater amount than those of other regions. Accordingly, we present an approach based on the wavelet transform to automatically detect the shadow regions. As pointed out in Sect. 2, we adopt the DWF in our approach because it can yield a better fusion result even if there is misregistration between the two coregistered images. In addition, only one-level wavelet decomposition is performed, because it is sufficient for the detection of shadows.

Figure 4 shows the block diagram of the basic procedure for the shadow detection. First, the wavelet transform of each input image is computed, and its four subimages are obtained. Except for the LL band, whose transform values are all positive, all other bands (LH, HL and HH) contain transform values that fluctuate around zero. A larger absolute value of the wavelet coefficients corresponds to a sharper brightness change and thus to better local contrast, while the smaller absolute value at each point of the two input images would correspond to the shadow.

Since the useful features in the image are usually larger than one pixel, a point-by-point minimum detection rule may not be appropriate. A noisy image typically has higher local contrast than a noise-free one, so this point-by-point minimum rule results in the fusion of the noisier parts of the image into the composite. To avoid this, we present an area-based detection rule in our scheme, considering also its potential for improved noise reduction. The input images are first decomposed by the DWF. Then, the absolute values of the LH, HL and HH bands are each smoothed with a 3 × 3 or 5 × 5 window, and the smoothed results are added. Let S_main(i, j) and S_ref(i, j) be the added results corresponding to the main image and the brightness-corrected reference image, respectively. Further, we compute a relative contrast difference,

Fig. 4 Shadow detection.

D(i, j) = |S_main(i, j) − S_ref(i, j)| / [S_main(i, j) + S_ref(i, j)],   (5)

for the detection of shadows. D(i, j) is a normalized difference and is compared with a threshold T. If the difference is greater than T, it can be inferred that there is a shadow in the main image or the reference image at the location (i, j). By comparing S_main(i, j) with S_ref(i, j), one can determine in which of them the shadow is located.
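The area-based rule of Eq. (5) can be sketched as follows, assuming the detail subimages d1, d2, d3 of each input have already been computed. The box smoothing and the tie-breaking comparison follow the description above; the names and the circular boundary handling are ours.

```python
import numpy as np

def box_smooth(a, k=3):
    """k x k moving average; circular boundaries keep the code short."""
    r = k // 2
    out = np.zeros(a.shape)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out / (k * k)

def contrast_activity(d1, d2, d3, k=3):
    """Smoothed absolute detail coefficients of the LH, HL and HH bands,
    summed into a single area-based activity measure."""
    return sum(box_smooth(np.abs(d), k) for d in (d1, d2, d3))

def shadow_decision(s_main, s_ref, t=0.1, eps=1e-12):
    """Eq. (5): relative contrast difference D; D > T flags a shadow, and
    the image with the smaller activity is the one containing it."""
    d = np.abs(s_main - s_ref) / (s_main + s_ref + eps)
    shadow = d > t
    return shadow, shadow & (s_main < s_ref)  # (any shadow, shadow in main)
```

With T = 0.1, as used in Sect. 5, a location where the main image shows far less local contrast than the reference is flagged as a shadow in the main image.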

This detection based on the relative contrast difference is motivated mainly by the following reasoning. If the details in the main image are similar to those in the reference image (within the threshold), these details are preserved. However, if they are not sufficiently similar (beyond the threshold), it is inferred that there is a shadow. This similarity detection allows similar contrast details to be preserved when they are the salient features. In effect, this operation means that the information from the main image is integrated into the composite image to the greatest possible extent, and only the complementary information corresponding to the cloud and shadow regions of the main image is extracted from the reference image and combined into the composite image. This should be useful for practical applications.

The activity measures based on similarity detection are computed at the corresponding locations of the input images, and the locations of shadow regions are detected. The detected results are recorded on another binary decision map.

3.4 Fusion of Images

The detected cloud and shadow regions have been recorded on the two binary decision maps. By using the two maps, we integrate the wavelet transform values of complementary information for each point. Subsequently, a fused image (composite) is constructed by performing an inverse wavelet transform based on the combined transform values. Figure 5 shows this fusion process.

Fig. 5 Block diagram of image fusion.


It is not difficult to complete the fusion operation because the wavelet transform provides both spatial- and frequency-domain localization. The two binary decision maps are used to control a switch so that the complementary information corresponding to the cloud and shadow regions of the main image is extracted from the reference image and incorporated into the main image. Subsequently, a fused image from which the clouds and shadows have been removed is obtained. In addition, artifacts such as the blocking effects that have often been visible in other fused results are invisible in our result, because the wavelet transform is adopted for this image fusion.
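The switch can be sketched as a masked selection over corresponding subimages; the construction of the combined mask from the two decision maps and the inverse transform are omitted here, and all names are ours:

```python
import numpy as np

def fuse_subimages(bands_main, bands_ref, replace_mask):
    """Coefficient-domain switch: wherever the combined decision maps
    mark the main image as contaminated (cloud or shadow), take the
    wavelet transform values of the reference image; elsewhere keep
    those of the main image. The fused image is then obtained by the
    inverse wavelet transform of the selected coefficients."""
    return [np.where(replace_mask, b_ref, b_main)
            for b_main, b_ref in zip(bands_main, bands_ref)]
```

Because the selection happens on same-size DWF subimages rather than on image blocks, the seams between selected regions are spread across scales by the inverse transform, which is why blocking artifacts do not appear.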

4. Improvement of the Fusion Scheme

Here, we take into account the multispectral characteristics of Landsat TM images.

According to their spectral characteristics, the six reflective bands of Landsat TM images can be divided into two groups: the visible bands (Bands 1, 2 and 3) and the infrared bands (Bands 4, 5 and 7). The infrared spectral bands are less affected by water vapor than the visible spectral bands, and offer more powerful penetration through thin smoke and haze. Further, it has been found experimentally that the areas of clouds and their shadows are exactly alike among the visible spectral bands and among the infrared spectral bands, respectively. To lighten the computational load, we use Band 1 to detect clouds and shadows for the visible spectral group. The thresholds for detecting cloud regions in Band 1 can be decided more easily because the reflectance differences between clouds and ground objects are generally larger in Band 1 than in Bands 2 and 3. For the same reason, we select Band 5 to identify clouds and shadows for the infrared spectral group. Our experimental results indicate that this division of the six reflective bands into two groups for the detection of clouds and shadows not only alleviates the computational load but also yields a more stable result.
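The grouping above can be captured in a small helper (the band labels and names are ours, not from the paper):

```python
# Hypothetical helper reflecting the paper's band grouping: detection is
# run once per group, with Band 1 standing in for the visible bands and
# Band 5 for the infrared bands.
VISIBLE = ("B1", "B2", "B3")
INFRARED = ("B4", "B5", "B7")

def detection_band(band):
    """Return the band on which cloud/shadow detection is performed
    for the group containing `band`."""
    if band in VISIBLE:
        return "B1"
    if band in INFRARED:
        return "B5"
    raise ValueError("not a reflective TM band: " + band)
```

Band 6, the 120 m thermal channel, is deliberately outside both groups, since only the six reflective bands are processed.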

5. Experimental Results and Discussions

To demonstrate the performance and validity of the proposed scheme, an example of cloud and shadow removal is presented in the following. In the current literature there is no quantitative performance measure for evaluating image fusion algorithms, and fusion results are mostly evaluated visually. The difficulty in defining a quantitative measure lies in defining an ideal composite image from images taken at different times.

Two scenes (5000 × 5000 pixels in size) covering Osaka Bay and the Osaka Plain were acquired by Landsat-5 TM on Aug. 4 and 20, 1995, respectively. Their corresponding natural color composite images are shown in Figs. 6 (a) and (b). We can see that both images contain clouds and shadows, but the locations of their clouds and shadows are quite different. Figures 7 (a) and (b) show two image fragments (size: 512 × 512), which include a part of Awaji island shown in the southwestern parts of Figs. 6 (a) and (b), respectively. Here, the former is taken as the main image and the latter as the reference image. First, we set C1 = 110, C2 = 25 and T = 0.1 for Band 1 of the visible bands, and C1 = 120, C2 = 30 and T = 0.1 for Band 5 of the infrared bands. Then, we apply the proposed scheme to detect and remove the contamination of clouds and shadows. The fused result is shown in Fig. 7 (c). It can be clearly seen that most of the clouds and shadows have been neatly removed. Unfortunately, parts of the clouds and shadows still remain in the middle-eastern part of Fig. 7 (c), because these clouds or shadows overlap at the same locations in Figs. 7 (a) and (b). If a third scene, acquired at a different time from those in Figs. 7 (a) and (b), were introduced into the removal process, we could probably remove all clouds and shadows completely.

In addition, these color composite images are composed of the multispectral images, each scaled to full contrast. Since the amounts of clouds and shadows differ among the images shown in Fig. 7, the composed color images have very different color tones. However, it can be seen that the composed color image from which the clouds and shadows have been removed has the most vivid tone.

The spectral characteristics of the clouds and the shadows located in Fig. 7 (a) are plotted in Figs. 8 (a) and (b), respectively. As seen from Fig. 8 (a), the clouds in Band 1 and Band 5 have higher gray levels than those in the other bands.

For the proposed scheme, we take Band 1 as an example for further explanation. Figures 9 (a) and (b) show the Band 1 images corresponding to Figs. 7 (a) and (b), respectively. The brightness-corrected result of (b) relative to (a) is shown in Fig. 9 (c). Figure 9 (d) shows the fused result from (a) and (c). In addition, Figs. 10 (a), (b), (c) and (d) illustrate some profiles extracted from Figs. 9 (a), (b), (c) and (d), respectively. It can be seen that clouds A, B and C and their corresponding shadows A, B and C are present in Fig. 10 (a), but they have been cleanly removed in Fig. 10 (d). Observing Fig. 10 (d), we can also see that the curve appears quite natural and that artifacts such as blocking effects are invisible.

In the following, we briefly discuss the proposed scheme.

Clouds as well as thick fog can be detected on the basis of their high reflectance. The reflectance of thin fog, mist and haze may not be as high, but it is sufficient to smooth the brightness changes of the ground; therefore, they can also be detected in a manner similar to shadow detection. Our experimental results indicate that the proposed scheme can detect contamination not only from cloud and shadow but also from fog, mist and haze. In fact, it is not difficult to see that preservation of the salient features of the ground is the essence of this scheme.

Fig. 6 Natural color composite images acquired on (a) Aug. 4 and (b) Aug. 20, 1995, respectively. Band 3 is displayed as Red, 4 as Green and 2 as Blue. They cover an area of approximately 150 km × 150 km and were acquired with a nominal resolution of 30 m. Image size: 625 × 625 pixels (image resolution has been divided by 8).

Fig. 7 Natural color composite images for a part of Awaji island: (a) image fragment from Fig. 6 (a), (b) image fragment from Fig. 6 (b), (c) fused result. Band 3 is displayed as Red, 4 as Green and 2 as Blue. Image size: 512 × 512 pixels.

While this cloud and shadow removal scheme offers numerous advantages, some care should be taken. Composites made over a reasonably short time period are very effective in reducing clouds and shadows, and such a period still allows adequate representation of the development curves of the ground objects. If the time period is too long, however, some response curves of the ground objects may be lost.

The Landsat-5 satellite operates in a repetitive, circular, near-polar orbit at a nominal altitude of 705.3 km. It completes 14.6 orbits per day and views the entire earth within 16 days [9]. These 16 days may be too long for some applications. In fact, although the TM instrument images a swath of approximately 185 km, adjacent orbits at the equator are only 172 km apart. Thus, there is an overlap (called side-lap) of 7%. Towards higher latitudes, the side-lap gradually increases up to a maximum of 84% [1], [4]. By reducing the repetition period for the overlapping areas to 8 days, it is possible, in many cases, to remove the cloud coverage in these regions more frequently. Note that adjacent orbits to the west are imaged with a delay of 7 days, those to the east 9 days later.

Fig. 8 Spectral characteristics of (a) the clouds and (b) the shadows, all located in Fig. 7 (a).

Fig. 9 Images of Band 1: (a) image from Fig. 7 (a), (b) image from Fig. 7 (b), (c) brightness-corrected result of (b) relative to (a), (d) fused result. Image size: 512 × 512 pixels. The crossing lines show the image profiles used in Fig. 10. The left points are the starting points (256, 300), and the right points are the ending points (512, 300).
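The side-lap figures quoted above follow from simple geometry: the ground spacing between adjacent orbits shrinks roughly with the cosine of latitude, while the swath width stays constant. A minimal Python sketch under that cosine approximation (an illustration, not the paper's own calculation):

```python
import math

SWATH_KM = 185.0            # approximate TM swath width
EQUATOR_SPACING_KM = 172.0  # adjacent-orbit spacing at the equator


def sidelap_fraction(latitude_deg):
    """Approximate side-lap of adjacent Landsat orbits at a given latitude.

    Orbit spacing on the ground scales roughly with cos(latitude);
    the swath width does not. The result is clipped to [0, 1].
    """
    spacing = EQUATOR_SPACING_KM * math.cos(math.radians(latitude_deg))
    return max(0.0, min(1.0, (SWATH_KM - spacing) / SWATH_KM))
```

With these numbers the model gives about 7% side-lap at the equator, rising past 80% near 80° latitude, which is consistent with the figures cited from [1], [4].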

As for the algorithmic complexity of the proposed scheme, its burden is mainly concentrated in the DWF. However, the computation of this DWF is efficient [14].

Fig. 10 Illustration of profiles: (a), (b), (c) and (d), obtained from Figs. 9 (a), (b), (c) and (d), respectively.

Its computational complexity is the same for all iterations, and is simply proportional to the number of samples. For the example shown in Fig. 7, the entire removal process for the 6 reflective bands took 467 seconds on an IBM AIX RISC System/6000.
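The per-iteration cost described above comes from the undecimated structure of a wavelet frame: every decomposition level filters all N samples, and only the spacing of the filter taps changes between levels. A minimal one-dimensional sketch with a Haar-like averaging filter and periodic boundary handling (both illustrative stand-ins, not the paper's actual DWF filter):

```python
def dwf_level(signal, level, h=(0.5, 0.5)):
    """One level of an undecimated (a trous) wavelet frame decomposition.

    The filter taps are spaced 2**level samples apart, so every level
    processes all N samples: the cost per level is O(N), matching the
    complexity noted in the text. Boundaries are handled by periodic
    extension of the signal.
    """
    n = len(signal)
    step = 2 ** level
    approx = [sum(c * signal[(i + k * step) % n] for k, c in enumerate(h))
              for i in range(n)]
    detail = [signal[i] - approx[i] for i in range(n)]  # residual detail
    return approx, detail
```

Because no subsampling occurs, the approximation and detail signals keep the full length N at every level, which is what makes the frame convenient for the area-based detection and fusion steps.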

In addition, image data from the same sensor, such as the Landsat-5 TM sensor, form a consistent data set acquired on a regular basis. By investigating large amounts of image data, one may obtain more accurate threshold values of C1, C2 and T, so that a more precise detection of clouds and shadows can be achieved. We also think that pixel classification techniques could be used to obtain a more accurate cloud-detection result, although they are more complex and time-consuming. Further application study of the proposed scheme will be our future work.
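To make the role of such thresholds concrete: since clouds are bright relative to the ground, the per-pixel core of a brightness-threshold test can be sketched as below. Note that this is purely illustrative; the paper's actual rule is area-based and multispectral, and the threshold name `c1` here only stands in for one of the tunable values mentioned above.

```python
def brightness_mask(band, c1):
    """Hypothetical per-pixel test: flag pixels whose reflectance
    exceeds the threshold c1 as cloud candidates. The paper's real
    detection refines such per-pixel evidence with area-based rules
    across multiple TM bands; this shows only the thresholding step.
    """
    return [[value > c1 for value in row] for row in band]
```

Tuning `c1` against a large archive of scenes, as suggested above, amounts to choosing the value that best separates the cloud and ground reflectance distributions.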

6. Conclusion

In this paper we presented a scheme to automatically detect and remove clouds and their shadows from Landsat TM images. The algorithm is mainly composed of four stages: brightness correction, detection of clouds, detection of shadows and image fusion. It has low computational complexity. In addition, the algorithm can also be used to detect and remove fog, mist and haze contamination from Landsat TM images. It is worth pointing out that the blocking artifacts often visible in fused results become invisible in ours through the use of the wavelet transform.

Further, we believe that this automated removal of clouds and their shadows can be considered a kind of preprocessing before quantitative study, and should be very useful for many practical applications such as environment monitoring.

Acknowledgment

The authors would like to thank the Earth Observation Satellite Company (EOSAT) and the National Space Development Agency of Japan (NASDA) for providing numerous remotely sensed data.

References

[1] A.S. Belward and C.R. Valenzuela, "Remote sensing and geographical information system for resource management in developing countries," Kluwer Academic Publishers, Dordrecht/Boston/London, 1991.

[2] K. Fukue, "Digital mosaic of Landsat MSS data for the Japanese islands," J. of Japan Soc. of Photogrammetry and Rem. Sens., vol.32, no.3, pp.12–20, 1993.

[3] R.W. Saunders and K.T. Kriebel, "An improved method for detecting clear sky and cloudy radiances from AVHRR data," Int. J. Remote Sensing, vol.9, pp.123–150, 1988.

[4] C. Kontoes and J. Stakenborg, "Availability of cloud-free Landsat images for operational projects. The analysis of cloud-cover figures over the countries of the European Community," Int. J. Remote Sensing, vol.11, no.9, pp.1599–1608, 1991.

[5] B. Holben, "Characteristics of maximum-value composite images from temporal AVHRR data," Int. J. Remote Sensing, vol.7, pp.1417–1434, 1986.

[6] T.F. Eck and V.L. Kalb, "Cloud-screening for Africa using a geographically and seasonally variable threshold," Int. J. Remote Sensing, vol.12, pp.1205–1221, 1991.

[7] J. Cihlar and J. Howarth, "Detection and removal of cloud contamination from AVHRR images," IEEE Trans. Geosci. Rem. Sens., vol.32, no.3, pp.583–589, 1994.

[8] J.J. Simpson and J.R. Stitt, "A procedure for the detection and removal of cloud shadow from AVHRR data over land," IEEE Trans. Geosci. Rem. Sens., vol.36, no.3, pp.880–897, 1998.

[9] Earth Observation Center, National Space Development Agency of Japan, "Landsat-5 data User's Handbook," Rem. Sens. Tech. Center of Japan, 1990.

[10] V. Caselles, "An alternative simple approach to estimate atmospheric correction in multitemporal studies," Int. J. Remote Sensing, vol.10, no.6, pp.1127–1134, 1989.

[11] I. Daubechies, "Ten lectures on wavelets," Soc. Ind. Applied Math., Philadelphia, 1992.

[12] S.G. Mallat, "Multifrequency channel decompositions of images and wavelet methods," IEEE Trans. Acoust., Speech & Signal Process., vol.37, no.12, pp.2091–2110, 1989.

[13] S.G. Mallat, "A theory of multiresolution signal decomposition: The wavelet representation," IEEE Trans. Pattern Anal. & Mach. Intell., vol.11, no.7, pp.1186–1191, 1989.

[14] M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Trans. Image Processing, vol.4, no.11, pp.1549–1560, Nov. 1995.

[15] O. Rioul and M. Vetterli, "Wavelets and signal processing," IEEE Signal Processing Mag., pp.14–38, Oct. 1991.

[16] B. Wang, A. Ono, and N. Fujiwara, "Automatic removal of clouds and their shadows for digital mosaic of Landsat TM data," Proc. the 23rd Japanese Conf. on Rem. Sens., pp.257–258, Hiroshima, Nov. 1997.

Bin Wang received a B.E. in electronic engineering and a M.E. in communication and electronic system from XiDian University, Xi'an, China, in 1985 and 1987, respectively. He was a Research Associate from 1988 to 1989, and a Lecturer from 1990 to 1994, in the Department of Measurement and Instrumentation, XiDian University. From 1994 to 1995, he was a Guest Researcher at Osaka Electro-Communication University, Neyagawa, Japan. He is currently a Doctor Course student at Kobe University, Japan. His current research interests include image processing, pattern recognition and remote sensing.

Atsuo Ono received a M.S. and a D.S. in physics from Tohoku University, Sendai, Japan, in 1961 and 1967, respectively. He is currently a Professor at the Faculty of Cross-Cultural Studies and the Graduate School of Science and Technology, Kobe University, Kobe, Japan. His research interests include image processing, pattern recognition and their applications in nuclear physics and remote sensing. He is a member of the Physical Society of Japan, the Information Processing Society of Japan and the Remote Sensing Society of Japan.

Kanako Muramatsu received a M.S. and a Ph.D. in physics from Nara Women's University, Nara, Japan, in 1989 and 1993, respectively. She held a Fellowship of the Japan Society for the Promotion of Science for Japanese Junior Scientists from 1992 to 1993. She joined the Department of Information and Computer Sciences, Nara Women's University, as a Research Assistant in 1993. Since 1996, she has been a Research Associate. Her research interests include the assessment of the impact of land surface change on the environment, remote sensing and image processing. She is a member of the Remote Sensing Society of Japan and the Japan Society of Photogrammetry and Remote Sensing.

Noboru Fujiwara received a M.S. and a D.S. in physics from Tohoku University, Sendai, and Kyoto University, Kyoto, Japan, in 1965 and 1970, respectively. In 1966, he joined the Department of Nuclear Physics, Institute of Chemical Research, Kyoto University. He joined the Department of Physics, Nara Women's University, as an Associate Professor in 1976 and became a Professor in 1984. In 1992, he joined the Department of Information and Computer Sciences. His current research interests include pattern recognition, remote sensing and information physics in self-organizing systems. He is a member of the Physical Society of Japan, the Information Processing Society of Japan and the Remote Sensing Society of Japan.