
Multifocus Image Fusion Algorithms using Dyadic non-subsampled

Contourlet Transform

LI Jin-jiang, AN Zhi-Yong, FAN Hui, LI Ye-wei
School of Computer Science and Technology, Shandong Institute of Economic & Technology,

Yantai, 264005, China [email protected], [email protected], [email protected], [email protected]

doi: 10.4156/jdcta.vol4.issue6.4

Abstract

The dyadic wavelet offers good multi-scale edge detection and sub-band correlation, while the contourlet transform offers multi-directional analysis. Combining the two, a new dyadic non-subsampled contourlet transform is constructed. First, multi-scale decomposition is performed on the source images using the dyadic contourlet transform to obtain high-frequency and low-frequency images. Then, according to different regional statistics for the high-frequency and low-frequency parts, the fused coefficients in the contourlet domain are obtained using different fusion rules. Finally, the inverse transform is applied to obtain the fused image. Low-frequency sub-band coefficients are selected or weighted according to a regional similarity measure, with the weight of edge information determined by an edge-dependent fusion quality index. For the edge part of the high-frequency sub-bands, the fusion rule takes the largest absolute value, while the non-edge part selects the sub-band coefficients of the clearer region. Experimental results show that the proposed method outperforms conventional wavelet methods: it extracts the useful information from the source images and improves fusion quality.

Keywords: Image Fusion, Multifocus image, Contourlet transform, Dyadic Wavelet

1. Introduction

Multifocus image fusion is a classical topic in image fusion. A multifocus image sequence is fused to obtain an image in which every target is in focus, which effectively improves the utilization of image information and the reliability of target detection and recognition. In an ideal optical imaging system, objects on the plane conjugate to the image plane are imaged sharply, while objects off that plane are imaged with varying degrees of blur. Image fusion handles differently focused images by extracting the clear information from each and synthesizing a new, fully clear image.

Image fusion methods fall into two categories: spatial-domain methods and transform-domain methods. The current mainstream is still fusion based on the wavelet transform domain. Spatial-domain methods are simple: they usually do not transform or decompose the source images, but only weight image pixels to generate the fused image; such simple superposition, however, decreases the signal-to-noise ratio. Wavelet-based methods forward-transform the differently focused images, decomposing them into feature domains of different frequencies, and fuse them there. According to a fusion rule, suitable low-frequency and high-frequency wavelet coefficients are selected in each feature domain, and the inverse transform yields the clear fused image.

Wavelet theory rose to prominence because of its good time-frequency localization and near-optimal approximation properties, and its multi-resolution analysis has been widely used in digital signal processing and analysis, signal detection, and noise suppression. The wavelet transform represents one-dimensional signals well. However, because the two-dimensional wavelet is a tensor product of one-dimensional wavelets, it captures only a very limited set of directions: horizontal, vertical, and diagonal. The ordinary wavelet transform is therefore usually not optimal in higher dimensions, and other multi-scale geometric analysis methods have been proposed, such as the Ridgelet, Curvelet, and Contourlet [1]. Each method is good at handling a particular type of feature but less effective for other types. The two-dimensional wavelet represents point singularities and spots; the Ridgelet represents linear singularities.



International Journal of Digital Content Technology and its Applications Volume 4, Number 6, September 2010

The Curvelet represents curve singularities in two-dimensional image data. In 2002, M. N. Do and Martin Vetterli proposed the Contourlet transform, a good mathematical tool for representing two-dimensional signals.

The Contourlet transform is superior to the wavelet transform in directionality and anisotropy, so fusion algorithms in the Contourlet domain can more effectively fuse source image information and preserve source image features. Reference [2] uses the golden section method to search for the optimal low-frequency fusion weights and adaptively fuses the low-frequency subband coefficients, while the high-frequency subband coefficients are fused by a maximum rule. Reference [3] uses fusion rules based on regional energy to obtain the non-subsampled Contourlet coefficients of the fused image. Reference [4] uses different window functions to calculate the regional energy of the low-frequency and high-frequency components, and normalizes the regional energy to weight each wavelet-Contourlet coefficient of the fused image. Reference [5] introduces Cycle Spinning to effectively eliminate the image distortion generated because the wavelet-Contourlet transform lacks translation invariance. Reference [6] analyzes the influence of the Contourlet low-pass filter on image fusion and discusses the relationship between the low-pass filter and the choice of decomposition levels. Reference [7] fuses multifocus images with the non-subsampled Contourlet transform, handling the low-frequency and high-frequency subbands by direction vector and standard deviation respectively. Reference [8] proposes a multifocus fusion method in the sharp frequency localized Contourlet transform domain based on the sum-modified-Laplacian, to overcome the aliasing components generated by Contourlet fusion and suppress the pseudo-Gibbs phenomenon. Reference [9] proposes a multifocus fusion algorithm based on directional window statistics in the non-subsampled Contourlet domain, using a directional-region variance matching degree rule for the low-frequency subband and an energy rule for the high-frequency subbands. Reference [10] introduces the concepts of local region visibility and local direction energy in the Contourlet domain and proposes a coefficient selection scheme based on them. Reference [11] combines the IHS transform with the non-subsampled Contourlet transform for multispectral images, obtaining high spatial resolution while effectively maintaining the spectral features. Reference [12] introduces Pulse Coupled Neural Networks (PCNN) in the non-subsampled Contourlet domain, processing a clarity matrix with the PCNN to generate the clear fused image.

The Contourlet-domain fusion algorithms above use different strategies to extract the useful information of the source images and eliminate noise interference to improve the fusion result. This paper proposes a multifocus image fusion algorithm using a dyadic non-subsampled Contourlet transform. The transform has more direction subbands and uses a non-subsampled filter bank for directional decomposition, so it is translation invariant and can effectively eliminate image distortion while reducing data redundancy. Under the same fusion rules, the fusion results of the proposed algorithm are superior to those of traditional Contourlet-domain fusion algorithms.

2. Constructing the Dyadic Non-subsampled Contourlet Transform

The discrete dyadic wavelet is a special case of a wavelet frame. The wavelet function acts as a narrow band-pass filter and conserves the energy of the transformed signal. The dyadic wavelet transform is continuous in the time and spatial domains: the scales are dyadically discretized, but the translation parameter remains continuous. It therefore shares the translation invariance of the continuous wavelet transform and can effectively detect, localize, and classify image edges.

Definition 1: A function ψ(t) ∈ L²(R) is a one-dimensional dyadic wavelet if there exist constants 0 < A ≤ B < ∞ such that

A ≤ Σ_{j∈Z} |ψ̂(2^j ω)|² ≤ B   (1)

Definition 2: Functions {ψ¹(x, y), ψ²(x, y)} ⊂ L²(R²) form a two-dimensional dyadic wavelet if there exist constants 0 < A ≤ B < ∞ such that

∀(ω_x, ω_y) ∈ R² − {(0, 0)},  A ≤ Σ_{j∈Z} ( |ψ̂¹(2^j ω_x, 2^j ω_y)|² + |ψ̂²(2^j ω_x, 2^j ω_y)|² ) ≤ B   (2)

37

Page 3: Multifocus Image Fusion Algorithms using Dyadic non ... · Multifocus Image Fusion Algorithms using Dyadic non-subsampled Contourlet Transform LI Jin-jiang, AN Zhi-Yong, FAN Hui,

Multifocus Image Fusion Algorithms using Dyadic non-subsampled Contourlet Transform LI Jin-jiang, AN Zhi-Yong, FAN Hui, LI Ye-wei

where ψ̂ is the Fourier transform of ψ.

The dyadic wavelet transform of an image f(x, y) is defined as:

Wf(x, y) = { W¹_{2^j} f(x, y), W²_{2^j} f(x, y) },  j ∈ Z   (3)

where W¹_{2^j} f(x, y) = f(x, y) ∗ ψ¹_{2^j}(x, y), W²_{2^j} f(x, y) = f(x, y) ∗ ψ²_{2^j}(x, y), and ψᵏ_{2^j}(x, y) = ψᵏ_{2^j}(−x, −y), k = 1, 2.
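Since the dyadic transform of formula (3) never subsamples, it can be sketched with the à trous ("with holes") scheme: at each level the filter taps are dilated by 2^j instead of decimating the image, which is what gives translation invariance. A minimal numpy/scipy sketch, using a simple [1, 2, 1]/4 smoothing kernel as a stand-in for the paper's actual filters:

```python
import numpy as np
from scipy.ndimage import convolve1d

def atrous_dyadic_decompose(img, levels):
    """Shift-invariant (a trous) dyadic decomposition.

    Returns the detail images W_1..W_L and the final approximation A_L;
    every output keeps the input size (no subsampling).
    """
    base = np.array([1.0, 2.0, 1.0]) / 4.0   # stand-in smoothing kernel
    approx = img.astype(float)
    details = []
    for j in range(levels):
        # dilate the kernel: insert 2^j - 1 zeros between taps
        kernel = np.zeros(2 ** j * (len(base) - 1) + 1)
        kernel[:: 2 ** j] = base
        smooth = convolve1d(approx, kernel, axis=0, mode='reflect')
        smooth = convolve1d(smooth, kernel, axis=1, mode='reflect')
        details.append(approx - smooth)      # high-frequency residual
        approx = smooth
    return details, approx

# perfect reconstruction: the approximation plus all details gives the input back
rng = np.random.default_rng(0)
img = rng.random((16, 16))
details, approx = atrous_dyadic_decompose(img, 3)
assert np.allclose(img, approx + sum(details))
```

Reconstruction is simply the sum of the final approximation and all detail layers, mirroring the additive structure of the dyadic decomposition.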

The dyadic wavelet decomposition is diagrammed in Figure 1: at each level, the approximation A_{2^{j−1}}f is filtered by (G, H) and (H, G) to produce the detail channels W¹_{2^j} and W²_{2^j}, and by (H, H) to produce the next approximation A_{2^j}f.

Figure 1. The dyadic wavelet decomposition

The Contourlet transform uses a double filter bank structure: first the Laplacian Pyramid (LP) performs multi-scale decomposition of the input signal to capture point singularities, then a directional filter bank gathers those points along directions into contours. The Non-Subsampled Contourlet Transform (NSCT) [13] solves the problems that the sampled Contourlet transform is not translation invariant and suffers frequency spectrum leakage and aliasing. The NSCT is composed of two parts: a Non-Subsampled Pyramid Structure (NSPS), which provides the multi-scale property, and a Non-Subsampled Directional Filter Bank (NSDFB), which provides the multi-directional property. The NSPS is realized by multi-level iteration, with reconstruction condition H₀(z)G₀(z) + H₁(z)G₁(z) = 1, where H₀(z) is the low-pass decomposition filter, H₁(z) the high-pass decomposition filter, G₀(z) the low-pass reconstruction filter, and G₁(z) the high-pass reconstruction filter. The NSDFB decomposes the image into several two-dimensional low-frequency subbands and several two-dimensional high-frequency subbands.
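The reconstruction condition H₀(z)G₀(z) + H₁(z)G₁(z) = 1 can be verified numerically for a concrete filter set. A toy check, assuming the trivially simple non-subsampled pair H₀ = (1 + z⁻¹)/2, H₁ = (1 − z⁻¹)/2 with synthesis filters G₀ = G₁ = 1 (chosen for illustration; these are not the filters used in the paper):

```python
import numpy as np

# analysis filters, as polynomial coefficient arrays
h0 = np.array([0.5, 0.5])    # low-pass:  (1 + z^-1) / 2
h1 = np.array([0.5, -0.5])   # high-pass: (1 - z^-1) / 2
# synthesis filters
g0 = np.array([1.0])
g1 = np.array([1.0])

# H0(z)G0(z) + H1(z)G1(z) computed by polynomial multiplication
lhs = np.polymul(h0, g0) + np.polymul(h1, g1)
print(lhs)   # -> [1. 0.]: the constant polynomial 1, so perfect reconstruction holds
```

Any candidate non-subsampled filter bank can be sanity-checked the same way before being used in the pyramid iteration.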

The traditional Contourlet transform uses a Laplacian pyramid cascaded with directional filter banks, so the frequency domain is decomposed along different directions and the transform is multi-directional. However, the traditional Contourlet transform does not make full use of the frequency information, and using non-subsampled directional filters to obtain translation invariance leads to large redundancy. This paper proposes a new dyadic Contourlet transform based on the traditional one. The main idea is: the image is first transformed by the dyadic wavelet; the resulting high-frequency subbands are decomposed by the NSDFB; and the low-frequency subband is again transformed by the dyadic wavelet to continue the iteration. The scheme is shown in Figure 2:

Figure 2. The dyadic Contourlet transform decomposition

Figure 3 is the sample image. Figure 4 is the first-level decomposition by the dyadic contourlet transform, with parameter settings nlevels = [0, 3], pfilter = 'maxflat', dfilter = 'dmaxflat7'. Figure 5 is the first-level decomposition by the traditional contourlet transform, with parameter settings levels = [0, 3], pfilter = 'pkva', dfilter = 'pkva'. Compared with the traditional contourlet transform, the dyadic


contourlet transform uses the NSDFB to decompose the high-frequency subbands of the dyadic wavelet transform, so it possesses translation invariance and can be widely used in image fusion and denoising.

Figure 3. The sample image

Figure 4. The first layer decomposition of dyadic contourlet transformation

Figure 5. The first layer decomposition of traditional contourlet transformation

3. Fuzzy Area Judgement

3.1. Point Spread Function (PSF)

The point spread function is the basic tool for evaluating the imaging quality of an optical system, characterizing how light is distributed. Because of the PSF, the image formed by an optical system differs from the physical object, producing clear parts and fuzzy parts.

The input image information is carried by light from the object plane to the image plane, and the output quality depends on the transfer characteristics of the optical system. Ideally, the light energy emitted by a point in object space would concentrate at a single point in image space; in an actual optical system, it is dispersed over a certain area, and this distribution is the point spread function. A simple optical imaging system is shown in Figure 6:


Figure 6. The optical imaging system

where R is the radius of the lens, P is a point on the object plane, and Q is a pixel on the focusing plane. By the lens imaging principle, the focal length f, object distance o, and image distance i satisfy:

1/f = 1/o + 1/i   (4)

Under ideal conditions, one plane of object space and one plane of image space are conjugate. If the distance from the object plane to the lens is o, the distance from the observation plane to the lens is i, and objects are imaged in sharp focus, then the observation plane is the focusing plane. If the observation plane deviates from the focusing plane by δ, a point spreads into a fuzzy circle of radius r on the observation plane, producing the defocus phenomenon. The size of r therefore represents the focus level of the image: a larger r means the object's image is farther from the focusing plane, a smaller r means it is closer. The relationship between the fuzzy circle radius r and the offset δ can be expressed as:

r = Rδ / i   (5)

Because of the diffraction of light and the non-ideality of lens imaging, the light intensity distribution of the fuzzy circle can be expressed by a two-dimensional Gaussian function:

h(x, y) = (1 / 2πσ²) · e^( −(x² + y²) / 2σ² )   (6)

where σ is the defocus parameter of the model, determined by the defocused image, and h(x, y) is the point spread function (impulse response) of the imaging system.

3.2. Judging Clear Goals and Fuzzy Goals

In practical applications, for a specific target, h(x, y) can be expressed as a Gaussian function of variance σ², where σ decides how much the optical system blurs the target. This point spread function is equivalent to a smoothing function: a smaller σ means a sharper target image, a larger σ a fuzzier one. An optical imaging system can therefore be simulated by convolving an image with a Gaussian of variance σ²: a well-focused target corresponds to convolution of the original target with a small-σ Gaussian, and a defocused target to convolution with a large-σ Gaussian. The difference between focused and defocused targets is reflected by the σ of the Gaussian point spread function.
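This defocus model is easy to simulate: convolving a sharp image with Gaussians of increasing σ yields progressively blurrier versions, and the high-frequency energy falls as σ grows. A small sketch (the σ values and random test image are arbitrary stand-ins):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_freq_energy(img, sigma=1.0):
    """Energy of the residual left after Gaussian smoothing."""
    return float(np.sum((img - gaussian_filter(img, sigma)) ** 2))

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))            # stand-in for a focused region
mild = gaussian_filter(sharp, 1.0)      # slightly defocused (small sigma)
strong = gaussian_filter(sharp, 3.0)    # heavily defocused (large sigma)

# larger PSF sigma -> blurrier target -> less high-frequency energy
assert high_freq_energy(sharp) > high_freq_energy(mild) > high_freq_energy(strong)
```

This monotone drop in high-frequency energy is exactly what the clear/fuzzy judgement of the next subsection exploits.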

Two differently focused images A and B are smoothed by a Gaussian of variance σ² to obtain A′ and B′. The originally clear parts become fuzzy, while the originally fuzzy parts remain essentially unchanged. This property can be used to judge the clear and fuzzy parts of the images. Three cases [14] usually arise after smoothing:


(1) If the target is clear in A and fuzzy in B, it is fuzzier still in B′; the high-frequency difference between A and B′ is then greater than that between A and B, while the high-frequency difference between A′ and B is less than that between A and B.

(2) If the target is fuzzy in A and clear in B, it is fuzzier still in A′; the high-frequency difference between A′ and B is then greater than that between A and B, while the high-frequency difference between A and B′ is less than that between A and B.

(3) If the target is clear in both A and B, or fuzzy in both, the high-frequency difference between A′ and B exceeds that between A and B, and the high-frequency difference between A and B′ also exceeds that between A and B.

From the above analysis, the image containing the clear target can be judged as follows:

(1) If |D^w_{B′} − D^w_A| > |D^w_A − D^w_B| and |D^w_{A′} − D^w_B| < |D^w_A − D^w_B|, the target is clear in A.

(2) If |D^w_{A′} − D^w_B| > |D^w_A − D^w_B| and |D^w_{B′} − D^w_A| < |D^w_A − D^w_B|, the target is clear in B.

(3) Otherwise, if D^w_{A′} > D^w_{B′}, the target is clear in A; otherwise it is clear in B.

where D^w_A is the clarity of a local area w of the high-frequency coefficients of image A.

4. Fusion Based on the Dyadic Contourlet

The dyadic Contourlet transform is introduced into multifocus image fusion; its excellent properties can be used to extract the geometric features of the source images and provide more information for the fused image. The dyadic Contourlet transform not only provides multiscale analysis but also possesses abundant directions and shapes, so it can effectively capture the smooth contours and geometric structure of images. Because image detail is usually exhibited by multiple pixels of a local area, and the pixels of such an area are strongly correlated, the fusion rules also operate on window areas.

4.1. Fusion Steps

(1) The images to be fused are first converted to IHS color space.

Quantitative color processing usually uses the RGB color model, which is not well suited to image fusion: it is perceptually very non-uniform, and its components express both color and brightness and are correlated, so processing the three components separately loses color information. Qualitative color description with the IHS system is visually more intuitive. The IHS algorithm is the earliest in image fusion technology and is a mature space transformation algorithm. The IHS components clearly describe color properties: Intensity, Hue, and Saturation are relatively independent, so they can be controlled separately and describe color features quantitatively and accurately.

The conversion from RGB color space is given by the IHS transform:

⎡I⎤   ⎡  1/3     1/3     1/3 ⎤ ⎡R⎤
⎢H⎥ = ⎢ −1/√6   −1/√6   2/√6 ⎥ ⎢G⎥   (7)
⎣S⎦   ⎣  1/√6   −2/√6    0   ⎦ ⎣B⎦
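Since formula (7) is a single linear map, it can be applied as one matrix product over all pixels. A sketch using the matrix as recovered from the paper (sign conventions for IHS matrices differ between papers, so the chroma rows should be treated as one possible variant):

```python
import numpy as np

# forward transform of formula (7); chroma-row signs follow one common
# IHS convention and may differ from other papers' variants
M = np.array([
    [1/3,            1/3,            1/3],
    [-1/np.sqrt(6),  -1/np.sqrt(6),  2/np.sqrt(6)],
    [1/np.sqrt(6),   -2/np.sqrt(6),  0.0],
])

def rgb_to_ihs(rgb):
    """rgb: (..., 3) array -> (..., 3) array of (I, H, S) components."""
    return rgb @ M.T

# a gray pixel: intensity equals the gray value, first chroma component is zero
gray = np.array([0.5, 0.5, 0.5])
i, h, s = rgb_to_ihs(gray)
assert np.isclose(i, 0.5) and np.isclose(h, 0.0)
```

The inverse transform of step (7) of the fusion procedure would simply apply the inverse of M to (I′, H′, S′).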

The brightness component I contains much of the detail information, so it is the main component handled in the fusion.

(2) The brightness component I of each image to be fused is decomposed by L levels of the dyadic Contourlet transform. The multifocus images A and B are first decomposed by the dyadic wavelet to obtain a low-frequency subband D_{H,H} and two high-frequency subbands D_{G,H} and D_{H,G}. The two high-frequency subbands are then decomposed by the NSDFB to obtain the low-frequency subband DC_H and multiple wedge-shaped high-frequency direction subbands { DC_{l,i}(n, m), 0 ≤ l ≤ L−1, 1 ≤ i ≤ k_l }, where k_l is the number of direction subbands at scale 2^−l and DC_{l,i}(n, m) is the subband in direction i at scale 2^−l.


(3) Low-frequency subband fusion.

(4) High-frequency direction subband handling.

(5) The low-frequency and high-frequency coefficients of the brightness component are inverse transformed by the dyadic Contourlet to generate the fused brightness component I′.

(6) The two color components H and S are fused directly by the mean value method to obtain H′ and S′.

(7) I′, H′, and S′ are inverse IHS transformed to reconstruct the fused image.

Figure 7. Multifocus image fusion algorithm based on the dyadic Contourlet transform

4.2. Fusion Rules

Fusion rules are the core of image fusion; their choice directly affects fusion quality. Pajares [15] discusses many kinds of fusion rules, covering essentially all existing fusion schemes. According to the characteristics of multifocus images, this paper fuses the low-frequency and high-frequency decomposition coefficients of the transform domain with separate rules.

The standard deviation reflects how the image gray levels are dispersed around the gray mean, and can be used to evaluate image contrast: a larger standard deviation indicates higher contrast and more image information.

σ = sqrt( (1 / MN) Σ_{i=1}^{M} Σ_{j=1}^{N} ( F(x_i, y_j) − μ )² )   (8)

μ = (1 / MN) Σ_{i=1}^{M} Σ_{j=1}^{N} F(x_i, y_j)   (9)

where μ is the gray mean value of the image.
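Formulas (8) and (9) are the ordinary (population) mean and standard deviation over a region, directly expressible in numpy:

```python
import numpy as np

def region_stats(F):
    """Mean (formula 9) and standard deviation (formula 8) of a region."""
    mu = F.mean()                            # (1/MN) * sum of F(x_i, y_j)
    sigma = np.sqrt(((F - mu) ** 2).mean())  # population std, as in (8)
    return mu, sigma

F = np.array([[1.0, 3.0], [1.0, 3.0]])
mu, sigma = region_stats(F)
assert mu == 2.0 and sigma == 1.0            # agrees with np.std(F)
```

In the fusion rules below these statistics are computed per window area r, not over the whole image.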

For any area r ∈ R, the similarity measure of two images A and B can be expressed as:

S_AB(r) = (1/3) [ Σ_{(x,y)∈r} (A(x, y) − μ_{A|r})(B(x, y) − μ_{B|r}) / sqrt( Σ_{(x,y)∈r} (A(x, y) − μ_{A|r})² · Σ_{(x,y)∈r} (B(x, y) − μ_{B|r})² )
              + ( 1 − |μ_{A|r} − μ_{B|r}| / max(μ_{A|r}, μ_{B|r}) )
              + ( 1 − |σ_{A|r} − σ_{B|r}| / max(σ_{A|r}, σ_{B|r}) ) ]   (10)



where μ_{A|r} and σ_{A|r} are respectively the mean value and standard deviation of area r in image A.
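The original printing of formula (10) is heavily garbled; under the reading above it averages a correlation term, a mean-closeness term, and a standard-deviation-closeness term, each equal to 1 for identical regions. A sketch under that assumption:

```python
import numpy as np

def similarity(A, B, eps=1e-12):
    """One reading of the regional similarity measure of formula (10):
    the average of a correlation term, a mean-closeness term, and a
    std-closeness term (each term is 1 when the regions are identical).
    The eps guard against division by zero is an implementation detail."""
    mu_a, mu_b = A.mean(), B.mean()
    sd_a, sd_b = A.std(), B.std()
    corr = ((A - mu_a) * (B - mu_b)).sum() / (
        np.sqrt(((A - mu_a) ** 2).sum() * ((B - mu_b) ** 2).sum()) + eps)
    mean_term = 1.0 - abs(mu_a - mu_b) / (max(mu_a, mu_b) + eps)
    sd_term = 1.0 - abs(sd_a - sd_b) / (max(sd_a, sd_b) + eps)
    return (corr + mean_term + sd_term) / 3.0

A = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.isclose(similarity(A, A), 1.0)     # identical regions score 1
```

In the low-frequency rule below, this S_AB(r) decides between coefficient choice and coefficient weighting.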

(1) Low-frequency subband fusion rules

The low-frequency part of the image contains the smooth information, i.e., the large-scale features such as object shape and position. The low-frequency part is calculated by formula (11):

F′(x_i, y_j) = F(x_i, y_j) − β · |D_A(x_i, y_j) − D_B(x_i, y_j)|   (11)

where F′(x_i, y_j) decides the image brightness and affects the image energy after fusion, β is a weight coefficient, and β · |D_A(x_i, y_j) − D_B(x_i, y_j)| is the weighted difference of the two images, containing their fuzzy information; a larger β gives stronger image edges.

For any area r ∈ R, according to the similarity measure S_AB(r) of the image area, F′(i, j) is obtained by either coefficient choice or coefficient weighting. If S_AB(r) < T_S, where T_S is the similarity threshold, the coefficient choice method is used:

F′(x_i, y_j) = { D_A(x_i, y_j),  σ_{A|r} ≥ σ_{B|r}
              { D_B(x_i, y_j),  σ_{A|r} < σ_{B|r}   (12)

If S_AB(r) ≥ T_S, the coefficient weighting method is used:

F′(i, j) = { (1 − α) · D_A(i, j) + α · D_B(i, j),  σ_{A|r} ≥ σ_{B|r}
           { α · D_A(i, j) + (1 − α) · D_B(i, j),  σ_{A|r} < σ_{B|r}   (13)

where α = (1/2) ( 1 − (1 − S_AB(r)) / (1 − T_S) ).
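Formulas (12) and (13) together give the low-frequency rule: choose the higher-contrast coefficient outright when the regions are dissimilar, and blend when they are similar. A per-region sketch (T_S = 0.7 as in the experiments section):

```python
def fuse_lowfreq(Da, Db, s_ab, sd_a, sd_b, Ts=0.7):
    """Low-frequency rule: coefficient choice, formula (12), when the
    regional similarity s_ab is below the threshold Ts; weighted average,
    formula (13), otherwise. Da/Db are the subband coefficients and
    sd_a/sd_b the regional standard deviations."""
    if s_ab < Ts:                                  # formula (12): choose
        return Da if sd_a >= sd_b else Db
    # formula (13): weight toward the higher-contrast region
    alpha = 0.5 * (1.0 - (1.0 - s_ab) / (1.0 - Ts))
    if sd_a >= sd_b:
        return (1.0 - alpha) * Da + alpha * Db
    return alpha * Da + (1.0 - alpha) * Db

# dissimilar regions: the higher-contrast coefficient is chosen outright
assert fuse_lowfreq(Da=5.0, Db=1.0, s_ab=0.3, sd_a=2.0, sd_b=1.0) == 5.0
# perfectly similar regions give alpha = 0.5: an even average
assert fuse_lowfreq(5.0, 1.0, s_ab=1.0, sd_a=2.0, sd_b=1.0) == 3.0
```

Note that α grows from 0 at S_AB(r) = T_S to 1/2 at S_AB(r) = 1, so the blend smoothly approaches an even average as the regions become indistinguishable.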

The Edge-dependent Fusion Quality Index (EFQI) is a new objective index for evaluating image fusion quality, reflecting how well edges and their surroundings are preserved in the fused image. A larger EFQI indicates higher fusion quality. It is defined [16] as:

Q = (1 / |W|) Σ_{w∈W} ( λ_A(w) Q₀(D_A, F | w) + λ_B(w) Q₀(D_B, F | w) )   (14)

where Q is the EFQI and F is the fused coefficient of source images A and B in the transform domain.

Q₀(A, B) = ( σ_AB / (σ_A σ_B) ) · ( 2 μ_A μ_B / (μ_A² + μ_B²) ) · ( 2 σ_A σ_B / (σ_A² + σ_B²) )

where σ_A and σ_B are the standard deviations of the subband coefficients D_A and D_B, and σ_AB is their covariance. Q₀(A, B | w) is the edge fusion quality index in window w, with

λ_A(w) = σ_{A|w} / (σ_{A|w} + σ_{B|w}),  λ_B(w) = 1 − λ_A(w).
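Q₀ here is the Wang-Bovik universal image quality index: the product of a correlation term, a luminance-closeness term, and a contrast-closeness term, equal to 1 only for identical signals. A sketch (the eps guard is an implementation detail, not part of the definition):

```python
import numpy as np

def q0(a, b, eps=1e-12):
    """Universal image quality index Q0: correlation term x luminance
    term 2*mu_a*mu_b/(mu_a^2 + mu_b^2) x contrast term
    2*sigma_a*sigma_b/(sigma_a^2 + sigma_b^2)."""
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return (cov / (np.sqrt(va * vb) + eps)
            * 2 * mu_a * mu_b / (mu_a ** 2 + mu_b ** 2 + eps)
            * 2 * np.sqrt(va * vb) / (va + vb + eps))

a = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.isclose(q0(a, a), 1.0)   # identical windows score 1
```

In formula (14) this index is evaluated per window w and averaged with the saliency weights λ_A(w), λ_B(w).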

The parameters are chosen to maximize formula (14), i.e., to maximize the edge fusion quality index.

(2) High-frequency subband fusion rules

The high-frequency component of an image contains its important features and detail information; the key to fusion is whether detail information can be effectively extracted from the source images. The high-frequency subband fusion rules are as follows.

The edge information of the high-frequency component is extracted by the Canny algorithm, and the high-frequency subband is divided into an edge part and a non-edge part.

To better protect the image edge information, the edge part is fused by the maximum absolute value method:

F(i, j) = { D_A(i, j),  if |D_A(i, j)| ≥ |D_B(i, j)|
          { D_B(i, j),  otherwise   (15)


The non-edge part is classified into clear and fuzzy areas by the method of Section 3.2:

$$F(i,j)=\begin{cases}D_A(i,j), & \text{if } A \text{ is clear}\\[2pt]D_B(i,j), & \text{otherwise}\end{cases}\tag{16}$$

5. Experimental Results

Three pairs of differently focused images were used to test the proposed algorithm. The experiments ran on an Intel Pentium(R) 2.8 GHz computer with 512 MB of memory under Windows XP, using fully registered source images, a 3×3 neighborhood window, and a similarity threshold $T_S = 0.7$. The proposed Dyadic-Contourlet Transform (D-CT) was compared experimentally with the Laplacian Pyramid Transform (LPT), the Wavelet Transform (WT), and the Non-Subsampled Contourlet Transform (NSCT); all methods used a four-level decomposition. WT used the 'db4' wavelet basis; NSCT used the classic '9-7' pyramid decomposition and the 'c-d' directional filter bank (DFB), with 16, 8, 4, and 4 directional subbands from the fine scale to the coarse scale.

Image fusion is evaluated by both subjective and objective standards. Subjective evaluation is affected by the observer, the image type, and environmental conditions. Therefore, to evaluate the fusion effect quantitatively and objectively, this paper uses Entropy, Average Gradient, Standard Deviation, and Spatial Frequency; for a multifocus image, larger values of these indices indicate better fusion quality. The experimental data for the different fusion algorithms in Table 1 show that the proposed method has clear advantages in several evaluation indices.

(1) Entropy: Image information entropy is an important index of how rich the image information is, and expresses the image's ability to convey detail. The size of the entropy reflects the amount of information in the image:

$$H=-\sum_{i=0}^{L-1}p_i\log p_i\tag{17}$$

where $H$ is the entropy of the image, $L$ is the total number of gray levels, $p_i = N_i/N$, $N_i$ is the number of pixels with gray value $i$, and $N$ is the total number of pixels in the image.
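Eq. (17) amounts to a histogram followed by a Shannon-entropy sum. A small sketch for 8-bit images (the base-2 logarithm, giving entropy in bits, is an assumption since the paper does not state the base):

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy of a gray-level image per Eq. (17), in bits (base 2 assumed)."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()          # p_i = N_i / N
    p = p[p > 0]                   # drop empty bins, using the convention 0*log(0) = 0
    return float(-(p * np.log2(p)).sum())
```

For example, an image split evenly between two gray values has entropy exactly 1 bit.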

(2) Average Gradient: The average gradient is sensitive to an image's ability to express fine-detail contrast, and also reflects image sharpness: the larger the value, the clearer the image, so it can be used to evaluate image clarity:

$$Ag=\frac{1}{(M-1)(N-1)}\sum_{i=1}^{M-1}\sum_{j=1}^{N-1}\sqrt{\frac{1}{2}\left[\left(\frac{\partial f(x_i,y_j)}{\partial x}\right)^2+\left(\frac{\partial f(x_i,y_j)}{\partial y}\right)^2\right]}\tag{18}$$
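A sketch of Eq. (18), where the partial derivatives are approximated by forward differences (a standard discretization; the paper does not specify the difference scheme):

```python
import numpy as np

def average_gradient(img):
    """Average gradient per Eq. (18): mean RMS of forward differences."""
    f = img.astype(float)
    dx = f[1:, :-1] - f[:-1, :-1]   # forward difference down the rows
    dy = f[:-1, 1:] - f[:-1, :-1]   # forward difference across the columns
    # sqrt of the mean of the two squared derivatives, averaged over the image
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```

On a linear ramp that increases by 1 per pixel along each row, the result is $1/\sqrt{2}$, matching the formula.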

(3) Spatial Frequency: Spatial frequency reflects the overall activity level of an image in the spatial domain. It comprises the row frequency RF and the column frequency CF:

$$RF=\sqrt{\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=2}^{N}\big[I(x_i,y_j)-I(x_i,y_{j-1})\big]^2}\tag{19}$$

$$CF=\sqrt{\frac{1}{M\times N}\sum_{i=2}^{M}\sum_{j=1}^{N}\big[I(x_i,y_j)-I(x_{i-1},y_j)\big]^2}\tag{20}$$

The total spatial frequency is: $SF=\sqrt{RF^2+CF^2}$ (21)
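Eqs. (19)-(21) can be sketched directly with array slicing:

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency SF = sqrt(RF^2 + CF^2), per Eqs. (19)-(21)."""
    f = img.astype(float)
    M, N = f.shape
    # Row frequency: squared differences between horizontally adjacent pixels
    rf2 = np.sum((f[:, 1:] - f[:, :-1]) ** 2) / (M * N)
    # Column frequency: squared differences between vertically adjacent pixels
    cf2 = np.sum((f[1:, :] - f[:-1, :]) ** 2) / (M * N)
    return float(np.sqrt(rf2 + cf2))
```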


International Journal of Digital Content Technology and its Applications Volume 4, Number 6, September 2010

In Figure 8, Clock_A is focused on the big clock on the right (big clock sharp, small clock blurred), while Clock_B is focused on the small clock on the left (small clock sharp, big clock blurred). In Figure 9, Pepsi_A is a left-focused image with the near region sharp, and Pepsi_B is a right-focused image with the far region sharp. All three groups of source images are 512×512. The high-frequency and low-frequency subbands were fused with the proposed rules at every scale.

Figure 8. Clock image fusion (source images Clock_A and Clock_B; fusion results of LPT, WT, NSCT, and D-CT)

Figure 9. Pepsi image fusion (source images Pepsi_A and Pepsi_B; fusion results of LPT, WT, NSCT, and D-CT)

Visually, both NSCT and D-CT yield sharp fused images, but the proposed algorithm produces a sharper result than either and achieves a satisfactory effect. The D-CT transform is superior to the wavelet transform and NSCT in expressing edge features, so the edges of its fused images are smoother.

Because the Laplacian pyramid and wavelet transforms cannot accurately express directional edge features, fusion algorithms based on them perform poorly. NSCT and D-CT, by contrast, possess good time-frequency localization, directionality, and translation invariance, so they capture edge information better and give higher fusion performance. Compared with NSCT, D-CT can effectively reduce matching error in the fusion operation. Moreover, under the same configuration, a single level of D-CT decomposition


contains a larger number of subbands, and hence richer image information, so the image fusion algorithm based on the D-CT transform performs better. In a practical image fusion system the proposed algorithm fuses well; however, because the D-CT transform produces more subbands, it increases the time complexity to some extent.

Table 1. Experimental results comparison

Method    |                 Clock                 |                 Pepsi
          |  Ent     Ag      Std       Sf         |  Ent     Ag      Std       Sf
----------+---------------------------------------+---------------------------------------
LPT(lc)   | 7.2277  3.7618  112.7214  10.7220     | 7.1164  4.0936  107.7549  13.7573
WT(Lp)    | 7.3687  3.7107  112.6599  10.4085     | 7.1157  4.0954  107.7551  13.7385
NSCT(w3)  | 7.3893  3.7986  112.8318  10.3744     | 7.1188  4.0983  107.8678  13.7753
D-CT(w4)  | 7.4405  3.8571  113.1377  10.5814     | 7.1232  4.1702  107.6700  13.8614

The experimental results in Table 1 show that the proposed algorithm is notably effective on image edge detail, and that it realizes multifocus image fusion simply and effectively. The fused images of the other three methods have lower information entropy; their quality is relatively poor and they are blurrier than the result obtained by the proposed method. The average gradient of the D-CT fused image is larger than that of the other methods, which demonstrates that the fused image is sharper, has richer detail, and preserves more edge information from the original images.

6. Conclusion

Because the dyadic wavelet and the non-subsampled Contourlet both possess translation invariance, they can effectively avoid distortion, and the Contourlet can also effectively capture multi-scale, multi-directional information in images. This paper therefore constructed a multifocus image fusion algorithm based on the dyadic non-subsampled Contourlet transform. The images to be fused are decomposed by the dyadic Contourlet; the high-frequency and low-frequency subband coefficients are fused with different rules; and the fused coefficients are inverse-transformed by the dyadic Contourlet to reconstruct the fused image. Experimental results verify that the proposed method produces fused images with clear texture, preserves more edge detail information, and improves the fusion results.

7. Acknowledgment

This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 60970105, the National 863 High-Tech Program of China (2009AA01Z304), and the National Research Foundation for the Doctoral Program of Higher Education of China (20070422098).

8. References

[1] M. N. Do, M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation", IEEE Transactions on Image Processing, vol.14, no.12, pp.2091-2106, 2005.
[2] Chang Xia, Jiao Licheng, Jia Jianhua, "Multisensor Image Adaptive Fusion Based on Nonsubsampled Contourlet", Chinese Journal of Computers, vol.32, no.11, pp.2229-2238, 2009.
[3] Ye Chuanqi, Miao Qiguang, Wang Baoshu, "Image Fusion Method Based on the Nonsubsampled Contourlet Transform", Journal of Computer-aided Design & Computer Graphics, vol.19, no.10, pp.1274-1278, 2007.
[4] Song Yajun, Ni Guoqiang, Gao Kun, "Regional Energy Weighting Image Fusion Algorithm by Wavelet Based Contourlet Transform", Transactions of Beijing Institute of Technology, vol.28, no.2, pp.168-172, 2008.
[5] Liang Dong, Li Yao, Shen Min, et al., "An Algorithm for Multi-Focus Image Fusion Using Wavelet Based Contourlet Transform", Acta Electronica Sinica, vol.35, no.2, pp.320-322, 2007.
[6] Cai Xi, Zhao Wei, "Discussion upon Effects of Contourlet Lowpass Filter on Contourlet-based Image Fusion Algorithms", Acta Automatica Sinica, vol.35, no.3, pp.258-266, 2009.
[7] Qiang Zhang, Bao-long Guo, "Multifocus image fusion using the nonsubsampled contourlet transform", Signal Processing, vol.89, no.7, pp.1334-1346, 2009.
[8] Qu Xiaobo, Yan Jingwen, Yang Guide, "Multifocus image fusion method of sharp frequency localized Contourlet transform domain based on sum-modified-Laplacian", Optics and Precision Engineering, vol.17, no.5, pp.1203-1212, 2009.
[9] Sun Wei, Guo Baolong, Chen Long, "Multifocus image fusion algorithm based on directional window statistics in nonsubsampled contourlet domain", Journal of Jilin University (Engineering and Technology Edition), vol.39, no.5, pp.1384-1389, 2009.
[10] Zhang Qiang, Guo Baolong, "Fusion of Multifocus Images Based on the Nonsubsampled Contourlet Transform", Acta Photonica Sinica, vol.37, no.4, pp.838-843, 2008.
[11] Huang Haidong, Wang Bin, Zhang Liming, "A New Method for Remote Sensing Image Fusion Based on Nonsubsampled Contourlet Transform", Journal of Fudan University (Natural Science), vol.47, no.1, pp.124-134, 2008.
[12] Yang Shuyuan, Wang Min, Lu Yanxiong, et al., "Fusion of multiparametric SAR images based on SW-nonsubsampled contourlet and PCNN", Signal Processing, vol.89, no.12, pp.2596-2608, 2009.
[13] A. L. da Cunha, J. Zhou, M. N. Do, "The nonsubsampled contourlet transform: Theory, design and application", IEEE Transactions on Image Processing, vol.15, no.10, pp.3089-3101, 2006.
[14] Yang Xuan, Yang Wanhai, Pei Jihong, "Fusion multifocus images using wavelet decomposition", Acta Electronica Sinica, vol.29, no.6, pp.846-848, 2001.
[15] G. Pajares, J. M. de la Cruz, "A wavelet-based image fusion tutorial", Pattern Recognition, vol.37, no.9, pp.1855-1872, 2004.
[16] G. Piella, "New quality measures for image fusion", In Proceedings of the 7th International Conference on Information Fusion (Fusion 2004), International Society of Information Fusion (ISIF), Stockholm, Sweden, pp.542-546, 2004.