
EE4H, M.Sc 0407191 Computer Vision
Dr. Mike Spann
m.spann@bham.ac.uk
http://www.eee.bham.ac.uk/spannm

Introduction
Images may suffer from the following degradations:
Poor contrast due to poor illumination or the finite sensitivity of the imaging device
Electronic sensor noise or atmospheric disturbances leading to broadband noise
Aliasing effects due to inadequate sampling
Finite aperture effects or motion leading to spatial blurring

Introduction
We will consider simple algorithms for image enhancement based on lookup tables (contrast enhancement)
We will also consider simple linear filtering algorithms (noise removal)

Histogram equalisation
An image of low contrast has its grey levels concentrated in a narrow band
Define the grey level histogram of an image, h(i), where h(i) = number of pixels with grey level i
For a low contrast image, the histogram is concentrated in a narrow band and the full grey level dynamic range is not used
[Figure: a narrow grey level histogram h(i) plotted against i]

Histogram equalisation
A sigmoid lookup table can be used to map input to output grey levels
A sigmoid function g(i) controls the mapping from input to output pixel values
This can easily be implemented in hardware for maximum efficiency

Histogram equalisation
[Figure: input histogram h(i) mapped through g(i) to give output histogram h'(i)]
h'(i) = h(g^{-1}(i))
g(i) = \frac{1}{1 + \exp(-(i - \theta)/\lambda)}

Histogram equalisation
θ controls the position of maximum slope
λ controls the slope
Problem: we need to determine the optimum sigmoid parameters θ and λ for each image
A better method would be to determine the best mapping function from the image data itself
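As an illustrative sketch (added here, not part of the original slides), a sigmoid lookup table of this assumed form can be built and applied with NumPy; the parameter names theta and lam stand for θ and λ, and the default values are arbitrary examples:

```python
import numpy as np

def sigmoid_lut(theta=128.0, lam=25.0):
    """Build a 256-entry lookup table g(i); theta and lam are example values."""
    i = np.arange(256, dtype=np.float64)
    g = 1.0 / (1.0 + np.exp(-(i - theta) / lam))   # sigmoid in [0, 1]
    g = (g - g.min()) / (g.max() - g.min())         # rescale to span the full output range
    return np.round(255.0 * g).astype(np.uint8)

def apply_lut(image, lut):
    """Pointwise transformation: each output pixel depends only on its input pixel."""
    return lut[image]

# Usage (8-bit greyscale image as a NumPy array):
# enhanced = apply_lut(img, sigmoid_lut(theta=128, lam=25))
```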

Histogram equalisation
A general histogram stretching algorithm is defined in terms of a transformation g(i)
We require a transformation g(i) such that, for any histogram h(i):
h'(i) = \sum_{j:\, g(j) = i} h(j) = \text{constant}

Histogram equalisation
Constraints (N x N x 8 bit image):
\sum_{i} h'(i) = N^2
No 'crossover' in grey levels after transformation:
i_1 < i_2 \Rightarrow g(i_1) \le g(i_2)

Histogram equalisation
An adaptive histogram equalisation algorithm can be defined in terms of the 'cumulative histogram' H(i):
H(i) = number of pixels with grey level \le i
H(i) = \sum_{j=0}^{i} h(j)
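A minimal NumPy sketch (added for illustration, not from the slides) of computing the histogram h(i) and cumulative histogram H(i) of an 8-bit greyscale image:

```python
import numpy as np

def histograms(image):
    """Return h(i) and H(i) for an 8-bit greyscale image stored as a NumPy array."""
    h = np.bincount(image.ravel(), minlength=256)   # h(i): pixel count at grey level i
    H = np.cumsum(h)                                 # H(i): pixels with grey level <= i
    return h, H
```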

Histogram equalisation
Since the required h(i) is flat, the required H(i) is a ramp
[Figure: flat target histogram h(i) and the corresponding ramp cumulative histogram H(i)]

Histogram equalisation
Let the actual histogram and cumulative histogram be h(i) and H(i)
Let the desired histogram and desired cumulative histogram be h'(i) and H'(i)
Let the transformation be g(i)
H'(g(i)) = \frac{N^2}{255}\, g(i)
(H'(255) = N^2,\quad H'(0) = 0)

Histogram equalisation
Since g(i) is an 'ordered' transformation,
i_1 < i_2 \Rightarrow g(i_1) \le g(i_2)
H'(g(i)) = H(i) = \frac{N^2}{255}\, g(i)
g(i) = \frac{255\, H(i)}{N^2}
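Putting the derivation together, a hedged NumPy sketch (added for illustration; the slides themselves contain no code) of histogram equalisation using the mapping g(i) = 255 H(i) / N^2:

```python
import numpy as np

def equalise(image):
    """Histogram equalisation of an 8-bit N x N greyscale image via g(i) = 255*H(i)/N^2."""
    h = np.bincount(image.ravel(), minlength=256)          # h(i)
    H = np.cumsum(h)                                        # H(i), cumulative histogram
    n_pixels = image.size                                   # N^2 for an N x N image
    g = np.round(255.0 * H / n_pixels).astype(np.uint8)     # the mapping g(i)
    return g[image]                                         # apply g as a lookup table
```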

Histogram equalisation
Worked example: a 32 x 32 pixel image with grey levels quantised to 3 bits
g(i) = \frac{7\, H(i)}{1024}
h'(i) = \sum_{j:\, g(j) = i} h(j)

Histogram equalisation

i    h(i)   H(i)   g(i)   round(g(i))   h'(i)
0    197    197    1.35   1             -
1    256    453    3.10   3             197
2    212    665    4.55   5             -
3    164    829    5.67   6             256
4    82     911    6.23   6             -
5    62     973    6.65   7             212
6    31     1004   6.86   7             246
7    20     1024   7.00   7             113

Histogram equalisation
[Figure: original histogram and stretched histogram for the worked example, grey levels 0 to 7]

[Figures: grey level histograms h(i) over i = 0 to 255 for example images]

Histogram equalisation
ImageJ demonstration: http://rsb.info.nih.gov/ij/signed-applet

Image Filtering
Simple image operators can be classified as 'pointwise' or 'neighbourhood' (filtering) operators
Histogram equalisation is a pointwise operation
More general filtering operations use neighbourhoods of pixels
[Figure: a pointwise transformation maps input pixel (x,y) to output pixel (x,y); a neighbourhood transformation maps a neighbourhood around (x,y) in the input image to output pixel (x,y)]

Image Filtering
The output g(x,y) can be a linear or non-linear function of the set of input pixel grey levels {f(x-M,y-M) ... f(x+M,y+M)}
[Figure: a 3x3 neighbourhood of the input image f(x,y), spanning (x-1,y-1) to (x+1,y+1), mapping to output pixel g(x,y)]

Image Filtering
Examples of filters:
g(x,y) = h_1 f(x-1,y-1) + h_2 f(x,y-1) + \dots + h_9 f(x+1,y+1)
g(x,y) = \mathrm{median}\{ f(x-1,y-1), f(x,y-1), \dots, f(x+1,y+1) \}
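As an added sketch (assuming an 8-bit greyscale NumPy image; not part of the original slides), both example filters, a 3x3 weighted mean and a 3x3 median, can be written directly:

```python
import numpy as np

def filter_3x3(image, weights=None):
    """Apply a 3x3 linear (weighted) or median filter to a 2D array; borders are left unchanged."""
    out = image.astype(np.float64).copy()
    rows, cols = image.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            window = image[y-1:y+2, x-1:x+2].astype(np.float64)
            if weights is None:
                out[y, x] = np.median(window)          # non-linear: median filter
            else:
                out[y, x] = np.sum(weights * window)   # linear: overlap-multiply-add
    return out

# Usage: mean filtering with h_k = 1/9, or median filtering with weights=None
# smoothed = filter_3x3(img, weights=np.full((3, 3), 1.0 / 9.0))
# despiked = filter_3x3(img, weights=None)
```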

Linear filtering and convolution
Example: 3x3 arithmetic mean of an input image (ignoring floating point to byte rounding)
[Figure: a 3x3 neighbourhood of the input image f(x,y), from (x-1,y-1) to (x+1,y+1), averaged to give output pixel g(x,y)]

Linear filtering and convolution
Convolution involves 'overlap - multiply - add' with a 'convolution mask'
H = \frac{1}{9}\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}
[Figure: the 3x3 mask overlaid on the input image f(x,y); image points are multiplied by the filter mask points and summed to give output pixel g(x,y)]

Linear filtering and convolution
We can define the convolution operator mathematically
This defines a 2D convolution of an image f(x,y) with a filter h(x,y):
g(x,y) = \sum_{x'} \sum_{y'} h(x',y')\, f(x - x', y - y') = \frac{1}{9} \sum_{x'=-1}^{1} \sum_{y'=-1}^{1} f(x - x', y - y')
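A hedged sketch of the 2D convolution formula above (added; it assumes a square N x N image as in the slides, a small odd-sized mask, and zero values outside the image boundary):

```python
import numpy as np

def convolve2d(f, h):
    """Direct 2D convolution: g(x,y) = sum over (x',y') of h(x',y') f(x-x', y-y')."""
    N = f.shape[0]                       # assumes a square N x N image
    M = h.shape[0]                       # odd mask size, e.g. 3
    half = M // 2
    g = np.zeros_like(f, dtype=np.float64)
    for y in range(N):
        for x in range(N):
            acc = 0.0
            for yp in range(-half, half + 1):
                for xp in range(-half, half + 1):
                    yy, xx = y - yp, x - xp
                    if 0 <= yy < N and 0 <= xx < N:    # treat pixels outside the image as zero
                        acc += h[yp + half, xp + half] * f[yy, xx]
            g[y, x] = acc
    return g

# Usage: g = convolve2d(img.astype(float), np.full((3, 3), 1.0 / 9.0))
```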

Linear filtering and convolution
Example: convolution with a Gaussian filter kernel
σ determines the width of the filter and hence the amount of smoothing
g(x,y) = \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right) = g(x)\, g(y)
g(x) = \exp\left(-\frac{x^2}{2\sigma^2}\right)
[Figure: the 1D Gaussian g(x) plotted against x, with σ indicating its width]
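An added sketch (not from the slides; the truncation of the kernel at plus or minus 3σ and the normalisation are assumptions) of building the separable Gaussian kernel and smoothing an image with it:

```python
import numpy as np

def gaussian_kernel(sigma):
    """1D Gaussian g(x) = exp(-x^2 / (2 sigma^2)), truncated at +/- 3 sigma and normalised."""
    half = int(np.ceil(3.0 * sigma))
    x = np.arange(-half, half + 1, dtype=np.float64)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def gaussian_smooth(image, sigma):
    """Separable smoothing: since g(x,y) = g(x) g(y), filter the rows and then the columns."""
    g = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, image.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode='same'), 0, rows)

# Usage: smoothed = gaussian_smooth(noisy_img, sigma=1.5)
```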

Linear filtering and convolution
[Figure: original image, noisy image, and Gaussian filtered results with σ = 1.5 and σ = 3.0]

Linear filtering and convolution
ImageJ demonstration: http://rsb.info.nih.gov/ij/signed-applet

Linear filtering and convolution
We can also consider convolution to be a frequency domain operation
It is based on the discrete Fourier transform F(u,v) of the image f(x,y):
F(u,v) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \exp\left(-\frac{2\pi j}{N}(ux + vy)\right), \quad u, v = 0 \ldots N-1

Linear filtering and convolution
The inverse DFT is defined by:
f(x,y) = \frac{1}{N^2} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} F(u,v) \exp\left(\frac{2\pi j}{N}(ux + vy)\right), \quad x, y = 0 \ldots N-1
[Figure: the image f(x,y) over (0,0) to (N-1,N-1) and its spectrum F(u,v) over (0,0) to (N-1,N-1), related by the DFT and IDFT; the spectrum is displayed as log(1 + |F(u,v)|)]
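An added minimal sketch (assuming NumPy's FFT routines, which compute the same transform pair up to these conventions) of the forward transform, the log-magnitude display, and the inverse transform:

```python
import numpy as np

def dft_pair(f):
    """Forward DFT, log-magnitude display image, and reconstruction by the inverse DFT."""
    F = np.fft.fft2(f)                       # F(u,v)
    display = np.log(1.0 + np.abs(F))        # log(1 + |F(u,v)|), as shown on the slide
    f_back = np.real(np.fft.ifft2(F))        # IDFT recovers f(x,y); imaginary part is ~0
    return F, display, f_back
```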

Linear filtering and convolution
F(u,v) is the frequency content of the image at spatial frequency position (u,v)
Smooth regions of the image contribute low frequency components to F(u,v)
Abrupt transitions in grey level (lines and edges) contribute high frequency components to F(u,v)

Linear filtering and convolution
We can compute the DFT directly using the formula
An N x N point DFT requires N^2 floating point multiplications per output point
Since there are N^2 output points, the computational complexity of the DFT is N^4
N^4 = 4 x 10^9 for N = 256
Bad news! Many hours on a workstation

Linear filtering and convolution
The FFT algorithm was developed in the 1960s for seismic exploration
It reduced the DFT complexity to 2N^2 log_2 N
2N^2 log_2 N ~ 10^6 for N = 256
A few seconds on a workstation

Linear filtering and convolution
The 'filtering' interpretation of convolution can be understood in terms of the convolution theorem
The convolution of an image f(x,y) with a filter h(x,y) is defined as:
g(x,y) = \sum_{x'=0}^{M-1} \sum_{y'=0}^{M-1} h(x',y')\, f(x - x', y - y') = f(x,y) * h(x,y)
[Figure: the M x M filter mask h(x,y) overlapped with the input image f(x,y) to produce the output image g(x,y)]

Linear filtering and convolution
Note that the filter mask is shifted and inverted prior to the 'overlap - multiply - add' stage of the convolution
Define the DFTs of f(x,y), h(x,y) and g(x,y) as F(u,v), H(u,v) and G(u,v)
The convolution theorem states simply that:
G(u,v) = H(u,v)\, F(u,v)
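A hedged numerical check of the convolution theorem (added; it uses circular convolution, which is what the DFT relation implies when the mask is zero-padded to N x N):

```python
import numpy as np

N, M = 64, 3
rng = np.random.default_rng(0)
f = rng.random((N, N))                       # test image f(x,y)
h = np.zeros((N, N))
h[:M, :M] = 1.0 / 9.0                        # 3x3 mean mask, zero-padded to N x N

# Frequency domain: G(u,v) = H(u,v) F(u,v)
g_freq = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f)))

# Spatial domain: circular 'overlap - multiply - add'
g_spat = np.zeros((N, N))
for yp in range(M):
    for xp in range(M):
        g_spat += h[yp, xp] * np.roll(f, shift=(yp, xp), axis=(0, 1))

print(np.allclose(g_freq, g_spat))           # True: both implementations agree
```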

Linear filtering and convolution
As an example, suppose h(x,y) corresponds to a linear filter with frequency response defined as follows:
H(u,v) = \begin{cases} 0 & \text{for } u^2 + v^2 < R^2 \\ 1 & \text{otherwise} \end{cases}
This removes the low frequency components of the image
[Figure: image, DFT, multiplication by H(u,v), IDFT, high pass filtered image]
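An added sketch of applying such an ideal high pass filter in the frequency domain (the radius value, the square-image assumption, and the use of a centred spectrum are implementation assumptions, not from the slides):

```python
import numpy as np

def ideal_highpass(f, R):
    """Zero all spatial frequencies with u^2 + v^2 < R^2, keep the rest, then invert."""
    N = f.shape[0]                                        # assumes a square N x N image
    F = np.fft.fftshift(np.fft.fft2(f))                   # centre (u,v) = (0,0)
    u = np.arange(N) - N // 2
    uu, vv = np.meshgrid(u, u)
    H = (uu**2 + vv**2 >= R**2).astype(float)             # H(u,v): 0 inside radius R, 1 outside
    return np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))

# Usage: edges_and_detail = ideal_highpass(img.astype(float), R=10)
```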

Linear filtering and convolution
Frequency domain implementation of convolution:
Image f(x,y) is N x N pixels
Filter h(x,y) is M x M filter mask points
Usually M << N, in which case the filter mask is 'zero-padded' out to N x N
The full output image g(x,y) would be of size (N+M-1) x (N+M-1) pixels; in the DFT implementation the filter mask 'wraps around' (x' = x modulo N, y' = y modulo N), truncating g(x,y) to an N x N image
[Figure: the M x M mask h(x,y) zero-padded to N x N; h and f are transformed by the DFT to H(u,v) and F(u,v), multiplied, and the IDFT of H(u,v)F(u,v) gives f(x,y) * h(x,y); mask positions near the image border wrap around modulo N]

Linear filtering and convolution
We can evaluate the computational complexity of implementing convolution in the spatial and spatial frequency domains
An N x N image is to be convolved with an M x M filter
Spatial domain convolution requires M^2 floating point multiplications per output point, or N^2 M^2 in total
Frequency domain implementation requires 3 x (2N^2 log_2 N) + N^2 floating point multiplications (2 DFTs + 1 IDFT + N^2 multiplications of the DFTs)
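A small added arithmetic check reproducing the operation counts used in the two examples on the following slide:

```python
import math

def multiplication_counts(N, M):
    """Floating point multiplications: spatial (N^2 M^2) vs frequency (3*(2 N^2 log2 N) + N^2)."""
    spatial = N**2 * M**2
    frequency = 3 * (2 * N**2 * math.log2(N)) + N**2
    return spatial, frequency

print(multiplication_counts(512, 7))    # approx (1.3e7, 1.4e7)
print(multiplication_counts(512, 32))   # approx (2.7e8, 1.4e7)
```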

Linear filtering and convolution
Example 1: N = 512, M = 7
Spatial domain implementation requires 1.3 x 10^7 floating point multiplications
Frequency domain implementation requires 1.4 x 10^7 floating point multiplications
Example 2: N = 512, M = 32
Spatial domain implementation requires 2.7 x 10^8 floating point multiplications
Frequency domain implementation requires 1.4 x 10^7 floating point multiplications

Linear filtering and convolution
For smaller mask sizes, spatial and frequency domain implementations have about the same computational complexity
However, we can speed up the frequency domain implementation by tessellating the image into sub-blocks and filtering these independently
It is not quite that simple: we need to overlap the filtered sub-blocks to remove blocking artefacts
This is the overlap and add algorithm

Linear filtering and convolution
We can look at some examples of linear filters commonly used in image processing and their frequency responses
In particular we will look at a smoothing filter and a filter to perform edge detection

Linear filtering and convolution
Smoothing (low pass) filter
Simple arithmetic averaging
Useful for smoothing images corrupted by additive broadband noise
H_3 = \frac{1}{9}\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} \qquad H_5 = \frac{1}{25}\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{pmatrix} \quad \text{etc.}
[Figure: the averaging filter h(x) in the spatial domain and its low pass frequency response H(u) in the spatial frequency domain]

Linear filtering and convolution
Edge detection filter
A simple differencing filter used for enhancing edges
Has a bandpass frequency response
H = \begin{pmatrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \end{pmatrix}

Linear filtering and convolution
ImageJ demonstration: http://rsb.info.nih.gov/ij/signed-applet
[Figure: a 1D profile f(x) with an edge at position p, and the edge-enhanced result of f(x) * (1, 0, -1)]

Linear filtering and convolution
We can evaluate the (1D) frequency response of the filter h(x) = {1, 0, -1} from the DFT definition:
H(u) = \sum_{x=0}^{N-1} h(x) \exp\left(-\frac{2\pi jux}{N}\right)
     = 1 - \exp\left(-\frac{4\pi ju}{N}\right)
     = \exp\left(-\frac{2\pi ju}{N}\right)\left[\exp\left(\frac{2\pi ju}{N}\right) - \exp\left(-\frac{2\pi ju}{N}\right)\right]
     = 2j \exp\left(-\frac{2\pi ju}{N}\right) \sin\left(\frac{2\pi u}{N}\right)
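An added quick numerical check (assuming N = 256) that the DFT of h(x) = {1, 0, -1} has the magnitude derived above:

```python
import numpy as np

N = 256
h = np.zeros(N)
h[0], h[2] = 1.0, -1.0                     # h(x) = {1, 0, -1}, zero elsewhere
H = np.fft.fft(h)                          # H(u) from the DFT definition
u = np.arange(N)
print(np.allclose(np.abs(H), 2.0 * np.abs(np.sin(2.0 * np.pi * u / N))))   # True
```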

Linear filtering and convolution
The magnitude of the response is therefore:
|H(u)| = 2\left|\sin\left(\frac{2\pi u}{N}\right)\right|
This has a bandpass characteristic
[Figure: |H(u)| plotted against u, showing the bandpass characteristic]

Conclusion
We have looked at basic (low level) image processing operations: enhancement and filtering
These are usually important pre-processing steps carried out in computer vision systems (often in hardware)