Transcript of "Point Operations and Spatial Filtering" (courses/en5204_2013/L02.pdf)
Point Operations and Spatial Filtering
Ranga Rodrigo
November 13, 2011
Outline
1. Point Operations
   - Histogram Processing
2. Spatial Filtering
   - Smoothing Spatial Filters
   - Sharpening Spatial Filters
3. Edge Detection
   - Line Detection Using the Hough Transform
We can process images either in the spatial domain or in transform domains. The spatial domain refers to the image plane itself; in spatial domain processing, we directly manipulate pixels. Spatial domain processing is common. The frequency domain and wavelets are examples of transform domains.
Domains

Image enhancement can be carried out in either of two domains: the spatial domain or a transform domain.
Spatial Domain Processing

Spatial domain processing divides into:
- Range operations: point operations and neighborhood operations
- Domain operations
Point Operations
g(x0, y0) = T[f(x0, y0)].

These are known as gray-level transformations. Examples:
- Gamma correction
- Window-center correction
- Histogram equalization
Neighborhood Operations
g(x0, y0) = T_{(x,y)∈N}[f(x, y)], where N is some neighborhood of (x0, y0). Examples:
- Filtering: mean, Gaussian, median, etc.
- Image gradients, etc.
Domain Operations
g(x0, y0) = f(Tx(x0, y0), Ty(x0, y0)). Examples:
- Warping
- Flipping, rotating, etc.
- Image registration
Point Operations: Recapitulation
g(x0, y0) = T[f(x0, y0)].

These are known as gray-level transformations. The enhanced value of a pixel depends only on the original value of that pixel. If we denote the value of the pixel before and after the transformation as r and s, respectively, the above expression reduces to

s = T(r). (1)

For example, s = 255 − r gives the negative of an 8-bit image.
Figure 1: Intensity transformation function for image negative (axes from 0 to L−1).
Example
Write a program to generate the negative of an image.
im = imread('image.jpg');
imneg = 255 - im;
subplot(1, 2, 1)
imshow(im)
title('Original')
subplot(1, 2, 2)
imshow(imneg)
title('Negative')
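The slides use Octave; as a cross-check, the same point operation in NumPy (the 2×2 array below is a made-up stand-in for an image):

```python
import numpy as np

# image negative s = 255 - r on a tiny made-up uint8 "image"
im = np.array([[0, 64], [128, 255]], dtype=np.uint8)
imneg = 255 - im
# imneg == [[255, 191], [127, 0]]
```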
Power-Law Transformations
s = c r^γ, (2)

where c and γ are positive constants.

- Values of γ such that 0 < γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input levels.
- Values of γ > 1 have the opposite behavior.
- c = γ = 1 gives the identity transformation.

Gamma correction is an application.
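A minimal numeric sketch of Equation 2 on normalized intensities (the sample values are chosen only for illustration):

```python
import numpy as np

# s = c * r**gamma; gamma < 1 brightens dark values
r = np.linspace(0.0, 1.0, 5)   # normalized gray levels 0, 0.25, ..., 1
c, gamma = 1.0, 0.5
s = c * r**gamma
# a dark input (0.25) maps to a brighter output (0.5)
```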
Figure 2: Intensity transformation functions using the power law, shown for γ = 0.1, γ = 0.5, and γ = 2 (axes from 0 to L−1).
Example
Write a program to carry out power-law transformation on an image.
r = im2double(imread('image.jpg'));  % convert to double to avoid uint8 saturation
c = 1;
gamma = 0.9;
s1 = c * r .^ gamma;
gamma = 1.2;
s2 = c * r .^ gamma;
subplot(1, 3, 1)
imshow(r)
title('Original')
subplot(1, 3, 2)
imshow(s1)
title('Gamma = 0.9')
subplot(1, 3, 3)
imshow(s2)
title('Gamma = 1.2')
Piecewise-Linear Transformation Functions
These functions can be used to perform gray-level operations such as:
- Contrast stretching
- Window-center correction, to enhance a portion of the levels
- Gray-level slicing
Figure 3: Contrast stretching function T(r) (axes from 0 to L−1).
Here is an example of a piecewise-linear transformation.

im = imread('Images/airplane.jpg');
r = rgb2gray(im);
line1 = 0:0.5:100;
line2 = 155/55 * ([201:1:255] - 200) + 100;
t = [line1, line2];
plot(t)
s = t(r + 1);
subplot(1, 2, 1)
imshow(r)
title('Original')
subplot(1, 2, 2)
imshow(mat2gray(s))
title('Output')
Example
Write a program to carry out contrast stretching.
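One possible answer, sketched in NumPy rather than the Octave used elsewhere in these slides; the breakpoints r1 = 50 and r2 = 200 are arbitrary choices for illustration:

```python
import numpy as np

def stretch(img, r1, r2):
    """Linearly map [r1, r2] to [0, 255], clipping values outside the range."""
    out = (img.astype(float) - r1) * 255.0 / (r2 - r1)
    return np.clip(out, 0, 255).astype(np.uint8)

a = np.array([[100, 150], [200, 50]], dtype=np.uint8)
b = stretch(a, 50, 200)
# b == [[85, 170], [255, 0]]
```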
Outline
1. Point Operations
   - Histogram Processing
2. Spatial Filtering
   - Smoothing Spatial Filters
   - Sharpening Spatial Filters
3. Edge Detection
   - Line Detection Using the Hough Transform
Histogram
The histogram of a digital image with gray levels in the range [0, L−1] is a discrete function h(r_k) = n_k, where r_k is the kth gray level and n_k is the number of pixels having gray level r_k. We can normalize the histogram by dividing by the total number of pixels n. Then we have an estimate of the probability of occurrence of level r_k, i.e., p(r_k) = n_k / n.
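For a quick concrete check, h(r_k) for a toy 2-bit image in NumPy (the array values are made up):

```python
import numpy as np

img = np.array([[0, 1, 1], [2, 1, 0]])       # toy image with L = 4 levels
h = np.bincount(img.ravel(), minlength=4)    # h(r_k) = n_k
p = h / img.size                             # p(r_k) = n_k / n
# h == [2, 3, 1, 0] and p sums to 1
```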
Number of Bins in a Histogram
The histogram that we described above has L bins. We can construct a coarser histogram by selecting a smaller number of bins than L. Then several adjacent values of r will be counted in one bin.
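A sketch of such a coarser histogram using NumPy's np.histogram (the bin count and sample values are arbitrary):

```python
import numpy as np

vals = np.array([0, 10, 200, 250, 255])             # made-up 8-bit gray levels
counts, edges = np.histogram(vals, bins=4, range=(0, 256))
# 4 bins of width 64; several adjacent gray levels share a bin
# counts == [2, 0, 0, 3]
```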
Example
Write a program to compute the histogram with a given number of bins.
close all
im = imread('fruits.jpg');
img = rgb2gray(im);
imshow(img)
numbins = 10;
binbounds = linspace(0, 255, numbins + 1);
cumhist = zeros(numbins + 1, 1);
for i = 2:numbins + 1
  cumhist(i) = sum(sum(img <= binbounds(i)));
endfor
hist = cumhist(2:end) - cumhist(1:end-1);
bincenters = (binbounds(2:end) + binbounds(1:end-1)) / 2;
bar(bincenters', hist, 0.2)
Histogram Equalization
We saw that a given image may have a peaked histogram. Such a peaked or non-flat histogram shows that pixels do not equally occupy all the gray levels. This results in a low-contrast image. We can remedy this by carrying out the operation called histogram equalization. Histogram equalization is a gray-level transformation that results in an image with a more or less flat histogram.
Consider, for now, that continuous intensity values of an image are to be processed. We assume that r ∈ [0, L−1]. Let's consider the intensity transformation

s = T(r), 0 ≤ r ≤ L−1, (3)

that produces an output intensity level s for every pixel in the input image having intensity r. We assume that

- T(r) is monotonically increasing in the interval 0 ≤ r ≤ L−1, and
- 0 ≤ T(r) ≤ L−1 for 0 ≤ r ≤ L−1.

The intensity levels in the image may be viewed as random variables in the interval [0, L−1]. Let p_r(r) and p_s(s) denote the probability density functions (PDFs) of r and s. A fundamental result from basic probability theory is that if p_r(r) and T(r) are known, and T(r) is continuous and differentiable over the range of values of interest, then the PDF of the transformed variable s can be obtained using the simple formula

p_s(s) = p_r(r) |dr/ds|. (4)
Now let's consider the following transformation function:

s = T(r) = (L−1) ∫_0^r p_r(w) dw, (5)

where w is the dummy variable of integration. The right-hand side of this equation is the cumulative distribution function (CDF) of the random variable r. This function satisfies the two conditions that we mentioned:

- Because PDFs are always nonnegative and the integral of a function is the area under its curve, Equation 5 satisfies the first condition: the area under the curve cannot decrease as r increases.
- When the upper limit in this equation is r = L−1, the integral evaluates to 1, because the area under a PDF curve is always 1. So the maximum value of s is L−1, and the second condition is also satisfied.
To find the p_s(s) corresponding to this transformation, we use Equation 4. From Equation 5,

ds/dr = dT(r)/dr = (L−1) d/dr [∫_0^r p_r(w) dw] = (L−1) p_r(r). (6)

Substituting this result into Equation 4, and keeping in mind that all probability values are positive, yields

p_s(s) = p_r(r) |dr/ds| = p_r(r) |1 / ((L−1) p_r(r))| = 1/(L−1), 0 ≤ s ≤ L−1. (7)

We recognize the form of p_s(s) in the last line of this equation as a uniform probability density function.
For discrete values, we deal with probabilities (histogram values) and summations instead of probability density functions and integrals. The probability of occurrence of intensity level r_k in a digital image is approximated by

p_r(r_k) = n_k / (MN), k = 0, 1, ..., L−1, (8)

where MN is the total number of pixels in the image, n_k is the number of pixels that have intensity r_k, and L is the number of possible intensity levels in the image (e.g., 256). Recall that a plot of p_r(r_k) versus r_k is commonly referred to as a histogram. The discrete form of Equation 5 is

s_k = T(r_k) = (L−1) Σ_{j=0}^{k} p_r(r_j) = ((L−1)/(MN)) Σ_{j=0}^{k} n_j, k = 0, 1, ..., L−1. (9)
Thus the output image is obtained by mapping each pixel in the input image with intensity r_k into a corresponding pixel level s_k in the output image using Equation 9.
Example
Suppose that a 3-bit image (L = 8) of size 64×64 pixels (MN = 4096) has the intensity distribution shown in Table 1, where the intensity levels are in the range [0, L−1] = [0, 7]. Carry out histogram equalization.
rk       nk      p_r(rk) = nk/MN
r0 = 0   790     0.19
r1 = 1   1023    0.25
r2 = 2   850     0.21
r3 = 3   656     0.16
r4 = 4   329     0.08
r5 = 5   245     0.06
r6 = 6   122     0.03
r7 = 7   81      0.02
Table 1: Table for Example 5.
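Applying Equation 9 to the counts in Table 1 can be checked quickly in Python:

```python
# s_k = round((L-1) * sum_{j<=k} n_j / MN), Eq. 9, for the Table 1 data
nk = [790, 1023, 850, 656, 329, 245, 122, 81]
L, MN = 8, sum(nk)                 # MN == 4096
s, cum = [], 0
for n in nk:
    cum += n
    s.append(round((L - 1) * cum / MN))
# s == [1, 3, 5, 6, 6, 7, 7, 7]: the equalized output levels
```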
Example
Write a program to carry out histogram equalization.
img = imread('Images/Fig3.15(a)1top.jpg');
L = 256;
s = zeros(L, 1);
for k = 0:L-1
  s(k+1) = sum(sum(img <= k));
endfor
n = size(img, 1) * size(img, 2);
s = s / n;
imeq = mat2gray(s(double(img) + 1));  % double() avoids uint8 saturation at 255
subplot(2, 2, 1)
imshow(img)
subplot(2, 2, 2)
imshow(imeq)
subplot(2, 2, 3)
imhist(im2double(img))
subplot(2, 2, 4)
imhist(imeq)
Outline
1. Point Operations
   - Histogram Processing
2. Spatial Filtering
   - Smoothing Spatial Filters
   - Sharpening Spatial Filters
3. Edge Detection
   - Line Detection Using the Hough Transform
- Spatial filtering is one of the main tools that we use in image processing.
- There are many applications, including image enhancement, e.g., smoothing.
- We can accomplish effects such as smoothing by applying a spatial filter directly on the image.
- Spatial filters are also called spatial masks, kernels, templates, and windows.
Applying a Spatial Filter
In the spatial filter operation we apply a predefined operation on the image pixels included in the neighborhood of the pixel in question. Filtering gives the value of the corresponding pixel, at the coordinates of the center of the neighborhood, in the resultant image. If the operation performed on the image pixels is linear, then the filter is called a linear filter; otherwise, the filter is non-linear. Figure 4 shows a sketch of the neighborhood of a particular pixel. Figure 5 shows the weights of a 5×5 kernel.
Figure 4: A 3×3 neighborhood for spatial filtering, within an image whose rows run from 0 to h−1 and columns from 0 to w−1. The center of the neighborhood is the pixel (i0, j0).
Figure 5: A 5×5 kernel showing weights w(s, t) for s, t ∈ {−2, −1, 0, 1, 2}.
Consider the 3×3 kernel shown in Figure 6.
Figure 6: A 3×3 kernel showing weights w(s, t) for s, t ∈ {−1, 0, 1}.
At any pixel (i, j) in the image, the response g(i, j) of the filter is the sum of products of the filter coefficients and the image pixels encompassed by the filter:

g(i, j) = w(−1,−1) f(i−1, j−1) + w(−1,0) f(i−1, j) + ··· + w(1,1) f(i+1, j+1). (10)

Observe that the center coefficient of the filter, w(0,0), aligns with the pixel at location (i, j). For a mask of size m × n, we assume that m = 2a+1 and n = 2b+1, where a and b are positive integers. This means that we always choose filters of odd dimensions for convenience. In general, linear spatial filtering of an image of size M × N with a filter of size m × n is given by the expression

g(i, j) = Σ_{s=−a}^{a} Σ_{t=−b}^{b} w(s, t) f(i+s, j+t), (11)

where i and j are varied so that each pixel in f is visited. Of course, when the kernel is not fully within the image, we have to use methods such as zero padding.
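Equation 11 with zero padding can be sketched directly in NumPy (a naive double loop for clarity, not an optimized implementation):

```python
import numpy as np

def spatial_filter(f, w):
    """Linear spatial filtering (correlation) of Eq. 11, with zero padding."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    fp = np.pad(f.astype(float), ((a, a), (b, b)))   # zero padding
    g = np.zeros(f.shape)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g[i, j] = np.sum(w * fp[i:i + 2*a + 1, j:j + 2*b + 1])
    return g

f = np.ones((3, 3))
w = np.ones((3, 3)) / 9.0
g = spatial_filter(f, w)
# interior pixel sees all ones (g[1,1] == 1); corners lose 5 zero-padded terms
```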
Example
Give a 3×3 kernel that can be used for averaging a 3×3 neighborhood.
Instead of using equal 1/9 entries, we can have a 3×3 kernel of all ones and then divide the filter output by 9.
Example
Write a program to average-filter an image using a 3×3 kernel.
w = 3;
h = 1/9 * ones(w, w);
hw = floor(w / 2);
imrgb = imread('Images/airplane.jpg');
im = im2double(rgb2gray(imrgb));
[row, col] = size(im);
result = zeros(row, col);
for i = hw+1:row-hw
  for j = hw+1:col-hw
    result(i, j) = sum(sum(h .* im(i-hw:i+hw, j-hw:j+hw)));
  end
end
figure;
imshow(result);
A faster and more convenient implementation of the aforementioned loops is as follows (conv2 flips the kernel, but for this symmetric kernel convolution and correlation coincide):

result = conv2(im, h, "same");
Outline
1. Point Operations
   - Histogram Processing
2. Spatial Filtering
   - Smoothing Spatial Filters
   - Sharpening Spatial Filters
3. Edge Detection
   - Line Detection Using the Hough Transform
Smoothing filters are used for blurring and noise reduction. Blurring is used as a preprocessing operation to remove small, noise-like objects before large-object extraction, and to bridge small gaps in lines and curves. Noise reduction can be achieved by blurring with a linear filter, and also by nonlinear filtering.
Examples of Smoothing Filters
The averaging filter that we studied earlier is a smoothing filter. It reduces sharp transitions and noise. Figure 7 shows this kernel. Note that using 1 as the filter coefficients instead of 1/9, and later multiplying the sum by 1/9, is computationally efficient.
1/9 × [1 1 1; 1 1 1; 1 1 1]

Figure 7: 3×3 averaging smoothing filter.
The weighted averaging kernel shown in Figure 8 gives more importance to pixels close to the center:

1/16 × [1 2 1; 2 4 2; 1 2 1]

Figure 8: 3×3 weighted-averaging smoothing filter.
Example
Write a program to carry out weighted averaging using the kernel in Figure 8.
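One possible solution sketched in NumPy; it also shows that the Figure 8 kernel is the outer product of [1, 2, 1]/4 with itself:

```python
import numpy as np

k = np.array([1.0, 2.0, 1.0]) / 4.0
w = np.outer(k, k)                  # 1/16 * [[1,2,1],[2,4,2],[1,2,1]]

f = np.ones((5, 5))                 # constant test image
fp = np.pad(f, 1)                   # zero padding at the borders
g = np.array([[np.sum(w * fp[i:i+3, j:j+3]) for j in range(5)]
              for i in range(5)])
# the weights sum to 1, so interior pixels are unchanged: g[2, 2] == 1.0
```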
Order-Statistics (Non-Linear) Filters
- Order-statistics filters determine the value of the resultant pixel by a value related to the ranking or order of the pixels within a kernel.
- The best known in this category is the median filter, which computes the value of the resultant pixel as the median of the neighborhood.
- Median filters are very effective in the presence of impulse noise, called salt-and-pepper noise.
- In order to do median filtering, we sort the pixels in the neighborhood, find the median, and set the pixel of the resultant image to the median value.
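The steps above can be sketched as a small NumPy median filter (edge replication at the borders is one arbitrary choice):

```python
import numpy as np

def median3x3(f):
    """3x3 median filter; borders handled by edge replication."""
    fp = np.pad(f, 1, mode='edge')
    out = np.empty_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = np.median(fp[i:i+3, j:j+3])
    return out

f = np.zeros((5, 5))
f[2, 2] = 255.0                     # a single "salt" impulse
g = median3x3(f)
# every 3x3 window holds at most one impulse, so the median removes it
```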
Outline
1. Point Operations
   - Histogram Processing
2. Spatial Filtering
   - Smoothing Spatial Filters
   - Sharpening Spatial Filters
3. Edge Detection
   - Line Detection Using the Hough Transform
- We were able to achieve image blurring by using pixel averaging, an operation similar to integration.
- Sharpening, the opposite of blurring, can be achieved by spatial differentiation.
- The strength of the response of the derivative operator is proportional to the degree of intensity discontinuity in the image at the point of interest.
- Thus, image differentiation enhances edges and other discontinuities, while deemphasizing areas of low intensity variation.
Using the Second Derivative for Sharpening—The Laplacian
We can use a 2-D, second-order derivative for image sharpening. We will use the discrete formulation of the second-order derivative and construct a filter mask. We are interested in isotropic filters, whose response is independent of the direction of discontinuities. Isotropic filters are invariant to image rotation, or simply, rotation invariant.
Laplacian
A simple isotropic second-order derivative operator is the Laplacian. For a function (image) f(x, y) of two variables it is defined as

∇²f = ∂²f/∂x² + ∂²f/∂y². (12)

The second-order derivative in the x-direction is

∂²f/∂x² = f(x+1, y) + f(x−1, y) − 2f(x, y). (13)

In the y-direction, we have

∂²f/∂y² = f(x, y+1) + f(x, y−1) − 2f(x, y). (14)
Therefore, the discrete version of the Laplacian is

∇²f = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y). (15)
Figure 9 shows some implementations of the Laplacian.
 0  1  0        1  1  1        0 -1  0       -1 -1 -1
 1 -4  1        1 -8  1       -1  4 -1       -1  8 -1
 0  1  0        1  1  1        0 -1  0       -1 -1 -1

Figure 9: Laplacian kernels.
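As a quick check of Eq. (15), the following Python sketch (an illustration; the helper name laplacian is an assumption, not from the slides) evaluates the discrete Laplacian at one interior pixel:

```python
def laplacian(f, x, y):
    """Discrete Laplacian of image f at interior pixel (x, y), per Eq. (15)."""
    return (f[x + 1][y] + f[x - 1][y]
            + f[x][y + 1] + f[x][y - 1]
            - 4 * f[x][y])

# constant region -> zero response; a bright isolated pixel -> strong response
flat = [[7] * 3 for _ in range(3)]
spike = [[0] * 3 for _ in range(3)]
spike[1][1] = 100
print(laplacian(flat, 1, 1))   # 0
print(laplacian(spike, 1, 1))  # -400
```

The zero response over the flat region and the large response at the discontinuity illustrate the behavior of derivative operators described earlier.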
Because the Laplacian is a derivative operator, its use highlights the intensity discontinuities in an image and deemphasizes regions with slowly varying intensity levels. We can produce a sharpened image by combining the Laplacian with the original image. If we use the first kernel in Figure 9, we can add the image and the Laplacian. If g(x, y) is the sharpened image,

g(x, y) = f(x, y) + ∇²f(x, y). (16)

Usually, we need to re-scale the gray levels to the range [0, L−1] after this operation.
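A literal Python transcription of Eq. (16) follows (the helper name sharpen, the use of clipping rather than full re-scaling, and the test image are assumptions for illustration; note that with a kernel whose center coefficient is negative, many texts subtract the Laplacian instead of adding it):

```python
L = 256  # number of gray levels

def sharpen(f):
    """g(x, y) = f(x, y) + Laplacian, clipped to [0, L-1], per Eq. (16)."""
    h, w = len(f), len(f[0])
    g = [row[:] for row in f]
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            lap = (f[x + 1][y] + f[x - 1][y] + f[x][y + 1] + f[x][y - 1]
                   - 4 * f[x][y])
            g[x][y] = min(L - 1, max(0, f[x][y] + lap))
    return g

f = [[10, 10, 250, 250]] * 4   # a vertical step edge
g = sharpen(f)
# values change strongly only near the step; borders are left untouched
print(g[1])  # [10, 250, 10, 250]
print(g[0])  # [10, 10, 250, 250]
```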
Unsharp Masking
A process used in the printing industry is unsharp masking. It sharpens the image by subtracting an unsharp (smoothed) version of the image from the original. Unsharp masking consists of the following steps:

1 Blur the original image.
2 Subtract the blurred image from the original. The resulting difference is called the mask.
3 Add the mask to the original.
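The three steps above can be written out directly. This Python sketch is illustrative (the 1-D "image", the 3-sample box blur, and a unit weight for the mask are all assumptions):

```python
def box_blur(f):
    """Step 1: blur with a 1-D, 3-sample box filter (borders copied)."""
    out = f[:]
    for i in range(1, len(f) - 1):
        out[i] = (f[i - 1] + f[i] + f[i + 1]) / 3.0
    return out

f = [10, 10, 10, 40, 70, 70, 70]            # a smooth ramp edge
blurred = box_blur(f)                       # step 1: blur
mask = [a - b for a, b in zip(f, blurred)]  # step 2: original - blurred
g = [a + m for a, m in zip(f, mask)]        # step 3: original + mask
print(mask)  # nonzero only near the edge
print(g)     # the transition is steepened (undershoot and overshoot)
```

The mask is zero in flat regions, so unsharp masking leaves them unchanged; near the edge it adds an undershoot on the dark side and an overshoot on the bright side, which is what makes the edge look sharper.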
Using the First-Order Derivatives
First-order derivative based operations are implemented using the magnitude of the gradient. For a function f(x, y), the gradient of f at the coordinates (x, y) is defined as the two-dimensional column vector

∇f = grad(f) = [fx, fy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ. (17)

This vector has the important property that it points in the direction of the greatest rate of change of f at location (x, y).
The magnitude (length) of vector ∇f, denoted M(x, y), where

M(x, y) = mag(∇f) = √(fx² + fy²), (18)

is the value at (x, y) of the rate of change in the direction of the gradient vector. Note that M(x, y) is an image of the same size as the original, created when x and y are allowed to vary over all the pixel locations in f. This is commonly referred to as the gradient image, or simply the gradient.
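A sketch of Eq. (18) in Python (the central-difference estimates of fx and fy and the helper name gradient_magnitude are assumptions; the slides introduce Sobel operators for this purpose next):

```python
import math

def gradient_magnitude(f):
    """M(x, y) = sqrt(fx^2 + fy^2) using central differences, per Eq. (18)."""
    h, w = len(f), len(f[0])
    M = [[0.0] * w for _ in range(h)]
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            fx = (f[x + 1][y] - f[x - 1][y]) / 2.0
            fy = (f[x][y + 1] - f[x][y - 1]) / 2.0
            M[x][y] = math.sqrt(fx * fx + fy * fy)
    return M

# a vertical step edge: the gradient image is large only along the edge
f = [[0, 0, 100, 100]] * 4
M = gradient_magnitude(f)
print(M[1])  # [0.0, 50.0, 50.0, 0.0]
```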
Sobel operators are discrete approximations to the gradient. Figure 10 shows the Sobel operators.
-1  0  1        -1 -2 -1
-2  0  2         0  0  0
-1  0  1         1  2  1

(a) For fy      (b) For fx

Figure 10: Sobel operators.
Figure 11: Sobel filtering. (a) Input; (b) fx; (c) fy; (d) Magnitude.
Outline
1 Point Operations
    Histogram Processing
2 Spatial Filtering
    Smoothing Spatial Filters
    Sharpening Spatial Filters
3 Edge Detection
    Line Detection Using the Hough Transform
Segmentation subdivides an image into its constituent regions or objects. The level to which the subdivision is carried depends on the problem being solved. Segmentation of nontrivial images is one of the most difficult tasks in image processing. Segmentation accuracy determines the eventual success or failure of computerized analysis procedures. For this reason, considerable care should be taken to improve the probability of rugged segmentation. In some situations, such as industrial inspection applications, at least some measure of control over the environment is possible at times. In others, as in remote sensing, user control over image acquisition is limited principally to the choice of image sensors.

Segmentation algorithms for monochrome images generally are based on one of two basic properties of image intensity values: discontinuity and similarity. In the first category, the approach is to partition an image based on abrupt changes in intensity, such as edges in an image. The principal approaches in the second category are based on partitioning an image into regions that are similar according to a set of predefined criteria.
There are three basic types of intensity discontinuities in a digital image:

1 points,
2 lines, and
3 edges.

The most common way to look for discontinuities is to run a mask through the image.
For a 3×3 mask, this procedure involves computing the sum of products of the coefficients with the intensity levels contained in the region encompassed by the mask. That is, the response, R, of the mask at any point in the image is given by

R = Σᵢ₌₁⁹ wᵢ zᵢ,

where zᵢ is the intensity of the pixel associated with the mask coefficient wᵢ.
-1 -1 -1
-1  8 -1
-1 -1 -1

Figure 12: A mask for point detection.
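The response computation can be sketched numerically with the mask of Figure 12 (a Python illustration; the slides use MATLAB, and the helper name mask_response and the row-major flattening are assumptions):

```python
def mask_response(w, z):
    """R = sum of w_i * z_i over the 3x3 neighborhood (row-major lists)."""
    return sum(wi * zi for wi, zi in zip(w, z))

point_mask = [-1, -1, -1,
              -1,  8, -1,
              -1, -1, -1]

flat = [20] * 9          # constant-intensity region
isolated = [20] * 9
isolated[4] = 120        # bright isolated point at the center

print(mask_response(point_mask, flat))      # 0
print(mask_response(point_mask, isolated))  # 800
```

This shows the two requirements stated on the next slide: zero response in constant areas, and the strongest response when the mask is centered on an isolated point.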
Point Detection
The detection of isolated points embedded in areas of constant or nearly constant intensity in an image is straightforward in principle. Using the mask shown in Figure 12, we say that an isolated point has been detected at the location on which the mask is centered if

R ≥ T,

where T is a nonnegative threshold. Point detection is implemented in MATLAB using function imfilter with a mask. The important requirements are that the strongest response of a mask must be when the mask is centered on an isolated point, and that the response be 0 in areas of constant intensity.
If T is given, the following command implements the point-detection approach just discussed:

g = abs(imfilter(double(f), w)) >= T;

where f is the input image, w is an appropriate point-detection mask, and g is the resulting image.
Line Detection
The next level of complexity is line detection. Consider the masks in Figure 13.
-1 -1 -1        2 -1 -1       -1  2 -1       -1 -1  2
 2  2  2       -1  2 -1       -1  2 -1       -1  2 -1
-1 -1 -1       -1 -1  2       -1  2 -1        2 -1 -1

(a) Horizontal  (b) +45°      (c) Vertical   (d) −45°

Figure 13: Line detector masks.
If the first mask were moved around an image, it would respond more strongly to lines (one pixel thick) oriented horizontally. With a constant background, the maximum response would result when the line passed through the middle row of the mask. Similarly, the second mask in Figure 13 responds best to lines oriented at +45°; the third mask to vertical lines; and the fourth mask to lines in the −45° direction. Note that the preferred direction of each mask is weighted with a larger coefficient (i.e., 2) than other possible directions. The coefficients of each mask sum to zero, indicating a zero response from the mask in areas of constant intensity.
Edge Detection Using Function edge
Although point and line detection certainly are important in any discussion on image segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in intensity values. Such discontinuities are detected by using first- and second-order derivatives. The first-order derivative of choice in image processing is the gradient. The gradient of a 2-D function, f(x, y), is defined as the vector

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ.
The magnitude of this vector is

|∇f| = [Gx² + Gy²]^(1/2).

To simplify computation, this quantity is approximated sometimes by omitting the square-root operation,

|∇f| ≈ Gx² + Gy²,

or by using absolute values,

|∇f| ≈ |Gx| + |Gy|.
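The three forms can be compared numerically (a quick Python illustration; the gradient component values are arbitrary):

```python
import math

Gx, Gy = 3.0, 4.0

exact = math.sqrt(Gx ** 2 + Gy ** 2)  # [Gx^2 + Gy^2]^(1/2)
no_sqrt = Gx ** 2 + Gy ** 2           # square root omitted
abs_sum = abs(Gx) + abs(Gy)           # absolute-value approximation

print(exact, no_sqrt, abs_sum)        # 5.0 25.0 7.0
```

All three are zero in constant-intensity areas and grow with the strength of the intensity change; only their scales differ, which is why the cheaper forms are acceptable when a threshold is tuned for them.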
These approximations still behave as derivatives; that is, they are zero in areas of constant intensity and their values are proportional to the degree of intensity change in areas whose pixel values are variable. It is common practice to refer to the magnitude of the gradient or its approximations simply as "the gradient." A fundamental property of the gradient vector is that it points in the direction of the maximum rate of change of f at coordinates (x, y). The angle at which this maximum rate of change occurs is

α(x, y) = tan⁻¹(Gy/Gx).

One of the key issues is how to estimate the derivatives Gx and Gy digitally. The various approaches used by function edge are discussed later in this section.
Second-order derivatives in image processing are generally computed using the Laplacian. That is, the Laplacian of a 2-D function f(x, y) is formed from second-order derivatives, as follows:

∇²f(x, y) = ∂²f(x, y)/∂x² + ∂²f(x, y)/∂y².

The Laplacian is seldom used by itself for edge detection because, as a second-order derivative, it is unacceptably sensitive to noise, its magnitude produces double edges, and it is unable to detect edge direction. However, the Laplacian can be a powerful complement when used in combination with other edge-detection techniques. For example, although its double edges make it unsuitable for edge detection directly, this property can be used for edge location.
With the preceding discussion as background, the basic idea behind edge detection is to find places in an image where the intensity changes rapidly, using one of two general criteria:

1 find places where the first derivative of the intensity is greater in magnitude than a specified threshold, or
2 find places where the second derivative of the intensity has a zero crossing.
IPT’s function edge provides several derivative estimators based on the criteria just discussed. For some of these estimators, it is possible to specify whether the edge detector is sensitive to horizontal or vertical edges or to both. The general syntax for this function is

[g, t] = edge(f, 'method', parameters)

where f is the input image, method is one of the approaches listed in its help, and parameters are additional parameters explained in the following discussion. In the output, g is a logical array with 1s at the locations where edge points were detected in f and 0s elsewhere. Parameter t is optional; it gives the threshold used by edge to determine which gradient values are strong enough to be called edge points.
Sobel Edge Detector
The Sobel edge detector uses the masks in Figure 14 to approximate digitally the first derivatives Gx and Gy. In other words, the gradient at the center point in a neighborhood is computed as follows by the Sobel detector:

g = |∇f| = [Gx² + Gy²]^(1/2).
z1 z2 z3
z4 z5 z6
z7 z8 z9

(a) General mask

-1  0  1        -1 -2 -1
-2  0  2         0  0  0
-1  0  1         1  2  1

(b) Sobel, vertical    (c) Sobel, horizontal

-1  0  1        -1 -1 -1
-1  0  1         0  0  0
-1  0  1         1  1  1

(d) Prewitt, vertical  (e) Prewitt, horizontal

Figure 14: Edge detector masks.
Sobel edge detection can be implemented by filtering an image, f (using imfilter), with the left mask in Figure 14, filtering f again with the other mask, squaring the pixel values of each filtered image, adding the two results, and computing their square root. Similar comments apply to the other entries in Figure 14. Function edge simply packages the preceding operations into the function call and adds other features, such as accepting a threshold value or determining a threshold automatically. In addition, edge contains edge detection techniques that are not implementable directly with imfilter.
The general calling syntax for the Sobel detector is

[g, t] = edge(f, 'sobel', T, dir)

where f is the input image, T is a specified threshold, and dir specifies the preferred direction of the edges detected: 'horizontal', 'vertical', or 'both' (the default). As noted earlier, g is a logical image containing 1s at locations where edges were detected and 0s elsewhere. Parameter t in the output is optional. It is the threshold value used by edge. If T is specified, then t = T. Otherwise, if T is not specified (or is empty, [ ]), edge sets t equal to a threshold it determines automatically and then uses for edge detection. One of the principal reasons for including t in the output argument is to get an initial value for the threshold. Function edge uses the Sobel detector as a default if the syntax g = edge(f) is used.
Prewitt Edge Detector
The Prewitt edge detector uses the masks in Figure 14 to approximate digitally the first derivatives. Its general calling syntax is

[g, t] = edge(f, 'prewitt', T, dir)

The parameters of this function are identical to the Sobel parameters. The Prewitt detector is slightly simpler to implement computationally than the Sobel detector, but it tends to produce somewhat noisier results. (It can be shown that the coefficient with value 2 in the Sobel detector provides smoothing.)
Laplacian of a Gaussian (LoG) Detector
Consider the Gaussian function

h(r) = −e^(−r²/2σ²),

where r² = x² + y² and σ is the standard deviation. This is a smoothing function which, if convolved with an image, will blur it. The degree of blurring is determined by the value of σ. The Laplacian of this function (the second derivative with respect to r) is

∇²h(r) = −[(r² − σ²)/σ⁴] e^(−r²/2σ²).
This function is called the Laplacian of Gaussian (LoG). Because the second derivative is a linear operation, convolving (filtering) an image with ∇²h(r) is the same as convolving the image with the smoothing function first and then computing the Laplacian of the result. This is the key concept underlying the LoG detector. We convolve the image with ∇²h(r) knowing that it has two effects: it smooths the image (thus reducing noise), and it computes the Laplacian, which yields a double-edge image. Locating edges then consists of finding the zero crossings between the double edges.
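A LoG kernel can be sampled directly from the formula for ∇²h(r). This Python sketch is illustrative (the kernel half-width of 3σ and the helper name log_kernel are assumptions, following common practice rather than the slides):

```python
import math

def log_kernel(sigma):
    """Sample the Laplacian of Gaussian on an integer grid."""
    half = int(math.ceil(3 * sigma))  # cover roughly +/- 3 sigma
    size = 2 * half + 1
    k = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            x, y = i - half, j - half
            r2 = x * x + y * y
            # -[(r^2 - sigma^2)/sigma^4] * exp(-r^2 / (2 sigma^2))
            k[i][j] = (-(r2 - sigma ** 2) / sigma ** 4) * math.exp(-r2 / (2 * sigma ** 2))
    return k

k = log_kernel(2.0)
half = len(k) // 2
print(k[half][half] > 0)      # the center has one sign ...
print(k[half][half + 3] < 0)  # ... samples farther than sigma have the other
```

The sign change at r = σ gives the kernel its "Mexican hat" shape; the zero crossings that the detector looks for in the filtered image come from this double-signed response.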
The general calling syntax for the LoG detector is

[g, t] = edge(f, 'log', T, sigma)

where sigma is the standard deviation and the other parameters are as explained previously. The default value of sigma is 2. As before, edge ignores any edges that are not stronger than T. If T is not provided, or it is empty, [ ], edge chooses the value automatically. Setting T to 0 produces edges that are closed contours, a familiar characteristic of the LoG method.
Zero-Crossings Detector
This detector is based on the same concept as the LoG method, but the convolution is carried out using a specified filter function, H. The calling syntax is

[g, t] = edge(f, 'zerocross', T, H)

The other parameters are as explained for the LoG detector.
Canny Edge Detector
The Canny detector [1] is the most powerful edge detector provided by function edge. The method can be summarized as follows:

1 The image is smoothed using a Gaussian filter with a specified standard deviation, σ, to reduce noise.
2 The local gradient, g(x, y) = [Gx² + Gy²]^(1/2), and edge direction, α(x, y) = tan⁻¹(Gy/Gx), are computed at each point, using an operator such as Sobel or Prewitt. An edge point is defined to be a point whose strength is locally maximum in the direction of the gradient.
3 The edge points determined in (2) give rise to ridges in the gradient magnitude image. The algorithm then tracks along the top of these ridges and sets to zero all pixels that are not actually on the ridge top, so as to give a thin line in the output, a process known as nonmaximal suppression. The ridge pixels are then thresholded using two thresholds, T1 and T2, with T1 < T2. Ridge pixels with values greater than T2 are said to be "strong" edge pixels. Ridge pixels with values between T1 and T2 are said to be "weak" edge pixels.
4 Finally, the algorithm performs edge linking by incorporating the weak pixels that are 8-connected to the strong pixels.
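Steps 3 and 4, the double threshold and the edge linking, can be sketched as follows (a Python illustration with assumed names; this single pass is a simplification, since the full algorithm also propagates connectivity through chains of weak pixels):

```python
def hysteresis(mag, t1, t2):
    """Double threshold, then keep weak pixels that are 8-connected to strong ones."""
    h, w = len(mag), len(mag[0])
    strong = {(x, y) for x in range(h) for y in range(w) if mag[x][y] > t2}
    weak = {(x, y) for x in range(h) for y in range(w) if t1 < mag[x][y] <= t2}
    out = set(strong)
    for (x, y) in weak:
        # any 8-neighbor in the strong set links this weak pixel in
        if any((x + dx, y + dy) in strong
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)):
            out.add((x, y))
    return out

mag = [[0, 0, 0, 0],
       [0, 9, 4, 0],   # 9 is strong; the 4 next to it gets linked in
       [0, 0, 0, 4],   # this 4 has no strong neighbor and is dropped
       [0, 0, 0, 0]]
edges = hysteresis(mag, t1=2, t2=5)
print(sorted(edges))  # [(1, 1), (1, 2)]
```

The two identical weak responses are treated differently depending on their context, which is exactly the point of hysteresis thresholding: isolated weak responses are treated as noise, while weak responses continuing a strong edge are kept.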
The syntax for the Canny edge detector is

[g, t] = edge(f, 'canny', T, sigma)

where T is a vector, T = [T1, T2], containing the two thresholds explained in step 3 of the preceding procedure, and sigma is the standard deviation of the smoothing filter. If t is included in the output argument, it is a two-element vector containing the two threshold values used by the algorithm. The rest of the syntax is as explained for the other methods, including the automatic computation of thresholds if T is not supplied. The default value for sigma is 1.
We can extract and display the vertical edges in the image, f:

[gv, t] = edge(f, 'sobel', 'vertical');
imshow(gv)
t

t = 0.0516
Outline
1 Point Operations
    Histogram Processing
2 Spatial Filtering
    Smoothing Spatial Filters
    Sharpening Spatial Filters
3 Edge Detection
    Line Detection Using the Hough Transform
Ideally, the methods discussed in the previous section should yield pixels lying only on edges. In practice, the resulting pixels seldom characterize an edge completely because of noise, breaks in the edge from nonuniform illumination, and other effects that introduce spurious intensity discontinuities. Thus, edge-detection algorithms typically are followed by linking procedures to assemble edge pixels into meaningful edges. One approach that can be used to find and link line segments in an image is the Hough transform.
Given a set of points in an image (typically a binary image), suppose that we want to find subsets of these points that lie on straight lines. One possible solution is to first find all lines determined by every pair of points and then find all subsets of points that are close to particular lines. The problem with this procedure is that it involves finding n(n−1)/2 ∼ n² lines and then performing n · n(n−1)/2 ∼ n³ comparisons of every point to all lines. This approach is computationally prohibitive in all but the most trivial applications.
With the Hough transform, on the other hand, we consider a point (xi, yi) and all the lines that pass through it. Infinitely many lines pass through (xi, yi), all of which satisfy the slope-intercept equation yi = a xi + b for some values of a and b. Writing this equation as b = −a xi + yi and considering the ab-plane (also called parameter space) yields the equation of a single line for a fixed pair (xi, yi). Furthermore, a second point (xj, yj) also has a line in parameter space associated with it, and this line intersects the line associated with (xi, yi) at (a′, b′), where a′ is the slope and b′ the intercept of the line containing both (xi, yi) and (xj, yj) in the xy-plane. In fact, all points contained on this line have lines in parameter space that intersect at (a′, b′).
In principle, the parameter-space lines corresponding to all image points (xi, yi) could be plotted, and the image lines could then be identified where large numbers of parameter-space lines intersect. A practical difficulty with this approach, however, is that a (the slope of the line) approaches infinity as the line approaches the vertical direction. One way around this difficulty is to use the normal representation of a line:

x cos θ + y sin θ = ρ.
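A quick check of why the normal form sidesteps the infinite-slope problem (a sketch with a hypothetical helper, not from the slides): a vertical line such as x = 3, which has no finite slope-intercept representation, is simply θ = 0, ρ = 3, since every point on it yields the same ρ.

```python
from math import cos, sin

def normal_rho(x, y, theta):
    """Normal-form parameter rho of the line through (x, y) at angle theta."""
    return x * cos(theta) + y * sin(theta)

# Two points on the vertical line x = 3 agree on rho at theta = 0:
print(normal_rho(3, 0, 0.0), normal_rho(3, 5, 0.0))  # → 3.0 3.0
```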
The computational attractiveness of the Hough transform arises from subdividing the ρθ parameter space into so-called accumulator cells. Usually the maximum range of the values is −90° ≤ θ ≤ 90° and −D ≤ ρ ≤ D, where D is the maximum distance between opposite corners of the image. The cell at coordinates (i, j), with accumulator value A(i, j), corresponds to the square associated with parameter-space coordinates (ρi, θj). Initially, these cells are set to zero. Then, for every non-background point (xk, yk) in the image plane, we let θ equal each of the allowed subdivision values on the θ-axis and solve for the corresponding ρ using the equation ρk = xk cos θ + yk sin θ. The resulting ρ-values are rounded off to the nearest allowed cell value along the ρ-axis, and the corresponding accumulator cell is incremented. At the end of this procedure, a value of Q in A(i, j) means that Q points in the xy-plane lie on the line x cos θj + y sin θj = ρi. The number of subdivisions in the ρθ-plane determines the accuracy of the collinearity of these points.
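The voting procedure above can be sketched in a few lines of Python (a minimal illustration; the cell counts `n_theta` and `n_rho` and the helper name are assumptions, not the slides' exact subdivision):

```python
from math import cos, sin, radians

def hough_accumulator(points, diag, n_theta=180, n_rho=200):
    """Accumulate Hough votes for a list of (x, y) foreground pixels.
    diag is D, the maximum corner-to-corner distance of the image;
    theta spans [-90, 90] degrees and rho spans [-diag, diag]."""
    thetas = [radians(-90 + 180 * j / (n_theta - 1)) for j in range(n_theta)]
    A = [[0] * n_theta for _ in range(n_rho)]
    for x, y in points:
        for j, t in enumerate(thetas):
            r = x * cos(t) + y * sin(t)              # rho for this (point, theta)
            i = round((r + diag) / (2 * diag) * (n_rho - 1))  # nearest rho cell
            A[i][j] += 1                             # one vote per cell
    return A

# Four collinear points on y = x concentrate 4 votes in a single cell:
pts = [(0, 0), (1, 1), (2, 2), (3, 3)]
A = hough_accumulator(pts, diag=10)
print(max(max(row) for row in A))  # → 4
```

Each point votes once in every θ column, so a peak of Q in the accumulator identifies Q (approximately) collinear points, exactly as described above.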
Using houghlines to Find and Link Line Segments
```matlab
lines = houghlines(f, theta, rho, r, c);
figure, imshow(f), hold on
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,2), xy(:,1), 'LineWidth', 4, 'Color', [.6 .6 .6]);
end
```
J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679–698, 1986.