
Boundary Detection in Medical Images Using Edge Following Algorithm Based on Intensity Gradient and Texture Gradient Features

Krit Somkantha, Nipon Theera-Umpon*, Senior Member, IEEE, and Sansanee Auephanwiriyakul, Senior Member, IEEE

Abstract: Finding the correct boundary in noisy images is still a difficult task. This paper introduces a new edge following technique for boundary detection in noisy images. Utilization of the proposed technique is exhibited via its application to various types of medical images. Our proposed technique can detect the boundaries of objects in noisy images using the information from the intensity gradient via the vector image model and the texture gradient via the edge map. The performance and robustness of the technique have been tested to segment objects in synthetic noisy images and medical images including prostates in ultrasound images, left ventricles in cardiac magnetic resonance (MR) images, aortas in cardiovascular MR images, and knee joints in computerized tomography images. We compare the proposed segmentation technique with the active contour models (ACM), geodesic active contour models, active contours without edges, gradient vector flow snake models, and ACMs based on vector field convolution, by using the skilled doctors' opinions as the ground truths. The results show that our technique performs very well and yields better performance than the classical contour models. The proposed method is robust and applicable on various kinds of noisy images without prior knowledge of noise properties.

Index Terms: Boundary extraction, edge detection, edge following, image segmentation, vector field model.

    I. INTRODUCTION

IMAGE segmentation is an initial step before performing high-level tasks such as object recognition and understanding. Image segmentation is typically used to locate objects and boundaries in images. In medical imaging, segmentation is important for feature extraction, image measurements, and image display.

Manuscript received June 22, 2010; revised October 3, 2010 and October 28, 2010; accepted October 29, 2010. Date of publication November 9, 2010; date of current version February 18, 2011. This work was supported in part by a grant under the program Strategic Scholarships for Frontier Research Network for Ph.D. Program Thai Doctoral degree from the Commission on Higher Education, Thailand. Asterisk indicates corresponding author.

K. Somkantha is with the Department of Electrical Engineering, Faculty of Engineering, Chiang Mai University, Chiang Mai 50200, Thailand (e-mail: [email protected]).

*N. Theera-Umpon is with the Department of Electrical Engineering, Faculty of Engineering, and the Biomedical Engineering Center, Chiang Mai University, Chiang Mai 50200, Thailand (e-mail: [email protected]).

S. Auephanwiriyakul is with the Department of Computer Engineering, Faculty of Engineering, and the Biomedical Engineering Center, Chiang Mai University, Chiang Mai 50200, Thailand (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TBME.2010.2091129

In some applications it may be useful to extract boundaries of objects of interest from ultrasound images [1], [2], microscopic images [3]–[5], magnetic resonance (MR) images [6]–[8], or computerized tomography (CT) images [9], [10]. Segmentation techniques can be divided into classes in many ways, depending on the classification scheme. The most commonly used segmentation techniques can be categorized into two classes, i.e., edge-based approaches and region-based approaches. The strategy of edge-based approaches is to detect the object boundaries by using an edge detection operator and then extract the boundaries by using the edge information. The problem of edge detection is the presence of noise, which results in random variation in level from pixel to pixel; therefore, ideal edges are never encountered in real images [11], [12]. A great diversity of edge detection algorithms have been devised with differences in their mathematical and algorithmic properties, such as Roberts, Sobel, Prewitt, Laplacian, and Canny, all of which are based on the difference of gray levels [13]–[16]. The difference of gray levels can be used to detect the discontinuity of gray levels. On the other hand, region-based approaches are based on the similarity of regional image data. Some of the more widely used approaches are thresholding, clustering, region growing, and splitting and merging [17]. However, these approaches often fail to extract the correct boundaries of objects in noisy images, and evaluating the quality of segmentation results remains a challenging problem.

In recent years, there have been several new methods to solve the problem of boundary detection, e.g., the active contour model (ACM), the geodesic active contour (GAC) model, active contours without edges (ACWE), the gradient vector flow (GVF) snake model, and the vector field convolution (VFC) snake model. The snake models have become popular especially in boundary detection, where the problem is more challenging due to the poor quality of the images. The ACMs, also known as snakes, are curves defined within an image domain that can be moved under the influence of internal energy and external energy [18]–[20]. The internal energy is designed to keep the model smooth during deformation. The external energy is designed to move the model toward an object boundary or other desired features within an image. However, the snake has the weaknesses and limitations of a small capture range and difficulty progressing into concave boundary regions. The GAC model is an extension of the ACM that takes into account the geometric information of an image [21]. An ACM based on curve evolution and level sets, namely the ACWE, can detect object


boundaries that are not necessarily defined by the gradient [22]. The GVF is an ACM with a new external energy [23], [24]. This new external energy is computed as a diffusion of the gray-level gradient vectors of a binary edge map derived from the image. The resultant field has a large capture range and forces active contours into concave regions. VFC was applied as a new external energy of an ACM and proved to be better than the existing snake models [25]. However, when an image is highly noisy and possesses a complex background, determining the correct boundaries of objects from the gradients is not easy. Most snake models also require a very accurate initial contour estimate of the object; the contour of a snake model can converge to a wrong boundary if the initial contour is not close enough to the desired boundary.

Though many algorithms for boundary detection have been developed to achieve good performance in the field of image processing, most algorithms for detecting the correct boundaries of objects have difficulties in medical images, in which ill-defined edges are encountered [26]–[28]. Medical images are often noisy and too complex to expect local, low-level operations to generate perfect primitives. The complexity of medical images renders correct boundary detection very difficult.

To remedy the problem, we propose a new technique for boundary detection of ill-defined edges in noisy images using a novel edge following approach. The proposed edge following technique is based on the vector image model and the edge map. The vector image model provides a more complete description of an image by considering both the directions and magnitudes of image edges. From the vector image model, a derivative-based edge operator is applied to yield the edge field [29]. The proposed edge vector field is generated by averaging the magnitudes and directions in the vector image. The edge map is derived from Laws' texture feature [30] and the Canny edge detection [31]. The vector image model and the edge map are applied to select the best edges.

This paper is organized as follows. Section II describes the proposed boundary detection technique. In Section III, we show the experimental results on synthetic noisy images, prostate ultrasound images, left ventricles in cardiac MR images, aortas in cardiovascular MR images, and knee joints in CT images. The results of our technique are compared with those of five snake models by using the opinions of skilled doctors as the ground truth. Section IV concludes this paper.

    II. BOUNDARY EXTRACTION ALGORITHM

    A. Average Edge Vector Field Model

We exploit the edge vector field to devise a new boundary extraction algorithm [29]. Given an image f(x, y), the edge vector field is calculated according to the following equations:

$$\mathbf{e}(i,j) = \frac{1}{k}\left(M_x(i,j)\,\hat{\mathbf{i}} + M_y(i,j)\,\hat{\mathbf{j}}\right) \tag{1}$$

$$\mathbf{e}(i,j) \approx \frac{1}{k}\left(\frac{\partial f(x,y)}{\partial y}\,\hat{\mathbf{i}} - \frac{\partial f(x,y)}{\partial x}\,\hat{\mathbf{j}}\right) \tag{2}$$

$$k = \max_{i,j}\sqrt{M_x(i,j)^2 + M_y(i,j)^2}. \tag{3}$$

Fig. 1. (a) Original unclear image. (b) Result from the edge vector field and zoomed-in image. (c) Result from the proposed average edge vector field and zoomed-in image.

Each component is the convolution between the image and the corresponding difference mask, i.e.,

$$M_x(i,j) = G_y(x,y) * f(x,y) \approx \frac{\partial f(x,y)}{\partial y} \tag{4}$$

$$M_y(i,j) = -G_x(x,y) * f(x,y) \approx -\frac{\partial f(x,y)}{\partial x} \tag{5}$$

where G_x and G_y are the difference masks of the Gaussian-weighted image moment vector operator in the x and y directions, respectively [29]:

$$G_x(x,y) = \frac{1}{2\pi}\,\frac{x}{x^2+y^2}\,\exp\!\left(-\frac{x^2+y^2}{2\sigma^2}\right) \tag{6}$$

$$G_y(x,y) = \frac{1}{2\pi}\,\frac{y}{x^2+y^2}\,\exp\!\left(-\frac{x^2+y^2}{2\sigma^2}\right). \tag{7}$$
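To make the construction of the edge vector field concrete, the following Python sketch builds discrete versions of the masks in (6) and (7) and convolves them with an image to obtain M_x, M_y, and the normalized field of (1)–(3). The mask size, the value of sigma, the handling of the singular center element, and the sign convention (which follows the reconstruction in (2) and (5)) are illustrative assumptions; this is a sketch, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_moment_masks(size=7, sigma=1.5):
    """Discrete difference masks G_x, G_y of (6) and (7).
    size and sigma are illustrative choices, not values from the paper."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r2 = x**2 + y**2
    r2[half, half] = 1.0                      # avoid division by zero at the center
    g = np.exp(-r2 / (2.0 * sigma**2)) / (2.0 * np.pi)
    gx = (x / r2) * g
    gy = (y / r2) * g
    gx[half, half] = gy[half, half] = 0.0     # the operator is odd, so the center is zero
    return gx, gy

def edge_vector_field(f, size=7, sigma=1.5):
    """Edge vector field of (1)-(5): returns (M_x / k, M_y / k)."""
    gx, gy = gaussian_moment_masks(size, sigma)
    mx = convolve(f.astype(float), gy)        # M_x = G_y * f, cf. (4)
    my = -convolve(f.astype(float), gx)       # M_y = -G_x * f, cf. (5); sign is an assumption
    k = max(np.sqrt(mx**2 + my**2).max(), 1e-12)   # normalization constant of (3)
    return mx / k, my / k
```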

Edge vectors of an image indicate the magnitudes and directions of edges, which form a vector stream flowing around an object. However, in an unclear image, the vectors derived from the edge vector field may be distributed randomly in magnitude and direction. Therefore, we extend the capability of the previous edge vector field by applying a local averaging operation, in which the value of each vector is replaced by the average of all the values in the local neighborhood, i.e.,

$$M(i,j) = \frac{1}{M_r}\sum_{(i,j)\in N}\sqrt{M_x(i,j)^2 + M_y(i,j)^2} \tag{8}$$

$$D(i,j) = \frac{1}{M_r}\sum_{(i,j)\in N}\tan^{-1}\!\left(\frac{M_y(i,j)}{M_x(i,j)}\right) \tag{9}$$

where M_r is the total number of pixels in the neighborhood N. We apply a 3 × 3 window as the neighborhood N throughout our research.
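A minimal sketch of the averaging step in (8) and (9), assuming the 3 × 3 neighborhood used in the paper and the fields mx, my from the previous sketch; uniform_filter is simply a box average, and the use of arctan2 (rather than a plain arctangent) is an implementation choice to retain quadrant information.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def average_edge_vector_field(mx, my):
    """Average magnitude M(i,j) and direction D(i,j) over a 3x3 window, per (8)-(9)."""
    magnitude = np.sqrt(mx**2 + my**2)
    direction = np.arctan2(my, mx)              # tan^-1(M_y / M_x), quadrant-aware variant
    m_avg = uniform_filter(magnitude, size=3)   # (1/M_r) * sum over the neighborhood N
    d_avg = uniform_filter(direction, size=3)
    return m_avg, d_avg
```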

An example of the edge vector field and the average edge vector field is displayed in Fig. 1. Fig. 1(b) and (c) shows the results of the edge vector field and the average edge vector field of the original image in Fig. 1(a). From the result, we can see that our proposed edge vector field yields more descriptive vectors along the object edge than the original edge vector field.


Fig. 2. (a) Synthetic noisy image. (b) Left ventricle in the MR image. (c) Prostate ultrasound image. (d)–(f) Corresponding edge maps derived from Laws' texture and Canny edge detection.

This idea is exploited for the boundary extraction algorithm of objects in unclear images.

    B. Edge Map

The edge map consists of the edges of objects in an image, derived from Laws' texture and Canny edge detection. It gives important information about the boundary of objects in the image, which is exploited in the decision step of the edge following.

1) Laws' Texture: The texture feature images of Laws' texture are computed by convolving an input image with each of the masks. Given a column vector L = (1, 4, 6, 4, 1)^T, the 2-D mask l(i, j) used for texture discrimination in this research is generated by L · L^T. The output image is obtained by convolving the input image with the texture mask.

2) Canny Edge Detection: The Canny approach to edge detection is optimal for step edges corrupted by white Gaussian noise. This edge detector is assumed to be the output of a filter that reduces the noise and locates the edges. The first step of Canny edge detection is to convolve the output image t(i, j) obtained from the aforementioned Laws' texture with a Gaussian filter. The second step is to calculate the magnitude and direction of the gradient. The third step is nonmaximal suppression to identify edges: the broad ridges in the magnitude must be thinned so that only the magnitudes at the points of the greatest local change remain. The last step is thresholding to detect and link edges, for which the double-threshold algorithm is used.
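As a concrete illustration of how such an edge map might be produced, the sketch below convolves a grayscale image with the 2-D Laws mask L · L^T and then applies OpenCV's Canny detector as a stand-in for the four steps described above; the mask normalization and the two Canny thresholds are assumptions, not values given in the paper.

```python
import numpy as np
import cv2

def edge_map(image_gray, canny_low=50, canny_high=150):
    """Edge map E(i,j): Laws' texture image followed by Canny edge detection."""
    L = np.array([1, 4, 6, 4, 1], dtype=np.float32)
    mask = np.outer(L, L)                              # 2-D texture mask l = L * L^T
    mask /= mask.sum()                                 # normalization is an added assumption
    texture = cv2.filter2D(image_gray.astype(np.float32), -1, mask)   # texture image t(i,j)
    texture = cv2.normalize(texture, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.Canny(texture, canny_low, canny_high) > 0              # binary edge map
```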

The edge map shows some important information about the edges. This idea is exploited for extracting object boundaries in unclear images. Examples of the edge maps are shown in Fig. 2.

    C. Edge Following Technique

The edge following technique is performed to find the boundary of an object. Most edge following algorithms take into account the edge magnitude as the primary information for edge following. However, the edge magnitude information is not efficient enough for searching for the correct boundary of objects in noisy images because it can be very weak in some contour areas.

Fig. 3. Edge masks used for detecting image edges (normal direction constraint).

This is exactly the reason why many edge following techniques fail to extract the correct boundary of objects in noisy images. To remedy the problem, we propose an edge following technique that uses information from both the average edge vector field and the edge map. This gives more information for searching for the boundary of objects and increases the probability of finding the correct boundary. The magnitude and direction of the average edge vector field give information about the boundary, which flows around an object. In addition, the edge map gives information about edges that may be part of the object boundary. Hence, both the average edge vector field and the edge map are exploited in the decision of the edge following technique. At the position (i, j) of an image, the successive positions of the edges are then calculated by a 3 × 3 matrix

$$L_{ij}(r,c) = \alpha M_{ij}(r,c) + \beta D_{ij}(r,c) + \gamma E_{ij}(r,c), \qquad 0 \le r \le 2,\; 0 \le c \le 2 \tag{10}$$

where α, β, and γ are the weight parameters that control the edge to flow around an object. A larger value of an element in Lij indicates a stronger edge in the corresponding direction. The 3 × 3 matrices Mij, Dij, and Eij are calculated as follows:

$$M_{ij}(r,c) = \frac{M(i+r-1,\,j+c-1)}{\max_{i,j} M(i,j)} \tag{11}$$

$$D_{ij}(r,c) = 1 - \frac{1}{\pi}\,\bigl|D(i,j) - D(i+r-1,\,j+c-1)\bigr| \tag{12}$$

$$E_{ij}(r,c) = E(i+r-1,\,j+c-1), \qquad 0 \le r \le 2,\; 0 \le c \le 2 \tag{13}$$

where M(i, j) and D(i, j) are the proposed average magnitude and direction of the edge vector field as given in (8) and (9), and E(i, j) is the edge map from Laws' texture and Canny edge detection. It should be noted that the value of each element in the matrices Mij, Dij, and Eij ranges between 0 and 1.

Let Ck, k = 1, 2, . . . , 8, be the constraint masks of edge following to the next direction along the object boundary, as shown in Fig. 3. The constraint mask is selected by considering the direction of the vector model at a position (i, j); the mask whose direction is most similar to that of the vector is selected as the constraint for the edge following. The value of each element in each mask dictates the corresponding direction. At the position (i, j), the next direction of the edge following technique is selected as the direction that gives the maximum value of the element-wise multiplication between Lij and Ck.


Fig. 4. (a) Edge map [E(i, j)]. (b) Results of counting the connected pixels [C(i, j)]. (c) Density of edge length [L(i, j)].

The next direction can be calculated by

$$D_{ij,\mathrm{opt}} = \arg\max_{k}\,\sum_{r=0}^{2}\sum_{c=0}^{2} L_{ij}(r,c)\,C_k(r,c) \tag{14}$$

where k = 1, 2, . . . , 8 denotes the eight directions indicated by the arrows at the centers of the masks shown in Fig. 3. The edge following starts from the initial position and proceeds until the end position is reached.
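The decision rule of (10)–(14) can be summarized in a short sketch. The eight constraint masks Ck of Fig. 3 are not reproduced in the text, so they are taken here as caller-supplied 3 × 3 arrays; the weights use one of the settings reported in Section III, and the π normalization in (12) follows the reconstruction above.

```python
import numpy as np

def next_direction(i, j, M, D, E, Ck, alpha=0.5, beta=0.2, gamma=0.3):
    """One step of the edge following decision, per (10)-(14).
    M, D are the average magnitude/direction fields of (8)-(9), E is the
    edge map, and Ck is a list of eight 3x3 constraint masks (Fig. 3)."""
    win = np.s_[i - 1:i + 2, j - 1:j + 2]               # 3x3 neighborhood around (i, j)
    Mij = M[win] / M.max()                              # (11)
    Dij = 1.0 - np.abs(D[i, j] - D[win]) / np.pi        # (12)
    Eij = E[win].astype(float)                          # (13)
    Lij = alpha * Mij + beta * Dij + gamma * Eij        # (10)
    scores = [np.sum(Lij * C) for C in Ck]              # element-wise products of (14)
    return int(np.argmax(scores))                       # index k of the next direction
```

Starting from the initial position described in the next subsection, this decision is applied repeatedly, moving one pixel at a time in the chosen direction, until the contour closes.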

    D. Initial Position

In this section, we present a technique for determining a good initial position for the edge following that can be used for the boundary detection. The initial position problem is very important in the classical contour models: snake models can converge to a wrong boundary if the initial position is not close enough to the desired boundary, and finding the initial position for the classical contour models is still difficult and time consuming [32], [33]. In the proposed technique, the initial position of the edge following is determined by the following steps.

The first step is to calculate the average magnitude [M(i, j)] using (8). A position with high magnitude should be a good candidate for a strong edge in the image. The second step is to calculate the density of edge length for each pixel from an edge map. The edge map [E(i, j)], as a binary image, is obtained by Laws' texture and Canny edge detection. The idea of using the density is to obtain a measurement of the edge length. The density of edge length [L(i, j)] at each pixel can be calculated from

$$L(i,j) = \frac{C(i,j)}{\max_{i,j} C(i,j)} \tag{15}$$

where C(i, j) is the number of connected pixels at each pixel position. An example of counting the number of connected pixels is shown in Fig. 4(a) and (b), and the density of edge length from the example is shown in Fig. 4(c). The third step is to calculate the initial position map P(i, j) from the summation of the average magnitude and the density of edge length, i.e.,

$$P(i,j) = \frac{1}{2}\bigl(M(i,j) + L(i,j)\bigr). \tag{16}$$

The last step is the thresholding of the initial position map: we threshold the map in order to detect the initial position of the edge following. If P(i, j) > Tmax, then P(i, j) is an initial position of the edge following. We obtained the initial position by setting Tmax to 95% of the maximum value. The initial positions from our method are positions that are close to the edges of the areas of interest.

Fig. 5. (a) Aorta in cardiovascular MR image. (b) Averaged magnitude [M(i, j)]. (c) Density of edge length [L(i, j)]. (d) Initial position map [P(i, j)] and initial position of edge following derived by thresholding with Tmax = 0.95.

An example of the initial position derived from our method is shown in Fig. 5 to illustrate that the method can be applied to ill-defined edges in medical images.

We can see that any one of the white circle points in the initial position map is a good candidate for the initial position of our edge following technique. However, the point with the maximum value among the white circle points is used in this research. After determining a suitable initial position, the proposed technique follows edges along the object boundary until a closed-loop contour is achieved. This causes a limitation of the technique, i.e., the boundary must be a closed loop.
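The four steps above can be sketched as follows; the connected-pixel count C(i, j) is implemented here with 8-connected components, and the normalization of M before the summation in (16) is an added assumption, since the paper does not state either detail.

```python
import numpy as np
from scipy.ndimage import label

def initial_positions(M, E, t_frac=0.95):
    """Initial position map P(i,j) of (15)-(16), thresholded at t_frac of its maximum."""
    labels, _ = label(E, structure=np.ones((3, 3)))      # 8-connected edge segments (assumption)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                         # background pixels carry no edge length
    C = sizes[labels]                                    # C(i,j): length of the segment through (i,j)
    L = C / max(C.max(), 1)                              # (15) density of edge length
    P = 0.5 * (M / max(M.max(), 1e-12) + L)              # (16) initial position map
    return np.argwhere(P > t_frac * P.max())             # candidate starting points
```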

    III. EXPERIMENTAL RESULTS

We tested the performance of the proposed method by comparing its results with the results from five classical contour models. We also computed the probability of error in boundary detection and the Hausdorff distance of the six methods by comparing their results with the opinions of expert medical doctors. Finally, we tested the efficiency in terms of time consumption by comparing the running times of the six methods. Each input image was preprocessed by a 3 × 3 median filter prior to the application of each method.

A. Boundary Extraction in Synthetic Noisy Images and Medical Images

The proposed method was first implemented on synthetic images. The synthetic images were generated from original binary images by corrupting them with additive white Gaussian noise. We did this to make sure that the ground truths of the boundaries were known. Several synthetic images were tested and intuitive results were achieved, i.e., the boundary detection results from the images with higher signal-to-noise ratios were better than those with lower ratios. We further tested our method to detect object boundaries in several types of medical images including prostates in ultrasound images, left ventricles in cardiac MR images, aortas in cardiovascular MR images, and knee joints in CT images. The prostate ultrasound images were obtained from the CIMI Lab website of the Nanyang Technological University [34] and the UW Image Computing Systems Lab [35]. The cardiac MR images of left ventricles were obtained from York University [36] and the Medical School, Chiang Mai University (CMU). The cardiovascular MR images of aortas were obtained from the Imaging Consult website [37] and the Medical School, CMU. The CT images of knee joints were obtained from the Learning Radiology website [38] and the Medical School, CMU.


TABLE I. ERRORS IN IMAGE SEGMENTATION AND HAUSDORFF DISTANCE BETWEEN TWO SKILLED DOCTORS ON EACH TYPE OF MEDICAL IMAGES

The total numbers of synthetic, prostate ultrasound, left ventricle MR, aorta MR, and knee joint CT images are 20, 10, 20, 30, and 22, respectively. The object boundaries of these organs are very useful in the diagnoses and treatment planning for the corresponding diseases. For comparison, we tried to apply the traditional edge following technique [39], [40] in this research, but it did not perform well. Therefore, we turned to five conventional methods, i.e., the ACM, GAC, ACWE, GVF, and VFC snake models, that are normally applied to solve the problem of boundary detection. To make the comparison fair, the initial contours of the five snake models were selected manually to make them close to the object boundary. We adjusted the weight parameters of the ACM, GAC, ACWE, GVF, and VFC snake models between 0.1 and 0.5 with a 0.1 increment. The results of the five snake models were selected from the best experimental results of all parameter settings. For the weight parameters of the proposed method, we used just three settings: α = 0.6, β = 0.2, γ = 0.2; α = 0.5, β = 0.2, γ = 0.3; and α = 0.4, β = 0.3, γ = 0.3 for all images. The results of our method were selected from the best experimental results of the three parameter settings.
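For reference, a synthetic test image of the kind used here might be generated as in the sketch below; the target signal-to-noise ratio and the use of a dB scale are illustrative assumptions, not details given in the paper.

```python
import numpy as np

def synthetic_noisy_image(binary_image, snr_db=5.0, rng=None):
    """Corrupt a binary ground-truth image with additive white Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    signal = binary_image.astype(float)
    signal_power = max(np.mean(signal**2), 1e-12)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))   # chosen SNR in dB
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise
```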

    B. Boundary Detection Evaluation Measures

To further evaluate the efficiency of the proposed method in addition to the visual inspection, we evaluate our boundary detection method numerically using the Hausdorff distance and the probability of error in image segmentation

    PE = P(O)P(B|O) + P(B)P(O|B) (17)

where P(O) and P(B) are the a priori probabilities of objects and background in the images, and P(B|O) is the probability of error in classifying objects as background and vice versa [41]. The objects surrounded by the contours obtained using the five snake models and the proposed method are compared with those manually drawn by skilled doctors from the Medical School, CMU. Figs. 6–9 show segmentation results in medical images from the six methods compared with the skilled doctors' opinions. The results from our proposed method are visually better than those from the five snake models. It yields contours that are very close to the experts' opinions.
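Both evaluation measures can be computed from binary masks and contour point sets as in the following sketch; the symmetric form of the Hausdorff distance used here is one common realization and is an assumption about the exact variant the authors used.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def probability_of_error(pred_mask, truth_mask):
    """PE of (17): P(O), P(B) are pixel fractions of object and background,
    P(B|O) and P(O|B) are the corresponding misclassification rates."""
    pred, truth = pred_mask.astype(bool), truth_mask.astype(bool)
    p_o = truth.mean()
    p_b_given_o = (~pred & truth).sum() / max(truth.sum(), 1)     # object labeled background
    p_o_given_b = (pred & ~truth).sum() / max((~truth).sum(), 1)  # background labeled object
    return p_o * p_b_given_o + (1.0 - p_o) * p_o_given_b

def hausdorff_distance(contour_a, contour_b):
    """Symmetric Hausdorff distance between two contours given as (N, 2) point arrays."""
    return max(directed_hausdorff(contour_a, contour_b)[0],
               directed_hausdorff(contour_b, contour_a)[0])
```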

It should be noted that each of the medical images was delineated by two experts. This leads us to investigate the variation among the experts' opinions. Table I shows the PEs and the Hausdorff distances between the experts. The results, showing disagreements between them, confirm that object segmentation is subjective among the experts.

Fig. 6. Prostate ultrasound images. (a) Original image. (b) Doctors' delineation. Results of (c) ACM, (d) GAC, (e) ACWE, (f) GVF, (g) VFC, and (h) the proposed technique.

Fig. 7. Left ventricle in cardiac MR images. (a) Original image. (b) Doctors' delineation. Results of (c) ACM, (d) GAC, (e) ACWE, (f) GVF, (g) VFC, and (h) the proposed technique.

Fig. 8. Aorta in cardiovascular MR images. (a) Original image. (b) Doctors' delineation. Results of (c) ACM, (d) GAC, (e) ACWE, (f) GVF, (g) VFC, and (h) the proposed technique.

The average PEs of the results on the synthetic images using the ACM, GAC, ACWE, GVF, and VFC snake models, and the proposed method are 15.67%, 11.17%, 8.32%, 8.37%, 7.56%, and 7.10%, respectively. The average Hausdorff distances of the results on the synthetic images using the six methods are 6.40, 4.71, 2.76, 2.71, 2.54, and 2.17 pixels, respectively. The PEs and Hausdorff distances between the results using all six methods and each of the two experts' opinions on each type of the medical images are shown in Tables II and III, respectively.


TABLE II. AVERAGE RESULTS ON ALL IMAGES BY MEAN OF PROBABILITY OF ERROR IN IMAGE SEGMENTATION (PE) (IN %)

TABLE III. AVERAGE RESULTS ON ALL IMAGES BY MEAN OF HAUSDORFF DISTANCE (IN PIXELS)

Fig. 9. Knee joints in CT images. (a) Original image. (b) Doctors' delineation. Results of (c) ACM, (d) GAC, (e) ACWE, (f) GVF, (g) VFC, and (h) the proposed technique.

The results shown in the tables are the average results over all images of each type. We can see that the results from the proposed method are much closer to the experts' opinions than those from the five snake models. The ACWE and GVF snake models provide similar performance that is better than that of the ACM and GAC models, and the VFC model provides the best performance among the five snake models.

    C. Efficiency of Computation Cost

We compared the time consumption of the proposed method and the snake models on two synthetic images, i.e., images containing an ellipse-shaped object and images containing a U-shaped object. The sizes of the images used in this experiment were varied from 64 × 64 pixels to 1024 × 1024 pixels. The maximum number of iterations of the five snake models was set to 50 for all images. Fig. 10 shows the computation time of each method on the images containing a U-shaped object. The times on the images containing an ellipse-shaped object are similar and are not shown here to save space. From the figure, the five snake models require a huge computation cost, whereas the proposed method can achieve the contour much faster.

Fig. 10. Calculation time comparison of the six methods on images containing a U-shaped object.

The experimental results show that the proposed method provides much more efficient computation time than the five snake models. The reason is that it considers only a 3 × 3 neighborhood to follow the object boundary, while the other methods consider a much larger neighborhood in each iteration. This is another advantage of our proposed method over the classical contour models for detecting the correct object boundary.

    IV. CONCLUSION

We have designed a new edge following technique for boundary detection and applied it to the object segmentation problem in medical images. Our edge following technique incorporates a vector image model and the edge map information. The proposed technique was applied to detect the object boundaries in several types of noisy images where ill-defined edges were encountered. The proposed technique's performance on object segmentation and its computation time were evaluated by comparison with five popular methods, i.e., the ACM, GAC, ACWE, GVF, and VFC snake models. Several synthetic noisy images were created and tested for the sake of the known ground truths.


The opinions of the skilled doctors were used as the ground truths of the objects of interest in different types of medical images including prostates in ultrasound images, left ventricles in cardiac MR images, aortas in cardiovascular MR images, and knee joints in CT images. Besides the visual inspection, all six methods were evaluated using the probability of error in image segmentation and the Hausdorff distance. The results of detecting the object boundaries in noisy images show that the proposed technique is much better than the five contour models. The results of the running time on several sizes of images also show that our method is more efficient than the five contour models. We have successfully applied the edge following technique to detect ill-defined object boundaries in medical images. The proposed method can be applied not only to medical imaging, but also to any image processing problem in which ill-defined edge detection is encountered.

    ACKNOWLEDGMENT

The authors would like to thank the following surgeons for drawing the respective ground truths: Dr. J. Tanyanopporn (prostates), Dr. W. Kultangwattana (left ventricles, aortas, and knee joints), and Dr. S. Arworn (prostates). They thank Dr. J. Euathrongchit, a diagnostic radiologist, who provided the ground truths for left ventricles, aortas, and knee joints and the images of aortas and knee joints. They are also grateful to Dr. A. Phrommintikul for providing the images of left ventricles.

    REFERENCES

[1] J. Guerrero, S. E. Salcudean, J. A. McEwen, B. A. Masri, and S. Nicolaou, "Real-time vessel segmentation and tracking for ultrasound imaging applications," IEEE Trans. Med. Imag., vol. 26, no. 8, pp. 1079–1090, Aug. 2007.

[2] F. Destrempes, J. Meunier, M.-F. Giroux, G. Soulez, and G. Cloutier, "Segmentation in ultrasonic B-mode images of healthy carotid arteries using mixtures of Nakagami distributions and stochastic optimization," IEEE Trans. Med. Imag., vol. 28, no. 2, pp. 215–229, Feb. 2009.

[3] N. Theera-Umpon and P. D. Gader, "System level training of neural networks for counting white blood cells," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 32, no. 1, pp. 48–53, Feb. 2002.

[4] N. Theera-Umpon, "White blood cell segmentation and classification in microscopic bone marrow images," Lecture Notes Comput. Sci., vol. 3614, pp. 787–796, 2005.

[5] N. Theera-Umpon and S. Dhompongsa, "Morphological granulometric features of nucleus in automatic bone marrow white blood cell classification," IEEE Trans. Inf. Technol. Biomed., vol. 11, no. 3, pp. 353–359, May 2007.

[6] J. Carballido-Gamio, S. J. Belongie, and S. Majumdar, "Normalized cuts in 3-D for spinal MRI segmentation," IEEE Trans. Med. Imag., vol. 23, no. 1, pp. 36–44, Jan. 2004.

[7] H. Greenspan, A. Ruf, and J. Goldberger, "Constrained Gaussian mixture model framework for automatic segmentation of MR brain images," IEEE Trans. Med. Imag., vol. 25, no. 9, pp. 1233–1245, Sep. 2006.

[8] J.-D. Lee, H.-R. Su, P. E. Cheng, M. Liou, J. Aston, A. C. Tsai, and C.-Y. Chen, "MR image segmentation using a power transformation approach," IEEE Trans. Med. Imag., vol. 28, no. 6, pp. 894–905, Jun. 2009.

[9] P. Jiantao, J. K. Leader, B. Zheng, F. Knollmann, C. Fuhrman, F. C. Sciurba, and D. Gur, "A computational geometry approach to automated pulmonary fissure segmentation in CT examinations," IEEE Trans. Med. Imag., vol. 28, no. 5, pp. 710–719, May 2009.

[10] I. Isgum, M. Staring, A. Rutten, M. Prokop, M. A. Viergever, and B. van Ginneken, "Multi-atlas-based segmentation with local decision fusion: Application to cardiac and aortic segmentation in CT scans," IEEE Trans. Med. Imag., vol. 28, no. 7, pp. 1000–1010, Jul. 2009.

[11] J. R. Parker, Algorithms for Image Processing and Computer Vision. New York: Wiley, 1997.

[12] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Reading, MA: Addison-Wesley, 1992.

[13] J. M. S. Prewitt, "Object enhancement and extraction," in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds. New York: Academic Press, 1970, pp. 75–149.

[14] G. S. Robinson, "Edge detection by compass gradient masks," Comput. Graph. Image Process., vol. 6, no. 5, pp. 492–501, Oct. 1977.

[15] E. Argyle, "Techniques for edge detection," Proc. IEEE, vol. 59, no. 2, pp. 285–287, Feb. 1971.

[16] J. F. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, no. 6, pp. 679–698, Nov. 1986.

[17] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision. New York: McGraw-Hill, 1995.

[18] V. Caselles, F. Catte, T. Coll, and F. Dibos, "A geometric model for active contours," Numer. Math., vol. 66, pp. 1–31, 1993.

[19] F. Leymarie and M. D. Levine, "Tracking deformable objects in the plane using an active contour model," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 6, pp. 617–634, Jun. 1993.

[20] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," Int. J. Comput. Vis., vol. 1, pp. 321–331, 1987.

[21] V. Caselles, R. Kimmel, and G. Sapiro, "Geodesic active contours," Int. J. Comput. Vis., vol. 22, no. 1, pp. 61–79, 1997.

[22] T. Chan and L. Vese, "Active contours without edges," IEEE Trans. Image Process., vol. 10, no. 2, pp. 266–277, Feb. 2001.

[23] C. Xu and J. L. Prince, "Gradient vector flow: A new external force for snakes," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 1997, pp. 66–71.

[24] C. Xu and J. L. Prince, "Snakes, shapes, and gradient vector flow," IEEE Trans. Image Process., vol. 7, no. 3, pp. 359–369, Mar. 1998.

[25] B. Li and T. Acton, "Active contour external force using vector field convolution for image segmentation," IEEE Trans. Image Process., vol. 16, no. 8, pp. 2096–2106, Aug. 2007.

[26] I. N. Bankman, Handbook of Medical Imaging. San Diego, CA: Academic, 2000.

[27] J. Cheng and S. W. Foo, "Dynamic directional gradient vector flow for snakes," IEEE Trans. Image Process., vol. 15, no. 6, pp. 1563–1571, Jun. 2006.

[28] L. Ballerini, "Genetic snakes for color images segmentation," Lecture Notes Comput. Sci., vol. 2037, pp. 268–277, 2001.

[29] N. Eua-Anant and L. Udpa, "A novel boundary extraction algorithm based on a vector image model," in Proc. IEEE Symp. Circuits Syst., 1996, pp. 597–600.

[30] K. Laws, "Textured image segmentation," Ph.D. dissertation, Dept. Elect. Eng., Univ. Southern California, Los Angeles, CA, 1980.

[31] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, no. 6, pp. 679–698, Nov. 1986.

[32] A. Guocheng, C. Jianjun, and W. Zhenyang, "A fast external force model for snake-based image segmentation," in Proc. Int. Conf. Signal Process., 2008, pp. 1128–1131.

[33] N. P. Tiilikainen, "A comparative study of active contour snakes," Dept. Comput. Sci., Univ. Copenhagen, Denmark, Tech. Rep. DIKU 07/04, 2007.

[34] CIMI Lab, Nanyang Technological University. (2007). [Online]. Available: http://mrcas.mpe.ntu.edu.sg/

[35] ICSL, University of Washington. (2004). [Online]. Available: http://icsl.washington.edu/node/316

[36] A. Andreopoulos and J. K. Tsotsos, "Efficient and generalizable statistical models of shape and appearance for analysis of cardiac MRI," Med. Image Anal., vol. 12, no. 3, pp. 335–357, 2008.

[37] Imaging Consult. (2007). [Online]. Available: http://imaging.consult.com

[38] Learning Radiology. (2008). [Online]. Available: http://www.learningradiology.com

[39] S. Nagabhushana, Computer Vision and Image Processing. New Delhi: New Age International Publishers, 2006.

[40] I. Pitas, Digital Image Processing Algorithms and Applications. NJ: Wiley, 2000.

[41] X. W. Zhang, J. Q. Song, M. R. Lyu, and S. J. Cai, "Extraction of karyocytes and their components from microscopic bone marrow images based on regional color features," Pattern Recognit., vol. 37, no. 2, pp. 351–361, 2004.

Authors' photographs and biographies are not available at the time of publication.