Smooth 3D Surface Reconstruction from Contours of Biological Data with MPU
Implicits
A Thesis
Submitted to the Faculty
of
Drexel University
by
Ilya Braude
in partial fulfillment of the
requirements for the degree
of
Master of Science in Computer Science
August 2005
© Copyright 2005 Ilya Braude. All Rights Reserved.
Dedications
I felt a cleaving in my mind
As if my brain had split;
I tried to match it, seam by seam,
But could not make them fit.

The thought behind I strove to join
Unto the thought before,
But sequence ravelled out of reach
Like balls upon the floor.

- Emily Dickinson (1830-1886)
Few things are impossible to diligence and skill. Great works are performed not by strength, but perseverance.

- Samuel Johnson (1709-1784)
Acknowledgements
I would like to acknowledge my advisor, Dr. David E. Breen, for his knowledge, patience,
and support throughout this work. I also wish to acknowledge the other committee mem-
bers, Dr. William C. Regli and Dr. Jonathan Nissanov, for their time, helpful comments,
and suggestions. An additional thanks is extended to Dr. Regli for his support and belief in
me during my early research years.
I wish to thank Christopher D. Cera for being my mentor and friend, and for being
an excellent role model to look up to. Thanks to my research colleagues Nadya Belov,
Manolya Eyiyurekli, and Servesh Tiwari for their support and for proofreading this work.
Additional thanks are extended to Cheuk Yiu Ip (Horace) for all of his technical help over
the years. I would also like to thank Jeff Marker for his help and support when it was
needed. Further thanks to James J. Sim for many inspirational words and encouragement
to persevere in the face of adversity.
Lastly, I wish to thank my family and all those close to me for being very supportive
and encouraging during this important time. A special thanks to my mom: thank you for
your support and for believing in me.
This work was supported in part by the National Science Foundation (NSF) Grant ACI-
0083287 and the Drexel University Synergy Grant.
Table of Contents
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Contours. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Segmentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1 Contour Stitching. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Distance Field Interpolation. . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Point Set Surfaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3 Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.1 Implicit functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 MPU Surfaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.3 Effect of Surface Normals on MPU Reconstruction. . . . . . . . . . . . . 17
4 Slice-based Reconstruction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.1 Distance Fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.2 Surface Normal Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2.1 Tangent Line Approximation. . . . . . . . . . . . . . . . . . . . . 21
4.2.2 Gaussian Blurring and Gradient Calculation. . . . . . . . . . . . . 23
4.2.3 Convolution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2.4 Normals Calculation. . . . . . . . . . . . . . . . . . . . . . . . . 26
4.3 Inter-slice Interpolation. . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.3.1 Linear Interpolation. . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.3.2 Spline Interpolation. . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.4 Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.4.1 Uniform Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.4.2 Non-uniform Sampling. . . . . . . . . . . . . . . . . . . . . . . . 32
5 Volume-based Reconstruction. . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.1 Surface Normal Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2 Surface Reconstruction Quality and Accuracy. . . . . . . . . . . . . . . . 37
5.3 Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.3.1 Uniform Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.3.2 Non-uniform Sampling. . . . . . . . . . . . . . . . . . . . . . . . 38
5.4 Arbitrary Cross-sections. . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.2 Software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.2.1 Convolution Optimization . . . . . . . . . . . . . . . . . . . . . . 43
6.2.2 2D Distance Fields. . . . . . . . . . . . . . . . . . . . . . . . . . 44
6.2.3 Volume Smoothing Optimization. . . . . . . . . . . . . . . . . . . 44
6.2.4 Challenges of Large, High Resolution Data. . . . . . . . . . . . . 45
7 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.1 Slice-based Approach. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.2 Volume-based Approach. . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.2.1 Isotropic Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.2.2 Anisotropic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.3 Comparison to Commercial Methods. . . . . . . . . . . . . . . . . . . . . 60
8 Conclusions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
8.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
8.2 Future Work. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
List of Tables
7.1 Characteristics of isotropic input data. . . . . . . . . . . . . . . . . . . . . 50
7.2 Approximation quality of reconstructed surfaces for isotropic data with specified MPU tol and Nmin parameters. Metrics are calculated in units of voxels. . . . . 54
7.3 Execution times for embryo, heart, stomach, tongue, and brain reconstructions in Table 7.2. . . . . . 54
7.4 Characteristics of anisotropic input data. . . . . . . . . . . . . . . . . . . 56
7.5 Approximation quality of reconstructed surfaces for anisotropic data with specified MPU tol and Nmin parameters. Metrics are calculated in units of voxels. . . . . 59
7.6 Execution times for ventricles and pelvis reconstructions in Table 7.5. . . . 59
7.7 Approximation quality of reconstructed surfaces for anisotropic data with specified MPU tol and Nmin parameters. Metrics are calculated in units of voxels. . . . . 59
7.8 Approximation quality for volume-based, amira-non-smooth, and amira-smooth surfaces. The best reconstruction is marked by a ⋆. . . . . 61
List of Figures
1.1 Segmentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Functional and pixel representation of a contour.. . . . . . . . . . . . . . . 5
3.1 Two local approximations (red, thin) blended to form the global function (blue, thick). . . . . 15
3.2 Selection of MPU local fitting function. . . . . . . . . . . . . . . . . . . . 17
3.3 Effect of surface normals on MPU reconstruction. . . . . . . . . . . . . . 18
4.1 Discrete vs smooth distance fields. . . . . . . . . . . . . . . . . . . . . . 20
4.2 Least squares minimization may produce erroneous normals . . . . . . . . 23
4.3 Gaussian blurring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.4 Gaussian Functions in 1D (left) and 2D (right).. . . . . . . . . . . . . . . 26
4.5 Edge detection in 1D.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.6 Kernel that is used in Figure 4.5.. . . . . . . . . . . . . . . . . . . . . . . 27
4.7 Sobel Kernels.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.8 Examples of estimated normals in 2D. Normals have been enlarged for emphasis. . . . . 29
4.9 Narrow band and full distance fields. . . . . . . . . . . . . . . . . . . . . 30
4.10 The result of applying the marching cubes algorithm to the stacked distance volume without any smoothing or filtering in the z direction. . . . . 31
4.11 The effect of compensation for non-uniform scaling at the mesh level. . . . 33
7.1 Results from slice-based reconstruction. . . . . . . . . . . . . . . . . . . 49
7.2 Reconstruction of isotropic data (part 1). . . . . . . . . . . . . . . . . . . 51
7.3 Reconstruction of isotropic data (part 2). . . . . . . . . . . . . . . . . . . 52
7.4 Comparison of Klein, NUAGES, and our results for the pelvis dataset. . . . . 53
7.5 Reconstruction of anisotropic data. . . . . . . . . . . . . . . . . . . . . . 57
7.6 Details of ventricles and pelvis datasets showing approximation errors.. . . 58
7.7 The three surfaces reconstructed from artificially noisy contour data.. . . . 61
Abstract
Smooth 3D Surface Reconstruction from Contours of Biological Data with MPU Implicits
Ilya Braude
David E. Breen, Ph.D.
With the enhancement of computer and imaging technology, increasing amounts of bi-
ological data are being generated. The data include Magnetic Resonance Imaging (MRI),
Computed Tomography (CT), and histological scans of objects. The goal of the work pre-
sented in this thesis is to accurately and efficiently reconstruct smooth 3D implicit surface
models of the structures that are described by contours from biological cross-section data.
Two methods are presented for performing contour reconstruction that use Multi-level
Partition of Unity (MPU) implicit surfaces. A convenient method is proposed to estimate
surface normals for contours so that implicit point set-based surface representations can
be applied. First, a slice-based approach is discussed. In the slice-based approach, a dis-
tance field is generated for each contour slice using MPU implicits. The distance fields are
stacked to produce a 3D volume, from which a 3D surface model is extracted. Second, a
volume-based approach is presented. This approach treats the points on every contour as a
single point set and constructs a smooth but accurate 3D surface.
The techniques presented here are compared to several other reconstruction methods
described in the literature. Further, the reconstruction process is shown to be effective at
dealing with noise and to generally have sub-voxel accuracy with respect to the input data.
The volume-based approach is also invariant under anisotropic sampling of the original
data, producing accurate results even in the presence of missing slices.
1. Introduction
With the enhancement of computer and imaging technology, increasing amounts of
biological data are being generated. The data include Magnetic Resonance Imaging (MRI),
Computed Tomography (CT), and histological scans of objects such as mouse and frog
embryos, bones, brains, and in fact anything that can be scanned with a CT or MRI scanner,
or histologically analyzed. These data consist of 2D images that represent thin slices of 2D
cross-sections of a specimen. Depending on the scanning method used, these slices can be
color images or greyscale intensity values. For example, the Visible Human Project [35]
has high resolution data, in various formats, of human adult male and female bodies. The
goal of this data acquisition is to obtain computerized 3D representations of biological
specimens for further study, allowing for analysis and viewing from arbitrary angles and
cross-sections. Biologically accurate 3D computer models of these structures would make
an unprecedented level of examination possible by medical professionals. The goal of the
work presented in this thesis is to accurately and efficiently reconstruct smooth 3D surface
models of the structures that are present in biological cross-section data using an implicit
surface representation.
The methods for surface reconstruction from contours that are presented in this thesis
consider contour points as a single point set in R^3. In order for this point set to be useful, a
method for estimating surface normals has been proposed. Given a point set with associated
surface normal information, a smooth surface function is fit to the data. The flexibility
of using a point set to represent contours allows for, among other applications, surface
reconstruction from non-parallel and irregularly sampled contour slices. Our reconstruction
technique provides an effective method of dealing with missing, noisy, or anisotropic data.
Furthermore, our method produces surfaces that approximate the original contours with
sub-pixel accuracy, and it does so with fast computation times.
1.1 Contours
This work concentrates on 3D surface reconstruction from contours. A contour is an
outline of a shape in an image. Contours can be generated or delineated for each part of
the original object that is visible in each cross-section. A contour can be represented in
one of two ways. It can be represented as a polygon that is specified by a list of vertices
and connecting edges. It can also be represented, more generally, without any connectivity
information simply as pixels in an image. The latter representation is used in our approach.
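For illustration, the pixel representation can be sketched in a few lines of Python. The mask, the helper name `boundary_pixels`, and the 4-connected boundary test below are our own illustrative choices, not taken from the thesis:

```python
def boundary_pixels(mask):
    # Contour of a binary region: foreground pixels with at least one
    # 4-connected background (or out-of-image) neighbor.
    rows, cols = len(mask), len(mask[0])
    contour = set()
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                    contour.add((r, c))
                    break
    return contour

# A filled 3x3 square region inside a 5x5 image:
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
contour = boundary_pixels(mask)
# The 8 border pixels of the square form the contour; the interior
# pixel (2, 2) is excluded.
```

Note that the resulting set carries no connectivity information, which is exactly the property that distinguishes this representation from the polygon one.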
1.2 Segmentation
Contours are generated by a process referred to as segmentation. The process of seg-
mentation isolates individual parts from initial data scans. Several methods exist for seg-
mentation, ranging from manual delineation to semi-automatic ([50, 54]) to fully automatic
([47]). We concentrate on using already segmented contour data as our input and starting
point.
During manual segmentation, an experienced anatomist analyzes each slice and de-
lineates the border of each structure of interest using a stylus on a tablet computer screen.
Semi-automatic segmentation involves computer-generated segmentation that requires user
input to adjust parameters in real-time. Fully automatic segmentation is an open problem
in the computer vision community, and would ideally produce perfect delineations and
require no user intervention. Figure 1.1 shows the segmentation process, with initial im-
age delineation through region extraction. Grouping the resulting contours on every slice
creates an outline of the object of interest.
Figure 1.1: Segmentation. From left to right: MRI slice, delineated contours, extractedregions of interest.
However, the segmentation process, regardless of the method used, is not perfect. This
results in noisy data containing information that does not belong to the true specimen.
Exact interpolation of the data incorporates such noise in the reconstructed model, which
is undesirable. Therefore, a certain degree of smoothing of contours is necessary in order
to produce biologically accurate reconstructions.
1.3 Reconstruction
There are two types of reconstruction that can be performed on contour data. The first
type is slice-based reconstruction, and the second type is volume-based reconstruction.
Slice-based reconstruction involves, in some form, contour “stacking” to reconstruct the
surface. In slice-based reconstruction, each contour slice (or pair of slices) is processed
individually during the reconstruction process.
There are several techniques that have been used for this type of reconstruction in the
past. The most prominent of these is contour stitching [7, 6, 5]. This technique con-
nects one contour with the next using a mesh. A mesh contains only straight lines and flat
surfaces because it is defined by a set of vertices and faces. Using a mesh as the base re-
construction representation is equivalent to linear interpolation between slices, producing
only C0 continuity. Moreover, the effective resolution of the mesh is limited by the number
and density of input points.
Linear interpolation is not desirable because it does not depict smooth biological data
well. Consequently, other methods have been proposed to address this problem and to
overcome it using smoothing operations.
Better continuity can be accomplished by smoothing the resulting mesh. However,
proposed algorithms ([11]) tend to shrink the data as they smooth it or require special
considerations to prevent shrinkage. A better approach, and one that is adopted in this work,
is to approximate each contour with a continuous function. Since contour data is available
in the form of discrete pixel values that are themselves approximations to a smooth curve,
approximating the pixels with a smooth function is natural. Figure 1.2 demonstrates this
Figure 1.2: Functional and pixel representation of a contour.
concept. The green curve and the black pixels are approximations of each other.
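To make the smoothing-versus-interpolation point concrete, the following Python sketch fits a least-squares line to hypothetical noisy 1D samples; the data values and helper name are invented for illustration (the thesis itself fits quadric functions to 3D point data):

```python
def least_squares_line(xs, ys):
    # Fit y = a*x + b minimizing the sum of squared errors (closed form).
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Samples from the line y = x with pixel-quantization "noise":
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.2, 2.8, 4.0]
a, b = least_squares_line(xs, ys)
# The fitted line smooths the noise instead of reproducing it exactly,
# so the residuals are small but nonzero:
residuals = [y - (a * x + b) for x, y in zip(xs, ys)]
```

An exact interpolant would pass through every noisy sample; the approximating function does not, which is the behavior desired for noisy contour data.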
A completely different approach to contour reconstruction considers the set of con-
tours as points in R^3. The use of point sets to represent 3D models has gained momentum
in recent years, as the quality of 3D scanners and other 3D data acquisition devices has
dramatically improved [27]. Point set models lie at the heart of our volume-oriented re-
construction process. General point set techniques have been previously investigated in the
literature [26, 1, 37]. However, these general techniques do not necessarily achieve the goal
of reconstructing biological data. Since the scope of this thesis deals with a specific do-
main, it is possible to take advantage of information and point structure that is not normally
available with general point sets.
The approach presented in this thesis makes significant advances to both the slice-based
and volume-based reconstruction methods. We will show that implicit surface algorithms
based on point set surfaces can be applied to contour-based reconstruction. In the slice-
based reconstruction domain, our approach is able to produce smooth and continuous 2D
distance fields that can be subsequently used with inter-slice interpolation methods. For
volume-based reconstruction, our approach introduces a method for producing smooth 3D
models while adhering to original pixel data with sub-pixel accuracy. At the same time, our
reconstruction technique is able to deal with noisy input data by allowing for an arbitrary
degree of smoothing. Furthermore, we show that not only can implicit surface techniques
be applied to this domain, but they also perform very well in comparison with other meth-
ods in the literature and in commercial software.
2. Related Work
The problem of reconstructing 3D models from 2D contour images has been studied
since the 1970s [21, 15]. Various solutions have been proposed; each tackles the problem
in slightly different ways. The most prominent of these methods are described in this
chapter, highlighting the problems faced in contour reconstruction.
2.1 Contour Stitching
Most of the work in contour reconstruction has been performed by stitching successive
contours with a mesh. This approach has been the topic of numerous papers, which are
described here. There are three general problems that contour stitching attempts
to solve (as defined by Bajaj et al. [4]):
Correspondence – Correspondence is the correlation between adjacent slices. Because
contours constitute only a partial sampling of the actual physical object, the exact connec-
tions between contours are not known.
Branching – Branching is a large problem in mesh-based approaches. It occurs when one
contour slice contains only one closed contour, while the next slice contains two or more
closed contours, creating a branch somewhere between the two slices. Various methods
have proposed solutions for dealing with this problem, while others ignore it.
Tiling – Tiling is the actual meshing of two adjacent slices. The tiling process joins two
slices by creating a strip of vertices and triangles between them. Most of the progress in
mesh-based approaches has been made in this area.
Keppel [21] and Fuchs et al. [15] perform contour stitching between two contours P
and Q by placing either a vertex on P and two vertices on Q, or by placing a vertex on Q
and two vertices on P to form a triangle strip between the two contours. They use graph
minimum path algorithms that minimize or maximize an objective function to choose the
exact locations of the vertices. Keppel’s approach tries to maximize an objective function
that corresponds to the volume of the polyhedron that is formed by the triangle strip. On
the other hand, Fuchs uses a minimum path cost algorithm to find the optimal triangulation
that minimizes the surface area. However, these early techniques do not deal with handling
special cases such as branching.
Ganapathy [17] uses a similar approach to Fuchs, but instead parameterizes each con-
tour with a parameter t ∈ [0, 1]. This parametric value is then used to guide the greedy
selection of vertices on either the upper or lower contour, such that the difference between
the parameter values of the current position in each contour is minimized.
Other methods that have improved on these approaches include Boissonnat [9], who
uses Delaunay triangulation in the plane and then “raises” one of the contours to give the
surface its 3D shape. However, this approach fails to deal effectively with some contour
stitching problems such as contour pairs that are either too different from each other, or
that overlap. Barequet et al. [7, 6, 5] utilize a partial curve matching algorithm to con-
nect most portions of the contours. Then they apply a multi-level approach to triangulate
the remaining portions. Non-convex contour polygons also cause problems for some algo-
rithms. Ekoule et al. [13] claim to handle dissimilar and non-convex polygons well, but use
a heuristic algorithm. Heuristic algorithms sacrifice optimality for speed of computation.
Meyers et al. [33] introduce a method that deals with yet more contour stitching problems.
Their algorithm handles narrow valleys and branching structures using a Minimum Span-
ning Tree (MST) of a contour adjacency graph.
Bajaj et al. [4] use various constraints on the triangulation procedure combined with
contour augmentation to solve the three problems of contour triangulation (correspon-
dence, tiling, branching) simultaneously. More recently, Fujimura and Kuo [16] use an
isotropy-based method that introduces new vertices (besides those on the contours) to pro-
duce smoother meshes. They also propose to solve the branching problem by introducing
an intermediate slice at the branch point between two adjacent contours.
2.2 Distance Field Interpolation
In the context of this thesis, a distance field is an implicit representation of an object. In
the simplest case, the value of a point in the distance field describes the distance from that
point to the object. Jones and Chen [20] investigate a simple marching cubes algorithm on
a discrete distance field. Their approach produces voxelized models and has been improved
in subsequent research. Raya and Udupa [42] treat the problem of anisotropic data and in-
terpolating intermediate contours in order to get an isotropic sampling before performing
reconstruction. They segment greyscale volume data into contours, then turn them into 2D
distance fields. These 2D distance fields are then interpolated in the z-direction, with the
positive values constituting the interior of an object. Levin [25] introduces a method that
interpolates 2D distance fields in the z-direction with cubic B-splines, producing an R^3 → R
function whose zero set is the reconstructed surface. However, reconstruction smoothness
depends on the smoothness of the distance field. Barrett [8] performs greyscale interpo-
lation between slices to get a distance field representing a height value. This approach
works well on topology maps with nested contours. Cohen-Or et al. [10] study the prob-
lem of successive contours being too far from each other (in the xy-plane). The proposed
approach creates “links” between contours (features) and uses these links to guide inter-
polation. Klein et al. [23] use hardware to compute 2D distance fields, and then use a
fast method to compute a 3D distance field that is conservatively Euclidean (Klein dist ≤
Euclidean dist). Contours are also decimated (simplified) to a user-specified level before
processing. Hoppe et al. [18] propose a method to produce a surface mesh from unorga-
nized points in R^3 by creating a distance function. The distance function is estimated by
the closest distance from an input point to a tangent plane that approximates point data.
Hoppe et al. also proposed a unique method for automatically extracting surface normals
(which are required in their algorithm) from point sets that do not have this information.
However, this general normal-generation procedure is far from perfect. A slightly different
approach to distance fields is the level set method, as reported by Zhao et al. [53]. A nice
overview of current level set methods and results is given by Osher and Fedkiw [39, 40].
2.3 Point Set Surfaces
The use of point sets as a display primitive was originally proposed by Levoy and Whit-
ted [27]. When the area of a rendered triangle of a mesh on the screen is smaller than a
pixel, it becomes more prudent to represent the model using points instead. Levoy and
Whitted [27] introduced an efficient method for rendering continuous surfaces from point
data. Levin [26] uses moving least squares (MLS) to approximate point sets with polyno-
mials. This base technique has been extended and improved, but is still heavily relied upon.
Alexa et al. [1] also use point sets to represent shapes. They use the MLS approach, and
introduce techniques to allow for upsampling (creating new points) or downsampling (re-
moving points) of the surface. They also introduce a point sample rendering technique that
allows for the visualization of point sets at interactive frame rates with good visual quality.
Fleishman and Cohen-Or [14] extend the MLS procedure and introduce Progressive Point
Set Surfaces (PPSS) to generate a base set of points that are refined to an arbitrary reso-
lution. Amenta and Kil [2] project points onto the “extremal” surface defined by a vector
field and an energy function. However, this approach requires (undirected) surface normals
to be present in the point set. Another popular approach by Ohtake et al. uses Radial Basis
Functions [37] to define point set surfaces. In a different and much faster approach, Ohtake
et al. propose the Multi-level Partition of Unity (MPU) implicits [36]. We have chosen to
build upon the MPU implicit approach in our work and describe it in detail in Chapter 3.
Other point-based approaches are discussed by Savchenko et al. ([45], [46]) and Hui et al.
([51]).
3. Approach
In the approach discussed in this thesis, we use the Multi-level Partition of Unity (MPU)
implicit function [36] for smooth surface reconstruction. The MPU function operates on a
point set [26, 1, 37] and tries to reconstruct the surface that is approximated by it. MPU
implicits were chosen for the following reasons. They impose no restrictions on topological
type, are able to deal with points that vary in sampling density, use an adaptive technique
that confines the reconstruction to a specified error parameter, and are both space and time
efficient. Unlike other point set surface reconstruction algorithms [18] (which calculate
their own surface normal estimates), the MPU function requires normals to be present
in the input for every data point. Ohtake et al. [36] claim that this is not a significant
issue as normals can be easily obtained from a mesh representation, least-squares fitting, or
obtained automatically from range acquisition devices. However, in practice this is rarely
the case. Given a set of points in R^3 with no other information about the represented object,
finding surface normals for the point set is a problem on its own. Such is the case with
contour representation because there is no explicit structure that would suggest a method
for acquiring surface normals at each point. Without accurate surface normal information
for the point set, the MPU function is impractical as a surface reconstruction algorithm. We
examine the effect of normals on reconstruction quality in Section 3.3.
3.1 Implicit functions
An implicit function is generally defined such that a point x lies on the surface repre-
sented by the function f(x) if

f(x) = 0.  (3.1)

The domain of the function in Equation 3.1 is any point in R^3 for 3D surfaces and in R^2
for 2D curves. The range of the function is the set of values in R. The result is a mapping
of any point in space to some quantity which, in the context of this thesis, is a distance
measure. In a signed distance function, a positive value of the function means that the
domain point is inside of the surface, and a negative value means that the domain point lies
outside of the surface. For example, an implicit equation for a unit sphere centered at the
origin is

f(x, y, z) = -(x^2 + y^2 + z^2) + 1 = 0.
More formally, an implicit function is defined as follows. Given a set of points P =
{x_1, x_2, ..., x_n} ⊂ R^3 on or near the surface boundary (δΩ) of an object (Ω), the implicit
function gives a mapping f : R^3 → R. The mapping is a signed distance function:

f(x) = D(x, δΩ) if x ∈ Ω;  f(x) = -D(x, δΩ) otherwise,  (3.2)

where D is some distance measure from x to the border δΩ. The surface S represented by
the boundary δΩ is given as

S = {x | f(x) = k},  (3.3)
where k, representing an isovalue, can be any constant. If k is chosen to be 0, the surface is
referred to as lying on the zero set of the function. This is also referred to in the literature
as the isosurface of the implicit function.
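As a minimal illustration of Equations 3.1-3.3, the unit-sphere implicit function above can be evaluated to classify points relative to the surface; the function names below are our own:

```python
def f(x, y, z):
    # Unit-sphere implicit function from the text: positive inside,
    # zero on the surface, negative outside (sign convention of Eq. 3.2).
    return -(x ** 2 + y ** 2 + z ** 2) + 1.0

def classify(x, y, z, eps=1e-9):
    v = f(x, y, z)
    if abs(v) < eps:
        return "on surface"  # the zero set / isosurface (k = 0 in Eq. 3.3)
    return "inside" if v > 0 else "outside"

print(classify(0.0, 0.0, 0.0))  # inside
print(classify(1.0, 0.0, 0.0))  # on surface
print(classify(2.0, 0.0, 0.0))  # outside
```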
3.2 MPU Surfaces
The implicit surface representation that is chosen to be used in this work is the Multi-
level Partition of Unity (MPU) Implicits surface, as presented by Ohtake et al. [36]. MPU
implicit surfaces are convenient because they use local piecewise quadric functions and
adapt to surface detail through the use of recursive octree subdivision. Computation times
are also fast because of the local nature of the surface patches.
An MPU surface is implicitly defined by an MPU function. The MPU function defines
a distance field around the surface that it represents. Globally, the MPU function is com-
posed of overlapping local functions that are blended together, summing to one (partition
of unity). A partition of unity is a set of nonnegative compactly supported functions ω_i
where

∑_i ω_i ≡ 1  on the domain Φ.  (3.4)

The global function is then

f(x) = ∑_i ω_i(x) Q_i(x),  (3.5)

where Q_i(x) is a local approximation function; see Figure 3.1. Each ω_i is generated by

ω_i = w_i / ∑_{j=1}^{n} w_j,  (3.6)
where the setwi is a set of nonnegative compactly supported weight functions such that
Φ⊂⋃
i
supp(wi). (3.7)
In the current MPU implicits implementation, each weight function wi(x) is a quadratic
B-splineb(t):
wi(x) = b
(
3|x−ci|2Ri
)
. (3.8)
The weight functions are centered atci , which is the midpoint of each octree cell in the
subdivision process, and have a support radius ofRi. Both of these are described in greater
detail below.
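Equations 3.4-3.8 can be sketched in 1D. The quadratic B-spline below is the standard one with support on [0, 3]; the two local functions, centers, and radii are toy values chosen only to show the blending, not values from the thesis.

```python
def bspline_quadratic(t):
    # Quadratic B-spline b(t) with compact support on [0, 3].
    if 0.0 <= t < 1.0:
        return 0.5 * t * t
    if 1.0 <= t < 2.0:
        return 0.5 * (-2.0 * t * t + 6.0 * t - 3.0)
    if 2.0 <= t < 3.0:
        return 0.5 * (3.0 - t) ** 2
    return 0.0

def weight(x, c, R):
    # w_i(x) = b(3|x - c_i| / (2 R_i))  (Equation 3.8, 1D version)
    return bspline_quadratic(3.0 * abs(x - c) / (2.0 * R))

def blend(x, centers, radii, local_fns):
    # f(x) = sum_i omega_i(x) Q_i(x), with omega_i = w_i / sum_j w_j
    # (Equations 3.5 and 3.6); the omegas sum to 1 by construction.
    ws = [weight(x, c, R) for c, R in zip(centers, radii)]
    total = sum(ws)
    return sum((w / total) * Q(x) for w, Q in zip(ws, local_fns))

# Two overlapping local approximations on a toy 1D domain.
centers, radii = [0.0, 1.0], [1.0, 1.0]
local_fns = [lambda x: x - 0.1, lambda x: x + 0.1]
val = blend(0.5, centers, radii, local_fns)   # halfway: equal weights, 0.5
```

At x = 0.5 both weights are equal, so the blend is the average of the two local values; closer to either center, that center's local function dominates.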
Figure 3.1: Two local approximations (red, thin) blended to form the global function (blue, thick).
MPU implicits use an adaptive octree-based subdivision scheme in order to selectively
refine areas of higher detail. Several parameters control this subdivision process. There
is a support radius R for the weight functions, which are centered at the midpoint (c) of
each octree cell. This support radius is initialized to R = αd, where d is the length of
the diagonal of the current cell and α = 0.75. R can be enlarged if the enclosing sphere does
not contain enough points, as specified by a parameter N_min. In that case, R is repeatedly
enlarged with a parameter λ: R' = λR, until enough points are enclosed. Currently, λ = 1.1.
Increasing the N_min parameter results in fewer local approximations and increased smoothing.
At each step of the algorithm, a local function Q(x) is fit to the points in the ball defined
by R and centered in a cell at a leaf node of the octree. The local function is then evaluated
for accuracy using the Taubin distance [48], which is estimated as follows:

ε = max_x |Q(x)| / |∇Q(x)|, ∀x ∈ ball defined by R. (3.9)

If ε is greater than a specified tolerance (tol) value, the subdivision process continues (i.e.,
the current cell is divided into eight child cells and the approximation procedure is
performed within each of the child cells).
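The Taubin distance check of Equation 3.9 can be sketched for a simple 2D quadric. The quadric Q(x, y) = x^2 + y^2 − 1 (the unit circle), the sample points, and the tolerance value are all illustrative, not taken from the thesis.

```python
import math

def Q(x, y):
    # Toy local quadric: unit circle, Q = 0 on the curve.
    return x * x + y * y - 1.0

def grad_Q(x, y):
    return (2.0 * x, 2.0 * y)

def taubin_eps(points):
    # epsilon = max over sample points of |Q| / |grad Q|  (Equation 3.9)
    eps = 0.0
    for x, y in points:
        gx, gy = grad_Q(x, y)
        eps = max(eps, abs(Q(x, y)) / math.hypot(gx, gy))
    return eps

# A point at radius 1.1 has Taubin distance |1.21 - 1| / 2.2 = 0.0954...,
# close to its true distance of 0.1 from the circle.
pts = [(1.1, 0.0), (0.0, 1.0)]
tol = 0.1
subdivide = taubin_eps(pts) > tol   # approximation accepted: no subdivision
```

The Taubin distance is a first-order estimate of the true geometric distance, which is why it is a cheap but effective fitting criterion.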
The local function Q(x) is approximated in one of three ways. It can either be a 3D
quadric, a bivariate quadric polynomial, or a piecewise quadric surface used for edges
and corners. The function is chosen based on the local surface features as determined by
the normals. In the implementation used in the work in this thesis, we do not use sharp
feature detection, and as a result, only one of the two quadrics is used.
The selection of one of the two functions is governed by an examination of the surface
normals of points within the radius R. If all normals point in the same direction, then a
bivariate quadric is used; otherwise a general 3D quadric (which is capable of constructing
two-sheet functions) is used. Figure 3.2 explains this process in 2D: in the circle on the left,
all of the normals point in the same relative direction and mandate the use of a bivariate
quadric, while in the circle on the right a two-sheet general quadric is used.
Figure 3.2: Left: a bivariate quadric is used. Right: a general 3D quadric is used.
3.3 Effect of Surface Normals on MPU Reconstruction
The MPU function is heavily dependent upon accurate surface normal information.
A variation in as few as one normal in a dataset can potentially cause spurious, unwanted
artifacts in a reconstructed surface. For example, if one of the normals on the left side
of Figure 3.2 were to be reversed, then a bivariate quadric would be incorrectly fitted to
the data. A concrete example is illustrated in Figure 3.3, where a 2D image is used for
clear illustrative purposes. The concept, however, extends to 3D as well. In Figure 3.3(a),
the normals (shown as yellow lines and enlarged for clarity) are accurate for the data. The
corresponding reconstruction, shown in Figure 3.3(c), is subsequently smooth and artifact-
free. Figure 3.3(b) shows the same dataset as 3.3(a), but the normals are not as accurate,
although they appear to be valid under visual inspection. The resulting reconstruction
accentuates the "error" of the normals by introducing artifacts (Figure 3.3(d)) that are
clearly not part of the original data, which is reproduced exactly in Figure 3.3(c).
(a) Data with accurate normals (b) Data with inaccurate normals
(c) Reconstruction with accurate normals (d) Reconstruction with inaccurate normals
Figure 3.3: Effect of surface normals on MPU reconstruction, shown as a distance field with negative (blue) and positive (green) distances. The reconstructed contour is shown in red.
4. Slice-based Reconstruction
Much of the previous work on contour reconstruction has focused on slice-based contour
analysis. The slice-based reconstruction method operates by creating a 2D distance
field from each contour slice. The actual contour lies at the zero set of this distance field.
Next, the distance fields for each slice are stacked vertically to produce a 3D distance
volume. The implied surface lies at the zero set of this distance volume and is extracted by
means of a marching cubes algorithm [28] in the form of a triangular mesh. The resolution
and the coarseness or smoothness of the distance volume directly affect the quality
of the reconstructed surface. Work by Sandholm [43] uses 2D distance field processing
and smoothing in order to perform 3D reconstructions. The distance fields are filtered to
remove undesired effects such as the medial axis artifact [44]. In this chapter we present an
approach to the slice-based reconstruction problem based on MPU implicits. Our
approach creates smooth 2D distance fields which can be used with various interpolation
methods to produce smooth 3D surfaces.
4.1 Distance Fields
The first step in this contour reconstruction approach is the generation of 2D distance
fields for the input contours. A 2D distance field is a 2D array of distance values where each
entry contains the closest distance to the contour from that location. Traditional methods
of distance field generation have used distances that are calculated between pixel centers in
input images. Such an approach limits the number of discrete distance values that can exist
in the immediate proximity of the contour. For example, each shade of blue in Figure 4.1(a)
stands for one distance value. In the proximity of the contour, there are only three possible
distance values: 0 (black), 1 (light blue), and √2 (dark blue).
An MPU implicit function is used to approximate the contours and to generate distance
fields. MPU implicits provide two important benefits. First, distances are calculated to
the function that approximates the contour defined by the contour points, instead of to the
center of a pixel that only roughly approximates the contour, as shown in Figure 4.1(b).
Second, implicit functions form the approximation of the contour to a desired level of
accuracy and smoothness. By constraining the function with a small error parameter, it
is possible to produce distance fields that are similar to those of previous approaches, i.e.,
almost no smoothing is performed. If the error bound is increased, the function can
naturally be used to produce a smooth reconstruction. This is an important feature to have,
especially when reconstructing smooth biological specimens from noisy segmentations.
Figure 4.1: (a) Distances to pixel centers result in a limited number of possible values that describe the contour. (b) Distances to a function approximating the contour provide a broad range of possible values.
Let us consider every contour pixel as a point in R^2, and assume that all contours
are closed. A closed contour can be split into three parts: inside points, outside points,
and contour points. Each point on the image is marked to indicate its category. The
inside/outside information is vital for the construction of curve normals, as described in
Section 4.2 below. There are no restrictions on the number of closed contours that can be
present in a single slice. The inside/outside categorization can be easily achieved using a
standard flood-fill algorithm. First, every pixel, unless it lies on the contour, is labeled as
being an inside pixel. Next, a flood-fill algorithm that is initiated at an outside pixel (such
as (0,0)) sets every encountered pixel to be an outside pixel. The algorithm stops when it
encounters a contour pixel, achieving the desired categorization.
4.2 Surface Normal Estimation
As discussed earlier in Section 3.3, the MPU implicit function is highly dependent on
surface normals for accurate curve (in the 2D case) reconstruction. However, normal
information is not available from MRI, CT, or any other bio-imaging methods. Two approaches
are presented here for approximating surface normals. First, a tangent line approximation is
described and shown to be ineffective. Second, a gradient-based approach with much better
performance is described in detail.
4.2.1 Tangent Line Approximation
The first approach approximates a tangent line to the contour using a close neighborhood
of points around the point of interest. Least squares minimization is used to calculate
the line. The normal at the point is orthogonal to the tangent line. This procedure generates
undirected normals for every contour point because the orientation of the tangent line is
not known. The undirected normals are transformed into directed normals by examining
the current direction of the normal. In order to determine if a normal n for a point p points
inside the contour, it is possible to check whether the point e represented by

e = p + ns (4.1)

has been previously labeled as being an inside point. The scalar s is a scaling factor for the
purposes of this test. If such a normal is determined to point to the inside of the model, its
direction is reversed: n ← −n.
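The orientation test of Equation 4.1 can be sketched in a few lines. The `is_inside` predicate stands in for the flood-fill labels of Section 4.1; the half-plane test used here is a toy substitute.

```python
def orient_normal(p, n, s, is_inside):
    # Probe the point e = p + n*s (Equation 4.1); if it lands inside the
    # model, flip the normal (n <- -n) so it points outward.
    e = (p[0] + n[0] * s, p[1] + n[1] * s)
    if is_inside(e):
        return (-n[0], -n[1])
    return n

# Toy inside test: the left half-plane (x < 0) is "inside" the object.
is_inside = lambda e: e[0] < 0.0

flipped = orient_normal((0.0, 0.0), (-1.0, 0.0), 1.0, is_inside)  # -> points right
kept = orient_normal((0.0, 0.0), (1.0, 0.0), 1.0, is_inside)      # already outward
```

As the text goes on to show, the weakness is the probe itself: in high-curvature regions e can land in the wrong region regardless of how s is chosen.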
Although this method seems straightforward, it fails on very low resolution input, especially
for contours with high curvature. Let us consider the example in Figure 4.2(a). The
normal that is calculated using the tangent line approach for the right-most pixel points
down. However, this is a bad approximation to the true normal, which should clearly be
pointing to the right. Let us also consider the second case in Figure 4.2(b), where a correctly
oriented normal n from the tangent line approximation is incorrectly flipped because
the point e, obtained by applying Equation 4.1 to n for a given s, lies in an inside pixel.
Increasing the value of s in Equation 4.1 does not solve this problem in all cases. For example,
in a high curvature region, the extended normal can be directed into an incorrect region.
Although such problems can be reduced by decreasing the neighborhood of points that are
used for computation of the tangent line, doing so also reduces the number of possible
normals. In fact, the example in Figure 3.3 showing the effect of incorrect normals was
obtained with one of the best possible combinations, as determined through experimentation,
of neighborhood size and s values for that particular contour image. These instabilities are
only increased when extended to 3D.
Figure 4.2: Least squares minimization may produce erroneous normals. (a) For the pixel in question, the normal calculated from the tangent line differs from the correct normal. (b) An originally correct orientation is incorrectly flipped because the probe point lands on an inside pixel.
4.2.2 Gaussian Blurring and Gradient Calculation
Section 4.2.1 discussed a local tangent line approximation, which has significant drawbacks.
A much better method for estimating normals is through the use of gradient
calculation. Let us consider a contour as a 2D monochrome image. For example, let white
pixels signify the object Ω (including the border pixels δΩ), and let black pixels signify
the outside area (everything else). Applying a Gaussian kernel to the image produces a
greyscale blurred image, where the border between black and white is no longer well
defined (Figure 4.3).
The idea behind blurring is to create a smooth boundary between the contour and the
outside pixels. It is then possible to detect edge properties of the boundary, as described
in detail below. During the process of edge detection, two elements of information can be
calculated: the direction of the edge and the magnitude of the edge. A normal to the edge
can be calculated from this information. This normal is an approximation of the
normal to the true object boundary at the specified point. The procedure is described in
detail in Section 4.2.4.
(a) Original filled contour (b) Gaussian filter applied to contour (σ = 3)
Figure 4.3: Gaussian blurring.
4.2.3 Convolution
The process of blurring the binary segmentation described in the previous section is
achieved through convolution. To produce the blurred image, a monochrome image is
convolved with a Gaussian kernel. Convolution is a blending of two functions that represents
the amount of their overlap over an interval, and is defined for two functions f and g by
Weisstein [49] as

(f ⊗ g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ. (4.2)
In the image processing domain, convolution is implemented as a window (convolution
kernel) that is applied to an image at every point, producing another image. The convolution
kernel specifies a weighting of the original image pixels. Different kernels produce various
effects including blurring, sharpening, edge detection, and numerous others. Suppose that
an image I of size [M, N] is convolved with a kernel h of size [J, K]. Then the convolution
of the image I with the kernel h at the point (x, y) is defined as the discrete sum

O[x, y] = (I ⊗ h)[x, y] = ∑_{j=0}^{J−1} ∑_{k=0}^{K−1} I[x + j, y + k] h[j, k], (4.3)

where O is the output image.
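The discrete convolution of Equation 4.3 combined with a Gaussian kernel (Section 4.2.3's blurring step) can be sketched as follows. Two small deviations from the equation, for the sake of a self-contained example: the kernel is centered on the pixel rather than corner-anchored, and border indices are clamped (extending the last known value, as the 1D discussion below also does). Sizes and σ are illustrative.

```python
import math

def gaussian_kernel(size, sigma):
    # Sampled, normalized 2D Gaussian (Equation 4.5 up to normalization).
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
          for x in range(-half, half + 1)] for y in range(-half, half + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]   # weights sum to 1

def convolve(image, kernel):
    # Centered version of Equation 4.3 with clamped (extended) borders.
    M, N = len(image), len(image[0])
    J, K = len(kernel), len(kernel[0])
    out = [[0.0] * N for _ in range(M)]
    for y in range(M):
        for x in range(N):
            acc = 0.0
            for j in range(J):
                for k in range(K):
                    yy = min(max(y + j - J // 2, 0), M - 1)
                    xx = min(max(x + k - K // 2, 0), N - 1)
                    acc += image[yy][xx] * kernel[j][k]
            out[y][x] = acc
    return out

# Blurring a binary contour image softens the boundary, as in Figure 4.3.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
blurred = convolve(img, gaussian_kernel(3, 1.0))
```

Deep inside the object the blurred value stays at 1; near the boundary it falls off smoothly, which is exactly the greyscale transition the gradient step will exploit.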
The equation for the Gaussian in 1D is

F(x) = (1 / (√(2π) σ)) exp(−x^2 / (2σ^2)). (4.4)

Extended to 2D, it becomes

F(x, y) = (1 / (√(2π) σ)) exp(−(x^2 + y^2) / (2σ^2)). (4.5)

The standard deviation of the distributions in Equations 4.4 and 4.5 is the parameter σ.
Both distributions are centered at 0. In fact, the Gaussian distribution is simply the normal
distribution centered at 0, i.e., µ = 0; N(0, σ). The 1D and 2D Gaussian distributions
are shown in Figure 4.4.
Figure 4.4: Gaussian functions in 1D (left) and 2D (right).
4.2.4 Normals Calculation
Once a smooth edge representing the contour is obtained, it is possible to infer the
normal directions at points on the edge. This is achieved by calculating the gradient of the
image at a specific point. The gradient supplies information about the strength of the edge,
and also implies the normal to the edge at the given point. In order to obtain normals at all
points on the contour, the gradient calculation is performed at each contour point.
In order to extract the gradient at a point, we use part of a well known edge detection [55]
algorithm commonly used in computer vision. Generally, edges are located by
scanning through a field of values and detecting differences. Edges are denoted by gradients
with large magnitudes (i.e., large differences in adjacent pixel intensity values). The
process of calculating gradients in 1D is shown in Figure 4.5.
The second row in Figure 4.5 shows, at each position, the difference between the number
directly above it and the number above and to its right. For example, the first difference is produced
Image values:      25  24  20  15  15   5   1
Difference values:  1   4   5   0  10   4   -
Figure 4.5: Edge detection in 1D.
by adding 25 to −1 times 24. There is no value for the last difference because there are no
two numbers to subtract. In practice, however, the last difference can be calculated using a
number of techniques. The one that is used in our implementation extends the last known
value, making the last difference 0. The simple convolution kernel that is used with a 1D
version of Equation 4.3 to obtain the second row of Figure 4.5 is shown in Figure 4.6.

[ 1  −1 ]

Figure 4.6: Kernel that is used in Figure 4.5.
A 2D filter works similarly to a 1D filter except that it consists of two kernels: one for
the horizontal direction and one for the vertical.
We use the well known Sobel filter [29], shown in Figure 4.7, for measuring gradient
changes in our images. The Sobel filter was chosen because it takes into account the diagonal
surroundings of the pixel in question instead of just the given row and column of the
pixel. The filter is a 3×3 matrix, so that differences along the diagonal are incorporated
into the gradient calculation. However, it places increased emphasis on the pixel's own
row or column. This is due to the increased weight in the horizontal and vertical directions,
as evidenced by the 2 and −2 in the kernels. Thus, more weight is given to the pixel's
immediate vicinity while not ignoring the other surrounding values.
Convolving the x and y directions of the Sobel filter at point p in an image I produces the
gradient ∇I(p) = (dx, dy), which represents the change in x and the change in y, respectively.

[ −1  −2  −1 ]
[  0   0   0 ]
[  1   2   1 ]
(a) Vertical Kernel

[ −1   0   1 ]
[ −2   0   2 ]
[ −1   0   1 ]
(b) Horizontal Kernel

Figure 4.7: Sobel kernels.

The estimate of the normal to the true object boundary at the point p is given as

n = 〈dx, dy〉. (4.6)

The normalized vector is given by

n = n / |n|, (4.7)

where the |v| operation denotes the length (norm) of the vector v.
Usually, a gradient is defined to point in the direction of increase of a function.
However, because of the form of the Sobel filters, the calculated gradient points in the
direction of decreasing values in the distance fields. This means that all calculated normals
are correctly directed away from the contour object Ω.
Figure 4.8 shows examples of the resulting normals for two sample contours from the
mouse embryo and ventricles datasets.
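Equations 4.6-4.7 can be sketched at a single pixel of a blurred image. One caveat: this sketch applies the Figure 4.7 kernels as a correlation, which points toward increasing values, so it negates the result to obtain the outward orientation described above; the thesis implementation obtains that orientation through its convolution convention directly. The toy field is illustrative.

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical kernel

def apply_kernel(image, kernel, x, y):
    # 3x3 correlation centered on (x, y); assumes (x, y) is an interior pixel.
    return sum(image[y + j - 1][x + k - 1] * kernel[j][k]
               for j in range(3) for k in range(3))

def normal_at(image, x, y):
    dx = apply_kernel(image, SOBEL_X, x, y)
    dy = apply_kernel(image, SOBEL_Y, x, y)
    length = math.hypot(dx, dy)
    # Negate so the normal points from high values (inside) toward low
    # values (outside), i.e., away from the object.
    return (-dx / length, -dy / length)

# Toy blurred field: values fall off to the right, so the outward normal at
# the middle pixel points in +x.
field = [[1.0, 0.5, 0.0],
         [1.0, 0.5, 0.0],
         [1.0, 0.5, 0.0]]
n = normal_at(field, 1, 1)
```

Normalization by the gradient magnitude (Equation 4.7) makes the result a unit normal regardless of how steep the blurred edge is.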
4.3 Inter-slice Interpolation
Once the points and normals for a single contour have been obtained, an MPU function
is used to generate a 2D distance field around the contour. It is important to emphasize that
Figure 4.8: Examples of estimated normals in 2D. Normals have been enlarged for emphasis. (a) Contour from mouse embryo. (b) Contour from ventricles.
distances that MPU implicits generate are accurate close to the contour, and decrease in
accuracy as the function is evaluated at points further away from the contour. Therefore,
the MPU function is utilized to generate a narrow band of distances around the original
data points, as shown in Figure 4.9(a). This band is wide enough to accommodate fluctuations
in the approximated contour. The width is controlled by the tolerance parameter tol
(see Section 3.2). Since the MPU function is signed, distances that lie inside the contour
are positive, and distances that lie outside the contour are negative. The narrow band of
signed distances is then grown using a marching method with a correctness criterion [30].
This results in a complete 2D distance field, shown in Figure 4.9(b). One may note that
using a marching method algorithm with a fixed step size defeats the purpose of the accurate
distances that are obtained from a function evaluation. However, this is not the case.
The marching method creates a more accurate Euclidean distance field extending from the
narrow band of accurate distances than a function evaluation would have provided. It is
necessary to have the complete distance field in each slice in order to avoid a stair-step
effect in the 3D reconstruction, which can occur if there is no overlap between the narrow
distance bands of neighboring contours.
Figure 4.9: (a) Narrow band of accurate distances generated by the MPU function. (b) Full distance field generated from the narrow band of distances.
4.3.1 Linear Interpolation
In order to complete the reconstruction of a full 3D model, the 2D distance slices must
be joined together. We have investigated two methods for this. The first uses linear
interpolation between slices.
By stacking all of the 2D contour distance fields on top of each other, we create a
3D distance volume. A linear interpolation between slices is achieved by using a marching
cubes [28] algorithm on the distance volume, producing a polygonal isosurface. This
produces only mediocre results, and each contour is clearly visible in the 3D model.
Figure 4.10 shows why this is not very desirable. C^0 continuity is only present in the z
direction. Continuity in the xy-plane is higher (C^2) because of the smooth function
approximation. Note that this still results in a better reconstruction than the method mentioned
by Klein [22]. See Section 7.1 for a detailed comparison and examples.
Figure 4.10: The result of applying the marching cubes algorithm to the stacked distance volume without any smoothing or filtering in the z direction.
4.3.2 Spline Interpolation
Instead of using linear interpolation within either the distance field itself or the marching
cubes algorithm, it is possible to use a smooth function such as a spline for the
interpolation in the z direction. Depending on the exact spline curve type that is used for the
reconstruction, it is easy to guarantee a high degree of continuity between slices.
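Per-voxel spline interpolation in z can be sketched with a Catmull-Rom spline (C^1 continuity) through the distance values of four consecutive slices at one (x, y) location. The spline type and the sample values are illustrative; the thesis does not fix a particular curve.

```python
def catmull_rom(p0, p1, p2, p3, t):
    # Catmull-Rom spline: interpolates between p1 (t = 0) and p2 (t = 1),
    # with p0 and p3 shaping the tangents at the endpoints.
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t)

# Distance values at one pixel across four consecutive slices; a value for a
# new slice halfway between slices 1 and 2 is smoothly interpolated.
d = [4.0, 1.0, -1.0, -2.0]
mid = catmull_rom(*d, 0.5)
```

Note that the spline midpoint (−0.125 here) differs from the linear midpoint (0.0): the spline bends toward the trend of the neighboring slices, which is what smooths out the stair-step transitions in z.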
A drawback of this approach is that volumes have a fixed width, height, and depth,
and are integer indexed. It is impractical to introduce extra information into a volume that
represents a uniform sampling, as the extra information will stretch out the volume and
the reconstructed model. Thus, inter-slice interpolation methods that generate intermediate
contours are only useful for data whose in-plane to out-of-plane aspect ratio is non-uniform.
Another drawback of this approach is that there are visible medial axis discontinuities
in the distance fields. Interpolation of these fields can result in problems, as shown by
Sandholm and Museth [44].
4.4 Sampling
The issue of sampling differentiates contour reconstruction from most other forms of
reconstruction. Let us suppose that the contour data comes from histological images
of a mouse brain. The in-plane resolution of the camera is about 8 microns per pixel,
while the out-of-plane (z direction) resolution is only about 16 microns per slice. This
results in anisotropic sampling of the data, which must be considered in order to obtain a
reconstruction with an accurate aspect ratio.
4.4.1 Uniform Sampling
Contour data has uniform sampling if the data is provided in equal resolution in each
of the x, y, and z directions. In this case, the use of the reconstruction methods described
in this chapter does not require any special considerations. This is because voxels in
the distance volume created from a stack of 2D distance fields have an accurate aspect
ratio, such that the x and y dimensions of a voxel have the same meaning as the z dimension.
However, real-world biological and histological data do not always come with convenient
uniform sampling.
4.4.2 Non-uniform Sampling
Some biological data that come from real-world data acquisition devices are anisotropic.
Very often the xy-plane imaging capabilities are much higher than the slicing (or out-of-plane
imaging) capabilities. There are two methods for addressing this problem. First, it
is possible to perform the reconstruction normally as described above and then scale down
(or scale up) the resulting mesh in the appropriate direction. Scaling the mesh, however,
can lead to visual artifacts created through the shrinking of a mesh in one dimension,
or artifacts can be accentuated if a mesh is stretched out in one direction. An example of
the accordion effect from shrinking a mesh is shown in Figure 4.11, where slow changes
of the surface in the z direction are transformed into fast and sharp changes in the scaled
mesh.
Figure 4.11: The effect of compensation for non-uniform scaling at the mesh level. (a) Anisotropic reconstruction where the xy:z ratio is 5:1. (b) Compensation by rescaling of the mesh; the pancake effect is visible. (c) Correct reconstruction.
Another approach to the anisotropic reconstruction problem is to simply discard extra
information or to approximate missing information. The following example illustrates this
method as well as outlines its advantages and disadvantages. Suppose that the xy resolution
of the given data is 100×100 and there are 50 such slices. Also assume that in order to
have uniform sampling of the model, it is necessary to have 100 slices. Therefore, the xy
to z ratio is 2:1.
Reducing the resolution of each of the slices to 50×50 will make the data uniform at the
expense of losing detail that could have otherwise been incorporated into the reconstruction
process. This approach may be acceptable in some situations. However, most biological
and medical applications rely on the full use of their imaging capabilities, and it is not
desirable to lose data in such a way.
Extending the previous example, suppose that instead of discarding existing data, the
missing data is approximated. It is possible to approximate the missing slices by repeating
every slice once, or by creating a new slice between each existing slice that is the linear
interpolation of the two surrounding slices. This produces a uniform sampling that is more
desirable because no information is lost during reconstruction. As a step beyond linear
interpolation, the spline interpolation techniques described in Section 4.3.2 can be
used here instead. As the ratio between the xy-plane and the z direction resolution increases
(more and more slices are missing in the z direction), methods such as spline interpolation
become increasingly important for producing high quality reconstruction results.
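The slice-doubling idea above, with linear interpolation, can be sketched as follows for a 2:1 ratio. The helper name and 2×2 slices are illustrative.

```python
def interpolate_slices(slices):
    # Insert a linearly interpolated slice between each pair of existing
    # slices, turning a 2:1 xy-to-z ratio into a uniform 1:1 sampling.
    out = []
    for a, b in zip(slices, slices[1:]):
        out.append(a)
        out.append([[(va + vb) / 2.0 for va, vb in zip(ra, rb)]
                    for ra, rb in zip(a, b)])
    out.append(slices[-1])
    return out

# Two 2x2 slices become three: original, midpoint, original.
s0 = [[0.0, 0.0], [0.0, 0.0]]
s1 = [[2.0, 2.0], [2.0, 2.0]]
stack = interpolate_slices([s0, s1])
```

Swapping the per-voxel average for a spline through four neighboring slices (Section 4.3.2) upgrades this from C^0 to C^1 continuity in z.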
5. Volume-based Reconstruction
Full volume-based reconstruction without the intermediate 2D steps is the ultimate goal
of our work. The methods presented in this thesis so far can be extrapolated to
perform full 3D reconstructions. In the volume-based approach we present the novel idea
of considering the set of contour pixels as a point set in R^3. Although point sets have been
investigated before, to the author's knowledge these methods have not been specifically
applied to the contour reconstruction problem.
Generic point-set surface reconstruction methods either assume information that is not
available in contour reconstruction or may not perform well under all conditions. However,
when viewed in the context of medical contour data, it is possible to get a much better result
from contour point set models through exploitation of the knowledge and structure of the
contours and the objects that they represent. Our approach performs well when accurate
surface normals are given, and uses domain specific knowledge to obtain these normals.
The decoupling of the reconstruction process allows for optimization of each component
independently of the others.
The problem of generating normals for an arbitrary set of points in space has been
studied by Mitra et al. [34]. However, it is possible to exploit the nature and the organization
of the contour points in a way that permits the proposed approach to generate accurate
normals at each point. This is the most important step in the reconstruction process. The
estimated normals are used together with the point set derived from the contours in order
to create a smooth surface using MPU implicits.
5.1 Surface Normal Estimation
The MPU function is designed to run on a 3D point set. However, as mentioned
previously in Section 3.3, accurate normals are imperative for MPU implicits. The approach
for approximating these normals is similar to the 2D case, except that a 3D Gaussian kernel
is used instead of a 2D one. The same smoothing and gradient calculation approach can be
used with the 3D point set representation because of the inherent structure of the contours.
Each contour consists of a set of pixels in 2D. By stacking the points for each contour
on top of each other while incrementing the z value for each slice, 2D points are transformed
into 3D points. Thus, the result is a 3D binary volume V_Ω that has integer indices. This is
a binary volume because each voxel is marked as lying in or on the object Ω, or not (thus
requiring only binary coding).
Just as in the 2D case, it is necessary to identify inside voxels in order to keep track of
the inside of the volume. The inside voxels of the volume correspond directly to the inside
points of each of the 2D contours. Since that information can be made readily available,
the concept is easy to extend to 3D, producing a filled 3D volume.
Smoothing the volume V_Ω with a 3D Gaussian kernel results in an edge blurring effect
similar to that achieved in 2D. The 3D Gaussian kernel is a 3×3×3 matrix sampled from

F(x, y, z) = (1 / (√(2π) σ)) exp(−(x^2 + y^2 + z^2) / (2σ^2)). (5.1)

In the implementation, σ varied between 1 and 3, depending on the size of the point set.
By calculating the gradient of the blurred volume V at each data point in the contour
point set, it is possible to measure the rate of change in each of the three directions: x, y,
and z. Thus, similarly to the 2D case (see Section 4.2.4), the normalized normal at point p
in R^3 is

n = 〈dx, dy, dz〉 / |〈dx, dy, dz〉|, (5.2)

where dx, dy, and dz are the components of the gradient ∇V(p) at point p = (x, y, z). The
smoothed volume is no longer a binary volume because it must be able to represent values
of variable intensity.
Using this approach, it is not necessary to deal with the orientation of the normals,
as other normal generation techniques, such as Mitra et al. [34] and Hoppe et al. [18],
inevitably must. Since the values of the blurred volume range from high on the inside to low on
the outside of the model, the gradient slopes away from the model due to the Sobel filters
that are used, which is exactly the correct orientation for the normals.
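Equation 5.2 can be sketched at one voxel of a blurred volume. Central differences stand in here for the Sobel-style filter, and the result is negated so the normal points from high (inside) toward low (outside) values, matching the outward orientation described above; the volume below is a toy example.

```python
import math

def normal_at(V, x, y, z):
    # Finite-difference gradient of the blurred volume V at an interior
    # voxel, indexed V[z][y][x].
    dx = V[z][y][x + 1] - V[z][y][x - 1]
    dy = V[z][y + 1][x] - V[z][y - 1][x]
    dz = V[z + 1][y][x] - V[z - 1][y][x]
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Negate so the normal points away from the high-valued inside region.
    return (-dx / length, -dy / length, -dz / length)

# Toy blurred volume: intensity falls off with x, so the outward normal at
# the middle voxel points in +x.
V = [[[1.0 - 0.25 * x for x in range(3)] for _ in range(3)] for _ in range(3)]
n = normal_at(V, 1, 1, 1)
```

Because the blurred volume decreases monotonically across the boundary, no separate orientation-consistency pass (as in Hoppe et al. [18]) is needed.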
5.2 Surface Reconstruction Quality and Accuracy
Once the point and surface normal information is obtained, the surface is reconstructed
using an MPU function. There are two main parameters of the MPU function that are used
to generate reconstructions of varying quality and smoothness. The first is the minimum
number of points (N_min) that must be included in every ball of radius R that is centered at
each octree cell (described in Section 3.2). The second is a relative tolerance (tol) value
(also described in Section 3.2) that guarantees that the reconstructed surface lies within tol
distance of the input data points. Both of these parameters control the quality of the
reconstruction. Increasing either of the two parameters results in a higher degree of smoothing.
A smaller tolerance value results in a better approximation (truer to the original data points)
locally, where the local area is defined by N_min. Increasing the N_min parameter while
keeping tol constant results in a smoother, wider area that is still subject to the constraint of the
tolerance. Experimentation has also shown that incorrect values for these parameters can
cause spurious surface artifacts. For example, setting the tol parameter to very low values
can cause interpolation (as opposed to approximation) artifacts, such as small spurious
protrusions or dents in the reconstructed surface. The MPU algorithm is also designed to
deal effectively with sharp edges and corners. However, this feature is not necessary for
biological data and has been disabled in our studies.
5.3 Sampling
5.3.1 Uniform Sampling
Under isotropic sampling, the reconstructed surface already has the correct aspect ratio.
After generating a surface for a given uniformly sampled model, the reconstruction is
evaluated for accuracy against the input point set. The error measurement is the algebraic distance
between an input data point and the closest point on the reconstructed surface. Statistics of
these measurements are generated for every input point to establish the amount of deviation
from the reconstructed surface. The process of error measurement ignores the sign of the
distances that are obtained and only measures the absolute value of the deviation.
5.3.2 Non-uniform Sampling
Due to the quality of scanning devices, it is possible that imaging resolution in the xy-plane
is higher than the slicing capability. For example, regardless of the units or precision used,
imaging resolution is only accurate to the pixel, whereas slices could be two, three, or more
pixels apart from each other. Such acquisitions produce non-uniform resolution data.
Data that is originally non-uniform must be accommodated by an appropriate
transformation of each input point in R^3. The resulting points must describe a model with correct
relative dimensions in each direction. The transformation consists of a scaling operation
that either expands or condenses the points. Since each point is simply a point in R^3 and
not necessarily constrained to a volumetric grid, scaling each point is a simple procedure.
However, it is necessary to obtain surface normals for the data before any such scaling
takes place. After the scaling, the grid layout of the voxels in the volume is lost, and with
it the ability to construct normals. Thus the transformation procedure must occur after the
normal generation phase of the reconstruction process. Once a normal is obtained for each
input point, the grid structure of the points is no longer necessary and can be discarded, leaving
only a list of points and their corresponding normals.
Care must be taken when applying the scale transformation to point normals. A surface
normal is not an ordinary vector, but a property of a surface; normals are not invariant under
anisotropic scaling. Therefore, special care must be taken to properly scale normals.
Let us define a matrix M that performs the necessary scaling transformation on anisotropic
points, a plane (without loss of generality) defined by a tangent vector t that represents the
geometry, and a normal n that is orthogonal to the plane and vector t. When the transformation
M is applied to t, resulting in t′, the transformed normal n′ must remain orthogonal
to t′. Let the transformation matrix that achieves this constraint be A, i.e., n′ = An. Thus

n′ · t′ = (An)^T (Mt) = n^T A^T M t = 0.    (5.3)

Solving Equation 5.3 for the normal transformation matrix A^T results in A^T = M^{-1}. Thus the
transformation matrix A that must be applied to the normal is A = (M^{-1})^T [31].
It is necessary to analyze the effect of non-uniform contour reconstructions by artificially
creating non-uniform variants of uniformly distributed data. Suppose that a 100 ×
100 × 100 volume is a uniform sampling. It is then possible to artificially create non-uniform
variations by only using every 2nd, 3rd, 4th, etc. slice, resulting in volumes whose
dimensions are 100 × 100 × 50, 100 × 100 × 33, and 100 × 100 × 25, respectively. The
interesting observation about the artificially sampled data is that the missing data is known:
it consists of those points from the original model that were not chosen to be included in
the non-uniform sample. The quality of the reconstruction can be evaluated using both the
complete data and just the missing data. For instance, given an isotropic point set
P = {(x1, y1, z1), (x2, y2, z2), . . . , (xn, yn, zn)}, an anisotropic set Peven can be formed by
removing from P every point with an odd z-coordinate: Peven = P − {(xi, yi, zi) : zi is odd}.
Therefore Podd = P − Peven. Once these sets are obtained, a reconstruction that is performed
using only the points in the set Podd is evaluated not only against the entire set P, but
also against only the points in Peven. This procedure provides an accurate and controlled error
measurement for the anisotropic reconstruction process.
5.4 Arbitrary Cross-sections
Since MPU implicits define a 3D distance field, it is possible to intersect this field with
an arbitrary plane. By evaluating the MPU function at points on this plane, a 2D distance
field is created. This distance field is similar to the ones used in slice-based reconstruction;
a 2D contour can be easily produced by examining the zero crossings of the distance field.
The ability to create contours from arbitrary planes through the reconstructed surface can
be a powerful tool in medical analysis.

An arbitrary cut plane is defined by a center point p and a normal n to the plane. The
dimensions of the resulting 2D distance field correspond to the diagonal of the bounding
box of the original point set. The point p is defined to lie at the center of this field. Let
the unit vectors s and t be two orthogonal vectors that lie on the plane. For every 2D
integer coordinate (x, y), the point pi = p + sx + ty is calculated and is used to evaluate the MPU
function. The value is recorded at point (x, y) in the output 2D distance field.
6. Implementation
The techniques discussed in this thesis have been implemented, and empirical testing
was performed in order to validate the claims of this thesis. The implementation consists
of software and hardware components, which are described below.
6.1 Hardware
The software implementation was written on an Apple G5 running on Mac OS X 10.3.x.
The testing computer has two 2.2 GHz processors and 1 GB of RAM.
6.2 Software
All software and code used is written in C++ and compiled using the GNU C++ compiler,
version 3.3 (GCC 3.3). The code has also been successfully compiled and run on
various other Unix and Linux operating systems.
The ImageMagick C++ API [19] was used to implement flood fill and some of the
other image manipulation algorithms used in this work.
The MPU implicit implementation is based on the implementation by Yutaka Ohtake [38].
The original code is written for Microsoft Visual Studio 6; it has been adapted for use at a
command line, and some parts were rewritten to work under GNU C++ on a Unix-based
operating system. Some other modifications to the code were performed in order to
attain maximum user control of the reconstruction process, including the ability to perform
reconstruction for 2D images.
6.2.1 Convolution Optimization
The convolution operation described in Equation4.3 can be implemented more opti-
mally if the filter used isseparable[52]. A separable 2D filterH can be decomposed into
two 1D filtersHi andH j such that
H = HiHTj .
Thus, the convolution operation can be evaluated as the product of two 1D filters, in-
stead of one 2D filter. This reduces the complexity of the convolution operation (per pixel)
from O(n2) to O(n), wheren is the size of the filter.
Fortunately, both the Gaussian kernel and the Sobel filters are separable. For example,
a 2D Gaussian is simply the product of two 1D Gaussians:

G2D(x, y) = G1D(x) [G1D(y)]^T.    (6.1)

By extension, a 3D Gaussian is the product of three 1D Gaussians. Similarly, a 2D Sobel filter
can be decomposed into its 1D components:
[ -1  -2  -1 ]   [ -1 ]
[  0   0   0 ] = [  0 ] [ 1  2  1 ].    (6.2)
[  1   2   1 ]   [  1 ]
This optimization is implemented and used for both 2D and 3D convolution operations and
dramatically reduces computation time.
6.2.2 2D Distance Fields
The MPU implicits implementation is written for 3D surface representation. However,
we use the MPU function to generate 2D distance fields as well. In order to take advantage
of the existing implementation of MPU implicits, we have devised a procedure to produce
the desired effect. Once a surface normal has been estimated for each point on a 2D contour,
a point set model is generated by stacking several copies of the contour on top of itself.
Thus the MPU surface defined by this stack is topologically equivalent to a cylinder (or
several cylinders if there are several closed contours in one slice). By evaluating the MPU
function at a fixed z value (preferably at the midpoint of the z dimension of the point set's
bounding box) over the xy plane, a 2D distance field is produced.
6.2.3 Volume Smoothing Optimization
Performing the convolution operation on a 3D volume would normally take O(n^3) time,
where n is the dimension of the volume. However, in practice, it is not necessary to do this
for the whole volume. Since the area of interest for the gradient is immediately around a
specified pixel, it is possible to optimize the volume convolution procedure to only operate
on voxels that will be used in the gradient calculation. For example, the Sobel filter is (in the
3D case) a 3×3×3 matrix that only requires information one voxel away from the center
voxel. This observation is used in our implementation to dramatically
reduce computation time and memory requirements.

We iterate over every input point p and add every surrounding voxel location in the
binary volume to the set C (without repetitions). In the next step, we only perform the
convolution operation on the voxels recorded in the set C. Gradient calculations are only
performed for each of the input points. This reduces the complexity of the entire procedure
to be linear in the number of input points.
6.2.4 Challenges of Large, High Resolution Data
High resolution data causes not only theoretical problems, but also practical physical
limitations. For example, our brain dataset, even reduced to 25% of its original
size in the xy-plane, still contains 314 contour images that are 730×525 pixels each. On
our test system, a float is 4 bytes (32 bits), so storing all of these images in a 3D volume
of float values would take

730 × 525 × 314 × 4 = 481362000 bytes ≈ 459 MB.

On a system that only has 512 MB of RAM, loading such a large file into RAM would slow
the system to a crawl as the operating system swaps kernel and other program code in and out.

If we were working with the full resolution model, our volume would have to be

2920 × 2098 × 314 × 4 = 7694456960 bytes ≈ 7.17 GB.

It is virtually impossible to load such a file into memory even on standard high-end
computers. Therefore, we had to devise a method that would allow us to work with such
files. Our solution uses memory-mapped files. A memory-mapped file allows access to
the file contents as if the file were a simple array. Memory mapping is accomplished with
the mmap() system call. However, system limitations dictate that a single process is only
allowed to access 1 GB of memory-mapped space at a time. Thus, in order to deal with
files that are larger than 1 GB, we implemented a swapping system that properly maps
and unmaps parts of the file as necessary. The concept works similarly to memory paging
in modern operating systems. It is thus guaranteed that every access into the array that
represents the file is valid, regardless of the extent of the previously mapped region. A new
region is remapped and loaded if necessary.
7. Results
Both the slice-based and volume-based reconstruction methods have been implemented
and evaluated, and the results are presented in this chapter. First, the slice-based approach
that has been popular with most contour reconstruction techniques is discussed and compared
to some methods in the literature. Then, results from our novel approach of considering
contours as a single point set in R3 are presented. The reconstruction quality is
examined in detail through analysis and error measurements of the reconstructed models.

The presented results are based on the following specimens:

Mouse embryo, heart, stomach, and tongue – Contours generated from an MRI scan of
a 12-day-old mouse embryo. Data provided by the Caltech Biological Imaging Center.

Mouse brain – Contours generated by calculating the outline of histological images of a
mouse brain. Data provided by the Laboratory for Bioimaging and Anatomical Informatics [24].

Human brain ventricles – Contours generated from segmentation of diffusion tensor magnetic
resonance imaging (DT-MRI) of a human brain [54].

Pelvis – Contours acquired from the publicly available database at Tel Aviv University.

Unlike many examples in the literature, all of the examples presented here use flat
shading rather than Gouraud shading (which makes models appear smoother than they
actually are), unless otherwise noted. Thus what is shown in our screenshots is the true
geometry of the reconstructed model.
7.1 Slice-based Approach
Given contours as 2D slices, this approach computes a 2D distance field using an
MPU implicit function that approximates the individual contours. These distance fields
are stacked to produce a volume dataset, and a marching cubes algorithm [28] is used to
extract a polygonal isosurface representing the zero set of the volume.

Figure 7.1 shows reconstructions similar to those of Jones [20] in the left column,
and our reconstructions, obtained from stacking 2D distance slices produced by the MPU
method, in the right column. In both cases, a marching cubes algorithm was run on the
distance volumes. There is a clear improvement in our results over those produced from
distance fields to contour pixel centers.
7.2 Volume-based Approach
In the following sections we present examples of our reconstructions using the full
volume-based approach. We examine how the procedure works for isotropic data, then
investigate its performance for anisotropic data and present comparisons to popular
mesh stitching methods. Throughout this section, the same set of MPU parameters is
used for a given model. For convenience, these parameters are also reproduced in every
applicable table.
7.2.1 Isotropic Data
First, isotropic data is investigated. The mouse heart, stomach, and tongue datasets are
all isotropic and have equal sampling in the x, y, and z directions. Figures 7.2 and 7.3
show our reconstructions using MPU implicits. Table 7.1 lists characteristics for each of
the datasets.

Figure 7.1: Results from slice-based reconstruction. (a) Marching cubes on a discrete distance volume. (b) Marching cubes on stacked MPU distances.

Table 7.1: Characteristics of isotropic input data

name                | number of slices | resolution | total # of data points
embryo              | 186              | 122×128    | 46204
heart               | 34               | 89×98      | 4528
stomach             | 34               | 90×63      | 4088
tongue              | 32               | 90×120     | 6842
brain (downsampled) | 314              | 730×525    | 500K
Figure 7.4 compares several reconstruction methods: the Klein [22] result, the reconstruction
using NUAGES [41], and our slice-based and volume-based reconstruction methods.
The Klein result is extracted from an image in [22], and uses slightly different data. The
remaining three models are based on the exact same dataset. Based on these results, it
is evident that meshing techniques produce faceted surface models and require
post-processing in order to smooth them. Our approach produces a naturally smooth
surface without the need for any additional steps. A detailed comparison of
reconstructions performed with the method presented in this thesis against those of a
commercial software package is discussed in Section 7.3.
Isotropic Statistics
Although the reconstructions are visually appealing, it is necessary to quantify the quality
of the reconstructions. This is accomplished by evaluating the implicit function at each
of the input data points. Since the MPU function is a distance function, it produces the
distance between each of the input points and the reconstructed surface. We call this distance
the error of the reconstruction. However, the error can also be considered to be the degree
of smoothing of the reconstruction, given a noisy data set. The minimum, maximum,
median, arithmetic mean, and standard deviation of the error values for each reconstructed
model are shown in Table 7.2. The respective computation times (given as system time) for
these results are displayed in Table 7.3. All of the error metrics that are presented in this
thesis are in voxel units. That is, an error of 2 signifies that the surface lies 2 voxels away
from the contour dataset.

Figure 7.2: Reconstruction of the isotropically sampled mouse heart, stomach, and tongue data, with approximation errors. Panels show each point set, the error increasing from yellow to red, and areas with error > 1.

Figure 7.3: Reconstruction of the isotropically sampled embryo and brain data, with approximation errors.

Figure 7.4: Comparison to Klein results for similar (but not identical) pelvis datasets. (a) Klein [22] reconstruction (Gouraud shaded). (b) NUAGES reconstruction. (c) Our slice-based reconstruction (Gouraud shaded). (d) Our volume-based reconstruction.

Table 7.2: Approximation quality of reconstructed surfaces for isotropic data with specified MPU tol and Nmin parameters. Metrics are calculated in units of voxels.

name                | min      | max    | median | mean   | st dev | tol | Nmin
embryo              | 3E-06    | 1.7273 | 0.2788 | 0.3079 | 0.2126 | 3.5 | 200
heart               | 1.45E-04 | 1.3217 | 0.2403 | 0.2782 | 0.2052 | 2.5 | 100
stomach             | 2.48E-04 | 1.6821 | 0.2473 | 0.2897 | 0.2213 | 3.0 | 100
tongue              | 6.6E-05  | 1.4534 | 0.2777 | 0.3151 | 0.2247 | 2.5 | 100
brain (downsampled) | 4E-06    | 6.5316 | 0.5669 | 0.7183 | 0.6155 | 20  | 200

Table 7.3: Execution times for embryo, heart, stomach, tongue, and brain reconstructions in Table 7.2.

name                | normals estimation | surface reconstruction | total
embryo              | 10 sec             | 6 sec                  | 16 sec
heart               | 1 sec              | 3 sec                  | 4 sec
stomach             | 1 sec              | 3 sec                  | 4 sec
tongue              | 1 sec              | 2 sec                  | 3 sec
brain (downsampled) | 23 min, 6 sec      | 20 sec                 | 23 min, 26 sec
Although the maximum error values can be as high as about 2 voxels for the embryo
model, most tend to be in the neighborhood of 1.5 voxels. However, the mean and standard
deviation measures are low. For all of the smaller models (embryo, heart, stomach, and
tongue), most points (whose error measurements are within one standard deviation of the
mean) are approximated with a surface point that lies less than 0.5 voxels away. An even
higher majority of points (with error measurements within three to four standard deviations)
lies within one voxel of their respective points on the reconstructed surface. This is
significant because for the vast majority of input points, the error of the reconstructed surface
is sub-pixel. The input voxels (or pixels in the contours) are only representative of a point
somewhere inside that pixel. Sub-pixel errors are admissible even if an accurate reconstruction
is desired, because the surface still lies somewhere within the bounds of the original
pixel, just not necessarily at its center. For a much larger model such as the brain, we have
relaxed the tolerance parameter. This results in a higher maximum value, but the mean
and standard deviation still stay relatively low, despite there being about 500,000 data
points.
Table 7.3 shows that the procedure presented in this thesis is very fast, requiring mere
seconds to perform surface reconstructions. The execution times for the “surface reconstruction”
phase include the evaluation of the MPU function and the generation of a triangular
mesh from the function. The execution time for the “normals estimation” phase for
the brain dataset is rather high. However, normals estimation is only a one-time computation.
Once points and normals have been obtained, a surface reconstruction at a desired
level of quality can be achieved in only about 20 seconds.
7.2.2 Anisotropic Data
The following data with anisotropic sampling was reconstructed: ventricles, pelvis,
mouse brain, and embryo. Characteristics of the input data are summarized in Table 7.4.
In order to quantify the quality of the reconstruction for anisotropic data, we use a method
of artificially removing data from originally isotropic datasets. Thus, we can evaluate the
quality of the reconstruction for all points and for only the removed points. For example, if
we perform the reconstruction on only the even slices of the mouse embryo, we can gather
error statistics only for points that lie on the odd slices of the embryo. This provides a
controlled environment for anisotropic reconstruction analysis.
56
The artificial anisotropic data is generated from the mouse brain and embryo datasets.
The ventricles and pelvis data are originally anisotropic, and therefore we perform the same
type of analysis on the reconstructed surface that we did in Section 7.2.1. We have included
information on the original brain data in Table 7.4 for completeness. Due to the computational
complexity required to process a dataset this large, we have instead decided to
downsample it to 25% of its original resolution and refer to it as the brain (downsampled)
dataset. Details of the ventricles and pelvis datasets are shown in Figure 7.6.

Table 7.4: Characteristics of anisotropic input data

name                          | # of slices | resolution | xy:z | # of data points
brain (original)              | 314         | 2920×2098  | 2:1  | >> 2M
brain (downsampled: even/odd) | 157         | 730×525    | 2:1  | 278K
embryo (even/odd)             | 93          | 122×128    | 2:1  | 23080
ventricles                    | 36          | 195×285    | 8:1  | 19800
pelvis                        | 26          | 500×500    | 8:1  | 21650
Anisotropic Statistics
The results from the artificial anisotropic data are shown in Tables 7.5 and 7.7. Table 7.5
contains statistics based on the evaluation of anisotropic reconstructions using all known
data points. The ventricles and pelvis datasets are also included in this group because all
data points were used to evaluate the surface reconstruction quality. Computation times
(given as system time) for the ventricles and pelvis data are shown in Table 7.6. Table 7.7
contains statistics based on the evaluation of anisotropic reconstructions using only the
missing data points. These evaluations were performed on the embryo and brain datasets.
57
Figure 7.5: Reconstruction of the anisotropic embryo and brain datasets (all, even, and odd contours), together with the ventricle and pelvis reconstructions.
58
Figure 7.6: Details of the ventricles and pelvis datasets showing approximation errors. Panels show each point set, the error increasing from yellow to red, and areas with error > 1.
We can conclude that MPU implicits handle missing slices very well, as there is no
significant increase in error of the reconstructed surface when evaluated at all of the original
as well as the missing slices. The mean and standard deviation values for the embryo
and brain datasets in Tables 7.5 and 7.7 are only slightly higher than their counterparts in
Table 7.2.

The error measurements for the pelvis and ventricles datasets show fairly low mean
and standard deviation values despite highly uneven sampling. The pelvis dataset performs
very well, despite having more points than the ventricles dataset.

Table 7.5: Approximation quality of reconstructed surfaces for anisotropic data with specified MPU tol and Nmin parameters. Metrics are calculated in units of voxels.

name        | min    | max    | median | mean   | st dev | tol  | Nmin
embryo even | 12E-06 | 2.2293 | 0.2822 | 0.3190 | 0.2346 | 3.5  | 200
embryo odd  | 3E-06  | 2.2726 | 0.2810 | 0.3182 | 0.2329 | 3.5  | 200
brain even  | 8E-06  | 6.6521 | 0.5748 | 0.7239 | 0.6082 | 20   | 200
brain odd   | 0.00   | 7.7753 | 0.5842 | 0.7482 | 0.6220 | 20   | 200
ventricles  | 7E-06  | 3.6074 | 0.4808 | 0.5859 | 0.4652 | 6.50 | 200
pelvis      | 0.00   | 3.1170 | 0.2181 | 0.2837 | 0.2587 | 6.00 | 200

Table 7.6: Execution times for ventricles and pelvis reconstructions in Table 7.5.

name       | normals estimation | surface reconstruction | total
ventricles | 6 sec              | 3 sec                  | 9 sec
pelvis     | 13 sec             | 7 sec                  | 20 sec

Table 7.7: Approximation quality of reconstructed surfaces for anisotropic data with specified MPU tol and Nmin parameters. Metrics are calculated in units of voxels.

name          | min   | max    | median | mean   | st dev | tol | Nmin
embryo even * | 9E-06 | 2.1368 | 0.2848 | 0.3243 | 0.2391 | 3.5 | 200
embryo odd *  | 3E-06 | 2.3092 | 0.2847 | 0.3252 | 0.2407 | 3.5 | 200
brain even *  | 8E-06 | 6.6521 | 0.5777 | 0.7282 | 0.6114 | 20  | 200
brain odd *   | 0.00  | 7.7753 | 0.5908 | 0.7488 | 0.6347 | 20  | 200

* Error calculations for these reconstructions were performed with the known missing data.
7.3 Comparison to Commercial Methods
We have evaluated the quality of our surface reconstruction technique against
amira 3.1 [32], a commercial visualization and scientific analysis software package.
In order to mimic the actual process of contour delineation and subsequent surface
reconstruction, we created the following datasets for evaluation. First, we created
an artificial set of contours representing the ground truth for a specimen. Next, in-plane
and out-of-plane noise was added to the contours as specified by [12] and [3], respectively.
The in-plane noise consists of a random shifting of a contour pixel along its normal to the
contour by up to 0.25 pixels. The out-of-plane noise is applied to all pixels in a contour
simultaneously and consists of a rotation and a translation operation such that the maximum
shift for any pixel in the contour does not exceed 1.3966. The resulting noisy contour data
consists of 174 slices, each having a resolution of 204×231. The total number of contour
points is 48238.
Three surfaces are then generated from the noisy contour data. The first is created
with the volume-based reconstruction technique presented here. We refer to this surface as
volume-based. The MPU parameters tol and Nmin are set to 2 and 100, respectively. The
next two surfaces are created by amira. The first, amira-non-smooth, is not smoothed by
amira. The second, amira-smooth, is smoothed by amira after the reconstruction. These
surfaces are shown in Figure 7.7. By calculating distances from the input contours to these
surfaces, we are able to evaluate how closely they approximate the input.
The results in Table 7.8 are presented as surface evaluations against the original and noisy
contours. There is no significant difference in error measurements for the amira-generated
surfaces when evaluated against either the original or noisy contours. However, when the
volume-based surface is evaluated against both sets of contours, a much closer fit is observed.
In fact, the volume-based surface is closer to the original (ground truth) contour
points than to the noisy contour points. This confirms our hypothesis that the volume-based
reconstruction method is able to deal effectively with noisy contours.

Figure 7.7: The three surfaces reconstructed from artificially noisy contour data: (a) volume-based, (b) amira-non-smooth, (c) amira-smooth.

Table 7.8: Approximation quality for volume-based, amira-non-smooth, and amira-smooth surfaces. The best reconstruction is marked by a *.

surface          | contours | min     | max    | median | mean   | st dev
volume-based *   | noisy    | 3.1E-05 | 1.8954 | 0.2778 | 0.3210 | 0.2398
volume-based *   | original | 3.1E-05 | 1.2849 | 0.2590 | 0.2826 | 0.1907
amira-non-smooth | noisy    | 0.2886  | 2.4748 | 0.4564 | 0.5137 | 0.2591
amira-non-smooth | original | 0.2886  | 1.7911 | 0.4564 | 0.5151 | 0.2576
amira-smooth     | noisy    | 9.9E-06 | 2.5523 | 0.5112 | 0.5597 | 0.3631
amira-smooth     | original | 3.6E-05 | 1.9871 | 0.5129 | 0.5542 | 0.3454
8. Conclusions
8.1 Conclusion
The work in this thesis has shown that point set and implicit surface reconstruction
techniques can be applied effectively and efficiently to the problem of contour-based surface
reconstruction. Improving on previous results in the literature, we have shown that it is
possible to create smooth, biologically accurate 3D surface reconstructions from contours
using implicit surfaces. In order to accomplish this, a novel method for surface normal
estimation has been proposed and applied to the area of contour reconstruction. This method
uses a combination of Gaussian blurring with an edge detection algorithm in order to accurately
estimate smooth normals. Using the estimated surface normals, Multi-level Partition
of Unity (MPU) implicits are used as the underlying surface representation.

We have applied this technique to both slice-based and volume-based reconstruction
methods. The slice-based approach with trivial linear interpolation in the z direction performs
at least as well as previous contour stitching methods. The volume-based approach
produces smooth surface models capable of sub-pixel accuracy with respect to the original
data while keeping computation times low. This reconstruction procedure has also
been shown to be robust to the sampling resolution of the original data, producing
accurate reconstructions in every case.

Consequently, it has been shown that implicit surface reconstruction techniques are
an effective method of dealing with missing data, anisotropic contours, and noisy data.
Through comparison with a standard commercial software application, we have shown that
our reconstruction method effectively deals with noise that may be present in the contour
delineation process. Our conclusions are based on a statistical evaluation of the quality of
the reconstructions with respect to the input data.
The idea of considering a set of contours as a point set in R3 creates possibilities for
reconstruction of non-traditional contour data, such as non-parallel contour slices and
irregularly sampled data. The proposed method both improves the popular slice-based
reconstruction technique and advances the state of the art in volume-based 3D surface
reconstruction.
8.2 Future Work
It is still the case today that contours have a higher degree of accuracy in the xy-plane
than in the z direction. This occurs not only because of sampling resolution, but also
because contours can be misaligned from one slice to the next. Therefore, it would be
prudent to develop an adaptive technique that differentiates between input noise in the
xy-plane and the z direction. The adaptive technique would analyze curvature in all directions
at a given point and, based on this information, modify the approximation parameters.

The amount of smoothing in the z direction should be inversely proportional to the
curvature in the xy direction. If the curvature in the xy plane is low, then more smoothing
should be done in the z direction. Conversely, in areas of high xy curvature, less smoothing
should be done in the z direction to capture the finer details of the surface.

Another future consideration is to make the smoothing and surface normal generation
adaptive so that normals to detailed features are not smoothed away. It is possible to obtain
locally accurate normals by varying the Gaussian mask size and σ value at different portions
of the data. Areas of high curvature would use smaller σ values, while areas of low
curvature would use larger σ values.
Bibliography
[1] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and C. T. Silva. Point setsurfaces.IEEE Visualization 2001, pages 21–28, October 2001.
[2] Nina Amenta and Yong Joo Kil. Defining point-set surfaces. ACM Trans. Graph.,23(3):264–270, 2004.
[3] Siamak Ardekani. Inter-animal rodent brain alignment and supervised reconstruction.Master’s thesis, Drexel University, 2000.
[4] Chandrajit L. Bajaj, Edward J. Coyle, and Kwun-Nan Lin. Arbitrary topology shapereconstruction from planar cross sections.Graphical models and image processing,58:524–543, 1996.
[5] Gill Barequet, Michael T. Goodrich, Aya Levi-Steiner, and Dvir Steiner. Straight-skeleton based contour interpolation. InProc. 14th Symp. Discrete Algorithms, pages118–127. ACM and SIAM, Jan 2003.
[6] Gill Barequet, Daniel Shapiro, and Ayellet Tal. Multilevel sensitive reconstruction ofpolyhedral surfaces from parallel slices.The Visual Computer, 16(2):116–133, 2000.
[7] Gill Barequet and Micha Sharir. Piecewise-linear interpolation between polygonalslices.Computer Vision and Image Understanding: CVIU, 63(2):251–272, 1996.
[8] W. Barrett, E. Mortensen, and D. Taylor. An image space algorithm for morphologicalcontour interpolation. InProc. Graphics Interface, pages 16–24, 1994.
[9] J.-D. Boissonnat. Shape reconstruction from planar cross sections.Computer Vision,Graphics, and Image Processing, 44(1):1–29, 1988.
[10] Daniel Cohen-Or and David Levin. Guided multi-dimensional reconstruction fromcross-sections. In K. Jetter F. Fontanella and P.-J. Laurent, editors,Advanced Topicsin Multivariate Approximation, pages 1–9. World Scientific Publishing Co., Inc, 1996.
[11] Mathieu Desbrun, Mark Meyer, Peter Schroder, and AlanH. Barr. Implicit fairingof irregular meshes using diffusion and curvature flow. InSIGGRAPH ’99: Proceed-ings of the 26th annual conference on Computer graphics and interactive techniques,pages 317–324, 1999.
66
[12] J Eilbert, C Gallistel, and D McEachron. The variation in user drawn outlines on dig-ital images: Effects on quantitative autoradiography.Computerized Medical Imagingand Graphics, (14):331–339, 1990.
[13] A.B. Ekoule, F.C. Peyrin, and C.L. Odet. A triangulation algorithm from arbitraryshaped multiple planar contours.ACM Transactions on Graphics, 10(2):182–199,1991.
[14] Shachar Fleishman, Marc Alexa, Daniel Cohen-Or, and Claudio T. Silva. Progressivepoint set surfaces.ACM Transactions on Computer Graphics, 22(4), 2003.
[15] H. Fuchs, Z.M. Kedem, and S.P. Uselton. Optimal surface reconstruction from planar contours. Communications of the ACM, 20(10):693–702, 1977.
[16] Kikuo Fujimura and Eddy Kuo. Shape reconstruction from contours using isotopic deformation. Graphical Models and Image Processing, 61(3):127–147, May 1999.
[17] S. Ganapathy and T. G. Dennehy. A new general triangulation method for planar contours. In SIGGRAPH ’82: Proceedings of the 9th annual conference on Computer graphics and interactive techniques, pages 69–75, 1982.
[18] Hugues Hoppe, Tony DeRose, Tom Duchamp, John McDonald, and Werner Stuetzle. Surface reconstruction from unorganized points. In SIGGRAPH ’92: Proceedings of the 19th annual conference on Computer graphics and interactive techniques, pages 71–78, 1992.
[19] ImageMagick Studio LLC. ImageMagick Magick++ API for C++. http://www.imagemagick.org/Magick++/.
[20] Mark Jones and Min Chen. A new approach to the construction of surfaces from contour data. Computer Graphics Forum, 13(3):75–84, 1994.
[21] E. Keppel. Approximating complex surfaces by triangulation of contour lines. IBM Journal of Research and Development, 19:2–11, 1975.
[22] R. Klein and A. Schilling. Fast distance field interpolation for reconstruction of surfaces from contours. In Eurographics ’99, Short Papers & Demos proceedings, Milano, Italy, 1999.
[23] Reinhard Klein, Andreas Schilling, and Wolfgang Straßer. Reconstruction and simplification of surfaces from contours. Graphical Models, 62(6):429–443, November 2000.
[24] Laboratory for Bioimaging and Anatomical Informatics, Drexel University. http://www.neuroterrain.org/.
[25] David Levin. Multidimensional reconstruction by set-valued approximation. IMA Journal of Numerical Analysis, 6:173–184, 1986.
[26] David Levin. The approximation power of moving least-squares. Math. Comput., 67(224):1517–1531, 1998.
[27] Marc Levoy and Turner Whitted. The use of points as a display primitive. Technical Report 85–022, University of North Carolina at Chapel Hill, 1985.
[28] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In SIGGRAPH ’87: Proceedings of the 14th annual conference on Computer graphics and interactive techniques, pages 163–169, 1987.
[29] E.P. Lyvers and O.R. Mitchell. Precision edge contrast and orientation estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(6):927–937, 1988.
[30] Sean Mauch. Efficient Algorithms for Solving Static Hamilton-Jacobi Equations. PhD thesis, California Institute of Technology, Pasadena, CA, 2003.
[31] Leonard McMillan. Transforming Normals. http://www.cs.unc.edu/~mcmillan/comp136/Lecture24/Normals.html.
[32] Mercury Computer Systems, Inc. amira for Biology, Medicine and Life-Sciences. http://www.tgs.com/prodiv/amiraoverview.htm.
[33] David Meyers, Shelley Skinner, and Kenneth Sloan. Surfaces from contours. ACM Transactions on Graphics (TOG), 11(3):228–258, July 1992.
[34] Niloy J. Mitra and An Nguyen. Estimating surface normals in noisy point cloud data. In SCG ’03: Proceedings of the nineteenth annual symposium on Computational geometry, pages 322–328, 2003.
[35] National Library of Medicine at the National Institutes of Health. Visible Human Project. http://www.nlm.nih.gov/research/visible/visible_human.html.
[36] Y. Ohtake, A. Belyaev, M. Alexa, G. Turk, and H. Seidel. Multi-level partition of unity implicits. ACM Transactions on Graphics (TOG), 22(3):463–470, 2003.
[37] Y. Ohtake, A. Belyaev, and H.-P. Seidel. A multi-scale approach to 3D scattered data interpolation with compactly supported basis functions. In SMI ’03: Proceedings of the Shape Modeling International 2003, page 292, Washington, DC, USA, 2003. IEEE Computer Society.
[38] Yutaka Ohtake. Multi-level Partition of Unity Implementation. http://www.mpi-sb.mpg.de/~ohtake/mpuimplicits/.
[39] S. Osher and R. Fedkiw. Level Set Methods and Dynamic Implicit Surfaces. Springer, 2002.
[40] Stanley Osher and Ronald P. Fedkiw. Level set methods: an overview and some recent results. J. Comput. Phys., 169(2):463–502, 2001.
[41] NUAGES package for 3D reconstruction from parallel cross-sectional data. http://www-sop.inria.fr/prisme/logiciel/nuages.html.en.
[42] S.P. Raya and J.K. Udupa. Shape-based interpolation of multidimensional objects. IEEE Transactions on Medical Imaging, 9(1):32–42, 1990.
[43] Anders Sandholm. 3D reconstruction from non-euclidian distance fields. Master’s thesis, Linköping University, Sweden, 2005.
[44] Anders Sandholm and Ken Museth. 3D reconstruction from non-euclidian distance fields. In Proceedings of SIGRAD 2004, The Annual SIGRAD Conference, Special Theme – Environmental Visualization, Linköping Electronic Conference Proceedings, page 55, Gävle, Sweden, November 2004.
[45] Vladimir Savchenko and Alexander Pasko. Reconstruction from contour data and sculpting 3D objects. Journal of Computer Aided Surgery, 1:56–57, 1995.
[46] Vladimir Savchenko, Alexander Pasko, Oleg Okunev, and Tosiyasu Kunii. Function representation of solids reconstructed from scattered surface points and contours. Computer Graphics Forum, 14(4):181–188, 1995.
[47] A. Singh, D. Goldgof, and D. Terzopoulos, editors. Deformable Models in Medical Image Analysis. IEEE Computer Society Press, 1998.
[48] Gabriel Taubin. Estimation of planar curves, surfaces, and nonplanar space curves defined by implicit equations with applications to edge and range image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 13(11):1115–1138, 1991.
[49] Eric W. Weisstein. Convolution. From MathWorld – A Wolfram Web Resource, http://mathworld.wolfram.com/Convolution.html, July 2005.
[50] R. Whitaker, D. Breen, K. Museth, and N. Soni. Segmentation of biological datasets using a level-set framework. In M. Chen and A. Kaufman, editors, Volume Graphics 2001, pages 249–263. Springer, Vienna, 2001.
[51] Hui Xie, Jianning Wang, Jing Hua, Hong Qin, and Arie Kaufman. Piecewise C1 continuous surface reconstruction of noisy point clouds via local implicit quadric regression. In IEEE Visualization ’03 Conference Proceedings, Seattle, WA, USA, October 2003. IEEE Computer Society.
[52] I.T. Young, J.J. Gerbrands, and L.J. van Vliet. Image processing fundamentals. Online book, 2004.
[53] Hong-Kai Zhao, Stanley Osher, and Ronald Fedkiw. Fast surface reconstruction using the level set method. In VLSM ’01: Proceedings of the IEEE Workshop on Variational and Level Set Methods (VLSM’01), page 194, Washington, DC, USA, 2001. IEEE Computer Society.
[54] L. Zhukov, K. Museth, D. Breen, R. Whitaker, and A. Barr. Level set modeling and segmentation of DT-MRI brain data. Journal of Electronic Imaging, 12(1):125–133, January 2003.
[55] D. Ziou and S. Tabbone. Edge detection techniques - an overview. International Journal of Pattern Recognition and Image Analysis, 8:537–559, 1998.