
Institut für Computergraphik und Algorithmen
Technische Universität Wien
Karlsplatz 13/186/2, A-1040 Wien, AUSTRIA
Tel: +43 (1) 58801-18601, Fax: +43 (1) 58801-18698

Institute of Computer Graphics and Algorithms
Vienna University of Technology
email: [email protected]
other services: http://www.cg.tuwien.ac.at/, ftp://ftp.cg.tuwien.ac.at/

TECHNICAL REPORT

D²VR: High-Quality Volume Rendering of Projection-based Volumetric Data

Balázs Csébfalvi
Budapest University of Technology, Hungary

Peter Rautek
Vienna University of Technology, Austria

Sören Grimm
Vienna University of Technology, Austria

Stefan Bruckner
Vienna University of Technology, Austria

Eduard Gröller
Vienna University of Technology, Austria

TR-186-2-05-3

April 2005

Keywords: Volume Rendering, Reconstruction, Filtered Back-Projection


D²VR: High-Quality Volume Rendering of Projection-based Volumetric Data

Balázs Csébfalvi∗
Budapest University of Technology, Hungary

Peter Rautek†
Vienna University of Technology, Austria

Sören Grimm†
Vienna University of Technology, Austria

Stefan Bruckner†
Vienna University of Technology, Austria

Eduard Gröller†
Vienna University of Technology, Austria

Figure 1: CT scan of a stag beetle (832×832×494): (a) DVR of the original grid. (b) D²VR of projection-based volumetric data (64 projections, each with a resolution of 64²). (c) DVR of a 128³ grid. (d) DVR of a 64³ grid. The grids are reconstructed from 64 filtered projections, each with a resolution of 64².

Abstract

Volume-rendering techniques are conventionally classified into two categories represented by direct and indirect methods. Indirect methods require transforming the initial volumetric model into an intermediate geometrical model in order to visualize it efficiently. In contrast, direct volume-rendering (DVR) methods can process the volumetric data directly. Modern 3D scanning technologies, like CT or MRI, usually provide data as a set of samples at rectilinear grid points, which are computed from the measured projections by discrete tomographic reconstruction. The set of these reconstructed samples can therefore already be considered an intermediate volume representation. In this paper we introduce a new paradigm for direct direct volume rendering (D²VR), which does not require a rectilinear grid at all, since it is based on an immediate processing of the measured projections. Arbitrary samples for ray casting are reconstructed from the projections by using the Filtered Back-Projection algorithm. Our method removes an unnecessary and lossy resampling step from the classical volume rendering pipeline. Thus, it provides much higher accuracy than traditional grid-based resampling techniques do. Furthermore, we present a novel high-quality gradient estimation scheme, which is also based on the Filtered Back-Projection algorithm. Finally, we introduce a hierarchical space partitioning approach for projection-based volumetric data, which is used to accelerate D²VR.

∗ e-mail: [email protected]
† e-mail: {rautek|grimm|bruckner|groeller}@cg.tuwien.ac.at

CR Categories: I.4.5 [Image Processing]: Reconstruction—Transform Methods; I.4.10 [Image Processing]: Volumetric Image Representation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism

Keywords: Volume Rendering, Reconstruction, Filtered Back-Projection


1 Introduction

Modern 3D scanning technologies, like Computed Tomography (CT) or Magnetic Resonance Imaging (MRI), usually provide data values on rectilinear grid points. These data values are computed from measured projections by discrete tomographic reconstruction [19, 6]. The set of the reconstructed data values (or samples) can be interpreted as a discrete representation of the underlying continuous phenomenon. In order to authentically visualize the original continuous signal, it has to be accurately reconstructed from the discrete samples (note that such a signal reconstruction is distinct from discrete tomographic reconstruction).

From a signal-processing point of view, the original signal can be perfectly reconstructed from discrete samples if it is band-limited and the sampling frequency is above the Nyquist limit [16]. Theoretically, the perfect continuous reconstruction is obtained by convolving the discrete volume representation with the sinc function. The sinc function is considered to be the best reconstruction kernel, since it represents an ideal low-pass filter. In practice, however, it is difficult to convolve a discrete signal with the sinc kernel because of its infinite support. Therefore, practical reconstruction filters either approximate it or truncate it with an appropriate windowing function [11, 20]. Moreover, real-world signals can hardly be considered band-limited. As a consequence, practical resampling results in a loss of information.

Figure 2 shows the signal-processing view of the traditional volume rendering pipeline (follow the red line). The first step is the discrete tomographic reconstruction of a rectilinear volume representation from the measured projections. Although different algorithms for tomographic reconstruction exist, one of the most popular techniques is the Filtered Back-Projection algorithm. It first performs high-pass filtering on the measured projections; afterwards, the samples at rectilinear grid points are computed by back-projecting the filtered signals. As the projections are acquired by measuring accumulated attenuation with a limited number of sensors, they are actually available as discrete representations of continuous projection functions. Therefore, high-pass filtering is performed in the discrete frequency domain, so the result is also a discrete function. In the back-projection phase, however, the rectilinear grid points are not necessarily projected exactly onto the discrete samples of the filtered projections. Therefore, resampling is necessary for back-projection, which results in the first loss of information in the pipeline.

The obtained rectilinear volume can be visualized by different rendering techniques. Using indirect methods, like the classical Marching Cubes algorithm [9], an intermediate geometrical model of an isosurface is constructed from the volumetric model. This geometrical model is then interactively rendered by, for example, conventional graphics hardware. In contrast, Direct Volume Rendering (DVR) approaches, like ray casting [7] or splatting [24, 25], directly render the volumetric model without any intermediate representation. In both cases an interpolation technique is applied to define data values between the rectilinear grid points. In other words, a resampling of the discrete volume representation is required. This resampling results in the second loss of information in the traditional pipeline.

In order to minimize the loss of information, we propose to modify the traditional volume-rendering pipeline by simply removing an unnecessary resampling step (follow the green line in Figure 2). To render the underlying continuous phenomenon, data samples at arbitrary sample points need to be defined, and for shading computation the corresponding gradients need to be determined. As will be shown, both tasks can be solved using the filtered projections directly. This eventually leads to an alternative projection-based volume representation. Thus, there is no need to compute samples at regular grid points by discrete tomographic reconstruction, and as a consequence one resampling step (see Figure 2) is unnecessary. Traditional direct volume-rendering methods rely on such an intermediate grid representation, so in this sense they are in fact indirect. In contrast, we present DVR directly from the measured raw data. To distinguish it from common DVR, the novel approach is referred to as D²VR (pronunciation: [di-skwerd vi-ar]).

In Section 2 we review previous work related to discrete tomographic reconstruction and volume resampling. In Section 3 our novel volume-rendering approach is introduced, where we explain how to reconstruct data values and gradients directly from the filtered projections by using the Filtered Back-Projection algorithm. Furthermore, in Section 3.3, we present a hierarchical space partitioning approach for projection-based volumetric data. Section 4 describes the implementation. Section 5 reports the results. Finally, in Section 6 the contribution of this paper is summarized and ideas for future work are given.

2 Related Work

In most practical volume-rendering applications, especially in 3D medical imaging, the input data is usually generated from measured projections by using tomographic reconstruction [19, 6, 15].


Figure 2: Data processing workflow of projection-based and grid-based volume rendering. The red line corresponds to the traditional volume rendering pipeline. It requires two resampling steps in order to visualize the data: first an intermediate rectilinear grid is resampled from the projections, and then this grid is resampled again for rendering. The green line corresponds to the projection-based volume rendering pipeline (D²VR); the lossy first resampling step is avoided.

The set of projections is referred to as the Radon transform of the original signal. Therefore, tomographic reconstruction is, in fact, the inversion of the Radon transform. The inversion can be performed by using the classical Filtered Back-Projection [2] algorithm, which is based on the Fourier projection-slice theorem [6, 10]. Although alternative tomographic reconstruction techniques exist, such as algebraic or statistical ones, Filtered Back-Projection is still the most popular method used in commercial CT scanners.

The output of tomographic reconstruction is a discrete (or sampled) representation of the underlying continuous phenomenon. The samples are conventionally generated on rectilinear grid points. The rectilinear grid has several advantages. For example, the sampled signal can be represented by 3D arrays, implicitly storing the locations of the samples. Furthermore, the neighborhood of a certain sample can be addressed efficiently, which is important for many volume-processing and volume-rendering algorithms.

Nevertheless, in order to render the underlying continuous 3D function, data values also need to be defined between the rectilinear grid points. The sinc kernel as ideal reconstruction filter is impractical because of its infinite extent. In practice it is approximated by filters of finite support [11, 20]. Generally, the wider the support of the reconstruction filter, the higher the quality of the reconstruction. On the other hand, the wider the support of the filter, the higher the computational cost of a spatial-domain convolution. Therefore, several researchers have analyzed different reconstruction filters, both in terms of accuracy and computational cost [12, 11, 13, 14]. As the practical filters only approximate the ideal low-pass filter, they result in either aliasing or smoothing [11], which can be interpreted as a loss of information.

For frequent resampling tasks, like rotation or upsampling, frequency-domain techniques can alternatively be applied [8, 1, 3, 4, 21, 22]. They exploit the fact that a computationally expensive spatial-domain convolution is replaced by a simple multiplication in the frequency domain. Although frequency-domain resampling methods generally provide higher accuracy than spatial-domain methods do, they assume that the new samples to be computed are also located at regular grid points.

In order to avoid a lossy resampling step in the traditional volume-rendering pipeline, we directly use the tomographic inversion to reconstruct the underlying function at arbitrary sample positions. Therefore, we do not generate an intermediate rectilinear volume representation, but directly process the filtered projections as an alternative volume representation. Using this gridless or projection-based volume-rendering approach as a new paradigm, the same accuracy can be ensured at all sample positions. In contrast, using the traditional grid-based approach, accurate samples are available only at the grid points, while the accuracy of intermediate samples depends on the quality of the applied imperfect reconstruction filter.


3 D²VR

We present D²VR based on a raycasting approach. In order to perform raycasting, the underlying 3D volumetric function needs to be reconstructed at arbitrary resampling locations. In case the data is given on a rectilinear grid, the reconstructed function value is computed from a close neighborhood of samples, as shown in Figure 3a. In contrast, for raycasting performed directly on the filtered projections, the reconstructed function value is computed from the filtered projections at the corresponding positions, see Figure 3b. Furthermore, gradients at these resample locations need to be determined in order to perform shading. The whole reconstruction mechanism is described in the following.

Figure 3: (a) Resampling along a ray on rectilinear volumetric data. (b) Resampling along a ray directly from the filtered projections.

3.1 Data Reconstruction

Data reconstruction from projection-based volumetric data requires Filtered Back-Projection. For simplicity, we illustrate the Filtered Back-Projection in 2D, based on a computed tomography scanning process using orthographic projection. Parallel projections are taken by measuring a set of parallel rays for a number of different angles. A projection is formed by combining a set of line integrals. The whole projection is a collection of parallel ray integrals, given by P_θ(t) for a constant θ, see Figure 4. The line integrals are measured by moving an x-ray source and detector along parallel lines on opposite sides of the object.
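As an illustration of this acquisition model, the following minimal Python sketch simulates parallel projections of a discrete 2D density image by rotating it and summing along one axis. The function names, the use of scipy.ndimage.rotate, and the sign convention of the rotation are choices of this sketch, not part of the paper.

```python
import numpy as np
from scipy.ndimage import rotate  # any image rotation routine would do

def parallel_projections(image, angles_deg):
    """Simulate P_theta(t): rotate the 2D density image and sum along one axis.

    Each row of the result is one projection P_theta for the corresponding
    angle, i.e. a discrete set of parallel line integrals.
    """
    projections = []
    for theta in angles_deg:
        # Rotating by -theta and summing columns approximates the line
        # integrals perpendicular to the detector for angle theta.
        rotated = rotate(image, -theta, reshape=False, order=1)
        projections.append(rotated.sum(axis=0))
    return np.asarray(projections)  # shape: (number of angles, detector size)

# Example: 64 projections over 180 degrees, as used in Section 5.
# angles = np.linspace(0.0, 180.0, 64, endpoint=False)
# sinogram = parallel_projections(density_image, angles)
```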

The Filtered Back-Projection can be derived using the Fourier projection-slice theorem as follows. The density function f(x,y) can be expressed as

\[ f(x,y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(u,v)\, e^{j2\pi(ux+vy)}\, du\, dv \]

where F(u,v) denotes the two-dimensional Fourier transform of the density function f(x,y). By moving from a Cartesian coordinate system in the frequency domain to a polar coordinate system, i.e., u = w cos θ, v = w sin θ, and du dv = w dw dθ, we obtain:

\[ f(x,y) = \int_{0}^{2\pi} \int_{0}^{\infty} F(w,\theta)\, e^{j2\pi w (x\cos\theta + y\sin\theta)}\, w\, dw\, d\theta \]

If we consider θ from 0 to π, the integral can be split as follows:

\[ f(x,y) = \int_{0}^{\pi} \int_{0}^{\infty} F(w,\theta)\, e^{j2\pi w (x\cos\theta + y\sin\theta)}\, w\, dw\, d\theta + \int_{0}^{\pi} \int_{0}^{\infty} F(w,\theta+\pi)\, e^{j2\pi w (x\cos(\theta+\pi) + y\sin(\theta+\pi))}\, w\, dw\, d\theta \]

Since F(w, θ+π) = F(−w, θ), the above expression can be written as:

\[ f(x,y) = \int_{0}^{\pi} \left[ \int_{-\infty}^{\infty} F(w,\theta)\, |w|\, e^{j2\pi w t}\, dw \right] d\theta \]

where t = x cos θ + y sin θ. By substituting S_θ(w) for the two-dimensional Fourier transform F(w,θ), the above integral can be expressed as:

\[ f(x,y) = \int_{0}^{\pi} \int_{-\infty}^{\infty} S_\theta(w)\, |w|\, e^{j2\pi w t}\, dw\, d\theta \]

According to the Fourier projection-slice theorem, S_θ(w) is the Fourier transform of P_θ(t). Let us define

\[ Q_\theta(t) = \int_{-\infty}^{\infty} S_\theta(w)\, |w|\, e^{j2\pi w t}\, dw \qquad (1) \]

which is the inverse Fourier transform of S_θ(w)·|w|. As a multiplication in the frequency domain corresponds to a convolution in the spatial domain, according to Equation 1, Q_θ(t) is obtained by high-pass filtering the measured projection P_θ(t). Other filters can be applied to reduce reconstruction artifacts, see [6].

In practice, the 2D density function f(x,y) is discretely approximated by

\[ f(x,y) \approx \hat{f}(x,y) = \frac{\pi}{K} \sum_{i=1}^{K} Q_{\theta_i}(x\cos\theta_i + y\sin\theta_i) \qquad (2) \]

where the Q_{θ_i} are the filtered projections. Thus, according to Equation 2, the density function can be reconstructed from a fixed number of projections.


The Filtered Back-Projection algorithm is conventionally used for discrete tomographic reconstruction in order to obtain a rectilinear representation of the original density function. This intermediate representation is then usually resampled by many volume visualization algorithms. However, the formula in Equation 2 can also be considered as a resampling scheme to interpolate a density value at an arbitrary sample point. Therefore, it is unnecessary to generate an intermediate rectilinear representation by discrete tomographic reconstruction. Furthermore, each resampling step usually causes a loss of information. By avoiding the unnecessary intermediate grid representation, the quality of reconstruction can be improved. Previous reconstruction techniques assume that accurate samples are available at the grid points. In practice these samples are obtained by tomographic reconstruction. In order to maintain the same accuracy at any arbitrary sample location, we apply the Filtered Back-Projection to reconstruct the density value.

Figure 4: Parallel projection P_θ(t) of the density function f(x,y) for a specific angle θ.
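The two steps above translate almost directly into code. Continuing the previous sketch, the following Python fragment ramp-filters the measured projections according to Equation 1 and evaluates Equation 2 at an arbitrary point (x, y). The detector-coordinate mapping (detector_spacing, t_origin) and the linear interpolation of the filtered projections are assumptions of this illustration.

```python
import numpy as np

def ramp_filter(projections):
    """Equation 1: Q_theta(t) is the inverse Fourier transform of S_theta(w)*|w|."""
    w = np.fft.fftfreq(projections.shape[1])            # discrete frequencies
    S = np.fft.fft(projections, axis=1)                 # S_theta(w) per projection
    return np.real(np.fft.ifft(S * np.abs(w), axis=1))  # filtered projections Q_theta

def reconstruct_point(filtered, angles_rad, x, y,
                      detector_spacing=1.0, t_origin=0.0):
    """Equation 2: evaluate the density at an arbitrary point (x, y).

    The point is projected onto every detector axis, t = x*cos(theta) +
    y*sin(theta), and the filtered projection is linearly interpolated there.
    """
    acc = 0.0
    for Q, theta in zip(filtered, angles_rad):
        t = x * np.cos(theta) + y * np.sin(theta)
        u = (t - t_origin) / detector_spacing           # continuous detector index
        i0 = int(np.floor(u))
        if 0 <= i0 < len(Q) - 1:
            frac = u - i0
            acc += (1.0 - frac) * Q[i0] + frac * Q[i0 + 1]
    return np.pi / len(filtered) * acc                  # the pi/K factor of Equation 2
```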

3.2 Derivative Estimation

In order to process or render volumetric data, in many cases derivatives of the original density function are necessary. For example, in volume rendering the estimated gradients are used as surface normals to perform shading. In case of a grid-based representation, the straightforward way is to estimate the derivatives from a certain voxel neighborhood. To determine the gradient, common methods such as intermediate differences, central differences, or higher-order gradient estimation schemes are applied. In our case, computing the derivatives from a certain 3D neighborhood of samples would require a large number of back-projections. Especially for higher-order gradient estimation schemes, which need a large neighborhood of samples, the computational cost would be significantly high. However, the Filtered Back-Projection reconstruction scheme can also be exploited to compute derivatives directly.

For example, the partial derivative f_x with respect to the variable x can be expressed using Newton's difference quotient:

\[ f_x = \frac{\partial f(x,y)}{\partial x} = \lim_{\Delta x \to 0} \frac{1}{\Delta x} \left( \frac{\pi}{K} \left( \sum_{i=1}^{K} Q_{\theta_i}\big((x+\Delta x)\cos\theta_i + y\sin\theta_i\big) - \sum_{i=1}^{K} Q_{\theta_i}\big(x\cos\theta_i + y\sin\theta_i\big) \right) \right) \]

Substituting t_i := x cos θ_i + y sin θ_i we obtain:

\[ f_x = \lim_{\Delta x \to 0} \frac{1}{\Delta x} \left( \frac{\pi}{K} \left( \sum_{i=1}^{K} Q_{\theta_i}(t_i + \Delta x\cos\theta_i) - \sum_{i=1}^{K} Q_{\theta_i}(t_i) \right) \right) = \frac{\pi}{K} \sum_{i=1}^{K} \lim_{\Delta x \to 0} \frac{1}{\Delta x} \big( Q_{\theta_i}(t_i + \Delta x\cos\theta_i) - Q_{\theta_i}(t_i) \big) \]

The term

\[ \lim_{\Delta x \to 0} \frac{1}{\Delta x} \big( Q_{\theta_i}(t_i + \Delta x\cos\theta_i) - Q_{\theta_i}(t_i) \big) \qquad (3) \]

is the partial derivative of the projection Q_{θ_i}, scaled with cos θ_i. We can therefore calculate the partial derivative f_x directly as a sum of scaled derivatives of the projection data. Analogously, taking the difference quotient with respect to y, we obtain:

\[ f_y = \frac{\partial f(x,y)}{\partial y} = \frac{\pi}{K} \sum_{i=1}^{K} \lim_{\Delta y \to 0} \frac{1}{\Delta y} \big( Q_{\theta_i}(t_i + \Delta y\sin\theta_i) - Q_{\theta_i}(t_i) \big) \]

Applying Newton's difference quotient directly to the filtered projections is thus equivalent to applying Newton's difference quotient to the 2D density function f(x,y). Moreover, any higher-order derivative can be obtained by applying Newton's difference quotient multiple times.
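A possible realization of this projection-based gradient estimation, in the 2D setting of the sketches above: the derivative of each filtered projection is approximated by finite differences and the scaled derivatives are accumulated as derived. The finite-difference approximation of Q'_θ and the coordinate mapping are assumptions of this sketch.

```python
import numpy as np

def gradient_point(filtered, angles_rad, x, y,
                   detector_spacing=1.0, t_origin=0.0):
    """Gradient (f_x, f_y) at (x, y) from derivatives of the filtered projections."""
    fx = fy = 0.0
    for Q, theta in zip(filtered, angles_rad):
        dQ = np.gradient(Q, detector_spacing)          # discrete derivative of Q_theta
        t = x * np.cos(theta) + y * np.sin(theta)
        u = (t - t_origin) / detector_spacing
        i0 = int(np.floor(u))
        if 0 <= i0 < len(Q) - 1:
            frac = u - i0
            dq = (1.0 - frac) * dQ[i0] + frac * dQ[i0 + 1]
            fx += np.cos(theta) * dq                   # Q'_theta scaled by cos(theta_i)
            fy += np.sin(theta) * dq                   # Q'_theta scaled by sin(theta_i)
    k = np.pi / len(filtered)
    return k * fx, k * fy
```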


Using Filtered Back-Projection for gradient estimation, we expect higher accuracy than with traditional gradient estimation schemes on the rectilinear grid. Consider central differences on the continuous reconstruction from a rectilinear representation: in order to calculate the gradient at an arbitrary sampling point, six additional samples have to be interpolated. As interpolation usually causes a loss of information, the introduced errors accumulate in the estimated gradients. In contrast, using Filtered Back-Projection, the density values at the additional sample points are as accurate as those at the grid points. Therefore, no interpolation error is introduced.

3.3 Hierarchical Space Partitioning

For almost all volumetric processing approaches, a hierarchical space partitioning data structure is essential for efficient processing. The performance gain which can be achieved with such a data structure mainly depends on its granularity. An octree is one of the most widely used data structures for organizing three-dimensional space. It is based on the principle of hierarchical space partitioning: each node of the octree represents a cuboid cell. In a min-max octree for volumetric data, each node contains the minimum and maximum of the enclosed data. The minimum and maximum value of an octree cell, in case of grid-based volumetric data, depends on the data reconstruction method used. The most widely used data reconstruction method is trilinear interpolation. The convex nature of trilinear interpolation ensures that all function values within a cuboid cell are bounded by the minimal and maximal values at its grid positions.
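For reference, a minimal sketch of trilinear interpolation written as an explicit convex combination, which makes the bounding property used by the grid-based min-max octree evident; the corner ordering is a convention of this illustration.

```python
import numpy as np

def trilinear(c, fx, fy, fz):
    """Trilinear interpolation in a cell with corner values c[i][j][k], i,j,k in {0,1}.

    The weights are non-negative and sum to one, so the result is a convex
    combination of the eight corner values and therefore always lies between
    their minimum and maximum.
    """
    weights = np.array([
        (1 - fx) * (1 - fy) * (1 - fz), fx * (1 - fy) * (1 - fz),
        (1 - fx) * fy * (1 - fz),       fx * fy * (1 - fz),
        (1 - fx) * (1 - fy) * fz,       fx * (1 - fy) * fz,
        (1 - fx) * fy * fz,             fx * fy * fz,
    ])
    corners = np.array([c[0][0][0], c[1][0][0], c[0][1][0], c[1][1][0],
                        c[0][0][1], c[1][0][1], c[0][1][1], c[1][1][1]])
    return float(weights @ corners)
```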

Figure 5: Octree cell C projected onto the filtered projection P_θi; the projected cell is denoted C_θi.

This convexity condition does not hold for reconstruction based on the Filtered Back-Projection. However, it is still possible to generate a min-max octree for projection-based volumetric data. Consider an octree cell C projected onto all filtered projections P_θi, see Figure 5. The resulting projections of C are referred to as C_θi. According to Equation 2, an upper bound of the maximum value contained in the octree cell C can be determined by:

\[ \max C_f \leq \frac{\pi}{K} \sum_{i=1}^{K} \max C_{\theta_i f} \qquad (4) \]

where

\[ C_f = \{\, v = f(\vec{x}) \mid \vec{x} \in C \subseteq \mathbb{R}^3 \,\} \]

are all possible function values within the enclosing cuboid of cell C, and

\[ C_{\theta_i f} = \{\, v = Q_{\theta_i}(\vec{t}\,) \mid \vec{t} \in C_{\theta_i} \subseteq \mathbb{R}^2 \,\} \]

are all projection values within the enclosing rectangle of the cell C projected onto the projection P_θi. Analogously, a lower bound of the minimum of the octree cell C can be determined by:

\[ \min C_f \geq \frac{\pi}{K} \sum_{i=1}^{K} \min C_{\theta_i f} \qquad (5) \]

The algorithm to generate an octree for projection-based volumetric data, using a bottom-up approach, is as follows. Starting with a root cell, which encloses the entire volumetric function defined by the projections, we recursively subdivide the octree cells. Once we reach a desired octree depth, we compute the maximum and the minimum of each leaf cell using Equations 4 and 5. These values are then propagated to the higher octree levels. Instead of propagating these minima and maxima to the higher levels, it would also have been possible to compute the minimum and maximum directly for each of the higher levels using Equations 4 and 5; however, this would lead to a more conservative approximation. Although this space partitioning is a conservative approximation, it works very well in practice, as shown in Figure 6.
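A 2D sketch of how the bounds of Equations 4 and 5 might be evaluated for a single cell, reusing the conventions of the earlier sketches; the cell representation and the detector-coordinate mapping are assumptions of this illustration.

```python
import numpy as np

def cell_bounds(filtered, angles_rad, cell,
                detector_spacing=1.0, t_origin=0.0):
    """Conservative (min, max) bounds of the density inside a 2D cell.

    cell = (xmin, xmax, ymin, ymax). Following Equations 4 and 5, the cell is
    projected onto every filtered projection and the extrema of Q_theta over
    the covered detector interval are accumulated.
    """
    xmin, xmax, ymin, ymax = cell
    corners = [(xmin, ymin), (xmin, ymax), (xmax, ymin), (xmax, ymax)]
    lo = hi = 0.0
    for Q, theta in zip(filtered, angles_rad):
        ts = [x * np.cos(theta) + y * np.sin(theta) for x, y in corners]
        u0 = int(np.floor((min(ts) - t_origin) / detector_spacing))
        u1 = int(np.ceil((max(ts) - t_origin) / detector_spacing)) + 1
        segment = Q[max(u0, 0):min(u1, len(Q))]   # detector samples covered by the cell
        if segment.size:
            lo += segment.min()                   # contributes to Equation 5
            hi += segment.max()                   # contributes to Equation 4
    k = np.pi / len(filtered)
    return k * lo, k * hi
```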

4 Implementation

In Section 3 we presented all the necessary components for D²VR. First, in Section 3.1, we presented an approach for data reconstruction at arbitrary sample positions. Then, in Section 3.2, we showed how to determine gradients directly from the filtered projections at arbitrary sample positions.


Figure 6: Octree up to depth 8 of the stag beetle. Red octree cells are classified as visible.

In Section 3.3 we presented a hierarchical space partitioning data structure for projection-based volumetric data, which is used in the following as an acceleration structure for D²VR. None of these approaches employs an intermediate grid representation.

We implemented a CPU-based as well as a GPU-based prototype for orthographic and perspective projection. The CPU implementation is based on a raycasting approach. For each pixel of the image plane, a ray is cast through the volumetric space enclosed by the filtered projections. At each resample location the underlying 3D density function is reconstructed according to Equation 2, and gradient estimation is done using Equation 3. The final color and opacity of the image pixel is determined by the over operator [18] in front-to-back order.

The GPU version is implemented in C++ and Cg using OpenGL. We are aware that the current Cg compiler does not always produce optimal code; we did not manually optimize the code using assembly, as this is a proof-of-concept implementation. We implemented D²VR on the GPU (NVidia GeForce 6800 GT, 256 MB) utilizing texture-based volume rendering with view-aligned slices in combination with filtered projection textures. All the filtered projections are stored linearly in a 3D texture. Basically, we compute texture slices parallel to the viewing plane. As the newest NVidia hardware supports dynamic branching, the slices are directly computed in a for-loop from all the filtered projections using Equation 2. The same is done for the gradient computation using Equation 3. The volumetric space enclosed by the filtered projections is thereby cut by polygons parallel to the viewing plane. The polygons are then projected by the graphics hardware onto the image plane and composited using alpha blending.

In order to accelerate the rendering process, we utilized the octree described in Section 3.3. We project all visible octree cells onto the image plane. The resulting z-buffer image is then used for early z-culling, a capability of modern hardware, implementing empty space skipping. On average, for example, we measured four seconds per frame for an iso-surface rendering with 128 projections (of size 128²) and a 256² viewport. The performance benefit depends on the granularity of the octree. Further optimizations, such as early ray termination, are not employed due to the lack of graphics hardware support. In the future we will investigate alternative acceleration approaches in order to provide fully interactive projection-based volume rendering.
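To make the CPU raycasting loop concrete, the following 2D sketch composites one ray front-to-back with the over operator [18]. The reconstruct callback stands for a projection-based reconstruction such as the reconstruct_point sketch above; the transfer_function callback, the fixed step length, and the early-termination threshold are assumptions of this illustration, not details of the authors' implementation.

```python
import numpy as np

def cast_ray(reconstruct, origin, direction, num_steps, step_size,
             transfer_function):
    """Composite one ray front-to-back with the over operator [18].

    reconstruct(position) returns the density at a resample location; the
    gradient for shading would be obtained analogously.
    transfer_function maps a density value to ((r, g, b), alpha).
    """
    color = np.zeros(3)
    alpha = 0.0
    position = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    for _ in range(num_steps):
        src_rgb, src_alpha = transfer_function(reconstruct(position))
        # over operator: only the portion not yet occluded contributes
        color += (1.0 - alpha) * src_alpha * np.asarray(src_rgb)
        alpha += (1.0 - alpha) * src_alpha
        if alpha > 0.99:                 # optional early termination (CPU only)
            break
        position = position + step_size * direction
    return color, alpha
```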

5 Results

In order to show the differences between projection-based and grid-based data reconstruction and gradient estimation, we simulated the computed tomography scanning process. We scanned several different density functions, such as the Marschner & Lobb function, a stag beetle, and a carp. The Marschner & Lobb function is analytically defined. For the other data sets an analytical representation is not given; therefore we took high-resolution grids in combination with trilinear interpolation as ground truth. The Marschner & Lobb function is scanned taking 64 projections, each with a resolution of 64². From these projections a grid is reconstructed using the same number of samples (64³). Additionally, we also reconstructed a grid with eight times more samples (128³). Furthermore, we computed an iso-surface directly from the analytical Marschner & Lobb function.
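The Marschner & Lobb test signal is usually given in the following form, with constants α = 0.25 and f_M = 6 as commonly quoted from [11]; treat the exact parametrization as an assumption of this sketch rather than a definition from this report.

```python
import numpy as np

def marschner_lobb(x, y, z, f_m=6.0, alpha=0.25):
    """Marschner & Lobb test signal, typically evaluated on [-1, 1]^3."""
    rho_r = np.cos(2.0 * np.pi * f_m * np.cos(np.pi * np.sqrt(x * x + y * y) / 2.0))
    return (1.0 - np.sin(np.pi * z / 2.0) + alpha * (1.0 + rho_r)) / (2.0 * (1.0 + alpha))
```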


Figure 10: CT scan of a carp (256×256×512): (a) DVR of the original grid. (b) DVR of a 128×128×256 grid, reconstructed from 128 filtered projections, each with a resolution of 128×256. (c) D²VR of projection-based volumetric data (128 projections, each with a resolution of 128×256). (d) Zoom-in of the DVR result. (e) Zoom-in of the D²VR result.

Figure 7a shows the differences between the analytical value and the reconstructed value using trilinear interpolation on the 64³ grid, and Figure 7b shows the differences between the analytical value and the reconstructed value using Filtered Back-Projection. The data reconstruction error is color-coded on the iso-surface: green encodes low error, red encodes higher errors. Figure 8a shows the differences in degrees between the analytically computed gradients and the gradients estimated using central differences. Figure 8b shows the differences in degrees between the analytically computed gradients and the gradients estimated using our new projection-based gradient estimation method. Figure 9 shows a comparison of iso-surface renderings of the Marschner & Lobb function: Figure 9a shows an analytical rendering, Figure 9b shows DVR of the 64³ grid, Figure 9c shows DVR of the eight times bigger grid (128³), and Figure 9d shows our D²VR from 64 filtered projections, each with a resolution of 64². Furthermore, we rendered a stag beetle and a carp.

Figure 7: Color-encoded differences (scale from 0% to 100%) between the analytical value and (a) the reconstructed value using trilinear interpolation on the 64³ grid, (b) the reconstructed value using Filtered Back-Projection.


Figure 8: Color-encoded differences in degrees (scale from 0° to 180°) between the analytically computed gradients and (a) the gradients estimated using central differences on the grid, (b) the gradients estimated using our new projection-based gradient estimation method.

In Figure 1 and Figure 10 the differences between rendering from projection-based and from grid-based volumetric data can be seen. Additionally, for all the shown data sets we reconstructed a 128³ grid from the filtered projections. From this grid, as well as from the filtered projections, we also reconstructed a rotated grid. Table 1 lists the root mean squared data reconstruction error with respect to the analytical function or, respectively, to the corresponding high-resolution grid in combination with trilinear interpolation.

Data                RMS of grid   RMS of filtered projections   Ratio
Marschner & Lobb    0.0624        0.0537                        1.1620
Fish                37.5486       26.9216                       1.3947
Stag beetle         80.2559       77.8378                       1.0310

Table 1: Root mean squared (RMS) data reconstruction error with respect to the analytical function or, respectively, to the corresponding high-resolution grid in combination with trilinear interpolation. Values for the Marschner & Lobb data set lie between zero and one; for the other data sets between zero and 4095. The last column shows the ratio of the grid RMS to the filtered-projection RMS.
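The error measure of Table 1 is a plain root mean squared difference over all compared sample positions; a minimal sketch:

```python
import numpy as np

def rms_error(reconstructed, reference):
    """Root mean squared difference between reconstructed samples and ground truth."""
    diff = np.asarray(reconstructed, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(diff * diff)))
```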

6 Conclusion and Future Work

In this paper a new direct volume-rendering paradigm has been introduced. It has been shown that volumetric raw data measured as a set of projections can be rendered directly, without generating an intermediate grid-based volume representation by tomographic reconstruction.

Figure 9: Comparison of an iso-surface of the Marschner & Lobb function: (a) analytically computed; (b) DVR of a 64³ grid; (c) DVR of an eight times bigger grid (128³); (d) D²VR from projection-based volumetric data (64 projections, each with a resolution of 64²). The grids are reconstructed from 64 filtered projections, each with a resolution of 64².

As our method avoids an unnecessary and lossy resampling step, it provides much higher image quality than traditional direct volume-rendering techniques do. Furthermore, our novel projection-based gradient estimation scheme avoids the accumulation of interpolation errors. Traditional methods ensure accurate samples only at the grid points, while the accuracy of intermediate samples strongly depends on the quality of the applied interpolation method. In contrast, our approach provides an accurate data value at an arbitrary sample position.

In order to accelerate D²VR we proposed a hierarchical data structure for empty space skipping. Furthermore, as the filtered projections can be interpreted as 2D textures, conventional graphics cards can be exploited to efficiently accumulate the contributions of the filtered projections to view-aligned sampling slices. In spite of these optimizations, our volume-rendering method is still slower than previous ones.


On the other hand, in the last two decades a huge research effort was spent on accelerating traditional direct volume rendering, which was far from interactive in the beginning. As our approach is an attempt to open a new research direction, its current performance is not yet comparable to that of a well-developed technology.

Although our work was inspired by the practical tomographic reconstruction problem, its theoretical significance is the demonstration of an alternative, image-based volume representation. In our future work we plan to explore other gridless volume representations, which are not necessarily related to the physical constraints of current scanning devices. For example, in order to achieve uniformly distributed reconstruction quality, projection planes with uniformly distributed normals might also be applied. Although the adaptation of the Filtered Back-Projection algorithm to such a geometry requires further research, it would lead to a direction-independent, high-quality volume reconstruction scheme.

7 Acknowledgements

The work presented in this publication has been funded by the ADAPT project (FFF-804544). ADAPT is supported by Tiani Medgraph, Vienna (http://www.tiani.com), and the Forschungsförderungsfonds für die gewerbliche Wirtschaft, Austria. See http://www.cg.tuwien.ac.at/research/vis/adapt for further information on this project. The stag beetle has been provided by G. Glaeser, University of Applied Arts Vienna. Data scanning is courtesy of J. Kastner, Wels College of Engineering. The carp is courtesy of Michael Scheuring, Computer Graphics Group, University of Erlangen, Germany.

References

[1] M. Artner, T. Möller, I. Viola, and E. Gröller. High-quality volume rendering with resampling in the frequency domain. In Proceedings of Eurographics / IEEE VGTC Symposium on Visualization, 2005. To appear.

[2] B. Cabral, N. Cam, and J. Foran. Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In Proceedings of IEEE Symposium on Volume Visualization, pages 91–98, 1994.

[3] Q. Chen, R. Crownover, and M. Weinhous. Subunity coordinate translation with Fourier transform to achieve efficient and quality three-dimensional medical image interpolation. Medical Physics, 26(9):1776–1782, 1999.

[4] R. Cox and R. Tong. Two- and three-dimensional image rotation using the FFT. IEEE Transactions on Image Processing, 8(9):1297–1299, 1999.

[5] M. Frigo and S. G. Johnson. An adaptive software architecture for the FFT. In Proceedings of ICASSP, pages 1381–1384, 1998.

[6] A. C. Kak and M. Slaney. Principles of Computerized Tomographic Imaging. IEEE Press, 1988.

[7] M. Levoy. Display of surfaces from volume data. IEEE Computer Graphics and Applications, 8(3):29–37, 1988.

[8] A. Li and K. Mueller. Methods for efficient, high quality volume resampling in the frequency domain. In Proceedings of IEEE Visualization, pages 3–10, 2004.

[9] W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. Computer Graphics (Proceedings of SIGGRAPH '87), pages 163–169, 1987.

[10] T. Malzbender. Fourier volume rendering. ACM Transactions on Graphics, 12(3):233–250, 1993.

[11] S. Marschner and R. Lobb. An evaluation of reconstruction filters for volume rendering. In Proceedings of IEEE Visualization, pages 100–107, 1994.

[12] D. Mitchell and A. Netravali. Reconstruction filters in computer graphics. In Proceedings of SIGGRAPH, pages 221–228, 1988.

[13] T. Möller, R. Machiraju, K. Mueller, and R. Yagel. A comparison of normal estimation schemes. In Proceedings of IEEE Visualization, pages 19–26, 1997.

[14] T. Möller, K. Mueller, Y. Kurzion, R. Machiraju, and R. Yagel. Design of accurate and smooth filters for function and derivative reconstruction. In Proceedings of IEEE Symposium on Volume Visualization, pages 143–151, 1998.

[15] L. T. Niklason, B. T. Christian, L. E. Niklason, D. B. Kopans, D. E. Castleberry, B. H. Opsahl-Ong, P. J. Slanetz, C. E. Landberg, A. A. Giardino, R. Moore, D. Albagli, M. C. DeJule, P. F. Fitzgerald, D. F. Fobare, B. W. Giambattista, R. F. Kwasnick, J. Liu, S. J. Lubowski, G. E. Possin, J. F. Richotte, C. Y. Wei, and R. F. Wirth. Digital tomosynthesis in breast imaging. Radiology, 205:399–406, 1997.


[16] A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing. Prentice Hall, Englewood Cliffs, 2nd edition, 1989.

[17] D. P. Petersen and D. Middleton. Sampling and reconstruction of wave-number-limited functions in n-dimensional Euclidean spaces. Information and Control, 5(4):279–323, 1962.

[18] T. Porter and T. Duff. Compositing digital images. Computer Graphics, 18(3):253–259, 1984.

[19] J. Russ. The Image Processing Handbook. CRC Press, 1992.

[20] T. Theußl and E. Gröller. Mastering windows: Improving reconstruction. In Proceedings of IEEE Symposium on Volume Visualization, pages 101–108, 2000.

[21] R. Tong and R. Cox. Rotation of NMR images using the 2D chirp-z transform. Magnetic Resonance in Medicine, 41:253–256, 1999.

[22] M. Unser, P. Thévenaz, and L. Yaroslavsky. Convolution-based interpolation for fast, high-quality rotation of images. IEEE Transactions on Image Processing, 4(10), 1995.

[23] R. Westermann and T. Ertl. Efficiently using graphics hardware in volume rendering applications. Computer Graphics (Proceedings of SIGGRAPH '98), pages 169–176, 1998.

[24] L. Westover. Footprint evaluation for volume rendering. Computer Graphics (Proceedings of SIGGRAPH '90), pages 144–153, 1990.

[25] M. Zwicker, H. Pfister, J. van Baar, and M. Gross. EWA volume splatting. In Proceedings of IEEE Visualization, pages 29–36, 2001.