Visualization Process

Transcript of "Visualization Process" (80 pages).

Page 1: Visualization Process

(Diagram: sensors / simulation / database → raw data → transformation → images.)

Page 2: Visualization Pipeline

(Diagram: sensors / simulation / database → raw data → filter → vis data → mapping → renderable primitives → rendering → images.)

Page 3: Visualization Pipeline (filter stage)

The filter stage turns raw data into vis data:
• Denoising
• Decimation
• Multiresolution
• Mesh generation
• etc.

Page 4: Visualization Pipeline (mapping stage)

The mapping stage turns vis data into renderable primitives.

Geometry:
• Line
• Surface
• Voxel

Attributes:
• Color
• Opacity
• Texture

Page 5: Visualization Pipeline (rendering stage)

The rendering stage turns renderable primitives into images:
• Surface rendering
• Volume rendering
• Point-based rendering
• Image-based rendering
• NPR (non-photorealistic rendering)

Page 6: Volume Rendering

Goal: visualize three-dimensional functions:
• Measurements (medical imaging)
• Numerical simulation output
• Analytic functions

Page 7: Volume Rendering

Page 8: Important Steps

• Reconstruction
• Classification
• Optical model
• Shading

Page 9: Reconstruction

Recover the original continuous function from discrete samples.

(Figure: sample values c1, c2, c3.)

Page 10: Reconstruction

Recover the original continuous function from discrete samples by convolving them with a reconstruction filter:
• Box filter (nearest neighbor)
• Hat filter (linear interpolation)
• Sinc filter (ideal low-pass)
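As a concrete illustration (not from the slides), here is a minimal C++ sketch of 1D reconstruction with the box and hat filters; the sample values and evaluation point are arbitrary:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Box filter: take the nearest sample (unit sample spacing assumed).
    double reconstructBox(const std::vector<double>& s, double x) {
        int i = (int)std::lround(x);
        i = std::max(0, std::min((int)s.size() - 1, i));
        return s[i];
    }

    // Hat filter: linearly interpolate the two neighboring samples.
    double reconstructHat(const std::vector<double>& s, double x) {
        int i = (int)std::floor(x);
        i = std::max(0, std::min((int)s.size() - 2, i));
        double t = x - i;
        return (1.0 - t) * s[i] + t * s[i + 1];
    }

    int main() {
        std::vector<double> samples = {21.05, 27.05, 24.03, 20.05};
        printf("box(1.3) = %.2f, hat(1.3) = %.2f\n",
               reconstructBox(samples, 1.3), reconstructHat(samples, 1.3));
    }

(The sinc filter is the ideal reconstruction filter but has infinite support, so it is rarely used directly.)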

Page 11: Classification

Map from numerical values to visual attributes:
• Color
• Transparency

Transfer functions:
• Color function: c(s)
• Opacity function: a(s)

(Figure: a 2×2 block of sample values 21.05, 27.05, 24.03, 20.05.)

Page 12: Classification Order

Pre-classification: classify first, then filter (interpolate).
Post-classification: filter first, then classify.

(Figure: the 2×2 sample block 21.05, 27.05, 24.03, 20.05, with the interpolated value 25.3 shown on the "post" path.)
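The two orders generally give different results. A minimal C++ sketch (not from the slides) makes the difference concrete; the step transfer function at value 24 is a made-up example:

    #include <cstdio>

    struct RGBA { double r, g, b, a; };

    // Hypothetical transfer function: a hard step at value 24.
    RGBA classify(double v) {
        return v > 24.0 ? RGBA{1, 1, 1, 1} : RGBA{0, 0, 0, 0};
    }

    int main() {
        double v0 = 21.05, v1 = 27.05, t = 0.5;

        // Pre-classification: classify the samples, then interpolate colors.
        RGBA c0 = classify(v0), c1 = classify(v1);
        RGBA pre{(1 - t) * c0.r + t * c1.r, (1 - t) * c0.g + t * c1.g,
                 (1 - t) * c0.b + t * c1.b, (1 - t) * c0.a + t * c1.a};

        // Post-classification: interpolate the value, then classify it.
        RGBA post = classify((1 - t) * v0 + t * v1);

        printf("pre: a = %.2f   post: a = %.2f\n", pre.a, post.a); // 0.50 vs 1.00
    }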

Page 13: Optical Model

Assume a density-emitter model (an approximation): each data point in the volume can emit and absorb light.
• Light emission: the color from classification
• Light absorption: blocks light from behind, due to non-zero opacity

Page 14: Optical Model

Ray tracing is one method used to construct the final image.

x(t): the ray, parameterized by t
s(x(t)): scalar value along the ray
c(s(x(t))): color (emitted light)
a(s(x(t))): absorption coefficient

Page 15: Ray Integration

Calculate how much light can enter the eye along each ray. With the ray running from t = 0 to t = D:

C = ∫[0,D] c(s(x(t))) · exp( −∫[0,t] a(s(x(t'))) dt' ) dt

Page 16: Discrete Ray Integration

Discretizing the integral with samples i = 0 … n along the ray:

C = Σ_{i=0..n} Ci · Π_{j=0..i-1} (1 − Aj)

Back-to-front blending (step from i = n−1 down to 0):

C'i = Ci + (1 − Ai) · C'(i+1)
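A minimal C++ sketch (not from the slides) of the back-to-front recurrence; the per-sample colors and opacities are made-up values:

    #include <cstdio>
    #include <vector>

    int main() {
        // Per-sample colors Ci and opacities Ai, ordered front to back.
        std::vector<double> C = {0.2, 0.5, 0.1};
        std::vector<double> A = {0.3, 0.6, 0.2};

        double accum = 0.0; // start from the background (black)
        for (int i = (int)C.size() - 1; i >= 0; --i)
            accum = C[i] + (1.0 - A[i]) * accum; // C'i = Ci + (1 - Ai) C'(i+1)
        printf("final ray color = %.3f\n", accum);
    }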

Page 17: Shading

A common shading model is the Phong model. For each sample, evaluate:

C = ambient + diffuse + specular
  = constant + Ip · Kd · (N · L) + Ip · Ks · (N · H)^n

Ip: emission color at the sample
N: normal at the sample

Page 18: Early Attempts (pre-Marching Cubes)

The Cuberille approach (1979):
• Each voxel has a value
• Each voxel has 6 faces
• Apply a threshold to perform binary classification
• Draw the visible faces of the boundary voxels as polygons

(Produces fairly jagged images.)

Page 19: Contour Tracking

Extract contours at each cross section and connect them together (1976 and after).

Page 20: New Methods Were Needed

• Better image quality is necessary.
• Finding an object's boundary can sometimes be difficult.
• The process of connecting boundary contours is also very complicated.
• The intermediate geometry can be huge.

Page 21: Direct 2D Display of 3D Objects

Tuy and Tuy, 1984, IEEE CG&A (one of the earliest volume rendering techniques).

Direct: no conversion from data to geometry.

Page 22: Basic Idea

Based on the idea of ray tracing:
• Treat each pixel as a light source
• Emit light from the image into the object space
• The ray stops at the object boundary
• Calculate shading at the boundary point
• Assign the value to the pixel

Page 23: Algorithm Details

• Data representation (establish the 3D volume and 2D screen space)
• Viewing
• Sampling
• Shading

Page 24: Data Representation

3D volume data are represented by a finite number of cross-sectional slices (a stack of images):

N × 2D arrays = 3D array

Page 25: Data Representation (2)

What is a voxel? Two definitions:
1. A voxel is a cubic cell which has a single value covering the entire cubic region.
2. A voxel is a data point at a corner of the cubic cell; the value of a point inside the cell is determined by interpolation.

Page 26: Viewing

Ray casting:
• Where to position the volume and the image plane
• What is a "ray"
• How to march a ray

Page 27: Viewing (1)

1. Position the volume. Assuming the volume dimensions are w × w × w, we position the center of the volume at the world origin (0,0,0).

Volume center = [w/2, w/2, w/2] in local space, so translate by T(−w/2, −w/2, −w/2).

(Is this the data-to-world matrix or the world-to-data matrix?)

Page 28: Viewing (2)

2. Position the image plane. Assuming the distance between the image plane and the volume center is D, the center of the image plane is initially at (0, 0, −D).

Page 29: Viewing (3)

3. Rotate the image plane. A new position of the image plane can be defined in terms of three rotation angles about the x, y, and z axes.

Assuming the original view vector is [0, 0, 1], the new view vector g becomes:

g = [0, 0, 1] · R

where R is the composite rotation matrix built from the three angles (the slide shows the product of the x, y, and z rotation matrices written out).
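A minimal C++ sketch (not from the slides) of this computation, using the row-vector convention g = [0,0,1] · R and one common sign convention for the axis rotations:

    #include <cmath>
    #include <cstdio>

    struct Mat3 { double m[3][3]; };
    struct Vec3 { double x, y, z; };

    Mat3 mul(const Mat3& A, const Mat3& B) {
        Mat3 C{};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                for (int k = 0; k < 3; ++k) C.m[i][j] += A.m[i][k] * B.m[k][j];
        return C;
    }

    // Row vector times matrix: g = v R.
    Vec3 mul(const Vec3& v, const Mat3& R) {
        return {v.x * R.m[0][0] + v.y * R.m[1][0] + v.z * R.m[2][0],
                v.x * R.m[0][1] + v.y * R.m[1][1] + v.z * R.m[2][1],
                v.x * R.m[0][2] + v.y * R.m[1][2] + v.z * R.m[2][2]};
    }

    // Axis rotations, transposed for row vectors.
    Mat3 rotX(double a) { return {{{1,0,0}, {0,cos(a),sin(a)}, {0,-sin(a),cos(a)}}}; }
    Mat3 rotY(double a) { return {{{cos(a),0,-sin(a)}, {0,1,0}, {sin(a),0,cos(a)}}}; }
    Mat3 rotZ(double a) { return {{{cos(a),sin(a),0}, {-sin(a),cos(a),0}, {0,0,1}}}; }

    int main() {
        Mat3 R = mul(mul(rotX(0.3), rotY(0.5)), rotZ(0.0));
        Vec3 g = mul(Vec3{0, 0, 1}, R);
        printf("g = (%.3f, %.3f, %.3f)\n", g.x, g.y, g.z);
    }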

Page 30: Viewing (4)

Before rotation:
B = [0, 0, 0] (the volume center)
S0 = [0, 0, −D] (the image plane center)
u0 = [1, 0, 0]
v0 = [0, 1, 0]

Now, with R the rotation matrix:
S = B − D · g
u = [1, 0, 0] · R
v = [0, 1, 0] · R

Page 31: Viewing (5)

R: the rotation matrix; S = B − D · g; u = [1, 0, 0] · R; v = [0, 1, 0] · R.

The image plane has L × L pixels, so its corner E is:

E = S − (L/2) · u − (L/2) · v

Each pixel (i, j) then has coordinates:

P = E + i · u + j · v

We enumerate the pixels by varying i and j from 0 to L−1.

Page 32: Viewing (6)

4. Cast rays. Remember that each pixel on the image plane is at P = E + i · u + j · v, and the view vector is g = [0, 0, 1] · R. So the ray has the equation:

Q = P + k · (d · g),   k = 0, 1, 2, …

where d is the sampling distance at each step.
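A minimal C++ sketch (not from the slides) putting pages 31 and 32 together for the unrotated case; the image size, distance D, step d, and example pixel are arbitrary:

    #include <cstdio>

    struct Vec3 { double x, y, z; };
    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }

    int main() {
        const int L = 256;      // image plane is L x L pixels
        const double D = 200.0; // distance from volume center to image plane
        const double d = 1.0;   // sampling distance per step

        // Unrotated frame: g along +z, u along +x, v along +y.
        Vec3 g{0, 0, 1}, u{1, 0, 0}, v{0, 1, 0}, B{0, 0, 0};
        Vec3 S = B - D * g;                         // image plane center
        Vec3 E = S - (L / 2.0) * u - (L / 2.0) * v; // image plane corner

        int i = 128, j = 64;                        // one example pixel
        Vec3 P = E + (double)i * u + (double)j * v; // pixel position
        for (int k = 0; k < 4; ++k) {               // march: Q = P + k (d g)
            Vec3 Q = P + (k * d) * g;
            printf("k=%d  Q=(%.1f, %.1f, %.1f)\n", k, Q.x, Q.y, Q.z);
        }
    }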

Page 33: Sampling

At each step of the ray, we sample the volume data.

What does that mean?

Page 34: Sampling (1)

In Tuy's paper: Q = P + k · V, where V = d · g.

At each step k, Q is rounded off to the nearest voxel (like the DDA line algorithm). Check whether that voxel is on the boundary or not (compare against a threshold); if yes, perform shading.

Page 35: Shading

• Take into account the voxel position, the distance to the image plane, the object normal, and the light position.
• The paper does not describe this in detail, but you can imagine that we can easily perform local illumination (diffuse or even specular).
• The distance alone can be used to provide a 3D depth cue (e.g., distant voxels are dimmer).

Page 36: Pros and Cons

+ Requires no boundary estimation or hidden-surface removal
+ No display holes

- Binary object representation
- Flat lighting (head-on illumination)
- Jagged surfaces
- No semi-transparency

A more sophisticated classification and lighting model appears in [Levoy 88].

Page 37: Remember…

The paper we discussed last time:
• Discrete sampling (jagged edges)
• Binary classification (no fuzzy objects)
• Shading based on binary classification (poor quality)

Page 38: Levoy's 1988 Paper

Tried to improve on the above problems:
• Node-centered voxels
• Floating-point sampling
• No explicit surface detection
• Shading and classification are done separately

Page 39: Basic Idea

• Data are defined at the corners of each cell (voxel).
• The data value inside a cell is determined by trilinear interpolation.
• No round-off of the ray position is needed when sampling.
• Accumulate colors and opacities along the ray path.

(Figure: samples c1, c2, c3 along a ray.)

Page 40: Volume Rendering Pipeline

Acquired values f0(xi) → prepared values f1(xi), which then follow two parallel paths:

• shading → voxel colors Ci(x) → ray sampling → sample colors Cs(x)
• classification → voxel opacities a(x) → ray sampling → sample opacities as(x)

The two paths meet in compositing, which produces the image pixels.

Page 41: Shading and Classification

• Shading: compute a color for each data point in the volume.
• Classification: compute an opacity for each data point in the volume.

Both are done by table lookup (transfer functions). Levoy preshaded the entire volume:

f(xi) → C(xi), a(xi)

Page 42: Shading

Use a Phong illumination model:

Light (color) = ambient + diffuse + specular

C(x) = Cp · Ka + Cp / (K1 + K2 · d(x)) · (Kd · (N(x) · L) + Ks · (N(x) · H)^n)

Cp: color of the light
Ka, Kd, Ks: ambient, diffuse, and specular coefficients
K1, K2: constants (used for depth attenuation)
d(x): depth (distance) of the sample
N(x): normal at x
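A minimal C++ sketch (not from the slides) evaluating this formula for one sample; the vectors and coefficients are made-up values, and back-facing terms are clamped to zero as is usual:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };
    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // C(x) = Cp*Ka + Cp/(K1 + K2*d) * (Kd*(N.L) + Ks*(N.H)^n)
    double shade(double Cp, double Ka, double Kd, double Ks, double n,
                 double K1, double K2, double d, Vec3 N, Vec3 L, Vec3 H) {
        double diff = std::max(0.0, dot(N, L));
        double spec = std::pow(std::max(0.0, dot(N, H)), n);
        return Cp * Ka + Cp / (K1 + K2 * d) * (Kd * diff + Ks * spec);
    }

    int main() {
        Vec3 N{0, 0, 1}, L{0, 0, 1}, H{0, 0, 1}; // normal, light, half vector
        printf("C = %.3f\n", shade(1.0, 0.1, 0.7, 0.2, 20.0,
                                   1.0, 0.01, 50.0, N, L, H));
    }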

Page 43: Normal Estimation

How to compute N(x)?

1. Compute the gradient at each cell corner using central differences.
2. Interpolate the normal inside the cell.

N(x, y, z) = [ (f(x+1) − f(x−1))/2, (f(y+1) − f(y−1))/2, (f(z+1) − f(z−1))/2 ]

(shorthand: each component differences the two neighbors along that axis)
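A minimal C++ sketch (not from the slides) of the central-difference gradient at an interior grid point; the volume layout and test function are arbitrary:

    #include <cstdio>
    #include <vector>

    struct Volume {
        int nx, ny, nz;
        std::vector<float> v; // nx*ny*nz scalars
        float at(int x, int y, int z) const { return v[(z * ny + y) * nx + x]; }
    };

    // Gradient at an interior voxel; callers must keep one voxel of margin.
    void gradient(const Volume& f, int x, int y, int z, float g[3]) {
        g[0] = (f.at(x + 1, y, z) - f.at(x - 1, y, z)) * 0.5f;
        g[1] = (f.at(x, y + 1, z) - f.at(x, y - 1, z)) * 0.5f;
        g[2] = (f.at(x, y, z + 1) - f.at(x, y, z - 1)) * 0.5f;
    }

    int main() {
        Volume vol{4, 4, 4, std::vector<float>(64)};
        for (int z = 0; z < 4; ++z)      // fill with f = x + 2y + 3z
            for (int y = 0; y < 4; ++y)
                for (int x = 0; x < 4; ++x)
                    vol.v[(z * 4 + y) * 4 + x] = x + 2.0f * y + 3.0f * z;
        float g[3];
        gradient(vol, 1, 1, 1, g);
        printf("grad = (%.1f, %.1f, %.1f)\n", g[0], g[1], g[2]); // (1, 2, 3)
    }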

Page 44: Classification

Classification: a mapping from data values to opacities.
• Region of interest: high opacity (more opaque)
• The rest: translucent or transparent

The opacity function is typically specified by the user. Levoy came up with two formulas to compute opacity:
1. Isosurface
2. Region boundary (e.g., between bone and flesh)

Page 45: Opacity Function (1)

Goal: visualize voxels that have a selected threshold value fv.
• No intermediate geometry is extracted.
• The idea is to assign voxels with value fv the maximum opacity (say av).
• Then create a smooth transition for the surrounding area, falling off to 0.
• Levoy wants to maintain a constant thickness for the transition area.

Page 46: Opacity Function (2)

Maintain a constant isosurface thickness: opacity = av on the isosurface, falling to opacity = 0 at the edge of the transition region.

Can we assign opacity based on the function value instead of distance? (This must be a local operation: we don't know where the isosurface is.)

Yes, we can base it on the value distance f − fv, but we need to take the local gradient into account.

Page 47: Opacity Function (3)

Assign opacity based on the value difference (f − fv) and the local gradient.

The gradient is the value fall-off rate: grad = |∇f|. Assume the region has a constant gradient and the isosurface transition has thickness R. Then:

opacity = av where f(x) = fv
opacity = 0 where f(x) = fv − grad · R (thickness = R)

and for f(x) in between we interpolate the opacity:

opacity = av − av · (fv − f(x)) / (grad · R)
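A minimal C++ sketch (not from the slides) of an isovalue opacity function in this spirit, using a symmetric band |fv − f| around the isovalue; the constants in main() are made-up:

    #include <cmath>
    #include <cstdio>

    // Full opacity av at f = fv, falling linearly to 0 across a band whose
    // width in value space is grad * R (R = thickness, grad = |gradient|).
    double isoOpacity(double f, double fv, double grad, double R, double av) {
        if (grad <= 0.0)                 // flat region: hit only exactly at fv
            return (f == fv) ? av : 0.0;
        double t = std::fabs(fv - f) / (grad * R); // 0 on surface, 1 at edge
        return (t >= 1.0) ? 0.0 : av * (1.0 - t);
    }

    int main() {
        // Opacity profile across the band (fv = 100, grad = 4, R = 5).
        for (double f = 80.0; f <= 120.0; f += 10.0)
            printf("f = %5.1f  ->  opacity = %.2f\n",
                   f, isoOpacity(f, 100.0, 4.0, 5.0, 0.8));
    }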

Page 48: Continuous Sampling

• We sample the volume at discrete points along the ray. (Levoy sampled color and opacity, but you can also sample the value and then assign color and opacity.)
• No integer round-off.
• Use trilinear interpolation.
• Composite the samples (front-to-back or back-to-front).

(Figure: samples c1, c2, c3 along a ray.)

Page 49: Trilinear Interpolation

• Uses 7 linear interpolations (4 along x, 2 along y, 1 along z).
• Interpolate both the value and the gradient.
• Levoy interpolated color and opacity.
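A minimal C++ sketch (not from the slides) of trilinear interpolation as 7 linear interpolations; the corner values are arbitrary:

    #include <cstdio>

    double lerp(double a, double b, double t) { return a + t * (b - a); }

    // c[i][j][k] holds the 8 corner values; (tx, ty, tz) lies in [0,1]^3.
    double trilinear(const double c[2][2][2], double tx, double ty, double tz) {
        // 4 interpolations along x
        double c00 = lerp(c[0][0][0], c[1][0][0], tx);
        double c10 = lerp(c[0][1][0], c[1][1][0], tx);
        double c01 = lerp(c[0][0][1], c[1][0][1], tx);
        double c11 = lerp(c[0][1][1], c[1][1][1], tx);
        // 2 interpolations along y
        double c0 = lerp(c00, c10, ty);
        double c1 = lerp(c01, c11, ty);
        // 1 interpolation along z
        return lerp(c0, c1, tz);
    }

    int main() {
        const double c[2][2][2] = {{{0, 1}, {2, 3}}, {{4, 5}, {6, 7}}};
        printf("center = %.3f\n", trilinear(c, 0.5, 0.5, 0.5)); // 3.5
    }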

Page 50: Compositing

The initial pixel color = black.

Back-to-front compositing uses the "under" operator:

C = C1 under background
C = C2 under C
C = C3 under C
…

Cout = Cin · (1 − a(x)) + C(x) · a(x)

Page 51: Compositing (2)

Or you can use the front-to-back compositing formula, which uses the "over" operator:

C = background over C1
C = C over C2
C = C over C3
…

Cout = Cin + C(x) · a(x) · (1 − ain)
aout = ain + a(x) · (1 − ain)
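A minimal C++ sketch (not from the slides) of front-to-back compositing. A practical benefit of this order is early ray termination: marching can stop once the accumulated opacity is nearly 1. The sample values are made-up:

    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<double> C = {0.2, 0.5, 0.1}; // sample colors, front to back
        std::vector<double> A = {0.3, 0.6, 0.2}; // sample opacities

        double accC = 0.0, accA = 0.0;
        for (size_t i = 0; i < C.size(); ++i) {
            accC += C[i] * A[i] * (1.0 - accA); // Cout = Cin + C(x) a(x) (1 - ain)
            accA += A[i] * (1.0 - accA);        // aout = ain + a(x) (1 - ain)
            if (accA > 0.99) break;             // early ray termination
        }
        printf("color = %.3f, alpha = %.3f\n", accC, accA);
    }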

Page 52: Texture Mapping

2D image + 2D polygon → texture-mapped polygon.

Page 53: Texture Mapping for Volume Rendering

Consider ray casting… (Figure: top view of the volume with the x, y, z axes.)

Page 54: Texture-Based Volume Rendering

• Render every xz slice in the volume as a texture-mapped polygon.
• The proxy polygon samples the volume data.
• Per-fragment RGBA (color and opacity) carries the classification results.
• The polygons are blended from back to front.

Use proxy geometry for sampling.

Page 55: Texture-Based Volume Rendering

Page 56: Changing Viewing Direction

What if we change the viewing position?

That is okay: we just change the eye position (or rotate the polygons and re-render)… until…

Page 57: Changing View Direction (2)

Until… you are not going to see anything this way, because the view direction is now parallel to the slice planes.

What do we do?

Page 58: Switch Slicing Planes

What do we do?
• Change the orientation of the slicing planes.
• Now the slice polygons are parallel to the yz plane in object space.

Page 59: Some Considerations… (5)

When do we need to change the slicing orientation?

When the major component of the view vector changes from y to −x.

Page 60: Some Considerations… (6)

Major component of the view vector? Given the view vector (x, y, z), take the component with the largest magnitude:
• If x: the slicing planes are parallel to the yz plane.
• If y: the slicing planes are parallel to the xz plane.
• If z: the slicing planes are parallel to the xy plane.

This is called the (object-space) axis-aligned method.
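A minimal C++ sketch (not from the slides) of this selection:

    #include <cmath>
    #include <cstdio>

    // Returns 0, 1, or 2: slice planes parallel to yz, xz, or xy respectively.
    int sliceAxis(double x, double y, double z) {
        double ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
        if (ax >= ay && ax >= az) return 0; // x major -> yz slices
        if (ay >= az) return 1;             // y major -> xz slices
        return 2;                           // z major -> xy slices
    }

    int main() {
        const char* names[] = {"yz", "xz", "xy"};
        printf("view (0.2, -0.9, 0.3) -> %s slices\n",
               names[sliceAxis(0.2, -0.9, 0.3)]);
    }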

Page 61: Three Copies of the Data Are Needed

(xz slices, yz slices, xy slices)

• We need to reorganize the input textures for different view directions.
• Reorganizing the textures on the fly is too time-consuming, so we prepare the three texture sets beforehand.

Page 62: Texture-Based Volume Rendering

Algorithm (using 2D texture mapping hardware):

Turn off the depth test; enable blending.
For each slice, from back to front:
- Load the 2D slice of data into texture memory.
- Create a polygon corresponding to the slice.
- Assign texture coordinates to the four corners of the polygon.
- Render and blend the polygon into the frame buffer (use OpenGL alpha blending).

A sketch of this loop in legacy OpenGL follows below.
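This sketch is an assumption, not code from the slides; the Slice struct and its fields stand in for whatever per-slice data the application prepares:

    #include <GL/gl.h>

    // Hypothetical per-slice data prepared by the application.
    struct Slice {
        GLuint tex;    // 2D texture holding one data slice
        float v[4][3]; // the four polygon corners in object space
        float t[4][2]; // texture coordinates at those corners
    };

    void drawVolumeSlices(const Slice* slices, int n) {
        glDisable(GL_DEPTH_TEST); // turn off the depth test
        glEnable(GL_BLEND);       // enable blending
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_TEXTURE_2D);

        for (int i = n - 1; i >= 0; --i) {           // back to front
            glBindTexture(GL_TEXTURE_2D, slices[i].tex);
            glBegin(GL_QUADS);                       // the proxy polygon
            for (int c = 0; c < 4; ++c) {
                glTexCoord2f(slices[i].t[c][0], slices[i].t[c][1]);
                glVertex3fv(slices[i].v[c]);
            }
            glEnd();                                 // blended into the frame buffer
        }
    }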

Page 63: Problem (1)

Uneven sampling rate: the spacing between slices along a ray depends on the viewing angle (d'' > d' > d), so sampling artifacts become visible.

Page 64: Problem (2)

The object-space axis-aligned method can create an artifact: the popping effect. There is a sudden change of slicing direction when the view vector transits from one major direction to another, and the change in image intensity can be quite visible.

Page 65: Solution (1)

Insert intermediate slices to maintain the sampling rate (d, d', d'').

Chaoli and Liya will present the paper.

Page 66: Solution (2)

Use image-space axis-aligned slicing planes: the slicing planes are always parallel to the view plane.

Page 67: 3D Texture-Based Volume Rendering

Page 68: Image-Space Axis-Aligned

Arbitrary slicing through the volume and texture mapping capabilities are needed.
• Arbitrary slicing can be computed in software in real time; it is basically polygon-volume clipping.

Page 69: Image-Space Axis-Aligned (2)

Texture mapping onto the arbitrary slices requires 3D solid texture mapping.

The input texture is a stack of slices (a volume). Depending on the position of each polygon, the appropriate texture data are mapped onto it.

Page 70: 3D Texture Mapping

Now the input texture space is 3D: texture coordinates go from (r, s) to (r, s, t).

(Figure: a texture cube with corners (0,0,0) … (1,1,1) and a slice polygon with per-vertex coordinates (r0,s0,t0) … (r3,s3,t3).)

Page 71: Pros and Cons

2D textured, object-aligned slices:
+ Very high performance
+ High availability
- High memory requirement
- Only bilinear interpolation
- Inconsistent sampling rates
- Popping artifacts

Page 72: Pros and Cons

3D textured, view-aligned slices:
+ Higher quality
+ No popping effect
- Need to compute the slicing planes for every view angle
- Limited availability

Page 73: Classification Implementation

(Figure: four transfer-function curves, Red, Green, Blue, and Alpha, each plotted against the scalar value v; together they produce the (R, G, B, A) lookup.)

Page 74: Classification Implementation (2)

Pre-classification, using a color palette:

    glColorTableEXT(GL_SHARED_TEXTURE_PALETTE_EXT, GL_RGBA8, 256*4,
                    GL_RGBA, GL_UNSIGNED_BYTE, color_palette);

Post-classification, using a 1D (or 2D, 3D) texture:

    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 256*4, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, color_palette);

Page 75: Classification Implementation (3)

Post-classification using a dependent texture:
• Texture unit 0 holds the volume intensity v, sampled at texture coordinates (s, t, r).
• Texture unit 1 holds the transfer function: the intensity from unit 0 is used as the lookup coordinate, returning (R, G, B, A).

Page 76: Shading

Using OpenGL 1.2:
• Pre-compute the normalized gradient for every voxel node and store it in the color components of an RGB texture.
• Store the light direction as the primary color.
• Use GL_DOT3_RGB_EXT to combine the primary color and the texture color with a three-component dot product.
• Only a single light source is allowed.

Page 77: Shading (2)

Using a per-fragment shader:
• Store the pre-computed gradient in an RGBA texture.
• Light 1 direction as constant color 0; light 1 color as the primary color.
• Light 2 direction as constant color 1; light 2 color as the secondary color.

Page 78: Shading (3)

NVIDIA register combiners.

Page 79: Non-Polygonal Isosurface

• Store the voxel gradient as an RGB texture.
• Store the volume density as alpha.
• Use the OpenGL alpha test to discard fragments whose density does not equal the threshold.
• Use the gradient texture to perform shading.

Page 80: Non-Polygonal Isosurface (2)

Isosurface rendering results: no shading, diffuse only, and diffuse + specular.