Page 1: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

University of Applied Sciences of Wiesbaden
Department of Computer Science
Image Processing
Detlef Richter

New Applications of Digital Image Processing in Technology and Medicine

CERN IT Division

CH – 1211 Genève 23
August 15th, 2003

Page 2: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

Agenda
1. Introduction
2. Hardware Development
3. Basic Algorithms
4. Spectral Sensitivity
5. Image Sensors
6. Stereo Vision
7. Examples of Applications

7.1 Automated Assembly
7.2 Sound Track Restoration
7.3 Computer Based Learning
7.4 Image Processing of medical Images
7.5 medical Navigation

8. Summary

Page 3: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

1. Introduction

1975 Systematic Scientific Development of Image Processing

Problem : Availability of sufficiently fast ADC / DAC Hardware

1980 Introduction of Image Processing in Industrial Assembly Lines, Production Control, Quality Reliability

1985 medical Image Analysis

For Example : Production Time of a Mid-Size Car

1985 Western Europe 65 h / Japan 35 h

2002 General Motors, Eisenach 8 h

achieved by automation, partly based on robotics and image processing

Page 4: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

2. Hardware Development (1)

• Dedicated Image Processing Unit

• Image Display on Video Monitor

• Image Data Transfer to Computer via DMA

• Image Processing Unit Control via C-Bus Extension

• Bottlenecks : Transfer Rates of DMA and Computer Bus System

[ Block Diagram Hardw01.DS4 : Imaging Unit ( ADC, DAC, Image Memory, DMA, Controller and Interfaces ) connected via the Bus System to the Computer ( Memory, CPU, Graphic Board ); Imaging Sensor and Video Monitor attached to the Imaging Unit, Computer Monitor and Process Control attached to the Computer ]

Page 5: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

2. Hardware Development (2)

• Frame Grabber Boards

• Optional Local CPU on Board

• Improvement of Processing Speed

• Disadvantage : More Complex Programming

• Decreasing use of Video Output

• No Separate Video Monitor Necessary

• Image Output and Programming in Different Windows on Same Screen

• Bottleneck : Computer Bus System if no Local CPU on Board

[ Block Diagram Hardw02.DS4 : Frame Grabber Board ( ADC, DAC, Memory, optional local CPU, Controller, Interface ) on the Computer Bus System; Imaging Sensor and Video Monitor attached to the Board, Computer with Memory, CPU, Graphic Board, Computer Monitor and Process Control ]

Page 6: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

2. Hardware Development (3)

• No Memory on Board

• Image Data and Programs in Same Memory

• Fast Data Transfer via Computer Bus

PCI-Bus : Transfer Rate Sufficient for Transferring 3 Video Frames ( 1.3 MByte ) at the Same Time, e. g. RGB-Images

[ Block Diagram Hardw03.DS4 : Frame Grabber Board ( ADC, Controller, Interface, no on-board Memory ) on the Computer Bus System; Imaging Sensor attached to the Board, Computer with Memory, CPU, Graphic Board, Computer Monitor and Process Control ]

Page 7: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

3. Basic Algorithms ( 1 )

Processing Pipeline :
• Modeling of a Scene ( A-Priori Knowledge )
• Global Preprocessing ( Noise Reduction, Edge Filtering, Image Transforms etc. )
• Segmentation ( Objects / Background ), ROI or VOI
• Extraction of Attributes / Values
• Analysis of Attributes / Values ( Classification )
• Interpretation, Action

Typical Algorithms :
• Binarisation
• Low Pass Filtering ( Noise Reduction )
• 2D High Pass Filtering ( Edge Detection )
• 3D High Pass Filtering ( Shape Detection )
• Hough Transform
• Fourier Transform
• Autocorrelation Function
• Specific Procedures

Page 8: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

3. Basic Algorithms ( 2 )

• Video Image

• Resolution 768 x 576 Pixels

• 8 Bit Gray Level Resolution

Page 9: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

3. Basic Algorithms ( 3 )

• Global Filtering

e. g. 2D-Low Pass Filter ( Median )

• Scan the Image with n*n-Matrix

• Apply Matrix for Each Pixel

• Generate New Image Frame

( Noise Reduction )
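
To make the scan-and-apply scheme concrete, here is a minimal sketch of such a sliding-window median filter in Python with NumPy; the window size n = 3 and the reflected border handling are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def median_filter(image: np.ndarray, n: int = 3) -> np.ndarray:
    """Apply an n*n median filter to a 2D gray-level image (noise reduction)."""
    pad = n // 2
    # Reflect the border so every pixel has a full n*n neighbourhood.
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + n, x:x + n]   # n*n matrix around (x, y)
            out[y, x] = np.median(window)       # replace pixel by window median
    return out

# Example: suppress salt-and-pepper noise in an 8 bit test image.
noisy = np.random.randint(0, 256, (576, 768), dtype=np.uint8)
denoised = median_filter(noisy, n=3)
```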

Page 10: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

3. Basic Algorithms ( 4 )

• Global Filtering

e. g. 2D-High Pass Filter

• Scan the Image with n*n-Matrix

• Apply Matrix for Each Pixel

• Generate New Image Frame

( Edge Detection )
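
The high pass case follows the same pattern; only the matrix changes. A brief sketch, assuming a 3 x 3 Laplacian kernel as the high pass mask (the slides do not specify which kernel is used):

```python
import numpy as np

# 3*3 Laplacian high-pass kernel (one common choice for edge detection).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def high_pass_filter(image: np.ndarray, kernel: np.ndarray = LAPLACIAN) -> np.ndarray:
    """Convolve a 2D gray-level image with a high-pass kernel (edge detection)."""
    n = kernel.shape[0]
    pad = n // 2
    padded = np.pad(image.astype(np.float64), pad, mode="reflect")
    out = np.zeros(image.shape, dtype=np.float64)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + n, x:x + n]
            out[y, x] = np.sum(window * kernel)   # apply the matrix at each pixel
    # Edges show up as large absolute responses; clip back to 8 bit for display.
    return np.clip(np.abs(out), 0, 255).astype(np.uint8)
```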

Page 11: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

3. Basic Algorithms ( 5 )

• Global Transformation

e. g. Hough Transform

for Lines and Circles

• Transform Each Pixel of an Edge

into a Hough Matrix

• Analyze the Hough Matrix

• Gain Mathematical Equations

of Edges
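
A compact sketch of the line case: each edge pixel votes for all lines rho = x * cos( theta ) + y * sin( theta ) passing through it, the votes accumulate in a Hough matrix, and peaks in that matrix yield the line equations. The accumulator resolution chosen here is arbitrary.

```python
import numpy as np

def hough_lines(edge_image: np.ndarray, n_theta: int = 180):
    """Accumulate the edge pixels of a binary edge image into a (rho, theta) Hough matrix."""
    h, w = edge_image.shape
    diag = int(np.ceil(np.hypot(h, w)))              # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))          # 0 .. 179 degrees
    accumulator = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_image)                  # coordinates of edge pixels
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        rho_idx = np.round(rhos).astype(int) + diag  # shift so indices are non-negative
        accumulator[rho_idx, np.arange(n_theta)] += 1
    return accumulator, thetas, diag

# Analysing the matrix: the strongest cells correspond to the most prominent lines;
# a peak at (r, t) gives the line  (r - diag) = x * cos(thetas[t]) + y * sin(thetas[t]).
```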

Page 12: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

3. Basic Algorithms ( 6 )

• Calculate Vanishing Points of Lines

• Define optimal Cut-Out of Source Image

• Define Rectangular Area in Target Image

• Interpolate all Pixels in Target Image by Resampling
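
As a sketch of the resampling step, the following uses OpenCV as a stand-in: the quadrilateral cut-out of the source image (as it would follow from the detected lines and their vanishing points) is mapped onto a rectangular target image and every target pixel is interpolated. All coordinates and the placeholder image are made up for illustration.

```python
import cv2
import numpy as np

# Placeholder source image; in practice this would be the perspectively
# distorted camera image (e.g. loaded with cv2.imread).
source = np.zeros((480, 640), dtype=np.uint8)

# Corners of the optimal cut-out in the source image, e.g. derived from the
# detected lines and their vanishing points (values invented for illustration).
src_quad = np.float32([[105, 80], [610, 95], [630, 470], [90, 455]])

# Rectangular area in the target image.
target_w, target_h = 512, 384
dst_rect = np.float32([[0, 0], [target_w, 0], [target_w, target_h], [0, target_h]])

# Homography from the source quadrilateral to the target rectangle, then
# resample every pixel of the target image (bilinear interpolation).
H = cv2.getPerspectiveTransform(src_quad, dst_rect)
rectified = cv2.warpPerspective(source, H, (target_w, target_h), flags=cv2.INTER_LINEAR)
```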

Page 13: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

3. Basic Algorithms ( 7 )

• Global Transformation

e. g. Fourier- Transform

( 1D and 2D )

• FFT

• Transform Each Line and/or Row

from Geometric Domain

into Frequency Domain

• Analyze Frequency Domain

• Extract Frequency Attributes

or Change Frequency Attributes

and Execute Inverse Transform

• Gray Levels with logarithmic enhanced Representation
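
A short sketch of this workflow with NumPy's FFT: transform into the frequency domain, display the logarithmically enhanced magnitude, optionally change frequency attributes and execute the inverse transform. The synthetic image and the cut-off radius are placeholders.

```python
import numpy as np

image = np.random.rand(576, 768)            # placeholder for a gray level image

# 2D FFT: geometric (spatial) domain -> frequency domain, zero frequency centred.
spectrum = np.fft.fftshift(np.fft.fft2(image))

# Logarithmically enhanced representation of the magnitude for display.
log_magnitude = np.log1p(np.abs(spectrum))

# Example of changing frequency attributes: keep only a disc of low frequencies
# around the centre (radius 40 is arbitrary), then execute the inverse transform.
h, w = image.shape
yy, xx = np.ogrid[:h, :w]
mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= 40 ** 2
filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
```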

Page 14: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

4. Spectral Sensitivity ( 1 )

• Visible Electromagnetic Wavelengths 400 nm ( violet-blue ) to 750 nm ( red )

Monochromatic Cameras

• Gray Level Video Camera ( Luminance Signal Output According to Integral Sensitivity )

• IR-sensitive Cameras for medical Applications ( One Channel, Luminance or False Color Output )

• Video Cameras with IR-Long Pass Filter for Technical Applications ( One Channel, Small Bandwidth )

Spectral Sensitive Cameras

• One Chip Cameras with R,G,B-Filter Mask, FBAS-Output ( Three Channels coded on one Line )

• Three Chip Cameras, R,G,B-Output ( Three Channels )

• Landsat Images, IR-Channels ( up to Seven Channels )

Page 15: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

4. Spectral Sensitivity ( 2 )

[ Diagram Bild620a.DS4 : Left, Slit Mask Camera with one CCD chip behind an R,G,B slit mask ( reduced resolution ); right, Three Chip Camera with lens, beam splitter and separate R-, G- and B-filtered CCD chips ( original resolution ) ]

Page 16: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

5. Image Sensors

• Video Cameras ( 2D )
  – Reflection of Incident Light on Surface
  – Monochrome Cameras ( Gray Level, Modified IR Level )
  – Color Cameras ( One Line FBAS, Three Lines R,G,B )
  – Infrared Cameras ( monochrome )

• Line Scan Camera ( 1D )

• Roentgen Sensitive Image
  – 2D Transparent Shadow of 3D Object
  – Large Area Solid Roentgen-Sensitive Devices

• Computer Tomography ( CT ), since 1972
  – 2D Slices of 3D Object, Volume Image, X-Ray Scattering by High Z-Nuclei

• Magnetic Resonance Tomography ( MRT ), since 1980
  – 2D Slices of 3D Object, Volume Image, “Light Emitting Volume by Hydrogen Nuclei”

• Positron Emission Tomography ( PET ), since 1978
  – 2D Slices of 3D Object, Distribution of Radionuclides, shows chemical activity, “Functional Image”

• Ultra Sound ( 2D ), since 1960
  – Reflection on Tissue Discontinuities, Run Time Measurement of Sound

Page 17: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

5.1 Large Area Solid Roentgen-Sensitive Devices

• Large scale Roentgen sensitive sensor

• Sensor elements 200 μm * 200 μm

• Resolution 2K * 2K pixels

• Grey level resolution 16 Bit

• Data 16 MB per Image

• Roentgen energy 80 keV → 400 keV

• approx. 3 frames per s ( 1K * 1K : 7 frames per s )

• Quality assurance / Production Control / Materials Research and Testing

• Medical application

Page 18: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

5.2 PET Images

• ¹⁸F-Fluorodeoxyglucose

• Decay

• p → n + e⁺ + ν

• e⁺ + e⁻ → 2 × 511 keV

• Ekin( e⁺ ) = 0.633 MeV

• mean free path in H2O : 2.4 mm

• τ½ = 109.7 min.

• search for diseases before symptoms appear ( e.g. Alzheimer disease, micro metastases etc. )

• Problems :Mathematical modeling of the physiological procedure

¹⁸F ( 9 p, 9 n ) → ¹⁸O ( 8 p, 10 n ) + e⁺ + ν

Page 19: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

5.3 US Images

• US-Picture Abdomen

• High Signal to Noise Ratio

• Frequency of Sound 8 to 13 MHz

• Power 10 to 50 mW

• Velocity of Waves Dependent on Tissue

• Measurement of Run Time assuming constant Velocity

• Ratio of Emitting to Receiving Time 0.1%
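
The run-time measurement reduces to a simple relation: with an assumed constant speed of sound c ( about 1540 m/s in soft tissue ) a reflector at depth d produces an echo after t = 2 d / c, so d = c * t / 2. A tiny worked example (the echo time is invented for illustration):

```python
# Depth from ultrasound echo run time, assuming a constant speed of sound.
SPEED_OF_SOUND = 1540.0      # m/s, typical average value for soft tissue

def echo_depth(round_trip_time_s: float) -> float:
    """Reflector depth in metres for a measured pulse-echo round-trip time."""
    return SPEED_OF_SOUND * round_trip_time_s / 2.0

# Example: an echo arriving 65 microseconds after emission
# corresponds to a reflector at about 5 cm depth.
print(f"{echo_depth(65e-6) * 100:.1f} cm")   # -> 5.0 cm
```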

Page 20: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

6. Stereo Vision ( 1 )

• Model of Pinhole Camera with Radial Symmetric Distortion of the lenses

• Calibration of Intrinsic Mathematical and Physical Parameters :

– Width to Height Ratio of Pixels
– Intersection of the Optical Axes of the Lenses with the Surface of the Sensor Chip
– Image Width of the Lenses
– Coefficients defining the Radial Symmetric Distortion
– Position and Orientation of the Cameras
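
A sketch of the camera model described above: pinhole projection followed by a radially symmetric distortion of the image coordinates. The two-coefficient polynomial and all numeric parameters are assumptions for illustration, not values from the presented system.

```python
import numpy as np

def project_point(X_cam: np.ndarray, fx: float, fy: float,
                  cx: float, cy: float, k1: float, k2: float) -> np.ndarray:
    """Project a 3D point given in camera coordinates to pixel coordinates,
    using a pinhole model with radially symmetric lens distortion."""
    x, y = X_cam[0] / X_cam[2], X_cam[1] / X_cam[2]   # ideal (undistorted) image plane
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2             # radial distortion
    xd, yd = x * factor, y * factor
    # fx, fy encode focal length and the width-to-height ratio of the pixels;
    # (cx, cy) is the intersection of the optical axis with the sensor.
    return np.array([fx * xd + cx, fy * yd + cy])

# Illustrative intrinsic parameters and a test point one metre in front of the camera.
pixel = project_point(np.array([0.1, 0.05, 1.0]),
                      fx=800.0, fy=810.0, cx=384.0, cy=288.0,
                      k1=-0.2, k2=0.05)
```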

Page 21: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

6. Stereo Vision ( 2 )

• Use chessboard like pattern of known size

• Apply Hough Transform

• Find the known number of edges of known direction

• Calculate all intersections

• Find all corners with subpixel precision

• Calculate position and orientation of the camera
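
The presentation describes its own Hough-based corner detection; as an illustration of the overall procedure, the following sketch uses OpenCV's ready-made chessboard detection, subpixel corner refinement and calibration instead. Pattern size, square size and the image folder are hypothetical.

```python
import glob
import cv2
import numpy as np

# Inner corner count and square size of the chessboard pattern (assumed values).
pattern = (9, 6)
square_size = 0.025  # metres

# 3D coordinates of the pattern corners in the board's own coordinate system.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points, image_size = [], [], None
for fname in glob.glob("calib_images/*.png"):        # hypothetical image folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Refine all detected corners to subpixel precision.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

if obj_points:
    # Intrinsics (camera matrix K, distortion coefficients) and, per view,
    # the position and orientation of the camera relative to the board.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
```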

Page 22: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.1 Automated Industrial Assembly ( 1 )

• Top View of an Industrial Precision Mechanical Production Line

• Problem of Perspective Distortion : Recognition of exact Positions in x,y-Plane

• Problem of Positioning the Robot Hand for Automatic Production and Control within 3D Space

• Parts Recognition, Identification and Measurement using Bottom Illumination by IR-Light

Page 23: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.1 Automated Industrial Assembly ( 2 )

• Calibration of Monocular or Binocular Vision System ( Position, Orientation, Focal Length of Camera ) with Respect to the Coordinates of the Assembly System

• Calibration of Robot Coordinate System with Respect to Vision System

• Development of Customer-Specific Recognition Algorithms

• Optimization of Lighting

• Test of Complete System

Page 24: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.2 Sound Track Restoration ( 1 )

Film geometry ( diagram tonsp03e.DS4 ) :
• Width of sound track 2.5 mm
• Width of frame 22 mm
• Width of carrier 35 mm
• Frame height 16 mm
• Frame periodicity 19 mm
• Projection speed 24 frames / s = 45.6 cm / s

[ Diagram tonsp04e.DS4 : Film scanner with lamp and slit aperture, 8 Bit CCD line camera ( 512 pixels ), sensor amplifier, frame grabber, CPU, disc, video and audio output, EIZO video monitor for visual control, PC for control ]

Page 25: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.2 Sound Track Restoration ( 2 )

• Reproduction Speed 24 Frames / s

• Sound Track Scanning with Line Camera 6 Frames / s

• Scanning of Track Width 512 Pixels per Line

• Gray Level Resolution 8 Bit / Pixel = 256 Gray Levels

• Nyquist Frequency 24 kHz

• Analog Cut-off Frequency 15 kHz

• Frequency Resolution 2000 Lines per Frame

• Data Transfer Rate ( Camera – Disk Memory ) 6 MByte / s

• Required Storage Capacity 24 MByte / 1 Second of Sound
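
These figures are consistent with each other; a quick check of the arithmetic, using only the numbers listed above:

```python
# Consistency check of the sound track scanning parameters listed above.
frames_per_s_playback = 24      # reproduction speed
frames_per_s_scanning = 6       # line camera scanning speed
lines_per_frame = 2000
bytes_per_line = 512            # 512 pixels, 8 bit = 1 byte each

# Line rate at playback speed and the resulting Nyquist frequency.
line_rate = frames_per_s_playback * lines_per_frame        # 48 000 lines / s
nyquist_hz = line_rate / 2                                  # 24 000 Hz = 24 kHz

# Data rate while scanning at 6 frames / s (camera -> disk).
scan_rate = frames_per_s_scanning * lines_per_frame * bytes_per_line
print(scan_rate / 1e6)          # ~6.1 MByte / s

# Storage needed per second of reproduced sound (24 frames of sound track).
storage_per_s = frames_per_s_playback * lines_per_frame * bytes_per_line
print(storage_per_s / 1e6)      # ~24.6 MByte per second of sound
```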

Page 26: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.2 Sound Track Restoration ( 3 )

Left : Intensity Coded Sound Track of the Speech of Albert Einstein, 1930, at the Opening Ceremony of the Broadcasting Fair in Berlin ( Scratches, Spots, Fibers )

Right : Restored Sound Track within the ROI, Conserving the Authenticity of the First Recording; Conversion of the Sound Track into Digital Audio Data

Page 27: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.2 Sound Track Restoration ( 4 )

Left : Twofold Double Sided Variable Area Code with Faults on Sound Track
Right : Multifold Double Sided Variable Area Code with Faults on Sound Track

Page 28: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.3 Computer Based Learning ( 1 )

• Analysis of Time Dependent Color Change caused by Chemical Reactions

• Color Sensitive Chemical Indicator

• Time Dependent Addition of Reagents

• Analysis of Colors in Defined Color Space ( R, G, B )
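
A minimal sketch of such a color analysis: for every recorded frame the mean ( R, G, B ) value of the observed region is computed, giving the color change of the indicator as a function of time. The synthetic frames stand in for the real recording.

```python
import numpy as np

def mean_rgb_over_time(frames: list) -> np.ndarray:
    """Mean (R, G, B) value of a region of interest for each video frame,
    giving the colour change of the indicator as a function of time."""
    return np.array([frame.reshape(-1, 3).mean(axis=0) for frame in frames])

# Example with synthetic frames standing in for the recorded reaction
# (each frame is an H x W x 3 RGB image of the region around the sample).
frames = [np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8) for _ in range(10)]
rgb_curve = mean_rgb_over_time(frames)   # shape (n_frames, 3), one RGB triple per sample
```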

Page 29: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.3 Computer Based Learning ( 2 )

• Example : Reagent Phenolphthalein, Measurement Time 300 s, Sample Intervals 500 ms

• Create Colored Animation of Chemical Reaction

Page 30: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.4 Image Processing of medical Images (1)

Methods of Segmentation :

• Body Surface : Change of Gray Level Distribution

• Vessels, Tracing of Vessels : Tube Model with slowly varying Diameter and Ramifications, Filament Model, Transition between both Models possible

- Seed Algorithm with Use of Contrast Agent Injection

- 3D rays in forward Direction

- Texture Analysis on Surface of Spheres
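
A small sketch of the seed algorithm mentioned above, here as plain gray-value region growing on a 3D volume (the connectivity and the threshold criterion are illustrative choices; the clinical implementation may differ):

```python
from collections import deque
import numpy as np

def region_grow(volume: np.ndarray, seed: tuple, threshold: float) -> np.ndarray:
    """Simple seed-based region growing: starting from a seed voxel, collect all
    connected voxels whose gray value stays within +/- threshold of the seed value."""
    seed_value = float(volume[seed])
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        # 6-connected neighbourhood in the volume.
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(float(volume[n]) - seed_value) <= threshold:
                    mask[n] = True
                    queue.append(n)
    return mask

# Usage: segment a bright, contrast-enhanced vessel from a CT volume.
# vessel_mask = region_grow(ct_volume, seed=(40, 128, 128), threshold=120.0)
```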

Page 31: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.4 Image Processing of medical Images (2)

• Optic nerve : 3D interactively directed rays, no defined contrast

• Ventricles : 3D-growing region, Problem : Running out in Filament Structures

• Tumors : grow slowly and unnoticed, retreating into the liquor spaces ( ventricles ); first complaints arise when they press on nerves ( optic nerves, auditory nerves )

Threshold based and edge detection based procedures are unsuccessful.

Manual contouring is time consuming, but it is the standard procedure.

New : Interactive extraction of significant numerical attributes for texture analysis, 2D texture analysis, 2D reconstruction of the tumor

Disadvantage : No proper Contours detectable, manual completion necessary

• Liver : Definition of liver segments by blood vessels ( maximum 5 out of 7 liver-segments are resectable )

Page 32: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.5 medical Navigation ( 1 )

• Besides infrared guided systems, other navigation systems exist :

• Mechanical, Stereotactic ( direct contact )
• Electromagnetic ( electromagnetic disturbance )
• Laser Guided ( laser beam )

Page 33: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing
Page 34: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.5 medical Navigation ( 2 ), Components

• IR-based stereo vision system with two CCD cameras with synchronised H-Sync-Signals

• four camera system under development

• conventional calibration of cameras, i. e.
  – spatial position and orientation
  – focal length
  – radial symmetric distortion of the optical lenses

Page 35: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.5 medical Navigation ( 3 ), Tracker

• Tracker for biopsy needles with IR-LEDs, adapter for robot based guidance, force / torque sensor for feedback

• ( min. 3 LEDs, max. 6 LEDs )

• Wavelength of 895 ± 45 nm

• Long-pass filter with cut-off wavelength of 830 nm

• CT-volume image data ( DICOM III Standard )

• Registration of the patient's position

• Visualization of navigation

Page 36: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.5 medical Navigation (4 ), Aim of Brachytherapy

• Irradiation of recurrent or inoperable tumours,

• preserving normal tissue ( cells ),

• avoiding pre-irradiated tissue,

• using biopsy needles with inner diameter of 2.0 mm and outer diameter of 2.1 mm.

Page 37: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

Using ¹⁹²Ir ( Z = 77, N = 115 ) as radioactive source; t½ ( ¹⁹²Ir ) = 74.2 days

                               External therapy               Brachytherapy
Radiation energy               18 MeV                         0.6 MeV
Daily applications             1                              2
Duration of one application    3 directions, 10 - 20 s each   5 - 15 min
Dose of one application        1.8 Gy                         4 - 7 Gy
Duration of therapy            5 - 6 weeks                    1 week
Total dose                     45 Gy                          30 - 40 Gy

Page 38: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

4. Mathematical Procedure

1. Step :

Calibrate the stereovision system

2. Step :

Calibrate the tracker, getting a precise tracker model T, i. e. calculate the 3D coordinates of the LEDs of the tracker, noted by : T = { Tᵢ | Tᵢ ∈ ℝ³, i = 1, ..., n }

Repeat this step 50 times to obtain higher precision in the coordinates

Use a-priori-knowledge about the model for matching corresponding points

Page 39: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

3. Step :

Calibrate the equation of the biopsy needle ( assumed to be a straight line ), with respect to the tracker model, with different lengths of the needle

Page 40: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

4. Step : ( Application )

Find the 3D coordinates of the actual tracker position P : P = { Pᵢ | Pᵢ ∈ ℝ³, i = 1, ..., n }

Superimpose the actual tracker position P onto the model T by optimising

min Σᵢ ‖ Pᵢ − Tᵢ ‖²

getting the precise position and orientation of the tracker in 3D

Calculate the precise position of the end of the biopsy needle and its orientation
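
The slides only state the least-squares superposition; one standard way to solve it, assuming the correspondence between measured points Pᵢ and model points Tᵢ is already known from the a-priori model matching, is the SVD-based rigid alignment sketched below.

```python
import numpy as np

def superimpose(P: np.ndarray, T: np.ndarray):
    """Rigid transform (R, t) minimising sum_i || P_i - (R @ T_i + t) ||^2,
    for corresponding 3D point sets P (measured LEDs) and T (tracker model)."""
    cP, cT = P.mean(axis=0), T.mean(axis=0)          # centroids
    H = (T - cT).T @ (P - cP)                        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cP - R @ cT
    return R, t

# Example: the needle tip, known in tracker-model coordinates, is mapped into
# the camera coordinate system once the tracker pose (R, t) has been found.
# tip_cam = R @ tip_model + t
```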

Page 41: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.5 medical Navigation ( 5 ), Evaluation

High precision of position and orientation measurement

Page 42: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.5 medical Navigation ( 6 ), Visualization

• Definition of an intersecting plane through the CT tomogram data set, containing the biopsy channel

• The resolution of the CT tomogram data set is not homogeneous; a time consuming 3D interpolation would be necessary

• Homogenisation of the data set ( isotropic voxels ), increasing the data by a factor of 5 to 10, and using the nearest voxel of the intersection plane without interpolation
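
A sketch of this nearest-voxel reslicing: the volume is assumed to be already homogenised to isotropic voxels, and the intersecting plane is given by an origin on the biopsy channel and two orthonormal in-plane directions. All names and numbers are illustrative.

```python
import numpy as np

def extract_plane(volume: np.ndarray, origin: np.ndarray,
                  u: np.ndarray, v: np.ndarray, size: int = 256) -> np.ndarray:
    """Sample an oblique plane from an isotropic volume with nearest-voxel lookup
    (no interpolation), as described for the navigation display."""
    out = np.zeros((size, size), dtype=volume.dtype)
    for i in range(size):
        for j in range(size):
            p = origin + (i - size // 2) * u + (j - size // 2) * v
            idx = np.round(p).astype(int)            # nearest voxel
            if all(0 <= idx[k] < volume.shape[k] for k in range(3)):
                out[i, j] = volume[tuple(idx)]
    return out

# Usage: a plane through the biopsy channel, spanned by two orthonormal unit
# vectors u, v and centred on a point of the needle axis.
# slice_img = extract_plane(ct_iso, origin=np.array([120.0, 256.0, 256.0]),
#                           u=np.array([0.0, 1.0, 0.0]), v=np.array([0.6, 0.0, 0.8]))
```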

Page 43: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.5 medical Navigation ( 7 ), Example of Visualization

Page 44: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.5 medical Navigation ( 8 ), Projected Lay-out

Page 45: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.5 Medical Navigation ( 9 ), Registration of Patient

• 3D segmentation of landmarks ( containing IR-LEDs ) within the CT volume data of the region of interest of the body, defining the position of the data cube and of the tumor within the body.

• Segmentation of the landmarks by the stereo vision system during medical treatment, defining the absolute position of the tumor within the CT-data cube.

• Transfer Navigation Data from the Vision System into the CT Volume Data by

xCT = R * xCam + T

• Visualize Navigation of Biopsy Needle within CT-volume Data

Page 46: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

7.5 medical Navigation ( 10 )

8. Procedure of Virtual Navigation and Further Steps

• Interactive segmentation of tumor and other risk structures by the medical specialist.

• Automatic segmentation of bones.

• Calculation of possible 3D access paths to the tumor.

Page 47: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing
Page 48: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

Iterative verification of the position of the biopsy needle within the tumour by CT during application.

Automatic positioning of the biopsy needle by a robot according to the known access paths for manual interactive application by the physician ( projected ).

Automatic positioning of the biopsy needle by a robot according to the known access paths using online force feedback ( projected ).

Page 49: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

8. Summary ( 1 )

• Hardware Development

• Basic Algorithms

• Spectral Sensitivity

• Image Sensors

• Stereo Vision

• Examples of Applications

Page 50: University of Applied Sciences of Wiesbaden Department of Computer Science Image Processing

8. Summary ( 2 )

• Fast Developing Discipline

• Many Known Algorithms Exist for Technical Applications ( mainly without any interaction )

• Brand New Applications have to be developed for medical Diagnosis and Therapy

• Only very few standard Algorithms Exist for medical Applications ( interactions necessary )

• Creativity, Intuition and Experience are necessary to Solve Problems

• Many Findings from other Disciplines ( Electrical Engineering, Mathematics, Physics, Chemistry ) are required