Image Processing Basics


Transcript of Image Processing Basics

Page 1: Image Processing Basics

Saudi Board of Radiology: Physics Refresher Course

Kostas Chantziantoniou, MSc2, DABR
Head, Imaging Physics Section

King Faisal Specialist Hospital & Research Centre
Biomedical Physics Department

Riyadh, Kingdom of Saudi Arabia

Image Processing Basics

Page 2: Image Processing Basics

Image Processing: Basics

There are many factors that determine the diagnostic usability of a digital image:

• exposure techniques
• detector quality (technology dependent)
• scatter
• viewing conditions
• quality of readers
• number of readers
• image processing

Page 3: Image Processing Basics

Image Processing: Basics

Why do we need image processing?

• since the digital image is “invisible”, it must be prepared for viewing on one or more output devices (laser printer, monitor, etc.)

• the digital image can be optimized for the application by enhancing or altering the appearance of structures within it (based on: body part, diagnostic task, viewing preferences, etc)

• it might be possible to analyze the image in the computer and provide cues to the radiologists to help detect important/suspicious structures (e.g.: Computer-Aided Diagnosis, CAD)

Page 4: Image Processing Basics

Image Processing: Transformations

There are three types of image processing (transformation algorithms) used:

• image-to-image transformations
• image-to-information transformations
• information-to-image transformations

Page 5: Image Processing Basics

Image Processing: Image-to-Image Transformations

Image In Image Out

• enhancement (make image more useful, pleasing)

• restoration (compensate for known image degradations to produce an image that is “closer” to the (aerial) image that came out of the patient, e.g.: deblurring, grid line removal)

• geometry (scaling/sizing/zooming, morphing one object into another, distorting or altering the spatial relationship between pixels)

Page 6: Image Processing Basics

Image Processing: Image-to-Image Transformations

There are three types of image-to-image transformations:

• point transformation
• local transformation
• global transformation

Page 7: Image Processing Basics

Image Processing: Image-to-Image Transformations

Point Transformation (uses Look-Up Tables, LUTs, to adjust the tonescale or image contrast)

• the shape of the LUT depends on the desired “look” of the output image and the structure of the histogram
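As an illustration (not from the slides), a point transformation can be implemented as a single table lookup per pixel. The sketch below assumes NumPy, a 12-bit input image, and a simple linear window/level LUT; the window and level values are hypothetical choices.

```python
import numpy as np

def window_level_lut(window, level, bits_in=12, bits_out=8):
    """Build a linear LUT that maps the input range
    [level - window/2, level + window/2] onto the full output range;
    values outside the window are clipped to black or white."""
    in_values = np.arange(2 ** bits_in)
    lo = level - window / 2.0
    out_max = 2 ** bits_out - 1
    lut = np.clip((in_values - lo) / window, 0.0, 1.0) * out_max
    return lut.astype(np.uint8)

def apply_point_transform(image, lut):
    """Point transformation: each output pixel depends only on the
    corresponding input pixel, so one table lookup per pixel suffices."""
    return lut[image]

# Hypothetical 12-bit image; a narrow window increases displayed contrast
image = np.random.randint(0, 4096, size=(256, 256), dtype=np.uint16)
display_image = apply_point_transform(image, window_level_lut(window=1000, level=2048))
```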

Page 8: Image Processing Basics

Image Processing: Image-to-Image Transformations

Page 9: Image Processing Basics

Image Processing: Image-to-Image Transformations

Page 10: Image Processing Basics

Image Processing: Image-to-Image Transformations

Page 11: Image Processing Basics

Image Processing: Image-to-Image Transformations

Page 12: Image Processing Basics

Image Processing: Image-to-Image Transformations

Image contrast window

Page 13: Image Processing Basics

Image Processing: Image-to-Image Transformations

Image brightness window

Page 14: Image Processing Basics

Image Processing: Image-to-Image Transformations

Non-linear LUTs can be used as well (but more complex to implement)

Page 15: Image Processing Basics

Image Processing: Image-to-Image Transformations

What LUT shape should be used?

Page 16: Image Processing Basics

Image Processing: Image-to-Image Transformations

Local Transformation (Edge Enhancement, Zooming)

Page 17: Image Processing Basics

Image Processing: Image-to-Image Transformations

Edge Enhancement (Unsharp Masking Technique)

Page 18: Image Processing Basics

Image Processing: Image-to-Image Transformations

Creating a blurred image

The pixels within the kernel are averaged to determine the value of the center pixel for the output image

Repeat the process for all pixels in the image

Page 19: Image Processing Basics

Image Processing: Image-to-Image Transformations

Kernel size will have a large effect on the level of smoothing that is performed

Sum of all pixel weight factors in kernel must equal 1
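As an aside (not from the slides), here is a minimal sketch of the blurring step, assuming NumPy; the kernel is a simple box (averaging) kernel, so every weight is 1/kernel_size² and the weights sum to 1. The kernel size used here is a hypothetical choice.

```python
import numpy as np

def box_blur(image, kernel_size=9):
    """Average the pixels inside a kernel_size x kernel_size neighbourhood
    to determine the value of the centre pixel in the output image,
    then repeat the process for every pixel in the image."""
    pad = kernel_size // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    blurred = np.zeros(image.shape, dtype=np.float64)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            blurred[r, c] = padded[r:r + kernel_size, c:c + kernel_size].mean()
    return blurred
```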

Page 20: Image Processing Basics

Image Processing: Image-to-Image Transformations

Page 21: Image Processing Basics

Image Processing: Image-to-Image Transformations

Creating an “amplified” difference image

Page 22: Image Processing Basics

Image Processing: Image-to-Image Transformations

Creating the final edge enhanced output image
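Putting the steps together, a minimal unsharp-masking sketch (reusing the box_blur helper and NumPy import from the smoothing sketch above; the boost factor is a hypothetical choice):

```python
def unsharp_mask(image, kernel_size=9, boost=2.0):
    """Edge enhancement by unsharp masking:
    1. blur the image,
    2. form the difference (original - blurred), which holds the edge detail,
    3. amplify the difference and add it back to the original image."""
    blurred = box_blur(image, kernel_size)
    difference = image.astype(np.float64) - blurred
    enhanced = image + boost * difference
    return np.clip(enhanced, 0, image.max())
```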

Page 23: Image Processing Basics

Image Processing: Image-to-Image Transformations

Global Transformation (Spatial frequency “Fourier” decomposition):

Page 24: Image Processing Basics

Image Processing: Image-to-Image Transformations

Page 25: Image Processing Basics

Image Processing: Image-to-Information Transformations

Image In Information (Data) Out

• image statistics (histograms)

• image compression

• image analysis (image segmentation, feature extraction, pattern recognition)

• computer-aided detection and diagnosis (CAD)

Page 26: Image Processing Basics

Image Processing: Image-to-Information Transformations

Image Statistics (Histogram)

• the histogram is the fundamental tool for image analysis and image processing

• the histogram is created by examining each pixel in the digital image and counting the number of occurrences of each pixel value (or code value)
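A minimal sketch of histogram construction (assuming NumPy and a 12-bit image; np.bincount would give the same result in one call):

```python
import numpy as np

def histogram(image, bits=12):
    """Count how many times each code value occurs in the image."""
    counts = np.zeros(2 ** bits, dtype=np.int64)
    for value in image.ravel():       # examine every pixel
        counts[value] += 1            # tally its code value
    return counts
```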

Page 27: Image Processing Basics

Image Processing: Image-to-Information Transformations

Page 28: Image Processing Basics

Image Processing: Image-to-Information Transformations

Low Contrast Image Histograms

• low contrast images produce tall and narrow histograms

• the histogram covers a narrow range of pixel values

High Contrast Image Histograms

• high contrast images produce short and flat (wide) histograms

• the histogram covers a wide range of pixel values

NOTE: histograms do not record the location of pixels (both high contrast images shown above have the same histogram)

Page 29: Image Processing Basics

Image Processing: Image-to-Information Transformations

Page 30: Image Processing Basics

Image Processing: Image-to-Information Transformations

Image Compression

• medical images can contain huge amounts of data (CT image: 0.25 MB, CR chest image: 8 MB, Digital Mammo: 32 MB)

• image compression aims to reduce the total number of bits needed to represent the image without compromising image quality, which in turn:

• reduces storage requirements
• reduces the time required to transmit images
• uses existing network bandwidth more effectively

Image compression is more than:

• sampling at a lower rate or throwing away pixels

• quantizing each pixel more coarsely or reducing the precision of each pixel

Page 31: Image Processing Basics

Image Processing: Image-to-Information Transformations

Why can images be compressed?

Redundancy: relationships do exist between pixels in an image based on their location

• algorithms can be spatial (statistical), temporal or spectral (wavelet) in nature

Irrelevancy: pixels included in image that do not add to the diagnostic information

Page 32: Image Processing Basics

Image Processing: Image-to-Information Transformations

Two types of image compression are used:

Lossless (Reversible) Compression

• uses statistical redundancy only
• compression ratios range from 2:1 to 4:1
• the decompressed (reconstructed) image is numerically identical to the original (see the sketch below)

Lossy (Irreversible) Compression

• uses statistical redundancy and irrelevancy
• compression ratios range from 6:1 to 20:1 and more
• the decompressed image is degraded relative to the original
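To make the lossless case concrete, here is a minimal sketch (not from the slides) of run-length encoding, one simple way of exploiting statistical redundancy; decoding reproduces the original pixel values exactly:

```python
def rle_encode(values):
    """Replace runs of identical values with (value, run_length) pairs."""
    encoded = []
    run_value, run_length = values[0], 1
    for v in values[1:]:
        if v == run_value:
            run_length += 1
        else:
            encoded.append((run_value, run_length))
            run_value, run_length = v, 1
    encoded.append((run_value, run_length))
    return encoded

def rle_decode(encoded):
    """Expand (value, run_length) pairs back into the original sequence."""
    decoded = []
    for value, run_length in encoded:
        decoded.extend([value] * run_length)
    return decoded

row = [0, 0, 0, 7, 7, 7, 7, 7, 3]                 # one image row
assert rle_decode(rle_encode(row)) == row          # numerically identical
```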

Page 33: Image Processing Basics

Image Processing: Image-to-Information Transformations

Image Analysis (Segmentation)

Original image / Original image with segmentation data

Page 34: Image Processing Basics

Image Processing: Information-to-Image Transformations

Information (Data) In Image Out

• decompression of compressed image data

• reconstruction of image slices from CT or MRI raw data

• computer graphics, animations and virtual reality (synthetic objects)

Page 35: Image Processing Basics

Image Processing: Information-to-Image Transformations

3D Image Reconstruction

Page 36: Image Processing Basics

Image Processing: Information-to-Image Transformations

Image Synthesis

Page 37: Image Processing Basics

Image Output (Reconstruction): Basics

Why do we need to reconstruct the image?

• the digital image is still a 2D array of numbers (pixels values)

• if it is to be viewed by a human it must be converted back to an analog image on some display device and/or medium (e.g.: CRT monitor, hardcopy)

• so the digital image must be reconstructed for the output device

Page 38: Image Processing Basics

Image Output (Reconstruction): What is the problem?

Nuclear medicine image (96 x 128, 6 bit) to be printed on a laser printer film (4k x 5k, 12 bit)

The problem is:

• how do we match the gray scales (tonescale)?
• how do we match the image size?

Page 39: Image Processing Basics

Image Output (Reconstruction): What is the problem?

CR image (2k x 2.5k, 12 bit) to be displayed on a CRT monitor (1.2k x 1k, 8 bit)

Page 40: Image Processing Basics

Image Output (Reconstruction): Tonescale

Output system tonescale depends on:

• image processing applied (output device should not change any post processing that was done on the image prior to this step)

• calibration of output device (very important & can vary with time)

• dynamic range of output device

• viewing conditions

• observer

Page 41: Image Processing Basics

Image Output (Reconstruction): Tonescale

Output Calibration (needs to be performed frequently)

• every output device has a LUT that relates its output pixel values to the input pixel values that generated them

Laser Printer

CRT Monitor

Page 42: Image Processing Basics

Image Output (Reconstruction): Tonescale

Dynamic Range

• every output device has a different dynamic range that must be considered when selecting or calibrating the device LUT

Dynamic Range = (highest signal value the device can produce) / (lowest signal value the device can produce)

Dynamic Range = antilog(3.0) = 1000

therefore dynamic range of film is 1,000:1
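Written out as a worked equation (assuming the 3.0 on the slide refers to the film's usable optical-density range, which is not stated explicitly):

```latex
\text{Dynamic Range}
  = \frac{\text{highest signal value the device can produce}}
         {\text{lowest signal value the device can produce}}
  = \operatorname{antilog}(\Delta D) = 10^{3.0} = 1000
```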

Page 43: Image Processing Basics

Image Output (Reconstruction): Tonescale

Must use LUTs that compensate for differences in dynamic range:

• CRT monitors: non-linear

• Laser printers: linear or non-linear (to introduce additional contrast)

Page 44: Image Processing Basics

Image Output (Reconstruction): Tonescale

Viewing Conditions

Page 45: Image Processing Basics

Image Output (Reconstruction): Output Geometry

Image Scaling Techniques

• in order to display images properly on the output device, the image may have to be scaled using one of the following techniques:

• decimation (sub-sampling)
• interpolation

Page 46: Image Processing Basics

Image Output (Reconstruction): Decimation

• this technique is required when image matrix size is too big for output device

• method of decimation is determined by degree of reduction (may have image quality concerns)

Page 47: Image Processing Basics

Image Output (Reconstruction): Decimation

Methodology

Page 48: Image Processing Basics

Image Output (Reconstruction): Decimation

Imaging Concerns

• decimation can be dangerous

• high frequency signals can alias during sub-sampling and cause artifacts

• proper decimation requires that the digital image be smoothed (blurred) first to remove any signal frequencies that are higher than half of the new sampling frequency
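A minimal self-contained sketch of that approach (assuming NumPy; the averaging kernel width and reduction factor are hypothetical choices):

```python
import numpy as np

def decimate(image, factor=2):
    """Sub-sample an image by an integer factor, but smooth (blur) it first
    so that signal frequencies above half the new sampling frequency are
    suppressed and do not alias into the smaller output image."""
    k = 2 * factor - 1                            # width of averaging kernel
    pad = k // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    smoothed = np.zeros(image.shape, dtype=np.float64)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            smoothed[r, c] = padded[r:r + k, c:c + k].mean()
    return smoothed[::factor, ::factor]           # keep every factor-th pixel
```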

Page 49: Image Processing Basics

Image Output (Reconstruction): Decimation

Page 50: Image Processing Basics

Image Output (Reconstruction): Interpolation

Why do we need interpolation?

• the digital image is too small for the output device and we have to scale it up

• the problem is that when we scale the image up, we create new pixels that require new pixel values, chosen so that the new image appears continuous in space and in gray scale; note that output devices are analog devices (e.g.: laser printer, CRT monitor)

• three interpolation techniques are often used:
• nearest neighbor interpolation (pixel replication)
• linear (or bilinear) interpolation
• cubic (spline) interpolation (nonlinear interpolation)

Page 51: Image Processing Basics

Image Output (Reconstruction): Interpolation

What are the effects of interpolation?

After blurring your eyes

NOTE the human eye-brain system is an efficient interpolator

Page 52: Image Processing Basics

Image Output (Reconstruction): Interpolation

What does an interpolator do?

• creates enough pixels in the new digital image such that the matrix sent to the output device produces an image of the right size

• generates new pixels with gray values in such a way that when the display aperture (electron gun for CRTs, or laser spot for laser cameras) marks the output medium, it creates the impression that the image is continuous in space and continuous in values

NOTE excessive interpolation can degrade image quality

Page 53: Image Processing Basics

Image Output (Reconstruction): Interpolation

• the interpolator uses the known pixel values to calculate or produce new pixels anywhere within the image

• interpolation adds no new information or detail to the image

Page 54: Image Processing Basics

Image Output (Reconstruction): Nearest Neighbor Interpolation

Methodology

Page 55: Image Processing Basics

Image Output (Reconstruction): Bi-linear Interpolation

Methodology

Page 56: Image Processing Basics

Image Output (Reconstruction): Cubic Interpolation

Methodology

Page 57: Image Processing Basics

Image Output (Reconstruction): Interpolation

• all reconstructions of analog signals are approximations

• which interpolator to use depends on the application needs:

Nearest neighbor: maintains/inserts hard edges around pixels (good for text and some images like nuclear medicine)

Linear: smoothing effect, sometimes excessive (good to suppress high frequency structures or noise), very easy to implement

Cubic: can produce very accurate reconstructions but more complex and costly to implement
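As an illustration (not from the slides), a minimal sketch of the first two interpolators, assuming NumPy; nearest neighbour simply replicates the closest known pixel, while bilinear blends the four surrounding known pixels:

```python
import numpy as np

def scale_up(image, factor, method="bilinear"):
    """Enlarge an image by an integer factor using nearest-neighbour
    (pixel replication) or bilinear interpolation of the known pixels."""
    rows, cols = image.shape
    # Position of each new pixel expressed in the original pixel grid
    r = np.arange(rows * factor) / factor
    c = np.arange(cols * factor) / factor
    if method == "nearest":
        ri = np.clip(np.round(r).astype(int), 0, rows - 1)
        ci = np.clip(np.round(c).astype(int), 0, cols - 1)
        return image[np.ix_(ri, ci)]
    # Bilinear: weighted average of the four surrounding known pixels
    r0 = np.clip(np.floor(r).astype(int), 0, rows - 2)
    c0 = np.clip(np.floor(c).astype(int), 0, cols - 2)
    fr = np.clip(r - r0, 0.0, 1.0)[:, None]       # fractional row offset
    fc = np.clip(c - c0, 0.0, 1.0)[None, :]       # fractional column offset
    img = image.astype(np.float64)
    top = img[np.ix_(r0, c0)] * (1 - fc) + img[np.ix_(r0, c0 + 1)] * fc
    bottom = img[np.ix_(r0 + 1, c0)] * (1 - fc) + img[np.ix_(r0 + 1, c0 + 1)] * fc
    return top * (1 - fr) + bottom * fr
```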

Page 58: Image Processing Basics

Image Output (Reconstruction): Display Aperture

Output device aperture size does affect image quality and perceived image resolution

Page 59: Image Processing Basics

Image Output (Reconstruction): Addressability/Resolution

Just because the output device has 2k x 2.5k pixels does not mean you can see all of them

Addressability (matrix size)

• is the data capacity of the output device, characterized by the number of points that are addressable by the user (a 4k x 5k laser printer has about 4000 x 5000 = 20,000,000 addressable points (pixels) over its usable area)

Resolution

• the ability to see or measure details in the output device

• more important than addressability since it determines the usefulness of a given output device

• is usually lower than addressability (due to effects of display aperture)