Image Processing Basics

Saudi Board of Radiology: Physics Refresher Course

Kostas Chantziantoniou, MSc, DABR
Head, Imaging Physics Section

King Faisal Specialist Hospital & Research Centre, Biomedical Physics Department

Riyadh, Kingdom of Saudi Arabia

Image Processing Basics

Image Processing: Basics

There are many factors that determine the diagnostic usability of a digital image:

• exposure techniques
• detector quality (technology dependent)
• scatter
• viewing conditions
• quality of readers
• number of readers
• image processing

Image Processing: Basics

Why do we need image processing?

• since the digital image is “invisible”, it must be prepared for viewing on one or more output devices (laser printer, monitor, etc.)

• the digital image can be optimized for the application by enhancing or altering the appearance of structures within it (based on: body part, diagnostic task, viewing preferences, etc)

• it might be possible to analyze the image in the computer and provide cues to the radiologists to help detect important/suspicious structures (e.g.: Computer-Aided Diagnosis, CAD)

Image Processing: Transformations

There are three types of image processing (transformation algorithms) used:

• image-to-image transformations
• image-to-information transformations
• information-to-image transformations

Image Processing: Image-to-Image Transformations

Image In → Image Out

• enhancement (make image more useful, pleasing)

• restoration (compensate for known image degradations to produce an image that is “closer” to the (aerial) image that came out of the patient, e.g.: deblurring, grid line removal)

• geometry (scaling/sizing/zooming, morphing one object into another, distorting or altering the spatial relationship between pixels)

Image Processing: Image-to-Image Transformations

There are three types of image-to-image transformations:

• point transformation
• local transformation
• global transformation

Image Processing: Image-to-Image Transformations

Point Transformation (use Look-up Tables to adjust Tonescale or image contrast)

• the shape of the LUT depends on the desired “look” of the output image and the structure of the histogram
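
A minimal sketch of such a point transformation, assuming Python/NumPy: a linear window/level LUT maps a 12-bit input image to an 8-bit display range. The bit depths and the window width/level values are illustrative assumptions, not values from the course.

```python
# A minimal sketch of a point transformation applied through a look-up table (LUT):
# a linear window/level mapping from a 12-bit input image to an 8-bit display.
# The window width/level values below are illustrative assumptions.
import numpy as np

def windowing_lut(width, level, n_in=4096, n_out=256):
    """Build a LUT that maps every possible input code value to an output value."""
    cv = np.arange(n_in)
    lo, hi = level - width / 2.0, level + width / 2.0
    lut = np.clip((cv - lo) / (hi - lo), 0.0, 1.0) * (n_out - 1)
    return lut.astype(np.uint8)

# Applying the point transformation is a single table look-up per pixel.
image = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)  # stand-in image
displayed = windowing_lut(width=1500, level=2000)[image]
```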

Image Processing: Image-to-Image Transformations

Image contrast window

Image Processing: Image-to-Image Transformations

Image brightness window

Image Processing: Image-to-Image Transformations

Non-linear LUTs can be used as well (but more complex to implement)

Image Processing: Image-to-Image Transformations

What LUT shape should be used?

Image Processing: Image-to-Image Transformations

Local Transformation (Edge Enhancement, Zooming)

Image Processing: Image-to-Image Transformations

Edge Enhancement (Unsharp Masking Technique)

Image Processing: Image-to-Image Transformations

Creating a blurred image

The pixels within the kernel are averaged to determine the value of the center pixel in the output image

Repeat the process for all pixels in the image

Image Processing: Image-to-Image Transformations

Kernel size will have a large effect on the level of smoothing that is performed

Sum of all pixel weight factors in kernel must equal 1
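
A minimal sketch of this kernel averaging, assuming Python/NumPy; the kernel size and the edge-padding choice are illustrative assumptions.

```python
# A minimal sketch of kernel smoothing: each output pixel is the average of the
# pixels under a kernel_size x kernel_size neighbourhood, and all kernel weight
# factors sum to 1. Kernel size and edge padding are illustrative assumptions.
import numpy as np

def box_blur(image, kernel_size=9):
    """Smooth an image by averaging over a square kernel centred on each pixel."""
    k = kernel_size // 2
    padded = np.pad(image.astype(float), k, mode="edge")
    blurred = np.zeros(image.shape, dtype=float)
    weight = 1.0 / (kernel_size * kernel_size)      # weights sum to 1
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            blurred += weight * padded[k + dy : k + dy + image.shape[0],
                                       k + dx : k + dx + image.shape[1]]
    return blurred
```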

Image Processing: Image-to-Image Transformations

Creating an “amplified” difference image

Image Processing: Image-to-Image Transformations

Creating the final edge enhanced output image
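
A minimal sketch of the unsharp-masking steps above, assuming Python/NumPy and a smoothed copy of the image (for example, from the box-blur sketch earlier); the boost factor and the 12-bit output range are illustrative assumptions.

```python
# A minimal sketch of unsharp masking: original + boost * (original - blurred).
# The boost factor and 12-bit clipping range are illustrative assumptions.
import numpy as np

def unsharp_mask(image, blurred, boost=2.0):
    """Edge-enhance an image given a smoothed (blurred) copy of it."""
    difference = image.astype(float) - blurred      # difference image holds the edges
    amplified = boost * difference                  # "amplified" difference image
    enhanced = image.astype(float) + amplified      # final edge-enhanced output image
    return np.clip(enhanced, 0, 4095)
```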

Image Processing: Image-to-Image Transformations

Global Transformation (Spatial frequency “Fourier” decomposition):
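
A minimal sketch of a global transformation, assuming Python/NumPy: decompose the image into spatial frequencies with a Fourier transform, weight them, and transform back. The low-pass cutoff used here is an illustrative assumption.

```python
# A minimal sketch of a global (Fourier) transformation: decompose the image into
# spatial frequencies, suppress those above a cutoff, then transform back.
# The cutoff (in cycles per pixel, Nyquist = 0.5) is an illustrative assumption.
import numpy as np

def fourier_low_pass(image, cutoff=0.1):
    """Remove spatial frequencies above `cutoff` and return the filtered image."""
    F = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]    # vertical frequencies
    fx = np.fft.fftfreq(image.shape[1])[None, :]    # horizontal frequencies
    F[np.sqrt(fx ** 2 + fy ** 2) > cutoff] = 0      # zero the high-frequency components
    return np.real(np.fft.ifft2(F))
```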

Image Processing: Image-to-Information Transformations

Image In → Information (Data) Out

• image statistics (histograms)

• image compression

• image analysis (image segmentation, feature extraction, pattern recognition)

• computer-aided detection and diagnosis (CAD)

Image Processing: Image-to-Information Transformations

Image Statistics (Histogram)

• the histogram is the fundamental tool for image analysis and image processing

• the histogram is created by examining each pixel in the digital image and counting the number of occurrences of each pixel value (or Code Value)
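
A minimal sketch of building such a histogram, assuming Python/NumPy and a 12-bit image (an illustrative assumption); the two stand-in images mimic the low-contrast and high-contrast cases discussed below.

```python
# A minimal sketch of building a histogram by counting the occurrences of each code value.
import numpy as np

def image_histogram(image, n_values=4096):
    """counts[v] = number of pixels in the image whose code value equals v."""
    return np.bincount(image.ravel(), minlength=n_values)

# A low-contrast image gives a tall, narrow histogram; a high-contrast image a wide one.
low_contrast  = np.random.randint(1800, 2200, size=(256, 256))
high_contrast = np.random.randint(0, 4096, size=(256, 256))
```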

Image Processing: Image-to-Information Transformations

Low Contrast Image Histograms

• low contrast images produce tall and narrow histograms

• the histogram covers a narrow range of pixel values

High Contrast Image Histograms

• high contrast images produce short and flat (wide) histograms

• the histogram covers a wide range of pixel values

NOTE: histograms carry no information about the location of pixels (both high contrast images shown above have the same histogram)

Image Processing: Image-to-Information Transformations

Image Compression

• medical images can contain huge amounts of data (CT image: 0.25 MB, CR chest image: 8 MB, Digital Mammo: 32 MB)

• image compression aims to reduce the total number of bits needed to represent the image without compromising image quality, which in turn:

• reduces storage requirements
• reduces the time required to transmit images
• uses existing network bandwidth more effectively

Image compression is more than:

• sampling at a lower rate or throwing away pixels

• quantizing each pixel more coarsely or reducing the precision of each pixel

Image Processing: Image-to-Information Transformations

Why can images be compressed?

Redundancy: relationships do exist between pixels in an image based on their location

• algorithms can be spatial (statistical), temporal or spectral (wavelet) in nature

Irrelevancy: pixels included in the image that do not add to the diagnostic information

Image Processing: Image-to-Information Transformations

Two types of image compression are used:

Lossless (Reversible) Compression

• uses statistical redundancy only
• compression ratios range from 2:1 to 4:1
• the decompressed (reconstructed) image is numerically identical to the original

Lossy (Irreversible) Compression

• uses statistical redundancy and irrelevancy
• compression ratios range from 6:1 to 20:1 and more
• decompressed image is degraded relative to original
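
A minimal sketch of lossless (reversible) compression, assuming Python with the standard zlib module; the stand-in image is an illustrative assumption, and a real CR image, having far more spatial redundancy than random noise, would compress better than this example.

```python
# A minimal sketch of lossless compression: compress the raw pixel bytes, then
# verify the round trip reproduces the original data exactly.
import zlib
import numpy as np

image = np.random.randint(0, 4096, size=(2048, 2500), dtype=np.uint16)  # stand-in "CR" image
raw = image.tobytes()
compressed = zlib.compress(raw, level=9)

restored = np.frombuffer(zlib.decompress(compressed), dtype=np.uint16).reshape(image.shape)
assert np.array_equal(restored, image)     # reversible: numerically identical to the original
print(f"compression ratio {len(raw) / len(compressed):.1f}:1")
```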

Image Processing: Image-to-Information Transformations

Image Analysis (Segmentation)

Original image (left); original image with segmentation data (right)

Image Processing: Information-to-Image Transformations

Information (Data) In → Image Out

• decompression of compressed image data

• reconstruction of image slices from CT or MRI raw data

• computer graphics, animations and virtual reality (synthetic objects)

Image Processing: Information-to-Image Transformations

3D Image Reconstruction

Image Processing: Information-to-Image Transformations

Image Synthesis

Image Output (Reconstruction): Basics

Why do we need to reconstruct the image?

• the digital image is still a 2D array of numbers (pixel values)

• if it is to be viewed by a human it must be converted back to an analog image on some display device and/or medium (e.g.: CRT monitor, hardcopy)

• so the digital image must be reconstructed for the output device

Image Output (Reconstruction): What is the problem?

Nuclear medicine image (96 x 128, 6 bit) to be printed on a laser printer film (4k x 5k, 12 bit)

The problem is:

• how do we match the gray scales (tonescale)?
• how do we match the image size?

Image Output (Reconstruction): What is the problem?

CR image (2k x 2.5k, 12 bit) to be displayed on a CRT monitor (1.2k x 1k, 8 bit)

Image Output (Reconstruction): Tonescale

Output system tonescale depends on:

• image processing applied (output device should not change any post processing that was done on the image prior to this step)

• calibration of output device (very important & can vary with time)

• dynamic range of output device

• viewing conditions

• observer

Image Output (Reconstruction): Tonescale

Output Calibration (needs to be performed frequently)

• every output device has a LUT that relates its output pixel values to the input pixel values that generated them

(Calibration response curves shown for a laser printer and a CRT monitor)
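
A minimal sketch of building such a calibration LUT, assuming Python/NumPy: measure the output (e.g. luminance or optical density) the device produces for each input code value, then invert that measured response so the displayed tonescale matches the intended one. The "measured" curve below is a made-up placeholder, not real device data.

```python
# A minimal sketch of output-device calibration via an inverse LUT.
import numpy as np

code_values = np.arange(256)
measured = 100.0 * (code_values / 255.0) ** 2.2              # placeholder measured response
desired = np.linspace(measured.min(), measured.max(), 256)   # intended (here: linear) tonescale

# For each desired output level, look up the input code value that produces it.
correction_lut = np.interp(desired, measured, code_values).round().astype(np.uint8)
```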

Image Output (Reconstruction): Tonescale

Dynamic Range

• every output device has a different dynamic range that must be considered when selecting or calibrating the device LUT

Dynamic Range = (highest signal value the device can produce) / (lowest signal value the device can produce)

For film, the optical density range is about 3.0, so:

Dynamic Range = antilog(3.0) = 1000

therefore the dynamic range of film is 1,000:1

Image Output (Reconstruction): Tonescale

Must use LUTs that compensate for differences in dynamic range:

• CRT monitors: non-linear

• Laser printers: linear or non-linear (to introduce additional contrast)

Image Output (Reconstruction): Tonescale

Viewing Conditions

Image Output (Reconstruction): Output Geometry

Image Scaling Techniques

• in order to display images properly on the output device, the image may have to be scaled by the use of one of the following techniques:

• decimation (sub-sampling)
• interpolation

Image Output (Reconstruction): Decimation

• this technique is required when image matrix size is too big for output device

• method of decimation is determined by degree of reduction (may have image quality concerns)

Image Output (Reconstruction): Decimation

Methodology

Image Output (Reconstruction): Decimation

Imaging Concerns

• decimation can be dangerous

• high frequency signals can be removed during sub-sampling and cause artifacts

• proper decimation requires that the digital image be smoothed (blurred) first to remove any signal frequencies that are higher than half of the new sampling frequency
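
A minimal sketch of decimation with the required pre-smoothing, assuming Python/NumPy: averaging factor × factor blocks is one simple way to blur (removing frequencies above half the new sampling frequency) and sub-sample in a single step. The reduction factor is an illustrative assumption.

```python
# A minimal sketch of decimation by block averaging: smooth and sub-sample together.
import numpy as np

def decimate(image, factor=2):
    """Reduce the matrix size by `factor` in each direction via block averaging."""
    h = image.shape[0] - image.shape[0] % factor           # crop to a multiple of factor
    w = image.shape[1] - image.shape[1] % factor
    blocks = image[:h, :w].astype(float).reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))                        # one output pixel per block
```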

Image Output (Reconstruction): Interpolation

Why do we need interpolation?

• the digital image is too small for the output device, so we have to scale it up

• the problem is that when we scale the image up, we create new pixels that require new pixel values, chosen so that the new image appears continuous in space and in gray scale; note that output devices are analog devices (e.g.: laser printer, CRT monitor)

• three interpolation techniques are often used:
• nearest neighbor interpolation (pixel replication)
• linear (or bilinear) interpolation
• cubic (spline) interpolation (nonlinear interpolation)

Image Output (Reconstruction): Interpolation

What are the effects of interpolation?

After blurring your eyes

NOTE the human eye-brain system is an efficient interpolator

Image Output (Reconstruction): Interpolation

What does an interpolator do?

• creates enough pixels in the new digital image such that the matrix sent to the output device produces an image of the right size

• generates new pixels with gray values in such a way that when the display aperture (electron gun for CRTs, laser spot for laser cameras) marks the output medium, it creates the impression that the image is continuous in space and continuous in values

NOTE excessive interpolation can degrade image quality

Image Output (Reconstruction): Interpolation

• the interpolator uses the known pixel values to calculate or produce new pixels anywhere within the image

• interpolation adds no new information or detail to the image

Image Output (Reconstruction): Nearest Neighbor Interpolation

Methodology

Image Output (Reconstruction): Bi-linear Interpolation

Methodology

Image Output (Reconstruction): Cubic Interpolation

Methodology

Image Output (Reconstruction): Interpolation

• all reconstructions of analog signals are approximations

• which interpolator to use depends on the application needs:

Nearest neighbor: maintains/inserts hard edges around pixels (good for text and some images like nuclear medicine)

Linear: smoothing effect, sometimes excessive (good to suppress high frequency structures or noise), very easy to implement

Cubic: can produce very accurate reconstructions but more complex and costly to implement
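
A minimal sketch of the first two interpolators compared above (nearest neighbor and bilinear), assuming Python/NumPy; the target matrix size in the usage line is an illustrative assumption.

```python
# A minimal sketch of nearest-neighbour and bilinear interpolation for scaling up.
import numpy as np

def scale(image, new_h, new_w, method="bilinear"):
    """Resample `image` to (new_h, new_w)."""
    h, w = image.shape
    y = np.linspace(0, h - 1, new_h)
    x = np.linspace(0, w - 1, new_w)
    if method == "nearest":
        # copy the closest known pixel: hard edges, pixel replication
        return image[np.round(y).astype(int)[:, None], np.round(x).astype(int)[None, :]]
    # bilinear: weight the four surrounding known pixels by distance (smoothing effect)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (y - y0)[:, None], (x - x0)[None, :]
    img = image.astype(float)
    top = (1 - wx) * img[y0[:, None], x0[None, :]] + wx * img[y0[:, None], x1[None, :]]
    bottom = (1 - wx) * img[y1[:, None], x0[None, :]] + wx * img[y1[:, None], x1[None, :]]
    return (1 - wy) * top + wy * bottom

# e.g. scale a 96 x 128 nuclear medicine image up for a larger display matrix
upscaled = scale(np.random.randint(0, 64, size=(96, 128)), 384, 512)
```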

Image Output (Reconstruction): Display Aperture

Output device aperture size does affect image quality and perceived image resolution

Image Output (Reconstruction): Addressability/Resolution

Just because an output device has 2k x 2.5k pixels does not mean you can see all of them

Addressability (matrix size)

• is the data capacity of the output device, characterized by the number of values that are addressable by the user (a 4k x 5k laser printer has about 4000 x 5000 = 20,000,000 addressable points (pixels) over its usable area)

Resolution

• the ability to see or measure details in the output device

• more important than addressability since it determines the usefulness of a given output device

• is usually lower than addressability (due to effects of display aperture)