Noise Recognition in Digital Images


Gurunanak Institute of Technology (GNIT), M.Tech (CSE)

Presentation on

Different Types of Noise Recognition in Digital Image Processing

Submitted by: Md. Reyad Hossain

Submitted to: Mr. Moloy Dhar


INDEX
1. What is an Image
2. What is a Digital Image
3. What is Digital Image Processing
4. Types of Images
5. Formats of Images
6. Image Noise
7. Types of Noise in Images
8. Filtering
9. Conclusion
10. References


What is an Image: An image (from Latin imago) is an artefact that depicts or records visual perception, for example a two-dimensional picture that has a similar appearance to some subject, usually a physical object or a person, thus providing a depiction of it.

Images may be two-dimensional, such as a photograph or a screen display, or three-dimensional, such as a statue or a hologram. They may be captured by optical devices such as cameras, mirrors, lenses, telescopes and microscopes, or by natural objects and phenomena such as the human eye or the surface of water.

The word image is also used in the broader sense of any two-dimensional figure such as a map, a graph, a pie chart, or a painting. In this wider sense, images can also be rendered manually, such as by drawing, the art of painting, carving, rendered automatically by printing or computer graphics technology, or developed by a combination of methods, especially in a pseudo-photograph.


The word ‘Spatial Domain’ means that we have to work in the given space, in this case, the image. In other words, the term spatial domain implies working with the pixel values or working directly with the available raw data.

[Figure: image coordinate system — the origin (0,0) is at the top-left corner and (255,255) at the bottom-right corner; g(x, y) denotes the gray level at coordinates (x, y).]

Let g(x, y) be the original image, where g is the gray level value and (x, y) are the image coordinates. For an 8-bit image, g can take values from 0 to 255, where 0 represents black, 255 represents white, and all the intermediate values represent shades of gray. In an image of size 256×256, x and y can take values from (0, 0) to (255, 255) as shown in the figure.
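As a small illustration (a sketch using NumPy, with a synthetic array standing in for a real 8-bit image), the gray level g(x, y) can be read or set by indexing the image array directly:

```python
import numpy as np

# A hypothetical 256 x 256, 8-bit grayscale image (values 0-255).
# In practice the array would come from an image reader such as PIL or OpenCV.
g = np.zeros((256, 256), dtype=np.uint8)

g[100, 150] = 200          # set the gray level at one coordinate (NumPy indexes [row, column])

print(g[0, 0])             # gray level at the origin (0, 0)
print(g[255, 255])         # gray level at the opposite corner (255, 255)
print(g.min(), g.max())    # uint8 values always stay within 0-255
```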


What is a Digital Image: Digital images are electronic snapshots taken of a scene or scanned from documents such as photographs, manuscripts, printed texts, and artwork. The digital image is sampled and mapped as a grid of dots or picture elements (pixels). Each pixel is assigned a tonal value (black, white, a shade of gray or a colour), which is represented in binary code (zeros and ones). The binary digits ("bits") for each pixel are stored in sequence by a computer and often reduced to a compact mathematical representation (compressed). The bits are then interpreted and read by the computer to produce an analog version for display or printing.

Pixel Values: As shown in a bitonal image, each pixel is assigned a tonal value, in this example 0 for black and 1 for white.


What is Digital Image Processing: Digital image processing (DIP) refers to processing digital images by means of a digital computer. Digital image processing covers a wide and varied field of applications. It encompasses processes whose inputs and outputs are images and, in addition, processes that extract attributes from images, including the recognition of individual objects.

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modelled in the form of multidimensional systems.


Many of the techniques of digital image processing, or digital picture processing as it often was called, were developed in the 1960s at the Jet Propulsion Laboratory, Massachusetts Institute of Technology, Bell Laboratories, University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement. The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. Images then could be processed in real time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computer-intensive operations.


Types of Images: It was stated earlier that images are two-dimensional functions. Images are classified as follows.

1. Monochrome Images: Monochrome images are also called binary images. Here, each pixel is stored as a single bit (0 or 1), where 0 represents black and 1 represents white. It is a black-and-white image in the strictest sense. These images are also called bit-mapped images. In such images, we have only black and white pixels and no other shades of gray.

2. Grayscale Image: Here, each pixel is usually stored as a byte (8 bits). Due to this, each pixel can have values ranging from 0 (black) to 255 (white). Grayscale images, as the name suggests, have black, white and various shades of gray present in the image.


3. Colour Image (24-bit): Colour images are based on the fact that a variety of colours can be generated by mixing the three primary colours, viz. red, green, and blue, in proper proportions. In colour images, each pixel is composed of RGB values, and each of these colours requires 8 bits (one byte) for its representation. Hence, each pixel is represented by 24 bits [R (8 bits), G (8 bits), B (8 bits)].

A 24-bit colour image supports 16,777,216 different combinations of colours. Colour images can be easily converted to grayscale images using the equation

X = 0.30 R + 0.59 G + 0.11 B

An easier formula that achieves similar results is

X = (R + G + B) / 3
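Below is a minimal sketch of both conversions, assuming the colour image is held as a NumPy array of shape (height, width, 3) with channels in R, G, B order:

```python
import numpy as np

def to_gray_weighted(rgb):
    """Weighted conversion: X = 0.30 R + 0.59 G + 0.11 B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.30 * r + 0.59 * g + 0.11 * b).astype(np.uint8)

def to_gray_average(rgb):
    """Simple average: X = (R + G + B) / 3."""
    return rgb.astype(np.float64).mean(axis=2).astype(np.uint8)

# Example: a random 4 x 4 colour image.
rgb = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
print(to_gray_weighted(rgb))
print(to_gray_average(rgb))
```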


4. Half-toning: We have all read newspapers at some point of time. The images in them look like gray-level images, but if you look closely, all of them are actually generated using only black ink. Even the images in most books are generated using black on a white background. In spite of this, we get an illusion of seeing gray levels. The technique used to achieve an illusion of gray levels from only black and white levels is called half-toning.


Formats of Images: Image file formats are standardized means of organizing and storing digital images. Image files are composed of digital data in one of these formats and can be rasterized for use on a computer display or printer. An image file format may store data in uncompressed, compressed, or vector form. Once rasterized, an image becomes a grid of pixels, each of which has a number of bits to designate its colour equal to the colour depth of the device displaying it.

1. JPEG/JFIF: Joint Photographic Experts Group / JPEG File Interchange Format.
2. JPEG 2000: A newer version of JPEG.
3. EXIF: Exchangeable Image File Format.
4. TIFF: Tagged Image File Format.
5. RAW: Raw Image Format.
6. GIF: Graphics Interchange Format.
7. BMP: Bitmap File Format.
8. PNG: Portable Network Graphics Format.
9. PPM: Portable Pixmap Format.
10. PGM: Portable Graymap Format.
11. PBM: Portable Bitmap Format.


Image Noise: The principal sources of noise in a digital image arise during acquisition and during transmission. No matter how much care one takes, some amount of noise always creeps in. Noise is classified based on the shape of its probability density function (PDF).

Image noise is random (not present in the object imaged) variation of brightness or colour information in images, and is usually an aspect of electronic noise. It can be produced by the sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable by-product of image capture that adds spurious and extraneous information.

The original meaning of "noise" was, and remains, "unwanted signal": unwanted electrical fluctuations in signals received by AM radios caused audible acoustic noise ("static"). By analogy, unwanted electrical fluctuations themselves came to be known as "noise". Image noise is, of course, inaudible.

The magnitude of image noise can range from almost imperceptible specks on a digital photograph taken in good light to optical and radio-astronomical images that are almost entirely noise, from which a small amount of information can be derived by sophisticated processing (a noise level that would be totally unacceptable in a photograph, since it would be impossible to determine even what the subject was).


[Figure: image degradation/restoration model — the source image f(x, y) passes through a degradation function h(x, y), additive noise n(x, y) is added to produce the degraded image g(x, y), and a restoration filter is then applied to recover an estimate of f(x, y).]

Image degradation is said to occur when an image undergoes loss of stored information, either due to digitization or to conversion (i.e. algorithmic operations), decreasing its visual quality.

The initial image (the source, f(x, y)) undergoes degradation due to various operations, conversions and losses. This introduces noise. The noisy image is then restored via restoration filters to make it visually acceptable to the user.

Degraded image = Degradation function * Source + Noise
g(x, y) = h(x, y) * f(x, y) + n(x, y)
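A rough sketch of this degradation model, assuming a simple 3×3 averaging blur as the degradation function h and additive Gaussian noise as n (these specific choices are illustrative, not prescribed by the model):

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)

# Source image f(x, y): a synthetic 256 x 256 stand-in for a real image.
f = rng.integers(0, 256, size=(256, 256)).astype(np.float64)

h_f = uniform_filter(f, size=3)            # degradation h * f: 3 x 3 averaging blur
n = rng.normal(0.0, 10.0, size=f.shape)    # additive noise n(x, y), sigma = 10 (arbitrary)

g = np.clip(h_f + n, 0, 255)               # degraded image g(x, y) = h*f + n
```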


Types of Noise in Images: As described above, image noise is a random variation of brightness or colour information introduced by the sensor and circuitry of a scanner or digital camera, by film grain, or by the unavoidable shot noise of an ideal photon detector. The common noise models are listed below.

1. Gaussian noise
2. Salt-and-pepper (impulse) noise
3. Poisson noise
4. Erlang (gamma) noise
5. Exponential noise
6. Uniform noise


1. Gaussian Noise: The principal sources of Gaussian noise in digital images arise during acquisition (e.g. sensor noise caused by poor illumination and/or high temperature) and/or transmission (e.g. electronic circuit noise). A typical model of image noise is Gaussian, additive, independent at each pixel, and independent of the signal intensity, caused primarily by Johnson–Nyquist noise (thermal noise), including that which comes from the reset noise of capacitors ("kTC noise").

Amplifier noise is a major part of the "read noise" of an image sensor, that is, of the constant noise level in dark areas of the image. In colour cameras where more amplification is used in the blue colour channel than in the green or red channel, there can be more noise in the blue channel. At higher exposures, however, image sensor noise is dominated by shot noise, which is not Gaussian and not independent of the signal intensity.


The probability density function (PDF) of Gaussian noise is given by the expression

p(z) = (1 / (σ √(2π))) · e^(−(z − μ)² / (2σ²))

where z is the gray level, μ is the mean (average) value of z, σ is the standard deviation, and σ² is the variance.

[Figure: bell-shaped Gaussian PDF centred at μ; the curve falls to about 0.607 of its peak value 1/(σ √(2π)) at z = μ ± σ.]

05/02/2023 17MD:Reyad Hossain (GNIT)

If we plot this function, we notice that approximately 70% of its values lie in the range [(μ − σ), (μ + σ)] and approximately 95% of its values lie in the range [(μ − 2σ), (μ + 2σ)]. Gaussian noise has its maximum value at μ and then starts falling off. Let us consider the image shown in the figure.

[Figure: (a) an image whose gray levels lie between a and b, (b) histogram of the image (number of pixels vs. gray level), (c) the histogram after Gaussian noise occurs — the histogram spreads out around the original gray levels.]

Note: Gaussian noise occurs due to circuit noise, sensor noise, poor illumination and high temperature.
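A minimal sketch of adding zero-mean Gaussian noise to a grayscale image (the test array, mean and σ below are arbitrary choices for illustration):

```python
import numpy as np

def add_gaussian_noise(image, mean=0.0, sigma=20.0, seed=None):
    """Add independent Gaussian noise N(mean, sigma^2) to every pixel."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(mean, sigma, size=image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

img = np.full((128, 128), 128, dtype=np.uint8)     # flat mid-gray test image
noisy = add_gaussian_noise(img, sigma=20.0, seed=0)
# The histogram of `noisy` spreads into a bell shape centred on 128.
```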


2. Salt-and-Pepper Noise: Fat-tail-distributed or "impulsive" noise is sometimes called salt-and-pepper noise or spike noise. An image containing salt-and-pepper noise will have dark pixels in bright regions and bright pixels in dark regions. This type of noise can be caused by analog-to-digital converter errors, bit errors in transmission, etc. It can be mostly eliminated by using dark-frame subtraction, median filtering, and interpolating around dark/bright pixels. Dead pixels in an LCD monitor produce a similar, but non-random, display.

Salt-and-pepper noise is also called shot noise, impulse noise or spike noise. It is usually caused by faulty memory locations, malfunctioning pixel elements in the camera sensor, or timing errors in the digitization process. In salt-and-pepper noise there are only two possible values, a and b, and the probability of each is typically less than 0.2; if the probabilities are larger than this, the noise will swamp the image. For an 8-bit image, the typical value is 255 for salt noise and 0 for pepper noise. Reasons for salt-and-pepper noise: a. memory cell failure; b. malfunctioning of the camera's sensor cells; c. synchronization errors in image digitizing or transmission.


[Figure: image corrupted by salt-and-pepper noise.]

The PDF of salt-and-pepper (bipolar impulse) noise is:

p(z) = Pa   for z = a
p(z) = Pb   for z = b
p(z) = 0    elsewhere

If either Pa or Pb is zero, the noise is called unipolar noise. The PDF of salt-and-pepper noise is shown in the figure below.


[Figure: PDF of salt-and-pepper noise — two impulses of height Pa at gray level a and Pb at gray level b.]

Generally, a and b are the black and white gray levels respectively. Hence, for an 8-bit image, a = 0 and b = 255, which is why the noise is called salt (white) and pepper (black). Sometimes it is also called speckle noise.

Let us now take some images to understand this properly.


Let us take the same image as the one used for the Gaussian example.

[Figure: the original image with gray levels between a and b, and the same image after salt-and-pepper noise creeps in — isolated white and black pixels scattered across the image.]
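A small sketch of corrupting an image with salt-and-pepper noise; the probabilities Pa and Pb (here p_salt and p_pepper) are arbitrary illustrative values:

```python
import numpy as np

def add_salt_and_pepper(image, p_salt=0.02, p_pepper=0.02, seed=None):
    """Set a random fraction of pixels to 255 (salt) or 0 (pepper)."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    u = rng.random(image.shape)
    noisy[u < p_pepper] = 0               # pepper: black pixels
    noisy[u > 1.0 - p_salt] = 255         # salt: white pixels
    return noisy

img = np.full((128, 128), 128, dtype=np.uint8)
noisy = add_salt_and_pepper(img, seed=0)
```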


3. Poisson Noise: Photon noise, also known as Poisson noise, is a basic form of uncertainty associated with the measurement of light, inherent to the quantized nature of light and the independence of photon detections. Its expected magnitude is signal-dependent, and it constitutes the dominant source of image noise except in low-light conditions.

Image sensors measure scene irradiance by counting the number of discrete photons incident on the sensor over a given time interval. In digital sensors, the photoelectric effect is used to convert photons into electrons, whereas film-based sensors rely on photo-sensitive chemical reactions. In both cases, the independence of random individual photon arrivals leads to photon noise, a signal-dependent form of uncertainty that is a property of the underlying signal itself.


The dominant noise in the darker parts of an image from an image sensor is typically that caused by statistical quantum fluctuations, that is, variation in the number of photons sensed at a given exposure level. This noise is known as photon shot noise. Shot noise has a root-mean-square value proportional to the square root of the image intensity, and the noise at different pixels is independent. Shot noise follows a Poisson distribution, which, except at very low intensity levels, approximates a Gaussian distribution.

In addition to photon shot noise, there can be additional shot noise from the dark leakage current in the image sensor; this noise is sometimes known as "dark shot noise" or "dark-current shot noise". Dark current is greatest at "hot pixels" within the image sensor. The variable dark charge of normal and hot pixels can be subtracted off (using "dark frame subtraction"), leaving only the shot noise, or random component, of the leakage. If dark-frame subtraction is not done, or if the exposure time is long enough that the hot-pixel charge exceeds the linear charge capacity, the noise will be more than just shot noise, and hot pixels appear as salt-and-pepper noise.

Individual photon detections can be treated as independent events that follow a random temporal distribution. As a result, photon counting is a classic Poisson process, and the number of photons N measured by a given sensor element over a time interval t is described by the discrete probability distribution

p(N = k) = ((λt)^k · e^(−λt)) / k!


where λ is the expected number of photons per unit time interval, which is proportional to the incident scene irradiance. This is a standard Poisson distribution with rate parameter λt, which corresponds to the expected incident photon count. The uncertainty described by this distribution is known as photon noise.

[Figure: the original (centre) image and the same image after Poisson noise occurs.]
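A rough sketch of simulating photon (Poisson) noise, assuming the pixel values are first rescaled to an expected photon count (the photons_at_white parameter is an illustrative assumption):

```python
import numpy as np

def add_poisson_noise(image, photons_at_white=100.0, seed=None):
    """Treat each pixel as an expected photon count and draw a Poisson sample."""
    rng = np.random.default_rng(seed)
    expected = image.astype(np.float64) / 255.0 * photons_at_white  # lambda*t per pixel
    counts = rng.poisson(expected)                                  # Poisson-distributed counts
    noisy = counts / photons_at_white * 255.0                       # rescale back to 0-255
    return np.clip(noisy, 0, 255).astype(np.uint8)

# A horizontal gray ramp: the relative fluctuation is larger in the darker regions.
img = np.linspace(0, 255, 256, dtype=np.uint8).reshape(1, -1).repeat(64, axis=0)
noisy = add_poisson_noise(img, seed=0)
```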


4. Erlang (Gamma) Noise: Gamma noise is often associated with processes related to waiting times between random (Poisson-distributed) events. Gamma noise is typically generated as a pseudorandom pattern of waiting times between events of a unit-mean Poisson process.

The shape of the Gamma noise is very similar to the Rayleigh distribution. The Gamma noise distribution starts from zero. It is given by the following expression.

p(z) = (a^b · z^(b−1) / (b − 1)!) · e^(−az)   for z ≥ 0
p(z) = 0                                      for z < 0


[Figure: Erlang (gamma) PDF — the curve starts at zero, rises to a peak of height K at z = (b − 1)/a, and then decays.]

Here, a > 0 and b is a positive integer. The mean and the variance of this distribution are given by

μ = b / a   and   σ² = b / a²
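A sketch of adding Erlang (gamma) noise using NumPy's gamma generator with integer shape b and scale 1/a; the values of a and b are arbitrary:

```python
import numpy as np

def add_erlang_noise(image, a=0.1, b=2, seed=None):
    """Add Erlang/gamma noise with mean b/a and variance b/a^2."""
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=b, scale=1.0 / a, size=image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

img = np.full((128, 128), 100, dtype=np.uint8)
noisy = add_erlang_noise(img, a=0.1, b=2, seed=0)   # mean shift of b/a = 20 gray levels
```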


5. Exponential Noise: The exponential distribution has an exponential shape. It is given by the following expression:

p(z) = a · e^(−az)   for z ≥ 0
p(z) = 0             for z < 0

Here, a > 0. The mean and the variance of exponential noise are given by

μ = 1 / a   and   σ² = 1 / a²

[Figure: exponential PDF — the curve starts at height a at z = 0 and decays exponentially as z increases.]
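A similar sketch for exponential noise (the value of a is arbitrary):

```python
import numpy as np

def add_exponential_noise(image, a=0.05, seed=None):
    """Add exponential noise with mean 1/a and variance 1/a^2."""
    rng = np.random.default_rng(seed)
    noise = rng.exponential(scale=1.0 / a, size=image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

img = np.full((128, 128), 100, dtype=np.uint8)
noisy = add_exponential_noise(img, a=0.05, seed=0)   # mean shift of 1/a = 20 gray levels
```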


6. Uniform Noise: The noise caused by quantizing the pixels of a sensed image to a number of discrete levels is known as quantization noise. It has an approximately uniform distribution. Though it can be signal-dependent, it will be signal-independent if other noise sources are large enough to cause dithering, or if dithering is explicitly applied.

Quantization, in mathematics and digital signal processing, is the process of mapping a large set of input values to a (countable) smaller set. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms. The difference between an input value and its quantized value (such as round-off error) is referred to as quantization error. A device or algorithmic function that performs quantization is called a quantizer; an analog-to-digital converter is an example of a quantizer.


The uniform noise caused by quantizing the pixels of an image to a number of distinct levels is known as quantization noise; it has an approximately uniform distribution. In uniform noise, the gray values of the noise are uniformly distributed across a specified range. Uniform noise can be used to generate other types of noise distribution. This noise is often used to degrade images for the evaluation of image-restoration algorithms, and it provides the most neutral or unbiased noise.


As the name suggests, this noise is uniform over a certain band of gray levels.

The PDF of uniform noise is given by

p(z) = 1 / (b − a)   if a ≤ z ≤ b
p(z) = 0             otherwise

The mean of the function is

μ = (a + b) / 2

The variance of this function is given by

σ² = (b − a)² / 12

[Figure: uniform PDF — a rectangle of height 1/(b − a) spanning gray levels a to b.]
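A sketch of adding uniform noise drawn from [a, b] (the interval below is an arbitrary choice):

```python
import numpy as np

def add_uniform_noise(image, a=-20.0, b=20.0, seed=None):
    """Add noise drawn uniformly from [a, b]; mean (a+b)/2, variance (b-a)^2/12."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(a, b, size=image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

img = np.full((128, 128), 128, dtype=np.uint8)
noisy = add_uniform_noise(img, seed=0)
```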


Filtering: Filtering in image processing is a basic operation used to achieve many tasks such as noise reduction, interpolation, and re-sampling. Filtering image data is a standard process used in almost all image-processing systems. The choice of filter is determined by the nature of the task and by the behaviour and type of the data. Removing noise from a digital image while preserving its details is a necessary part of image processing. Filters can be divided into different categories:

Filtering without Detection: In this approach, a window mask is moved across the observed image. The mask is usually of size (2N+1) × (2N+1), where N is any positive integer, and the centre element is the pixel of concern. As the mask moves from the top-left corner to the bottom-right corner of the image, it performs some arithmetic operation without discriminating between the pixels of the image.


Detection followed by Filtering: This filtering involves two steps. In the first step it identifies the noisy pixels of the image, and in the second step it filters only those pixels. Here too, a mask is moved across the image and arithmetic operations are performed to detect the noisy pixels. The filtering operation is then performed only on those pixels found to be noisy in the first step, keeping the non-noisy pixels of the image intact.

Hybrid Filtering: In a hybrid filtering scheme, two or more filters are used to filter a corrupted location of a noisy image. The decision to apply a particular filter is based on the noise level of the noisy image at the test pixel location and on the performance of the filter over the filtering mask.


Linear Filters: Linear filters are used to remove certain types of noise. Gaussian or averaging filters are suitable for this purpose. These filters also tend to blur sharp edges, destroy lines and other fine details of the image, and perform badly in the presence of signal-dependent noise.

Non-Linear Filters: In recent years, a variety of non-linear median-type filters, such as rank-conditioned, weighted median, relaxed median, and rank-selection filters, have been developed to overcome the shortcomings of linear filters.

Different Types of Linear and Non-Linear Filters:

Mean Filter: The mean filter is a simple spatial filter. It is a sliding-window filter that replaces the centre value in the window with the average (mean) of all the pixel values in the kernel or window. The window is usually square, but it can be of any shape.


Advantages: a. Easy to implement. b. Can be used to reduce impulse noise.

Disadvantage: It does not preserve the details of the image; some image details are removed when the mean filter is used.
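A minimal sketch of a 3×3 mean filter, here using SciPy's uniform_filter rather than an explicit sliding-window loop:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mean_filter(image, size=3):
    """Replace each pixel with the average of the size x size window around it."""
    return uniform_filter(image.astype(np.float64), size=size).astype(np.uint8)

img = np.full((64, 64), 128, dtype=np.uint8)
img[32, 32] = 255                      # a single bright impulse
smoothed = mean_filter(img, size=3)    # the impulse is spread (blurred) over the window
```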


Median Filter: The median filter is a simple and powerful non-linear filter based on order statistics. It is an easy-to-implement method of smoothing images. The median filter is used to reduce the amount of intensity variation from one pixel to the next. In this filter, we do not replace the pixel value with the mean of all neighbouring pixel values; we replace it with the median value. The median is calculated by first sorting all the pixel values in the window into ascending order and then replacing the pixel being considered with the middle value. If the neighbourhood under consideration contains an even number of pixels, the average of the two middle pixel values is used instead. The median filter gives its best results when the impulse-noise percentage is low (less than about 0.1%); as the amount of impulse noise increases, the median filter no longer gives the best results.
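A minimal sketch of median filtering using SciPy's median_filter; the salt and pepper pixels inserted below are only for illustration:

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_impulse_noise(image, size=3):
    """Replace each pixel with the median of the size x size window around it."""
    return median_filter(image, size=size)

img = np.full((64, 64), 128, dtype=np.uint8)
img[10, 10] = 255                             # salt pixel
img[20, 20] = 0                               # pepper pixel
cleaned = remove_impulse_noise(img, size=3)   # both impulses are replaced by 128
```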


Conclusion: Enhancement of a noisy image is a necessary task in digital image processing, and filters are best suited for removing noise from images. In this presentation we have described various noise models and filtering techniques. Filtering techniques are divided into two parts, linear and non-linear, and each has its own advantages and limitations. In hybrid filtering schemes, two or more filters are recommended for filtering a corrupted location; the decision to apply a particular filter is based on the noise level at each test pixel location and on the performance of the filter scheme over the filtering mask.


References:
[1] A. K. Jain, "Fundamentals of Digital Image Processing", Prentice Hall of India, First Edition, 1989.
[2] Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing", Pearson Education, Second Edition, 2005.
[3] K. S. Srinivasan and D. Ebenezer, "A New Fast and Efficient Decision-Based Algorithm for Removal of High-Density Impulse Noises", IEEE Signal Processing Letters, Vol. 14, No. 3, March 2007.
[4] H. Hwang and R. A. Haddad, "Adaptive Median Filters: New Algorithms and Results", IEEE Transactions on Image Processing, Vol. 4, pp. 499-502, April 1995.
[5] M. Nachtegael, S. Schulte, D. Van der Weken, V. De Witte and E. E. Kerre, "Fuzzy Filters for Noise Reduction: The Case of Gaussian Noise", IEEE, 2005, pp. 201-206.
[6] Suresh Kumar, Papendra Kumar, Manoj Gupta and Ashok Kumar Nagawat, "Performance Comparison of Median and Wiener Filter in Image De-noising", International Journal of Computer Applications (0975-8887), Vol. 12, No. 4, November 2010.

