MC0086 Assignment Spring 2013 solved



Q. Explain the process of formation of an image in the human eye.

Ans: The eye is an optical image-forming system. Many parts of the eye play important roles in the formation of an image on the retina, the back surface of the eye, which consists of layers of cells whose function is to transmit to the brain information corresponding to the image formed on it. The following notes explain the basic ray diagram of image formation within the human eye.

1. Representation of an object: First consider the object, which is represented by a simple red arrow pointing upwards (left-hand side of the diagram). Most real objects have complicated shapes, textures, and so on. This arrow represents a very simple object for which just two clearly defined points are traced through the eye to the retina.

2. Light leaves the object, propagating in all directions: It is assumed for simplicity that this is a scattering object, meaning that after light in the area (which may be called "ambient light") reaches the object, it leaves the surface of the object traveling in a wide range of directions.

3. Some of the light leaving the object reaches the eye: Although the object scatters light in all directions, only a small proportion of the scattered light reaches the eye. The longer pink and green lines with arrows marked along them are called "rays"; they represent the direction of travel of the light. The pink rays indicate paths taken by light leaving the top point of the object (that eventually reaches the retina), while the green rays indicate paths taken by light leaving the lower point of the object (that eventually reaches the retina). Only two rays are shown leaving each point on the object; this simplification keeps the diagram clear. The two rays drawn in each case are the extreme rays, that is, those that only just get through the optical system of the eye. Together they represent a cone of light that propagates all the way through the system from the object to the image.

4. Light changes direction when it passes from the air into the eye: When light traveling away from the object, towards the eye, arrives at the eye, the first surface it reaches is the cornea. The ray diagram shows the rays changing direction when they pass through the cornea. This change in direction is due to refraction, i.e. the re-direction of light as it passes from one medium into another, different medium. To describe this ray diagram it is sufficient to say that several structures in the eye contribute to image formation by re-directing the light passing through them in such a way as to improve the quality of the image formed on the retina.
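To make the refraction step concrete, the short sketch below applies Snell's law, n1 sin(theta1) = n2 sin(theta2), to estimate how strongly a ray entering the cornea from air is bent towards the normal. The refractive index of 1.376 used for the cornea is an approximate textbook value, included here as an assumption rather than a figure from this assignment.

```python
import math

def refraction_angle(theta_incident_deg, n1=1.000, n2=1.376):
    """Angle of refraction (degrees) for light passing from a medium with index n1 into one with index n2.

    n1 = 1.000 models air; n2 = 1.376 is an approximate value for the cornea.
    """
    theta1 = math.radians(theta_incident_deg)
    # Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
    sin_theta2 = n1 * math.sin(theta1) / n2
    return math.degrees(math.asin(sin_theta2))

# A ray striking the cornea at 30 degrees from the normal is bent towards the normal:
print(round(refraction_angle(30.0), 1))  # about 21.3 degrees
```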


Q. Explain different linear methods for noise cleaning.

Ans: Noise reduction is the process of removing noise from a signal. Noise reduction techniques are conceptually very similar regardless of the signal being processed; however, a priori knowledge of the characteristics of an expected signal can mean that the implementations of these techniques vary greatly depending on the type of signal.

All recording devices, whether analogue or digital, have traits which make them susceptible to noise. Noise can be random or white noise with no coherence, or coherent noise introduced by the device's mechanism or processing algorithms.

In electronic recording devices, a major form of noise is hiss caused by random electrons that, heavily influenced by heat, stray from their designated path. These stray electrons influence the voltage of the output signal and thus create detectable noise.

In the case of photographic film and magnetic tape, noise (both visible and audible) is introduced due to the grain structure of the medium. In photographic film, the size of the grains determines the film's sensitivity, with more sensitive film having larger grains. In magnetic tape, the larger the grains of the magnetic particles (usually ferric oxide or magnetite), the more prone the medium is to noise.

One linear method of removing noise is to convolve the original image with a mask that represents a low-pass filter or smoothing operation. For example, the Gaussian mask comprises elements determined by a Gaussian function. This convolution brings the value of each pixel into closer harmony with the values of its neighbors. In general, a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors; the Gaussian filter is just one possible set of weights.

Smoothing filters tend to blur an image, because pixel intensity values that are significantly higher or lower than the surrounding neighborhood "smear" across the area. Because of this blurring, linear filters are seldom used in practice for noise reduction; they are, however, often used as the basis for nonlinear noise reduction filters.
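A minimal sketch of this kind of linear noise cleaning, assuming a small grayscale numpy image, is given below: it builds a normalized Gaussian mask and convolves it with a noisy image, so that each output pixel becomes a weighted average of its neighbours. The kernel size, sigma and noise level are illustrative choices, not values from the assignment.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian mask (a simple low-pass smoothing kernel)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()          # weights sum to 1, so average brightness is preserved

def smooth(image, kernel):
    """Convolve the image with the mask: each output pixel is a weighted average of its neighbours."""
    pad = kernel.shape[0] // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + kernel.shape[0], c:c + kernel.shape[1]] * kernel)
    return out

# Example: a flat image corrupted by additive Gaussian noise.
noisy = np.full((64, 64), 100.0) + np.random.normal(0, 20, (64, 64))
clean = smooth(noisy, gaussian_kernel(5, sigma=1.0))
print(noisy.std(), clean.std())   # the smoothed image fluctuates far less around the true value
```

Because the mask weights sum to one, the filter preserves the average brightness while damping random fluctuations, which is exactly the blurring trade-off described above.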

Q. What are the two quantitative approaches used for the evaluation of image features?

Ans: The theory of histogram modification of continuous real-valued pictures is developed. It is shown that the transformation of gray levels taking a picture's histogram to a desired histogram is unique under the constraint that the transformation be monotonic increasing. Algorithms for implementing this solution on digital pictures are discussed. A gray-level transformation is useful for increasing visual contrast, but it may destroy some of the information content. It is shown that solutions to the problem of minimizing the sum of the information loss and the histogram discrepancy are solutions to certain differential equations, which can be solved numerically.
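As an illustration of a monotonic increasing gray-level transformation that carries a picture's histogram towards a desired (here uniform) target histogram, the sketch below performs plain histogram equalization on an 8-bit image; the cumulative distribution used for the mapping is non-decreasing, which is what guarantees monotonicity. This is an illustrative example under those assumptions, not the specific algorithm referred to above.

```python
import numpy as np

def equalize(image, levels=256):
    """Histogram equalization: a monotonic increasing gray-level transformation that
    maps the image histogram towards a uniform target histogram."""
    hist, _ = np.histogram(image.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum() / hist.sum()              # cumulative distribution, non-decreasing
    mapping = np.round(cdf * (levels - 1)).astype(np.uint8)
    return mapping[image]                         # apply the gray-level transformation per pixel

# Example: a low-contrast image with gray levels in [100, 140) is spread over the full range.
img = np.random.randint(100, 140, size=(64, 64), dtype=np.uint8)
eq = equalize(img)
print(img.min(), img.max(), eq.min(), eq.max())
```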


Q. Explain, with a diagram, the digital image restoration model.

Ans: Digital Image Restoration

A current research project at IMM led by Prof. Per Christian Hansen.

Digital image restoration, in which a noisy, blurred image is restored on the basis of a mathematical model of the blurring process, is a well-known example of a 2-D deconvolution problem. A recent survey of this topic, including a discussion of many practical aspects, can be found in [1].

There are many sources of blur. Here we focus on atmospheric turbulence blur, which arises, e.g., in remote sensing and astronomical imaging due to long-term exposure through the atmosphere, where the turbulence in the atmosphere gives rise to random variations in the refractive index. For many practical purposes, this blurring can be modelled by a Gaussian point spread function, and the discretized problem is a linear system of equations whose coefficient matrix is a block Toeplitz matrix with Toeplitz blocks.

Discretizations of deconvolution problems are solved by regularization methods, such as those implemented in the Matlab package Regularization Tools, that seek to balance the noise suppression and the loss of detail in the restored image. Unfortunately, classical regularization algorithms tend to produce smooth solutions, and as a consequence it is difficult to recover sharp edges in the image.
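As a rough illustration of this restoration model, the sketch below blurs an image with a Gaussian point spread function, adds noise, and restores it with classical Tikhonov regularization applied in the Fourier domain (assuming periodic boundary conditions). This is a simple stand-in for the regularization methods mentioned above, not the Regularization Tools package itself, and the image size, blur width and regularization parameter are illustrative assumptions.

```python
import numpy as np

def gaussian_psf(shape, sigma=2.0):
    """Gaussian point spread function (the atmospheric turbulence blur model), centred and normalized."""
    rows, cols = shape
    y = np.arange(rows) - rows // 2
    x = np.arange(cols) - cols // 2
    xx, yy = np.meshgrid(x, y)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def blur(image, psf):
    """Forward model: periodic convolution of the true image with the PSF, via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))

def tikhonov_restore(blurred, psf, lam=1e-2):
    """Tikhonov-regularized deconvolution in the Fourier domain:
    X = conj(H) * B / (|H|^2 + lam), where lam trades noise suppression against detail."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    B = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * B / (np.abs(H) ** 2 + lam)))

# Example: blur a sharp-edged test image, add noise, then restore.
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0
observed = blur(truth, gaussian_psf((64, 64))) + np.random.normal(0, 0.01, (64, 64))
restored = tikhonov_restore(observed, gaussian_psf((64, 64)), lam=1e-2)
print(np.abs(restored - truth).mean())   # restoration error; a larger lam gives a smoother result
```

Increasing lam suppresses more noise but also smooths away detail, which is the balance referred to in the text.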

We have developed a 2-D version [2] of a new algorithm [3] that is much better able to reconstruct the sharp edges that are typical of digital images. The algorithm, called PP-TSVD, is a modification of the truncated-SVD method that incorporates the solution of a linear l1-problem, and it includes a parameter k that controls the amount of noise reduction. The algorithm is implemented in Matlab and is available as the Matlab function pptsvd.

The four images at the top of this page show various fundamental solutions that can be computed by means of the PP-TSVD algorithm. The underlying basis functions are delta functions, piecewise constant functions, piecewise linear functions and piecewise second-degree polynomials, respectively. We are currently investigating the use of the PP-TSVD algorithm in such areas as astronomy and geophysics.

Q. Discuss orthogonal gradient generation for first-order derivative edge detection.

Ans: First-Order Derivative Edge Detection

There are two fundamental methods for generating first-order derivative edge gradients. One method involves generation of gradients in two orthogonal directions in an image; the second utilizes a set of directional derivatives. We discuss the first method here.

Orthogonal Gradient Generation

An edge in a continuous-domain image F(x,y) can be detected by forming the continuous one-dimensional gradient G(x,y) along a line normal to the edge slope, which is at an angle θ with respect to the horizontal axis. If the gradient is sufficiently large (i.e., above some threshold value), an edge is deemed present. The gradient along the line normal to the edge slope can be computed in terms of the derivatives along orthogonal axes as

G(x,y) = (∂F(x,y)/∂x) cos θ + (∂F(x,y)/∂y) sin θ        (8.3a)

The spatial gradient amplitude is

G(x,y) = [G_R(x,y)^2 + G_C(x,y)^2]^(1/2)

where G_R and G_C denote the gradients along the row and column axes. For computational efficiency, the gradient amplitude is sometimes approximated by the magnitude combination

G(x,y) = |G_R(x,y)| + |G_C(x,y)|

The orientation of the spatial gradient with respect to the row axis is

θ(x,y) = arctan[G_C(x,y) / G_R(x,y)]

The remaining issue for discrete-domain orthogonal gradient generation is to choose a good discrete approximation to the continuous differentials of Eq. 8.3a.

The simplest method of discrete gradient generation is to form the running difference of pixels along the rows and columns of the image. The row gradient is defined as

G_R(j,k) = F(j,k) - F(j,k-1)

and the column gradient is

G_C(j,k) = F(j,k) - F(j+1,k)

Diagonal edge gradients can be obtained by forming running differences of diagonal pairs of pixels. This is the basis of the Roberts cross-difference operator, which is defined in magnitude form as

G(j,k) = |F(j,k) - F(j+1,k+1)| + |F(j,k+1) - F(j+1,k)|

and in square-root form as

G(j,k) = { [F(j,k) - F(j+1,k+1)]^2 + [F(j,k+1) - F(j+1,k)]^2 }^(1/2)


Prewitt has introduced a 3 x 3 pixel edge gradient operator described by a numbering of the eight pixels surrounding the centre pixel. The Prewitt operator square-root edge gradient is defined as

G(j,k) = [G_R(j,k)^2 + G_C(j,k)^2]^(1/2)

with the row gradient G_R and column gradient G_C formed as weighted differences of the three pixels on either side of the centre pixel, the middle pixel of each triple carrying weight K = 1. In this formulation, the row and column gradients are normalized to provide unit-gain positive and negative weighted averages about a separated edge position.

The Sobel operator edge detector differs from the Prewitt edge detector in that the values of the north, south, east and west pixels are doubled (i.e., K = 2). The motivation for this weighting is to give equal importance to each pixel in terms of its contribution to the spatial gradient.
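The sketch below implements discrete orthogonal gradient generation with the Prewitt and Sobel operators. The 3 x 3 masks encode the unit-gain normalization discussed above (divisor K + 2, i.e. 1/3 for Prewitt and 1/4 for Sobel); the test image and the edge threshold are illustrative assumptions.

```python
import numpy as np

# Unit-gain row and column gradient masks (K = 1 gives Prewitt, K = 2 gives Sobel).
PREWITT_ROW = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0
PREWITT_COL = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]]) / 3.0
SOBEL_ROW = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]]) / 4.0
SOBEL_COL = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]) / 4.0

def filter3x3(image, mask):
    """Apply a 3x3 mask by correlation (border pixels are left at zero)."""
    out = np.zeros_like(image, dtype=float)
    for r in range(1, image.shape[0] - 1):
        for c in range(1, image.shape[1] - 1):
            out[r, c] = np.sum(image[r - 1:r + 2, c - 1:c + 2] * mask)
    return out

def gradient(image, row_mask, col_mask):
    """Gradient amplitude and orientation from two orthogonal gradient images."""
    g_row = filter3x3(image, row_mask)
    g_col = filter3x3(image, col_mask)
    amplitude = np.sqrt(g_row**2 + g_col**2)   # or |g_row| + |g_col| as a cheaper approximation
    orientation = np.arctan2(g_col, g_row)     # angle with respect to the row axis
    return amplitude, orientation

# Example: a vertical step edge; pixels whose amplitude exceeds a threshold are marked as edges.
image = np.zeros((16, 16))
image[:, 8:] = 100.0
amp, _ = gradient(image, SOBEL_ROW, SOBEL_COL)
print(np.argwhere(amp[8] > 50.0).ravel())      # edge detected at columns 7 and 8, around the step
```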

Second-Order Derivative Edge Detection

Second-order derivative edge detection techniques employ some form of spatial second-order differentiation to accentuate edges. An edge is marked if a significant spatial change occurs in the second derivative. We consider here the Laplacian second-order derivative method.

The edge Laplacian of an image function F(x,y) in the continuous domain is defined as

G(x,y) = -∇^2 F(x,y)        (8.4a)

where the Laplacian operator is

∇^2 = ∂^2/∂x^2 + ∂^2/∂y^2

The Laplacian G(x,y) is zero if F(x,y) is constant or changing linearly in amplitude. If the rate of change of F(x,y) is greater than linear, G(x,y) exhibits a sign change at the point of inflection of F(x,y). The zero crossing of G(x,y) indicates the presence of an edge. The negative sign in the definition of Eq. 8.4a is present so that the zero crossing of G(x,y) has a positive slope for an edge whose amplitude increases from left to right or from bottom to top in an image.

Torre and Poggio have investigated the mathematical properties of the Laplacian of an image function. They have found that if F(x,y) meets certain smoothness constraints, the zero crossings of G(x,y) are closed curves. In the discrete domain, the simplest approximation to the continuous Laplacian is to compute the difference of slopes along each axis:

G(j,k) = [F(j,k) - F(j,k-1)] - [F(j,k+1) - F(j,k)] + [F(j,k) - F(j+1,k)] - [F(j-1,k) - F(j,k)]

This four-neighbor Laplacian can be generated by the convolution operation G(j,k) = F(j,k) * H(j,k), where H(j,k) is the impulse response array

H = [  0 -1  0
      -1  4 -1
       0 -1  0 ]

The four-neighbor Laplacian is often normalized to provide unit-gain averages of the positive-weighted and negative-weighted pixels in the 3 x 3 pixel neighborhood. The gain-normalized four-neighbor Laplacian impulse response is defined by

H = (1/4) [  0 -1  0
            -1  4 -1
             0 -1  0 ]

Prewitt has suggested an eight-neighbor Laplacian defined by the gain-normalized impulse response array

H = (1/8) [ -1 -1 -1
            -1  8 -1
            -1 -1 -1 ]
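The sketch below applies the gain-normalized four-neighbor and eight-neighbor Laplacian arrays given above and marks edge points at zero crossings of the result; the ramp-edge test image and the simple sign-change test are illustrative assumptions.

```python
import numpy as np

# Gain-normalized Laplacian impulse response arrays.
FOUR_NEIGHBOR = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]]) / 4.0
EIGHT_NEIGHBOR = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]) / 8.0

def laplacian(image, mask):
    """Convolve the image with a Laplacian mask (borders are left at zero; the masks are symmetric)."""
    out = np.zeros_like(image, dtype=float)
    for r in range(1, image.shape[0] - 1):
        for c in range(1, image.shape[1] - 1):
            out[r, c] = np.sum(image[r - 1:r + 2, c - 1:c + 2] * mask)
    return out

def zero_crossings(lap):
    """Mark pixels where the Laplacian changes sign between horizontal or vertical neighbours."""
    sign = lap > 0
    zc = np.zeros(lap.shape, dtype=bool)
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]
    return zc

# Example: a short ramp edge (0 -> 50 -> 100 across one column).
image = np.zeros((16, 16))
image[:, 8:] = 100.0
image[:, 8] = 50.0
edges = zero_crossings(laplacian(image, FOUR_NEIGHBOR))
print(np.argwhere(edges[8]).ravel())   # the sign changes bracket the ramp (columns 8 and 9)
```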



Q. Explain region splitting and merging with an example.

Ans: Region Splitting

The basic idea of region splitting is to break the image into a set of disjoint regions which are coherent within themselves:

1. Initially, take the image as a whole to be the area of interest.
2. Look at the area of interest and decide whether all pixels contained in the region satisfy some similarity constraint.
3. If TRUE, then the area of interest corresponds to a region in the image.
4. If FALSE, split the area of interest (usually into four equal sub-areas) and consider each of the sub-areas as the area of interest in turn.

This process continues until no further splitting occurs. In the worst case this happens when the areas are just one pixel in size. This is a divide-and-conquer, or top-down, method.

If only a splitting schedule is used, then the final segmentation will probably contain many neighbouring regions that have identical or similar properties. Thus, a merging process is used after each split which compares adjacent regions and merges them if necessary. Algorithms of this nature are called split-and-merge algorithms.

To illustrate the basic principle of these methods, let us consider an imaginary image:

1. Take the whole image, shown in Fig. 35(a), as the initial region.
2. Not all the pixels in it are similar, so the region is split into four quadrants, as in Fig. 35(b).
3. Assume that all pixels within three of the resulting regions are similar, but those in the fourth region are not.
4. The fourth region is therefore split next, as in Fig. 35(c).
5. Now assume that all pixels within each region are similar with respect to that region, and that after comparing the split regions, two adjacent sub-regions are found to be identical.
6. These are thus merged together, as in Fig. 35(d).

Fig. 35 Example of region splitting and merging


We can describe the splitting of the image using a tree structure, i.e., a modified quadtree. Each non-terminal node in the tree has at most four descendants, although it may have fewer due to merging. See Fig. 36.

Fig. 36 Region splitting and merging tree
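A minimal sketch of the split-and-merge idea is given below, assuming a grayscale numpy image and a simple homogeneity test (maximum minus minimum intensity within a region not exceeding a threshold). The recursion plays the role of the quadtree of Fig. 36, and the merge pass shown here groups leaf regions by similar mean gray level, ignoring adjacency for brevity; the predicate, threshold and test image are illustrative assumptions.

```python
import numpy as np

def is_uniform(region, tol=10):
    """Similarity constraint: a region is homogeneous if its intensity range is small."""
    return region.max() - region.min() <= tol

def split(image, r0, c0, r1, c1, leaves):
    """Recursively split the area of interest into four quadrants until each part is uniform
    or cannot be split further, collecting the leaf regions as (r0, c0, r1, c1) boxes."""
    if is_uniform(image[r0:r1, c0:c1]) or r1 - r0 <= 1 or c1 - c0 <= 1:
        leaves.append((r0, c0, r1, c1))
        return
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    split(image, r0, c0, rm, cm, leaves)
    split(image, r0, cm, rm, c1, leaves)
    split(image, rm, c0, r1, cm, leaves)
    split(image, rm, cm, r1, c1, leaves)

def merge(image, leaves, tol=10):
    """Merge pass: give the same label to leaf regions with similar mean gray level.
    (Adjacency is ignored here for brevity; a full split-and-merge would only join neighbours.)"""
    labels, means = {}, []
    for box in leaves:
        r0, c0, r1, c1 = box
        m = image[r0:r1, c0:c1].mean()
        for lab, ref in enumerate(means):
            if abs(m - ref) <= tol:
                labels[box] = lab
                break
        else:
            labels[box] = len(means)
            means.append(m)
    return labels

# Example: a 16x16 image with one bright square; splitting yields several leaf regions,
# and merging reduces them to two labelled regions (background and square).
img = np.zeros((16, 16))
img[4:8, 4:8] = 200.0
leaves = []
split(img, 0, 0, 16, 16, leaves)
labels = merge(img, leaves)
print(len(leaves), len(set(labels.values())))   # e.g. 7 leaves, 2 regions after merging
```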
