My "Grain Motion Detection" Project



Please have a look at my Final Year University Project dedicated to the development of grain motion detection software.

Transcript of My "Grain Motion Detection" Project

Page 1: My "Grain Motion Detection" Project


Abstract

The purpose of this study is the development of software that measures the dimensions of grains together with their speed and displacement. Such a tool is needed because the current study of sediment transport lacks efficient ways of measuring these parameters. This report covers the experimental work carried out to evaluate the efficiency of currently available methods and to propose new ways of solving the problem. Several popular segmentation techniques, most of them based on edge detection, were implemented in software to test their efficiency on this problem; some other methods were used as well. To test these ideas, a simplified model of the channel bed was constructed and grain movement was simulated so that the methods could be evaluated. The written code was tested under different control parameters, and the best results obtained are presented and discussed. The weaknesses of these methods were identified, and those results formed the basis for selecting an alternative direction of further research. After analysis of all the work done, new alternative methods are recommended, and details regarding their implementation are provided in depth.


Contents

ABSTRACT

1.0 INTRODUCTION
    Aims and objectives

2.0 LITERATURE REVIEW AND BACKGROUND INFORMATION
    2.1 Sediment definition
    2.2 Subject Relevance
    2.3 Solution of sediment problems
    2.4 Sediment Discharge Calculation
    2.5 Bed Load Transport Definition
        2.5.1 Two Schools of thought
    2.6 Physical description of the bed load transport
        2.6.1 General function
    2.7 Bed Load Transport Relations
    2.8 Possible implementation of theory in software design
        2.8.1 Direct measure method
        2.8.2 Estimation method that uses bed load calculation formulas
    2.9 Digital image properties
        2.9.1 Digital Image definition
            2.9.1.1 Raster graphics
            2.9.1.2 Important characteristics
            2.9.1.3 Advantages
            2.9.1.4 Disadvantages
            2.9.1.5 Formats
            2.9.1.6 Types of raster images
            2.9.1.7 Grayscale images
        2.9.2 Image processing

3.0 METHODOLOGY
    3.0.1 Package used for software development
        3.0.1.1 Matlab Overview
        3.0.1.2 Image Processing Toolbox
    3.1 Experimental Set up
        3.1.1 Programming Package features related to software design
            3.1.1.1 Systems of co-ordinates
            3.0.1.1 List of Matlab commands and functions used in experiments
            3.0.1.2 Cycle operators
            3.0.1.3 PARFOR loop
            3.0.1.4 Converting image from colour to grayscale
    3.2 Proposed software working principles
        3.2.1 Proposed program concept Nr. 1
            3.2.1.1 Description
            3.2.1.2 Background
            3.2.1.3 Segmentation Method 1
            3.0.1.4 Segmentation Method 2
            3.0.1.5 Segmentation Method 3
            3.0.1.5 Segmentation Method 4
            3.0.1.6 Canny edge detector
            Conclusion on the Program Concept Nr. 1
        3.0.2 Proposed program concept Nr. 2
            Description
            Highlight modified areas
            Results
            Conclusion

DISCUSSIONS

RECOMMENDATIONS
    Development of the 3d surface recreation method
        3d scanners
        Three-dimensional photo
        Analysis of obtained 3d data
        Conclusion on proposed recommendation
    Alternative recommendation
        Development of the segmentation method based on graph theory
            Method description
            Conclusion on proposed method

CONCLUSIONS OUTLINE

LIST OF REFERENCES

APPENDIX A: MATLAB CODES OF THE PROPOSED METHODS
    1. Function rem_small
    2. Function rate_of_change_scan
    3. Function watershed_segmentation.m
    4. Function horscan_i
    5. Function vibr1_1
    6. Function imabsdiffer.m


List of figures

FIGURE 1: TYPES OF SEDIMENT MOTION
FIGURE 2: VECTOR AND RASTER FORMAT IMAGES
FIGURE 3: VARIATIONS OF GRAYSCALE INTENSITIES
FIGURE 4: SIMPLIFIED MODEL OF THE CHANNEL
FIGURE 5: PIXEL SYSTEM OF COORDINATES
FIGURE 6: ORIGINAL COLOUR IMAGE
FIGURE 7: IMAGE CONVERTED TO GRAYSCALE
FIGURE 8: PROPOSED ALGORITHM NR. 1
FIGURE 9: PROPOSED ALGORITHM NR. 2
FIGURE 12: ORIGINAL IMAGE
FIGURE 13: SENSITIVITY PARAMETER "DIFFER" = 7
FIGURE 14: SENSITIVITY PARAMETER "DIFFER" = 10
FIGURE 15: SENSITIVITY PARAMETER "DIFFER" = 13
FIGURE 16: NOISE REDUCED BY MEDIAN FILTERING
FIGURE 17: NOISE REDUCED BY REM_SMALL FUNCTION
FIGURE 18: ANALYZED PIXEL LOCATIONS
FIGURE 19: ORIGINAL GRAYSCALE IMAGE
FIGURE 20: SENSITIVITY PARAMETER "DIFFER" = 10
FIGURE 21: SENSITIVITY PARAMETER "DIFFER" = 7
FIGURE 22: SENSITIVITY PARAMETER "DIFFER" = 5
FIGURE 23: SENSITIVITY PARAMETER "DIFFER" = 5
FIGURE 24: SENSITIVITY PARAMETER "DIFFER" = 9
FIGURE 25: OBTAINED GRAYSCALE PICTURE
FIGURE 26: GRADIENT SEGMENTATION
FIGURE 27: FOREGROUND MARKING
FIGURE 28: IMAGE ERODE
FIGURE 29: IMAGE CLOSE
FIGURE 31: IMREGIONALMAX FUNCTION USED
FIGURE 32: IMPOSE MARKERS ON THE IMAGE
FIGURE 33: BWAREAOPEN FUNCTION
FIGURE 34: THRESHOLD OPERATION
FIGURE 35: WATERSHED LINES
FIGURE 36: BORDERS MARKED
FIGURE 37: DISPLAY THE RESULTS
FIGURE 38: RESULTS IMPOSED ON ORIGINAL IMAGE
FIGURE 39: ACCIDENTALLY HIGHLIGHTED BORDERS
FIGURE 40: GRAIN EDGES OBTAINED USING HORIZONTAL SHIFT
FIGURE 41: GRAIN EDGES OBTAINED USING VERTICAL SHIFT
FIGURE 42: GRAIN EDGES OBTAINED USING DIAGONAL (VERTICAL + RIGHT) SHIFT
FIGURE 43: GRAIN EDGES OBTAINED USING DIAGONAL (VERTICAL + LEFT) SHIFT
FIGURE 44: IMAGE BEFORE NOISE REMOVAL
FIGURE 45: FIG.454235
FIGURE 46: ORIGINAL GRAYSCALE PICTURE
FIGURE 47: GRAIN BORDERS OBTAINED USING VIBRATION SIMULATION
FIGURE 48: METHOD 1
FIGURE 49: METHOD 2
FIGURE 50: ORIGINAL IMAGE
FIGURE 51: THRESH = 0.05
FIGURE 52: THRESH = 0.1
FIGURE 53: THRESH = 0.2
FIGURE 54: HIGHLIGHTED AREAS WHERE PIXEL INTENSITY VALUES HAVE CHANGED MORE THAN A PRESET CRITICAL LEVEL
FIGURE 55: HIGHLIGHTED AREAS WHERE PIXEL INTENSITY VALUES HAVE CHANGED MORE THAN A PRESET CRITICAL LEVEL
FIGURE 56: WEAK EDGES, METHOD 1
FIGURE 57: WEAK EDGES, CANNY EDGE DETECTOR
FIGURE 58: OVER-SEGMENTATION, CANNY EDGE DETECTOR
FIGURE 59: NOISE CONTAMINATION, METHOD 1
FIGURE 60: STRUCTURED LIGHT PRINCIPLE
FIGURE 61: AN EXAMPLE OF THE TRIANGULATION PRINCIPLE
FIGURE 62: AN EXAMPLE OF RESTORATION OF THE THREE-DIMENSIONAL SHAPE OF A SURFACE USING THE STRUCTURED ILLUMINATION METHOD: INITIAL OBJECT (A) AND SHAPE RECONSTRUCTION (A VIEW FROM VARIOUS ANGLES)
FIGURE 63: AN EXAMPLE OF MODELLING AN IMAGE AS A WEIGHTED GRAPH
FIGURE 64: AN EXAMPLE OF A MATRIX OF PAIRWISE DISTANCES FOR A DOT CONFIGURATION
FIGURE 65: RESULTS OF THE NORMALIZED CUTS METHOD (IMAGE TAKEN FROM
FIGURE 66: RESULTS OF THE NESTED CUTS METHOD
FIGURE 67: CONSTRUCTION OF A PYRAMID OF WEIGHTED GRAPHS FOR THE IMAGE
FIGURE 68: COMPARISON OF RESULTS OF THE SWA ALGORITHM, ITS MODIFICATIONS, AND NORMALIZED CUTS


1.0 Introduction

In the science of sediment transport there is a common difficulty in detecting moving particles. Since particle motion is one of the basic quantities in an accurate description of the sediment transport rate, an efficient method of grain movement detection needs to be developed.

According to the project description, the required method is limited to image analysis; therefore image processing techniques must be applied to the input images to obtain the necessary grain parameters and dimensions.

A common challenge in analysing objects in an image is image segmentation. Image segmentation is the partitioning of an image into regions that differ in some property. It is assumed that the regions correspond to real objects, or parts of them, and that the boundaries between regions correspond to the boundaries of objects. Segmentation plays an important role in image processing and computer vision. Therefore, the major and most important part of developing image analysis techniques is the development of precise and effective ways to segment an image.
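As a minimal illustration of the segmentation idea (a sketch in Python rather than the Matlab used in this project; the image, threshold value, and function name are purely illustrative): pixels brighter than a threshold are taken as foreground, and touching foreground pixels are grouped into labelled regions.

```python
# Illustrative threshold segmentation with connected-component labelling.
# This is NOT the project's Matlab code, just a toy example of the concept.

def segment(image, threshold):
    """Return a label map: 0 = background, 1..n = distinct foreground regions."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and labels[r][c] == 0:
                next_label += 1
                stack = [(r, c)]                  # flood-fill this region
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and labels[y][x] == 0 and image[y][x] >= threshold:
                        labels[y][x] = next_label
                        stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return labels

# Two bright "grains" on a dark background:
img = [[10, 200, 10, 10],
       [10, 210, 10, 220],
       [10, 10, 10, 230]]
print(segment(img, 128))  # → [[0, 1, 0, 0], [0, 1, 0, 2], [0, 0, 0, 2]]
```

Real grain images are far noisier and the grains touch each other, which is why plain thresholding is insufficient and more elaborate edge-based methods have to be tested.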

The work required to develop the software is mainly experimental in nature. Various methods must be tested and compared in terms of their efficiency. Several segmentation techniques are available for testing; most of them are based on edge detection, while others rely on different working principles. A variety of experiments should be carried out to build an overall picture of the efficiency of the tested methods, as this may lead to an optimal solution of the problem based on the collected experimental statistics.

Aims and objectives

The main aim of the project is the development of software able to effectively obtain the required parameters of the grain particles (such as size, displacement, and velocity) in order to use that data to work out the volumetric bed load transport rate.


The objectives of the project are:

- Design and construct a simplified model of a river bed to obtain the required input data for further analysis.
- Experiment with different factors that could positively or negatively influence further analysis (such as lighting mode and use of a reflector) and identify the best configuration.
- Develop and compare different approaches to obtaining the required grain parameters; test a variety of the proposed methods and, based on the results of the experiments, evaluate the efficiency and prospects of each method's implementation.
- Consider alternative approaches instead of sticking with a single method.
- Analyse the obtained results to work out an optimal solution of the problem.


2.0 Literature review and Background Information

2.1 Sediment definition

River sediment consists of solid mineral particles carried by a stream. River deposits are formed from the products of weathering, denudation, and erosion of rocks and soils. Water erosion, the destruction of the land surface under the action of flowing water, is the most active process supplying rivers with deposits.

There are several phases of sediment movement (Fig. 1):

- When the stream velocity is low, no deposits move, but as the velocity increases some of the deposits begin to move sporadically by rolling and sliding. This type of movement is called the "contact load" of the stream.
- If the velocity continues to increase, individual particles start to make short jumps, leaving the bed for a short time and then either returning to rest or continuing to move by rolling or further jumping. This type of movement is called the "saltation load" of the stream.
- If the velocity increases even more, saltation occurs more frequently, and some of the particles are kept in suspension by the upward components of flow turbulence for appreciable lengths of time. This type of movement is called the "suspended load" of the stream.

FIGURE 1: TYPES OF SEDIMENT MOTION

(HTTP://WWW.DKIMAGES.COM/DISCOVER/PREVIEWS/796/5103379.JPG)


2.2 Subject Relevance

Information on river bed load is essential for the design and construction of various hydraulic engineering works such as bridge crossings, channels of different purposes, and dams, in order to predict and avoid possible negative effects of sediment.

Structures in high-velocity streams are damaged by sediment particles in motion. Unless sufficient protection is provided, this process can seriously wear away the surface of such structures or pavements. The damage can be caused by both smaller and larger particles. The chief damage of this nature is to turbines and pumps, where sediment-laden water causes excessive wear on runners, vanes, propellers, and other appurtenant parts (Murthy and Madhaven, 1959).

Deposition in natural or artificial channels and reservoirs can also cause serious problems, and excessive sediment must often be removed. In navigable channels, excessive sediment needs to be removed to maintain the specific depths that are crucial for safe shipping. In open channel hydraulics, an excessive amount of sediment in natural streams has a large impact on the flood-water capacity of a channel and may result in overflow. To avoid these problems, sediment is removed from problematic channels on a regular basis (Garcia, 2008).

2.3 Solution of sediment problems

At first sight, the optimal solution to sediment transport problems would be to stop the source of erosion and thus prevent the generation of new sediment. However, bearing in mind the great lengths of rivers, even if the erosion source is stopped, large amounts of sediment will remain in the river for a considerably long time. It is therefore more rational to use protection systems that filter out the sediment and store it in special sediment containers, or that bypass sediment around the risk areas. In some cases more radical measures, such as dredging, are used. To solve these and other sediment problems, and to assess how effectively a given measure solves them, a clear understanding of the basic principles of sedimentation and hydraulics is needed (Goldman, Jackson and Bursztynsky, "Erosion and Sediment Control Handbook").

The basics of bed load transport principles are discussed in the next chapter.


2.4 Sediment Discharge Calculation

Efficient methods for computing sediment discharge are crucial for planning and designing construction and maintenance works in rivers and canals. At present, however, the available techniques for computing sediment discharge do not allow sediment movement to be predicted or estimated efficiently. In practice, engineers cannot use these methods as the main argument in decision making and usually have to rely on their own experience. It is difficult for an engineer to select a formula to use in calculations, because the results often vary significantly between methods, and it is hard to judge which formula gives the most realistic result unless observations and comparisons of the discharge are made. Many formulas that engineers treat as the most useful and realistic are based only on the subjective experience of the engineer, so comparison of the efficiency of the formulas is a major concern (Vanoni, 1975).

2.5 Bed Load Transport Definition

2.5.1 Two Schools of thought

There are two main schools of thought in bed load science: one founded by Ralph Alger Bagnold and the other by Professor Hans Albert Einstein.

Bagnold's (1956) definition of bed load transport is that the contact of the particles with the bed is governed mainly by gravity, while suspended particles are lifted by random upward impulses exerted on them by turbulence.

Einstein gives a somewhat different definition of bed load transport. He assumed that bed load transport is the movement of grains in a thin layer, a few grain diameters thick, in which grains move by sliding, rolling, and making small jumps over distances of a few grain diameters.

Einstein considered the turbulent mixing in the bed layer too weak to directly influence the movement of sediment, so that suspension of particles is not possible within the bed layer. He assumed that any particle in the bed load travels, as a series of movements, a distance of approximately 100 grain diameters, independently of the flow, the transport rate, and the bed characteristics.

He treated saltating grains as suspended load when the heights of their jumps were much larger than one or two grain diameters. Bagnold (1956, 1973), by contrast, believed saltation to be the main process driving bed load transport.


Many further works were based on these two schools of thought. The key element of Einstein's formulation is the entrainment rate of particles per unit area, expressed as a function of parameters such as the shear stress.

2.6 Physical description of the bed load transport

As the main purpose of the project is the design of software, only the basic principles necessary for understanding the topic and successfully implementing it in the software design are covered below. More detailed descriptions can be found in the referenced sources.

2.6.1 General function

In general the function of volumetric bed load transport rate is a relation of a boundary

shear stress 𝜏𝑏 and various parameters of deposits.

𝒒𝒃 = 𝒒𝒃(𝝉𝒃,𝒐𝒕𝒉𝒆𝒓 𝒑𝒂𝒓𝒂𝒎𝒆𝒕𝒆𝒓𝒔)

Eq.1

It can be defined in several ways:

1. As the multiplication of grain velocity, thickness of the bed load layer and the

grain concentration.

𝒒𝒃 = 𝒖𝒃𝒄𝒃𝜹𝒃

Eq.2

𝑞𝑏 – volumetric bed load transport rate ( 𝑚2

𝑠)

𝑢𝑏 – velocity of moving particles ( 𝑚

𝑠 )

𝑐𝑏 – concentration of particles ( 𝑣𝑜𝑙𝑢𝑚𝑒 𝑜𝑓 𝑝𝑎𝑟𝑡𝑖𝑐𝑙𝑒𝑠

𝑣𝑜𝑙𝑢𝑚𝑒 𝑜𝑓 𝑤𝑎𝑡𝑒𝑟 −𝑠𝑒𝑑𝑖𝑚𝑒𝑛𝑡 𝑚𝑖𝑥𝑡𝑢𝑟𝑒 )

𝛿𝑏 – bed load layer thickness ( 𝑚 )

Page 12: My "Grain Motion Detection" Project

12

2. As the multiplication of grain velocity, grain volume and number of moving

grains per unit area

𝒒𝒃 = 𝒖𝒃𝑽𝒃𝑵𝒃

Eq.3

𝑞𝑏 – volumetric bed load transport rate ( 𝑚2

𝑠)

𝑢𝑏 – velocity of moving particles ( 𝑚

𝑠 )

𝑉𝑏 – volume of particles (𝑚3)

𝑁𝑏– number of moving grains per unit area

The velocity of the particles can also be defined as the ratio of the saltation distance 𝜆 to the period of particle movement T, so 𝒖𝒃 = 𝝀/𝑻.

Hence 𝒒𝒃 = 𝒖𝒃𝑽𝒃𝑵𝒃 can be expressed as 𝒒𝒃 = 𝑽𝒃𝑵𝒃𝝀/𝑻

Eq.4, Eq.5
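As an illustrative check of these relations, Eq.2 and Eq.5 can be evaluated numerically. The values below are hypothetical, chosen only to show the arithmetic, not measured data:

```python
# Illustrative evaluation of Eq.2 and Eq.5 with hypothetical (not measured) values.

# Eq.2: q_b = u_b * c_b * delta_b
u_b = 0.05       # velocity of moving particles (m/s)
c_b = 0.02       # particle concentration (volume fraction, dimensionless)
delta_b = 0.01   # bed load layer thickness (m)
q_b_eq2 = u_b * c_b * delta_b   # volumetric bed load transport rate (m^2/s)

# Eq.5: q_b = V_b * N_b * lambda / T  (u_b expressed as lambda / T)
V_b = 4.2e-9     # volume of one grain (m^3), roughly a 2 mm sphere
N_b = 5000.0     # moving grains per unit bed area (1/m^2)
lam = 0.2        # saltation distance (m)
T = 4.0          # period of particle movement (s)
q_b_eq5 = V_b * N_b * lam / T   # (m^2/s)

print(q_b_eq2)   # ≈ 1e-05 m^2/s
print(q_b_eq5)   # ≈ 1.05e-06 m^2/s
```

Both definitions yield a rate in m²/s, consistent with the unit lists above.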


2.7 Bed Load Transport Relations

Most of the bed load relations can be described in a universal form:

q* = q*(τ*, Re_p, R)

Eq.6

where q* is the Einstein (1950) bed load number, q* = q_b / (D√(gRD)), τ* is the dimensionless form of the boundary shear stress τ, and D is the grain size. Its dimensionless form is introduced by:

q* = E_p* · L_s*

Eq.7

where

E_p* = E_p / √(gRD) and L_s* = L_s / D

Eq.8, Eq.9

𝑞𝑏 is the volumetric transport rate of bed load.

𝑅𝑒𝑝 is the particle Reynolds number. (Transition from laminar to turbulent flow occurs once the so-called critical Reynolds number 𝑅𝑒𝑐𝑟 is reached: at 𝑅𝑒 < 𝑅𝑒𝑐𝑟 the flow is laminar, while at 𝑅𝑒 > 𝑅𝑒𝑐𝑟 turbulence can develop.)

𝑅 is the submerged specific gravity of the sediment.

𝑔 is the acceleration of gravity.

𝐸𝑝 is the volumetric sediment entrainment rate per unit area.

𝐿𝑠 is the particle travel distance.

𝜏 is the produced shear stress.

2.8 Possible implementation of theory in software design

The following assumptions were made about how the software could meet the requirements:

2.8.1 Direct measure method

The software is required to detect the motion of grains and to measure the sizes of the moving grains, so that the bed load transport rate can be worked out afterwards. If it functions ideally, it should be able to collect all the data necessary to work out the volumetric transport rate 𝑞𝑏 by detecting each grain displacement and estimating each moving grain's size from its top projection. (For example, a grain's size can be estimated by treating the particle as a sphere, or Einstein's (1950) estimate can be used that the distance travelled by each grain is approximately 100 grain diameters.)

2.8.2 Estimation method that uses bed load calculation formulas

If the program is not able to track the motion of the particles efficiently, it could instead measure the size of each grain in the picture and produce a grain-size distribution. Then, by combining the grain-size data with flow characteristics measured simultaneously, it might be possible to estimate the transport rate using a variety of theories and formulas introduced by different scientists.

2.9 Digital image properties

2.9.1 Digital Image definition

As the software input will be presented as digital images, it is very important to understand what a digital image is and the way data is stored in it.

A digital image differs from a traditional photograph in that the optical image is created on a photo sensor instead of traditional photographic material. An image presented in digital form is suitable for further computer processing; therefore digital imaging is often considered part of information technology.

Digital technologies are used in digital photo and video cameras, in fax and copying devices with various photo sensors, and in recording and transferring analog or digital signals. Achievements in photo sensor technology have allowed digital cameras to supersede film photography in the majority of spheres of application.


There are two basic ways of digital representation of images (fig.2):

Raster graphics

Vector graphics

FIGURE 2 – VECTOR AND RASTER FORMAT IMAGES

(http://www.edc.uri.edu/criticallands/raster.html)

2.9.1.1 Raster graphics

As the input for the required software will be given as raster images, it is important to understand the principles of raster graphics.

A raster image is a grid (raster) whose cells are called pixels. A pixel in a raster image has a strictly defined position and color; hence any object is represented by the program as a set of colored pixels, and the user, when working with raster images, in fact works with the groups of pixels that make them up.

The image is represented by a considerable number of points: the more points there are, the better the visual quality of the image, and the larger the file size. In other words, one and the same picture can be presented with good or bad quality depending on the number of points per unit of length - the resolution (usually measured in dots per inch, dpi, or pixels per inch, ppi).

There are different types of raster images. They differ from each other in the way the color or brightness information of a pixel is represented and stored. Color is formed as a result of mixing several components, which can be set in various color systems (color spaces).


The term color depth designates how many bits are necessary to store the color information of a pixel. Color depth is measured in bits per pixel.

The volume of memory necessary to store a raster image can be calculated by the formula:

V = c · r · d / 8

Eq.10

𝑐 - number of columns;

𝑟 - number of rows;

𝑑 - color depth (bits per pixel).
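Applying Eq.10 to a typical frame makes the formula concrete. The dimensions below are assumed for illustration:

```python
# Memory required by Eq.10: V = c * r * d / 8  (bytes)
c = 1024   # number of columns
r = 768    # number of rows
d = 8      # color depth in bits per pixel (8-bit grayscale)

V_bytes = c * r * d / 8
print(V_bytes)          # 786432.0 bytes
print(V_bytes / 1024)   # 768.0 KiB
```

So an uncompressed 1024×768 8-bit grayscale frame occupies exactly 768 KiB.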

2.9.1.2 Important characteristics

The important characteristics of a raster image are:

the number of pixels, which can be specified separately as width and height (1024x768, 640x480, ...) or, less often, as the total number of pixels (usually measured in megapixels);

the number of used colors (or the color depth);

the color space: RGB, CMYK, XYZ, YCbCr, etc.

Raster images are edited by means of raster graphic editors. They are created by cameras and scanners, in raster editors, by export from vector editors, or as screenshots.

2.9.1.3 Advantages

Raster graphics can create (reproduce) practically any figure, regardless of complexity, unlike, for example, vector graphics, which cannot precisely convey a smooth transition from one color to another (in theory it is possible, but for complex images a 1 MB file in BMP format may take about 200 MB in a vector format).

Prevalence - raster graphics are now used practically everywhere, from small badges to posters.

High processing speed for complex images, if scaling is not necessary.

Raster representation of an image is natural for the majority of graphic input/output devices, such as the monitor, printer, digital camera, scanner, etc.


2.9.1.4 Disadvantages

Large file sizes for simple images.

Impossibility of ideal scaling.

Because of these disadvantages, it is recommended to use vector graphics for storing simple figures instead of even compressed raster graphics.

2.9.1.5 Formats

Raster images are usually stored in a compressed way. Depending on the type of compression, it may or may not be possible to restore the image to the quality it had before compression (lossless or lossy compression, respectively). A graphic file can also store additional information: author-related information, the camera and its settings, the number of points per centimeter for printing, etc.

2.9.1.6 Types of raster images

There are the following types of raster images, each of which is intended for solving a certain range of problems:

binary

grayscale

indexed

true colour

2.9.1.7 Grayscale images

The grayscale type of image is reviewed here because the input data will be presented as a set of grayscale raster images.

So-called grayscale, or intensity, images are images whose pixels can each take one intensity value in a range from the minimal up to the maximal intensity (fig.3). Usually it is assumed that an intensity picture stores gradations of grey in a range from black to white. Therefore intensity pictures are sometimes called "grey" images, or images in gradations of grey, and the term "brightness of a pixel" is used as a synonym of "intensity".


Currently, grayscale pictures with a color depth of 8 bits/pixel have the widest application; they can store 256 values of brightness (from 0 up to 255). Grayscale pictures with color depths from 2 up to 16 bits/pixel are used less often.

FIGURE 3: VARIATIONS OF GRAYSCALE INTENSITIES

(http://www.kumagera.ne.jp/kkudo/grayscale.jpg)

2.9.2 Image processing

As mentioned above, the software is required to measure certain parameters of the grains in the image. To achieve that, methods of "image processing" should be used in the required software.

Image processing is any form of information processing for which the input data is presented by images, for example photos or video frames. Images can be processed either to obtain an image as the output (for example, preparation for polygraphic duplication, for broadcasting, etc.) or to obtain other information (for example, recognition of text, counting and classifying cells in the field of a microscope, etc.). Besides static two-dimensional images, it is also often required to process images changing with time, for example video.

The variety of purposes and problems of image processing can be classified as follows:

Improvement of image quality;

Measurements on images;

Spectral analysis of multidimensional signals;

Recognition of images;

Compression of images.

(Wikipedia, accessed 03.03.2009; John C. Russ, The Image Processing Handbook, 2006)


3.0 Methodology

3.0.1 Package used for software development

3.0.1.1 Matlab Overview

MATLAB was selected for writing the code of the required software due to its availability and convenience. Its Image Processing Toolbox makes working with images much simpler.

MATLAB as a programming language was developed at the end of the 1970s. MATLAB (from "Matrix Laboratory") refers both to a package of applied programs for solving technical computing problems and to the programming language used in this package. MATLAB is used by more than 1,000,000 engineers and scientists, and it works on the majority of modern operating systems, including GNU/Linux, Mac OS, Solaris and Microsoft Windows.

MATLAB is a high-level programming language that includes matrix-based data structures, a wide spectrum of functions, an integrated development environment, object-oriented capabilities, and interfaces to programs written in other programming languages.

Programs written in MATLAB can be of two types - functions and scripts. Functions have input and output arguments, as well as their own workspace for storing intermediate results of calculations and variables; scripts use the common workspace. Neither scripts nor functions are compiled into machine code; they are kept in the form of text files. There is also an option to keep so-called pre-parsed programs - functions and scripts processed into a form convenient for machine execution. Such programs generally run faster than ordinary ones, especially if a function contains commands for constructing graphs and other figures.

The basic feature of the MATLAB programming language is its wide capability for working with matrices, which the developers of the language have expressed in the slogan "Think vectorized".


3.0.1.2 Image Processing Toolbox

The Image Processing package gives scientists, engineers and even artists a wide spectrum of tools for digital processing and analysis of images. Being closely connected with the MATLAB application development environment, the Image Processing Toolbox helps an engineer avoid long operations of coding and debugging of algorithms, allowing effort to be concentrated on solving the basic scientific or practical problems.

The basic features of the package are:

Restoration and extraction of image details

Work with a selected region of the image

Analysis of the image

Linear filtering

Transformation of images

Geometric transformations

Contrast enhancement of important details

Binary transformations

Image processing and statistics

Color transformations

Change of palette

Conversion between image types

The Image Processing package gives wide opportunities for the creation and analysis of graphic representations in the MATLAB environment. It provides a flexible interface allowing the user to manipulate images, interactively develop graphics, visualize data sets and annotate results for descriptions, reports and publications. This flexibility, and the connection of the package's algorithms with MATLAB's matrix-vector representation, makes the package very well adapted to solving problems in development and presentation. MATLAB includes specially developed procedures that raise the efficiency of the graphics core. In particular, the following features can be noted:

Interactive debugging during the development of graphics;

A profiler for optimizing algorithm execution time;


Interactive GUI construction tools (GUI Builder) for accelerating the development of GUI templates, allowing them to be adjusted to the goals and problems of the user.

This package allows the user to spend much less time and effort on the creation of standard graphic representations and thus to concentrate on the important details and features of images.

MATLAB and the Image Processing Toolbox are highly adapted to the development and introduction of the user's new ideas and methods. For this purpose there is a set of companion toolboxes directed at solving all manner of specific and non-standard problems.

The Image Processing package is now intensively used in thousands of companies and universities worldwide, across a very broad range of problems - for example space research, military development, astronomy, medicine, biology, robotics, materials science, genetics, etc.

3.1 Experimental Set up

A set of high-quality images of sediment had to be taken to test the different ideas and methods during the development of the software. In practice, a video sample of bed load would be recorded at water channels or rivers for further analysis; but for the required experiments, a simplified model can be used efficiently.

The simplified model consists of (fig.4):

A rectangular fish tank filled with water, and with enough gravel that the whole tank bottom is uniformly covered with sediment

A digital camera

A tripod

Two lighting sources (desk lamps)

A simple white reflector


FIGURE 4: SIMPLIFIED MODEL OF THE CHANNEL

The fish tank represents a section of a water channel where sediment transport occurs. The camera is mounted on the tripod perpendicular to the surface of the water. The tripod ensures the stability of the camera, so that each shot is made from exactly the same position. This is very important because the images will often be analyzed in pairs, and even minor image displacements can create unnecessary problems.

Each camera shot is treated as one frame of the video that will be analyzed by the software. To simulate the movement of the sediment, some grains are manually moved between camera shots. All the images are converted from colour to grayscale, as in reality the input images obtained from the high-speed cameras will be given in grayscale mode.

To detect the edges of the grains efficiently, it is important to create lighting that leaves shaded areas in the gaps between the grains; to have as little effect on those shadows as possible, the lighting source is placed almost parallel to the surface of the sediment.

A white reflector is placed opposite the lighting source. It is used to create diffuse light that helps to highlight the surface of the sediment without totally excluding the shadows from the gaps. This configuration was obtained experimentally and proved to be the most efficient.


3.1.1 Programming Package features related to software design

3.1.1.1 Systems of co-ordinates

Two coordinate systems are used in the IPT (Image Processing Toolbox) package: pixel and spatial. The majority of the package functions use the pixel coordinate system, a number of functions use the spatial coordinate system, and some functions can work with both. Only the pixel coordinate system can be used when writing one's own scripts that reference pixel values in MATLAB.

The pixel coordinate system is traditional for digital image processing. In it, an image is represented as a matrix of discrete pixels. To reference a pixel of image "I", it is necessary to specify the row number "r" and the column number "c" at whose intersection the pixel is located: I(r,c). Rows are numbered from top to bottom, and columns from left to right (fig.5). The top left pixel has coordinates (1,1). Only the pixel system will be used in the current software; information about the spatial coordinate system can be found in the Appendix.

FIGURE 5 -PIXEL SYSTEM OF COORDINATES

3.0.1.1 List of Matlab commands and functions used in experiments.

In this part, all the main MATLAB and Image Processing Toolbox functions necessary for understanding the experimental codes will be discussed:

imread

Reads an image from a file.

The function D = imread(filename, fmt) reads a binary, grayscale or indexed image from the file named "filename" and places it in the array "D". If MATLAB cannot find a file named "filename", it searches for a file named "filename" with the extension "fmt". The parameters "filename" and "fmt" are strings.


imwrite

Writes an image to a file.

The function imwrite(S, filename, fmt) writes a binary, grayscale or indexed image S to the file named "filename". The file format is defined by the parameter "fmt". The parameters "filename" and "fmt" are strings.

adapthisteq

The function J = adapthisteq(I) improves the contrast of a grayscale picture I by transforming the values of its elements with the method of contrast-limited adaptive histogram equalization (CLAHE). CLAHE works on small local neighbourhoods of the image rather than on the full image; contrast, especially in homogeneous neighbourhoods, is limited in order to avoid amplifying the noise component.

medfilt2

The function q = medfilt2(q, [A B]) performs a median filtration with a filter kernel of A x B pixels. The filtration eliminates "salt-and-pepper" noise in the image q in the following way: all pixel values in the working area of the kernel are sorted by increasing brightness, and the central (median) element of the sorted row replaces the pixel under the kernel centre.

imabsdiff

The function Z = imabsdiff(X, Y) subtracts each element of image Y from the corresponding element of image X and places the absolute difference of these elements into the resulting variable Z.
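To make the core logic of medfilt2 and imabsdiff concrete, the following pure-Python sketch reproduces the two operations on a small test array. This is an illustration of the idea only, not the Toolbox implementation (which also handles borders, padding options and vectorization):

```python
def medfilt2(img, ksize=3):
    """Median filter with a ksize x ksize kernel; border pixels are left as-is."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    k = ksize // 2
    for r in range(k, h - k):
        for c in range(k, w - k):
            # Sort the kernel window by brightness; the median replaces the centre.
            window = sorted(img[rr][cc]
                            for rr in range(r - k, r + k + 1)
                            for cc in range(c - k, c + k + 1))
            out[r][c] = window[len(window) // 2]
    return out

def imabsdiff(x, y):
    """Element-wise absolute difference of two equally sized images."""
    return [[abs(a - b) for a, b in zip(rx, ry)] for rx, ry in zip(x, y)]

# A single 'salt' pixel (255) in a flat area is removed by the median filter:
img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(medfilt2(img)[1][1])            # 10
print(imabsdiff([[5, 9]], [[9, 5]]))  # [[4, 4]]
```

The example shows why median filtering suits "salt-and-pepper" noise: an isolated outlier never reaches the middle of the sorted window.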

3.0.1.2 Cycle operators

Similar and repeating operations are performed by means of the cycle operators "for" and "while". The for cycle is intended for performing a predetermined number of repeated operations; while is for operations where the number of required repetitions is unknown but the condition for continuing the cycle is known.


3.0.1.3 PARFOR loop

During the experiments a problem arose: most of the processes in the written functions were based on cycles, and it took more than 10 hours to process an image using standard loops. That is the reason the PARFOR cycle was introduced.

The general purpose of the PARFOR loop is to divide a cycle into independent parts and run them in parallel rather than as a single sequential cycle. This results in a significant increase in image processing speed; in the current project it helped to reduce more than 10 hours of processing time to a satisfactory 3-10 minutes.

More detailed information about the parfor loop can be found in the MATLAB help or on the Mathworks website (http://www.mathworks.com).
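The same divide-and-run-in-parallel idea can be sketched outside MATLAB. The Python snippet below is only an analogy: independent per-row work is handed to a pool of workers, much as parfor distributes loop iterations, and the result is identical to the serial loop:

```python
from concurrent.futures import ThreadPoolExecutor

def process_row(row):
    # Stand-in for per-row image work (e.g. edge checks along one scan line).
    return [v * 2 for v in row]

image = [[r * 10 + c for c in range(4)] for r in range(6)]

# Serial loop (like a plain 'for'):
serial = [process_row(row) for row in image]

# Parallel over independent iterations (like 'parfor'); map preserves order.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(process_row, image))

print(parallel == serial)   # True: same result, iterations just run concurrently
```

The key requirement, in both MATLAB and this sketch, is that the iterations are independent of one another.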

3.0.1.4 Converting image from colour to grayscale

As previously stated, colour images were taken for the experiments in the simplified model of a channel. Since high-speed cameras producing grayscale images are going to be used in real-life conditions, it is necessary to convert the obtained colour pictures to grayscale.

To do that, a function called "mrgb2gray" (written by Kristian Sveen, 10 Sep 2004) was taken from the MATLAB Central website. A detailed description of the code can be found at the hyperlink below.

http://www.mathworks.com/matlabcentral/fileexchange/5855

The figures below show the original colour image and the converted grayscale image. All the experiments will be done using the grayscale image (fig.7).

FIGURE 6 : ORIGINAL COLOUR IMAGE

FIGURE 7: IMAGE CONVERTED TO GRAYSCALE
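The essence of such a colour-to-grayscale conversion is a weighted sum of the R, G and B channels. The sketch below uses the common ITU-R BT.601 luminance weights; the actual mrgb2gray file may differ in detail:

```python
def rgb_to_gray(pixel):
    """Convert one (R, G, B) pixel to an 8-bit grey level using BT.601 weights."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(rgb_to_gray((255, 255, 255)))  # 255 (white stays white)
print(rgb_to_gray((0, 0, 0)))        # 0
print(rgb_to_gray((255, 0, 0)))      # 76 (pure red is fairly dark in grey)
```

The weights reflect the eye's differing sensitivity to the three channels, with green contributing the most to perceived brightness.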


3.2 Proposed software working principles

Proposed algorithm Nr.1:

1. Detect the grain edges on the first image; segment and label each grain.
2. Detect the grain edges on the second image; segment and label each grain.
3. Compare the two images and identify the grains that have moved.
4. Measure the area of the moved grains and approximate the volume of those grains.
5. Work out the volumetric movement rate of the sediment.

FIGURE 8: PROPOSED ALGORITHM NR.1

Proposed algorithm Nr.2:

1. Take Image 1 and Image 2; calculate the absolute difference between the frames and highlight the areas where the values have changed.
2. Identify which of the highlighted areas is the grain movement starting point and which is the grain stopping point.
3. Estimate the area of the grain from the highlighted area and approximate the volume of the grain.
4. Work out the volumetric flow rate of the sediment.

FIGURE 9: PROPOSED ALGORITHM NR.2


3.2.1 Proposed program concept Nr. 1

3.2.1.1 Description

The main purpose of the current experiment is to find out whether segmentation based on object edge detection is efficient in the present situation. Considering that the provided image cannot be classified as easy to segment, due to its properties (non-homogeneous grain surfaces, non-sharp edges, shading, etc.), many different methods of edge detection are used in a set of experiments.

Additionally, the method of "watershed segmentation" was tested together with the edge detectors.

3.2.1.2 Background

Grain detection

In this method, grain detection and segmentation is the key problem that needs to be solved. Once an appropriate solution to this problem is found, work on the other blocks of the algorithm can be started.

Segmentation

Image segmentation is the division or splitting of an image into regions by similarity of the properties of their points. Primary allocation of the required objects in the initial grayscale image by means of a segmentation transformation is one of the basic stages of image analysis. The most widely used transformations are brightness-based and contour-based; some researchers include textural segmentation among the basic methods as well. According to this classification, regions are allocated during segmentation on the basis of an estimate of conformity to some criterion of similarity of values: either the brightness of each point, or the first derivative of brightness in some specified neighbourhood of each point, or any of the textural characteristics of the brightness distribution in that neighbourhood. (Gonzalez, 2004)

Edges

Edges are curves in the image along which there is a sharp change of brightness or of its derivatives with respect to the spatial variables. The most interesting are those changes of brightness which reflect important features of the represented surface: places where the surface orientation changes in steps, where one object occludes another, where the boundary of a cast shadow falls across the object, or where there is a discontinuity in the reflective properties of a surface, etc.

It is quite natural that noisy brightness measurements limit the possibility of extracting edge information. A contradiction arises between sensitivity and accuracy: short edges must possess higher contrast than long ones in order to be distinguished. Edge detection can be considered a complement to image segmentation, since edges can be used to split images into areas corresponding to different surfaces.

3.2.1.3 Segmentation Method 1

Description

This experiment was the first one. Its primary purpose was to check how efficient the simplest method of edge detection is, and its secondary objective was to test how fast the MATLAB package processes images using the simplest algorithms.

The idea is that ideal borders show a rapid change in grayscale intensity (brightness). The concept of the method is the comparison of grey intensities of each adjacent pair of pixels in the image in the vertical, horizontal and diagonal directions. If a certain predetermined difference is reached, the pixels are marked as belonging to a grain edge.

function horscan_i

Input

"T" - an image or video frame that needs to be analyzed

"differ" - the minimum intensity difference between adjacent pixels at which they are marked as an edge (the sensitivity of the edge scanner)

Output

"marked" - a variable that contains an image of the grain borders obtained by the current method


Basic steps

1) The image is read into a variable:

T = imread('imname.bmp')

2) The image size is measured so that the number of cycle steps can be set later on:

siz = size(T)

3) The critical difference value between adjacent pixels is set:

differ = 8

4) All pairs of adjacent pixels are checked to see whether they reach the critical intensity difference described in the previous step.

a) The checking process is a cycle in which each pair of adjacent pixels is processed in sequence; when a whole horizontal line is fully processed, the next line is analyzed. The loop bounds stop one pixel short of the image edges so that the neighbour indices stay inside the image:

for hor = 1:(siz(1,2) - 1)

parfor ver = 1:(siz(1,1) - 1)

b) The difference between adjacent pixels in the horizontal direction is computed and compared with the pre-set "differ" value. If it is reached, the pixel is marked as an edge in a variable that stores the coordinates of the grain edges:

if (max(T(ver,hor), T(ver,hor + 1)) - min(T(ver,hor), T(ver,hor + 1))) >= differ

marked(ver,hor) = 1;

c) The same check is made for adjacent pixels in the vertical direction:

elseif (max(T(ver,hor), T(ver + 1,hor)) - min(T(ver,hor), T(ver + 1,hor))) >= differ

marked(ver,hor) = 1;

d) And for adjacent pixels in the diagonal direction:

elseif (max(T(ver,hor), T(ver + 1,hor + 1)) - min(T(ver,hor), T(ver + 1,hor + 1))) >= differ

marked(ver,hor) = 1;
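The steps above can be sketched in Python on a small grayscale array. This is an illustration of the logic only (the project code itself is MATLAB); the bright square stands in for a grain on a darker bed:

```python
def horscan(T, differ=8):
    """Mark a pixel as an edge if an adjacent pixel (to the right, below, or
    on the down-right diagonal) differs in intensity by at least 'differ'."""
    rows, cols = len(T), len(T[0])
    marked = [[0] * cols for _ in range(rows)]
    for ver in range(rows - 1):
        for hor in range(cols - 1):
            if (abs(T[ver][hor] - T[ver][hor + 1]) >= differ          # horizontal
                    or abs(T[ver][hor] - T[ver + 1][hor]) >= differ   # vertical
                    or abs(T[ver][hor] - T[ver + 1][hor + 1]) >= differ):  # diagonal
                marked[ver][hor] = 1
    return marked

# A bright 'grain' on a dark background: the intensity jump is caught at its border.
T = [[10, 10, 10, 10],
     [10, 60, 60, 10],
     [10, 60, 60, 10],
     [10, 10, 10, 10]]
for row in horscan(T):
    print(row)
# → marks a ring around the grain:
# [1, 1, 1, 0]
# [1, 0, 1, 0]
# [1, 1, 1, 0]
# [0, 0, 0, 0]
```

Note that the interior of the grain is not marked: only pixels next to an intensity jump qualify, which is exactly the behaviour described for the MATLAB function.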


Results

As seen from the figures below, the method works to some extent. It captures some of the grain borders, and the number of captured borders depends on the "differ" parameter. Decreasing this parameter increases the sensitivity, so more borders are captured; but with higher sensitivity, more noise comes along with the better borders.

Noise might be acceptable in this case, as long as the number of noise particles still allows it to be efficiently excluded and distinguished from the borders. Noise can possibly be excluded by size, solidity and other parameters that MATLAB can work with.

FIGURE 10: ORIGINAL IMAGE

FIGURE 11: SENSITIVITY PARAMETER "DIFFER" = 7

FIGURE 12: SENSITIVITY PARAMETER "DIFFER" = 10

FIGURE 13: SENSITIVITY PARAMETER "DIFFER" = 13

The figures below show examples of significant noise reduction, even on images obtained at high sensitivity ("differ" = 7). In Figure 14 noise was reduced by smoothing the original image with the function medfilt2, and in Figure 15 by excluding objects by their area with the function rem_small, written by the author of this report (the function codes can be found in Appendix A).

FIGURE 14: NOISE REDUCED BY MEDIAN FILTERING

FIGURE 15: NOISE REDUCED BY REM_SMALL FUNCTION
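The idea of removing noise objects by their area can be sketched as follows. The actual rem_small code is in Appendix A; this is a hypothetical reimplementation using simple 4-connected component labelling:

```python
def remove_small(mask, min_area):
    """Zero out 4-connected components of 1-pixels whose area is below min_area."""
    rows, cols = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                # Flood-fill one connected component, collecting its pixels.
                stack, comp = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_area:      # too small: treat as noise
                    for y, x in comp:
                        out[y][x] = 0
    return out

# One 4-pixel 'border fragment' survives; two isolated noise pixels are removed.
mask = [[1, 1, 0, 1],
        [1, 1, 0, 0],
        [0, 0, 0, 1]]
print(remove_small(mask, min_area=2))
# → [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0]]
```

The same labelling step also opens the door to filtering by other shape parameters (solidity, elongation), as mentioned above.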

It can be seen that some of the grains are heavily contaminated by noise regardless of the sensitivity parameter. The reason for that is the non-homogeneous surface of those grains, which is clearly visible in the original image.

Conclusion

Despite all its disadvantages, the method captures some of the borders and might find an application. It could work efficiently in combination with other methods of grain border detection: either at low sensitivity, giving little or no noise, or at higher sensitivity, giving more noise but combined with efficient measures to separate grain borders from noise particles.


3.2.1.4 Segmentation Method 2

Description

The idea of this experiment is to detect grain borders by comparing the rates of change of pixel intensity values around a control pixel. It is assumed that the rate of change of grey intensity varies rapidly at an object border.

FIGURE 16-ANALYZED PIXEL LOCATIONS

First, the rate of change of intensity from pixel A1 to the central pixel B1 is calculated as the numerical difference between the pixel values. The same is then done between pixels B1 and C1. Finally, the rate of change on the left of the middle pixel is compared with the rate of change on its right side: if the two rates differ by a reasonable amount (manually defined by the user), the middle pixel B1 is marked as a pixel belonging to the border. The same sequence is used to calculate the vertical rates of change.

Matlab algorithm based on the proposed method

function rate_of_change_scan

Input

T - an image or the video frame that needs to be analyzed

differ - the minimum difference between rates of change at which the central pixel is marked as an edge (the sensitivity of the edge scanner)

Output

marked - A variable that contains an image of the grain borders obtained by the

current method.


Basic steps

1) The image is read into a variable:

T = imread('imname.bmp')

2) The image size is measured so that the number of cycle steps can be set later on:

siz = size(T)

3) The critical rate-of-change difference is set. The smaller its value, the more sensitive the edge detector is:

differ = 8

4) With the aid of cycles,

for ver = 2:(siz(1,1) - 1)

parfor hor = 2:(siz(1,2) - 1)

each pixel in turn becomes the control pixel and the rates of change with its adjacent pixels are checked. The pixel values are converted to double so that negative differences are not clipped by the unsigned integer image type.

a) The rate of change is calculated with the pixel on the left:

ROC_left = double(T(ver,hor)) - double(T(ver,hor - 1))

b) The rate of change is calculated with the pixel on the right:

ROC_right = double(T(ver,hor)) - double(T(ver,hor + 1))

c) The rate of change is calculated with the pixel below:

ROC_down = double(T(ver,hor)) - double(T(ver + 1,hor))

d) The rate of change is calculated with the pixel above:

ROC_up = double(T(ver,hor)) - double(T(ver - 1,hor))

5) Difference of intensity change rates around the control point is calculated

a) In the horizontal plane first

ROC_diff_hor = imabsdiff(ROC_left,ROC_right)

b) Then in the vertical plane

ROC_diff_vert = imabsdiff(ROC_up,ROC_down)

6) Difference obtained in previous step is compared with the critical difference value set

by user. If the critical value is exceeded, the central control pixel is marked in a

variable “marked” that highlights the pixels belonging to the border.


if ROC_diff_hor >= differ

marked(ver,hor) = 1;

elseif ROC_diff_vert >= differ

marked(ver,hor) = 1;
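The rate-of-change comparison can likewise be sketched in Python on a small array. This is illustrative only; the project code itself is MATLAB:

```python
def rate_of_change_scan(T, differ=8):
    """Mark a pixel as an edge when the intensity slopes on its two sides
    (left vs right, or up vs down) differ by at least 'differ'."""
    rows, cols = len(T), len(T[0])
    marked = [[0] * cols for _ in range(rows)]
    for ver in range(1, rows - 1):
        for hor in range(1, cols - 1):
            roc_left  = T[ver][hor] - T[ver][hor - 1]
            roc_right = T[ver][hor] - T[ver][hor + 1]
            roc_up    = T[ver][hor] - T[ver - 1][hor]
            roc_down  = T[ver][hor] - T[ver + 1][hor]
            if (abs(roc_left - roc_right) >= differ
                    or abs(roc_up - roc_down) >= differ):
                marked[ver][hor] = 1
    return marked

# A gentle ramp (slope 2) with one sudden jump: only the jump pixels are marked.
T = [[0, 2, 4, 40, 42, 44]] * 3   # three identical rows
print(rate_of_change_scan(T, differ=8)[1])
# → [0, 0, 1, 1, 0, 0]
```

Unlike the simple difference scan of Method 1, a uniform gradient is not marked here; only a change in the slope itself triggers an edge, which is the stated assumption of the method.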

Results

FIGURE 17: ORIGINAL GRAYSCALE IMAGE

FIGURE 18: SENSITIVITY PARAMETER “DIFFER” = 10

FIGURE 19: SENSITIVITY PARAMETER “DIFFER” = 7

FIGURE 20:SENSITIVITY PARAMETER “DIFFER” = 5


The images above show that without preliminary preparation of the processed image (such as smoothing, contrast improvement, etc.), the given method highlights edges efficiently only at a high user-set sensitivity, which also produces a large amount of noise. The situation is very similar to the results of "Experiment 1": as the sensitivity increases, the amount of noise increases as well, so similar measures are needed to cope with the noise.

After a number of tests, some noise-reduction methods proved to be relatively efficient in this case:

Preliminary smoothing of the original image using the function medfilt2, with a 6x6-pixel neighbourhood. (figure 23)

A combination of contrast enhancement using the function adapthisteq and image smoothing using the function medfilt2. (figure 24)
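The smoothing step can be illustrated without the Image Processing Toolbox. Below is a minimal pure-Python median filter over a square neighbourhood, a sketch of what medfilt2 does (a 3x3 window is used for brevity; border pixels are left untouched rather than zero-padded as medfilt2 does):

```python
from statistics import median

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood.
    Pixels whose window would leave the image keep their original value."""
    r = k // 2
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            window = [img[i + di][j + dj]
                      for di in range(-r, r + 1)
                      for dj in range(-r, r + 1)]
            out[i][j] = median(window)
    return out
```

A single noise pixel ("salt") in an otherwise flat area is replaced by the median of its neighbours, which is why median filtering suppresses the isolated-pixel noise produced by the edge scanners.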

FIGURE 21: SENSITIVITY PARAMETER “DIFFER” = 5

FIGURE 22: SENSITIVITY PARAMETER “DIFFER” =9

Conclusion

Compared with the method described in "Experiment 1", both of these methods capture grain edges with similar quality.

Used on its own, this method does not produce edges of sufficient quality to segment the grains even satisfactorily. Thus the current method cannot be used independently but, again, it might be combined with other edge scanners to provide those parts of the edges that an edge detector of a different concept could not capture.


3.0.1.5 Segmentation Method 3

Description

This method is called “Marker-Controlled Watershed Segmentation”.

The development of image processing technologies has led to new approaches to image segmentation and to their application in many practical problems.

In this experiment a relatively new approach to image segmentation is considered: the watershed method. The name of this method and its essence are briefly explained below.

The image is treated as a relief map in which brightness values represent heights above some reference level. If this relief is flooded with water, pools form. As the water level rises, these pools merge, and the places where they merge are marked as watershed lines.

Separating touching objects in an image is one of the important problems of image processing. So-called Marker-Controlled Watershed Segmentation is often used to solve it. In this method, "catchment basins" and "watershed lines" are defined on the image by processing local areas according to their brightness characteristics.

Matlab algorithm based on the proposed method

During this experiment, the instructions described on the website below were followed, and parts of the code were copied and reused.

http://www.mathworks.com/products/image/demos.html?file=/products/demos/shipping/i

mages/ipexwatershed.html


function watershed_segmentation

Basic steps

1) Reading the colour image and converting it to grayscale.

Read the data from a file:

rgb = imread('G:\Matlab\tested_image.jpg');

and convert it to a grayscale picture:

I = rgb2gray(rgb);

imshow(I)

text(732,501,'Image courtesy of Corel(R)',...
    'FontSize',7,'HorizontalAlignment','right')

FIGURE 23: OBTAINED GRAYSCALE PICTURE

2) Using the gradient magnitude as the segmentation function.

The gradient magnitude is computed using the Sobel edge masks, the function imfilter and some additional calculations. The gradient has large values on object borders and (in most cases) small values elsewhere.

hy = fspecial('sobel');

hx = hy';


Iy = imfilter(double(I), hy, 'replicate');

Ix = imfilter(double(I), hx, 'replicate');

gradmag = sqrt(Ix.^2 + Iy.^2);

figure, imshow(gradmag, []), title('Gradient magnitude')
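The gradient-magnitude computation of step 2 can be sketched in plain Python without any toolbox, using the standard Sobel kernels (for brevity, imfilter's 'replicate' border handling is omitted and border pixels are left at zero):

```python
import math

SOBEL_Y = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]   # horizontal-edge kernel
SOBEL_X = [[1, 0, -1], [2, 0, -2], [1, 0, -1]]   # its transpose

def gradient_magnitude(I):
    """Return sqrt(Ix^2 + Iy^2) for the interior pixels of a
    grayscale image given as a list of lists."""
    rows, cols = len(I), len(I[0])
    gradmag = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            ix = iy = 0
            for di in range(3):
                for dj in range(3):
                    p = I[i + di - 1][j + dj - 1]
                    ix += SOBEL_X[di][dj] * p
                    iy += SOBEL_Y[di][dj] * p
            gradmag[i][j] = math.sqrt(ix * ix + iy * iy)
    return gradmag
```

As the text notes, the magnitude is large across an intensity step (an object border) and zero in flat regions, which is exactly the property the watershed transform exploits.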

FIGURE 24:GRADIENT SEGMENTATION

3) Marking the foreground objects.

Various procedures can be used to mark the foreground objects. Here the morphological techniques called "opening by reconstruction" and "closing by reconstruction" are used. These operations allow the internal areas of image objects to be analyzed by means of the function imregionalmax.

As mentioned above, morphological operations are used when marking the foreground objects. Some of them will be considered and compared.

First, opening with the function imopen is performed.

se = strel('disk', 20);

Io = imopen(I, se);

figure, imshow(Io), title('Io')


FIGURE 25-FOREGROUND MARKING

Next, opening by reconstruction is computed using the functions imerode and imreconstruct.

Ie = imerode(I, se);

Iobr = imreconstruct(Ie, I);

figure, imshow(Iobr), title('Iobr')

FIGURE 26:IMAGE ERODE

The subsequent morphological opening and closing operations remove dark spots and form the markers. The morphological closing operations are analyzed below. For this purpose the function imclose is used first:

Ioc = imclose(Io, se);

figure, imshow(Ioc), title('Ioc')


FIGURE 27:IMAGE CLOSE

Next the function imdilate is applied together with the function imreconstruct. For the reconstruction it is necessary to complement the images first.

Iobrd = imdilate(Iobr, se);

Iobrcbr = imreconstruct(imcomplement(Iobrd), imcomplement(Iobr));

Iobrcbr = imcomplement(Iobrcbr);

figure, imshow(Iobrcbr), title('Iobrcbr')

FIGURE: IMDILATE

A comparative visual analysis of Iobrcbr and Ioc shows that reconstruction-based opening and closing is more effective than the standard opening and closing operations.

The local maxima of Iobrcbr are calculated to obtain the foreground markers.


fgm = imregionalmax(Iobrcbr);

figure, imshow(fgm), title('fgm')

FIGURE 28:IMREGIONALMAX FUNCTION USED

The foreground markers are superimposed on the initial image.

I2 = I;

I2(fgm) = 255;

figure, imshow(I2), title('fgm superimposed on the initial image')

FIGURE 29:IMPOSE MARKERS ON THE IMAGE

Some hidden or occluded objects in the image are not marked. This affects the result: such objects will not be processed from the segmentation point of view. Thus, even in ideal conditions, the foreground markers outline only the majority of the objects, not all of them.


Some of the foreground markers cross the edges of the grains. The markers should be cleaned and shrunk to allow further processing; in particular, morphological operations can be used.

se2 = strel(ones(5, 5));

fgm2 = imclose(fgm, se2);

fgm3 = imerode(fgm2, se2);

As a result of this operation, separate isolated pixels of the image disappear. It is also possible to use the function bwareaopen, which removes connected components smaller than a given number of pixels.

fgm4 = bwareaopen(fgm3, 20);

I3 = I;

I3(fgm4) = 255;

figure, imshow(I3)

title('fgm4 superimposed on the initial image')

FIGURE 30: BWAREAOPEN FUNCTION

4) Computing the background markers.

Now the background is marked. In the image Iobrcbr, dark pixels belong to the background, so a thresholding operation can be applied.

bw = im2bw(Iobrcbr, graythresh(Iobrcbr));

figure, imshow(bw), title('bw')


FIGURE 31:THRESHOLD OPERATION

The background pixels are dark; however, it is not enough to apply simple morphological operations to the background markers to obtain the borders of the segmented objects. The background is therefore "thinned" to obtain a reliable skeleton of the image. This is computed with the watershed approach applied to the distance transform (distances to the watershed lines).

D = bwdist(bw);

DL = watershed(D);

bgm = DL == 0;

figure, imshow(bgm), title('bgm')

FIGURE 32: WATERSHED LINES


5) Computing the watershed transform of the segmentation function.

The function imimposemin can be applied to define the local minima of the image exactly. On this basis, imimposemin can also correct the gradient values of the image and thus enforce the positions of the foreground and background markers.

gradmag2 = imimposemin(gradmag, bgm | fgm4);

Finally, the watershed-based segmentation is carried out.

L = watershed(gradmag2);

6) Visualization of the processing result.

The foreground markers, the background markers and the borders of the segmented objects are superimposed on the initial image.

I4 = I;

I4(imdilate(L == 0, ones(3, 3)) | bgm | fgm4) = 255;

figure, imshow(I4)

title('Markers and object borders superimposed on the initial image')

FIGURE 33:BORDERS MARKED

This display makes it possible to visually analyze the positions of the foreground and background markers.


Displaying the results as a colour image is also useful. The label matrix generated by the functions watershed and bwlabel can be converted into a truecolor image by means of the function label2rgb.

Lrgb = label2rgb(L, 'jet', 'w', 'shuffle');

figure, imshow(Lrgb)

title('Lrgb')

FIGURE 34:DISPLAY THE RESULTS

It is also possible to use a translucent mode to superimpose the pseudo-colour label matrix on the initial image.

figure, imshow(I), hold on

himage = imshow(Lrgb);

set(himage, 'AlphaData', 0.3);

title('Lrgb superimposed on the initial image in translucent mode')


FIGURE 35:RESULTS IMPOSE ON ORIGINAL IMAGE

A combination of the contrast enhancement function adapthisteq and the image smoothing function medfilt2 was used as preliminary processing, as under the current conditions it produced the best segmentation result of all the tested variants.

Conclusion

It is seen from Fig. 38 that only a few of the grains are segmented properly, and some of the captured grains are over-segmented. Such unimpressive grain capture can be explained by non-uniform lighting, the absence of a well-defined background and the non-homogeneous surface of most grains.

The method might work with different efficiency depending on a variety of conditions (such as lighting, background, etc.).

3.0.1.6 Segmentation Method 4

Description

This method is interesting in that it was discovered by accident.

During experiments on the pixel-value-difference method (described starting from page 55), when comparing two coherent frames and highlighting the areas


whose values had changed, it was noticed that the grain borders were highlighted to some extent as well (fig. 39).

FIGURE 36:ACCIDENTLY HIGHLIGHTED BORDERS

After analyzing the reasons for the highlighted grain borders, it was noticed that the processed coherent frames had been taken from slightly different perspectives. The camera had accidentally been moved by a few millimeters when the image-capture button was pressed, which shifted the second image slightly relative to the first.

It was assumed that such a shift can be simulated in the Matlab environment to construct an edge detector based on this idea.

The main idea is to shift the image by a few pixels, compare it with the original image, and highlight the pixel-value differences that exceed a critical value set by the user. If the picture is smoothed beforehand, the grains become relatively homogeneous, and the grain edge lines can be highlighted with little noise.
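The idea above can be sketched in a few lines of pure Python for a single (horizontal) shift direction, assuming the image is a list of lists of intensities; the full algorithm below repeats this for all four shift directions and combines the results:

```python
def shift_edge_detect(im, pix_move=1, pix_difference=8):
    """Shift the image horizontally by pix_move pixels, take the absolute
    difference with the original, and mark pixels where it reaches the
    threshold. Near-homogeneous grain interiors cancel out under the
    shift, so only the grain borders remain marked."""
    rows, cols = len(im), len(im[0])
    marked = [[0] * cols for _ in range(rows)]
    for ver in range(rows):
        for hor in range(cols - pix_move):
            diff = abs(im[ver][hor] - im[ver][hor + pix_move])
            if diff >= pix_difference:
                marked[ver][hor] = 1
    return marked
```

The sketch makes the working principle explicit: the simulated "vibration" is simply a self-difference of the image under a small translation, which acts as a directional edge detector.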

Matlab algorithm based on the proposed method

function [ dif ] = vibr1_1(imnam1,pix_move,pix_difference,rem_area)

Input

imnam1 = image name

rem_area = area of particles to be filtered out


pix_move = distance to shift the image, in pixels

pix_difference = critical pixel difference for highlighting, when comparing two frames.

Output

dif = variable containing the grain borders obtained by the current method

Basic steps

1) The image is read into a variable:

Im1 = imread(imnam1)

2) The image size is measured to set the loop bounds later on:

siz = size(Im1)

3) The critical difference value between coherent pixels is set.

pix_difference = 8

4) Shifted images are created and saved into predetermined variables.

a) The image shifting process is a loop that shifts pixels in each possible direction and saves the results into separate variables, so that the borders obtained from all shift directions can later be analyzed and combined.

for ver = (pix_move+1):(ver_size - pix_move)

pix_move_ver_down = ver - pix_move

pix_move_ver_up = ver + pix_move

parfor hor = (pix_move+1):(hor_size - pix_move)

b) Image shifted in the horizontal direction:

im2(ver,hor + pix_move) = im1(ver,hor);

c) Image shifted in the vertical direction:

im3(pix_move_ver_up,hor) = im1(ver,hor);

d) Image shifted in a diagonal direction (vertical + right):

im4(pix_move_ver_down,hor + pix_move) = im1(ver,hor);

e) Image shifted in a diagonal direction (vertical + left):

im5(pix_move_ver_up,hor + pix_move) = im1(ver,hor);


5) Absolute pixel differences are calculated between original and shifted images

a) Between original and horizontally shifted image

im_difference1_2 = imabsdiff(im1,im2);

b) Between original and vertically shifted image

im_difference1_3 = imabsdiff(im1,im3);

c) Between original and diagonally(vertically + right) shifted image

im_difference1_4 = imabsdiff(im1,im4);

d) Between original and diagonally(vertically + left) shifted image

im_difference1_5 = imabsdiff(im1,im5);

6) The differences between the images are checked against the critical value. If it is reached, the pixel is marked as a border pixel.

a) The current process is a loop,

for ver = (pix_move+1):(ver_size - pix_move)

pix_move_ver_down = ver - pix_move

pix_move_ver_up = ver + pix_move

parfor hor = (pix_move+1):(hor_size - pix_move)

in which pixels are sequentially checked against the critical value predetermined by the user.

The variable pix_move is used to exclude the grain border doubling that is caused by the image shifting.

b) The differences of all the images are checked against the critical value and the required pixels are marked.

if im_difference1_2(ver,hor) >= pix_difference

bi_im1_2(ver,hor - pix_move) = 1;

elseif im_difference1_3(ver,hor) >= pix_difference

bi_im1_3(pix_move_ver_down,hor) = 1;

elseif im_difference1_4(ver,hor) >= pix_difference

bi_im1_4(pix_move_ver_up,hor - pix_move) = 1;

elseif im_difference1_5(ver,hor) >= pix_difference

bi_im1_5(pix_move_ver_down,hor - pix_move) = 1;


The figures below (Fig. 40, Fig. 41, Fig. 42, Fig. 43) show the grain edges obtained with the different shift directions.

FIGURE 37: GRAIN EDGES OBTAINED USING

HORIZONTAL SHIFT

FIGURE 38: GRAIN EDGES OBTAINED USING

VERTICAL SHIFT

FIGURE 39: GRAIN EDGES OBTAINED USING

DIAGONAL(VERTICAL + RIGHT) SHIFT

FIGURE 40: GRAIN EDGES OBTAINED USING

DIAGONAL(VERTICAL + LEFT) SHIFT

7) The figures above show that the obtained edge images contain some amount of noise. As all of these grain edge parts will later be combined, it is preferable to remove the noise now, to avoid noise accumulating in the final image.

a) For that purpose, a function rem_small was written (Appendix A).


First, this function labels all the objects in the image.

Then it measures each object's area.

Finally, it excludes the objects whose area is less than a predetermined value.

b) Noise is removed from each image using function rem_small described above

bi_im1_2_mod = rem_small(bi_im1_2,rem_area)

bi_im1_3_mod = rem_small(bi_im1_3,rem_area)

bi_im1_4_mod = rem_small(bi_im1_4,rem_area)

bi_im1_5_mod = rem_small(bi_im1_5,rem_area)

An example of the noise removal result can be seen in the images below: Fig. 44 shows the image before noise removal and Fig. 45 the resultant image.

FIGURE 41: IMAGE BEFORE NOISE REMOVAL

FIGURE 42: IMAGE AFTER NOISE REMOVAL
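The three steps of rem_small (label, measure, exclude) can be sketched in pure Python as a flood-fill labelling followed by an area filter. This is only a sketch assuming 4-connectivity and a binary image as a list of lists; the actual rem_small in Appendix A may differ in detail:

```python
def rem_small(bi_im, rem_area):
    """Remove connected components of 1-pixels whose area is below rem_area."""
    rows, cols = len(bi_im), len(bi_im[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [row[:] for row in bi_im]
    for i in range(rows):
        for j in range(cols):
            if bi_im[i][j] and not seen[i][j]:
                # flood fill to collect one connected component
                stack, comp = [(i, j)], []
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and bi_im[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # exclude components that are too small to be grain edges
                if len(comp) < rem_area:
                    for y, x in comp:
                        out[y][x] = 0
    return out
```

In Matlab the same effect is obtained directly with bwareaopen, as already used in the watershed experiment.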

8) All the grain edge parts, now cleaned of noise, are combined into one image. The resultant image is cleaned of noise once more.

a) Simple summation of the matrices is used to create a final image containing the grain borders

bi_im_final = bi_im1_2_mod + bi_im1_3_mod + bi_im1_4_mod + bi_im1_5_mod

b) Noise is removed from the resultant image using function rem_small

bi_im_final = rem_small(bi_im_final,rem_area)


Results

The original grayscale image and the combined final border image are shown in the figures below.

FIGURE 43: ORIGINAL GRAYSCALE PICTURE

FIGURE 44: GRAIN BORDERS OBTAINED USING

VIBRATION SIMULATION

FIGURE 45: METHOD 1

FIGURE 46: METHOD 2

It is seen that the proposed method captures some of the grains perfectly, though some grain borders are not fully highlighted. The amount of noise is very small. Compared with the best-quality edges captured by Method 1 and Method 2 (Fig. 42, 43), the edges captured by the current method (fig. 47) are noticeably stronger and contain significantly less noise.


Conclusion

The method produced comparatively good edge detection results, but still not good enough to segment the grains with sufficient quality for further analysis of sediment discharge. It might still provide reasonable segmentation if used in combination with other methods. From the experiments described above it can be assumed that the efficiency of the current method, in terms of detected edge strength and amount of noise, is higher than that of Method 1 and Method 2.

3.0.1.7 Canny edge detector

Description

This is a rather complex method consisting of a large number of stages. Its essence is searching for local areas with brightness differences. The differences are found by filtering along each axis with a one-dimensional derivative-of-Gaussian filter. To classify the differences as "weak" or "strong", the Canny method uses two thresholds, a lower and an upper one. "Weak" edges are marked in the resultant image only if they are connected to "strong" ones. For noisy images this method provides the best edge detection in comparison with the other methods, but it demands more time.

[J. Canny, 1986. A Computational Approach to Edge Detection, IEEE Trans. Pattern Analysis and Machine Intelligence].

Matlab function: BW = edge(I, 'canny', thresh)

The parameter thresh can be a two-element vector; in this case the first element sets the lower threshold and the second element the upper threshold. If thresh is a scalar, it sets the upper threshold and 0.4*thresh is used for the lower one. If thresh is not given, or is an empty array, the threshold values are determined automatically.
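The two-threshold ("weak"/"strong") classification can be sketched in pure Python: weak pixels survive only if they are connected, possibly through other weak pixels, to a strong one. This is only the simplified hysteresis step with 4-connectivity, not the full Canny pipeline:

```python
def hysteresis(strength, low, high):
    """Keep pixels with edge strength >= high, plus pixels >= low that are
    4-connected (possibly through a chain of weak pixels) to a strong one."""
    rows, cols = len(strength), len(strength[0])
    keep = [[False] * cols for _ in range(rows)]
    # seed with the strong pixels
    stack = [(i, j) for i in range(rows) for j in range(cols)
             if strength[i][j] >= high]
    for i, j in stack:
        keep[i][j] = True
    # grow along connected weak pixels
    while stack:
        y, x = stack.pop()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols \
                    and not keep[ny][nx] and strength[ny][nx] >= low:
                keep[ny][nx] = True
                stack.append((ny, nx))
    return keep
```

This makes the role of the two elements of thresh concrete: lowering the upper threshold admits more seed pixels (more edges, more noise), while lowering the lower threshold lengthens the edges grown from each seed.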

Results

The images obtained with the Canny edge detector are either over-segmented (fig. 51, 52) or contain weak edges (fig. 50). The over-segmentation is probably caused by the high sensitivity of the scanner. It was found experimentally that there is no "thresh" sensitivity value


that would avoid over-segmentation and capture the grain borders at a satisfactory level.

FIGURE 47: ORIGINAL IMAGE

FIGURE 48: THRESH = 0.05

FIGURE 49: THRESH = 0.1

FIGURE 50: THRESH = 0.2

Conclusion

Based on the results of this experiment, it can be stated that the Canny filter is too sensitive for images of this type. By the time the sensitivity parameter "thresh" reaches the level at which the grain edges are detected efficiently, every single detail on each grain is segmented as well. This causes over-segmentation that makes further analysis of the picture impossible.


Conclusion on the Program Concept Nr.1

The flow diagram (fig. 9) shows that the primary step of the proposed algorithm is detection of the grain borders. Detection methods that work to some extent were found during the set of experiments described above. Unfortunately, the quality of the obtained borders and segmentation does not allow proceeding to the next step of the proposed algorithm. The probable reason is a combination of conditions that seriously complicate the segmentation (lighting conditions, non-homogeneous surface of the grains, absence of a defined background, shadows, etc.).

This is not to say that grain segmentation and edge detection is an impossible task in this case, but, based on the results, it is probably achievable only with extremely complicated image processing techniques.

Before tackling such a serious task, it is recommended to consider other methods that might be easier to implement.

3.0.2 Proposed program concept Nr. 2

Description

It is assumed that in the areas where grains have moved, the intensity values will differ noticeably between coherent frames in which movement has occurred.

The idea is to capture and highlight such areas, since ideally these areas contain the contours of the moved grains. After obtaining the direction of grain movement and estimating the volume, the volumetric bed load could be estimated.

Highlight modified areas

1) Absolute difference is obtained using imabsdiff function

im3 = imabsdiff(im1,im2)

2) The values obtained in the previous step are scanned with nested loops:

for ver = 1:(siz (1,1))

parfor hor = 1:(siz(1,2))


3) Each pixel is sequentially compared with a critical value pix_difference predetermined by the user. If the value is exceeded, the pixel is highlighted in the variable bi_im

if im3(ver,hor) >= pix_difference

bi_im(ver,hor) = 1
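The whole difference-highlight step can be sketched in a few lines of plain Python, assuming the two frames are equally sized lists of lists of intensities (a plain-Python analogue of imabsdiff followed by thresholding):

```python
def highlight_changes(im1, im2, pix_difference=8):
    """Binary map of pixels whose intensity changed by at least
    pix_difference between two coherent frames."""
    rows, cols = len(im1), len(im1[0])
    bi_im = [[0] * cols for _ in range(rows)]
    for ver in range(rows):
        for hor in range(cols):
            if abs(im1[ver][hor] - im2[ver][hor]) >= pix_difference:
                bi_im[ver][hor] = 1
    return bi_im
```

Everything that stayed still between the frames maps to 0, so only the regions affected by grain movement (and, as discussed below, by shading changes) are highlighted.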

Results

During the experiment, only one grain was displaced manually to simulate the effect of

grain movement in order to check the possibility of capturing grain contour efficiently.

FIGURE 51: HIGHLIGHTED AREAS WHERE PIXEL INTENSITY VALUES HAVE CHANGED MORE THAN PRESET

CRITICAL LEVEL.


FIGURE 52: HIGHLIGHTED AREAS WHERE PIXEL INTENSITY VALUES HAVE CHANGED MORE THAN PRESET

CRITICAL LEVEL.

Based on the results (figures 54, 55), the highlighted difference areas (in red circles) have, to some extent, contours similar to the contours of the grains.

Conclusion

Considering the amount of noise that comes with the highlighted contours, it will be difficult to obtain the grain areas efficiently. The most probable reason for the noise is the change of shading caused by grain movement and the non-homogeneous surface of the grains.

Thus it is unlikely that the area of a moved grain could be approximated by this method.

Alternative application of the method

It was concluded that this method may be used in a way it was not designed for. The method does not allow obtaining precise grain area information easily, but it does efficiently mark the areas where changes have occurred; there is therefore no need to process the whole image when the area of interest can be reduced significantly using this method.


That reduction of the area of interest can significantly simplify the image processing (segmentation, edge detection, etc.) and the identification of the moved grains and their positions.

For example, instead of trying to identify a moved grain in an image of 1000 grains, the number could be reduced to 10 grains, and it would obviously be much easier to identify the required grain in that case. As the smaller area of interest contains fewer candidate grains, the probability of incorrect grain identification (error) is also significantly reduced.

The method might also find an application in detecting grains that did not travel but only produced small "shakes", since all displaced grains will be highlighted.

Accidental discovery

It should also be mentioned that when taking pictures for this experiment, a vibration of the camera was accidentally induced by pressing the image-capture button. As a result, when the difference between the frames was highlighted, most of the grain borders turned out to be highlighted with relatively good quality (this can be seen in Fig. 39).

This accidentally discovered useful effect of the vibration became the basic working principle of the grain edge detector described starting from page 44.

Discussions

During the analysis of the experiment results described above, the following pattern was noticed: in tests of the edge-detection-based segmentation methods at relatively low sensitivities, exactly the same parts of the grain edges often could not be captured even when different detection methods were used. Method 1 in figure 56 and the Canny filter in figure 57 are shown as examples of edges obtained at low edge-detector sensitivities.

This might be evidence that the undetected edges are extremely weak.


FIGURE 53: WEAK EDGES, METHOD1

FIGURE 54: WEAK EDGES, CANNY EDGE DETECTOR

FIGURE 55: OVER SEGMENTATION, CANNY EDGE

DETECTOR

FIGURE 56: NOISE CONTAMINATION, METHOD1

As each method's sensitivity was increased further, there came a moment when an excessive amount of noise or over-segmentation (depending on the method) appeared in the image (Fig. 58 and 59).

Even when the images were heavily contaminated by noise or excessively segmented (fig. 58, 59), some edges still remained unrevealed. Considering that these observations were obtained with methods based on completely different working principles (Method 1, the Canny edge detector), this might be another confirmation of the weak-edges hypothesis mentioned above.


Based on the experimental results, the majority of the tested segmentation and grain detection methods proved to be inefficient under such conditions. The reason might be the combination of unfavourable image and grain features that complicate the analysis, for example:

Non-homogeneity of the grain surface

Absence of a background below the sediment

Input data available only in grayscale

Shadows induced by other grains

Lighting conditions

and many other negative features.

The proposed "difference highlight" method (function "imdiffer.m") appeared to be inefficient in the way it was originally planned to be used. Presumably, however, it might find an application as a method that marks the area of interest and thus significantly simplifies the current problem. The efficiency of the method still needs to be tested in a set of experiments combining it with other algorithms. A possible implementation of the algorithm is discussed in the recommendations on page 65.

Considering the inefficiency of the techniques described above, it can be assumed that very sophisticated image processing methods would be needed to solve the current problem efficiently. Before starting the very energy- and time-consuming development of such methods, it is rational to make sure that absolutely no other options are available.

This was the reason to re-analyze the problem and consider a different approach that might help avoid the development of unnecessarily complicated image processing techniques. A totally different approach to the problem is discussed in the next section, together with advice on the further steps that will most likely solve the problem in the most optimal and effort-saving way.

Recommendations

As mentioned, the results of the described experiments suggest that the grain parameters contained in an image (such as the top grain projection, grain colour information, features of the grain surface, etc.) are insufficient to


process the grains (segment them, detect edges, obtain grain parameters, etc.) with a high degree of certainty by simple techniques.

Unlike in images, real sediment grains have many more parameters, such as:

Grain width, length and height

Grain colour information

Grain density

Friction factor of the grain surface

etc.

As the main idea of the project is image processing, it may at first appear that the analysis is limited to the grain parameters available from a two-dimensional image. Logically, the grain analysis would be much more efficient and simpler if the grain data contained, and could be classified by, more grain parameters than those currently available from the images.

An alternative approach to the problem is therefore suggested. It is aimed both at obtaining an additional grain parameter that would help solve the problem, and at keeping to the requirement of using only images for the analysis.

It is proposed to recreate a three-dimensional surface of the analysed area by obtaining the only parameter missing for a 3D representation of the object: the height information. Thus some sort of 3D scanner has to be used to recreate the surface.

Development of the 3d surface recreation method

3d scanners

A 3D scanner is a device that analyzes a physical object and creates its 3D model on the basis of the acquired data.

3D scanners use two major types of scanning method:

Contact, based on direct physical contact of the scanner with the investigated object.

Non-contact

Non-contact scanners can in turn be classified into two separate categories:

Active scanners

Passive scanners


Active scanners emit some directed radiation (most often light or a laser beam) onto the object and detect its reflection for the analysis. Possible types of radiation include light, ultrasound and X-rays.

Passive scanners do not emit anything; instead they rely on detecting reflected ambient radiation. The majority of scanners of this type detect visible light, the most readily available ambient radiation.

3D models obtained by scanning can be further processed and used in engineering calculations.

Considering that the project is limited to image processing and that those images (frames) are obtained by video recording, it is impossible to use scanners that involve any moving parts. It is therefore proposed to use a scanner based on the structured-light principle. The method of structured illumination is described in the following chapter.

Three-dimensional photo

It is known, that the traditional photo provides reception of two-dimensional projection of

three-dimensional objects, the information on object along the third co-ordinate is thus

lost. In some cases it is possible to restore the lost information, for example, using laws of

distribution of brightness in pictures of semitones , however generally it does not provide

unambiguity of received results.

Methods are known for determining the shape of three-dimensional surfaces on the basis of optical stereo systems, which can be classified as "passive systems". Along with their advantages (simplicity and rather low cost), such systems have serious drawbacks: dependence of the results on the character of the object and the illumination conditions, sensitivity to background flares, complexity of the mathematical methods and computer algorithms for processing, etc.

In recent years, systems of the "active" type have been widely used, in which the object is illuminated by a radiation source with known properties, which overcomes the drawbacks of the traditional methods listed above. The use of various methods of modulating the radiation, most often in amplitude or frequency, makes it possible to implement a laser range-finder mode for reconstructing the three-dimensional relief of a surface.


The active development of computer technology and the improvement of multi-element radiation receivers (video cameras) have allowed electromechanical scanning to be replaced by electronic scanning, and have created the preconditions for methods that reconstruct the three-dimensional form of objects by simpler means, on the basis of the «projection of lines» principle, generated in coherent or incoherent light, and «structured light» - illumination of objects by incoherent radiation with a predefined spatial distribution of brightness.

The principle of image formation using structured light is shown in Fig. 57.

FIGURE 57: STRUCTURED LIGHT PRINCIPLE

Reconstruction of the three-dimensional form of objects using structured illumination is based on the triangulation principle: with a known mutual angular arrangement of the axes of the source (projector) and the video camera, a predefined pattern is projected onto the object in the camera's field of view.
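The triangulation relation behind this principle can be written out directly. The following is a minimal sketch (in Python, with a hypothetical function name; the report's own code is in Matlab): given the baseline between the projector and camera and the two base angles of the triangulation triangle, the law of sines gives the perpendicular distance of the surface point from the baseline.

```python
import math

def depth_from_triangulation(baseline, alpha, beta):
    """Perpendicular distance of a surface point from the projector-camera
    baseline, by the law of sines applied to the triangulation triangle.

    baseline -- distance between the projector and camera centres
    alpha    -- angle at the projector between the baseline and its ray (radians)
    beta     -- angle at the camera between the baseline and its ray (radians)
    """
    return baseline * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

# A point seen at 45 degrees from both ends of a unit baseline lies at depth 0.5.
```

In practice the projected pattern identifies the projector angle for each stripe, while the pixel position in the frame gives the camera angle.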


FIGURE 58: AN EXAMPLE OF TRIANGULATION PRINCIPLE

When the object is illuminated by a set of "patterns" with a known brightness distribution, each "pattern" is deformed according to the surface relief, and the distance to each point of the surface is represented as a binary code.

The decoding operation yields an estimate of the distance to each point of the surface.

When parallel strips are projected, the picture of the strips changes according to the form of the covered surface. The right part of Fig. 58 shows an example of the correspondence function for one horizontal line.
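The binary-code decoding step mentioned above can be illustrated with a short sketch (Python; the function name and bit convention are assumptions for illustration, not taken from the report). Each pixel observed under a sequence of binary stripe patterns yields one bit per pattern, and the bits combine into the stripe index that identifies the projector plane for that pixel.

```python
def decode_stripe_index(bits):
    """Combine per-pattern observations of one pixel into its stripe index.

    bits -- sequence of 0/1 values, one per projected binary pattern,
            most significant bit (widest stripes) first.
    """
    index = 0
    for b in bits:
        index = (index << 1) | b  # append the next bit of the binary code
    return index

# Three patterns give 2**3 = 8 stripes; observations (1, 0, 1) mean stripe 5.
```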

Non-monotonicity of the relief of real objects, caused for example by sharp jumps or shading of separate areas, can lead to discontinuities of the correspondence function. To overcome this problem, the correspondence function is analyzed on its separate segments.

Methods of structured illumination allow the shape of three-dimensional objects of complex form to be reconstructed. One example of reconstruction is illustrated in Fig. 59.


FIGURE 59: AN EXAMPLE OF RECONSTRUCTION OF THE THREE-DIMENSIONAL SHAPE OF A SURFACE USING THE METHOD OF STRUCTURED ILLUMINATION: INITIAL OBJECT (A) AND SHAPE RECONSTRUCTION (A VIEW FROM VARIOUS ANGLES)

It is recommended to use this method to recreate the surface of the sediment layer for further analysis.

Analysis of obtained 3d data

The purpose of the required software is to obtain the volumetric bed load. It is proposed to experiment with the implementation of the following sequence of three-dimensional data analysis:

1) 3D surfaces are obtained from the consecutive frames made by the high-speed camera.

2) Coordinates of the areas where the grains started and stopped their movement are obtained.

Start and stop coordinates can be derived by highlighting the areas where the heights of the corresponding points have decreased or increased in value.

For that purpose it is advised to slightly modify and use the function "imdiffer.m" discussed on the previous pages. That function was designed to compare gray intensity values of the points of consecutive frames and, conveniently, gray intensity values are stored in the same format in which heights are stored in the Matlab environment; therefore, if the provided input contains height data instead of colour-intensity data, it will produce the same type of output results.

Comment:

As was assumed on pages 55-56, the function "imdiffer.m" is likely to find an application in an alternative way.

3) The volume of each highlighted grain is calculated by summing up the height values of the highlighted points with respect to the ordnance level.

V = Σh

Eq. 11

V - volume of the grain

Σh - sum of the heights at all points of the grain

4) If several grains have moved at the same time, it is important to identify which start location corresponds to which grain stop location.

This can be done by either:

a) Comparing the volumes of the moved grains on the consecutive pictures

b) If the weak accuracy of the scanner does not allow grains to be compared by volume alone, they can possibly be additionally compared by the mean colour of the grain. This might help to reduce the probability of incorrect identification of a grain.

5) The travel distance of each grain is measured between the movement start and stop points obtained in step 2, using the correspondence output obtained in step 4.

6) Finally, all the required grain motion parameters (moved grain volume, travelled distances of the grains, times of grain travel) are obtained, and the volumetric bed load can be directly computed from them (see pages 4-6 for details of the volumetric bedload transport rate).
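Steps 3-5 above can be sketched in a few lines. The following is a hedged illustration in Python (the report's environment is Matlab; the region representation and the greedy nearest-volume matching are assumptions made for this sketch, not the report's tested code):

```python
def grain_volume(heights, datum=0.0):
    """Step 3 (Eq. 11): grain volume as the sum of point heights above the datum."""
    return sum(h - datum for h in heights)

def match_grains(start_regions, stop_regions):
    """Step 4a: pair each start region with the stop region of closest volume.

    Regions are dicts with 'centre' (x, y) and 'volume'; greedy matching."""
    pairs, remaining = [], list(stop_regions)
    for s in start_regions:
        best = min(remaining, key=lambda r: abs(r['volume'] - s['volume']))
        remaining.remove(best)
        pairs.append((s, best))
    return pairs

def travel_distance(start, stop):
    """Step 5: straight-line distance between the start and stop points."""
    (x1, y1), (x2, y2) = start['centre'], stop['centre']
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
```

Step 4b (comparison by mean colour) would simply add a second term to the matching key when volumes alone are ambiguous.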


Conclusion on the proposed recommendation

The major advantage of the described method is that it transforms the input information into the type of information requested at the software output (volume on input, volumetric bed load rate on output). It therefore becomes possible to directly analyse the volumetric input data without any prior assumptions or approximations about the volumes of the grains, so the accuracy of the bed load measurements will not be lost through such simplifications. Logically, the accuracy of the method is limited only by the quality and precision of the proposed scanner.

Alternative recommendation

As mentioned above, the 3D surface recreation method is targeted at avoiding the development of complicated two-dimensional image analysis techniques. Theoretically it should be the optimal solution to the problem. But as no experiments have been done on the method, it is useful to have an alternative proposal in case unpredicted difficulties are met during the implementation of the method described above.

In the current situation the basic and most important task is to efficiently segment the grains, so that they can be further analysed and the obtained grain parameters used to determine the required bed load.

Development of a segmentation method based on graph theory

Method description

Methods of graph theory are one of the most actively developing directions in image segmentation.

The general idea of the methods of this group is the following. The image is represented in the form of a weighted graph, with vertices at the image points. The weight of an edge of the graph reflects the similarity of the points in some way (the distance between points under some metric). Image splitting is modelled by cuts of the graph.

FIGURE 60: AN EXAMPLE OF MODELLING THE IMAGE AS A WEIGHTED GRAPH.
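A minimal sketch of this representation (in Python, assuming a 4-connected pixel grid and a Gaussian similarity weight; both are common choices, not specifics from the report):

```python
import math

def image_to_weighted_graph(img, sigma=10.0):
    """Represent a grayscale image as a weighted graph: vertices are pixel
    coordinates, 4-connected neighbours are joined by an edge whose weight
    decreases with intensity difference, so edges crossing an object border
    are weak and a minimum-cost cut tends to follow the border."""
    rows, cols = len(img), len(img[0])
    edges = {}
    for r in range(rows):
        for c in range(cols):
            for r2, c2 in ((r + 1, c), (r, c + 1)):  # down and right neighbours
                if r2 < rows and c2 < cols:
                    diff = img[r][c] - img[r2][c2]
                    edges[((r, c), (r2, c2))] = math.exp(-diff ** 2 / sigma ** 2)
    return edges
```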

Usually, in graph-theory methods, a "cost" functional of the cut is introduced that reflects the quality of the obtained segmentation. The problem of splitting the image into homogeneous areas is thus reduced to the optimization problem of finding a minimum-cost cut of the graph. Besides the uniformity of colour and texture of the segments, this approach also allows control over the shape of the segments, their size, the complexity of their borders, etc.

Various methods are applied to the search for a minimum-cost cut: "greedy" algorithms (at each step the edge is selected that keeps the total cost of the cut minimal), dynamic programming methods (which guarantee that choosing an optimal edge at each step provides an optimal path), etc.

Method of Normalized Cut (J. Shi, J. Malik 1997). A normalised functional of the cut quality is introduced so as to simultaneously maximise the distinction of points between classes and minimise the distinction of points within a class. Optimisation of the normalised functional is reduced to a problem of finding the eigenvalues of the matrix of pairwise distances between all points of the image.
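The criterion can be made concrete on a toy example. Below is a hedged Python sketch that scores every bipartition of a handful of 1-D points by the Shi-Malik functional Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V); a Gaussian similarity of the pairwise distances is used as the weight, and real implementations avoid this exhaustive search by solving the eigenvalue problem mentioned above.

```python
import math
from itertools import combinations

def best_normalized_cut(points, sigma=1.0):
    """Exhaustively find the bipartition of a small 1-D point set that
    minimises the Normalized Cut criterion, with Gaussian similarities
    w(i, j) = exp(-(x_i - x_j)**2 / sigma**2)."""
    n = len(points)
    w = [[math.exp(-(points[i] - points[j]) ** 2 / sigma ** 2)
          for j in range(n)] for i in range(n)]
    best_score, best_parts = float('inf'), None
    for k in range(1, n // 2 + 1):
        for A in combinations(range(n), k):
            B = [v for v in range(n) if v not in A]
            cut = sum(w[i][j] for i in A for j in B)
            assoc_a = sum(w[i][j] for i in A for j in range(n))
            assoc_b = sum(w[i][j] for i in B for j in range(n))
            score = cut / assoc_a + cut / assoc_b
            if score < best_score:
                best_score, best_parts = score, (set(A), set(B))
    return best_parts
```

On two well-separated point clusters the minimum-Ncut bipartition separates the clusters, since any other cut severs strong within-cluster similarities.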


FIGURE 61: AN EXAMPLE OF A MATRIX OF PAIRWISE DISTANCES FOR A DOT CONFIGURATION.

The complexity of an efficient eigenvalue algorithm for the sparse matrix is linear in the number of image points. However, the method requires storage of a matrix of size n*n, where n is the number of image points; for example, even a 1000x1000 image gives n = 10^6 points and a pairwise matrix of 10^12 entries. In its initial form the method is therefore inapplicable to big images.

FIGURE 62: RESULTS OF THE NORMALIZED CUTS METHOD (image taken from http://cgm.computergraphics.ru)

Method of Nested Cuts (Olga Veksler, 2000). The main principle of this method consists of separating each point of the image from a special point outside the image by a cut of minimum cost. With this approach the image is divided into non-intersecting segments. It is shown that the size of the segments of the image can be controlled by imposing restrictions on the cut cost.


FIGURE 63: RESULT OF THE NESTED CUTS METHOD (http://cgm.computergraphics.ru)

The SWA (Segmentation by Weighted Aggregation) method is based on grouping similar points of the image. The basic idea of the method consists in constructing a pyramid of weighted graphs, each of which is obtained from the previous one by merging similar vertices.

FIGURE 64: CONSTRUCTION OF A PYRAMID OF WEIGHTED GRAPHS FOR THE IMAGE. (http://cgm.computergraphics.ru)

At each step the weights of the connections are recalculated. In the course of pyramid construction, various statistics characterising the form, colour and texture of the regions are calculated; these statistics are used to compute a measure of similarity of the regions. Then, following the ideology of graph-theory methods, a cost functional of the cut is introduced for the obtained graph and the minimum-cost cut is found.
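One aggregation step of such a pyramid can be sketched as follows (a Python illustration under simplifying assumptions: one pair merged per step and parallel edge weights simply summed; the full SWA method recalculates region statistics as described above):

```python
def swa_merge_step(weights):
    """Merge the most similar pair of vertices into a single node and
    accumulate the parallel edges, producing the next (coarser) graph level.

    weights -- undirected weighted graph as {(u, v): w} with u != v.
    """
    (a, b), _ = max(weights.items(), key=lambda kv: kv[1])
    merged = (a, b)                           # the new aggregated vertex
    coarser = {}
    for (u, v), w in weights.items():
        if (u, v) == (a, b):
            continue                          # the internal edge disappears
        u2 = merged if u in (a, b) else u
        v2 = merged if v in (a, b) else v
        key = tuple(sorted((u2, v2), key=str))
        coarser[key] = coarser.get(key, 0.0) + w
    return coarser
```

Repeating this step yields ever coarser graphs; each aggregated vertex corresponds to a candidate region of the segmentation.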


In a modification of the algorithm, at each following step the result of the previous aggregation is analyzed and corrected, and information on the borders of the obtained segments is also used.

FIGURE 65: COMPARISON OF RESULTS OF THE SWA ALGORITHM, ITS MODIFICATIONS AND NORMALIZED CUTS (http://cgm.computergraphics.ru)

The quality of graph-theory methods depends strongly on the choice of metric; therefore, machine learning is applied to choose the optimum metric. The basic problems of graph-theory methods are low speed and high memory use. The majority of the methods demand storage of a matrix of pairwise distances between the image points, whose size is equal to the square of the number of points. Such restrictions make graph methods practically inapplicable for huge images.

Conclusion on the proposed method

As can be seen from the description, the proposed method is extremely complicated, and the process of applying it to the problem of sediment bed load measurement would be very time-consuming. It is therefore recommended to consider this method only if other, simpler methods (including the 3D surface reconstruction method described above) prove insufficiently effective to produce a qualitative output.


Conclusions Outline

A simplified model of the analyzed river bed was designed, and the required input data was obtained and analyzed.

An optimal configuration of factors (such as lighting angle and strength, use of a reflector, etc.) was obtained experimentally and described in the report.

Many image segmentation approaches (mostly based on edge detection) were developed, successfully tested and analyzed.

The efficiency of each of the methods was obtained, and the possible implementation of each method was considered and discussed.

The obtained statistics of the experimental results helped to make assumptions about the extremely weak edge areas of the image.

It was concluded that image segmentation could not be produced efficiently on an image with such a combination of properties (lighting, shading, colors, etc.) by means of the relatively simple and most widely used analysis techniques and segmentation of the 2D image.

Therefore, ideas for alternative ways of implementing the program were developed and precisely described in the recommendations part.

A relatively large number of theories and different approaches were implemented and tested. As a result, the efficiency and the limits of each tested method are known. Based on those results, possibly the most optimal direction for the problem solution was identified.


List of References

K.M. Clayton. "Weathering". 1969. Edinburgh: Oliver & Boyd. P 551.3.053 ONE

Steven J. Goldman, Katharine Jackson & Taras A. Bursztynsky. "Erosion and sediment control handbook". 1986. New York; London: McGraw-Hill. R 631.459 GOL

Walter Hans Graf. "Hydraulics of sediment transport". 1971. New York: McGraw-Hill. X 627.15 GRA

Gregory A. Baxes. "Digital image processing: a practical primer". 1984. Englewood Cliffs, N.J.: Prentice-Hall.

Chang Wen Chen, Ya-Qin Zhang. "Visual Information Representation, Communication, and Image Processing". 1999. W 621.391.61 BAX

Kujawinska M., Wegiel M., Sitnik R. "Real-time 3D shape measurement based on color structured light projection".

W. Osten and W. Jueptner. "Workshop on Automatic Processing of Fringe Patterns". 2001. P. 324-331.

Salvi J., Batlle J. "Pattern codification strategies in structured light systems". 2004. P. 827-849.

Shtuchkin A., Gurov I. "Structured light range sensing using color and two stage dynamic programming". 2004.

Marr D., Hildreth E. "Theory of edge detection". 1980. Series B, Vol. 207, pp. 187-217.

Ziou D., Tabbone S. "Edge detection techniques - an overview". 1998. Vol. 4, pp. 537-559.

Baker S. "Design and evaluation of feature detectors". 1998.

Maccarone M. C. "Fuzzy Mathematical Morphology: Concepts and Applications". 1996. Vol. 40, Part 4, pp. 469-477.


Appendix A: Matlab codes of the proposed methods

1. Function rem_small

function [ img_small_removed ] = rem_small( imge, siz_rem )
%This function removes particles smaller than the area defined in the input from the image
% First of all the function labels all the objects in the image
[L,num] = bwlabel(imge, 8);
feats = regionprops(L, 'Area');
Area1 = zeros(num, 1);
% Then it measures each object's area
parfor i = 1:num
Area1(i) = feats(i).Area;
end
% Finally it keeps only the objects with an area greater than the predetermined value
idx = find(Area1 > siz_rem);
img_small_removed = ismember(L, idx);
end


2. Function rate_of_change_scan

function rate_of_change_scan
T = imread('G:\Matlab\tested_image.jpg'); %original picture
T = colr_to_BW(T); %convert image to BW
siz = size(T);
%smooth the image
T = medfilt2(T, [6 6]);
T = adapthisteq(T);
T = double(T); %convert to double so that negative differences are not clipped
%rate-of-change difference needed to mark a pixel as a border
differ = 9;
%create an empty image - variable with the same size as the original image
marked = zeros(siz(1,1),siz(1,2));
for ver = 2:(siz(1,1) - 1)
parfor hor = 2:(siz(1,2) - 1)
% Rate Of Change between the central and left pixel
ROC_left = T(ver,hor) - T(ver,hor - 1);
% Rate Of Change between the central and right pixel
ROC_right = T(ver,hor) - T(ver,hor + 1);
% Rate Of Change between the central and lower pixel
ROC_up = T(ver,hor) - T(ver + 1,hor);
% Rate Of Change between the central and upper pixel
ROC_down = T(ver,hor) - T(ver - 1,hor);
ROC_diff_hor = abs(ROC_left - ROC_right);
ROC_diff_vert = abs(ROC_up - ROC_down);
if ROC_diff_hor >= differ || ROC_diff_vert >= differ
marked(ver,hor) = 1;
end
end
end
figure,imshow(T,[])
figure,imshow(marked)
end


3. Function watershed_segmentation.m

function watershed_segmentation

1) Reading of the colour image and its transformation to grayscale.

Reading the data from a file:

rgb = imread('G:\Matlab\tested_image.jpg');

And presenting it in the form of a grayscale picture:

I = rgb2gray(rgb);
imshow(I)
text(732,501,'Image courtesy of Corel(R)','FontSize',7,'HorizontalAlignment','right')

2) Use of the gradient value as the segmentation function.

To calculate the gradient value, the Sobel edge mask, the function imfilter and some further calculations are used. The gradient has large values on the borders of objects and small values (in most cases) away from the edges of objects.

hy = fspecial('sobel');
hx = hy';
Iy = imfilter(double(I), hy, 'replicate');
Ix = imfilter(double(I), hx, 'replicate');
gradmag = sqrt(Ix.^2 + Iy.^2);
figure, imshow(gradmag, []), title('Gradient magnitude')

3) Marking of the foreground objects.

Various procedures can be used for marking the foreground objects. Here the morphological techniques called "opening by reconstruction" and "closing by reconstruction" are used. These operations allow the internal area of the image objects to be analyzed by means of the function imregionalmax.

As mentioned above, morphological operations are used when marking the foreground objects. Some of them will be considered and compared. First, opening with the function imopen is applied:

se = strel('disk', 20);
Io = imopen(I, se);
figure, imshow(Io), title('Io')

Next, opening by reconstruction is computed using the functions imerode and imreconstruct:


Ie = imerode(I, se);
Iobr = imreconstruct(Ie, I);
figure, imshow(Iobr), title('Iobr')

The subsequent morphological operations of opening and closing lead to the removal of dark spots and the formation of markers. Operations of morphological closing are analyzed below. For this purpose the function imclose is used first:

Ioc = imclose(Io, se);
figure, imshow(Ioc), title('Ioc')

Next the function imdilate is applied together with the function imreconstruct. To implement the reconstruction it is necessary to complement the images:

Iobrd = imdilate(Iobr, se);
Iobrcbr = imreconstruct(imcomplement(Iobrd), imcomplement(Iobr));
Iobrcbr = imcomplement(Iobrcbr);
figure, imshow(Iobrcbr), title('Iobrcbr')

A comparative visual analysis of Iobrcbr and Ioc shows that the reconstruction based on the morphological operations of opening and closing is more effective than the standard operations of opening and closing.

The local maxima of Iobrcbr are calculated and the foreground markers obtained:

fgm = imregionalmax(Iobrcbr);
figure, imshow(fgm), title('fgm')

The foreground markers are imposed on the initial image:

I2 = I;
I2(fgm) = 255;
figure, imshow(I2), title('fgm imposed on the initial image')

Some hidden or occluded objects of the image are not marked. This influences the result, and such objects will not be processed from the segmentation point of view. Thus, even in ideal conditions, the foreground markers delineate only the majority of the objects.

Some of the foreground markers cross the edges of the grains. The markers should be cleaned and shrunk to allow further processing; in particular, this can be done with morphological operations:

se2 = strel(ones(5, 5));
fgm2 = imclose(fgm, se2);
fgm3 = imerode(fgm2, se2);


As a result of this operation the separate isolated pixels of the image disappear. It is also possible to use the function bwareaopen, which removes objects below a set number of pixels:

fgm4 = bwareaopen(fgm3, 20);
I3 = I;
I3(fgm4) = 255;
figure, imshow(I3)
title('fgm4 imposed on the initial image')

4) Calculation of the background markers.

Now the operation of marking the background is performed. In the image Iobrcbr the dark pixels relate to the background; thus it is possible to apply a thresholding operation to the image:

bw = im2bw(Iobrcbr, graythresh(Iobrcbr));
figure, imshow(bw), title('bw')

The background pixels are dark; however, it is impossible to perform simple morphological operations over the background markers and obtain the borders of the segmented objects. The background is "thinned" so as to obtain an authentic skeleton of the image (the so-called foreground of the grayscale picture). It is calculated using the watershed approach on the basis of distance measurement (to the watershed lines):

D = bwdist(bw);
DL = watershed(D);
bgm = DL == 0;
figure, imshow(bgm), title('bgm')

5) Calculation of the watershed transformation of the segmentation function.

The function imimposemin can be applied for exact definition of the local minima of the image. On this basis imimposemin can also correct the gradient values of the image and thus fix the arrangement of the foreground and background markers:

gradmag2 = imimposemin(gradmag, bgm | fgm4);

And at last, the segmentation operation on the basis of the watershed is carried out:

L = watershed(gradmag2);

6) Visualization of the processing result.

Displaying the foreground markers, the background markers and the borders of the segmented objects imposed on the initial image:


I4 = I;
I4(imdilate(L == 0, ones(3, 3)) | bgm | fgm4) = 255;
figure, imshow(I4)
title('Markers and object borders imposed on the initial image')

As a result of this display it is possible to visually analyze the location of the foreground and background markers.

Displaying the results by means of a colour image is also useful. The matrix generated by the functions watershed and bwlabel can be converted into a truecolor image by means of the function label2rgb:

Lrgb = label2rgb(L, 'jet', 'w', 'shuffle');
figure, imshow(Lrgb)
title('Lrgb')

It is also possible to use a translucent mode for imposing the pseudo-colour matrix of labels over the initial image:

figure, imshow(I), hold on
himage = imshow(Lrgb);
set(himage, 'AlphaData', 0.3);
title('Lrgb imposed on the initial image in translucent mode')


4. Function horscan_i

function horscan_i
%237 - red colour
T = imread('G:\Matlab\tested_image.jpg'); %original picture
T = colr_to_BW(T); %convert image to BW
siz = size(T);
%sensitivity (the smaller the parameter is, the smaller the pixel
%intensity difference needed to mark a pixel as a border)
differ = 7;
%create an empty image - variable with the same size as the original image
marked = zeros(siz(1,1),siz(1,2));
T = wiener2(T, [5 5]);
for ver = 1:(siz(1,1) - 1)
parfor hor = 1:(siz(1,2) - 1)
if (max(T(ver,hor), T(ver,hor + 1)) - min(T(ver,hor), T(ver,hor + 1))) >= differ
marked(ver,hor) = 1;
elseif (max(T(ver,hor), T(ver + 1,hor)) - min(T(ver,hor), T(ver + 1,hor))) >= differ
marked(ver,hor) = 1;
elseif (max(T(ver,hor), T(ver + 1,hor + 1)) - min(T(ver,hor), T(ver + 1,hor + 1))) >= differ
marked(ver,hor) = 1;
end
end
end
marked = rem_small(marked, 18);
imshow(marked)
figure, imshow(T)
end


5. Function vibr1_1

function [ dif ] = vibr1_1(imnam1,pix_mov,pix_difference,rem_area)
%This function simulates vibration of the camera, in a way that is
%useful for highlighting the edges of the grains in the image
%Input:
%imnam1 = frame 1 name
%rem_area = area of particles to be filtered out
%pix_mov = distance to move a pixel
%pix_difference = critical pixel difference for highlighting
%Output:
%dif = variable containing grain borders obtained by the current method
global im1 marked marked_after_hor_dif_sc marked_mod ver_size hor_size bin_im siz ver_siz hor_siz
% im1 = original frame number one
% im2 = original frame number two
% marked = modified image after different operations
% marked_after_hor_dif_sc = original image after marking the horizontal borders with this method
% marked_mod = original image modified also by this method, after all previous modifications
% ver_size ; hor_size = vertical and horizontal sizes of the image
im1 = imread(imnam1);
im1 = colr_to_BW(im1); %written function that converts a colour image to black/white
im1 = adapthisteq(im1);
%smooth the image to reduce noise
im1 = medfilt2(im1, [6 6]);
im_start %written function that measures the size of the image
%part that simulates vibration of one pixel to the right side of the frame
im2 = im1;
im3 = im1;
img1 = im1;
im4 = im1;
im5 = im1;
pix_move = pix_mov + 1;
for ver = (pix_move+1):(ver_size - pix_move)
pix_move_ver_down = ver - pix_move;
pix_move_ver_up = ver + pix_move;
parfor hor = (pix_move+1):(hor_size - pix_move)
im2(ver,hor + pix_move) = img1(ver,hor); %image shifted in the horizontal direction
im3(pix_move_ver_up,hor) = img1(ver,hor); %image shifted in the vertical direction
im4(pix_move_ver_down,hor + pix_move) = img1(ver,hor); %image shifted in the diagonal direction (vertically + right)
im5(pix_move_ver_up,hor + pix_move) = img1(ver,hor); %image shifted in the diagonal direction (vertically + left)
end
end
bi_im1_2 = zeros(siz(1,1),siz(1,2)); %create empty matrices to highlight the borders in binary format
bi_im1_3 = zeros(siz(1,1),siz(1,2));
bi_im1_4 = zeros(siz(1,1),siz(1,2));
bi_im1_5 = zeros(siz(1,1),siz(1,2));
im_difference1_2 = imabsdiff(im1,im2); %absolute difference between the original and each shifted image
im_difference1_3 = imabsdiff(im1,im3);
im_difference1_4 = imabsdiff(im1,im4);
im_difference1_5 = imabsdiff(im1,im5);
for ver = (pix_move+1):(ver_size - pix_move)
pix_move_ver_down = ver - pix_move;
pix_move_ver_up = ver + pix_move;
parfor hor = (pix_move+1):(hor_size - pix_move)
if im_difference1_2(ver,hor) >= pix_difference
bi_im1_2(ver,hor - pix_move) = 1;
elseif im_difference1_3(ver,hor) >= pix_difference
bi_im1_3(pix_move_ver_down,hor) = 1;
elseif im_difference1_4(ver,hor) >= pix_difference
bi_im1_4(pix_move_ver_up,hor - pix_move) = 1;
elseif im_difference1_5(ver,hor) >= pix_difference
bi_im1_5(pix_move_ver_down,hor - pix_move) = 1;
end
end
end
%this part removes all the noise from the resultant edge images
%function rem_small(img_name, area_of_noise_particle)
bi_im1_2_mod = rem_small(bi_im1_2,rem_area);
bi_im1_3_mod = rem_small(bi_im1_3,rem_area);
bi_im1_4_mod = rem_small(bi_im1_4,rem_area);
bi_im1_5_mod = rem_small(bi_im1_5,rem_area);
%this part of the code puts all the detected edges into one image
bi_im_final = bi_im1_2_mod + bi_im1_3_mod + bi_im1_4_mod + bi_im1_5_mod;
bi_im_final = rem_small(bi_im_final,rem_area);
dif = bi_im_final; %assign the output variable
figure, imshow(bi_im_final)
end


6. Function imabsdiffer.m

function [im_d] = imabsdiffer(im_one,im_two,pix_difference)
im1 = imread(im_one);
im2 = imread(im_two);
%converting to B/W
im1 = colr_to_BW(im1);
im2 = colr_to_BW(im2);
im1 = medfilt2(im1, [6 6]);
im2 = medfilt2(im2, [6 6]);
siz = size(im1); %gets the size of the image
im3 = imabsdiff(im1,im2); %calculates the absolute difference between the two images
bi_im = zeros(siz(1,1),siz(1,2));
for ver = 1:siz(1,1)
parfor hor = 1:siz(1,2)
if im3(ver,hor) >= pix_difference
bi_im(ver,hor) = 1;
end
end
end
im_d = bi_im;
figure, imshow(bi_im)
figure, imshow(im1)
figure, imshow(im2)
end
