Design and Development of a Method for
Measurement of Red Blood Cell Aggregation
Simisola Otukoya
5/27/2013
Abstract
This report documents the project work carried out in developing a method to measure the
extent of Red Blood Cell (RBC) aggregation. The purpose of the project was to create an
aggregation index that can be used to study the biomechanics of RBC aggregation through
the analysis of trend graphs that depict the effect of different flow conditions on RBC
aggregation. Computerised image analysis was used to develop three aggregation indices:
the standard deviation (SD) aggregation index, the thresholding aggregation index and the
correlation coefficient aggregation index. Although the SD aggregation index provided the
most effective results, the thresholding aggregation index also proved very effective at
measuring aggregation despite using a simple global thresholding method. It was concluded
that the thresholding aggregation index was the most promising. The report recommends
further work using more sophisticated thresholding methods in order to increase the
effectiveness of the thresholding aggregation index.
Acknowledgements
I would like to express my profound gratitude to the first supervisor of this project, Dr.
Efstathios Kaliviotis, for his guidance and support, which have aided the completion of this
project. I would also like to thank the second supervisor of this project, Dr. Shahriar Sajjadi,
for his initial thoughts and comments.
Contents
Abstract ...................................................................................................................................................... i
Acknowledgements .................................................................................................................................. iii
Contents ................................................................................................................................................... iv
List of Figures ............................................................................................................................................ v
Nomenclature ........................................................................................................................................... v
1 Introduction ...................................................................................................................................... 1
1.1 Aim and Objectives ................................................................................................................... 2
2 Background and Literature Survey .................................................................................................... 2
2.1 Blood Rheology ......................................................................................................................... 2
2.2 Biological Nature of Erythrocytes ............................................................................................. 5
2.3 Effect of Erythrocyte Mechanical behaviour on Blood Rheology ............................................. 6
2.4 Measurement of RBC Aggregation............................................................................................ 7
2.5 Erythrocyte Sedimentation Rate (ESR)...................................................................................... 8
2.5.1 Evaluation of ESR Method ................................................................................................. 9
2.6 Low Shear Viscometry ............................................................................................................... 9
2.6.1 Evaluation of Low Shear Viscometry Method ................................................................. 10
2.7 Microscopic Aggregation Index (MAI) ..................................................................................... 10
2.7.1 Evaluation of MAI Method .............................................................................................. 11
2.8 Photometric Methods ............................................................................................................. 11
2.8.1 Evaluation of Photometric Methods ............................................................................... 12
2.9 Computerized Image Analysis Techniques .............................................................................. 12
2.9.1 Evaluation of Computerized Image Analysis Techniques................................................ 13
2.10 Other Methods ........................................................................................................................ 13
3 Background Theory and Theoretical Development ........................................................................ 13
3.1 Digital Image Processing ......................................................................................................... 13
3.2 Basic Statistical Analysis .......................................................................................................... 15
3.3 Image Segmentation ............................................................................................................... 17
3.4 Image Correlation Coefficient ................................................................................................. 19
3.5 Image Pre-processing .............................................................................................................. 20
4 Analysis and Design ......................................................................................................................... 20
4.1 Preliminary Algorithm ............................................................................................................. 20
4.2 Preliminary Results.................................................................................................................. 23
4.3 Pre-processing Methods ......................................................................................................... 26
4.4 Effect of MASK Dodging Filter on Preliminary Results ............................................................ 30
4.5 Preliminary Study Evaluation .................................................................................................. 33
4.6 Experimental Design ............................................................................................................... 34
5 Implementation and Experimental Work ....................................................................................... 36
5.1 SD Aggregation Index Algorithm ............................................................................................. 36
5.2 Thresholding Aggregation Index Algorithm ............................................................................ 38
5.3 Correlation Coefficient Aggregation Index Algorithm ............................................................ 39
6 Results and Discussion .................................................................................................................... 42
6.1 SD Aggregation Index .............................................................................................................. 42
6.2 Thresholding Aggregation Index Results ................................................................................. 45
6.3 Correlation Coefficient Aggregation Index Results ................................................................. 48
6.4 Discussion ................................................................................................................................ 50
7 Conclusion ....................................................................................................................................... 55
7.1 Future Work ............................................................................................................................ 56
References............................................................................................................................................... 56
Appendices .............................................................................................................................................. 59
Appendix A: Preliminary Algorithm .................................................................................................... 59
Appendix B1: Morphological Operation Technique ............................................................................ 59
Appendix B2: Top-hat Filter ................................................................................................................ 60
Appendix B3: Homomorphic Filter ...................................................................................................... 60
Appendix C: SD Aggregation Algorithm .............................................................................................. 62
Appendix D: Thresholding Aggregation Algorithm ............................................................................. 63
Appendix E: Correlation Coefficient Aggregation Algorithm ................................................................ 64
Appendix F: Gantt Chart ...................................................................................................................... 67
List of Figures
Figure 1: Image of non-aggregating sample........................................................................................... 16
Figure 2: Image of aggregating sample ................................................................................................... 16
Figure 3: RBC image at shear rate of 5 s-1, acquisition time of 35.6 s .................................................... 21
Figure 4: RBC image at shear rate of 100 s-1, acquisition time of 3.7 s .................................................. 21
Figure 5: Flowchart of Preliminary Algorithm ......................................................................................... 22
Figure 6: Trend Graph of Mean, SD and CV ............................................................................................ 23
Figure 7: Close-up of Mean Trend Graph ................................................................................................ 24
Figure 8: Close-up of SD Trend Graph ..................................................................................................... 25
Figure 9: Close-up of CV Trend Graph ..................................................................................................... 26
Figure 10: Effect of 3 Pre-processing Methods on an Aggregating Sample ............................................ 27
Figure 11: Effect of 3 Pre-processing Methods on a Non-Aggregating Sample ...................................... 27
Figure 12: Homomorphic and MASK Dodging Filter Aggregating Sample Image ................................... 28
Figure 13: Histogram of Non-aggregating Image .................................................................................... 28
Figure 14: Histogram of MASK Dodging Filter for Non-aggregating Sample .......................................... 29
Figure 15: Histogram of Top-hat Filter for Non-aggregating Sample ..................................................... 29
Figure 16: Preliminary Aggregation Indices with MASK Dodging Filter .................................................. 30
Figure 17: Close-up of Pre-processed Mean Trend Graph ...................................................................... 30
Figure 18: Close-up of Pre-processed SD Trend Graph ........................................................................... 32
Figure 19: Close-up of Pre-processed CV Trend Graph ........................................................................... 33
Figure 20: Experimental Setup, Adapted from [17] with Permission ..................................................... 35
Figure 21: SD Aggregation Index Algorithm Flowchart ........................................................................... 37
Figure 22: Thresholding Aggregation Index Flowchart ........................................................................... 39
Figure 23: Correlation Coefficient Aggregation Index Algorithm ........................................................... 41
Figure 24: SD Aggregation Index, Window Size 10 Pixels ....................................................................... 42
Figure 25: Normalised SD Aggregation Index, Smoothed with Moving Average Filter. (a) 0.5 RBC
Window Size, (b) 1 RBC Window Size, (c) 1.5 RBC Window Size, (d) 2 RBC Window Size ..................... 43
Figure 26: Normalised SD Aggregation Index, Smoothed with Moving Average Filter. (e) 2.5 RBC
Window Size, (f) 5 RBC Window Size, (g) 10 RBC Window Size, (h) 25 RBC Window Size ..................... 43
Figure 27: Effect of Threshold Algorithm on a Non-aggregating Image ................................................. 45
Figure 28: Effect of Thresholding Algorithm on an Aggregating Image .................................................. 46
Figure 29: Thresholding Aggregation Index ............................................................................................ 46
Figure 30: Normalised Thresholding Aggregation Index, Smoothed with Moving Average Filter ......... 47
Figure 31: Correlation Coefficient Aggregation Index, Window Size 10 Pixels ....................................... 48
Figure 32: Normalised Correlation Coefficient Index Smoothed with Moving Average Filter. (a) 0.5
RBC Window Size, (b) 1 RBC Window Size, (c) 1.5 RBC Window Size, (d) 2 RBC Window Size .............. 48
Figure 33: Normalised Correlation Coefficient Index Smoothed with Moving Average Filter. (e) 2.5 RBC
Window Size, (f) 5 RBC Window Size, (g) 10 RBC Window Size, (h) 25 RBC Window Size ...................... 49
Figure 34: Min-max and Fast Phase Percentage Difference VS Window Size ........................................ 51
Figure 35: Mean High Shear to Mean Low Shear Percentage Difference VS Window Size .................... 52
Nomenclature
A Image A -
Ā Mean of Image A -
Ai Aggregation Index %
a Light intensity at end of measurement period Arbitrary unit (au)
B Image B -
Bp Black Pixels -
B̄ Mean of Image B -
b Change in light intensity at fast rate -
CV Coefficient of Variation -
C Change in light intensity at slow rate -
It Light intensity at time t Arbitrary unit (au)
i Count of pixels in a row -
j Count of pixels in a column -
n Total number of elements -
rij Correlation Coefficient -
SD Standard Deviation -
T Threshold -
Tfast Time constant for fast changes in light intensity s
Tslow Time constant for slow changes in light intensity s
t Time s
Wp White Pixels -
x̄ Mean of all pixels in an image -
xi Individual pixel element -
ηH High shear viscosity Pa.s
ηL Low shear viscosity Pa.s
1 Introduction
Red blood cell (RBC) or erythrocyte aggregation is a phenomenon in blood in which RBCs under
low or no shear forces clump together to form linear arrays similar to a stack of coins, also
known as rouleaux. RBC aggregation is of scientific interest because it is the major
determinant of blood viscosity at low shear rates [2] and contributes significantly to vascular
flow mechanics. It is distinct from blood coagulation (clotting) in that it is a reversible process
which occurs when blood is at stasis or when little external force (e.g. shear stress) is acting
on it [1-2]. Fibrinogen, which plays a key role in both phenomena, remains soluble during
aggregation, allowing the process to reverse. In coagulation, fibrinogen molecules interact
to form a large meshwork of insoluble fibrin strands in which RBCs and blood platelets stick
together and become enmeshed [1] [3].
A noted distinction is made in the literature between aggregation and the term aggregability.
Aggregation concerns the extent to which RBCs form rouleaux as a result of the
macromolecular composition of blood or RBC suspensions, whereas aggregability is the
intrinsic ability of RBCs to undergo aggregation irrespective of the suspending medium
[1]. The extent of RBC aggregation is determined by a force balance between aggregating
and disaggregating forces. Factors that promote aggregation in blood include the
biconcave disc shape, the haematocrit and plasma proteins.
When infections, tissue injuries, trauma, neoplastic growth and immunological disorders
[2] cause inflammation in the body, it responds with an acute phase reaction. It is triggered
by cytokines secreted by inflammatory cells. These cytokines change the rate of synthesis of
a group of plasma factors in the liver, such as fibrinogen and C-reactive protein; these
positive acute phase proteins increase in concentration in plasma [1]. Positive acute phase
proteins, in particular fibrinogen, are linked to an observed increase in RBC aggregation.
Other physiological processes noted to affect RBC aggregation include pregnancy and
menopause. Mechanisms for measuring and studying RBC aggregation are thus of interest,
from a clinical point of view, as an effective way to monitor these physiological processes
as well as the acute phase reactions caused by pathophysiological conditions.
1.1 Aim and Objectives
In this project, computerised image analysis is used to:
1. Study the biomechanics of RBC aggregation
2. Develop an algorithm to quantify the process using aggregation indices
3. Produce and analyse trend graphs that depict the effect of different flow conditions on RBC aggregation
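The preliminary study later in the report tracks three whole-image statistics per frame: the mean, the standard deviation (SD) and the coefficient of variation (CV = SD/mean) of pixel intensity (see Figure 6). A minimal sketch of those statistics; the function name and the synthetic example images are illustrative, not taken from the report:

```python
from statistics import mean, pstdev

def image_statistics(pixels):
    """Mean, standard deviation (SD) and coefficient of variation
    (CV = SD / mean) of a greyscale image's pixel intensities.
    `pixels` is a 2-D list of intensity values (one list per row)."""
    flat = [p for row in pixels for p in row]
    m = mean(flat)
    sd = pstdev(flat)  # population SD over all n pixels
    return m, sd, sd / m

# A uniform (well-dispersed) image: SD and CV are zero.
uniform = [[100.0] * 8 for _ in range(8)]
# A patchy image (dark aggregates, bright plasma gaps): SD and CV are large.
patchy = [[40.0 if (i + j) % 2 == 0 else 160.0 for j in range(8)]
          for i in range(8)]

print(image_statistics(uniform))  # (100.0, 0.0, 0.0)
print(image_statistics(patchy))   # (100.0, 60.0, 0.6)
```

The intuition is that aggregation separates an image into dark cell clumps and bright plasma gaps, widening the intensity distribution and raising the SD and CV.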
2 Background and Literature Survey
2.1 Blood Rheology
Blood is a non-Newtonian concentrated suspension consisting of formed (cellular and cell-
derived) elements suspended in plasma, the ground substance of blood. Non-Newtonian
signifies that its viscosity varies with shear rate. Blood typically accounts for about 6-8% of
the body weight of a healthy person, giving it a volume of about 4.5-6 litres. It is a solid-liquid
suspension, where the formed elements can be considered as the solid phase and
plasma the liquid phase. As a result, the fluidity of blood at any shear rate and temperature
is determined by the rheological properties of plasma, the cellular elements and the volume
fraction of the cellular elements (haematocrit) [1].
Plasma is composed of 92% water, 7% plasma proteins (e.g. fibrinogen) and 1% other
solutes, giving it a density just slightly greater than that of water. Its main role is to
transport formed elements, organic and inorganic molecules, and waste throughout the
circulatory system. With a viscosity of about 1.2-1.4 mPa.s, plasma's role as the suspending
medium of the cellular elements in blood means a change in its viscosity has a direct effect
on the viscosity of blood [2]. The level of plasma viscosity is noted to be a good indicator of
pathophysiological conditions that are associated with acute phase reactions. Acute phase
reactants such as fibrinogen contribute significantly to the non-specific increase in plasma
viscosity in disease processes [2] (up to 5-6 mPa.s in disease states associated with
paraproteinemias).
Formed elements are the blood cells and cell fragments [3]: platelets, white blood cells
(WBCs) or leukocytes, and RBCs. Platelets are small, irregularly shaped cell fragments
which contain enzymes and factors that the body uses for blood clotting. WBCs can be
categorised into granulocytes (which have many stained granules) and agranulocytes
(which have few to no granules). They are components of the immune system that carry out
two main functions: helping to protect the body by defending against invading pathogens,
and clearing up toxins, wastes and abnormal or damaged cells [3]. Leukocytes and platelets
together form only 0.1% of the formed elements and as a result have little contribution to
the rheology of blood except in small vessels (e.g. in the microcirculation). Erythrocytes, which
make up the remaining 99.9% of the formed elements, obtain their red colour from the
protein haemoglobin (about 32% of their weight). The RBC has a biconcave disc shape and
serves to transport oxygen and carbon dioxide between the lungs and the tissues of the body.
In its unsheared form, it has a diameter of approximately 6-8 μm, a surface area of 130 μm2
and a volume of 98 μm3 [1]. Normal RBCs in human blood have a life span of 100-120 days.
As the most numerous formed elements in blood, RBCs have an enormous influence on the
rheological properties of blood (as will be discussed later).
The haematocrit (HCT) or packed cell volume (PCV) is the volume fraction of cellular
elements in blood. It normally averages about 45% in adult men and 40% in adult women;
the discrepancy is due to the promotion of RBC production by androgens and the inhibitory
effects of estrogens [3]. The fact that the haematocrit is often reported as the volume of
packed red cells (VPRC) or PCV reflects the predominance of RBCs in blood. The haematocrit
is easily affected by various factors, such as dehydration, which increases it through the
reduction of plasma, or internal bleeding, which decreases it through the loss of RBCs [3].
Under laminar flow, blood viscosity is higher than plasma viscosity because the cellular
elements disturb the flow streamlines [1]. This is reflected by the relative viscosity of
blood (blood viscosity divided by plasma viscosity) [1]. The disturbance of the flow
streamlines becomes more pronounced as the concentration of cellular elements increases,
raising the blood viscosity and showing its dependence on the concentration of the cellular
elements (haematocrit) [1].
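Since the relative viscosity is defined above as blood viscosity divided by plasma viscosity, the calculation is a single ratio; the numerical values below are illustrative, chosen within the typical ranges quoted in this section:

```python
def relative_viscosity(blood_viscosity, plasma_viscosity):
    """Relative viscosity of blood: blood viscosity divided by plasma
    viscosity. Both arguments must be in the same units (e.g. mPa.s)."""
    return blood_viscosity / plasma_viscosity

# With plasma at ~1.3 mPa.s and whole blood at ~3.9 mPa.s (high shear,
# normal haematocrit), the relative viscosity is approximately 3.
print(round(relative_viscosity(3.9, 1.3), 3))  # 3.0
```

Being dimensionless, the relative viscosity isolates the contribution of the cellular elements from changes in plasma viscosity itself.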
2.2 Biological Nature of Erythrocytes
RBCs are specially adapted for their main function of transporting oxygen around the
circulatory system. The cells have a thin central region and a thick outer margin [3], giving
them their biconcave disc shape. RBCs lack several of the organelles that most other cells
possess: they lose their mitochondria, ribosomes and nuclei as they form. The absence of a
nucleus and ribosomes means that RBCs cannot synthesise proteins or go through cell
division. Furthermore, the lack of mitochondria means that they rely on anaerobic
respiration to obtain energy. This characteristic is advantageous, as it ensures that all the
oxygen being carried by the RBCs is delivered to peripheral tissues rather than being
consumed by RBC mitochondria.
RBCs are formed in the bone marrow or myeloid tissue in a process known as erythropoiesis.
The main blood cell production occurs in red marrow, which is found in portions of the
vertebrae, sternum, ribs, skull, scapulae, pelvis and proximal limb bones [3]. Yellow marrow
is a fatty tissue in other marrow areas that can convert to red marrow to increase RBC
formation in cases of extreme stimulation, such as heavy and continuous blood loss [3]. As
RBCs form, they go through various stages of maturation. In the earliest stage, the
immature RBCs (which are still synthesising haemoglobin) are known as erythroblasts.
The reticulocyte forms after about 4 days of differentiation and haemoglobin production.
It enters the circulation after 2 days in the bone marrow, accounting for up to 0.8% of
erythrocytes in the circulatory system. Reticulocytes fully mature after spending 24 hours
in circulation and are then identical to other mature RBCs [3].
The severe physical strain RBCs undergo as they move through the circulatory system, in
addition to their lack of repair mechanisms, contributes to their relatively short lifespan
(120 days). At the end of its lifespan, an RBC either ruptures or is engulfed by WBC
phagocytes [3]. A damaged or ruptured RBC has its haemoglobin broken down into small
subunits that are passed through and filtered by the kidneys. About 90% of RBCs do not
survive long enough to rupture (haemolyse); WBC phagocytes monitor their condition,
recognise them and engulf them before they haemolyse [3]. RBC fragments and haemoglobin
are removed from circulation and the haemoglobin molecules are then recycled [3].
2.3 Effect of Erythrocyte Mechanical behaviour on Blood Rheology
The sheer number of RBCs among the formed elements of blood means they have more
influence on blood rheology than any other cellular element. The unique morphology
(structure) of erythrocytes gives them special mechanical properties that, in addition to the
haematocrit, affect the flow streamlines of blood (i.e. its viscosity). RBCs respond to applied
forces by changing their shape and geometry (deforming) according to the magnitude and
orientation of those forces [2]. The extent to which deformation occurs under a given force
is known as deformability. Under shear stresses high enough to deform them, erythrocytes
orient themselves with the flow streamlines and behave like fluid drops under most flow
conditions. Therefore, under high shear rates, RBC deformation and orientation are the
main factors that affect blood viscosity [2].
RBCs are viscoelastic cells as they exhibit both viscous and elastic properties. This allows for
a reversible shape change after deformation [2]. However, under pathological influences and
excessive forces, RBCs can exhibit plastic behaviour and undergo permanent deformation.
The dynamic mechanical behaviour of RBCs is primarily governed by the membrane [2]. The
lipid layer of the membrane is purely viscous, with no contribution to the elasticity of RBCs;
the membrane cytoskeleton is mainly responsible for maintaining the biconcave discoid
shape. Other important contributors to the deformability of RBCs are the cytoplasmic
viscosity (dependent entirely on haemoglobin concentration) and the biconcave discoid
geometry, which provides excess surface area for the contained volume and allows shape
change without increasing the surface area of the membrane [2].
2.4 Measurement of RBC Aggregation
As previously stated, under the influence of pathological and physiological processes,
prominent changes may be detected in RBC aggregation behaviour. Aspects of RBC
aggregation that reflect these changes include the extent of aggregation, the duration of
aggregation and the magnitude of the forces that make RBCs aggregate [2]. As such, it is
useful to be able to quantify these alterations from a clinical and diagnostic point of view.
Various methods and techniques exist for the quantification of RBC aggregation, each with
advantages and disadvantages that should be considered. These methods can be categorised
as either static or dynamic measures of RBC aggregation: static methods are usually
concerned with studying the sedimentation rate, and dynamic methods with studying the
reversible nature of aggregation under different flow conditions. The methods can also be
classified as direct (observing RBCs microscopically as they form aggregates) or indirect
(measuring rheological and other factors that are affected by RBC aggregation).
2.5 Erythrocyte Sedimentation Rate (ESR)
The ESR is one of the earliest methods of measuring RBC aggregation and is the most
frequently used in laboratory tests [2]. It consists of observing the sedimentation of RBCs in
a glass tube [1] for at least an hour and is primarily used by physicians as a non-specific
indicator of inflammation [1] (i.e. results are interpreted in terms of the degree of
inflammation rather than aggregation). It exploits the tendency of RBCs to settle more
quickly in some disease states because of the increase of plasma proteins such as fibrinogen.
The ESR measures the rate at which RBCs in anti-coagulated blood settle under the influence
of gravity in a narrow vertical tube. The height, per hour, of the plasma column free of RBCs
in the vertical tube determines the ESR. If the RBCs are separate from each other, the ESR
will be low; in plasma, however, RBCs form rouleaux, which enhance the ESR.
There are two main methods for measuring the ESR (Wintrobe and Westergren), the most
significant difference being the type of anti-coagulant and tubes used. After blood is poured
into the sedimentation tube, the aggregation of RBCs can be considered the first phase in
the mechanism of the ESR; it occurs within a few minutes of filling the tube. The second and
most important phase concerns the rate of sedimentation, which may continue for up to
2 hours after the initial phase. A third phase with slower sedimentation follows, due to the
compaction of RBC aggregates [2]. The magnitude of the sedimentation rate in phase 2 is
determined by the extent of RBC aggregation that occurs in the first phase. The ESR has also
been found to have an inverse relationship with the haematocrit. If the haematocrit is low
(i.e. less RBC relative to plasma), then the aggregates will have less interaction between
each other and their settling is less hindered [2], which
results in an increase of the ESR. Hence, it is imperative to adjust the haematocrit to the
standard value, 40% for women and 45% for men, during ESR measurement [2].
2.5.1 Evaluation of ESR Method
1. The ESR is not an optimal choice for measuring RBC aggregation, as it is a time-consuming
procedure which requires at least an hour before the correct value can be
obtained. Also, it is not performed under flow, which is the physiologically relevant
situation.
2. The ESR does not indicate the time course of aggregation; it only indicates the
extent of aggregation, which determines the rate of settling.
3. The strong haematocrit dependence of the ESR is a further disadvantage. For
instance, it is difficult to distinguish between the effects of anaemia (low
haematocrit) and large rouleaux formation on an increased ESR.
2.6 Low Shear Viscometry
Viscometry is an indirect and relatively simple method of measuring aggregation in RBC
suspensions that exploits the fact that RBC aggregation is the main determinant of blood
viscosity at low shear rates [1]. Viscometers such as the capillary tube or the Couette
viscometer may be used to analyse the viscosity of blood. Several approaches have been
taken to develop aggregation indices (AI), which quantify the extent of aggregation, based on
viscometry. For example, Bull et al. [4] quantified aggregation as:
2.6.1 Evaluation of Low Shear Viscometry Method
1. Experimental studies have shown that aggregation indices based on low shear
viscometry correlate well with other methods of measuring RBC aggregation,
especially for RBCs with normal structural/cellular properties.
2. Baskurt and Meiselman [5] studied the effects of geometric and mechanical
alterations of RBCs on the low shear viscometric behaviour of RBC suspensions and
compared it with other independent measures of RBC aggregation; they found that
low shear viscometry may not always be appropriate for developing an index of RBC
aggregation. This supported work from other studies, such as that of Lacombe and
Lelievre [6]. Baskurt and Meiselman explained the findings as a result of low shear
viscometric behaviour being affected by the cellular, rheological and morphological
properties of RBCs, independently of aggregation effects. The paper recommended the
use of additional measures of RBC aggregation when shape alterations of RBCs are
expected or possible.
2.7 Microscopic Aggregation Index (MAI)
In this measurement method, anti-coagulated blood samples are observed directly under a
microscope to determine the extent of aggregation. It is quantified by estimating the average
number of RBCs per aggregate [2]. It is carried out by diluting RBC suspensions to a haematocrit
of 0.01 l/l in both an aggregating and a non-aggregating suspending medium. After a
standardized 15-minute settlement period, a count of the number of cellular units (any
rouleaux or single cells) in each medium is made at a constant temperature of 37 degrees
Celsius using a haemocytometer in a humidified chamber. The MAI is calculated by using the
cell count in the non-aggregating medium as an estimate of the total number of RBCs in the
microscopic area counted [2]. The RBC count is then divided by the number of cellular units
in the aggregating suspension to provide the MAI. The MAI equals 1 in the absence of
aggregation and increases with the extent of aggregation.
2.7.1 Evaluation of MAI Method
1. The MAI is a simple procedure that can be performed with basic lab skills. It can also
be useful for detecting the level of aggregation in non-human, lab animals that have a
level of RBC aggregation so low that RBC aggregometers used in haemorheology
laboratories cannot detect them [2].
2. The MAI is however similar to the ESR in that it only gives an estimate of the extent
of RBC aggregation but not its time course [2]. Counting the number of cellular units
can also be time consuming [2].
2.8 Photometric Methods
Photometric methods using photometric rheoscopes have been developed to measure RBC
aggregation based on the intensity of the light backscattered from or transmitted through
RBCs under defined shearing conditions [2]. A light beam is either backscattered
(reflected) or transmitted, depending on whether it hits an RBC or travels through
the gaps between RBCs in a suspension. The time course of light transmission/reflection
intensity is known as the syllectogram [2].
Under high shear stresses, RBCs deform into an elongated form and orient along streamlines,
leaving plasma gaps sufficient for light to pass through. Under low shear stresses, RBCs
are able to recover their shape and orientation, allowing them to form rouleaux and leaving
gaps between aggregates. As light transmission is a function of these gaps, the intensity of
the transmitted/reflected light can be used to measure RBC aggregation. The syllectogram is
able to show the change from high to low (or zero) shear rates. Based on the aggregation
phase of the curves, a number of parameters may be calculated such as time course and
overall extent of aggregation. For example, the syllectogram curve may be used to find the
total aggregation at a point by finding the difference between the light intensity at that
point and at the start of the aggregation process [2]. Another parameter that can be
calculated is the aggregation half-time which measures the rate of aggregation [1]. It
represents the time required to reach the transmitted or reflected light intensity that
corresponds to one-half of the total change at the end of the measurement period. The
aggregation half-time can be obtained by representing the syllectogram with double
exponential equations given by:

    I(t) = a + b*exp(-t/Tfast) + c*exp(-t/Tslow)    (2)

where Tfast and Tslow are the time constants of the fast and slow phases of the aggregation process.
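As an illustration of how the half-time could be extracted from the double-exponential syllectogram model, the following Python sketch evaluates the curve and locates the half-time numerically by bisection. The parameter values and function names are invented for the example (the report's processing is done in MATLAB):

```python
import math

def syllectogram(t, a, b, c, t_fast, t_slow):
    # Double-exponential model: I(t) = a + b*exp(-t/Tfast) + c*exp(-t/Tslow)
    return a + b * math.exp(-t / t_fast) + c * math.exp(-t / t_slow)

def half_time(a, b, c, t_fast, t_slow, t_end=60.0):
    """Time at which the intensity has completed half of its total change
    between t = 0 and t = t_end, found by bisection (assumes b and c have
    the same sign, so I(t) is monotonic)."""
    i0 = syllectogram(0.0, a, b, c, t_fast, t_slow)
    i_end = syllectogram(t_end, a, b, c, t_fast, t_slow)
    target = i0 + 0.5 * (i_end - i0)
    lo, hi = 0.0, t_end
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        # Same side as the starting intensity -> half-way point lies later.
        if (syllectogram(mid, a, b, c, t_fast, t_slow) - target) * (i0 - target) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With Tfast = Tslow = T the model collapses to a single exponential, whose half-time is T*ln(2); this serves as a sanity check for the routine.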
2.8.1 Evaluation of Photometric Methods
1. Photometric methods are an effective means of measuring RBC aggregation,
although care must be taken when comparing measurements taken from different
instruments, as there can be discrepancies between them.
2.9 Computerized Image Analysis Techniques
This method involves the quantification of aggregation through the numerical processing of
recorded microscopic images. The processing is carried out by specially designed software
programs for image processing. This technique allows RBC aggregation to be measured either
after full aggregation or flow at various shear rates. Various numerical techniques may be
used to obtain various characteristics of aggregation such as aggregate size and morphology.
2.9.1 Evaluation of Computerized Image Analysis Techniques
1. The main disadvantage of this method is that there can be some loss of information
due to overlapping cells at normal RBC concentrations [7]. However, this can be
overcome by taking into account the geometrical characteristics of the apparent
plasma gaps [7].
2. This method is effectively a significant enhancement of the MAI: it speeds up the
laboratory work through automation of the procedure [2] and allows RBC
aggregation to be studied under various shear stresses (dynamic conditions).
Computerized image analysis was therefore chosen for this project because,
not only is it fast and accurate, it also enables the direct study of RBC aggregation
under dynamic flow conditions.
2.10 Other Methods
Other methods of measuring RBC aggregation include the use of MRI scans, ultrasound
techniques and electrical conductometers. More work however needs to be done on these
methods to make them an optimal solution for RBC aggregation measurement.
3 Background Theory and Theoretical Development
3.1 Digital Image Processing
Computerised Image analysis involves the quantification of aggregation through the
numerical processing of recorded microscopic images. The processing is carried out by
specially designed software programs for image processing, e.g., MATLAB with the Image
Processing Toolbox.
The images used for processing are captured using a digital camera. Digital cameras consist
of a photosensitive silicon device (either CCD or CMOS) that converts light into digital signals.
The sensor consists of an array of picture elements (pixels), each of which produces an output
proportional to the intensity of the light falling on it. Each pixel represents a sampled point
from the original scene [8]. A digital image (monochromatic) may be considered as a 2-
dimensional function, f(x, y), that gives the brightness of an image at any given point [8]. The
brightness values are integer values that range from 0 (black) to 255 (white). A digital image
differs from a photograph in that the values are discrete rather than continuous. It is a large array
of sampled points from the continuous image, each having a particular quantized brightness.
These points are the pixels which constitute the digital image, and are stored in computer
memory as a 2-D array of integers [8].
The four basic types of digital images are: binary, greyscale, RGB (or true colour) and
indexed. Binary and greyscale images are most commonly used in image analysis algorithms,
due to their comparative ease of processing and low memory consumption. In a binary
image, the pixels are either 0 or 1, representing black and white, whilst in a greyscale image
each pixel has a shade of grey ranging from 0 (black) to 255 (white); that is, each pixel is
represented by one byte (8 bits). In this project, greyscale images were used as they gave
fuller information on the intensity levels present in the images.
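The 0-255 integer convention above maps onto the [0, 1] floating-point convention used later in the project. A minimal Python sketch of the conversion (analogous to what MATLAB's im2double does for 8-bit images; the toy image is invented):

```python
def im2double_like(image_uint8):
    """Scale an 8-bit greyscale image (integers 0..255, stored as a 2-D
    array) to floating-point values in [0, 1]."""
    return [[p / 255 for p in row] for row in image_uint8]

# A 2x3 toy image: black, mid-grey and white pixels.
img = [[0, 128, 255],
       [64, 192, 255]]
print(im2double_like(img))
```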
3.2 Basic Statistical Analysis
Basic statistical parameters such as the mean, standard deviation and coefficient of variation
are prominent as useful quantitative tools in the RBC aggregation literature. In
particular, they have been used to develop aggregation indices in studies concerning
computerized image analysis, such as in [9], where Kavitha and Ramakrishnan used
the standard deviation of the coefficients generated from a 2-D wavelet transform
to develop an aggregation size index to measure RBC aggregation. The aggregation size index
developed was found to be effective as it was similar and consistent for a chosen size of
aggregates [9], although the technique was not shown to be applicable to in vivo blood flow
conditions.
The use of basic statistical analysis is also evident in [10], where Xu et al used Spectral Domain
Doppler Optical Coherence Tomography to measure RBC aggregation. They proposed the
standard deviation of the Doppler frequency spectrum of RBCs in flowing blood [10] as an
aggregation parameter and used it to develop an aggregation index. A significant correlation
was found between the SD value of Doppler frequency spectrum and RBC aggregation.
Although the method was found to be effective in dynamic flow conditions, it necessitated
the use of expensive, highly specialized equipment.
In this project, the possibility of using the mean, standard deviation and coefficient of
variation of pixel intensity to develop an aggregation index was explored in order to gain
greater understanding of the computerized image analysis method. By visually examining the
images of a highly aggregating sample and a non-aggregating sample (shown below), it was
assumed that simple statistical analysis such as the mean, standard deviation and the
coefficient of variation could be used to reveal the inherent differences in pixel intensity that
existed in both images.
The mean, which is the arithmetic average of the pixel intensities, is given formally as:

    μ = (1/N) Σ xi

where xi is the intensity of pixel i and N is the total number of pixels.
The standard deviation (SD) indicates the variations of the pixel intensities from the mean
intensity. A high standard deviation indicates that the pixel intensities are far from the mean
while a low standard deviation indicates that they are close to the mean intensity. The
sample standard deviation is defined as:

    s = sqrt( (1/(N - 1)) Σ (xi - μ)² )
The coefficient of variation (CV) is useful in helping to understand the standard deviation in
the context of the mean. It is the ratio of the standard deviation to the mean, defined as:

    CV = s / μ

Figure 1: Image of non-aggregating sample
Figure 2: Image of aggregating sample
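The three statistics above, applied to pixel intensities, can be sketched in Python as follows (the project's own implementation is in MATLAB; the intensity values below are invented):

```python
from statistics import mean, stdev

def intensity_statistics(pixels):
    """Mean, sample standard deviation (divisor N - 1) and coefficient of
    variation of a flattened list of pixel intensities in [0, 1]."""
    m = mean(pixels)
    s = stdev(pixels)
    return m, s, s / m

# A nearly uniform patch vs. one whose bright plasma gaps raise the spread.
uniform = [0.60, 0.62, 0.61, 0.63, 0.60, 0.62]
gappy   = [0.55, 0.95, 0.50, 0.98, 0.52, 0.96]
print(intensity_statistics(uniform))
print(intensity_statistics(gappy))
```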
3.3 Image Segmentation
Segmentation is an operation whereby an image is partitioned into its constituent parts [8]. It
is a method of isolating a region of interest (e.g. object or boundary), which is useful for
feature detection. The application of segmentation methods to nontrivial images is however
a complex and computationally expensive endeavour in image processing. Techniques used
for segmentation include edge detection, the Hough transform and thresholding.
Thresholding is one of the most popular and intuitive approaches to image segmentation due
to its speed and simplicity in execution. It involves the use of a threshold, T, chosen from an
original image in order to split the image pixels into black or white according to whether [8]
the pixel grey level intensity is greater than or less than the threshold. This effectively
extracts an object in the image from the background. Fundamentally, for an image f(x, y), a
thresholded image g(x, y) is defined as [11]:

    g(x, y) = 1 if f(x, y) > T, and g(x, y) = 0 if f(x, y) ≤ T
The conventional method of thresholding where a constant T is used for all pixels is known as
global thresholding. This approach is practical when the images being considered have an
even background illumination. For uneven background illumination, local adaptive
thresholding may be used. This is a method where the threshold is allowed to change
dynamically in order to accommodate the uneven background illumination. However, this
method can prove to be computationally intensive, particularly when simultaneously
processing a large number of images. A common approach is to apply pre-processing
methods to correct the uneven background illumination, and then to apply a global threshold
[11]. This method can be shown to be equivalent to applying a local adaptive threshold
function T(x, y) to f(x, y) [11]:

    g(x, y) = 1 if f(x, y) > T(x, y), and 0 otherwise, for T(x, y) = T0 + fb(x, y)

where fb(x, y) is the estimated background illumination and T0 is a global threshold. The
pre-processed image is given by fp(x, y) = f(x, y) - fb(x, y), and g(x, y) is the result of applying
the global threshold T0 to fp(x, y) [11]. Selecting a threshold value can be something of a
black art; however, in order to develop a robust algorithm, an automatic method should be
used to select the threshold value. Although several algorithms exist for selecting an
automatic threshold, Otsu's method is one of the most commonly used and was implemented
in this project in order to take advantage of the Image Processing Toolbox on MATLAB.
Otsu's method is based on obtaining a threshold that minimises the weighted within-class
variance [12]. It assumes a bimodal histogram and places the threshold between the peaks. It
is implemented on MATLAB with the function graythresh.
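A minimal hand-rolled Python sketch of Otsu's method operating on a grey-level histogram is given below. The project itself relies on MATLAB's graythresh (which returns a normalised level in [0, 1]); this version, and its toy bimodal histogram, are for illustration only:

```python
def otsu_threshold(histogram):
    """Return the grey level (0..len(histogram)-1) that maximises the
    between-class variance -- equivalently, minimises the weighted
    within-class variance."""
    total = sum(histogram)
    sum_all = sum(g * h for g, h in enumerate(histogram))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(histogram):
        w0 += h                      # pixels at or below level t
        if w0 == 0:
            continue
        w1 = total - w0              # pixels above level t
        if w1 == 0:
            break
        sum0 += t * h
        mu0 = sum0 / w0              # mean of the dark class
        mu1 = (sum_all - sum0) / w1  # mean of the bright class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A bimodal toy histogram: dark RBCs around level 2, bright plasma around 7.
hist = [0, 5, 20, 5, 0, 2, 10, 30, 8, 0]
print(otsu_threshold(hist))  # a level in the valley between the two peaks
```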
The use of image segmentation in the RBC aggregation literature, in the context of
computerised image analysis, is evident in [13, 14]. In [13], Kaliviotis and Yianneskis used
image segmentation to develop a novel aggregation index, defined as the ratio of the
percentage of the area occupied by the RBCs to the haematocrit. The aggregation index
was found to produce results in agreement with physical insight and previous work. This
project also explored the use of image segmentation to develop an aggregation index. An
aggregation index is defined similarly to that developed in [14], where an aggregation index
Aa was given as the ratio of the RBC-free (plasma) area to the maximum expected RBC-free area.
In this project, Otsu's method was incorporated into an algorithm to develop an aggregation
index defined as the ratio of the white to black pixels in an image:

    AI = Wp / Bp

where Wp and Bp are the numbers of white and black pixels in the image, respectively.
This definition allowed the measurement of aggregation through the study of the increasing
plasma gaps (represented as white pixels) as the aggregation increased. It differs from the
index given in [14] in that the white pixels are in effect normalised by the number of black
pixels, which represent the RBCs, as opposed to by the maximum expected RBC-free area [14].
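The white-to-black ratio can be illustrated with a small Python sketch on toy binarized frames (the actual index is computed in MATLAB over full images; the frames here are invented):

```python
def threshold_ai(binary_image):
    """Aggregation index from a binarized image: white pixels (plasma gaps)
    divided by black pixels (RBCs)."""
    white = sum(row.count(1) for row in binary_image)
    black = sum(row.count(0) for row in binary_image)
    if black == 0:
        raise ValueError("no RBC (black) pixels in image")
    return white / black

# Toy binarized frames: aggregation widens the plasma gaps, so the
# white-to-black ratio rises.
dispersed  = [[0, 1, 0, 1], [1, 0, 1, 0]]   # 4 white / 4 black -> AI = 1.0
aggregated = [[1, 1, 0, 1], [1, 0, 1, 1]]   # 6 white / 2 black -> AI = 3.0
print(threshold_ai(dispersed), threshold_ai(aggregated))
```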
3.4 Image Correlation Coefficient
Correlation methods in image analysis are used to exploit regularly occurring features or
characteristics in an image. The objective is to use the correlation coefficient to assess the
level of similarity between a region of interest and other regions in the image; thus providing
useful comparative information between structural features in the image sample [15].
The correlation value, r, lies between 1 and -1. A value of 1 or -1 indicates maximal
correlation, while a value of 0 indicates minimal correlation. A negative sign indicates that,
relative to their mean values, the two images being correlated vary in opposite directions.
The 2-D image correlation coefficient for two images A and B is defined as:

    r = Σm Σn (Amn - Ā)(Bmn - B̄) / sqrt( [Σm Σn (Amn - Ā)²] [Σm Σn (Bmn - B̄)²] )

where Ā and B̄ are the mean intensities of A and B.
Support for the use of the correlation coefficient in the study of RBC aggregation comes from
[16], where lower aggregation levels resulted in decreased correlation and were shown to
correspond to a greater variation in intensity distribution between consecutive images.
In this project, an investigation was carried out to ascertain whether the correlation
coefficient could be used to develop an aggregation index.
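A minimal Python equivalent of MATLAB's corr2, which implements the definition above, is sketched below; the 2x2 images are invented for illustration:

```python
import math

def corr2(a, b):
    """2-D correlation coefficient of two equally sized images
    (same definition as MATLAB's corr2)."""
    n = sum(len(row) for row in a)
    mean_a = sum(map(sum, a)) / n
    mean_b = sum(map(sum, b)) / n
    num = den_a = den_b = 0.0
    for row_a, row_b in zip(a, b):
        for pa, pb in zip(row_a, row_b):
            da, db = pa - mean_a, pb - mean_b
            num += da * db
            den_a += da * da
            den_b += db * db
    return num / math.sqrt(den_a * den_b)

x = [[0.1, 0.9], [0.2, 0.8]]
print(corr2(x, x))                                     # identical images: r = 1
print(corr2(x, [[1 - p for p in row] for row in x]))   # inverted image: r = -1
```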
3.5 Image Pre-processing
Image pre-processing concerns methods that are employed to improve an image by
suppressing unwanted noise or enhancing particular features so that the intended
processing task, i.e., analysing and extracting information, can take place. In this
report, an implementation of the MASK dodging filter as described in [17], and
homomorphic and top-hat filters as described in [12], are applied in order to correct
uneven illumination.
4 Analysis and Design
4.1 Preliminary Algorithm
A preliminary algorithm (refer to Appendix A for MATLAB program), using the mean, SD and
CV as aggregation indices was developed in order to gain understanding of the image analysis
problem. The algorithm took advantage of the tools and functions in the image processing
toolbox on MATLAB that allowed the images to be processed as 2-dimensional numerical
arrays. The algorithm initially processed the images shown in Figures 1 and 2. The imread
function was used to read both images. It was found that the images were of type indexed;
hence, the function ind2gray was used to convert them to greyscale.
In order to perform numeric computation on an image, it must be of class double. The
function im2double was used to achieve this. Greyscale images of class double are
represented by convention as a floating point number between 0 and 1. Before statistical
analysis of the images was performed, it was necessary to remove the black edges
from the corners of the image. This was achieved by indexing the image array and selecting
the pixel elements that were of interest. The values of the mean, SD and CV for the non-
aggregating sample were 0.6443, 0.0799 and 0.1240 respectively, and for the aggregating
sample they were 0.7983, 0.0914 and 0.1145. These results were reasonable, as the
aggregating sample contained plasma gaps which would have increased the intensity values
towards one. However, the differences in the results were slight.
In order to verify the results, the algorithm was used to process an extended series of
images. The images were of a blood sample sheared at shear rates of 100s-1 and 5s-1. The
sample was obtained from a healthy volunteer, washed twice with PBS, and re-suspended in
Dextran 2000 (1g/dl concentration) to induce aggregation. The study was approved by the
ethics committee (ref: 10-H0804-21). A sequence of 2043 images was captured at a
frequency of 30Hz, with the first 244 images at 100s-1 and the rest at 5s-1. The captured
images of typical RBCs and RBC aggregates at each shear rate are depicted below.
Figure 4: RBC image at a shear rate of 100 s-1, acquisition time of 3.7 s
Figure 3: RBC image at a shear rate of 5 s-1, acquisition time of 35.6 s
The initial algorithm was modified to read the images from the folder in which they were
contained. A for-loop was used to obtain the basic statistical measures for each image so
that time series plots could be obtained. A flowchart for the algorithm is given below:
Figure 5: Flowchart of Preliminary Algorithm
4.2 Preliminary Results
Figure 6 below shows a comparison between the mean, standard deviation and coefficient of
variation for the series of images. The image number has been converted into a measure of
time by normalising it with the frequency at which the images were obtained. The figure
shows that the statistical measures have low sensitivity in distinguishing between the
aggregating state and the non-aggregating state. The percentage differences between the
averages of both states were 4.07% for the mean, 5.28% for the standard deviation and
1.38% for the coefficient of variation.
Figure 6: Trend graph of the mean, SD and CV (pixel intensity aggregation indices vs time (s); legend: mean, standard deviation, coefficient of variation)
A min-max percentage difference value from the high shearing interval to the low shearing
interval may be used to assess the sensitivity of the mean index to the aggregation process.
For Figure 7, the min-max percentage difference is 9.07%, which is quite low. The close-up of
the mean trend graph indicates that the mean image intensity increases gradually, with a
fairly low gradient overall. However, the interval in which the high shear rate occurs shows a
similar gradient of increasing intensity to that in the interval with the lower shear rate, which
is unexpected. Furthermore, the graph seems to suggest that the interval before the high
shear rate has a lower level of aggregation than the interval of the higher shear rate. This is
of course not a true representation of the experimental process, and the inaccuracy is very
likely due to the scattered background light, which creates non-uniform background
brightness in the images.
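The exact formula for the min-max percentage difference is not stated explicitly in the report, but (max - min) / min * 100 reproduces the quoted 9.07% from the data-cursor values recoverable from Figure 7 (0.579 and 0.6315), so a hedged Python sketch under that assumption is:

```python
def min_max_percentage_difference(series):
    """Sensitivity measure inferred from the quoted figures:
    (max - min) / min * 100."""
    lo, hi = min(series), max(series)
    return (hi - lo) / lo * 100.0

# Data-cursor values from the mean trend graph (Figure 7):
print(round(min_max_percentage_difference([0.579, 0.6315]), 2))  # -> 9.07
```

The same formula also reproduces the 28.76% quoted later for the SD trend graph from its cursor values (0.1398 and 0.18), which supports the inferred definition.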
Figure 7: Close-up of mean trend graph (mean of image intensity vs time (s); data cursors at t = 2.1 s, mean = 0.579 and t = 61.27 s, mean = 0.6315)
The close-up of the standard deviation trend graph reveals that the standard deviation of the
intensity decreases with a low negative gradient during the interval of the high shear rate but
has a lower negative gradient at the lower shear rate. It has a min-max percentage difference
of 28.76%, which is much greater than that of the mean index. The trend graph is however
not self-consistent: although the pre-shearing interval has a higher SD than the high shearing
interval, which is expected, the pre-shearing state also has a higher SD than the low shearing
interval. In examining the captured images, it is clear that the low shearing interval should
show a higher level of aggregation than the pre-shearing interval. Hence, the veracity of the
output for this trend graph is questionable, and is also likely affected by the presence of
non-uniform scattered background light.
Figure 8: Close-up of SD trend graph (standard deviation of image intensity vs time (s); data cursors at t = 7.4 s, SD = 0.1398 and t = 8.367 s, SD = 0.18)
The close-up of the CV trend graph shows that it shares a near-identical trend with the SD
and does not provide any additional useful information. It has the second highest min-max
percentage difference, with a value of 27.44%.
4.3 Pre-processing Methods
From the preliminary results, it is apparent that the scattered background light has a
significant contribution to the variation of the intensity levels in the images. Hence, the use
of pre-processing methods to remove the scattered background light is necessitated. The
effects of 3 different pre-processing methods (refer to Appendix B for the pre-processing
programs) on an aggregating and a non-aggregating image sample are displayed below:
Figure 9: Close-up of CV trend graph (coefficient of variation of image intensity vs time (s); data cursors at t = 8.133 s, CV = 0.08396 and t = 39.27 s, CV = 0.107)
From the images above, the effect of the 3 different pre-processing methods used in order to
correct the uneven illumination may be assessed qualitatively. The MASK dodging filter
appears to give the most consistently acceptable result of the 3 methods applied. There is a
dramatic contrast in the effect of the top-hat filter on the 2 sample images, with the
aggregating sample showing a worse case of uneven illumination than in the original image.
A closer inspection of the homomorphic and the MASK dodging filters is given in Figure 12
below for the aggregating sample image.

Figure 11: Effect of 3 pre-processing methods on a non-aggregating sample (original image, MASK dodging filter, top-hat filter, homomorphic filter)
Figure 10: Effect of 3 pre-processing methods on an aggregating sample (original image, MASK dodging filter, top-hat filter, homomorphic filter)
On closer inspection, it is clear that the MASK dodging filter gives a better correction of
the uneven illumination problem than the homomorphic filter. In addition, the
homomorphic filter is significantly more computationally intensive than the MASK dodging
filter. Figure 13 below displays the histogram of the original non-aggregating sample.
Quantitatively, the effect of the pre-processing methods on the original image may be
analysed by examining the histograms produced by the different methods. Figures 14 and 15, shown
Figure 13: Histogram of non-aggregating image (pixel count vs intensity, 0 to 1)
Figure 12: Homomorphic and MASK dodging filters applied to the aggregating sample image
below, display the histograms for the MASK dodging and top-hat filters. The histogram for the
homomorphic filter is unobtainable as the filtered image contains imaginary parts. The
histograms show that both filters spread out the intensity values in order to make the original
image more even. However, Figure 14 retains more of the shape of the original histogram
than Figure 15, thereby confirming the superiority of the MASK dodging filter. Hence, the
MASK dodging filter is utilised in developing the aggregation indices.
Figure 14: Histogram of MASK dodging filter for non-aggregating sample (pixel count vs intensity, 0 to 1)
Figure 15: Histogram of top-hat filter for non-aggregating sample (pixel count vs intensity, 0 to 1)
4.4 Effect of MASK Dodging Filter on Preliminary Results
Figure 16 below shows the effect of the pre-processing method on the aggregation indices. In
comparison to the results without pre-processing, it is evident that the mean intensity trend
graph is in about the same range of the y axis as it previously was. However, the trend graphs
for SD and CV have significantly shifted up the y axis.
Figure 16: Preliminary aggregation indices with MASK dodging filter (pixel intensity aggregation indices vs time; legend: mean, standard deviation, coefficient of variation)
Figure 17: Close-up of pre-processed mean trend graph (mean of pixel intensity vs time (s); data cursors at t = 7.767 s, mean = 0.5339 and t = 12.7 s, mean = 0.6007)
Figure 17 above gives a close-up of the mean trend graph. The graph is as expected, with the
aggregating state having a higher pixel intensity than the non-aggregating state, due to the
plasma gaps contributing intensity values closer to one. The min-max percentage difference
for the graph is 15.6%, which is much higher than previously.
During the short interval of the transition from high shear to low shear, a reference point
typically associated with a dip in light transmittance (or a peak in light reflectance) [2] occurs.
This reference point, t0, is the point at which shape recovery occurs (RBC re-aligning and re-
organising) and represents the starting point of aggregation. In photometric methods, it is
used to calculate a parameter known as the amplitude (the difference in light intensity at any
point with respect to t0) [2], which represents the total extent of aggregation at any
particular period. This same concept may be used to indicate the sensitivity to the fast phase
of the aggregation process, which typically occurs within 5 seconds. This may be determined
by finding the percentage difference between the value at t0 and the value 5 seconds after.
For Figure 17, the fast response percentage difference is approximately 12.51%. Thus it is
evident that the index works very well at distinguishing between the non-aggregating and
aggregating states. However, it should be noted that the characteristic minimum signifying
shape recovery is not visible on the graph, and furthermore the interval of the low shear rate
has a very low gradient, indicating that the mean index is not very sensitive to the
aggregation state.
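The fast response percentage difference appears consistent with the simple percentage change from t0 to 5 s later: applying it to the data-cursor values recoverable from Figure 17 (0.5339 and 0.6007) reproduces the quoted 12.51%. A hedged Python sketch under that assumption:

```python
def fast_response_percentage_difference(i_t0, i_5s_later):
    """Percentage change in an index from the shape-recovery point t0 to
    5 s later, capturing the fast phase of aggregation."""
    return (i_5s_later - i_t0) / i_t0 * 100.0

# Data-cursor values from the pre-processed mean trend graph (Figure 17):
print(round(fast_response_percentage_difference(0.5339, 0.6007), 2))  # -> 12.51
```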
The SD trend graph has the shape of the mean graph reflected about a horizontal line. It has
a min-max percentage difference of 32.36%, which has increased from before. Although the
peak of the graph is not at the point at which shape recovery occurs, the graph clearly shows
a distinction between the two states, with an approximate fast response percentage
difference of 19.78% separating them. However, this is not in the manner expected, as the
graph indicates that the non-aggregating state images have a higher standard deviation than
the aggregating state images. This outcome might be an inherent effect of the pre-processing
method used, as the ranges of values are higher than without pre-processing. Nevertheless,
the use of the SD in this manner is not very effective at measuring the increase of
aggregation in the aggregating state.
Figure 18: Close-up of pre-processed SD trend graph (SD of pixel intensity vs time (s); data cursors at t = 8.3 s, SD = 0.4039 and t = 13.3 s, SD = 0.3372)
The CV graph is heavily influenced by the SD graph; as a result, both graphs have the
same distinct shape. It has a min-max percentage difference of 16.14%, which is significantly
lower than the value of 27.44% before pre-processing. The CV graph is however less
effective at distinguishing between the two states than the SD, with a fast response
percentage difference of 7.04% compared to 19.78%.
4.5 Preliminary Study Evaluation
Following on from the preliminary study, it was evident that the use of basic statistical
measures as an inherent measure of the aggregation phenomenon is not an effective
solution. Despite this, the literature survey showed that the SD can be used to develop an
aggregation index. Hence, the preliminary algorithm was further developed in order to
assess the use of the SD in creating an aggregation index.
Figure 19: Close-up of pre-processed CV trend graph (CV of pixel intensity vs time (s); data cursors at t = 8.433 s, CV = 0.2173 and t = 13.43 s, CV = 0.203)
4.6 Experimental Design
The experimental method used was the same as that of the preliminary algorithm, with the
blood sample adjusted to a haematocrit of 45%. The experimental set-up was made up of 3
main components: the Linkam CSS450 optical shearing system, the Olympus BX51
microscope (with 10x and 50x lenses) [18] and the JVC TK-C1380 colour integrated camera.
The images were captured with the aid of the MicroVideo software program.
Before loading the sample into the optical shearing system, it was essential to ensure that
the samples were well mixed [17]. This was achieved by giving the sample a gentle shake for
30 seconds. The optical shearing system allowed the computer control of the location,
temperature and rotational speed of the bottom glass plate. The centre of the plates was
placed at a radius of 7.5 mm away from the microscope and camera setup, and the shearing
gap was 30 μm. Due to the small shearing gap, it was assumed that aggregates are formed
mostly in 2-D [17]. The Adobe Premiere 5 software program was used to create a video
image series of 2043 images for processing with the MATLAB algorithm.
Three different aggregation indices were proposed in order to develop a method of
quantifying the aggregation process.
1. An index was created by measuring the SD locally within each image; different
window (square region of a given pixel length) sizes were used to divide the image up.
The algorithm attempted to overcome any inherent effects of the pre-processing
method by calculating the SD on a window-by-window basis, as opposed to using the
entire image per SD calculation. The SD of all the window SDs in an image was then
calculated to give the overall standard deviation of the image. This was in an attempt
to increase the accuracy of the SD method.
2. A second index was created by using Otsu's method to threshold the images. An
algorithm was developed to create an aggregation index defined as the ratio of the
white pixels (Wp) to the black pixels (Bp) in an image.
Figure 20: Experimental Setup, Adapted from [17] with Permission
3. The final index was created by using the correlation method to assess how similar
specific regions in the images were to each other. This was in order to assess whether
the correlation method could be used to detect the similarity of the non-aggregating
images and the dissimilarity of the aggregating images.
5 Implementation and Experimental Work
5.1 SD Aggregation Index Algorithm
The flow chart below gives an overview of the main processes involved in developing the SD aggregation index algorithm (refer to Appendix C for MATLAB program).
The algorithm was built upon the preliminary algorithm, with the pre-processing method
incorporated. The user-defined function mat2tiles was used to divide each image into an
array of square windows, with the window size chosen so that the image would divide evenly.
The function received a square matrix and used it to divide an image evenly into windows
with pixel dimensions of the matrix size. The output of the function was saved inside a cell
array. A double for-loop was used to access all the windows inside the cell array for a given
image in order to calculate the SD of each window. The SD of all the window SDs was then
calculated in order to obtain an SD value for the image.
Figure 21: SD Aggregation Index Algorithm Flowchart
The experimental hypothesis for this algorithm was that the aggregating images would have
a higher SD than the non-aggregating images. The experiment also sought to determine the
optimal window size for the algorithm. The smallest window size needed to observe a single
RBC was found to be 20 by 20 pixels. The algorithm was therefore repeated for window sizes
of 0.5, 1, 1.5, 2, 2.5, 5, 10 and 25 times the RBC window size in order to determine the
effect of the window size on the sensitivity of the algorithm.
5.2 Thresholding Aggregation Index Algorithm
The flow chart below gives an overview of the processes involved in developing the
thresholding aggregation index algorithm (refer to Appendix D for the MATLAB program).
The algorithm extended the preliminary algorithm by incorporating the pre-processing
method and a thresholding operation. The MATLAB function graythresh was used to compute a
threshold for each successive image based on Otsu's method, and the function im2bw was
then used to binarize the image. The sum function was applied twice in order to count the
pixels in the matrix representation of the image that were greater than or equal to the
threshold, and those that were less than it. The aggregation index was then defined as the
ratio of the white pixels to the black pixels. This was in order to investigate the
behaviour of the plasma gaps in the images, which are represented by the white pixels. The
experimental hypothesis for this algorithm was that the more aggregated images would have
a higher aggregation index than the non-aggregating images, as they have more plasma gaps.
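The report's version of this index uses MATLAB's graythresh and im2bw; the same idea can be sketched in Python/NumPy with a minimal re-implementation of Otsu's method standing in for graythresh. The sketch assumes intensities lie in [0, 1]; the toy frame at the end is a hypothetical example, not data from the report.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Minimal Otsu's method on intensities in [0, 1]: choose the
    threshold that maximises the between-class variance of the
    grey-level histogram (a rough stand-in for MATLAB's graythresh)."""
    hist, edges = np.histogram(image, bins=nbins, range=(0.0, 1.0))
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)              # probability of the "black" class
    mu0 = np.cumsum(p * centers)   # its unnormalised mean
    mu_t = mu0[-1]                 # global mean intensity
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu0) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # empty classes contribute nothing
    return centers[np.argmax(sigma_b)]

def thresholding_aggregation_index(image):
    """Binarise at the Otsu threshold and return Ai = Wp / Bp, the
    ratio of white (plasma-gap) pixels to black (cell) pixels."""
    t = otsu_threshold(image)
    wp = int(np.sum(image >= t))
    bp = int(np.sum(image < t))
    return wp / bp

# Toy frame in which a quarter of the pixels are bright "plasma":
frame = np.zeros((20, 20))
frame[:10, :10] = 1.0
print(thresholding_aggregation_index(frame))  # 100 white / 300 black
```

As aggregation increases and the plasma gaps grow, Wp rises relative to Bp, so Ai increases, which is the behaviour hypothesised above.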
Figure 22: Thresholding Aggregation Index Flowchart
5.3 Correlation Coefficient Aggregation Index Algorithm
The flow chart below gives an overview of the processes involved in developing the correlation coefficient aggregation index algorithm (refer to Appendix E for the program).
The correlation coefficient algorithm was a slight modification of the SD algorithm. Inside
the double for-loop, the algorithm used the correlation function to determine the similarity
of the different windows within an image. As previously stated, the function mat2tiles
was used to divide an image into an array of square windows, and the result was saved
inside a cell array. Windows of 0.5, 1, 1.5, 2, 2.5, 5, 10 and 25 times the RBC window size
were used. The double for-loop started at the top left of the cell array and computed the
correlation between the top-left window and the window immediately to its right. This was
done by indexing the cell array; for instance, in MATLAB terms, the correlation was carried
out between cell-array (1, 1) and cell-array (1, 2). The result was saved in a matrix. The
correlation function was then used to compute the correlation between the top-left window
and the next window along, i.e. cell-array (1, 3), and the result was also saved in the
matrix. The process continued until the initial window had been compared with all the
windows in the image, and this was repeated for every image. The mean of all the
correlation coefficients for each image was then calculated using the mean2
MATLAB function.
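The loop described above can be sketched in Python/NumPy, with np.corrcoef on flattened windows standing in for the MATLAB correlation function and the mean taken over the magnitudes, as the flowchart indicates. Whether the original included the reference window's self-comparison is not stated; this sketch skips it.

```python
import numpy as np

def correlation_aggregation_index(image, window=20):
    """Correlate the top-left window with every other non-overlapping
    window and return the mean magnitude of the Pearson correlation
    coefficients. Assumes no window is perfectly uniform, since the
    correlation is undefined for zero-variance windows."""
    h, w = image.shape
    ref = image[:window, :window].ravel()
    coeffs = []
    for i in range(h // window):
        for j in range(w // window):
            if i == 0 and j == 0:
                continue  # skip comparing the reference with itself
            win = image[i * window:(i + 1) * window,
                        j * window:(j + 1) * window].ravel()
            coeffs.append(abs(np.corrcoef(ref, win)[0, 1]))
    return float(np.mean(coeffs))
```

For a uniform (non-aggregating) flow the windows resemble each other and the index approaches 1; irregular, aggregated structure drives it down, matching the experimental hypothesis below.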
It should be noted that it was also possible to correlate other windows, e.g. cell-array (1, 2) or
cell-array (1, 3), with the rest of the image. However, this project concentrated on the top-left
window as it was considered a good starting point for the algorithm.
Figure 23: Correlation Coefficient Aggregation Index Algorithm
The experimental
hypothesis for the algorithm was that the higher the correlation, the more uniform the image
would be (non-aggregating state), and the lower the correlation, the more irregular the image
would be (aggregating state).
6 Results and Discussion
6.1 SD Aggregation Index
Figure 24 shows a representative result for the SD aggregation index, before normalisation
and filtering. In figures 25 and 26, the results for the SD aggregation index are presented for
8 different window sizes. They have been filtered with a moving average function in order to
minimise noise and give a clearer picture of the data, and normalised by the mean of the
high shear values in order to compare the trend graphs and assess their sensitivity.
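The normalisation and smoothing step can be sketched as below. The filter span and the use of the leading samples as the high-shear interval are assumptions for illustration; the report does not state these values here.

```python
import numpy as np

def normalise_and_smooth(index, n_high_shear, span=5):
    """Normalise a raw index series by the mean of its first
    n_high_shear samples (taken here as the high-shear interval),
    then smooth with a centred moving-average filter."""
    x = np.asarray(index, dtype=float)
    x = x / x[:n_high_shear].mean()
    kernel = np.ones(span) / span
    # mode="same" keeps the series length; the two ends are
    # zero-padded, so edge values are damped.
    return np.convolve(x, kernel, mode="same")
```

After this step the high-shear plateau sits near 1, which is what allows the trend graphs for different window sizes to be compared directly.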
Figure 24: SD Aggregation Index, Window Size 10 Pixels (standard deviation of pixel intensity vs. time in seconds)
In analysing the results, it can be observed that all the graphs display a considerable amount
of fluctuation. This may be due to the RBCs realigning and re-orientating, leading to
Figure 25: Normalised SD Aggregation Index, Smoothed with Moving Average Filter. (a) 0.5 RBC Window Size, (b) 1 RBC Window Size, (c) 1.5 RBC Window Size, (d) 2 RBC Window Size
Figure 26: Normalised SD Aggregation Index, Smoothed with Moving Average Filter. (e) 2.5 RBC Window Size, (f) 5 RBC Window Size, (g) 10 RBC Window Size, (h) 25 RBC Window Size
variations from each group of images as the dynamic experimental process is undertaken.
The fluctuations observed in the low shear region (from about 8 seconds onwards) may be due
to the phase separation that occurs as the plasma gaps become increasingly apparent. The 0.5
RBC window size has the lowest amount of oscillation in the low shear region, whereas the 5
and 10 RBC window sizes show the most. The consistency in the gradient of the peaks of the
oscillations varies considerably between the window sizes. The 0.5-2 RBC window sizes
have quite good consistency in the gradient of the peaks, while the 5 RBC window size has
a somewhat flat region as aggregation increases. The 10 and 25 RBC window sizes have the
most inconsistent peaks.
The graphs are sensitive enough to detect the pre-shearing condition of the blood sample,
which is evident at about 0-0.767 seconds, just before the high shearing process
occurs. This region is shown to be distinct from the low shear region, as there is a significant
difference in intensity with respect to the high shear region. The region is most apparent at
the 0.5 RBC window size and seems to become less apparent as the window size increases,
particularly at 25 times the RBC window size, where it is barely perceptible. Shape recovery is
also apparent in the transition from the high shear rate to the low shear rate. This is clearly
visible for all but the 10 and 25 RBC window sizes, where the significant oscillations make it
difficult to detect a distinct trend. The trend graphs show a clear distinction between the
non-aggregating and aggregating states. The RBC window size of 20 by 20 pixels appears to be
the most sensitive to this, with a fast response percentage difference of 37.3%.
The min-max percentage difference increases with window size up to the 10 RBC window size.
The 25 RBC window size has the highest min-max percentage difference, with a value of
220.65%, followed by the 5 RBC window size with 162.8%. It should be noted that, in contrast
to the rest of the trend graphs, the min-max percentage difference for the 10 and 25 RBC
window sizes (200 and 500 pixels) does not occur at the ends of the high and low shear
regions. The min-max percentage difference for the 200-pixel window occurs in the middle of
the graph and, for the 500-pixel window, at the middle and the end of the graph.
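The min-max percentage difference quoted above is not defined explicitly in this section; one plausible reading is the range of the smoothed index expressed as a percentage of its minimum, sketched here. The definition itself is an assumption.

```python
import numpy as np

def min_max_percentage_difference(series):
    """One plausible formulation of the min-max percentage
    difference: the range of the index relative to its minimum.
    NOTE: the exact definition is assumed, as the report does not
    state it in this section."""
    s = np.asarray(series, dtype=float)
    return 100.0 * (s.max() - s.min()) / s.min()

print(min_max_percentage_difference([1.0, 2.0, 3.0]))  # 200.0
```

Under this reading, a larger value indicates a bigger swing between the non-aggregating and aggregating extremes of the trend graph, i.e. higher sensitivity.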
6.2 Thresholding Aggregation Index Results
The effect of the algorithm on a typical non-aggregating and aggregating image is displayed
in figures 27 and 28. Figure 29 shows the results of the thresholding aggregation index before
normalisation and filtering. In Figure 30, the result has been normalised with respect to the
high shear region and smoothed with a moving average function.
Figure 27: Effect of Threshold Algorithm on a Non-aggregating Image
Before Thresholding After Thresholding
Before Thresholding After Thresholding
Figure 28: Effect of Thresholding Algorithm on an Aggregating Image
Figure 29: Thresholding Aggregation Index (Ai vs. time in seconds)
In observing the result shown in Figure 30, it is evident that the trend graph shares many of
the characteristics evident in figures 25 and 26. There is a considerable amount of
fluctuation, and the shear history showing the previous state of the blood sample before the
high shear is also evident at about 0-0.767 seconds. The negative gradient from the pre-
shearing region to the point at which the high shear region begins to reach a steady state is
more apparent than in any of the window sizes of the previous method. Its fast response
percentage difference is also higher than that of any of the previous methods, with a value of
73.34%, and it has a high min-max percentage difference of 141.97%.
However, the point at which shape recovery occurs is not the minimum of the graph, and as
aggregation increases, the peaks of the oscillations do not increase as consistently as they do
for some of the window sizes of the previous method. This may be due to the inherent
inaccuracy of the thresholding method (i.e. in using a global threshold). For example, by
observing figures 27 and 28, it is evident that white pixels appear not only in the
plasma gaps but also on the RBCs.
Figure 30: Normalised Thresholding Aggregation Index, Smoothed with Moving Average Filter (Ai vs. time in seconds)
6.3 Correlation Coefficient Aggregation Index Results
Figure 31 shows a representative result for the correlation coefficient aggregation index,
before normalisation and filtering. The results for 8 window sizes are presented in figures 32
and 33, filtered with a moving average and normalised by the mean of the high shear region.
Figure 31: Correlation Coefficient Aggregation Index, Window Size 10 Pixels
Figure 32: Normalised Correlation Coefficient Index Smoothed with Moving Average Filter. (a) 0.5 RBC Window Size, (b) 1 RBC Window Size, (c) 1.5 RBC Window Size, (d) 2 RBC Window Size
The results show that, as expected, the high shear region has a higher correlation than the
low shear region. The highest correlation occurred at the 0.5 RBC window size, with a value of
0.493. The correlation coefficient is evidently able to distinguish between the non-aggregating
state and the aggregating state. However, the extent to which it does this effectively varies
considerably with the window size. Using the percentage difference between the means of the
high and low shear intervals, the 0.5 and 2.5 RBC window sizes are the most sensitive, with
values of 18.52% and 15.95% respectively. The 10 and 25 RBC window sizes have the lowest
sensitivity, with values of 4.22% and 0.28% respectively.
The correlation coefficient index is sensitive enough to detect the pre-shearing condition of
the blood sample at 0-0.767 seconds. However, it does not show this condition to be distinct
from the low shearing region in general; the 5 RBC window size is the only one that clearly
shows it to have a higher correlation than the low shearing region. In addition, the correlation coefficient
Figure 33: Normalised Correlation Coefficient Index Smoothed with Moving Average Filter. (e) 2.5 RBC Window Size, (f) 5 RBC Window Size, (g) 10 RBC Window Size, (h) 25 RBC Window Size
has very low sensitivity in depicting the increasing aggregation of the blood sample. This may
be due to the correlation coefficient index result being significantly noisier than the other
two indices, as evident in Figure 31.
6.4 Discussion
The results obtained from the algorithms were consistent with their experimental
hypotheses. The SD aggregation index matched the experimental hypothesis that the
aggregating images would have a higher standard deviation than the non-aggregating images.
The SD aggregation index was able to effective